source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
234,506 | I've been working on a full stack project recently for my own amusement and would like to add authentication services to this project, but want to make sure I'm doing it the right way. And before anyone says it, yes, I know: a well-tested and trusted authentication framework is highly suggested as it's likely you won't get your homegrown authentication just right . I'm well aware of this; this project is purely for amusement as I said, and I won't be using it in a production environment (albeit I want to develop it as though it will be used in a production environment). I believe I have a fairly adequate understanding of the basic security principles in the authentication realm such as the difference between authentication and authorization, how you should never store plaintext passwords, why you should salt your passwords, why you should always prefer to use TLS/SSL connection between the client and server, etc. My question is this: Should passwords be: a) hashed and salted once server-side, b) hashed and salted once client-side and once server-side, or c) encrypted client-side and decrypted and then hashed and salted once server-side? Many different variations of this question have been asked before on StackExchange (such as this question which links to many other good questions/answers), but I couldn't find one question that addresses all three of these options. I'll summarize what I've been able to glean from the various questions/answers here: Passwords should not only be hashed on the client-side, as this effectively allows any attackers who gain access to the hash to impersonate users by submitting the hash back to the server. It would effectively be the same as simply storing the plaintext password; the only benefit is that the attacker wouldn't be able to determine what the password actually is, meaning that they wouldn't be able to compromise the victim's accounts on other services (assuming salting is being used). Technically speaking, if using TLS/SSL, passwords are encrypted client-side and decrypted server-side before being hashed and salted. Hashing and salting on the client-side presents the unique issue that the salt would somehow need to be provided to the client beforehand. It's my understanding that the most widely used method is to hash and salt the password server side and to rely on TLS/SSL to protect the password in transit. The main question that remains unanswered to me is what if TLS/SSL is breached? Many companies, schools, and government agencies that provide network services on site have provisioning profiles or other systems in place to allow them to decrypt network traffic on their own network. Wouldn't this mean that these institutions could potentially view the plaintext version of passwords over their networks? How do you prevent this? | It's my understanding that the most widely used method is to hash and salt the password server side and to rely on TLS/SSL to protect the password in transit. This is entirely correct. The main question that remains unanswered to me is what if TLS/SSL is breached? Then you are in a lot of trouble, and nothing else much matters. In the context of the hostile network you're describing, reading your password is the least of your worries. They can see everything you see, and they can read everything you write. They can inject arbitrary content into any page you request in a browser. If TLS/SSL is breached by a MITM, nothing can save you, as the fundamental foundation of all other security is gone. 
Trying to perform hashing on the client doesn't matter if the MITM injects a keylogger that reads the password as you type it. Many companies, schools, and government agencies that provide network services on site have provisioning profiles or other systems in place to allow them to decrypt network traffic on their own network. They cannot decrypt traffic simply by virtue of it passing through their network. Typically they install a root certificate they own, and then MITM your TLS traffic and replace the certificate of the site you're trying to reach with their own certificate. When you try to establish a secure connection with example.com, you're actually establishing a connection with the MITM, using their cert, then the MITM establishes their own secure connection on your behalf to example.com. When you send something to example.com, you're actually sending it to the MITM, who decrypts it and then forwards it over their own secure channel to example.com. Wouldn't this mean that these institutions could potentially view the plaintext version of passwords over their networks? How do you prevent this? Again, only if you install their root certificate, and then intercepting passwords should be the least of your worries. You should never log in to any personal accounts on these devices, as you have no privacy, and as you say, all of your traffic can be read by the business. | {
"source": [
"https://security.stackexchange.com/questions/234506",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/238192/"
]
} |
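To make the recommendation above concrete, here is a minimal sketch of option (a), hashing and salting once server-side while relying on TLS for transport. It assumes the third-party Python bcrypt package; the function names are illustrative and not part of the original answer.

```python
import bcrypt  # assumed dependency: pip install bcrypt

def store_password(plaintext: str) -> bytes:
    # gensalt() generates a random per-user salt and embeds it, together with the
    # cost factor, inside the returned hash string
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt(rounds=12))

def verify_password(plaintext: str, stored_hash: bytes) -> bool:
    # checkpw re-hashes the candidate using the salt stored in stored_hash
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

# record = store_password("correct horse battery staple")
# verify_password("correct horse battery staple", record)  # True
```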
234,509 | I'm not from Information Security or any IT related area. But I want to know if there is any security reason for my digital bank to demand my phone to be on "Automatic Date & Time"? For example, if I'm abroad, I cannot transfer some money to a friend, because a dialog box says that my date and time is incorrect. Is that badly programmed software or does it have a purpose? | One of the reasons can be the usage of digital signatures. If the time on your phone differs significantly from the actual current time, this may cause your phone to reject signatures made by the bank server, or your bank to reject signatures made by your phone. Why is "Automatic Date & Time" important? Of course, the internal time representation on the phone (milliseconds from 01.01.1970 till now) does not depend on the time zone. But it depends on what you do with this. Suppose you are in the time zone ACT = GMT-5. Suppose your local time is 4:00 ACT, which is 9:00 GMT. Now suppose you disabled "Automatic Date & Time" and set the current time zone explicitly to GMT. Your phone immediately shows not 4:00, but 9:00. The internal time representation remains unchanged; only the GUI representation changed. But now you see that 9:00 on your phone differs from the time on your friend's phone. So, you manually set the time to 4:00. Now both your phone and your friend's phone show 4:00. But your friend uses ACT = GMT-5, whereas you use GMT. Thus, the internal representation of the time on your phone is 5 hours behind the real time. In such a case, even if the bank allows a tolerance of +/- 1 minute, this will not be sufficient. Any operations where time comparison is involved will fail. | {
"source": [
"https://security.stackexchange.com/questions/234509",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/238197/"
]
} |
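As a rough illustration of the answer above, here is a hedged sketch of a server-side freshness check. The tolerance value and the function name are assumptions chosen for illustration, not the actual protocol of any bank.

```python
import time

TOLERANCE_SECONDS = 60  # assumed tolerance; the answer mentions +/- 1 minute as an example

def timestamp_is_acceptable(client_epoch_seconds: float) -> bool:
    # Both sides compare seconds since 1970-01-01 UTC, which is time-zone independent
    return abs(time.time() - client_epoch_seconds) <= TOLERANCE_SECONDS

# A phone whose internal clock is 5 hours behind real time fails even a generous tolerance:
# timestamp_is_acceptable(time.time() - 5 * 3600)  -> False
```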
234,668 | One of the common way of implementing 2FA is using phone number Text message or Call with OTP. As I can see, usually web services show something like: OTP was sent to the number +*********34 Is it done because revealing the number is considered a vulnerability? If yes, then which one and is it described anywhere? I guess it has something to do with not wanting to show too much info about the user. This info might be used for social engineering but maybe there is something else? Having a link to a trusted location with the description would be great as well. | The primary attack method against text message OTP is to ' sim swap ' and take over the target's phone number. If the site provided the full number in this scenario, they'd be giving the attacker exactly the information they need to break the security being used. (To lift up comments: In general, more personal information is needed, if you're going to social engineer telecom staff into swapping the SIM. In some places and under some carriers, it's even harder than that, requiring ID to be presented in person. But there are also cases where nothing more than the phone number is required, even with enhanced protections in place, if the telecom staff are colluding with the attackers .) | {
"source": [
"https://security.stackexchange.com/questions/234668",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/229083/"
]
} |
234,740 | It seems common practice, when denying access to a user because of an incorrect email / password combination, to not specify which of the username or password was incorrect. This avoids leaking the information that an account does or doesn't exist with that email address. For instance, if I try to log in with my spouse's email address on a dating website, and the error message is "Incorrect password" rather than "There is no account with this email address", I can draw the necessary conclusions. But how do I avoid that same information leak on an account creation form, where I must prevent a user from creating an account with an email address that already has one? | I'm probably stupid and don't understand the issue, but to me it looks like the problem can easily be avoided by providing the necessary information only via email (which an attacker isn't supposed to be able to access). If you don't want to leak any information, then the public messages must not be distinguishable, whether an email has already been used or not. If you really want to be careful, technically, you should also make sure that the response time is the same, to avoid timing attacks. But this, depending on your needs, might be overkill. So what happens is: If the email has not been used yet, you display a message like "Thanks for registering, now go check your email to complete the process". The email will contain a link to validate the address, and when the user clicks it the account will be created and enabled. You can also create the account at once, before validation, but you will need to make sure an attacker cannot check if it's been created or not (otherwise that's information that is leaking). If the email has already been used, then you display the same message (see the point above), for example "Thanks for registering, now go check your email to complete the process". This way an attacker will not gain any additional information. Has the email address already been used or not? The attacker can't know, unless they can read the email. But the user will know, because the message you will send via email this time is different, and it might be like: "You just tried registering at example.com with this email address, but you already have an account connected to this email address. Did you forget the password? Blah blah blah. If it wasn't you, just discard this message". As a result, an attacker will not be able to understand if an account has been registered with a specific email address, unless they have access to that email address. A user, instead, since they supposedly have access to the email address, will be able to get all the information they need, whether they signed up correctly or they tried to sign up twice. | {
"source": [
"https://security.stackexchange.com/questions/234740",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/143721/"
]
} |
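A minimal sketch of the registration flow described above, with an identical public response in both branches. The in-memory user set and the stubbed mailer are assumptions for illustration only.

```python
import secrets

registered = {"alice@example.com"}  # stand-in for the real user table
GENERIC_RESPONSE = ("Thanks for registering, now go check your email "
                    "to complete the process.")

def send_email(to: str, body: str) -> None:
    print(f"(would email {to}) {body}")  # stub; a real system uses its own mailer

def register(email: str) -> str:
    if email in registered:
        send_email(email, "You already have an account here. Forgot your password? "
                          "If this wasn't you, just discard this message.")
    else:
        token = secrets.token_urlsafe(16)  # account is created/enabled only via this link
        send_email(email, f"Confirm your registration: https://example.com/confirm/{token}")
    # Identical either way; a careful implementation would also equalise response time
    return GENERIC_RESPONSE
```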
234,794 | I have a large database where passwords are stored as strtolower(hex(md5(pass))) (which is a bad way to store passwords, prone to rainbow tables, cheap to dictionary attack, no salt, etc.),
and I'm tasked with switching from md5 to bcrypt, I have to use a bcrypt implementation that silently truncates after 72 bytes, and silently truncates on the first null byte (whichever comes first), and bcrypt(strtolower(hex(md5(pass)))) would not be prone to either of those issues. Also it's possible to retroactively apply bcrypt to existing strtolower(hex(md5(pass))) password hashes, without requiring everyone to re-login/switch passwords. Is it a bad idea? I don't think so, but still want to hear what this site has to say. Maybe there is something important I'm missing. | As a password cracker, I encourage all of my targets to use this technique. This (understandably!) seems like a good idea, but it turns out that against real-world attacks, wrapping an unsalted hash with bcrypt is demonstrably weaker than simply using bcrypt . (EDIT: First, to be clear up front, bcrypt(md5($pass)) is much better than md5($pass) alone - so none of this should be take to mean that this scheme should be left as is.) Wrapping an unsalted hash is problematic from a real-world attack perspective because attackers can do this: Acquire existing MD5 passwords from leaks - even MD5s that haven't been cracked yet After simpler attacks have been exhausted, run these MD5s as a "wordlist" against your bcrypt(md5($pass)) corpus, to identify uncracked bcrypts with known MD5s crack those MD5s outside of bcrypt at much higher speed And yes - you do have to discover the MD5 inside the bcrypt first. But the crucial point is that that MD5 can be an otherwise uncracked MD5 that happens to be present in some other leak , which you can then attack at massively increased speeds. This is not a theoretical attack. It is used all the time by advanced password crackers to successfully crack bcrypt hashes that would otherwise be totally out of reach for the attacker . How this attack works is very non-intuitive for non-specialists, so I strongly encourage skeptics to experiment with a real-world scenario to understand how it works: Hash a 6-character random password with MD5. Presume that this MD5 is already present in some other list of leaked passwords, proving that it has been used as a password at some point. Try to attack the MD5 directly with brute force. Wrap the MD5 in bcrypt and try to attack it directly with brute force. Attack the same bcrypt-wrapped MD5, but this time pretend that you haven't cracked the MD5 yet, but instead use a "dictionary" of leaked MD5 that includes your MD5. Once you've "discovered" that you have an MD5 in hand that is inside one of your bcrypts, attack the MD5, then pass the resulting plaintext to your bcrypt(md5($pass)) attack. Again, very non-intuitive, so play with it (and don't feel bad that it takes work to understand it; I argued vigorously against it with Jeremi Gosney for an hour straight before I finally got it!) I don't believe that this technique has an "official" name, but I've been calling it "hash shucking" or just "shucking." So depending on use case, it's totally understandable why wrapping bcrypt can be attractive (for example, to get beyond the 72-character bcrypt maximum, though this can be tricky for other reasons, including the 'null byte' problem ), or to migrate existing hashes. So if someone needs to wrap a hash in bcrypt, the mitigation for this weakness should be clear by now: your inner hash must never appear in any other password storage system that might ever become available to an attacker. This means that you must make the inner hashes globally unique . 
For your specific use case - in which you need to preserve existing hashes - there are a few options, including: adding a global pepper within your web or DB framework - so, bcrypt($md5.$pepper) This allows you to easily migrate existing MD5s, but that global pepper is still subject to being stolen (but if your web tier is segmented from your DB tier/auth, this might be an acceptable risk, YMMV); adding a global pepper using HSM infrastructure (storing the pepper in such a way that not even the web app can see, so it can't be stolen) adding an extra per-hash salt (but you'd have to store it outside of the hash somehow, which starts to get tricky and verges into 'roll your own crypto' territory); hashing the MD5s with a slow, salted hashing algorithm or HMAC inside the bcrypt layer (not recommended, I'm not even vaguely qualified to advise on how that might be done properly, but is possible - Facebook is doing it , but some very smart people designed that); For more details, including some specific scenarios to illustrate why this is weaker than bcrypt alone, see my SuperUser answer here , this OWASP guidance on "pre-hashing" passwords which supports my assertion with more clarity, and this talk by Sam Croley discussing the technique. Password upgrading in general can be tricky; see - this answer and Michal Špaček's page on password storage upgrade strategies. | {
"source": [
"https://security.stackexchange.com/questions/234794",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17908/"
]
} |
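A hedged sketch of the first mitigation listed above (a global pepper around the legacy hash), assuming Python's hashlib and the bcrypt package; the pepper value is a placeholder and the function names are illustrative.

```python
import hashlib
import bcrypt

PEPPER = b"placeholder-pepper-from-secrets-store"  # placeholder; load from an HSM or secret store

def legacy_md5_hex(password: str) -> bytes:
    # Reproduces strtolower(hex(md5(pass))): 32 lowercase hex bytes, never contains a NUL
    return hashlib.md5(password.encode("utf-8")).hexdigest().encode("ascii")

def upgrade_stored_md5(md5_hex: bytes) -> bytes:
    # Can be applied retroactively to existing MD5 hashes without forcing re-logins.
    # 32 bytes of hex plus this pepper stays well under bcrypt's 72-byte truncation limit.
    return bcrypt.hashpw(md5_hex + PEPPER, bcrypt.gensalt())

def verify(password: str, stored: bytes) -> bool:
    return bcrypt.checkpw(legacy_md5_hex(password) + PEPPER, stored)
```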
234,954 | Cloud computing provider Blackbaud reported on https://www.blackbaud.com/securityincident "...the cybercriminal removed a copy of a subset of data from our self-hosted environment. ... we paid the cybercriminal’s demand with confirmation that the copy they removed had been destroyed." How can the company be certain that the data is destroyed, and what reparation can it get if it is found later that the data is passed on after payment? I couldn't find any technical solution on Google. I guess the only assurance is the criminals' "reputation": if these particular criminals are well-known, and word gets out that they leaked the data despite being paid, future victims are less willing to pay them(?). | How can the company be certain that the data is destroyed? It cannot be certain. The only hope is that it is part of the criminals' business model to maintain a good reputation, so that one gets what was promised. But business models might change. For example, if the existing ransomware business does not provide enough profit anymore, it might be worth checking if one can get more profits from previously collected (and not actually deleted) data. ... what reparation can it get if it is found later that the data is passed on after payment? None. They are dealing with criminals in the first place. | {
"source": [
"https://security.stackexchange.com/questions/234954",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/238755/"
]
} |
234,970 | If I send someone a text, how much information am I giving up? Could they add me to an app like Whatsapp and access my name and profile? Or does their number first have to be saved in my phone contacts for that to happen? What other information could they access just from a text message, assuming I have not added them as a contact? | If you publish your mobile number in a directory such as Spokeo, they can look you up. If you use Telegram, the other person will be able to see your public name. Your last seen online time could be retrieved either precisely or imprecisely depending on your privacy settings - this information cannot be hidden completely. Telegram will always tell anyone whether you're currently online (using the app) or not. You can make your profile photo visible only for your contacts. If you use WhatsApp, you can hide your last seen info and profile photo completely from everyone. Your public name can still be seen by anyone regardless. It's trivial to find out what your cellular operator is, and in many cases it's not difficult to pinpoint the location where the SIM card was sold to you. | {
"source": [
"https://security.stackexchange.com/questions/234970",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/238777/"
]
} |
234,994 | Imagine you are carrying highly sensitive information with you, maybe on a mission in a war zone. You get in an ambush and quickly need to erase all the files before they fall in the wrong hands. This has to happen within seconds. What devices are used for such operations, are there special hard drives which have a physical switch to erase all memory at once? I'm thinking of active storage devices, which lose all information once the power supply is separated. Addendum 1: As Artem S. Tashkinov pointed out in his answer, for most use cases encryption is enough. But I think there is information so valuable, that even in 50 years, when quantum code breaking may become a reality, it can be harmful. So I edited my question to ask explicitly for a method, which does not leave any way , as computationally hard it may be, to recover any data. I guess this is only possible by physically destroying the components, which hold the information. Addendum 2: Issues with thermite: Data drives seem to be quite resilient to thermite and even military grade thermate as shown at a talk at defcon 23. It doesn't seem like using either of these substances is a reliable way of getting rid of the data. The experimental results showed that the drive was mostly intact after the thermite/thermate attack and it seems unlikely that Curie temperature has been reached throughout the plate. ( DEF CON 23 - Zoz - And That's How I Lost My Other Eye...Explorations in Data Destruction (Fixed) , thanks to Slava Knyazev for providing this ressource). Issues with encryption: While quantum code breaking will not break all existing encryption (as pointed out by Conor Mancone) there is still a risk that flaws in the encryption are known or will be discovered in the future. | There are two DEFCON videos from 2012 and 2015 exploring this exact issue: DEFCON 19: And That's How I Lost My Eye: Exploring Emergency Data Destruction DEFCON 23: And That's How I Lost My Other Eye...Explorations in Data Destruction (Fixed) Summary of Viable Options Plasma Cutter Oxygen Injection (Difficult setup) Nailguns (depending on adversary) Damped High Explosives (Lots of kinetic energy) HV Power Spike (Inconclusive forensics) In essence, your only viable methods are physical destruction | {
"source": [
"https://security.stackexchange.com/questions/234994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
235,163 | A person talks about a certain thing (product or service) with another person and a short time after the talk the person gets the advertising of the discussed thing on the mobile or desktop device. I heard and read about such occurrences and didn't know what to think about it. Until some days ago I've personally experienced such occurrence: discussed with my wife a certain product and some days after the talk got advertising of it on Facebook. My question: is it just an accident and there is not any cause to think about private security issue or are browsers on mobile devices indeed analyze talking through allowed microphone access? It is true, such issues, if really exist, are very difficult to research, because there is no direct relation between the mention and appearing of the advert. But, if one realizes this sequence of mention and advertising, it is very... alarming? | Listening using the microphone is unlikely Listening secretly without consent While listening using the microphone for collecting data would be technically possible, there's a few things against the theory. The unifying factor is that secretly monitoring conversations is considered unethical and would probably even be illegal. Getting caught of such actions would ruin any company's reputation for quite a long time, which makes it less likely. One would eventually get caught, because: If the device sends the unprocessed audio from the microphone, that would cause notable network traffic. If the device processes the audio with voice recognition, that would cause notable processor activity. So many are reverse engineering both the processes and network protocols. Is it worth it, when there are better and legal alternatives? Data sources without recording offline conversations are already overwhelming, as explained later. To back up this reasoning, the network traffic on both Android and iOS weren't comparable to Hey Siri and OK Google on Wandera's experiment , where they systematically played both pet ads and silence to the devices and compared their metrics. (Thanks, TCooper !) Upon examining the results, we found nothing to suggest our phones are
activating the microphone or transferring data in response to sound.
The data consumption and battery consumption changes were minimal, and
in most cases, there was no change at all. Sources you have given the permission to listen to you There are also legal sources of microphone data like Alexa, Siri and Google voice search. These are not spying on you all the time but do use voice recognition – that's just a voice interface that replaces the search bar. Some problems do arise when such a service is activated accidentally . The closest example of recording everything has been Samsung SmartTVs back in 2015 when their privacy policy transparently stated that: Please be aware that if your spoken words include personal or other
sensitive information, that information will be among the data
captured and transmitted to a third party through your use of Voice
Recognition. Although this was mentioned in the privacy policy, it caused so much uproar that today their privacy policy has changed and they only send recordings related to voice commands, just like the others: Voice information: Recordings of your voice that we make and store on
our servers when you enable this function and use voice commands to
control a Service, or when you contact our Customer Service team. The data collected could also be used in ways you might not know if you haven't read the EULA or the privacy policy carefully – nor understood the legalese used. Alternative explanations The following mechanisms / phenomena both exist and complement / reinforce each other. The Internet knows you better than you do Everyone is constantly tracked while surfing on the Internet. Tracking cookies can identify the person across several sites and make connections. The searches on search engines are saved and connected (whether they are typed of got using voice recognition). Shopping behaviour is carefully analysed connecting both data voluntarily given using loyalty cards and data left behind involuntarily. Many things can happen to all this data. It can be sold, connected with data from other sources (anonymized or not), and analysed using algorithms. The results can be and are sold again. This enables advertisers to find carefully selected target groups. The relation doesn't even have to be direct like "people buying tools will soon buy construction materials", but the data may reveal much stranger connections. This is explained in detail with many examples e.g. in Hannah Fry's book Hello World: How to be Human in the Age of the Machine (2018). The bottom line is that there are ways the advertisers can make good, educated guesses on your potential future needs before you do even without listening to you. That's how you really get surprisingly relevant ads. Confirmation bias You talk about hundreds of things during the day. Likewise, you probably see hundreds of ads. Most advertisements are completely irrelevant and about topics you haven't talked, so they are easy to dismiss. However, when you occasionally see advertisements on topics you have been discussing, it starts bugging you, leaving suspicions behind. Every time this happens, you get more and more convinced that someone must be listening to your conversations, and your phone is the first suspect as your closest friend you even take to the toilet. This is easy to test by taking a vacation from your smartphone – unless it's glued to you. If it doesn't make any difference in how you see advertisement about the topics you are discussing, then it must be something else than the microphone you always carry with you. | {
"source": [
"https://security.stackexchange.com/questions/235163",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/96343/"
]
} |
236,461 | Do browsers have a list of sites that are supposed to be encrypted? Could a man in the middle attack be performed by presenting a user an http site instead of an https site? That way the server would not need to provide a certificate. It wouldn't show up as a secure site in the browser, but I think most people wouldn't notice it. And it wouldn't warn the user, because there are legitimate sites which don't use https. Would such an attack be possible, or does the browser notice that the site is supposed to use https but doesn't? | The short answer: they know a very limited number. HTTP Strict Transport Security was introduced to provide better guarantees that a website is being served over HTTPS when specified by the operator. This works well for websites you have visited recently, as your browser will remember their HSTS policy and refuse a plaintext connection. For example, if you visit your (previously visited from home) bank's website from an untrusted network that happens to have a man in the middle attempting to downgrade the connection to plain HTTP, your browser will refuse to connect because it remembers the website's security policy. If you have not visited the site previously, the man in the middle needs to not only downgrade the connection security, but also remove the HSTS header (Strict-Transport-Security) from the response. This isn't difficult. The problem you have identified is the major limitation: what happens if you are the victim of a downgrade attack during the first visit. One solution browsers have implemented is to package a "pre-loaded HSTS list" of popular websites known to require HTTPS. Obviously this cannot be comprehensive, and even with the list, attackers can still set up security downgrade proxies at slightly related DNS names. You can submit a domain for inclusion in the HSTS Preload List at hstspreload.org. | {
"source": [
"https://security.stackexchange.com/questions/236461",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/240408/"
]
} |
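As an illustration of the HSTS mechanism described above, here is a hedged sketch of emitting the policy header from a small Flask app; the max-age and flags shown are common illustrative choices, not mandated values.

```python
from flask import Flask  # assumed dependency

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Browsers that have seen this header over HTTPS remember the policy and
    # refuse plaintext connections to the host for max-age seconds
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "served over HTTPS"

# Adding "; preload" only makes sense if the domain is then submitted to hstspreload.org.
```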
236,477 | I am setting up AWS stuff and wondering how to setup a secure bastion host. They all say to only allow access to your IP address, but how can I do that if my IP address is changing every few hours or days (just in my house wifi, or going to coffee shops, etc.). What is best practice here, for SSHing into a bastion host and limiting access somehow to only specific IP addresses. If not possible, what is the next best alternative? | | {
"source": [
"https://security.stackexchange.com/questions/236477",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/128550/"
]
} |
236,481 | There are so many tools such as testssl, sslyze to test the TLS configurations on webservers. I wanted to know, why aren't there any tool that checks the TLS client side? What makes it difficult? | | {
"source": [
"https://security.stackexchange.com/questions/236481",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/240427/"
]
} |
236,554 | A family of N people (where N >= 3) are members of a cult. A suggestion is floated anonymously among them to leave the cult. If, in fact, every single person secretly harbors the desire to leave, it would be best if the family knew about that so that they could be open with each other and plan their exit. However, if this isn't the case, then the family would not want to know the actual results, in order to prevent infighting and witch hunting. Therefore, is there some scheme by which, if everyone in the family votes yes , the family knows, but all other results (all no , any combination of yes and no ) are indistinguishable from each other for all family members? Some notes: N does have to be at least 3 - N=1 is trivial, and N=2 is impossible, since a yes voter can know the other person's vote depending on the result. The anonymous suggestor is not important - it could well be someone outside the family, such as a someone distributing propoganda. It is important that all no is indistinguishable from mixed yes and no - we do not want the family to discover that there is some kind of schism. However, if that result is impossible, I'm OK with a result where any unanimous result is discoverable, but any mixed vote is indistinguishable. Some things I've already tried: Of course, this can be done with a trusted third party - they all tell one person their votes, and the third party announces whether all the votes are yes . However, this isn't quite satisfying of an answer to me, since the third party could get compromised by a zealous no voter (or other cult member) to figure out who the yes votes are. Plus, this person knows the votes, and may, in a mixed vote situation, meet with the yes voters in private to help them escape, which the no voters won't take kindly to. One can use a second third party to anonymize the votes - one party (which could really just be a shaken hat) collects the votes without reading them and sends them anonymized to the second party, who reads them and announces the result. This is the best solution I could think of, however I still think I want to do better than this - after all, in a live-in settlement cult, there probably isn't any trustworthy third party you could find. I'd like to find a solution that uses a third party that isn't necessarily trusted. However, I do recognize that you need at least something to hold secret information, because if you're working with an entirely public ledger, then participants could make secret copies of the information and simulate what effect their votes would have, before submitting their actual vote. In particular, if all participants vote yes but the last one has yet to vote, they can simulate a yes vote and find out that everyone else has voted yes, but then themselves vote no - they are now alone in knowing everyone else's yes votes, which is power that you would not want the remaining no voter to have. EDIT: After BlueRaja's comments, I realize that the concept of "trusted third party" isn't quite well-defined, and that at some level, I probably actually do need a trusted third party at least for reliably holding state. The key is what I would be trusting the third party to do - for instance, in the first and second bullet point examples, I may not trust a third party to know who voted what, but may trust them with the contents of the votes. 
Ideally, of course, I would still like to be able to operate without a trusted third party at all, but failing that, I would like to minimize what I have to trust the third party to do. (Also, yes, a third party can include an inanimate object or machine, as long as it can withhold any amount of information from the participants). | The theory This could be implemented in several ways, by applying the principle of idempotency . You want a system that only produces a result (binary 1) if all the inputs are active, that is, it tells you that everybody wants to leave the cult only if everybody has voted yes, otherwise the system must not return any kind of information (binary 0). This is basically an AND relationship between the inputs, as seen in the following table (0 = no/false, 1 = yes/true): Input: You want to leave the cult.
Output: Everybody wants to leave the cult.
0 0 0 | 0
0 0 1 | 0
0 1 0 | 0
0 1 1 | 0
1 0 0 | 0
1 0 1 | 0
1 1 0 | 0
1 1 1 | 1 ---> hooray, everybody wants to leave, we can talk about it!
Now, that might not be trivial to implement safely, because you need something that can count (N-1 will not be enough to trigger the result, but N will), and something that is able to count might also be able to leak information about the number of votes. So let's forget about that, and realize that since you are actually dealing with single bits of information (either yes or no, 0 or 1), then you will be able to get valuable information if you just check the opposite (no instead of yes, 0 instead of 1, etc.). So if you check whether they want to stay in the cult instead of leaving, and if you check whether at least one person wants to stay instead of checking if they all want to leave, you get the following truth table where all 1s have been replaced with 0s and vice versa:
Output: Somebody wants to stay.
1 1 1 | 1
1 1 0 | 1
1 0 1 | 1
1 0 0 | 1
0 1 1 | 1
0 1 0 | 1
0 0 1 | 1
0 0 0 | 0 ---> hooray, nobody wants to stay, we can talk about it! Note that now we have an OR relationship between the inputs, which I believe is easier to implement safely, because you just need a system that respond to any input in the exact same way. Such a system would be idempotent : one vote is enough to trigger the output, and any subsequent votes would have no effect. Now, what can we use to implement such a system? The system would need the following features: It must be trusted by everybody. It can't be built or bought by a single member of the family, or by someone else. So I suppose it must be something very simple that everybody can understand and trust. To avoid malicious manipulation of the system, it should also be operated while being supervised by all the members. The voters must not be able to check the output before the experiment is over. This means that the vote must not return any feedback about the current state of the system. For example, blowing out a candle is not safe if you can see it, or feel the heat, or smell anything. The system The simplest solution I can think of is something involving an electronic device with an idempotent button, like a remote control to change the channel on a TV. Here's an example of how I would set up the system: Get a device with an idempotent button. It might be a TV with a remote control, providing that changing to channel N always has the same effect no matter how many times you do it (idempotency). Or anything else you have at home, like a button to open a gate (if opening an open gate leaves it open), etc. The important thing though is that the system needs to be trusted by everybody, so if you really want to do everything safely the family might consider buying a new device (going to the mall, all together, and buying a trusted device). Set up the system safely. All the family must be present while setting up the system, otherwise the system might be corrupted by the one who sets it up. In general, the whole family must be present and check all the operations from the beginning to the end of the experiment (like from buying the equipment to throwing it away safely). Avoid any kind of feedback from the system while voting. For example, to change the TV channel, the TV and the remote could be under a huge thick blanket, and to vote you need to slide your hand under the blanket. But the volume should be muted, and maybe you'd better turn on some music in the background, loud enough to not be able to hear any possible buzz or noise from the TV. You might even want to define some delay between one vote and the next, to avoid getting any feedback from the possible heat of the remote control caused by the hand of the previous voter. The voting process should be the same for everybody. During the experiment the other members must make sure the voter is not cheating (like peeping under the blanket, acting strange, etc.), so everybody is present during the experiment. There is a relatively fixed length of time that the voter should be able to stay with their hand under the blanket. Sliding it under the blanket and immediately pulling it out is not considered valid, since that would be an obvious and publicly distinguishable NO vote. From the outside, every vote must look pretty much the same. Test the system before using it for the real experiment . You need to make sure everybody understand the process, votes correctly, and the system responds accordingly. 
The whole family takes part in several simulated votes for testing the system (simulated votes are fake and publicly known, not secret). At the end, the system must be dismantled safely. Any buttons or parts that have been touched might need to be cleaned carefully, to remove fingerprints. If the family members don't trust the system after the vote, fearing that somebody might be able to extract information from it, all the parts of the system might need to be thrown away. The vote Supposing they have chosen to implement the TV-remote-blanket system, what happens is this. "Ok everybody, the TV is on, the current channel is 123. If you want to stay in the cult, change it to channel 0". Each member in turn slides a hand under the blanket and either changes the channel (if they want to stay in the cult), or pretends to change it (if the want to leave). At the end, the blanket is removed and... Channel 123! Then nobody wants to stay in the cult, hooray! ...or ...Channel 0! Then at least one member wants to stay in the cult! Or maybe all of them, there's no way to know. Final notes It was fun trying to think of a solution to this problem, but I consider this more of a thought experiment than a real security question. The problem is that the threat model is incomplete, because I don't think this scenario can actually make sense in a family where all the members are part of a cult. Cult members are brainwashed and paranoid by definition. They might not even trust a store to buy a new TV or a remote control, thinking anyone they don't already know (including any sellers) might be "enemies". It is definitely possible to set up a system without any electronic devices, using only simple objects like candles, pots, water, ropes, etc. That stuff might be easier to trust, compared to a black-boxed electronic device, but it might also be harder to make such systems work reliably. I'm also wondering: if a member of the family suggests that a vote is needed, isn't that suspicious? Why should a member of the cult want to know if everybody in the family wants to leave? Chances are the one who proposes this system is the one who wants to leave. Or this might all be a trap to find out who wants to leave. | {
"source": [
"https://security.stackexchange.com/questions/236554",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/190208/"
]
} |
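A tiny sketch of the inverted, idempotent check the answer above builds its truth tables around: the only published bit is whether at least one person wants to stay, so all mixed and all-stay outcomes look identical. The vote values shown are illustrative.

```python
def tally(wants_to_stay: list) -> str:
    # any() is idempotent in the sense used above: one True triggers the output,
    # and additional True votes change nothing observable
    if not any(wants_to_stay):
        return "Nobody wants to stay: the family can talk about leaving."
    return "At least one person wants to stay (how many, and who, stays hidden)."

# tally([False, False, False])                              -> the all-leave message
# tally([True, False, False]) == tally([True, True, True])  -> True (indistinguishable)
```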
236,946 | Let's say we have a separate hashing algorithm called s2 and it would convert Hello into dug84nd8 . If we could take the algorithm and just reverse engineer it to generate a string like 8GN492MD that would also output dug84nd8 , wouldn't it say that (s2("Hello") = s2("8GN492MD")) == true) , and let a hacker in? I feel like that I am missing something, but I don't know what it is. | Your premise has a flaw. You say you want to 'reverse engineer' the hash function. There's no need to reverse engineer it - its implementation is public. What you can't do is invert it (perhaps that's what you meant), because it's not invertible. You can easily tell it's not an invertible function because the size of the domain (possible number of inputs) is greater than the size of the range (possible number of outputs). The range is 2^256 (possible output states) and the size of the input space is infinite (technically 2^(2^64) apparently, but much larger than 2^256). And that's precisely what permits the collisions (by the pigeon hole principle, there must be more than one possible input for each output - at least for one of the inputs). The hash function's whole design makes it computationally hard to find those collisions. There are three properties of hashes (first pre-image resistance, second pre-image resistance and collision resistance) which describe that property more exactly. So the answer to your question is that the design of the function makes it purposely hard to achieve that even if you know exactly how the function works. For details (in a slightly different context) of how functions can perform surprisingly (why for instance it is impossible to "step backwards through them" to invert them), see the answers here . | {
"source": [
"https://security.stackexchange.com/questions/236946",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/241013/"
]
} |
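To make the domain/range point above tangible, here is a small demonstration with SHA-256 (the 2^256-output hash the answer refers to): inputs of any size map to a fixed 256-bit digest, which is why collisions must exist even though finding or inverting them is designed to be infeasible.

```python
import hashlib

for message in [b"Hello", b"8GN492MD", b"x" * 1_000_000]:
    digest = hashlib.sha256(message).hexdigest()
    # The digest is always 64 hex characters (256 bits), regardless of input length
    print(len(message), digest)
```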
236,994 | It seems to me that "yes, they can". As I saw it in some countries, entering adult content will simply give a warning message that the site is blocked. Some places even block Facebook similarly. If yes, then technically, ISPs can steal a website's users' data by altering legitimate HTML and JavaScript with bad HTML, JavaScript, HTTP headers, CORS headers to send data in cookie/localstorage to https://somewhereelse.example.com . Note 1: This is just some imaginary situation. Note 2: HTTPS is used as a standard nowadays. | Your ISP is by definition a MITM (man-in-the-middle) and therefore can serve you any content it desires. You mentioned HTTPS, and this is of course a game changer. Yes, the ISP can serve arbitrary content when you access e.g. Facebook, but it does not have access to the private server certificate of Facebook, and your browser will detect that it is not speaking to the correct server. Now comes the critical part - the user will get a warning and has to decide what to do with it. If the user ignores the warning and tells the browser to load the fake website nevertheless, the browser will comply. A security-aware user will realize that something strange is going on, leave the site and maybe even investigate further. This means your ISP can serve any content it likes, but as long as you are using state-of-the-art TLS encryption and validate server certificates, you have the ability to detect that some tampering took place. | {
"source": [
"https://security.stackexchange.com/questions/236994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/153280/"
]
} |
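A hedged sketch of the detection step the answer above relies on: with default certificate validation, a TLS client refuses a server that cannot present a trusted certificate for the requested hostname. It uses only the Python standard library; the helper name is illustrative.

```python
import socket
import ssl

def fetch_cert_subject(hostname: str, port: int = 443):
    context = ssl.create_default_context()  # verifies the chain and the hostname by default
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["subject"]

# fetch_cert_subject("example.com") succeeds against the real site; an ISP serving its own
# certificate for that name raises ssl.SSLCertVerificationError instead of silently passing.
```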
237,208 | A colleague received an unsolicited email along the lines below: Dear Ms. Smith, please click on the following link to receive Document X regarding
Project Y. Yours, Eve Nobody [email protected] I suggested that my colleague reply to Eve Nobody and ask whether the email is legitimate. Note that we typed in the address of Eve Nobody, since one could tamper with the reply-to header. I assume three possible scenarios: Eve Nobody exists and she did send the email Eve Nobody exists, but she didn't send the email Eve Nobody does not exist, and the email server of company.com will reply with an error message In all possible scenarios, we only interact with company.com, and not with any potential spoofer. Thus, I consider this course of action safe. Was my advice sound, or are there other aspects to consider? For context: We are a firm which does research with academia and industry, hence we have plenty of information on our current projects along with the corresponding researchers. Thus, the information contained in the initial email (a reasonable title for Document X and the title of Project Y) can be gathered from our homepage. company.com is a legitimate company, and is involved in some research of ours. | You are focused on the person existing and not the account. Consider that Eve exists, did not send the email, but someone with access to her account did, and has entered an email rule to prevent your emails from hitting the inbox. You could carry on a conversation with that account but not Eve herself. So I would add: Account exists, email was sent from the account, but Eve did not send the email (compromised account) Account exists, email was sent from the account, but Eve does not exist (dummy account) In both cases, if you reply, you could be replying to the malicious actor and not Eve. The best response is to contact Eve through some means other than email (call, other contact info, etc.) | {
"source": [
"https://security.stackexchange.com/questions/237208",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/177170/"
]
} |
237,298 | We hired a new Sales Ops member 1 week ago. Within a week he's getting emails similar to the below: I did some research on the sender and it is a valid email, valid person, SPF/DKIM checks come through fine. I reached out to my CEO to check to see if he knew the sender. I know I can stop this by rejecting messages spoofing an employee, the problem is I'd be rejecting their personal emails due to the name part. Is there a way in Office 365 to detect these and stop them more intelligently? What are the ways some of these scammers get this data so easily so that they can send emails like this? They're always hitting my sales teams and not my operations or tech team. My team is running BitDefender with the latest updates, and running behind some strong firewalls and gateways that also scan incoming and outgoing data. | Presumably, your MX record is suffering from a directory harvest attack (DHA). There are lots of ways to do this and unless you're very savvy at pouring through your mail logs, most of them are (by design) hard to detect. The simplest form of DHA involves SMTP vrfy and expn commands. You can block these entirely. More sophisticated attacks can involve composing emails and then never completing them (the trailing . marking the end of a data command, or even just rset or quit or dropping the connection before issuing a data command). If you're using o365 exclusively, harvesting from the MX is less likely a concern (I assume Microsoft is savvy enough to block most DHA attempts, though they may not provide enough forensic data to determine if a DHA was attempted or how successful it was before it was cut off). Perhaps attackers have found another source of this data, like a list of your users or a compromised user system or account that attackers can access to read mail or the address book. If your usernames are predictable, e.g. [email protected], an attacker can determine users by scraping a company employee listing or a site like LinkedIn. Another source of addresses is public mailing list archives. One thing you can do is to set up a spamtrap (aka a honeypot). Just make a new account for a fictional user and never tell anybody. Wait for a while to see if it starts getting mail and you'll know there was a DHA. If you don't get any bites, then your trap wasn't listed in the place(s) attackers harvest. Try to come up with what those might be and spin up new dedicated addresses (or, if you have to pay per account, add new seeding techniques to the single trap account one by one, with a few weeks between each addition so you can identify it). | {
"source": [
"https://security.stackexchange.com/questions/237298",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/157376/"
]
} |
237,567 | Context NIST SP 800-63b gives the following guidance for password forms (aka login pages): Verifiers SHOULD permit claimants to use “paste” functionality when entering a memorized secret. This facilitates the use of password managers, which are widely used and in many cases increase the likelihood that users will choose stronger memorized secrets. In order to assist the claimant in successfully entering a memorized secret, the verifier SHOULD offer an option to display the secret — rather than a series of dots or asterisks — until it is entered. This allows the claimant to verify their entry if they are in a location where their screen is unlikely to be observed. The verifier MAY also permit the user’s device to display individual entered characters for a short time after each character is typed to verify correct entry. This is particularly applicable on mobile devices. Question I had the argument made to me that these two features should not be implemented together because they would allow a user to circumvent a password manager's protection and view the auto-populated password. I suspect this argument won't hold water, but I'm curious about community opinions. | Password managers are not meant to hide your passwords from yourself It's as simple as that. To whit: most password managers let you view your own password anytime you want anyway. I say "most" only because I haven't used them all. I've worked with a few sites where auto fill doesn't work for reasons outside the password managers control. Therefore viewing/copying your own passwords is a necessity. IMO a password manager that doesn't let you view your own secrets is a broken password manager. If you can use the password manager to view your own password, then an individual site that refuses to display a password at the user's request in an attempt to hide their password from themselves has grossly missed the bigger picture. Some people use password managers, some don't The ability to paste is very helpful when using password managers. The ability to view as you type is helpful for people who are typing their passwords (especially on phones). These are two different features for two different groups of people, both of whom can be expected to use a given site. Therefore to say you only need one of these features at a time is just silly... | {
"source": [
"https://security.stackexchange.com/questions/237567",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61443/"
]
} |
237,598 | I am aware of other questions asking similar things as this one, but I believe this design addresses many of the issues raised in those questions. I'm also not concerned with making sure there's no database to store, only that the database doesn't store any secrets. Using some key derivation function KDF
With master password provided from elsewhere
Password requirements are the rules of what are allowed by the site,
i.e. length, allowed character classes, required classes
# To register with a new site
With username provided from elsewhere
With password requirements provided from elswhere
Create a salt
Store site,password requirements,username,salt
Create key by KDF(salt, master password)
Convert key to generated password to fit password requirements
Give username and generated password to site
Register
# To login to a site
Retrieve password requirements,username,salt by site
Create key by KDF(salt, master password)
Convert key to generated password to fit password requirements
Give username and generated password to site
Login Let's say an attacker acquires both the store and the plaintext generated passwords. Does this design make it any easier for the attacker to find the master password than by a brute force attack? Is a brute force attack on this design easier than a brute force attack on an encrypted password store? Is this in any other way easier to attack than encrypted password managers that derive the encryption key from a master password? Of course the list of sites and usernames itself is important information. I'm only wondering about the security of the master password. | | {
"source": [
"https://security.stackexchange.com/questions/237598",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/241700/"
]
} |
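The deterministic scheme described in question 237,598 can be sketched in a few lines of Python. This is only a minimal illustration of the register/login flow from the question, not a vetted design: scrypt stands in for whatever KDF is chosen, the 62-character alphabet and 20-character output are arbitrary stand-ins for a site's password requirements, and the byte-to-character mapping is slightly biased, which is acceptable for a sketch but not for production.
import os, hashlib, string

ALPHABET = string.ascii_letters + string.digits   # stand-in for a site's allowed characters

def derive_site_password(master_password, salt, length=20):
    # KDF(salt, master password); scrypt stands in for any strong, slow KDF
    key = hashlib.scrypt(master_password.encode(), salt=salt,
                         n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024, dklen=32)
    # Convert key bytes to a generated password (slightly biased mapping, fine for a sketch)
    return "".join(ALPHABET[b % len(ALPHABET)] for b in key)[:length]

store = {}   # site -> non-secret record: username and salt (plus requirements in the real scheme)

def register(site, username, master_password):
    salt = os.urandom(16)                      # create a salt, store it with the site entry
    store[site] = {"username": username, "salt": salt}
    return username, derive_site_password(master_password, salt)

def login(site, master_password):
    entry = store[site]                        # retrieve salt by site, re-derive the password
    return entry["username"], derive_site_password(master_password, entry["salt"])

name, pw = register("example.org", "alice", "my master password")
assert (name, pw) == login("example.org", "my master password")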
237,827 | I know about images posing a risk to security. I don't open suspicious junk emails for this reason. However, occasionally I have to check my junk email folder without clicking on an email just in case anything mistakenly went in there. I am seeing a growing number of junk emails have images in the FROM name before opening an email, such as the below screenshot. As you can see there is a picture of a present at each side of the FROM name. I don't think these are a setting I can see when sending an email, so how are spammers getting images to show? Could such images contain tracking pixels, drive-by downloads or steganographic code? | The pictograms you are seeing in the name portion are Unicode emojis. They can be used anywhere there is an updated version of Unicode. To see a full list of the supported emojis for each version of Unicode, look here . You can copy and paste them just about anywhere, and they will display. These do not contain tracking, they are loaded from your system locally. However, if they are embedded as an image within the body of the email, that is a different story. | {
"source": [
"https://security.stackexchange.com/questions/237827",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/241666/"
]
} |
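As the answer to question 237,827 notes, the pictograms are plain Unicode code points rendered by the recipient's own fonts, not images fetched from anywhere. A minimal Python illustration; the specific code point U+1F381, the wrapped-present emoji, is chosen to match the example in the question.
gift = "\U0001F381"                      # same character as "\N{WRAPPED PRESENT}"
sender = gift + " Special Offers " + gift
print(sender)                            # rendered by a local font, nothing is downloaded
print(len(gift), hex(ord(gift)))         # a single code point, 0x1f381: text, not an image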
238,031 | I'm a listener of the podcast "Security Now" where they often claim that there are no reasons to limit the number of characters a user can use in their passwords when they create an account on a website. I have never understood how it is even technically possible to allow an unlimited number of characters and how it could not be exploited to create a sort of buffer overflow. I found a related question here , but mine is slightly different. The author of the other question explicitly mentions in their description that they understand why setting a maximum length of 100000000 characters would be a problem. I actually want to know why it would be a problem, is it like I have just said because of buffer overflows? But to be vulnerable to a buffer overflow, shouldn't you have a sort of boundary which you can't exceed in the first place, and thus if you didn't limit the number of characters, would you even have this risk? And if you are thinking about starving a computer's RAM or resources, could even a very large password be a problem? So, I guess it is possible not to limit the number of characters in a password: all you'd have to do would be to not use the maxlength attribute or not have a password validation function on the server side. Would that be the secure way to do it? And if it is, is there any danger in allowing an unlimited number of characters for your passwords? On the other hand, NIST recommends developers to allow for passwords up to 64 characters at least. If they take the time to recommend a limitation, does it mean there has to be one? Some have suggested that this question could be a duplicate of my question. It is not. The other question starts from the premise that there is always a threshold on passwords, I was just wondering if there was a reason to put a threshold on passwords to begin with. | A limit is recommended simply to avoid exhausting resources on the server. Without a limit, an attacker could call the login endpoint with an extremely large password, say a gigabyte (let's ignore whether it's practical to send that much at once. You could instead send 10MB at a time, but more quickly). Any work the server needs to do on the password will now be that much more expensive. This applies not just to password hashing but every level of processing to reassemble the packets and get them to the application. Memory usage on the server also increases considerably. Just a few concurrent 10MB login requests will start having an impact on server performance, perhaps to the point of exhausting resources and triggering a denial of service. These may not be security issues in the sense of password/data leakage but crippling a service by DOS or crashing definitely is. Note that I make no mention of buffer overflow: decent code can handle arbitrarily big passwords without overflowing. To wrap up, I think when someone says "there's no reason to limit the number of characters of a password", they are talking about commonly seen small limits (eg: 10 or 20 characters). There is indeed no reason for those other than laziness or working with old systems. A limit of 256 characters which is larger than desired by most people (except those testing those limits) is reasonable and can prevent some of the issues related to arbitrarily-large payloads. | {
"source": [
"https://security.stackexchange.com/questions/238031",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232313/"
]
} |
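A minimal sketch of the compromise described in the answer to question 238,031: accept passwords far longer than NIST's 64-character minimum, but cap the input before it reaches the expensive hash so a multi-megabyte "password" never ties up the KDF. The 256-character cap and the PBKDF2 parameters are illustrative choices, not values taken from the answer.
import hashlib, os

MAX_PASSWORD_CHARS = 256   # generous for humans and password managers, cheap for the server

def hash_password(password, salt):
    if len(password) > MAX_PASSWORD_CHARS:
        # Reject before any expensive work is done; this is the resource-exhaustion guard
        raise ValueError("password too long")
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = os.urandom(16)
print(hash_password("correct horse battery staple", salt).hex())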
238,043 | In PHP a magic hash attack happens when loose type comparisons cause completely different values to be evaluated as equal, causing a password "match" without actually knowing the password. Here is an example: <?php
if (hash('md5','240610708',false) == '0') {
print "'0' matched " . hash('md5','240610708',false);
} If you execute this you will see: '0' matched 0e462097431906509019562988736854 Would this be possible in a JavaScript application that's using the loose equality == operator to compare two hashes? JavaScript is a weakly typed language, so I would naturally assume the type coercion can be taken advantage of and therefore present various security holes. | Being loosely typed with a crazy == operator, JavaScript is vulnerable to type juggling. But it is not as vulnerable as PHP. Here are a few things that are equal in PHP, but not in JavaScript: '0e111' == '0e222' Even though both are strings, PHP will treat them as numbers. JavaScript needs one of the operands to be a number before it tries to coerce anything into a number. '0eaaa' == 0 PHP will interpret anything beginning with a number as a number, while JavaScript will not. Note that even PHP needs the other operand to be a number in this case. However, this will be equal in both PHP and JavaScript: '0e111' == 0 One operand is a string containing only digits after the 0e (very unlikely to happen at random), and the other must be an actual number (not just a string looking like a number). This makes it harder to find type juggling vulnerabilities with hashes in JavaScript. That doesn't mean they don't exist, though. Use === .
"source": [
"https://security.stackexchange.com/questions/238043",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/242334/"
]
} |
238,254 | I work as a solutions architect for web-based systems on AWS. As part of this role, I often respond to Information Security questionnaires. Nearly all questionnaires request information about data encryption at-rest and in-transit. However only a much smaller percentage ask about other security aspects, such as password policies or common web application security issues, as published by OWASP. I wonder how common/likely accessing of clients data is within a public cloud provider such as AWS, Azure and GCP. It seems a very high barrier to pass for an external party, even data centers of small local web hosting companies seem to have very good physical access security. And informal conversations with bank employees tell me that accessing someone's bank account without reason leads to instant dismissal, so surely public cloud providers would have similar controls in place? This is not to challenge the value of encryption at rest, it is very cheap to access, so there is no reason not to enable it, but where does it sit in terms of priorities? | Your threat model is focused on external parties breaking in. But the threats are broader than that. Low-level hardware backups, VM snapshots, and disposed hardware can all contain data. And because these things tend to be seen to have lower risks, they are often mishandled. So, it's not a "Mission Impossible" style of threat that is likely. It's the "eh, the drive is old, just toss it" style of threat that's the problem. Even for large cloud providers. And because, as you say, it's cheap and easy to implement encryption-at-rest, to not implement it is its own cause for concern and follow up questions. Human factor issues, like password policies and secure coding practices, are also very important, but difficult to assure, insure, and be consistent. So, technical controls tend to be the focus, regardless of the overall priority. | {
"source": [
"https://security.stackexchange.com/questions/238254",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11027/"
]
} |
238,313 | I've done some research and it looks like the way Linux keeps command history is less about security and auditing and more about helping the user. Even after making changes to log commands instantly and to log commands prefixed with a space, a command still won't be logged until it finishes. Is there any way to improve audit logging other than possibly writing a module for the Linux kernel that will instantly log whatever is typed? | Your threat model is focused on external parties breaking in. But the threats are broader than that. Low-level hardware backups, VM snapshots, and disposed hardware can all contain data. And because these things tend to be seen to have lower risks, they are often mishandled. So, it's not a "Mission Impossible" style of threat that is likely. It's the "eh, the drive is old, just toss it" style of threat that's the problem. Even for large cloud providers. And because, as you say, it's cheap and easy to implement encryption-at-rest, to not implement it is its own cause for concern and follow up questions. Human factor issues, like password policies and secure coding practices, are also very important, but difficult to assure, insure, and be consistent. So, technical controls tend to be the focus, regardless of the overall priority. | {
"source": [
"https://security.stackexchange.com/questions/238313",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31084/"
]
} |
238,320 | Let's assume we have an internal environment consisting of some distributed systems, one central system and an internal PKI. The central system is serving a cluster of HSMs that shall be accessable by network in a securely controlled way. On the distributed systems, there are different application services run by separated technical users that want to make use of the hardware-based cryptography offered by the central system. The distributed systems don't have access to local HSMs. Each connection from an application service to the central HSM-service is secured by mutual-auth TLS using certificates from the internal PKI. While the central system's tls private key can be protected by the local HSMs, the applications services' tls keys have to be software keys and protected somehow by the local access control system. In this setup we are concerned about a single malicious administrator on the distributed systems using/copying the private key of an application service to perform cryptographic operations on sensitive data of that application. Is there any elegant solution to protect agsinst this threat? Currently, we can only think of the following approaches: a) Of course we could provide local HSMs to each distributed system. However, this should be incredibly expensive regarding the amount of distributed systems and would also require to establish a responsability for a more complex infrastructure. b) Someone had the idea to somehow use local TPMs to protect the application services' keys from the administrator and also keep them separated. I'm not sure if I really understand this approach but for me it sounds like a missunderstanding of what a TPM is capable to do. c) The access control system and the monitoring should be configured such that any access to a key from an administrator's session is raising an alert. Of course, this also requires a concept that reduces the power of an administrator so he cannot manipulate. Not to mention a concept of how to handle such alerts properly. So I would like to know if you know an elegant solution to this problem. I assume this should be a standard problem in the era of cloud computing. Maybe you have some further ideas. Thank you! | Your threat model is focused on external parties breaking in. But the threats are broader than that. Low-level hardware backups, VM snapshots, and disposed hardware can all contain data. And because these things tend to be seen to have lower risks, they are often mishandled. So, it's not a "Mission Impossible" style of threat that is likely. It's the "eh, the drive is old, just toss it" style of threat that's the problem. Even for large cloud providers. And because, as you say, it's cheap and easy to implement encryption-at-rest, to not implement it is its own cause for concern and follow up questions. Human factor issues, like password policies and secure coding practices, are also very important, but difficult to assure, insure, and be consistent. So, technical controls tend to be the focus, regardless of the overall priority. | {
"source": [
"https://security.stackexchange.com/questions/238320",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/242721/"
]
} |
238,330 | By saving all the passwords for all users forever, are companies accidentally creating a less secure environment? Related to this question: Why do some sites prevent users from reusing their old passwords? I understand that companies feel that they need to be protected, but by everyone taking the same strategy, it creates a poor security environment. It seems less secure to me. Am I missing something? | Your threat model is focused on external parties breaking in. But the threats are broader than that. Low-level hardware backups, VM snapshots, and disposed hardware can all contain data. And because these things tend to be seen to have lower risks, they are often mishandled. So, it's not a "Mission Impossible" style of threat that is likely. It's the "eh, the drive is old, just toss it" style of threat that's the problem. Even for large cloud providers. And because, as you say, it's cheap and easy to implement encryption-at-rest, to not implement it is its own cause for concern and follow up questions. Human factor issues, like password policies and secure coding practices, are also very important, but difficult to assure, insure, and be consistent. So, technical controls tend to be the focus, regardless of the overall priority. | {
"source": [
"https://security.stackexchange.com/questions/238330",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/242738/"
]
} |
238,625 | In addition to the authentication techniques that are based on “something you
have”, “something you know” and “something you are”, authentication techniques that consider “somewhere you are” are also used. Why? Does it add further security? | “Somewhere you are” is NOT an authentication factor , despite what you might have read elsewhere. It is an authorization factor. Indeed, it does not answer the question "are you who you claim to be?", but instead it answers "should you be there? / are you authorized to be here?". (The answer to the question "who are you?" being an identification , yet another category.) To further clarify (as asked in comments): Owning a badge, a key or knowing a password (a.k.a. a token) can answer the question "are you who you claim to be?" because the token should be unique and should be in its owner's possession. Whereas multiple different persons can easily be in front of the door trying to enter. If in your very specific case, only authenticated persons can be in front of the door, this only means that the authentication has been performed elsewhere beforehand and that you trust this specific location to be a good conveyor of the authentication information. It also implies that you trust this first authentication method. Whether this trust is misplaced or not depends on your threat model. As a side note: biometrics should only be considered an identification factor (or at most a very weak authentication factor), because you cannot revoke a biometric feature, while you can revoke a stolen authentication factor, by changing the lock or updating the whitelist. End of side note. This means in practice that you should check the "somewhere you are" factor (IP address, geo-localization, time-localization (date expiration), etc.) independently of authentication, and preferably after a proper authentication so that you can log the activity and provide accountability. So yes, you can use the “somewhere you are” factor on top of the classical 3 types of authentication factors, not as another authentication factor but as an authorization parameter. Whether it's useful depends on the use-cases, and other answers to this question address this point or give examples.
"source": [
"https://security.stackexchange.com/questions/238625",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/241454/"
]
} |
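Following the answer to question 238,625, a location check belongs after authentication, as an authorization and accountability step rather than an authentication factor. A minimal Python sketch, assuming the client IP is known and that an allow-list of example networks stands in for "somewhere you are"; the authenticate() stub and the hard-coded credentials are placeholders for a real credential check.
import ipaddress

ALLOWED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8"),
                    ipaddress.ip_network("192.0.2.0/24")]   # example ranges only

def authenticate(username, password):
    # Placeholder credential check: answers "are you who you claim to be?"
    return username if (username, password) == ("alice", "correct horse") else None

def authorize_location(client_ip):
    # Separate check: answers "are you allowed to be here?"
    return any(ipaddress.ip_address(client_ip) in net for net in ALLOWED_NETWORKS)

def handle_login(username, password, client_ip):
    user = authenticate(username, password)                # authentication comes first
    if user is None:
        return "authentication failed"
    if not authorize_location(client_ip):
        print("audit:", user, "denied from", client_ip)    # log after authentication for accountability
        return "not authorized from this location"
    return "access granted"

print(handle_login("alice", "correct horse", "10.1.2.3"))      # access granted
print(handle_login("alice", "correct horse", "203.0.113.9"))   # not authorized from this location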
238,641 | While increasing my communication security as much as possible, I came across OpenPGP & S/MIME and already use an OpenPGP key pair in both of my mail clients (PC & smartphone). I believe the CSR must be generated based on a truely private key, hence a CA generated one is unacceptable for me. Unfortunately, I struggle with finding the correct OpenSSL arguments needed for CSR generation based on my existing OpenPGP's public key instead of letting OpenSSL create an entirely new key pair. Put simply, how to tell OpenSSL: "Take Thunderbird's 'Armored ASCII'-formatted public key export file & create a CSR based on it (and further provided details like CN, O, OU, etc.)"? | “Somewhere you are” is NOT an authentication factor , despite what you might have read elsewhere. It is an authorization factor. Indeed, it does not answer the question "are you who you claim to be?", but instead it answers "should you be there? / are you authorized to be here?". (The answer to the question "who are you?" being an identification , yet another category.) To further clarify (as asked in comments): Owning a badge, a key or knowing a password (a.k.a. a token) can answer the question "are you who you claim to be?" because the token should unique and should be in its owner possession. Whereas multiple different persons can easily be in front of the door trying to enter. If in your very specific case, only authenticated persons can be in front of the door, this only means that the authentication has been performed elsewhere beforehand and that you trust this specific location to be a good conveyor of the authentication information. It also implies that you trust this first authentication method. Whether this trust is misplaced or not depends on your threat model. As a side note: biometrics should only be considered an identification factor (or a most a very weak authentication factor), because you cannot revoke a biometric feature, while you can revoke a stolen authentication factor, by changing the lock or updating the whitelist. End of side note. This means in practice that you should check the "somewhere you are" factor (IP address, geo-localization, time-locatization (date expiration), etc.) independently of authentication, and preferably after a proper authentication to be able to log the activity and be able to do accountability. So yes, you can use the “somewhere you are” factor on top of the classical 3 types of authentication factors, but not as another authentication factor, but as an authorization parameter. Whether it's useful depends on the use-cases, and other answers to this question address this point or give examples. | {
"source": [
"https://security.stackexchange.com/questions/238641",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/243064/"
]
} |
238,759 | I noticed a lot of companies do not have social engineering as in-scope of bug bounties/responsible disclosure guidelines, even though it is often used in real-world attacks. I understand that for popular bug bounty programs the amount of social engineering attempts would probably become problematic, but for small companies I would see it as beneficial to once in a while have someone try to get into the systems via social engineering as this would keep everyone sharp. Are there reasons I am missing here for excluding this from allowed methods? | Because Human Factor vulnerabilities are complex, undefined, non-linear, and often not repeatable in a predictable way. Being able to successfully soceng one person is not enough for an organisation to use as a basis for action. In short, if you could do a soceng test, the results would not be useful. SQLi on a web form, on the other hand, is simple, defined, linear, and the results are repeatable. Bug bounties are for technical issues, not "all possible issues that could possibly go wrong". SocEng also creates massive liabilities that include the individual, and that brings in a host of other issues. How far do you go? How much personal information do you gather and weaponise? How do you get permission from the person and still keep the test legitimate? Permission alone places this activity beyond a simple bug bounty program. Testing Human Factor vulnerabilities has to be done carefully, by professionals who know what they are doing, and the scope is very tightly defined. That's why soceng tests tend to keep to phishing simulations; it's very complex and there are a lot of side factors to take into account. It's not something for some random bug bounty tester to go mucking around with. | {
"source": [
"https://security.stackexchange.com/questions/238759",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/243206/"
]
} |
238,842 | How safe are we when we use phone hardware from untrusted manufacturers and use end-to-end encrypted communication like Signal and Telegram? Are our conversations really safe from keyloggers or spyware? And what is the best option to communicate safely? | The short answer is that if the hardware is compromised, then anything you can read, it can read. | {
"source": [
"https://security.stackexchange.com/questions/238842",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/231319/"
]
} |
239,067 | The situation: Currently, we are sending out emails and SMS to our users, which include a link to a form that each user has to fill out on a regular basis (an "eDiary"). As we need to be able to authenticate and authorize the user accordingly, we currently attach a jwt token as a query parameter to the link as we want to make it as easy as possible for the users, as not many of them are very tech-savvy. The problem with this is, that the links tend to be very long because of this, which makes it very expensive, as we're sending out about five SMSes, out of which four are just for the link. Therefore we are currently looking for a way to shorten the links in a safe way. As the data that is being entered is very sensitive, the link should still be secure enough, so it can not be guessed in a "short" amount of time. The links are usually valid for one or two days, as we can not "force" the user to enter the data in a shorter amount of time. Therefore I am very likely looking for a hashing algorithm of sorts, that could be used in this scenario, where the structure of the links is well known. We'll of course also limit the amount of retries a user could do, before being temporarily blocked by the system, but in case we'll have to serve millions of links, the chances will grow of guessing one at random. The question : What would be a good compromise between shortening the URL as much as possible, but still keeping the link safe enough so "guessing" a specific link is unlikely for specific users as well as for all users ( birthday attack )? | Entropy is your friend. Using only alphanumeric characters (special characters are best avoided in this case because they often need URL encoding, which complicates things) you have a "language" of 62 possible characters to choose from. For a string of length X made from this "language", the total number of possible strings is simply: 62**X If you start blocking an IP address after Y failed attempts then the odds that an attacker with a single IP address will guess a code are: Y/(62**X) But imagine an attacker can easily switch IP addresses, so let's imagine they have a million IP addresses at their disposal (note: the number will be much larger if you support IPV6). Therefore their odds of success are simply: (1e6*Y)/(62**X) Finally note (h/t @Falco ) that the above assumes the attacker is looking for a particular code. If you are worried about someone finding any code then you need to further multiply by the number of active codes you have at a given time, which depends on how often they are created and how quickly they expire. Given all of this though, you just have to decide how low you want the probability to be, plug in your Y, and solve for X. As a simple starting point I usually suggest a 32 character alphanumeric string (make sure and use a proper CSPRNG ). If you block an IP after 1000 failed attempts then an attackers odds of finding a specific code are: (1e6*1000)/(62**32) Which is 4.400134339715791e-49 . Given those odds, it's more likely that the attacker will win the lottery 4 or 5 times in a row before they guess a code. You could have billions of active codes at a time and the odds of guessing any one would still effectively be zero. | {
"source": [
"https://security.stackexchange.com/questions/239067",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/97771/"
]
} |
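The answer to question 239,067 can be turned into a few lines of Python: generate the link token with a CSPRNG and plug the same illustrative numbers (32 characters, a million IPs, 1000 tries each) into the odds calculation.
import secrets, string

ALPHABET = string.ascii_letters + string.digits        # the 62-character "language"

def make_token(length=32):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))   # CSPRNG-backed

print(make_token())

# Odds that an attacker with a million IPs, each allowed 1000 failed tries,
# guesses one specific 32-character code:
attempts = 1_000_000 * 1000
print("%.3e" % (attempts / 62**32))    # about 4.4e-49, as in the answer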
239,432 | You have a User table: UserID (auto-incrementing Integer)
Password hash
LastLogin All related tables are linked by the UserID. You also have a Username table: Username
Salt
IncorrectLoginCount
LockedUntil
etc. A user creates an account. You take the Username + Password and a unique, random salt and hash it all together with Argon2: hash = argon2(username + password + salt) You store the hash and the next generated UserID in the User table and the Username and randomly generated salt in the Username table. There is no way to directly tell which Username corresponds to which UserID. The user attempts to log in. You take the submitted Username, fetch the record in the Username table (unless the account is locked), grab the salt, take Username + Password and salt and hash it. You then search for the hash in the Password column of the User table. If you don't find it, incorrect login and if you do, you log the user in with the UserID. Let's say you have 100 users. You then dump 999,900 bogus records into your Username table with no corresponding record in the User table. They look like Usernames, except they correspond to no user in your database and there is no way to tell which ones are real. Now the attacker has to waste time trying to crack the passwords of non-existent users, which make up 99.99% of the records in the table and will run the full length of the attempt before abandonment because they will fail every check since they have no corresponding record. I'm trying to create a situation where the attacker has to waste time attempting to crack the password of users that don't actually exist. Also, if the initial attempt to collect the password doesn't succeed, the attacker doesn't know for certain whether it is a dummy record or a user with a strong password. The Invalid LoginCount and LockedUntil would be cleared once a day. When a new user account is first created, you search the UserID table, which only has 100 records at the moment, for a matching hash. Let's say you get a hash collision once a decade or even once a year, even one collision as frequently as once a decade is an absurd stretch in my opinion. This is especially the case that you are only generating hashes for the much smaller UserID, not the massive Username table. You simply throw away the hash, generate a new salt and rehash. You then create the User Account. Would this significantly slow an attacker down if your database and application code was compromised and the attacker knew exactly what you were doing? If you attempted to crack the hashes in the UserID table itself, you would have to hash each candidate password separately with each Username. Let's say you hashed 30,000 times. Each candidate password would have to be hashed 30,000 times for the first Username, 30,000 times for the second Username, 30,000 times for the third Username, etc. This would have to be done for every candidate password. | Before getting into the analysis of the process to slow down cracking the hashes, I want to address something far more important first: If I log in, and my hash happens to match some other user, I will get authenticated to that user. So your whole "look in the Users database to blindly find any match because I don't tie password hashes to users" is a horrifying approach to authentication . Please don't do this. Kirchoff's Principle suggests that a system must be secure even if an attacker knows how you do something. So, let's assume the attacker knows that you added fake usernames. Fine, now all the attacker has to do is to look for valid usernames and tie it to UserID before starting to crack hashes. And to do that, I would look at the logged user activity in the database. 
I do not know what is logged in your app, but one has to assume that the user's activity will suggest the username associated with it, if it is not stored, specifically at some point in the database. Things like timestamps can make correlation easy. And since your threat model includes the assumption that the attacker has access to the codebase and the entire database, your approach appears to do nothing but increase your design overhead and database size. So, your entire approach relies on an attacker never being able to correlate UserId and Username. This is known as "Security by Obscurity" and, while it has its place, it is not a basis for a secure control. Now let's tie my first point to my second. Let's say that I want to log into UserID 1 because I can see that it's the admin (or an account of interest). I know the password hash. Now I can take all the usernames and their salts to find a hash that might match User 1's hash. It no longer matters which username I use. It might be unlikely to find an exact match like this using Argon2, but this highlights the larger problem with your approach. | {
"source": [
"https://security.stackexchange.com/questions/239432",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/227162/"
]
} |
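To make the first objection in the answer to question 239,432 concrete, here is a minimal sketch of the conventional alternative it implies: the hash is stored against a specific user record and verification only ever compares against that one record, so a coincidental hash match with some other account can never log you into it. PBKDF2 from the standard library stands in for Argon2 purely to keep the sketch dependency-free.
import os, hmac, hashlib

users = {}   # username -> record; a database table in practice

def register(username, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    users[username] = {"salt": salt, "hash": digest}

def login(username, password):
    record = users.get(username)
    if record is None:
        return False      # unknown user; never search the whole table for a matching hash
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 600_000)
    return hmac.compare_digest(candidate, record["hash"])   # compared against this user only

register("alice", "correct horse battery staple")
print(login("alice", "correct horse battery staple"))   # True
print(login("alice", "wrong password"))                  # False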
239,526 | There's this game at http://slither.io which I like to play. Only inputs that I give to it are space bar, and cursor location. It's not https. Are there any risks? | Yes, there is a risk. HTTPS ensures not just confidentiality, but also integrity and authenticity. As such, an attacker could hijack the connection between you and the server and inject malicious JavaScript into your session. How likely is that to happen? Depends on how you connect to the server. If you are in your own home, then the likelihood is not very big. It's a risk still, don't get me wrong, but I don't want to cause unnecessary paranoia. On the other hand, if you connect to a public access point (e.g. "free McDonald's Wifi"), then the chance of this happening is much much higher. How severe is this? Since there is no sensitive data there, the "usual" things like credential stealing or session hijacking are not applicable. However, depending on how determined the attacker is, they might redirect you to other malicious domains, exploit browser vulnerabilities, get you to download stuff or even get you to disclose credentials for other services (e.g. "Log in with your Google, Twitter or Facebook account to play"). As Eilon has pointed out in the comments, another potentially unwanted side-effect is that your ISP can tamper with the website you use. Some ISPs do this for arguably benign purposes, such as stripping whitespace off the HTML document before sending it to you, while others do more "intrusive" changes, such as compressing images, or even injecting advertisements into the website. While this is not a security-risk per se, it is unwanted behavior that can most effectively be combatted by using HTTPS. Does that mean you should stop playing? That depends completely on your risk appetite and whether or not the upsides outweigh the downsides for you personally. | {
"source": [
"https://security.stackexchange.com/questions/239526",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244143/"
]
} |
239,540 | It seems to me that one of the major flaws of Wifi is that computers will auto-reconnect to a Wifi that merely has the same name as one you connected to in the past i.e. an evil twin. While perusing log files I've seen this happen and it is a surprising design flaw. There ought to be something more substantial than just an access point name to authenticate a Wifi router as being one that the computer spoke with in the past. Why does Wifi auto-reconnect based on only the access point name? Why isn't there a shared secret? UPDATE
I should describe what I saw that made me ask this question. I was on a train at one point, I can't remember the country, I had come from an airport where I'd
been in a lounge that had free Wifi, no password, but there was a "captive portal" login screen. I noticed on the
train, which was by then far from the airport, that my computer Wifi
had once again connected to the airport lounge Wifi. I checked the log and
indeed, a "fake" Wifi hotspot with the same name but a different MAC
address was there and DHCP had provided me with an IPv4 address. | Yes, there is a risk. HTTPS ensures not just confidentiality, but also integrity and authenticity. As such, an attacker could hijack the connection between you and the server and inject malicious JavaScript into your session. How likely is that to happen? Depends on how you connect to the server. If you are in your own home, then the likelihood is not very big. It's a risk still, don't get me wrong, but I don't want to cause unnecessary paranoia. On the other hand, if you connect to a public access point (e.g. "free McDonald's Wifi"), then the chance of this happening is much much higher. How severe is this? Since there is no sensitive data there, the "usual" things like credential stealing or session hijacking are not applicable. However, depending on how determined the attacker is, they might redirect you to other malicious domains, exploit browser vulnerabilities, get you to download stuff or even get you to disclose credentials for other services (e.g. "Log in with your Google, Twitter or Facebook account to play"). As Eilon has pointed out in the comments, another potentially unwanted side-effect is that your ISP can tamper with the website you use. Some ISPs do this for arguably benign purposes, such as stripping whitespace off the HTML document before sending it to you, while others do more "intrusive" changes, such as compressing images, or even injecting advertisements into the website. While this is not a security-risk per se, it is unwanted behavior that can most effectively be combatted by using HTTPS. Does that mean you should stop playing? That depends completely on your risk appetite and whether or not the upsides outweigh the downsides for you personally. | {
"source": [
"https://security.stackexchange.com/questions/239540",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244155/"
]
} |
239,542 | A friend of mine gave me a web application for me to test and find vulnerabilities. He told me that this web application can be purposefully modified and then made to run an SQL injection. I have been trying everything, but I have not been able to gain access to information. He told me that I should be able to obtain certain information such as the debt from someone, or their credit card number, etc. Keep in mind, all the values being outputted in this website are fake and not real. The code for this website is as follows in the web login: <html> <head><title>Online Access</title></head><body>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tbody>
<tr>
<td valign="CENTER" bgcolor="#cbbbff">
<center>
<h2>Bank</h2>
</center>
</td>
</tr>
</tbody>
</table>
<form action="login.pl" method="post">
<p> Login denied for user <b></b>! Try again.
<br>
Please enter your access ID and your password to access your credit
card debt information.</p>
<br><b> Access ID: </b>
<input type="text" name="access" size="18"> <br>
<b> Your Bank Password: </b> <input type="password" name="password"
size="10"> <br>
<b> ID number: </b> <input type="text" name="softvulnsec"
size="3" maxlength="3"> <br>
<b> Registration code: </b> <input type="text" name="matnr"
size="7" maxlength="7"> <br>
<br>
<input type="submit" name="login" value="Login">
</form>
</body></html> Once I am logged in I have access to this bit of code of which I have noticed that there are hidden field parameters. <html> <head><title>Bank Online Access</title></head><body>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tbody>
<tr>
<td valign="CENTER" bgcolor="#cbbbff">
<center>
<h2>Bank</h2>
</center>
</td>
</tr>
</tbody>
</table>
<p> Welcome to the Bank access! <br>
You have successfully logged in <i><b></b></i>.<br> Your access id is: <b>123456</b></p> <br>
<p> You can use the following services: <br>
<table border="1">
<form action="login.pl" method="post">
<tr><td>
<b>Credit card account:</b> <a href="javascript:submitForm()">Request
information</a>
</td></tr>
<input type="hidden" value="164532873134525967223123872321" name="param">
<input type="hidden" value="100" name="softvulnsec">
<input type="hidden" value="123456" name="access">
<input type="hidden" value="qwef945372quwiefjd315469875312" name="token">
</form>
<script>
function submitForm()
{
document.forms[0].submit();
}
</script>
</table>
</body></html> Based on what I have read, since the method is "post" I can modify some of the values in order to gain access to the information, but every time I try to modify the access code in the login page, I get sent back to the login page and the access code and password revert back to the previous correct login information. I have tried to change the hidden parameter to text and then change the value within the access code to different values or to the correct value: 123456 plus an sql injection such as: 123456 AND SELECT *. None of this has given me any insight as to how to access information from the web application. Are there any hints I can get in order to access this information? Do I modify the hidden parameters with sql injection? | Yes, there is a risk. HTTPS ensures not just confidentiality, but also integrity and authenticity. As such, an attacker could hijack the connection between you and the server and inject malicious JavaScript into your session. How likely is that to happen? Depends on how you connect to the server. If you are in your own home, then the likelihood is not very big. It's a risk still, don't get me wrong, but I don't want to cause unnecessary paranoia. On the other hand, if you connect to a public access point (e.g. "free McDonald's Wifi"), then the chance of this happening is much much higher. How severe is this? Since there is no sensitive data there, the "usual" things like credential stealing or session hijacking are not applicable. However, depending on how determined the attacker is, they might redirect you to other malicious domains, exploit browser vulnerabilities, get you to download stuff or even get you to disclose credentials for other services (e.g. "Log in with your Google, Twitter or Facebook account to play"). As Eilon has pointed out in the comments, another potentially unwanted side-effect is that your ISP can tamper with the website you use. Some ISPs do this for arguably benign purposes, such as stripping whitespace off the HTML document before sending it to you, while others do more "intrusive" changes, such as compressing images, or even injecting advertisements into the website. While this is not a security-risk per se, it is unwanted behavior that can most effectively be combatted by using HTTPS. Does that mean you should stop playing? That depends completely on your risk appetite and whether or not the upsides outweigh the downsides for you personally. | {
"source": [
"https://security.stackexchange.com/questions/239542",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244157/"
]
} |
239,614 | HTTP I'm aware that HTTP sends plain text over the network which can be sniffed and modified if a MITM is performed. HTTPS On the other hand, HTTPS sends encrypted text over the network that can neither be sniffed nor modified. Other? I'm wondering if there is an in between where the traffic can be sniffed, but not modified. I was thinking the server could just sign every packet using a CA . I'm also aware of manually verifying hashes of files downloaded, but seeing as those hashes are served over a modifiable means (HTTP), it doesn't seem that this really provides any authenticity as the hash could modified to match the modified file. As @mti2935 suggested, the hash could be sent over HTTPS, but I'm looking for a preexisting protocol to handle all this. Why I'm sure this question begs the question why. So here are a few example scenarios. A user wants to allow their network security device to scan files downloaded for malware without having to modify their trust store. I'm a ham radio operator and I'd like to stream movies over ham bands, but I'm not allowed to encrypt. I do care about the video maintaining it's integrity, but I don't care about someone else snooping. Sites that only distribute data and don't need encryption but do need data integrity. | SSL/TLS before 1.3 has some 'with-NULL' cipher suites that provide NO confidentiality, only authentication and integrity; see e.g. rfc5246 app C and rfc4492 sec 6 or just the registry . These do the usual handshake, authenticating the server identity using a certificate and optionally also the client identity, and deriving session/working keys which are used to HMAC the subsequent data (in both directions, not only from the server) but not to encrypt it. This prevents modification, or replay, but allows anyone on the channel/network to read it. These cipher suites are very rarely used, and always (to the best of my knowledge) disabled by default. (In OpenSSL, they not only aren't included in DEFAULT but not even in the otherwise complete set ALL -- to get them you must specify (an) explicit suite(s), the set eNULL aka NULL , or the set COMPLEMENTOFALL , which last grates horribly to any mathematician!) I very much doubt you'll ever get any browser to use them, and probably not most apps or even many packaged servers. But if you control the apps at both ends of an HTTPS connection -- or perhaps proxies for the apps -- this does meet your apparent requirement. TLS 1.3 changes how cipher suites are used, and no longer has this functionality. As time goes on, 1.3 will become more widespread, and it is likely 1.2 and 1.1 will be dropped in the foreseeable future. (1.0 already has been dropped many places, though not all. SSL3 is badly broken by POODLE, and dropped essentially everywhere.) Belatedly found dupe, from before 1.3: Can TLS provide integrity/authentication without confidentiality | {
"source": [
"https://security.stackexchange.com/questions/239614",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86361/"
]
} |
239,748 | I am participating in a project that involves a JavaScript SPA that provides a service and is intended to interact via REST APIs with one of our servers. Initially, I proposed to work on the two entities as two separate projects; specifically I put forth the following The user accesses the Web app through a www.myservice.org address The Web app contacts an api.myservice.org service for REST interactions but I was immediately faced with rejection. I was told that the Web app, residing at www.myservice.org , should contact the REST server via something like www.myservice.org/api because doing otherwise would entail a security threat. I didn't say this was a bad idea, but I insisted on splitting the API server from the SPA-serving one for the following reasons Scaling Separation of concerns Easier code management I'm much more of a developer than a system admin and security expert, so I couldn't promptly reply their rejection. Why would having two api.myservice.org and www.myservice.org servers represent a security issue? I was vaguely told about Cross-site scripting but even then the reasoning wasn't perfectly clear to me. | From the information provided, it is definitely not a security risk. As long as proper controls are set on the API endpoint (HTTPS, HSTS , etc.), you should be good to go. One thing to note here is that the myservice.org may be running on a hardened system and with additional protections (such as a WAF ). In that case, those controls will have to be applied to api.myservice.org as well. Edit: The argument of XSS is irrelevant here and it cannot happen just because myservice.org and api.myservice.org are decoupled. | {
"source": [
"https://security.stackexchange.com/questions/239748",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244378/"
]
} |
239,754 | Assume that I have no ability to use sudo and only rather limited permissions, but that I have a shell script exploit that allows me to change the ownership of a file to the current user by running a buggy program written in C that has root permissions. Specifically, the execlp() function is what is being exploited, as I have already found a way to specify the file parameter. The user variable is received by a call to the getenv() function. execlp("chown", user, file, (char *)0); How would I exploit this ability to gain ownership of any file in the system to ultimately gain sudo access over the system? What files would I modify? I've tried modifying the /etc/sudoers file itself but it would give the following errors sudo: no valid sudoers sources found, quitting
sudo: /etc/sudoers is owned by uid 1000, should be 0 Note that I can't change the file's owner back to root as the current user does not have permission to chown the file to root. I am operating on a dummy VM right now and this is just a security exercise. Side note: perhaps that last character array parameter in the code could be exploited somehow too? | From the information provided, it is definitely not a security risk. As long as proper controls are set on the API endpoint (HTTPS, HSTS , etc.), you should be good to go. One thing to note here is that the myservice.org may be running on a hardened system and with additional protections (such as a WAF ). In that case, those controls will have to be applied to api.myservice.org as well. Edit: The argument of XSS is irrelevant here and it cannot happen just because myservice.org and api.myservice.org are decoupled. | {
"source": [
"https://security.stackexchange.com/questions/239754",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244385/"
]
} |
239,881 | I am the volunteer IT administrator for a local non-profit organization. The organization has a few systems - specifically security cameras, network hardware, and telephones - that have local administrator accounts to manage them. Right now, I am the only person working for the organization with any amount of technical knowledge about these things. As part of our disaster recovery plans, I am writing a complete manual for administration of everything in the building. The problem is what to do about passwords. No one else in the organization wants to be responsible for having the administrator passwords to everything (and honestly I don't trust them anyway), but reducing the bus factor necessitates that a password be available somewhere for some other person to manage these devices in the future. My first thought was to use something like LastPass's "Emergency Access" feature that allows another user to request access to an account, except that I have no idea who that future user might be to give them permission and I have no confidence in this feature working properly anyway (since my wife and I tested it with our accounts and it didn't work). My current thought (loosely inspired by the opening scene of WarGames ) is to generate secure passwords for the accounts and make scratch-off cards with the passwords on them. These cards would be kept in a sealed envelope in the manual binder. Access to the passwords then requires opening the envelope and scratching the cards. I am not worried about someone going through the effort to steam open the envelope, scratch the cards, copy the passwords, then re-cover the cards in scratch-off material, and perfectly re-seal the (signed edge) envelope. Physical access control will prevent outsiders from doing that, and we are confident that all of our insiders don't want the passwords and wouldn't know what to do with these passwords even if they had them. I know that this plan is still vulnerable to anyone with physical access gaining admin rights, but I can't think of any better options. Is this a good plan? Are there better alternatives? Or am I overthinking it? | Offline password manager I would use an offline password manager (for example, KeepassXC as an open source option) for the whole list of credentials that may need to be accessed by someone else. The encrypted file containing the passwords can be given to any relevant persons and management beforehand, or it may be stored in some network location that's accessible also to some of your colleagues. But the passphrase (and possibly 2FA token) to access that file can be physically put in an envelope in a safe to be given to the appropriate person if/when needed. This also means that you can continuously keep that credential list up to date without touching that 'envelope in a safe'. | {
"source": [
"https://security.stackexchange.com/questions/239881",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17339/"
]
} |
239,894 | I am currently using a JWT implementation for the authentication part of my APIs. A private key is used to sign the generated token and to make sure it's not tampered with when it's used later for other APIs. My question is: what is the impact if this private key is leaked? What can the bad guy do with it? From here , my understanding is that the payload can be altered. Hence, in that example, a normal user can be changed to admin. But in my scenario, I don't have any other important fields except the expiration date. So other than the bad guy being able to forever extend his own token expiry date, what are the other impacts that I am facing? | Whoever possesses the private key can create valid tokens, and your system simply cannot distinguish between a legitimate token and a token created by the attacker. I am guessing you are not just using the expiry field but also the subject field sub, which is in short terms the logged-in user. With the private key, I can create a token with any subject I want, and thus sign in as any user of your system. As you stated, I can also add any other claim and your system has no choice but to trust it, as I was able to create a valid signature. It cannot be stressed enough: JWT heavily relies on the private key staying absolutely private. Losing the private key is the worst-case scenario. | {
"source": [
"https://security.stackexchange.com/questions/239894",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244547/"
]
} |
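To illustrate the answer to question 239,894: once the signing key is known, forging a token for any subject and any expiry is a single call. A minimal sketch assuming the PyJWT library and, for simplicity, a symmetric HS256 key standing in for the leaked key material; with a leaked RSA private key the call is the same apart from algorithm="RS256".
import time
import jwt   # PyJWT

leaked_key = "key-material-the-server-thought-was-secret"   # hypothetical leaked signing key

forged = jwt.encode(
    {"sub": "admin",                                     # impersonate any user at all
     "exp": int(time.time()) + 10 * 365 * 24 * 3600},    # expiry pushed years into the future
    leaked_key,
    algorithm="HS256",
)

# The server, verifying with the very same key, accepts the forged token as genuine
print(jwt.decode(forged, leaked_key, algorithms=["HS256"]))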
239,907 | I was wondering if there is a way to use the SELECT and UNION keywords without being caught by an algorithm (see below) that filters these keywords out. $filter = array('UNION', 'SELECT');
// Remove all banned characters
foreach ($filter as $banned) {
if (strpos($_GET['q'], $banned) !== false) die("Hacker detected");
if (strpos($_GET['q'], strtolower($banned)) !== false) die("Hacker detected");
} | Whoever possesses the private key can create valid tokens where your system simply can not distinguish between a legitimate token and a token created by the attacker. I am guessing you are not just using the expiry field but also the subject field sub, which is in short terms the logged in user. With the private key, I can create a token with any subject I want, thus sign in as any user of your system. As you stated, I can also add any other claim and you system has no choice but trust it, as I was able to create a valid signature. It can not be stressed enough, but JWT heavily relies on the private key to stay absolutely private. Losing the private key is the worst case scenario. | {
"source": [
"https://security.stackexchange.com/questions/239907",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244455/"
]
} |
239,935 | What would be an attack against an insecure instance of the OTP cipher given two challenge ciphertexts using the same key in order to get the plaintext? I've tried to implement some approaches with Python but it did not work.(I'm a beginner in cyber security) | How does a One-Time Pad work? Imagine you have a message M which is encrypted with a key K , which then results in a ciphertext C . Let us assume that the process through which the encryption occurs is an XOR, which I'll show by the ^ symbol. Furthermore, we assume M , K and C all have the same length. So we know: M ^ K = C Assuming an attacker has access to C and wants to recover M , this is cryptographically impossible. Why? Because even if an attacker tried every single possible key K , they would receive every single possible message M' . It is impossible for them to tell which is the correct message. In fact, they could simply look at every single possible message of that length and they would not be any wiser. However... Why is it called One-Time Pad? Because the same key can only be used once. If you use it twice, certain issues arise. Let's take the same scenario as above, but now we have two messages M1 and M2 , one key K and two cipher C1 and C2 . M1 ^ K = C1
M2 ^ K = C2 The curious thing about XOR is that it is a "reversible" operation. That means XOR'ing something with the same value twice results in the original value: X ^ X = 0
X ^ 0 = X
therefore
X ^ Y ^ X = Y Order of operations does not matter, just like in addition. Let's assume that the attacker has C1 and C2 , but not access to M1 and M2 . C1 ^ C2 = (M1 ^ K) ^ (M2 ^ K) Since order of operations does not matter, we can remove the parenthesis and group the K together: C1 ^ C2 = (M1 ^ M2) ^ (K ^ K) We learned above that a value XOR'd by itself is 0, and that a value XOR'd by 0 is itself. As such, we can simply remove the right parenthesis and get: C1 ^ C2 = M1 ^ M2 This means that you now have access to two plaintext messages XOR'd to each other. This is a lot more information than just the ciphertexts. How can I go from there? Let us assume that a message is only uppercase ASCII and spaces, and you know the following: C1 = 55 3a 90 26 b3 b6 48 37 6f c1 45 f7 e8 47 61 78 21 52
C2 = 42 33 97 55 ca b0 4e 37 61 ca 24 f9 e6 34 66 71 2f 44
C1 ^ C2 = 17 09 07 73 79 06 06 00 0e 0b 61 0e 0e 73 07 09 0e 16 At first glance, this may not seem to tell us a lot. After all, it's just some hexadecimal, right? Well, if you look at the values in the last line, you see some low values and some high values. Let's look at the high values, the first one being 0x73 , which is the ASCII value of the lowercase s . Why is this interesting? We know that it's the result of the XOR of the message M1 with M2 , and both can only be uppercase and spaces. If you look closely at the ASCII value of a space, you see it's 0x20 or 0010 0000 in binary. Meaning that XOR'ing with a space only flips one bit. If you look at the ASCII table, you will notice that uppercase and lowercase characters also only differ by one bit. This was done so that "to Upper" and "to Lower", as well as "toggle case" functions only had to operate on one bit. So we know the following: Either of the following is true: The fourth byte of M1 is S The fourth byte of M2 is _ The fourth byte of K is 0x75 or The fourth byte of M2 is S The fourth byte of M1 is _ The fourth byte of K is 0x06 (Note that I am using _ to represent a space for better visibility) We cannot yet tell which one of these is true, but we know they are mutually exclusive. If you have more messages C2 , C3 , etc. all encrypted by the same key, then you can simply determine which one is the one with the space, as all others will return either valid lowercase ASCII symbols or 0x00 . In fact, let's assume you intercepted a third message C3 , with the value 4826f938d2a63b596dcc45f8e8347778354e Now you know: C1 = 55 3a 90 26 b3 b6 48 37 6f c1 45 f7 e8 47 61 78 21 52
C2 = 42 33 97 55 ca b0 4e 37 61 ca 24 f9 e6 34 66 71 2f 44
C3 = 48 26 f9 38 d2 a6 3b 59 6d cc 45 f8 e8 34 77 78 35 4e
C1 ^ C2 = 17 09 07 73 79 06 06 00 0e 0b 61 0e 0e 73 07 09 0e 16
C1 ^ C3 = 1d 1c 69 1e 61 10 73 6e 02 0d 00 0f 00 73 16 00 14 1c
C2 ^ C3 = 0a 15 6e 6d 18 16 75 6e 0c 06 61 01 0e 00 11 09 1a 0a This already gives us a bit more information. A lot more, in fact. First of all, let's have a look at the above hypothesis: We know either M1[4] is _ or M2[4] is _ . Since C1 ^ C3 (and thus M1 ^ M3 ) does not result in a lowercase ASCII character at the fourth byte, we know both M1[4] and M3[4] are not spaces. Therefore we know that the first of our two hypothesized cases is true, and we know a bit more about the key and the other messages: C1 = 55 3a 90 26 b3 b6 48 37 6f c1 45 f7 e8 47 61 78 21 52
C2 = 42 33 97 55 ca b0 4e 37 61 ca 24 f9 e6 34 66 71 2f 44
C3 = 48 26 f9 38 d2 a6 3b 59 6d cc 45 f8 e8 34 77 78 35 4e
M1 = ?? ?? ?? S ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
M2 = ?? ?? ?? __ ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
M3 = ?? ?? ?? M ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
K = ?? ?? ?? 75 ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
C1 ^ C2 = 17 09 07 73 79 06 06 00 0e 0b 61 0e 0e 73 07 09 0e 16
C1 ^ C3 = 1d 1c 69 1e 61 10 73 6e 02 0d 00 0f 00 73 16 00 14 1c
C2 ^ C3 = 0a 15 6e 6d 18 16 75 6e 0c 06 61 01 0e 00 11 09 1a 0a We can also see that M1[3] ^ M3[3] and M2[3] ^ M3[3] result in lowercase characters ( 0x69 and 0x6e ), so we know M3[3] must be _ and therefore K[3] = C3[3] ^ 0x20 , which is d9. After doing all of this, your grid should look like this: C1 = 55 3a 90 26 b3 b6 48 37 6f c1 45 f7 e8 47 61 78 21 52
C2 = 42 33 97 55 ca b0 4e 37 61 ca 24 f9 e6 34 66 71 2f 44
C3 = 48 26 f9 38 d2 a6 3b 59 6d cc 45 f8 e8 34 77 78 35 4e
M1 = ?? ?? I S __ ?? S __ ?? ?? __ ?? ?? S ?? ?? ?? ??
M2 = ?? ?? N __ Y ?? U __ ?? ?? A ?? ?? __ ?? ?? ?? ??
M3 = ?? ?? __ M A ?? __ N ?? ?? __ ?? ?? __ ?? ?? ?? ??
K = ?? ?? d9 75 93 ?? 1b 17 ?? ?? 65 ?? ?? 14 ?? ?? ?? ??
C1 ^ C2 = 17 09 07 73 79 06 06 00 0e 0b 61 0e 0e 73 07 09 0e 16
C1 ^ C3 = 1d 1c 69 1e 61 10 73 6e 02 0d 00 0f 00 73 16 00 14 1c
C2 ^ C3 = 0a 15 6e 6d 18 16 75 6e 0c 06 61 01 0e 00 11 09 1a 0a Are we stuck now? This is as much as you can infer with 100% certainty. Now you can start to make some educated guesses. For example, you can see that M2 there is a three-letter word starting with Y and ending in U . You can make an educated guess here and assume that means "you". And indeed, if we assume that, we'd get the following: M1 = ?? ?? I S __ I S __ ?? ?? __ ?? ?? S ?? ?? ?? ??
M2 = ?? ?? N __ Y O U __ ?? ?? A ?? ?? __ ?? ?? ?? ??
M3 = ?? ?? __ M A Y __ N ?? ?? __ ?? ?? __ ?? ?? ?? ??
K = ?? ?? d9 75 93 ff 1b 17 ?? ?? 65 ?? ?? 14 ?? ?? ?? ?? The other characters seem to fit. M1 forms the word "is" and M3 forms the word "may". Since those are legitimate English words, you can be pretty certain that those are correct. Furthermore, note that some of the XOR'd messages result in 0x00 , which means that the two plaintexts must have the same letter at that position. For example, even though you don't know what letter M1[13] is, you know that M3[13] must be identical to it, because M1[13] ^ M3[13] = 0x00. Keep going from there and see if you can crack the rest. A small Python sketch of this XOR analysis follows below.
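The sketch below (assuming, as in this exercise, that the plaintexts contain only uppercase letters and spaces) XORs the example ciphertexts pairwise and flags the positions where a space/uppercase pair is likely; the variable and function names are just illustrative.
c1 = bytes.fromhex('553a9026b3b648376fc145f7e84761782152')
c2 = bytes.fromhex('42339755cab04e3761ca24f9e63466712f44')
c3 = bytes.fromhex('4826f938d2a63b596dcc45f8e8347778354e')

def xor(a, b):
    # bytewise XOR of two equal-length byte strings; the shared key K cancels out
    return bytes(x ^ y for x, y in zip(a, b))

pairs = {'C1^C2': xor(c1, c2), 'C1^C3': xor(c1, c3), 'C2^C3': xor(c2, c3)}
for name, x in pairs.items():
    print(name, x.hex(' '))
    # a lowercase byte means one message has a space here and the other has
    # the corresponding uppercase letter (space XOR uppercase = lowercase)
    hits = [(i + 1, chr(b ^ 0x20)) for i, b in enumerate(x) if ord('a') <= b <= ord('z')]
    print('  candidate space/letter positions:', hits)
Printing these XORs reproduces the three rows above, and the candidate list for C1 ^ C2 flags position 4 with the letter S, exactly the S / space / 0x75 hypothesis derived by hand. | {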
"source": [
"https://security.stackexchange.com/questions/239935",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244608/"
]
} |
240,236 | I am asking this because WhatsApp says it is end-to-end encrypted. Are there any problems with sending a public key through WhatsApp? There might be some objections to sending symmetric and private keys. Under what circumstances can I send symmetric and private keys? | E2EE doesn't protect data at rest. Unlike Signal, WhatsApp doesn't encrypt its internal message database. A forensic analysis can decrypt deleted messages if the Data Encryption Keys which encrypt user and application data are compromised. That may sound impractical, but it is what spyware agencies are doing now. The research paper Data Security on Mobile Devices: Current State of the Art, Open Problems, and Proposed Solutions (pdf), which is also covered by WIRED in How Law Enforcement Gets Around Your Smartphone's Encryption, describes a design flaw in the data encryption of Android and iOS. One of its authors has briefly explained it for iOS , although the method of exploitation is the same for Android as well. Android and iOS keep data encryption keys in memory once a user unlocks the device for the first time since the last reboot. This is called the After First Unlock (AFU) state. The keys remain in memory even if the device is locked again. This is intentional, to maintain the user experience and to keep user-facing apps functional at the lock screen, which includes messaging apps, contacts, songs, notes, reminders, etc. Most of the time your device remains in the AFU state. If you reboot your device but haven't unlocked it yet, your device is in the Before First Unlock (BFU) state. In the BFU state, user and app data are still encrypted. To decrypt them, your device prompts you to unlock the screen using your screen lock password, which is then fed into key derivation to derive a Key Encryption Key that decrypts the data encryption keys. This is why biometric screen unlock doesn't work the first time after a reboot. Once the data encryption keys are extracted from memory, either physically (tampering directly with the SoC without disconnecting the battery) or by using zero-day exploits, spyware agencies can decrypt a subset of the data. Keys can be per-file, but these are derived from the data encryption keys, which means that even if a file has been deleted, its key can be re-derived and the deleted file itself can be recovered from NAND flash. WhatsApp's daily chat backup encrypts the message database with an AES-GCM-256 key which is known to the WhatsApp service (see How can WhatsApp restore local or Google Drive Backups? ). Although the chat backup is not possessed by the WhatsApp service, Google Drive does hold it if Google Drive backup is enabled, which most users do. There you have no control over how it is used by spyware agencies. Sending passwords through Signal is somewhat safer than WhatsApp, but not entirely. Signal encrypts the message database with a database encryption key which is itself encrypted with a key stored in the Trusted Execution Environment (TEE) (Android 7+). Its message database has a page size of 4096 bytes and the IV of each page is stored in the page footer. Modifying an existing page, such as by deleting a message, changes the IV and the entire page is re-encrypted using the database encryption key. If the IV of that page has been changed and possibly overwritten by a new IV, there's no way of recovering a deleted message. Uninstalling Signal altogether also clears the key in the TEE, which makes its database encryption key undecryptable, and its data with it. But the design flaw described above also affects Signal's existing messages.
As the database encryption key must be in memory to service messages at the lock screen, it can be extracted. That's how the FBI might be Hacking Into Private Signal Messages On A Locked iPhone . Also, apps with the accessibility permission can see the content on your screen, which is the easiest way to compromise messages if an app that you trust is actually malicious. Google and Apple are very strict about which apps on their app stores may include code that requests this permission from the user. As for private keys, I don't believe they should even be available to you for sharing. | {
"source": [
"https://security.stackexchange.com/questions/240236",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244949/"
]
} |
240,434 | Is there an effect of a certificate, or the request for one, having the wrong country code? How about the other metadata? Effects can be legal, technological, anything. I could almost imagine a web browser checking the certificate country and province against the server's geolocated IP, but I could see that having false positives, especially in the days of Cloudflare and similar services. | The reason certificates have the metadata they do is historical. Certificates are defined in the X.509 standard from the ITU-T . It is part of the implementation of the X.500 standard, the Directory services.
It's also related to another standard called LDAP. These technologies were designed at the beginning of the internet (1988) and have a strong backing in the telephone networks. The X.500 family of standards was created to facilitate directory services (think phone books). For these it makes sense to record where someone is located in order to tie some arbitrary data (like a phone number) to a physical location or name (like the address and name of a user). These features are mainly still present for humans to use. Computers use other means to validate certificates (like OCSP and the older CRL ; a validity period, as in not-valid-before and not-valid-after values; and trusted root certificates or CAs that vouch for the certificate used). Nowadays there might be a legal requirement to fill in such data accurately, but there is no technical reason to enter it aside from auditing and for use by humans. | {
"source": [
"https://security.stackexchange.com/questions/240434",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140175/"
]
} |
240,502 | This is on a somewhat layman's level. I'm not a security guru. Looking at this in very, very broad and general terms: Election fraud is definitely a serious issue that needs to guarded against pretty heavily, ideally by many third-party officials and monitors. If you have random people working at the polling stations, some may be tempted to throw away ballots or to fill out extras. You have a lot of random people, each being a potential, partisan security risk. Therefore to reduce the risk, as well as to speed up counting, a given jurisdiction makes everything completely digitized (no paper ballots), and they fully automate the vote counting. Just looking at this vaguely, some issues I see are: Genuine bugs in the software. Not all issues are malevolent. The organization that produced the software was indeed malevolent. Even if they weren't malevolent, completely outside hackers can still try to get in and interfere. Hackers from within the jurisdiction's staff and officials can also try to get in and interfere. Or even if they don't hack in the most literal terms, they can still find other ways to revise the final results after they've been produced, but before they've been presented to another party or the general public. The whole entire system works as an opaque, black box. This means trying to monitor the ballot collection, as well as the counting itself, has the same issues as trying to debug a software defect in a black box, third-party item. Logging information helps, but it is not the same as having a clear/white box. Even if the software were developed by the jurisdiction itself internally, it's still just a small subset of that jurisdiction (which can still be corrupt) that would be immediately familiar with the code and how to analyze it for potential issues. The corruption issues and black box issue are still somewhat at play. On the other hand, imagine another jurisdiction chooses to avoid computers entirely for the purposes of collecting ballots and counting the votes. This other jurisdiction still uses computers for things like verifying someone hasn't already voted or sending internal communications between staff, for obvious reasons. However the ballots are all paper ballots, they are all collected manually by hand, and the votes are counted - and aggregated together - by hand. That means there is no hacking, and it also means that we are now dealing with something at least somewhat closer to a clear/white box. If you have corrupt individuals collecting the ballots and counting them by hand, you can also have security and monitors, both from within the jurisdiction and from third parties, watching them. And if they miss something in real time, video cameras can sometimes provide footage of an incident. And if both of those fail, you still have A set of physical ballots, envelopes, and anything else. It may not be The set that is genuine (ballots missing or added corruptly - or by innocent mistake ), but having A set, heavily derived from the genuine set of votes cast, is often better than having none at all. Altogether it is potentially much easier to monitor. Now that said, the first jurisdiction may still very well be much more secure in its election process than the second, but this would depend on things like the resources invested in security, and more importantly, how well they each manage things. However is the first jurisdiction inherently running an extra risk by relying on computers to collect the votes and/or to tally the votes ? 
Is the first jurisdiction, compared with the second, doing the equivalent of using HTTP instead of HTTPS, writing data access code that blatantly omits SQL parameters, or leaving the car defrosting and unlocked for 10 minutes while they're getting ready in the morning? UPDATE: A lot of good answers here. I think there were at least a couple that more or less tied for 1st place, so I would've liked to accept at least a couple of different ones. | Great answers already about supply-chain attacks, complexity, transparency. I'll give an answer in a different direction: accountability and auditability (basically; how easy is it to do a from-the-ground-up recount?). With a paper-based system, in the case of disputes, as long as boxes aren't physically lost or destroyed you can always go back to the paper source-of-truth and do a recount. For example, if the voting machines physically screwed up, you can go to the supreme court to get a ruling on whether "hanging or dimpled chads" count , and then go back to the paper and do a recount. With a computerized system, if something goes wrong and the votes are recorded incorrectly in the database (either by accident or malevolently), there is a much greater risk that that data is just lost and it's impossible to reconstruct voter's original intent compared to a paper system. TL;DR given the amount of value we place on free and fair elections, and the amount of effort we assume attackers might be going to to try and subvert them, our tolerance for risk here is very low. Paper has fewer things to go wrong, and is easier to go back to the source-of-truth and do a recount. | {
"source": [
"https://security.stackexchange.com/questions/240502",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45803/"
]
} |
240,570 | Suppose that someone browses non-YouTube videos (like on Vimeo) in Google Chrome's Incognito mode. Does Google collect and store any data about this activity ("watched videos on Vimeo" activity)? In general, does Google store ANYTHING that is done in Incognito mode? | I think you need to distinguish between "Google" and "Chrome". Chrome is a browser, and the main feature of Incognito mode is to delete any locally stored information from the browser session after the Incognito session is closed. The point here is locally stored, because this is all the browser can fully control. Google is instead a company which, among other things, collects information about users' behavior by having Google Analytics or DoubleClick included in many websites. This is similar to how other companies like Facebook or the various ad and tracking networks are included in websites. And this kind of data collection is also independent of the browser you use, although some browsers have special features, or extensions can be added, to reduce the amount of tracking and profiling. This remote data collection does not stop when the browser is in Incognito mode. In fact, these ad and tracking networks are usually not even aware that Incognito mode is being used. What is different, though, is that tracking information from the "normal" mode and Incognito mode cannot be easily associated with each other, so the profiling done in Incognito mode is mostly independent from the profiling done in normal mode or from profiling done in other Incognito sessions. | {
"source": [
"https://security.stackexchange.com/questions/240570",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245190/"
]
} |
240,579 | I have no cyber security knowledge whatsoever, and am trying to safely store passwords in a database. I understand I need to use a salt, to avoid rainbow table attacks and to make sure two users with the same password will have different password hashes. However, does the complexity of the salt matter? I was planning on simply using the user's id (an integer that's incremented each time a new account is created), but is it good enough, or should I generate a more complex salt? | The important part The fact that you are generating salts on your own is a red flag. The best way to do this, especially if you have little experience with security, is to use an established library for password hashing. A well-designed library will generate and use salts automatically for you, and it will store the salt and the hash in the same string, which you put in one column in your database. So, use a slow algorithm designed for password hashing, and use an established library, and you won't have to think about how to generate the salt. The answer Still, I should answer your question. Does it matter if the salt has high entropy? There are two properties that we may want the salt to have here, that randomness helps with: Unique, in your database, between password changes and preferably globally, so that an attacker can only crack one password at a time. Unknown to the attacker (before a breach), so that an attacker targeting a specific account cannot start any preparatory work before the database is leaked. Using a counter as salt is a decent solution, but not perfect. The salt is at least locally unique, but it's not globally unique or even unique over multiple installations of the same software. It's not unknown to the attacker, but that really isn't such a big issue. Once the hash is leaked, the salt will be leaked too. But still, using a library that gives you a random salt will be better. Don't mess around with homebrew solutions for something as important as this! A minimal sketch of the library-based approach follows below.
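As an illustration of the library-based approach, here is a minimal sketch assuming the widely used Python bcrypt package; any well-maintained password-hashing library (bcrypt, scrypt, Argon2) works the same way, and the password value is just a placeholder.
import bcrypt

# registration: gensalt() creates a random salt, and hashpw() embeds it in the
# returned value, so the single stored string contains both salt and hash
password = b'correct horse battery staple'
stored = bcrypt.hashpw(password, bcrypt.gensalt())

# login: checkpw() reads the salt back out of the stored string and compares
assert bcrypt.checkpw(password, stored)
assert not bcrypt.checkpw(b'wrong guess', stored)
Note that you never generate, store or even see the salt yourself; the library handles all of it, which is exactly the point made above. | {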
"source": [
"https://security.stackexchange.com/questions/240579",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245369/"
]
} |
240,677 | The BBC reports that the image Boris Johnson posted on Twitter to congratulate Joe Biden contains traces of the text "Trump" in the background. The BBC article links to a Guido Fawkes article , and when I download the tweet's JPEG, convert to PNG with macOS Preview then subtract a constant background, there it is! When I do a similar check on the blanked out area in my image in this post I see nothing, i.e. it worked. My goal there was to show an image of a battery but to ensure that no personal information like the battery's serial number would be visible or detectable. Sharing that on the internet might be a small but nonzero security issue. I breathe a sigh of relief but then wonder for future reference, in order to be sure that blanked out areas are fully blanked out: Question: What aspects of image preparation workflows can lead to accidents like Boris Johnson's No. 10 tweet's 'hidden message'? What are the most prominent things to avoid doing in order to avoid accidental hidden residues like this?
import numpy as np
import matplotlib.pyplot as plt
# https://twitter.com/BorisJohnson/status/1325133262075940864/photo/1
# https://order-order.com/2020/11/10/number-10s-message-to-biden-originally-congratulated-trump/
# https://pbs.twimg.com/media/EmPRWjyVoAEBIBI?format=jpg
# https://pbs.twimg.com/media/EmPRWjyVoAEBIBI?format=jpg&name=4096x4096
# https://twitter.com/BorisJohnson/status/1325133262075940864
img = plt.imread('biden.png')
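# estimate the background colour from an 80x80-pixel patch near the top-left corner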
average = img[20:100, 20:100].mean(axis=(0, 1))
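# subtract the background, keep only tiny deviations (clipped to +/-0.005), shift into [0, 0.01]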
imgx = (img[..., :3] - average[:3]).clip(-0.005, 0.005) + 0.005
imgx = imgx.sum(axis=2) # monochrome
imgx /= imgx.max() # normalize
plt.imshow(imgx, cmap='cool')
plt.show() | Summary: The most likely explanation is that the old text was removed by using a fuzzy or smooth eraser tool. Analysis: In the image below I have only increased brightness and contrast to make the "hidden message" more visible. Nothing fancy. The slight red tint is only due to the fact that the black background of the original has a very slight red tint to it. As you can see there is a very clear gradient in the most visible hidden text fragment (under "shared priorities"). The other fragments also show some signs of gradients, but there are no gradient effects used in the text about Biden. Hypothesis: These seemingly random gradients, together with the fact that the "hidden message" appears to consist of small random fragments of a much larger text, make me think that whoever made this picture removed the old text by using a fuzzy eraser tool. They manually swiped the eraser tool back and forth over the text until they didn't see the old text anymore. But the fuzzy eraser tool doesn't remove everything if you pass over quickly just once. This is by design to avoid sharp edges in an image. In the picture below I have swiped a big fuzzy eraser back and forth a few times over the original image to show what the results may look like. Obviously, in my picture some parts are still a little too visible, but I still think it gives a good idea of what type of effects this could cause. Solution: Don't use a fuzzy eraser tool to remove things you want to remove completely. In this case there's no need to use an eraser tool at all. Just fill the whole image with the background color, or maybe even better, just create a new image from scratch. The only thing they wanted to keep was the size and the background color and that should only take a few seconds to replicate in a new image. (A small numpy sketch after the update below illustrates why a hard overwrite leaves nothing behind while a partial erase does.) Update: As requested by @Tristan in a comment, I have tried to replicate the process completely. Here is a picture where I have removed the Biden/Kamala text with a fuzzy eraser tool and then placed new text on top of it: And here is the same picture but with increased brightness and contrast to highlight remnants of the old text:
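To make the "Solution" concrete, here is a small illustrative numpy sketch (not the tool used for the screenshots described above) comparing a partial, feathered erase with simply overwriting the region with the background colour; the colours and coordinates are made up for the demonstration.
import numpy as np

background = np.array([0.02, 0.0, 0.0])      # near-black background with a slight red tint
img = np.tile(background, (100, 400, 1))     # blank canvas
img[40:60, 50:350] = [1.0, 1.0, 1.0]         # bright "text" band we want to remove

# partial erase: blend 90% of the way back towards the background, roughly what
# a quick pass with a soft eraser does; a faint residue of the text remains
erased = img.copy()
erased[40:60, 50:350] = 0.9 * background + 0.1 * erased[40:60, 50:350]

# hard overwrite: assign the background colour outright; nothing of the old
# pixels survives, so no contrast stretch can bring the text back
overwritten = img.copy()
overwritten[40:60, 50:350] = background

print('max residue after partial erase:', np.abs(erased - background).max())
print('max residue after overwrite:', np.abs(overwritten - background).max())
The partial erase leaves a nonzero residue that a brightness/contrast stretch (or the subtraction script in the question) will happily reveal, while the overwritten region is bit-for-bit identical to the background. | {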
"source": [
"https://security.stackexchange.com/questions/240677",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/115702/"
]
} |
240,678 | My company is using on premise DMZ proxy servers to limit internet access to employees working from home. When they use company laptop, they are automatically configured to go through the proxy server for all traffic. The proxy will then block access to malicious and other unwanted site. This solution works but is very bandwidth intensive as all traffic is multiplied by 2x. The proxy server is also under heavy ddos attacks ( we're trying to figure this out too ) Is there another solution that can help us protect our company laptops and data by blocking dangerous traffic without using proxy servers? How does your company monitor work from home laptop traffic? Thank you folks | Summary: The most likely explanation is that the old text was removed by using a fuzzy or smooth eraser tool . Analysis: In the image below I have only increased brightness and contrast to make the "hidden message" more visible. Nothing fancy. The slight red tint is only due to the fact that the black background of the original has a very slight red tint to it. As you can see there is a very clear gradient in the most visible hidden text fragment (under "shared priorities"). The other fragments also show some signs of gradients, but there are no gradient effects used in the text about Biden. Hypothesis: These seemingly random gradients together with the fact that the "hidden message" appears to consist of small random fragments of a much larger text makes me think that whoever made this picture removed the old text by using a fuzzy eraser tool. They manually swiped the eraser tool back and forth over the text until they didn't see the old text anymore. But the fuzzy eraser tool doesn't remove everything if you pass over quickly just once. This is by design to avoid sharp edges in an image. In the picture below I have swiped a big fuzzy eraser back and forth a few times over the original image to show what the results may look like. Obviously, in my picture some parts are still a little too visible, but I still think it gives a good idea of what type of effects this could cause. Solution: Don't use a fuzzy eraser tool to remove things you want to remove completely. In this case there's no need to use an eraser tool at all. Just fill the whole image with the background color, or maybe even better, just create a new image from scratch. The only thing they wanted to keep was the size and the background color and that should only take a few seconds to replicate in a new image. Update: As requested by @Tristan in a comment, I have tried to replicate the process completely. Here is a picture where I have removed the Biden/Kamala text with a fuzzy eraser tool and then placed a new text on top of it: And here is the same picture but with increased brightness and contrast to highlight remnants of the old text: | {
"source": [
"https://security.stackexchange.com/questions/240678",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245457/"
]
} |
240,688 | I understand that many open-source projects request vulnerabilities not to be disclosed on their public bug tracker but rather by privately contacting the project's security team, to prevent disclosing the bug before a fix is available. That makes perfect sense. However, since the code repository of many open-source projects is public, won't fixing the bug in the source code immediately disclose it? What measures (if any) are taken by open-source projects (e.g. the Linux kernel) to ensure that fixes for security vulnerabilities can be deployed to the end user (e.g. a Samsung Android phone) before the vulnerability is disclosed? | They don't. By releasing code, they automatically "disclose" the issue to those who can reverse engineer the patch. But they can delay explaining or providing the details for easy consumption. If they delay releasing the code, they force users to use known-vulnerable code. If they release the code and do not announce it as a security fix, then users might not patch and end up running known-vulnerable code. So, they fix the code, release it, announce a security fix so that people assign the appropriate urgency, but they can delay explaining all the details to make it a little harder for attackers to figure out how to exploit the vulnerability. Is that effective? To some degree, "security by obscurity" has a place in a strategy in order to buy some time. Since it costs nothing, and it can have some positive effect, it seems like an easy call to make. | {
"source": [
"https://security.stackexchange.com/questions/240688",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12244/"
]
} |
240,848 | I was reading this question on Stack Exchange Workplace community and it indicates that an IT team was able to prevent a user from turning their laptop on (power on). My laptop access has been shut off (IT somehow remotely shut it down,
it won't power on), company cell doesn't work, can't access e-mail via
webmail. I know that an IT system administrator can prevent a user from logging in. But, is this possible? If it is, what are technologies that can be used to do something like this? Linked question: https://workplace.stackexchange.com/q/166838/86347 | Out-of-band management Intel Management Engine and amd DASH are separate microprocessors that remotely manage enterprise PCs. They run with Ring -3 privilege on the machine and run outside of host OS. It can lock stolen devices, remotely erase data, track location, wake on LAN and wake on wireless LAN , control host OS and detect third party live USB boots. It is capable of accessing any memory region without the main x86 CPU knowing about the existence of these accesses. It also runs a TCP/IP server on your network interface and packets entering and leaving your machine on certain ports bypass any firewall running on your system. [1] As it requires a power source, in enterprise Desktops, keeping the switch on is enough for motherboard to draw power as shutting down the host OS does not shut down the AC power supply to the power supply unit of the motherboard. There is no way to disable it from UEFI. Removing the microprocessor or modifying its firmware which is stored in UEFI will prevent system to boot. Disabling secure boot or using custom UEFI keys will not disable its firmware verification. This is how Intel verifies it, amd's implementation could be different: The ME firmware is verified by a secret boot ROM embedded in the chipset that first checks that the SHA256 checksum of the public key matches the one from the factory, and then verifies the RSA signature of the firmware payload by recalculating it and comparing to the stored signature. This means that there is no obvious way to bypass the signature checking, since the checking is done by code stored in a ROM buried in silicon, even though we have the public key and signature. [1] Once stolen devices are locked, they don't respond to power button signal. In old motherboards with BIOS, they used to respond but immediately shut themselves down. Consumer PCs also have Intel Management Engine microprocessor and Intel Management Engine Interface driver pre-installed in Windows but Intel Active Management Technology software is not installed by OEMs in consumer PCs. Can it be reversed, or will this brick the device? If the device is locked by the remote administrator, it can unlock it using wake on LAN and a specific unlock instruction to the chip. This is how my organisation used to handle enterprise laptops with sensitive data. The chip is bounded with its firmware in UEFI, hardcoded with chipmaker's public key and is probably hardwired to the motherboard in order to brick the device if chip is removed. Intel is secretive about its implementation. That didn't stop researchers to partially disable it from its firmware: Disable Intel’s Backdoor On Modern Hardware (2020) Researchers discovered an undocumented configuration setting that can used to disable the Intel ME master controller that has been likened to a backdoor. (2017) Out-of-band management [1] Intel x86s hide another CPU that can take over your machine (you can't audit it) | {
"source": [
"https://security.stackexchange.com/questions/240848",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/176981/"
]
} |
240,991 | ssh-keygen has the following options for a key type ( -t ): dsa | ecdsa | ecdsa-sk | ed25519 | ed25519-sk | rsa I am not familiar with the -sk notation and it's not explained in the man page. What does it mean? | In OpenSSH FIDO devices are supported by new public
key types "ecdsa-sk" and "ed25519-sk", along with corresponding
certificate types. To quote: FIDO/U2F Support This release adds support for FIDO/U2F hardware authenticators to
OpenSSH. U2F/FIDO are open standards for inexpensive two-factor
authentication hardware that are widely used for website
authentication. In OpenSSH FIDO devices are supported by new public
key types "ecdsa-sk" and "ed25519-sk", along with corresponding
certificate types. Source: https://www.openssh.com/txt/release-8.2 | {
"source": [
"https://security.stackexchange.com/questions/240991",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8421/"
]
} |
241,010 | A potential client is planning to do penetration testing on our SaaS . Is it standard or fair for us to request things like the following? An NDA from the pen tester Details on who is performing the test (e.g., verifying they are accredited) Restrictions (like no social engineering, DoS attack, etc.) Targeting a staging server instead of production A copy of the full report | Non-Disclosure Agreements A NDA is a fairly standard thing in most penetration tests. No serious penetration tester will protest against an NDA. The company conducting the penetration test may later approach you and ask you for your permissions to anonymously talk about the findings at your company for educational purposes, which you can always deny if you feel it would harm your business. This could look like "In a penetration test for ACME Corp., we were tasked with testing a SaaS solution. During the test, we found that ..." Details on who is performing the test It makes sense to know who the actual testers are, just in case you need to contact them directly for one reason or another. However, you should keep in mind that not all penetration testers have certifications yet, especially those who just started out. So it could very well be that a company may decide to also assign a newly hired penetration tester to the project so they get more experience, in addition to an already experienced team. Restricted Scope This is also very usual to see in penetration test projects. It's up to you to define the scope of the assessment, and as such also what kinds of tests a penetration tester is allowed to perform. Testing Environment Again, the scope is up to you. If you say you would rather offer a staging server than production, that is very reasonable. In fact, I always prefer testing on staging than on production environments, because I don't want to be the reason thousands of customers suddenly can't access their software anymore, simply because some exploit code I ran crashed a machine by accident. A copy of the report That's another reasonable request to make. In fact, you're getting a penetration test done on your software and you don't even have to pay for it. That's a win for you! | {
"source": [
"https://security.stackexchange.com/questions/241010",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245953/"
]
} |
241,022 | I'd like to ask for small hint with following problem: Using the steganographic method of the least significant bits, hide
the text string "Kra" in four pixels of color with RGB code . Hide the text in the sequence of bits of the image
one character at a time, ie first hide the character "K", then the
character "r" and at the end of the sequence of image bits "a" will be
hidden. The text is encoded according to the Latin-2 character set, so
"Kra" = . Write the resulting pixel values in the
format . I assume that (128)10=(10000000)2, individual characters converted from decimal to binary system as following: (4B)16 = (01001011)2 (72)16 = (01110010)2 (61)16 = (01100001)2 At this point, I can start substituting bits in (128)10 values of R, G and B, starting at the least significant bits. However, I will get something like this: R=(11001011)2=(203)10 G=(11110010)2=(242)10 B=(11100001)2=(225)10 and this is quite far from the original (128)10 values and it does not even meet the condition of unrecognizable color difference by the human eye. In addition, the last 4th pixel remains unused. What's wrong with that method? Thank you for your explaination. | Non-Disclosure Agreements A NDA is a fairly standard thing in most penetration tests. No serious penetration tester will protest against an NDA. The company conducting the penetration test may later approach you and ask you for your permissions to anonymously talk about the findings at your company for educational purposes, which you can always deny if you feel it would harm your business. This could look like "In a penetration test for ACME Corp., we were tasked with testing a SaaS solution. During the test, we found that ..." Details on who is performing the test It makes sense to know who the actual testers are, just in case you need to contact them directly for one reason or another. However, you should keep in mind that not all penetration testers have certifications yet, especially those who just started out. So it could very well be that a company may decide to also assign a newly hired penetration tester to the project so they get more experience, in addition to an already experienced team. Restricted Scope This is also very usual to see in penetration test projects. It's up to you to define the scope of the assessment, and as such also what kinds of tests a penetration tester is allowed to perform. Testing Environment Again, the scope is up to you. If you say you would rather offer a staging server than production, that is very reasonable. In fact, I always prefer testing on staging than on production environments, because I don't want to be the reason thousands of customers suddenly can't access their software anymore, simply because some exploit code I ran crashed a machine by accident. A copy of the report That's another reasonable request to make. In fact, you're getting a penetration test done on your software and you don't even have to pay for it. That's a win for you! | {
"source": [
"https://security.stackexchange.com/questions/241022",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245939/"
]
} |
241,023 | We all know why password reuse is bad: eventually some site at which you have an account that did not properly hash+salt user passwords will get hacked, and your password will be published in a big dump. Then some hackers will take that user/pass combination and try it on every site they think that can get something useful from. I know that password managers are the recommended solution to having a unique totally random password for every site. But they are not completely without their own difficulties, and especially trying to persuade non-technical people to use them may be difficult. Instead, as a minimal alternative to shared passwords, one might have a simple algorithm to generate unique passwords from a shared random component. A minimal example might be <sitename>_<good random password> . So my passwords might be stackoverflow_rm6Z0$f237db^DGYU3r
google_rm6Z0$f237db^DGYU3r etc, where the second part is shared. Now, any idiot actually trying to hack me specifically could probably guess my algorithm even knowing only one password, and trivially if they got ahold of two, so if I were for some reason a high-profile target this would be a bad plan. But if anyone wanted to hack me, I'm probably in trouble no matter what I do. Assuming I'm not a high profile target, it seems to me a simple algorithm like this would protect me from the majority of password-reuse dangers, because no human will ever see my password specifically. So really I'm asking, is this reasoning flawed? Is this kind of algorithmically-generated password actually any safer than exact password reuse? Or are password dumps used differently than I have in mind? The accepted answer to this question suggests that varied passwords are only useful if it is hashed, but to me it seems that a hacker having the cleartext password doesn't help them. I agree this is fundamentally security-by-obscurity, but maybe security-by-anonymity would be a better title. My password would be one of a million in a big dump, with essentially zero chance that any human would ever actually see mine. the question (edited to be more explicit): Assume that: An average person (not a high profile target for hackers) uses an algorithm to generate unique site passwords. The algorithm is extremely simple, so that a human could guess the algorithm given even a single password One or more of those passwords have been obtained by hackers Is this person any less likely to be hacked on other sites than a person who uses the same password on every site? If not , is it because There is a reasonable chance that a human will actually look at this password? Attackers already look for some kinds of algorithmically-generated passwords? Some other reason? Note: Many have pointed out that using a password manager is a better idea. In particular ThoriumBR and others point out that this scheme is unsustainable, because once I need to change one of the passwords, I now have to change my algorithm. These are very good points, but not what I am hoping to learn from this question. | The main issue with a "password generation algorithm" is that the passwords are fixed. You cannot change a single leaked password without changing the algorithm, thus changing every password. To avoid that, you had to record somewhere the sites using the first version, the ones using the second one (because the password generated by the first leaked), the sites using another one because the algorithm generated a password unacceptable by some site, and so on. And some sites require you to change your password from time to time. So you would have to take that into account, and record more and more information just to keep in pace with the state of the passwords. And for that, you would need a secure data storage, with encryption, a backup process, something easy to use and easy to be integrated on your online routine. Losing any of those records would lock you out, and create an Availability Compromise . Leaking it in plaintext would create a Confidentiality Compromise . Corrupting (or forgetting) any of it would create an Integrity Compromise . You need some software specially created for secure storage. Something like... a password manager. | {
"source": [
"https://security.stackexchange.com/questions/241023",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/243152/"
]
} |
241,065 | In the WordPress directory I found some suspicious-looking files with random strings in their names, e.g. uxs5sxp59f_index.php . Can I safely check their content? I have a suspicion that the site has been infected, because some of its links on external portals point to the site with the malware. | In order for the opening of the file to pose a risk, the file would need to include an exploit for the specific text editor you use. Then when you open the file, the exploit would trigger. While possible, that's not very likely. It certainly isn't common. The far more likely threat is that there is malicious PHP code in the file that triggers when the file is executed by a PHP server. But all that aside, I'm not sure why you are questioning whether you have been infected when you are looking at files being served that you did not create and there are links to malware. You've been infected. Start with that assumption... | {
"source": [
"https://security.stackexchange.com/questions/241065",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245995/"
]
} |
241,139 | Everyone knows the common cybersecurity tip to be careful when you open links in an email. But every day we look for something on the Internet, clicking links which the search engine shows us, and we do not have the same fear. Why are links in email considered more dangerous than links from web search results? Maybe it is related to the fact that links in an email may carry a more personal attack, malicious to you or your company? | The results of a search engine are based on previously collected data, i.e. the engine does not start scanning the whole internet when doing a search, but looks through an index of seen and stored sites. The results are also ordered, i.e. the sites which fit the query best and which also have the highest reputation for good answers in general are at the top. Thus, as long as fairly common search terms are used, the top hits come from sites with a high reputation. There are attempts to pollute search engines by returning different results to the search engine's web bot than to the normal user. This is not new, so search engines try to detect such pollution in part by simulating normal users. They also include historic reputation information, i.e. sites which behaved shadily in the past are considered shady for some time in the future too. New sites also have less reputation than established sites, etc. Together this makes search engine results fairly well (but not perfectly) curated data. Links in mails are the opposite of this: no up-front checks and curation are done on these links, and it is all up to the end user (or some security software in the path) to decide if the link is safe or not. That's why these links are far more dangerous. | {
"source": [
"https://security.stackexchange.com/questions/241139",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/214750/"
]
} |
241,201 | I understand that end-to-end VPNs (such as SurfShark and NordVPN) hide the domains I visit whereas HTTPS does not However, are they any attacks that an HTTPS website would be subject to, that could be avoided if I used an end-to-end VPN? The main kind of attack I am concerned about is having any secure information (such as passwords, bank details, mobile number etc...) I send to a website being intercepted by a "middle-man" The question in essence is are there any security benefits from using a paid-for VPN such as Surfshark given I currently enforce HTTPS on my browser (meaning I block all HTTP websites) and all my banking websites use HTTPS. By security, I mean can any personal data be obtained Privacy is not as important an issue (e.g: can people see domains I visit) | What does TLS do? From wikipedia/HTTPS : The principal motivations for HTTPS are authentication of the accessed website, and protection of the privacy and integrity of the exchanged data while in transit. It protects against man-in-the-middle attacks, and the bidirectional encryption of communications between a client and server protects the communications against eavesdropping and tampering. So the primary purpose of HTTPS is to protect your personal data. What does a VPN do? From wikipedia/VPN : A virtual private network (VPN) extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. ... VPN technology was developed to provide access to corporate applications and resources to remote or mobile users, and to branch offices. ... Internet users may secure their connections with a VPN to circumvent geo-blocking and censorship or to connect to proxy servers to protect personal identity and location to stay anonymous on the Internet. So the primary purpose of a VPN is to connect to your company's network when you're out of the building. There is a secondary usage of VPNs to protect your anonymity (specifically your IP address) when accessing public websites. Your questions: The main kind of attack I am concerned about is having any secure information (such as passwords, bank details, mobile number etc...) I send to a website being intercepted by a "middle-man". Privacy is not as important an issue (e.g: can people see domains I visit). You want the thing that HTTPS is good at. You are not interested in the thing that VPNs are good at. Sounds like there's no reason for you to use a VPN :) | {
"source": [
"https://security.stackexchange.com/questions/241201",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245899/"
]
} |
241,202 | The expensive one: https://www.dustinhome.se/product/5010873750/ironkey-basic-s1000 The cheap one: https://www.dustinhome.se/product/5010887912/datatraveler-100-g3 Over 14,000 SEK difference in price. Same company (Kingston). Same USB standard (3). Same storage capacity (128 GB). Same store. Yet such a massive price difference. All because one is "encrypted"? I don't want to sound either condescending nor ignorant, but why would even a very rich person pay such a premium for the "encryption"? Is there any benefit to have it in hardware (presumably some kind of integrated micro-computer?) over just formatting the cheap one with VeraCrypt? Is the expensive one far more durable as well? Won't this thing actually age and become useless in regards to the encryption, whereas with VeraCrypt, you could re-encrypt it since it's all software? I realize that trusting VeraCrypt in itself is also quite scary, even for me, but I somehow feel more confident about that software than I do about some company "promising" that it's "super duper encrypted with FIPS 140-2 Level 3, 256-bit AES-XTS"... Whatever that means. I doubt many people know. I want to make clear that I recognize that there may be something I'm fundamentally missing, and that this could be extremely useful for people with a lot of money and no trust in VeraCrypt, or who need the convenience that this (presumably) provides. I'd love to hear a justification since apparently, that "built-in super encryption" costs so much money versus the identical product minus the encryption. With that price tag, you'd almost expect it to be covered with real gold and gems... | super duper encrypted with FIPS 140-2 Level 3, 256-bit AES-XTS Yet such a massive price difference. All because one is "encrypted"? Your question is a bit like comparing a Toyota and a Ferrari and asking "Why the massive price difference. All because one is "fast"? What is FIPS 140-2 Level 3? FIPS 140-2 Level 3 is more than just encryption. It requires the device to be tested by a cryptography testing lab that is certified to perform this testing on behalf of the US government. The device must: (Level 1) Have its crypto implementations inspected by the testing lab for correctness and backdoors. (Level 2) "tamper-evident coatings or seals that must be broken to attain physical access to the plaintext cryptographic keys" ( wikipedia ) . Typically these are fancy versions of "warranty void if broken" stickers that are very hard to get off and put back on without damaging the sticker or the product in a very noticeable way. I've seen some where the sticker is like one of those glow sticks where bending it mixes chemicals and it turns a bright colour. (Level 3) The device must be able to detect physical (or software?) tampering and wipe its own data. For a USB stick that probably means that any attempt to cut open the case will result in the device triggering either a software wipe or physical damage that makes it non-functional. For the sake of completeness: Level 4, the highest level, adds the requirement that devices be resistant to physical attacks that subject the device to temperatures and voltages outside its normal operating ranges. This can lead to attacks like glitching where you manipulate the system clock signal to, for example, double-execute or skip instructions. Level 3 is difficult to obtain. 
For rack-mounted servers, I've seen things like the entire motherboard and hard drive submerged in 5 kg of heat-conductive epoxy resin so that it's nearly impossible to remove the RAM sticks or hard drive without destroying them. I've also seen tripwires on the hinges of the server case such that opening the case causes destruction of the chip holding the crypto keys. I'm not even sure how you would do this on a USB stick. I'm impressed that they got a USB stick past Level 3 testing. Guess: maybe there are tiny wires in the casing and an "intrusion detection" chip with its own battery that is never powered off so that it can monitor for breakage of the wires and trigger a wipe? Target consumer of this USB stick You are not the target consumer of this USB stick. You really have no reason to buy it. Note that like all FIPS standards, FIPS 140-2 is not intended for consumer goods; it is intended solely for internal use by the US Federal Government and companies that it contracts to. This USB stick is intended for people who are doing contract work for the US government and are required by their contracts to keep all data on FIPS 140-2 Level 3 devices, likely because the data they are handling has been classified at a certain security level by the US government or military. Very specialized device for a very small market, hence the price tag. | {
"source": [
"https://security.stackexchange.com/questions/241202",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246179/"
]
} |
241,303 | We know that Intel processors have one ( ME ) so it is definitely possible. In general, how could you even trust a piece of hardware like a CPU or network card, even if the manufacturer says there is no backdoor?
For software, it is easy since you can access the source code verify it and compile it. But for hardware? | The short answer is, you can't. The longer answer: there are a few things that can be done to increase your trust in hardware, though they also just shift the root of trust elsewhere. A first interesting question you pose is the software/hardware distinction. To not go into the discussion about the possibly blurred boundary between the two here, I'll understand "hardware" to be non-reconfigurable logic implemented in some physical device (i.e. I'll exclude firmware such as the intel ME or microcode). Backdoors can be inserted into a physical device in a number of stages: from conceptual architecture, through logic design, up to fabrication. To ensure no backdoors are inserted, you would need to validate the whole process from the end product to the beginning. The good news is that the initial stages are very similiar to software - in fact, logic is usually designed using hardware description languages (HDL). These designs can be audited in the same way software can be audited. The step from here to fabrication involves multiple conversions e.g. to lithography masks using synthesis software, in a similar way to how software is compiled using a compiler. These too can be audited just like a compiler. (As a tangent - the bootstrap problem is a really interesting problem where you consider the possibility of the compiler compiling your compiler being untrustworthy) So this leaves the last step: fabrication. Validation at this stage is usually done both by inspecting the fabrication process, and by randomly sampling devices from the same production batch (produced using the same lithography masks). For instance, the masks used can be compared to a validated trustworthy copy to ensure that no backdoors are inserted at this stage. Similarly, randomly sampled devices can be delayered and inspected under an electron microscope. However, as a consumer, these steps are usually not available to you. For most chip producers, this whole process involves a lot of closely kept trade secrets, and aren't publicly available. This is why there is a movement towards creating open-source hardware toolchains and HDL implementations of common logic modules and systems - though there are a number of problems here too . Finally, as @knallfrosch correctly points out in the comments, backdoors may also be inserted after production, either at a distributor, while the product is being shipped to a customer, or in-place (c.f. evil maid attack ). An example of such practices by the NSA has come to light through the Snowden affair. Tampering at this stage may range from hardware implants added to the device to editing the circuit on a silicon die e.g. using a Focused Ion Beam ( FIB ).
Mitigations at this stage usually rely on these kinds of tampering leaving externally visible traces, which may be additionally enforced using e.g. tamper-evident packaging (something every-day users can do is the glitter and nailpolish technique ). Furthermore, minute device-specific imperfections that are a side product of fabrication may be used to create so-called Physically Unclonable Functions ( PUFs ) which may be designed in a way that tampering will most certainly alter or destroy the PUF and therefore be detectable. | {
"source": [
"https://security.stackexchange.com/questions/241303",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246343/"
]
} |
241,328 | Let's say I'm doing a pentest on BlueCorp and find a bug in the software UnrealSec made and distributed by SecCorp which is used by BlueCorp and found during said pentest. Should I report this bug to both BlueCorp and SecCorp or only one? | The short answer is, you can't. The longer answer: there are a few things that can be done to increase your trust in hardware, though they also just shift the root of trust elsewhere. A first interesting question you pose is the software/hardware distinction. To not go into the discussion about the possibly blurred boundary between the two here, I'll understand "hardware" to be non-reconfigurable logic implemented in some physical device (i.e. I'll exclude firmware such as the intel ME or microcode). Backdoors can be inserted into a physical device in a number of stages: from conceptual architecture, through logic design, up to fabrication. To ensure no backdoors are inserted, you would need to validate the whole process from the end product to the beginning. The good news is that the initial stages are very similiar to software - in fact, logic is usually designed using hardware description languages (HDL). These designs can be audited in the same way software can be audited. The step from here to fabrication involves multiple conversions e.g. to lithography masks using synthesis software, in a similar way to how software is compiled using a compiler. These too can be audited just like a compiler. (As a tangent - the bootstrap problem is a really interesting problem where you consider the possibility of the compiler compiling your compiler being untrustworthy) So this leaves the last step: fabrication. Validation at this stage is usually done both by inspecting the fabrication process, and by randomly sampling devices from the same production batch (produced using the same lithography masks). For instance, the masks used can be compared to a validated trustworthy copy to ensure that no backdoors are inserted at this stage. Similarly, randomly sampled devices can be delayered and inspected under an electron microscope. However, as a consumer, these steps are usually not available to you. For most chip producers, this whole process involves a lot of closely kept trade secrets, and aren't publicly available. This is why there is a movement towards creating open-source hardware toolchains and HDL implementations of common logic modules and systems - though there are a number of problems here too . Finally, as @knallfrosch correctly points out in the comments, backdoors may also be inserted after production, either at a distributor, while the product is being shipped to a customer, or in-place (c.f. evil maid attack ). An example of such practices by the NSA has come to light through the Snowden affair. Tampering at this stage may range from hardware implants added to the device to editing the circuit on a silicon die e.g. using a Focused Ion Beam ( FIB ).
Mitigations at this stage usually rely on these kinds of tampering leaving externally visible traces, which may be additionally enforced using e.g. tamper-evident packaging (something every-day users can do is the glitter and nailpolish technique ). Furthermore, minute device-specific imperfections that are a side product of fabrication may be used to create so-called Physically Unclonable Functions ( PUFs ) which may be designed in a way that tampering will most certainly alter or destroy the PUF and therefore be detectable. | {
"source": [
"https://security.stackexchange.com/questions/241328",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/206331/"
]
} |
241,348 | I have a web app where I allow users to create a one-page portfolio using drag and drop, also I allow them to add custom HTML freely (basically any html or js code) I'm aware that I shouldn't allow the custom HTML to be executed while they are in the EDIT mode, so I save it in the database but never render it into the editor page. However, I need that custom HTML/JS to be rendered in their built portfolio page, to prevent security issues here is what I did: In addition to the private IP related to my app, I added another private IP I purchased another domain, let say for example my app domain is portfolio-app.com , I purchased another domain portfolio-show.com and pointed that domain to my app (using the second private IP), means: I have 2 domains pointing the same app but each domain has its own private IP. In my app I have a route in place that detects if the request is coming from portfolio-show.com host, if so, then it lookups the path and show the specific user portfolio, The routing logic basically goes like this: if request.host == 'portfolio-show.com'
get ':any-id', to: 'portfolioController#showAction'
end The showAction is rendered only on portfolio-show.com but never under portfolio-app.com , so if you are a user 1 your portfolio will be visited under portfolio-show.com/user-1-portfolio My concern is: is showing the custom HTML/JS under a different domain enough to protect my app? am I safe if some user put malicious code on their portfolio page? | The short answer is, you can't. The longer answer: there are a few things that can be done to increase your trust in hardware, though they also just shift the root of trust elsewhere. A first interesting question you pose is the software/hardware distinction. To not go into the discussion about the possibly blurred boundary between the two here, I'll understand "hardware" to be non-reconfigurable logic implemented in some physical device (i.e. I'll exclude firmware such as the intel ME or microcode). Backdoors can be inserted into a physical device in a number of stages: from conceptual architecture, through logic design, up to fabrication. To ensure no backdoors are inserted, you would need to validate the whole process from the end product to the beginning. The good news is that the initial stages are very similiar to software - in fact, logic is usually designed using hardware description languages (HDL). These designs can be audited in the same way software can be audited. The step from here to fabrication involves multiple conversions e.g. to lithography masks using synthesis software, in a similar way to how software is compiled using a compiler. These too can be audited just like a compiler. (As a tangent - the bootstrap problem is a really interesting problem where you consider the possibility of the compiler compiling your compiler being untrustworthy) So this leaves the last step: fabrication. Validation at this stage is usually done both by inspecting the fabrication process, and by randomly sampling devices from the same production batch (produced using the same lithography masks). For instance, the masks used can be compared to a validated trustworthy copy to ensure that no backdoors are inserted at this stage. Similarly, randomly sampled devices can be delayered and inspected under an electron microscope. However, as a consumer, these steps are usually not available to you. For most chip producers, this whole process involves a lot of closely kept trade secrets, and aren't publicly available. This is why there is a movement towards creating open-source hardware toolchains and HDL implementations of common logic modules and systems - though there are a number of problems here too . Finally, as @knallfrosch correctly points out in the comments, backdoors may also be inserted after production, either at a distributor, while the product is being shipped to a customer, or in-place (c.f. evil maid attack ). An example of such practices by the NSA has come to light through the Snowden affair. Tampering at this stage may range from hardware implants added to the device to editing the circuit on a silicon die e.g. using a Focused Ion Beam ( FIB ).
Mitigations at this stage usually rely on these kinds of tampering leaving externally visible traces, which may be additionally enforced using e.g. tamper-evident packaging (something every-day users can do is the glitter and nailpolish technique ). Furthermore, minute device-specific imperfections that are a side product of fabrication may be used to create so-called Physically Unclonable Functions ( PUFs ) which may be designed in a way that tampering will most certainly alter or destroy the PUF and therefore be detectable. | {
"source": [
"https://security.stackexchange.com/questions/241348",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69630/"
]
} |
241,367 | I was reading this question ( How to check if an image file is clean? ) and forest's answer suggested converting the image to a "trivial pixelmap format, such as PPM". Such a format was considered "simple enough that it would be infeasible to exploit". The question is: what makes a PPM file safer? Is it because there's no room to hide a payload, without becoming too noticeable at first sight? Or is it because the format is so simple to parse, that we can suppose our software will be less likely to be vulnerable, when opening such files? And most importantly, is it even true that we can consider PPM files (and maybe also BMP) to be safer than, say, more complex formats like JPEG or PNG? | The short answer is, you can't. The longer answer: there are a few things that can be done to increase your trust in hardware, though they also just shift the root of trust elsewhere. A first interesting question you pose is the software/hardware distinction. To not go into the discussion about the possibly blurred boundary between the two here, I'll understand "hardware" to be non-reconfigurable logic implemented in some physical device (i.e. I'll exclude firmware such as the intel ME or microcode). Backdoors can be inserted into a physical device in a number of stages: from conceptual architecture, through logic design, up to fabrication. To ensure no backdoors are inserted, you would need to validate the whole process from the end product to the beginning. The good news is that the initial stages are very similiar to software - in fact, logic is usually designed using hardware description languages (HDL). These designs can be audited in the same way software can be audited. The step from here to fabrication involves multiple conversions e.g. to lithography masks using synthesis software, in a similar way to how software is compiled using a compiler. These too can be audited just like a compiler. (As a tangent - the bootstrap problem is a really interesting problem where you consider the possibility of the compiler compiling your compiler being untrustworthy) So this leaves the last step: fabrication. Validation at this stage is usually done both by inspecting the fabrication process, and by randomly sampling devices from the same production batch (produced using the same lithography masks). For instance, the masks used can be compared to a validated trustworthy copy to ensure that no backdoors are inserted at this stage. Similarly, randomly sampled devices can be delayered and inspected under an electron microscope. However, as a consumer, these steps are usually not available to you. For most chip producers, this whole process involves a lot of closely kept trade secrets, and aren't publicly available. This is why there is a movement towards creating open-source hardware toolchains and HDL implementations of common logic modules and systems - though there are a number of problems here too . Finally, as @knallfrosch correctly points out in the comments, backdoors may also be inserted after production, either at a distributor, while the product is being shipped to a customer, or in-place (c.f. evil maid attack ). An example of such practices by the NSA has come to light through the Snowden affair. Tampering at this stage may range from hardware implants added to the device to editing the circuit on a silicon die e.g. using a Focused Ion Beam ( FIB ).
Mitigations at this stage usually rely on these kinds of tampering leaving externally visible traces, which may be additionally enforced using e.g. tamper-evident packaging (something every-day users can do is the glitter and nailpolish technique ). Furthermore, minute device-specific imperfections that are a side product of fabrication may be used to create so-called Physically Unclonable Functions ( PUFs ) which may be designed in a way that tampering will most certainly alter or destroy the PUF and therefore be detectable. | {
"source": [
"https://security.stackexchange.com/questions/241367",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175681/"
]
} |
241,390 | My traffic goes trough 6 routers in sequence. +----------+ +----------+
| | 192.168.3.2 | |
| Internet | +---------->+ Router 4 |
| | | | |
+-+--------+ | +-+--------+
^ | ^ 192.168.4.1
| | |
v 203.0.113.74 | v 192.168.4.2
+-+--------+ | +-+--------+
| | | | |
| Router 1 | | | Router 5 |
| | | | |
+-+--------+ | +-+--------+
^ 192.168.1.1 | ^ 192.168.5.1
| | |
v 192.168.1.2 | v 192.168.5.2
+-+--------+ | +-+--------+
| | | | |
| Router 2 | | | Router 6 |
| | | | |
+-+--------+ | +-+--------+
^ 192.168.2.1 | ^ 192.168.6.1
| | |
v 192.168.2.2 | v 192.168.6.2
+-+--------+ | +-+--------+
| | | | |
| Router 3 +<----------+ | Computer |
| | 192.168.3.1 | |
+----------+ +----------+ Each router is of a different make with different firmware (German, American, Chinese, Swedish..) If a fault is found on a router, or backdoors installed, or for any reason, one RouterLvl1 is compromised, the attacker should hack all the other routers from lvl2 to lvl6 to get to my PC. I did this because I had a lot of old unused little and cheap routers.
Does it make sense? In your opinion, does the security of each router add to the overall security? | It may look good on paper, but it's a terrible idea. You assume that the only way to get into your PC is from the first hop, the router closest to you, and that to hack one device, the previous one must have been hacked too. That assumption is incorrect. Extending your network a little further, there are several enterprise-grade routers, managed by teams of experienced system administrators, and if your assumption were correct, to hack your PC those routers would have to have been hacked too, right? But we know that is not how it works. Now, back to the real world. An attacker does not need to hack every single device between their computer and yours; one (or even none) is enough. If the attacker wants to mount a MitM attack against you, they now have six routers to attack and need to compromise only one. As they are chained, a MitM on one means most traffic is under the attacker's control. SSL will protect you here, but non-SSL protocols (SMTP, IMAP, DNS, FTP) are all vulnerable. So instead of having only one target, the attacker has six. And you will have six times more work keeping those routers updated and secure. One 0-day in one router is less probable than one 0-day in any of six routers. And if it seems unlikely that an attacker would target one of the internal routers in the chain, remember that the user can be led to a page making requests to internal IP subnets (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, and so on) while they watch cat videos. If any router responds, it can be attacked from the user's PC. If the attacker can execute code on the user's PC, they can run traceroute to get the address of every router and use the user's PC as a proxy to attack each one directly, from inside. NAT does not protect them at all. As the saying goes, a chain is only as strong as its weakest link, and you are adding lots of unnecessary links to your chain. | {
"source": [
"https://security.stackexchange.com/questions/241390",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246418/"
]
} |
241,421 | I need to save very sensitive data from an Excelfile that the user uploads. The data will then be saved to mySQL. All is done in Node.js. Now I wonder what is the most secure way to upload the file. Should I use Multer ( https://expressjs.com/en/resources/middleware/multer.html ) which has a memoryStorage option (a buffer of the entire file), or should I save the file to a folder and then delete it as soon as all data is added to mySQL? From a security viewpoint, what is the best option? | It may look good on paper, but it's a terrible idea. You assume that the only way to get in your PC is from the first hop, the router closer to you, and that to hack one device, the previous one should have been hacked too. That assumption is incorrect. Extending your network a little further, there are several enterprise-grade routers, managed by a team of experienced system administrators, and if your assumption were correct, to hack your PC those routers should have been hacked too, right? But we know that is not how it works. Now, back to the real world. One attacker does not need to hack every single device between their computer and yours, only one (or even none) is enough. If the attacker wants to employ a MitM attack against you, now he have 6 routers to attack and need to achieve only one. As they are chained, MitM in one means most traffic is under his control. SSL will protect you here, but non-SSL protocols (SMTP, IMAP, DNS, FTP) are all vulnerable. So instead of having only one target, the attacker can have six. And you will have six times more work keeping those routers updated and secure. One 0-day in one router is less probable than one 0-day in any of six routers. And if seems unlikely to an attacker to target one of the internal routers in the chain, remember that the user can be lead to a page making requests to internal IP subnets (10.0.0.0/8, 192.168.0.0/24, 192.168.0.1/24, and so on) while he watches cat videos. If any router responds, it can be attacked from the user PC. If the attacker can execute code on the user PC, it can run traceroute and get the address of every router, and use the user PC as a proxy to attack each one directly, from inside. NAT does not protect them at all. We use to say that any chain is as strong as its weakest link, and you are adding lots of non-necessary links on your chain. | {
"source": [
"https://security.stackexchange.com/questions/241421",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246455/"
]
} |
241,427 | I'm interested in becoming an ethical hacker someday. I've been reading articles saying the Python language is very popular in hacking activity because of the extent of its modules (including network). Nowadays, lots of applications are web applications or mobile ones and the antivirus software make a great job removing malware written in C. Because of that I'm a bit confused. Is the knowledge of the C language important for the ethical hacker career? | Of course, you don't necessarily have to know C, or the given platform's Assembly (read: instruction set), but knowing them is a great help in figuring out many possible low-level vulnerabilities. It is not the C language itself that matters, but rather the fact that in order to know C, one must first understand many fundamental computer principles, which is what allows you to then (ab)use them in any other language. You could learn about all of them in theory, but without ever practically experiencing them (which is what you achieve by programming in C), you may not be able to use them very efficiently or even realize where they're best applicable. Similarly, you don't have to know the exact packet structure of networking protocols. However, if you do, you may suddenly be able to figure out ways to break something, which wouldn't ever occur to those who make, often incorrect, assumptions about how these protocols function solely based on their high-level experience. | {
"source": [
"https://security.stackexchange.com/questions/241427",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246422/"
]
} |
241,434 | We received a message from the IT bods this week stating: Summary of the issue: IT will disabling and blocking the use of the browser Firefox next Thursday the 03.12.20 on all IT managed devices. Due to certain vulnerabilities and security risks associated with the use of this browser it will be blocked from use as of next Thursday. Has a new exploit been found? I've checked https://www.mozilla.org/en-US/security/advisories/mfsa2020-50/ but not seen anything that's currently open. Does anyone know of a reason for this ban? | Assuming that you work in the bank industry, this is likely due to their inability to intercept Firefox's traffic. Due to Firefox's support of DoH and eSNI most banks and regulated industries are resorting to block Firefox because firewalls can't snoop encrypted traffic easily. On the other hand, if you use Chrome, IE or Edge, you can push changes through Active Directory without users' knowledge/consent.
In fact, most hardware firewall vendors with DPI (deep packet inspection) have started recommending that enterprise customers get rid of Firefox, because their edge firewalls can no longer intercept Firefox's traffic. Note: one can enforce policies through Firefox's enterprise policy support, but most privacy-conscious users will use Firefox Portable to flout them, hence blocking is easier. https://live.paloaltonetworks.com/t5/blogs/protecting-organizations-in-a-world-of-doh-and-dot/bc-p/319542 https://www.venafi.com/blog/fight-over-dns-over-https https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk98025 | {
"source": [
"https://security.stackexchange.com/questions/241434",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246463/"
]
} |
241,493 | Our development team is implementing TLS protocol for a web server. The type of clients are mobile apps and web browsers. Now there is a concern about bypassing TLS in any way trough MITM attacks and disclosure of the server's private key. Is there any solution independent of TLS for data-in-transit protection so that developers use it at the application layer in parallel with TLS? updated section: according to owasp recommendation : If possible, apply a separate layer of encryption to any sensitive data before it is given to the SSL channel. In the event that future vulnerabilities are discovered in the SSL implementation, the encrypted data will provide a secondary defense against confidentiality violation | Rejecting TLS out of fear an attacker could have stolen a private key is like rejecting medicine out of fear someone could have poisoned it. The downsides far outweigh the risk of not using it. Why TLS is secure There are many many angles to answering this, but one that speaks for itself is that everyone is using it. Government agencies are using it, huge tech companies are using it, banks are using it, hospitals are using it, StackExchange is using it too. If TLS would be insecure, don't you think at least someone would decide that it's a bad idea to use it and switch to someone else? The fact that TLS is nearly ubiquitous and recommended everywhere by everyone should tell you that it's a good idea to use it. Furthermore, TLS is secure, if configured correctly. Version 1.3 makes this a no-brainer, as of the time of this writing, there is no wrong way to configure TLS 1.3. TLS 1.2 is a bit more difficult, since there are some insecure ciphers in TLS 1.2. SOGIS - which also use TLS - have an extensive guide on which ciphers are recommended, and which are still tolerable for legacy usage. Finally, compromise of the private key leading to insecure communication isn't a flaw with TLS that some other protocol could alleviate - it's a flaw inherent to network communication. If you assume that an attacker has full control over your server, then no matter what protocol you're using, the attack would be able to decrypt any traffic, manipulate any traffic and forge new traffic. In other words, TLS is not designed to protect against server compromise, and neither TLS nor an alternative to TLS - self-made or otherwise - would protect against that. Alternatives to TLS Since the question asks for alternatives to TLS, there is one: DTLS DTLS is basically TLS, but for UDP. The reason you might want to use DTLS is because your application uses UDP instead of TCP (e.g. a VoIP program), but you still want your datagrams to be encrypted. DTLS is not more or less secure than TLS, but instead is designed to work with a different underlying layer. | {
"source": [
"https://security.stackexchange.com/questions/241493",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246524/"
]
} |
241,802 | I understand the purpose of Microsoft Word is not to store secret information. However, I would rather spread my secret information between a Password Manager and a Word document, each of which has separate secure passwords. Is a password-protected Word document from Office 365 (2020) sufficiently secure to store financial information? Wikipedia seems to suggest it is, but I'm still someone who doesn't herald Wikipedia's information as gold-standard. If Word is not secure enough, are there other alternatives that are non-password manager based that would be secure enough? | By default, Microsoft Office 2016* uses AES-256-CBC with 100,000 rounds of SHA-1 for password verification, using a 16-byte salt. AES-256 is currently considered the industry standard by many for symmetric encryption. SHA-1 isn't considered a very secure algorithm for password storage, since it's a fast algorithm and can be accelerated massively using GPUs. However, since 100,000 iterations are used, this weakness is mitigated to some extent (although it still isn't anywhere near as good as a dedicated password-hashing function like bcrypt/argon2), and if you use a strong password, it shouldn't matter either way. So the cryptography used by Office 2016 is strong enough to be currently uncrackable, provided a sufficiently strong password is used. Does having strong encryption make Office a good choice for storing financial information? Probably not. Word creates lots of temporary files when it opens a document, which probably aren't encrypted. These files will usually be recoverable for some time even after they have been deleted and could easily leak the contents of your file unencrypted. *Office 2013 uses AES-128, which is also perfectly secure. | {
"source": [
"https://security.stackexchange.com/questions/241802",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245899/"
]
} |
241,991 | Assuming quantum computing continues to improve and continues to perform like this : ... quantum computer completes 2.5-billion-year task in minutes, is it reasonable to expect that 256 bit encryption will be possible to brute force at some point in the future, and if so, what estimates (preferably from large tech companies, security companies, academics, or government organisations) are available as to when that may start to occur? Notes Obviously estimates would be extremely difficult to make, and probably depend heavily on a few key assumptions (and even then it could happen much earlier or much later than predicted), but despite the difficulty of such predictions, some rules of thumb (e.g. Moore's Law ) have performed reasonably well given the difficulty of such predictions. Since the US Government considers 256 bit encryption safe enough to not be broken (by anyone outside the US) for a time scale long enough to protect US security interests (last paragraph), we can assume it's a fairly long way away. Background: In 1997, the Electronic Frontier Foundation brute forced 56 bit encryption in 56 hours | Most Probably Never Of the currently known quantum algorithms, Grover's algorithm is the one which directly affects symmetric ciphers the most. Essentially, for a cipher that a classical computer can bruteforce in time N, a quantum computer can bruteforce it in time square root of N. This means that a 256 bit cipher (which would take at most O(2^256) complexity to bruteforce on a classical computer) could be bruteforced in O(2^128) on a quantum computer. Bruteforcing even a 128 bit key is limited by the laws of physics. As this excellent answer explains, the amount of energy required to bruteforce a 128 bit key is ridiculously large (all the world's resources for 10 years straight, just for cracking one key). So, absent any other significant breakthroughs, bruteforcing a 256 bit key is out of the question. In fact, Grover's algorithm has been shown to be optimal, so any further breakthroughs are extremely unlikely. | {
"source": [
"https://security.stackexchange.com/questions/241991",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/211696/"
]
} |
242,029 | Let's assume I have a Windows 10 computer and my login password has an entropy of infinity. If I did not encrypt my entire hard-drive, does it matter how secure my password is? Is it possible for someone to plug the hard-drive into another computer as an external drive and simply read all its contents? Thanks | Is a password-protected stolen laptop safe? No. The immutable laws of security say: Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore. It doesn't matter if your laptop is password-protected or not. As long as the disk is not encrypted by a state-of-the-art encryption algorithm, anyone can access your data. If I did not encrypt my entire hard-drive, does it matter how secure my password is? No. Your data are safe if, and only if, the data are well encrypted. Password protection of an OS usually does not encrypt the disk (except on iOS, as far as i know). Consider using BitLocker (on Windows), FileVault (on macOS), or LUKS (on Linux). Is it possible for someone to plug the hard-drive into another computer as an external drive and simply read all its contents? Yes, someone will do exactly this. | {
"source": [
"https://security.stackexchange.com/questions/242029",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/245899/"
]
} |
242,079 | As far as I know, when passwords from a website leak, they leak in an encrypted form. Are all those passwords equally easy to decrypt? My hunch is that a 10-character-long password maybe gets decrypted in a few minutes, but if a password is 100-characters-long, it takes days or years to decrypt. Does it make sense? Is setting a very long or difficult password a real protection from leaks? I'm asking even about passwords like passpasspasspass11111passpasspasspasspass compared to pass11111 . | As others have mentioned, " most sites " hash the passwords, they do not encrypt them. But a site can choose to do whatever they want. Some sites have encrypted passwords and some have stored the password as plaintext. Obviously, if they are stored as plaintext, then there is no difference in how long your password is. But let's assume that the site did what they are supposed to do and hashed the password with a salt. The length of the password does make a difference to the ability to brute force a guess of what the hash might be, but that assumes that the password is random. A hash of a 100-characters-long password that is well-known will be cracked very quickly. And that's a factor that a lot of people forget to account for. Length makes it more difficult to guess if you are having to guess each character at a time. But if you are using a dictionary of well-known passwords , then length is no longer a protection if your password is in that dictionary. So, your question is a good one. Not all passwords can be cracked as easily as others, and length can be a factor in how difficult it is to crack, but the overall factor to consider is how guessable the password is . Length can make it harder to guess, but not by itself. | {
"source": [
"https://security.stackexchange.com/questions/242079",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244990/"
]
} |
242,100 | I've always been afraid of downloading software because of viruses, and that has affected my life as a programmer a lot. So, I want some tips to be able to install and download software without fear of viruses. | As others have mentioned, " most sites " hash the passwords, they do not encrypt them. But a site can choose to do whatever they want. Some sites have encrypted passwords and some have stored the password as plaintext. Obviously, if they are stored as plaintext, then there is no difference in how long your password is. But let's assume that the site did what they are supposed to do and hashed the password with a salt. The length of the password does make a difference to the ability to brute force a guess of what the hash might be, but that assumes that the password is random. A hash of a 100-characters-long password that is well-known will be cracked very quickly. And that's a factor that a lot of people forget to account for. Length makes it more difficult to guess if you are having to guess each character at a time. But if you are using a dictionary of well-known passwords , then length is no longer a protection if your password is in that dictionary. So, your question is a good one. Not all passwords can be cracked as easily as others, and length can be a factor in how difficult it is to crack, but the overall factor to consider is how guessable the password is . Length can make it harder to guess, but not by itself. | {
"source": [
"https://security.stackexchange.com/questions/242100",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/244239/"
]
} |
242,787 | In many services, email can be used to reset the password, or do something that is sensitive. Sensitive data is also quite often sent to you by email, e.g. long links that enable access to your account or similar. However for most people, their email service provider can read all their emails, can see what is being sent, and can send email themselves as "you". So doesn't that give your email service provider basically full access to your accounts? This seems like the incorrect medium to send such information via. I don't really know if this matters, however you never really see these email services sending you "encrypted" email with your pgp key. Also, it is well known that email is inherently insecure, or not designed with privacy or security in mind. However it keeps being used for these purposes. | This seems like a very wrong medium to send such information via. Email is used for the same reasons Social Security Numbers get re-used as account identifiers in the US: Ubiquity . Not everyone has a Facebook account. Not everyone has a Twitter account. But almost certainly, anyone with Internet access has an email account. It is a reasonable expectation that customers can provide an email contact for businesses to use. And I don't really know if this matters, however you never really see
these email services sending you "encrypted" email with your pgp key. Because pitifully few people have a PGP key, and even fewer are set up with an email client that integrates encrypted email. I once wished to purchase software, and the vendor would only sell to people who communicated with them via PGP email. I tried sending the PGP-encrypted blob as an attachment, I tried inlining it, and I tried add-on software that integrated PGP email into my mail client - none of them passed muster with the vendor. I never purchased the software. PGP email is neither ubiquitous nor, it seems, trivially interoperable. Also, quite often it is mentioned that email is inherently insecure,
or not designed with privacy or security in mind. However it keeps being used for that. And it will keep being used for that until something better comes along and something better is available to everyone to use, trivially. | {
"source": [
"https://security.stackexchange.com/questions/242787",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/248276/"
]
} |
242,803 | I have an old Beaglebone Black board which is similar to Raspberry Pi. Unfortunately, I forgot the Linux login password and can access the Linux using the Cloud9 feature. With this I can log into the Linux as a common user but cannot use any sudo commands. In this situation, I can just wipe the microSD and reinstall the OS. But it might be a good chance to learn a penetration method. I can write C program and execute as a common user, then is it possible to execute a buffer overflow attack to reset my password? | This seems like a very wrong medium to send such information via. Email is used for the same reasons Social Security Numbers get re-used as account identifiers in the US: Ubiquity . Not everyone has a Facebook account. Not everyone has a Twitter account. But almost certainly, anyone with Internet access has an email account. It is a reasonable expectation that customers can provide an email contact for businesses to use. And I don't really know if this matters, however you never really see
these email services sending you "encrypted" email with your pgp key. Because pitifully few people have a PGP key, and even fewer are set up with an email client that integrates encrypted email. I once wished to purchase software, and the vendor would only sell to people who communicated with them via PGP email. I tried sending the PGP-encrypted blob as an attachment, I tried inlining it, and I tried add-on software that integrated PGP email into my mail client - none of them passed muster with the vendor. I never purchased the software. PGP email is neither ubiquitous nor, it seems, trivially interoperable. Also, quite often it is mentioned that email is inherently insecure,
or not designed with privacy or security in mind. However it keeps being used for that. And it will keep being used for that until something better comes along and something better is available to everyone to use, trivially. | {
"source": [
"https://security.stackexchange.com/questions/242803",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/248291/"
]
} |
242,876 | Four months ago I lost all my data after an iTunes update automatically restored my phone to factory defaults. I lost all the pictures of my newborn. I understand that when you do a factory reset the decryption key is discarded so the data is unretrievable. There's nothing I can do now. But is there any chance I'll be able to recover the pictures in the future? I have kept the phone in my drawer. I have created a group with so many moms like me; we all lost photos of our little one due to unexpected factory resets. It's not an isolated issue. **This question is about iPhone data decryption in the future but not how should I find my lost data with my laptop, backup file, etc. I have tried all methods. | Modern encryption is strong enough that there is no way to retrieve the lost data without the key. Although it's possible that it could be doable in the future in theory, consider that even the cipher 3DES, a trivial modification to a cipher designed nearly half a century ago in the 1970s, cannot be broken in the manner you want, and that was cryptography in its infancy. Modern iPhones use AES which has held up to 20 years of analysis and is showing no signs of meaningfully weakening. Ciphers are never secure one day and fatally broken the next. There is virtually never a massive breakthrough that renders a cipher useless, as attacks are improved incrementally. If AES ever gets broken badly enough that you would be able to recover the encrypted data without the key, there would have been decades of slow improvements to the attack and we would all have known for years that it's too weak to even consider using. If that were the case now, I'd tell you to wait a few decades and maybe, just maybe, a key recovery attack would be released, but that's not the case. The data is gone. Plan for keeping backups in the future to avoid a repeat of this issue. | {
"source": [
"https://security.stackexchange.com/questions/242876",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/248409/"
]
} |
243,093 | I do not quite understand why it is common practice to require a difficult to remember password with alphanumeric and special character requirements, while also having an upper limit of 32 characters. Wouldn't it be easier for everyone to remember the password if the lower character requirement was 32 and there was no requirement for special characters, numbers, and capitalization? The fact that dictionary attacks exist means that the different character requirements are mostly useless anyways; many people just prepend or append a number or special character to a short password, or use substitutions such as "leet speak". | Cargo Cult It is no secret that a lot of people who are tasked to design security for systems know very little about information security and are ill equipped for the task. As a source for that claim, I will take years of personal experience as pentester, and the absolute bewilderment for the things I have seen people develop over the years. Not because the developers are bad developers - on the contrary - but because information security has different requirements than normal software development. It's like asking a painter to paint a blueprint of a house. As such, developers who lack the understanding of what is required of a system that handles authentication and authorization often fall back to the most primitive form of learning: Imitation . We see other people do things, and without understanding why these things are done, we imitate them. "After all, it's done that way for a reason." This process has a name: Cargo Cult Programming . Originally, cargo cults were cults of tribespeople, who saw modern soldiers receive air cargo with supplies. They thought that this air cargo was a gift from a divine entity, and that the activities of the soldiers were a ritual to summon said entity to bring gifts. As a result, after the soldiers had left, the tribespeople began to immitate their activities to the best of their understanding, making makeshift uniforms and marching up and down, in hopes that this would summon some divine favour. Cargo Cult Programming is a similar process, in which some developer does something in a particular way, due to some circumstance. The other programmers then blindly implement their solution in the same way, without understanding why the solution was implemented in said way, and what problem it aimed to solve. About Password Lengths For example, "DES crypt" would only allow for a maximum of 8 bytes for password length due to export restrictions - which was primarily a legal problem. So any system that used DES crypt would require a maximum character length of 8. Someone who saw this implemented may not know the reason why 8 was chosen for this, and may blindly copy this implementation, possibly not even using DES crypt in the background. So they are not limited by a maximum password length of 8, yet still arbitrarily impose it, simply because they have seen someone else do the same thing. Misinformation As for why companies require short, complex passwords rather than long, easy-to-remember phrases? That one is unfortunately on us. For a very long time, security experts tried to get people to make "better" passwords. And in a sense, it's true: <3BZg2Ck is a better password than ILoveNYC . However, security experts are not infallible and we too learn painful lessons, such as AviD's Rule of Usability : Security at the expense of usability comes at the expense of security. 
Short, highly complex passwords are really difficult to remember for people, and people started to use "systems to make good passwords", like tGbHuJm! (I'll leave it up to the reader to figure out why this is a bad password). Experts changed their advice, instead advocating for long passphrases, like It's monday again and the coffee still tastes like wet socks , but "common wisdom" is difficult to change. When people hear that passphrases should be complex, they see the half-truth how a complex password is better than a simple one, and urge their users to make passwords as complex as possible. Recommendation In a system that requires passwords, users should be urged to use a password manager to create a safe, new password. A sufficiently long, randomly generated password will always beat any long, easy-to-remember passphrase, and it's more easily usable too if the browser automatically fills it in for the user. If that is not a possibility, users should be urged to make long passphrases . Phrases are generally easier to remember for people, and length is king when it comes to strength. However, users can stick to phrases that are publicly known and not secure at all, such as In the beginning God created the heaven and the earth. - any automated check will tell you that this is an amazing password, but my wordlist on my cracking machine will beg to differ. As such, you can try to generate a memorable sentence for the user, or just believe that they know what they are doing. Finally, after a user has entered a passphrase, it is recommended to check that passphrase with a database like Have I Been Pwned? to ensure that password isn't already known to be insecure. | {
"source": [
"https://security.stackexchange.com/questions/243093",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/248749/"
]
} |
243,111 | Suppose a student takes an exam at home. Since home-exams are prone to cheating, the student wants to be able to prove that he/she did not cheat. So the student puts cameras in the room, which videotape the room during the entire exam. Now, if the student is blamed for cheating (e.g. because his/her exam is similar to another exam), then he can show the video and prove that he did, indeed, write the exam by his own, did not leave the room during the exam, did not use unallowed materials, etc. The only problem is that video can be edited. Theoretically, the student can exit the room, talk with a friend about the exam, then get back into the room, and after the fact, use a video editing software to erase the relevant part. Is there a way to take a video and simultaneously sign it digitally, so that it will be possible to verify later that the video was not edited? (Is there maybe a software that does this?) | Trusted Timestamping I think, if you continue down this line of thinking, you will end up with something very similar to Trusted Timestamping Servers . The core idea of trusted timestamping is that you submit a file to the server and it signs an attestation saying that it saw the file with hash aabbcc112233 at time X . This is typically used for both proving the initial publication time (and who published it), as well as proving that the file has not been modified since. You need the trusted 3rd party because if the end-user creating the video is the same person signing it, then there's nothing stopping them from re-signing it after they edit it. Why not just save the video stream on the server? That said, I don't think you really need any fancy crypto here; the simplest solution is probably best. Have the student live-stream their cameras to an exam-monitoring website. The website logs the video stream in its database as it comes in, and it can detect and alert if the live stream had any breaks or disruptions long enough for manual editing to potentially have taken place. Create a blockchain of the video stream Update addressing comments. Ah, you have the extra privacy requirement that students do not want their video stored on 3rd party servers (that should have been in the question!). In that case, what makes this problem hard is that you can't wait until the end of the exam and publish a single hash for the entire video because that gives the student too much time to edit a middle section of the video. The solution that comes to mind would be some kind of hash-block-chaining (not "The Blockchain (Bitcoin)" but "a blockchain"). Either the sender or receiver breaks the video stream into, for example, 10 s "blocks", hash each block as they are produced/recieved, and stream the hash for each block along with the video in real-time. You do "block-chaining" by including in each block the hash of the previous block. In math notation: h_0 = hash(videoblock_0)
h_1 = hash(videoblock_1 || h_0)
...
h_n = hash(videoblock_n || h_n-1) This preserves privacy because the server only needs to store the hashes and not the video itself. This is streaming-friendly because you are producing hashes throughout the stream and each hash covers the entire stream up to that point. This is efficient because the server only needs to store the most recent hash (h_n), and that is enough to later verify if a provided video was tampered with at any point in the stream (though to detect where it was tampered you would need to save every block hash). | {
"source": [
"https://security.stackexchange.com/questions/243111",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27908/"
]
} |
243,215 | Let's assume an ecommerce site works over HTTP, rather than HTTPS. What are the practical dangers here? How could an attacker exploit this? Whenever I read about dangers of unencrypted traffic, it is somehow magically assumed that the attacker has managed to somehow slip into a point between me and the endpoint (establish a MITM). I get that the owner of the Wi-Fi hotspot could be snatching my data. Or it could be some malicious worker at my ISP. However, assuming I'm sitting behind my own router and I trust my ISP, how could one set up a MITM attack that'd allow to exploit the lack of encryption? | You can trust your ISP, but your data will not pass through just your ISP's routers. On a simple level, the internet works by passing data from one router to another, repeatedly moving each packet closer to its destination, until it is hopefully delivered. This means that your connection passes through several routers before finally reaching the site you are connecting to. Now, as an example, right now, there are 10 routers sitting between me and stackexchange.com . Of these, apparently only the first two or three belong to my ISP. The rest either belong to some internet backbone provider, the server's ISP, or any other upstream ISPs that exists between my ISP and the server's ISP. So now, instead of having to trust one ISP, you have to trust at least two ISPs and the internet backbone providers. Now that's a lot of people to trust. If any one of these has a rogue employee with access to install malware on the routers, or any of these routers are misconfigured or using outdated firmware with known vulnerabilities, an attacker can perform a man in the middle attack and harvest your credit card details, passwords, PII etc. as well as inject ads and/or malware and perform any other malicious action they can think of. And that doesn't even take into account state sponsored attackers and mass surveillance. A state sponsored actor that is interested in getting access to plaintext HTTP traffic doesn't even require a rogue employee or exploitable router vulnerability. They can serve the ISP a subpoena or they can silently tap right into the fiber-optic cables. If the traffic they are targeting doesn't pass through their jurisdiction, they have the resources to perform attacks like BGP hijacking to redirect the traffic through their own jurisdiction.* *In at least one incident, a non-state sponsored attacker also managed to perform this by hacking an ISP | {
"source": [
"https://security.stackexchange.com/questions/243215",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/248916/"
]
} |
243,446 | I want certain web pages blocked (within my country) by my Govt on a website that uses HTTPS on all pages. My Govt agrees that the specific URLs need to be blocked but expresses helplessness, as their ISPs claim they can't selectively block HTTPS URLs but must block the entire website/domain/sub-domain, which will cause genuine users to be affected. The prominent website in question has refused to take down the impugned web pages despite notices from my government and law enforcement requests. | No, the ISP cannot block specific HTTPS-protected pages, because it cannot determine which page is being accessed. HTTPS does not hide the server that is being accessed, because it needs the SNI (Server Name Indication) to be in the clear so the server can know which certificate to use on the connection. Once the handshake is established, everything else is encrypted, including which page you are requesting. So the ISP has two non-optimal ways to respond: don't block anything, or block the entire server. Encrypted SNI is actually a thing (I think Cloudflare did add support back in 2018). Admittedly I have no idea how widespread it is. – s1lv3r Encrypted SNI is a thing for the future . It is part of the TLS 1.3 specification, but isn't something that can be enabled by changing a configuration option on the webserver. It needs changes to the DNS records, and support from the webserver. It still doesn't have widespread support among webservers and web browsers, and is still not mature enough for high-traffic servers to adopt it. ESNI would help hide the domain you are accessing from your ISP and government (depending heavily on your network configuration), but would not help the ISP selectively block pages on the domain. | {
"source": [
"https://security.stackexchange.com/questions/243446",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/249253/"
]
} |
243,554 | I work for a company in which the age of our average user is over 70. We have an app* that helps them collect and submit physiological data to their doctors. I found this question that I believe is helpful if you're helping your mother set a password: Secure Memorable Passwords for Older Users However, we're struggling to develop a policy for our 5000+ users, particularly given these additional wrinkles: The users' accounts are set up at the doctor's office by a
non-technical medical professional that probably thinks "Dog123" is a
good password. We can educate them about password complexity, but
getting them to similarly educate users on-site is a different
ballgame. Many of our users don't have an email address, making it infeasible to send a password reset email Password managers are also infeasible, because we can't expect our medical staff to be setting up LastPass for the users (especially with no email address) This is medical data, with all the regulation that comes with it. Any suggestions for a password policy that secures our sensitive data without frustrating and driving away our entire user base? *EDIT: Mobile app. There is a web app in the ecosystem in which medical staff reviews collected data, but it currently has no functionality for the patients. ALSO EDIT: A lot of debate here between "you can assume they have smart phones" and "no you can't." It's a bit moot in our case because we provide $20 Androids to patients without one. | Disclosure: I work for the referenced company, and I'm not sure how to get the suggestion in this post across without it seeming like a sales pitch. Here goes. It seems to me that "memorized passwords" and "our average user is over 70" are not going to play well together. Have you considered solutions other than passwords? You'd want something which is: A physical object; ie non-memorized Inexpensive for the doctor's office to hand out Easy to use even for the (potentially severely) technologically-challenged. Meets the security (and/or compliance) requirements for this product. Ex.: would you be allowed to use a physical object instead of a password? What if you coupled it with a weak password or knowledge-based set of questions. You could consider having the doctor's office generate random passwords and print them out, but we all know that passwords of the form x7a8Cqr4dPt20 are not user-friendly. The solution that comes to mind is Entrust Grid Cards (disclaimer: there may be other vendors who have similar features, but I'm not aware of any) The doctor's office could print off a grid card on their standard office printer; when using the website / app the user will be challenged to provide three cells from the grid, if they lose the paper then they go back to the doctor's office and get a new one printed. | {
"source": [
"https://security.stackexchange.com/questions/243554",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/249363/"
]
} |
243,562 | I noticed that certain software does not provide hash anymore nowadays. E.g. Zoom https://zoom.us/download wolf@linux:~$ ls -lh zoom_amd64.deb
-rw-rw-r-- 1 wolf wolf 44M Jan 1 00:00 zoom_amd64.deb
wolf@linux:~$ I've googled both md5 and sha256 hashes but couldn't find it. wolf@linux:~$ md5sum zoom_amd64.deb
5f452b11d86d41e108a32f6a7d86c6dc zoom_amd64.deb
wolf@linux:~$
wolf@linux:~$ sha256sum zoom_amd64.deb
b06bc30a53ac5d3feb624e536c86394ccb9ac5fc8da9bd239ef48724138e9fc1 zoom_amd64.deb
wolf@linux:~$ Vivaldi Browser https://vivaldi.com/download/ wolf@linux:~$ ls -lh vivaldi-stable_3.5.2115.81-1_amd64.deb
-rw-rw-r-- 1 wolf wolf 74M Jan 2 11:08 vivaldi-stable_3.5.2115.81-1_amd64.deb
wolf@linux:~$
wolf@linux:~$ md5sum vivaldi-stable_3.5.2115.81-1_amd64.deb
f6dce2ca099e9e910ca6dc1c361aa5b5 vivaldi-stable_3.5.2115.81-1_amd64.deb
wolf@linux:~$
wolf@linux:~$ sha256sum vivaldi-stable_3.5.2115.81-1_amd64.deb
38a18fe2624994cbc9782d7805ec058774e54e99e0ade6ad5e85da45055c9e5c vivaldi-stable_3.5.2115.81-1_amd64.deb
wolf@linux:~$ Microsoft Teams https://www.microsoft.com/en-my/microsoft-teams/download-app#desktopAppDownloadregion wolf@linux:~$ ls -lh teams_1.3.00.30857_amd64.deb
-rw-rw-r-- 1 wolf wolf 73M Jan 20 09:07 teams_1.3.00.30857_amd64.deb
wolf@linux:~$
wolf@linux:~$ md5sum teams_1.3.00.30857_amd64.deb
3d738e013804b96f401bd274db4069d1 teams_1.3.00.30857_amd64.deb
wolf@linux:~$
wolf@linux:~$ sha256sum teams_1.3.00.30857_amd64.deb
5058b1fe8bf9fffc57d94148a7ec55119c5cd9b21aa267cb13518bec0244241b teams_1.3.00.30857_amd64.deb
wolf@linux:~$ How do we verify software like this to make sure nobody has ever tampered with it? | Disclosure: I work for the referenced company, and I'm not sure how to get the suggestion in this post across without it seeming like a sales pitch. Here goes. It seems to me that "memorized passwords" and "our average user is over 70" are not going to play well together. Have you considered solutions other than passwords? You'd want something which is: A physical object; ie non-memorized Inexpensive for the doctor's office to hand out Easy to use even for the (potentially severely) technologically-challenged. Meets the security (and/or compliance) requirements for this product. Ex.: would you be allowed to use a physical object instead of a password? What if you coupled it with a weak password or knowledge-based set of questions. You could consider having the doctor's office generate random passwords and print them out, but we all know that passwords of the form x7a8Cqr4dPt20 are not user-friendly. The solution that comes to mind is Entrust Grid Cards (disclaimer: there may be other vendors who have similar features, but I'm not aware of any) The doctor's office could print off a grid card on their standard office printer; when using the website / app the user will be challenged to provide three cells from the grid, if they lose the paper then they go back to the doctor's office and get a new one printed. | {
"source": [
"https://security.stackexchange.com/questions/243562",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/235182/"
]
} |
243,875 | If an application's code contains even minor and subtle inaccuracies, it can open up the entire database to SQL injection. In this example (see section 'Delete All Method'), the entire Users table gets deleted with a trivial SQL injection ( "1) OR 1=1--" ). This had me wondering, have any white hats / bounty hunters (perhaps amateurs) looked for vulnerabilities and accidentally caused massive, real damage to the site/app/business? How do bounty programs deal with such a risk? | Can bounty hunting cause real damage? Sure. As you pointed out, some SQL injection vectors may inadvertently cause data deletion. Similarly, a persistent XSS attack may trigger in the browser of a real user. Or unusual characters in a username may crash a web application backend due to an unhandled encoding error. More generally, a large part of black box pentesting involves experimenting with unexpected/invalid input to the target application. Some level of fuzzing is usually unavoidable - and this always carries the risk of causing behavior that breaks the application or corrupts data. So, while blindly trying out DELETE queries may be reckless and avoidable, vendors have to face that even benign bug hunting occasionally impacts service integrity or availability. Did a bounty hunter ever cause actual damage? This report is an example of where the researcher caused a DOS by submitting invalid data. I'm entirely sure there are more severe examples, many of which simply weren't made public. Anecdotally, I remember several occasions where bug hunters were banned from programs because the tools they used were too disruptive. How do bug bounty programs manage this risk? A testing environment. While some bug bounties assume you're testing against production, many provide a separate sandbox and only allow you to test there. E.g., the program of Bitmex includes the rule: Only test on testnet.bitmex.com. A "responsible research" policy which asks that hunters make an effort to avoid damage. Rules would include not accessing real user data, limiting automated testing tools, etc. For example, Facebook's program demands: You make a good faith effort to avoid privacy violations and disruptions to others, including (but not limited to) unauthorized access to or destruction of data, and interruption or degradation of our services. You must not intentionally violate any applicable laws or regulations, including (but not limited to) laws and regulations prohibiting the unauthorized access to data. An emergency contact point. Some providers instruct hunters how to notify them immediately if their actions have caused service disruption. From the program of Exodus : If you do accidentally cause some noticeable interruption of service, please immediately email us so we can handle it accordingly [email protected] and please include the subject title "HackerOne Outage: " for the alert to trigger. Safe harbor clauses protect participants Nowadays, many program policies come with a safe harbor clause . This is intended to protect hunters from liability if they act in good faith, even if their actions have caused damage. Since IANAL, I can't comment on the effectiveness of such a policy, but it's an established practice. Here is an example of a safe harbor clause in the program of Dropbox : We will not pursue civil action or initiate a complaint to law enforcement for accidental, good faith violations of this policy. 
We consider activities conducted consistent with this policy to constitute “authorized” conduct under the Computer Fraud and Abuse Act. To the extent your activities are inconsistent with certain restrictions in our Acceptable Use Policy, we waive those restrictions for the limited purpose of permitting security research under this policy. We will not bring a DMCA claim against you for circumventing the technological measures we have used to protect the applications in scope. You'll find a similar passage in the rules of most large programs. | {
"source": [
"https://security.stackexchange.com/questions/243875",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/211696/"
]
} |
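To make the premise of question 243,875 above concrete, here is a minimal Python sketch of why a payload like "1) OR 1=1--" is destructive when it is concatenated into a query. The table name and the exact shape of the vulnerable query are assumptions for illustration, not taken from the linked article.
# Hypothetical vulnerable query construction (string concatenation of raw input).
user_input = "1) OR 1=1--"
query = "DELETE FROM Users WHERE (Id = " + user_input + ")"
print(query)
# -> DELETE FROM Users WHERE (Id = 1) OR 1=1--)
# "OR 1=1" matches every row and "--" comments out the trailing parenthesis,
# so every row in Users is deleted.

# A parameterized query treats the whole input as a single value instead:
# cursor.execute("DELETE FROM Users WHERE Id = ?", (user_input,))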
244,030 | I've always read: Put validations in the backend. Frontend validations are for UX, not security. This is because bad actors can trick frontend validation. But I'm having a hard time wrapping my head around how a bad actor could trick it. I never thought about it much, I just thought this meant someone could bypass the validations by making a request on something like Postman. But then I learned that with a same origin policy that's not possible.* So how are these bad actors making same origin requests? The only other idea I can think of is bad actors can go into the code (ex: on DevTools) and edit the request there and make an edited request from the same site. Is that what they do? What does tricking frontend validations look like in practice? How are they making a request that gets around CORs? * Update : I was wrong here about what SOP is. It doesn't stop requests from other origins, ex Postman. Many answers below clarify what SOP is for. | I think you are very confused about what both CORS and SOP do... neither is relevant to these attacks at all. There are lots of ways to bypass client-side validation. HTTP is just a stream of bytes, and in HTTP 1.x they're even human-readable text (at least for the headers). This makes it trivial to forge or manipulate requests. Here's a subset of ways to do it, grouped by rough categories: Bypass validation in the browser Browse to your site and input the invalid values. Use the browser dev tools to remove the validation events or manipulate their execution to pass validation anyhow. Submit the form. Use the browser dev console to send requests from the site as though through the validated form, but with unvalidated inputs (just directly invoke the function that makes the request). Use the browser dev tools to "edit and re-send" a request, and before re-sending, change the valid values in the body to invalid ones. For GET requests: just type any URL with invalid parameters into the location bar. For POST requests that use non-samesite cookies for authentication: create a web page that POSTs to your server with the expected values (including any CSRF-protection token) but with invalid values, load it in browser, and submit. Bypass validation using non-browser tools Set the browser to run through an intercepting proxy (like most in the security industry, I usually use Burp Suite, but you can use others like Fiddler too). Capture the outbound request, tamper with the fields to make them invalid, and send it on its way. Use an intercepting proxy again, but this time replay a previous request with modified, invalid values (in Burp, this is exactly what the Repeater tool is for). Right-click a request in the browser's dev tools' network history, select "Copy as cURL", paste the resulting curl command into a command line, edit the validated fields to make them invalid, hit Enter to send. Crafting malicious requests from scratch Using Burp Repeater, specify the protocol, domain, and port for your site. Add the necessary headers, including any cookies or other headers needed for authorization. Add the desired parameters, with invalid values. Click "Send". Using curl , send a request to your site with the required headers and whatever body, including invalid values, you want. Using ncat , open a connection to your site, using TLS, on port 443. Type out the HTTP top line, headers, and body (after all, it's just text, although it'll get encrypted before sending). 
Send the end-of-file input if needed (usually the server will just respond immediately though). Write a little script/program in any language with a TCP or HTTP client library (from JS running on Node to a full-blown compiled golang binary) that creates a request with all the required headers and invalid fields, and sends it to your server. Run the script/program. SOP only applies when the request is sent using a browser AND the request originates from a web page hosted at a different origin (combination of domain, protocol, and port) than the request's destination. Even then, SOP primarily protects against the originating page seeing the response; it doesn't prevent attacks from occurring. If you're the attacker trying to get past client-side validation, then you can just send the request from the origin you're attacking, and SOP is entirely irrelevant. Or just send the request from something that isn't a browser (like a proxy, or curl , or a custom script); none of those even have SOP in the first place. CORS is a way to poke holes in SOP (CORS doesn't add any security; it's a way to partially relax the security feature of SOP), so it doesn't even matter unless SOP is relevant. However, in many cases you can make a cross-origin request with invalid parameters (as in the case where I create my own attack page and point the browser at it, then use it to submit an invalid request to your site) because for most requests, SOP only restricts whether you can see the response - you can send the request cross-origin even if the server doesn't allow CORS at all - and often, seeing the response isn't needed. Pulling the authorization tokens (cookies, header values, whatever) out of the browser after authentication is easy (just examine the network traffic in the dev tools, or use a proxy). Remember, for validation to even be in question, the attacker has to be able to use your site via a browser, which presumably means they can authenticate. Or just submit an authentication request using curl or whatever, scrape the returned token out of the response, and use it in the malicious invalid requests; no browser needed at all! There's nothing the browser can do (in terms of sending requests to the server), that I can't do from a shell prompt with some common, open-source utilities. EDIT: As @chrylis-cautiouslyoptimistic points out, this includes spoofing the Origin header in the request. Although the Origin header is not normally configurable within the browser - it is set automatically by the browser when certain types of requests are made - it's possible to edit it after sending the request (by intercepting or re-playing the request), or to simply forge it in the first place. Any protective measure based on the presence, absence, or value of this header will only be effective in the case that the attacker is indirectly submitting the request through an unwitting victim's browser (as in the case of CSRF), typically because the attacker can't authenticate to the site or wants to attack through another user's account. For any other case, the attacker can simply choose what value, if any, to give as the Origin. EDIT 2: As @Tim mentioned, things like SOP and anti-CSRF measures are all intended to protect the user of a website from an attack by somebody else on the internet (either another user of the same site, or an outsider who wants to exploit your access to the site). 
They don't provide any protection at all when the authorized user is the attacker and the target is the site itself or all of its users, through something like a stored XSS, SQL injection, or malicious file upload (the types of things people sometimes try to prevent with client-side validation). | {
"source": [
"https://security.stackexchange.com/questions/244030",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/250115/"
]
} |
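As a concrete companion to the answer on question 244,030 above, here is a minimal Python sketch of the "non-browser tools" route. The URL, field names, and cookie value are placeholders (assumptions), not a real API; the point is only that nothing forces a request to come from the validated form, and that headers such as Origin are just text outside a browser.
import requests  # third-party HTTP client; pip install requests

resp = requests.post(
    "https://app.example.com/api/profile",                 # placeholder endpoint
    json={"age": -42, "email": "not-an-email"},            # values the client-side JS would reject
    headers={
        "Cookie": "session=PASTE_A_VALID_SESSION_COOKIE",  # copied from an authenticated browser
        "Origin": "https://app.example.com",               # outside a browser, Origin is freely chosen
    },
    timeout=10,
)
print(resp.status_code, resp.text[:200])                   # unless the server validates, this succeeds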
244,255 | I have two sticks of RAM in my computer that I would like to sell or donate. From what I understand some RAM is volatile, losing all its contents when power is gone for a few minutes, and some is non-volatile, retaining that information after power is lost. I would like to know which kind of RAM I have and whether it is safe to give it to someone else. I am not very tech savvy, all I know about the sticks is that the manufacturer is Kingston, and when I open Task Manager it says that it is "DDR3" and the form factor is "DIMM". | Yes, they're safe. Consumer memory DIMMs use volatile SDRAM memory. Volatile means that it does not hold its state after you turn it off. SDRAM memory chips are constructed from an array of memory cells, with one cell per bit of data stored. Modern memory ICs have billions of cells each. Each memory cell is constructed from a transistor and a capacitor. A transistor is like an electronic switch, and a capacitor is like a tiny battery. The value stored in the cell is a single bit - a value of 0 or 1, represented by a low or high voltage. Here's a circuit diagram of a simple memory cell: You can simulate this circuit in your browser here , and I'll talk you through how it works. You can control the simulation with the "Run/STOP" and "Reset" buttons on the top right. If the run/stop button is red, you've stopped the simulation. If it's grey, it's running. I've labelled the transistor and the capacitor, so you can see where they are. The parts marked 10k, 100k, and 1M are resistors - don't worry about those for now. At the top left of the circuit we've got our input data. This is a switch that selects either 5V or 0V (ground), to represent a value of 1 or 0. We use this switch to select which value we want to write into the cell. On the right, we've got our output data. This will show as 1 or 0 when we read from the cell. Underneath that, we have a write/read switch. This selects which operation we want to perform. When switched to the left it writes a value to the cell, using whichever value was set by the input data switch. When switched to the right it reads a value from the cell and displays it on the output data. The enable switch allows us to decide whether or not we want to talk to this specific cell at any point in time. When there are many of these cells all connected to the same input and output signals, this allows us to select just one cell to be read from or written to. There's another switch marked "leakage". I'll get to that later - leave that switch open for now. First, let's write a 1 to the cell. Set the input data switch to 1 and the write/read switch to write. Then close the enable switch. You'll see the bottom of the circuit light up green, and little yellow dots will move for a moment to indicate that a current is flowing as the capacitor charges up. The top part of the capacitor goes green, indicating that the capacitor is now charged up and the cell's value is set to 1. Next, flip the write/read switch to the right, in order to read from the cell. The output data now shows 1. Repeat this again, but for an input value of 0. Also have a play with the enable switch - you'll see that when it is open, the output data will always be 0 regardless of what you do with the input value and the write/read switch. Additionally, whatever value is stored in the capacitor stays there until you turn the enable switch back on. What we've looked at so far is an ideal memory cell. It can store a 0 or 1 almost perpetually. 
However, in reality there is always some leakage in the capacitor that causes it to slowly lose its charge. You can simulate this with the leakage switch I included in the circuit. Close the leakage switch, then set the input data to 1, the write/read switch to write, and close the enable switch. This will cause a 1 to be written to the cell. Now change the write/read switch to read, and look at the output data. It will show 1, just as before. Now wait a few seconds. The output data will flip back to 0. If you watch the path through the leakage switch and the 10k resistor, you'll see the current dots flowing slowly along as the capacitor charge leaks out. Eventually, the capacitor voltage drops below the threshold voltage , causing it to be read as a 0 instead of a 1. Try this a few times to get a feel for what's happening. Once you're happy with that, it's time for another experiment. With the leakage switch closed, repeat the write process again and store a 1 in the cell. This time, instead of immediately performing a read operation, open the enable switch beforehand. Wait a few seconds, then close the enable switch. Here's the full steps in case it's not clear: Close the leakage switch. Set the input data to 1. Set the write/read switch to write. Close the enable switch. Observe the capacitor charging up. Open the enable switch. Set the write/read switch to read. Wait 5 seconds. Close the enable switch. Notice that the output data is zero! What happened is that the charge in the capacitor leaked away while you were waiting, causing it to drop from 1 to 0. This happens in real DRAM chips, too. The controller on the memory chip has to constantly refresh the data in the cells by reading them and writing their values back, to keep up with leakage. When you turn off your computer the data in the DRAM memory chips quickly degrades and leaks back to all zeroes. This process normally takes only a few seconds. (interesting little aside here: the time it takes to set up the switches and get the data in and out of a specific cell is what defines the latency of memory, often listed as four numbers or just something like "CL16" - the memory timings wiki has some further info on this) The leakage process can be artificially slowed down by cooling the chips down to very low temperatures, in what's known as a cold boot attack . This causes the cells to retain their values for much longer. It only works if you freeze the RAM right at the same time as powering the system off - if you wait even a few seconds before freezing it all the data will have begun to degrade. As such it's not a problem for you. There are some additional interesting things that can be talked about here, such as SPD flash, NVDIMM, and Intel Optane, but I'm due to play a D&D game in ten minutes so I don't have time to expand this answer right now. I'll come back later and edit them in. Don't worry, though - they don't affect the safety of your sale! Ok, zombie beholder's dead. Let's talk non-volatile RAM. There's a special type of memory technology called NVDIMM. An NVDIMM is like a regular DDR SDRAM module, except it has a battery backup on it and a non-volatile flash memory chip. This allows the system to be powered off without losing memory state. The battery backup on the DIMM allows it to continue to refresh the memory cells. It then copies the contents of the memory cells into the non-volatile flash chip. 
It can then safely power off, because the contents of memory are saved and the volatile memory chips do not need to be refreshed. The operating system has to be built with support for this feature. This is a specialist memory technology usually reserved for server applications where you need to be able to recover the contents of memory during a power outage, or bring the system up to the same state quickly after maintenance (e.g. replacing a PSU or UPS). Another type of non-volatile DIMM is Intel Optane. This is actually not RAM at all - it's better to think of it like a very low latency NVMe SSD that just happens to plug into a DIMM slot. Finally, let's talk about SPD flash. When you plug a DIMM into your system, your motherboard needs to be able to identify it and learn about its specs and features - its name, size, type, speed, latencies, voltage requirements, inbuilt overclocking profiles, and all sorts of other details. This information is provided by a standard interface called Serial Presence Detect (SPD) . In practice, the information is stored in a table (the exact format and contents of which is defined by JEDEC ) in a small non-volatile flash EEPROM chip on the DIMM, which the motherboard can talk to via SMBus . The operating system can also talk to this chip to find out information about the memory - you can view it with a tool like CPU-Z. Normally this SPD data is written to the EEPROM at the factory and never changed. However, it is entirely possible to write to the flash chip from the operating system. Some chips do technically have the ability to lock the first half of the data (usually the first 256 bytes) but often this is not done, and the second half of the data is always writable. The size of the SPD table for DDR4 is 383 bytes. However, nobody makes 383 byte EEPROM chips - that'd be weird. Instead, you'll usually find that the chip is 512 bytes in size. This means that there are 129 bytes left in the EEPROM that aren't used for anything. If you wanted to, you could store 129 bytes of non-volatile data in every stick of RAM in your computer, by writing to the SPD flash. I wouldn't recommend trying this yourself, since there's every chance you could brick your RAM if you do it wrong, but I did this as an on-stage demo at a security conference a while back. The SPD interface isn't usually exposed to anything but the kernel, but there are many signed Windows drivers out there that either provide write access to SPD by design, as well as drivers that provide write access to SPD by accident, which you can abuse to write a small amount of data to the SPD flash chips. There is, therefore, a scenario in which an attacker could use one of these drivers to write to the SPD flash. This data would remain on the DIMMs even after the system rebooted, or even if you re-installed your OS. The data doesn't do anything by itself - you need malicious code to already be running on the system to actually read and write it - but it is a nice little hidden storage location for a few bytes of data. There's almost no reason to abuse this storage location in practice - it's complicated, error-prone, requires admin and a loaded driver, for almost no benefit in return - but it is a fun little trick to store non-volatile data on regular volatile memory DIMMs! One scenario where this is actually really important is bare-metal cloud hosting environments. In these environments, customers rent physical systems that they have complete administrative access to, rather than virtualised instances. 
Providers of these services need to ensure that there is no data remanence between one customer and the next. The SPD writing trick is one way for an attacker to achieve data remanence. There are plenty of other EEPROMs on the motherboard, hard disks, network card, RAID card, etc. that can also potentially be written to by an attacker with that kind of access. Bare-metal cloud providers have to implement special checks to ensure that these non-volatile flash memory devices were not modified. It's quite a challenge and hardware attestation is far from a solved problem. Again, this doesn't affect your consumer memory DIMMs, but it's a situation where a device that you expect to be volatile or stateless may actually have some non-volatile state. | {
"source": [
"https://security.stackexchange.com/questions/244255",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
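A toy numerical sketch of the leakage-and-refresh behaviour described in the answer on question 244,255 above. The decay rate and threshold are made-up illustrative values, not real DRAM figures; the point is that periodic refresh keeps the bit alive, while losing power (no refresh) lets it decay to zero.
THRESHOLD = 0.5          # charge fraction above which the cell reads as 1
LEAK = 0.9               # fraction of charge that survives each time step

def step(charge, refresh=False):
    charge *= LEAK                                   # the capacitor leaks every step
    if refresh:                                      # a refresh reads the cell and writes it back
        charge = 1.0 if charge > THRESHOLD else 0.0
    return charge

cell = 1.0                                           # write a 1
for t in range(40):
    cell = step(cell, refresh=(t % 5 == 0))          # periodic refresh, like the DRAM controller
print("with refresh:   ", int(cell > THRESHOLD))     # -> 1

cell = 1.0                                           # write a 1, then "power off" (no refresh)
for t in range(40):
    cell = step(cell)
print("without refresh:", int(cell > THRESHOLD))     # -> 0, the bit has leaked away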
244,330 | Recently, we've had users complain that they forget that they have an account, try registering, and get error message that the user with such email already exists. There is a proposal to just log them in such cases. So, if the user inputs valid login info into registration form, they are just logged in instead. Obviously, if the password isn't correct, user will not be logged in. What are the security implications of such approach? If the attacker already knows login and password, they will be able to log in normally anyway. Most sites don't have this behaviour, and my gut reaction is that this is not a good practice, but I can't articulate any specific objections. | Unless other authentication methods are involved (for example 2FA, etc.), if a correct email and the corresponding password are sufficient and necessary to log you in, then I see no security issues. The reason is simple: the authentication and authorization process doesn't change. However, if for example 2FA has been enabled for an account and the second factor is necessary in order to log in, if you allow users to login from a registration form that only accepts email and password, you will introduce a weakness (because it will be possible to bypass the second factor from the registration form), unless of course you also check the second factor right after the registration of an existing user with 2FA enabled. This might make your app more complicate. That said, I believe what you proposed is generally a bad choice for UX (User Experience) anyway. What happens if there are other fields in the registration form, and the new data is different from what is already saved in the account? Think of a phone number, for example. Are you going to update it in the profile automatically without a warning? Are you updating it with a notice? Or will you discard it? This problem will introduce steps and choices that will make everything more complicated, both for you (and your code) and for the final user. Also you will have to distinguish between users that already have an account but entered a wrong password, and users that already have an account and entered the correct original password. You can't just log them in without a notice, because their experience is going to be different (a new account will not behave in the same way as an established account, and will have different data and settings). Unless the users understand what you are doing, some of them might even wonder if there's a bug in your software and think: "Did it just let me log in because I used the right password, or would anybody be able to log in to my account with this registration process?". So, as I said, I believe that this is going to complicate things both for you (and your code) and for the user. If you have huge registration forms and you want to avoid that users waste a lot of time when registering if they already have an account, then make sure you check their email address right away, in the first steps of the registration process, or in the background via AJAX, so the user will discover they already have an account before they start filling in all the fields. | {
"source": [
"https://security.stackexchange.com/questions/244330",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/250509/"
]
} |
244,406 | Could merely receiving, in a text or an email, a URL link to a website (one which could be a pernicious site), such as https://security.stackexchange.com/questions/ask , ever pose a security problem at all? What I am asking is: if I receive such a link but do NOT click on it, can it ever do harm to my account, my computer, or any other website I will ever visit? Edit : As a specific example, I would like to ask about the situation of receiving a link, https://down[.]nnjah68[.]me/app.php/Mjl1to , in WhatsApp. | Many browsers send "pings" to any links on a page by performing a DNS query on them to populate the cache. This makes clicking the link faster because the IP is already in the DNS cache. In theory, a bug in this code could be exploitable simply because the link is there. In practice, this isn't an issue. Just because a link exists doesn't mean it can do much to you.
"source": [
"https://security.stackexchange.com/questions/244406",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/250614/"
]
} |
244,571 | I was reading an article which said that if you install custom root cert from a third party then they can decipher all communication between you and others. But that doesn't make sense. What I understand is that root cert allows SSL mechanism to verify if a certificate provided by connecting party is legit or not. So unless someone(company, work, hacker, etc) really tries to impersonate by doing mitm. Only then the compromised root cert will come in play as that would be used to pass fake cert as valid cert. otherwise just simply having a custom root cert isn't like decrypting all your ssl traffic. Unless the same org has also installed a software that acts as a proxy for all internet traffic. It requires active intercepting, decrypting, and re-encryption of all traffic. So either by installing malware on the computer or monitoring internet traffic. correct? | I was reading an article which said that if you install custom root cert from a third party then they can decipher all communication between you and others. I have no idea what you were reading (citations would be helpful). But you are right in that it is not sufficient to just have a custom root CA certificate installed as trusted - the school/work also has to be an active man in the middle in the traffic and use this CA certificate for SSL interception. So either by installing malware on the computer or monitoring internet traffic. Not only malware installed on the computer can monitor the traffic. It is actually common that trusted programs like antivirus or parental control software do this. And when being directly inside the company (or school) network the path to the internet is usually through the companies firewalls and proxies anyway. Even when connecting from remote with a VPN or other access software the traffic is routed and inspected through company controlled firewalls/proxies, either in the company directly but more often also somewhere in the cloud. | {
"source": [
"https://security.stackexchange.com/questions/244571",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/43921/"
]
} |
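Related to the answer on question 244,571 above: one way to see, from the client side, whether a given connection is being intercepted by such a custom root CA is to look at who issued the certificate the client actually receives, since an interception proxy presents a certificate chaining to that custom root rather than to the site's public CA. A small sketch with Python's standard library; example.com is just a placeholder host, and this assumes the corporate root is trusted by the local certificate store.
import socket, ssl

def issuer_of(host, port=443):
    ctx = ssl.create_default_context()               # uses the locally trusted CA store
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return dict(field[0] for field in cert["issuer"])

print(issuer_of("example.com"))
# On an intercepted connection the organizationName shown here is the proxy/CA
# product doing the interception, not the site's public CA.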
244,663 | SIM swap occurs where the scammer uses phished information about you to request a SIM card replacement from your cell phone carrier, by tricking them into believing that it is you who is making the request for a SIM card replacement by passing their security questions on the phone based on the biometric data they phished about you. Once they have that duplicate of your SIM card, they can receive access codes to your banking and cryptocurrency accounts, because all of this is linked to your phone number (the SIM card). How can anyone possibly protect themselves against this sort of attack? Rarely does anyone have a second phone number, so whatever account you based on your sole number, means they instantly have backdoor access (by fooling your service providers using phished information about you, orally over the phone and by online forms) | You don't use SMS for a second factor. SMS is not secure by any means. The text is on clear, the traffic is on clear, and it's trivially easy to get a new SIM by pretending to be the victim. I once got my phone stolen and got myself a new SIM just by walking to the telco booth and telling my name and the phone number. Google Authenticator is offline. It does not depend on the SIM in any way. You can even calculate the OTP token using PHP/Python/Perl/Javascript, all offline. You would even be able to do it with a calculator that lets you run programs on it. | {
"source": [
"https://security.stackexchange.com/questions/244663",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/248320/"
]
} |
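To back up the point in the answer on question 244,663 above that the one-time codes are computed entirely offline from a shared secret (no SIM, no network), here is a minimal RFC 6238 TOTP sketch in Python. The Base32 secret below is a throwaway demo value, not a real account secret; in practice it is the secret you scan as a QR code when enrolling.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                     # current time step number
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                              # demo secret only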
244,880 | There is a lot of malware that can detect whether it is running inside a VM or sandboxed environment and if such environment is detected it can conceal it self and not execute. So why not make everything a VM? Now all systems are safe! I know not all malware does this but considering that there are many cloud services these days that run on VMs in remote servers does that mean that they are all immune to these types of malware? I am aware that not all cybersecurity threats involve malware but the focus of this question is mainly on malware attacks. | One has to take into account why the malware is doing this distinction in the first place. Some malware does not run in the VM because the chance is high that this VM is used for inspecting the malware (i.e. some security researcher) since most normal users don't use a VM. But if everybody is using a VM then the chance is low that the VM is used for inspecting. This means there is no real reason anymore to use this kind of simple heuristic to distinguish between a potential security researcher and a victim. Therefore this heuristic will be considered useless and a different one will be used in the future. Which means that future malware will also run inside a VM. Note that there are also other heuristics, like checking if specific tools often used by researchers are installed on the system. Now, why not just let everybody install such tools in order to disable malware? Same reason: the heuristic will be no longer used by the malware authors since it no longer works reliably enough. | {
"source": [
"https://security.stackexchange.com/questions/244880",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232403/"
]
} |
244,888 | I'm learning about X509 certs used in client-cert authentication to https endpoints. If I have an OCSP checker (Python script that creates, submits, decodes OCSP responses), do I need to check the not-valid-after date on a client cert? Example: Client makes request to my https endpoint I check the client's certificate CA's OCSP endpoint to see if the cert has been revoked Do I also need to check the client cert dates or does an OCSP revocation occur immediately if the not-valid-after date has been reached? | No. Revocation is an active event, not something that passively or automatically happens. Expiration is passive, though. An expired cert is no longer valid, so there's no need to stick it in a CRL or update OCSP. If the purpose of your checker is only to parse OCSP, then no, you don't need to check the dates on the cert, because that's not part of OCSP. If the purpose of your checker is to answer the question "is this cert valid?", then you absolutely need to check the validity dates (both start and end). EDIT: As eckes points out , for your typical client-identification or server-identification cert (or any other kind not used for long-term signatures), once it's expired you can actually take it off revocation lists, which helps keep their size down. | {
"source": [
"https://security.stackexchange.com/questions/244888",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/236744/"
]
} |
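A small sketch of the date check that has to sit alongside the OCSP lookup in the answer on question 244,888 above, using the pyca/cryptography library (attribute names as in recent releases; newer versions also offer not_valid_before_utc and not_valid_after_utc).
from datetime import datetime, timezone
from cryptography import x509                      # pip install cryptography

def within_validity_period(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    now = datetime.now(timezone.utc)
    not_before = cert.not_valid_before.replace(tzinfo=timezone.utc)   # naive UTC datetimes
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    return not_before <= now <= not_after

# Accept the client cert only if it is inside its validity window AND the
# OCSP responder says "good" - expiry never shows up as a revocation.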
245,151 | I would like to know which method is more secured. I know that they can be combined, but I would like to understand why TPM or OpenSSL might be a more secure technique to generate (encryption, decryption) keys | The difference between using some hardware backed key store (i.e. TPM, HSM, smartcard ...) and a "pure software" solution like openssl genrsa is not so much about the security of the key generation but about the security of the key storage . HSM and similar are designed to never actually provide the created private key but only do operations like cryptographic signatures with the key. They are designed so that the key can not be copied to some other medium, that the HSM cannot be cloned with the key inside etc. Theft of the key thus means that the hardware needs to be stolen. Contrary to that a key generated by openssl genrsa is stored in a normal file, which can be copied without anyone noticing and thus can also be used anywhere else. Theft of the key thus can go unnoticed, since the original owner still has access to the key too. It thus provides less assurance who actually owns and uses the key. | {
"source": [
"https://security.stackexchange.com/questions/245151",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/251531/"
]
} |
245,158 | I am looking for a login security measure where it is keylog and screen capture proof. Is there some type of login security like a 2FA without the need of a second device, but remembering a pattern or a formula which is used to solve a dynamic puzzle that is given to the user on login? Say I am shown 100 words during login. The correct word is an adjective that start with the letter S only or the correct word is something to do with the color blue. | The difference between using some hardware backed key store (i.e. TPM, HSM, smartcard ...) and a "pure software" solution like openssl genrsa is not so much about the security of the key generation but about the security of the key storage . HSM and similar are designed to never actually provide the created private key but only do operations like cryptographic signatures with the key. They are designed so that the key can not be copied to some other medium, that the HSM cannot be cloned with the key inside etc. Theft of the key thus means that the hardware needs to be stolen. Contrary to that a key generated by openssl genrsa is stored in a normal file, which can be copied without anyone noticing and thus can also be used anywhere else. Theft of the key thus can go unnoticed, since the original owner still has access to the key too. It thus provides less assurance who actually owns and uses the key. | {
"source": [
"https://security.stackexchange.com/questions/245158",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/20141/"
]
} |
245,267 | It happens that I participate in a bug hunting program and analyzing the app I realized that there is a particular parameter that is very important for access control and that only changes with the IP address. Anyway, the question here is if I can set a specific public IP address. I don't need to receive a response, just issue a request to parse the request. | You can send out a packet with whatever IP address you like. Then, one of the routers along the way might decide the packet makes no sense (it is a "martian packet") and discard it. Consider this: | 192.1.1.1
+------------------- GW4
| 192.168.1.1
+----------------- GW2------+
| 192.168.16.1 | 192.168.33.1
+---- GW1 -----+ ---- GW3 ----
| |
192.168.16.15 192.168.16.243
YOU ALICE You send out a packet with a source address of 10.0.0.36 and directed to, say, 172.16.16.172. GW1, serving your network which is 192.168.16.0/24, only expected to receive packets matching 192.168.16.0/24, and your packet doesn't match, so it is dropped. You could spoof a connection coming from your "neighbour" Alice, but no more. Even if GW1 did not complain, it would forward the packet to the next hop, which serves the whole 192.168.0.0/16 branch and also would ignore your packet. And so on and so forth (the IP I used are actually not all that routable, but it's an example). Granted, many routers won't do "ingress filtering" because they'll be obsolete or will believe it unnecessary and not cost-effective. But it only takes one hop to wreck the transmission. Even without this hurdle, the difficulties are not over, because let's say you succeed in delivering to your target a packet coming from 10.0.0.36. The target will reply -- and will reply to 10.0.0.36, so the reply packet will never get back to you. Indeed, should the reply packet arrive to 10.0.0.36, the latter would simply reset the connection as unsolicited. This is a problem, because most protocols where you "only need to issue a request" are in fact sent over TCP, which requires a handshake between the communicating parties before any data can be sent. The major exception is HTTP/2 , And without that first reply packet you have precious little chance (not the same as zero chances, but still ) of establishing a full TCP connection, without which you probably won't be able to send your request. You might be able to do this using UDP, or other protocols which have no handshake, if the target application uses UDP (it probably doesn't). | {
"source": [
"https://security.stackexchange.com/questions/245267",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/224059/"
]
} |
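For completeness on the answer to question 245,267 above, a minimal sketch of sending a single datagram with a forged source address using scapy (needs root). The addresses are private/test ranges used purely for illustration, and, as the answer explains, any reply is routed to the spoofed address rather than back to the sender.
from scapy.all import IP, UDP, send                 # pip install scapy; run as root

pkt = IP(src="10.0.0.36", dst="203.0.113.10") / UDP(sport=40000, dport=53) / b"hello"
send(pkt)
# Routers doing ingress filtering (BCP 38) may drop this at the first hop, and
# the reply - if any - goes to 10.0.0.36, so no TCP handshake can be completed.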
245,406 | If it comes to the security a hashing or encryption algorithm provides, we never know the full story. There's that part that we (respectively the public research) understand and can reason about, but we also know that there might be weaknesses we do not know about and though we can't reason about things we don't know, still that unknown part is relevant AND is affected by certain parameters. If you for example have two symmetric encryption ciphers, say AES-256 and TwoFish, which conceptually should provide about the same security, which one would you rather trust? AES-256 is much more widely used than TwoFish. This means there is a much higher incentive in breaking it (and probably much more resources are poured into achieving exactly that). That might be an argument why to prefer the underdog. On the other hand, one can also argue that much more public research is going into AES-256 for the same reason and therefore IF something was fundamentally broken, the chances that it would be publicly known are higher. Or do such properties cancel each other out anyway and thus adoption rate of an algorithm is of no relevance to security considerations at all? What would you put more trust into and why? | Trust the widely accepted algorithm. Not because the algorithm is better. Well, it does matter: if an algorithm is too much of an underdog, it won't have had enough scrutiny and so there's no reason to trust it. But mainly because comparing algorithms, as long as they're reasonably reputable, is meaningless: they're fine and that's it. It's not the algorithm that kills you, it's the implementation. With a widely-used algorithm, you get a better selection of implementations, and the implementations themselves have better scrutiny. That's the important thing. So don't use an underdog for which there's only one or two implementations and nobody really looks at their code. Use a popular implementation of a popular algorithm. Popular AES implementations receive more scrutiny than those of any other block cipher. Among ciphers, only ChaCha20 receives as much scrutiny. This is true especially if you're worried about NSA-level adversaries. We have some historical data about NSA's capabilities. We know that when they advised on the design of DES , they made it more robust to an attack technique that wasn't publicly known at the time (differential cryptanalysis), and vulnerable only to brute force with a budget that they didn't have, but were confident of reaching before anyone else. We know that when GCHQ invented Diffie-Hellman , it was rediscovered publicly less than a decade later. We know from the Snowden revelations that in the early 2010s, NSA couldn't break popular encryption primitives, but could effectively break most software due to implementation bugs. | {
"source": [
"https://security.stackexchange.com/questions/245406",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/229626/"
]
} |
245,438 | I received an email to my corporate email account from an external Gmail account. The list of recipients clearly shows (an eventually successful) attempt to guess my email address based on my personal information (nothing confidential — all of it is semi-publicly available on LinkedIn), including a correct internal domain name. However, the email itself did not contain anything meaningful - both subject and body contained a single word (which was the corporate name). There were no links, trackers, attachments, or even an attempt to make me respond. That left me a little puzzled — by the looks of it, non-trivial effort was put in crafting this email — what would an attacker gain from it? | Attempting to send a message to a non-existant email address will typically result in a “bounceback” message with an error code like 510 or 550 invalid address . If you try several addresses and there is no error message for only one of them, you know this one actually exists. Someone who has a mailbox on a corporate email server also probably has access to multiple other systems or services, possibly with the same user name. The sender now has the name/handle of an account they can target on these systems. | {
"source": [
"https://security.stackexchange.com/questions/245438",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/251896/"
]
} |
245,919 | I searched my email addresses in https://haveibeenpwned.com/ . One of my e-mail addresses turns out to have been pwned, and is present in a data breach, in particular the Apollo data breach: Apollo: In July 2018, the sales engagement startup Apollo left a
database containing billions of data points publicly exposed without a
password. The data was discovered by security researcher Vinny Troia
who subsequently sent a subset of the data containing 126 million
unique email addresses to Have I Been Pwned. The data left exposed by
Apollo was used in their "revenue acceleration platform" and included
personal information such as names and email addresses as well as
professional information including places of employment, the roles
people hold and where they're located. Apollo stressed that the
exposed data did not include sensitive information such as passwords,
social security numbers or financial data. The Apollo website has a
contact form for those looking to get in touch with the organisation. I have never subscribed to Apollo or given my address to Apollo. How do they have my e-mail address in the first place? Web scraping? | Web scraping is indeed a possibility, as mentioned in this Wired article : As Apollo noted in its letter to customers, it draws a lot of its information from public sources around the web, including names, email addresses, and company contact information. But it also scrapes Twitter and LinkedIn. | {
"source": [
"https://security.stackexchange.com/questions/245919",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/246121/"
]
} |
246,062 | When implementing password hashing using PBKDF2 for authenticating access to a REST API, when we say that PBKDF2 is slow, does it mean that it's going to take a lot of time to hash the password and validate it, making the service not responsive enough for the end user? Or is it the case that PBKDF2 is slow only when the password given is not valid, not when the password is correct? | PBKDF2 and other key stretching algorithms are meant to be slow and take the same amount of time whether the input password is correct or incorrect. To reduce computational load and latency for your user, the API should authenticate once via login credentials and issue a revocable or time-limited session token that is verified by a simple lookup.
"source": [
"https://security.stackexchange.com/questions/246062",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/90321/"
]
} |
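A minimal sketch of the pattern recommended in the answer on question 246,062 above, using only Python's standard library: the slow PBKDF2 computation runs once at login (and costs the same for right and wrong passwords), and afterwards the API only checks a cheap random session token. The iteration count is an illustrative figure, not a recommendation.
import hashlib, hmac, os, secrets

ITERATIONS = 600_000                                   # illustrative work factor

def hash_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest                                # store both alongside the user record

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)      # same runtime whether correct or not

def issue_session_token() -> str:
    return secrets.token_urlsafe(32)                   # cheap to generate and to look up per request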
246,076 | I just logged into my FB account from a different location as usual and received a message that my account was locked down due to that attempt from an unknown location. This spiked my curiosity, and I'm wondering, if I was able to change my location to my home address using a VPN (not even sure you can be that specific or not,) would I then be able to log into my account with no issue? I'm sure this is something that's been thought of, but I'm just curious how they would determine that I'm not actually there? | PBKDF2 and other key stretching algorithms are meant to be slow and take the same amount of time whether the input password is correct or incorrect. To reduce computational load and latency for your user, the API should authenticate once via login credentials and issue a revokable or time-limited session token that is verified by a simple lookup. | {
"source": [
"https://security.stackexchange.com/questions/246076",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/252721/"
]
} |
246,157 | We recently had issues with people messing around inside our system. To prevent code injections within my python code, I implemented the following if block: #! /usr/bin/python3
#-*- coding:utf-8 -*-
def main():
print("Your interactive Python shell!")
text = input('>>> ')
for keyword in ['eval', 'exec', 'import', 'open', 'os', 'read', 'system', 'write']:
if keyword in text:
print("You are not allowed to do this!")
return;
else:
exec(text)
print('Executed your code!')
if __name__ == "__main__":
main() A few (user) people can run this python file with sudo rights inside our Ubuntu system. I know this sounds like a security hole, but I don't see any possibility to escape from this. Is there any possibility to inject inside my code block? Do you have any tips to prevent code injections? | Simple blacklists are about the worst way to patch a vulnerability. They'll create a lot of false positives and any determined attacker will usually find a way around them. Web Application Firewalls are a good example. The detection rules they employ are way more complicated than your simplistic blacklist, yet they don't catch everything, and every now and then, bypasses come up for things they are supposed to catch. I don't see any possibility to escape from this. Or so you think. You just haven't looked long enough. vars(__builtins__)['ex'+'ec']("print('pwned')") This sails right through your filter. It calls the exec() function which then goes on to print ' pwned '. Now you can modify your blacklist to catch this as well, but someone else will come up with another way. 99% of the time I see someone using something like exec or eval , it's completely unnecessary. If you can avoid using exec , do yourself a favor and get rid of this vulnerability waiting to be exploited. If, however, you absolutely need to use exec , then do as @amon suggested in the comments : Create a new pipe, fork and drop as many privileges in the child as possible, and then execute the code. Use the pipe to communicate between the child and the parent processes. Or sandbox it as Steffen Ullrich suggested. | {
"source": [
"https://security.stackexchange.com/questions/246157",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/252790/"
]
} |
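For reference, a rough sketch of the fork/drop-privileges/pipe approach that the answer to question 246,157 above points to. It is POSIX-only, assumes the parent runs as root and that uid/gid 65534 is an unprivileged "nobody" account, and even then it is containment rather than a real sandbox: exec with an emptied __builtins__ is still escapable.
import os, resource

def run_untrusted(code: str) -> bytes:
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                                           # child
        os.close(read_fd)
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))    # cap CPU seconds for runaway code
        os.setgid(65534)                                   # drop group first, then user
        os.setuid(65534)
        try:
            exec(code, {"__builtins__": {}})               # still escapable - not a real sandbox
            os.write(write_fd, b"ok")
        except Exception as exc:
            os.write(write_fd, repr(exc).encode())
        finally:
            os._exit(0)
    os.close(write_fd)                                     # parent
    output = os.read(read_fd, 4096)
    os.close(read_fd)
    os.waitpid(pid, 0)
    return output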