Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
185,908
From time to time, some web sites ask you to pick a security question and enter an answer for it. The question list is standard and it usually includes "What is your mother's maiden name?". Some people use their mother's real maiden name so that they are sure they can remember what to provide when asked (e.g. as part of the process to recover the account). This means that this information is fixed for a very long period of time. If some web application is hacked and such an answer is associated with an e-mail address (or worse, with personally identifiable information), it can potentially create a vulnerability for other web applications. Also, a mother's maiden name might already be available in public records. Assuming the above issues with this security question (or any other security question that relies on a constant within one's life): why is Mother’s Maiden Name still used as a security question?
Because people are lazy and/or incompetent. And, well, you know, the Internet is full of chimpanzees. I would argue that all security questions are bad, but using the mother's maiden name is exceptionally bad: At least in Sweden, I can find out anyone's maiden name with a simple call to the tax office. It is literally public information. It's 2018, and it's fairly common for couples to adopt the bride's name when getting married. Your mother's maiden name is then your surname. Great. Luis Casillas rightly adds: There are dozens of countries, with billions of inhabitants between them, where women don't change their legal name when they marry. The United States in particular has huge immigrant minorities of people from such countries. Seriously, there is no excuse for this. It's just bad.
{ "source": [ "https://security.stackexchange.com/questions/185908", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164712/" ] }
185,974
Since GDPR is shaking everything up at the minute, I'm working on a few changes to our website/process. I work in eCommerce in UX (UK based) and support marketing teams with certain activities. My question is: does the gender of an individual count as PII? We store gender in a data layer as a JavaScript variable which is held within our own business; we can then choose to pass these variables across to a testing platform to target individuals based on the presence of these variables. As I'm not a legal/data type person, I'm not 100% sure whether, by storing and having the means to pass a person's gender (pulled from info we get when they create an account with us) to a third party, we are breaching any kind of information security agreement. If this is down to company policy etc. then just let me know and I'll close the question as it's not really for here.
The definition of personal data as given in the GDPR: ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person; As you state that you use the presence of the variable to target individuals, the gender definitely has an indirect reference to someone's identity and, as such, is personal data. However, that doesn't mean you have to stop processing the gender in the context you described. From a risk perspective (which is what the GDPR is all about), I don't see an issue if you only share the gender - which is effectively pseudonymised since you don't supply any other direct identification data along with it. However, you do have other obligations w.r.t. the transparency principle, incorporating the processing activity into your processing register, determining the legal ground for processing (and acting accordingly), determining the processor-controller relationship with your third party and including the necessary clauses in the contract, etc. To determine all this, much more information is required than you have supplied in your question, and I highly encourage you to seek (legal) professional advice to support you in this matter.
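For illustration only, a minimal Python sketch of the pseudonymisation point above (the field names and the to_testing_platform helper are hypothetical, not from the original question): strip all direct identifiers before anything is handed to the third-party testing platform, so only the targeting attribute travels.

profile = {"email": "user@example.com", "name": "A. Person", "gender": "female"}

def to_testing_platform(record: dict) -> dict:
    # Only attributes explicitly cleared for sharing leave the business.
    allowed = {"gender"}
    return {key: value for key, value in record.items() if key in allowed}

payload = to_testing_platform(profile)  # {'gender': 'female'}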
{ "source": [ "https://security.stackexchange.com/questions/185974", "https://security.stackexchange.com", "https://security.stackexchange.com/users/178300/" ] }
185,979
I'm trying to intercept requests using OWASP ZAP proxy and Burp Suite. My current configuration is that my Android phone (OnePlus 5, Android Oreo 8.1) has both the ZAP and Burp certificates installed. But an issue arises when I change my ZAP certificate in ZAP Proxy. With the new certificate in ZAP Proxy, the normal thing to do would be to install that certificate on my Android phone. But with an existing ZAP certificate in place, I can't do that. If I remove the old ZAP certificate from /system/etc/security/cacerts, reboot, install the new certificate and reboot again, the certificate is not shown in the credential storage. If I then remove the new certificate, put the old certificate back and reboot, the certificate appears in the credential storage again. Why not just import the old certificate into the new ZAP proxy? Because importing a certificate into ZAP requires both the certificate and the unencrypted private key. ( https://github.com/zaproxy/zap-core-help/wiki/HelpUiDialogsOptionsDynsslcert ) I've encountered this issue not only with the ZAP certificate but also with the Burp certificate: trying to install a new certificate with the same name does not work. If anyone has a solution, please do help me. For reference, these are the steps I follow to insert a custom certificate into an Android phone: ( https://blog.ropnop.com/configuring-burp-suite-with-android-nougat/ )
{ "source": [ "https://security.stackexchange.com/questions/185979", "https://security.stackexchange.com", "https://security.stackexchange.com/users/177185/" ] }
186,086
In short: Instead of another question asking about when to use /dev/random instead of /dev/urandom , I present the following scenario, in which I find myself in an application I'm building: A VM or container environment (ie, a fresh install, probably only seconds old when the application is run for the first time) A need for cryptographically secure random bytes to use as keying material for the rest of the life of the installation (months or more) A user story and interface in which blocking (even for minutes, if necessary) is acceptable I'm wondering: is this the rare but proper use case for a blocking random source (ie, using getrandom with the blocking mode flag)? Longer form: Obviously /dev/urandom vs /dev/random is a topic that has led to contentious discussion. For my part, I'm of the mind that /dev/urandom is preferable in nearly all typical use cases - in fact, I have literally never used a blocking random source before. In this popular and wonderful answer , Thomas Pornin makes the case that the urandom man page is somewhat misleading (agreed) and that, once properly seeded, the urandom pool will not "run out" of entropy in any practical scenario - and this comports with my understanding as well. However, I think that he slightly oversells urandom by saying that "the only instant where /dev/urandom might imply a security issue due to low entropy is during the first moments of a fresh, automated OS install." My understanding is that "boot-time entropy hole" for a typical Ubuntu server boot is over a minute long! This is based on research at the University of Michigan by J. Alex Halderman . Halderman also seems to say that the entropy pool fills on each boot , and not, as Pornin says in his answer, at the very first OS install. Although it's not terribly important for my application, I'm wondering: which is it? I have read the "Myths about Urandom" post by Thomas Hühn , but I find it unconvincing for several reasons, most pertinent for my application is that the post essentially boils down to "people don't like to be stopped in their ways. They will devise workarounds, concoct bizarre machinations to just get it running." While this is undoubtedly true (and is the reason I've always used /dev/urandom everywhere else, especially for web stuff), there are some applications in which users will tolerate having to wait, especially if they are installing it for the first time. I am building an application meant to be run locally in a terminal setting and I already have reason to create an expectation that the initial installation process will be a bit involved. I have no qualms about asking the user to wait a bit if it can add even a small amount of robustness against a repeated keypair. In fact, Halderman says he was able to compute private keys for 105,728 SSH hosts - over 1% of those he scanned - because of weak entropy pools being used to generate the keypair. In this case, it was largely embedded devices which presumably have abysmal sources of entropy and thus a hard time filling their pool. But - and this is perhaps the heart of my question - in an age when apps are shipped in wholly naive containers, meant to be run as if on a shiny, fresh OS installation only seconds old, aren't we reasonably concerned about this same phenomenon? Don't we need a practical blocking interface? And is that what getrandom is intended to become? Of course it's possible in many situations to share entropy from the host to the guest. 
But for the purposes of this question, let's assume that the author has decided not to do that, either because she won't have sufficient control over the particulars of the deployment or because there is no entropy pool available on the host. Thinking just a bit further down the road: what are the best practices for environments which are as fresh and naive as I've described above, but which run on devices with fairly abysmal prospects for initial entropy generation? I'm thinking of small embedded devices with few or no HIDs, which are perhaps air-gapped for the initial installation process. Update: It appears that, as of PEP 524, Python (in which the app in question is written) uses getrandom when os.urandom is called, and blocks in the event that the entropy pool hasn't gathered at least 128 bits. So as a practical matter, I think I have my answer - just use os.urandom and it will behave like /dev/random only when necessary. I am, however, still interested in the over-arching question here (i.e., does the era of containerization mean a re-thinking of the "always just use urandom" orthodoxy).
I wrote an answer which describes in detail how getrandom() blocks waiting for initial entropy. However, I think that he slightly oversells urandom by saying that "the only instant where /dev/urandom might imply a security issue due to low entropy is during the first moments of a fresh, automated OS install." Your worries are well-founded. I have an open question about that very thing and its implications. The issue is that the persistent random seed takes quite some time to move from the input pool to the output pool (the blocking pool and the CRNG). This issue means that /dev/urandom will output potentially predictable values for a few minutes after boot. The solution is, as you say, to use either the blocking /dev/random, or to use getrandom() set to block. In fact, it is not uncommon to see lines like this in the kernel's log at early boot:
random: sn: uninitialized urandom read (4 bytes read, 7 bits of entropy available)
random: sn: uninitialized urandom read (4 bytes read, 15 bits of entropy available)
random: sn: uninitialized urandom read (4 bytes read, 16 bits of entropy available)
random: sn: uninitialized urandom read (4 bytes read, 16 bits of entropy available)
random: sn: uninitialized urandom read (4 bytes read, 20 bits of entropy available)
All of these are instances when the non-blocking pool was accessed even before enough entropy has been collected. The problem is that the amount of entropy is just too low to be sufficiently cryptographically secure at this point. There should be 2^32 possible 4-byte values; however, with only 7 bits of entropy available, there are only 2^7, or 128, different possibilities. Halderman also seems to say that the entropy pool fills on each boot, and not, as Pornin says in his answer, at the very first OS install. Although it's not terribly important for my application, I'm wondering: which is it? It's actually a matter of semantics. The actual entropy pool (the page of memory kept in the kernel that contains random values) is filled on each boot by the persistent entropy seed and by environmental noise. However, the entropy seed itself is a file that is created at install time and is updated with new random values each time the system shuts down. I imagine Pornin is considering the random seed to be a part of the entropy pool (as in, a part of the general entropy-distributing and collecting system), whereas Halderman considers it to be separate (because the entropy pool is technically a page of memory, nothing more). The truth is that the entropy seed is fed into the entropy pool at each boot, but it can take a few minutes to actually affect the pool. A summary of the three sources of randomness: /dev/random - The blocking character device decrements an "entropy count" each time it is read (despite entropy not actually being depleted). However, it also blocks until sufficient entropy has been collected at boot, making it safe to use early on. Note that modern kernels have re-designed this character device. Now, it will block only until sufficient entropy has been collected once, then will remain non-blocking, identical to /dev/urandom. /dev/urandom - The non-blocking character device will output random data whenever anyone reads from it. Once sufficient entropy has been collected, it will output a virtually unlimited stream indistinguishable from random data. Unfortunately, for compatibility reasons, it is readable even early on in boot before enough one-time entropy has been collected.
getrandom() - A syscall that will output random data as long as the entropy pool has properly initialized with the minimum amount of entropy required. It defaults to reading from the non-blocking pool. If given the GRND_NONBLOCK flag, it will return an error if there is not enough entropy. If given the GRND_RANDOM flag, it will behave identically to /dev/random , simply blocking until there is entropy available. I suggest you use the third option, the getrandom() syscall. This will allow a process to read cryptographically-secure random data at high speeds, and will only block early on in boot when not enough entropy has been gathered. If Python's os.urandom() function acts as a wrapper to this syscall as you say, then it should be fine to use. It looks like there was actually much discussion on whether or not that should be the case, ending up with it blocking until enough entropy is available. Thinking just a bit further down the road: what are the best practices for environments which are as fresh and naive as I've described above, but which run on devices of fairly abysmal prospects for initial entropy generation? This is a common situation, and there are a few ways to deal with it: Ensure you block at early boot, for example by using /dev/random or getrandom() . Keep a persistent random seed, if possible (i.e. if you can write to storage at each boot). Most importantly, use a hardware RNG . This is the #1 most effective measure. Using a hardware random number generator is very important. The Linux kernel will initialize its entropy pool with any supported HWRNG interface if one exists, completely eliminating the boot entropy hole. Many embedded devices have their own randomness generators. This is especially important for many embedded devices, since they may not have a high-resolution timer that is required for the kernel to securely generate entropy from environmental noise. Some versions of MIPS processors, for example, have no cycle counter. How and why do you suggest using urandom to seed a (I guess userland?) CSPRNG? How does this beat getrandom? The non-blocking randomness device is not designed for high performance. Until recently, the device was obscenely slow due to using SHA-1 for randomness rather than a stream cipher as it does now. Using a kernel interface for randomness can be less efficient than a local, userspace CSPRNG because each call to the kernel requires an expensive context switch . The kernel has been designed to account for applications that want to draw heavily from it, but the comments in the source code make it clear that they do not see this as the right thing to do: /* * Hack to deal with crazy userspace progams when they are all trying * to access /dev/urandom in parallel. The programs are almost * certainly doing something terribly wrong, but we'll work around * their brain damage. */ Popular crypto libraries such as OpenSSL support generating random data . They can be seeded once or reseeded occasionally, and are able to benefit more from parallelization. It additionally makes it possible to write portable code that does not rely on the behavior of any particular operating system or version of operating system. If you do not need huge amounts of randomness, it is completely fine to use the kernel's interface. If you are developing a crypto application that will need a lot of randomness throughout its lifetime, you may want to use a library like OpenSSL to deal with that for you.
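As a concrete illustration of that advice (not part of the original answer): on Linux, Python 3.6+ exposes the syscall directly as os.getrandom, so a rough sketch of "fail fast if the pool is not yet initialized, otherwise block until it is" could look like this.

import os

def initial_key_material(nbytes: int = 32) -> bytes:
    """Fetch key material, tolerating a block only around first boot."""
    try:
        # GRND_NONBLOCK: raise instead of blocking if the CRNG has not
        # been initialized yet (EAGAIN surfaces as BlockingIOError).
        return os.getrandom(nbytes, os.GRND_NONBLOCK)
    except BlockingIOError:
        print("entropy pool not ready; waiting (only happens early at boot)...")
        # Without flags, getrandom() blocks until the pool is initialized,
        # then reads from the non-blocking pool.
        return os.getrandom(nbytes)

key_seed = initial_key_material()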
{ "source": [ "https://security.stackexchange.com/questions/186086", "https://security.stackexchange.com", "https://security.stackexchange.com/users/135502/" ] }
186,133
Reusing passwords poses a terrible risk for users because, in the event of a data breach where the passwords are not stored securely enough, all other services they use that password for are also compromised by default. Typically in these breaches the passwords are stored using only a plain hashing algorithm like SHA-1, or in some recent cases even MD5 (which were never meant for storing passwords securely), making it easy for attackers to brute-force passwords using just raw GPU hash power. Suppose a person creates a properly random password of sufficient length (e.g. 100+ characters) and very high entropy, mixing many types of symbols, numbers and letters; assume also that their machine is not compromised (no keylogger installed or the like), and that all their browsing sessions go through an encrypted tunnel. Is there a risk in reusing this password for multiple websites, given that it could likely never be cracked in their lifetime even if a data breach occurred on one of the services they use?
You don't need to brute-force a hash to steal a password. A website might be compromised by an attacker so that they can read the passwords directly from the login form, in plain text. Or the website owner might be doing this themselves - they could always do it if they wanted to, and it's up to you whether you trust the owner or not (you wouldn't want to use the same password on Facebook and SomeBlackHatCommunity dot com). Also, there is always good old shoulder surfing for stealing passwords. A password is like a key: by reusing the same password, you are using the same key for different doors and handing copies of it to different people. You can see it's never a good idea. So you should never use the same password in different places, unless that password protects the same data or gives you the same privileges - for example, I believe you can use the same password to protect your backups.
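As a purely illustrative aside (not from the original answer), the practical alternative to reuse is a distinct, high-entropy random password per site - which is essentially what a password manager generates for you. A minimal Python sketch using the standard secrets module:

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 32) -> str:
    # secrets.choice uses a CSPRNG, unlike random.choice.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent password per service, so one breach cannot cascade.
passwords = {site: new_password() for site in ("example.com", "mail.example.org")}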
{ "source": [ "https://security.stackexchange.com/questions/186133", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
186,327
I've noticed a trend in emails I've received as a result of GDPR: some of them are sort of 'opt-out' (or pseudo-opt-out, where you just need to stop using their service), like so: Our updated Privacy Policy explains your rights under this new law and will become effective on May 25, 2018. By continuing to use our site or app after this date, you are agreeing to these updated terms. Others require you to opt in: Hi, this is another one of those General Data Protection Regulation ("GDPR") emails where we request permission to email you, even if you are outside of the EU. We hope that you'll opt in to continue to receive an email every now and then about our latest updates. What differentiates the two requests? Are they storing different data on me, or is it more to do with their service? I've seen some companies that only email me about promotions etc. (similar to the second quote above), yet they have the pseudo-opt-out message, so I don't think it's service related.
It is not clear that the first kind of email is legal. A French association, la Quadrature du Net, is planning to launch a class action against five big tech companies (the famous "GAFAM") on May 28th about just this practice. Here is a summary of their arguments: Article 6 §1 of GDPR lists six cases for processing personal data legally, one of these is user consent; Article 4 §11 states that consent must be obtained in a way that shows it is the will of the user in a clear, specific, informed and unequivocal way; In the preamble of GDPR, it is explained that consent must be a positive action, and there can be no consent in case of silence, pre-checked boxes, or inaction; Article 7 §4 states that when obtaining consent, it is necessary to consider whether processing personal data is absolutely necessary for providing a service. As a consequence, the "G29", the group of national data protection authorities in the EU, affirmed that if a user has no real choice, feels constrained, or will face negative consequences for refusing consent, then the consent given is not valid. The G29 therefore affirmed that GDPR guarantees that giving consent to processing personal data cannot be the counterpart of providing services. Moreover, if a company asks for consent as a legal basis for processing personal data, then they are forbidden from using the other legal bases of Article 6 for justifying their processing. (The reasoning goes into deeper detail, if you can read French. What I have written above is just a summary.) So the first email is essentially strong-arming you into accepting something illegal. If the class actions I mentioned above are successful, then you can expect smaller companies to follow suit and stop sending emails of the first kind (or face serious legal consequences).
{ "source": [ "https://security.stackexchange.com/questions/186327", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11640/" ] }
186,345
I managed to find a vulnerability in the website of a so-called friend of mine, and I want to show him that his website is vulnerable to data extraction. When I use something like yes')-- as the POST value I get the following debug info: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%' ) or (r.nrinreg=1 and datainreg='01.01.2017' and r.adresant like '%yes' at line 3 So basically my input lands inside some brackets. Submitting yes%')-- results in a slow-loading page that returns nothing but a blank response. So that would result in something like this query: or (r.nrinreg=1 and datainreg='01.01.2017' and r.adresant like '%yes%') -- whatever is after the comment How can I get any data, or the MySQL version, from that query, knowing that the input isn't escaped? Also, what is this kind of SQL injection called? ASP.NET Version: 2.0.50727.8762
{ "source": [ "https://security.stackexchange.com/questions/186345", "https://security.stackexchange.com", "https://security.stackexchange.com/users/178756/" ] }
186,441
Assuming a site is using all HTTPS all the time (LB redirects port 80 to 443), is there any reason not to force every cookie set by the application to use BOTH secure AND httponly ? Currently, for example, a PCI scan will only flag the jsessionid as not using the secure attribute, but tomorrow it could be the other one, so I'm trying to get ahead of it.
Yes, there are cases where you don't want HttpOnly or Secure. If you need JavaScript to see the cookie value, then you remove the HttpOnly flag. A couple of cases: some sites track page state in a cookie, using JavaScript to read and write the cookie value, and CSRF mitigations often rely on the server sending a value in a cookie and expect JavaScript to read that value. The Secure flag is more important. If we expect all sites to run over HTTPS, and only HTTPS, then the only HTTP part is a redirect to HTTPS. You never want your cookie sent in the clear. Well, almost never. Here are two cases where you might: development environments often don't have, or don't need to have, TLS certs (though maybe they should); and to track activity that originated on HTTP. You might even use your load balancer to set an insecure cookie before it sends back the redirect. Then your application analytics can track which URLs came in as HTTP, and your load balancer can track which sessions came in as HTTP. In practice, if you're running an HTTPS site, always set the Secure flag on cookies, and always err on the safe side by setting HttpOnly, unless you know your JavaScript requires cookie access. UPDATE - TLS in Development: There has been a lot of talk about whether you should or shouldn't use TLS in development. I posted the question here: Should I develop with TLS on or off?
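For a concrete illustration, here is a minimal sketch using Flask (which is not mentioned in the original answer; the cookie name and value are placeholders) of setting both flags when issuing a session cookie:

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # Secure: only ever sent over HTTPS. HttpOnly: invisible to JavaScript.
    resp.set_cookie("sessionid", "opaque-session-token",
                    secure=True, httponly=True, samesite="Lax")
    return resp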
{ "source": [ "https://security.stackexchange.com/questions/186441", "https://security.stackexchange.com", "https://security.stackexchange.com/users/178868/" ] }
186,622
There are multiple mechanisms (some now defunct) that allow me to access service A (the Relying Party / RP) using a token granted by service B (the Identity Provider / IdP). Typically these replace a username-and-password login. Examples of IdP protocols include: OpenID 2.0 OpenID Connect IndieAuth Mozilla Persona Portier ...and obviously Google Sign-In and Facebook Login What stops the IdP from gaining access to my account on the RP? Surely a bad actor with sysadmin privileges at the IdP can: Make a login attempt at service A ...initiating a token request from A to B Generate a token at service B (without needing my credentials) ...return the valid response to A Now access my account at A I'm asking a general question, but as a concrete example , what stops a bad Facebook admin from posting Stack Exchange questions under my name? Rough sketch: (Sketch adapted from https://developers.google.com/identity/protocols/OAuth2 but note that the referenced protocol is just an example .)
Yes, they can. Simple answer: You authenticate in some way to your identity provider, usually via username and password. The bad admin can store the transmitted credentials and just re-use them. This attack does not depend on how the backend is implemented. In general your password for the identity provider isn't used for authentication to third-party services at all, which means that your identity provider actually has your login keys (and you do not have them at all). You could think of schemes which incorporate your password into the authentication process without storing it unencrypted afterwards, but in practice I do not know any such scheme.
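To make the trust relationship concrete, here is an illustrative sketch only - it assumes the PyJWT library and invented issuer/audience names, none of which appear in the original answer: whoever holds the IdP's token-signing key can mint an assertion for any subject, and the relying party cannot tell it apart from a legitimate login.

import time
import jwt  # PyJWT

IDP_SIGNING_KEY = "idp-signing-secret"  # in reality held only by the IdP

# A token the IdP could mint for any user it likes, without that user's involvement.
token = jwt.encode(
    {
        "iss": "https://idp.example",         # hypothetical issuer
        "sub": "some-victim-user-id",         # any subject the IdP chooses
        "aud": "https://relying-party.example",
        "exp": int(time.time()) + 300,
    },
    IDP_SIGNING_KEY,
    algorithm="HS256",
)

# The relying party's verification succeeds, because the signature is genuine.
claims = jwt.decode(token, IDP_SIGNING_KEY, algorithms=["HS256"],
                    audience="https://relying-party.example")
print(claims["sub"])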
{ "source": [ "https://security.stackexchange.com/questions/186622", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7062/" ] }
186,657
I know that collisions for MD5 have been documented since the 90s and that digital certificates based on MD5 were demonstrated to be completely compromised back in 2010, but how effective is MD5 at ensuring that small amounts of data have not been tampered with? I have some small text files that are a few pages long (let's say 15 kB). I've been using SHA-256 on them, but it would be much more convenient to be able to use MD5 instead. How secure would MD5 be as a hash digest for these small 15 kB text files? Would a malicious party be able to produce collisions for such a small amount of data, or does the small size make this a difficult endeavor?
The size of the input is irrelevant. In fact, because of the birthday paradox, collisions are guaranteed as soon as the size of the message exceeds that of the hash. The best way to avoid collisions is to use a stronger hash which is not as vulnerable to them, such as SHA-2. However, you are describing a more difficult attack than a collision attack, called a preimage attack, which MD5 is safe from. There are three types of attacks* that result in having two files with the same digest: 1st preimage - Find an input that resolves to a specific hash. 2nd preimage - Modify an input without changing the resultant hash. Collision - Find any two distinct inputs that have the same hash. These are vulnerabilities when they can be carried out more efficiently than by brute force search. Collisions can still occur naturally, and in fact they are guaranteed with any non-trivial amount of input due to the pigeonhole principle, but hashes are designed to make it difficult to intentionally perform. For a hash with an output the size of MD5's, the chance of a random, accidental collision is extremely low. Even if you hash 6 billion random files per second, it would take 100 years before you get a 50% chance of two hashes colliding. MD5 is great for detecting accidental corruption. A strong n-bit hash function is designed to have a security level of 2^n against both 1st and 2nd preimage attacks, and a security level of 2^(n/2) against collision attacks. For a 128-bit hash like MD5, this means it was designed to have a security level of 2^128 against preimages and 2^64 against collisions. As attacks improve, the actual security level it can provide is slowly chipped away. MD5 is vulnerable to a collision attack requiring the equivalent of only 2^18 hash invocations instead of the intended 2^64 to exploit. Unless the attacker generates both files, it is not a collision attack.† An attacker who has a file and wants to maliciously modify it without the hash changing would need to mount a 2nd preimage attack, which is completely infeasible against MD5 with modern technology (the best attack has a complexity of 2^123.4, compared to MD5's theoretical maximum of 2^128). Collision attacks are relevant in different situations. For example, if you are given an executable made by an attacker without a backdoor, you may hash it and save the hash. That executable could then later be replaced with a backdoored version, yet the hash would be the same as the benign one! This is also a problem for certificates where someone could submit a certificate for a domain they do own, but the certificate would intentionally collide with one for a domain they do not own. It is safe to use MD5 to verify files as long as the stored hash is not subject to tampering and can be trusted to be correct, and as long as the files being verified were not created (or influenced!) by an attacker. It may still be a good idea to use a stronger hash however, simply to prevent a potential practical preimage attack against MD5 in the future from putting your data at risk. If you want a modern hash that is very fast but still cryptographically secure, you may want to look at BLAKE2. * While there are other attacks against MD5 such as length extension attacks that affect all Merkle–Damgård hashes as mentioned by @LieRyan, these are not relevant for verifying the integrity of a file against a known-correct hash.
† A variant of the collision attack called a chosen-prefix collision attack is able to take two arbitrary messages (prefixes) and find two values that, when appended to each message, results in a colliding digest. This attack is more difficult to pull off than a classic collision attack. Like the length extension attack, this only applies to Merkle–Damgård hashes.
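As a small, purely illustrative sketch (Python's standard hashlib; the file path is a placeholder), this is how you could verify a file with BLAKE2 - the interface is identical for md5 or sha256, so switching algorithms is a one-word change:

import hashlib
import hmac

def file_digest(path: str, algo=hashlib.blake2b) -> str:
    h = algo()
    with open(path, "rb") as f:
        # Stream in chunks so large files need not fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

known_good = file_digest("document.txt")
# Later, after transfer or storage:
unchanged = hmac.compare_digest(known_good, file_digest("document.txt"))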
{ "source": [ "https://security.stackexchange.com/questions/186657", "https://security.stackexchange.com", "https://security.stackexchange.com/users/131521/" ] }
186,745
Could one create a vulnerable website on purpose to attack a server of a hosting provider? In the question above, which I recently asked, we came to the conclusion that preventing one vulnerable website from opening the doors to all other websites on the same server must be done by some way of controlling the privileges of the users. Therefore I was wondering how this is realised in shared web hosting. Is this OS-based, such as UAC in Windows, or how does this work conceptually? I understand that this may differ from server to server, but I would like to understand the defensive options available to a hosting provider in order to understand the magnitude of this threat.
This question is a bit broad, but I think an answer that is a little bit broad will still be helpful. The answer depends on the "kind" of hosting you are talking about. There are three main kinds which I will break down below, but just FYI the names I use are not necessarily industry-standard names. The concepts however are pretty common across the board: Shared hosting (this is the kind of hosting Jeroen talks about in his answer) VPS hosting (aka virtual private servers) Dedicated hosting Shared Hosting With a shared hosting setup, many websites are running under the same instance of the same OS, and different "websites" are typically separated out by having different user accounts on the same system. cPanel and Plesk are two common software systems that manage such setups (although Plesk can also be used to manage VPS hosting). As a result, you have to concern yourself with securing two primary aspects of your system: making sure that user accounts are properly isolated (see Jeroen's excellent answer), and making sure that your container manager (Plesk, cPanel, etc...) doesn't have any weaknesses. Those tools do occasionally have vulnerabilities of their own, and if someone manages to break into your container management software, they typically end up with root access to the physical server, or in a worst-case scenario, root access to all the servers managed by that system. Also, if you give the users command line access, you have to worry about potential privilege escalation vulnerabilities in the OS itself. VPS Hosting With VPS (virtual private server) hosting, the physical server itself is running some sort of virtualization software that then runs any number of "virtual machines", each running a full and separate operating system. Some examples of tools that manage these kinds of setups are the KVM hypervisor or the XEN hypervisor. The advantage of this kind of setup is that it gives the end user (aka the customer of the hosting company) full control over their own system. They can theoretically install any operating system they want, have full root/admin access, and install/manage whatever software they want. The hypervisor keeps each virtual OS separated from the rest and completely sandboxed from one another. Theoretically, it makes it impossible for one compromised host to impact the others at all. In practice of course, things are not always perfect. Although (I believe) it is more rare and harder to exploit, the hypervisors themselves occasionally have their own vulnerabilities that can allow a malicious attacker to take control over the whole system (Spectre and Meltdown were relevant for hypervisors - h/t phyrfox). Like anything else, keeping your systems up-to-date is key. Dedicated hosting Some hosting providers offer dedicated hosting, where they basically just manage the hardware for you and provide internet access. They install an OS of your choice on a server and basically just give you root/admin access. Obviously this isn't immediately relevant to your question because this is no longer shared hosting - there is no one else to impact. However, they are still inside your network, so proper network access controls are always a must (and this is true for all other hosting instances as well). Network Management Edit to add: that last point about network security is worth its own mention. Regardless of whether you use shared hosting, VPS hosting, or dedicated hosting, a hosting company is inherently giving internal network access to an outside party.
Without clear access controls, this means that anyone using your hosting services can potentially scan other systems on the network for OS vulnerabilities of their own. For instance, it doesn't matter if your virtualization layer can perfectly isolate the virtual servers running on it if one of the VPS instances can find other instances over the network and gain access via any network-level vulnerabilities in the OS (e.g. heartbleed/eternalblue). Many of the larger hosting companies will allow you (the person using the hosting) to setup a VPC - virtual private cloud - inside their network to isolate your systems from both the internet and other systems on their own network. Presuming that their network rules actually work as promised, this gives the end-user additional ways to protect themselves. Presumably the larger service providers also have active network security that can detect malicious network traffic from inside their own network and shut down accounts appropriately. In short, the kinds of vulnerabilities you have to worry about depend very much on your hosting configuration. Of course these days there are a whole new slew of hosting options from the large hosting providers (aka serverless infrastructure) which have their own completely separate list of concerns, but I think the above outlines the major concerns for the kinds of hosting you have in mind.
{ "source": [ "https://security.stackexchange.com/questions/186745", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179200/" ] }
186,780
I am using a password manager, and have optimized all my passwords so they are complex, long, and unique. I am wondering if there are any guidelines as to whether I should still change my passwords from time to time and, if so, how often.
In the new NIST guidelines (US National Institute of Standards and Technology), there are now some rather surprising reversals of guidance on several areas of password management. According to this article from Sophos' Naked Security blog , automatic or periodic password aging is no longer recommended by the new guidelines; rather, the article says: The only time passwords should be reset is when they are forgotten, if they have been phished, or if you think (or know) that your password database has been stolen and could therefore be subjected to an offline brute-force attack. The only other reason to change passwords on a schedule is to comply with outdated policies which are resistant to change, such as the PCI DSS requirements regarding passwords: 8.2.4 Change user passwords/passphrases at least once every 90 days. This is not a security issue but a compliance issue. When faced with a regulation which essentially has the force of law, compliance unfortunately must trump security.
{ "source": [ "https://security.stackexchange.com/questions/186780", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179235/" ] }
186,781
Normally I preach that rolling your own custom crypto algorithm is a bad idea. But will it really hurt if it's the outermost layer? Or will it make security worse? AES -> CipherText -> CustomEncryptionAlgorithm -> CipherText I'm thinking that the extra layer will help. Even if CustomEncryptionAlgorithm is a bug-ridden mess, it can't possibly make things worse, because the AES output is already indistinguishable from random noise. On the other hand, something tells me that the following is problematic: CustomEncryptionAlgorithm -> CipherText -> AES -> CipherText Is it bad, and why? Please don't comment on company resources spent on security vs obscurity etc. (I agree security comes out ahead); I am more interested in understanding the cryptographic theory behind vulnerabilities in this approach.
Don't roll your own crypto! From a purely cryptographic point of view, any length-preserving bijective function cannot reduce security. In fact, even the identity function, defined as f(x) = x, will not reduce security, assuming the keys used for the standard cipher and your homebrew cipher are mutually independent. The only possible way it could reduce security is if your homebrew function does not use a different, independent key and leaks the key in the ciphertext, for example with f_k(x) = x ⊕ k done on each individual block of input x, a classic XOR cipher vulnerable to known-plaintext attacks. From a practical standpoint, there are gotchas that can matter. I mentioned length-preserving above for good reason. A compression function is still a function, and sometimes compression and encryption can lead to very bad results. This is partially why your second example, with your custom algorithm applied before standard encryption, is indeed worse. It can leak information about the plaintext through length. There can also be bugs in your implementation that result in other security vulnerabilities. From a purely theoretical point of view, they are out-of-scope. It was pointed out to me in a comment that I may not be sufficiently emphasizing just how bad of an idea this is. While it may be fine in theory, the real world works differently. Actually using your own homebrew crypto is a very bad idea, no matter how you use it. The only time you should ever actually do this is if you are a professional cryptographer. Bernstein can do this. Rivest can do this. Rijmen can do this. You cannot. Don't shoot yourself in the foot and instead use proper algorithms.
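To make that XOR-cipher point concrete, here is a tiny, purely illustrative Python sketch (the key and plaintext are invented): with f_k(x) = x ⊕ k applied per block, a single known plaintext/ciphertext pair hands the attacker the key.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"weak-homebrew-key"
known_plaintext = b"A" * len(key)              # a block the attacker knows
ciphertext = xor_bytes(known_plaintext, key)   # what the homebrew layer emits

# Known-plaintext attack: XOR the two observable values to recover the key.
recovered_key = xor_bytes(known_plaintext, ciphertext)
assert recovered_key == key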
{ "source": [ "https://security.stackexchange.com/questions/186781", "https://security.stackexchange.com", "https://security.stackexchange.com/users/156809/" ] }
186,925
I was thinking about this earlier this morning and was wondering why websites and devices don't offer fake logins for hackers. What I mean by that is: if a hacker finds out some of your details and tries to log in to a website (for example), the website will show that you have successfully logged in, but will show dummy data that is completely fake. That way the hacker won't know whether they have got the login details correct or not. It would also protect people in a physical security situation. For instance, imagine a criminal has stolen someone's phone and realises he can't access it. He then points a gun at the owner, who types in some of their details correctly but some of them incorrectly. The device unlocks in fake mode, and the criminal then thinks they have access and decides not to shoot the person because they have complied with their wishes. But the criminal never knows that what they see is just a fake login. Has anyone implemented something like this? It seems like quite a good idea to me.
That way the hacker won't know if they have got the login details correct or not. If the information presented after login has no relationship to the person the login should belong to, then most hackers will quickly recognize that the login is probably not the real one. But showing information which looks like it fits the user could take considerable effort. It also needs to be created specifically for each user and show some true information about the user, so it does not look fake - but not too much, so that no important information is leaked. You cannot expect your provider to do this for you, but in many cases you could try to do it yourself, e.g. by adding another email account, another Facebook account, etc.
{ "source": [ "https://security.stackexchange.com/questions/186925", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4252/" ] }
186,929
I frequently leave accounts logged in on my personal computer because of the immense physical and cryptological barriers a hacker would have to overcome to access my computer. Could a hacker, that knows my IP address and what websites I left logged in, take advantage of this knowledge in any way? I hear that IP addresses are very dangerous when a malicious user knows them. Would this even be my first concern if someone knew my public IP address?
No, they would have to have access to your browser cookies in order to abuse them to log into a site you left logged in. Merely knowing your public IP address would not allow them to log into any website. If you are asking this question though, I would not be so sure that there are "immense barriers" between them and your personal computer. A good hacker can do a lot more than you may think. In theory, a vulnerability in your router could be exploited, which typically requires knowing your IP address, but there are dozens of ways to get your IP address anyway. Not to mention, the IPv4 space is small enough that a decent server can scan every single possible IP address in under a day (only 2^32, or 4,294,967,296 in total, including a large number of reserved or invalid ones). It is more likely that an attacker would exploit a vulnerability in, say, your browser than your router through your IP address. That is not to say that vulnerabilities in routers are uncommon, but the risk of an infection or compromise through some vulnerable or out-of-date program is far greater. IP addresses are not very dangerous when a malicious user knows them. This is somewhat of a myth caused by script kiddies (especially of the video gaming variety) who ominously proclaim that they have your IP address and you better watch out, often with the implication that knowledge of an IP address amounts to full access to a network. The worst common scenario is that a malicious user mounts a denial of service attack on your router, causing your network connection to slow down or break. This can be irritating, but is not particularly dangerous. There are two real situations where your IP address is sensitive information: If you are dealing with a bitter player of an online video game who you just beat (because that headshot totally didn't hit him) or a spiteful troll on IRC, they may mount a DoS attack against your network in vengeance. In this case, you may want to call your ISP. They may be able to change your IP address or protect you from the attack in order to restore your connection to the network. Even if that does not work, these types of attacks quickly subside. You should probably just avoid associating with the type of person who falls into this category. If your adversary is a law enforcement agency or any other legally-privileged entity whose goal it is to tie your IP address to your real-life identity, you should be using an anonymity network such as Tor (for web browsing) or a VPN (for P2P). This is the case when your adversary is able to subpoena your ISP to obtain your subscription details. In the past, it was easy to social engineer ISPs to get this information (folks on IRC used to do this to get someone's real address), but nowadays it tends to take a legally-binding court order, in which case your ISP will barf up all the personal information it has on you without giving it a second thought. If neither of those cases applies to you, you have nothing to worry about.
{ "source": [ "https://security.stackexchange.com/questions/186929", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179376/" ] }
187,160
I have to give a school presentation about vulnerabilities found in the Moodle platform. Of course, they only apply to a legacy version which has since been patched. The catch is that the presentation should be aimed at an audience with no technical knowledge. So I'm not allowed to explain anything using the code, but I should give an explanation which helps the audience understand the problem even if they have zero technical understanding. Moodle's problems arose from different contributors implementing the same functionality in different ways, which created some security loopholes. No single one of the vulnerabilities would have sufficed to do considerable damage to the system, yet by combining exploits of false assumptions, an object injection, a double SQL injection and a permissive administrator dashboard, RCE became possible. I thought of explaining it with a house which was originally built safely, but then became insecure because of some additional stuff built onto it, but I'm looking for something a little more specific, or maybe some real-world historical scenario which would fit. I thought maybe you might have a better idea.
Here's an idea for an analogy that I think is fairly accurate while generally understandable: A bank requires two forms of ID to get a loan: a driver's license and a birth certificate. Bank employees Alice and Bob are lazy in different ways: Alice always stamps "driver's license verified" without checking, while Bob always stamps "birth certificate verified" without checking. Individually this is bad but not too bad -- anyone applying with forged documents would get caught by the one check the employee still does perform. But one day Alice is running late, stamps "driver's license verified" on a form, and leaves it for Bob to finish up. Bob sees the form, assumes Alice actually verified the license, and stamps "birth certificate verified" without checking like he always does. The loan is approved, without either form of ID having been checked.
{ "source": [ "https://security.stackexchange.com/questions/187160", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179638/" ] }
187,316
If I create a servlet that would return the server time publicly (no need for authentication), would this be a security issue? I couldn't think of any issue with this, but somehow something tells me I could be wrong. To explain more, this end-point will be used by mobile apps to enhance their security (to avoid cheating by adjusting the device date).
The server time is of little use to an attacker, generally, as long as it is accurate. In fact, what is being revealed is not the current time (after all, barring relativistic physics, time is consistent everywhere) so much as the time skew. Note that, even if you do not explicitly reveal the time, there are often numerous ways to get the local server time anyway. For example, TLS up to and including version 1.2 embeds the current time in the handshake, web pages may show the last modified dates of dynamic pages, HTTP responses themselves may include the current time and date in response headers, etc. Knowing the exact clock skew of a server is only a problem in very specific threat models: Clock skew can be used in attacks to deanonymize Tor hidden services. Poorly-written applications may use server time to generate secret values. Environmental conditions of the RTC may be revealed in contrived situations .
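As a small aside illustrating the point about response headers (an illustrative sketch only - it uses the third-party requests library and a placeholder URL, neither of which appears in the original answer), a client can already estimate a server's clock skew from the standard HTTP Date header without any dedicated time endpoint:

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import requests

resp = requests.get("https://example.com/")                # placeholder URL
server_time = parsedate_to_datetime(resp.headers["Date"])  # e.g. "Wed, 30 May 2018 12:00:00 GMT"
skew = datetime.now(timezone.utc) - server_time
print(f"approximate clock skew: {skew.total_seconds():.1f} seconds")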
{ "source": [ "https://security.stackexchange.com/questions/187316", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179825/" ] }
187,332
If someone were to turn on the internet connection using the command su -c "/sbin/ifup ppp1", would it put the system at risk, and how badly?
{ "source": [ "https://security.stackexchange.com/questions/187332", "https://security.stackexchange.com", "https://security.stackexchange.com/users/88482/" ] }
187,515
I’m asking the question with these conditions: The device (computer or mobile phone) is in a running state. “Momentary” refers to a reasonably short period of time, such as 5 to 10 seconds. The system may not be in a “locked” state (e.g. showing a lock screen asking for a password). However, the active session doesn’t have superuser privilege (the usual case for a mobile phone). What can a hacker do to gain further access to the system?
That all depends on the system, the attacker, and the level of preparation they had. If they have unlimited preparation, they could do effectively anything that they could do with an unlimited access window. Even if they do not have in-depth knowledge of the specific system, it would not be difficult to very quickly inject malicious code that allows for subsequent remote access. They could: Connect a PCMCIA or PCIe card and dump memory or inject code. Splice a hardware keylogger in between the keyboard's PS/2 or USB cable. Quickly download and execute malicious code, or modify existing code. Access sensitive files and save them (e.g. with a camera or USB flash drive). Physically destroy the computer (e.g. with a hammer or a power surge over USB). Simply grab the system and pawn it off for a quick buck. Time for a story. I once had a target who I would be in close proximity to for a brief period. My goal was to gain persistence on their laptop to exfiltrate sensitive documents. I knew I had only a few seconds every time they went out of sight, so I couldn't just grab their laptop and take my time. I obviously also could not steal it. Luckily, I came prepared. I had a programmable USB device that I plugged in. As soon as it was plugged in, it simulated keyboard input to open PowerShell and execute a few commands to download a payload I had set up earlier. The scenario went like this: I waited until this person had left to get something for me in another room. I leaned over on the table where the laptop was and surreptitiously plugged in the device. I waited a few seconds to be safe, unplugged it, and tried to keep a straight face. After they gave me what I asked for, I thanked them and left. When I got home, I got on my computer and connected to their machine. It was not difficult, did not take an extensive period of preparation, and was moderately stealthy. I could have made it even more stealthy if I used something that looked like a cell phone so I could claim I was just charging the device and didn't see any other USB ports around (which would be kind of weird and suspicious, but not as much as if it looked like a flash drive). The moral is that you can do a lot with just a few seconds of access, so you must never underestimate the risk. So how do you protect against these threats? You need to develop a threat model. Figure out who your adversary is, what assets of yours they are after, and what their resources are. If you don't want your mother to see your porn when you're at her house, you probably don't need to worry about exploits abusing corrupt EDID in a VGA or HDMI cable. If you are holding extremely valuable company secrets in a highly competitive industry (robotics, epoxy, etc) and are going to a high-risk country like France or China, you absolutely need to worry about sophisticated attacks, because industrial espionage (aka the more illicit side of "corporate intelligence") is rampant. Stay with your computer at all times in adversarial situations. Lock it if you are going out of its line of sight, and bring it with you or physically secure it in a safe if you are going to be away for a longer period.
{ "source": [ "https://security.stackexchange.com/questions/187515", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114489/" ] }
187,519
I have a manager who likes to "deactivate" accounts by replacing the existing bcrypt hash in the database with a simple dash ( - ). This seems to work as the old password is no longer valid and there is no valid bcrypt hash that any other password could match. But I was curious, is this effective or does it create a security risk? Is this an effective way to make no password work or does it create a larger security risk? This specific implementation uses the PHP password_verify function but I would like answers to focus on any general implementation. Please do not focus on my manager's bad practice. We also have a boolean field for setting the user to inactive/active which I believe is better and safer but is not what I want answers to focus on.
In terms of disallowing legitimate login attempts, it's fine. Unless you're using a very weird hash function, there won't be any values which map to - , and it prevents brute force attacks against the missing values if the database is stolen too, which is a positive (they were unlikely, given the use of bcrypt, but this applies even if the implementation is using a terrible method for storing passwords - pretty much anything other than plain text). In terms of downsides, if the database is taken, it slightly decreases the security of other accounts - the attacker has fewer records to brute force. If they are paying attention, they should probably remove the records which are marked as inactive, but still. I did say "slightly"... The other risks could be if there are any methods for access which allow bypassing the hash method for comparison (e.g. you have a legacy method which allows supplying the full hash for some reason) - in that case, if you aren't checking the active status carefully, it might allow access by supplying a dash. Ideally, remove this access method, if this is the case.
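To make the verification behaviour concrete, here is a minimal Python sketch using the bcrypt library (the question is about PHP's password_verify, but the logic is analogous); the account data is made up for illustration:

import bcrypt

def verify_password(candidate, stored_hash):
    # A well-formed bcrypt hash verifies normally; a placeholder such as "-"
    # is not a valid bcrypt string, so no candidate password can ever match it.
    try:
        return bcrypt.checkpw(candidate.encode(), stored_hash.encode())
    except ValueError:
        # Raised for malformed hashes; PHP's password_verify simply returns false.
        return False

real_hash = bcrypt.hashpw(b"hunter2", bcrypt.gensalt()).decode()
print(verify_password("hunter2", real_hash))  # True
print(verify_password("hunter2", "-"))        # False: the "deactivated" marker never verifies

Even so, an explicit is_active flag checked on every authentication path remains the clearer and safer control, as the question itself notes.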
{ "source": [ "https://security.stackexchange.com/questions/187519", "https://security.stackexchange.com", "https://security.stackexchange.com/users/143529/" ] }
187,523
I originally asked this on stackoverflow , but due to lack of traction and a recommendation by a user there I have asked it here too. Imagine a scenario where a client application is sending a password to a backend server so that the server can validate that the user entered the correct password when being compared to a stored variation of the password. The transport mechanism is HTTPS with the server providing HSTS & HPKP to the user agent and strong cryptographic ciphers being preferred by the server scoring A+ on SSL labs test. None the less, we may wish to avoid sending the original user provided password to the server from the user agent. Instead perhaps we'd send a hash after a number of rounds of SHA-256 on the client. On the server-side, for the storage of passwords we are using bcrypt with a large number of rounds. From a cryptographic point of view, is there any disadvantage to performing bcrypt on the already sha-256 hashed value as opposed to directly on the plain text password? Does the fixed length nature of the input text when using hashes somehow undermine the strengths of the algorithm. I'm not asking about performance such as the memory, CPU, storage requirements or wall clock time required to calculate, store, sent or compare values. I'm purely interested in whether applying a hash prior to applying bcrypt could weaken the strength of bcrypt in the case of a disclosure of the full list of stored values. I've read posts this (which I find interesting and useful) but I'm not specifically asking whether it's a good idea to hash on the client side - I'm more interested in whether doing so could somehow weaken the password storage system with bcrypt given that an attacker armed with this knowledge would know that all values stored are a derivative of a fixed set length of inputs consisting of a much smaller range of possible characters (SHA-256)
{ "source": [ "https://security.stackexchange.com/questions/187523", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179231/" ] }
187,556
I noticed that in Google Chrome, if I type in file:///C:/Users/MyUsername/Desktop/ it shows me all of the folders on my Desktop, and I can open up PDFs and such in Chrome just by typing in the file path. What processes and systems are in place so that Google is not able to copy data stored on my computer? What processes and systems are in place so that someone who writes a Chrome extension is not able to copy files stored on my computer?
What processes and systems are in place so that Google is not able to copy the data on my computer? None. Google Chrome usually runs with the permissions of your user account. The application can then read and modify local files to the same extent your user account can. (These permissions apply to most of the programs you're using.) So you need to trust Google in that they don't ship a malicious update that spies on you, or keep sensitive files inaccessible to the account you're running the browser with. Alternatively, there are most likely sandbox implementations for your OS that let you run Chrome in an isolated environment with restricted access to the filesystem. What processes and systems are in place so that someone who writes a Chrome extension is not able to copy files on my computer? Chrome extensions have limited privileges by default. An extension needs to explicitly request (declare) a permission to interact with documents on the file:// scheme. Also note that your browser disallows ordinary websites to read or even redirect to file:// URIs. So while your local files are accessible to the Chrome process, they are not exposed to the web.
{ "source": [ "https://security.stackexchange.com/questions/187556", "https://security.stackexchange.com", "https://security.stackexchange.com/users/176607/" ] }
187,820
Is TPM really worth it? According to Wikipedia it: Provides a generator of random numbers (that's okay) Facilities for the secure generation of cryptographic keys for limited uses (that's okay too I guess) Remote attestation (doesn't sound safe) In the section on the bottom, it mentions some criticisms of TPM such as remote validation of software - manufacturer, not the user decides what can be run on the computer. This sounds scary. Also, VeraCrypt doesn't support TPM which raises some concerns. If they don't trust it, why should I? So is TPM worth it or is it just an unnecessary potential point of failure? Would my security and privacy be safer if I didn't use a computer with TPM at all? Full disk encryption with VeraCrypt sounds safe enough even for the most illegal use cases (NSA-proofed). And then, would it be possible to remove the TPM module from a motherboard safely?
It depends on your threat model. A TPM has multiple purposes, but the most common purpose is measured boot . That is, a TPM will verify the integrity of the BIOS, option ROMs , bootloader, and other sensitive boot components so that it is able to detect an evil maid attack or modified firmware. If your threat model includes an adversary which is able to modify firmware or software on your computer, a TPM can provide tamper-evidence to ensure that it will not go undetected. So how does a TPM work ? It's actually pretty simple when you get down to it. The TPM measures the hashes of various firmware components * and stores the hashes in registers called PCRs. If the hashes all match a known value, the TPM will unseal , allowing itself to be used to decrypt arbitrary data. What data it decrypts is up to you. Most commonly, it is part of the disk encryption key. Unless every piece of firmware and boot software has the correct hash, the TPM will not unseal and the encryption key will not be revealed. TPMs can be used for a lot more, but the idea is the same. * Technically, the TPM is passive and cannot actively read firmware, bootloaders, or other data. Instead, a read-only component of the BIOS called the CRTM sends a hash of the BIOS to the TPM, starting the chain of trust . This component is read-only to ensure that a modified BIOS cannot lie to the TPM about its hash. So is TPM worth it or is it just an unnecessary potential point of failure? Would my security and privacy be safer if I didn't use a computer with TPM at all? Full disk encryption with VeraCrypt sounds safe enough even for the most illegal use cases (NSA-proofed). Remote attestation is not something you will likely need to use. It is however not at all unsafe. All it does is allow a remote device to prove to the appraiser that the firmware and software it is running matches a known-good hash. It does not allow remotely controlling the machine. It is up to the OS to do the remote connections and send the data to the TPM. The TPM itself isn't even aware that it is being used for remote attestation. In fact, remote doesn't even have to mean over a network. There are very clever implementations that use a TPM to remotely attest the computer's state to a secure USB device! There are no privacy issues with a TPM's unique private key either due to a TPM's ability to sign things anonymously using DAA, or Direct Anonymous Attestation . Let's go even further and assume the TPM is not only useless, but downright malicious. What could it do then? Well, nothing really. It lacks the ability to send the so-called LDRQ# signal over the LPC bus which is necessary to perform a DMA attack . The only thing it could do is say "everything is OK" when in reality the firmware has been tampered with. In other words, the worst a malicious TPM could do is pretend it doesn't exist, making a malicious TPM no worse than no TPM. It is completely possible to safely remove the TPM from the motherboard. There is nothing that requires it be there. If it is not present, you will simply not be able to verify a chain of trust to be sure that firmware has not been tampered with. Note however that many modern CPUs have an integrated TPM, but it can be easily disabled, with the same results as removing the physical one. Note that some newer versions of Windows do require a TPM's presence in order to secure the boot process. If the TPM is removed, you may need to modify the OS and UEFI settings so it no longer requires it. 
In the section on the bottom, it mentions some criticisms of TPM such as remote validation of software - manufacturer, not the user decides what can be run on the computer. This sounds scary. The worry is that, in the future, manufacturers might use the TPM to prevent you from making sensitive modifications to your system. By default, TPMs will obey only its owner. If you tell a TPM that the current state of the system is known-good, it will always check to make sure the system is in that state. If an evil manufacturer sets the TPM to believe that a known-good state is one where malicious DRM and other rights-restricting software is enabled, then we have a problem. For current TPMs, it's entirely up to you to decide what software you want to run! They don't restrict your rights. Another criticism is that it may be used to prove to remote websites that you are running the software they want you to run, or that you are using a device which is not fully under your control. The TPM can prove to the remote server that your system's firmware has not been tampered with, and if your system's firmware is designed to restrict your rights, then the TPM is proving that your rights are sufficiently curtailed and that you are allowed to watch that latest DRM-ridden video you wanted to see. Thankfully, TPMs are not currently being used to do this, but the technology is there. The upshot is that a TPM can prove both to you locally, and to a remote server (with the OS handling the networking, of course) that your computer is in the correct state. What counts as "correct" hinges on whoever owns the TPM . If you own the TPM, then "correct" means without bootkits or other tampering. If some company owns the TPM, it means that the system's anti-piracy and DRM features are fully functional. For the TPMs in PCs you can buy today, you are the owner. Also, VeraCrypt doesn't support TPM which raises some concerns. If they don't trust it, why should I? VeraCrypt actually has added support for TPM version 1.2 and experimental support for TPM version 2.0 in VeraCrypt release 1.20 , although they have not yet edited their documentation to reflect this. They originally were resistant because the original TrueCrypt authors did not understand the TPM. Its purpose is not to assist with disk encryption, but to verify that the firmware and important boot software (including the VeraCrypt bootloader!) have not been tampered with.
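To make the measured-boot idea concrete, here is a simplified Python sketch of the PCR "extend" operation described above; real firmware sends the measurements to the TPM over the bus, and the component names here are only placeholders:

import hashlib

def pcr_extend(pcr_value, measurement):
    # The new PCR value is H(old_value || measurement), so it depends on
    # every measurement made so far, in order.
    return hashlib.sha256(pcr_value + measurement).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at reset
for component in (b"BIOS image", b"option ROM", b"bootloader"):
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

print(pcr.hex())  # only the exact same chain of measurements reproduces this value

The TPM only unseals the protected secret (e.g. part of a disk encryption key) if the PCR values match the expected ones, which is why a tampered bootloader cannot go undetected.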
{ "source": [ "https://security.stackexchange.com/questions/187820", "https://security.stackexchange.com", "https://security.stackexchange.com/users/176430/" ] }
187,912
I noticed a comment on this answer where another user said ...but it requires risking burning a 0day, which people are not always all that willing to do. I did an Internet Search for the phrase "burning a 0day" (and similar permutations like 0 day, zero day, etc) and not much came back. It's obvious that "burn" means "use up" in this case. I understand most of what the user meant, but probably not all of why it was important (aka context). I'm looking for a canonical answer, with some reasoning about why "burning a zero-day" is an expensive thing. Mr Robot s01e06 touches on this when Elliot and Darlene start to argue about what went wrong in their attempted hack. I found some other people on this SE using the same terminology: Answer: https://security.stackexchange.com/a/184541/71932 Answer: https://security.stackexchange.com/a/184217/71932 Answer: https://security.stackexchange.com/a/162416/71932 Answer: https://security.stackexchange.com/a/175535/71932 Answer: https://security.stackexchange.com/a/182288/71932 Comment on this answer: Do drive-by attacks exist in modern browsers?
I was the one who wrote the comment you quoted. Quick answer: A 0day is burned when the exploit is used too often or haphazardly, resulting in it being discovered and patched. Virtually every time a 0day is used, it risks being burned. Using a 0day more sparingly and cautiously can increase its shelf life. The idiom intends to compare a 0day to a non-renewable resource like combustible fuel that loses its value when used up. This likely originates from the idiom burn your bridges : To destroy one's path, connections, reputation, opportunities, etc., particularly intentionally. What is a 0day? A 0day is an exploitable vulnerability that is not publicly known. When a 0day is discovered, it can be turned into a working, "weaponized" exploit. Like all vulnerabilities, if it is discovered in public, it will usually be patched and fixed, making it so the exploit no longer works. Every time you use an exploit, you necessarily transmit valuable information to a system that you do not control (yet). If the system is being extensively monitored, the exploit technique may be discovered and with it, the necessarily knowledge to fix the vulnerability and roll out patches to all affected systems. What does it mean to "burn" one? I was the one who wrote the comment you are referencing. To "burn" a 0day is slang for using it either too often or using it in a high-risk situation where it is likely that it will be discovered because of its use. Like combustible fuel, once used up or "burned", a 0day will no longer hold the same value (both in monetary terms * and tactical terms). It stops being a 0day once it is no longer in private hands. Friends may let you "borrow" a 0day to use yourself, under the condition that you do not burn it. This means they are telling you that you can use it, but they are trusting you to be very careful not to use it in a way that makes it likely that it will be found and fixed, depriving access to it. Someone might decide to disclose the 0day suddenly in public. Especially when it's not done using coordinated disclosure, it's often called dropping a 0day , which will also burn the 0day. This is a bit uncommon but not unheard of. A few years ago on IRC, a guy joined and informed us of a remote code execution vulnerability in TeamViewer that involved sending malicious WinHelp files (which contain Visual Basic code) or something along those lines. Since the first place he disclosed that was in the middle of a general security-related IRC, he was burning a 0day by dropping it. * 0days usually have literal monetary value. A 0day can range from a thousand dollars to upwards of a million, depending on a variety of factors such as exclusivity, applicability, reliability, specificity, conditionality, etc. How much are 0days worth? Exploit brokers often buy or sell bugs with promises of exclusivity. For example, you can sell a bug under the condition that it is sold for the highest price to only one person, not to multiple people. That reduces the chance that it will be discovered, but it means you only get paid once. Alternatively, you could sell an exploit to as many people who want to buy it. You would have to sell it for a lower price because it will be burned very quickly, making its shelf life rather short. Obviously, when a 0day is burned, it is no longer nearly as useful since it will only work on outdated systems. The actual value depends on quite a few factors. They are worth more if they: Work on a variety of systems. Do not depend on a specific configuration. 
Are reliable and work every time. Are silent and do not leave traces in the logs. Are sold to only one or a limited number of buyers. Many contractors that deal in exploits will pay you the complete price in small sums over a period of time. If the 0day is discovered before you are paid in full, they will stop the payments. This behaves as disincentive to selling it to multiple contractors or using it frequently yourself. It essentially forces you to guarantee to them that it will remain a 0day, or you simply will not be paid in full. Additionally, 0days are bought for more by government contractors than by random exploit brokers. You might be able to sell a complete Chrome exploit chain complete with sandbox bypass and local privilege escalation (LPE) for hundreds of thousands of dollars to Raytheon SI. The same exploit would net only a fraction of that price if you sell it to J. Random Broker on IRC. The reason is simply that corrupt governments want to be non-competitive and have ample money from tax payers to obtain exclusive vulnerabilities so they can drone strike journalists export democracy. How do 0days get burned? There are many activities that can risk burning a 0day. A few examples: Using an exploit that is unreliable and may create a coredump. Using an exploit that is conditional and only works for some configurations. Using an exploit that results in an event being logged. Using an exploit against a sophisticated and paranoid target. Simply using an exploit too often, increasing its exposure. Giving it to or trading it with a friend who is not responsible with it. Revealing too much about the exploit, allowing others to find the vulnerability themselves. I do not condone selling 0days to governments or government contractors.
{ "source": [ "https://security.stackexchange.com/questions/187912", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71932/" ] }
188,005
Say, I have on my server a page or folder which I want to be secret. example.com/fdsafdsafdsfdsfdsafdrewrew.html or example.com/fdsafdsafdsfdsfdsafdrewrewaa34532543432/admin/index.html If the secret part of the path is quite long, can I assume that it's safe to have such a secret page or area, and it'll be hard to guess or brute force it? What are the issues with this approach in general? Note that I'm not asking how to do this right, but what could the issues with this approach be, if any.
You are essentially asking if it is safe to pass secret parameters in a GET request. This is actually classified as a vulnerability . It is not feasible to brute force a sufficiently long pseudorandom string, assuming the server simply returns a static 404 response whenever an invalid path is specified, but there are numerous other security issues in practice that make this a dangerous technique: Logging software often logs GET requests in plaintext, but not POST. Using GET makes CSRF trivial * when processing active content. Referer headers may leak secret values to other websites. Browser history will retain secrets passed in GET requests. Your second example has /admin/ in it. This implies to me that knowledge of this path alone would be sufficient to authenticate to the server in an administrator context. This is very insecure and should not be done anymore than /?password=hunter2 , a major web security faux pas . Instead, secrets should be sent in a POST request. If this is not possible to do or if your threat model is exclusively to prevent brute force of the URL (for example, password reset links which are promptly invalidated after they are used), then it should be safe if done carefully. I am not aware of any side-channel attacks that would provide a method to obtain the string faster than brute force. * Avoiding parameterized GET requests does not prevent CSRF attacks, which is a common misconception as there are various ways to similarly abuse POST (fake forms, etc), but passing secrets in GET does make CSRF easier.
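As a rough illustration of moving the secret out of the URL, here is a minimal Flask sketch (the route name and token store are hypothetical); the token travels in the POST body, is compared in constant time, and is invalidated after one use:

import hmac
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical single-use reset tokens; in practice a database table with an expiry.
VALID_TOKENS = {"d41d8cd98f00b204e9800998ecf8427e"}

@app.route("/reset", methods=["POST"])
def reset():
    supplied = request.form.get("token", "")
    # POST keeps the token out of access logs, Referer headers and browser history;
    # constant-time comparison avoids a timing side channel.
    if not any(hmac.compare_digest(supplied, t) for t in VALID_TOKENS):
        abort(404)
    VALID_TOKENS.discard(supplied)  # invalidate after first use
    return "Token accepted - show the password reset form"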
{ "source": [ "https://security.stackexchange.com/questions/188005", "https://security.stackexchange.com", "https://security.stackexchange.com/users/180492/" ] }
188,088
I'm reading some PPT and it says ENV("_") can be used for anti-debugging in Linux Does anyone know what it means?
In this context, the _ environment variable will typically contain the path to the debugger that started the program rather than the program itself. The program trying to detect the debugger can then read that variable and behave differently if it sees the debugger (perhaps by looking for known debugger names like gdb or by comparing it to argv[0]). Here's an example that shows this variable in action and how it differs from argv[0]:

C code:

#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    char *path = getenv("_");
    printf("%s\n", argv[0]);
    printf("%s\n", path);
    return 0;
}

Shell output:

$ gcc -o main main.c
$ ./main
./main
./main
$ gdb main
...
(gdb) r
Starting program: /home/user/tmp/main
/home/user/tmp/main
/usr/bin/gdb
[Inferior 1 (process 21694) exited normally]
(gdb)

NOTE: This is not unique to Linux, you can do it on macOS and probably other POSIX systems too. ALSO NOTE: This is a really cheap trick that is really easy to bypass and has a high chance of not working as intended (both false positives and false negatives).
{ "source": [ "https://security.stackexchange.com/questions/188088", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10331/" ] }
188,112
Is it safe to use the following stateless authorization mechanism between a client (iOS & Android) and server? Sign up: (1) The client provides an email and password and saves the clear password in the iOS Keychain (or some equivalent on Android). (2) The server checks the password strength; if it's deemed strong enough, the user is created in the DB. (3) The server generates a JWT token and returns it to the client. The token has an expiration time of 15 minutes. (4) The client stores the token (maybe in the Keychain itself) and includes it with each following request in the Authorization header. (5) For each request, the server checks the token provided (signature and expiration time). If it's OK the request is processed; otherwise an HTTP 401 is returned. Sign in: When the client receives an HTTP 401 from the server it means a login is required. So the app accesses the Keychain, gets the email & password and sends them to the server (no user intervention needed). The server validates the credentials provided and, if they're valid, repeats the Sign Up steps from 3 to 5. Thanks to the expiration time on the token, if a token is compromised it will only be valid for a short time period. If a user is logged in on multiple devices and she changes her password from one device, the other devices will stay logged in only for a short time period, but the clear password stored in the Keychain will no longer be valid. So a new manual login will be required, which I think is fine. Which drawbacks do you see? I've been thinking about using a refresh-token procedure to avoid storing the clear password, but this adds complexity and other drawbacks (for example: how to guarantee the refresh token is only used once). And as far as I've seen, storing the clear password in the Keychain is secure enough: Keychain Services Documentation, What is the best way to maintain the login credentials in iOS? But I have also seen other questions that do not recommend storing passwords on the device. So I would like to hear opinions from others on this.
{ "source": [ "https://security.stackexchange.com/questions/188112", "https://security.stackexchange.com", "https://security.stackexchange.com/users/177567/" ] }
188,157
Our website is 100% API based (with an SPA client). Recently, a hacker managed to get our admin's password (hashed with SHA-256) through SQL injection (and cracking the password) like this: https://example.com/api/products?userId=3645&type=sqlinject here . It is just an example, but it is deadly to us. Fortunately, it was a nice hacker and he just emailed us about the hack. Lucky for a website under constant attacks throughout its 5 years of existence. Yes we DO know that we need to check user input everywhere and do proper formatting/escaping/prepared statements before sending data to MySQL. We know, and we are fixing it. But we do not have enough developers/testers to make it 100% safe. Now assume we have a 0.1% chance of being SQL injected somehow. How do we make it harder for a hacker to find it, and limit the damage as much as possible? What we have done so far as quick fixes/additional measures: At the output "gate" shared by all APIs, we no longer send the raw PDOException and message like before. But we still have to tell the client that an exception occurred: {type: exception, code: err643, message: 'contact support for err643'} I am aware that if a hacker sees an exception, he will keep trying... The user PHP uses to connect to MySQL can only CRUD to tables. Is that safe enough? Rename all tables (we use open source so a hacker can guess the table names). What else should we do? Update: since there are many comments, I would like to give some side info. First of all, this is a LEGACY app, developed by some student-like folks, not enterprise grade. The dev team is nowhere to be found; now only 1-2 hybrid dev-test guys handle hundreds of Data Access classes. The password is hashed with SHA-256, yes it sucks. And we are changing it to the recommended PHP way. SQL injection is 17 years old. Why is it still around? Are prepared statements 100% safe against SQL injection?
Don't spend lots of time on workarounds or half fixes. Every minute you spend trying to implement anything suggested here is a minute you could have spent implementing prepared statements. That is the only true solution. If you have an SQLi vulnerability, the attacker will probably win in the end no matter what you do to try to slow her down. So be aware that while you may make improvements, in the end you are playing a losing game unless you fix the root cause of the issue. As for what you have tried so far: Hiding error messages is a good start. I'd recommend you to apply even stricter permissions (see below). It is debatable whether changing table names help , but probably it won't hurt. Ok, so what about those permissions? Limit what the database user can do as much as possible, down to table or even column level. You may even create multiple database users to lock things down even more. For instance, only use a database user with permission to write when you are actually writing. Only connect with a user that has the right to read the password column when you actually need to read the password column. That way, if you have an injection vulnerability in some other query it can not be leveraged to leak passwords. Another option is to use a web application firewall (WAF), such as mod_security for Apache. You can configure it to block requests that look suspicious. However, no WAF is perfect. And it takes time and energy to configure it. You are probably better off using that time to implement prepared statements. Again, don't let any of this lure you into a false sense of security. There is no substitution to fixing the root cause of the problem.
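The site in the question uses PHP with PDO, but the prepared-statement fix looks essentially the same in any language; here is a self-contained Python/SQLite sketch of the difference (PDO's prepare/execute with ? placeholders is directly analogous):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (user_id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (3645, 'widget'), (1, 'someone else''s row')")

user_id = "3645 OR 1=1 --"  # attacker-controlled value from ?userId=

# Vulnerable: concatenation lets the input rewrite the query and dump every row.
leaked = conn.execute("SELECT * FROM products WHERE user_id = " + user_id).fetchall()

# Parameterized: the value is bound separately from the SQL text, so it can
# never change the query's structure.
safe = conn.execute("SELECT * FROM products WHERE user_id = ?", (user_id,)).fetchall()

print(leaked)  # both rows
print(safe)    # []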
{ "source": [ "https://security.stackexchange.com/questions/188157", "https://security.stackexchange.com", "https://security.stackexchange.com/users/153280/" ] }
188,208
Provided that I have a decent level of physical security in the office, I monitor the physical addresses of devices connected to the network and only give VPN access to trusted parties, do I need to encrypt access to intranet resources over HTTP? There is an employee complaining that he doesn't like sending his credentials in plain text over the network and that he cannot take responsibility for his network identity in such case. What are the real world chances that someone would steal his identity? I can't find any clear-cut recommendations for encryption within a corporate network.
Yes encrypt, it is easy. Plus according to a 2014 Software Engineering Institute study 1 in 4 hacks was from someone inside the company with an average damage 50% higher than an external threat actor. Link to source: https://insights.sei.cmu.edu/insider-threat/2017/01/2016-us-state-of-cybercrime-highlights.html Although this is the 2017 version.
{ "source": [ "https://security.stackexchange.com/questions/188208", "https://security.stackexchange.com", "https://security.stackexchange.com/users/180692/" ] }
188,446
As per the question. If I am connected to a website, streaming content from my native IP and I then enable my VPN, what does the website see?
Since you asked specifically what the website will see, rather than any intermediary watching your network connection, we should think in terms of requests : Your old ("native") IP will disconnect any long-running requests, and stop making any new requests. Your new (VPN) IP will connect and start making requests. On their own, those two events will be unconnected as far as the web server is concerned, however there may be various things which can be used to guess , with reasonable confidence, that they are related: A cookie may have been set for your browser to send with each request, and the old and new requests will send the same cookie. A client-side script (code running in your browser) may be feeding additional stateful data with the content requests, possibly stored in LocalStorage so that it will resume if you reload the page. The URLs for the content may have been dynamically generated for you, so any request to that URL is assumed to be the same user. The site may take a "fingerprint" of your browser - User Agent, detection of features and settings, etc - and recognise that before and after you connect to the VPN. The server could log where in the stream you had got to, and line up the first position the new IP requests (e.g. with the Range HTTP header) with the last position the old IP requested. All of these things are technically information which you are providing to the website, and can be altered, spoofed, or removed; but doing so will not happen automatically in a standard browser.
{ "source": [ "https://security.stackexchange.com/questions/188446", "https://security.stackexchange.com", "https://security.stackexchange.com/users/180948/" ] }
188,631
I'm currently writing my own little password manager that stores the key in a SHA256 hash, with salt. I create the hash by doing the following:

def sha256_rounds(raw, rounds=100001):
    obj = hashlib.sha256()
    for _ in xrange(rounds):
        obj.update(raw)
        raw = obj.digest()
    return obj.digest()

After it is created it is stored with the following:

key = base64.urlsafe_b64encode(provided_key)
length = len(key)
with open(key_file, "a+") as key_:
    front_salt, back_salt = os.urandom(16), os.urandom(16)
    key_.write("{}{}{}:{}".format(front_salt, key, back_salt, length))

The questions/concerns I have are: Is this an acceptable way to store a hashed key? Should I use more iterations? Is my salting technique sufficient, or should I use a different salting technique? If this is not an acceptable take on storing a hashed password/key, what other steps can I take to make this more secure? UPDATE: I took a lot of you guys' advice, and if you would like to see the outcome, so far, of my little password manager you can find it here. Everyone's advice is much appreciated and I will continue to strive to make this as excellent as possible. (If that link doesn't work, use this one: https://github.com/Ekultek/letmein )
I'm currently writing my own little password manager That's your first mistake. Something this complex has many subtle pitfalls that even experts sometimes fall into, without plenty of experience in this area you don't have a chance making something even close to secure. stores the key in an SHA256 hash Uh oh... This doesn't necessarily indicate you're doing something wrong, but I have strong doubts that you're going to do it right. I assume you're talking about a master password being hashed here? The master password should be turned into a key using a KDF like PBKDF2, bcrypt, or Argon2, then this key is used to encrypt the stored passwords. If you want to have a way to verify that the password is correct, storing a hash of the key should be fine, but you MUST NOT store the key itself...if you store the key anyone who gets access to your storage has everything they need to decrypt all the passwords! If you aren't talking about hashing a master password and you do mean an actual randomly generated key then I have no idea what you're trying to accomplish here, but you shouldn't be using a slow KDF with a large number of iterations. Alternatively you could be hashing the master password twice, once to store as a hash to later verify that the password the user enters is correct, and again to use as a key for encryption. Depending on how this is done it could range from a design flaw to completely giving away the key. Edit: After seeing the full code it seems to be a fourth option: you store a hash of the password to later check if the entered password is correct, then you hash this hash to use as the key, which is nearly as bad as just storing the key itself. I create the hash by doing the following: def sha256_rounds(raw, rounds=100001): obj = hashlib.sha256() for _ in xrange(rounds): obj.update(raw) raw = obj.digest() return obj.digest() It's not really clear what raw is here, but I'm assuming it's the password. What you're doing is an unsalted hash using SHA256. Don't try to create your own KDF! After it is created it is stored with the following: key = base64.urlsafe_b64encode(provided_key) length = len(key) with open(key_file, "a+") as key_: front_salt, back_salt = os.urandom(16), os.urandom(16) key_.write("{}{}{}:{}".format(front_salt, key, back_salt, length)) So, you're creating the key by hashing the password, then adding a random salt to the front and back? Not only is concatenating 2 different salts to the front and back non-standard, it's not accomplishing anything here because it's done after the KDF has already finished! You're just adding in some random values for the sake of having them there. To show just how bad this is (as of commit 609fdb5ce976c7e5aa1832670505da60012b73bc), all it takes to dump all stored passwords without requiring any master password is this: from encryption.aes_encryption import AESCipher from lib.settings import store_key, MAIN_DIR, DATABASE_FILE, display_formatted_list_output from sql.sql import create_connection, select_all_data conn, cursor = create_connection(DATABASE_FILE) display_formatted_list_output(select_all_data(cursor, "encrypted_data"), store_key(MAIN_DIR)) While it may be a good learning experience to try creating a password manager, please please don't ever use it for anything remotely important. 
As @Xenos suggests, it doesn't seem like you have enough experience that creating your own password manager would really be beneficial anyway, it would likely be a better learning opportunity to take a look at an existing open source password manager.
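For contrast with the unsalted loop quoted above, this is roughly what standard salted key derivation looks like in Python; the iteration count is only an illustrative figure and should follow current guidance:

import hashlib
import os

def derive_key(master_password, salt=None):
    # Slow, salted KDF (PBKDF2-HMAC-SHA256). The random salt is generated
    # before hashing and stored next to the vault; the key itself is never stored.
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return key, salt

key, salt = derive_key("correct horse battery staple")
# 'key' is used only in memory to encrypt/decrypt the stored passwords;
# only 'salt' (plus the iteration count) is written to disk.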
{ "source": [ "https://security.stackexchange.com/questions/188631", "https://security.stackexchange.com", "https://security.stackexchange.com/users/173664/" ] }
188,762
At our organization, we came across some frequent incidents such as reported successful password-guessing attacks and frequent password reset complaints. We started an investigation to identify the causes and the flaws in our practice. The password policy is as follows: passwords shall have a minimum of 8 characters with a mix of alphanumeric and special characters and a 60-day expiry; no repeating passwords for 3 consecutive changes. Most of the user feedback on our password policy was negative, and the complaint was that users find it difficult to remember their passwords and often use a simple one just to meet the policy. We conducted an internal (personal) survey to identify how strong the passwords being used are; the outcome indicates there were several common words being used in different combinations, since users must change their passwords every 60 days. For example, passwords containing repeated words like name, home, office, etc. I believe most organizations have these policies in place and most of the standards recommend them (PCI-DSS, etc.), but none of them really strike the balance between the controls and practical applicability. Hence such policies/controls are not achieving the desired outcome. The major concern is: how do we strike the balance between these policies (in this case the password policy) and practical implementation challenges?
Since this question is not a technical one, but rather about human behaviour, you won't get a definitive answer. What you describe is very typical, though, and I have had the same experience. Complex password rules will usually not lead to safer passwords; what really matters is only a minimum length, and a check against a list of the most used passwords. People cannot remember tons of strong passwords, and such rules can even interfere with good password schemes. People can get very inventive in bypassing such rules, e.g. by using weak passwords like "Password-2018", which satisfies most rules. Often you end up with weaker passwords instead of stronger ones. The same applies to the password-change rule: it is very common to add an increasing number or the current month to the password. Recently NIST published an official paper (see chapter 10.2.1), advising against such rules, and against its former recommendations. An attempt to answer the edited question: We can try to delegate authentication, either with single sign-on or with OAuth2; this way we can reduce the number of passwords a user has to remember (same password for multiple services). One could recommend a password manager. A link on the login page to a good tool won't hurt. We could encourage passphrases. Why not place a funny example on the login page: "I like to sleep until it is too late to get up" — this raises awareness and shows the user how much easier (and mobile-friendly) passphrases can be. Just make sure to reject this exact example.
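A minimal sketch of the kind of check described above — minimum length plus a denylist of common passwords instead of composition rules; the four literals only stand in for a real list such as the most common leaked passwords:

COMMON_PASSWORDS = {"123456", "password", "qwerty", "password-2018"}

def acceptable(candidate):
    # Length and a denylist, no composition rules.
    if len(candidate) < 8:
        return False
    return candidate.lower() not in COMMON_PASSWORDS

print(acceptable("Password-2018"))                          # False: on the denylist
print(acceptable("i like to sleep until it is too late"))   # True: long passphrase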
{ "source": [ "https://security.stackexchange.com/questions/188762", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179495/" ] }
188,852
According to Is it common practice to log rejected passwords? , I know logging rejected plain text password is a bad idea, but how about if I store the hashed form of rejected passwords? I want to have more information about the failed login for analysis,eg: Check if it is just typo : if a user often failed login at first time with the same hashed password but later it logs in successfully, then it is possibly just a typo Check if there is someone trying to guess password : Some common passwords, just like 123456 and birthday, which has fixed hash, if those attempts exists, the failed login may be done by password guessers. Check if a user have alternative account : some users may have alternative account but with different passwords, if a user usually failed to login with the same hash, and the hash is the same as another account but then login successfully, then it may be a user trying to login with an alternative account but forgot to switch password. My question is, is logging the hashed form of rejected passwords has the same problem as logging plain text rejected passwords?
If properly hashed (i.e. with random salt and strong hash) a hashed password is not reversible and hashed passwords for different accounts differ even if the passwords are the same. This means that almost none of the analysis you want to do can be done with the properly hashed (i.e. random salt) passwords in the first place, i.e. you gain almost nothing from logging passwords and at most you lose since you leak some information into places where an attacker might get easier access since logs are usually not considered as sensitive as stored passwords. When using a plain hash instead (no salt) some of the things you mention are possible at the cost of an increased attack vector since now an attacker can use pre-computed hash tables to reverse your logged passwords. Maybe you have some misconceptions about what proper password hashing means. I recommend reading How to securely hash passwords? to get an idea how proper password hashing is done and why it is done this way but in the following I will address some of the misconceptions you seem to have: Check if it is just typo : if a user often failed login at first time with the same hashed password but later it logs in successfully, then it is possibly just a typo You cannot check a typo using the hashed passwords since even a small change on the input results in a huge change in the output. You also cannot check for the typo in the original passwords since you cannot reverse the hash to get the password for comparison. To check if the entered wrong password is always the same you could log the hash salted with the same salt as the stored (correct) password as Jon Bentley suggested in a comment. If the logs are at least as well protected as the stored passwords then this would only slightly increase the attack surface, but as I said logs are commonly not considered as sensitive as password storage. ... Some common passwords, just like 123456 and birthday, which has fixed hash, ... Proper password hashing uses a random salt to make attacks using pre-computed hashes with common passwords impossible. This means that the same password results in a different hash when it gets hashed, i.e. your assumption of these passwords having a fixed hash is wrong. Again, you could use the more easy to reverse unsalted hashes here at the cost of an increased attack surface. It might instead be better to do that kind of analysis when the entered password is still available and only log the result of this analysis. ... if a user usually failed to login with the same hash, and the hash is the same as another account but then login successfully, ... Since hashes for the same password differ you need the originally entered password to do this kind of comparison. Since you cannot get this back from the logged hash it does not help to have the hashed password logged. Again, you could do this kind of analysis with unsalted hashes, but this would mean that all of your accounts must have their password available in the insecure unsalted way - which is a large increase of the attack surface. You could probably do this kind of analysis with salted passwords too but then would need to log the newly entered passwords salt-hashed with all the salts you currently have in use (i.e. one for each account).
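A short Python demonstration of the point about salts, using the bcrypt library; the passwords here are obviously just examples:

import bcrypt

salt_a = bcrypt.gensalt()  # per-account random salt
salt_b = bcrypt.gensalt()

# The same password hashed under different accounts' salts gives different
# hashes, so logged hashes from two accounts cannot be matched to each other.
print(bcrypt.hashpw(b"123456", salt_a) == bcrypt.hashpw(b"123456", salt_b))  # False

# Re-hashing a failed attempt with the account's own salt (the suggestion
# attributed to Jon Bentley above) does reveal "same wrong password again"
# without revealing what that password was.
print(bcrypt.hashpw(b"wrong-pass", salt_a) == bcrypt.hashpw(b"wrong-pass", salt_a))  # True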
{ "source": [ "https://security.stackexchange.com/questions/188852", "https://security.stackexchange.com", "https://security.stackexchange.com/users/180144/" ] }
188,865
Personally Identifiable Information (PII) is defined (the example below is from NIST ) as (emphasis mine) Information that can be used to distinguish or trace an individual's identity , such as name, social security number, biometric records, etc. alone, or when combined with other personal or identifying information that is linked or linkable to a specific individual, such as date and place of birth, mother’s maiden name, etc. How should this be interpreted in the case of a single phone number, not associated to a name? In other words, if an application is sending bare phone numbers to a server (I am looking at you WhatsApp) without the name of the number owner , is that number still PII? EDIT: Jonah Benton gave in his answer a very nice summary of the question, I quote him for the sake of clarity (...) the question refers to a practice where an app on a user's phone gets access to contacts (...) on the phone and uploads all phone numbers to the server without also uploading the names associated with those numbers (despite having access to them.) Are these uploaded bare numbers, without names, considered PII?
It depends. Could this number be linked to a single person? (E.g. it is your cell number, and if I know this number is in a database and know it's yours, then I know you are in that DB.) Then yes. If this is not possible (e.g. it is the central call-in number of a big corporation that connects to any available agent), then no. If you only have phone numbers without further information about them, you have to assume they are PII, because you don't know whether a number belongs to an individual or not.
{ "source": [ "https://security.stackexchange.com/questions/188865", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6341/" ] }
188,957
I'm currently job searching, and sometimes I come across sites that are just huge databases full of job postings, and before you apply you have to create an account. I came across a site, but I'm skeptical of its security practices. When I found a job posting that I wanted to apply to, it asked for my e-mail address, so I entered it. A pop-up asked me for my resume and the usual contact information. I supplied what I needed to and sent my application. I noticed, however, that it used my e-mail address and created a user account without prompting me for a password. Immediately I was alarmed by this, so I checked my e-mail thinking that the site had supplied me with a temporary password that it would require me to change, only to discover that I had to confirm my e-mail address and would then be prompted to enter a password. From my perspective, I had a user account with no password for maybe 3-5 minutes. Was I right to be skeptical? Should I delete my account?
Just because you had not set a password, that does not mean that your account could be accessed. Without seeing the code, I cannot be sure, but it is possible that you could not log in to your account until you used the link in the email to set the password. You were still using the same session ID while you continued to use the site.
{ "source": [ "https://security.stackexchange.com/questions/188957", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
188,975
A couple of days ago, I attempted to log into the website of a well-known SaaS provider. I used a password manager on my browser (so user/pass were correct) and the NoScript plugin which had limited permissions granted to the site so some JS was blocked. The whole exchange was over HTTPS. The response was a redirect back to the login site with a URL as below, and the login_password data had also been populated to the value of the password input. .../[email protected]&login_password=REDACTED&remember_me=on Would this be classed as just a weak approach and a bad practice to be raised with customer support, or should this be reported as a security vulnerability? Update The company has responded with a stop-gap fix that prevents the original issue from being reproduced. Further Update I just saw this and realised I didn't post a final update. So very belatedly: It was Dropbox. The bug report was outside the strict remit of their bug bounty program through HackerOne, but they did reward the report with a cash reward and it was fixed. I think it's way outside the disclosure limit by now.
Definitely problematic - and worth reporting. If the HTTPS is properly protected with HSTS and preloading , then threat actors observing the traffic wouldn't be able to see the GET contents. But since HSTS is still somewhat rare (and if they're putting plaintext passwords in a GET, they're probably not aware of other best practices like HSTS), the interception/downgrade risk is probably somewhat high. Separate from that, regardless of the health of the HTTPS setup, remote HTTP servers almost always log full GET contents in the server's logs - so those plaintext passwords are probably being recorded on the server as well. [Edit: and vakus' answer covers the rest of the attack vectors much more thoroughly!]
{ "source": [ "https://security.stackexchange.com/questions/188975", "https://security.stackexchange.com", "https://security.stackexchange.com/users/181518/" ] }
189,000
Maybe I have been negligent towards the verification of software I download over the Internet, but I (or anybody I ever met) have never tried to verify the checksum of the contents I download. And because of this, I have no idea about how to verify the integrity of the downloaded item. So how do I verify the checksum of a downloaded file?
Usually this would start on the owner's side, displaying the checksum for the file that you wish to download, which would look something like the following: md5: ba411cafee2f0f702572369da0b765e2 sha256: 2e17b6c1df874c4ef3a295889ba8dd7170bc5620606be9b7c14192c1b3c567aa Now, depending on what operating system you are using, once you have downloaded the required file you can compute a hash of it. First navigate to the directory of the file you downloaded, then:

Windows: CertUtil -hashfile filename MD5 / CertUtil -hashfile filename SHA256
Linux: md5sum filename / sha256sum filename
macOS: md5 filename / shasum -a 256 filename

The issue with checking a hash from a website is that it doesn't determine that the file is safe to download, just that what you have downloaded is the correct file, byte for byte. If the website has been compromised then you could be shown the hash for a different file, which in turn could be malicious.
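If you prefer to script the comparison instead of eyeballing two hex strings, a small Python sketch works on any of the three systems; the filename and the published value are placeholders:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

published = "2e17b6c1df874c4ef3a295889ba8dd7170bc5620606be9b7c14192c1b3c567aa"
print(sha256_of("downloaded-file.iso") == published)  # True only if every byte matches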
{ "source": [ "https://security.stackexchange.com/questions/189000", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34823/" ] }
189,021
I always feel scared to connect to hotel, airport Wi-Fi etc. I feel that if the Wi-Fi router is hacked, my personal information can be collected by a hacker. How can I determine if a Wi-Fi network is safe to connect to? Also, what can an adversary do if he hacks the router I connect to? For example, can he obtain my browsing history? Can he obtain my login credentials if I log in to Gmail? Can he see the emails I sent using the network? Can he install malware onto my mobile? Can he disable the encryption somehow? Can he create some backdoor on my laptop/mobile and access it remotely? Edit: I got some pretty good answers when the adversary doesn't have control over the router (like arp attacks, mitm attacks). What can an adversary do if he has control over the router?
Can you tell if the network you're attached to, assuming you're just an average user, has a been compromised? No What can an attacker do if they're on the same network as you? Regardless if you're connected to an open access point, an access point with WEP enabled (hopefully not) or an access point with WPA/WPA2 you can be attacked. Many public places have WiFi with WPA2 enabled and they just freely give out the password. You're still at risk. An attacker doesn't need to compromise a router to attack you on a public network. It's very easy to arp spoof the entire network and pretend to be the router. Then all your traffic will pass through them. It'd be seamless to your experience Once the arp spoof and ipv#_forwarding is configured it's trivial to sniff your traffic, inject malicious javascript into http traffic, etc etc etc. The attacker doesn't even need to arp spoof you to attack your machine though. Just being on the same network as the attacker gives them the ability to scan your machine for open ports, vulnerable services running, start probing your machine. Using tools such as nmap to first scan the network for potential targets and then port scanning each target, an attacker can quickly find you and identify any possible holes in your machine. Nmap even has some nifty passive scanning features where it won't even expose the attacker on the network because it just listens to who's transmitting instead of actively probing. How can you mitigate risk? Always use TLS and if you can, connect to a VPN whenever you're on a public wifi. Make sure that you don't have any unnecessary services running on your machine that are open to the network. Honestly, you probably shouldn't have any ports open. Any openings are potential access points. Also make sure your machine is fully patched and running all available firewall services. Mind you this is all just mitigation. If you're connecting to public wifi points you have some accepted risk. Some reference material for you: If you want to dig in a little deeper into what can be done, how and with what tools, please look at these links below. To do what I've described above has a very low barrier of entry. What is NMAP What is Man in the Middle Attacks? What is arp spoofing? What are beef hooks? What is MiTM Framwork? What is SSLStrip? Bonus: How do you see what ports are open and listening on Windows? Open a command prompt Run netstat -ano | FIND /N "LISTEN" The output will show you all the ports that are open and listening internally and externally. The one's marked with 127.0.0.1 you can ignore because those are only visible internally to your machine. Anything marked as 0.0.0.0:Port will be visible to attackers on the network. Also anything marked with private addresses such as 10.0.0.0 to 10.255.255.255 172.16.0.0 to 172.31.255.255 192.168.0.0 to 192.168.255.255 will be visible. The command and results are almost the same on Linux netstat -ano | grep LISTEN How to identify what services are running on your listening ports From an elevated command prompt run netstat -a -b and look for ports marked as listening. You'll see the name of the service in brackets. Final Note I use this attack pattern all the time to test devices on my home network for weaknesses. My favorite is testing my phone apps for random things they're sending over the internet. Anyone with a live boot of kali or parrot os can have this attack up and running in about 5 minutes. 
Last year I even wrote a tool that does most of this for you and injects javascript miners into public networks. You can find my article about it here
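As a rough, Linux-only illustration of one detection heuristic (it only catches crude ARP spoofing, and finding nothing proves nothing), here is a small sketch that flags a single MAC address claiming several IP addresses in the local ARP cache:

```python
# Heuristic check: one MAC answering for many IPs in the ARP cache is a
# common symptom of ARP spoofing on the local network (Linux only).
from collections import defaultdict

def arp_entries():
    """Yield (ip, mac) pairs from /proc/net/arp, skipping incomplete ones."""
    with open("/proc/net/arp") as f:
        next(f)                                   # skip the header line
        for line in f:
            fields = line.split()
            ip, mac = fields[0], fields[3]
            if mac != "00:00:00:00:00:00":
                yield ip, mac

def duplicate_macs():
    """Return {mac: set_of_ips} for MACs seen against more than one IP."""
    seen = defaultdict(set)
    for ip, mac in arp_entries():
        seen[mac].add(ip)
    return {mac: ips for mac, ips in seen.items() if len(ips) > 1}

if __name__ == "__main__":
    dupes = duplicate_macs()
    if dupes:
        for mac, ips in dupes.items():
            print(f"{mac} claims multiple IPs: {', '.join(sorted(ips))}")
    else:
        print("No duplicate MACs in the ARP cache (not a guarantee of safety)")
```

If the MAC of your default gateway suddenly shows up against other addresses as well, that is worth investigating; the absence of duplicates, however, tells you very little.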
{ "source": [ "https://security.stackexchange.com/questions/189021", "https://security.stackexchange.com", "https://security.stackexchange.com/users/181490/" ] }
189,048
I know how to use most of the tools in Kali like msfvenom and msfconsole and I can safely call myself a script kiddie. I learned the basics of C# and that helps me understand some of the things in C, but I still get easily lost. I know the basics of assembly like mov xor add jmp cmp etc but it's hard for me to follow the flow of actual code and I most likely won't be able to make a pseudo C code from an assembly program I want to stop being a script kiddie and start being better, not to make a career of it, but just because I like the whole concept of programming tools, finding exploits, and taking advantage of holes in the system. I started following this book "The Shellcoder's Handbook" and read the first 4-5 chapters. I kind of get the idea and concept but it's very hard for me to follow what's actually happening. I've been studying for 8 hours every day following the book and similar videos, yet I don't feel much improvement. Is my method wrong? Am I just starting from a bad place? Is it better to learn every command in python/C and overall the languages or should I learn them while studying books and videos like these since I'm pretty good at C# (or so I believe)? Is it normal to hardly understand these things? I mean, I can understand how a shell script works, for example, but I can't make one myself if left alone to code it.
So let me preface this with "I'm not implying you're a child" Often when I teach kids about CIS and they hear what I do for a living, the first question is "How do I hack?" I'll tell you the same thing I tell them. Hacking isn't a thing you learn as much as it is the result of years of experience in a series of topics that relate to security. Often you'll find people break into the security field coming from software backgrounds and network backgrounds. I started on my security path a few years ago by attending defcon, derbycon, and other security related events. I learned that if I wanted to do it professionally then I needed to dig deeper. I learned there was no magic knowledge that made you a security professional. Just hard work and learning about what it takes to make things secure. After building lots of Arduino projects, setting up lots of webservers from scratch, building lots of websites from the database up, taking (and failing mind you) my OSCP, I finally got to a point where I felt comfortable calling myself a security software engineer. The company I worked at during that time had a security champion program for engineers. They gave a single team member more advanced security training to be the team security engineer. I did that and I loved it. About a year and a half ago I took the leap and got a job building intrusion detection software and I love it. Most of my work is standard coding but I do get to apply my security knowledge on a regular basis in various ways. I guess my long winded point is, keep learning. You're doing it the right way by learning different technologies. Check out my blog https://www.DotNetRussell.com and you can see the projects I did that led me to where I'm at now. Keep learning. As mentioned in the comments, security is hard. That's why there's such a demand for it! It's really rare that it comes easy to people. The people that you see that are experts in the field are there because they spent years and years grinding that hard knowledge.
{ "source": [ "https://security.stackexchange.com/questions/189048", "https://security.stackexchange.com", "https://security.stackexchange.com/users/181417/" ] }
189,088
The only time I've used my browser's proxy settings is when setting up Burp Pro, which makes me wonder: What was the original use-case for these settings? Other than testing / debugging, what was this feature designed for? Do people actually use this in the wild for corporate content filtering, access control, etc? (Note: I'm less interested in Tor or other anonymization technologies; I'm more interested in traditional infrastructure / original design intent) For reference, I'm talking about these settings:
One of the early uses of HTTP proxies was caching proxies, in order to make better use of the expensive bandwidth and to speed up surfing by caching heavily used content near the user. I remember a time when ISPs employed explicit mandatory proxies for their users. This was at a time when most content on the internet was static and not user-specific, and thus caching was very effective. But even many years later, transparent proxies (i.e. no explicit configuration needed) were still used in mobile networks.

Another major and early use case is proxies in companies. These are used to restrict access and are also still used for caching content. While many perimeter firewalls employ transparent proxies where no configuration is needed, classic secure web gateways usually have at least the ability to require explicit (non-transparent) proxy access. Usually not all traffic is sent through the proxy, i.e. internal traffic is usually excluded with some explicit configuration or using a PAC file which defines the proxy depending on the target URL. A commonly used proxy in this area is the free Squid proxy, which provides extensive ACLs, distributed caching, authentication, content inspection, SSL interception etc.

Using an explicit proxy for filtering has the advantage that in-band authentication against the proxy can be required, i.e. identifying the user by username instead of source IP address. Another advantage is that the connection from the client ends at the proxy, and the proxy then only forwards the request to the server given in the HTTP proxy request if the ACL check is fine, maybe after rewriting parts of the request (like making sure that the Host header actually matches the target host). Contrary to this, in an inline IDS/IPS (which is the basic technology in many NGFW) the connection setup from the client is already forwarded to the final server, and ACL checks are done based on the Host header, which might or might not match the IP address the client is connecting to. This is actually used by some malware C2 communication to bypass blocking or detection, by claiming a whitelisted host in the Host header while actually having a different target IP address.
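To make the explicit-proxy model concrete, here is a minimal client-side sketch; the proxy address is a placeholder for whatever a PAC file or group policy would normally hand out:

```python
# Explicit (non-transparent) proxy use from the client's point of view.
# The proxy URL below is a made-up example, not a real host.
import urllib.request

PROXY = "http://proxy.internal.example:3128"   # hypothetical corporate proxy

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

# For plain HTTP the client puts the full target URL in the request line so
# the proxy can apply ACLs, caching and rewriting; for HTTPS it first sends
# a CONNECT, so the proxy only sees host:port unless SSL interception is on.
with opener.open("http://example.com/", timeout=10) as resp:
    print(resp.status, resp.headers.get("Via", "no Via header"))
```

A secure web gateway typically also demands proxy authentication, which would show up here as a 407 response until credentials are supplied.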
{ "source": [ "https://security.stackexchange.com/questions/189088", "https://security.stackexchange.com", "https://security.stackexchange.com/users/61443/" ] }
189,632
I work at a company with a staff of about 1000+. We currently have programming development staff that work on web based projects (approx 50 people). Recently, due to security concerns, our IT and Security department implemented a restriction no longer allowing local admin access on machines. The entire company runs Windows OS for both workstations and servers. I completely agreed with the decision to remove admin; honestly I thought it was long overdue (as the company deals with patient data and requires HIPAA compliance). Unfortunately I believe they took the decision too far. I assumed a subgroup or AD group would be created for users that legitimately needed admin access to do their job (e.g. my programming team), something like a Tech group that would retain admin access. However this was not the case; the only group created was a specific Admin group for Network and Help Desk staff.

The main problem is, as web developers we run programs that require local admin access and unfortunately can't do our job without them running as admin. Example programs include Visual Studio for ASP.NET web development, MAMP for local development, composer, etc. I believe the main reason these programs need admin access is because they need to run and modify local IIS, command line, etc.

Basically there was short notice of when the local admin access was removed. After about 2 days of the development team being dead in the water in terms of being able to work, and me and other team leaders basically yelling and screaming at the IT staff to come up with a solution, they finally conceded and found a third party program that works as a pass-through, allowing the administrators to create the ability for certain programs to run as admin even though we don't have local admin access. Unfortunately, this program we use for local admin access is incredibly buggy and unreliable and not from a reputable source, and there don't seem to be many alternatives out there. (I would prefer not to disclose the program we use.)

My question is, is it typical to not allow Programmers/Developers local admin access at a company or corporation? And if it is common practice to do so, then how do developers run the programs they need as local admin?

A little more information on our network environment (not that it really relates to the question, I just thought I'd add this):
- We use AppBlocker to block programs not on an approved list
- We use an email security blocker that does things like scan and convert attachments to PDF, etc.
- We have at least 2 major antivirus programs on all workstations.
- The network and its servers are very segregated; users only have access to certain servers, folders, and databases that they legitimately need access to.
Here's one data point from a software company that has an interest in security. I know this is common in similar organisations. There is a number of networks. They are physically separated and airgapped, run different colour-coded network cables. Each employee has an 'administration' machine, which can connect to the Internet (via a proxy) for doing email etc. All users are strictly locked down, and there's strict device and access control. In addition to this, each developer has an 'engineering' machine. This has full admin access, and the user can do whatever they like. However it is connected only to the engineering network (no route to the Internet). In most software development contexts this could be seen as extreme, but in orgs that have conflicting requirements of high security and developer freedom, this is a common solution. To answer your question, yes it is common to allow developers admin access, but this doesn't always mean admin access to a machine that could cause information security issues.
{ "source": [ "https://security.stackexchange.com/questions/189632", "https://security.stackexchange.com", "https://security.stackexchange.com/users/167403/" ] }
189,726
I recently started a job at a small company where the CTO prefers to host SSH services at obscure, high numbered ports on our servers rather than the well known port 22. His rationale is that "it prevents 99% of script kiddy attacks." I'm curious whether this is considered bad practice. Intuitively this seems sensible. But we are both largely self taught, and I am uncomfortable with the idea of improvising our own security procedures rather than following well established convention. I know that in general cryptographic schemes and protocols are painstakingly designed by teams of experts, and that every detail is intended to protect against some kind of attack, no matter how insignificant it might seem. I worry about the unintended consequences of deviating from these recommendations. But my colleague seems to have evidence on his side. He was able to demonstrate that we do get dozens of attacks every day that try port 22 and just move on. I know that generally we should avoid security through obscurity , but moving away from this common target really does seem to thwart most attacks . Is this good practice or not? Should we use well known ports? ADDENDUM We do not rely only on the obscure port. We have many other security measures in place, including mandatory hardware keys . I will restate my question more clearly. Is there any reason why port 22 in particular was chosen for SSH? Is there anything dangerous about using other ports?
Does it improve security to use obscure port numbers? If you're already using high entropy passwords or public key authentication, the answer is "not really". Mostly you're just getting rid of noise in logs. I worry about the unintended consequences of deviating from these recommendations. It depends on what port was picked. In Linux, by default all ports below 1024 require root access to listen on them. If you're using a port above 1024, any user account can listen on it if there's not already a process listening. Why does this matter? Let's say you chose to set ssh to bind to port 20,000 . If someone could stop the SSH process on a server (let's say they somehow found a way to crash the process), and had local access to that server, they could then run their own SSH server on port 20,000 before the service restarted. Anyone logging in would then be logging in to the bad guys SSH server. This isn't as big a deal as it used to be, since these days it's only trusted IT staff that are given login access to servers. But still, it does have the potential for privilege escalation and other nastiness if an attacker gets a foothold on your server. Other than being below 1024, there's nothing special about the number 22. Largely it was chosen because SSH was a replacement for telnet at port 23 , and 21 was already taken by FTP. As long as you pick a port below 1024 , the port doesn't really matter. Is this good practice or not? Should we use well known ports? I wouldn't say I recommend it. I also wouldn't advise against it. As I said, it avoids a lot of noise in log files, but the benefits are largely limited to that. For anyone interested in the background story on SSH, you can find it at: https://www.ssh.com/ssh/port
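A quick way to see the privileged-port distinction for yourself (Linux semantics, run as an unprivileged user; the port numbers are just examples):

```python
# Binding below 1024 normally needs root (or CAP_NET_BIND_SERVICE); any
# local account can grab a free high port such as 20000.
import socket

def try_bind(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind(("0.0.0.0", port))
        print(f"port {port}: bind succeeded")
    except PermissionError:
        print(f"port {port}: permission denied (privileged port)")
    except OSError as e:
        print(f"port {port}: {e}")        # e.g. address already in use
    finally:
        s.close()

if __name__ == "__main__":
    try_bind(22)      # denied as a normal user (or "in use" if sshd is there)
    try_bind(20000)   # succeeds for any local user, which is the risk described above
```

That second line is exactly why a crashed SSH daemon on a high port could be impersonated by an unprivileged local account.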
{ "source": [ "https://security.stackexchange.com/questions/189726", "https://security.stackexchange.com", "https://security.stackexchange.com/users/182291/" ] }
189,766
I believe I understand the basics of SQL injection. I also know using prepared statements with PHP files is the best way to prevent SQL injection. I was always told that SQL injection happens most commonly when an attacker inputs valid SQL commands inside form data fields or file input fields on a public facing site. However, if I have PHP files on my site that can only be accessed by an authenticated user, is it still 100% necessary to use prepared statements? Also, what about SQL queries that don't require any outside user data to run? Something like: SELECT * FROM tableName If I'm not passing any variables to a query is it still vulnerable to SQL injection?
Do you trust all of your authenticated users completely? Including that they won't have their accounts compromised by attackers? It's bad if an attacker gets access to an account, but far worse if they can then leverage that to steal the rest of your database. For your second point, the example you've used would not be vulnerable. However, take care with more boundary cases such as $query = "SELECT * FROM tableName WHERE secret='".dbquery("SELECT secret FROM otherTable WHERE id=3")."'"; If it was possible to insert a payload into otherTable in the secret column for record with id 3, this would be vulnerable to Second Order SQLi , even if the insertion routine used prepared statements. As per comments: SELECT * FROM tableName WHERE secret=(SELECT secret FROM otherTable WHERE id=3) is executed within SQL, so not an issue. The problem occurs when data drawn from the DB is considered trusted at the application layer, and used in later queries.
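A minimal sketch of the safe pattern, using Python's sqlite3 for brevity rather than PHP/PDO; the table and column names simply mirror the hypothetical example above:

```python
# Even the value read back out of the database goes in as a bound parameter,
# so a previously stored payload never becomes part of the SQL text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE otherTable (id INTEGER PRIMARY KEY, secret TEXT)")
conn.execute("CREATE TABLE tableName (id INTEGER PRIMARY KEY, secret TEXT, data TEXT)")

hostile = "x' OR '1'='1"   # imagine an attacker managed to store this earlier
conn.execute("INSERT INTO otherTable VALUES (?, ?)", (3, hostile))
conn.execute("INSERT INTO tableName VALUES (?, ?, ?)", (1, hostile, "only this row"))

secret = conn.execute("SELECT secret FROM otherTable WHERE id = ?", (3,)).fetchone()[0]
rows = conn.execute("SELECT * FROM tableName WHERE secret = ?", (secret,)).fetchall()
print(rows)   # one matching row; the stored payload stays inert
```

The string-concatenation version of that last query is what turns a stored value into second order SQLi.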
{ "source": [ "https://security.stackexchange.com/questions/189766", "https://security.stackexchange.com", "https://security.stackexchange.com/users/182330/" ] }
189,771
I recently took a CEH class and they mentioned checking for internal IP's probing to External unknown IP's to determine an attack. What would be the best way to gather and analyze these types of issues? Especially when you have a large network how do you keep track of this?
{ "source": [ "https://security.stackexchange.com/questions/189771", "https://security.stackexchange.com", "https://security.stackexchange.com/users/173135/" ] }
189,784
Every time i sign into my google account on a web browser on a new machine, it warns me of suspicious activity. How does google know its a new machine? I am trying to implement a similar thing for my website, where I should allow the user to login without any notification if I know he had logged in from that machine before, whereas on the other hand, I want to notify user if the user is logging in from a new machine.
{ "source": [ "https://security.stackexchange.com/questions/189784", "https://security.stackexchange.com", "https://security.stackexchange.com/users/182353/" ] }
190,102
My brother disabled my internet access for my devices from 10pm - 6am. The internet on his devices still work 24/7. I am still connected to the WiFi but there is no internet access because he did a MAC address time filter. I use my iPhone and my laptop. Is there any way to get around this?
You can defeat your brother's access restrictions either by a timing attack or a side-channel attack. In a timing attack, you simply wait for a sufficient time: eventually your brother will remove the MAC filtering for your device. If you cannot wait for the time-based attack to succeed, you can use a side-channel attack and connect to the internet via an alternative channel, such as GSM or a friendly neighbor.

Joke aside, MAC spoofing is a way to overcome MAC filtering. Since MAC filtering is (usually) only tied to the MAC address assigned to a network interface controller, you can change your MAC address to match the one of an unfiltered device. This is a relatively easy process, but can cause harm (Denial of Service), depending on network equipment and configuration.

On wired networks, switches are usually designed to forward traffic destined for a MAC address to only one port. If multiple ports have the same MAC address, the network logs might contain warnings of MAC flapping and alert the administrator. This blog post demonstrates how the network can become unreliable for devices that share the same MAC address.

On wireless networks, sharing the same MAC address usually does not lead to the same problems as on a wired network. The reason for this is that the wireless network is a single network port (a single radio interface) with multiple connected devices. There are no alternative ports for packets to take as long as both devices are connected to the same access point over WiFi.

Sometimes, you also have to clone the IP address of an unfiltered device (this is also dependent on the network devices that handle MAC addresses). This can lead to another set of problems:
- If your network adapter is set to DHCP, you might be issued the same IP address as your target device.
- You and the target device can get visual warnings about an IP address conflict.
- You and the target device might drop connections that belong to the other.
If possible, statically configure the adapter to use an unused IP address. If you absolutely have to also spoof the IP address, wait for the device to disconnect from the network. There is a tool called CPScam that is used to bypass captive portals (which most commonly use MAC filtering). This tool will monitor the network for active devices and alert you whenever a device leaves the network. If you impersonate a device that is no longer on the network, it should not cause harm or alarms, at least not until it reconnects.
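As an illustration of the mechanics only (and only on networks you are permitted to test): changing a MAC address on Linux comes down to three iproute2 calls. The interface name is an assumption, and to actually get past a MAC-based filter you would supply the address of an unfiltered device rather than a random one:

```python
# Generate a random locally administered MAC and apply it with ip(8).
# Requires root; the interface name is a placeholder.
import random
import subprocess

IFACE = "wlan0"   # hypothetical interface name

def random_laa_mac():
    """Random unicast, locally administered MAC (set bit 1, clear bit 0 of first octet)."""
    first = (random.randint(0x00, 0xFF) & 0b11111100) | 0b00000010
    rest = [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

def set_mac(iface, mac):
    subprocess.run(["ip", "link", "set", "dev", iface, "down"], check=True)
    subprocess.run(["ip", "link", "set", "dev", iface, "address", mac], check=True)
    subprocess.run(["ip", "link", "set", "dev", iface, "up"], check=True)

if __name__ == "__main__":
    mac = random_laa_mac()
    print(f"Switching {IFACE} to {mac}")
    set_mac(IFACE, mac)
```

Note that an unjailbroken iPhone does not let you change its MAC address at all, so this approach is limited to the laptop mentioned in the question.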
{ "source": [ "https://security.stackexchange.com/questions/190102", "https://security.stackexchange.com", "https://security.stackexchange.com/users/182643/" ] }
190,179
I read this question: Is it common to allow local admin access for developers in organizations? This question makes me wonder. I can see how local admin access is a danger to the machine. But can a computer with a user account with local admin access actually be a bigger danger to a properly set-up network than a computer that does not have local admin access? If yes, how?
Local Admin access means that it is easier for the attacker to establish persistent control of the host, to install software and modify system settings, and to take actions like sniffing the network that may allow it to move laterally onto other systems. So, yes, it is a danger to the network, in that it provides the attacker with more stable access to a more capable platform for lateral movement.
{ "source": [ "https://security.stackexchange.com/questions/190179", "https://security.stackexchange.com", "https://security.stackexchange.com/users/180588/" ] }
190,245
There is a strategic question that we are banging our heads against in my IT department, which essentially boils down to this:

There is a type of attack against our systems that can cause a lot of damage if missed or not addressed properly. More precisely, this could cause a major blow to the company's operations and potentially ruin the entire business. The probability of such an attack is very low. Nonetheless, it does happen to other companies in the field regularly (however rarely). It has not happened to our systems yet. In order to be able to mitigate the attack, we must hire another employee and spend an additional 8% (at least) of our budget every year, both of which are significant investments.

Usually we gauge such problems by multiplying the probability of occurrence by the expected damage, but in this case we are lost trying to multiply a number tending to zero by a number tending to infinity to come up with a cohesive answer. Along the same lines, our team is divided into two camps: one thinks that the attack will never happen and the investment of time and money will be wasted; the other camp thinks that the attack will come tomorrow. Everybody agrees, though, that half-assed measures will be both a waste of resources and will not protect against the attack – we either go all-in or don't bother at all.

As the team leader, I see merit in both opinions – we may operate for the next 20 years without encountering such an attack, and we might have it today (out of the blue, as it usually happens). But I still have to decide which way to proceed. In that regard, I would like to ask you whether you have encountered such puzzles and what the industry's approach is to dealing with them.
Now usually we gauged such problems by multiplying probability of occurrence by the expected damage, but in this case we are lost trying to multiply a number tending to zero by a number tending to infinity and come up with cohesive answer. Unfortunately this is what you need to do in this case. But I don't believe that this calculation is really as difficult as you make it out to be. The risk can be estimated by first estimating how many companies face the same security threat and don't take the necessary precautions. Then you skim the news reports to check how many companies get affected by it each year (plus an educated guess how many of them managed to prevent the mess from becoming public). Divide the second number by the first, and you have a risk percentage. Your damage does not tend towards infinity. The event which would come closest to "infinite damage" would be a collapse of the entire time/space continuum of the universe (and even that's an event where you could make a fermi estimate to quantify the damage in dollars , if you are bored and interested in astrophysics). The highest damage you might be able to cause realistically is bankrupting your company. Maybe you could cause even more damage if you also account for damages caused to other people. But when your company is bankrupt, it can't pay those liabilities. So you can use the net worth of the company as the upper limit of your expected damage.
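A back-of-the-envelope version of that estimate; every number below is a placeholder to be replaced with your own figures:

```python
# Classic annualized-loss-expectancy style calculation with invented numbers.
comparable_companies = 5000         # peers exposed to the same threat (estimate)
public_incidents_per_year = 10      # known hits on those peers per year
nonpublic_fudge_factor = 2          # guess for incidents that never go public

annual_rate = public_incidents_per_year * nonpublic_fudge_factor / comparable_companies

worst_case_damage = 50_000_000      # capped roughly at the company's net worth
mitigation_cost_per_year = 400_000  # extra hire plus 8% of the budget, say

expected_annual_loss = annual_rate * worst_case_damage
print(f"Annualized rate of occurrence: {annual_rate:.2%}")
print(f"Expected annual loss:          {expected_annual_loss:,.0f}")
print(f"Mitigation cost per year:      {mitigation_cost_per_year:,.0f}")
print("Mitigate" if expected_annual_loss > mitigation_cost_per_year
      else "Accept (or insure/transfer) the risk")
```

The point is not the precision of the numbers but that both sides of the comparison become finite and arguable instead of "zero times infinity".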
{ "source": [ "https://security.stackexchange.com/questions/190245", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91956/" ] }
190,429
At my work all browsers have a custom root CA installed, which then allows them to snoop on all https traffic while the users get the false impression that they are browsing a secure https page. Why are browsers allowing such easy defeat of https, and not warning the user about it? EDIT: Based on the answers/comments I realize that I was perhaps incorrectly emphasizing the wrong part of my confusion. I understand that there are some legitimate needs to want to change the CA list, what I don't understand is why you wouldn't want to warn the user if such a change has been made. Doesn't not warning the user defeat the point of the green box next to the address? Do I really need to go through multiple clicks, and then perhaps do a (compromised) search to figure out if the root CA is a real one or not for any computer that I don't own and/or if I let someone else touch my computer?
As pointed out in comments and answers, there are plenty of legitimate reasons why you would want to add a CA to your browser's trust store, and the mechanisms for doing this require admin access to the machine / browser. You're implying a trust model where you don't consider your administrator (or past you) to be trustworthy and would like the browser to visually distinguish between a certificate that is publicly-trusted (ie issued by a CA in Mozilla's publicly-trusted list) and one that is privately-trusted because it was explicitly added to the browser's trust store. Maybe the usual green with a warning symbol for privately-trusted? Good idea! It would also solve my problem of needing two copies of firefox installed: one for testing products that need me to install certs, and one for browsing the internet. You should see if Firefox already has an enhancement for this, and if not, suggest it :)
{ "source": [ "https://security.stackexchange.com/questions/190429", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54306/" ] }
190,558
It seems to me that a hardware component which generates random numbers is extremely simple - just measure tiny vibrations in the hardware with a sensor, right? Maybe I'm wrong but it seems like if you measured vibrations with very high precision, you could easily generate unguessable random numbers. However, I'm a cryptography noob, and know little. So I'm reading and an article says: and you just don’t need to be that sophisticated to weaken a random number generator. These generators are already surprisingly fragile, and it’s awfully difficult to detect when one is broken. Debian’s maintainers made this point beautifully back in 2008 when an errant code cleanup reduced the effective entropy of OpenSSL to just 16 bits. In fact, RNGs are so vulnerable that the challenge here is not weakening the RNG — any idiot with a keyboard can do that — it’s doing so without making the implementation trivially vulnerable to everyone else. I'm surely missing some critical detail about how random number generation is "fragile". Can someone explain?
Hardware vs software RNGs

The first thing you mention is a hardware noise source. High-precision measurement of some metastable phenomenon is enough to generate unpredictable data. This can be done with a reverse-biased zener diode, with ring oscillators, with an ADC, or even with a Geiger counter. It can even be done by measuring nanosecond-level delays in the timing between keystrokes. These noise sources can fail if the hardware itself begins to fail. For example, a transistor can break down if it is not specifically designed to operate in reverse at high voltages. While these techniques have varying levels of fragility, that is not what is being discussed in the text you quoted.

The second type of RNG you mention is a software RNG called a pseudorandom number generator (PRNG*). This is an algorithm which takes a seed, which is like an encryption key, and expands it into an endless stream of data. It attempts to ensure that the data cannot be predicted, or told apart from pure randomness, without knowledge of the secret random seed that the algorithm started with. In this case, the PRNG is implemented in pure software, so breaking it only takes introducing a bug into the code, which is what the text you quoted is talking about. It is merely code that is fragile, risking complete failure if changes to the code are made that deviate from the algorithm's intended behavior.

A PRNG can be thought of as a re-purposed encryption algorithm. In fact, you can create a cryptographically secure PRNG by using a cipher like AES to encrypt a counter. As long as the encryption key (seed) is secret, the output cannot be predicted and the seed cannot be discovered. When you think about it this way, it becomes easier to understand how a small, inconsequential change in the code can completely break the security of the algorithm.

Collecting randomness

So how do modern devices actually collect randomness? Let's take a server running quietly in a datacenter somewhere. In order to support things like TLS, it needs a large amount of completely unpredictable data that cannot be distinguished from a truly random stream. Without a dedicated hardware noise source, the randomness must come from within. Computers strive to be fully deterministic, but they have plenty of input from non-deterministic devices. Enter... interrupts!

In modern hardware, an interrupt is a signal emitted by hardware to alert the CPU to a status change. It allows the CPU to avoid rapidly polling every hardware device for updates and instead trust that the device will asynchronously alert it when the time comes. When an interrupt occurs, an interrupt handler is called to process the signal. It turns out this handler is the perfect place to get randomness! When you measure the nanosecond-level timing of interrupts, you can quickly get a fair bit of randomness. This is because interrupts are triggered for all sorts of things, from packets arriving on the NIC to data being read from a hard drive. Some of these interrupt sources are highly non-deterministic, like a hard drive which relies on the physical motion of an actuator. Once sufficient random bits have been collected by the operating system, a small seed of at least 128 bits can be fed into a cryptographically secure PRNG to generate an unlimited stream of pseudorandom data.
Unless someone could predict exactly when every past interrupt occurred to nanosecond precision (along with several aspects of the CPU's execution context at the time of the interrupt), they will not be able to derive the seed and will not be able to predict future PRNG output. This makes the output completely suitable for TLS keys. * A security-oriented PRNG is called a cryptographically-secure PRNG, or CSPRNG. Using a regular PRNG when an application calls for a CSPRNG can result in security vulnerabilities.
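To make the "cipher run on a counter" idea concrete, here is a toy sketch using the third-party cryptography package. It is an illustration only; real applications should simply read from os.urandom() or the secrets module and let the operating system handle seeding:

```python
# Toy CSPRNG: AES in CTR mode keyed by a secret seed. Unpredictable as long
# as the 256-bit seed stays secret. Do not use this in place of os.urandom().
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class ToyCsprng:
    def __init__(self, seed: bytes):
        if len(seed) != 32:
            raise ValueError("need a 256-bit seed")
        nonce = bytes(16)          # fixed nonce is acceptable: the key is never reused
        self._enc = Cipher(algorithms.AES(seed), modes.CTR(nonce)).encryptor()

    def read(self, n: int) -> bytes:
        # Encrypting zero bytes yields the raw keystream, i.e. the output stream.
        return self._enc.update(bytes(n))

rng = ToyCsprng(os.urandom(32))    # the seed ultimately comes from interrupt timing etc.
print(rng.read(16).hex())
print(rng.read(16).hex())          # a fresh, still-unpredictable block each call
```

A one-character bug in code like this (say, resetting the counter, or truncating the seed) would silently destroy the security while still producing plausible-looking output, which is exactly the fragility described above.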
{ "source": [ "https://security.stackexchange.com/questions/190558", "https://security.stackexchange.com", "https://security.stackexchange.com/users/166618/" ] }
190,796
On Chrome, if you open a sign up page, it will offer to fill and remember the password field. I did this and got the following sequence of passwords offered as generated: suCipAytAyswed0 LUnhefcerAnAcg2 it2drosharkEweo UndosnAiHigcir0 AKDySwaybficMi5 DorrIfewfAidty5 MeecradGosdovl9 KasEsacHuhyflo4 OngouHemNikEyd0 In all of these, there is only one digit, and in all of them except the third the digit is at the end . The chance of me getting that many with this exact pattern is extremely low, and I've seen it in other passwords generated by Chrome, so I'm confident it would continue if I generated some more. This just seems weird, wouldn't it be easier to generate the password as a random collection of letters and numbers than to enforce some strange pattern; given that these are not supposed to be remembered? Why would Chrome do something else?
Conor's answer is a good starting point, but if you dig into Chromium's source the situation starts to look a little bleaker (but still better than not using a password manager at all). Chrome 68 (current version as of August 1st, 2018) Up through version 68 Chrome follows FIPS 181 to generate a 15 character pronounceable password allowing uppercase letters, lowercase letters, and numbers. If the result doesn't contain both an uppercase letter and a number, it changes the first lowercase letter to uppercase, and changes the last lowercase character to a random digit. Unfortunately the entropy of a FIPS 181 password is pretty hard to calculate, as it generates variable length syllables rather than characters, and there are a bunch of rules dictating whether or not a syllable is allowed. The non-uniformity has severe implications. A 1994 paper (page 192) estimated that to break into 1 out of 100 accounts with 8 character passwords, an attacker would only have to try 1.6 million passwords. Even if increasing the length from 8 to 15 doubles the entropy, that's still probably under 60 bits of entropy on average 1 , though this is improved slightly due to capitalization. The original standard doesn't appear to support uppercase letters or numbers, and the implementation 2 only capitalizes the first letter of a syllable with a 50% chance (interestingly y is replaced with w in the array of characters checked, so y will never be capitalized). This means that rather than adding about 1 bit of entropy per letter, capitalization only adds 1 bit per syllable. The number of syllables isn't constant, so it's hard to determine how much entropy this actually adds, but given the scarcity of single letter syllables it almost certainly adds less than 8 bits on average. Numbers and symbols are supported by turning each single letter syllable alternately into a digit or symbol with 50% chance (though the symbol feature isn't used). Unfortunately, as you have noticed, single letter syllables are uncommon 3 , so ForceFixPassword usually ends up swapping out the last lowercase letter for a digit. There may be more issues, but I'm getting a bit tired of looking at it. In short, this isn't a very good method of generating passwords, and the entropy is significantly less than one would expect for the length. In practice this is still probably ok for the average user, as it means they won't be using their favorite low entropy password in 20 different places, but breaking it is definitely possible for a determined and funded attacker with a fast hash (ie not a good password hash ) of the password. Chrome 69 (scheduled for release in September) Things are looking much better in Chrome 69. The character set is upper and lowercase letters, numbers, and the symbols -_.:! with the following removed for readability : l (lowercase letter L) I (capital letter i) 1 (digit one) O (capital letter o) 0 (digit zero) o (lowercase letter O) The generation works by adding a random character from each class until the minimum count for said class is met. By default and as currently used this is one lowercase character, one uppercase character, and one digit. Then the rest of the password is filled with random characters evenly chosen from all character classes (respecting a maximum count for each class, currently unused). Finally, since characters were added to the beginning of the password from predictable classes in order to satisfy requirements, the string is randomly shuffled . 
The shuffling happens up to 5 times if two dashes or underscores are adjacent in order to improve readability, so this will very slightly reduce the entropy, but the reduction is so slight as to be unnoticeable (and removing dash or underscore from the allowed symbols would be worse).

With 61 possible characters, a fully random password would have log2(61^15) = 88.96 bits of entropy. Using inclusion-exclusion to account for the required characters, I come up with 88.77 bits of entropy:

  61^15   all possible passwords
- 53^15   passwords without digits 2-9 (0 and 1 are excluded)
- 37^15   passwords without lowercase letters (l and o excluded)
- 37^15   passwords without uppercase letters (I and O excluded)
+ 29^15   add back passwords excluded twice for lack of digit and lowercase
+ 29^15   add back passwords excluded twice for lack of digit and uppercase
+ 13^15   add back passwords excluded twice for lack of lowercase and uppercase
-  5^15   remove all-symbol passwords that were excluded then added back

The extra shuffling will shave off a fraction of a bit as well, but I don't have time to calculate it right now. In the end, the password should have over 88 bits of entropy, which is pretty good.

The old generator still exists in version 69, but when I tested the dev build it was using the new one. Whether or not there's any way to use the old generator I don't know.

1. Average entropy isn't necessarily useful with non-uniform distributions; the original 1975 paper gives an example (pages 29-30) of a generator that produces a single password (e.g. "password") with a 50% chance, and a high entropy password otherwise. The average entropy may be high, but there's still a 50% chance the password will be guessed immediately. Even so, extrapolating from the 1994 analysis, I believe it should still have well over 40 bits in the worst case.
2. The implementation isn't actually Chrome's, but is taken from the APG program, with minor modifications for compatibility.
3. Testing with apg reveals that single letter syllables actually occur in about 33% of passwords, but 70-75% of those only have a single letter syllable at the end.
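For completeness, here is a rough Python re-creation of the Chrome 69 scheme as described above. This is an illustration of the algorithm, not Chrome's actual C++ code, and the character sets are reconstructed from the description:

```python
# Sketch of the described scheme: satisfy per-class minimums, fill the rest
# uniformly from all 61 characters, then shuffle with a CSPRNG.
import secrets

LOWER   = "abcdefghijkmnpqrstuvwxyz"    # l and o removed for readability
UPPER   = "ABCDEFGHJKLMNPQRSTUVWXYZ"    # I and O removed
DIGITS  = "23456789"                    # 0 and 1 removed
SYMBOLS = "-_.:!"
ALL     = LOWER + UPPER + DIGITS + SYMBOLS   # 61 characters in total

def generate(length=15):
    # Required characters first (one lowercase, one uppercase, one digit)...
    chars = [secrets.choice(LOWER), secrets.choice(UPPER), secrets.choice(DIGITS)]
    # ...then fill the remainder uniformly from the whole set...
    chars += [secrets.choice(ALL) for _ in range(length - len(chars))]
    # ...and Fisher-Yates shuffle so the required ones don't sit at the front.
    for i in range(len(chars) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

print(generate())
```

The readability re-shuffle for adjacent dashes and underscores is omitted here, which is why this sketch keeps the full 88.77 bits computed above rather than a hair less.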
{ "source": [ "https://security.stackexchange.com/questions/190796", "https://security.stackexchange.com", "https://security.stackexchange.com/users/183373/" ] }
191,017
I was in a mall a few days ago and I searched for a shop on an indication panel. Out of curiosity, I tried a search with (.+) and was a bit surprised to get the list of all the shops in the mall. I've read a bit about evil regexes but it seems that this kind of attack can only happen when the attacker has both control of the entry to search and the search input (the regex). Can we consider the mall indication panel safe from DOS considering that the attacker only has control of the search input? (Leaving aside the possibility that a shop might be called some weird name like aaaaaaaaaaaa.)
I would compare accepting user supplied regular expressions to parsing most sorts of structured user input, such as date strings or markdown, in terms of risk of code execution. Regular expressions are much more complex than date strings or markdown (although safely producing html from untrusted markdown has its own risks) and so represents more room for exploitation, but the basic principle is the same: exploitation involves finding unexpected side effects of the parsing/compilation/matching process. Most regex libraries are mature and part of the standard library in many languages, which is a pretty good (but not certain) indicator that it's free of major issues leading to code execution. That is to say, it does increase your attack surface, but it's not unreasonable to make the measured decision to accept that relatively minor risk. Denial of service attacks are a little trickier. I think most regular expression libraries are designed with performance in mind but do not count mitigation of intentionally slow input among their core design goals. The appropriateness of accepting user supplied regular expressions from the DoS perspective is more library dependent. For example, the .NET regex library accepts a timeout which could be used to mitigate DoS attacks. RE2 guarantees execution in time linear to input size which may be acceptable if you know your search corpus falls within some reasonable size limit. In situations where availability is absolutely critical or you're trying to minimize your attack surface as much as possible it makes sense to avoid accepting user regex, but I think it's a defensible practice.
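One standard-library-only mitigation in Python is to evaluate the untrusted pattern in a separate process with a hard time budget (a linear-time engine such as RE2, or an engine with a native timeout like the .NET one mentioned above, is cleaner where available):

```python
# Run the user-supplied pattern in a child process and kill it if it blows
# through the time budget (likely catastrophic backtracking).
import multiprocessing
import re

def _worker(pattern, text, queue):
    try:
        queue.put([m.group(0) for m in re.finditer(pattern, text)])
    except re.error as e:
        queue.put(e)

def safe_search(pattern, text, seconds=1.0):
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=_worker, args=(pattern, text, queue))
    p.start()
    p.join(seconds)
    if p.is_alive():
        p.terminate()
        p.join()
        raise TimeoutError("pattern rejected: exceeded time budget")
    result = queue.get(timeout=1)
    if isinstance(result, Exception):
        raise result
    return result

if __name__ == "__main__":
    shops = "Foo Shoes\nBar Coffee\nBaz Books"
    print(safe_search(r"(.+)", shops))            # harmless: lists every shop line
    try:
        safe_search(r"(a+)+$", "a" * 30 + "b")    # classic evil regex
    except TimeoutError as e:
        print(e)
```

For a kiosk-style search over a small, fixed corpus, this kind of budget (or simply a linear-time engine) removes most of the remaining DoS worry.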
{ "source": [ "https://security.stackexchange.com/questions/191017", "https://security.stackexchange.com", "https://security.stackexchange.com/users/110133/" ] }
191,055
Can the client be harmed in any way, and how?
There are a number of settings in the standard RDP client that could be exploited for an attack on the client, if enabled. For example: shared folders, access to devices like printers, etc. If you're remoting into a known compromised machine, you might want to disable as much as possible in the client's connection and sharing options before connecting. It's also possible that in addition to the things that you intentionally shared, there could be vulnerabilities in the RDP client itself. Ensure you're patched and up-to-date, and treat the client machine as at-risk of infection until you have a chance to scan it for indicators of compromise.
{ "source": [ "https://security.stackexchange.com/questions/191055", "https://security.stackexchange.com", "https://security.stackexchange.com/users/183656/" ] }
191,060
I was studying the Wi-Fi security section for a pentesting certification the other day and there is an extensive part about cracking WEP. Is going in-depth on WEP cracking worth it anymore? According to this statistic: https://wigle.net/stats# about 7% of Wi-Fi networks still use WEP for encryption today. It's not a lot, but at the same time it is a lot considering that WEP was deprecated in 2004. Thoughts?
Unfortunately, WEP is still present in the world. There are legacy systems and devices in certain environments that can only do WEP, plus a number of networks that have no one interested and/or knowledgeable enough to update. Like many advances in technology, phasing out the older technology takes time. Look at IPv4 vs. IPv6 after 20ish years and tell me which is still predominant.

That being said, WEP is no longer viable in modern 802.11 networking. Not only is WEP not viable in modern 802.11 networking, neither is TKIP (was initially used as part of WPA certification). Since the release of the 802.11n amendment to the standard, the use of either requires that devices disable the use of HT or VHT data rates. In other words, the use of WEP or TKIP causes a modern 802.11 network (i.e. 802.11n or newer) to function little better than an 802.11a/g network. While you do pick up some of the advantages of newer standards, the performance (which is the typical driving force for people to upgrade) is negated.

But all that aside, I have to point out that Wigle's stats are a bit "flawed" unless you actually understand what it is you are really viewing. Wigle is a large, user collected database of information. However, as far as I know, they do not age out old data for a number of reasons (for instance, just because someone hasn't recorded updated information on a network doesn't mean it isn't still present). So what you have is a large number of networks present in their data that are not present in the real world. If you check many of the WEP entries, they will not have been updated in 5 or more years. Many of these are likely gone or replaced.

In the graph on the Wigle statistics page, they are simply showing the percentage of their database entries that are using the respective technologies. They are not showing the actual technologies deployed in the real world at present. The shown decline of WEP is largely due to new networks being added to the database that are not using WEP, rather than WEP networks being removed from the database.

Pulling from the Wigle.net API, these stats may present a more accurate picture of the decline of WEP:

All Entries
-------------------
464,429,878 (Total)
 31,800,699 (WEP)
---WEP: 6.85%---

Updated since 2014
-------------------
343,970,477 (Total)
  8,550,789 (WEP)
---WEP: 2.49%---

Updated since 2016
-------------------
233,996,263 (Total)
  4,374,629 (WEP)
---WEP: 1.87%---

Updated since 2017
-------------------
158,548,717 (Total)
  2,707,548 (WEP)
---WEP: 1.71%---

As you can see, while WEP is still certainly present, the real world statistics of WEP being in the wild is much lower than the 6-7% number to which you were referring.
{ "source": [ "https://security.stackexchange.com/questions/191060", "https://security.stackexchange.com", "https://security.stackexchange.com/users/147421/" ] }
191,152
In its recent policy, the US Department of Defense has prohibited the use of GPS-featured devices for its overseas personnel. They explain it with a theory that commercial devices like smartphones or fitness trackers can store the geo-position (GPS) data along with the device owner's personal information on third-party servers, and this information can leak to the enemies, which, in turn, would “potentially create unintended security consequences and increased risk to the joint force and mission.” Although it's a nice theory, I'd like to know whether this policy is just a theory or it has been based on some confirmed incidents of such use of cyber-warfare in an ongoing war. Hence the question: Is there any confirmed evidence of actual use of cyber-warfare exploiting the vulnerable GPS data? If so, what are they? I have initially asked this question on Politics.SE, but was suggested to ask it here instead.
Confirmed cases? Yes, at least two. One is Strava, and the other is Polar. When Strava updated its global heat map, it showed some areas in supposed desert areas full of activity. Who would go jogging, at night, on the desert? What about US soldiers ? An interactive map posted on the Internet that shows the whereabouts of people who use fitness devices such as Fitbit also reveals highly sensitive information about the locations and activities of soldiers at U.S. military bases, in what appears to be a major security oversight. In war zones and deserts in countries such as Iraq and Syria, the heat map becomes almost entirely dark — except for scattered pinpricks of activity. Zooming in on those areas brings into focus the locations and outlines of known U.S. military bases, as well as of other unknown and potentially sensitive sites — presumably because American soldiers and other personnel are using fitness trackers as they move around. Using fitness trackers will allow the enemy to detect the place, extrapolate the number of soldiers, the patrol patterns and path, and even identify the soldiers. If you can identify someone that lives somewhere in Montana, and suddenly spent 4 months on Pakistan, you can bet he is a soldier. And using the pace and heart rate, you can even say how fit the person is. The Polar leak was even worse : With two pairs of coordinates dropped over any sensitive government location or facility, it was possible to find the names of personnel who track their fitness activities dating as far back as 2014. The reporters identified more than 6,400 users believed to be exercising at sensitive locations, including the NSA, the White House, MI6 in London, and the Guantanamo Bay detention center in Cuba, as well as personnel working on foreign military bases. The Polar API allowed anyone to query any profile, public or private, without any rate limit. The user ID was pretty easy to predict, and 650k+ user profiles were downloaded , several GB of data. Just ask, and Polar would give all. The post shows lots of sensitive places (nuclear facilities, military bases, NSA headquarters, Guantanamo Bay facilities, among others) and could identify the users on those places, and even their home addresses, Facebook pages and personal pictures. You don't need to think too much to realize the damage that can be done with all that information.
{ "source": [ "https://security.stackexchange.com/questions/191152", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9758/" ] }
191,191
My website has a redirect page with the format https://my.site/redirect?deeplink=https://foo.bar& ... The redirect is implemented in Javascript, so when you request the site, you get a 200 and some HTML + JS, not a 30X. I recently started to notice that someone is abusing the redirect page for dubious links (guns, viagra, ...). It was suspicious that the traffic of the page increased by a lot, especially at night, when there should be barely any traffic. I started to log the requests including referer. The referers seem to be all kinds of different hosts (not the same one every time) but mostly redirect pages themselves. Examples are http://foo1.bar/cgi/mt4/mt4i.cgi?cat=12&mode=redirect&ref_eid=3231&url=http://my.site/redirect ... http://foo2.bar/modules/wordpress/wp-ktai.php?view=redir&url=http://my.site/redirect ... http://www.foo3.bar/core.php?p=books&l=en&do=show&tag=2774&id=20536&backlink=http://my.site/redirect ... http://www.google.sk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=172&ved=0CCMQFjABOKoB&url=http://my.site/redirect ... I'm actually in control of the URLs that users should be legitimately redirected to, so I implemented a whitelist of valid hosts and started redirecting invalid ones to my start page. What I'm wondering is, why should someone abuse my redirect page in the described way? And are there any risks I should be aware of?
Assuming that people trust your site, abusing redirections like this can help avoid spam filters or other automated filtering on forums/comment forms/etc. by appearing to link to pages on your site. Very few people will click on a link to https://evilphishingsite.example.com , but they might click on https://catphotos.example.com?redirect=https://evilphishingsite.example.com , especially if it was formatted as https://catphotos.example.com to hide the redirection from casual inspection - even if you look in the status bar while hovering over that, it starts with a reasonable looking string. The main risks are to your site reputation (it's more likely to get black listed by filtering services if they spot dubious traffic being accessed through it) and to people following these links (who knows what is actually on the other site you're sending them to). It's unlikely to result in compromise of your server directly.
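For reference, the allowlist check the question describes can be as small as this sketch; the hostnames are placeholders:

```python
# Only redirect to https URLs whose host is exactly on the allowlist;
# anything else falls back to the start page.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"foo.bar", "app.my.site"}        # hypothetical legitimate targets

def safe_redirect_target(deeplink, fallback="https://my.site/"):
    try:
        parts = urlparse(deeplink)
        host = parts.hostname
    except ValueError:
        return fallback
    if parts.scheme != "https" or host not in ALLOWED_HOSTS:
        return fallback
    return deeplink

print(safe_redirect_target("https://foo.bar/promo"))        # allowed through
print(safe_redirect_target("https://evil.example/phish"))   # falls back
print(safe_redirect_target("//evil.example/phish"))         # scheme-relative trick blocked
```

Exact host matching matters: substring or prefix checks are a classic way such filters get bypassed.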
{ "source": [ "https://security.stackexchange.com/questions/191191", "https://security.stackexchange.com", "https://security.stackexchange.com/users/63880/" ] }
191,460
I recently had to authenticate myself online to use an internet-based service. The authentication process was done via video call with me holding my ID card in front of my laptop camera beside my face. I also had to wiggle the ID card so the person on the other end of the video call could see the security features that are printed on the ID card. Then the person asked me to wave my hand in front of the ID card, so that it was shortly fully covered by my hand several times. What is this method supposed to achieve or is this just security theater?
Given that this identification was likely performed according to German law, this request was to conform with BaFin Circular 3/2017 which demands (in their non-binding English translation): Any substitution/manipulation of parts or elements of the identity document must be countered by suitable measures. To this end, the person to be identified must be asked, for example, to place a finger over security-relevant parts of the identity document (variable and determined at random by the system) and move one hand across their face. Using stills from these movements that are cut out and enlarged, the employee must verify that the identity document, along with all the security features visually identifiable in white light, is completely covered at the right point and that no artefacts indicating manipulation are evident at the transition points. So the stated reason for that is to uncover potential manipulation in the video feed you send them. There have to be enough and unpredictable tasks which you may be asked to make it harder for you to have a suitable substitution prepared.
{ "source": [ "https://security.stackexchange.com/questions/191460", "https://security.stackexchange.com", "https://security.stackexchange.com/users/86741/" ] }
191,530
I live in a country which is under many sanctions. Both internal sanctions (government on people) and external sanctions (US on our people). In our country, YouTube, Twitter, Facebook and many other sites are blocked by default and we can only access them through VPN. But there is one thing that should work: DNS.

If I set my DNS to 8.8.8.8, theoretically it should return the right IP address for www.youtube.com and that IP address should get blocked by the ISP. But it is not. It looks like our government is manipulating the DNS servers, even the public ones.

I have Ubuntu 18.04 (Bionic Beaver), and I have disabled Network Manager DNS. I have disabled resolvconf and systemd-resolve, by which I mean that no entity in my own system can change the file /etc/resolv.conf. I changed the content of /etc/resolv.conf to:

nameserver 8.8.8.8

and only this name server. So now every application uses this server by default, and they should query this server for the IP address of websites, but unfortunately they are not doing so!

kasra@ubuntu:~$ nslookup google.com
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
Name:   google.com
Address: 216.58.214.110
Name:   google.com
Address: 2a00:1450:4001:812::200e

kasra@ubuntu:~$ nslookup youtube.com
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
Name:   youtube.com
Address: 10.10.34.35
Name:   youtube.com
Address: 10.10.34.35

kasra@ubuntu:~$ nslookup twitter.com
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
Name:   twitter.com
Address: 10.10.34.35
Name:   twitter.com
Address: 10.10.34.35

kasra@ubuntu:~$

10.10.34.35 is the intranet IP address for the filtering authority.

How is the ISP achieving this? Are they really stealing and MITM-ing the traffic of 8.8.8.8? Is it some kind of BGP hijack? How can I get around this without a VPN?
How is the ISP achieving this? Are they really stealing and MITM-ing the traffic of 8.8.8.8?

They probably simply redirect all packets with destination port 53 (i.e. DNS) to their own servers and answer the query themselves. This is not that hard to do.

How can I get around this without a VPN?

A properly configured VPN (i.e. no DNS leaks) can get around this (unless they block VPN too). Using an HTTP proxy or SOCKS proxy can also help for HTTP and HTTPS traffic (for the latter, make sure to configure external name resolution). And DNS privacy techniques like dnscurve, dnscrypt, DNS over TLS and DNS over HTTPS (already supported by Firefox) will help too.
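One way to demonstrate the port-53 redirection to yourself is to query an address that cannot possibly be a DNS server and see whether you still get an answer. This sketch uses the third-party dnspython package and a TEST-NET address:

```python
# If a host that is guaranteed NOT to run DNS still "answers", something on
# the path is intercepting port 53 and answering in its place.
import dns.exception
import dns.message
import dns.query

NOT_A_RESOLVER = "192.0.2.1"    # TEST-NET-1 address, never a real resolver

query = dns.message.make_query("youtube.com", "A")
try:
    reply = dns.query.udp(query, NOT_A_RESOLVER, timeout=3)
    answers = [rdata.to_text() for rrset in reply.answer for rdata in rrset]
    print("Got a reply from a non-existent resolver: port 53 is being intercepted")
    print(answers)               # e.g. the filtering authority's 10.10.34.35
except dns.exception.Timeout:
    print("No reply (the expected result on an unmolested path)")
```

A timeout here does not prove the path is clean, but an answer from a TEST-NET address is conclusive evidence of interception.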
{ "source": [ "https://security.stackexchange.com/questions/191530", "https://security.stackexchange.com", "https://security.stackexchange.com/users/184163/" ] }
191,590
My friend just asked me: "why is it actually that bad to put various passwords directly in program's source code, when we only store it in our private Git server?" I gave him an answer that highlighted a couple of points, but felt it wasn't organized enough and decided this might make sense to create a canonical question for. Also, how does not storing passwords in the source code relate to principle of least privilege and other foundations of information security?
The way I see it, not storing passwords in Git (or other version control) is a convention . I suppose one could decide not to enforce it with various results, but here's why this is generally frowned upon: Git makes it painful to remove passwords from source code history, which might give people a false idea that the password was already removed in the current version. By putting the password in source control, you basically decide to share the password with anyone who has access to the repository, including future users. This complicates establishing roles within a developer team, which might have different privileges. Source control software tends to get pretty complicated, especially "all-in-one" systems. This means that there's a risk this system might eventually get compromised, leading to password leakage. Other developers might be unaware that the password is stored and might mishandle the repository - having keys in the source means that extra care would have to be taken when sharing the code (even within the company; this might create a need for encrypted channels). I cannot say that every pattern related to infosec is good , but before breaking them it's always a good idea to consider your threat model and attack vectors. If this particular password got leaked, how difficult would it be for an attacker to use it to harm the company?
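To make the alternative concrete: instead of committing the secret, read it from the environment (or from a config file excluded from version control) at runtime. This is a minimal illustrative sketch I am adding, not code from the question's project; the variable name DB_PASSWORD and the failure behaviour are my own assumptions.

import os

def get_db_password():
    # The secret lives in the deployment environment (or a secrets manager),
    # never in the repository, so rotating it does not require a commit and
    # repository access does not imply knowledge of the credential.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError(
            "DB_PASSWORD is not set; configure it in the environment "
            "or your secrets store instead of hard-coding it."
        )
    return password

A .gitignore'd .env file or a dedicated secrets manager serves the same purpose; the point is that the value never enters the repository history.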
{ "source": [ "https://security.stackexchange.com/questions/191590", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15648/" ] }
191,702
Every year an automated password reset occurs on a VPN account that I use to connect to the institution's servers. The VPN accounts/passwords are managed by the institution's IT department, so I have to send an email every year to follow up with the account controller in order to get the new password. This always ends in a phone call , because their policy is to not send passwords through email. I have a vague understanding of why sending passwords through email is bad, but honestly I don't understand why telling someone a password over a phone would be any better. Assuming I have a 0% chance to change their policy (I really have no chance), why would telling someone a password over a phone call be more secure than email? I am primarily focused on the ability for phone/email to be intercepted by a third party , but @Andrew raised a good point about the permanency of email. There is some great information in this Q/A , but that question is about the most secure way to send login information, while I'm specifically asking about phone call vs email security.
Emails are saved somewhere, whether on a mail server or on someone's personal computer. Phone calls usually are not recorded, unless they take place in a customer-facing environment.
{ "source": [ "https://security.stackexchange.com/questions/191702", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46732/" ] }
191,900
Okay, we know the drill: don't use FTP , use SFTP or FTPS . But what exactly is the risk being posed? The files themselves are sent unencrypted, and this may be fine, or disastrous, depending on what the code in them contains. But, if we're dealing with static HTML files (or similar), presumably this is fine? What about a user's credentials, password, etc.? Are these guarded when using vanilla FTP? And if so, is it "adequate"? I'm asking quite honestly as a quality of life measure. My alternative for updating basic files on a public server would otherwise be with a messy process of SSH keys, etc. using systems which, for the life of them, cannot remember my password when sent via SFTP. Granted the particular network that one is on is also of importance. A secure office network is a bit different from an airport lounge.
With plain FTP the credentials are passed in the clear and thus can be easily sniffed. Also, the files are not only sent in the clear but are also not protected against modification, i.e. an active man in the middle might change the files on the fly. In this respect the risks are similar to plain HTTP, i.e. it might be fine within a trusted network but is a bad idea if you cannot fully trust the network.
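To see why the credentials are so easy to sniff, it helps to look at what actually crosses the wire: the FTP control channel is a plain-text, line-based conversation. The sketch below is my own illustration (the host and credentials are made up); it speaks the first few lines of that conversation directly over a socket, and anyone on the path sees exactly the same bytes.

import socket

HOST, USER, PASSWORD = "ftp.example.org", "alice", "hunter2"  # hypothetical

with socket.create_connection((HOST, 21), timeout=10) as s:
    print(s.recv(1024).decode())                 # 220 greeting from the server
    s.sendall(f"USER {USER}\r\n".encode())       # username travels as plain text
    print(s.recv(1024).decode())                 # 331 password required
    s.sendall(f"PASS {PASSWORD}\r\n".encode())   # so does the password
    print(s.recv(1024).decode())                 # 230 logged in (or 530 denied)

A passive sniffer on the same network segment captures those USER/PASS lines verbatim, which is exactly the exposure SFTP and FTPS remove.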
{ "source": [ "https://security.stackexchange.com/questions/191900", "https://security.stackexchange.com", "https://security.stackexchange.com/users/109400/" ] }
191,939
Consider the following line from /etc/passwd:
sadeq:x:1000:1000:Mohammad Sadeq Dousti,,,:/home/sadeq:/bin/custom-script.sh
The last part, /bin/custom-script.sh, shows the command/script to be run when the user logs in to the system. Currently, it's a simple Bash script, which presents the user with a menu, effectively limiting the possible commands he can execute. Or, I hope so! Maybe there's a way the users can bypass custom-script.sh and access Bash directly. Then, they can execute any command within their user context. Is there a way to bypass the above Bash script, and execute other commands? Edit: Consider the following simple case as custom-script.sh:
#!/bin/bash
echo What is your name?
read name
echo Hi $name
Assume there is a (probably unintentional) backdoor. The default /etc/passwd on Sun workstations of the early 1990s included an entry something like this: games::0:0:games:/nopath:/bin/false In other words, an account named 'games' with no password. Apparently the genius who came up with this idea had no imagination, and assigned it a uid and gid of zero (i.e. root and wheel). Generally, that was okay, as the home directory was meaningless and the shell was set to a program that always exited with failure. Furthermore, the default settings for any access by network -- telnet, rlogin, rcp, ftp -- were set to prevent access by any uid of zero. There was a separate passwd entry for root, with a properly-set set password, home directory, and shell. This meant if you tried to log in as games, the following would happen: Logging in as games at the console would at first succeed, but then spawn the /bin/false shell, which would immediately exit. Using telnet or rlogin would outright deny the login. Even if it had succeeded, the /bin/false shell would immediately quit. FTP and scp don't use shells, but they were configured to deny access to uid zero, so you couldn't log in this way. A GUI login would start up the default GUI services, including a window manager, clock client, and a terminal. The latter would immediately exit because its child shell would immediately exit. So you would get an empty screen except for a clock. (More on this below...) If you really had to log in as root, you would either have to do that from the console, or first rlogin/telnet as another user on that machine and then su root . Either way uses the root passwd entry rather than the games passwd entry, and thus works they way root should work. So the games account appeared to always fail, unless you did a GUI login. In that case, the only thing that appeared was the clock. However, you could right-click on the background to get a root menu, with a factory-provided list of programs that users would normally customize for themselves. (Customizing the menu didn't work for the games account; I don't remember exactly why.) You could try to bring up more terminal windows, which would all fail. There was a puzzle game (which may have been the impetus for the account in the first place). Another choice was to log out. And then there was the graphical debugging tool, dbxtool . dbxtool was a graphical frontend to the dbx symbolic debugger, similar to today's gdb . Being uid zero, you could attach to and control any process on the system, although this was not useful because programs provided by Sun were compiled without symbols. You could launch a shell, but this would use your SHELL environment variable, which was /bin/false . However, you could also change environmental variables! This meant you could get a root shell by the following: Log in via the GUI as games, without password. Right-click to bring up the root menu. Start a dbxtool . setenv SHELL /bin/sh (Optional) setenv HOME / Start a shell with ! Because the terminal is not set, do stty sane Voila, root shell without password! So, do not assume that a user can't escape out of an invalid shell.
{ "source": [ "https://security.stackexchange.com/questions/191939", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7/" ] }
192,197
From my days of amateur web development the principle of least privilege has been beaten into me: don't use chmod -R 777 dir. I have personally never needed it, so I've never used it. I now work on a development team professionally, and we recently moved executable code to a shared internal server. Only people from the company can access the server and we trust everyone at our company. The code isn't particularly sensitive, anyway. Trying to run a script † that another team member wrote into the shared folder caused a permissions error, so "just to check if it would otherwise work" a coworker ran chmod -R 777 /opt/path/to/shared/folder on the project. Once it did work the coworker said it's fine to leave it as is instead of switching to a more controlled groups solution for us. My instinct is to speak up and say this is bad practice and we should change it to a groups solution. However, after putting some thought into it I can't come up with a reason why shared executable code on an internal server shouldn't have 777 permissions. From a security standpoint, is there any reason to change our project folder's permissions from 777 to something tied down a little tighter with groups? † We can't change this script's permission requirements.
However, after putting some thought into it I can't come up with a reason why shared executable code on an internal server shouldn't have 777 permissions. Because you're not only trusting every user - which might be reasonable on an internal server where "everybody" who has access should have that control - you're also trusting every process on that server. The web server. The SSH server. The DHCP client. Every scheduled task and every service. Even processes running as "nobody" and "nogroup". All sorts of processes that might be leveraged by an attacker to gain or expand their access. Because if you're going to be that sloppy in internal development, someone's going to be that sloppy on a Production or Customer system, because "that's how we fixed it internally". Because an attacker will delight in finding that little system that's only internal and isn't important or protected, see weaknesses like writable web application files, use them to get onto the system and start leveraging it to get somewhere. I'll bet the passwords people use on that system also work on other, more valuable, internal systems. Maybe you guys use the same root password across servers, too? That's always a fun one to find.
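For completeness, the "more controlled groups solution" the question mentions usually means: create a group for the team, give the group ownership of the tree, and grant group read/write/execute instead of world access. In practice you would typically do this with groupadd/chgrp/chmod; the Python sketch below, which I am adding as an illustration, expresses the same idea. The group name devteam and the path are placeholders, the group must already exist, and the script needs sufficient privileges to change ownership.

import os
import shutil
import stat

SHARED = "/opt/path/to/shared/folder"   # placeholder path from the question
GROUP = "devteam"                        # hypothetical team group

for dirpath, dirnames, filenames in os.walk(SHARED):
    shutil.chown(dirpath, group=GROUP)   # group owns it, not "everyone"
    # 2770: rwx for owner and group, setgid so new files inherit the group
    os.chmod(dirpath, stat.S_IRWXU | stat.S_IRWXG | stat.S_ISGID)
    for name in filenames:
        path = os.path.join(dirpath, name)
        shutil.chown(path, group=GROUP)
        os.chmod(path, stat.S_IRWXU | stat.S_IRWXG)  # 770, no world access

Team members are then added to the group (for example with usermod -aG devteam alice), while the world bits stay off, so local service accounts such as the web server's user no longer get write access for free.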
{ "source": [ "https://security.stackexchange.com/questions/192197", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90486/" ] }
192,244
I have a user account for each of my children on our district website, which oversees registration, grades, identification, etc. I was recently sent home a form from both of my children's classrooms asking us to log in to our accounts so we could sign a new school year form. Printed on this piece of paper were both the username and the password for our accounts. The security practice of sending home printed passwords is immediately discouraging, but my larger concern is how my password is stored in the district system (and ultimately, what would happen if that system were compromised). I want to contact the webmaster, but I want to make sure I'm correct in any assumptions I make prior to shooting off my email asking that action be taken to avoid this kind of thing. I saw a related question, and want to make sure I don't jump the gun on harassing them over their storage policies. -- Since it's been asked several times, this is a password that I set on the account, not an auto-generated password. Also, this is an account that parents control; it contains sensitive identifying information about your child. It's not intended as a student portal or anything like that. -- Update_1: I got a call from the district webmaster today, wanting to discuss my email in more detail. I explained that my concerns were two-fold: (a) the transmission of our password on a printed piece of paper, and (b) the ability to retrieve that password in the first place. I was informed that the system is a legacy system, and as such has no capability of allowing a "forgot my password" feature. While the policy, they agreed, is incorrect, the alternative is to have every parent who doesn't remember their password come into the school with an ID to retrieve their password. (I was also informed that since we're in a 60% poverty district, assuming all parents have an email address for password management isn't an option). While this is incredibly inconvenient, I explained the far greater inconvenience of having someone access my accounts because they had access to my password. I was also informed that the system is being replaced next year, which will come with more modern security features (though I'm unsure of the storage policies on the future system). The lady was very polite, and offered to put me in contact with their director of IT to discuss my concerns around password storage policies, which I accepted. She also offered to BCC me on an email to our school principal, requesting that future communications be issued in a sealed format. Finally, I was slightly (and correctly) scolded for reusing my password in the first place.
Yup! If they are able to retrieve the password from the database, then they are clearly not following password storage best-practices. OWASP provides a good guide for how to do it properly: https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet Here's some ammunition you could use in that letter: You want me (the legal guardian of my child) to sign a form. You are using the action of logging into a website and clicking a button as a form of legal signature. How do you know it was actually me that logged in and clicked the button? How many people had access to the sheet with the username and password on its way to me? How can you prove that it was actually me that logged in and clicked the button? Clearly the password is stored in the database in such a way that it can be retrieved by school board staff. How can you prove that it was actually me that logged in and clicked the button? Were something to go wrong, I highly doubt that "signature" would hold up in court, meaning the form will not hold up in court. This seems like a liability issue for the school board and/or for me (depending on what's in the form). Can I get a statement from the school board's legal team that this is ok?
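To make the "best practices" point concrete: a properly built system never stores the password itself, only a salted, slow hash of it, so nobody can print it on a form even if they want to. Here is a small illustrative sketch I am adding, using Python's standard library scrypt function; the cost parameters are typical values I picked, not a recommendation taken from the answer or from OWASP.

import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)                        # unique per user
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)   # deliberately slow to brute force
    return salt, digest                          # store these, never the password

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

With a scheme like this, the only thing the school could send home is a reset link or temporary code, because the original password simply is not recoverable from what is stored.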
{ "source": [ "https://security.stackexchange.com/questions/192244", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80469/" ] }
192,336
Lately, I was watching an online video about Microsoft Certified Solutions Associate (MCSA) and in one of the videos it says " removing GUI from Windows server makes it less vulnerable. " Is that true? If so, how does removing the GUI have that effect?
Removing the GUI is useful and recommended. It removes unused components and a lot of libraries, and makes the install size smaller. How does this make it less vulnerable? Fewer components mean less attack surface. A vulnerability in a GUI component will not affect you. Attacks relying on GUI components won't work either. So, when designing a server, remove every single component not needed by the application you are serving. It will be way more secure than using the default install.
{ "source": [ "https://security.stackexchange.com/questions/192336", "https://security.stackexchange.com", "https://security.stackexchange.com/users/122088/" ] }
192,472
Context: I have a laptop supplied by my organisation. I am trying to connect to eduroam , but I cannot do it using my organisation's laptop. When I use a personal computer, it asks me for a username and password, just as a standard wifi network asks for password. I found the text below in the internal IT policy. I need help understanding it. To me it's totally counterintuitive : Using hotel, coffee shop and public WiFi hotspots You may be able to connect your laptop to use the WiFi in hotels, coffee shops etc but this depends on how the WiFi is set up: if it’s “open” (that is, you don’t need any password to connect) then you should be OK if it’s set up so that you need a password to connect to the WiFi (and this password is given to you by the establishment) then again, you should be OK if however you can readily connect to the WiFi but you need to enter a username and/or password in your web browser software, then you will not be able to access the service. The security standards to which our laptops are built, means that they cannot connect directly to a “dirty” or insecure internet connection – everything goes via the secure VPN connection into our IT network. So the user can’t get to the web page where they’d need to type in a password, without first connecting to the VPN – and they can’t connect to the VPN without first getting to the web page. So basically, I can use my work's laptop in a coffee shop where the network is shared by anyone (for which so much has been written against, e.g. here ). I can also use it in a network with password security only, for which there is even a WikiHow (!) guide on hacking. And yet, I cannot use it in a network that requires both username and password, which surely must be much more difficult to hack into. What is this sense of security that underlies my organisation? Am I missing something?
They have configured the laptops to spin up a VPN connection and only speak to "home base" after they go on the network. That means that if there is a local "captive portal" that requires you to enter credentials, you will not be able to use it, because that would require evading the VPN. (It's a chicken and egg thing. No VPN, no ability to reach the portal - no portal, no ability to spin up the VPN!) It is more secure because they are ensuring that, no matter what connection you have, any network traffic you send goes through your company's network, your company's controls, and is not subject to interception or manipulation by any other party. It unfortunately breaks the case of Wireless with a "captive portal," but allowing for that case would lower their security by allowing your machine to talk to arbitrary machines directly rather than through the VPN. The "eduroam" service that you mention in the comment explicitly states that they do not have a captive portal , but use WPA-Enterprise based on 802.1X: Does eduroam use a web portal for authentication? No. Web Portal, Captive Portal or Splash-Screen based authentication mechanisms are not a secure way of accepting eduroam credentials.... eduroam requires the use of 802.1X... 802.1X is the kind of authentication you need to enter to configure your machine to connect to the network, so this case is one that your IT policy explicitly states that they allow: if it’s set up so that you need a password to connect to the WiFi (and this password is given to you by the establishment) then again, you should be OK In fact, eduroam appears to be very well aligned with your IT policy - they both distrust the "bad security" imposed by captive portals. Based on the edit to the original question: I am trying to connect to eduroam, but I cannot do it using my organisation's laptop. When I use a personal computer, it asks me for a username and password, just as a standard wifi network asks for password. That suggests to me that your organization's laptop is simply not prompting you to connect to new networks the same way that your personal computer is. This could be because of different operating systems or different policies applied to the two computers. You may simply want to ask your IT group for help configuring 802.1X for connection to the eduroam network; using that keyword will make it clear to them you're trying to do something they allow.
{ "source": [ "https://security.stackexchange.com/questions/192472", "https://security.stackexchange.com", "https://security.stackexchange.com/users/185302/" ] }
192,521
Someone to whom I am related is at a study camp for their desired profession. This person, let's call her Jane, is supposed to be studying rigorously for two months. The housing provided offers wireless internet connections, which are spotty and don't allow for fluid streaming of even low-quality video, or other useful tasks to studying. Being that Jane wants to study in her down-time and look up resources as a reference to the material, she needs to access these materials and suffer with a slow connection. There are no provided modems or other ways to connect via Ethernet, and the student is expected to have some form of wireless connection computer, presumably. Now, I want Jane to have the best possible studying experience, and I understand that they might deem this experience "the best to study in," so I called and claimed that I was interested in attending the camp myself, but I only have a desktop computer with no wireless card, and I expect a wired connection. After a few hours, I received a response saying the following: "We do not provide hard wire connections to our network because of viruses and stuff" It was clear to me the information I was being relayed was second-hand, but acknowledging that I wouldn't be able to change anyone's mind about this policy, I come here to posit this question: Exactly what security benefits could be gained by only offering a Wireless connection? In this case, I'm assuming that the answer given to me was genuine and not just an excuse for them to not do extra work or anything of the sort.
Warning: Conjecture, because none of us know their actual setup. It is very likely that the organization has their own network, which is hard-wired, as well as a guest network, which is wireless-only. The two are separate networks. This is a common layout because laying wire to desks is expensive, but worth it, for your own employees; broadcasting wireless is cheap, and worth every penny of it, for your guests. When you asked about a hard-wired connection, they are answering the question of which network you'd be on rather than how you connect to the network . And as the two are intertwined in their minds ("hard-wire is our network, wireless is guest network") they are answering very simply. From their point of view, they don't want non-organization machines on their network, only on the guest network - because of viruses and stuff. We can all understand that we wouldn't want random visitors on our internal networks, right? So that would be a context in which their answer makes sense. I would suggest explaining your concern to them and seeing if they can come up with a solution, instead of asking them about the solution you would expect to work. It may be that they only expect guests to need enough connectivity for email and light web browsing. If you explain that Jane needs more bandwidth for her study needs, and can convince them that it's a reasonable request, they're likely to find some way to help - even if it's just moving Jane to a room closer to the Wireless AP.
{ "source": [ "https://security.stackexchange.com/questions/192521", "https://security.stackexchange.com", "https://security.stackexchange.com/users/185331/" ] }
192,535
My kid is starting 6th grade and the school requires him to get a laptop and bring it to school. Now the school IT department wants to install some software on the laptop and is asking for administrative access. They want to install Office, Outlook, an AV and some site certificates. I feel that on principle this is not right, as it's not the school's device, so school staff shouldn't have access. Additionally, I don't have any sense of how good the school's security practices are. What if they inadvertently install malware? However, if I refuse then I risk being "that parent" and I'm setting myself up for a few years of headaches as any time the school wants to add new software, I'll have to do it myself. What would you do? Update Wow, this certainly blew up! Thanks to everyone for reading and commenting. We ended up letting the school have access, for a couple reasons: The clock was ticking and our child was the only one whose laptop wasn't set up, so he wasn't able to fully participate in lessons and was missing out on emails sent to the students. I'm traveling and am not at home, so remotely installing the software myself would add another layer of complexity and require someone at home to prep the laptop for remote admin, while adding more delay to the device being ready. It came down to what's best for the child and at the moment it seemed to us parents that it was letting the school have its way. I can check the device myself later and if there is anything that compromises the device's security or the child's privacy, then I have a better argument against the school's approach. In the meantime I'm letting them know that they could have communicated more about their plans and given us time to have a conversation about it rather than springing it on us at the last minute (though from their point-of-view this worked out just fine).
Needing to install things is kind of the point of needing the laptop, so it makes perfect sense that they want to install Office, AV, and certificates. There are no surprises there. To do that, they need admin access, but I would want to revoke that access once they were done. I would want to know the list of everything they want to install, and if they have central control over the AV (and if they do, why they want that). If your worry is that they might install malware, then download a Live CD of an anti-malware program and run it on the laptop after they are done. If the laptop is only used for school work, then there is really no harm here. If your child will be using it for other things, then there might be some privacy conflicts. The onslaught of comments and the split in votes highlights a difference in understanding of the operating model here. This is not a situation where the school wants sudden control of a personal device. This is a situation where the school is asking the parent to purchase a device for the school to control and this answer is meant to be applied in this model. The school needs to be able to control the device as a part of due care (and remember that the child in this case is a minor; 12 or 13). In terms of protecting the child's privacy, my advice to make sure that the device is only used for school work holds. The fact that the parent can retain admin control is a great thing for the protection of the child, something that would not be possible if the school owned the device. The parent can inventory, patch, and uninstall . This operating model means that the school can ensure consistency of software, which would be required for teaching consistency, it lowers the cost to the school (yes, it increases direct costs to the parents, but does offer cost efficient options) and it offers due care controls for the protection of the child. You just have to shift your mindset that just because you bought the device does not mean that you should have 100% control of the device. And again, with the new onslaught of comments, I say: consider the idea of a "burner" device . You own it, but it is meant to be, at least in part, out of your control and properly classified for certain activities. If the operating model was that the school wanted sudden control of a personal device, my answer would be very different (more like AviD's).
{ "source": [ "https://security.stackexchange.com/questions/192535", "https://security.stackexchange.com", "https://security.stackexchange.com/users/168571/" ] }
192,539
Must my AWS account ID be kept secret? Can anything at all be done using just the AWS account ID? From the AWS documentation : The AWS account ID is a 12-digit number, such as 123456789012, that you use to construct Amazon Resource Names (ARNs). When you refer to resources, such as an IAM user or an Amazon Glacier vault, the account ID distinguishes your resources from resources in other AWS accounts.
An AWS Account ID can be shared, when required. Like the documentation says, the main thing anyone can use your AWS Account Number for is to construct ARNs. For example, if I had an AWS Account which held an AWS Lambda function, and someone on another account, who I had explicitly granted permission to, wanted to manipulate it, they would use my account number in the ARN:
arn:aws:lambda:us-east-1:123456789012:function:ProcessKinesisRecords
Again, this is totally limited by the permissions applied on your account. Even if I had a full ARN, unless you give my AWS account access, I won't be able to do anything with it. API keys are the things that grant remote control of things, and they are dangerous.
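As a concrete illustration of "the ID only lets you name things, not touch them", here is a small hedged boto3 sketch I am adding (not from the original answer; the account ID, region and function name are placeholders, and it assumes you have your own AWS credentials configured). Knowing the ARN lets you construct the name, but the API call is still gated by credentials and IAM policy.

import boto3
from botocore.exceptions import ClientError

ACCOUNT_ID = "123456789012"          # the "secret" that really isn't
REGION = "us-east-1"
ARN = f"arn:aws:lambda:{REGION}:{ACCOUNT_ID}:function:ProcessKinesisRecords"

client = boto3.client("lambda", region_name=REGION)  # uses *my* credentials
try:
    client.invoke(FunctionName=ARN)   # knowing the ARN is not authorisation
except ClientError as err:
    # Without an explicit resource policy or cross-account permission on the
    # target account, this fails with an access-denied error; the account ID
    # alone gave me nothing.
    print(err.response["Error"]["Code"])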
{ "source": [ "https://security.stackexchange.com/questions/192539", "https://security.stackexchange.com", "https://security.stackexchange.com/users/148825/" ] }
192,583
I have heard of the method of using 4 random dictionary words; it gives you lots of characters and is easy to remember. But that seems to be open to dictionary attacks, especially if the attacker has heard of the method as well, and to brute force attacks on combinations of 4 dictionary words. Are there too many combinations of 4 dictionary words, so that it would still be safe? I noticed that Veracrypt specifically states not to use dictionary words, or combinations of 2, 3, 4 such words. So, if that's not safe, is there a safe method that still lets me remember the password? Would a combination of 8 dictionary words work?
The main problem with passwords is not password complexity, but password reuse (obligatory xkcd). One service leaks logins and passwords, and suddenly lots of providers see a surge in account hijacks. Why? Because we humans cannot remember dozens of different passwords, so we create one password for common services, and one for special ones. But most of us will have only one password. Don't create your own passwords; use a password manager. They can create very complex passwords, one for each service, have plugins and extensions for the major browsers, have strong encryption, cloud backup, multi-device syncing, and more. Don't trust your brain to create different random passwords for each service. Using a password manager means you will only need to know one password - the master one. This password can be written down and kept in your wallet. All the others will be created by the manager, and can contain 128 chars, 10 numbers, 30 special chars, including ĥaŕd-tö-tỹpẽ ones...
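If you want to see what "let the machine create the password" looks like in code, Python's secrets module (designed for exactly this kind of use) can generate one. This is just an illustrative sketch I am adding; the length and character set are arbitrary choices, and a real password manager does the equivalent for you.

import secrets
import string

def generate_password(length=24):
    # CSPRNG-backed choice; every character is drawn independently,
    # so the entropy is length * log2(len(alphabet)).
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())

With 24 characters drawn from roughly 94 symbols that is about 157 bits of entropy, far beyond what any realistic online or offline guessing attack can cover.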
{ "source": [ "https://security.stackexchange.com/questions/192583", "https://security.stackexchange.com", "https://security.stackexchange.com/users/162856/" ] }
192,943
Does the EU consent form system pose a new security risk? Today we have to click OK on about 20 cookie consent forms every week, where previously we could mostly dismiss internet forms as being invasive and risky. There are so many EU consent forms, I feel more likely to confuse a disguised download consent form and a security attack with an EU consent form. How big a risk do EU consent forms represent?
It increases dialog box fatigue . By overflowing the user with mundane dialog boxes, they are more likely to get into the habit of just clicking OK to remove the dialog box from their screen. This increases the risk of a user clicking OK on some important security decision presented in a dialog window.
{ "source": [ "https://security.stackexchange.com/questions/192943", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77723/" ] }
193,004
I reached out to an old friend of mine who was a terrific programmer back in my school days and he invited me to attend one of the CTF events with his university group. This group seems very beginner friendly and open to everyone, but I still fear that I have not nearly enough knowledge in the security field to be able to participate. So I would like to prepare a bit for it, find out exactly what this is and what I can do to improve to a basic level. Internet research just gave me a very vague idea of what a CTF is. What I already have is basic and intermediate knowledge in some programming languages including C#, PHP/Javascript/etc (basic), C (very basic), Java. I don't know if this is of any use, but I thought it can't hurt. What exactly is a CTF and how can I, as a total beginner, prepare for a CTF event on my own?
CTFs (Capture The Flag) are security exercises packaged like games. Some websites provide easy ones to learn the ropes, with simple challenges of increasing difficulty. For example http://overthewire.org/wargames/ will teach you how to use tools (hex dump, vi, even the terminal itself) with each challenge. The main goal is usually to find a secret code (the flag), either embedded in a file (stegano), hidden in a file inside a server where you will need to abuse a known vulnerability (regular CTFs), or even by exploiting a program's source code to find a secret password (reversing). Just like any programming challenge, take your time, learn the tools, and don't be afraid to look for help or writeups (obviously not for the exact challenge you're trying to solve); they can provide insight into which tools to use, depending on the type of challenge.
Some links:
https://www.hackthebox.eu/ : Various categories of CTF as explained above, ranging from easy to hard, lots of writeups
http://overthewire.org/wargames/ : Mostly regular CTFs with a file hidden in a server, and specific rules to find/decrypt it. Good for beginners, will teach you the basic tools
{ "source": [ "https://security.stackexchange.com/questions/193004", "https://security.stackexchange.com", "https://security.stackexchange.com/users/173012/" ] }
193,092
I'm massively under-informed when it comes to in-depth security, however my understanding was that password entropy should be somewhat similar between different algorithms. I gave a couple of passwords entropy checks, and the algorithms gave me massively varying results. Is this to be expected, and is there a reliable/standard check for password entropy?
Attempting to add to the other Connor's answer: Something important to keep in mind is that entropy of course is, in essence, "the amount of randomness" in the password. Therefore, part of why different entropy checkers will disagree is because the entropy is a measure of how the password was generated, not what the password contains. An extreme example is usually the best way to show what I mean. Imagine that my password was frphevgl.fgnpxrkpunatr.pbzPbabeZnapbar . An entropy checker will probably rate that with a high amount of entropy because it contains no words and is long. It doesn't contain numbers, but taking a simple calculation (like what Connor outlined in his answer, and what most entropy calculators do), you might guess an entropy of 216 bits of entropy - far more than a typical password needs these days (38 characters with a mix of upper and lower case gives 52^38 ≈ 2^216). However, looking at that, someone might suspect that my password isn't really random at all and might realize that it is just the rot13 transformation of site name + my name . Therefore the reality is that there is no entropy in my password at all, and anyone who knows how I generate my passwords will know what my password is to every site I log in as. This is an extreme example but I hope it gets the point across. Entropy is determined not by what the password looks like but by how it is generated. If you use some rules to generate a password for each site then your passwords might not have any entropy at all, which means that anyone who knows your rules knows your passwords. If you use lots of randomness then you have a high entropy password and it is secure even if someone knows how you make your passwords. Entropy calculators make assumptions about how the passwords were generated, and therefore they can both disagree with eachother and also be wildly wrong. Also, XKCD is always applicable when it comes to these sorts of things.
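The "52^38 ≈ 2^216" figure in the answer is just the naive charset-times-length estimate that most online checkers compute. The short sketch below is my own addition; it reproduces that calculation and makes the point explicit: the checker can only score what the password looks like, not the rule that produced it.

import math

password = "frphevgl.fgnpxrkpunatr.pbzPbabeZnapbar"  # the rot13 example above

# What a naive checker assumes: every character drawn uniformly at random
# from the apparent character set.
charset_size = 52                       # upper + lower case letters (ignoring the dots)
naive_bits = len(password) * math.log2(charset_size)
print(f"naive estimate: {naive_bits:.1f} bits")   # about 216.6 bits, as above

# What is actually true if the generation rule is "rot13(site + my name)":
# to someone who knows the rule, there is nothing left to guess.
print("entropy of the generation process: 0 bits")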
{ "source": [ "https://security.stackexchange.com/questions/193092", "https://security.stackexchange.com", "https://security.stackexchange.com/users/185947/" ] }
193,183
Firstly sorry, maybe dumb question, but I have one service running on my server which can be operated only by telnet (port 23), but I know that telnet is insecure, so I blocked port 23 in iptables except loopback interface (to be not accessible from internet, but only from localhost). So my idea is that I connect to the server using SSH and then in SSH session I will connect to telnet localhost 23 , so I wonder if it is safe or if it can be sniffed.
The traffic cannot be sniffed. It is not ideal - you're adding extra steps to arrive at a secure connection, so performance will suffer - but it is safe from sniffing at least at the network level. Obviously, if the server is compromised, the traffic can be sniffed, but you will have other problems by then. Since someone asked me to elaborate, here is an edit: It is not ideal because you have to take two additional steps to initiate a connection and secure it. One is prohibiting every incoming connection to the telnet service, and the second is working around that prohibition. That leaves more room for errors than simply using SSH in the first place. It's not ideal, but sometimes it is the best of the valid options.
{ "source": [ "https://security.stackexchange.com/questions/193183", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54479/" ] }
193,310
To start with: This is in consideration of in-body implants. The answer should (initially) assume there is no record of your DNA sequence mapped by a third party yet. The question boils down to using your own DNA sequence as private key for communication to hypothetical in-body augmentations/devices. Edit: Further clarification: The device would only be controlled from outside the body, not automation etc. A laptop or (god help us) and 'app' for example could send it commands. A DNA Sequence as we understand it, is unique to each person. Is this already possible, is it even a worthwhile investigation if not, and what might be the areas to focus concern. DNA Sequencing Interesting related article Edit: I should add the reason for posing the question is that it may be a field we as humans move towards, and the biggest drawback I see is the concern of data protection and security around it.
A major problem with modern biometric authentication is the fact that it uses things like your fingerprint as a password. Unlike a real password, a fingerprint is not sufficiently secret. Biometric identities can be used for usernames, which merely need to be unique to you, but not passwords. There are a few issues with using DNA as a private key or any other secret value: You leave exact copies of your DNA on everything you touch. The human genome does not vary much, making brute force a risk. People who you are related to have an even more similar genome. DNA actually changes over time in an individual, so it is not static. You cannot revoke your DNA and change it after compromise. You leave your DNA everywhere A good private key or password is something you know , something which you need to voluntarily reveal. "Private key" implies that the key must remain secret and its knowledge allows someone to identify as the original holder. Unlike a password-encrypted private key on your laptop, you leave your DNA around everywhere. In your breath, excretions, skin cells that fall off in the millions, oil, hair, saliva, tears, etc. This makes it pointless for a private key that, as the name implies, must remain secret. From a recent case where DNA was used to incriminate an individual, three judges pointed out the risk to people's privacy when DNA left unintentionally can be used as evidence in a criminal trial. Regardless of the outcome of this review, the fact that we leave DNA is still there: The Majority’s approval of such police procedure means, in essence, that a person desiring to keep her DNA profile private, must conduct her public affairs in a hermetically sealed hazmat suit. Moreover, the Majority opinion will likely have the consequence that many people will be reluctant to go to the police station to voluntarily provide information about crimes for fear that they, too, will be added to the CODIS database.... Majority's holding means that a person can no longer vote, participate in a jury, or obtain a driver's license, without opening up his genetic material for state collection and codification. Unlike DNA left in the park or a restaurant, these are all instances where the person has identified himself to the government authority. It doesn't matter if the government decides whether or not this evidence is admissible. What matters is the fact that the evidence, in the form of DNA, is left over in the first place for anyone to take and analyze. Worse still, once your DNA is "compromised", it cannot be revoked! The (lack of) genetic variation in Humans Only a tiny fraction of DNA differs between each person. While DNA itself contains a lot of information, any prototypical human genome is going to be extremely close to your genome. If you use your genome for any sort of cryptographic purposes, the differences will be small enough that brute force becomes feasible. In other words, there's just too little difference between us. Not only that, but your family will have an even more similar genome. Even if random genetic variation were great enough to prevent exhaustive search, do you really want it to take nothing more than both your parents to give up their genome to make it possible to "calculate" your genome? Or your siblings? A large amount of human genetic differences comes from what's called SNPs, or Single Nucleotide Polymorphisms . These are individual bases in DNA which are known to vary between people. Currently, there are only a few hundred million known polymorphisms . 
While a hundred million may seem huge, people who are ethnically related will have far fewer genetic differences. A good key will differ equally between users, regardless of whether or not they are related by family or race. Jumping genes (transposable elements) There's another problem. If you are analyzing DNA down to each individual base pair, you'll find that it actually changes over time! Small self-reproducing elements called transposons are sequences that copy themselves and re-insert themselves in different areas of our genome. They do this slowly and rather randomly. Over time, this means that even our individual cells won't have the same DNA as they had when we were born. Likewise, even identical twins won't have perfectly identical DNA as a result of this phenomenon. This is not a problem for most modern genetic sequencing, which only counts specific variations on specific genes (called alleles ), but will be a problem for anything that requires exact accuracy down to the individual base pair. 40% of DNA is transposable! DNA as a unique identifier Now, what could your genome be used for? Identification and authentication. While it's very easy to discover your DNA sequence, it's quite impossible to copy it into another person's body. No matter how much I try, if a single cell of mine is extracted and its DNA analyzed, it will show the DNA I had at birth (ignoring jumping genes), not your DNA. This makes it possible to prove an individual is who they say they are, given sufficiently careful DNA examination. It's like a SSN, but much, much more difficult to use for identity theft. Knowledge of your DNA doesn't let you impersonate it. DNA is not the only thing that can be used for identification. The pattern of veins in your hand are unique to you, and unlike fingerprints or DNA, are not left all over everything you come into contact with. You need to explicitly place your hand on a vascular scanning device, which is less intrusive than a retina scan. This is actually a technique that is used in Japan . While this is still more private than DNA, it is still better used as a username than as a password, since it can still be obtained surreptitiously.
{ "source": [ "https://security.stackexchange.com/questions/193310", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99443/" ] }
193,393
My CFO received an email from a director at a financial institution advising that all traffic (inbound and outbound) from certain IP addresses should be blocked at the firewall. The director at the financial institution was advised by his IT department to send this mail. The list of addresses (about 40) was in an attached, password-protected PDF. The password was sent to my CFO by text message. I initially thought this was a malicious attempt to get our CFO to open an infected PDF, or a phishing/whaling attempt, but it seems legit. We have spoken to the IT department at the FI and they say it's genuine, but they can't (won't) provide any more information. Due to the nature of the relationship between my company and the FI, refusing isn't really an option. From what I can see most of the IP addresses appear to belong to tech companies. Does this approach strike you as suspicious? Is there some social engineering going on here? What could the nature of the threat be?
If you spoke to the FI on a separate channel, you actually spoke to the FI, and they know about this, then by definition, it is not a phish . What strikes me as odd is "but they can't (won't) provide any more information", and "refusing isn't really an option". These 2 facts cannot co-exist if you are a separate entity from the FI. Your push-back is simple: your firewall policy requires a legitimate reason along with an end-date for the rule to be reviewed/removed. You don't just add firewall rules because someone outside of the organisation told you to. The FI has no idea if blocking those IPs may impact your operations. what effect is this rule supposed to have? how long does the rule need to exist? who (named individual) owns this rule on the FI side? what remedies are expected if the rule has a negative effect on operations? what effect will there be between your companies if the rule is not implemented exactly as requested? You will not add the firewall rule without knowing what the impact is , either positively or negatively. If they want greater control over your firewalls, then they can supply and manage your firewalls for you. On the other hand, if they own you and the risks, then they take on the risks of this change, so then just add the rules. As for a Director sending this request, it's not so strange. When you need someone to do something, you have the person with the most clout make the request. The Director may have no idea what a firewall is, but the request is being made in that person's name. I am also curious why so much clout is needed, though. It seems like they want to pressure you into doing it while not having to explain themselves. Don't let them dictate your policy and how you best protect your company.
{ "source": [ "https://security.stackexchange.com/questions/193393", "https://security.stackexchange.com", "https://security.stackexchange.com/users/186301/" ] }
193,429
Subresource integrity basically lets me know that a resource I'm about to download is valid, because the hash of its contents matches what I expect. But this assumes that I'm already running on some trusted and verified code. If a hacker has compromised the server serving resources, then they could easily just replace the root resource with hashes of their own malicious code (or just bypass integrity checks altogether). So how do subresource integrity checks help at all? And how would a client go about verifying the root resource?
Subresource integrity is not about protecting your own code of the web application against modification. What SRI is intended to do can be seen from the description of the goals : Compromise of a third-party service should not automatically mean compromise of every site which includes its scripts. Thus, it is about protecting your use of resources located at sites which are not under your control. SRI gives back some control even if code from a third party site is included. It does not offer availability but it offers integrity, i.e. protection against unwanted modifications by the third party (or some hacker which took over this party).
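For readers who have not used SRI: the page author embeds a cryptographic hash of the expected third-party file in the integrity attribute of the script or link tag, and the browser refuses to run the resource if the fetched bytes hash to anything else. The snippet below is a small illustrative Python helper I am adding (not part of the original answer) that computes such a value for a local copy of the file.

import base64
import hashlib

def sri_value(path, algo="sha384"):
    # SRI values have the form "<algo>-<base64 of the raw digest>",
    # computed over the exact bytes of the file.
    with open(path, "rb") as fh:
        digest = hashlib.new(algo, fh.read()).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Used in HTML roughly like:
# <script src="https://cdn.example/lib.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_value("lib.js"))

If the CDN (or an attacker who compromised it) later serves different bytes, the hash no longer matches and the browser blocks the script, which is exactly the "compromise of a third-party service" scenario the goals document describes.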
{ "source": [ "https://security.stackexchange.com/questions/193429", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49767/" ] }
193,465
I am working for a software vendor and we deliver to our clients a turnkey solution that the client just has to install on his server to use it. This solution is a web service that uses HTTPS to communicate. This web service can be public (accessible from the Internet) or not (the client's choice). To enable HTTPS, we provide in our solution a self-signed certificate (this certificate is the same for all our different clients but specific to our solution). Now, a client is asking us if we could deliver the solution with a trusted SSL certificate (to avoid ERROR_SELF_SIGNED_CERT in their browser when connecting to the service). My question is: is it possible to provide a trusted certificate in our solution? If yes, how? If no, what can we suggest to the client to answer his request?
{ "source": [ "https://security.stackexchange.com/questions/193465", "https://security.stackexchange.com", "https://security.stackexchange.com/users/186362/" ] }
193,508
I recently wanted to see what happens when I change my local time to something obviously wrong. I tried the year 2218, so 200 years in the future. The result: I couldn't access any website anymore (I didn't try too many, though). I got this error: I guess NET::ERR_CERT_DATE_INVALID means that an HTTPs certificate is not valid. But usually there is an "advanced" option that allows me to ignore it. Not so here. Also, I wonder why it says "your clock is ahead" - if chrome knows the correct time, why doesn't it take this for comparing? Coming to my question: How important is local time for security? If an attacker can arbitrarily change the system time, which kinds of attacks allows this? Are there reported cases where time manipulation was a crucial part?
You have a bunch of questions rolled in there. I guess NET::ERR_CERT_DATE_INVALID means that an HTTPS certificate is not valid. Yes. Here is the cert for help.ubuntu.com: You'll notice that it has Valid From and Valid Until dates; if you try to access a site protected by this cert outside of these dates, your browser will complain. The reason certs expire is (among other reasons) to force webmasters to keep getting new certs using the latest crypto and other new security features in certificates. When your browser is trying to decide if it trusts a certificate, it uses the system clock as the definitive source of truth for time. Sure, it'll try to use NTP, but if you (the admin user) have explicitly told it that the NTP servers are wrong, well, you're the boss. If an attacker can arbitrarily change the system time, which kinds of attacks allows this? Let's consider personal computers and servers separately. I haven't done any research here, just off the top of my head.
Personal Computers
Users often play games with their system clock to get around "30-day trial" type things. If you're the company whose software is being used illegally this way, then you would consider it a security issue. Spoofed websites. It's much easier to hack old expired certificates -- maybe they used 10-year-old crypto that is easily cracked, or maybe the server was compromised 6 years ago but the CAs don't track revocation info for that long (idea credit: @immibis' answer). If an attacker can change your system clock then you won't see the warnings.
Servers
Logging. When investigating a security breach, if your servers' clocks are out of sync, it can be very difficult to piece together all the logs to figure out exactly what happened and in what order. Logins. Things like OTP 2-factor authentication are usually time-based. If one server's clock is behind another server's, then you could watch someone enter an OTP code, then go use it against the server that's behind, because that code won't have expired yet.
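If you want to see the dates the browser is comparing against your clock, you can pull them straight out of the TLS handshake. The following is a small standard-library Python sketch I am adding for illustration; it connects to a host and prints the certificate's validity window.

import socket
import ssl
from datetime import datetime, timezone

def cert_validity(host, port=443):
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port), timeout=10),
                         server_hostname=host) as tls:
        cert = tls.getpeercert()
    not_before = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notBefore"]), tz=timezone.utc)
    not_after = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return not_before, not_after

start, end = cert_validity("help.ubuntu.com")
print("valid from", start, "until", end)
print("system clock says", datetime.now(timezone.utc))

If your clock reports a date outside that window (and the year 2218 certainly is), validation fails exactly as Chrome showed.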
{ "source": [ "https://security.stackexchange.com/questions/193508", "https://security.stackexchange.com", "https://security.stackexchange.com/users/3286/" ] }
193,651
So I was going through my email and accidentally clicked on a suspicious link. It was a quickmessage.io link, which I had no clue what it was. When I clicked on it, my anti-virus came up and blocked it from accessing it, saying that the link may be harmful and may want to steal your info. I clicked off it and didn't go further than that. I looked into it and apparently, it's an IP tracker website. Now I'm scared that whoever it was now has my IP address and maybe my home address. Is it possible to get one's home address through the IP address? Is it possible that they have it, even if I clicked off it with my anti-virus? I went into shock mode and straight away downloaded a VPN. Am I safe?
First: almost every single site out there is an "IP logger". Every server logs at least this information: IP address of the client Browser type and version Operating system Which site they came from (the Referer ) So, not only does this site have your IP address, but each site you ever visited has your IP address in their own logs. A few, very few sites won't log any information, but they are a negligible minority. But you don't need to be paranoid. The IP address alone is not enough to get your name, your home address and the kind of car you drive. It's possible to correlate information and get close to that, but it's not something you will have to be worried about, unless someone is being paid to track you specifically. It's expensive, takes a lot of work and time, and does not always work, so don't expect a full tracking mode to be started just for you because you clicked a link. Concerning GDPR : 6.1 Processing shall be lawful only if and to the extent that at least one of the following applies: f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party I am not a lawyer, but in sysadmin circles, it seems that protecting your service or a third party from fraud or security violations are legitimate reasons to log an IP address, and thus are legal under GDPR.
{ "source": [ "https://security.stackexchange.com/questions/193651", "https://security.stackexchange.com", "https://security.stackexchange.com/users/186544/" ] }
193,877
I have one of those questions that rely on the rule sets for DNS lookup. Let us say Person A owns the site https://www.example.com . A different person, Person B, not associated with A, attempts to register https://sub.example.com with the local registry. Will the registry allow this? Or is there an implicit understanding that these domain names are linked, and can't be obtained by third parties? The reason I ask is that my university https://www.sydney.edu.au supposedly sent me a link in an email authored by [email protected] and this link directs me to https://canvas.sydney.edu.au/ . This looks bad to me. But maybe DNS rules only allow https://www.sydney.edu.au to have the associated domain https://canvas.sydney.edu.au . Otherwise, if any person (e.g. a Bad Person) can register https://badsite.sydney.edu.au and DNS lets it go through... then there is a hole in the DNS world that is made for badness.
Welcome to Security! The case of educational/government institutions is a particular case of subdomaining. Basically ICANN, which governs the Internet's top-level names, delegated management of the .au TLD to the Australian government (to put it simply). But since .edu and .gov (et similia) are owned by the US for historical reasons, Australia, like some other countries, had no choice but to manage its own dedicated educational subdomain under .au. Other examples are .gov.uk, .gov.it, etc., which made similar choices. If you use the Linux whois tool you can find interesting information. I have summarized its output:
:~> whois au
% IANA WHOIS server
% for more information on IANA, visit http://www.iana.org
% This query returned 1 object
domain: AU
organisation: .au Domain Administration (auDA)
address: Lv 17
address: 1 Collins St
address: Melbourne VIC 3000
address: Australia
:~> whois edu.au
Domain Name: EDU.AU
Registry Domain ID: D407400000002449554-AU
Registrar WHOIS Server: whois.auda.ltd
Registrar URL: http://www.afilias.com.au
Last Modified:
Registrar Name: Afilias Australia Pty Ltd
:~> whois sydney.edu.au
Domain Name: SYDNEY.EDU.AU
Registry Domain ID: D407400000000057080-AU
Registrar WHOIS Server: whois.auda.org.au
Registrar URL: https://www.domainname.edu.au
Last Modified: 2018-07-17T00:59:06Z
Registrar Name: EDUCATION SERVICES AUSTRALIA LIMITED
Each subject in the chain is responsible for allowing parties to register subdomains. For example, if your Science department wants to register a subdomain, they must inquire with Education Services Australia.
Experiment: try to register usrlocalechelon.edu.au on GoDaddy: they are not allowed to sell you that.
Experiment 2: https://www.domainname.edu.au/ offers to register usrlocalechelon.edu.au for me for 41 AUD. This looks too open in my opinion, as you could claim to be an educational institution in Australia just by paying for a .edu.au domain. Comment: the site shows an "Eligibility details" step of registration, where I probably won't be able to register an Australian educational domain because I lack the authority to register under edu.au. I haven't bothered trying to push the wizard forward.
Experiment 3 (which answers the security question): https://www.domainname.edu.au/ does NOT allow me to register usrlocalechelon.sydney.edu.au because: Requires applicant to have national interests and responsibilities or be recognized and delivering services in more than one state or territory
Summarizing: you can never apply for your favourite sub-sub domain at a public registrar, because technical reasons require you to pass through the owner of the level-minus-one domain. DNS is hierarchical. But if the organization owning your third or fourth level domain (like sydney.edu.au in the example) has flaws in filtering domain applications, that is their own organizational problem and is not a flaw in the DNS system.
{ "source": [ "https://security.stackexchange.com/questions/193877", "https://security.stackexchange.com", "https://security.stackexchange.com/users/186762/" ] }
193,904
Many apps allow the user to authenticate with their phone number, by having the user enter it, and then sending an SMS with a code to be entered into the app. Very few (if any that I can find still active), simply present the SMS interface, and have the user send an SMS with a verification code to the server. I can think of a few reasons for this, but none that really seem to rule it out for me: Sending an SMS could cost the user, and without having local numbers for every country, it could cost a significant amount A user may want to sign in on a device that does not have SMS capabilities, but can have the SMS sent to their phone instead [iPod/Tablet etc.] (this could be mitigated by allowing the user to use both inbound or outbound for verification depending on the device capabilities) Users are very familiar with the receiving interface from other big name apps, and so it may feel more secure Does sending an SMS seem "dodgy" a bit like old-school scams that ask you to send a message to a number? It is not compatible with a desktop web version of the product None of these seems like a real reason not to do it, but for some reason the big names like WhatsApp, SnapChat, Facebook etc. all seem to avoid it. Can anyone think of any major reasons to not do this, or have any insights as to why it is not more common?
It's quite easy to send an SMS message that appears to come from the phone number of your choice without actually controlling that number. And so sending an SMS from a number doesn't verify your ID in the same way as receiving an SMS to a number.
{ "source": [ "https://security.stackexchange.com/questions/193904", "https://security.stackexchange.com", "https://security.stackexchange.com/users/186788/" ] }
194,048
If you have a router with default login and password for the admin page, can a potential hacker gain access to it without first connecting to the LAN via the WiFi login?
This may be possible using cross-site request forgery . In this attack, the attacker triggers a request to your router, for example by including an image on his site: <img src="http://192.168.1.1/reboot_the_router?force=true"> When a user visits his site, this triggers a request to the router. The attacker's site can trigger requests, but not view responses. Not all routers are vulnerable to this. Setting a non-default password certainly protects against CSRF [1]. There are plans to block such requests in the browser, but these haven't been implemented yet. Edit [1]: Setting a non-default password protects against CSRF in some cases. The attacker can no longer forge a request to log in using the default credentials. However, if you are already logged in to the router, he can use your current session.
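As an illustration of one possible defence (not how any particular router actually works), a state-changing admin endpoint could require POST and reject cross-site requests whose Origin header does not match the device's own address. The sketch below uses only the Python standard library; the 192.168.1.1 origin is a hypothetical example, and in practice an anti-CSRF token tied to the session is the more common and more robust defence.

from http.server import BaseHTTPRequestHandler, HTTPServer

TRUSTED_ORIGINS = {"http://192.168.1.1"}  # hypothetical address of the router's own admin UI

class AdminHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Browsers attach an Origin header to cross-site POSTs, so a mismatched value is a
        # strong hint the request was forged; this sketch also rejects requests without it.
        origin = self.headers.get("Origin", "")
        if origin not in TRUSTED_ORIGINS:
            self.send_error(403, "Cross-site request rejected")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")  # placeholder for the real admin action

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AdminHandler).serve_forever()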
{ "source": [ "https://security.stackexchange.com/questions/194048", "https://security.stackexchange.com", "https://security.stackexchange.com/users/186915/" ] }
194,142
I'm using 1Password and I've seen that 1Password allows you to store 2FA tokens in the same place where you store the password. I don't like the idea of having everything in the same place: if someone steals my 1Password password, they could access my account and get both the password and the security tokens. At the moment I'm using Google Authenticator for the 2FA and 1Password for the passwords. Is it a good idea to keep them separated to increase security? Does it make any sense?
I work for 1Password, and I wrote exactly about this question when we introduced the feature. The answer depends on what security properties you actually want from time-based one time passwords (TOTP). The "second factorness" of TOTP is one of several security properties it offers, and it may be the least important in many cases. Don't get led astray by the fact that this all goes under the term "2FA", as if that were the only security benefit you get from these schemes. Security benefits of TOTP (contrasted with typical password use): I am going to list a few of the security properties you get with TOTP and contrast them with typical password use. Long term secret isn't transmitted during authentication. With TOTP you get a long term secret that is only transmitted when you enroll (typically via the QR code). The long term secret is not transmitted when it is actually used. (This is unlike typical password usage, where the password is transmitted over the net, and so depends on other protections, such as TLS.) This also means that the long term secret can't be phished (although the numeric codes can be). Long term secret is unguessable. The long term secret is generated by the server when you first enroll, and so it is generated up to the service's standards of randomness. Again, this is unlike typical password use with human-created passwords. Long term secret is unique. You will not end up reusing the same TOTP long term secret across various services. Again, this is unlike typical password use, where people reuse passwords. Oh yeah. And you put the long term secret on "something you have", if for some reason that is important to you. In most cases where TOTP is deployed, it is done so because of properties #2 (unguessability) and #3 (uniqueness). Indeed, when Dropbox first introduced TOTP on their services, they spelled out their reasons as helping protect users who were reusing passwords. After the uniqueness and unguessability of the long term secrets, the next most important benefit (for most people) is that the long term secret isn't transmitted. This makes it harder to capture on a compromised network. Probably the least important of the security properties that TOTP gives us is the second factorness. I'm not saying that there is no benefit to it, but for the cases where most people are using TOTP, it is probably the least important. Contrasting with using a password manager well: in the above I listed four security properties of TOTP and contrasted them with typical password use. But now let's consider someone who is using a password manager to its full potential. If you are using a password manager well for some site or service, you will have a randomly generated (and so unguessable) password for that site and you will have a unique password for that site. And so the use of TOTP doesn't really add a great deal in terms of those two security properties. If we look at property #1 (long term secret not transmitted), TOTP still offers some additional security, even if you are using a password manager. However, using a password manager does reduce the chance of getting phished, and so the gain of TOTP, while real, is not as great as it would be for someone not using a password manager. The only thing left is #4. If the two factorness really is why you value TOTP, then don't keep the long term secret in 1Password. But for most people, the value of TOTP comes from having a strong and unique long term secret that is never transmitted.
Look at the actual security properties: I recommend that you evaluate what you really get out of TOTP (instead of getting caught up in the whole 2FA rhetoric), and then consider the tradeoffs. I'll bet that if things like TOTP were called "Unique Secret Authentication" instead of "Two Factor Authentication", the question would never have come up.
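To make the distinction between the long term secret and the 6-digit codes concrete, here is a minimal sketch using the third-party pyotp package (an assumption on my part; this is not 1Password's own implementation, it just shows where the actual secret lives):

import pyotp  # third-party package, assumed installed

# The long term secret: created once at enrollment, usually shown to you as a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()          # the short-lived 6-digit code, changes every 30 seconds
print(secret, code)
print(totp.verify(code))   # what the server does with the code you type in

Whoever holds the secret can mint valid codes forever, which is why the interesting question is where that secret is stored, not whether the codes live next to your password.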
{ "source": [ "https://security.stackexchange.com/questions/194142", "https://security.stackexchange.com", "https://security.stackexchange.com/users/121284/" ] }
194,166
While I understand the idea of SUID is to let an unprivileged user run a program as a privileged user, I have found that SUID usually doesn't work on a shell script without some workarounds. My question is, I don't really understand the dichotomy between a shell script and a binary program. It seems that whatever you can do with a shell script, you can also do it with C and compile it into a binary. If SUID is not secure for a shell script, then it's also not secure for binaries. So why would shell scripts but not binaries be prohibited from using SUID?
There is a race condition inherent to the way shebang (#!) is typically implemented: The kernel opens the executable, and finds that it starts with #! . The kernel closes the executable and opens the interpreter instead. The kernel inserts the path to the script into the argument list (as argv[1]), and executes the interpreter. If setuid scripts are allowed with this implementation, an attacker can invoke an arbitrary script by creating a symbolic link to an existing setuid script, executing it, and arranging to change the link after the kernel has performed step 1 and before the interpreter gets around to opening its first argument. For this reason, all modern unices ignore the setuid bit when they detect a shebang. One way to secure this implementation would be for the kernel to lock the script file until the interpreter has opened it (note that this must prevent not only unlinking or overwriting the file, but also renaming any directory in the path). But unix systems tend to shy away from mandatory locks, and symbolic links would make a correct lock feature especially difficult and invasive. I don't think anyone does it this way. A few unix systems implement secure setuid shebang using an additional feature: the path /dev/fd/N refers to the file already opened on file descriptor N (so opening /dev/fd/N is roughly equivalent to dup(N)). The kernel opens the executable, and finds that it starts with #! . Let's say the file descriptor for the executable is 3. The kernel opens the interpreter. The kernel inserts /dev/fd/3 into the argument list (as argv[1]), and executes the interpreter. All modern unix variants including Linux implement /dev/fd , but most do not allow setuid scripts. OpenBSD, NetBSD and Mac OS X support it if you enable a non-default kernel setting. On Linux, people have written patches to allow it but those patches never got merged. Sven Mascheck's shebang page has a lot of information on shebang across unices, including setuid support. In addition, programs running with elevated privileges have inherent risks that are typically harder to control in higher-level programming languages unless the interpreter was specifically designed for it. The reason is that the programming language runtime's initialization code may perform actions with elevated privileges, based on data that's inherited from the lower-privilege caller, before the program's own code has had the opportunity to sanitize this data. The C runtime does very little for the programmer, so C programs have a better opportunity to take control and sanitize data before anything bad can happen. Let's assume you've managed to make your program run as root, either because your OS supports setuid shebang or because you've used a native binary wrapper (such as sudo). Have you opened a security hole? Maybe. The issue here is not about interpreted vs compiled programs. The issue is whether your runtime system behaves safely if executed with privileges. Any dynamically linked native binary executable is in a way interpreted by the dynamic loader (e.g. /lib/ld.so ), which loads the dynamic libraries required by the program. On many unices, you can configure the search path for dynamic libraries through the environment ( LD_LIBRARY_PATH is a common name for the environment variable), and even load additional libraries into all executed binaries ( LD_PRELOAD ). The invoker of the program can execute arbitrary code in that program's context by placing a specially-crafted libc.so in $LD_LIBRARY_PATH (amongst other tactics).
All sane systems ignore the LD_* variables in setuid executables. In shells such as sh, csh and derivatives, environment variables automatically become shell parameters. Through parameters such as PATH , IFS , and many more, the invoker of the script has many opportunities to execute arbitrary code in the shell script's context. Some shells set these variables to sane defaults if they detect that the script has been invoked with privileges, but I don't know that there is any particular implementation that I would trust. Most runtime environments (whether native, bytecode or interpreted) have similar features. Few take special precautions in setuid executables, though the ones that run native code often don't do anything fancier than dynamic linking (which does take precautions). Perl is a notable exception. It explicitly supports setuid scripts in a secure way. In fact, your script can run setuid even if your OS ignored the setuid bit on scripts. This is because perl ships with a setuid root helper that performs the necessary checks and reinvokes the interpreter on the desired scripts with the desired privileges. This is explained in the perlsec manual. It used to be that setuid perl scripts needed #!/usr/bin/suidperl -wT instead of #!/usr/bin/perl -wT , but on most modern systems, #!/usr/bin/perl -wT is sufficient. Note that using a native binary wrapper does nothing in itself to prevent these problems. In fact, it can make the situation worse, because it might prevent your runtime environment from detecting that it is invoked with privileges and bypassing its runtime configurability. A native binary wrapper can make a shell script safe if the wrapper sanitizes the environment. The script must take care not to make too many assumptions (e.g. about the current directory) but this goes with the territory. You can use sudo for this provided that it's set up to sanitize the environment. Blacklisting variables is error-prone, so always whitelist. With sudo, make sure that the env_reset option is turned on, that setenv is off, and that env_file and env_keep only contain innocuous variables. All these considerations apply equally for any privilege elevation: setuid, setgid, setcap. Recycled from https://unix.stackexchange.com/questions/364/allow-setuid-on-shell-scripts/2910#2910
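To illustrate the whitelisting idea (purely a sketch, not a drop-in replacement for a real compiled wrapper or for sudo's env_reset), a wrapper could throw away the caller's environment and pass only known-safe variables to the script it launches; the script path below is hypothetical:

import os

# Whitelist: everything not listed here is dropped, including LD_*, IFS and the caller's PATH.
SAFE_ENV = {
    "PATH": "/usr/bin:/bin",               # fixed, trusted value rather than whatever the caller set
    "LANG": os.environ.get("LANG", "C"),   # carried over only because it is considered harmless
}

SCRIPT = "/usr/local/libexec/privileged-task.sh"  # hypothetical target script

os.execve("/bin/sh", ["/bin/sh", SCRIPT], SAFE_ENV)  # replaces this process; nothing runs after this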
{ "source": [ "https://security.stackexchange.com/questions/194166", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96843/" ] }
194,313
Assume we want to protect a document against manipulation and forgery. So, we encode some sensitive information from the document and store it in a QR-code inserted in the document. Can we be sure that an attacker is not able to change or modify the data stored in the QR-code? And if it is possible to modify it, how difficult is it for an attacker to do so? In other words, how secure is a QR-code?
... how secure is a QR-code? Data in a QR code are kind of protected against accidental damage by having some error correction but they are not protected against deliberate manipulation. Also, an attacker might completely replace the QR code in the document with a different one.
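If the goal is to detect tampering, the protection has to come from cryptography rather than from the QR format itself. One possible approach (an illustration, not something built into QR codes) is to append a message authentication code to the payload before encoding it, using a key that only the verifying party knows:

import hmac, hashlib

KEY = b"secret key known only to the verifier"  # hypothetical; key management is out of scope here

def protect(payload: bytes) -> bytes:
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return payload + b"|" + tag.encode()         # this combined string is what goes into the QR code

def verify(qr_data: bytes) -> bool:
    payload, _, tag = qr_data.rpartition(b"|")
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(expected, tag)

print(verify(protect(b"document-id=1234;issued=2018-10-01")))  # True
print(verify(b"document-id=9999;issued=2018-10-01|forged"))    # False

Note that this only detects modification of the payload; it does not stop someone from replacing the whole QR code with their own unless the verifier actually checks the tag against the right key.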
{ "source": [ "https://security.stackexchange.com/questions/194313", "https://security.stackexchange.com", "https://security.stackexchange.com/users/187204/" ] }
194,336
Can we use E2EE to secure the communication between the bank and its customers in mobile banking applications? And what is the status of E2EE vs SSH regarding this matter?
{ "source": [ "https://security.stackexchange.com/questions/194336", "https://security.stackexchange.com", "https://security.stackexchange.com/users/186935/" ] }
194,345
From this answer about browser security : time to update if you really care about security So if all the other software and functions I need can work on a 32-bit OS, I guess the only reason to upgrade is the browser's security? Can you explain why browser security should be given top priority, when: Most of the websites I visit have an SSL certificate, Most of them are either big enough that I can trust that they can't be hacked, or small enough that I don't think it's profitable for the hackers, Windows and Windows Defender are up to date, I can smell fishy websites? I hope that this is not the overconfidence effect . And I hope that I'm not overconfident that I don't have the overconfidence effect. As always, statistics or a case study would make the answer more convincing.
Can you explain why browser security should be given top priority ... Because the browser processes lots of untrusted content from the internet. Of course, if you use any other programs which do this (like a mail client, maybe an office program, a PDF reader) you should keep these updated too, since vulnerabilities in these programs are a regular attack vector as well. ... Most of the websites I visit have an SSL certificate ... An SSL certificate says nothing about the trust you can have in a site. HTTPS only protects against sniffing and modification of the traffic during transport. An HTTPS site can serve malware just as much as a plain HTTP site can. Apart from that, "most of the websites" is not the same as "all of the websites". ... I can smell fishy websites? ... Even if you are confident in your ability to spot websites where the URL looks fishy (which might actually be overconfidence), I'm pretty sure that you will not know up front if a site you visit regularly got hacked and is serving malware (i.e. a Watering hole attack or other kinds of hacking of high-reputation sites to increase the number of victims) or if it is serving malicious ads which are outside the control of the website itself (i.e. Malvertising ). EDIT: After I wrote my answer the OP added the following to the question: ... Most of them are either big enough that I can trust that they can't be hacked, ... Too big to be hacked? While large websites usually employ better security than smaller ones, it does not mean they are unhackable. And sites with lots of customers are an especially lucrative target for attackers, since this also means lots of potential victims. Some examples: ... malicious ads on Forbes ... or ... New York Times and BBC hit by 'ransomware' malvertising or Study: One-third of top websites vulnerable or hacked . ... or small enough that I don't think it's profitable for the hackers, ... Too small to be hacked? That's not true either: attackers use automated tools to hack insecure CMS installations like WordPress or Django en masse, i.e. it is very cheap to take over a vulnerable site this way.
{ "source": [ "https://security.stackexchange.com/questions/194345", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94500/" ] }
194,353
Chinese police are forcing whole cities to install an Android spyware app, Jingwang Weishi . They are stopping people in the street and detaining those who refuse to install it. Knowing that I may be forced to install it sooner or later, what are my options to prepare against it? Ideally: make it appear like the app is installed and working as intended, without having it actually spy on me. The app is downloadable and documented . It basically sends the IMEI and other phone metadata, as well as file hashes, to a server. It also monitors messages sent via otherwise secure apps. I don't know whether it includes sophisticated anti-tampering features or not. I can't afford two phones nor two contracts, so using a second phone is not a viable option for me.
This may not be the answer you will be happy with, but how about abstaining from having any undesirable data inside your phone in the first place and instead using the right tool for the job ? According to Wikipedia: The app records information about the device it is installed on, including its [...] IMEI, the phone's model and manufacturer, and the phone number. The app searches the phone for images, videos, audio recordings, and files [...] So, instead of trying to tamper with this spyware in any way (which can get you in much bigger trouble), simply don't do anything suspicious on this phone and let this app do its job. Prepare against it by not having any photos, videos, audio recordings, files, etc., and instead use the right tool for the job . Use some other secure software/hardware to connect to the internet, use an encrypted email provider and do all of your communication through a computer where you can communicate safely, and store all of your files in a safe place (encrypted, somewhere on a computer or USB drive, etc.). Pretend to be an obedient citizen and use the right tool for the job to do whatever it is you don't want your government to find out. Some people may wonder why bother having a phone in the first place (and FYI, I asked the same question under the OP's question, for clarification). My answer is: to make phone calls (and have conversations which are not going to be considered suspicious by the Chinese government, in case they are tracking that too) to use it as a "red herring" - if the police ask you to give them your phone you won't have to lie to them that you have no phone, or worry that they will find out that you tampered with the app, or get in trouble if you don't have the app, etc. You'll just confidently give them the phone, with no "illegal" information on it, they will check it and walk away. You may, actually, even have some "red herring" files: pictures of nature, a shopping list (milk, eggs, etc.), etc., just so that they wouldn't suspect that you are deliberately not using your phone for such purposes, and harass you further. I mean, not long ago mobile phones didn't even have the ability to store pictures, videos, files, etc. Are you willing to put your life in danger simply because you want to have some files on your phone?
{ "source": [ "https://security.stackexchange.com/questions/194353", "https://security.stackexchange.com", "https://security.stackexchange.com/users/187246/" ] }
194,460
I noticed in the html of my router this parameter: form.addParameter('Password', base64encode(SHA256(Password.value))); So when I type in the password passw I get this via sslstrip : 2018-09-25 21:13:31,605 POST Data (192.168.1.1): Username=acc&Password=ZTQ1ZDkwOTU3ZWVjNzM4NzcyNmM2YTFiMTc0ZGE3YjU2NmEyNGZmNGNiMDYwZGNiY2RmZWJiOTMxYTkzZmZlMw%3D%3D Is this hash easy to crack via bruteforce/dictionary? I am still a beginner, but that looks like double encryption to me. Also, is there some faster way of getting this password than cracking it?
It's a base64-encoded, unsalted SHA-256 hash. It's not double encryption, but merely an extra encoding. An unsalted hash means it's trivial to just search for the (decoded) hash on Google, and that will probably turn up the result.
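For what it's worth, the captured value is consistent with that JavaScript: URL-decoding and then base64-decoding it yields a 64-character hex string, i.e. base64encode applied to the hex digest of SHA-256. A dictionary attack is then just a matter of re-doing that transformation for candidate passwords (this assumes the page's SHA256() function returns a lowercase hex digest, which is typical but not guaranteed):

import base64, hashlib, urllib.parse

captured = "ZTQ1ZDkwOTU3ZWVjNzM4NzcyNmM2YTFiMTc0ZGE3YjU2NmEyNGZmNGNiMDYwZGNiY2RmZWJiOTMxYTkzZmZlMw%3D%3D"
target_hex = base64.b64decode(urllib.parse.unquote(captured)).decode()
print(target_hex)  # the raw, unsalted SHA-256 digest in hex

for candidate in ["passw", "password", "admin"]:  # stand-in for a real wordlist
    if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
        print("match:", candidate)

Because the hash is unsalted and fast, dedicated cracking tools can also try a very large number of candidates per second against it, so treat any password sent this way over plain HTTP as effectively exposed on the local network.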
{ "source": [ "https://security.stackexchange.com/questions/194460", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
194,600
What are the similarities and differences between a "checksum" algorithm and a "hash" function? Can they be used instead of each other? Or are their usages different? For example, for verifying the integrity of a text, which one is better to use? And if they are different, what are specific algorithms for each one? That is, which algorithms are appropriate as a "checksum" and which ones as a "hash" function?
A checksum (such as CRC32) is meant to detect accidental changes. If one byte changes, the checksum changes. The checksum is not safe to protect against malicious changes: it is pretty easy to create a file with a particular checksum. A hash function maps some data to other data. It is often used to speed up comparisons or create a hash table. Not all hash functions are secure and the hash does not necessarily change when the data changes. A cryptographic hash function (such as SHA1) is a checksum that is secure against malicious changes. It is pretty hard to create a file with a specific cryptographic hash. To make things more complicated, cryptographic hash functions are sometimes simply referred to as hash functions.
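A small sketch of the difference, using Python's standard library (CRC32 as a typical checksum, SHA-256 as a cryptographic hash):

import zlib, hashlib

data = b"The quick brown fox jumps over the lazy dog"

print(format(zlib.crc32(data), "08x"))   # 32-bit checksum: fine for spotting accidental corruption
print(hashlib.sha256(data).hexdigest())  # 256-bit cryptographic hash: also resists deliberate tampering

With only 32 bits and a simple linear structure, it is easy to craft a second input with the same CRC32, which is exactly why you pick a cryptographic hash (or better, an HMAC or signature) when an attacker is part of your threat model.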
{ "source": [ "https://security.stackexchange.com/questions/194600", "https://security.stackexchange.com", "https://security.stackexchange.com/users/187204/" ] }
194,608
I've been doing some research into SSL for a paper for school, so please forgive my ignorance and lack of experience in this subject. While doing my research I have noticed that when you do an nslookup on some websites and then enter the resulting IP address in Chrome, what happens varies. When I nslookup google I get 172.217.7.174, and when you enter that it redirects to google.com as expected. But when you do the same for yahoo.com it takes you to an empty page. When you do it for microsoft.com (23.100.122.175) it takes you to a "your connection is not private" page, and from there you can see the information for Microsoft's wildcard cert when you click NET::ERR_CERT_COMMON_NAME_INVALID. So my questions are: Isn't the private key for a cert supposed to be kept secret? Is it okay to have your website, like Microsoft's, display the cert in its entirety? If someone gets your cert, can't they spoof your websites? Shouldn't you be implementing some sort of redirect for IP addresses, like a 301 or something?
{ "source": [ "https://security.stackexchange.com/questions/194608", "https://security.stackexchange.com", "https://security.stackexchange.com/users/187608/" ] }
194,667
A while ago, I was opening the Facebook app on Android and got the message "Session expired. Please log in again.". I then tried logging in with my current password and was able to log in to my account successfully. A long time ago, when I created this account, I had set up two-factor authentication for it, and when I checked after logging in, it was still active. After that, I opened my laptop and Chrome, then went to Facebook, just to find out that the session on the PC was also logged out. After I logged back in, I went to security under settings and checked the section "When you're logged in", and I saw that all of the past login entries were gone. The only entries I had were the logins on my phone and my laptop (which also appeared to be my trusted devices). I started to wonder whether someone had tried (and succeeded?) to access my account and then logged out all current sessions. However, I did not get any suspicious prompt on my phone to authenticate an unusual login (like "Did you just log in near location xxxxx?"), and also no warning email to my registered email address telling me about my account being accessed on an unrecognized browser or computer. Tl;dr: Facebook account suddenly got logged out of all devices, password was not changed, login entries are gone, no email warning about the account being compromised, no two-factor authentication prompt showed up. My questions are: Is there any chance that someone successfully got into my account? If yes, then how could they bypass the two-factor authentication? Is this incident normal or should I take security actions? Thank you!
Facebook reported a data leak today and forced a large number of accounts to log off as a precaution. Source: NY Times and Facebook . That NYT article says "The company forced more than 90 million users to log out early Friday, a common safety measure taken when accounts have been compromised." Additional article from The Hacker News - "unknown hacker or a group of hackers exploited a zero-day vulnerability in its social media platform that allowed them to steal secret access tokens for more than 50 million accounts" and "Facebook has already reset access tokens for nearly 50 million affected Facebook accounts and an additional 40 million accounts, as a precaution"
{ "source": [ "https://security.stackexchange.com/questions/194667", "https://security.stackexchange.com", "https://security.stackexchange.com/users/184642/" ] }
195,063
A few days ago I got an email from a hacker supposedly using an email address of mine from my own email domain (he was using the same email address in both TO and FROM), and it contained part of a password I use to purchase items with this particular email address (but not the one associated with the email server at HostGator). It threatened me with bogus claims and demanded a ransom. I checked haveibeenpwned and it returned 7 sites (e.g. the LinkedIn hack) and 1 paste. I read your site's answers and Troy's info but do not understand how to proceed. I am a small business man and not a coder.
This is a known scam. The scammers look up emails and cracked passwords in public leaks of site databases and then send an extortion email to people. The password is already out in the open, sorry. You should change the passwords on all sites using that password. On the up-side, this does mean that the person who is emailing you is not actually a hacker and they have not infected your computer. You should use a password manager to prevent this from being an issue in the future.
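One practical way to proceed: check whether a password you still use appears in known breaches before you keep using it anywhere. The sketch below uses the Pwned Passwords range API (k-anonymity: only the first five characters of the SHA-1 hash ever leave your machine), as that API is documented at the time of writing:

import hashlib, urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        "https://api.pwnedpasswords.com/range/" + prefix,
        headers={"User-Agent": "password-check-example"},  # a descriptive UA header is good manners for this API
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # a non-zero count means: stop using it anywhere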
{ "source": [ "https://security.stackexchange.com/questions/195063", "https://security.stackexchange.com", "https://security.stackexchange.com/users/188135/" ] }
195,134
I recently read the Rust language documentation and saw this : By default, HashMap uses a cryptographically secure hashing function that can provide resistance to Denial of Service (DoS) attacks. This is not the fastest hashing algorithm available, but the trade-off for better security that comes with the drop in performance is worth it. As someone with no background in systems languages, I've never heard of in memory attacks based on the wrong hashing algorithm. So I got some questions: How does the secure hashing algorithm prevent a DoS or any other attacks? When should I opt for a more secure hash over a faster one?
Sometimes applications use untrusted data as the key in a hash map. A simple implementation can allow the untrusted data to cause a denial of service attack. Hash maps are fast - O(1) - in the best case, but slow - O(n) - in the worst case. This is because keys normally end up in separate buckets, but some values can result in the same hash - a collision - and collisions are handled by a slower linked list. With random data, collisions will be uncommon. However, some implementations have a vulnerability where malicious data can cause many collisions, which makes the hash map slow. Some years ago there was a Linux kernel DoS due to this. The root cause of the Linux vulnerability was that the hashing was predictable. It was fixed by introducing a key into the hash function that a remote user would not know. I don't know exactly how Rust hash maps work, but I expect they use a similar kind of keyed hash. You should opt for a more secure hash any time you're using untrusted data as a key.
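The keyed-hash idea can be sketched with Python's standard library: BLAKE2 accepts a key, so two runs of a program with different random keys will bucket the same inputs differently, and an attacker who cannot see the key cannot precompute colliding inputs. (Rust's default hasher is a SipHash variant keyed randomly; the snippet below is only a conceptual illustration, not Rust's actual implementation.)

import hashlib, os

key = os.urandom(16)  # chosen at startup, never revealed to clients

def bucket(data: bytes, num_buckets: int = 8) -> int:
    digest = hashlib.blake2b(data, key=key, digest_size=8).digest()
    return int.from_bytes(digest, "big") % num_buckets

for name in [b"alice", b"bob", b"carol"]:
    print(name, bucket(name))  # the distribution changes from run to run because the key does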
{ "source": [ "https://security.stackexchange.com/questions/195134", "https://security.stackexchange.com", "https://security.stackexchange.com/users/146937/" ] }
195,252
Let's assume CAPTCHA is enabled on a system along with an account lockout control (after five consecutive failed attempts, the account is locked for 15 min). Is brute force still a realistic threat?
The protections you describe are good ones that you should consider, but there can still be weaknesses: Many CAPTCHAs can be solved by bots, or you can easily pay people to solve them en masse for you (there are companies selling that service). Account lockout is a good idea, but if you do it based on IP, someone with access to a botnet could retry logins on a single account from different IPs until they get in. Offline brute force is still a problem if your database gets leaked: if the attacker has access to the password hashes, they can try all they want on their own system. That's why you should use good hashing.
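For the offline case, "good hashing" means a deliberately slow, salted password hash rather than a plain fast hash. A minimal standard-library sketch (the parameter choices here are illustrative, not a recommendation; dedicated password hashes such as bcrypt, scrypt or Argon2 are generally preferred where available):

import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("guess1", salt, digest))                        # False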
{ "source": [ "https://security.stackexchange.com/questions/195252", "https://security.stackexchange.com", "https://security.stackexchange.com/users/179495/" ] }
195,482
I accidentally gave my USB to my friend and then I realized it had some important files of mine. Is there any way I can know if he got something from the USB?
No logs of file accesses are recorded on the USB drive itself. At best, you might be able to tell whether the files were changed by looking at the file timestamps, which can change just from opening them, depending on the program used to open them. But there is no way to determine, by looking at the USB drive, whether the files were copied.
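If you want to check the timestamps yourself, something like the following works; keep in mind that access-time behaviour varies a lot between filesystems and operating systems (FAT stores only a coarse access date, and many systems update access times lazily or not at all), so treat it as a weak hint, not proof. The path below is a hypothetical mount point and file name:

import os
from datetime import datetime

path = "/media/usb/important.docx"  # hypothetical

st = os.stat(path)
print("last modified:", datetime.fromtimestamp(st.st_mtime))
print("last accessed:", datetime.fromtimestamp(st.st_atime))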
{ "source": [ "https://security.stackexchange.com/questions/195482", "https://security.stackexchange.com", "https://security.stackexchange.com/users/188580/" ] }
195,805
I don't remember when this "accept/cancel cookie" button started to be used on websites. Why do they insist on getting users to click on this button? Can it do any harm to the user's PC, or can it be used to collect any private and sensitive data? The reason they mostly give is "for a better browsing experience on the website". Is it possible to use this as a trick for a possible hack? Also, my knowledge of cookies and web hacking is not good enough.
Technically, websites do not have to ask the user a question in order to use cookies (and browsers will accept them without asking). Furthermore, websites are not technically bound to the answer given by the user. Legally, that is another matter. In the European Union, websites are now required to ask the user for their consent before using tracking cookies or other means to collect personal data about the user. However, they do not have to ask for the user's consent to use cookies necessary to provide their service (such as session cookies). Thus, if a website asks you to allow cookies, it is in order to legally collect personal data about you. This data can be considered private or sensitive, depending on how the user views it. The formulation "For better browsing experience" usually means "In order for us to provide you targeted advertisement, which will earn us more money to make better content." or "In order for us to provide you targeted advertisement, so you will have (in theory) fewer irrelevant advertisements". A malicious website might not honor its legal obligations. It could ask for consent and not honor the answer, or it could dispense with asking the question in the first place. For more information on the law: GDPR on Wikipedia
{ "source": [ "https://security.stackexchange.com/questions/195805", "https://security.stackexchange.com", "https://security.stackexchange.com/users/120858/" ] }
195,834
Looks like CVE-2018-10933 was just released today and you can find a summary from libssh here Summary: libssh versions 0.6 and above have an authentication bypass vulnerability in the server code. By presenting the server an SSH2_MSG_USERAUTH_SUCCESS message in place of the SSH2_MSG_USERAUTH_REQUEST message which the server would expect to initiate authentication, the attacker could successfully authenticate without any credentials. I am trying to understand this more and its range of impact. Do operating systems like Debian and Ubuntu rely on libssh for SSH, and if they do, does that mean every server exposing SSH is vulnerable to this attack? Also, does OpenSSH rely on libssh, or are they two separate implementations? I tried looking for OpenSSH vs libssh but couldn't find what I was looking for. This vulnerability sounds like the worst-case scenario for SSH, so I am just surprised it hasn't been making headlines or blowing up. The summary of this vuln is vague, so I'm looking for any insight into the range of impact and in what scenarios I should be worried.
... does OpenSSH rely on libssh ... OpenSSH (which is the standard SSH daemon on most systems) does not rely on libssh. ... I tried looking for OpenSSH vs libssh ... Actually, a search for openssh libssh gives me as the first hit OpenSSH/Development , which includes the following statement about libssh: "... libssh is an independent project ..." Also, if OpenSSH were affected you can be sure that you would find such information at the official site for OpenSSH, which explicitly has a page about OpenSSH Security . ... Do operating systems like Debian, Ubuntu rely on libssh for SSH ... See the official documentation of libssh on who is using it (at least): KDE, GitHub... You can also check which available or installed packages on your own OS depend on libssh, e.g. for Debian and similar (e.g. Ubuntu) this would be apt rdepends libssh-4 or apt rdepends --installed libssh-4 . Note that the use of libssh does not necessarily mean that the product is automatically vulnerable. First, the problem seems to be relevant only when using libssh for an SSH server, not a client. And even in the server role a product is not necessarily affected; for example, GitHub seems to be unaffected even though they use libssh in the server role.
{ "source": [ "https://security.stackexchange.com/questions/195834", "https://security.stackexchange.com", "https://security.stackexchange.com/users/189035/" ] }
196,121
When starting an incognito/private browsing session, no cookies from other browsing profiles should exist. For example, if I am logged in to a site in my main browsing profile and then start a new private browsing session, I am not logged into that same site (cookies are not carried over). Assuming it is a new private browsing session, there should not be any existing cookies or sensitive information available at all. Does this also have the side effect of preventing or nullifying XSS attacks, since there is no sensitive data to steal? Or is this a false sense of security?
An XSS attack is not primarily about cookies. It is not about stealing sensitive data either. It is instead about executing attacker-controlled code on the client side within the context of the site you visit. What kind of harm can be done by this code depends on the actual site and context. Using a private browsing session will not prevent XSS by itself, but it might limit the impact of what harm XSS can do - i.e. it has no access to the cookies or other stored data from the non-private browser session. It might still do harm, though; again, this depends on the specific context and the site you visit.
{ "source": [ "https://security.stackexchange.com/questions/196121", "https://security.stackexchange.com", "https://security.stackexchange.com/users/189434/" ] }