source_id int64 (1 to 4.64M) | question stringlengths (0 to 28.4k) | response stringlengths (0 to 28.8k) | metadata dict |
---|---|---|---|
176,202 | There are lots of SSH clients one can use with Android, for example the JuiceSSH client, so I have a security concern: how can I know or verify that this app is not recording my credentials when I authenticate to my server? | This is just regular malware spam. The evil part of this message is likely the attached PDF it mentions. It likely contains an exploit which targets a vulnerability in one or more PDF readers and does something bad if opened with a vulnerable program. So do not open the attachment. The reason for the gibberish text in the email's source code is likely to confuse spam filters so they don't filter it. | {
"source": [
"https://security.stackexchange.com/questions/176202",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/92724/"
]
} |
176,213 | I have been reading up on different types of DDoS attacks recently and came upon DDoS distribution by type in 2017 , by Kaspersky Labs.
They list 5 different DDoS types: SYN, TCP , UDP, HTTP, ICMP In all other resources that I have come across so far, SYN DDoS and TCP DDoS attacks are used as complete synonyms, however Kaspersky Labs seems to differentiate between the two. When I search for TCP DDoS attack specifically, only information about SYN DDoS comes up. I guess my question could be rephrased as:
Is TCP DDoS attack always also a SYN attack? Or are there other ways to use TCP for DDoS purposes? | This is just regular malware spam. The evil part of this message is likely the attached PDF it mentions. It likely contains an exploit which targets a vulnerability in one or more PDF readers and does something bad if opened with a vulnerable program. So do not open the attachment. The reason for the gibberish text in the email's sourcecode is likely to confuse spam filters so they don't filter it. | {
"source": [
"https://security.stackexchange.com/questions/176213",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3233/"
]
} |
176,373 | Via Hacker News , I came across a Tweet implying that Facebook's iOS app routinely reads and transmits all content from the user's pasteboard. Leaving aside whether Facebook's app genuinely does this (which is a separate question), is this possible ? I had always naively assumed that an app couldn't access what was on my clipboard unless I explicitly chose to "Paste" into a native text view. Is that assumption wrong? What is the security model for the content of the clipboard on the two major phone OSes? (Or what are the security models, if it's handled differently between iOS and Android?) | Android Prior to Android 10 any app could freely register listeners to receive the clipboard contents whenever they changed. As of Android 10, only the current app with focus and any app set as the input method editor (i.e. the keyboard app) can read the clipboard. Previously common methods of creating background services that would listen to changes to the clipboard are no longer possible and background applications that used to rely on clipboard data must now implement workarounds where they receive the focus at least momentarily before they are able to read the clipboard. Additionally in Android 12 or later it is possible to enable a feature which alerts the user (in the form of a toast) whenever clipboard content is read by an app. iOS As confirmed in the comments by user 11684 , even Apple allows apps to read your clipboard (though only while they are in the foreground). Here is the link to the documentation that returns the data of the clipboard. Update in iOS 14:
The beta version of iOS 14 reportedly has a feature which alerts the user whenever any app reads data from the clipboard. | {
"source": [
"https://security.stackexchange.com/questions/176373",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29805/"
]
} |
176,440 | This is something interesting.
Try going to http://www.circaventures.com/ You will get a venture capital company. Now go to Google and search "Circa Ventures". The first result you get is the exact same domain, but the description is "medical website". Click it and you get to the same domain, but now a medical drug website is shown at exactly the same domain! One can of course see the previous website visited and accordingly force display of the other website, but why? Is this an SEO experiment? Or am I missing something? Other possible reasons? | This is obviously a spamming or scamming site, either set up on purpose or a hacked legitimate site. If visited without a Referer header it will show a seemingly innocent site: $ curl http://www.circaventures.com
...
<title>Circa Ventures | helping you close the loop</title> If visited with a Referer from a search engine it will show spam: $ curl -H "Referer: https://www.google.com/" http://www.circaventures.com
<frameset rows="*,0" framespacing="0" border="0" frameborder="NO">^M
<frame src="http://mantrshopo.com/redirect.php?z=cialis" noresize="" scrolling="auto">^M
</frameset> This seems to work for any Referer which contains Bing, Google, Yahoo or similar, i.e. even when using https://this-site-is-not-yahoo/ as Referer. Using a different Referer like https://this-site-is-not-stackoverflow/ instead will result in the seemingly innocent site. | {
"source": [
"https://security.stackexchange.com/questions/176440",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167217/"
]
} |
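The Referer-dependent cloaking described in the answer above can be reproduced without curl as well. Below is a minimal Python sketch (standard library only) that fetches the page twice, once without and once with a search-engine Referer, and prints the <title> of each response; the URL is the one from the question, and the site may of course no longer behave this way.

```python
import urllib.request

URL = "http://www.circaventures.com"  # the site from the question above

def fetch_title(referer=None):
    """Fetch the page, optionally with a Referer header, and return its <title>."""
    req = urllib.request.Request(URL)
    if referer:
        req.add_header("Referer", referer)
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    start, end = body.find("<title>"), body.find("</title>")
    return body[start + len("<title>"):end] if start != -1 and end != -1 else "(no title)"

print("No Referer:      ", fetch_title())
print("Search Referer:  ", fetch_title("https://www.google.com/"))
```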
176,473 | I have a text file (.txt) compressed in ZIP format protected by a password. I think it has only one line of text and I want to see the contents of this file. I tried fcrackzip but I think the password is more complicated than I imagine, so the question is: is it possible to see the content without needing to have the password of the file? I am not an expert in computer security but a somewhat absurd idea that comes to mind quickly is something like seeing the hexadecimal code of the file and trying to decipher it. | No . There are two ways of zip encryption, a classic one, which is weaker, and a newer one based on AES. In both cases the password is needed in order to decrypt the contents (i.e. it's not just UI, where you could be asked for a password without the program actually requiring it to read the file). So the process would involve breaking the password (which would be more or less complex depending on the algorithm used and how the password was used). At most, you would be able to obtain without decrypting, in addition to the filename, the CRC32 of the plain file. But although that would help if you already suspected what the content was, it probably won't be helpful here, even if it is just a line of text. | {
"source": [
"https://security.stackexchange.com/questions/176473",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167341/"
]
} |
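A small illustration of the point in the answer above: the central directory of a password-protected zip stores filenames, sizes and the CRC-32 of the plaintext unencrypted, while the file contents themselves stay unreadable without the password. This Python sketch uses only the standard zipfile module; the archive path is a placeholder.

```python
import zipfile

ARCHIVE = "protected.zip"  # placeholder path to the password-protected archive

with zipfile.ZipFile(ARCHIVE) as zf:
    for info in zf.infolist():
        # Filenames, sizes and the CRC-32 of the plaintext are stored
        # unencrypted in the central directory, so no password is needed here.
        print(f"{info.filename}: {info.file_size} bytes, CRC32={info.CRC:#010x}")
        # Reading the contents, however, fails without the password:
        try:
            zf.read(info.filename)
        except RuntimeError as exc:
            print("  content not readable:", exc)
```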
176,477 | This question concerns dictionary attacks conducted: Over the Internet, using programs like THC Hydra Via protocols such as HTTP, FTP and SMTP I believe I'm right in thinking that: a) due to the sophisticated layers of security they tend to employ, such an attack cannot be run successfully on the bigger sites (Facebook, Twitter, Gmail, Outlook and so on) without needing to mask your IP, channel the attack through the Tor network and distribute it among an army of botnets; b) that the efficacy of these attacks on smaller self-hosted sites is limited only by the competency of the person(s) running their servers. However, what about the gap that occupies the (arguably) larger space in between the two - the medium-to-small web hosting providers that the rest of the web relies on for uptime. On average, is the security of these organisations generally advanced enough to detect a guessing attack from a single IP address and permanently ban the address over those protocols? Is anonymising yourself during an attack on such a target just as much of a necessity as it would be when targeting the big sites? Or to put what I'm asking another way: has the security of the smaller web hosting organisations now become sufficiently advanced enough so as to make guessing attacks from a single machine, without anonymisation, entirely obsolete? I ask this because none of the write-ups I've seen on the topic (guides to the use of THC Hydra and similar programs for both dictionary and brute-force attack) so much as mention either anonymisation or the distribution of attacks with bots, and it's left me wondering just how necessary or unnecessary such steps are when doing so. Are there hackers that are actually getting anywhere without taking those measures? | No . There are two ways of zip encryption, a classic one, which is weaker, and a newer one based on AES. In both cases the password is needed in order to decrypt the contents (i.e. it's not just UI, where you could be asked for a password without the program actually requiring it to read the file). So the process would involve breaking the password (which would be more or less complex depending on the algorithm used and how the password was used). At most, you would be able to obtain without decrypting, in addition to the filename, the CRC32 of the plain file. But although that would help if you already suspected what the content was, it probably won't be helpful here, even if it is just a line of text. | {
"source": [
"https://security.stackexchange.com/questions/176477",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/120534/"
]
} |
176,545 | I know many ads can store third-party cookies, but what about reading cookies? If so, what stops them from reading the session id to perform session hijacking? | Any script included into a page can read all cookies for which the httpOnly attribute is not set. Access restrictions for scripts are not determined based on the domain the script was loaded from but only in which page it is loaded into. This means all scripts loaded into a page have the same access and control over this page, no matter what the origin of the script was. Regarding cookies this means that you need to protect any sensitive cookies like session ids with httpOnly if you have included third party scripts which are outside of your control and trust into your page. But including such scripts into a page working with sensitive data is a bad idea anyway, since such scripts can not only read cookies (unless httpOnly) but also extract information from forms (like login credentials) or change the client side application logic. See also Should I be worried of tracking domains on a banking website? . Note that these statements apply only to third party script which is directly included into the main page. If the script is instead only inside a third party iframe inside the main page it can neither read cookies on the main page nor access or modify the DOM on it. | {
"source": [
"https://security.stackexchange.com/questions/176545",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167416/"
]
} |
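To make the httpOnly advice from the answer above concrete, here is a minimal Python sketch (standard library http.cookies; Python 3.8+ is assumed for the SameSite attribute) that builds a Set-Cookie header page scripts cannot read via document.cookie, next to one that they can.

```python
from http import cookies

# Session cookie that page scripts (first- or third-party) cannot read:
session = cookies.SimpleCookie()
session["sessionid"] = "d4f1c2..."        # hypothetical session identifier
session["sessionid"]["httponly"] = True   # hidden from document.cookie
session["sessionid"]["secure"] = True     # only sent over HTTPS
session["sessionid"]["samesite"] = "Lax"

# Cookie without httpOnly: any script included in the page can read this one.
prefs = cookies.SimpleCookie()
prefs["theme"] = "dark"

print(session.output())   # Set-Cookie: sessionid=...; HttpOnly; Secure; SameSite=Lax
print(prefs.output())     # Set-Cookie: theme=dark
```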
176,548 | My computer has been a target of a Man-in-the-Middle attack. When I am using the same site for more than 5-10 minutes, occasionally that website gets replaced by a compromised website for a couple hours. Since I am using HTTPS, I immediately detect the attack, since the certificate is suddenly self-signed-- and I avoid sending sensitive data; however, the attacks have been getting more sophisticated lately. Furthermore, I need to be able to continue my work-- I need to be able to connect to the original website somehow. One odd thing is, that this attack is browser-specific. For example, when eBay.com has been compromised on Firefox, it continues working on Opera. However, after 5 minutes of working on Opera, it is compromised there too; but switching to IE allows me to continue working for another 15-20 minutes. It is never compromised when I am accessing it via command line (presumably to prevent me from using tools such as tracert). I have been able to obtain my system administrator's password, and admittedly it is only a moderately strong password; and, the router allows remote control; so possibly, the router has been hacked. I can change the router password; but first, I would like to confirm where the attack is coming from. Because if I don't, and the attacker detects my attempts to thwart him, then he can stop for a time, to make me think I've found where the problem is coming from, when I actually haven't. | Any script included into a page can read all cookies for which the httpOnly attribute is not set. Access restrictions for scripts are not determined based on the domain the script was loaded from but only in which page it is loaded into. This means all scripts loaded into a page have the same access and control over this page, no matter what the origin of the script was. Regarding cookies this means that you need to protect any sensitive cookies like session ids with httpOnly if you have included third party scripts which are outside of your control and trust into your page. But including such scripts into a page working with sensitive data is a bad idea anyway, since such scripts can not only read cookies (unless httpOnly) but also extract information from forms (like login credentials) or change the client side application logic. See also Should I be worried of tracking domains on a banking website? . Note that these statements apply only to third party script which is directly included into the main page. If the script is instead only inside a third party iframe inside the main page it can neither read cookies on the main page nor access or modify the DOM on it. | {
"source": [
"https://security.stackexchange.com/questions/176548",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167420/"
]
} |
176,559 | Our web application is being mimicked by another domain, https://www.djjpl.com.sg . On ping, it gives the same IP address as ours. Every git push to our server reflects on that domain. We want to stop that domain from mimicking our app. How can we prevent it? | Our web application is being mimicked by another domain ... The domain in question is configured to resolve to the same IP address as yours. That's why it looks like they mimic you when in fact it is simply the same physical server, only accessed by another name. But when using the URL from your question one gets a security warning in the browser: the certificate is for nrnsewa.com which does not match the name in the URL www.djjpl.com.sg. Since this warning will scare away most visitors I doubt that this is an attack but believe that it is simply a misconfiguration for www.djjpl.com.sg. How can we prevent it? You have no control what others configure for their domain. This includes misconfigurations where they accidentally configure the wrong IP address for their site in DNS. This includes also that they don't update their records if they no longer use a domain which means it might point to IP addresses which are used by others in the mean time. Anybody could configure their domain to point to an arbitrary IP address (including yours), both deliberately and by accident. But what you can do is refuse access to your site or show an error when the domain name in the TLS handshake (i.e. HTTPS-URL's) or the HTTP Host header do not match your site. How this is done depends on the specific server implementation you have. See for example How can I block requests with the wrong Host header set? . And even if you don't care about such misconfigurations, enforcing the correct Host header is still recommended since it prevents some attacks like DNS rebinding . | {
"source": [
"https://security.stackexchange.com/questions/176559",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167438/"
]
} |
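The answer above recommends refusing requests whose Host header does not match your own names. Below is a minimal Python sketch of that check using the standard http.server module; the allowed host names and port are placeholders, and a real deployment would normally enforce this in the web server or reverse proxy configuration instead.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_HOSTS = {"example.org", "www.example.org"}  # placeholder: your own names

class HostCheckingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        if host not in ALLOWED_HOSTS:
            # Request arrived under a foreign name (e.g. someone else's domain
            # resolving to this IP address) -- refuse to serve it.
            self.send_error(421, "Misdirected Request")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Served only for the expected Host header\n")

HTTPServer(("", 8080), HostCheckingHandler).serve_forever()
```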
176,641 | The patch for Meltdown is rumoured to incur a 30% performance penalty, which would be nice to avoid if possible. So this becomes a Security vs Performance risk-assessment problem. I am looking for a rule-of-thumb for assessing the risk of not patching a server or hypervisor. From reading the whitepapers , my understanding is that you definitely need to apply the patch if your machine: is a workstation that runs random potentially malicious code - including, it turns out , java script from random websites, is a VM that could potentially run malicious code (which essentially becomes the first case). is a hypervisor that runs untrusted VMs next to sensitive VMs (which essentially becomes the first case), My understanding is that the risk is (significantly) lower in the following cases: server running on dedicated hardware running a tightly-controlled set of processes in a tightly-controlled network (including not using a web browser to visit untrusted sites) VM running a tightly-controlled set of processes on a virtualization stack of other tightly-controlled VMs, all in a tightly-controlled network. Is that logic sound, or am I missing something? UPDATE: early adopters of the patch in Azure are reporting no noticeable slowdown , so this may all be moot. Related question: What are the risks of not patching a workstation OS for Meltdown? | Basically, if you run code from untrusted sources on a machine that has data you don't want that code to have access to, you need to patch. Desktop computers should be patched because they've got an unfortunate habit of encountering untrusted code; shared-hosting servers, particularly virtual private server hosts, must be patched, because Meltdown lets one user access every other user's data. Note that the Meltdown attack cannot be used to break out of a virtual machine . You can break out of a container, sandbox, or a paravirtualized system, but performing the Meltdown attack in a fully-virtualized system just gets you access to that VM's kernel memory, not the host's kernel memory. | {
"source": [
"https://security.stackexchange.com/questions/176641",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61443/"
]
} |
176,719 | Let's say I visit Twitter using HTTPS and a VPN First, I know that HTTPS is end-to-end encrypted, so no one except Twitter can know what data is sent, not even the VPN provider. Second, I know that when I am using a VPN no one can know who is the user , except the VPN provider. So, Twitter doesn't know the user , and the VPN provider doesn't know the data . Is this true? Am I 100% anonymous? | Twitter doesn't know the user If you have ever used that browser to connect to Twitter outside of the VPN then it is possible that twitter have used cookies or (even in the case of a complete browser data wipe) browser fingerprinting to identify you. Even if they haven't you should assume their ad providers have. no one can know who is the user, except the VPN provider. Anyone with visibility of entrance nodes and exit nodes to the VPN (ISPs, state actors etc) can apply packet matching techniques to try and identify traffic flows. You also have the risk of both Twitter and the VPN sharing the information they hold on you with other parties. | {
"source": [
"https://security.stackexchange.com/questions/176719",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167617/"
]
} |
176,803 | Canonical question regarding the 2018 Jan. disclosed Meltdown and Spectre Attacks. Other identical or significantly similar questions should be closed as a duplicate of this one. Main concerns What is speculative execution and what does it do? What is the difference between Meltdown and Spectre? How does Meltdown work? How does Spectre work? How do I protect myself? Is the KPTI/KAISER patch sufficient to protect my computer? The patch is claimed to induce a loss of performance; is it true? How much does this mean? Can/should I disable the new patch if I'm worried about performance losses? What are the risks? Which platforms and CPUs are vulnerable? I'm running a virtual machine/containers; to what extent am I vulnerable? Can I detect a Meltdown/Spectre attack on my computer? Are these attacks remote code execution vulnerabilities? Do I need to patch my software/website? Can I be affected while visiting a website? Can you explain this with an image? References: Meltdown research paper Spectre research paper CERT Notice CERT Meltdown and Spectre Side-Channel Vulnerability Guidance | This answer is an attempt at addressing simply the main concerns. The details here might not be exemplary accurate, or complete. I'll try to link to more detailed explanations when possible. What is speculative execution and what does it do? Speculative execution is a feature of modern processors that comes as an optimisation. To allow for the parallel treatment of instructions into the processor pipeline, instructions can be processed ahead of time by using techniques such as branch prediction, value prediction or memory prefetch. Instructions can then be executed out-of-order. This is a way to gain time by predicting correctly what happens next, instead of keeping the processor idle. When the prediction fails, the instructions are rolled back. What is the difference between Meltdown and Spectre? Meltdown exploits are globally easier to implement than Spectre. Meltdown takes advantage of memory reads in out-of-order instructions, Spectre acts on the branch prediction mechanism. Spectre allows for cross/intra process memory disclosure, Meltdown allows disclosure of kernel memory to the user-space processes (normally not accessible). Meltdown has a known software mitigation. Both rely on a cache side-channel attack, which is a measure of timing differences when accessing certain blocks of memory to deduce the information otherwise unknown. How does Meltdown work? In a nutshell, Meltdown works by asking the processor to load memory the user-space program should not have access to. Memory is read in an out-of-order fashion and put into cache. Using a side-channel attack (execution time measurement) on the cache, it is possible to infer the value of that memory location. How does Spectre work? Spectre works on a different level and does not allow access to kernel-space data from user-space. In this attack, the attacker tricks the speculative execution to predictively execute instructions erroneously. In a nutshell, the predictor is coerced to predict a specific branch result (if -> true), that results in asking for an out-of-bound memory access that the victim process would not normally have requested, resulting in incorrect speculative execution. Then by the side-channel, retrieves the value of this memory. In this way, memory belonging to the victim process is leaked to the malicious process. How do I protect myself? Update your operating system and/or hypervisor. 
The main operating system distributors have already released patches that implement the KPTI/KAISER behaviour. This is a means to reduce the attack surface by removing the mapping of kernel memory into user space where it is not strictly necessary for the processor to function. Microcode (firmware) updates are likely to be released at some point. Is the KPTI/KAISER patch sufficient to protect my computer? No. And it only covers Meltdown. The problem lies in the architecture design of the processors. To solve the problems completely (or even partially in the case of Spectre), at least a firmware update will be necessary. It is also possible that processors will have to be replaced by more recent models without this design flaw. The patch is claimed to induce a loss of performance; is it true? How much does this mean? Yes, but the loss incurred depends on the software workflow. Claims are that it could range from a 5% to 30% loss, depending on how system-call reliant your software is. Benchmarks with high kernel-to-user space transitions, such as databases, have been reported to be the most affected. This is still under investigation; preliminary reports have been made (see links).
However, since impacts vary by application, you should test the performance impact in your environment rather than relying on generic benchmarks. Preliminary results (independent??): Phoronix 1 Phoronix 2 RedHat Official announcements: Microsoft announcement Intel announcement Apple Can I/Should I disable the new patch if I'm worried about performance losses? What are the risks? No. You can, but it's definitely not recommended. First of all, if you're managing data from others, you would make your clients vulnerable to data theft. Then, the performance losses still have to be properly evaluated and are highly dependent on your specific workflow. It's not a straight 30% loss in performance. Should you disable KPTI, you run the risk of having your confidential data leaked. Which platforms and which CPUs are vulnerable? Most Intel CPUs are vulnerable to both attacks. AMD CPUs seem to be affected only by Spectre . Some ARM Cortex processors are affected, while all other ARM processors are not. PowerPC processors in the POWER 7, POWER 8, and POWER 9 families are affected, and others may be. (see links for details). I am running a Virtual Machine/Containers; to what extent am I vulnerable? As per Steffen Ullrich's answer , Meltdown attacks do not cross VMs; they only leak kernel memory to local processes. Spectre can work across VMs. Also, from Steffen again , Meltdown and Spectre work with containers, as containers rely on the host kernel. Can I detect a Meltdown/Spectre attack on my computer? It's very difficult, if even possible. Meltdown and Spectre both use a designed property of the CPU that is triggered by innocuous programs all the time, making malicious programs difficult to tell apart from benign programs. Are these attacks a remote code execution vulnerability? No, they aren't. To be able to apply this attack, the attacker needs to be able to execute code on the target host. Note however that if these are combined with other attack vectors, for example file upload or cross-site-scripting exploits, then there is a possibility of executing them remotely. Do I need to patch my software/website? In an ideal world, no. In reality, some browser vendors have already implemented reductions in timer accuracy to mitigate the side-channel attacks. KPTI should already provide a fair enough fix for programs using native system calls. Can I be affected while visiting a website? Yes, there's already a proof of concept of a JavaScript exploit for Spectre (only). Can you explain this with an image? No, but Randall Munroe can: [Source XKCD : https://xkcd.com/1938/ ] | {
"source": [
"https://security.stackexchange.com/questions/176803",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1574/"
]
} |
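As a practical complement to the mitigation discussion above: on Linux kernels recent enough to expose /sys/devices/system/cpu/vulnerabilities (roughly 4.15 and later), the kernel reports its Meltdown and Spectre mitigation status directly. A small Python sketch that prints it; on other platforms or older kernels the directory simply does not exist.

```python
import glob
import os

# Linux 4.15+ reports vulnerability/mitigation status per issue in this directory.
SYSFS = "/sys/devices/system/cpu/vulnerabilities"

if not os.path.isdir(SYSFS):
    print("Kernel does not report mitigation status (directory missing).")
else:
    for path in sorted(glob.glob(os.path.join(SYSFS, "*"))):
        with open(path) as fh:
            # e.g. "meltdown: Mitigation: PTI" or "spectre_v2: Vulnerable"
            print(f"{os.path.basename(path):15s} {fh.read().strip()}")
```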
176,806 | I see that Chrome and Mozilla have added mitigations into their javascript engines for the Spectre vulnerabilities (CVE-2017-5753 & CVE-2017-5715). However I cant find anything regarding javascript engines that run on the JVM for example Rhino and Nashorn. Is there any reason to think that these javascript engines are not capable of executing code that exploits the vulnerability? Thanks | This answer is an attempt at addressing simply the main concerns. The details here might not be exemplary accurate, or complete. I'll try to link to more detailed explanations when possible. What is speculative execution and what does it do? Speculative execution is a feature of modern processors that comes as an optimisation. To allow for the parallel treatment of instructions into the processor pipeline, instructions can be processed ahead of time by using techniques such as branch prediction, value prediction or memory prefetch. Instructions can then be executed out-of-order. This is a way to gain time by predicting correctly what happens next, instead of keeping the processor idle. When the prediction fails, the instructions are rolled back. What is the difference between Meltdown and Spectre? Meltdown exploits are globally easier to implement than Spectre. Meltdown takes advantage of memory reads in out-of-order instructions, Spectre acts on the branch prediction mechanism. Spectre allows for cross/intra process memory disclosure, Meltdown allows disclosure of kernel memory to the user-space processes (normally not accessible). Meltdown has a known software mitigation. Both rely on a cache side-channel attack, which is a measure of timing differences when accessing certain blocks of memory to deduce the information otherwise unknown. How does Meltdown work? In a nutshell, Meltdown works by asking the processor to load memory the user-space program should not have access to. Memory is read in an out-of-order fashion and put into cache. Using a side-channel attack (execution time measurement) on the cache, it is possible to infer the value of that memory location. How does Spectre work? Spectre works on a different level and does not allow access to kernel-space data from user-space. In this attack, the attacker tricks the speculative execution to predictively execute instructions erroneously. In a nutshell, the predictor is coerced to predict a specific branch result (if -> true), that results in asking for an out-of-bound memory access that the victim process would not normally have requested, resulting in incorrect speculative execution. Then by the side-channel, retrieves the value of this memory. In this way, memory belonging to the victim process is leaked to the malicious process. How do I protect myself? Update your operating system and/or hypervisor. The main operating system distributors already release patches to implement the KPTI/KAISER behaviour. This is a means to reduce the attack surface by removing the user memory mapping of the kernel that are not strictly necessary for the processor to function. Microcode (firmware) updates are likely to be release at some point. Are the KPTI/KAISER patch sufficient to protect my computer? No. And it only covers Meltdown. The problem lies in the architecture design of the processors. To solve the problems completely (or even partially in the case of Spectre), at least a firmware update will be necessary. It is also possible that processors will have to be replaced by more recent models without this design flaw. 
The patch is claimed to induce a loss of performance, is it true? How much does this mean? Yes, but the loss incurred is dependent of the software workflow. Claims are it could range from 5% to 30% loss, depending on how system call reliant your software is. Benchmarks with high kernel-to-user space transitions, such as databases, have been reported to be the most affected. This is still undergoing investigations, preliminary reports have been made (see links).
However, since impacts vary by application, you should test the performance impact in your environment rather than relying on generic benchmarks. Preliminary results (independent??): Phoronix 1 Phoronix 2 RedHat Official announcements: Microsoft announcement Intel announcement Apple Can I/Should I disable the new patch if I worry about performance losses? What are the risks? No. You can, but it's definitely not recommended. First of all, if you're managing data from others, you would make your clients vulnerable to data theft. Then, the performance losses still have to be properly evaluated and are highly dependent on your specific workflow. It's not a straight down 30% in performances. Should you disable KPTI, you run the risk of having your confidential data leaked. Which platforms and which CPU are vulnerable? Most of the Intel CPU are vulnerable to both attacks. AMD CPU seems to be only affected by Spectre . Some ARM Cortex processors are affected, while all other ARM processors are not. PowerPC processors in the POWER 7, POWER 8, and POWER 9 families are affected, and others may be. (see links for details). I am running a Virtual Machine/Containers, to what extent am I vulnerable? As per Steffen Ullrich's answer Meltdown attacks do not cross VMs, only leaks kernel memory to local processes. Spectre can work across VMs. Also, from Steffen again , Meltdown and Spectre works with containers, as containers relies on the host kernel. Can I detect Meltdown/Spectre attack on my computer? It's very difficult, if even possible to. Meltdown and Spectre both uses a designed property of the CPU that is triggered by innocuous programs all the time, making malicious programs difficult to tell apart from benign programs. Are these attacks a remote code execution vulnerability? No they aren't. To be able to apply this attack, the attacker need to be able to execute code on the target host. Note however that if these are combined with other attack vectors, for example file upload or cross-site-scripting exploits, then there is a possibility of executing them remotely. Do I need to patch my software/website? In an ideal world, no. In reality, some browser vendors already have implemented decrease in time accuracy measurements to mitigate the side-channel attacks. The KPTI should already provide a fair enough fix for programs using native system calls. Can I be affected while visiting a website? Yes, there's already a proof of concept of a Javascript exploit for Spectre (only). Can you explain this with an image? No, but Randall Munroe can: [Source XKCD : https://xkcd.com/1938/ ] | {
"source": [
"https://security.stackexchange.com/questions/176806",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167703/"
]
} |
177,040 | For instance do they know whether you're using an iPhone or Samsung? | Depends on the device and if you have taken any steps to hide it. Most devices by default put a lot of identifying information in the User-Agent header on outgoing HTTP/S requests. For HTTP requests these will be visible to anyone with wire access. For example for Android from here - Mozilla/5.0 (Linux; U; Android 4.0.3 ; ko-kr; LG-L160L Build/IML74K ) AppleWebkit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30 Mozilla/5.0 (Linux; U; Android 4.0.3 ; de-ch; HTC Sensation Build/IML74K ) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30 And for iPhones - Mozilla/5.0 (iPhone; CPU iPhone OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25 Mobile carriers/providers meanwhile have access to your IMEI which uniquely identifies your device. *As @Anders points out there are also service fingerprinting techniques. Any connection to a manufacturer related applications/service could be an indicator. As could profiling of data patterns (a simple approach would be fingerprinting device update files). I initially leaned towards user-agents because the original question asked about ISP's / this approach can be used no matter how the device is connected (i.e. IMEI is only visible to your carrier unless they forward it on. User-Agent would be visible on non-encrypted requests from any internet connection - WiFi, ADSL etc). | {
"source": [
"https://security.stackexchange.com/questions/177040",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/158113/"
]
} |
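To illustrate how much the User-Agent strings quoted in the answer above give away, here is a rough Python sketch that pulls the OS version and device model out of them with simple regular expressions; real traffic analysis would use a maintained UA-parsing library rather than ad-hoc patterns like these.

```python
import re

# User-Agent strings of the kind quoted in the answer above.
UAS = [
    "Mozilla/5.0 (Linux; U; Android 4.0.3; ko-kr; LG-L160L Build/IML74K) "
    "AppleWebkit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 6_1_3 like Mac OS X) "
    "AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25",
]

def classify(ua: str) -> str:
    """Very rough device guess from a User-Agent header."""
    m = re.search(r"iPhone OS ([\d_]+)", ua)
    if m:
        return "Apple iPhone, iOS " + m.group(1).replace("_", ".")
    m = re.search(r"Android ([\d.]+); [\w-]+; (\S+) Build", ua)
    if m:
        return f"Android {m.group(1)} device, model {m.group(2)}"
    return "unknown device"

for ua in UAS:
    print(classify(ua))
```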
177,049 | Are GPUs vulnerable to spectre/meltdown attacks, since they have most of what makes CPUs attackable? Is there any information in the VRAM, that would cause trouble if it was stolen? | First of all you would not normally expect kernel memory to be mapped in a GPU. Even if you did modern GPU's generally don't have much in the way of support for sharing memory between processes. There have certainly been research papers on speculative execution inside of a GPU - Speculative Execution on GPU: An Exploratory Study; Liu, Eisenbeis, Gaudiot - but I don't believe it is actively done at a hardware level by any existing devices. So whilst theoretically there is nothing to stop you building a GPU/OS setup that may allow it I doubt this is possible on any existing products. | {
"source": [
"https://security.stackexchange.com/questions/177049",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167947/"
]
} |
177,069 | My concern is the disposal of a replaced disk from a private RAID5 disk array. I have had to replace a disk from my personal RAID5 disk array. It had started developing errors, so out it went. But now, I have this disk lying on my desk and that got me wondering... The data on the array was never encrypted. I'm concerned that turning it in at the recycle station could be a security risk. Is it possible that some mischievous individual would be able to retrieve personal data (photos, files etc.) from the disk? Or is the fact that it was part of a RAID5 array sufficient for the data to have been scrambled beyond recognition? | RAID 5 stripes the data across the disks, but the blocks used for striping are typically pretty big. At the very least they will be whole sectors, but normally they will be much larger than that. For example, mdadm defaults to half-megabyte chunks. Even one sector is big enough that you are likely to find recognisable chunks of text, and with typical chunk sizes it is quite likely that entire recognisable files will be present on the individual drives from the array. | {
"source": [
"https://security.stackexchange.com/questions/177069",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167979/"
]
} |
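The point of the answer above, that recognisable data survives on a single RAID 5 member, can be checked before disposal. Below is a Python sketch that scans a raw disk image (or a device node you have read access to) for long runs of printable ASCII, much like the Unix strings tool; the image path is a placeholder.

```python
import re
import sys

CHUNK = 1 << 20                          # read 1 MiB at a time
printable = re.compile(rb"[ -~]{16,}")   # runs of 16+ printable ASCII bytes

path = sys.argv[1] if len(sys.argv) > 1 else "old-raid-member.img"  # placeholder

found = 0
with open(path, "rb") as disk:
    while found < 20:                    # stop after a handful of hits
        block = disk.read(CHUNK)
        if not block:
            break
        for m in printable.finditer(block):
            print(m.group().decode("ascii"))
            found += 1
            if found >= 20:
                break
```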
177,182 | I am looking to Block all old browsers that support only TLS 1.0. Since TLS 1.0 is out of PCI Compliance, it is a safety measure that I want to take. But I am having trouble finding a list of these old browsers. Can anyone help? | I am having trouble finding a list [of old browsers that only support TLS 1.0] Lists of browsers with specific features Trouble finding a suitable list might be partly because such a list might be large, incomplete, frequently changing and might need to take account very large numbers of plugins and addons. As Tripehound mentioned in a comment. A browser might not be on such a list because it has support for TLS 1.2 even if support for 1.1 and 1.2 is disabled by default . This makes relying on a list more risky. Depending on what you are doing, you may not need a list of browsers (user agents?). Predicting Impact of barring browsers that use TLS 1.0 If you need to work out how many of a server's customers rely on TLS 1.0 you can enable TLS version logging in Apache and probably in other webservers etc. After a suitable period (e.g. a week) you would have some good statistics about the number of customers affected. Preventing use of insecure protocols by browsers It is often possible to configure web-servers and other services to not support TLS 1.0 - thus blocking browsers that don't support more recent versions. | {
"source": [
"https://security.stackexchange.com/questions/177182",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168088/"
]
} |
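Before blocking TLS 1.0 as discussed above, it can help to probe which protocol versions your own server still accepts. This Python sketch uses the ssl module (Python 3.7+ is assumed for TLSVersion); note that a modern local OpenSSL build may itself refuse to offer TLS 1.0, in which case the probe reports "rejected" regardless of what the server supports.

```python
import socket
import ssl

HOST = "example.org"   # placeholder: the server you operate

def offers(version: ssl.TLSVersion) -> bool:
    """Return True if HOST completes a handshake at exactly this TLS version."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # we only care about the protocol version
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                return tls.version() is not None
    except (ssl.SSLError, OSError):
        return False

for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    print(v.name, "accepted" if offers(v) else "rejected")
```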
177,374 | From what I've read, Spectre and Meltdown each require rogue code to be running on a Windows box in order for attacks to take place. The thing is, once a box has rogue code running, it's already compromised . Given that the Microsoft patches for Spectre and Meltdown reportedly slow down the patched systems , it seems like possibly a good decision to leave Windows systems unpatched at the OS level. Assuming rogue code is not installed on a Windows box, the only point of easy penetration to a system seems to be via javascript running in a web browser. Yet both Mozilla's Firefox and Microsoft's Internet Explorer have already been patched . Google's Chrome is not currently patched, but it can reportedly be run in strict site isolation in order to prevent successful Sprectre and Meltdown attacks. Given all the above (and assuming best practices of not running unknown code), does it really make sense to patch Windows for Spectre and Meltdown? | the only point of easy penetration to a system seems to be via javascript running in a web browser. How about Flash? Java? Silverlight? VBA in an office document? Any applications that load web-pages inside of themselves? The thing is, once a box has rogue code running, it's already compromised. With code running under your user account a lot can be done. But compromised is a relative term. You can't for example install a low level key logger. Permissions might stop you from reading other applications memory space even when run under the same user account. And of course you have no access to files and processes run under other user accounts. A user on a corporate machine could not hijack another logged in users session. Being able to read arbitrary RAM values can give you the access tokens required to defeat all of this. | {
"source": [
"https://security.stackexchange.com/questions/177374",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59811/"
]
} |
177,405 | If I connect to lets say gmail over a VPN. How does the provider forward the traffic without exposing my IP, but also without breaking the SSL. Shouldn't gmail know my real IP if the traffic just gets tunneled through the provider? I thought about invalid certificates if the ssl gets broken, but how do next gen firewalls like palo alto claim they can do deep packet inspection on ssl traffic without the users noticing? Why can't the VPN provider just use a similar box to decrypt it? I am a bit curious about how much data a VPN provider could potentially collect about me. I hope you can help me here. | How does the provider forward the traffic without exposing my IP, but also without breaking the SSL. SSL is protection (like encryption) on top of TCP which sits on top of IP. The underlying layers (TCP, IP) can be changed without changing the data transported. This means that the encryption can be kept even though your IP address at the network layer is changing. This is similar to having an encrypted mail (i.e. PGP or S/MIME). It does not matter if it gets transported via multiple mail servers, gets stored on different machines etc - the encrypted part of the mail itself and its inner content will not be changed. ... but how do next gen firewalls like palo alto claim they can do deep packet inspection on ssl traffic without the users noticing? They don't. If the inner contents of SSL connections need to be analyzed the DPI system does a man in the middle "attack", i.e. it is the endpoint of the SSL connection from the perspective of the server and decrypts any traffic and encrypts it again to present it to the client. Usually this will result in security warning to the user since the new certificate for the connection (created by the DPI system) is not trusted. But this can be made more transparent to the user if the user explicitly trusts the DPI appliance. For the details to this see How does SSL Proxy server in company work? , Deep Packet Inspection SSL : How DPI appliances prevent certificates warnings? or Is it common practice for companies to MITM HTTPS traffic? . Why can't the VPN provider just use a similar box to decrypt it? It actually could do this. Only, in theory users would need to explicitly trust the VPN provider for inspecting SSL traffic similar to what is done in companies. But, if you for example install the VPN software provided by the VPN provider, this software could actually silently trust the computer the VPN provider for SSL interception so that you don't realize that the provider can sniff and even modify the encrypted traffic. This silent installation of trusted certificate authorities is actually what many antivirus products do, so that they sniff encrypted traffic and protect the user from attacks delivered inside encrypted connections. One could in theory find out that the provider is doing this by looking at the certificate chain for each SSL connection and comparing it to the expected one. Or one can look at the locally trusted certificate authorities and see if there was one added. Still, if you install software from the VPN provider the provider could also change parts of your system like the browser in order to hide the inspection from you. And this is not restricted to software given by the VPN provider - any software you install could actually make such changes. See also How can I detect HTTPS inspection? . | {
"source": [
"https://security.stackexchange.com/questions/177405",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168297/"
]
} |
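The answer above notes that TLS interception can in theory be spotted by comparing the certificate chain against the expected one. A minimal Python sketch that fetches the leaf certificate and compares its SHA-256 fingerprint with a value you recorded earlier on a network you trust; the host name and expected fingerprint are placeholders.

```python
import hashlib
import socket
import ssl

HOST = "www.example.com"        # placeholder: the site you are checking
EXPECTED_SHA256 = "ab12..."     # placeholder: fingerprint recorded on a trusted network

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)   # leaf certificate, DER-encoded

fp = hashlib.sha256(der).hexdigest()
print("leaf certificate SHA-256:", fp)
print("matches pinned value" if fp == EXPECTED_SHA256 else
      "MISMATCH - traffic may be intercepted (or the site rotated its certificate)")
```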
177,415 | I heard from a guy that's involved in low-level (assembler, C for drivers and OSes) programming, that meltdown and spectre weren't actually vulnerabilities discovered only so recently, but they were openly known as debug tools. It seems quite unlikely, but could anyone confirm or deny this? | It's not even remotely true. Although you can use a Meltdown or Spectre attack to inspect the internals of a program in the way a debugger can, a proper debugger is much faster, easier, and more reliable. | {
"source": [
"https://security.stackexchange.com/questions/177415",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168316/"
]
} |
177,643 | In my Spring application, I was planning to remove passwords from the authentication process by sending a "magic sign-in link" to a user's email address. However, in this question Rob Winch (lead of Spring Security) says the following: Be careful that you know what you are doing in terms of allowing login
from a link within an email. SMTP is not a secure protocol and so it
is typically bad to rely on someone having an email as a form of
authentication. Is that really the case? If so, then how is sending a link for password reset any more secure? Isn't logging in using a magic link the same thing as sending a magic link for resetting a password? | A magic link alone is not necessarily bad. A 512-bit entirely random value is going to be no easier to guess than a 512-bit private key. In general it is considered good practice to expire such links after a reasonable amount of time. A good approach - which also avoids having to store database entries - is to embed the token data in the URL and sign it with a private key. I.e. site.com/login?type=login&user=[username]&expires=[datetime]&sig=[signature of other parameters]. However, email as a transmission mechanism isn't secure. By default SMTP offers very little protection against interception. Traffic may be encrypted between servers but there are no guarantees. Even with encryption it's still often possible to man-in-the-middle the connection (encryption is not the same as authentication). If so, then how is sending a link for password reset any more secure? It isn't. This is why several services ask for some additional proof it's you before sending the link (or after clicking it). | {
"source": [
"https://security.stackexchange.com/questions/177643",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71966/"
]
} |
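The signed-URL approach sketched in the answer above (user, expiry, and a signature over the other parameters) can be illustrated with an HMAC in place of a public-key signature; a server-side secret plays the role of the private key. This Python sketch uses only the standard library; the domain, secret and lifetime are placeholders.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode, parse_qs, urlparse

SECRET = b"server-side-secret-key"   # placeholder; never leaves the server

def make_login_link(user: str, ttl: int = 900) -> str:
    """Build an expiring magic link whose parameters are HMAC-signed."""
    params = {"type": "login", "user": user, "expires": str(int(time.time()) + ttl)}
    payload = urlencode(sorted(params.items()))
    params["sig"] = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "https://site.example/login?" + urlencode(params)

def verify_login_link(url: str) -> bool:
    """Recompute the signature and check the expiry timestamp."""
    q = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    sig = q.pop("sig", "")
    payload = urlencode(sorted(q.items()))
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(q.get("expires", 0)) > time.time()

link = make_login_link("alice")
print(link)
print("valid:", verify_login_link(link))
```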
177,654 | My new girocard did not reach me. I wanted to call the bank to block the old and get a new one. So I checked my online banking and found a phone number ("Block card: girocard or visa card lost? Call 04106-...). I called said number, and I talked to a real person. So far, so good. The person wanted my Online-Banking PIN, through the phone, before doing anything. Is this normal and safe operation, or was I correct to politely end the conversation? What happens if my card is misused after tomorrow? I called their hotline to get it blocked, but they didn't want to do anything without my online banking PIN. | In my opinion, you did the right thing. There is no situation in which you should ever be required to give up a PIN either over the phone or in person, with the exception of typing it into the (HTTPS) bank's website to login to your account or on a physical banking terminal such as an ATM. The entire purpose of a personal-identification-number (PIN) is to be a unique number that you and only you know, allowing you to authenticate yourself. I would recommend finding a physical banking location where you can go in person and talk to a person about your experience and the issue you were having that lead to you making that phone call. It is entirely possible the phone number was for the real banking institution and the phone operator was new or inexperienced and made a massive mistake, but there is no good reason for a person to give up a PIN to another person. Hope it helps! | {
"source": [
"https://security.stackexchange.com/questions/177654",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/37853/"
]
} |
177,708 | I know for a fact that some sites/apps with low security restrict passwords to alphanumeric characters only, and some allow a slightly broader ASCII range. Some sites/apps also support Unicode. Passwords are usually meant to be typable on any generic keyboard, so they are typically generated using the commonly available characters. But for passwords which will only be kept digitally, would it be a good idea to maximize the guessing time by using the entire Unicode range of characters? Or are there reasons to believe some or most Unicode supporting sites/apps could still limit their allowed character range? | This is a good idea from a security perspective. A password containing unicode characters would be harder to brute-force than a password containing ASCII characters of the same length. This holds up even if you compare byte-length instead of character length, because Unicode uses the most significant bit whereas ASCII does not. However, I think it wouldn't be practical since Unicode bugs are so common. I think if you use Unicode passwords everywhere you will encounter more than a couple of sites where you would have problems logging in, because the developers didn't correctly implement Unicode support for passwords. | {
"source": [
"https://security.stackexchange.com/questions/177708",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168590/"
]
} |
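One concrete example of the "Unicode bugs" the answer above warns about is normalization: the same visible password can arrive as different code-point sequences depending on the keyboard or OS, and naive hashing then rejects a correct password. A Python sketch follows (SHA-256 is used only to show the mismatch; real password storage should use a slow KDF such as bcrypt or Argon2).

```python
import hashlib
import unicodedata

# The same visible password, produced by two different input methods:
composed   = "pässword"                                 # 'ä' as one code point (NFC)
decomposed = unicodedata.normalize("NFD", composed)     # 'a' + combining diaeresis

print(composed == decomposed)                           # False
print(hashlib.sha256(composed.encode()).hexdigest() ==
      hashlib.sha256(decomposed.encode()).hexdigest())  # False -> login would fail

# Normalising before hashing makes both forms equivalent:
norm = lambda s: unicodedata.normalize("NFC", s)
print(hashlib.sha256(norm(composed).encode()).hexdigest() ==
      hashlib.sha256(norm(decomposed).encode()).hexdigest())  # True
```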
177,713 | I got an empty email in my language (Hebrew) with only a title that can be translated to I am still waiting for your feedback (original: עדיין מחכה למשוב שלך ) Since my gmail handle multiple accounts (by forward and by user\pass) I tried to figure out which account was the target, and to my surprise - None! My question is simple, why gmail chose to put this email in my inbox? Attached bellow is the full source of the email (from gmail) except my censored email . How is this not a security breach\stopped by spam filters ? Original Message: Message ID <[email protected]> Created at: Tue, Jan 16, 2018 at 2:54 PM (Delivered after 2 seconds) From: joseph andrew Using Mail.Ru Mailer 1.0 To: Subject: עדיין מחכה למשוב שלך SPF: PASS with IP 217.69.138.160 Learn more DKIM: 'PASS' with domain bk.ru Learn more DMARC: 'PASS' Learn more Delivered-To: ****<cencored>****@gmail.com
Received: by 10.2.76.217 with SMTP id q86csp4037724jad;
Tue, 16 Jan 2018 04:54:21 -0800 (PST)
X-Google-Smtp-Source: ACJfBotHqXkzj6W7gd+8IjEmxrwG9SqXBSC+QiTqyAB1j2Dt4ASXtmXr5UpqIpdU7Mge/EFmnzVI
X-Received: by 10.46.101.207 with SMTP id e76mr192303ljf.115.1516107261904;
Tue, 16 Jan 2018 04:54:21 -0800 (PST)
ARC-Seal: i=1; a=rsa-sha256; t=1516107261; cv=none;
d=google.com; s=arc-20160816;
b=O+SDwmDS2Y7Jxi+mhwkV/+svfXK0KI3VvepeQgBpyhlgYY5gK3wln+RC4YPO+MMn71
tyrGBUoc1iGKpeGcilWAovf0XLceJY+EAGoMX4Hl4Pse8C5mWiP0DJQXfmolB5myOFD/
EoSl7Km4KDcQsvSC0DGwcni1yUPjgiQr+KIY+y19WCqVfm5EtCkbYkUCnFP1RWh/BUBp
YZHKJrRYH/gWsSqBuIm/fDuSFk4bYAFaCOfkq5LfJcnkf7lpRFBnNYseignqwnnYMEzB
aScqj1ppvCoYLbBuEiw+yLo1iQLoNdXwGOLShsGXELxkpdFpmchOEV44Wx2vp+RlJMF0
v1/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;
h=content-transfer-encoding:message-id:reply-to:date:mime-version
:subject:from:dkim-signature:arc-authentication-results;
bh=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=;
b=IPjf/U3N1a7Dx8HIw59iWKeccU4mS7Bd0Iy2FY/Be4Mx4QATd8uBvH7pLOVOLHRAXj
iATzKUy69ZyLgu6gJVc+yjW3+i740O9ccPNbWAPQQASX1H9OkiMsmlhNYOU5u4KDKfbj
nNm77TeMxrF57z4XKpbO3iE4YEv6JFankI949HvLnehC7wPP5M5YHpS8CllmV3zP8RX4
2kb14n4PzguduwYoHL3q7wwWHHnyPUsa3UuhCKLvNJYw4KWKLWNY6Kt7fwReb83T+OsG
lavZ7huDrCcf0P8Ee7YGepcNpGFyh2WjpA4o7l+gAlnqsb6+5FZloH6j7cmQx/gA+gKh
aVng==
ARC-Authentication-Results: i=1; mx.google.com;
dkim=pass [email protected] header.s=mail header.b=kgyoMcic;
spf=pass (google.com: domain of [email protected] designates 217.69.138.160 as permitted sender)
[email protected];
dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=bk.ru
Return-Path: <[email protected]>
Received: from f493.i.mail.ru (f493.i.mail.ru. [217.69.138.160])
by mx.google.com with ESMTPS id 1si926950ljd.480.2018.01.16.04.54.21
(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
Tue, 16 Jan 2018 04:54:21 -0800 (PST)
Received-SPF: pass (google.com: domain of [email protected] designates 217.69.138.160 as permitted sender) client-ip=217.69.138.160;
Authentication-Results: mx.google.com;
dkim=pass [email protected] header.s=mail header.b=kgyoMcic;
spf=pass (google.com: domain of [email protected] designates 217.69.138.160 as permitted sender)
[email protected];
dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=bk.ru
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bk.ru; s=mail; h=Content-Transfer-Encoding:Content-Type:Message-ID:Reply-To:Date:MIME-Version:Subject:From;bh=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=; b=kgyoMcicHiqU/OvYmq/2sQYQ3607jIk9ZHkh3oVDZTrA6cWDxLGvol37xuQ3L0mfrqIOpSTYHuJzssIjww+4FL/h/GwBRRO1AsuLgxyjSQaOLVqidNe0MUIz6EVQxYXLcUCl9USryPLWAWKBiwL80efALu5znH8K96P6fF33Gzw=;
Received: by f493.i.mail.ru with local (envelope-from <[email protected]>) id 1ebQkt-0004Zj-4G; Tue, 16 Jan 2018 15:54:19 +0300
Received: by e.mail.ru with HTTP; Tue, 16 Jan 2018 15:54:19 +0300
From: joseph
andrew <[email protected]>
Subject: עדיין מחכה למשוב שלך
MIME-Version: 1.0
X-Mailer: Mail.Ru Mailer 1.0
Date: Tue, 16 Jan 2018 15:54:19 +0300
Reply-To: joseph
andrew <[email protected]>
X-Priority: 3 (Normal)
Message-ID: <[email protected]>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: base64
Authentication-Results: f493.i.mail.ru; auth=pass [email protected] [email protected]
X-7FA49CB5: 0D63561A33F958A5898151702DBFE222A841889EBAE90B8E20672C31CB87FE38725E5C173C3A84C39B8A9203B4187291E49527301AB7C5D479C543ECCDAE434EC4224003CC836476C0CAF46E325F83A50BF2EBBBDD9D6B0F20A889B128FC2D163B503F486389A921A5CC5B56E945C8DA
X-Mailru-Sender: AEEF7784B292FD580FA14CB39DA65BAAC14FF9386EF48BFAB56EC34AB37A73FC68AD8B15A0E9EE36875DC23763F10D8F75E89B9EB25B1370F805D6321A69DA8E2FB9333096616C166E245241366DF001CF5B8F1B83B229A3C432A6261406F5E9E7E03437FF0094633453F38A29522196
X-Mras: OK
X-Spam: undefined Edit: Tried to send with bcc and empty "to" and gmail showed the bcc. So currently @Luc answer seems to be the real answer ( RCPT TO was used). | Why? Two explanations: BCC Spam I've often gotten spam where they seem to want to hide to whom it was addressed for some reason. Since I have a catch-all on my domain, it will arrive for me no matter what address they used (unless they used one which I blacklisted). How is this possible? SMTP traffic looks like this: EHLO example.com
MAIL FROM:<[email protected]>
RCPT TO:<[email protected]>
RCPT TO:<[email protected]>
RCPT TO:<[email protected]>
DATA
Subject: test
From: <[email protected]>
To: <[email protected]>
CC: <[email protected]>
Hi Jake,
Just letting you know that email works.
.
QUIT You can open a TCP connection to any mail server and type this, and it'll respond to you and deliver your email (I've omitted responses in the example). On Windows, install Telnet from the Windows features menu and type telnet example.com 25 to connect to the server at example.com on port 25. As you can see, user4 was addressed in the RCPT TO and it will end up in their email inbox, but they are not mentioned in the from, to or cc headers of the email data. The email data, from the DATA command until the . on a line by itself, is the part that you will see when you open "view source" in your email client. So it has little to do with the actual email exchange. Of course it usually matches, but in a malicious case, they don't care what is "usual". And in the case of BCC you'll also not see it. I've often gotten spam where they hide where it was sent to. In order to be able to trace it, I have to dig in my mail server logs. A server administrator can also lookup BCCs like this, though of course only of their own domain (if it was BCC'd to [email protected] and [email protected] , the administrator of a.example.com will not see stefan). As to why you can send GMail to yourself in the BCC and see a BCC field with yourself listed on the receiving end: the email program/provider can just send separate SMTP messages for each recipient in the BCC, with the BCC header crafted in the nested email header to list only that recipient. | {
"source": [
"https://security.stackexchange.com/questions/177713",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69837/"
]
} |
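The same envelope/header split from the answer above, expressed in code instead of a raw telnet session; a minimal sketch using Python's standard smtplib (the server name, port and all addresses are illustrative assumptions):
import smtplib
from email.message import EmailMessage

# The message headers: this is the part a recipient sees under "view source".
msg = EmailMessage()
msg["Subject"] = "test"
msg["From"] = "sender@c.example.com"
msg["To"] = "jake@a.example.com"
msg["Cc"] = "user2@a.example.com"
msg.set_content("Hi Jake,\nJust letting you know that email works.")

# The envelope recipients (RCPT TO) are passed separately and may include
# addresses that never appear in the headers; that is all a BCC is.
with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.send_message(msg, from_addr="sender@c.example.com",
                      to_addrs=["jake@a.example.com", "user2@a.example.com", "user4@b.example.com"])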
177,716 | I want to ensure the sender of Document B is the same person as who previously sent me Document A. Both documents are signed with self-signed certificates. I'm not interested in knowing the real-world identity of the sender. When I open the self-signed certificate with a certificate viewer, it shows the certificate's subject, issuer, serial number, subject key identifier, public key (very long gibberish), SHA1 digest of public key, X.509 data, SHA1 digest (of what?), and MD5 digest (of what?). I know the issuer of the self-signed certificate can put arbitrary things into (i.e., fake) "subject," "issuer," "serial number" fields, so they are meaningless. But I don't know anything about the other fields. If the certificates contained in those two documents have, for example, exactly the same "SHA1 digest of public key" string, does that mean they are indeed signed by the same person? Can an attacker fake it? | Public and private keys are linked in such a way that if two certificates have the same public key, they were created using the same private key. So if you assume that the private key is indeed kept private, the part you can trust in the certificates to identify the creator is the public key , and by extension the digest of the public key. | {
"source": [
"https://security.stackexchange.com/questions/177716",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168602/"
]
} |
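A quick way to check the claim above, that matching public-key digests mean the same key pair, without relying on a certificate viewer: compare the digests of each certificate's SubjectPublicKeyInfo yourself. A minimal sketch with the pyca/cryptography package (the file names are placeholders):
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def pubkey_sha1(path):
    # Hash the DER-encoded SubjectPublicKeyInfo of the certificate.
    with open(path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha1(spki).hexdigest()

# Equal digests mean both certificates carry the same public key, and therefore
# were made by the holder of the same private key (assuming that key stayed private).
print(pubkey_sha1("document_a_cert.pem") == pubkey_sha1("document_b_cert.pem"))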
177,728 | Context: We have a private certification authority in my company. We are provisioning VMs in our private cloud which will need to trust SSL certs issued by this CA, i.e. they will need the cert chain installed and trusted. Since provisioning is fully automated, we are committing the .pem of the cert chain (consisting of the Root and one intermediate cert) to a private Git repository. As always, even though the repo is private, the risk of exposure exists. Question: If said certificate chain is inadvertently made public for any reason, does this expose us to any undue risk? (I am fairly confident this is fine, but would like to check my sanity against this community, and am hoping the answer will help someone else in the future). | There's a reason they're called "public" keys. :) There are hundreds of Root CA certificates bundled with your operating system, etc. If your attacker can factor your public key, you've already lost. The only concerns I'd have with a private CA are whether or not you expose information about your internal structure that might be useful to an attacker. e.g., details about who operates your CA, particular servers that have the CA private keys for signing... | {
"source": [
"https://security.stackexchange.com/questions/177728",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168612/"
]
} |
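A small sanity check before committing the chain, to confirm the file really contains only public material; a rough sketch (the file name is an assumption):
# A CA chain should consist of CERTIFICATE blocks and contain no key material.
with open("ca-chain.pem") as f:
    pem = f.read()
assert "PRIVATE KEY" not in pem, "key material found, do not commit this file"
print(pem.count("BEGIN CERTIFICATE"), "certificates, no private keys")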
177,963 | For this question I will use the following domains: example.com - an online shop; exmpl.com - a domain which is used for sharing items. exmpl.com will be used for redirects (e.g. http://exmpl.com/foo will redirect to https://example.com/items/42 ). I have the following questions: How important is it that exmpl.com should be encrypted? Does it matter at all if it has https or only http? Of course, if it doesn't have https, anyone who attempts to access https://exmpl.com will not be able to do so, but are there any side effects? On the other side, why would a company prefer to support only http in this context? | Should redirect sites use HTTPS?
If the main site uses HTTPS then the redirect site should too.
What attacks are possible if it does not?
A passive attacker:
- Can see every item looked at by the user
- May get extra information (which site/chat linked to the page)
An active attacker can do anything a passive attacker can, and:
- Can MITM the connection and use something like SSLStrip to prevent the connection upgrading to HTTPS, allowing them to watch passwords, credit card numbers, etc.
- Can add a cookie to the reply, allowing them to track a user
- Can redirect the traffic to a phishing site, or a site that installs malware.
How to prevent using HTTP entirely, to stop tools like SSLStrip?
You can serve an HSTS header, so only the first visit (and the first visit once the header expires) will use HTTP, so an attacker cannot intercept the traffic. If you wish to prevent these weaknesses entirely you can add your site to the HSTS Preload List, which will prevent HTTP connections. This would need to apply to both sites: if the main site uses HSTS but the redirect site does not, the attacker could redirect to a site they control instead, bypassing HSTS.
Should I use HTTPS?
Generally, YES. There are very few occasions when you can get away with not using HTTPS; all of these need to apply:
- You manually sign your data - and always verify the signature
- You do not care about people finding out your software is running on someone's machine
- You do not handle any sensitive information, or information users may want to keep secret
- You won't accidentally get sensitive information sent over your system. An example would be ACARS: it wasn't meant for sensitive information, but credit card numbers have been sent over it without realising the problems caused. [ forum , paper ]
- You do not care about, or can detect, whether someone swaps the files around, using other valid signed ones
- The service does not imply sensitive data (a page about a disease may not be sensitive, but the fact that someone is searching about the disease may be sensitive). | {
"source": [
"https://security.stackexchange.com/questions/177963",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76797/"
]
} |
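A minimal sketch of what the answer above recommends for the redirect host: serve it over HTTPS and attach an HSTS header to every response. Written here as a small Flask app purely for illustration (the route layout and target URL pattern are assumptions):
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/<token>")
def share_redirect(token):
    # Permanent redirect to the canonical HTTPS site...
    resp = redirect(f"https://example.com/items/{token}", code=301)
    # ...and tell browsers never to talk to this host over plain HTTP again.
    resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return resp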
178,076 | I received an email to techsupport@ websitename .com (pretty generic email) saying that there was a security flaw in my website etc. etc My initial reaction was that this was a scam. (How/why did they find our site.) However, they didn't seem to be looking for money (so far) and they also had emailed it from a gmail account (which seemed off to me, spam is usually sent from weird domains) - also google marked it as important. The overall writing is clearly not well educated, but it isn't as bad as they usually are. The email address also seemed like a gamer addresses (some weird name and a few digits) This is the email: Hello, I have found a Web Application Vulnerability [XSS] in
' websitename .com' which can lead an attacker to perform
unauthenticated tasks like account takeovers and other malicious
stuffs like web defacement (your site), port scanning through your
servers to other servers on internet or may use your website to spread
Ransomware, and this bug is needed to be fixed as fast as possible. Being a responsible security researcher, I m sending this mail
directly to you without making the bug public, so if you are concerned
about your website's security and want detailed information and
Proof-of-Concept of this bug, please contact me on my mail - email @gmail.com Would be happy to know - do you provide any rewards (bug bounty) /
swag as token of appreciation for reporting bugs ? Thank you, - (Foreign sounding) Name Italics have been changed for privacy Question: Is this a typical thing that scammers would do? If so, what are they trying to gain, and what would be (if any) the risks of replying to the email requesting some more information? On the other hand, if it is in fact a legit "responsible security researcher", what kind of questions should I ask to find out? | TL;DR : It's probably well-intentioned and not a scam, but just poorly written. I don't know of any kind of scam that would be based on this. Certainly there have been attempts to extort website owners for money based on knowledge of website vulnerabilities (and the implicit threat to exploit them), but that doesn't look like the case here. It's not a very well-written disclosure email. I've certainly stumbled across vulnerabilities before (obviously, attempting to exploit them on a site that hasn't given permission would be illegal, but there are some that can be obvious without attempting exploitation), and sent emails with the same intent as the author above, but I try to provide all the detail in the first email. I want to help. I don't want to bounce back and forth in email land. If it were me, I would ask them for details: what page (or pages) contain the vulnerabilities, which parameters are injectable, and whether they could share a proof of concept. If you're not familiar with XSS, I recommend reading the OWASP page on the vulnerability . It's both very common and can be critical, depending on the context. A typical proof of concept (PoC) for XSS won't be dangerous to you or your site, but will do something like pop up a javascript alert box containing the hostname of the site, your session cookies, or even just the number 1. Any of those show that a malicious attacker could be running Javascript on your site, which would have significant implications for your site security. As some have pointed out, it's also possible the lack of information is them playing it "cagey" while looking for a reward/payment. Obviously, if your site does not have a published bug bounty, you're under no obligation to offer one. | {
"source": [
"https://security.stackexchange.com/questions/178076",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168620/"
]
} |
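For context on what a harmless proof of concept would look like if you do ask for one: an XSS PoC is usually a single injected snippet along the lines of "><script>alert(document.domain)</script> (the exact payload here is illustrative). If loading a crafted URL makes that alert appear in your browser, the report is genuine, and nothing on the site is damaged in the process.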
178,309 | I discovered something I consider a major vulnerability in a SaaS product that includes the username and password in the query string of the URL on registration and every login attempt. The technical support of the service has told me they consider the vulnerability insignificant, as the only way to exploit it is to gain access to the user's browser history. Were they correct in their decision? I'm fairly new to information security, but it still sounds like laziness on their part. I did skim through this question , but having read the most upvoted answer I'm now even more concerned about this being overlooked, as the data is sent via GET and the credentials are displayed in plain text. | Yes, this is a vulnerability. You can point them to such august bodies as OWASP Top 10 https://www.owasp.org/index.php/Information_exposure_through_query_strings_in_url https://www.owasp.org/index.php/Top_10_2013-A2-Broken_Authentication_and_Session_Management CWE Information Exposure Through Query Strings in GET Request https://cwe.mitre.org/data/definitions/598.html Insufficiently Protected Credentials https://cwe.mitre.org/data/definitions/522.html StackExchange https://stackoverflow.com/questions/26671599/are-security-concerns-sending-a-password-using-a-get-request-over-https-valid The common problem is that the credentials are stored on the client-side in the clear (browser history) and on the server side (webserver connection logs) and there are multiple methods to access that data. Yes, it is laziness on their part. They are thinking of their code alone, and forgetting the client-side and the infrastructure. | {
"source": [
"https://security.stackexchange.com/questions/178309",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/169210/"
]
} |
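To make the remediation above concrete, the fix is to move the credentials out of the URL and into the body of a request sent over HTTPS. A hedged sketch of the difference using Python's requests library (the URL and field names are made up):
import requests

# Problematic: the credentials become part of the URL, so they end up in
# browser history, proxy logs and webserver access logs.
requests.get("https://app.example.com/login",
             params={"username": "alice", "password": "hunter2"})

# Better: the credentials travel in the POST body of an HTTPS request, encrypted
# in transit and not written to access logs by default.
requests.post("https://app.example.com/login",
              data={"username": "alice", "password": "hunter2"})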
178,663 | I tried to export all my cookies through the 'Edit This Cookie' extension on a logged-in page which uses cookie authentication. While logged out I tried inserting those cookies hoping that I would be logged in, but nothing happened. After searching I came to know that the cookies sent are in encrypted form. But the page wasn't using any TLS encryption. Am I missing anything? EDIT: I tried using the same cookies while logged in, i.e. exported all the cookies and imported them in an incognito window, but nothing happened. Also, this kind of attack doesn't seem to be working on most popular sites like Google, Facebook etc. So how do they protect against such attacks? | Technically, even if the contents of the cookie were to be encrypted, if the cookies are properly copied to the new browser and the new browser sends the same HTTP headers (same user agent string, referrer is as expected, computer has the same IP address, and all other headers the server could have previously stored and can compare against), the server theoretically wouldn't be able to differentiate between the original browser and the new browser. I'm assuming that you're trying to copy the cookie(s) from a site that auto-logs you on every time you open your browser and you haven't logged out. Some sites could use other ways to detect if this is a stolen cookie/session, but it's a losing battle because all those can still be spoofed, e.g.: check if the IP address changed, check if the User-Agent is the same, check if the referrer makes sense, and check any other HTTP headers that the browser sends. To answer your question, you should be able to make it work if you're dealing with an auto-login site and you haven't logged out. Make sure that all the HTTP headers your new browser is sending are the same, that the IP address is the same, the referrer is the expected one, and the user agent is the same. Note also that perhaps the service you're using is using a 2nd cookie that you forgot to copy, and this creates an anomaly and kicks you out. | {
"source": [
"https://security.stackexchange.com/questions/178663",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/169563/"
]
} |
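For anyone reproducing the above against their own test account, the mechanics look roughly like this with Python's requests library (the cookie name, value and headers are placeholders; use the values exported from the logged-in browser):
import requests

s = requests.Session()
# Mirror the original browser as closely as possible.
s.headers.update({"User-Agent": "Mozilla/5.0 (same string as the source browser)"})
# Import the exported session cookie(s); a missing secondary cookie is a common
# reason a copied session gets rejected.
s.cookies.set("sessionid", "PASTE_EXPORTED_VALUE", domain="example.com")

r = s.get("https://example.com/account")
print(r.status_code, "logged in" if "Logout" in r.text else "not logged in")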
178,682 | We have been recently contracted to run phishing tests for a company. Let's call it a company but basically they are obligated, by law, to assess the security of their environment with phishing campaigns. We ran our first campaigns not too long ago and the results were pretty bad. Over 70% of their users trusted the "malicious emails" we sent and did whatever the email asked of them. After it was over, we of course had a out brief detailing our findings. Long story short, they did not want ~any~ Identifiers (email, username, whatever) of who fell for the phish. They wanted "X out of 300" failed to identify the email. Their reason was they did not want to offend anyone. (I wanted to say your customers feelings will be hurt when your employees fall for a potential attack and leak info) I politely accused them of checking a box and not actually being interested in educating their users. They weren't very happy. I should elaborate by saying they didn't even want 2 reports, one showing the emails and another not showing them. I offered it to them because at least they can see how these individual people react to different campaigns overtime. This would absolutely help if user "Sam" clicks on every single link in a email every time over the course of ten campaigns. Surely you would want to educate Sam differently than other users? My question is, does this not defeat the purpose of phishing campaigns and improving the security of information in your network? Is this even normal? | This initial campaign established a baseline first. So, yes, it's normal. "How do we as a company stand? To what level do we need to train? Do we have, as a whole, secure users or do we have, as a whole, unsecure users?" This report establishes this and the extent to which management needs to engage in phishing training. Were only 5% of users to fall for the phish attempt, then the training focus would be very different. As it stands, now management knows that they have, essentially, a corporate-wide problem and that a phish campaign basically stands a 70% chance of succeeding. Now, when the company does future phishing training, they can compare results and determine whether the training was successful. "We initially fell for it 70% of the time. This time, we fell for it 68% of the time. It was therefore, not successful." Or "We initially fell for it 70% of the time and now fell for it 50% of the time. We're doing better, but need further training." | {
"source": [
"https://security.stackexchange.com/questions/178682",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/169586/"
]
} |
178,814 | I use Atlassian SourceTree on Windows, and one thing I like about it is that it doesn't require admin privileges to install or update. I happened to mention this to our ISSO (Information System Security Officer), and he was not a fan. He said that not requiring admin was dangerous because (to paraphrase) "If it's not asking you for approval, you never know what it's going and changing in the background!" Now, this person has a tendency to be overly-cautious, so I am skeptical of his assessment. I had always thought that if a program doesn't ask for administrator permissions, it's because it doesn't make deep enough changes to need them. To add to that, our work computers are extremely locked down, so I find it hard to believe that all an installer has to do to get by our security features is to not ask for permission. So what's the real situation? Can an installer that can run without administrator privileges really be that dangerous? | Installing something without needing admin privileges is no more dangerous than running a no-install program with standard user permissions. This is also less dangerous than installing something WITH admin privileges (or indeed, running anything with admin permissions). Running a random program downloaded off the internet, of course, is potentially dangerous - even if it doesn't require admin. If your ISSO's concern is "you're running random internet code, and the author of that code makes it easy for you to be lazy about asking me to vet it", then this is quite valid and factual. (you might debate the cost/benefit, but it is valid) If the concern is "this installer is more dangerous than other installers or no-install programs because it doesn't escalate its access level", then no, this is factually incorrect. | {
"source": [
"https://security.stackexchange.com/questions/178814",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140214/"
]
} |
178,909 | There is a lot of confusion around this on here, so I am making this post to be sure I understand it correctly. My school uses Aruba networks wifi, and after I type my Active Directory username and password (RADIUS authentication), it tells me I have to trust a certificate from 'wifiaruba.myschoolname.com' (Organization: My School) issued by DigiCert SHA2 High Assurance Server CA (Issuer Name, at least that is what the certificate says). I click trust and it goes away. My phone does not trust this by default it seems. Is it because this theoretically allows my school to decrypt SSL communications? If it really were from DigiCert, surely my phone would trust it? | It's ok. The certificate you installed and trusted is used to provide you secure authentication against their RADIUS server and prevent you from connecting to a rogue RADIUS server. If someone decides to steal your Active Directory credentials by installing a rogue RADIUS server, your phone will pop up with a warning that the RADIUS certificate is not trusted. By trusting this certificate you are not risking anything else. This certificate can't be used by the school to read your SSL traffic or attempt to MITM your SSL traffic. From what I read in your question, your school does it correctly and cares about your security. Regarding your question of why your phone does not trust that certificate (which is certainly issued by a trusted authority): RADIUS requires explicit trust in a particular RADIUS authentication certificate, because you don't know what AP you are connecting to. Otherwise, an attacker could get a certificate from another trusted CA vendor (say, Let's Encrypt) and use it to impersonate the school RADIUS server and steal your credentials. This is not an issue in an SSL context, because you know what kind of certificate you expect, since you manually type the web site name in the address bar. On wi-fi you don't know which AP you are connected to, so to ensure that it is legitimate, the AP should present a RADIUS certificate you explicitly trust. | {
"source": [
"https://security.stackexchange.com/questions/178909",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/169800/"
]
} |
179,114 | I work heavily with SSH and SFTP, to be specific between two machines, both of which have their SSH port open on a public IP address. What are the toughest SSH daemon settings in terms of encryption, handshake, or other cryptographic settings in 2018? I am specifically interested in the cryptographic protocols. Securing SSH with good password selection, good key management, firewalling, etc. are out of scope for what I am asking here. So far, I have found and set on both machines in /etc/ssh/sshd_config : AuthenticationMethods publickey
Ciphers aes256-cbc
MACs [email protected]
FingerprintHash sha512
#KexAlgorithms This can be considered a follow-up question of Hardening SSH security on a Debian 9 server which I have posted before some time ago. But in a specific way, I want to know the highest settings. | You have a good discussion here: https://wiki.mozilla.org/Security/Guidelines/OpenSSH On modern OpenSSH they recommend: KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com This page gives explanations for each choice: https://stribika.github.io/2015/01/04/secure-secure-shell.html (do not be fooled by the hardcoded date in the URL, the document is updated from time to time as can be seen from its "changelog" at https://github.com/stribika/stribika.github.io/commits/master/_posts/2015-01-04-secure-secure-shell.md ) Against Logjam, see the end of https://weakdh.org/sysadmin.html : KexAlgorithms curve25519-sha256@libssh.org | {
"source": [
"https://security.stackexchange.com/questions/179114",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82570/"
]
} |
179,273 | Authenticated encryption (AE) and authenticated encryption with
associated data (AEAD, variant of AE) is a form of encryption which
simultaneously provides confidentiality, integrity, and authenticity
assurances on the data. Source : Wikipedia I recently noticed that several SSL checkers started to point out AEAD. But, what is the difference between AE and AEAD and what is the purpose of “associated authenticated data” to an encrypted connection? Why not encrypt it all, instead of “authenticate” it only? | As a very general rule, the purpose of associated data (hereafter "AD") is to bind a ciphertext to the context where it's supposed to appear, so that attempts to "cut-and-paste" a valid ciphertext into a different context can be detected and rejected. For example, suppose I'm encrypting the values that I insert into a key/value database, and I use the record key as AD. What does that do? Well, first of all, mechanically it means now that whenever I decrypt the value, I must present the same key as AD or otherwise the decryption will fail with an authenticity error. So my application will have to perform that step in order to decrypt any of the data correctly. Second, and more deep, is that by using the key as AD, I've thwarted one sort of attack an insider—say, the database administrator—could conceivably carry out against my application: take two records in the database and swap their values. Without the AD, the application would happily decrypt these, and blindly assume that it read the correct value for those keys. With the key as AD, instead the application would immediately notice that the values are not authentic—even though the attacker never actually modified them—because they're occurring in the wrong context . That's just one example out of many possible, but applications of associated data tend to have that flavor. One important detail that people often miss from examples is that the associated data doesn't necessarily have to be stored or transmitted with the ciphertext. Any context-dependent non-secret values that the honest parties are both able to correctly infer can be useful as associated data. For example, if the parties are executing a complex protocol that's been formulated in terms of a state machine, such that every correct party can always tell their own state and that which an honest counterparty should be in, then those states—even though they're implicit to the protocol—can be used as AD. This is the sort of thing that makes AEAD suites so attractive to designers of protocols like TLS—it's a tool that can be used to cut down on the complexity of a secure protocol. | {
"source": [
"https://security.stackexchange.com/questions/179273",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/72031/"
]
} |
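The key/value-store example above maps almost one-to-one onto an AEAD API. A minimal sketch with AES-GCM from the pyca/cryptography package, using the record key as the associated data (the record names and values are made up):
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

record_key = b"user:42:email"          # not secret, supplied as associated data
nonce = os.urandom(12)
ct = aead.encrypt(nonce, b"the secret value", record_key)

# Decrypting under the right context works...
aead.decrypt(nonce, ct, record_key)
# ...but presenting the same ciphertext under another record's key raises
# InvalidTag, which is exactly the cut-and-paste detection described above.
aead.decrypt(nonce, ct, b"user:99:email")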
179,363 | I had an interesting conversation with a pentester who told me he had found a buffer overflow in Wordpress. The person in question was really adamant that this was true. The client is a bit skeptical about the technical skills of the pentesting firm and asked my opinion. So the question I have is this: has anyone ever heard that someone found a buffer overflow in WordPress by just making a GET request to some PHP? My opinion: If this were true, he would have found a buffer overflow in the PHP interpreter, and that would be huge. So I do not think it is true. EDIT:
The BOF was in two places: In a php function built by the client, with the same payload as an XSS vulnerability (so something like 123">alert(0); ) In the wp_session token, with just a bunch of A's (~60) It was all done externally, with no access on the server, in a routine quick pentest of around 10 different websites... I'll update after doing an actual code review of the parts that should be vulnerable. EDIT: So I did the code review and it was indeed a BS story. Not only did the BOFs not exist at all, he actually claimed to have found a SQLi in a part of the code that did absolutely nothing with a database. But at least the discussion in the comments was very insightful about the possibilities of BOFs in these kinds of standard platforms and CMSs, so I learned a lot! Thanks! | As PHP does memory management and a lot of stuff by itself, finding a buffer overflow specifically in WordPress doesn't really make sense to me. Before discrediting that Penetration Tester, I'd ask him/her for documentation of the finding in question. As he/she works for said client (sounds like it, correct me if I'm wrong), it's his/her job to report such an issue to the client, including documentation of at least a way to track down/reproduce the issue. I'm very sceptical, as you say, that he/she only had access to the webservice from the outside. Verifying a low-level issue like a buffer overflow (which is even way beyond the webservice or wordpress in general) is next to impossible from the outside. Executing one is tricky, even if you have access to the source code, which doesn't seem to be the case (assuming it's not a whitebox test). P.S.: If you get an answer from the client/pentester, I'd love to hear it. You got me pretty curious for some reason... | {
"source": [
"https://security.stackexchange.com/questions/179363",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/135722/"
]
} |
179,384 | My information security teacher gave me homework - hack smime.p7m. He said that the key is simple and bruteforce will not take much time, but I can not find a utility to do this. Maybe you know these? | {
"source": [
"https://security.stackexchange.com/questions/179384",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/170303/"
]
} |
179,446 | Quoted from my course instructor's lecture: The following are the stages of a typical web attack: The victim visits a legitimate web site that has been compromised. The compromised web site redirects the victim to another site that is running malicious code that is controlled by the attacker. The redirection may go through various intermediary servers first. I also faced the same issue. If I visit torrent sites and mistakenly or intentionally click on some links, it takes me to another site through various intermediary sites. Why does it go through many sites within a few seconds instead of only going to the last one directly? What is the benefit for the attacker? | There are actually two cases here: A site which is serving malicious ads ( Malvertising ) In this case the attacker does not compromise the site itself but is misusing targeted ads to select the victim based on its specific capabilities (browser, OS, geolocation,...) and attack it. Due to the way targeted ad delivery works it uses a lot of redirects between various sites, i.e. the majority of redirects are not for malware delivery but part of the usual ad delivery process. See for example How real time ad serving works for more information. There might be some malware-specific redirects at the last stages, for the same reasons as described for the second case below. A site which has been compromised by an attacker In this case the visited site is compromised by an attacker. The attacker will usually only put some minimal redirect onto the compromised site, for the following reasons: Harder to detect If only some redirect code is installed and not the malicious payload, the chance is higher that the compromise will stay undetected by the owner for longer. Protect the malware from researchers The malware is precious for the attacker. If some security company got its hands on all the malicious code while cleaning the compromised site, it could analyze it and add protections for its customers, suddenly making the malware less valuable. Increased flexibility in updating malware If the malware gets detected by security systems the attacker needs to install the next version. Also, the malware might not be owned by the attacker itself: one attacker might just redirect the victim to another attacker which develops and hosts the (always up-to-date) malware. Both attackers then share the profit (i.e. a kind of franchising). Protection against takedowns By using redirects the attacker can build a more flexible infrastructure which is more robust against takedowns or blacklisting. | {
"source": [
"https://security.stackexchange.com/questions/179446",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/161049/"
]
} |
179,498 | This article from Auth0 recommends storing the JWT locally in local storage (or a cookie). But this article from OWASP recommends not storing any sensitive data locally (not even in sessionStorage). So, is it safe to store the JWT token locally or not? | How bizarre! I asked basically the same question about a month ago. In the end, we decided that using localstorage for the JWT token was ok, as long as we also did the following on the HTTP level:
- Ensure the entire site was served over HTTPS
- Ensure the use of HSTS
- Ensure that, once live, only the actual redirect URL was included in the Auth0 rules, as well as our source code. We use the Angular CLI. It turns out that, despite the tree shaking provided by WebPack, unused variables still show up in the compiled source code, for example localhost:4200
- Make sure that there are no localhost URLs actually on Auth0 (on the allowed redirect page, for your client). Make a separate Auth0 account for testing
- Add the X-Frame-Options header to every HTTP response, and set it to Deny
- Set X-XSS-Protection to 1
- Set X-Content-Type-Options to nosniff
- Make sure Content-Security-Policy is restricted to your own domain name, and any CDNs you may be pulling scripts in from
- Set Referrer-Policy to same-origin
- Limit the JWT expiry on Auth0 to 1 hour
The above will give you an A/A+ on securityheaders.io , and will prevent the most common attacks (somebody embedding your website in an iframe, and extracting data from localstorage, for example). | {
"source": [
"https://security.stackexchange.com/questions/179498",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/123882/"
]
} |
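The header list above translates directly into a response hook; a sketch of the same hardening for a Flask-served backend, with values following the bullet points (adjust the CSP sources to your own domains and CDNs):
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(resp):
    resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    resp.headers["X-Frame-Options"] = "DENY"
    resp.headers["X-XSS-Protection"] = "1"
    resp.headers["X-Content-Type-Options"] = "nosniff"
    resp.headers["Content-Security-Policy"] = "default-src 'self' https://cdn.example.com"
    resp.headers["Referrer-Policy"] = "same-origin"
    return resp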
179,583 | I work for a corporation and we are all given a kind of employee login system whose URL goes like this in the image below. (Sorry, I cannot disclose the full URL.) I thought "Not Secure" had something to do with SSL certificates or something like that, but after clicking "view site information" , I got this: I manually blocked Flash, but I don't know what can possibly be done with cookies, and in any case I couldn't take the risk of experimenting on a corporate website. I have a few questions in mind: What exactly does "Not Secure" mean? Does it mean it's a "HTTP only" website? What are all the possible reasons for a site to be "Not Secure"? Is it OK to have an account login site that is "Not Secure"? Do cookies have something to do with a site being not secure? What are possible ways to make this site secure and how can I inform those responsible to make it secure? | What exactly does "Not Secure" mean? Does it mean an HTTP-only website? "Not secure" in Chrome means that the site isn't using HTTPS. What are all the possible reasons for a site being "Not Secure"? To get the exact error above, it's just when a site doesn't use HTTPS. However, you can get a similar not secure error if the site's certificate is invalid or if there isn't HTTPS over the whole page. Is it OK to have an account login site that is "Not Secure"? No, this is not ok - if somebody can intercept a login request, they can see the user's login credentials. IBBoard made a good point in the comments - having a login site without HTTPS which is on the internal corporate network isn't as dangerous as it being a public site where it can be accessed from your home PC. It's still not secure, but the only people who can really MiTM the connection are the company system administrators ( assuming the network is set up correctly ). Do cookies have something to do with the site being not secure? If the site isn't using HTTPS, this means cookies are sent in the clear. This could cause issues when the cookies contain sensitive data such as tokens, which can lead to session hijacking. What are possible ways to make this site secure and how can I inform the responsible ones to make it secure? By using HTTPS with a valid certificate, Chrome will mark the site as "Secure". However, as stated by Edu , even a website with a valid certificate can be non-secure if it is also serving non-secure content such as HTTP images. Mixed content (having HTTP items in HTTPS pages) is considered non-secure. If you're concerned about the security of this login site, I'd express your concerns to the IT department and see what they can do about it. | {
"source": [
"https://security.stackexchange.com/questions/179583",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/164312/"
]
} |
179,635 | I recently noticed a penetration test report wherein the non-compliance of the European Union (EU) cookie law was stated as a finding under an "other" category. I consider this more of a legal, privacy-related matter and not so much security. Why would this be in a penetration test report? Are there possible security-related concerns that I'm not aware of? | I don't know of any technical security impact relating to not adhering to EU cookie laws. Ultimately I think this is mostly down to the discretion of the assessor and the context of the assessment. Privacy issues are security-adjacent and come with similar PR impacts, and may even be judged to infringe upon the rights of the individual, so I think in some cases such findings may be useful. For me the question isn't so much whether these things should be reported to the client, as whether or not they should be in the pentest report itself. There are other communications channels that can be used to relay this information. It may well be that this was discussed and the client asked that it be put into the report. It could even be that compliance concerns were one of the key drivers to having the assessment done in the first place. Some scopes explicitly include looking for findings that might embarrass the company or its associates (content injection is a fun one here). I have reported everything from functionality problems to typos (albeit serious ones with vulgar consequences) to clients when doing pentesting work, when appropriate, because ultimately my job is to help improve their system. I don't think it hurts to include this kind of thing in a report because it can always be removed and filed separately at the client's request. | {
"source": [
"https://security.stackexchange.com/questions/179635",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/72031/"
]
} |
179,688 | Has Intel released any information about new processors? According to their advisory a number of processors are susceptible, but it says nothing about when new processors will be fixed. Also Meltdown and Spectre Vulnerabilities has no answers addressing this. So, from what production date on will/are Intel processors secured against speculative execution attacks? | The processors that were already announced and are about to be launched in the near future will still be vulnerable to both Spectre v2 and Meltdown if patches and/or firmware is not applied correctly. Spectre v1 was not entirely fixed with the latest patches. Most recent products have patches available, although not always functioning very well. You can easily cross-reference the list of affected products to the soon-to-be-launched ones. To answer your question directly: Intel plans to fix this on a hardware level in 2018. Intel's CEO stated the following in the earnings call for Q4 2017 Our near term focus is on delivering high quality mitigations to protect our customers infrastructure from these exploits. We’re working to incorporate silicon-based changed to future products that will directly address the Spectre and Meltdown threats in hardware. And those products will begin appearing later this year. I don't know what "appearing" means in this context (announcement of new products or release of new products). Conclusion : throughout 2018 anybody who plans to buy a new processor (or a new laptop/PC with a new processor) will have to take some security measures to secure themselves against Spectre and Meltdown IF the accompanying firmware or the OS has no proper protection against these vulnerabilities. Edit: After the conversation with R.. I rechecked Intel's official statements. All mitigation attempts and patches only target Meltdown and Spectre v2, because : For the bounds check bypass method that's Spectre v1 , Intel’s mitigation strategy is focused on software modifications. It remains unclear if this will remain Intel's strategy throughout the next product cycle. | {
"source": [
"https://security.stackexchange.com/questions/179688",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/405/"
]
} |
179,748 | I have been browsing the internet looking for simple ways to password protect a folder in Windows 7 without any extra software nor BitLocker. I have found several places that state that using a .bat file (and then converting it to an .exe file to avoid people opening it in a text editor and making sense of it) would allow this. However something tells me this is a not very secure system. I share here some of the typical .bat file's code they claim to work and password protect the folder: @ECHO OFF
if EXIST "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}" goto UNLOCK
if NOT EXIST Private goto MDPrivate
:CONFIRM
echo Are you sure to lock this folder? (Y/N)
set/p "cho=>"
if %cho%==Y goto LOCK
if %cho%==y goto LOCK
if %cho%==n goto END
if %cho%==N goto END
echo Invalid choice.
goto CONFIRM
:LOCK
ren Private "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
attrib +h +s "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
echo Folder locked
goto End
:UNLOCK
echo Enter password to Unlock Your Secure Folder
set/p "pass=>"
if NOT %pass%== wonderhowto goto FAIL
attrib -h -s "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
ren "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}" Private
echo Folder Unlocked successfully
goto End
:FAIL
echo Invalid password
goto end
:MDPrivate
md Private
echo Private created successfully
goto End
:End Would this be easy to crack after we export it into an .exe file? What if I encrypt the .exe file for extra security? And also, would the folder need to be hidden for this "trick" to work?
What are those keys like 21EC2020-3AEA-1069-A2DD-08002B30309D ? Thanks very much for throwing some light over here. | This batch script doesn't protect anything at all. It just renames the folder in question and sets the system and hidden attributes, which makes the folder a little bit harder to find and might keep your ten-year-old child from seeing it, but it won't stop anyone else. At most, we might call this "security by obscurity", but I'm hesitant to even call it that, because in reality, setting the hidden and system attributes don't obscure that much. There is no encryption going on at all – the folder doesn't get password-protected, the password is only used in the batch file to block someone from using the batch file to automatically remove the hidden and system attributes and rename the folder back to something sane. Would this be easy to crack after we export it into an .exe file? Yes. Simply extract all strings from the .exe file and you'll find the "password". What if I encrypt the .exe file for extra security? Again, the script does not provide any security at all. If you encrypt the batch/ .exe file, you make it harder to use, but you don't actually make the folder in question more secure. What are those keys like 21EC2020-3AEA-1069-A2DD-08002B30309D ? These don't mean anything by themselves (edit: see Bob and Danny's comments about junction points for an explanation of their meaning). They might be part of a larger system the batch file is part of, but they don't add anything to the security of the actual folder – it's just part of the folder's new name while it is "hidden". What to do instead You've already said it: Use BitLocker or VeraCrypt. Veracrypt can work with container files which will contain a whole tree of folders and it offers real security, as does BitLocker. If you don't want to use any extra software, do what Mike suggests and zip the folder, protecting it with a password. This offers you nowhere near the security of BitLocker, VeraCrypt, and compatriots, though. | {
"source": [
"https://security.stackexchange.com/questions/179748",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/170680/"
]
} |
179,823 | For privacy against tracking, I have my browser set up to refuse cookies by default. I only allow cookies from whitelisted domains. In general, this works OK for me. However, I now have at least one case where it has become inconvenient. I pay for a digital subscription to the Washington Post, because I like their journalism and want to support it, but I do not want them or their advertisers tracking me, so I never log in and don't accept cookies from them. This has worked fine until recently. Within the last few days, they have done something new on their web site with the result that although I can view their home page, if I click through to a story, I get this message in firefox: The page isn’t redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete. This problem can sometimes be caused by disabling or refusing to accept cookies. This is not them paywalling the site. In a browser set up to accept all cookies, I can access all their content without logging in, but any page click creates about 20 cookies from washingtonpost.com and about 20 cookies from their advertisers. If I go to the home page, clear the cookies, and then click on a link to a story, it works, but the cookies get recreated. So it appears that there is code on the pages I'm trying to view that attempts to create these cookies and then throws an error if creating the cookies fails. Is there any good strategy for this type of situation that preserves my privacy? For example, I thought about writing a script that would run every 10 seconds on my machine and delete all cookies except those from whitelisted domains. related: https://askubuntu.com/questions/368325/how-to-clear-browsers-cache-and-cookies-from-terminal | If you're concerned about trackers, you're probably looking for First Party Isolation . First Party Isolation is a feature that Firefox adopted from the Tor browser's Cross-Origin Identifier Unlinkability concept. FPI works by linking all cookies to the first-party domain (the one in the URL bar), making third-party cookies distinct between different domains. That is, if you're visiting a.com and a tracker sets a cookie, and later visit b.com which uses the same tracker, it won't be able to see the cookies it has placed earlier, when the first-party domain was different ( a.com ). Another explanation: What is First-Party Isolation FPI works by separating cookies on a per-domain basis. This is important because most online advertisers drop a cookie on the user's computer for each site the user visits and the advertisers loads an ad. With FPI enabled, the ad tracker won't be able to see all the cookies it dropped on that user's PC, but only the cookie created for the domain the user is currently viewing. This will force the ad tracker to create a new user profile for each site the user visits and the advertiser won't be able to aggregate these cookies and the user's browsing history into one big fat profile. (Source) To enable FPI, you can either go to about:config and set privacy.firstparty.isolate to true , or install the now un official First Party Isolation add-on . But before you activate it, be aware that some web apps rely on third-party cookies for actual functionality and may become unusable afterwards (e.g. you might be unable to log in). 
If you experience such problems, try also setting privacy.firstparty.isolate.restrict_opener_access to false which will relax the isolation rules and you're less likely to experience problems during, say, a cross-domain login flow that redirects you between different domains. Another approach in Firefox are containers . With containers you're essentially isolating different sessions from each other without having to use multiple browser profiles. E.g., you could read WaPo in a distinct container, and any cookies set by trackers in that container wouldn't be visible in the other ones. Containers are available in Firefox Nightly and as an add-on . (Chrome doesn't have this exact feature, but you can use multiple profiles to get the same effect.) I thought about writing a script that would run every 10 seconds on my machine and delete all cookies except those from whitelisted domains. The problem I see with this is that some sites re-create cookies immediately after you delete them (as long as you still have their scripts loaded). And if your timing is bad, you might eventually run into the same problems you had with disabled cookies. Finally, there are also reputable addons such as Ghostery that detect and block known trackers. So, you have plenty of options to maintain your privacy without disabling cookies entirely -- which unfortunately doesn't get you very far in the modern web. | {
"source": [
"https://security.stackexchange.com/questions/179823",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
179,852 | When reading some documentation about the security of a product, I found that the vendor uses the SHA-2 of a password to encrypt data (AES-256), instead of using this password directly. Are there any advantages of doing so? An attacker is not going to crack the encrypted data using this SHA-2-as-a-password key but rather exhaust the password keyspace (if feasible) and try its hash. Therefore the only reason I can think of is that there is an extra computational step (the creation of the hash). I would have rather increased the password entropy if the point is to computationally complexify the attack. | It sounds like a primitive version of a key derivation function (KDF) , in particular they probably could have avoided reinventing the wheel by using PBKDF2 . There are several reasons why you don't want to use the password directly as an AES key. To distribute the bits. The main property here is that a hash function's output is, statistically speaking, uniformly distributed. People tend to pick passwords that aren't fully random, in particular, most passwords would only contain characters you can type in a keyboard. When used as an encryption key, a non-statistically random key may expose weaknesses in the encryption function. To fit the keys to the encryption key length. Most passwords are going to be either longer or shorter than the key space of the encryption function. By hashing your password, the exact key length will be exactly the size of the input key of your encryption function. While the entropy of the derived key doesn't increase, this avoids the likelihood of exposing weakness in the encryption function if you just simply zero pad the password or worse truncate the password. To slow down key derivation decryption. Per your description, the software is only using a single SHA256 round, which is not much. But with proper password based KDF, like PBKDF2, there are usually tens of thousands or hundreds of thousands of rounds of the underlying hash function. This slows down computing the keys, increasing the effective strength of passwords without increasing its length. To keep the user's plain text password out of memory, thus preventing it from being accidentally dumped to disk during hibernation or crash dump. While this wouldn't protect the hash from being used to decrypt the data you're encrypting, it will prevent the password from being reused to decrypt other files (which presumably uses different salt) or being tried on your online accounts or other devices that you use. | {
"source": [
"https://security.stackexchange.com/questions/179852",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6341/"
]
} |
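What the vendor apparently hand-rolled with a single SHA-256 pass is available off the shelf. A minimal sketch of deriving an AES-256 key with PBKDF2 from Python's standard library (the salt handling and iteration count are illustrative):
import os, hashlib

password = b"correct horse battery staple"
salt = os.urandom(16)            # store next to the ciphertext, it is not secret
iterations = 600_000             # many rounds, unlike a single SHA-256 pass

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
# 'key' is 32 statistically uniform bytes, exactly the size AES-256 expects.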
179,969 | I was recently the victim of credit card fraud and I suspect it is from a merchant somewhere keeping track of my credit card details. I cancelled the card and received a new one, but I would like to make it as difficult as possible for criminals in brick-and-mortar stores to copy my card details. What parts of the credit card can I save to a password vault and obscure by scratching over/off and still have it be valid? | If you deface a credit card, you are likely to find it will be rejected for all transactions. The merchant really needs all the info on the card to be valid - it's part of how they protect themselves from fraud. So my answer would be: none! Instead of worrying about that, concern yourself more with how the merchants handle your card. In the UK, for example, a customer never needs to let go of the card in most stores now, as contactless is almost ubiquitous. But if you have to hand over your card, watch it like a hawk. Handheld terminals brought to you are safer than letting someone take your card away. And remember, if the merchant commits fraud, your bank will reimburse you so it's not the end of the world. | {
"source": [
"https://security.stackexchange.com/questions/179969",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12427/"
]
} |
180,321 | I reviewed the auth.log file on my Ubuntu server to find: [preauth]
Feb 22 17:39:18 code-storage sshd[17271]: Disconnected from 147.135.192.203 port 49408 [preauth]
Feb 22 17:40:15 code-storage sshd[17273]: Invalid user ellen from 147.135.192.203
Feb 22 17:40:15 code-storage sshd[17273]: input_userauth_request: invalid user ellen [preauth]
Feb 22 17:40:15 code-storage sshd[17273]: Received disconnect from 147.135.192.203 port 50193:11: Normal Shutdown, Thank you for playing [preauth]
Feb 22 17:40:15 code-storage sshd[17273]: Disconnected from 147.135.192.203 port 50193 [preauth]
Feb 22 17:40:34 code-storage sshd[17275]: Connection closed by 103.237.147.107 port 17583 [preauth]
Feb 22 17:41:12 code-storage sshd[17277]: Invalid user emil from 147.135.192.203
Feb 22 17:41:12 code-storage sshd[17277]: input_userauth_request: invalid user emil [preauth]
Feb 22 17:41:12 code-storage sshd[17277]: Received disconnect from 147.135.192.203 port 50841:11: Normal Shutdown, Thank you for playing [preauth]
Feb 22 17:41:12 code-storage sshd[17277]: Disconnected from 147.135.192.203 port 50841 [preauth]
Feb 22 17:42:05 code-storage sshd[17280]: Invalid user enzo from 147.135.192.203
Feb 22 17:42:05 code-storage sshd[17280]: input_userauth_request: invalid user enzo [preauth]
Feb 22 17:42:05 code-storage sshd[17280]: Received disconnect from 147.135.192.203 port 51356:11: Normal Shutdown, Thank you for playing [preauth]
Feb 22 17:42:05 code-storage sshd[17280]: Disconnected from 147.135.192.203 port 51356 [preauth]
Feb 22 17:42:14 code-storage sshd[17282]: Connection closed by 103.237.147.107 port 64695 [preauth]
Feb 22 17:43:00 code-storage sshd[17285]: Invalid user felix from 147.135.192.203
Feb 22 17:43:00 code-storage sshd[17285]: input_userauth_request: invalid user felix [preauth]
Feb 22 17:43:00 code-storage sshd[17285]: Received disconnect from 147.135.192.203 port 52145:11: Normal Shutdown, Thank you for playing [preauth]
Feb 22 17:43:00 code-storage sshd[17285]: Disconnected from 147.135.192.203 port 52145 [preauth]
Feb 22 17:43:52 code-storage sshd[17287]: Connection closed by 103.237.147.107 port 55122 [preauth]
Feb 22 17:43:56 code-storage sshd[17289]: Invalid user fred from 147.135.192.203
Feb 22 17:43:56 code-storage sshd[17289]: input_userauth_request: invalid user fred [preauth]
Feb 22 17:43:56 code-storage sshd[17289]: Received disconnect from 147.135.192.203 port 52664:11: Normal Shutdown, Thank you for playing [preauth] There is much more than this, but this is from the last few minutes before I copied the log file. Is this a brute force SSH attack, and if so should I be worried and what are the best mitigation steps and/or solutions other than changing the server IP? | Is this a brute force attack?
This looks like the background scanning that any server on the internet will experience.
Should I be worried?
Not really; background scanning is completely normal, and as long as your passwords are secure it should pose no risk.
What are the best mitigation steps?
You can use the following to make the server more secure:
- Disable the SSH service when you don't need it
- Only allow login using key auth
- Disable root ssh access
- Use a system like Fail2Ban to block brute force attempts
Should I change IPs?
Changing IPs will probably not affect automated background scanning much. | {
"source": [
"https://security.stackexchange.com/questions/180321",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/166613/"
]
} |
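In OpenSSH configuration terms, the middle bullets above usually come down to PasswordAuthentication no (with PubkeyAuthentication yes) and PermitRootLogin no in sshd_config, followed by a reload of the service; Fail2Ban's stock sshd jail then handles banning repeat offenders. (Directive names are for OpenSSH; check your distribution's defaults before relying on them.)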
180,357 | I do understand that a header is the "cleaner" solution to transport an auth-token from one trusted system to another in a REST call. But when you are in client-side JavaScript code, the world looks different to me. Cookies can be marked as "http-only" and thus can't be easily stolen by JavaScript. A header, however, has to be set by JavaScript, thus the auth token has to be accessible from within JavaScript. But yet, people use auth-headers to submit their auth-tokens from untrusted client JavaScript to the server. What has changed from the good old "use cookies with http-only and secure flag" to "let the JavaScript handle the auth token"? Or should the right way be "on the client side, use cookies and as soon as you enter the trusted world, switch to auth-header"? PS: I know that there are many answers to similar questions, but I think my question is from a different point of view ("what has changed, what is different"). | Cookie Based Authentication
Pros
- HttpOnly Flag : Session cookies can be created with the HttpOnly flag, which secures the cookies from malicious JavaScript (XSS - Cross-Site Scripting).
- Secure flag : Session cookies can be created with the Secure flag, which prevents cookie transmission over an unencrypted channel.
Cons
- CSRF : Cookies are susceptible to CSRF attacks, since third-party cookies are sent by default to the third-party domain, which is what makes exploitation of a CSRF vulnerability possible.
- Performance and Scalability : Cookie-based authentication is stateful, such that the server has to store the cookies in a file/DB in order to maintain the state of all the users. As the user base increases, the backend has to maintain a separate system to store session cookies.
Token Based Authentication
Pros
- Performance and Scalability : Tokens contain the metadata and its signed value (for tamper protection). They are self-contained and hence there is no need to maintain state at the server. This improves performance and thus scalability when expansion is required.
- CSRF : Unlike cookie-based authentication, token-based authentication is not susceptible to Cross-Site Request Forgery, since the tokens are not sent to third-party web applications by default.
Cons
- XSS : The session tokens are stored in the local data storage of the browser and are accessible to JS of the same domain. Hence there is no option to secure the session identifier from XSS attacks, unlike the HttpOnly security flag which is available in cookie-based authentication.
Conclusion
Both of the mechanisms have their own pros and cons as mentioned. In the present era of application development frameworks, cons such as XSS and CSRF are taken care of by the underlying framework itself, and hence I feel it is a clear trade-off on which developers and the stakeholders take the decision. | {
"source": [
"https://security.stackexchange.com/questions/180357",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5203/"
]
} |
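To make the point above about tokens being self-contained and signed for tamper protection more concrete, here is a minimal Python sketch of issuing and verifying an HMAC-signed token. The secret key, payload fields and expiry handling are assumptions chosen for illustration; a real deployment should rely on a vetted JWT or session library rather than this hand-rolled version.

```python
# Minimal illustration of a self-contained, signed session token (JWT-like idea).
# Do not use as-is in production; prefer a maintained JWT/session library.
import base64, hashlib, hmac, json, time

SECRET_KEY = b"change-me"   # assumption: server-side secret, never stored with user data

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def issue_token(user_id: str, ttl_seconds: int = 3600) -> str:
    payload = _b64(json.dumps({"sub": user_id, "exp": time.time() + ttl_seconds}).encode())
    signature = _b64(hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{signature}"

def verify_token(token: str):
    payload, signature = token.split(".")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _unb64(signature)):
        return None                      # tampered token
    claims = json.loads(_unb64(payload))
    if claims["exp"] < time.time():
        return None                      # expired token
    return claims                        # no server-side session lookup needed

if __name__ == "__main__":
    token = issue_token("alice")
    print(verify_token(token))           # {'sub': 'alice', 'exp': ...}
    print(verify_token(token[:-2] + "xx"))  # None: signature check fails
```

Because the token carries its own expiry and signature, the server does not need to look anything up to validate it, which is the scalability argument made above.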
180,389 | Keeping things vague - I work at a company that handles compliance issues for our clients. Very often, this means we need to log onto their various accounts for various entities. We store their username and password, both to make it easier for them to remember and have access to, and for us to be able to log onto their account. Ignoring the complete and total security nightmare of sharing logins for a minute, the username and passwords appear to be stored in plaintext. Hurray. How can I convince my boss, and the higher ups, that A) This is a terrible idea, B) a better method/way of storing these | Why is it a terrible idea? By recording others' login credentials, the company is taking upon itself a liability . Since the company is now responsible for malice that could occur using those credentials, the company should take steps to minimize the risk. In addition, companies that interact with yours have to take upon the liability of trusting you, something that can harm business if an incident goes public (as mentioned by symcbean) Storing passwords in plaintext (as you know) provides no protection against that risk. What is a better solution? As you mentioned in a comment, what you need is something like a password manager . In fact, what you want is a password manager. Since there is a lot of room for error when it comes to cryptography, I suggest using an established password manager. You can find countless comparisons online, but in the end your choices include ones based around a GUI (like 1password, LastPass, keepass, etc) or one based an cli for automated access (like pass or via the LastPass cli) | {
"source": [
"https://security.stackexchange.com/questions/180389",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/171435/"
]
} |
180,555 | Coming from the comments in this question Why is it bad to log in as root? : The sudo mechanics is in use so non-administrative tools "cannot harm your system." I agree that it would be pretty bad if some github project I cloned was able to inject malicious code into /bin . However, what is the reasoning like on a desktop PC? The same github code can, once executed, without sudo rights, wipe out my entire home folder, put a keylogger in my autostart session or do whatever it pleases in ~ . Unless you have backups, the home folder is usually unique and contains precious, if not sensitive data. Root directories however build up the system and can often be recovered by simply reinstalling the system. There are configurations saved in /var and so on, but they tend to have less significance to the user than the holiday pictures from 2011. The root permissions system makes sense, but on desktop systems, it feels like it protects the wrong data. Is there no way to prevent malicious code happening in $HOME ? And why does nobody care about it? | I'm going to disagree with the answers that say the age of the Unix security model or the environment in which it was developed are at fault. I don't think that's the case because there are mechanisms in place to handle this. The root permissions system makes sense, but on desktop systems, it feels like it protects the wrong data. The superuser's permissions exist to protect the system from its users. The permissions on user accounts are there to protect the account from other non-root accounts. By executing a program, you give it permissions to do things with your UID. Since your UID has full access to your home directory, you've transitively given the program the same access. Just as the superuser has the access to make changes to the system files that need protection from malicious behavior (passwords, configuration, binaries), you may have data in your home directory that needs the same kind of protection. The principle of least privilege says that you shouldn't give any more access than is absolutely necessary. The decision process for running any program should be the same with respect to your files as it is to system files. If you wouldn't give a piece of code you don't trust unrestricted use of the superuser account in the interest of protecting the system, it shouldn't be given unrestricted use of your account in the interest of protecting your data. Is there no way to prevent malicious code happening in $HOME? And why does nobody care about it? Unix doesn't offer permissions that granular for the same reason there isn't a blade guard around the rm command : the permissions aren't there to protect users from themselves. The way to prevent malicious code from damaging files in your home directory is to not run it using your account. Create a separate user that doesn't have any special permissions and run code under that UID until you've determined whether or not you can trust it. There are other ways to do this, such as chrooted jails, but setting those up takes more work, and escaping them is no longer the challenge it once was. | {
"source": [
"https://security.stackexchange.com/questions/180555",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/171614/"
]
} |
180,561 | My understanding of Have I Been Pwned is that it checks your password to see if someone else in the world has used it. This really doesn't seem that useful to me. It seems equivalent to asking if anyone in the world has the same front door key as me. Statistically, I would assume yes, but without knowing where I live... who cares? So have I misunderstood what HIBP does or am I underestimating its value because I'm misunderstanding some principle of security? EDIT Turns out there was more to the site than I understand. I was referring specifically to the password feature . | Disclaimer: I am the author, creator, owner and maintainer of Have I Been Pwned and the linked Pwned Passwords service. Let me clarify all the points raised here: The original purpose of HIBP was to enable people to discover where their email address had been exposed in data breaches. That remains the primary use case for the service today and there's almost 5B records in there to help people do that. I added Pwned Passwords in August last year after NIST released a bunch of advice about how to strengthen authentication models. Part of that advice included the following : When processing requests to establish and change memorized secrets, verifiers SHALL compare the prospective secrets against a list that contains values known to be commonly-used, expected, or compromised. For example, the list MAY include, but is not limited to: Passwords obtained from previous breach corpuses. That's what Pwned Passwords addresses: NIST advised "what" you should do but didn't provide the passwords themselves. My service addresses the "how" part of it. Now, practically, how much difference does it make? Is it really as you say in that it's just like a one in a million front door key situation? Well firstly, even if it was , the IRL example breaks down because there's no way some anonymous person on the other side of the world can try your front door key on millions of door in a rapid-fire, anonymous fashion. Secondly, the distribution of passwords is in no way linear; people choose the same crap ones over and over again and that puts those passwords at much higher risks than the ones we rarely see. And finally, credential stuffing is rampant and it's a really serious problem for organisations with online services. I continually hear from companies about the challenges they're having with attackers trying to login to people's accounts with legitimate credentials . Not only is that hard to stop, it may well make the company liable - this popped up just last week: "The FTC’s message is loud and clear: If customer data was put at risk by credential stuffing , then being the innocent corporate victim is no defence to an enforcement case" https://biglawbusiness.com/cybersecurity-enforcers-wake-up-to-unauthorized-computer-access-via-credential-stuffing/ Having seen a password in a data breach before is only one indicator of risk and it's one that each organisation using the data can decide how to handle. They might ask users to choose another one if it's been seen many times before (there's a count next to each one), flag the risk to them or even just silently mark the account. That's one defence along with MFA, anti-automation and other behavioural based heuristics. It's merely one part of the solution. 
And incidentally, people can either use the (freely available) k-Anonymity model via API which goes a long way to protecting the identity of the source password or just download the entire set of hashes (also freely available) and process them locally. No licence terms, no requirement for attribution, just go and do good things with it :) | {
"source": [
"https://security.stackexchange.com/questions/180561",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15091/"
]
} |
180,756 | While I was playing some Capture the Flag (CTF), I couldn't solve a challenge because of this tilde ~. I'm playing on www.example.com/index.php but when I added (~) at the end: www.example.com/index.php~ , a file with name index.php started downloading. Could you explain to me what's the role of this tilde (~)? | It's just part of the filename, just like letters, numbers, and other special characters can be part of the filename. It's conventional to create "backups" of files before editing them by appending a tilde, so in case you mess something up, you have a previous version to restore. In Bash this can be easily done with cp index.php{,~} which expands to cp index.php index.php~ . It could be on the CTF because people often forget about those files and leave them unprotected. Especially a file like index.php might contain database credentials. In bash, a tilde at the start of an argument also expands to a home folder: ~ turns into $HOME (e.g. /home/yourusername ), or ~username turns into that user's home folder (e.g. ~root typically expands to /root ). That is not the case here, though, because it's not at the beginning. | {
"source": [
"https://security.stackexchange.com/questions/180756",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/168067/"
]
} |
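Since forgotten editor copies like index.php~ are exactly what CTF challenges (and real audits) look for, here is a small Python sketch that probes a URL for common backup suffixes. The base URL and suffix list are assumptions for illustration; only run such checks against systems you are authorized to test.

```python
# Hypothetical sketch: check whether editor/backup copies of a script are
# exposed next to the original, as in the situation described above.
import urllib.request

BASE_URL = "https://www.example.com/index.php"   # assumption: illustrative target
SUFFIXES = ["~", ".bak", ".old", ".swp"]         # common leftover-copy suffixes

def probe(url):
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            return response.status
    except Exception:
        return None

if __name__ == "__main__":
    for suffix in SUFFIXES:
        candidate = BASE_URL + suffix
        if probe(candidate) == 200:
            print(f"Possible exposed source copy: {candidate}")
```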
180,903 | If I understand correctly, Certification Authority Authorization DNS records are used to specify which certificate authorities are allowed to issue certificates for a given domain. If that record exists and a CA is not listed on it, that CA must refuse to issue a certificate for the domain. However, this doesn't seem to protect against vulnerabilities in the CA. If any trusted authority doesn't implement CAA properly, or if an authority's private keys are breached, then CAA doesn't help. My question is, why don't browsers check CAA records? If the certificate given was not issued by a CA authorized in the record, then it would consider the certificate invalid. This would greatly increase security by reducing the list CAs that must be trusted to only the ones the website owner chooses to trust. I understand that HPKP is also used to prevent bad certificates. However, it only works with HTTP, and it requires either trusting the first certificate received for a site, or trusting a third-party preload list. So is this something that browsers could implement, or am I missing something here? | I just found the answer in RFC 6844 , DNS Certification Authority Authorization (CAA) Resource Record: A set of CAA records describes only current grants of authority to
issue certificates for the corresponding DNS domain. Since a
certificate is typically valid for at least a year, it is possible
that a certificate that is not conformant with the CAA records
currently published was conformant with the CAA records published at
the time that the certificate was issued. Relying Applications MUST
NOT use CAA records as part of certificate validation. [emphasis mine] Basically, it is not the purpose of CAA to describe which certificates are currently valid for a domain. If a certificate is issued when the CA is on the record, and the record is later removed, then the certificate should remain valid until it expires (or is revoked). Validation of certificates in the browser (or Relying Application) through DNS appears to be the purpose of DANE, or DNS-Based Authentication of Named Entities, specified in RFC 6698 . Unfortunately, DANE is not widely implemented. | {
"source": [
"https://security.stackexchange.com/questions/180903",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60643/"
]
} |
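As a companion to the answer above, the following sketch queries a domain's CAA records with the third-party dnspython package (an assumption: pip install dnspython). It only shows what a CA would consult at issuance time; as the RFC quote explains, relying applications such as browsers must not use these records for certificate validation.

```python
# Sketch: look up a domain's CAA records with dnspython.
# CAA tells CAs who may issue for the domain at issuance time only.
import dns.resolver

def caa_records(domain):
    try:
        answer = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [record.to_text() for record in answer]

if __name__ == "__main__":
    for entry in caa_records("google.com"):   # e.g. '0 issue "pki.goog"'
        print(entry)
```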
180,948 | Backstory My sites and VPS were stolen from me. The hosting company and I were locked out and unable to access it. They weren't able to create a temp password for access because the attacker blocked it.
The last time I was logged into WHM, root control was taken and all HDDs were no longer bootable. I believe the same person used a worm to monitor my desktop remotely: 5 PCs and 5 mobile devices, bricking my DD-WRT R7000 router with PIA VPN killswitched. A dozen or so Mint/Ubuntu VMs were taken over. Many USB drives were made to be write-only. It was relentless.
I stopped trying to figure out what was going on and reformatted all devices. I am now waiting on the server image and a memory snapshot, as well as an rsync copy. Upon transaction I'll get a fresh server image...certainly of a different IP, unless they aren't to be convinced. Here is the email I received today:
This ticket was just assigned to me. I have made a backup of the account >out of the way of the backup processes here. [2018-03-25] pkgacct completed I was reading over some of the conversations and would like to just make sure we are on the same page. We can not dd ( bit for bit copy ) the current state because the account does not have a storage medium that can handle it. If you wanted to add keys that we can rsync over ssh to a remote destination just give us the destination of the output file and we will be happy to help with that. We normally do not keep hacked operating systems around unless there is some specific interest. What I find interesting is the openVPN software: uperior.hosting.com [home]# ifconfig
as0t0 Link encap:Ethernet HWaddr
inet6 addr: Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:436367538 errors:0 dropped:504 overruns:0 carrier:0
collisions:0 txqueuelen:200
RX bytes:0 (0.0 b) TX bytes:26310498062 (24.5 GiB) asbr0 Link encap:Ethernet HWaddr
inet addr: Bcast: Mask:
inet6 addr: Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:431222324 errors:0 dropped:0 overruns:0 frame:0
TX packets:1492069 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:20595591150 (19.1 GiB) TX bytes:634373123 (604.9 MiB) If you are using this server as a production WebServer I would recommend using CentOS 7 and installing something like 'Cockpit' instead of using a VPN through a cPanel server on CentOS 6. What is in the scope of our support is to insure you have a path to and from the server while you salvage the barebones data. I will conclude with some options and questions. (The following are my answers:) An inquiry about the OpenVPN activity; letting them know, again, I want a copy of the server image, a memory snapshot, and all logs available; fresh installation of cPanel/WHM; fresh IPs for sites and VPS; SFTP info. Is having a VPN, IDS, firewall, honeypot enough?
Have I left anything out? VPS is a from Bluehost, running CentOS 7 with any alternative to WordPress... | I'll start with what to do with your current system: Get in and make a backup of everything. Unless you can demonstrate major losses ($10k+), I wouldn't even begin to think about involving law enforcement. They have their hands full, and given the current patterns on the internet, it's highly likely that your culprit is in a different country than you are. Nobody is going to do an extradition process for hacked Wordpress sites. (Sorry, I know it's hard to hear, but it's the reality.) Burn the current server to the ground. Consider every password on your old server compromised. Now, how do you build a new server to avoid this happening again? I'm going to make a few assumptions based on what you wrote in your post: Multiple Wordpress installs. mod_php on Apache CPanel/WHM Here's a few recommendations: Get your new server. Do a fresh installation of your applications, and setup strong passwords/credentials that have nothing to do with your old ones. Configure SELinux to limit the exposure of each site as much as possible. Be careful what wordpress plugins you install. They have a much worse security track record than wordpress core. Ensure that directories that are writable by the webserver are never interpreting files as PHP. Use 2-factor authentication for everything you can. CPanel SSH Without being able to do digital forensics on your system, it's hard to know for sure, but I'm going to go out on a limb and guess what happened: Attacker runs scanner looking for vulnerable Wordpress installs. Attacker finds vulnerable Wordpress on your server. Gets RCE as webserver. Dumps password hashes for wordpress databases. Cracks password hashes, one of them matches a WHM/CPanel password. Gets into CPanel. Maybe as an admin, or maybe the version of CPanel had a bug that allows privilege escalation to admin. You talk about fear of retaliation and the attacker coming after you again and again. Unless this is personal (a vendetta of some sort), I wouldn't worry about that. Attackers like this will just move on to their next compromised host. Just don't give them another chance. | {
"source": [
"https://security.stackexchange.com/questions/180948",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172063/"
]
} |
181,050 | I've come across a random website Moodoo.cz . The interesting thing is that if you access it via the HTTPS: Moodoo.cz , the content completely changes. It is not that unusual - I guess server can serve different content for different protocols. But I've found dozens of such websites that have the same content (Peugeot 205 Club forum) served on their HTTPS protocol, many of which are valid businesses. I am strongly convinced that most of these websites don't know about this happening and that it's just some misused security hole. Can you explain (or at least make some educated quess) what security issue these websites share? What to check to ensure this won't happen to my website? Following is a subset of websites I've found currently having the described issue. Naturally some of them will fix the issue in the future. You might also be asked to add a temporary security exception to view the content. www.artatak.cz [HTTP] [HTTPS] www.autodilykoci.cz [HTTP] [HTTPS] www.blackdogs.cz [HTTP] [HTTPS] www.cairns-esc.com.au [HTTP] [HTTPS] www.czechmusic.org [HTTP] [HTTPS] www.designconcept.cz [HTTP] [HTTPS] www.hanes.cz [HTTP] [HTTPS] www.kopprea.cz [HTTP] [HTTPS] www.moodoo.cz [HTTP] [HTTPS] www.mujdummujhrad.cz [HTTP] [HTTPS] www.pribehy.info [HTTP] [HTTPS] www.resultscoaches.cz [HTTP] [HTTPS] www.spalicek.eu [HTTP] [HTTPS] www.spectris-dot.com [HTTP] [HTTPS] www.statspol.cz [HTTP] [HTTPS] www.tonej.cz [HTTP] [HTTPS] www.tuze.cz [HTTP] [HTTPS] | This is likely a server misconfiguration, since all those websites are served from 95.173.215.72 . When opening one of the websites via HTTPS, my browser warns me that the certificate common name, which must match the website domain, is invalid. I guess those websites aren't supposed to be acccessible via HTTPS, since Apache isn't configured to deliver the correct certificate, and seems to load the default website ( forum.205gti.org ). As far as I know, this isn't a security vulnerability. | {
"source": [
"https://security.stackexchange.com/questions/181050",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32957/"
]
} |
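The mismatch described above can be observed programmatically: a verified TLS handshake against one of the listed hostnames fails because the default certificate does not cover the requested name. The sketch below uses only the Python standard library; the hostnames are taken from the question plus an arbitrary control, and results will change once the sites fix their configuration.

```python
# Sketch: attempt a verified TLS handshake and report certificate-name
# mismatches of the kind described in the entry above.
import socket
import ssl

def check_tls(hostname, port=443):
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                san = tls.getpeercert().get("subjectAltName", ())
                return f"OK, certificate covers: {[name for _, name in san]}"
    except ssl.SSLCertVerificationError as error:
        return f"Certificate problem: {error.verify_message}"
    except OSError as error:
        return f"Connection problem: {error}"

if __name__ == "__main__":
    for host in ("www.moodoo.cz", "www.google.com"):   # host from the question + a control
        print(host, "->", check_tls(host))
```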
181,158 | I have seen examples of password hashing that were: H(username + salt + password). What is the purpose of adding username? Is there any purpose? | No, there is no purpose. This is security theater*. The purpose of a salt is to make parallel attacks against all hashed passwords infeasible, and to break rainbow tables. Adding the username in there does not improve that behavior or increase any other aspects of security. It actually has a bit of a downside as you now will run into some troubles if you are changing the username, and you are required to maintain a more complex and non-standard system. In a purely cryptographic sense, there is no downside. Practically speaking though, more complexity means more bugs. The properties of salts and usernames You might think that the username itself may not be public, so it couldn't hurt to use it as an additional secret, but the fact is that the database will likely already contain the username in plaintext, invalidating this already questionable benefit. People should stick to pre-existing authentication techniques. But let's look at the properties of each of these objects: A salt is: Not secret - Salts are stored in plaintext. Secure - They are generated randomly and are long. Unique - Every user's salt is intentionally different. A password is: Secret - Assuming they are not put on a sticky note. Secure - If it's not hunter2 . The password should be good . Unique - Ideally, at least, but not all passwords are ideal. Now compare this to a username. A username is: Not secret - They are public or at least stored in plaintext . Not secure - No one thinks to choose a long and complex username. Not unique - Usernames are safe to share between sites. The traits of a good salt Now, what exactly does a salt do ? In general, a good salt provides three benefits: It prevents an attacker from attacking every user's hash at once . The attacker is no longer able to hash a candidate password and test it against every single entry at once. They are forced to re-compute any given password to be tested for each user's hash. This benefit provided by a salt grows linearly as the number of distinct target hash entries grow. Of course, salts are still important even if only a single hash is in need of protection, as I explain below. It makes rainbow tables infeasible . A rainbow table is a highly-optimized precomputed table that matches passwords to hashes. They take up less space than a gigantic lookup table, but take a lot of time to generate (a space-time trade-off ). In order for rainbow tables to work, a given password must always resolve to the same hash. Salts break this assumption, making rainbow tables impractical as they would have to have a new entry for each possible salt. It prevents targeted precomputation through rainbow tables, at least when the hash is truly random. An attacker can only begin the attack after they get their hands on the hashes (and with them, salts). If the salt is already public but the hash is not, then the attack can be optimized by generating a rainbow table for that specific salt. This is one reason why WPA2 is such an ugly protocol. The salt is the ESSID (network name), so someone can begin the attack for their target's router before they ever even get their hands on the 4-way handshake. So what possible benefit would concatenating a value before hashing when this value is public, insecure, and re-used? It doesn't end up requiring the attacker dig for more information. It doesn't add to the security of a salt. 
It doesn't increase the complexity of the password. There is no benefit. Proper password hashing So what should they do to increase security ? They can use a KDF such as PBKDF2, bcrypt, scrypt, or argon2 instead of a single hash. They can add a pepper , which is a random global value stored outside of the database and added to the password and salt, making it necessary to steal the pepper to attempt to attack the hashes rather than simply dump the database using SQLi. EDIT: As some of the comments point out, there is one contrived scenario where the username would be beneficial to add into the mix. That scenario would be one where the implementation is broken badly enough that the salt is not actually a salt, and the username is the only unique or semi-unique per-user value in the database, in which case mixing in the username would be better than nothing. But really, if you have no salt, you should start using one instead of trying to use usernames. Use real security and don't be half-assed when your users' safety is on the line. * In this context, I am defining security theater as the practice of implementing security measures that do not actually improve security in any meaningful way and are only present to provide the illusion of better security. | {
"source": [
"https://security.stackexchange.com/questions/181158",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172365/"
]
} |
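For contrast with H(username + salt + password), here is a minimal sketch of the answer's actual recommendation: a per-user random salt fed into a slow KDF (PBKDF2 from the Python standard library). The iteration count is an assumption and should be tuned; bcrypt, scrypt or argon2 via a maintained library are equally valid choices.

```python
# Sketch: store a per-user random salt and a slow KDF output instead of a
# single hash that mixes in the username.
import hashlib
import hmac
import os

ITERATIONS = 600_000   # assumption: tune to your hardware / latency budget

def hash_password(password):
    salt = os.urandom(16)   # unique, random, stored in plaintext next to the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)

if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("hunter2", salt, stored))                       # False
```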
181,178 | I am currently working on the migration of a user Identity and Access Management tool from a legacy platform (product + solution) to a new one (same product but upgraded + updated solution) My team was challenged with the following requirements: users should keep the same password users should keep the same challenge questions & answers We believe that as a best practice users should setup new password and new challenge questions & answers. Also, we need to ensure compliance with GDPR, which means we are avoiding procedures and files that may show in plain text sensitive data. What is the standard here? | | {
"source": [
"https://security.stackexchange.com/questions/181178",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172429/"
]
} |
181,190 | I want to make a weather-forecast website and was planning to make AJAX requests to the openweathermap API to load the weather data. However, I am uncertain on how to properly use the API key. I was going to include the key in a JS script, but I am concerned that the key might be "stolen" either from GitHub or the html source. AFAIK the key can only be used to make weather data requests, but it could still be abused to make an inordinate number of requests. Would including the key in the html file be a bad practice? If so, how would I properly implement it? | | {
"source": [
"https://security.stackexchange.com/questions/181190",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138397/"
]
} |
181,328 | I went online on my Macbook today and noticed my iTunes complaining that it couldn't connect to Apple, I tried logging out and in of my account but weirdly it said it couldn't log in; I didn't think much of it at first as I thought maybe it was iTunes just being more buggy than usual. However then I noticed something really weird, when I tried to visit www.apple.com my browser warned me (Google Chrome) saying this website was not secure. This started ringing alarm bells in my mind, I clicked "Continue Anyway" and was greeted with this page: Being (somewhat of) a web designer/developer I pay attention to the little details on a website and I knew instantly this was not what the Apple homepage looks like, and they certainly didn't prompt you to login on their homepage. I dug in a little deeper to the source code for the page and could see that the source code was way too simplified for a large corporation; the only piece of JS was to verify that the email address was in the right format. I began to suspect maybe my Mac machine had been infected, so I switched to my iPhone (on the same WiFi network), tried www.apple.com , and got shown the exact same page. To me this sounded like something to do with DNS as the chances that both my devices were infected were very unlikely. I then turned to my router to have a look at its settings. Lo and behold, when digging into the DNS settings I could see that the settings looked a little odd. I had initially set my DNS settings to use Google's servers, although this was set many years ago I knew the were something along the lines of 8.8.*.* . In my settings however I found the following IP's: Primary: 185.183.96.174
Secondary: 8.8.8.8 I knew straight away that the DNS had been changed, the primary address should have been 8.8.4.4 . No one has access to my router administration page aside from me on the network, and I have disabled access to the router outside of the local network I can see outside access was enabled, on initial setup this was definitely switched off. My question is: "How could the DNS have been changed/What can I do to prevent this from happening again? I try to keep my router firmware up to date (although I was maybe 1 release behind at the time of this post). More about the phishing site: Before I changed the Primary DNS setting back and I wanted to find out more about this phishing site, so I ran ping apple.com to find the IP address was 185.82.200.152 . When I entered this into a browser I could see that the person had created a number of sites to try and capture logins. I suspect they're based in the US; I don't believe Walmart operates outside of the states (at least not in the UK).
I have reported the IP to the Dubai based web host and am waiting for a response. Edit (Router details): Asus AC87U, FW Version 3.0.0.4.380.7743 (1 release behind) I did not have the default passwords set. Second update: Host has suspended the account. | Yes, your router's primary DNS entry was pointed to a rogue DNS server to make devices in your network resolve apple.com and other domains to phishing sites instead. The router possibly got compromised through an unpatched vulnerability in its firmware. I have an Asus AC87U, FW Version 3.0.0.4.380.7743 (1 release behind). Your release is over half a year old. The latest release 3.0.0.4.382.50010 (2018-01-25) comes with lots of security fixes, including RCE vulnerabilities which may have been exploited here. Security fixed Fixed KRACK vulnerability Fixed CVE-2017-14491: DNS - 2 byte heap based overflow Fixed CVE-2017-14492: DHCP - heap based overflow Fixed CVE-2017-14493: DHCP - stack based overflow Fixed CVE-2017-14494: DHCP - info leak Fixed CVE-2017-14495: DNS - OOM DoS Fixed CVE-2017-14496: DNS - DoS Integer underflow
-Fixed CVE-2017-13704 : Bug collision Fixed predictable session tokens(CVE-2017-15654), logged user IP validation(CVE-2017-15653), Logged-in information disclosure (special
thanks for Blazej Adamczyk contribution) Fixed web GUI authorization vulnerabilities. Fixed AiCloud XSS vulnerabilities Fixed XSS vulnerability. Thanks for Joaquim's contribution. Fixed LAN RCE vulnerability. An independent security researcher has reported this vulnerability to Beyond Security’s SecuriTeam Secure
Disclosure program Fixed remote code execution vulnerability. Thanks to David Maciejak of Fortinet's FortiGuard Labs Fixed Smart Sync Stored XSS vulnerabilities. Thanks fo Guy Arazi's contribution.
-Fixed CVE-2018-5721 Stack-based buffer overflow. (Source) Although Asus doesn't publish bug details, attackers may have independently discovered some of the vulnerabilities patched in that release. Diffing firmware releases to reverse-engineer what parts were patched is usually quite straightforward, even without access to the original source. (This is routinely done with Microsoft security updates .) Such "1-day exploits" are comparatively cheap to develop. Also, this looks like it's part of a more wide-spread recent attack. This tweet from three days ago seems to describe an incident very similar to what you experienced: My ASUS home router was apparently hacked and a rogue DNS server in Dubai added to the configuration. It redirected sites like http://apple.com to a phishing site that (I think) I caught before my children gave away their credentials. Check your routers kids. ( @harlanbarnes on Twitter , 2018-03-09) [...] my browser warned me (Google Chrome) saying this website was not secure. [...] I began to suspect maybe my Mac machine had been infected [...] The fact that you got certificate warnings makes it less likely that an attacker managed to get into your machine. Otherwise, they could have messed with your local certificate store or browser internals and wouldn't need to conduct a blatant DNS change. No one has access to my router administration page aside from me on the network Even if your router interface isn't visible from outside your network, it can be vulnerable to a range of attacks. As an example, take this Netgear router arbitrary code execution exploit from a while ago which had Netgear routers execute arbitrary commands sent as part of the URL. The idea here is to trick you into visiting a prepared website that makes you conduct the attack yourself by issuing a specially crafted cross-origin request to the router interface. This could happen without you noticing and wouldn't require the interface to be remote accessible. Ultimately, the given information doesn't reveal the exact attack path. But it's plausible that they leveraged vulnerabilities in your outdated firmware release. As an end user you should at least update your firmware as soon as possible, do factory resets if necessary, and keep your router interface password-protected even if it's only accessible from the intranet. | {
"source": [
"https://security.stackexchange.com/questions/181328",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26065/"
]
} |
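A quick way to notice the kind of rogue resolver described above is to compare what the system resolver returns with the answer from a resolver you trust. The sketch below uses the third-party dnspython package; the reference resolver address and the domain list are assumptions. Note that CDNs legitimately return different addresses from different vantage points, so a mismatch is a prompt to investigate rather than proof of compromise.

```python
# Sketch: compare the system resolver's answer for a domain against a known
# public resolver. Requires dnspython (pip install dnspython).
import socket
import dns.resolver

REFERENCE_RESOLVER = "1.1.1.1"        # assumption: any resolver you trust
DOMAINS = ["apple.com", "paypal.com"]

def reference_lookup(domain):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [REFERENCE_RESOLVER]
    return {record.to_text() for record in resolver.resolve(domain, "A")}

def system_lookup(domain):
    return {info[4][0] for info in socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP)}

if __name__ == "__main__":
    for domain in DOMAINS:
        local, reference = system_lookup(domain), reference_lookup(domain)
        if not local & reference:
            print(f"{domain}: system resolver {local} disagrees with reference {reference}")
        else:
            print(f"{domain}: answers overlap, nothing obviously wrong")
```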
181,381 | User registers account on a web app. Passwords are salted and hashed.
But is it safe to check the password against the HIBP Pwned Passwords API , before salting and hashing it? Of course the app uses TLS. So if the password is found on any breach - don't allow to register an account.
If password not found in breach - salt it and store it in a database. Same would apply if changing the password. | Have I Been Pwned? allows anyone to download the full database to perform the checks locally. If that's not an option, using the API is safe, since it uses k-anonymity, which allows you to perform the check without transmitting the full password or hash (a sketch of such a check follows this entry). | {
"source": [
"https://security.stackexchange.com/questions/181381",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172731/"
]
} |
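For reference, this is roughly what the k-anonymity range lookup mentioned above looks like in Python: only the first five characters of the password's SHA-1 hash are sent to the public Pwned Passwords endpoint, and the match is done locally. The sample passwords are placeholders, and the check is meant to run before the password is salted, hashed and stored.

```python
# Sketch: k-anonymity check against the Pwned Passwords range API.
# Only a 5-character SHA-1 prefix leaves the machine doing the check.
import hashlib
import urllib.request

def pwned_count(password):
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as response:
        body = response.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    for pw in ("hunter2", "correct horse battery staple"):
        hits = pwned_count(pw)
        print(f"{pw!r}: seen {hits} times" if hits else f"{pw!r}: not found")
```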
181,580 | The JavaScript Math.random() function is designed to return a single IEEE floating point value n such that 0 ≤ n < 1. It is (or at least should be) widely known that the output is not cryptographically secure. Most modern implementations use the XorShift128+ algorithm which can be easily broken . As it is not at all uncommon for people to mistakenly use it when they need better randomness, why do browsers not replace it with a CSPRNG? I know that Opera does that *, at least. The only reasoning I could think of would be that XorShift128+ is faster than a CSPRNG, but on modern (and even not so modern) computers, it would be trivial to output hundreds of megabytes per second using ChaCha8 or AES-CTR. These are often fast enough that a well-optimized implementation may be bottlenecked only by the system's memory speed. Even an unoptimized implementation of ChaCha20 is extremely fast on all architectures, and ChaCha8 is more than twice as fast. I understand that it could not be re-defined as a CSPRNG as the standard explicitly gives no guarantee of suitability for cryptographic use, but there seems to be no downside to browser vendors doing it voluntarily. It would reduce the impact of bugs in a large number of web applications without violating the standard (it only requires the output be round-to-nearest-even IEEE 754 numbers), decreasing performance, or breaking compatibility with web applications. EDIT: A few people have pointed out that this could potentially cause people to abuse this function even if the standard says you cannot rely on it for cryptographic security. In my mind, there are two opposing factors that determine whether or not using a CSPRNG would be a net security benefit: False sense of security - The number of people who otherwise would use a function designed for this purpose, such as window.crypto , decide instead to use Math.random() because it happens to be cryptographically secure on their intended target platform. Opportunistic security - The number of people who don't know any better and use Math.random() anyway for sensitive applications who would be protected from their own mistake. Obviously, it would be better to educate them instead, but this is not always possible. It seems safe to assume that the number of people who would be protected from their own mistakes would greatly exceed the number of people who are lulled into a false sense of security. * As CodesInChaos points out, this is no longer true now that Opera is based off of Chromium. Several major browsers have had bug reports suggesting to replace this function with a cryptographically-secure alternative, but none of the suggested secure changes landed: Chromium thread: https://bugs.chromium.org/p/chromium/issues/detail?id=45580 Firefox thread: https://bugzilla.mozilla.org/show_bug.cgi?id=322529 The arguments for the change essentially match mine. The arguments against it vary from reduced performance on microbenchmarks (with little impact in the real world) to misunderstandings and myths, such as the incorrect idea that a CSPRNG gets weaker over time as more randomness is generated. In the end, Chromium created an entirely new crypto object, and Firefox replaced their RNG with the XorShift128+ algorithm. The Math.random() function remains fully predictable. | I was one of the implementers of JScript and on the ECMA committee in the mid to late 1990s, so I can provide some historical perspective here. 
The JavaScript Math.random() function is designed to return a floating point value between 0 and 1. It is widely known (or at least should be) that the output is not cryptographically secure First off: the design of many RNG APIs is horrible . The fact that the .NET Random class can trivially be misused in multiple ways to produce long sequences of the same number is awful. An API where the natural way to use it is also the wrong way is a "pit of failure" API; we want our APIs to be pits of success, where the natural way and the right way are the same. I think it is fair to say that if we knew then what we know now, the JS random API would be different. Even simple things like changing the name to "pseudorandom" would help, because as you note, in some cases the implementation details matter. At an architectural level, there are good reasons why you want random() to be a factory that returns an object representing a random or pseudo-random sequence, rather than simply returning numbers. And so on. Lessons learned. Second, let's remember what the fundamental design purpose of JS was in the 1990s. Make the monkey dance when you move the mouse . We thought of inline expression scripts as normal, we thought of two-to-ten line script blocks as common, and the notion that someone might write a hundred lines of script on a page was really very unusual. I remember the first time I saw a ten thousand line JS program and my first question to the people who were asking me for help because it was so slow compared to their C++ version was some version of "are you insane?! 10KLOC JS?!" The notion that anyone would need crypto randomness in JS was similarly insane. You need your monkey movements to be crypto strength unpredictable? Unlikely. Also, remember that it was the mid 1990s. If you were not there for it, I can tell you it was a very different world than today as far as crypto was concerned... See export of cryptography . I would not have even considered putting crypto strength randomness into anything that shipped with the browser without getting a huge amount of legal advice from the MSLegal team. I didn't want to touch crypto with a ten foot pole in a world where shipping code was considered exporting munitions to enemies of the state . This sounds crazy from today's perspective, but that was the world that was . why do browsers not replace it with a CSPRNG? Browser authors do not have to provide a reason to NOT do a change. Changes cost money, and they take away effort from better changes; every change has a huge opportunity cost . Rather, you have to provide an argument not just why making the change is a good idea, but why it is the best possible use of their time. This is a small-bang-for-the-buck change. I understand that it could not be re-defined as a CSPRNG as the standard explicitly gives no guarantee for suitability for cryptographic use, but there seems to be no downside to doing it anyway The downside is that developers are still in a situation where they cannot reliably know whether their randomness is crypto strength or not, and can even more easily fall into the trap of relying on a property that is not guaranteed by the standard. The proposed change doesn't actually fix the problem, which is a design problem. | {
"source": [
"https://security.stackexchange.com/questions/181580",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106285/"
]
} |
181,619 | Some services (for instance ProtonMail) claim to store hashes of phone numbers, instead of phone numbers themselves (while they don't say how they hash it). Now, given that the number of potentially valid phone numbers is very small (about 26 bits worth of information in an 8-digit phone number), it should be quite easy to recover a phone number from its hash. So what's the point? | ProtonMail may request your phone number to perform a human check: ProtonMail detects that you're attempting to create several accounts. It asks you for a phone number, to send you a token via SMS. You must send that token to ProtonMail to prove you're the phone number owner. Then, ProtonMail doesn't need your phone number anymore, but it still needs to use it to prevent spammers from creating multiple accounts. Hashing the phone number allows it to not store the original number and to prevent someone from using the same number twice. From their FAQ : However, using the same phone number will result in obtaining the same cryptographic hash, so by comparing hashes, we can detect re-use of phone number or email addresses for human verification. Thus ProtonMail doesn't seem to use unique salts. We also know thanks to a tweet from Bart Butler (ProtonMail CTO) that: ProtonMail regularly flushes stored hashes. Stored hashes aren't linked to any account. Bart Butler also tweeted : We use a slow password hash (with a salt) and flush the list and rotate the salt at irregular intervals. In conclusion: brute-forcing them is possible, but it's neither practical nor useful (a sketch of this salted, slow-hash approach follows this entry). | {
"source": [
"https://security.stackexchange.com/questions/181619",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/143975/"
]
} |
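The approach described in the tweets (slow, salted hashes that are flushed and rotated) can be sketched as follows. The iteration count, the in-memory store and the normalization step are assumptions for illustration; this is not ProtonMail's actual implementation, which is not public.

```python
# Sketch: store only a slow, salted hash of the phone number for duplicate
# detection, and rotate the salt periodically.
import hashlib
import os

SITE_SALT = os.urandom(16)   # rotated at irregular intervals, per the answer
ITERATIONS = 200_000         # slow on purpose: the phone-number keyspace is tiny

seen_hashes = set()          # not linked to any account

def phone_fingerprint(phone_number):
    normalized = "".join(ch for ch in phone_number if ch.isdigit())
    return hashlib.pbkdf2_hmac("sha256", normalized.encode(), SITE_SALT, ITERATIONS)

def already_used(phone_number):
    fingerprint = phone_fingerprint(phone_number)
    if fingerprint in seen_hashes:
        return True
    seen_hashes.add(fingerprint)
    return False

if __name__ == "__main__":
    print(already_used("+41 79 123 45 67"))   # False: first time seen
    print(already_used("+41791234567"))       # True: same digits, same fingerprint
```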
181,793 | If my website has HSTS and forced HTTPS (i.e. user won't be able to access the plain HTTP version of the website), is there any point in setting secure: true for the cookies? | Yes, you should still mark your cookies as secure, for three reasons: (1) You don't want them to be exposed just because of a server configuration mishap. What if you move your application to a server with a different configuration? (2) HSTS is trust on first use. If your HSTS has expired but your cookies have not, the browser may send them unencrypted. Whether or not there is something responding to plain HTTP is irrelevant here. (3) As Tgr writes, not all browsers support HSTS. I admit that the benefits aren't huge here, but the cost is basically zero. So set the secure flag! (A minimal example of setting the flag follows this entry.)
"source": [
"https://security.stackexchange.com/questions/181793",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/173247/"
]
} |
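A minimal illustration of the advice above, using only the Python standard library to build the Set-Cookie header; the cookie name and value are placeholders, and the SameSite attribute is an extra assumption on top of the flags discussed.

```python
# Sketch: emit a Set-Cookie header with the Secure and HttpOnly flags.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"
cookie["session"]["secure"] = True      # never sent over plain HTTP, regardless of HSTS state
cookie["session"]["httponly"] = True    # not readable from JavaScript
cookie["session"]["samesite"] = "Lax"   # assumption: a reasonable CSRF default (Python 3.8+)
cookie["session"]["path"] = "/"

print(cookie.output())
# prints something like:
# Set-Cookie: session=opaque-session-id; HttpOnly; Path=/; SameSite=Lax; Secure
```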
181,949 | I was reading this article about hardening security on Linux servers , and in point #23, the article says: #23: Turn Off IPv6 Internet Protocol version 6 (IPv6) provides a new Internet layer of
the TCP/IP protocol suite that replaces Internet Protocol version 4
(IPv4) and provides many benefits. If you are NOT using IPv6 disable
it: The article then gives links to different websites which tell how to disable IPv6. Neither the article nor any of the links, however, seem to tell why IPv6 should be disabled if not used. Since the article was on hardening security on Linux servers, how would disabling IPv6 make a server any more secure? | From a firewall perspective, it is important to realize that both IPv4 and IPv6 (if enabled) are configured on a system, while the firewall rules often cover only one of them. In my experience, this has let me bypass (internal) firewalls. In one scenario, on a Linux machine, iptables was configured but ip6tables was not, which exposed (vulnerable) services that were not available over IPv4. Since most services bind to 0.0.0.0 and [::]:[port] (every interface), these services are also available over IPv6. So, yes, it is important to consider disabling IPv6 if you do not use it. If you do use it, you or administrators in general should be made aware that (at least on Linux servers) extra firewall configuration is required. And before you object that administrators should already be aware of this: you are totally correct; however, in my experience a lot of IPv6 knowledge is lacking among system administrators (a reachability check illustrating the problem follows this entry). | {
"source": [
"https://security.stackexchange.com/questions/181949",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/102283/"
]
} |
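To check whether a host is affected by the scenario above, you can simply test whether a service answers over IPv6 as well as IPv4. The sketch below uses only the Python standard library; the host name and port are placeholders. If the IPv6 list comes back non-empty while your iptables rules only cover IPv4, the ip6tables configuration needs the same attention.

```python
# Sketch: check whether a service that you firewall on IPv4 is also reachable
# over IPv6 (the bypass described above).
import socket

HOST = "server.example.org"   # placeholder
PORT = 22                     # placeholder

def reachable(family):
    results = []
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return results
    for family_, type_, proto, _, sockaddr in infos:
        with socket.socket(family_, type_, proto) as sock:
            sock.settimeout(3)
            try:
                sock.connect(sockaddr)
                results.append(sockaddr[0])
            except OSError:
                pass
    return results

if __name__ == "__main__":
    print("IPv4 endpoints that accepted a connection:", reachable(socket.AF_INET))
    print("IPv6 endpoints that accepted a connection:", reachable(socket.AF_INET6))
```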
182,161 | I have just received a message asking to consent to PayPal policy updates from the domain: https://epl.paypal-communication.com The actual link is full of trackers. Given the domain name, it sounds like a routinely email spoof. Also, visiting the domain, you are welcomed by a "503 Service Unavailable" message. After some investigations, including whois , the weird domain seems really linked to PayPal.com. That being the case: Why should a company (and in particular a company dealing with payments) send messages from another domain? Why add countless trackers if you can already recognise users from logon? Should the practice of sending messages from somecompany.com using anothercompany.com become established, it will be virtually impossible to us users telling if a website is legit or a scam. | Should the practice of sending messages from somecompany.com using anothercompany.com become established, it will be virtually impossible to us users telling if a website is legit or a scam. Unfortunately, this practice is already established - and yes, it makes it very hard to tell legitimate communications from spam. Companies use partners and third parties to handle their email all the time. Why should a company (and in particular a company dealing with payments) send messages from another domain? Because companies outsource non-core functions like marketing to third parties for economic reasons. Why add countless trackers if you can already recognise users from logon? Trackers can provide a lot more psychographic information than logon can, and that information is valuable to marketing departments. | {
"source": [
"https://security.stackexchange.com/questions/182161",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55476/"
]
} |
182,284 | I have read a lot of articles that talk about how using an AV is less safe than not having one for more intermediate PC users who are careful with what they click and download. For example, here are a couple of articles: https://antivirus.comodo.com/blog/computer-safety/a-new-attack-that-turns-antivirus-software-into-malware/ http://www.dailymail.co.uk/sciencetech/article-3574724/Is-antivirus-software-putting-risk-Programs-offer-lower-levels-security-browsers.html I have also read that when an AV automatically scans an executable file you just downloaded, the hacker could potentially abuse a vulnerability in the AV scan and get it to execute without you ever running it. On top of that, you still have to put some trust in the AV which has kernel access to your PC and intercepts network traffic for you and possibly even gathers data about things you don't know. So I was wondering what would be some better ways I could keep my PC safe without relying on some heavy, automatic AV software? I can think of a few things so far (but would doing these things actually be safer than having an AV?): Using extensions like AdBlock and NoScript/ScriptSafe so malicious code can't secretly execute. Monitoring the network traffic on your PC now and then to check for anything suspicious. Using a tool like AutoRuns every week to check for suspicious start-up entries. | Antivirus is more dangerous in that it parses complex attacker-controlled data in a highly privileged context. This is a recipe for privilege escalation exploits. As a result, sophisticated attackers can often abuse antivirus programs to gain SYSTEM privileges. This is not a rare occurrence or one that is only a problem for enemies of a powerful government. AV software is riddled with privilege escalation vulnerabilities. A quick look at the severity of the vulnerabilities in the CVE list for any popular piece of software will give at least a little insight into the scope of the problem. Consider your threat model It is necessary to understand your own threat model. One person's situation might dictate that AV is harmful, while another person's situation might dictate that it is beneficial. Being able to understand the risks that apply to you, and the adversaries which you have is vital to being able to make any security-related decisions, especially ones such as this which are not necessarily black and white. AV may be beneficial in situations where: The computer is used by someone who can be easily fooled into installing malware. The computer will be handling user-submitted data which may be redistributed to others. You download a lot of untrustworthy programs, such as warez . AV may be harmful in situations where: Your adversary is at least moderately sophisticated or is targeting you in particular. You are the sole user of your computer and do not download unsigned programs. You keep your software up to date and are not expecting people to burn 0days on you. Your threat model is what determines whether or not you should use AV software. My personal suggestion, assuming you are not going to download random dolphin screensavers and you keep your software up to date, is that you may want to use a simple, default program such as Windows Defender, and only use it when you explicitly need to. Each time you ask it to scan the hard drive, you are putting all your faith into it to not be compromised by any specially-crafted malware it may stumble upon. 
If instead you use it when targeting specific programs that you download before you execute them, you reduce the risks considerably. Enforce code signing It would be preferable if you did not need to download untrusted software and instead use trusted, signed executables from official sources only. This is especially important for files that wish to be run as Administrator, as those have the most potential for doing damage to your installation. Make sure they are signed ! Never assume that your own will power is sufficient to prevent you from making mistakes when running a new program. This is what trojan developers rely on! In order to further reduce the chance of accidentally running an unsigned or untrustworthy executable, you can configure your security policy such that unsigned executables cannot be run. This will ensure that any malware will need to have a valid signature, signed by a trusted CA. While it is obviously possible to get a malicious file signed, it is far more difficult, and will tend to be more of an issue if you are a specific target and not just an opportunistic victim. If you further restrict the policy such that only executables signed by Microsoft themselves (and not just a CA which Microsoft trusts), you can effectively eliminate any possibility of infection from a trojan. The only way to get a program to execute in that case would be to exploit a 0day in the operating system, or compromise Microsoft's internal signing keys (those are both in the realm of capabilities for advanced state-sponsored actors). This can help prevent the rare (but not non-existent) cases where malicious code slips into the repositories of a trusted developer. System hardening On systems before Windows 10, you can use the Enhanced Mitigation Experience Toolkit (EMET) to enhance the system's security without increasing attack surface area significantly, though note that EMET will not be receiving updates for much longer. EMET works by injecting processes with code that hardens them against exploitation, increasing the chances that an exploit attempt will cause the targeted application to crash rather than be successfully exploited. If you are on Windows 10, most of these security features will be natively present. This makes it the most secure Windows release yet, despite the potentially problematic privacy issues it may have. You can also disable unnecessary services (especially networking services, such as those exploited by EternalBlue ), use AppLocker , and read the security guides provided by Microsoft to allow yourself to further improve the security of your system. The topic of system hardening is vast. | {
"source": [
"https://security.stackexchange.com/questions/182284",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/173919/"
]
} |
182,410 | I had my iPhone battery replaced in a phone repair shop. After collecting it, I noticed that there is a strange new app installed, some "Chinese" web browser. It has no alphanumeric name and nothing in the interface was in English.
I spoke with the technician who replaced the battery and he said that they didn't do anything with the phone, didn't even connect it to a PC. Should I be concerned? There's plenty of sensitive data on it.
It has never been jailbroken, I never visited any suspicious sites on it and I didn't connect it to any PC other than my trusty laptop. | Unless you can come up with some other explanation of how this happened, it sounds like your phone has been infected by some malware. It's impossible for us to say if the infection was the result of something the repair shop did or something you did. Either way, you should be very concerned. I'd recommend the following course of action: Make a backup of any data you have on the phone. (The backup could be infected, see this question, but if you have no earlier backup you either have to take the risk or lose your data.) Do a complete factory reset, wiping the phone clean. Change any passwords that have been stored on or entered on the device. While you are only seeing one app, it might just be a symptom of a deeper infection. Just removing the strange Chinese app may not be enough. | {
"source": [
"https://security.stackexchange.com/questions/182410",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/174095/"
]
} |
182,567 | I was playing an online game and I came in contact with this user. She was listed as from the same country as me (Egypt). So when she asked me for my cell phone number, I gave it to her. I figured it was to add me on WhatsApp so we can chat or something. However, I instead received an SMS message (titled 'Telegram') with a number, and she asked me to tell her that number. I asked what it's for, she said that she's from Canada (not the country she listed on her avatar) and that she needs that code I received to connect with me, since I'm outside Canada. At the same time, I received a phone call (no number appeared on my screen) with an automated message. I hung up before hearing that message. I refused to give this girl the code I received via SMS and parted ways. My questions: Was this a scam? If yes, did I prevent the scam by denying this user the code? If not, what's at risk? Is my bank account at risk, for instance? (I
only gave the user my cell phone number. No name, no address,
nothing else whatsoever) If I am at risk, what do I need to do to prevent any theft of my
information, passwords, money, etc.? | Let's go through the process of what actually happened: Telegram requires a cell phone number to be linked in order to create an account. To verify that the number exists, they send out a verification code for you to enter in the app while creating the account. This person obviously didn't want their own cell phone number to be linked to the Telegram account, which is an indication they might use it in malicious ways (terrorism, hacking, …), so: They tried to set you up to get that code. So to answer your questions: Was this a scam? I'd call it gray zone. But yes, someone did try to scam you for a Telegram verification code. If yes, did I prevent the scam by denying this user the code? Yes, you did well, using common sense :-) If not, what's at risk? Is my bank account at risk, for instance? (I only gave the user my cell phone number. No name, no address, nothing else whatsoever) That depends, how visible are you on the internet? If you have domain names registered with that number, they can find that. Facebook linked? They can likely find that depending on privacy settings. In brief, if you've linked that number to a lot of accounts, they might find those accounts and the information that's publicly in them. If I am at risk, what do I need to do to prevent any theft of my information, passwords, money, etc.? "At risk" is a heavy word in this situation. They just want a Telegram account; I find it likely they just moved on to one of the other 50 people they PMed while talking to you. Just keep an eye open for "strange things" (like hacking attempts to other accounts etc.), research yourself with that phone number and see if what you can find needs changing. | {
"source": [
"https://security.stackexchange.com/questions/182567",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/174295/"
]
} |
182,843 | Let's say I create 100 separate passwords, consisting of around eight random characters followed by two constant ones that are the same for all passwords: Generated password = 8 random characters + `.p` If I do this for all 100 passwords, does adding the same .p for every password make them more or less secure? How much of an impact would it have if two of those passwords were compromised? | It depends on what kind of attack you are trying to protect against: If your password is one among millions in a data breach where the attacker isn't targeting you specifically, then your password is effectively 10 random characters long instead of 8. It will be harder to crack. If an attacker is targeting you, and knows about your pattern, then it gives you no protection at all. If an attacker is targeting you, and doesn't know about the pattern, it could help until the attacker finds out about it. Breaking one account would give a little help, breaking two would make the pattern obvious and hence useless. So your system could be helpful sometimes, but not always. Or in other words: your "effective" password length will be somewhere between 8 and 10 depending on your threat model. But unless you have some specific reason not to, I would just forget all about clever systems and just install a password manager instead. (A short keyspace calculation follows this record.) | {
"source": [
"https://security.stackexchange.com/questions/182843",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/169476/"
]
} |
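To put rough numbers on the 8-versus-10-character point made in the answer above, here is an illustrative keyspace calculation; the 94-character alphabet (printable ASCII) is an assumption made only for the sake of the example.

import math

charset = 94                                  # printable ASCII, assumed for illustration
guesses_known_pattern = charset ** 8          # attacker knows about the fixed ".p" suffix
guesses_unknown_pattern = charset ** 10       # attacker treats all 10 characters as random

print(f"8 unknown characters : {guesses_known_pattern:.3e} (~{math.log2(guesses_known_pattern):.1f} bits)")
print(f"10 unknown characters: {guesses_unknown_pattern:.3e} (~{math.log2(guesses_unknown_pattern):.1f} bits)")
# The fixed suffix only contributes the extra ~13 bits while the pattern stays secret.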
182,846 | Recently, it was unveiled that Facebook's application on mobile phones (and Messenger) has access to data whose collection you didn't consent to. Data that is completely unrelated to the existence of Facebook's and Messenger's applications in the first place. Data that include all your contacts in your phone and metadata about SMS messages you sent/received. Now, if an application on the surface does one thing but actually does tracking and spying on you, it is considered spyware at the least and malware at worst. How come there is no mention that Facebook's applications and network are at least spyware? | Very simple: Facebook IS spyware. It is also "Consensual" spyware...yeah, it's spying on you, but you agreed to it (even if you didn't read the agreement before checking the "I Agree" box), and it is probably safe to say you 'enjoyed' the benefits of it, so most people (still) aren't really complaining about it. As the public's understanding of just how invasive Facebook has become grows, people are coming to terms with what they will and will not accept. For the most part what it really comes down to is that Facebook is paying attention to what you do in order to target you with advertising for things you will like and to 'grow' its network. Think of it this way, if you like: for decades advertisers chose where to place beer commercials, dish soap ads, and other products based on who they thought would be watching TV. There are more beer commercials on CBS during a football game than on ABC during the news (though sometimes the news might make you reach for a beer...but I digress). Facebook has taken this to a new level. For my part I am pleased that I don't get bombarded with feminine hygiene ads, but do get ads, and even coupons and invitations to events where the food, music and activities are more likely to be 'to my liking'. More recently, however, Facebook has gotten into more 'censorship'... and IMHO that is bothersome. How much so? Each Facebook user will have to decide for themselves. When you walk into the "mall" that is owned by Zuckerberg, he does get to decide the rules...if you don't like them, you can always leave. But yes, he is watching. | {
"source": [
"https://security.stackexchange.com/questions/182846",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45035/"
]
} |
182,956 | I have a requirement to generate a one time use URL which should have the following features: As the URL query parameters may contain sensitive information, it should be encrypted (on top of https encryption). Once used, the URL cannot be used again. URLs have automatic expiry after a certain amount of time. It shall be possible for an administrator to revoke a valid URL. If the user at a later point in time tries to use this URL, he should see an appropriate error message. To do this, I can think of two high level approaches: Generate a random number as the query parameter of the URL. Store the random number and the corresponding parameters (i.e. the real query parameters, expiration, revocation status, used status) in a database. When the user uses the URL check all the required pre-conditions and mark it as used before providing the real query parameters. Embed the real query parameters and the expiration timestamp as the query parameter of URL. Encrypt the URL with an algorithm such as AES256. However, I would still need to store the URL in a database so as to provide the revocation feature. Based on the above I am leaning towards option 1 as all the logic is in a single place and it looks more secure. Is there any industry best practice to deal with this type of problem? If it matters, this will be a REST-based web service hosted on IIS. | Looks like you have a pretty good idea what you're doing. The one-time link pattern is pretty common for things like email verification. Typically, you'd store the expiration date in a database and/or use a signed string in the URL which includes the expiry in the string-to-sign. These are just precautions to avoid trusting user input. If you want to be really thorough, you can send them the actual random ID or token URL, and then store a hash in the database to avoid someone using the token if you have a data breach. You're pretty vague about the context of the proposed AES-encrypted parameters. I would usually include sensitive information which is not required for the URL-routing in a message body, instead of the URL. That way, it doesn't appear in web server or proxy logs. AES-encrypted data may also push you over the URL length limit pretty quickly, depending how large the plaintext content is. Edit: For completeness, Azure Storage SAS Tokens are a great example of a cryptographic method that requires no database record, and is revokable. Revocation is done by changing the service's API Key... which would revoke all tokens issued by the same key. Given that I don't have a lot of information about the system in use,
I would recommend the simpler database lookup, over any solution that requires cryptographically sophisticated measures. Also, for cryptographic methods to work as a one-time link, some parameter known by the service, and included in the URL, must be changed when the link is used. This parameter doesn't have to be a database lookup, but there does need to be a persisted change. For example, if the link is a one-time upload URL, the presence of a file in the upload directory might invalidate the URL. Lastly, the database-stored value is simpler in that there is only one source to check for validity. With the no-database solution, you'd need to check against the secret (i.e. the encryption key), and also whatever factor invalidates the link after first use. (A minimal sketch of the database-lookup approach follows this record.) | {
"source": [
"https://security.stackexchange.com/questions/182956",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/174815/"
]
} |
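A minimal sketch of the database-lookup approach recommended in the answer above, including the suggestion to store only a hash of the random token. The table layout, column names, URL and expiry window are illustrative assumptions (Python, sqlite3 in memory).

import hashlib, secrets, sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE one_time_links (token_hash TEXT PRIMARY KEY, payload TEXT, expires_at REAL, used INTEGER DEFAULT 0, revoked INTEGER DEFAULT 0)")

def issue_link(payload, ttl_seconds=3600):
    token = secrets.token_urlsafe(32)                        # high-entropy, URL-safe token
    token_hash = hashlib.sha256(token.encode()).hexdigest()  # store only the hash
    db.execute("INSERT INTO one_time_links VALUES (?, ?, ?, 0, 0)",
               (token_hash, payload, time.time() + ttl_seconds))
    db.commit()
    return "https://example.invalid/download?t=" + token     # hypothetical URL

def redeem(token):
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    row = db.execute("SELECT payload, expires_at, used, revoked FROM one_time_links WHERE token_hash = ?",
                     (token_hash,)).fetchone()
    if row is None or row[2] or row[3] or row[1] < time.time():
        return None                                          # unknown, used, revoked or expired
    db.execute("UPDATE one_time_links SET used = 1 WHERE token_hash = ?", (token_hash,))
    db.commit()
    return row[0]

url = issue_link('{"report_id": 42}')
token = url.split("t=")[1]
print(redeem(token))   # first use returns the payload
print(redeem(token))   # second use returns None

In a production implementation the lookup and the used-flag update should happen in one atomic statement (or transaction) so that two concurrent requests cannot both redeem the same link.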
183,083 | In using Argon2 for hashing passwords in my application, I've noticed it generates a string like this (e.g. for password "rabbit"): $argon2i$v=19$m=65536,t=3,p=1$YOtX2//7NoD/owm8RZ8llw==$fPn4sPgkFAuBJo3M3UzcGss3dJysxLJdPdvojRF20ZE= My understanding is that everything prior to p= are parameters, the body of the p= is the salt, and the last part is the hashed password. Is it acceptable to store this in one field in a SQL database (varchar(99) in this case), or should the string be separated into its constituent parts? Should I store the salt separately from the hashed password and keep the parameters in code? | You should store it in a single field. Do not try to divide it into parts. I know this might seem a bit unintuitive for people coming from a database background, where the modus operandi is to normalize data and using serialized strings is considered an ugly hack. But from a security perspective, this makes perfect sense. Why? Because the smallest unit the individual developer needs to operate on is that full string. That is what the app needs to pass to the hash library to check if a provided password is correct or not. From the developer's perspective, the hash library can be a black box, and she need not concern herself with what those pesky parts actually mean. This is a good thing, because most developers are imperfect humans (I know, because I am one myself). If you let them pick that string apart and then try to fit it all together again they will probably mess things up, like give all users the same salt or no salt at all. Only storing the parameters in the code is also a bad idea. As processors get faster, you may want to increase the cost factor in the future. During the migration phase, different passwords will have different cost factors. So you will need information on the password level about what cost factor was used to hash it. (A short sketch of the single-field approach follows this record.) | {
"source": [
"https://security.stackexchange.com/questions/183083",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/127979/"
]
} |
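A small sketch of the single-field approach described above, using the argon2-cffi package (an assumed library choice; any Argon2 binding that produces the encoded "$argon2...$" string works the same way). The database column handling is left out; the point is that the whole encoded string is generated, stored and verified as one opaque value.

# pip install argon2-cffi   (assumed dependency)
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()            # cost parameters end up inside the output string
encoded = ph.hash("rabbit")      # e.g. "$argon2id$v=19$m=...,t=...,p=...$<salt>$<hash>"
print(encoded)                   # store this whole string in a single column

def check_login(stored, attempt):
    try:
        ph.verify(stored, attempt)           # salt and parameters are read from the string
    except VerifyMismatchError:
        return False
    if ph.check_needs_rehash(stored):
        pass  # hash `attempt` again with current parameters and update the stored column
    return True

print(check_login(encoded, "rabbit"))   # True
print(check_login(encoded, "wrong"))    # False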
183,358 | This question is a fork from a previous question here: Is it safe/wise to store a salt in the same field as the hashed password? Assume you run a web portal, and store passwords in SHA1 hashes. How do you upgrade this to BCRYPT hashes instead? Typically, you'd wait for users to log on, and re-hash their passwords (from the plaintext they entered) to BCRYPT. But it could be a while before all users log on to your platform, and there will always be inactive users who will never come back. A separate proposal (which I first heard from Troy Hunt) was to BCRYPT the existing SHA1 passwords in your database. Effectively you'd be BCRYPT(SHA1(plaintext_password)). This way all users on the system get upgraded to BCRYPT at once, regardless of their activity. This way, a breach of your database doesn't expose users who haven't logged in yet and are still on SHA1. The question is: Is BCRYPT(SHA1(plaintext_password)) equivalent in security to BCRYPT(plaintext_password)? If not -- why? And is the gap reasonable enough to consider this option? The question focuses on BCRYPT(SHA1) but could easily apply to any two hash algorithms with the stronger one being applied last. | First of all, thank you for taking the time to determine how to do this correctly and improve security for your users! Migrating password storage while taking legacy hashes into account is relatively common. For your migration scenario, bcrypt(base64(sha1(password))) would be a reasonable balance. It avoids the null problem (important - you definitely don't want to leave out the base64 stage!), sidesteps bcrypt's native 72-character limit, and is 100% compatible with your existing hashes (a small sketch follows this record). In the basic case, you would simply hash all existing SHA1s with bcrypt(base64(sha1)), and then hash all new passwords with the full sequence. (You could also use SHA256 instead, though this would increase your code complexity slightly, to check to see if SHA1 or SHA256 was used (or to just try them both and pass if either succeeds). Long term, SHA256 would be more resistant to collisions, so would be a better choice). For resistance to bruteforce, this would not only be equivalent to bcrypt, but would be theoretically superior (though in practice, 72 characters is so large for password storage that they are effectively the same). Bonus advice: Be sure to use a bcrypt work factor that is high enough to be resistant to offline attacks - the highest value that your users can tolerate, 100ms or higher (probably at least a work factor of 10). For speeds under 1 second, bcrypt may actually be slower for the attacker (better for the defender) than its modern replacements scrypt and Argon2 (YMMV). Store the default value for the work factor as a system-wide configurable variable, so you can periodically increase it as your underlying hardware speeds get faster (or more distributed). UPDATE: Note that in general, wrapping a fast hash inside a slow hash is an anti-pattern (that should be reserved for migration or temporary purposes only). See my answer here for more information. In a nutshell, the inner fast hashes - even if not yet cracked themselves - can be cracked inside a slow hash and then cracked much faster directly. This is true both for targeted attacks against high-value users and against larger sets of target hashes. This attack is called 'hash shucking' and is well-known to advanced password-cracking researchers. This is definitely non-intuitive, so please see my other answer for extended discussion, rather than attempting to discuss it here. | {
"source": [
"https://security.stackexchange.com/questions/183358",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172108/"
]
} |
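A sketch of the wrapping scheme described in the answer above - bcrypt over the base64 of the legacy SHA-1 - using the Python bcrypt package (an assumed binding; the flow is the same with any bcrypt library). Function names and the work factor are illustrative.

# pip install bcrypt   (assumed dependency)
import base64, bcrypt, hashlib

def wrap_legacy_sha1(sha1_hex_from_old_db):
    # One-off migration step: wrap an existing hex SHA-1 digest in bcrypt.
    inner = base64.b64encode(bytes.fromhex(sha1_hex_from_old_db))
    return bcrypt.hashpw(inner, bcrypt.gensalt(rounds=12))

def hash_new_password(password):
    # New passwords go through the same pipeline so verification stays uniform.
    inner = base64.b64encode(hashlib.sha1(password.encode()).digest())
    return bcrypt.hashpw(inner, bcrypt.gensalt(rounds=12))

def verify(password, stored_bcrypt):
    inner = base64.b64encode(hashlib.sha1(password.encode()).digest())
    return bcrypt.checkpw(inner, stored_bcrypt)

legacy_sha1 = hashlib.sha1(b"correct horse").hexdigest()   # what the old table held
migrated = wrap_legacy_sha1(legacy_sha1)
print(verify("correct horse", migrated))   # True
print(verify("wrong guess", migrated))     # False

Per the UPDATE in the answer, this wrapping is a migration measure, not a long-term design; a brand-new system would hash the password directly with bcrypt (or a newer memory-hard function) and skip the SHA-1 layer entirely.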
183,473 | I was under the impression that Adobe Flash was dead, and that browsers were no longer natively supporting Flash? Why therefore, is there a large amount of hype online about a new remote code execution vulnerability in flash? | The short answer is that it takes a loooooong time for software to die. Even in 2018 we still have COBOL running multi-billion dollar companies, despite COBOL being a "dead" language for decades. The longer answer is there's still a significant amount of websites that require Flash, and people re-enable Flash for practical reasons. Oftentimes these are "mission critical" internal corporate websites or schools that haven't put a priority on replacing legacy applications based on Flash. This might mean using older browsers where Flash isn't disabled, or just users being trained to re-enabled it every time. Across the board, the numbers as of April 2018 are around 5% of websites according to https://w3techs.com/technologies/details/cp-flash/all/all So I wouldn't say Flash is "dead", but it is slowly dying. | {
"source": [
"https://security.stackexchange.com/questions/183473",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32233/"
]
} |
183,636 | When using ssh-keygen : What is the passphrase for? Why is it optional? What are the security implications of specifying (or not specifying) one? Below is an excerpt taken from a shell session
(some details may have been altered): user@localhost:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/user/.ssh/id_rsa): Enter passphrase (empty for no passphrase):
Enter same passphrase again: Your identification has been saved in /user/.ssh/id_rsa.
Your public key has been saved in /user/.ssh/id_rsa.pub.
The key fingerprint is:
60:8b:50:1e:0f:bc:5a:2a:13:1e:83:2b:d9:95:38:9e user@localhost
The key's randomart image is:
+---[RSA 2048]----+
| .+ |
| o.+ |
|. ...o+ |
|ooo.=o o |
|.*oB. . S |
|*.E |
|.o |
| |
| |
+-----------------+ | $ man ssh-keygen
[...]
It is possible to specify a passphrase when generating the
key; that passphrase will be used to encrypt the private
part of this file using 128-bit AES. So this passphrase just encrypts the key locally. An attacker with access to your system will not be able to read the private key, because it's encrypted. (They could install a keylogger, though.) If your laptop is stolen for example, your SSH key might still be secure if you have a strong passphrase. Or even with a fairly weak passphrase (so long as it is not trivial), it will buy you some time to revoke the key and roll over to a new one, before the attackers can crack it. It's optional because you can choose to accept the risk of having it not encrypted in storage. Or perhaps you have disk encryption enabled, which mitigates some of the same attacks (but not all, for example: malware can still steal the key, even with disk encryption; on the other hand, a stolen laptop is still secure unless stolen while running with the key in memory). The server can require the use of both a public key and a password to log in. The security of this is different from using a passphrase-encrypted private key. If you use an encrypted key, then: you cannot change the password on the server side, you'll have to generate a new key; someone might crack the key's password undetected, because they can do it offline (if the server requires a password, they have to ask the server "is aaaa correct? Is aaab correct?" etc.); someone can crack the key much, much faster because it's an offline attack without network limitations; and the server cannot use something like fail2ban to reject too many login attempts, because the cracking happens offline. | {
"source": [
"https://security.stackexchange.com/questions/183636",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83289/"
]
} |
183,658 | I am partially responsible for some resources protected by a 4-dial combination lock like this one : There are two things that people will usually do after they've locked it: reset all the digits to 0, so that the combination reads 0000, or mash around on the dials a bit so that the combination reads something else. I have a strong feeling that there is no functional difference between the two, but I am encouraged to set a best practice. So, assuming that the lock has a random combination and is practically unbreakable without entering the correct combination, which approach is more secure? | I would recommend setting it to 0000 or some other specified combination (doesn't really matter what). "Mashing around the dials" is a little vague, but I would guess based on my own behavior that people would tend to move most or all of the dials at once, which would create a strong correlation between the current combination and the lock combination. For instance, if the lock combination is 1234, someone might change it to 5678 (probably not exactly, but close enough that an attacker could prioritize the combinations they try). Humans also have a tendency to think some things seem more secure when they actually weaken security. Someone may try to set it to a combination that seems "further" from the lock combination, such as changing 1234 to 6578 instead of 2142 because 2142 is too "close" to the lock combination. This could allow an attacker to prioritize the order they attempt combinations. Specifying a constant value to set it to avoids such issues. | {
"source": [
"https://security.stackexchange.com/questions/183658",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175639/"
]
} |
183,665 | I'm preparing for the CCSP exam and another test question is throwing me off. The question reads: In all cloud models, security controls are driven by which of the
following: A. Virtualization engine B. Hypervisor C. SLAs D.
Business Requirements I chose C. SLAs because: Virtualization and hypervisor answers are too technical and not in scope for the context of this question. The cloud provider may not necessarily have the same business objectives as the customer so a business requirement for the customer may not align with the goals or security objectives of the cloud provider. The SLA is a mutual contractually-binding document describing the extent of service reliability and the scope of overall liability for both parties in the cloud. Example: if the cloud provider decides to sell all customer information to a customer's competitor, that would be in breach (hopefully) of the security and disclosure terms outlined in the SLA... right? Obviously though, I'm wrong though according to the test prep material. Could someone please elaborate? The test prep material considers "D. Business Requirements" the correct answer. | I would recommend setting it to 0000 or some other specified combination (doesn't really matter what). "Mashing around the dials" is a little vague, but I would guess based on my own behavior that people would tend to move most or all of the dials at once, which would create a strong correlation between the current combination and the lock combination. For instance, if the lock combination is 1234, someone might change it to 5678 (probably not exactly, but close enough that an attacker could prioritize the combinations they try). Humans also have a tendency to think some things seem more secure when they actually weaken security. Someone may try to set it to a combination that seems "further" from the lock combination, such as changing 1234 to 6578 instead of 2142 because 2142 is too "close" to the lock combination. This could allow an attacker to prioritize the order they attempt combinations. Specifying a constant value to set it to avoids such issues. | {
"source": [
"https://security.stackexchange.com/questions/183665",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2075/"
]
} |
183,885 | After reading Xorg becomes the default display server again and considering the security risk of xorg , I am wondering why the developers left Wayland. The fact that a few programs do not work on Wayland does not justify such a security risk. Any low permission program will have your password and they can enjoy it. Does any body know why Wayland or any other alternative could not continue with Ubuntu and why this risk is tolerated? | They are doing this because the next release is an LTS release, which means stability is the primary concern. Xorg has a good track record of stability, whereas Wayland is still (relatively) new. This decision is not permanent and does not mean Ubuntu has given up on using Wayland, just that it has delayed it. You can also opt to use Wayland instead of Xorg if you would like. From Ubuntu Insights , the three primary reasons for using Xorg by default are: Screen sharing in software like WebRTC services, Google Hangouts, Skype, etc works well under Xorg. Remote Desktop control for example RDP & VNC works well under Xorg. Recoverability from Shell crashes is less dramatic under Xorg. You will still be able to use Wayland and it is still pre-installed: The Wayland session will still be available, pre-installed, for people to use, but for our ‘out of the box’ users the Ubuntu experience needs to be stable and provide the features they have come to expect and use in daily life and Xorg is the best choice here, at least for 18.04 LTS, but for 18.10 we will re-evaluate Wayland as the default. | {
"source": [
"https://security.stackexchange.com/questions/183885",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/152311/"
]
} |
183,916 | The workplace has a physical access key stored in a fire department lockbox (sometimes called a Knox Box), how it's possible to mitigate the risk that the Knox Box gets picked, or that an unauthorized key may exist? What could the local fire department ask for to remove that key? | To be clear: a Knox Box is a lock box that holds keys for emergency personnel. If the fire department needs to get inside your building while it is locked, the fire crew will have a key to unlock your Knox Box and retrieve your building's key. There are a couple of ways to mitigate this risk. The easiest IMO is security cameras that watch your doors. If someone unlocks the Knox Box and uses your key, the camera will pick them up and you can respond appropriately. What I often see either to automate this or in conjunction with this is, the Knox Box is hooked up to an alarm system. When it's unlocked, the alarm goes off alerting your security company that someone has obtained access to the key. If it's a true emergency and responders are on scene, this will not have any impact. If it's not an emergency and it's a burglar, the police will now be notified to respond. Most Knox Boxes I've seen have a hookup to wire them into a security system. Here is a link to a fire department recommending this approach . | {
"source": [
"https://security.stackexchange.com/questions/183916",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2270/"
]
} |
183,981 | Same-origin policy (SOP) makes browsers block scripting from one origin to mess with another, unless explicitly being told not to do so. But cross-site POSTs are still allowed, creating the vector for CSRF attacks. The defense is anti-forgery tokens. But why don't browser developers just follow the same SOP's philosophy when dealing with POSTs? | In theory your suggestion is perfectly reasonable. If browsers blocked all cross origin POST requests by default, and it required a CORS policy to unlock them, a lot of all the CSRF vulnerabilities out there would magically disappear. As a developer, you would only need to make sure to not change server state on GET requests. No tokens would be needed. That would be nice. But that is not how the internet was built back in the day, and there is no way to change it now. There are many legitimate uses of cross origin POST requests. If browsers suddenly changed the rules mid game and forbade this, sites relying on the old rules would stop working. Breaking existing sites like that is something that we try to avoid to the largest extent possible. Unfortunately we have to live with our past. So is there any way we could tweak the system to work better without breaking anything? One way would be to introduce a new HTTP verb, let's call it PEST, that works just like POST only that all PEST requests are preflighted and subject to CORS policies. That is just a silly suggestion I made up, but it shows how we can develop the standards without breaking them. | {
"source": [
"https://security.stackexchange.com/questions/183981",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/176037/"
]
} |
184,099 | As a guy working in security/pentest, I regularly take screenshots of exposed passwords/sensitive information. Whenever I report these, I mask parts or complete info as in the sample given below I often wonder, is it possible for someone to 'reverse engineer' these pics and recover the original information? If so, what should be the correct way of masking such kind of info? I am using shutter for taking screenshots and using accompanied edit tool to add the black stroke. EDIT: As pointed out by some of you, my question is different from this since: I am not asking about MS paint/black strokes. The image is just an
example to better explain the question I have clearly asked for the correct/most secure way of producing
photographic evidence. | Yes, it can be recovered. As long as shutter does not use layers (it almost certainly does not) and as long as the black is really all black (it must not be transparent), it is enough. The picture that you provided uses some amount of transparency, see here: All I had to do was use the Fill tool in MS Paint. If I used some algorithm that would take the jpg compression into account, I could probably get better results. Solution: Use an editor that does not make the block transparent. Make sure layers are not used. Make sure change history is not stored (to allow undo) in the file. I believe MS Paint + bitmap format satisfies all the requirements. Most editors combined with bitmap (BMP) format without compression should satisfy these requirements, but I cannot confirm this. Remove the data. You can do so in many editors by selecting it and pressing Delete or Ctrl + X. Then apply redaction graphics, whether a black box or anything else. DO NOT use JPEG (jpg) or other lossy formats anywhere from capture until redaction. JPEG may leave artifacts that may convey information about the deleted pixels. This may also apply to other lossy formats; use lossless formats if possible. Using any format after the image is redacted is fine. As lossless formats may also retain some information if they are not completely re-encoded after the edit, it is recommended to either only use pure bitmap format with no compression before redacting, or to change the format after redacting. Double check: You can double check that no compression is used in BMP format by checking the file size. The size should be larger than color_depth / 8 * width * height (resolution in pixels, color depth usually 24). Note that this check will not reveal transparency and artifacts caused by lossy compression, so make extra sure you did not use a lossy format at any point. It may also be useful to post a specific question about your proposed setup here, so you can see additional opinions and recommendations. It is hard to give a definitive answer that would work in general for all platforms, formats and editors, as they all have their specific caveats. (A short Pillow-based sketch of this flatten-fill-save workflow follows this record.) | {
"source": [
"https://security.stackexchange.com/questions/184099",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71034/"
]
} |
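A hedged sketch of the "drop transparency, destroy the pixels, save losslessly" workflow described above, using the Pillow imaging library (an assumed dependency). File names and rectangle coordinates are placeholders.

# pip install Pillow   (assumed dependency)
import os
from PIL import Image, ImageDraw

img = Image.open("screenshot.png").convert("RGB")   # flatten: no alpha channel, no layers
draw = ImageDraw.Draw(img)
# Placeholder coordinates of the secret; the fill is fully opaque black, so the
# original pixel values underneath are overwritten rather than dimmed.
draw.rectangle([100, 200, 420, 230], fill=(0, 0, 0))
img.save("redacted.png")                            # lossless output, no JPEG artifacts

# Rough size sanity check from the answer, for an uncompressed 24-bit BMP:
img.save("redacted.bmp")
expected = img.width * img.height * 24 // 8
print(os.path.getsize("redacted.bmp") >= expected)  # True if no compression was applied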
184,305 | I guess the gist of my question is: Are there cases in which CBC is better than GCM? The reason I'm asking is that from reading this post by Matthew Green, and this question on cryptography stack exchange, and this explanation of an attack on XML (since I'm encrypting json in my work, although it's not streamed anywhere, but apparently a chosen ciphertext attack is possible), then I should never, ever use CBC, and just use GCM. In other words: There's no reason to use CBC, as long as GCM exists (which it does on OpenSSL, the library I use for my encryption work). Because: GCM = CBC + Authentication. Could someone please tell me whether my conclusions are correct? IMPORTANT UPDATE: Since this question is getting popular so fast, I'd like to point out from my research that GCM IS NOT A SILVER BULLET. There's a huge problem with GCM, which is that if you use the same IV twice it can compromise your key (due to the use of GMAC, so it's not fool-proof). In case you're paranoid (like myself), CBC with HMAC (encrypt then MAC) is probably the best if one wants to be on the safe side. (Also please correct me if I'm wrong on this update). | CBC and GCM are quite different. Both are secure when used correctly, but CBC isn't as parallelizable and lacks built-in authentication. Due to this, CBC is only really practical for encrypting local files that don't need random access. As for any advantages it might have, CBC doesn't fail as catastrophically if the IV is reused, and it can be faster if implemented on basic hardware. As for GCM, it's basically GCM = CTR + Authentication (not CBC). It's fast and secure if used correctly, and very versatile, hence its popularity. (A short example of GCM with a fresh random nonce per message follows this record.) | {
"source": [
"https://security.stackexchange.com/questions/184305",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/77240/"
]
} |
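A short sketch of GCM used the way both the answer and the question's update insist on - a fresh random nonce for every message under a given key - using the cryptography package (an assumed dependency).

# pip install cryptography   (assumed dependency)
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)

def encrypt(plaintext, associated_data=b""):
    nonce = os.urandom(12)     # 96-bit nonce, newly random for EVERY message;
                               # reusing a nonce under the same key breaks GCM's guarantees
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, associated_data)
    return nonce + ciphertext  # the ciphertext already carries the 16-byte auth tag

def decrypt(blob, associated_data=b""):
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)  # raises on tampering

blob = encrypt(b'{"amount": 100}')
print(decrypt(blob))           # b'{"amount": 100}'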
184,318 | In the Computer and network security incident taxonomy what are the differences between "Incident", "Attack" and "event"? Where does "threat" fit with them? | Assuming that you have looked up the official terms and wanted further help: An event is something that has triggered notice. An event need not be an indication of wrongdoing. Someone successfully logging in is an event . An incident is something that indicates a problem, however you define "problem". It carries from an event but has a layer of interpretation on top. Someone successfully logging in when they are on long-term sick leave and should be unable to use a computer is an incident . An attack is an incident with malicious intent. Someone brute-forcing the credentials of someone on long-term sick leave is an attack . A manager asking the person on long-term sick leave for their password so that they can gain access to the person's work product for the benefit of the business is not an attack . It might be an incident , depending on your policies. A threat is anything that has the potential to cause an incident. People, weather, machines, etc. | {
"source": [
"https://security.stackexchange.com/questions/184318",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/135373/"
]
} |
184,344 | Why use usernames, and not just email addresses, to identify users? - What is the main concern or the main case when a security expert (which I'm not) should recommend inserting another layer of usernames, for example, when a native/web application is created? | Your question is missing a lot of context, but what you do say sounds like you’re looking to settle an argument. So my answer will start with “It depends...” One reason to have unique usernames that aren’t email addresses is to protect privacy when other users can see the username. For example, GitHub profiles indicate the username in the profile URL, and as authorship indicators on commits, issues, comments, etc. Providing a username as the user’s public face instead of their email address allows them a layer of privacy. In some rare cases, a service may elect not to collect email addresses at all... since email addresses can be considered sensitive and personally identifiable information. The downside to not collecting an email address at all is that account recovery for someone who forgets their password, or has their account breached, will be more difficult without a verified channel to use for recovery. Or for the hybrid approach, one might collect
the email address, but store it in the database behind strong encryption. Strong encryption is generally difficult to search on, so having a less sensitive identifier to use that can be stored in plaintext would be convenient. | {
"source": [
"https://security.stackexchange.com/questions/184344",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/170381/"
]
} |
184,365 | The ease of taking control of a cheap IoT device makes them the perfect target to craft a botnet to perform DDoS attacks. I heard about botnets created from a lot of hacked IoT devices used to create DDoS attacks on websites. How much of this information is true? Is there an existing common threat about IoT botnets nowadays? Which were the most popular IoT botnets and attacks? | Your question is missing a lot of context, but what you do say sounds like you’re looking to settle an argument. So my answer will start with “It depends...” One reason to have unique usernames that aren’t email addresses is to protect privacy when other users can see the username. For example, GitHub profiles indicate the username in the profile URL, and as authorship indicators on commits, issues, comments, etc. Providing a username as the user’s public face instead of their email address allows them a layer of privacy. In some rare cases, a service may elect not to collect email addresses at all... since email addresses can be considered sensitive and personally identifiable information. The downside to not collecting an email address at all is that account recovery for someone who forgets their password, or has their account breached, will be more difficult without a verified channel to use for recovery. Or for the hybrid approach, one might collect
the email address, but store it in the database behind strong encryption. Strong encryption is generally difficult to search on, so having a less sensitive identifier to use that can be store in plaintext would be convenient. | {
"source": [
"https://security.stackexchange.com/questions/184365",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/176021/"
]
} |
184,391 | My girlfriend (let's call her Jane) just got a set of SMS or MMS messages coming from a friend of hers (let's call her Hellen). These messages contain: two photos of Hellen, a voice message, and a text that says "I need help" followed by a Google Maps link. And according to the messaging application, they were sent to 5 people, including Jane. With all this being extremely suspicious, I told Jane not to touch anything in those messages and asked Hellen a bunch of questions: The phone is a Samsung Galaxy J5, Android version 7.1.1. Hellen did not take those pictures of herself, and the pictures do look accidental, as if the phone took the photos while she was just holding it doing another thing. Hellen doesn't remember any recent apps installed or unusual web activity (I don't fully trust her on this one but whatever). With the case exposed, I ask you: Am I right in assuming this is only Hellen's problem, and that as long as Jane ignored and deleted the messages she is safe? How should we proceed about Hellen's phone? Would a simple factory reset do it? How can we prevent this situation in the future? | This does not seem to be a virus. It is a panic function on some Android phones that sends these messages, in case you are kidnapped or otherwise in danger, when the power button is pressed 3 times. She must have activated it accidentally. More info here and here. | {
"source": [
"https://security.stackexchange.com/questions/184391",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/176424/"
]
} |
184,475 | According to NetTrack , Let's Encrypt is now used on more than 50% of domains (51.21% as of April 2018). I know Let's Encrypt helped a lot of people to get free certificates for their websites, so I think its existence was a very good thing for the web. But does the fact that a certificate authority is used by the majority of users cause security risks? Note: this question is CA-agnostic, even if Let's Encrypt is the main CA today. | TL;DR : It does not matter much. The only security "risk" here really is the CA being "Too big to fail", where the browsers cannot distrust the CA quickly. But this is happening to all big CAs, not just the biggest one. Other than that, the only problem may be the CA being a more tempting target, though all CAs are already very tempting. Having all the eggs in one basket has its advantages and its disadvantages in this situation. Advantage is, that you need to protect just one basket, disadvantage is that if that basket breaks, the impact is somewhat bigger (assuming technologies like CAA and HPKP are used, otherwise it does not matter how big the CA is). | {
"source": [
"https://security.stackexchange.com/questions/184475",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76718/"
]
} |
184,893 | According to XKCD: Password Strength, if the password consists of "four random common words", it will be secure and memorable. I want to make a web application and make users create their passwords in this way. Each password should have at least 16 characters and must be a sentence of at least 4 words to make it more secure and memorable. But this will make the attacker know that all passwords are like that. Is this a bad idea? What better ways are there to force passwords that are more secure and memorable at the same time? | It is not necessarily a bad idea. The attacker can know the password is in that format, provided the 4 words are random enough. But here is the thing: there are other good ways to make a memorable strong password. Limiting your users to the one you like is not very nice. For example, I use a password manager with truly random long passwords, which is even better than what you propose, but I could not do that on your site. More importantly, if the reason you want to do this is to force users to use a strong password, then generate the 4-word password for them. You can generate such a password by having a dictionary, then choosing a random number between 1 and the number of words in your dictionary and taking that word. Do this 4 times and you have a password. You can get inspiration here. This is important, as most users may not choose 4 truly random words, but instead 4 easily guessable words. In such a scenario, this would be worse than letting them choose any password. (A minimal generator sketch follows this record.) | {
"source": [
"https://security.stackexchange.com/questions/184893",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/177014/"
]
} |
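A minimal sketch of the server-side generator suggested in the answer above. The word list here is a tiny placeholder purely for illustration; a real deployment would load a few thousand common words (for example a diceware-style list), and the secrets module is used so the choices are cryptographically random rather than guessable.

import math, secrets

# Placeholder list; replace with a few thousand words loaded from a file.
words = ["apple", "river", "stone", "cloud", "tiger", "lamp", "forest", "music"]

def passphrase(n_words=4, separator=" "):
    return separator.join(secrets.choice(words) for _ in range(n_words))

print(passphrase())   # e.g. "river lamp apple cloud"
print(f"~{4 * math.log2(len(words)):.1f} bits with this toy list, "
      f"~{4 * math.log2(7776):.1f} bits with a 7776-word diceware list")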
184,905 | I'm developing a system with communication via REST between front (JavaScript) and back end (Java/Spring) and this question popped up. Does it make this system more secure to name variables, URLs, etc. in a language other than English? I imagine it could because, since the most important programming languages are in English, it's likely most programmers know at least a bit of it. Naming our stuff in another language could make hacking more difficult because the attacker would have a harder time understanding what means what and does what. I couldn't find results on Google because "programming" and "language" together make it impossible to find results about the other meaning of "language". | Technically slightly, yes. But: it would be security by obscurity, which is a bad idea; it does not boost confidence in your product; it would be very easy to figure out what does what - it would only take a bit of time and Google Translate; you could just use meaningless names, and it would still not help much; it would make maintenance harder; and it would make audits very hard, as the auditors may not understand the language. All things considered, it is probably never worth it. | {
"source": [
"https://security.stackexchange.com/questions/184905",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/177028/"
]
} |
185,043 | I have only a basic understanding of how files are written in a hard disk. Here I assume the files are overwritten this way. File A is deleted Its address is removed and showed vacant New File B is overwritten above the space of File A I am assuming here that even though we are deleting File A and overwriting with File B, we can still recover File A with its residual information using powerful software or instruments. I have an idea to remove this residual information. Lets assume we have an imaginary hard disk with 10 GB space in which 5GB is filled with files and the other 5GB is empty. Then we select all the 5GB files in the hard disk then Ctrl+X (cut) it and then Ctrl+V (paste) the same files in same hard disk. Performing this operation for n number of steps thus overwriting it again and again. Will this completely wipe the residual information? | You have to stop thinking about this on the file level. For a storage device, all that matters is the sector. If one sector on a hard drive* is overwritten, the data in it is gone for good. There is no known way to retrieve it even with "powerful software", and there is no need to overwrite the same sector multiple times. Modern hard drives encode data in such a dense and complex format that a single overwrite will invariably make that data irretrievable (we can't even recover data from an old fashioned low-density audio cassette tape!). However, whether or not filling up a bunch of free space on a hard drive will actually overwrite the sensitive sectors is another matter. Due to features such as damaged sector relocation, and due to the behavior of the specific filesystem, it is not possible to guarantee that free space has been overwritten without overwriting the entire drive. Filesystems are incredibly complex, and they are far more than flat databases of files. Cutting and pasting a file for example does not do anything but move it, which involves changing only a few bytes in filesystem metadata, regardless of how large the file is. So what about deleting a file and then filling the free space with dummy data until the drive is full? That might work, but it might not. Many filesystems contain redundant copies of information. The filesystem ext4 for example can keep copies of small files in its journal , which does not get overwritten when you wipe free space. The exact way of storing new files is also more complex than what you have mentioned. When you delete a file, you are deleting a reference to the file (as you have surmised). However, creating a new file does not guarantee that you will be overwriting the sectors that made up the previous file. A new file will likely be strategically placed at an address on the hard drive that minimizes access latency, or which decreases fragmentation. The file may not even be deleted, but simply hidden, in order to make undeletion and incremental backups ("snapshots") possible. A new file will not simply be stuffed into the newly unallocated space in all but the very simplest of filesystems. * When I say hard drive, I mean a real spinning rust. Solid state and hybrid drives work differently such that, even if you overwrite the same sector twice, the physical location that the data is saved to may be different each time. If you actually need to remove a file such that no one can recover it, you will not have many options at your disposal that preserve any other existing data on the drive. 
But you aren't out of luck: If you have encrypted the file or drive, you can simply throw away the encryption key. You can erase the entire drive, for example by using ATA Secure Erase. You can destroy the drive from the outside using an expensive degaussing machine. Deleting the file and then filling up free space, or shredding the file using data erasure tools (which typically get a list of the sectors that the file occupies, and then overwrites those specific sectors) will generally destroy the majority of the file, but comes with a high risk of incomplete erasure, with both the file's metadata and potentially small portions of the file remaining elsewhere on the drive. | {
"source": [
"https://security.stackexchange.com/questions/185043",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/89608/"
]
} |
185,179 | Yesterday, Twitter announced that they recently identified a bug that stored passwords unmasked in an internal log. Recently, GitHub also had a similar bug. In both cases, they claim that nobody had access to these files. Twitter: We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Again, although we have no reason to believe password information ever left Twitter’s systems or was misused by anyone, there are a few steps you can take to help us keep your account safe: GitHub: The company says that the plaintext passwords have only been exposed
to a small number of GitHub employees with access to those logs. No
other GitHub users have seen users' plaintext passwords, the company
said. GitHub said it discovered its error during a routine audit and made it
clear its servers weren't hacked. To note, GitHub has not been hacked or compromised in any way. How can Twitter and GitHub be sure that they have not been hacked or compromised in any way?
Would someone who is compromising a server and read/copy a file always leave traces? | They can't be sure. In fact, you can never be sure you haven't been hacked. But a thorough examination can make you conclude that it is more or less likely. The Twitter statements only says that there is no indication of a hack. That doesn't exclude the possibility that they were hacked, and in urging their users to change their passwords they implicitly admits that. As for GitHub, the wording is a bit more categorical. But I think forcing a password reset shows that they understand the risks involved. | {
"source": [
"https://security.stackexchange.com/questions/185179",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/174840/"
]
} |
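The underlying bug in both incidents — request data written to internal logs before the password is hashed — is commonly mitigated by scrubbing sensitive fields before anything reaches the log. A minimal Python sketch of the idea; the field names are made up for illustration:

```python
SENSITIVE_KEYS = {"password", "passwd", "new_password", "old_password", "token"}

def scrub(params: dict) -> dict:
    """Return a copy of the request parameters that is safe to log."""
    return {k: ("[REDACTED]" if k.lower() in SENSITIVE_KEYS else v)
            for k, v in params.items()}

# logger.info("login attempt: %s", scrub(request_params))
```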
185,236 | I am talking about this password - 23##24$$25%%26 and the similar ones consisting of special characters appearing in a pattern, which the users these days use a lot. At work (finance company), I was creating a list of bad passwords that users should not be allowed to choose, involving certain cycles or pattern, and the above kind exhibit that trait of containing consecutive numbers sandwiching special characters (repeated certain times, here twice). Just out of curiosity, I checked this on a very well known website (on its login page), and it stated that this would take centuries to break. Some more examples in the list which would take years to break, but are highly vulnerable: 1!2@3#4$5%6^ 2@3#4$5%6^7& a!b@c#d$e%f^ Now, I stand confused with the list, should I mark these particular kinds of passwords as vulnerable and disallow the users going for them, even if they take a very long time to break? Note 1 - We are considering these vulnerable as many users (and, hence, multiple similar passwords) are following this trend to remember things easily. Note 2 - We are confused because the security guys are all about increasing the entropy, what they are ignoring is an increased orderliness of similar hashes in the database There arose a friction between guys as to where we draw a line, defining which password be allowed or disallowed. Edits: This website I am talking about, but won't name, on which I tested the password is pretty much known to every ethical/unethical hacker, almost every user here on Stack Exchange and many big/small companies (as they use its services). We do not store passwords in plain text . We use a nice, not home-brewed, hashing algorithm. An incident led us to review our password policies, and we created a separate database db-2 on a different machine, wherein we stored simply_hashed_newly_created user's password (no salt, nothing, just hashed_password) without storing who_created, when_created details. We did this for short period only. Any password change too went into this database. While, in our original database db-1 sat secure_salted_passwords. We kept creating a list L of hashed vulnerable passwords as well, following a pattern, and we were left amused when we matched L & db-2 - we saw multiple groups of similar passwords, with certain patterns. L and db-2 were later erased from the systems. db-2 was kept on a highly secure machine and was safe, won't disclose the exact details here. We are aware that even air-gaps or electric-sockets aren't secure. Both db-2 and L were destroyed. Worry not about passwords posted here, as we have already conducted our small experiment, and all those specific set of users have been made to reset their passwords (of course, a new password different from the old one). That's the reason, I came here posting few samples. And, I have commented here earlier too, that I posted a sample of passwords from the generator's logic which created a very huge list of hashed passwords, which may or may not be in db-2 . Again, since all those users have a new password, so no worries, everything is safe and secure. Since I know the generator's working, I tested few password-samples on a website, and we got confused over which ones to be allowed or disallowed, over what criteria. Thanks a lot for your answers, accept my gratitude. Based on the answers, for the time being, we have postponed any kind of restriction to be put over password's choice. 
| The online calculators are basing their results on a particular set of assumptions, ones that might not apply in any one case. There is no basis for trusting the calculators to provide any insight into how an attacker might choose to break the password. For instance, if I know that you use a pattern, and what that pattern is, then I would adjust my bruteforcing to align with the pattern. There is no way for an online calculator to account for that. There are other similar factors to take into account. "Entropy" is all about ensuring as much randomness as possible. A set pattern has much-reduced randomness, regardless of the characters used. So, yes, do ban such obvious patterns, if that meets your goal. I do have concerns about your method of banning these passwords, though. You might be fighting the wrong battle. | {
"source": [
"https://security.stackexchange.com/questions/185236",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/123651/"
]
} |
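If the decision is to ban such patterns, a server-side check is straightforward. The sketch below is a rough heuristic, not a statement about how any real site scores passwords: it strips the punctuation "filler" and looks for consecutive number groups or an ascending character run.

```python
import re

def has_consecutive_number_groups(password: str, min_groups: int = 3) -> bool:
    """Catches fillers around counting numbers, e.g. 23##24$$25%%26."""
    groups = [int(g) for g in re.findall(r"\d+", password)]
    return (len(groups) >= min_groups
            and all(b - a == 1 for a, b in zip(groups, groups[1:])))

def is_ascending_run(password: str, min_len: int = 5) -> bool:
    """Catches sequences like 1!2@3#4$5%6^ or a!b@c#d$e%f^."""
    core = re.sub(r"[^0-9a-z]", "", password.lower())
    return (len(core) >= min_len
            and all(ord(b) - ord(a) == 1 for a, b in zip(core, core[1:])))

for pw in ("23##24$$25%%26", "1!2@3#4$5%6^", "a!b@c#d$e%f^"):
    print(pw, has_consecutive_number_groups(pw) or is_ascending_run(pw))
```

Checks like this only reject the obvious pattern families discussed in the question; the strength of the remaining passwords still has to come from length and genuine randomness.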
185,284 | I'm connected over a café WiFi and received a warning from my mobile browser. When I looked further, it seems like the certificate is only valid for one day, which seems super suspicious. It says Imgur on it, but then why is it flagged up and why is it only valid for one day? Here is the same certificate while using a friend's hotspot/data: I've not found another certificate that's affected. | This isn't one of Imgur's certificates. Certificate Transparency logs Certificate Authorities must report all certificates they generate to transparency logs, which are public databases. This allows user-agents, like Chrome, to check that this certificate can be audited by the website's owner. According to the following certificate transparency search tools, this certificate was not logged, and such a short lifetime is not usual for Imgur: crt.sh Google Facebook DNS Filter According to the error messages, this certificate hasn't been issued by a valid certificate authority, so you can't trust the issuer. The issuer claims to be "DNSFilter". DNSFilter is a proxy used to filter requests, and it also tries to proxy HTTPS requests, so it generates a self-signed certificate for every domain. Since you can't trust the issuer, you can't be sure that the certificate comes from the real DNSFilter product. Anyone could be impersonating it. It's safe to assume that this is not a legit certificate for Imgur. The exact reason for such a short lifetime for the certificate is unknown. | {
"source": [
"https://security.stackexchange.com/questions/185284",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11640/"
]
} |
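Checking Certificate Transparency yourself can be scripted. The sketch below queries crt.sh's JSON endpoint (an unofficial interface whose response fields may change) for certificates logged under a domain; an unlogged, one-day certificate like the one above would simply not appear in the output.

```python
import requests

domain = "imgur.com"
resp = requests.get("https://crt.sh/",
                    params={"q": domain, "output": "json"},
                    timeout=30)
resp.raise_for_status()

# Print validity window and issuer for the first few logged certificates.
for entry in resp.json()[:10]:
    print(entry["not_before"], entry["not_after"], entry["issuer_name"])
```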
185,301 | I am a university student and I have only now reached the part of my degree where we focus on security. The task was very broad in how we were to protect our database and user details. All it said was "you need to store the user's passwords in hashed form.". I am now wondering if I should perform the hash client side (JS) then send the hashed data to the server to be stored in the database or if I send the password in plain text to the server and hash the data server side (PHP) before storing. I am using the HTTP POST request for both scenarios. (I am under the impression that if you use a POST request, users/attackers cannot access parameters being sent to the server. Is this right?) On another note, what would be a good hashing algorithm to use in my situation? | This isn't one of Imgur certificates. Certificate Transparency logs Certificate Authorities must report all certificates they generate to transparency logs, which are public databases. This allows user-agents, like Chrome, to check that this certificate can be audited by the website's owner. According to the following certificate transparency search tools, this certificate was not logged, and such a short lifetime is not usual for Imgur: crt.sh Google Facebook DNS Filter According to the error messages, this certificate hasn't been issued by a valid certificate authority, so you can't trust the issuer. The issuer claims to be "DNSFilter". DNSFilter is a proxy used to filter requests , and it also tries to proxy HTTPS requests , so it generates a self-signed certificate for every domain. Since you can't trust the issuer, you can't be sure that the certificate comes from the real DNSFilter product. Anyone could be impersonating it. It's safe to assume that this is not a legit certificate for Imgur. The exact reason for such a short lifetime for the certificate is unknown. | {
"source": [
"https://security.stackexchange.com/questions/185301",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172029/"
]
} |
185,366 | In the context of SSL/TLS certificates, what is the difference between key encipherment and data encipherment? What are some examples that highlight the difference? | Key encipherment means that the key in the certificate is used to encrypt another cryptographic key (which is not part of the application data). This is used within TLS in the RSA key exchange, where the pre-master secret (from which the symmetric encryption key is derived) is generated by the client, then encrypted with the server's public key, sent to the server, and decrypted there with the server's private key. Data encipherment means that the key in the certificate is used to encrypt application data. This is not used in TLS. But certificates are not only used for TLS (for example also in S/MIME, VPN, signing of documents ...) so there might be use cases where this is needed. | {
"source": [
"https://security.stackexchange.com/questions/185366",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/177528/"
]
} |
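The two usages correspond to the keyEncipherment and dataEncipherment bits of a certificate's keyUsage extension, which can be inspected programmatically. A small sketch using the pyca/cryptography library; the file name is a placeholder:

```python
from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

usage = cert.extensions.get_extension_for_class(x509.KeyUsage).value
print("keyEncipherment: ", usage.key_encipherment)   # e.g. RSA key exchange in TLS
print("dataEncipherment:", usage.data_encipherment)  # rarely set; direct data encryption
```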
185,435 | I've just read an article about bitsquatting (which refers to the registration of a domain name one bit different than a popular domain) and I'm concerned about how it could allow an attacker to load its own assets on my website. For example, if my website located at https://www.example.org/ loads a script file located at https://www.example.org/script.js , then an attacker could register dxample.org and host a malicious JS file, which would be downloaded and executed by some users of my website. Is there any standard defense technique against it? | Is there any standard defense technique against it? As outlined in the other answers, bit errors when querying domain names may not be a realistic threat to your web application. But assuming they are, then Subresource Integrity (SRI) helps. With SRI you're specifying a hash of the resource you're loading in an integrity attribute, like so: <script src="http://www.example.org/script.js"
integrity="sha256-DEC+zvj7g7TQNHduXs2G7b0IyOcJCTTBhRRzjoGi4Y4="
crossorigin="anonymous">
</script> From now on, it doesn't matter whether the script is fetched from a different domain due to a bit error (or modified by a MITM) because your browser will refuse to execute the script if the hash of its content doesn't match the integrity value. So when a bit error, or anything else, made the URL resolve to the attacker-controlled dxample.org instead, the only script they could successfully inject would be one matching the hash (that is, the script you intended to load anyway). The main use case for SRI is fetching scripts and stylesheets from potentially untrusted CDNs, but it works on any scenario where you want to ensure that the requested resource is unmodified. Note that SRI is limited to script and link for now, but support for other tags may come later: Note: A future revision of this specification is likely to include integrity support for all possible subresources, i.e., a , audio , embed , iframe , img , link , object , script , source , track , and video elements. (From the specification) (Also see this bug ticket) | {
"source": [
"https://security.stackexchange.com/questions/185435",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76718/"
]
} |
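The integrity value shown in the answer above is just a base64-encoded SHA-256 digest of the file, so it can be generated as part of a build step. A minimal Python sketch:

```python
import base64
import hashlib

with open("script.js", "rb") as f:            # the same file you will actually serve
    digest = hashlib.sha256(f.read()).digest()

print("sha256-" + base64.b64encode(digest).decode())
```

The value has to be regenerated whenever script.js changes, otherwise browsers will (correctly) refuse to run the stale copy.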
185,711 | Android uses full-disk encryption to encrypt files and decrypt them at startup. What I don't understand is that decrypting multiple gigabytes of files must take a lot of time, if nothing else then at least the I/O access time required to read all the contents of the storage, but Android boots up in mere seconds. How is this possible? | Encryption happens in memory, not on the disk. You are misunderstanding how disk encryption works. It does not read the entire disk and replace it with a decrypted version. Rather, when a file or sector of encrypted data is accessed, it is read into memory and decrypted in memory. Likewise, when writing to the disk, data is encrypted in memory before being saved to persistent storage. Operating systems keep copies of data that have been read or are to be written in memory (the filesystem buffer) as a performance optimization. It is in this memory that the encryption and decryption take place. This allows data to be read from the disk and decrypted once, but subsequently accessed many times from memory. Incidentally, using memory to store frequently-accessed files is why so many people mistakenly think Linux eats too much RAM. I also want to point out that the bottleneck is often I/O, not encryption. You say that encrypting gigabytes of data takes a long time, but on most modern machines (including mobile devices), encrypting gigabytes of data can take only a few seconds (especially with hardware acceleration, encryption is really, really fast). However, the solid-state drive in most modern Android devices is unable to read or write data at nearly those speeds. So no matter how fast you are trying to read or write data to the disk, the bottleneck will generally always be I/O, not encryption. Older hardware often did suffer reduced performance when encryption was in use. This was because, at that time, storage speeds were improving faster than processor speeds. The lack of dedicated hardware acceleration for cryptography and the inefficient algorithms often caused a noticeable slowdown when accessing the disk. On modern systems, this is reversed. The processor is so fast that the storage device struggles to keep up. Any overhead is negligible. | {
"source": [
"https://security.stackexchange.com/questions/185711",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/177965/"
]
} |
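The claim that encryption itself is rarely the bottleneck is easy to sanity-check. This sketch times AES-CTR over 256 MiB in memory using the pyca/cryptography library; on hardware with AES acceleration the throughput typically comes out far above what a phone's flash storage can deliver, though the exact figure obviously depends on the machine.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
data = os.urandom(256 * 1024 * 1024)          # 256 MiB of dummy plaintext

start = time.perf_counter()
enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = enc.update(data) + enc.finalize()
elapsed = time.perf_counter() - start

print(f"{len(data) / elapsed / 2**20:.0f} MiB/s")
```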
185,835 | Are there any security concerns with logging that a user changed their password? I'm already logging whenever an admin changes a user's password for audit purposes, but is there a reason to not have a log of when each user changed their own password? Edit: Answers to questions below What is your expected gain from this? Mainly forensic. The ability to see who (user_id of admin or self) changed the password in case the user claims they were hacked. We are not using this to support any password managing schemes such as forced password change or disallowing password reuse. Who would have access to these logs? System administrators and possibly a small support team. About what kind of accounts are we talking about? User accounts on an e-learning platform, i.e. teachers and students. Also, do you have password expiration rules? No. I'm talking about storing when every change of password was done, not only the last change. What info are you storing? (not actually asked but implied) We are storing the user id of the user whose password was changed, the user id of the user doing the change (it could be an admin or a student's teacher), the time the password was changed and the URI used to change the password. | To answer your question: Yes, you can and SHOULD log password changes, and there's nothing fundamentally wrong with doing so, as long as you don't e.g. record the password itself. What to log? When designing logging for security purposes you want to address these questions: When did the event happen? The date and time the event occurred (Use the common log format) What was the event? A short description of the event (e.g. Password Change) Who triggered the event? The user id, name, email or some unique identifier Why was the event triggered? This is not the same as the "What" even though many people use it that way. This is the reason the event was executed (e.g. Password changed due to policy, User manually changed password, etc.). This can be really good for weeding out noise. Scenarios One of the best methods for discussing what to see is via scenarios and asking the team: What information does the event provide? Is the event required for compliance / legal? Are we logging for detective reasons? (e.g. Triggering a SIEM) or for corrective? (e.g. Forensics after the fact) Who will be looking at the logs? How will we protect the logs? Example: James is part of the IR team, which is responsible for Made-Up Company's critical application 'Non-Existence'. James wants to be able to see all password changes in order to detect changes that occur outside the normal policy process. These events will trigger an investigation if a password change happens without an incident logged by the support team. Logs will be sent to the IR SIEM appliance which will use a rule set to trigger warnings to the IR team when an incident cannot be correlated to a password change outside of a required policy change. (Obligatory caution about using this at a workplace. I just made this example up.) [edit]
- Updated the initial answer to be more clear. Thanks to @SeldomNeedy for the suggestion. | {
"source": [
"https://security.stackexchange.com/questions/185835",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/73083/"
]
} |
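Putting the when/what/who/why advice together, a password-change audit record might look like the JSON event below. The field names are illustrative, not a standard; the important property is that the event answers all four questions while containing no password material at all.

```python
import datetime
import json

event = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),  # when
    "event": "password_change",                                             # what
    "subject_user_id": 4211,      # whose password was changed
    "actor_user_id": 17,          # who triggered it (self, admin or teacher)
    "reason": "admin_reset",      # why: user_initiated / admin_reset / policy_expiry
    "request_uri": "/account/password",
    # Note: no password, hashed or otherwise, is ever written to the log.
}
print(json.dumps(event))
```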
185,844 | An acquaintance of mine got a call from an alleged Microsoft employee and gave him access to his Windows 10 computer via TeamViewer (commonly known as the tech support scam). But when the scammer wanted to send him a file he got suspicious and immediately shut down the computer before anything could be sent. He did not give away his credit card number or any other personal information. Afterwards he immediately changed his passwords from another computer and has not connected the affected computer to the internet since. He has now asked me for help, but I am not sure which steps are necessary. Do you think the computer could be infected? A TeamViewer remote session was active, but as I said, no file was sent. Is it still possible to infect a computer? My plan is to start a live CD and run a virus scan, but I am not sure if it is necessary to erase the whole disk. That would be the safer way, but also much more time consuming. Is it possible that the router could have been infected? I want to check the DNS settings; is there anything else I should check? Or should I completely reset the router? It would be nice if someone gave me some hints and advice. I don't think the question is a duplicate of these two: what to do after a "tech support" scam Help! My home PC has been infected by a virus! What do I do now? Because I'm more interested in whether it was possible to infect the computer without sending a file rather than in what to do if there is a virus on the computer. PS I'm from Germany; it seems like the tech support scam has reached non-English-speaking countries as well... | From your description, there is nothing to worry about. The victim just shared the screen with the attacker without giving the attacker control or giving the attacker any information. As the victim used a common tool (TeamViewer) and not one provided by the attacker, there is no risk in the shared session. There is no risk to the router as the attacker never had access to it. It is not known what information the attacker saw on the screen, but perhaps the only concern is the disclosure of the IP address. This can be mitigated by turning the router on/off (which works in some instances) or asking the ISP for a new IP. | {
"source": [
"https://security.stackexchange.com/questions/185844",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/178131/"
]
} |
185,889 | I've seen setups where a password-protected database resided on the same server as an application holding the credentials to said database in plain text. What are the benefits of such a setup over a simply unprotected database? Apart from some obfuscation, having to configure both the database and the application with those credentials only seems to add complexity, but not real security. An attacker with access to the server could always just find the credentials in the application's config file. Edit: I'm talking about a database only visible within the same machine, and not exposed to the outside. | It makes sense to password protect the database if you secure access to the application's config file that holds the plaintext credentials. When you restrict read access to the application's account only, the attacker requires root access to see the password. If he has breached any other (less privileged) account, he will not be able to gain access to the database. | {
"source": [
"https://security.stackexchange.com/questions/185889",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/125571/"
]
} |
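Restricting the config file to the application's own account is mostly a matter of ownership and mode bits. A sketch of the idea in Python (the account name and path are made up; the same effect is usually achieved with chown/chmod at deploy time, and the chown call needs root):

```python
import os
import pwd
import stat

cfg = "/etc/myapp/db.ini"                     # hypothetical config holding the DB password
app_uid = pwd.getpwnam("webapp").pw_uid       # hypothetical service account

os.chown(cfg, app_uid, -1)                    # owned by the app account, group unchanged
os.chmod(cfg, stat.S_IRUSR | stat.S_IWUSR)    # mode 0600: only the owner can read or write it
```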