source_id (int64, 1 to 4.64M) | question (string, lengths 0 to 28.4k) | response (string, lengths 0 to 28.8k) | metadata (dict)
---|---|---|---|
224,304 | I received a report from the security team today. The report contains the vulnerabilities and descriptions mentioned below: 1) Poor Error Handling: Overly Broad Throws The methods in program1.java throw a generic exception, making it harder for callers to do a good job of error handling and recovery. 2) Poor Error Handling: Overly Broad Catch The catch block at EXAMPLE1.java line 146 handles a broad swath of exceptions, potentially trapping dissimilar issues or problems that should not be dealt with at this point in the program. 3) Poor Error Handling: Empty Catch Block The method SomeMethod() in somefile.java ignores an exception on line 33, which could cause the program to overlook unexpected states and conditions. After reading the report above, a few questions come to mind: How are these related to security? According to my understanding, these seem like code quality issues. If they are real security threats, then how can an attacker exploit them? What kind of protection controls are needed to mitigate the above issues? | I think you are correct that those issues are more related to code quality rather than security, and none of them are exploitable in any obvious way. I would not call them "vulnerabilities". But vulnerabilities are born out of bugs, and bugs are born out of bad code quality. Bad error handling could lead to unexpected results - i.e. a bug - and if you are unlucky that could lead to vulnerabilities. Here are some examples related to exception handling: If you don't handle an exception at all, it could lead to the program crashing. This could be used for a denial of service attack - simply flood the server with malicious requests that will crash it, and it will spend all its time restarting instead of serving legitimate requests. Unhandled exceptions might lead to disclosure of sensitive information, if error messages are passed on in HTTP responses (as pointed out by Rich Moss). Overly broad catch statements that catch everything could lead to security-critical exceptions being glossed over. For real examples, check out this CVE list filtered on the word "exception" (as suggested by Ben Hocking). The solution is to follow best practices for exception handling. I will not cover that in detail here since that is more of a programming than a security issue. | {
"source": [
"https://security.stackexchange.com/questions/224304",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213787/"
]
} |
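The report above concerns Java files, but the anti-patterns it flags are language-agnostic. A minimal illustrative sketch in Python (not the code from the report; file paths and function names are made up) contrasting an empty/overly broad handler with a narrower one:

```python
import logging

logger = logging.getLogger(__name__)

# Anti-pattern: the empty / overly broad catch block flagged in the report.
# Permission errors, disk failures or security-relevant exceptions are all
# silently swallowed and the program continues in an unknown state.
def load_config_bad(path):
    try:
        with open(path) as fh:
            return fh.read()
    except Exception:
        pass

# Narrower handling: catch only what we can recover from, log it, and let
# everything else propagate to a layer that can deal with it.
def load_config_better(path):
    try:
        with open(path) as fh:
            return fh.read()
    except FileNotFoundError:
        logger.warning("config %s missing, using defaults", path)
        return ""
```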
224,446 | I have a web application built in a classic MERN stack (MongoDB, Express, React, Node) and I want to create an admin route, so I figured I could just do it with a [url]/admin route. Could that be a security risk? Of course the admin users would be prompted with some form of authentication, but having the admin panel as a public route, is that a bad practise? | It is not a security flaw to use a known admin URL. The things that should be secret are the management credentials, not the URL. It's like hiding a door, while really it's the key that you should keep secure. You can protect the door better using human guards, a perimeter fence, extra secure lock, sturdy walls, etc. This should not give you a false sense of security: the lock should still be secure to let only authorized persons enter, but these measures can help. Translating them to the digital world, you can put the admin panel behind an IP address whitelist (guards that check ID cards), have multi-factor authentication (extra secure lock), only allow connections through an internal management network interface (a fence), etc. It slightly helps not to have a known/predictable URL, but this is mainly useful for common applications. If there are ten thousand WordPress websites out there and a security issue is found, then the first thing hacking groups do is send hack attempts to as many /wp-admin pages as possible (the standard WordPress admin URL). If you changed your URL to /wp-admin5839 then your site will not be hit in the first wave with untargeted attacks. Nevertheless, if someone means to hack specifically your site, odds are that they spent time guessing your admin page already or managed to figure it out some other way, and once the security issue becomes known, they will just use it on your hidden page before you have a chance to patch. So it doesn't help a lot, but it does help a little in some specific scenario. | {
"source": [
"https://security.stackexchange.com/questions/224446",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/198465/"
]
} |
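The question is about an Express/MERN app, but to keep the examples in this collection in one language, here is a hedged, framework-agnostic sketch (standard-library Python WSGI) of one control mentioned in the answer above: an IP allowlist in front of the /admin route. The addresses, port and path are placeholders, not recommendations.

```python
from wsgiref.simple_server import make_server

ADMIN_ALLOWLIST = {"10.0.0.5", "192.168.1.20"}  # placeholder management IPs

def app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    # Note: behind a reverse proxy, REMOTE_ADDR is the proxy's address and
    # the check would have to be done at the proxy instead.
    client = environ.get("REMOTE_ADDR", "")
    if path.startswith("/admin") and client not in ADMIN_ALLOWLIST:
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"Admin panel restricted to management addresses\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()
```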
224,447 | My dad sent me this video asking if he should be worried about this? The video shows: a wifi AP broadcasting an airport's wifi name security researcher seeing the sites the victim browses security researcher viewing files accessed by victim on cloud storage victim installing attacker's "free wifi" app the app giving the security researcher full control over the device Obviously most folks should be wary of untrusted WiFi networks, but there’s a couple of strange things occurring that makes me wonder if this is just an over-hyped hacker story. First, the Google search he performs seems to be protected by TLS, how is that possible with just MitM? Then he does some truly mind-blowing stuff like being able to access the microphone, record audio and send it to himself. No way that’s done via just an MitM over WiFi. Am I missing something, or does this community concur that this video is either over-simplified or just plain deceitful? Connecting to a strange WiFi might get you into trouble, but it alone cannot cause this level of compromise... can it? | Am I missing something, or does this community concur that this video
is either over-simplified or just plain deceitful. I wouldn't say it's deceitful, but it's definitely overhyped/oversimplified. First, the Google search he performs seems to be protected by TLS, how
is that possible with just MiTM? Yes. In order to do that, he would have to either strip SSL or install a root CA certificate on the mobile device. So you can't simply MITM https websites (the video over-simplifies it). Then he does some truly mind-blowing stuff like being able to access
the microphone, record audio and send it to himself. No way that’s
done via just an MiTM over WiFi. Of course not. You can't just use a phone's microphone via MITM over WiFi. As you can see in the video itself, he says that you make the victim install an application and then you can record the microphone or access data on the phone. He obviously oversimplifies it. Not only will the victim have to install the application, but they also have to give all the required permissions to the app (if you are dumb enough to do that, I guess you could make someone install a root CA as well). Connecting to a strange WiFi might get you into trouble, but it alone
cannot cause this level of compromise... can it? At the end of the day, using public WiFi is similar to being on the same network as the attacker, but that's about it. Don't be stupid, keep software updated and be informed about security. The story is overhyped. Same as the ads from VPN companies. | {
"source": [
"https://security.stackexchange.com/questions/224447",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172108/"
]
} |
224,614 | As we know, the CIA triad of security requirements means: Confidentiality, Integrity, Availability. I don't understand why "Integrity" and "Availability" are defined separately. If we ensure the Confidentiality of a plaintext, the decrypted plaintext comes back whole, so Integrity seems to follow automatically - why gild the lily? And if Integrity is defined, the decrypted plaintext must be usable, so Availability seems like gilding the lily too. | You're focusing on a very narrow scope here. The CIA triad is about security of a whole system, not just an encrypted message. That being said, all elements of the triad do apply to your example: Confidentiality: As you mentioned, encryption's primary purpose is to enforce confidentiality. Integrity: Encryption does not automatically provide integrity. An attacker could swap an encrypted message for a previously seen encrypted message. An attacker could abuse ciphertext malleability in order to modify the plaintext without knowing the key, e.g. if a stream cipher was used without an authenticity record on the ciphertext. Availability: An attacker might delete or corrupt the encrypted message, or leverage a denial-of-service (DoS) attack against the system that contains the encrypted message. | {
"source": [
"https://security.stackexchange.com/questions/224614",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/190529/"
]
} |
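The "ciphertext malleability" point in the answer above can be shown with a toy example. The sketch below is not any real system's scheme; it just XORs a one-off keystream (stream-cipher style) with no authentication, so an attacker who knows or guesses the plaintext layout can flip bits without ever learning the key.

```python
import os

def xor(data: bytes, keystream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream))

key = os.urandom(16)                      # stands in for a proper keystream
msg = b"PAY   10 TO BOB "                 # 16 bytes of plaintext
ct  = xor(msg, key)                       # confidentiality only, no integrity

# The attacker knows (or guesses) the original plaintext and flips bits in
# the ciphertext without knowing the key:
tamper = xor(b"PAY   10 TO BOB ", b"PAY 9910 TO EVE ")
forged = xor(ct, tamper)

print(xor(forged, key))                   # b'PAY 9910 TO EVE '
```

An authenticated mode (or a MAC over the ciphertext) is what detects this kind of tampering, which is exactly the Integrity property the answer is separating from Confidentiality.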
224,673 | I was reading this New York Times (NYT) article about the hack of Jeff Bezos's phone. The article states: The May 2018 message that contained the innocuous-seeming video file, with a tiny 14-byte chunk of malicious code, came out of the blue. What malicious code could possibly be contained in 14 bytes? That doesn't seem nearly enough space to contain the logic outlined by the NYT article. The article states that shortly after the message was received, the phone began sending large amounts of data. | Yes, it can. It could be just the trigger for a vulnerability, which would load data from specific areas of the movie into memory and execute it. The malicious part can be pretty small, and the payload could be stored elsewhere. After extracting and executing the payload, additional modules can be downloaded, doing way more than the loader. This is how most malware infections work: a small component, called the "dropper", is executed first, and it downloads and executes other modules until the entire malware is downloaded and executed. Those 14 bytes may very well be a dropper. In this specific case, those 14 bytes could load parts of the movie into memory, load their address into a register, and jump into it. Examining only the video would not show anything suspicious, as the code would look like video data (or metadata), but the 14 bytes of loader code would stand out. | {
"source": [
"https://security.stackexchange.com/questions/224673",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/225577/"
]
} |
224,701 | From what I've seen, USB-based attacks such as RubberDucky need to be able to open a terminal and then execute commands from there, usually to download and then install malware or to open a reverse shell. Is this how most, if not all USB-based infections operate? Would being able to detect and prevent keystroke injection ensure I would be safe from USB-based malware attacks? If at all relevant to the question, key combinations used to send signals to the shell would be caught and detected alongside regular keystrokes. edit: I am mostly concerned with USB-based attacks whose purpose is to infect a machine with malware and/or to open a backdoor through which manipulate the system. In the case of a reverse shell being opened, will the attack/er still rely on executing commands, i.e., assuming that on the system in question, there was only one terminal open or available will I be able to see keystrokes if this attack were taking place? In the case of data-exfiltration, would there be ways for the malware on the hardware to mount the partition/filesystem and then copying the files without being able to enter keystrokes? | There were also attacks based on the autoplay-feature (other source) , although I think this is a bit outdated with newer OS like Windows 10. There are also USB-Killers which operate on a hardware-level and kill your machine through sending high current shocks. Here's a list of other attacks that might fall in the same category, including but not limited to: An attack that actually emulates a USB ethernet adapter, which then injects malicious DNS servers into DHCP communications, potentially changing the computer's default DNS servers to use these malicious ones; sites of interest (email, banking, ecommerce, etc) can then be mimicked remotely, and the victim redirected to the mimic sites via the malicious DNS server. Attacks that use a small hidden partition on a mass storage device to boot and install a rootkit, while otherwise behaving like a normal mass storage device Various attacks intended for data exfiltration on a secured device (generally only relevant to secure air-gapped computers that the attacker can get physical access to, such as a contractor with access to secure systems) | {
"source": [
"https://security.stackexchange.com/questions/224701",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/183895/"
]
} |
225,001 | I'm developing an application over an intranet and is used only by an internal employee. There wouldn't be any external parties involved here and no external communication would be used by the application. Does it need secure software design in this case? If so, will it be enough to follow the guideline of OWASP? | While Kyle Fennell's answer is very good, I would like to offer a reason as to why it is recommended for internal applications to be designed securely. A large number of attacks involve internal actors There are many different versions of this factoid. "50% of all successful attacks begin internally", "Two thirds of all data breaches involve internal actors", etc. One statistic I could find was Verizon's 2019 DBIR , in which they claim: 34% [of the analyzed data breaches] involved internal actors Whatever the exact number may be, a significant amount of attacks involve internal actors. Therefore, basing your threat model on "it's internal, therefore it's safe" is a bad idea . Secure Software Development does not just prevent abuse, but also misuse Abuse: The user does something malicious for their own gain Misuse: The user does something malicious because they don't know any better The reason why I bring up misuse is because not everything that damages the company is done intentionally. Sometimes people make mistakes, and if people make mistakes, it's good if machines prevent those mistakes from having widespread consequences. Imagine an application where all users are allowed to do everything (because setting up permissions takes a long time, wasn't thought of during development, etc.). One user makes a mistake and deletes everything. This brings the entire department to a grinding halt, while the IT gets a heart attack and sprints to the server room with last week's backup. Now imagine the same application, but with a well-defined permission system. The user accidentally attempts to delete everything, but only deletes their own assigned tasks. Their own work comes to a halt, and IT merges the data from last week's backup with the current data. Two employees could not do any productive work today, instead of 30. That's a win for you. "Internal" does not mean free from malicious actors Some companies are technically one company with multiple teams, but they are fractured in a way that teams compete with each other, rather than working together. You may think this does not happen, but Microsoft was like this for a long time. Imagine writing an application to be used internally by all teams. Can you imagine what would happen once an employee figures out you could lock out other employees for 30 minutes by running a script that he made? Employees from "that other team" would constantly be locked out of the application. The help desk would be busy for the 5 th time this week trying to figure out why sometimes people would be locked out of the application. You may think this is far-fetched, but you would be surprised how far some people would go to get that sweet sweet bonus at the end of the year for performing better than "the other team". "Internal" does not stay "Internal" Now, in 2020, your application will only be used by a small group of people. In 2029, the application will be used by some people internally, and some vendors, and some contractors as well. What if one of your vendors discovered a flaw in your application? What if they could see that one of their competitors gets much better conditions? 
This is a situation you do not want to be in, and a situation that you could have prevented. Re-Using Code from your "internal" application You write an internal application that does some database access stuff. It works fine for years, and nobody ever complained. Now you have to write an application that accesses the same data, but externally. "Easy!", thinks the novice coder. "I'll just re-use the code that already exists." And now you're stuck with an external application in which you can perform SQL injections. Because all of a sudden, the code that was created "for internal use only", no pun intended, is used externally. Avoid this by making the internal code fine in the first place. Will it be enough to follow OWASP? The answer to this question is another question "Enough for what?". This may sound nitpicky at first, but it illustrates the problem. What exactly do you want to protect? Define a threat model for your application, which includes who you think could possibly be a threat for your application in what way, then find solutions for these individual threats. OWASP Top 10 may be enough for you, or it might not be. | {
"source": [
"https://security.stackexchange.com/questions/225001",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/225975/"
]
} |
225,012 | I asked a question on what I need to do to make my application secure, when somebody told me: That depends on your threat model. What is a threat model? How do I make a threat model for my application? | FilipedosSantos' answer does a great job of explaining a formal threat modelling exercise under, for example, the Microsoft STRIDE methodology . Another great resource is the threat modeling course outline on executionByFork's github. When I use the term "threat model" on this site, I usually mean something less formal. I generally use it as a response to new users asking "Is this secure?" as if "secure" is a yes/no property. It's usually part of a paragraph like this: That depends on your threat model. "Secure" isn't a thing; secure against what ? Your kid sister snooping on your iPhone? A foreign government soldering chips onto your datacentre equipment? Or something in between? I really like the Electronic Frontier Foundation's threat modelling framework , which focuses on asking these three questions: What are you protecting? Who are you protecting it from? How many resources can you invest in protecting it? I really like the way the EFF has written this because these simple and easy to answer questions can guide someone with zero background in security into figuring out "the right amount of security" for them. | {
"source": [
"https://security.stackexchange.com/questions/225012",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
225,057 | From a security perspective, is it considered bad practice to use the company name as a part of an SSID? Assuming that the company is located in a densely populated area with a lot of "competing" wireless networks, having an easily identifiable SSID would make it easy to find the network for those who work at that office. However, it would be equally easy for others outside the office to find the network. Would I be overthinking this if I required wireless network SSID's to be random strings? | "Hiding" your SSID is just "security by obscurity" - like hiding the front door key under the mat. It works only as long as no one figures it out. Once it is figured out, it provides zero security. In general, you want security measures that will work even if everyone knows what measure you've used. Yes, by providing your name, any opportunist can focus on your network if they have that desire, but just having a WiFi network broadcasts that a network is there, anyway. If someone is targeting you specifically, they will find your SSID, even if you obscure or hide it. So, hiding or obscuring the SSID provides very, very low protection. Unless you have a specific reason to need such specific, low, and opportunistic protection (and there are possible reasons), I'd focus on securing the network instead. As JPhi1618 and emory point out in the comments, you could even create a security issue by using a nondescript SSID: If you set it to df42Sdd235f2 , for example, then someone could set up a WiFi network with your company name or even df42Sdd235f 3 in order to attract people to connect to it instead of your corporate network and the victims would not have any clues that the network was not the official network. | {
"source": [
"https://security.stackexchange.com/questions/225057",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/226047/"
]
} |
225,262 | Is an XSS attack possible on a website in which all the user generated content sits behind Auth and all content is only and strictly only shown to the user that generated it in the first place? No user generated content is ever shown to other users in its raw form, not even the admin. It's always aggregated stats and never a string. | tl/dr: Yes! Defense in depth is critical, so any XSS vulnerability is
a vulnerability that needs to be fixed, regardless of whether or not a
method of exploit is immediately obvious. The Underlying Question Whether intentional or not, there is a question behind your question that is much more important to answer. In essence what you are really asking is: Do I need to worry about XSS if an attacker can't take advantage of
it? The answer to that is simple: you need to always worry about XSS. Any XSS vulnerability is a danger and, once known, should be fixed in accordance with best practices. This should be taken as a given even without considering specifics of your particular case. The reason is because web applications are rarely static. They change often and quickly. Something starts life as an admin-only feature, so the developers decide to not worry about security. Six months later a change request comes from management and they want to share part of that feature with customers. The original developers are busy with something else and the new team doesn't notice that there is no XSS protection in the feature at all. As a result, when they push it out to everyone, the application gains a glaring XSS vulnerability. Stuff like that happens all the time . As a result, it's critical that basic application security is properly executed everywhere. The insecure-but-private applications of today invariably become the insecure-and-public applications of tomorrow. This will happen to you. Finally, defense-in-depth is critical. Many real life breaches happen because an attacker finds one weakness that allows them to exploit another weakness, which leads to another weak spot, that finally leads to something juicy. "My user's only ever see their own data, so XSS doesn't matter" works perfectly fine until an attacker figures out how to inject data into another user's session. The lack of XSS protection will then allow the attacker to gain full access to the other user's data and presumably take over their account, while having XSS protection in place may have instead completely thwarted the attack. Some Specifics So, are there any ways in which an attacker may inject data into another user's session? There probably are! In particular: Reflected XSS ( Steffen already touched on this) CSRF is a major concern: if you have an endpoint with a CSRF vulnerability, an attacker may be able to trigger a CSRF attack on another user's account, store an XSS payload in their data, and then use that XSS attack to compromise the user's account. SQLi: An attacker may be able to take advantage of an SQLi vulnerability to execute a stored XSS attack against all users' accounts, compromising everything. That last one may seem like a bit of a stretch (simply because an SQLi attack is usually a severe problem on its own), but in the case of a write-only SQLi vulnerability it is a legitimate concern. Summary Having siloed user data isn't sufficient reason to be able to skimp on basic application security, because it rests on an assumption: that user's can never access each others' data. That assumption rarely holds true. If there is no XSS protection in place than an attacker only needs to find one vulnerability anywhere that allows them to inject data into another user's account in order to take it over. However, if you practice proper security everywhere, then an attacker will have to find a vulnerability that allows them to inject data and find a vulnerability that allows their XSS payload to execute. This is a much harder task and will make your users much safer. | {
"source": [
"https://security.stackexchange.com/questions/225262",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/226285/"
]
} |
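As a concrete illustration of the "basic application security everywhere" argument above: output encoding is the core XSS defence, and it is cheap to apply even for data a user only ever sees themselves. A minimal sketch using only the Python standard library (real applications would normally rely on an auto-escaping template engine instead):

```python
import html

def render_profile(display_name: str) -> str:
    # Encode on output, regardless of where the data came from or who sees it.
    return "<h1>Welcome, {}</h1>".format(html.escape(display_name))

print(render_profile('<script>alert(1)</script>'))
# <h1>Welcome, &lt;script&gt;alert(1)&lt;/script&gt;</h1>
```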
225,346 | I'm trying to find the best degree of entropy for a password template, and after carrying out several tests, the best result came from this: à . This symbol alone adds 160 characters to the set (contrary to lower-upper case letters, numbers, or even symbols) and is readily available from a Spanish keyboard such as the one I use, which looks perfect. However, I can't find any information about this, all password generation software seems to avoid using those, and I don't know why. A password like +9qQ¨{^ adds up to 254 charset size, +9qQ¨{^aaaa has 67 bits of entropy already, setting the ease-to-remember factor aside, is there any reason to avoid using these special characters? | Language-specific characters are typically avoided by password generators because they would not be universally available (US keyboards don't have accented characters, for instance). So don't take their omission from these tools as an indication that they might be weak or problematic. The larger the symbol set ( a-z , A-Z , 0-9 , etc.) the larger the pool of possible characters to try to guess when bruteforcing a password. Adding language-specific characters adds to the pool, and that can be a good thing. But be careful about how you calculate entropy. The string ààààààààààà doesn't have a lot of entropy if you are just hitting it on your keyboard because it's convenient. Entropy is about how the characters are chosen. A randomly chosen string has high entropy and a randomly chosen string from a wide pool of characters has higher entropy. | {
"source": [
"https://security.stackexchange.com/questions/225346",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/226485/"
]
} |
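The "entropy is about how the characters are chosen" point above can be made concrete: for a password of length L drawn uniformly at random from a pool of N symbols, the entropy is L * log2(N). A small sketch with illustrative pool sizes (the numbers are examples, not a claim about any specific keyboard layout):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    # Only valid if every character is chosen uniformly at random.
    return length * math.log2(pool_size)

for pool, label in [(26, "lowercase"), (62, "alphanumeric"),
                    (95, "printable ASCII"), (254, "ASCII + accented set")]:
    print(f"{label:22s} pool={pool:3d} 11 chars -> {entropy_bits(pool, 11):.1f} bits")
```

Note that a string like ààààààààààà typed for convenience is not drawn uniformly at random, so this formula does not apply to it, which is exactly the answer's caveat.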
225,366 | I'm using JWTs for authenticating users for my mobile and web applications in the same API.Both access_token and refresh_token have same expiration Passport::tokensExpireIn(now()->addHours(1));
Passport::refreshTokensExpireIn(now()->addHours(1)); When a user logs in authenticate (username and password) store access token (client-side) while for the refresh_token (I don’t use it at all) If the access_token is expired then it will go directly in signing
in again (#1 repeat) My Question: Since the expiration will last an hour and thinking the
process is secured enough. Is there something I'm missing? I do understand using the refresh_token. If the access_token is expired then it will use the refresh token to
get a new access token + new refresh token. (requires the client id
and secret) The presence of the refresh token means that the access token will
expire and you’ll be able to get a new one even without the user’s
interaction is intended to automatically detect and prevent attempts to use the
the same refresh token in parallel from different apps/devices. mitigates the risk of a long-lived access_token leaking (which in my
case not applied) Once the new access token + refresh token generate in using the
refresh_token Those previous access_token and refresh_token will be
useless or revoked My Question: Should I use the refresh_token? I am thinking of saving
it back-end or server. Setting the expiration lasts longer than the
access_token, which is still 1 hour but for the refresh_token making
it 1 year unless revoked. Once authenticated then it will generate
using the refresh_token sending the new access_token in the client_side then saving the new refresh_token again in the server and so on. This will do everytime he/she logs in or register. Do you think it's okay to save the
refresh_token in the back-end? | Language-specific characters are typically avoided by password generators because they would not be universally available (US keyboards don't have accented characters, for instance). So don't take their omission from these tools as an indication that they might be weak or problematic. The larger the symbol set ( a-z , A-Z , 0-9 , etc.) the larger the pool of possible characters to try to guess when bruteforcing a password. Adding language-specific characters adds to the pool, and that can be a good thing. But be careful about how you calculate entropy. The string ààààààààààà doesn't have a lot of entropy if you are just hitting it on your keyboard because it's convenient. Entropy is about how the characters are chosen. A randomly chosen string has high entropy and a randomly chosen string from a wide pool of characters has higher entropy. | {
"source": [
"https://security.stackexchange.com/questions/225366",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213988/"
]
} |
225,569 | Are there online joint accounts similar to joint bank accounts? Say you want to have two partners for a certain account, let's say a PayPal account, and you want to make it so that if one partner wants to change a password he must get confirmation from the second person. Are there currently any online accounts that have such a feature? | It's generally undesirable to have multiple people knowing the same password. Instead, systems that require multiple users to be able to access the same resources usually require each user to create their own account, each with their own password that's only known to themselves, and the system simply allows all the users to access the same resources. This means that a joint PayPal account would work by allowing multiple user accounts to transact from the same wallet, and that it's unnecessary to get the approval of the other account holders to change a password. | {
"source": [
"https://security.stackexchange.com/questions/225569",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/173701/"
]
} |
225,637 | I am looking to build a custom keyboard and bring it to work, but the security department is extremely wary of keyloggers, and rightly so. Now, my understanding of hardware keyloggers is that they are either USB adapters or additional PCBs wired into keyboards, and since I am not planning on wiring a keylogger into my own keyboard I feel as though I should be safe. However, the keyboard PCB I have is from some lesser-known Chinese distributor, so I am now concerned that the PCB itself may contain keylogging software. Is this possible, or is a secondary PCB required for keylogging capabilities? | The answer that you don't really want is that keyloggers can be very stealthily incorporated into pretty much anything: Keyboard with integrated keylogger: https://www.paraben-sticks.com/keyboard-keylogger.html Less savoury keylogger found in retail keyboard, sending keystrokes back: https://thehackernews.com/2017/11/mantistek-keyboard-keylogger.html Wifi keylogger in a USB extension cord: https://www.amazon.co.uk/AirDrive-Forensic-Keylogger-Cable-Pro/dp/B07DCCBBHT QMK the firmware used on many handmade keyboards is fully programmable. Many keyboards use quite sophisticated CPUs (QMK supports e.g. Atmel AVR and ARM processors) . Adding an sdcard to a mainboard is quite easy. All of this is fairly irrelevant, because you're approaching the issue from the wrong direction: since I am not planning on wiring a keylogger into my own keyboard I
feel as though I should be safe This is assuming that the threat model that IT Security are working with are you being an unassuming victim to something like the second link. The more probable route is assuming that you are a potentially malicious insider, and aiming to use a stealthed keylogger, coupled with standard business practices ("Hey, {Admin}, I need this totally legitimate software, can you just enter your credentials to install it?"). While the first option is something to protect from, the second may also be considered. Luckily for the IT Security team, the solution is simple, known and approved keyboards are the only ones to be used. Unfortunately, this isn't good for your prospects on this. | {
"source": [
"https://security.stackexchange.com/questions/225637",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/226823/"
]
} |
225,873 | I just discovered that my files have been encrypted by ransomware. Can I get my files back? How? Should I pay the ransom? What should I do so that this never happens again? | Can I get my files back? How? Maybe. If you have backups, you can restore your files from there. Just make sure to completely reinstall your operating system first, i.e. "nuke from orbit", to remove the malware first. If you don't do that, you will just get infected again. If you don't have backups, things get trickier. Some ransomware has been beaten and its encryption can be reversed. Others have not. To find out if you are lucky, you can use a decryptor (e.g. The No More Ransom Project , Kaspersky's No Ransom ). They offer a service that helps you identify what strain of ransomware you have, and let you know if there is a tool to decrypt your files. If you are unlucky and your ransomware is not on the list, you can backup the encrypted files on an external drive (with nothing else on it, it might get infected, too) in the hope of a future cure. But there is a real risk that your files are just irreversibly gone. Should I pay the ransom? I wouldn't. First of all, there is no guarantee that you will get your files back - there is no honor among thieves. Some forms of ransomware don't even bother to encrypt the files - it just replaces it with random junk to make it look encrypted. Obviously, paying in a situation like that does not help. Second, you will be financing organized crime and creating incentives to create ransomware in the first place. What should I do so that this never happens again? Apart from general good computer hygiene (don't download strange stuff, keep things updated, etc.) there is one killer solution to the ransomware problem: Make frequent external backups. | {
"source": [
"https://security.stackexchange.com/questions/225873",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/98538/"
]
} |
226,070 | As a protection against attacks such as SSLstrip, the HSTS header prevents an attacker from downgrading a connection from HTTPS to HTTP, as long as the attributes of the header are properly configured. However, HTTP/2, whilst not making encryption mandatory is implemented with mandatory TLS connection in modern browsers, according to wikipedia . In that case, is there any point in having the HSTS header enabled when using HTTP/2? Can an attacker force the client to use HTTP/1 and in turn SSLstrip the connection? Is HTTP/2 enough, does it make the HSTS header obsolete? | Yes, HSTS is still needed, including HSTS preload. The way a browser connects to HTTP/2 is through a URL that looks exactly the same as the URL for HTTP/1, so it doesn't know that it must be HTTP/2 just from looking at the URL. It will try plain cleartext HTTP if it is given a http:// URL. In order for the browser to not try plain HTTP (and not be subject to attacks from rogue WiFi AP, etc.), the URL must be https:// (and then HTTP/2 upgrade will happen through ALPN), and the way to ensure that regardless of what the user types in the address bar or an external links says, is using HSTS. | {
"source": [
"https://security.stackexchange.com/questions/226070",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/96649/"
]
} |
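For reference, HSTS is just a response header served over HTTPS. A hedged sketch of what setting it could look like in plain Python follows; the header name and directives are standard, but the max-age value and the decision to include subdomains are example choices, not a recommendation for any particular site.

```python
def add_hsts(headers: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # One year, subdomains included; add "preload" only if you intend to
    # submit the domain to the browser preload lists.
    headers.append(("Strict-Transport-Security",
                    "max-age=31536000; includeSubDomains"))
    return headers

print(add_hsts([("Content-Type", "text/html")]))
```

This only protects future visits once the browser has seen the header (or the domain is preloaded), which is why the answer stresses that it complements, rather than is replaced by, HTTP/2's TLS requirement.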
226,108 | I realize that a hash function is a one-way function, and that changes in the hash are supposed to tell us that the original data has changed (that the entire hash changes on even the slightest change to the data). But is there a way to find out to what degree the original data has changed when two hashes are different? | No, at least not with a good hash function. You can test this yourself by creating a hash of a specific data set, and then a hash of a slightly modified data set. You will see that every bit of the resulting hash has about a 50% chance of flipping. I'll demonstrate this by creating the SHA-256 hash of the string MechMK1 : $ echo -n "MechMK1" | sha256sum
2c31be311a0deeab37245d9a98219521fb36edd8bcd305e9de8b31da76e1ddd9 When converting this into binary, you get the following result: 00101100 00110001 10111110 00110001 00011010 00001101 11101110 10101011
00110111 00100100 01011101 10011010 10011000 00100001 10010101 00100001
11111011 00110110 11101101 11011000 10111100 11010011 00000101 11101001
11011110 10001011 00110001 11011010 01110110 11100001 11011101 11011001 Now I calculate the SHA-256 hash of the string MechMK3 , which changes one bit of the input: $ echo -n "MechMK3" | sha256sum
3797dec3453ee07e60f8cf343edb7643cecffcf0af847a73ff2a1912535433cd When converted to binary again, you get the following result: 00110111 10010111 11011110 11000011 01000101 00111110 11100000 01111110
01100000 11111000 11001111 00110100 00111110 11011011 01110110 01000011
11001110 11001111 11111100 11110000 10101111 10000100 01111010 01110011
11111111 00101010 00011001 00010010 01010011 01010100 00110011 11001101 I compared both results and checked how often a bit differed from both hashes, and exactly 128, or 50% of all bits differed. If you would like to play around with this yourself and see what kind of results you get, I created a simple C program that does exactly that. | {
"source": [
"https://security.stackexchange.com/questions/226108",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/225364/"
]
} |
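The answer above mentions a simple C program for this experiment; an equivalent sketch in Python (my rendering of the idea, not the author's program) that counts how many bits differ between the two SHA-256 digests:

```python
import hashlib

def sha256_bits(s: str) -> int:
    # Interpret the 32-byte digest as one 256-bit integer.
    return int.from_bytes(hashlib.sha256(s.encode()).digest(), "big")

a = sha256_bits("MechMK1")
b = sha256_bits("MechMK3")
diff = bin(a ^ b).count("1")
print(f"{diff} of 256 bits differ ({diff / 256:.0%})")
```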
226,114 | I am using a web service (call it X) which allows files to be uploaded to AWS S3. The way it works is that an initial call is made to X which then returns a list of file descriptors and also meta information which should be injected into the web form as hidden fields that the user is presented with to choose a file to upload. One of these hidden fields is the url of the S3 bucket where the file will be uploaded to. When the user chooses a file and clicks submit the file is sent as byte streams to the S3 location. I see two security concerns here: The url which is returned from calling X and then set as a hidden field in the form could be hijacked and substituted for another url of the hacker's choosing I am not sure if this is possible but the byte stream from the user's browser to the S3 bucket could be diverted? Is this paranoia or actual real security concerns? | No, at least with a good hash function. You can test this yourself by creating a hash over a specific data set, and then a modified hash over a different data set. You will see that every bit of the resulting hash function has about a 50% chance of flipping. I'll demonstrate this by creating the SHA-256 hash of the string MechMK1 : $ echo -n "MechMK1" | sha256sum
2c31be311a0deeab37245d9a98219521fb36edd8bcd305e9de8b31da76e1ddd9 When converting this into binary, you get the following result: 00101100 00110001 10111110 00110001 00011010 00001101 11101110 10101011
00110111 00100100 01011101 10011010 10011000 00100001 10010101 00100001
11111011 00110110 11101101 11011000 10111100 11010011 00000101 11101001
11011110 10001011 00110001 11011010 01110110 11100001 11011101 11011001 Now I calculate the SHA-256 hash of the string MechMK3 , which changes one bit of the input: $ echo -n "MechMK3" | sha256sum
3797dec3453ee07e60f8cf343edb7643cecffcf0af847a73ff2a1912535433cd When converted to binary again, you get the following result: 00110111 10010111 11011110 11000011 01000101 00111110 11100000 01111110
01100000 11111000 11001111 00110100 00111110 11011011 01110110 01000011
11001110 11001111 11111100 11110000 10101111 10000100 01111010 01110011
11111111 00101010 00011001 00010010 01010011 01010100 00110011 11001101 I compared both results and checked how often a bit differed from both hashes, and exactly 128, or 50% of all bits differed. If you would like to play around with this yourself and see what kind of results you get, I created a simple C program that does exactly that. | {
"source": [
"https://security.stackexchange.com/questions/226114",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/192017/"
]
} |
226,174 | I am developing a reliable system for token generation and validation used mainly for links in confirmation emails (reset password request, change email flow, activate an account, etc.). There are a few things that are mandatory: Token must be unique (even when two are generated at the same time) in the system (in the database) Token must be one-time use Token must have expiration Token cannot be guessable From that I decided to generate tokens like this: token = sha256(user.id + time + uuid(v4) + secret) This token does not need to carry any expiration information, because it is saved in the database with those columns externally. Does this token meet the requirements above? If not, how should I modify my approach? If it does meet my requirements, is there a way to simplify it while still meeting my goals? I am asking because I know there are some known exploits of these types of one-time tokens sent by email, and I am not sure if I will be safe. | If you are storing all the relevant information (token, expiration time, user) in the database anyway, the only thing you need to make sure about the token is that it is impossible to guess. Your token is impossible to guess if at least one of these two holds: The secret remains secret. It has to have very high entropy, and never be leaked. The UUID is generated using a secure random source. Actually, your system is more complex than it needs to be. Since the token is only used to look up the info in the database, and not to validate it, you could just use the UUID and nothing else - no hash, no secret, no other data. The only thing you need to make sure of is that the UUID is generated with a secure random source. | {
"source": [
"https://security.stackexchange.com/questions/226174",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/227518/"
]
} |
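Following the answer above (the token only needs to be unguessable, since expiry, user and one-time-use state live server-side), a minimal sketch using Python's secrets module, which is backed by a CSPRNG. The in-memory dict stands in for the database table described in the question; names are illustrative only.

```python
import secrets
from datetime import datetime, timedelta, timezone

# In-memory stand-in for the database table described above.
TOKENS: dict[str, dict] = {}

def issue_reset_token(user_id: int, ttl_minutes: int = 30) -> str:
    token = secrets.token_urlsafe(32)          # ~256 bits from a CSPRNG
    TOKENS[token] = {
        "user_id": user_id,
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
        "used": False,
    }
    return token

def redeem(token: str) -> int | None:
    row = TOKENS.get(token)
    if row and not row["used"] and row["expires_at"] > datetime.now(timezone.utc):
        row["used"] = True                     # enforce one-time use
        return row["user_id"]
    return None
```

A common extra hardening step is to store only a hash of the token at rest, but the scheme above already meets the four requirements listed in the question as long as the CSPRNG is used.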
226,181 | I want to set up my company's laptops in a way that all files created on these laptops can only be read by these laptops. If it is copied to a USB then that file is only readable when plugging that USB on a company laptop. If plugging in or copied to another non-authorized laptop then it is not readable. Ernst and Young are using this technique to protect their data but I don't know what is it called and how to set it up. | If you are storing all the relevant information (token, expiration time, user) in the database anyway, the only thing you need to make sure about the token is that it is impossible to guess a token. Your token is impossible to guess if at least one of these two holds: The secret remains secret. It has to have very high entropy, and never be leaked. The UUID is generated using a secure random source. Actually, your system is more complex than it needs to be. Since the token is only used to look up the info in the database, and not validate it, you could just use the UUID and nothing else - no hash, no secret, no other data. Only thing you need to make sure is that the UUID is generated with a secure random source. | {
"source": [
"https://security.stackexchange.com/questions/226181",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/227520/"
]
} |
226,322 | I've been trying to figure out "practical encryption" (AKA "PGP") for many years. As far as I can tell, this is not fundamentally flawed: I know Joe's e-mail address: [email protected]. I have a Gmail e-mail address: [email protected]. I have GPG installed on my PC. I send a new e-mail to Joe consisting of the "PGP PUBLIC KEY BLOCK" extracted from GPG. Joe received it and can now encrypt a text using that "PGP PUBLIC KEY BLOCK" of mine, reply to my e-mail, and I can then decrypt it and read his message. Inside this message, Joe has included his own such PGP public key block. I use Joe's PGP public key block to reply to his message, and from this point on, we only send the actual messages (no key) encrypted with each other's keys, which we have stored on our PCs. Is there anything fundamentally wrong/insecure about this? Some concerns: By simply operating the e-mail service, Google knows my public key (but not Joe's, since that is embedded inside the encrypted blob). This doesn't actually matter, though, does it? They can't do anything with my public key? The only thing it can be used for is to encrypt text one-way which only I can decrypt, because only I have the private key on my computer? If they decide to manipulate my initial e-mail message, changing the key I sent to Joe, then Joe's reply will be unreadable by me, since it's no longer encrypted using my public key, but Google's intercepted key. That means Joe and I won't be having any conversation beyond that initial e-mail from me and the first reply by him (which Google can read), but after that, nothing happens since I can't read/decrypt his reply? | As Steffen already said , the Achilles' heel on your security is making sure you are talking to Joe, and Joe being sure he is talking to you. If the initial key exchange is compromised, the third party will be able to read your messages, reencrypt and send to Joe, and vice versa. The crypto does not matter unless you solve this issue. That's why on the HTTPS world, there's a special entity named CA (Certificate Authority). The CA is to make sure Google cannot obtain a certificate for Facebook, and so on. So unless a rogue CA issued the certificate, you can navigate to Google, and be sure you are on Google. The initial key transfer is the critical one, and this can be done in a couple ways: in person by out-of-band message (tweet, Instagram post, snail-mail) referral by a friend in common (you know Bill, have his key saved, and Bill knows both of you, and shares both keys) After this exchange, your setup is pretty solid. | {
"source": [
"https://security.stackexchange.com/questions/226322",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/227718/"
]
} |
226,334 | There's lots of information on using one email for multiple accounts, but how about the other way around? I'm building a service and considering allowing users to log in with any of their registered emails, using the same password for all of them. Instead of a "my account is my email" mindset, I'm going for a "my account has emails that I can use to access my account" mindset. Aside from increasing the discoverable routes of entry for an attacker, are there any security downsides to this? | As Steffen already said , the Achilles' heel on your security is making sure you are talking to Joe, and Joe being sure he is talking to you. If the initial key exchange is compromised, the third party will be able to read your messages, reencrypt and send to Joe, and vice versa. The crypto does not matter unless you solve this issue. That's why on the HTTPS world, there's a special entity named CA (Certificate Authority). The CA is to make sure Google cannot obtain a certificate for Facebook, and so on. So unless a rogue CA issued the certificate, you can navigate to Google, and be sure you are on Google. The initial key transfer is the critical one, and this can be done in a couple ways: in person by out-of-band message (tweet, Instagram post, snail-mail) referral by a friend in common (you know Bill, have his key saved, and Bill knows both of you, and shares both keys) After this exchange, your setup is pretty solid. | {
"source": [
"https://security.stackexchange.com/questions/226334",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/214758/"
]
} |
226,526 | We have a web app (Django) that logs users out if they haven't made a request within 1 hour. From a security point of view, is it good practice to also block concurrent log ins?
In other words, if a user logs in on his PC and then logs in from their mobile device, should they get logged out of the session on their PC? | There's no "one answer fits all" here. If it's simply a social media app, it might be sufficient to allow concurrent sessions, but also offer a way to terminate one or all sessions if the account is compromised. For many types of games, concurrent access means cheating, so should probably be disallowed, or at least designed in a way that the account can't cheat (e.g. there may be multiple latent sessions, but only one active session). For systems with sensitive information, like HIPAA- or GDPR-related information, 2FA should probably be required, short session times, and concurrent logins should probably be disallowed. The important thing here is common sense. You need to ask yourself "what's the worst that could happen if concurrent access were allowed?" and "what would the user gain from having such a feature?" If the cons outweigh the pros, don't do it. If there's too much risk involved, don't do it. If it would convenience the user, consider allowing concurrent logins, perhaps with some caveats, such as only having one active session at a time, or allowing session management to disable sessions, and in any case, 2FA should probably be available, if not required. | {
"source": [
"https://security.stackexchange.com/questions/226526",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56886/"
]
} |
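One of the middle-ground options mentioned above ("only one active session at a time") can be enforced by keying sessions per user and invalidating the previous session on each new login. A rough, framework-free sketch with made-up names:

```python
import secrets

active_session: dict[str, str] = {}   # username -> current session id
session_owner: dict[str, str] = {}    # session id -> username

def login(username: str) -> str:
    old = active_session.get(username)
    if old:
        session_owner.pop(old, None)   # terminate the previous session
    sid = secrets.token_urlsafe(32)
    active_session[username] = sid
    session_owner[sid] = username
    return sid

def is_valid(sid: str) -> bool:
    return sid in session_owner
```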
226,600 | I'm reading a white hat hacking book from a famous certification.
They say the methodology for hacking a web server is: information gathering (domain name, DNS, IP, etc.) footprinting (ex: banner grabbing) website mirroring vulnerability scanning session hijacking password cracking Apart from session hijacking and information gathering, I don't see why I would not just launch Acunetix Web App Scanner and/or Nessus to find all weaknesses. What is the point of performing manual tests if you can automate them? For instance, if the vulnerability scanner does not know how to find vulnerable cookies, and if I manually find a way to do session hijacking, I won't be able to train Acunetix of Nessus for that. Even if I did, I don't how beneficial it would be. Please explain why I would not just let my tool do the hacking for me. | You have several assumptions here: scanners can find all vulnerabilities if a scanner cannot find a vulnerability then there are no vulnerabilities all manual tasks can be automated attackers would only use automated tools and not manual approaches manual approaches cannot be turned into bespoke automated tools finding vulnerabilities is the same as exploiting the vulnerabilities None of these assumptions are universally true. Automated scanners help make the process of finding vulnerabilities more efficient, but they are far from perfect and far from complete. Scanners are also not designed to exploit the vulnerabilities in a useful way. In practice, you want to manually test the results of a scanner (false positives) and perform manual tests to look for things that the automated tool might have missed. Attackers will use a mix of approaches and then often create or modify a tool to exploit the vulnerability so that it is repeatable and reliable. But that doesn't mean that the tool will work in other situations. Automated tests are the basic threshold. If your site/program fails an automated test, then you've made a pretty bone-headed error and it should be fixed immediately (because it will be easy to find). But I've seen some cases where a developer has added a check for 1=1 in their SQL in order to hide from automatic scanners, but I was able to exploit the site using 2=2 (modern SQL scanners account for this now). I only knew that from manual testing and personal experience. You can't encode experience and intuition in a tool. Coding is an insanely complex activity. That means that the errors can be complex, too. No tool could be created to look for or to exploit all vulnerabilities. | {
"source": [
"https://security.stackexchange.com/questions/226600",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/226540/"
]
} |
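The 1=1 versus 2=2 anecdote above illustrates why signature-style automation and naive filters fall short: a check written against a tool's favourite payload misses a trivially equivalent one. A toy sketch (not real scanner or WAF logic; the payload strings are standard textbook tautologies):

```python
def naive_filter(user_input: str) -> bool:
    """Return True if the input 'looks safe' to the naive blacklist."""
    return "1=1" not in user_input.lower().replace(" ", "")

payloads = ["' OR 1=1 --", "' OR 2=2 --", "' OR 'a'='a"]
for p in payloads:
    print(f"{p!r:20} passes filter: {naive_filter(p)}")
# The last two tautologies sail through; parameterised queries, not
# blacklists, are the actual fix.
```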
226,615 | Can an NTFS volume be read by forensics without having to log into the windows user or provide any passwords? aka can data be read straight from the sectors in clear text? | No, NTFS is not encrypted by default. can data be read straight from the sectors in clear text? Yes, by default NTFS files are unencrypted. Since NTFS 3.0, EFS ( Encrypting File System ) is a feature of NTFS, but By default, no files are encrypted , but encryption can be enabled by users on a per-file, per-directory, or per-drive basis. | {
"source": [
"https://security.stackexchange.com/questions/226615",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/228132/"
]
} |
226,661 | I use KeePass with auto-type, and once in a while (when tired, etc.) I'll accidentally launch a similarly-named entry's URL and try to log on with the wrong U/P. This question is unrelated to KeePass per se. I'm just wondering if attempted logons are recorded and logged by the "wrong" site, allowing site admins to see an unrelated logon which they might abuse. | They could be; phishing sites are set up to do exactly this. On non-malicious sites, this would generally be considered poor practice, but there is no reason why they couldn't, beyond user privacy regulations. | {
"source": [
"https://security.stackexchange.com/questions/226661",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/228191/"
]
} |
226,663 | Which is harder to exploit:
Password reset link with tokens/timestamps/code/ticket etc
Or, temporary password sent on user mail using which login can be done and password can be changed. Any suggestions on how they can be exploited please? | They could be, phishing sites are set up to do exactly this. On non-malicious sites, this would be generally be considered poor practice, but there is no reason why they couldn't, beyond user privacy regulations. | {
"source": [
"https://security.stackexchange.com/questions/226663",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/228198/"
]
} |
226,747 | What is the difference between a certificate and a private key? In answering another question on this site, I wanted to point to a canonical answer to this question, but to my surprise I don't see one. Users can be forgiven for getting these terms confused because many applications use the term " certificate " when they really mean " certificate + private key ". It would be good to clear up the difference between the following file types: .crt , .pem , .p12 , .pfx and why, for example, an application expecting a "client certificate" blows up when you give it a .crt file. Here are some references that an answer could pull information from: security.SE: What is the difference between Key, Certificate and Signing in GPG? Digicert: What is a private key/public key pair? Superuser: What is the difference between a certificate and a key with respect to SSL? And of course, our canonical: How does SSL/TLS work? | Every Private Key has a corresponding Public Key . The public key is mathematically derived from the private key. These two keys, together called a "key pair", can be used for two purposes: Encryption and Signing . For the purposes of certificates, signing is far more relevant. A certificate is basically just a public key, which has been signed by someone else's private key. This forms the basis of public key infrastructure (PKI), which is explained in the articles linked in the question. How do Certificates and Private Keys relate? A certificate is just a "fancy" public key, which is related to a private key. You can do the same thing with a certificate as you can do with a public key. If Bob gets the certificate of Alice, he can encrypt a message for Alice. Likewise, if Alice publishes some data and signs it with her private key, Bob can use Alice's certificate to see if it is really from Alice. What are all those different file types? .pem : A .pem is a de-facto file format called Privacy-Enhanced Mail . A PEM file can contain a lot of different things, such as certificates, private keys, public keys and lots of other things. A file being in PEM format says nothing about the content, just like something being Base64-encoded says nothing about the content. .crt , .cer : This is another pseudo-format that is commonly used to store certificates. These can either be in the PEM or in the DER format. .p12 , .pfx : These are interchangable file extensions for the PKCS#12 format. Technically, PKCS#12 is the successor to Microsoft's PFX format, but they have become interchangable. PKCS#12 files are archives for cryptographic material. Again, what kind of material this contains is completely up to the user. Wait, what!? Yes, .crt , .pem , .pfx and .p12 can all be used to store certificates, public keys and private keys. From a purely technical standpoint, you can not tell what the semantic content of any of these files is just by their file extension. If you ever get confused, don't worry - you're not alone. However, there are some common conventions that are being followed. .p12 and .pfx files are usually used to store a certificate together with the private key that corresponds to this certificate. Likewise, .crt files usually contain single certificates without any related private key material. .pem files are wildcards. They can contain anything, and it's not uncommon to see them used for all different kinds of purposes. Luckily, they are all plain text, and are prefixed in a human-readable way, such as -----BEGIN CERTIFICATE-----
MIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G
A1UEChMGR251VExTMSUwIwYDVQQLExxHbnVUTFMgY2VydGlmaWNhdGUgYXV0aG9y
... Why would an application not handle a .crt file if it wants a client certificate? A certificate is just a public key, and thus by definition public. A client certificate is no different - just a public key belonging to a person, machine or other "client", signed by some authority. An application that wants a client certificate usually wants to use that certificate for something, such as to authenticate the client to a server. In order to do that, one needs the certificate and the corresponding private key. So an application should really say "certificate plus private key", because the certificate alone is not enough to prove one's identity. It's actually the private key that does it. To answer vitm's question: As the answer explains, a private key is always associated with a public key, and a certificate contains a public key, as well as other information regarding the individual holding the public key. If a server program or client program wants to use a certificate (e.g. a web server using a server certificate or a web browser using a client certificate), it needs both the certificate and the private key. However, that private key is never sent anywhere. The private key is used mathematically to decrypt messages, which are encrypted with the public key in the certificate - and to sign messages, which are verified using the public key in the certificate. If I only had a certificate, without the corresponding private key, then I would not be able to act as the server or client to whom the certificate relates, as I could not sign messages or decrypt messages. | {
"source": [
"https://security.stackexchange.com/questions/226747",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61443/"
]
} |
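To illustrate the answer to 226,747 above: since the extension alone tells you nothing, the practical way to find out what a file contains is to ask openssl. The file names below (cert.crt, key.pem, bundle.p12) are placeholders for your own files; the subcommands are standard, though output details vary between openssl versions.

# Is it a certificate? If so, this prints subject, issuer and validity.
$ openssl x509 -in cert.crt -noout -text

# Is it a private key? The generic pkey subcommand handles RSA, EC, etc.
$ openssl pkey -in key.pem -noout -text

# What is inside a PKCS#12 archive? Usually a certificate plus its private key.
# You will be prompted for the import password.
$ openssl pkcs12 -info -in bundle.p12 -noout

# Do a certificate and a private key belong together? The two public keys
# printed below must be identical.
$ openssl x509 -in cert.crt -noout -pubkey
$ openssl pkey -in key.pem -pubout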
226,748 | I have a YubiKey 5c nano attached to my laptop used for 2FA, etc. It works great and is unobtrusive, but I feel like the small form factor encourages leaving it plugged into the laptop all the time, essentially weakening the two-factor nature of they key... or effectively making my laptop the second factor. I have Yubico Authenticator set to protect the Yubikey with a password so that if my laptop were stolen, the Yubikey is unusable without the password. But once the password is entered, the Yubikey is usable until unplugged (or the machine is shut down) and I am worried that I may forget to unplug the Yubikey before traveling. Is there any way to "lock" a Yubikey and require the password after some period of inactivity, or via a CLI-interface? I'm on Mac OS X and could write a shell script, but an OS-angnostic answer would be more useful to the community. | {
"source": [
"https://security.stackexchange.com/questions/226748",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1234/"
]
} |
226,757 | My old laptop is possibly infected. But I just want to transfer documents that I have created like Excel or Word files. Is it possible that malware entered into these files making it dangerous to transfer them into my new laptop?
Also, can a pendrive get infected when it's connected to the infected laptop? Is it safer to send the documents online (like Dropbox or via email)? | Yes, malware can infect user-created files. Yes, pendrives can get infected when inserted. And it doesn't matter how you transfer them, they will still be infected when they arrive. You want to scan the files and the pendrive before actually accessing the files. | {
"source": [
"https://security.stackexchange.com/questions/226757",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/227865/"
]
} |
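As a concrete way to follow the advice in 226,757 above ("scan before accessing"), one option on the receiving machine is ClamAV; the mount point /media/usb is only an example path, and ClamAV is just one of several scanners that would do.

# Update the signature database first (packages: clamav, clamav-freshclam)
$ sudo freshclam

# Recursively scan the pendrive (or the copied documents) and list only hits
$ clamscan -r --infected /media/usb

# Scan a single document before opening it
$ clamscan ~/Downloads/report.docx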
226,839 | Is there a way or program to make another program think I am using a different system? For example, let's say currently I am on Windows 7 32-bit and I want the program to detect Windows 10 64-bit or perhaps Windows XP. Can I do something similar with hardware? Can I tell a program that I'm running on a PC from the early 2003 even if it's from last year (2019)? | Cheeky answer: I think you are about to invent a Virtual Machine. In order to spoof the hardware to look like a 2003 motherboard you'll need to deal with things like writing a CPU instruction translation layer so that the program gives 2003-era CPU binary instructions, and your layer translates them into instructions that your 2019 hardware understands. In order to spoof the OS to look like Windows 7 32-bit, you will need to hide all the binaries and other files in C:\Windows , C:\Program Files , etc, that are clearly part of a Windows 10 install, and instead "fake" a complete file system with all the Windows 7 32-bit files. You'd also need a wrapper around the Windows 10 kernel to make the kernel APIs that are accessible to the program look and behave like the Windows 7 kernel. ie you can't really "fake" this; you basically need to emulate a fully functional install of Windows 7 32-bit inside your Windows 10. By the time you have done all that, you will basically have invented virtual machines . So why don't you just download VirtualBox, set the hardware emulation to 2003-era hardware, and install Windows 7 32 bit on it? | {
"source": [
"https://security.stackexchange.com/questions/226839",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/228397/"
]
} |
226,913 | I may be misunderstanding, but if I want to use a hashing algorithm such as argon2, what's stopping someone from seeing how it works and reversing what it does? | Being public is exactly the point: you show everyone how it's done and how difficult it is to reverse it. It's like showing you a ginormous jigsaw puzzle with a trillion pieces, with every piece in its place, and then shuffling all the pieces apart. You know all the pieces form the puzzle (you just saw it), and you know it's very, very difficult to put everything back. A public hash shows you how it's done (the result) and how difficult it is to do everything in reverse. A public hash function is just a set of mathematical operations. Anyone can (but only a few will) do the operations by hand and prove that the algorithm works as expected. Anyone can reverse it too, but it takes so much time (trillions of years with all the computing power of our planet combined) that the most cost-effective way to "reverse" it is a brute force search. Unless it's a pretty basic insecure hash function. | {
"source": [
"https://security.stackexchange.com/questions/226913",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/228502/"
]
} |
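To make the point in 226,913 concrete: the forward direction is one cheap, public operation, while the only generic way "back" is to guess candidate inputs and compare. The candidate list below is invented for the example; real attacks use large word lists.

# Forward: anyone can compute the hash of a known input
$ echo -n 'correct horse battery staple' | sha256sum

# "Reverse": no inverse function exists, so you try guesses until one matches
$ target=$(echo -n 'correct horse battery staple' | sha256sum | cut -d' ' -f1)
$ for guess in password letmein 'correct horse battery staple'; do
    h=$(echo -n "$guess" | sha256sum | cut -d' ' -f1)
    [ "$h" = "$target" ] && echo "match found: $guess"
  done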
227,020 | I am building web applications for my customer's company. At the server side, there will be 2 kinds of server to server network communication. Separated REST API servers making requests among each other. Communication from application load balancers (AWS ALB specifically) to their auto-scaling EC2 instances. Currently all of these communications use HTTP protocol. Only the user-facing nodes (such as the load balancer or the web server reverse proxy) will serve HTTPS with valid certificates. The customer ask us to change them all to HTTPS as thet believe that it is the modern best practice to always use HTTPS instead of HTTP anywhere. I would like to dispute with the customer but I am no security expert. Please help review my understanding below and correct me if I am wrong. In my view, I think the purpose of HTTPS protocol is for being a trusted channel in an untrusted environment (such as the Internet). So I cannot see any benefit of changing the already trusted channel to HTTPS. Further more, having to install certificates to all servers make it difficult to maintain, chances are, the customer will find their application servers broken someday in the future because some server has certificate expired and no one know. Another problem, if we have to config all the application server, apache for example, behind the load balance to serve HTTPS, then what is the ServerName to put inside the VirtualHost ? Currently we have no problem using the domain name such as my-website.example.com for HTTP VirtualHost . But if it were to be HTTPS we have to install certificate of my-website.example.com to all instances behind the load-balancer? I think it's weird because then we have many servers claiming to be my-website.example.com . | The answer to your question comes down to threat modeling. Using cryptographic protocols like HTTPS is a security mechanism to protect against certain threats. If those threats are relevant for you, must be analyzed: Are there potential threat actors in your internal network? Based on your question you seem to assume that the internal network can be fully trusted. This is often a misconception, because there are several ways your internal network can be compromised (e.g. valid users with access to this network are turning malicious, a systems in this network gets compromised, a misconfiguration opens up the network segment, etc.). Will the architecture be subject to change? It is likely that the system will change over time and prior security assumptions (e.g. my internal network is trusted) no longer hold. If that's a reasonable scenario, it might be a good idea to build the necessary security mechanism in in advance. That's what security best-practices are for. Providing security in an area of uncertainty. Is there a regulatory, legal or compliance requirement that must be fulfilled? You said that your customer considers HTTPS to be state-of-the-art / modern best-practice. The source of this friendly worded statement might actually be an externally driven requirement, that must be fulfilled. Non-compliance is a threat that should also be covered in a threat analysis. Those are important topics worth analyzing. When I design system architectures and I am in doubt, I prefer to err on the side of security. In this case the best-practice approach is indeed using HTTPS for communication, no matter the circumstances, as long as there are isn't a considerable impact on the application (e.g. performance impact). 
Difficulty to maintain server certificates shouldn't be a problem nowadays, as this is common practice. This should be part of normal scheduled operations activity. Having said all this, there is of course additional effort required to use HTTPS instead of HTTP and it is your right to charge the customer for this additional effort. I suggest you calculate what this will cost during development and over time during operation and let the customer decide if the cost is worth the benefit. | {
"source": [
"https://security.stackexchange.com/questions/227020",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/228635/"
]
} |
227,146 | In every TV program where there's a person that wants to remain anonymous, they change their voice in a way that to me sounds like a simple increase or decrease in pitch (frequencies). What I'm wondering is: is the usual anonymizing method actually based on a simple change in pitch, or is it a more complex transformation that most TVs / media / etc. are using? is a simple change in pitch enough to make it impossible, or very hard anyway, to recover the original voice? I would think that if a voice has been changed to have a higher pitch, by lowering the pitch I might try to get the original voice, but I'm not sure how hard or reliable it could be. Note that I'm just talking about the voice quality, not about other features that of course could immediately deanonymize a person (like accent, dialect, personal vocabulary and slang, etc.) | A simple pitch change is insufficient to mask a voice, as an adversary could simply pitch the audio back to recover the original audio. Most voice modulators use a vocoder, not a simple pitch change. The term "vocoder" is unfortunately rather heavily overloaded these days, so to clarify I mean the type that is most generally used in music, rather than a phase vocoder, pitch remapper, or voice codec. The way this works is as follows: The voice input audio (called the modulation signal) is split into time slices, and its spectral content is analysed. In DSP this is usually implemented using an FFT , which effectively translates a signal from the time domain - a sequence of amplitudes over time - into the frequency domain - a collection of signals of increasing frequency that, if combined, represent the signal. In practice implementations output a magnitude and phase value for each of a fixed number of "buckets", where each bucket represents a frequency. If you were to generate a sine wave for each bucket, at the amplitude and phase offset output by the FFT, then add all of those sine waves together, you'd get a very close approximation of the original signal. A carrier signal is generated. This is whatever synthesised sound you want to have your voice modulator sound like, but a general rule of thumb is that it should be fairly wideband. A common approach is to use synth types with lots of harmonics (e.g. sawtooth or square waves) and add noise and distortion. The carrier signal is passed through a bank of filters whose center frequencies match that of the FFT buckets. Each filter's parameters are driven by its associated bucket's value. For example, one might apply a notch filter with a high Q factor and modulate the filter's gain with the FFT output. The resulting modulated signal is the output. A rather crude diagram of an analog approach is as follows: The audio input is split into a number of frequency bands using band pass filters, which each pass through only a narrow frequency range. The "process" blocks take the results and perform some sort of amplitude detection, which then becomes a control signal to the voltage controlled amplifiers (VCAs). The path at the top generates the carrier waveform, usually by performing envelope detection on the input and using it to drive a voltage controlled oscillator (VCO). The carrier is then filtered into individual frequency bands by the bandpass filters on the right, which are then driven through the VCAs and combined into the output signal. The whole approach is very similar to the DSP approach described above. 
Additional effects may be applied as well, such as pre- and post-filtering, noise and distortion, LFO, etc., in order to get the desired effect. The reason this is difficult to invert is that the original audio is never actually passed through to the output. Instead, information is extracted from the original audio, then used to generate a new signal. The process is inherently lossy enough to make it fairly prohibitive to reverse. | {
"source": [
"https://security.stackexchange.com/questions/227146",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175681/"
]
} |
227,211 | Assume the following very basic hashing algorithm. h(k) = k mod 17 Let's say we create a password 12345 for a website that uses this very basic hashing algorithm. That would yield the hash of 3. Say a brute force attacker comes by and starts guessing numbers starting at 1 they would only have to get to 3 before they got a hash collision and obviously 3 is not the original password. Is the problem that the password hashing space (0-16) is much smaller than the space of the allowed password, or is there something else I'm overlooking? | Steffen's answer covers this perfectly, but I just wanted to add a few more details. Anything that gives a match is usually fine As he says, you generally don't care about finding the actual password, because many applications will be happy to authenticate you with any string that happens to hash to the hash value stored in the database. This is most often true for web applications, but may be less often the case in other contexts . This means that if you perform a brute force search and find something with the same hash, you can login to the account even if it isn't actually the same password. This, as implied in the question, is the nature of hashing and is a direct result of the pigeonhole principle . Until you need the actual password However, there may be some cases where you do want to find the original password. This would be the case if for instance a hacker stole a username/password from a valueless service and wanted to try to login as the user on a more valuable service (Facebook, banks, etc...). Since people often use the same password everywhere, then in this case you really do want the original password - not just something that hashes to the same value for a given hashing algorithm (after all, different services may use different hashing methods, and most also use cryptographic salts - h/t @Taemyr ). But the difference doesn't matter Fortunately (for our attacker), this doesn't really matter. The reason is because an exhaustive brute force is effectively impossible. Instead a hacker will try things that are likely to be passwords (word lists, common passwords, etc...). As a counter example imagine that despite the impossibility of it you have a hash from a web service and manage to perform a brute force search on all possible 256 bit ASCII strings. You find three inputs that hash to the same value as the user's password hash: BD3EDF42F6D3AF2DAAE93313EB534 7AF7B8B8F84443872C48EC372DBD1 password Which would you guess is the actual password? The answer is clearly #3. I mean, there is technically a chance that the user just happened to pick an extremely strong password (i.e. BD3EDF42F6D3AF2DAAE93313EB534 ) that just happened to hash to the same value as password , but the odds of that are effectively zero. In this sense the attacker has a nice advantage. They would prefer to find the actual password, and it turns out that because people are bad at picking random passwords, the best way to do that isn't by checking everything anyway - it's by checking things that look like passwords. This makes brute forcing searching much, much, much, much more effective, and also gives the attacker a much more useful result (the actual password, rather than some random string that happens to have the same hash). | {
"source": [
"https://security.stackexchange.com/questions/227211",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86361/"
]
} |
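A quick shell check of the toy scheme in 227,211: with h(k) = k mod 17, the stored hash of the real password 12345 is 3, and an attacker counting upwards from 1 hits a colliding guess almost immediately, which the server would accept even though it is not the original password.

# The stored hash of the real password
$ echo $((12345 % 17))
3

# Brute force upwards until something collides with that hash
$ for k in $(seq 1 100); do
    if [ $((k % 17)) -eq 3 ]; then echo "first colliding guess: $k"; break; fi
  done
first colliding guess: 3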
227,335 | Lately I have been receiving messages from accounts on Discord telling me that I have won 1 BTC in a giveaway. This is obviously not true, but I have a hard time seeing how these accounts will get anything useful out of me falling for this. The instructions in the message for getting the 1 BTC go like this (with a lot of emojis sprinkled in): You WON: 1 BTC! How to receive your BTC? Register account in: maticbit.com Go to the "Codes" section and activate your code Withdraw BTC to your address. Done! I could guess that by registering an account I would have to enter some personal information which they could use, but I am not going to visit the site of course.
You can't withdraw BTC from a wallet just by knowing its address, so it's not like they are after bitcoins, it seems like it's something else they are after. | I dug into this, cause I also got the Discord message. Felt too good to be true, right? Well, yes, it is too good to be true . I set up a new browser, with no history in it, made a temporary email and signed up. I entered their "You have won" code, and sure thing, they did deposit 0.48 BTC on the account I just made on their site. Cool. (They probably just edited a value in a database somewhere though, so I doubt there is any real BTC under that value) Alright, so let's withdraw that to a newly made wallet, right? Nope! So this is how the scam works: Make you sign up -> give you "free BTC" -> request a really small deposit (to prove you are not a bot) -> take that small deposit and laugh all the way to the bank. (Just like @Demento predicted) Until I have significant proof from multiple credible sources, I would not trust this. | {
"source": [
"https://security.stackexchange.com/questions/227335",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/229033/"
]
} |
227,345 | Situation I was about to install Skype on a laptop driven by Ubuntu 18.04 LTS Desktop.
The software installation helper graciously informs me that Skype is unconfined. It can access all your personal files and system resources as per the screenshot below. Apparently there must be reasons to make a distinction from applications that do not call for this warning. Reality-checks Can Skype really scan anything I have in my home directory regardless of the permissions set to files and directories? Does it become like a sort of superuser? What is the meaning of system resources there? Does it go about functional resources like broadband and memory, or is that an understatement for control on all applications? Mitigation How is it possible for an average "power user" to confine such an unconfined application? Beside the mere answering, pointing out to interesting readings is also appreciated. | Why am I getting this message? The idea of snap is to be an "app store for Linux", with much of the same benefits as app stores for other platforms, such as iOS or Android. One of the big advantages is that applications are rather confined, unable to interact with your OS unless the user gives it specific permissions. In snap, there are several different "confinement" settings, as documented here : Strict Used by the majority of snaps. Strictly confined snaps run in complete isolation, up to a minimal access level that’s deemed always safe. Consequently, strictly confined snaps can not access your files, network, processes or any other system resource without requesting specific access via an interface (see below). Classic Allows access to your system’s resources in much the same way traditional packages do. To safeguard against abuse, publishing a classic snap requires manual approval, and installation requires the --classic command line argument. Devmode A special mode for snap creators and developers. A devmode snap runs as a strictly confined snap with full access to system resources, and produces debug output to identify unspecified interfaces. Installation requires the --devmode command line argument. Devmode snaps cannot be released to the stable channel, do not appear in search results, and do not automatically refresh. The Skype app is most likely a "Classic" snap, which means you don't get the same benefits as from a strict confinement. Can Skype really do anything on my system? Skype can do as much as any other traditional binary can do, such as those installed via apt . It does not generally become "some kind of super user", but it could use sudo or other means to ask to become a privileged process. The easiest way to do that is to simply refuse running as anything but root. However, Skype cannot magically bypass any file permissions, unless you specifically gave the binary capabilities to do so. What does it mean by system resources? Think about apps on your smartphone. Applications have to ask to access your files, your contacts, your microphone, your camera, your location, etc. Snap in its strict confinement setting does allow applications to access these, but individual applications need to request access to these interfaces. Of course, you as the user can forbid an application from accessing them. Perhaps you don't want an application to access the network because you don't want to use network-enabled features. What the installer is telling you is that, since Skype is a "classic" snap, you cannot stop Skype from accessing all these resources (network, camera, etc.), at least not in an easy way. How is it possible to confine such an application? 
You can, if you so desire, try to create a strictly confined snap yourself. I assume that this will be a difficult-if-not-impossible task, else Microsoft had done that. Or maybe it's super easy, barely an inconvenience, and Microsoft just didn't care. You could also create a limited user and configure your system to run the application as this user, then restrict that limited user from accessing resources such as the network, the web cam, etc. | {
"source": [
"https://security.stackexchange.com/questions/227345",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/159653/"
]
} |
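To see for yourself how a snap is confined, as discussed in 227,345: the commands below assume the snap is installed under the name skype, and the exact output format differs between snapd versions.

# Classic (unconfined) snaps are marked "classic" in the Notes column
$ snap list | grep -i skype

# For confined snaps, list which interfaces (camera, home, network, ...) are plugged
$ snap connections skype

# A strictly confined snap can have an unwanted interface unplugged, e.g.:
$ sudo snap disconnect skype:camera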
227,365 | I have a simple static webpage that lets users sign-up for a newsletter. Once they enter their email address, it gets sent to a public endpoint (AWS Lambda). This lambda function forwards the email address to a subscription list manager endpoint (Mailchimp) along with the API key. The connection between the AWS Lambda function and Mailchimp is secure as no one has the API key and can't hammer my Mailchimp account. But my concern is the connection between the static webpage and the AWS Lambda endpoint. This endpoint is public and unauthenticated and I'm worried about things like people flooding the endpoint with fake addresses. How can I best secure this? The static page is a simple Gatsby bundle. | {
"source": [
"https://security.stackexchange.com/questions/227365",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/229060/"
]
} |
227,380 | My question is not about the recovery of deleted file, nor is it about the complete wiping of a disk; it's to ask about the traces deleted files leave, and how I could possibly 'see' them or visualise them? I'm very much interested in the right to be forgotten, data loss, technological decay etc. Which made me question the traces of the deleted files. I have read that unless you physically destroy the harddrive or override the files/disk the deleted files leave traces of some sort- loose fragments of the file that used to be there. Is there a way for me to see those fragments and access them? Can I pull metadata from them, for instance how many files were deleted, what the extensions were of those files, sizes, names etc. | {
"source": [
"https://security.stackexchange.com/questions/227380",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/229077/"
]
} |
227,524 | We have a system where if you forgot your password and want to reset it, to go to the forgot password page and enter your email address. A temporary link will be sent to your email to reset your password. Now, when we subjected our app to penetration testing, an issue was found: Application is giving clues of possible valid email addresses when attempting to reset password. This functionality can be abused by simply guessing possible email address and being able to find valid ones through the error messages. Well, there's only one field and of course its obvious that if a reset password attempt fails, it's due to an invalid email. Seems this penetration test is wrong. Are there any solutions to fix this issue besides adding an additional field (besides email) for password reset? | Don't indicate that the attempt "failed". A user (legitimate or otherwise), asks for a password reset link, and gives you an email address. All you should say here is along the lines of Your submissions has been received. If we have an account matching your email address, you will receive an email with a link to reset your password. The user will still get the link (success), but attackers won't know whether a provided email address is associated with an account. | {
"source": [
"https://security.stackexchange.com/questions/227524",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/228714/"
]
} |
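A simple external check of the behaviour recommended in 227,524, much as a pentester would do it: submit a known and an unknown address and compare status code, body and response time, all of which should be indistinguishable. The URL and field name below are hypothetical.

$ curl -s -o /dev/null -w 'status=%{http_code} time=%{time_total}\n' \
    -d 'email=registered.user@example.com' https://app.example.com/forgot-password
$ curl -s -o /dev/null -w 'status=%{http_code} time=%{time_total}\n' \
    -d 'email=no.such.user@example.com' https://app.example.com/forgot-password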
228,918 | I recently set up NextDNS on my personal devices to further reduce the amount of tracking and ads I'm exposed to. The service comes with built-in analytics that shows a brief overview of your network activity. Most of the top hits are uninteresting, however there's one domain I couldn't figure out: What's the domain fhepfcelehfcepfffacacacacacacabn ? The seemingly random string gives roughly two pages of Google results, but none of them seem to hold any useful information. The log table says the entry is a DNS record of type NIMLOC , but that seems like another dead end inquiry-wise. | That domain is an encoded form of the string "WORKGROUP". It is using a variant of hex encoding that uses the letters A-P, instead of the numbers 0-9 followed by A-F. $ echo fhepfcelehfcepfffacacacacacacabn |
tr a-p 0-9a-f |
xxd -r -p |
xxd
00000000: 574f 524b 4752 4f55 5020 2020 2020 201d WORKGROUP . This appears to be a NetBIOS name , which is why it's padded with spaces to 15 ASCII characters, and then followed by a different character at the end as a suffix. The hex encoding is described in the NetBIOS-over-TCP/UDP Concepts RFC , called "first level encoding". Also, NetBIOS uses DNS record type ID 32 for its "name service" packets; that ID was later allocated to NIMLOC ( ref ), which explains that part of the log. However I'm not sure exactly what software on your machine is making this DNS query; if you're using Windows, it seems likely to be something at the OS level. I don't believe the answer from NextDNS support is correct here about the source of this particular query—it is probably not coming from Chrome. | {
"source": [
"https://security.stackexchange.com/questions/228918",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17222/"
]
} |
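Running the pipeline from 228,918 in the opposite direction shows how such a name is built: 'WORKGROUP' space-padded to 15 bytes, a suffix byte (0x1d, the local master browser name), then every hex digit shifted into the letters a-p.

# 9 letters + 6 spaces = 15 bytes, then the 0x1d suffix byte
$ printf 'WORKGROUP      \x1d' | xxd -p | tr 0-9a-f a-p
fhepfcelehfcepfffacacacacacacabn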
229,004 | What is the speed and frequency that hackers can bypass rate limiting continuously in a website using login with an SMS OTP? The rate-limiting activates if the same IP address triggers SMS OTP more than x times. | Botnet If someone has access to a botnet, even a small one, they could change the IP that their actions are coming from several times per minute or flood their actions from a million IPs in any given moment. Non-botnet For individuals without a botnet, there are several methods, including resetting their ISP modem, which, in some cases, resets the IP. Turning a phone's airplane mode on/off can also trigger a change in IP (MechMK1 reports this might take as little as 3 seconds to complete). They could also change between public WiFi networks. Using a series of VPNs and proxies or VPNs that permit the user to choose the exit node will also work, and services that have different exit nodes can make this switch in seconds. There are also proxy/Tor utilities that can randomise your IP according to a schedule you set. The Janus tool has a default change time of 1 minute, but that can be changed. Therefore, for any practical case, if the attacker is waiting for the SMS code to come back from your server, then they are only limited by the time it takes to receive the SMS code . | {
"source": [
"https://security.stackexchange.com/questions/229004",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/231051/"
]
} |
229,133 | Many Google Chrome extensions require permissions to read the contents of webpages the user visits. How can the user verify whether and to what extent a certain Chrome extension leaks data? | According to developer.chrome.com : [Chrome Extensions] are built on web technologies such as HTML, JavaScript, and CSS. This means, anything that could affect the behavior of an extension will exist in a plain-text format (as opposed to a binary). Chrome allows you to debug extensions , giving you a comparatively easy way to see what an extension is doing, and whether or not some behavior of an extension is potentially malicious. This requires the user to be somewhat fluent in the above-mentioned technologies. There are certain attacks , purely using CSS, which can be used to exfiltrate data. Without knowledge of CSS or understanding of these attacks, it would be hard to identify them among megabytes of auto-generated CSS code. A non-technical user will likely not be able to carry out such an analysis. In this case, it helps to follow typical security advice: Only install extensions from credible sources (i.e. the Chrome Web Store) Popular extensions are more likely to get audited than unknown extension Only install extensions where the benefit greatly outweighs the risk Pay attention to the permissions required by the extensions and if it makes sense that those permissions are requested | {
"source": [
"https://security.stackexchange.com/questions/229133",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21439/"
]
} |
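A quick, non-exhaustive first step for the audit described in 229,133 is to read the extension's manifest instead of its minified code. A .crx package is essentially a ZIP archive with a small header, so unzip can usually extract it (possibly with a warning about leading bytes); the file name below is a placeholder.

# Unpack the extension package
$ unzip -o some-extension.crx -d some-extension/

# The permissions the extension declares (Manifest V3 also uses host_permissions)
$ jq '.permissions, .host_permissions' some-extension/manifest.json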
229,292 | Many bots attack websites by trying to find an admin login page (such as wp-login.php ) and trying to login. Would an effective way to stop these attacks be to create a fake (non-login) page for the targeted URLs? Stack Overflow does this by redirecting to YouTube: stackoverflow.com/wp-login.php | Using non-standard paths for your WordPress login and admin pages would stop automated brute-force attacks scanning for every example.com/wp-login.php , but the practice you describe is just messing around with the attackers and doesn't really do any good nor harm. Best way to stop the bots is to use strong passwords and Fail2Ban . A fake wp-login & wp-admin could be used as a honeypot for learning more about the ongoing attacks , though. I like to collect the attempted login credentials to know which leaks are currently popular. I also let the credentials "work" randomly to collect the malware they are trying to install. Of course it doesn't work, because it just looks and acts like WordPress without being one. However, by reverse engineering the malware I'm able to learn how it's trying to hide, which gives me an advantage when cleaning infected sites for customers. | {
"source": [
"https://security.stackexchange.com/questions/229292",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/181610/"
]
} |
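A minimal sketch of the Fail2Ban setup mentioned in 229,292, assuming a web server access log in the common combined format at /var/log/nginx/access.log. The jail and filter names are made up, and the regex will need adapting to your own log format.

$ sudo tee /etc/fail2ban/filter.d/wp-login.conf > /dev/null <<'EOF'
[Definition]
# Count POSTs against the WordPress login and XML-RPC endpoints
failregex = ^<HOST> .* "POST /(wp-login\.php|xmlrpc\.php)
ignoreregex =
EOF

$ sudo tee -a /etc/fail2ban/jail.local > /dev/null <<'EOF'
[wp-login]
enabled  = true
port     = http,https
filter   = wp-login
logpath  = /var/log/nginx/access.log
maxretry = 5
findtime = 600
bantime  = 3600
EOF

$ sudo systemctl restart fail2ban
$ sudo fail2ban-client status wp-login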
229,469 | I've just received this email. Is it a standard practice or a scam? I'm a Security Researcher running a vulnerability identification
service for a small group of private clients, and I accidentally found
some vulnerabilities in your infrastructure. For a small fee, I will share the vulnerability details with you
(includes POC, screenshots, and suggested solutions). Paypal instructions: Recipient: REDACTED GMAIL ADDRESS Paying for an item or service (covered under PayPal Purchase Protection for Buyers) Amount: $100 Add a note: [redacted, my domain name] After I receive your payment, within 48 hours, I will send you an
email with all the vulnerability information. | This certainly is not standard practice . Even if this person has found a legitimate problem on your site, it's a form of extortion. There is proper "responsible disclosure" and professional "security researchers" don't start off asking for cash. Bug bounty programs exist for a reason. The problem is that you do not know if the vulnerability is worth $100 to you . This is very, very likely a scam , but on the off-chance that you are dealing with a legitimate professional with poor communication skills, you could ask for details, like where the problem is ("infrastructure"? that's odd for a website), and any details about who they are and proof of their professional work in security research. If they ramp up the emotion or extend the extortion, then you know it's a scam. Don't install or open any files they send you. If they are legitimate, they will work with you. To give you an idea, I am not a professional tester and I do not do bug bounties. But once in a while, I discover a vulnerability in a site. I first contact the company asking for the person who would handle site vulnerabilities with a 1-sentence rundown of the general issue. I do this to make sure I get to talk to a responsible person , and not an unauthorised person who might abuse or mishandle (or fail to understand) the information I am about to give them. I also give them proof of who I am so I do not come across as a scammer. When I am talking to the best person I can, I give the full break down, with my process to repeat the problem, URLs, parameters, etc., and the reason for why I think it is a concern. I answer whatever questions they ask, but I never, ever, give the impression that I need or want them to do anything with any urgency. I let them work out their risk assessment. That's their job. It's their site. I also don't ask for money, but if I did, it would be after I did as much as I could to help their team resolve it. And I would not expect to get money or any form of reward, even if I asked. Either the site has a bug bounty program that defines the expectations and relationships for everyone involved, or the site doesn't, and I'm just helping out and maybe getting something out of it, or not. That's how a professional would approach a site with a vulnerability they discovered. | {
"source": [
"https://security.stackexchange.com/questions/229469",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/231724/"
]
} |
229,480 | I started learning PHP and I wanted to practice PHP code using xampp or wamp server. But after reading this and doing some google research,I thought that installing server software on my home pc might be dangerous. I thought it would be better to install xampp server on a virtual box. Is this a good idea? Is there any security concerns should i be worried about? Is installing xampp server on a virtual machine is actually better than installing it on my home computer? | {
"source": [
"https://security.stackexchange.com/questions/229480",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/231753/"
]
} |
229,501 | During the COVID-19 pandemic, many of us are working from home using meeting apps like Zoom. It's all over the news that Zoom is lacking in its E2E encryption. Let's assume we're using Zoom and can't switch apps. Considering only the traffic, how can we make these meetings secured/encrypted? The solution can be hard on 1 group (the company using Zoom) but should be easy on the users (employees of the company). My 1st thought here is to have the company set up a VPN server (in the company or with a 3rd party) and have everyone connect to the VPN before joining a meeting. | You cannot magically secure applications like Zoom without changing the application and the infrastructure it relies on. The missing end-to-end encryption you want to have fixed is due to the basic architecture of Zoom, in which media streams are processed and mixed together on a central server (which is owned by Zoom). Only this architecture actually allows it to perform well without stressing bandwidth and CPU of endpoints when many users are involved. With E2E instead the requirements for CPU and bandwidth at each end would grow linearly with the number of users and thus would quickly overwhelm clients. These kind of restrictions apply to any video conferencing solution. This means that you will not get real E2E with any other solution too, at least not if you want conferences which scale to many users without having excessive requirements regarding bandwidth and CPU power. The best you can get is that you control the central mixing and forwarding server yourself and thus don't need to trust a third party. Even the broken AES ECB mode could not be fixed without changing application and infrastructure since the server actually expects the encryption to be a specific way and if you change it the communication will fail. Usage of a VPN would not magically solve the problem. The data would still need to be processed on the servers owned by Zoom. | {
"source": [
"https://security.stackexchange.com/questions/229501",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/206331/"
]
} |
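The "broken AES ECB mode" remark in 229,501 is easy to demonstrate: under ECB, identical plaintext blocks encrypt to identical ciphertext blocks, so structure leaks straight through. The key below is a throwaway example value.

# 32 identical bytes = two identical 16-byte plaintext blocks
$ printf 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' \
    | openssl enc -aes-128-ecb -nopad -K 000102030405060708090a0b0c0d0e0f \
    | xxd
# The two 16-byte lines printed by xxd are identical: equal input blocks give
# equal output blocks, which is why ECB leaks patterns in the data.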
229,645 | Eduroam is an organization that provides free WiFi to educational institutions and around some cities. I don't fully understand how the authentication works, but in order to connect you have to install a CA Certificate called eduroam_WPA_EAP_TTLS_PAP on your device. I know CA certificates are used to decrypt TLS/SSL traffic, so doesn't this mean that Eduroam can decrypt my traffic considering I have their certificate installed on my phone? Any input is appreciated. The specific certificate looks like this (numbers changed for security): $ openssl x509 -inform der -in ca.skole.hr.der -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 0 (0x0)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C = HR, ST = Zagreb, L = Zagreb, O = MZOS, OU = CARNet, CN = CA Root certificate skole.hr
Validity
Not Before: Nov 15 14:17:58 2011 GMT
Not After : Nov 12 14:17:58 2021 GMT
Subject: C = HR, ST = Zagreb, L = Zagreb, O = MZOS, OU = CARNet, CN = CA Root certificate skole.hr
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public-Key: (1024 bit)
Modulus:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93:e5:d0:8f:97:da:63:
00:e5:a0:99:17:88:9d:1c:93
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
00:e5:a0:99:17:88:9d:1c:9300:e5:a0:99:17:88:9d:1c:93
X509v3 Authority Key Identifier:
keyid:00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:00:e5:a0
X509v3 Basic Constraints:
CA:TRUE
Signature Algorithm: sha1WithRSAEncryption
00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:
00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:
00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:
00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:
00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:
00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:
00:e5:a0:99:17:88:9d:1c:93:00:e5:a0:99:17:88:9d:1c:93:
00:e5 It is installed using the Eduroam app into the Android credential storage and is "Installed for Wi-Fi" which I assume means that the credential is applied to all WiFi traffic. | First, Android provides two distinct import options for a reason. VPN and Apps is for general HTTPS traffic from all of your apps, including browsers. You can install your own CAs here if you want to intercept your own traffic, for example. WiFi is for identifying enterprise WiFi networks, but does not affect normal traffic, to my knowledge. This brings us to the next part. You should always specify a CA certificate when you connect to enterprise WiFi networks. 802.1X supports a number of authentication protocols (e.g. EAP-TLS); the CA is typically used to verify the authentication server's certificate. If you do not specify a CA, your client will accept whatever server it talks to. The result, depending on authentication type, is that you may be handing over plaintext credentials (your credentials for your eduroam-participating organization) to an attacker. This can be done easily with an evil twin attack, using a tool such as EAPHammer . There is nothing stopping someone from performing this attack with the eduroam ESSID and stealing your credentials. For this reason, you should always specify a CA when connecting. | {
"source": [
"https://security.stackexchange.com/questions/229645",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/127066/"
]
} |
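On a Linux client, the "always specify a CA" advice from 229,645 corresponds to pinning the CA (and, ideally, the authentication server's name) in the 802.1X profile. The sketch below uses wpa_supplicant option names; the EAP method, identity, CA path and RADIUS server name are placeholders that must come from your institution's own eduroam instructions.

$ sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf > /dev/null <<'EOF'
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=PAP"
    anonymous_identity="anonymous@skole.hr"
    identity="user@skole.hr"
    password="..."
    # Without the next two lines the client will hand its credentials to any
    # server that answers, which is exactly the evil-twin risk described above.
    ca_cert="/etc/ssl/certs/eduroam-ca.pem"
    domain_suffix_match="radius.skole.hr"
}
EOF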
229,841 | Say there is a bank/financial service that wants to have hyperlinks on their secure website/domain (or even in emails they send out to customers). In some of these links there are some long/obscure URLs which link to one of their subdomains, but the long links are ugly and not very user friendly, so they want to have shorter, nicer links to put on the website or email. What are the risks for a bank/financial service using an external URL shortener service, e.g. Bitly, for this? Is it better for a bank/financial service to host this sort of short link to long link translation service on their own domain and infrastructure? | In some of these links there are some long/obscure URLs which link to one of their subdomains, but the long links are ugly and not very user friendly, so they want to have shorter, nicer links to put on the website or email. Users generally don't have to type any URLs anymore since at least a few decades. In fact, if you have a look at this link , you'll see it is really long and yet you don't need to type any of it. Generally, URL shorteners are only useful if you have to transmit a URL via a medium that doesn't support hyperlinks, such as printing it on paper. And even there, QR codes are slowly getting implemented more and more to solve this exact problem. What are the risks of using an external service to do this? Source: Randall Munroe, xkcd/1698 , licensed under CC-BY-NC 2.5 By using URL shorteners, you promise that this URL will link to a trustworthy source. As long as your URL shortener of choice works and remains trustworthy, this works. However, once some time passes, that URL shortener may go out of business, and that domain will go up for sale once again. From that point on, you can't guarantee anymore if those old URLs will work (they probably won't), or what will happen if users visit them. Remember, that in the eyes of a user, this link came from you, so they will trust anything on that site to be from you - even though it might not be. Has this happened in practice? Yes. In Windows XP, the Windows Media Player does not come with the required license to play DRM protected WMV files. Luckily for the user, Windows Media player has the URL to get the license hard-coded. Unluckily for the user, that URL does not longer point to Microsoft, but rather to a distributor for malware. Any user that still uses Windows XP and wishes to play a DRM-protected WMV file, will be redirected to malware by the built-in media player of their OS , only because of URL shorteners. A better solution If you distribute links only digitally, it does not matter how long URLs are. Users just click on the button and that is it. If you really need to distribute URLs in a format that is non-clickable, such as TV advertisements or print media, make your own URL shortener. If ACME Corp. offers a new product called the "Ultra-Gigatron 9001", then make the URL ac.me/ug9001 . If that doesn't work, make it acme.com/ug9001 . Subdomains are free, and so are path names. Just be aware that that URL needs to be up as long as you expect people to type it. | {
"source": [
"https://security.stackexchange.com/questions/229841",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232263/"
]
} |
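To make the "make your own URL shortener" suggestion above concrete, here is a minimal sketch of a self-hosted redirect service. Python with Flask is assumed, and the slug, target URL and route layout are invented for illustration, not taken from any real deployment:

# Minimal self-hosted short-link service: short slugs on your own domain
# redirect to the long internal URLs, so no third party sits in the middle.
from flask import Flask, abort, redirect

app = Flask(__name__)

# In practice this mapping would live in a database you control.
SHORT_LINKS = {
    "ug9001": "https://acme.example/products/ultra-gigatron-9001?campaign=print",
}

@app.route("/<slug>")
def follow(slug):
    target = SHORT_LINKS.get(slug)
    if target is None:
        abort(404)                      # unknown slug: never redirect blindly
    return redirect(target, code=301)

if __name__ == "__main__":
    app.run()

Because the short domain stays under your control, the links keep working (or failing safely) for as long as you run the service.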
229,954 | I'm pretty sure this is a stupid idea but I'd like to know why, so bear with me for a moment. Lots of the work backend developers do is providing CRUD access to customers via HTTP, essentially mapping data from and to the internal database. Customers authorize to the web service using some sort of credentials via an encrypted connection, the web service validates data and performs queries against the backend database, then returns the result to the client. All in all, this is merely a worse way to interact with the database directly: Almost nobody fully implements the REST specification, and sooner or later you always end up with home-cooked generic filtering, sorting or pagination - while SQL supports all of this already. That got me wondering: Why not give customers access to the database by exposing the SQL port, skipping the HTTP API entirely? This has lots of advantages: Clients must encrypt connections using a client certificate We can use the access control built into the server or just use shard databases per customer (My-)SQL permissions are pretty fine-grained, so I'd wager there shouldn't be any obvious security issues Performance should be way better, since we skip the entire HTTP communication and web app code New features are a matter of database migrations, everything is reflected in the schema Powerful query capabilities are provided to users, without any additional effort The downsides seem to include being unable to support multiple schema versions, even though I think careful deprecations (and client SDKs, maybe) should make the impact minimal. As nobody seems to do this, there must be a security risk I'm overlooking. Why can't we provide public SQL access to our customers? What could possibly go wrong? (Please keep in mind that this is just a thought experiment born out of curiosity) | TL,DR: Don't. (My-)SQL permissions are pretty fine-grained, so I'd wager there shouldn't be any obvious security issues Even with permission on the record level, it does not scale easy. If a user has irrestricted SELECT on a table, they can select any record on that table, even those not belonging to them. A salary table would be a bad one. If any user has DELETE or UPDATE , they may forget the WHERE clause, and there goes your table. It happens even to DBAs, so why would it not happen to a user? Performance should be way better, since we skip the entire HTTP communication and web app code And you throw away all security, auditing, filtering and really fine grained permission control from using an application to validate, filter, grant and deny access. And usually most of the time spent on a transaction is the database processing the query. Application code is less than that, and you will not remove the HTTP communication, you just replace it with SQL communication. New features are a matter of database migrations, everything is reflected in the schema That's why so many people use a "spreadsheet as a database." And it's a nightmare when you need to reconcile data from multiple sources. Powerful query capabilities are provided to users, without any additional effort It's like putting a powerful engine on a skeleton chassis, bolting on a seat, and taking it to a race. There's no extra weight slowing the car down, so it's very fast! It's the same here. Sure, it's fast and powerful, but without security measures provided by the application, no session, record-level access control, "users do what they are allowed to", or auditing. 
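As a rough illustration of the record-level control mentioned above, here is a sketch of the kind of scoping an application layer adds and raw SQL access gives up. Python with the built-in sqlite3 module is assumed, and the table and column names are hypothetical:

# Every query the API runs is parameterized and forcibly scoped to the
# authenticated caller; the user never gets to write their own WHERE clause.
import sqlite3

def get_my_orders(conn: sqlite3.Connection, authenticated_user_id: int):
    cur = conn.execute(
        "SELECT id, item, amount FROM orders WHERE user_id = ?",
        (authenticated_user_id,),
    )
    return cur.fetchall()

# With a raw SQL console there is nothing forcing that WHERE clause:
# "SELECT * FROM orders" (or a DELETE without WHERE) is one keystroke away.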
One of the most common vulnerabilities on web applications is SQL Injection, and you are giving a SQL console to your users. You are giving them a wide variety of guns, lots of bullets, and your foot, your hand, your head... And some of them don't like you. | {
"source": [
"https://security.stackexchange.com/questions/229954",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67818/"
]
} |
230,035 | If a Linux server only opens SSH port 22 and HTTP port 80, must we go through one of these two ports to hack into the server from the internet? | Not really. I'd say it depends on your threat model. There might be other threats that don't need to use those ports in order to compromise your server. The first example that I can think of right now is a supply-chain attack. When you update any software on your server, if the updated software has been compromised by a supply-chain attack, your server will get infected. Or if you install example-program by mistake instead of example_program (note the hyphen instead of the underscore), and example-program was malicious and had been given that name on purpose to confuse you, then your server will be compromised. I think something like this happened recently... oh, yesterday (Bitcoin stealing apps in Ruby repository). Other examples? Maybe some MITM in the outgoing connections from your server. Then let's not forget about phishing, or anything involving social engineering. So to be precise, if you asked me "in general, can I only be hacked by a remote threat through open ports?", my answer would be no. Whether some threats are likely or not though, depends on your threat model, which in turn depends on what your server does, how you are managing it, who you are, etc. | {
"source": [
"https://security.stackexchange.com/questions/230035",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/190529/"
]
} |
230,072 | If I encrypt, say, a .pdf file then does knowing that it's a .pdf make it easier to decrypt? i.e. could the file's well-known structure act as a predictable part of the encrypted bytes? | For practical purposes, no. Against good crypto, using modern ciphers and making no major mistakes such as a padding oracle or ECB mode or other weakness, it doesn't really help. It can theoretically act as a test for whether you brute-forced the key correctly - does the decrypted file contain the right magic number and data format? Good, you got the right key - but actually brute-forcing the key should be impossible for any modern widely-used cipher (even the weakest form of AES, with 128-bit keys, would take centuries for even a nation-state to brute-force, even assuming Moore's Law continues on course ; using only modern hardware it would take many orders of magnitude longer). Of course, people do make mistakes in crypto, and some ciphers that were once thought strong are now known to be vulnerable. If you know that the file was encrypted in a particular way, and you know of a weakness in that method, then you can attempt to exploit that weakness. This might be made easier by knowing the file type, as some files are generated by software which nominally supports encryption but implements it very weakly, and knowing that the file is of that type will suggest you might be able to try such attacks (though it won't help you if the file was encrypted using some other tool that implements the crypto correctly). The ECB (Electronic Code Book) block cipher mode of operation mentioned above encrypts every block (typically 16 bytes for modern ciphers; historically often only 8 bytes) using the same algorithm (for a given key), no matter where it is in the message. This means that if you break the message (or any number of messages, if they were all known to be encrypted using the same key) up into blocks and find two identical blocks, you'll know they're the same plain text. If you find such duplicates and know at least some of the plain text of one of them, because you know at least part of the data of the file (due to knowing its format, or for other reasons), you now know the plaintext for the same part of the other block. This can also be useful when you know the data format in general even if you don't know any specific bytes in it, especially if it's low-entropy data such as a simple image file; see the link above for a striking example of taking a bitmap image, encrypting it using ECB, and the ciphertext (if rendered as a bitmap) still largely revealing the content of the image. There are other attacks that are less likely to be relevant here, but might be relevant in other situations. For example, if you can get the same data encrypted many times using the RC4 (sometimes called ARC4 or ARCFOUR) stream cipher, you can exploit biases in the "key stream" (the bits generated by the pseudorandom function that a stream cipher is) to slowly decrypt the data; this is why RC4 is no longer trusted for use in SSL/TLS (although for a given blob of data encrypted only once, this attack isn't viable). Padding oracle attacks allow you to decrypt a message (typically one encrypted using a block cipher such as AES in the CBC mode of operation) in linear time, provided there's an "oracle" that knows the decryption key and will, on command, decrypt any message and tell you if the padding (padding is necessary for block ciphers) is correct. 
Such an oracle is usually not available for a file at rest, but a padding oracle is the reason that CBC mode can no longer be used in SSL, and it led to the deprecation of the entire SSL protocol (TLS includes protections against padding oracle attacks). Knowing the structure and some basic data about the file can also enable bit-flipping attacks (where you don't decrypt the data, but do change it in a predictable way that could further your causes against whoever legitimately uses the file). This is getting pretty far afield of your question, but it's sometimes relevant when you're attacking an encrypted file of which you have minimal but non-zero knowledge. | {
"source": [
"https://security.stackexchange.com/questions/230072",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/7247/"
]
} |
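To make the ECB weakness described in the answer above concrete, here is a small sketch. It assumes the pyca/cryptography package; the key and plaintext are arbitrary demo values, not anything from a real file format:

# ECB encrypts every 16-byte block independently, so identical plaintext
# blocks produce identical ciphertext blocks under the same key.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                      # AES-256 key
block = b"A" * 16                         # a known or guessable block
plaintext = block + b"B" * 16 + block     # the same block appears twice

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

chunks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
print(chunks[0] == chunks[2])             # True: the repetition leaks through

# An attacker who knows the contents of one of those blocks (for example from
# a well-known file header) now knows the other, without touching the key.
# Modes such as CBC with a random IV or GCM do not show this pattern.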
230,160 | This is probably a massive noob question, but Google results aren’t being helpful and I couldn’t find something specific here. I made this server that just hosts IRC, HTTP and SSH for some friends. I have done this sort of thing before, and to my knowledge everything was fine. But today, minutes after I booted up the server properly for the first time, and pretty much for the whole day until I noticed it tonight, I was getting brute-forced via SSH. They were checking from a whole bunch of different IP addresses, from businesses in places like China and Vietnam, to DigitalOcean’s address. I had not shared the direct IP address with anyone, and the DNS had only been set up for a day or two. There is no way anybody outside of my friend circle (people I trust) would have known that the server existed, and nobody would have any reason to hack me. So my question is, assuming it wasn't leaked, how did these people get my IP address so quickly, and what would they seek to gain by taking control of my machine? | The IPv4 address range isn't that big. A class A network ( /8 ) has about 16 million hosts, and in theory there are 256 of them. As a result, the internet has about 4,294,966,784 hosts. Of course, this is an approximation. Many address ranges are actually reserved (e.g. 127.0.0.0/8 , 10.0.0.0/8 ), and others are actually one address that represents a NAT-ed internal network. But just judging from a naive back-of-the-envelope calculation, we can say it's somewhere in that ballpark. What an attacker can do now is mass-scan one subnet for a particular service, such as SSH. Simply get a number of hosts (e.g. 32 hosts) and divide the target subnet evenly. Scan only for SSH hosts on port 22, and check which hosts reply. An attacker can then either try to launch a brute force attack themselves, or they can sell that list of active hosts to someone else, who then attempts to attack you. How long would it take to make such a list? Assuming that the attacker wants to scan a whole class A network (16 million hosts), with 32 hosts to scan, at roughly 100 hosts per second, we get a rough estimate of 90 minutes. Of course, time will vary, depending on the speed or the number of hosts, but it should be in that ballpark. | {
"source": [
"https://security.stackexchange.com/questions/230160",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232712/"
]
} |
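A quick arithmetic check of the 90-minute estimate in the answer above (Python used just as a calculator):

hosts_in_class_a = 2 ** 24          # roughly 16.7 million addresses in a /8
scanners = 32                       # hosts doing the scanning
rate = 100                          # addresses probed per second, per scanner

minutes = hosts_in_class_a / (scanners * rate) / 60
print(round(minutes))               # ~87, i.e. the "roughly 90 minutes" above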
230,211 | Why are stored procedures and prepared statements the preferred modern methods for preventing SQL Injection over mysql_real_escape_string() function? | The problem of SQL injection is essentially the mixing of logic and data, where what should be data is treated as logic. Prepared statements and parameterized queries actually solve this issue at the source by completely separating logic from data, so injection becomes nearly impossible. The string escape function doesn't actually solve the problem, but it tries to prevent harm by escaping certain characters. With a string escape function, no matter how good it is, there's always the worry of a possible edge case that could allow for an unescaped character to make it through. Another important point, as mentioned in @Sebastian's comment, is that it is much easier to be consistent when writing an application using prepared statements. Using prepared statements is architecturally different from the plain old way; instead of string concatenation, you build statements by binding parameters (e.g. PDOStatement::bindParam for PHP), followed by a separate execution statement (PDOStatement::execute). But with mysql_real_escape_string() , in addition to performing your query, you need to remember to call that function first. If your application has 1000 endpoints that perform SQL queries but you forget to call mysql_real_escape_string() on even one of them, or do so improperly, your entire database could be left wide open. | {
"source": [
"https://security.stackexchange.com/questions/230211",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232770/"
]
} |
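A small sketch of the separation described in the answer above, using Python's built-in sqlite3 module (any driver with placeholders behaves the same way); the table and the attacker string are made up:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

attacker_input = "x' OR '1'='1"

# String concatenation: the input is pasted into the logic, so the quote in
# it rewrites the query and every row comes back.
unsafe = "SELECT * FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(unsafe).fetchall())                    # leaks all rows

# Parameterized query: logic and data travel separately, so the same input
# is treated as a plain string that matches nothing.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())   # []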
230,213 | When I open a password-protected zip file and it asks for the password, it obviously doesn't connect with any sort of database.
How is this even possible? | Thinking of it as "password protection" slightly misrepresents the actual situation. What happens when you password-protect a zip file is that the archive is encrypted using a symmetric algorithm (same key to encrypt and decrypt) using the password as the key. The unzipper program "checks" whether the key is correct the same way I check whether the key to my front door is correct: If it opens the lock, it was the correct key. So in this case the unzipper attempts to decrypt the data using the password you provide, and if the output is a properly structured archive, it was the correct password. (I'm skipping the whole cryptography debate WRT collisions and possible duplicate keys for now; this is about how the concept works in theory rather than a specific implementation that may or may not have flaws) EDIT : As user MobyDisk points out in comments, in the case of Zip specifically, the structure and the file tree are not encrypted, just the files themselves, as well as checksums for each file. If the password you use decrypts the file, and the decrypted checksum matches, you had the right password. | {
"source": [
"https://security.stackexchange.com/questions/230213",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232773/"
]
} |
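Seen from the outside, the "key that opens the lock" model above is also why offline guessing works: the only way to test a candidate password is to attempt decryption and see whether the result is consistent. A sketch with Python's zipfile module (classic ZipCrypto archives); the file name and word list are hypothetical:

import zipfile
import zlib

def try_passwords(path, candidates):
    with zipfile.ZipFile(path) as zf:
        name = zf.namelist()[0]
        for pwd in candidates:
            try:
                zf.read(name, pwd=pwd.encode())   # attempt the decryption
                return pwd                        # output was consistent
            except (RuntimeError, zipfile.BadZipFile, zlib.error):
                continue                          # wrong key, try the next one
    return None

print(try_passwords("secret.zip", ["letmein", "hunter2", "correct horse"]))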
230,514 | We are considering to add the following feature to our web application (an online product database, if it matters): Instead of uploading an image, the user can provide the (self-hosted) URL of an image. We store the URL instead of the image. So far, so good. However, sometimes our web application will have to fetch the image from the (external, user-supplied) URL to do something with it (for example, to include the image in a PDF product data sheet). This concerns me, because it means that our web server will send out HTTP requests to user-supplied URLs. I can immediately think of a lot of evil stuff that can be done with this (for example, by entering http://192.168.1.1/... as the URL and trying out some common router web interface exploits). That seems similar to cross-site request forgery , only it's not the web server tricking the user into submitting a web request, it's the user tricking the web server. Surely, I'm not the first one facing this issue. Hence, my questions: Does this attack vector have a name? (So that I can do further research...) Are there any other risks associated with fetching user-supplied URLs that I should be aware of? Are there some well-established best-practice techniques to mitigate those risks? | This particular vulnerability indeed has a name. It is called Server-Side Request Forgery (SSRF). SSRF is when a user can make a server-side application retrieve resources that were unintended by the application developer, such as other webpages on an internal network, other services that are only available when accessed from loopback (other web services and APIs, sometimes database servers), and even files on the server ( file:///etc/passwd ). See the SSRF Bible and PayloadsAllTheThings for examples on how it can be abused. Since it's an image tag, most things probably won't be displayed, but it's still an issue to fix. What to do about it? You can reference the OWASP SSRF Cheat Sheet . Your situation matches the second case, although you won't be able to perform all of the mitigations, like changing requests to POST or adding a unique token. The guidance otherwise boils down to: Whitelist allowed protocols: Allow HTTP and HTTPS, disallow everything else (e.g. a regex like ^https?:// ). Check that the provided hostname is public : Many languages come with an IP address library; check whether the target hostname resolves to a non-private and non-reserved IPv4 or IPv6 address*. My own addition, custom firewall rules: The system user that runs the web application could be bound to restrictive firewall rules that block all internal network requests and local services. This is possible on Linux using iptables / nftables . Or, containerize/separate this part of the application and lock it down. Perhaps you could also validate the MIME type of the file at the time of retrieval to ensure it is an image. Also, do not accept redirects when fetching the image, or perform all the same validation on them if you do. A malicious webserver could just send a 3xx response that redirects you to an internal resource. Additionally, you mentioned you are generating PDFs from user entered data. Your image URL aside, PDF generators have historically been a breeding ground for XXE (XML eXternal Entity injection) and SSRF vulnerabilities. So even if you fix the custom URL, make sure your PDF generation library avoids these issues, or perform the validation yourself. A DEFCON talk outlines the issue ( PDF download ). 
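To make the protocol and address checks above concrete, here is a minimal sketch in Python; the function name is my own, and the note below about DNS answers changing between check and use still applies:

import ipaddress
import socket
from urllib.parse import urlparse

def validated_public_url(url: str) -> str:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("only http(s) image URLs are allowed")
    if not parsed.hostname:
        raise ValueError("missing hostname")
    # Reject anything that resolves to a private, loopback, link-local,
    # reserved, multicast or unspecified address.
    for *_, sockaddr in socket.getaddrinfo(parsed.hostname, None):
        ip = ipaddress.ip_address(sockaddr[0])
        if (ip.is_private or ip.is_loopback or ip.is_link_local
                or ip.is_reserved or ip.is_multicast or ip.is_unspecified):
            raise ValueError("hostname resolves to a non-public address")
    return url   # fetch with redirects disabled, re-validating any Location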
* As mentioned in comments, DNS responses can contain multiple results, and responses could change between requests, causing a time-of-check time-of-use (TOCTOU) problem. To mitigate, resolve and validate once, and use that originally validated IP address to make the request, attaching the host header to allow the correct virtual host to be reached. | {
"source": [
"https://security.stackexchange.com/questions/230514",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12244/"
]
} |
230,534 | Let's say I use a 5 word password composed of 4 words plus the name of the website I'm accessing. For example for GitHub, it would be something like " correct battery horse staple github". How is that different to using a password manager with "correct battery horse staple github" as the master password? I also have a simpler password for accounts I don't care about or that I suspect might be vulnerable. I assume that everyone but the major companies (Google, Facebook, GitHub, Apple) store passwords in plain text. Am I at risk by using this approach? | Yes, decent password managers are more secure than using any password pattern. You have a password manager, and it has created you random passwords: 6AKQ3)mcV!xX3b8-ZgncCe%tdn!&.@3X a6/4TFaWKrzTHQyT2Df#;/*+QA$zH2tJ 9y__&%7jP4UcuG(9f7X6z44C#64bF:m& 9W649r788_8AU=9272zuGH"=C?2&C66j nT29HMc$y'H)ww2#D/2x(2sBU#WG23us Versus you have a pattern for your passwords: correctbatteryhorsestaplegithub correctbatteryhorsestaplestackexchange correctbatteryhorsestaplegooogle correctbatteryhorsestaplesomesite correctbatteryhorsestapleapple The site #4 has a bad practice of saving passwords in plain text, and their password database leaks. Now, from the latter it's possible to assume that this is a password pattern you use and deduce you might have correctbatteryhorsestaplegithub as your password for GitHub etc., but from the random password it's impossible to deduce the other random passwords, as they are completely unrelated. On the other hand, if your computer gets infected and someone steals both your password manager database and the password (e.g. using a keylogger), they have keys to the kingdom. That's a completely different risk model and requires access to the operating system the password manager is installed on. Against this you need other measures like multi-factor authentication. | {
"source": [
"https://security.stackexchange.com/questions/230534",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/233165/"
]
} |
230,703 | I use a password scheme where I keep a small number of easy to remember personal passwords. Instead of using the passwords directly for each service, I run them through a hashing algorithm first, as a sort of a seed, together with the name of the actual service. I then use the resulting hash as my actual password for the service. (There's some more to it, I add some extra fixed letters to satisfy some normal password requirements, but let's look away from that in this question.) The pattern looks like this (using SHA512, and keeping just the 12 first characters of the resulting hash): "my_p4SSWord!"
+ => SHA512 => "d4679b768229"
"Facebook"
"my_p4SSWord!"
+ => SHA512 => "182c5c2d4a2c"
"LinkedIn" The pattern allows me, not to remember all of my online passwords, but to remember how to easily re-create them, whenever I need to. There are lots of online services for calculating hashes, and I currently use this one (which is javascript only, and thus purely client-side): https://emn178.github.io/online-tools/sha512.html My question to the security experts is, how secure is this personal scheme of mine really? I truncate the hashes to just 12 characters . How does that affect the real crackability of my passwords? Also, I use SHA512 . How does it affect my scheme, as a contrast to using for instance bcrypt ? Any comments? edit : I'm getting some knee jerk reactions here, for which I'm thankful. I'll just make a few quick comments before this probably gets closed off. I was asking particularly about the cryptographical properties of the hashes that I'm using, and the increased likelyhood of a successful brute force attack against them as a result of the truncation that I'm doing. Also, a lot of people mention usability. The thing to remember, is that most passwords you don't actually have to type in, the exception being maybe the password to your OS. In my case, I just remember the hashed password to my OS, and that's it. For the rest, it's just a minor inconvenience each time I have to look up and re-generate the hash. It's quick, and the tools are standardized and available everywhere. Also, there are simple rules that you can have for cases where you do need to change the password for a particular site. Rules that you can just as easily remember, and that you don't have to write down. Finally, I mean this mostly as an alternative to doing nothing at all, which, after all, is what most people are doing when they reuse the same passwords indiscriminately across all the online services that they subscribe to! Finally-finally, be kind! edit2 : There's a lot of focus on the method in the answers, and not on the cryptographic properties of the hashes, which is what I intended in the original question. Since there's focus on the method, I'll add just one extra piece of information, which is that I do keep a small text file on the side. The text file is, as far as I can tell, in accordance with Kerckhoffs's principle, it reveals things, but not the keys. Which is why my original question is focused on the cryptographic properties of the hashes, and their strength. | SHA-512 PROS: Due to the avalanche effect , every single modification to the suffix will change the SHA512 sum entirely. This means that from one N first letters of one hash you can't say anything about the N first letters of another hash, making your passwords quite independent. SHA512 is a one-way compression function, so you can't deduce the password from the hash; you could try brute-force passwords that will give the same SHA512 sum. As you only use part of the hash, there's more chances to find more than one password that will match. CONS: You think you have a strong password as it is 12 characters long and contains both letters and numbers, but it actually has a very limited character set of 0-9 + a-f (it's a hexadecimal number ). This gives only 16^12 i.e. log2(16^12)=48 bits of entropy, which is less than 10 characters of a-z + 0-9 and close to 8 characters of a-z + A-Z + 0-9 . Online services could save the hashes you create. Use a local tool for the hashing, instead. It's possible you can't recall your seed in all cases, which may lead you to lose your password. 
What happens if you want or are required to change a password? Does the seed become my_p4SSWord!Sitename2 or something else? How do you keep track of the count? The site/service name is not always unambiguous. Was your Gmail password suffix Google or Gmail, or did you first use YouTube? Was your Microsoft account Microsoft or Live or Office365? Combinations of these make things even uglier. If the site has password complexity requirements, you'd have to add characters and/or convert some of the characters to uppercase manually after the hashing. How do you keep track of these modifications? Someone could still find out your procedure, e.g. by seeing you create/use a single password. That would compromise all your passwords at once. bcrypt: While bcrypt would give more entropy (using Radix-64 encoding with 64 characters, ASCII 32 (space) through 95 _: !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_ ), be slower to calculate, and let you use a user-defined iteration count to make it as slow as you wish, it's incompatible with your current procedure, as it has incorporated salting and would give you a different hash every time:
$ htpasswd -nbBC 12 Elling my_p4SSWord\!Sitename
Elling:$2y$12$jTkZQqYzWueA5EwKU1lLT.jbLLXUX.7BemHol.Q4SXhoJyeCVhcri
Elling:$2y$12$c1lDwR3W3e7xlt6P0yCe/OzmZ.ocKct3A6Fmpl8FynfA.fDS16bAa
Elling:$2y$12$eAciSW6iGxw/RJ7foywZgeAb0OcnH9a.2IOPglGGk.wL9RkEl/Gwm
Elling:$2y$12$EU/UDJaZYvBy6Weze..6RuwIjc4lOHYL5BZa4RoD9P77qwZljUp22 ...unless you exclusively specify a static salt (possible on some implementations like PyPI bcrypt). Password managers: All the cons are solved by a password manager, which creates passwords that are random, completely unrelated, and able to meet different requirements (different character sets and minimum/maximum lengths). | {
"source": [
"https://security.stackexchange.com/questions/230703",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/233385/"
]
} |
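On the "use a local tool for the hashing" point above: if the scheme is used at all, the hashing is trivial to do locally with Python's hashlib, so no online service ever sees the seed. The exact concatenation below is a guess at the asker's construction, so its output will not necessarily match the sample values in the question:

import hashlib

def site_password(seed: str, site: str, length: int = 12) -> str:
    # SHA-512 of seed + site name, truncated to the first `length` hex chars.
    return hashlib.sha512((seed + site).encode()).hexdigest()[:length]

print(site_password("my_p4SSWord!", "Facebook"))
# The output alphabet is only 0-9a-f, which is the ~48 bits of entropy for
# 12 characters criticised in the CONS above.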
230,708 | I read on ssh.com that there are new ECDSA ssh keys that one should be using to create the public / private key pair; and that's it's a US Government Standard based on elliptical curves (probably something mathy). I also noticed that they use fewer bits, so I'm unsure how it is supposed to be more secure. Have you heard anything about this, and if it uses fewer bits how on earth could it be more secure? | ECC keys can be much shorter than RSA keys, and still provide the same amount of security, in terms of the amount of brute force that an attacker would need to crack these keys. For example, a 224-bit ECC key would require about the same amount of brute force to crack as a 2048-bit RSA key. See https://wiki.openssl.org/index.php/Elliptic_Curve_Cryptography for more info. | {
"source": [
"https://security.stackexchange.com/questions/230708",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6103/"
]
} |
230,731 | For example:
I'm browsing my favorite website with my favorite VPN enabled.
I disable my VPN while I'm still on a page of the website. I haven't clicked any links yet, I haven't gone back to the previous page, I'm just on the page, touching nothing. At this point, are the web server and ISP now aware of this change? Or does a page refresh / new request have to occur? If I had to guess, it would be no, because it would be theoretically the same thing as viewing the webpage and disconnecting your internet completely. Am I on the right track? | In the early days of the web, webpages were mostly static and there would be no communication between your computer and the web server unless you were actively loading a page. Today, that is no longer true. It is extremely common for webpages to maintain active connections to provide features like live updates. When you see a new email appear in Gmail without refreshing, or when you see an incoming chat message, those features are powered by active connections. This very page is likely using them to alert you to new answers. It is also common to try to automatically restore these connections when there is a network interruption. Even if the webpage doesn't have any features that would benefit from live updates, it is possible that there are still connections established for analytics and advertising purposes. To be safe, you should assume that when you disconnect from VPN, any pages you have open can immediately pick up on the change. | {
"source": [
"https://security.stackexchange.com/questions/230731",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/220620/"
]
} |
230,984 | What tools or techniques can companies use to prevent penetration testers from behaving maliciously and exfiltrating data or compromising the system? I cannot imagine that the only protections companies use are contractual ones. | You're looking for a technical solution to a legal problem. This won't work. What you worry about is primarily a legal problem. Penetration testers operate under a Non-Disclosure Agreement , which is the legal equivalent of "keep your mouth shut about anything you see here". Non-Disclosure Agreements, or NDAs for short, are what prevents a penetration tester from talking about the cool vulnerabilities they found when they tested ACME Corp. last week. But why would a penetration tester honour a NDA? Because not doing so would basically destroy their career. If the company learns that a penetration tester disclosed internal information, they will sue the pen-tester for damages, which can be upwards of millions. Furthermore, it'll completely destroy the reputation of the pen-tester, ensuring that nobody would ever hire them ever again. To a penetration tester, this means that the knowledge they have spent years or decades to accumulate is essentially worthless. Even if the idea seems sweet to a morally corrupt pen-tester, the punishment is magnitudes worse. Furthermore, most pen-testers just have no interest in compromising a client. Why would they? It's in their best interest to ensure that the client is satisfied, so that they hire them again and again. As for why you would not put in technical restrictions, there are several reasons. First, as a pentester, you feel like you are treated like a criminal. A lot of pentesters are proud of the work they do, and treating them like criminals just leaves a sour taste in their mouth. A pentester understands that a company has certain policies, but if a company goes above and beyond and escorting them with an armed guard to the toilet, just to make sure they don't look for post-it notes with passwords on them on their way back, they will feel mistrusted. This can and most likely will lower morale, and may cause a pentester to not give their absolute best. Furthermore, absurd technical constraints can also just make things difficult for a pen-tester. For example, if their company-provided domain account gets blocked as soon as they start Wireshark or nmap, it takes time for that account to get reactivated. It prevents a pentester from launching all their tools to find vulnerabilities as effectively as possible, and wastes a lot of their time. This is bad for both the pentester and the customer, and will likely result in a worse overall experience for both of them. | {
"source": [
"https://security.stackexchange.com/questions/230984",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/233791/"
]
} |
231,006 | I was surprised to read in the responses to this popular question that it's considered nigh impossible to secure a computer system if intruders have physical access. Does this apply to smartphones as well? Let's suppose I have done the most I can to secure my phone on a software level (e.g. encrypted storage, restricted app permissions ... whatever you consider "maximally secure"). Is physical access still game over? This seems like an important issue, as there are many situations in most people's daily lives where they leave their cell phone on their desk as they take a break or what have you. Technical answers are welcome, but I would personally appreciate responses that are legible to someone without a background in information security. This question is related , but it deals more with surveillance technologies that might be built into a smartphone, not the options available to a malicious individual with physical access. | "Physical access = game over" is an over-simplification. It absolutely boils down to the outcome of a threat assessment, or what the vendor needs to protect and to what level. The direct answer to your question is a great big 'it depends'. Smartphones are no different than other devices to the extent that they are computers running an operating system of some description handling data of some kind that interact with other computers or people in some way via peripherals. The class and range of attacks a device is susceptible to when physical access is possible is very different to the type of attacks it would be susceptible over a network. Conversely, the impact on the ecosystem is also quite different and it could be as impacting or worse on the network side. Modern/recent operating systems and smartphone hardware have multiple methodologies that aim to secure user data from attackers, whether by means of physical attacks or otherwise. Even "physical attacks" can vary between occasional access (a few minutes, casual access) to unlimited time and expertise in micro-electronics in a lab (such as forensics investigations). But there are aspects that can defeat some (or all) of these features such as local configuration of the device, having weak passwords, guessable (or no) PIN codes, etc. Online backup services, cloud based accounts (Apple/Google) aid in these vectors since most of the data in a device ends up mirrored on the cloud in some way. However, not all smartphone hardware is born in the same way and not all operating systems in the field are implemented to the same security strength, so there are attack vectors whereby full access is possible against certain hardware/software combinations provided physical access is possible. This is a very short summary, there is scope to this matter to write books. | {
"source": [
"https://security.stackexchange.com/questions/231006",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/233820/"
]
} |
231,408 | There are a couple of files / tools which provide file-level encryption. I guess PDF and ZIP are probably the most commonly known ones. I wonder what scenario they actually help with or if it just is a bad solution. For example, if I want to be sure that no man in the middle gets information when I transfer a file over the internet, I think TLS is the solution to go with. If I want to get some security that attackers who get data on my laptop when I loose the device, I would think of full disk encryption. The only point I can see is a low-skill attacker which has access to the machine with the encrypted file. Essentially if a friend / family member has access and does not accidentally see something. Do I miss a scenario? | File-level encryption can be useful in several cases, here's a few examples: Sending data over insecure channels. You mentioned TLS, and that's enough when you have it. But what if you aren't sure every node actually uses TLS? And do you really trust every node? Think about emails, for example. Storing data in untrusted places. You might trust your encrypted external HDD, but what about Google Drive? What about your hosting provider? If you want to be sure that nobody else is able to access your files (Google, employees at your hosting provider, cyber criminals who manage to breach those service, etc.), you will need to encrypt your files. Defense in depth. Full-disk encryption will protect your data when your machine is turned off, but what if an attacker grabs your laptop while it's on? What if your machine gets infected and before you notice all your sensitive files are sent to the attacker? With file-level encryption, the attacker won't have access to the content of your sensitive files right away, so such attacks might fail. | {
"source": [
"https://security.stackexchange.com/questions/231408",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3286/"
]
} |
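A minimal example of file-level encryption as defence in depth, as described above. It assumes the pyca/cryptography package's Fernet recipe; in practice the key would be derived from a passphrase or kept in a key manager rather than generated next to the file, and the file name is hypothetical:

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this somewhere safer than the file
f = Fernet(key)

with open("report.pdf", "rb") as fh:
    token = f.encrypt(fh.read())     # authenticated encryption of the bytes
with open("report.pdf.enc", "wb") as fh:
    fh.write(token)

# The .enc file can now sit on a cloud drive or travel over an untrusted
# channel; only someone holding the key can recover the plaintext:
with open("report.pdf.enc", "rb") as fh:
    original = Fernet(key).decrypt(fh.read())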
231,411 | All the products supporting TOTP-based 2FA use one of the common authenticator apps such as Google Authenticator, Authy, etc. I want to understand whether there are any security reasons behind why the implementations prefer to use the generic authenticator apps and not build the TOTP code-gen application themselves or even have it in the main app (the one that relies on TOTP for 2FA)? I see a few concerns: Most of the TOTP use-cases are for login, so I understand the need for a different app to share the code, instead of having the code-generator within the parent app, which cannot be accessed - well - until one login. However, 2FA can have use-cases other than login Time-sync on the local device can be hard, so one can rely on apps like Google Authenticator to implement it rather than implement it themselves. What are the other reasons? Are there any security concerns? I could not find any reference in the RFC specs. | File-level encryption can be useful in several cases, here's a few examples: Sending data over insecure channels. You mentioned TLS, and that's enough when you have it. But what if you aren't sure every node actually uses TLS? And do you really trust every node? Think about emails, for example. Storing data in untrusted places. You might trust your encrypted external HDD, but what about Google Drive? What about your hosting provider? If you want to be sure that nobody else is able to access your files (Google, employees at your hosting provider, cyber criminals who manage to breach those service, etc.), you will need to encrypt your files. Defense in depth. Full-disk encryption will protect your data when your machine is turned off, but what if an attacker grabs your laptop while it's on? What if your machine gets infected and before you notice all your sensitive files are sent to the attacker? With file-level encryption, the attacker won't have access to the content of your sensitive files right away, so such attacks might fail. | {
"source": [
"https://security.stackexchange.com/questions/231411",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/234314/"
]
} |
231,454 | (There is a highly related question , however I believe mine is not a duplicate, since it deals with resetting a password without access to the account, not changing it while being logged in.) Say someone has gained access to my email that I used to register some accounts with. Assume also that these accounts all have some kind of 2FA, be it a 30-second code generated by an app, a U2F key - the type doesn't matter for my question. In my understanding, in order for the attacker to change the password of an account, there are two ways: Log into the account and change the password in the internal settings, without using the associated email. Even if we leave our computer/phone unattended with an active session of the relevant account, therefore bypassing the need for the hacker to also guess the account password, the change is still impossible. This is because, as explained in the question linked above, this would require at least 2FA
verification, possibly 2FA + the original account password. On the log-in screen for the account, use the 'reset password' option to send a reset email to the email account that we assumed the hacker had access to. I am confused as to what happens then: is the 2FA needed to send the reset email in the first place? If not, is the attacker able to reset the password, but not to actually log in, since the 2FA is still in
place? This essentially means that they can't access the account, but nor can we. is the attacker able to reset the password and log into the account, since the 2FA somehow becomes void? Of course, scenario 1) is the most desirable from the perspective of the legitimate user, 2) is significantly worse, 3) is tragic. But which one actually happens when someone tries to reset a password for an account with 2FA enabled? | Technically, this is a question about how you should implement 2FA (or how you should expect it to be implemented), since there's nothing inherent in 2FA that answers your questions in either direction. With that said, there are certainly best practices. 2FA (or multi-factor authentication in general) should apply whenever the user is being asked to prove their identity in any way (that is, to authenticate ). So you should prompt for MFA when the user is doing anything where you would normally request a password (such as changing their current password or email address, or changing MFA settings). You should also prompt for MFA any time the user is doing anything that takes the place of a password, such as clicking a link in a password reset email. For your three scenarios, #1 is unlikely just because requesting a password reset email usually doesn't require any authentication at all, so it'd be an odd place to put a MFA demand. However, using the email in any way - that is, actually resetting the password - should require MFA. So #2 is technically incorrect - an attacker can't reset the password - but it's true that they can't log in either. The correct answer is sort of "#1.5". However, again, "which one actually happens" will depend 100% entirely on how that particular service implemented MFA, and there's no guarantee they've done it correctly. I've seen sites that do it like #3. | {
"source": [
"https://security.stackexchange.com/questions/231454",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232626/"
]
} |
231,525 | Let’s say: We buy a domain from http://cheap-unsecure-domains.example . Then in our control panel at cp.cheap-unsecure-domains.example we configure it to use the Cloudflare service. We set some MX record at Cloudflare and point them to Google, for example. In theory it should be possible for cheap-unsecure-domains to hijack our MX records answering them by itself instead of referring to Cloudflare. Is this correct? If yes, is there any type of protection against this kind of attacks? Except using something like GPG . I'm considering possible attacks on the receiving side. | Yes, your registrar can hijack not only your MX records, but your entire DNS. Not only that - but they can then proceed to intercept mail sent to your domain, get a valid CA-signed SSL certificate for your domain, and host a site for your domain using the trusted SSL certificate. And DNSSEC won't prevent any of this. One of the primary functions of your registrar is to register the nameservers for your domain. For example, if you do a whois lookup for stackexchange.com , you'll see that the registrar for stackexchange.com is eNom, LLC., and that the nameservers for stackexchange.com are hosted by Google Cloud and Amazon AWS. So, the DNS for stackexchange.com is handled by Google Cloud and Amazon AWS. In the example that you gave in your question, cheap-unsecure-domains is the registrar for yourdomain.example . With cheap-unsecure-domains, you specified Cloudflare's nameservers as nameservers for yourdomain.example . So, DNS for yourdomain.example is handled by Cloudflare's nameservers. Then, in Cloudflare's control panel, you setup your DNS records for yourdomain.example , including your A records, MX records, etc. So if cheap-unsecure-domains wanted to intercept your mail - they wouldn't need to hack into your account at Cloudflare to change your DNS records. They would simply change the nameservers for yourdomain.example to their own, then create MX records for yourdomain.example in their nameservers to point to their own mail servers. Then, they would start receiving mail sent to your domain. Interestingly, they could start receiving mail for yourdomain.example securely using SMTP STARTTLS, without even getting an SSL certificate for yourdomain.example . They could just use their own certificate. See https://blog.filippo.io/the-sad-state-of-smtp-encryption/ . Now, things get more insidious. They can start receiving mail for [email protected] (or [email protected] , or any of the other designated approved email addresses used for SSL domain validation). Then, they can request a SSL certificate for yourdomain.example from a trusted CA, and when the CA sends the verification link to [email protected] , they'll receive it, and the CA will issue the certificate. Now, they can setup an A record for www.yourdomain.example , and run a site with a valid certificate for www.yourdomain.example . At this point, you might be wondering - can't this type of attack be prevented using DNSSEC? The answer is no. DNSSEC records are stored in the DNS for the domain. When the registrar changes the nameservers for yourdomain.example to their own, the DNSSEC records that you created for yourdomain.example are gone, along with all of the other DNS records that you created. See https://moxie.org/blog/ssl-and-the-future-of-authenticity/ for more info. | {
"source": [
"https://security.stackexchange.com/questions/231525",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60368/"
]
} |
231,543 | I just added a drive to my system which is basically a partition mounted for extra storage. I'd like to encrypt it to protect my data in case of god knows what, and by doing that I'd need to enter the passphrase every time to unlock the partition. I just read that I can add a keyfile so I wouldn't need to manually unlock it every time, but this is confusing. What is the point of having encryption if it's going to unlock automatically anyway? | If decryption only relies on the keyfile and this keyfile is readily available, there is indeed no significant security benefit in your setup. What you can do though is store the keyfile on a removable device (e.g. a USB stick) and detach it when you are not around. That way decryption is only possible when you are present and the removable device is attached. Storing the keyfile locally makes sense if you want to ensure that a removable device can only be decrypted on your system. You can distribute the keyfile to other systems as well if you want to use the encrypted device in different places. If you lose the removable device in transit, little harm is done, because it can only be decrypted on a system that has your keyfile. | {
"source": [
"https://security.stackexchange.com/questions/231543",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68834/"
]
} |
231,831 | This was , before someone helpfully fixed it after seeing this question, a relatively unassuming and tiny photo of a ̶f̶i̶s̶h̶ nudibranch, with 283,620 pixels. It has some metadata: text Exif tags as well as 8.6kB of Color Profile information, and a 5,557-byte Thumbnail as well as a 648,534-byte Preview Image (that I cannot read) and some other random things (like Face Detect Area) that take up little space. Using exiftool -a -b -W %d%f_%t%-c.%s -u -g1 -ee -api RequestAll=3 temp.jpg extracts a total of <650kiB of stuff. Are there any strategies or tools that one might use to discover what is going on, and whether something has been hidden in the file? In case it makes things easier, the same or very similar inclusions appear to affect multiple files by the same Flickr user: 2 , 3 , 4 , 5 | Short answer: It's an artifact of Nikon Picture Project I had difficulty finding "Nikon Picture Project" but finally found a 1.5 version to try. The last version produced was 1.7.6 . It turns out that "Nikon Picture Project" does indeed implement non-destructive editing with undo and versioning capabilities. Unlike every other photo editing software I've ever seen, it does this by directly altering the JPG file structure and embedding edit controls and versions directly in the JPG. There is an Export JPEG function in the software to flatten and remove history but it looks like the native munged JPGs were posted instead of using the export. I loaded up your first reference image (resized here) . Sure enough, "Nikon Picture Project" showed it as an edit and crop of a much larger picture (resized here) . Checking the before and after file structures verifies the weird artifacts. Thanks for the puzzle! | {
"source": [
"https://security.stackexchange.com/questions/231831",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/141049/"
]
} |
232,158 | I have two servers with a pair of RSA public and private keys. We do not use a CA for the internal communication yet and therefore we need to exchange keys without a CA. I need to establish trust between two servers: I need to copy the public key from the first server to the second server and the public key from the second server to the first server. Note that it is not Diffie–Hellman key exchange (that is explained very well in "Diffie-Hellman Key Exchange" in plain English ). The simplest way is just to manually copy the public keys from one server to another. An additional option is to use the following homegrown flow: Generate a one-time token on the first server. Copy the token manually to the second server. The first server accesses the second server via an API, using the token for the API authentication. The API implementation exchanges public keys between the servers. Any suggestions to improve the flow? Do we have some best-practices flow, since homegrown flows are usually bad for security? | This will mean a lot of unneeded overhead. I'd suggest the following: Since you don't have certificates issued by a CA, create your own CA. Namely, create a self-signed certificate and add it to a key store on both servers, so that your certificate is trusted. Issue certificates to each server and sign them with the private key of your own CA. Make your servers use their certificates when communicating with the others. Thus you will actually use PKI. In the future, when you get certificates from the real (commonly known) CAs, the only thing you will need to do will be to replace your own self-signed CA certificate with the (also self-signed) certificate of a real CA. | {
"source": [
"https://security.stackexchange.com/questions/232158",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/24842/"
]
} |
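As an illustration of the first step suggested above (a self-signed CA certificate), here is a sketch using Python's pyca/cryptography package; the names, validity period and file names are placeholders, and in practice the CA key would be stored encrypted. The same could of course be done with openssl on the command line:

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Internal Test CA")])

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

with open("internal-ca.pem", "wb") as fh:     # goes into both servers' trust stores
    fh.write(ca_cert.public_bytes(serialization.Encoding.PEM))
with open("internal-ca-key.pem", "wb") as fh: # used to sign the server certificates
    fh.write(ca_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))

Server certificates are then built the same way, with the server's own name as subject and this CA as issuer, signed with ca_key.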
232,162 | I just clean installed my MacOS Catalina on my MacBook due to virus on my PC that I worried got on my MacBook via usb file transfer. I did an internet recovery to reinstall MacOS Catalina with my home WiFi network, with password protected. However, since my Pc was had a virus, I was worried that my IP addresss and home WiFi network has been compromised, can I still trust that the installation was not tempered with? Or should I perhaps reinstall it again using mobile network? Since I have heard that mobile data are much more secure than WiFi network. | This will mean a lot of unneeded overhead. I'd suggest following: Since you don't have certificates issued by CA, create your own CA. Namely, create a self-signed certificate and add it to a key store on both servers, so that your certificate is trusted. Issue certificates to each server and sign them with private key of your own CA. Make your servers use their certificates when communicating with the others. Thus you will actually use PKI. In the future, when you get certificates from the real (commonly known) CAs, the only thing you will need to do will be to replace your own self-signed CA certificate by (also self-signed) certificate of a real CA. | {
"source": [
"https://security.stackexchange.com/questions/232162",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/232855/"
]
} |
232,190 | I'm doing a nmap scan to my own machine to my own machine. First of all I set the port 333 to listen with this command sudo nc -lvnp 333 On the other terminal I run sudo nmap -O -sV -p 0-65535 IP where IP is my local IP. The result I got on the nmap terminal is this one: But on the terminal where I opened the port, the process finishes and I have this message: root@kali:~$ sudo nc -lvnp 333
listening on [any] 333 ...
connect to [IP] from (UNKNOWN) [IP] 47462 I got curious and I tried to do the same thing with proxychain just to check which IP would appear, so I run sudo proxychains nmap -O -sV -p 0-65535 IP The result on the nmap terminal was different I guessed because the limitations of nmap through proxy I read in other places: But when I checked on the nc terminal the process didn't finish and it doesn't seem that noticed some scan was checking that port.
Which is the reason that with proxychains the scan was stealthy? | This will mean a lot of unneeded overhead. I'd suggest following: Since you don't have certificates issued by CA, create your own CA. Namely, create a self-signed certificate and add it to a key store on both servers, so that your certificate is trusted. Issue certificates to each server and sign them with private key of your own CA. Make your servers use their certificates when communicating with the others. Thus you will actually use PKI. In the future, when you get certificates from the real (commonly known) CAs, the only thing you will need to do will be to replace your own self-signed CA certificate by (also self-signed) certificate of a real CA. | {
"source": [
"https://security.stackexchange.com/questions/232190",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/235327/"
]
} |
232,198 | I noticed after encrypting some files with 7zip that they contained the information about the encryption algorithm used AES 256, i want to know if there is a better software that doesn't leave traces about the nature of encryption because with such information this will simplify the task of deciphering by the attacker. I think that the security wouldn't be good if the encryption software is adding metadata or inserting an ID somewhere in header leaving hints about the nature of file (7zip compressed), the version used and the encryption algorithm name. What can you suggest ? thank you | This will mean a lot of unneeded overhead. I'd suggest following: Since you don't have certificates issued by CA, create your own CA. Namely, create a self-signed certificate and add it to a key store on both servers, so that your certificate is trusted. Issue certificates to each server and sign them with private key of your own CA. Make your servers use their certificates when communicating with the others. Thus you will actually use PKI. In the future, when you get certificates from the real (commonly known) CAs, the only thing you will need to do will be to replace your own self-signed CA certificate by (also self-signed) certificate of a real CA. | {
"source": [
"https://security.stackexchange.com/questions/232198",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/169925/"
]
} |
232,223 | Recently I found a leaked database of a company and I do not know how to go about contacting the company. It is so weird because I cannot find any type of Information Security contact email to report this to. It just has a support email. I feel uncomfortable sending the link to the support email. Should I ask for an Information Security email contact from that company or what should I do?
By the way, the company's support email is more of a fraud/customer support address, not technical support or security. Also, what would be a good template to follow to give the best insight into the leaked database? For clarification, I did not penetration-test any website owned by, distributed by, or otherwise related to the company that seems likely to be the originator of the database. I simply found the database using my internet searching abilities. I did not use any special tool or calculated method. I am not a magician who knows where all databases or leaks are; I just stumble across content floating around the internet in places where it should not be. I found the database in a legal manner, not an illegal one. | Don't give security info to non-security people. Use whatever contact method is available to ask for the right security person. Don't give details about what you found until you get someone who will understand it. Then provide the details about what you found. Don't ask for a reward or demand any kind of action, or else you are very likely not to be taken seriously. Just provide help and leave it up to them to deal with. I'm not sure what kind of template you need. Give them the info/steps they need in order to locate the information you found. If you sound too "scripted" you might sound like a scammer. Be human. Be helpful. | {
"source": [
"https://security.stackexchange.com/questions/232223",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/235374/"
]
} |
232,345 | In helping a corporate user log on to eBay, I noticed that on the login page a stream of errors was coming up in the Firefox JS Console about not being able to connect to wss://localhost . This is a bit concerning, obviously. Why would a web site need to connect to a web server running locally? Looking further, I found that this request comes from check.js at this URL: https://src.ebay-us.com/fp/check.js?org_id=usllpic0&session_id=586308251720aad9263fb1e7fffd7373 Is this some malicious script injected into eBay, or do they have a legitimate reason for doing that? Does anybody know? | This is eBay running a local port scan over WebSockets. It has been reported recently: https://twitter.com/JackRhysider/status/1264415919691841536 (original research) https://www.bleepingcomputer.com/news/security/ebay-port-scans-visitors-computers-for-remote-access-programs/ (Bleeping Computer article) I don't think it's malicious, but it is bad practice: it's sneaky and erodes user trust. They do it before you accept any T&Cs that would allow probing of your own computer. Similar tactics are used by banks in more or less open ways (it varies). | {
"source": [
"https://security.stackexchange.com/questions/232345",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21054/"
]
} |
232,363 | I would like to know if there is a way to make a false certificate to bypass a Palo Alto VPN (specifically Global-Protect). The clients only connect by certificate and I don't know if some hacker can create a false certificate and connect to my vpn. I want to emphasize that with the legitimate certificate, they are already inside my network. I would like to know if there is another way to bypass or hack it, just connect through the Global-Protect agent and disable the web access. Thank you very much! | This is ebay running a local port scan over websockets. It has been reported recently: https://twitter.com/JackRhysider/status/1264415919691841536 (original research) https://www.bleepingcomputer.com/news/security/ebay-port-scans-visitors-computers-for-remote-access-programs/ (bleeping computer article) I don't think it's malicious, but it is bad practice, it's sneaky and erodes user trust. They do it before you accept any T&Cs of any kind allowing probing into your own computer. Similar tactics are used by banks in more or less open ways (it varies). | {
"source": [
"https://security.stackexchange.com/questions/232363",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/235542/"
]
} |
232,395 | I downloaded a piece of software that has a login interface. It's a $100-a-month subscription product. I disassembled the software and found that passwords are being sent by combining them with a hard-coded "salt" that everyone can see in the source code (is it really "salt" if it's the same for everyone?), hashing them with MD5 and sending the hash to the server. I hope that the passwords are hashed again on the server side with unique salts for every user, but even if they do, isn't this a breach? Can't an attacker sniff the passwords easily, or do a send-the-hash attack? | It depends. There is not enough information in your question to give specific answers, and probably without knowing the server side, it will be hard to evaluate. Isn't this a breach? I would not call it a breach just yet, because simply knowing the fact that the client hashes the password does not reveal any sensitive information about this or other accounts. Can't an attacker sniff the passwords easily, or do a send-the-hash attack? If an attacker can sniff the communication between the client and the server, it makes no difference whether the original password is sent or a hashed version of it. In both cases, the attacker has the credentials needed. The only thing that could stop them now would be two-factor authentication. The best thing to do against sniffing is enforcing HTTPS for the communication. Is it really "salt" if it's the same for everyone? No, a salt is some random data that is added to a one-way hash function to safeguard passwords in storage. A salt must be random and be unique for each password and should be long enough to protect against creating rainbow tables with all possible salt combinations. It's also not a pepper, because a pepper must be secret. Why would you hash a password on the client side in the first place? Hashing a password on the client side before submitting it to the server simply turns the resulting hash into the new password. While it provides no additional security during the authentication process, it protects the user's password in general (especially if they use it elsewhere as well): the clear text password is never transmitted nor processed by the server. If the server (in contrast to the database) is ever compromised, it will be more difficult to extract the original password. PS: Hashing something with a hash function like md5 or sha is different from encrypting something. Encrypted data can be decrypted; hashed data, however, cannot be restored, as a hash is a one-way function. | {
"source": [
"https://security.stackexchange.com/questions/232395",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105501/"
]
} |
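To make the server-side point above concrete, here is a minimal Python sketch of what a careful server does with whatever string the client submits, whether that is the raw password or an MD5 pre-hash: treat it as the secret and hash it again with a per-user random salt and a deliberately slow KDF. The function names and iteration count are illustrative, not taken from the software in the question.

import hashlib, hmac, os

def store_credential(client_submitted_secret):
    salt = os.urandom(16)                       # per-user random salt, never shared
    digest = hashlib.pbkdf2_hmac("sha256",
                                 client_submitted_secret.encode(),
                                 salt, 600_000)  # deliberately slow
    return salt, digest                          # both go into the database

def verify_credential(client_submitted_secret, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256",
                                    client_submitted_secret.encode(),
                                    salt, 600_000)
    return hmac.compare_digest(candidate, stored_digest)

Even with this in place, only TLS prevents sniffing: whatever string the client actually transmits is exactly what an eavesdropper needs, hashed or not.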
232,488 | I'm building an API with websocket that serializes data through JSON. The App itself is a chat application. I came up with the following structure to send my data: {date: '2020-05-31', time: '14:28:05', text: "Hey!", to: '<id:int>', from: '<id:int>'} The user basically sends a message through the browser and this is received in a websocket server. The from: 'id' would be from the user sending the data whereas the to: 'id' would be to the user the data is being sent. Looking at this I have a very bad feeling. My thoughts; The user using the App would in theory authenticate and that's where he would get his id. Then the receiver would have another id, such that is not the same as the authenticated one (obviously). The server would then look for that id and send the message but I'm not sure if this is secure. I have some aspects that I think must be dealt correctly to protect the app from any attacker: What if the attacker decides to tamper the " from:id" such that it could send arbitrary messages to anyone from any user? What if the attacker builds a script that spams millions of messages by taking advantage of the "to:id" field? Is it possible there is another security issue that I'm not concerned of? | What if the attacker decides to tamper the "from:id" such that it could send arbitrary messages to anyone from any user? Create a session, and use the session identifier as identifier, not the user ID directly. E.g. let user send credentials, and upon successful validation, return a (short lived) session handle, that can be used in future messages. Validate that the session exists and is active, and map it back to user server-side. What if the attacker builds a script that spams millions of messages by taking advantage of the "to:id" field? Rate limit users server side. For instance, disallow sending messages to more than ten different users a minute. This will probably not bother legitimate users, but will hamper spammers efforts. Tuning of the limit may obviously be needed - and it may be an idea to raise it for trusted users, based on behavior, and lower it upon receiving reports about spam from users. | {
"source": [
"https://security.stackexchange.com/questions/232488",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167225/"
]
} |
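A minimal Python sketch of the two suggestions in the answer above: a server-side session handle issued after login (so the client never supplies its own from id), and a per-sender limit on distinct recipients per minute. The deliver() call is a hypothetical placeholder for whatever actually forwards the message.

import secrets, time
from collections import defaultdict

sessions = {}                              # session token -> authenticated user id
recent = defaultdict(list)                 # user id -> [(timestamp, recipient id), ...]

def create_session(user_id):
    token = secrets.token_urlsafe(32)      # unguessable handle returned after login
    sessions[token] = user_id
    return token

def handle_message(token, to_id, text):
    from_id = sessions.get(token)          # map the handle back to the user server-side
    if from_id is None:
        raise PermissionError("invalid or expired session")
    now = time.time()
    window = [(t, r) for (t, r) in recent[from_id] if now - t < 60]
    distinct = {r for (_, r) in window}
    if to_id not in distinct and len(distinct) >= 10:
        raise PermissionError("too many different recipients this minute")
    window.append((now, to_id))
    recent[from_id] = window
    deliver(from_id, to_id, text)          # hypothetical delivery function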
232,569 | I'm wondering, would it be clearer to declare a company-wide requirement with regard to theoretical password entropy, rather than the usual "at least one capital letter, and a small letter, and a special character..."? So we would target a reasonable entropy level that humans can remember, say 60 bits, and then calculate the entropy of each candidate password. This can be calculated dynamically and locally to give the user feedback as needed. Is this not a better, language/region-agnostic way to do a password policy? | The tests for any policy are: people know about it; people understand it; people know if they are complying with it; and people know how to comply with it. Your approach is about 2 out of 4 on that scale for the average user. The better option is to demand randomly generated passwords. That's easy to understand, easy to implement, and easy to provide processes and tools for ("just use this password manager"). With your approach, you are basically trying to get people to be their own random generator. This is going to result in a lot of trial and error as people try to figure out what password will pass the test. This will result in frustration and confusion. But that's assuming that you are writing a policy for the average user and assuming your calculation of entropy is valid (which seems beside the point of your question right now, and I have some serious reservations about it). | {
"source": [
"https://security.stackexchange.com/questions/232569",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140436/"
]
} |
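As an illustration of the "demand randomly generated passwords" option, here is a small Python sketch: for a random string the entropy really is length times log2 of the alphabet size, which is exactly the calculation that cannot be trusted for human-chosen passwords. The 60-bit target and the alphabet are only examples taken from the question.

import math, secrets, string

ALPHABET = string.ascii_letters + string.digits        # 62 symbols

def generate_password(target_bits=60):
    # Each uniformly random character contributes log2(62), about 5.95 bits.
    length = math.ceil(target_bits / math.log2(len(ALPHABET)))
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # 11 characters, roughly 65 bits of entropy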
232,573 | I don't have an education in computer science, I've just become interested in information security and encryption lately. I'm struggling to understand why encrypted web browsing using HTTPS has been so widely adopted but at the same time most emails are unencrypted. From what I understand, when using PGP the exchange of the public keys is a bit of a hassle; the recommended method seems to be to meet in person or get the key from the person's homepage (which uses HTTPS I guess). Here's my naive suggestion of another way, and I would appreciate it if you could point out where I'm wrong: Email companies start to provide the ability for me to upload my public PGP key to their servers. My friends want to send me an email without having my public key beforehand. My friends' email client can get my public key automatically from my email provider, for example fastmail. The downloading of the public key takes place after the "send email" button is pressed. Because the connection to fastmail would be encrypted using TLS, one can be certain that the connection actually goes to fastmail. And one can be certain that fastmail gives my friend the right key that I've uploaded there. If I don't care so much, fastmail could generate the whole keypair for me and store both my private key and public key. That way I can still read my email using webmail. This seems simple, and also much easier when I want to change the key. Just like if I want to change ssh keys, I just generate a new pair and put the public part on the server. So, where have I gone wrong in this idea? Or is there already a solution like this, but people don't care to use it? | The biggest obstacle to your proposal is user adoption and behavior change. Imagine having to explain to everyone what a public key is and how great it is to have. This is just not going to happen. Instead, email security has moved to the mail server side of things, with multiple goals: transport encryption, which is already fairly widely deployed, and sender authentication (for authentication of the sending domain, not the individual user), which is a bit more tedious and relies on considerable knowledge by individual email server admins (as someone who's had to set up SPF/DKIM/DMARC, I can tell you it's not much fun). Your proposal minus uploading your personal key (instead having it generated automatically) is more or less transport security, but without authentication. The authentication part is the tricky one and is what the mentioned acronyms try to do, albeit tediously. As a side note: proper end-to-end email encryption would require you to either 1) trust the web-based mail provider with your keys, or 2) use a local client that knows about your private key. The former is undesirable for many, the latter is inconvenient for most people. Another side note: HTTPS was widely adopted because it is (mostly) invisible to most users, bar the browser warnings. Modern email encryption/authentication is the equivalent of that. But the equivalent of everyone having a key pair for email would be asking people to use client certificates to log into websites. ugh! | {
"source": [
"https://security.stackexchange.com/questions/232573",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/235824/"
]
} |
232,574 | I'm having a hard time drawing the line between Web App pentesting vs a Web App Security Audit. For instance, OWASP Testing Guide could be used for both of those cases. Let's say the pentester and the auditor are working inside the enterprise: they therefore have access to the same resources ie architectures, documentation, etc. I understand that for a pentester, exploiting a vulnerability is the ultimate goal. But, it looks like the process of discovering the vulnerability for both the pentester and the auditor is the same. Consequently, the auditor job may look like a vulnerability assessment. | The biggest obstacle to your proposal is user adoption and behavior change. Imagine having to explain to everyone what a public key is and how great it is to have. This is just not going to happen. Instead, email security has moved to the mail server side of things, with multiple goals: transport encryption . This is already fairly widely deployed sender authentication (for authentication of the sending domain, not the individual user) which is a bit more tedious and relies on considerable knowledge by individual email server admins (as someone who's had to setup SPF/DKIM/DMARC, I can tell you it's not much fun). Your proposal minus uploading your personal key (instead having it generated automatically) is more or less transport security, but without authentication. The authentication part is the tricky one and is what the mentioned acronyms try to do, albeit tediously. As a side note: proper end-to-end email encryption would require you to either 1) trust the web-based mail provider with your keys, or 2) use a local client that knows about your private key. The former is undesirable for many, the latter is inconvenient for most people. Another side note: HTTPS was widely adopted because it is (mostly) invisible to most users, bar the browser warnings. Modern email encryption/authentication is the equivalent of that. But the equivalent of everyone having a key pair for email would be asking people to use client certificates to log into websites. ugh! | {
"source": [
"https://security.stackexchange.com/questions/232574",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/226540/"
]
} |
232,575 | I know that HttpOnly attribute restricts the cookie from being accessed by JavaScript etc. Is there any specific attacks regarding this issue? Can cookie theft and hijacking be counted as attacks towards this issue? | The biggest obstacle to your proposal is user adoption and behavior change. Imagine having to explain to everyone what a public key is and how great it is to have. This is just not going to happen. Instead, email security has moved to the mail server side of things, with multiple goals: transport encryption . This is already fairly widely deployed sender authentication (for authentication of the sending domain, not the individual user) which is a bit more tedious and relies on considerable knowledge by individual email server admins (as someone who's had to setup SPF/DKIM/DMARC, I can tell you it's not much fun). Your proposal minus uploading your personal key (instead having it generated automatically) is more or less transport security, but without authentication. The authentication part is the tricky one and is what the mentioned acronyms try to do, albeit tediously. As a side note: proper end-to-end email encryption would require you to either 1) trust the web-based mail provider with your keys, or 2) use a local client that knows about your private key. The former is undesirable for many, the latter is inconvenient for most people. Another side note: HTTPS was widely adopted because it is (mostly) invisible to most users, bar the browser warnings. Modern email encryption/authentication is the equivalent of that. But the equivalent of everyone having a key pair for email would be asking people to use client certificates to log into websites. ugh! | {
"source": [
"https://security.stackexchange.com/questions/232575",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/235809/"
]
} |
232,592 | Is there any way that I can use the compromised website as the shell? What I am trying to achieve is to get a reverse shell on the machine, not a webshell. The target machine has a website hosted on it that is open to the public. The machine is behind a WAF, so all TCP is blocked and only port 80 is allowed. Is there any tool out there that can act as a PHP intermediate page between my uploaded reverse TCP shell on the website and my local listener? target machine netcat (NC1) <==> php intermediate webpage <====> attacker netcat (NC2) How the implementation should work:
NC1 should bind to a localport in target machine ,the php page would read from the socket and upon recieving a get request from N2 would hand it over. I know it can be programmed but i dont want to take the burden. | The biggest obstacle to your proposal is user adoption and behavior change. Imagine having to explain to everyone what a public key is and how great it is to have. This is just not going to happen. Instead, email security has moved to the mail server side of things, with multiple goals: transport encryption . This is already fairly widely deployed sender authentication (for authentication of the sending domain, not the individual user) which is a bit more tedious and relies on considerable knowledge by individual email server admins (as someone who's had to setup SPF/DKIM/DMARC, I can tell you it's not much fun). Your proposal minus uploading your personal key (instead having it generated automatically) is more or less transport security, but without authentication. The authentication part is the tricky one and is what the mentioned acronyms try to do, albeit tediously. As a side note: proper end-to-end email encryption would require you to either 1) trust the web-based mail provider with your keys, or 2) use a local client that knows about your private key. The former is undesirable for many, the latter is inconvenient for most people. Another side note: HTTPS was widely adopted because it is (mostly) invisible to most users, bar the browser warnings. Modern email encryption/authentication is the equivalent of that. But the equivalent of everyone having a key pair for email would be asking people to use client certificates to log into websites. ugh! | {
"source": [
"https://security.stackexchange.com/questions/232592",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/209787/"
]
} |
232,769 | If there are admin consoles on the internal network and TLS has been disabled, what is the use case for enabling it? The only use case I can think of is when you have untrusted users on the network who could potentially use packet sniffers to sniff the traffic. Enabling TLS would mean monitoring devices would not be able to monitor the traffic, as it would be encrypted. What are your thoughts? Should TLS be enabled for users accessing services (e.g. admin consoles) on internal networks? | "The only use case I can think of is if you have untrusted users on the network..." This, but the problem is that there are untrusted users on the devices on your network whom you don't even know about. This includes: botnet nodes on compromised IoT junk; developers of whatever sketchy apps you installed on your phone or PC; attackers who've already compromised an actual server on your network, possibly a low-value one where security was overlooked; physical attackers who discreetly connected a device to an ethernet jack somewhere; neighbors/wardrivers who guessed/brute-forced your wifi password; any of the above using their devices; etc. A fundamental principle of security is that the network layer is always untrusted. If you follow this you will save yourself a lot of trouble. | {
"source": [
"https://security.stackexchange.com/questions/232769",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/212301/"
]
} |
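As a sketch of how little it can take to put an internal console behind TLS, here is a minimal Python example serving content over HTTPS; admin.crt and admin.key are placeholders for a certificate issued by an internal CA. Monitoring can still be done at the endpoints themselves or by a TLS-inspecting proxy that holds the keys.

import http.server, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="admin.crt", keyfile="admin.key")

httpd = http.server.HTTPServer(("0.0.0.0", 8443),
                               http.server.SimpleHTTPRequestHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)   # TLS for every client
httpd.serve_forever()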
232,873 | A desktop running Ubuntu Linux 14.04 LTS seemed to go slower than usual. top showed that freshclam, the database-update utility for the ClamAV anti-virus program, was working the hardest. freshclam --version shows the version is from yesterday: ClamAV 0.100.3/25835/Sat Jun 6 14:51:26 2020 This program was running under the user clamav, rather than root or my own user. Is it usual for a program to claim a dedicated user account to run under? Is this actually a good sign, because it adds transparency to what happens anyway? Is this actually a bad sign, because such an ad-hoc user can intrude upon other "stuff"? Can I retrieve a list of the programs installed on my computer that claim this right of running under a dedicated user name? Basically, can a user oversee such behaviours? Any common-sense tips that apply to understanding these kinds of situations are appreciated. | ClamAV is a daemon. The Linux Standard Base Core Specification recommends that daemons run under individual User IDs. This way you have fine-grained access control for each daemon, and in case one of them is compromised, the attacker does not automatically have unlimited access to the system (as they would if the daemon ran as root, for example). | {
"source": [
"https://security.stackexchange.com/questions/232873",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/159653/"
]
} |
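To address the "can I retrieve a list" part of the question above, here is a quick Python sketch that lists local accounts which look like dedicated service users: by convention they get a UID below 1000 and a non-login shell, although the exact cutoff and the shells used vary between distributions.

# List accounts that look like dedicated daemon/service users.
with open("/etc/passwd") as f:
    for line in f:
        name, _, uid, _, comment, home, shell = line.strip().split(":")
        if int(uid) < 1000 and shell in ("/usr/sbin/nologin", "/bin/false"):
            print(f"{name:<12} uid={uid:<5} {comment}")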
232,924 | Once an attacker has a shell as your sudoer user (or has just compromised a local process enough), he/she can use one of the many privilege escalation tools to automatically insert themselves, for example, into apt or some other process called by root to gain root access (see also What can an attacker do in this scenario? (unwritable bashrc, profile, etc.) ). What's the point of sudo then, outside of blocking smaller payloads or making it a bit harder? It seems that the focus should be on SELinux and such. Edit: There are 2 sides of this question (I should have been more specific). First, what I initially meant: a standard Linux desktop user. A pretty different answer could be given for a machine administered by someone else. | Sudo has no real security purpose against a malicious third-party. So yes, it is basically useless for that purpose. In the past I believed it was actually a security control to prevent escalation of privilege and make attacks harder, because some people keep on insisting it also has that purpose, but that's actually false. In fact, at the moment you have only got one answer to your question, and that answer is propagating that myth. The only purpose of sudo is to protect you from yourself, that is, to avoid messing up your system by mistake. Gaining all the privileges with one click or one key press might be dangerous, while sudo will at least force you to consciously type your password. And if you (or a program or script) end up touching system files or other users' files by mistake, without consciously using sudo, you will get a "permission denied" notice. So in the end it's just an administration tool, and not actually a security control meant to protect you from an attack. Here's a very basic example of why sudo offers no real protection against malicious code: # Create payload: replace sudo with an alias
payload='
fake_sudo() {
# Simulate a sudo prompt
echo -n "[sudo] password for ${USER}: "
read -s password
echo
# Run your command so you are happy
echo "$password" | sudo -S "$@"
# Do my evil stuff with your password
echo "Done with your command, now I could use $password to do what I want"
}
alias sudo=fake_sudo
'
# Write the payload to the bashrc config file
echo "$payload" >> ~/.bashrc That is a very basic example of code that an attacker could run on your machine. It's not perfect, it doesn't even handle every case (it won't work well if you enter the wrong password), but it just shows you that sudo can be replaced by an attacker. If you run that script, the next time you open your terminal and run sudo you will actually be running fake_sudo . An attacker can replace your programs with aliases, or replace the binary files (putting malicious versions in ~/bin or wherever they can be executed in your path), etc. This is not even a real "escalation of privilege", because a user that can run sudo to become root already has the all the privileges. A real unprivileged user should not have sudo capabilities. To have a real separation of privileges you should run administration stuff on a totally separate account. | {
"source": [
"https://security.stackexchange.com/questions/232924",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81999/"
]
} |
232,938 | I have created a mobile application that monitors accelerometer activity and, based on that, rewards the user if a specific pattern is observed. How can I secure the application against the user themselves, who may try to hack the application so that it reports the pattern I am looking for in order to get the reward? One more thing is that the reward is given to the winner at the end of the competition at a physical agency. Is it possible for the agent, before giving the reward, to check whether the user has manipulated the app, for instance by observing or comparing something on the device? | You cannot. As soon as the user has the mobile device and your application, nothing stops him from decompiling your application, understanding how it works & what data it sends, and replicating it. They can even cheat using some contraption that rotates the phone around and makes your application believe it's a human that is using it. They don't even need to decompile your application; they just have to put a proxy to intercept the requests and understand the protocol. From the comments: If you control the hardware, you can secure the app: Not quite. Apple controls from the processor to the UI of the iPhone, and jailbreaks are a thing. Even if they are controlling every aspect of it, one day someone jailbreaks and roots the iPhone, and loads your app on it. Certificate Transparency, Key Pinning: Not useful if the device is rooted. Checksum, digital signature and integrity verification only work if the OS is not compromised. If the user owns the OS and the device, he can disable OS checks, can edit the binary of the app and change the instructions verifying the signature or checksum. Virtual Machine, Code obfuscation: They make it much more difficult to analyze the code, but code must be executed by the processor. If a disassembler cannot help, a debugger will. The user can put breakpoints on key parts of the code, and in time will reach the function checking the certificate, or the checksum, or any validation checks in place, and can alter anything he wants. So it's pointless to try? No. You must weigh the costs and benefits. Just don't count on the defenses being unbeatable, because every defense can be beaten. You can only make it hard enough that the attacker gives up on putting lots of resources against your app for little benefit. | {
"source": [
"https://security.stackexchange.com/questions/232938",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/236204/"
]
} |
232,939 | I'm trying to analyze a SSLv3 connection. The certificate of the server has a "MD5 with RSA" signature. So I was setting up a local man in the middle attack by setting up a local DNS server that would return a local IP address to the client. That local server would pipe the connection to the real server. However, the client immediately drops the connection, because it verifies the certificate, this suggests the binary is using certificate pinning. I don't have write access to the calling binary, therefore I can't just patch the cert-verification. Is it possible to forge a certificate so the MD5 signatures collide, preventing the client from dropping the connection? I already read about HashClash, that it is indeed possible to create two certificates that have a colliding MD5 signature. But is it possible to do the same with a given certificate? If yes, is it possible in a reasonable amount of time? | You cannot. As soon as the user has the mobile device and your application, nothing stops him from decompiling your application, understanding how it works & what data it sends, and replicating it. They can even cheat using some contraption that rotates the phone around and make your application believe it's a human that is using it. They don't even need to decompile your application; they just have to put a proxy to intercept the requests and understand the protocol. From the comments: If you control the hardware, you can secure the app: Not quite. Apple controls from the processor to the UI of the iPhone, and jailbreaks are a thing. Even if they are controlling every aspect of it, one day someone jailbreaks and roots the iPhone, and loads your app on it. Certificate Transparency, Key Pinning Not useful if the device is rooted. Checksum, digital signature and integrity verification only work if the OS is not compromised. If the user owns the OS and the device, he can disable OS checks, can edit the binary of the app and change the instructions verifying the signature or checksum. Virtual Machine, Code obfuscation They make it much more difficult to analyze the code, but code must be executed by the processor. If a disassembler cannot help, a debugger will. The user can put breakpoints on key parts of the code, and in time will reach the function checking the certificate, or the checksum, or any validation checks in place, and can alter anything he wants. So it's pointless to try? No. You must weigh the costs and benefits. Only don't count on the defenses to be unbeatable, because every defense can be beaten. You can only make it so hard that the attacker gives up putting lots of resources against your app and receiving a little benefit. | {
"source": [
"https://security.stackexchange.com/questions/232939",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/236205/"
]
} |
232,950 | I recently had a conversation with a teacher who argued that in a digital certificate (x509), the public key is used to verify the name of the certificate holder. From my understanding, there are actually 2 public keys at work in this scheme: 1) There's the public key contained in the certificate, which belongs to the certificate owner, and can be used by others to encrypt things that only the certificate owner can decrypt, or be used to check the signature of things encrypted by the owner (i.e the usual public-private key verification). 2) The public key of the trusted certificate authority which signed that certificate, which is not contained in the certificate itself but can be obtained through other means (i.e local certificate stores) to verify if that certificate is indeed OK (by checking the signature of the certificate). The first key can in no way be used to verify the name of the certificate holder, because it would be equal to me claiming I am named Bob and having no proof to it. The second key may be used indirectly to verify the name of the holder, but by verifying the integrity and authenticity of the certificate as a whole (i.e someone else objectively testifying that my name is Bob). But this key is not part of the certificate itself. Through logical assumptions, the only public key in the certificate, which is the first key, cannot be used to verify names. Am I correct in my assumptions in that there is no way of using the public key in the certificate to verify the name of the holder? Or is the teacher right and I am missing some crucial detail? | You cannot. As soon as the user has the mobile device and your application, nothing stops him from decompiling your application, understanding how it works & what data it sends, and replicating it. They can even cheat using some contraption that rotates the phone around and make your application believe it's a human that is using it. They don't even need to decompile your application; they just have to put a proxy to intercept the requests and understand the protocol. From the comments: If you control the hardware, you can secure the app: Not quite. Apple controls from the processor to the UI of the iPhone, and jailbreaks are a thing. Even if they are controlling every aspect of it, one day someone jailbreaks and roots the iPhone, and loads your app on it. Certificate Transparency, Key Pinning Not useful if the device is rooted. Checksum, digital signature and integrity verification only work if the OS is not compromised. If the user owns the OS and the device, he can disable OS checks, can edit the binary of the app and change the instructions verifying the signature or checksum. Virtual Machine, Code obfuscation They make it much more difficult to analyze the code, but code must be executed by the processor. If a disassembler cannot help, a debugger will. The user can put breakpoints on key parts of the code, and in time will reach the function checking the certificate, or the checksum, or any validation checks in place, and can alter anything he wants. So it's pointless to try? No. You must weigh the costs and benefits. Only don't count on the defenses to be unbeatable, because every defense can be beaten. You can only make it so hard that the attacker gives up putting lots of resources against your app and receiving a little benefit. | {
"source": [
"https://security.stackexchange.com/questions/232950",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/236213/"
]
} |
233,135 | There are two ways I can think of doing this: On a system with sudo , by modifying /etc/sudoers . On a system without sudo (such as a Docker environment), by writing a program similar to the below and setting the setuid bit with chmod u+s . apt-get checks real uid, so a setuid call is necessary. ...
int main(int argc, char **argv) {
char *envp[] = { ... };
setuid(0);
execve("/usr/bin/apt-get", argv, envp);
return 1;
} I have two questions: What are the potential vulnerabilities of allowing non-root users to run apt-get ? My goal is to allow people to install/remove/update packages, given that apt-get lives in a custom non-system refroot and installs from a custom curated apt repository. Are there safer ways to allow non-root users to run apt-get on a system without sudo ? | apt-get update -o APT::Update::Pre-Invoke::=/bin/sh From GTFOBins This gives you a root shell on the system. No creating packages and adding fake repos; this will give the user who runs this command easy and simple access to root. So, in answer to your question, you are effectively giving root to every user who has access to this binary. If you are willing to do this, then you might as well just give them sudo access or the root password. | {
"source": [
"https://security.stackexchange.com/questions/233135",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/236489/"
]
} |
233,147 | Yubico offers the YubiKey Nano , a 2FA key designed to be left inside the device more or less permanently. While it does add comfort to be able to just leave it plugged in, what risks would there be if the device was stolen with this still attached? From what I could gather, local device accounts would have the same level of protection as a regular passphrase would provide. Online accounts, depending on the setup, would either have no protection at all (e.g. through a "Remember me on this device" function), or the same protection as a regular passphrase. Is there anything I am missing? | The threat model for the Nano is protecting accounts from remote access, not from direct access from an approved device. You essentially make the device itself the "thing you have" factor with the benefit that the "thing's" properties cannot be stolen remotely (as is the case for private keys, cookies, etc.). Convenient? Yes. Easy to add to your grandmother's laptop and everyone to forget about while still maintaining protection? Yes. Easy to lose? No. Are there "more secure" methods? Yes. | {
"source": [
"https://security.stackexchange.com/questions/233147",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
233,465 | I know nothing about cryptography. How does one encrypt a file in the strongest possible way, such that it can be accessed some years later?
I would prefer it to be fairly resistant to brute force and other possible attacks. I only need some direction. | The strongest possible way to encrypt data is to start with a threat model. What sort of adversary are you trying to protect your data from? What are they willing to do to get it? All reasonable approaches to cryptography start with one. I recommend this approach because, as you start thinking about threat models and researching them, you'll start to realize that security is far more about the human element. Then you can worry about things like how you will secure your key. source: https://xkcd.com/538/ Once you have decided whether you are trying to outwit a state actor while committing treason, or merely trying to protect your diary from the prying eyes of your little sister, you can decide what the best algorithm is. Failing that, go with the flow. Rather than finding out what is the "strongest" encryption, look for what is "recommended" by the security experts for someone who knows nothing of cryptography. Currently AES comes highly recommended. We're quite confident that nobody short of a state actor can break it, and we are reasonably confident that no state actor can break it either. But better yet, don't look for encryption algorithms, look for tried and true packages which are recommended. The application of an algorithm is as important as the algorithm itself. Highly reputable implementations are worth their weight in gold. | {
"source": [
"https://security.stackexchange.com/questions/233465",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/236900/"
]
} |
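As an illustration of "use a tried and true package" rather than picking an algorithm yourself, here is a minimal sketch using the widely used third-party cryptography package for Python (one reputable option among several, not the only correct choice). Keeping the key safe for years is the hard part, exactly as the answer says; the file names are placeholders.

from cryptography.fernet import Fernet     # pip install cryptography

key = Fernet.generate_key()                # store this somewhere safe; losing it loses the data
f = Fernet(key)

with open("diary.txt", "rb") as fh:
    ciphertext = f.encrypt(fh.read())      # authenticated AES-based encryption

with open("diary.txt.enc", "wb") as fh:
    fh.write(ciphertext)

# Years later, with the same key:
plaintext = Fernet(key).decrypt(ciphertext)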
233,597 | I am new to web development and am trying to implement a password reset feature according to the OWASP cheat sheets (the Forgot Password Cheat Sheet). The cheat sheet advises not to send the username as a parameter when the form is submitted and sent to the server. Instead, one should store it in the server-side session. However, I am not sure how I should do that, since for me to be able to store the username in such a way, the user needs to enter his/her username and send it to the server at some point, right? Why not send it together with the form where the user answers security questions? Or am I just understanding this the wrong way? | This is what I usually do: The user asks for a password reset. The system asks for the registered email. The user enters the email and, whether or not the email exists, you say that you sent a reset link. The server stores the email, expiration and reset token in a reset_password table. When the link is accessed, the expiration is checked and a form to reset the password is shown. The user only receives a link with a large random token. | {
"source": [
"https://security.stackexchange.com/questions/233597",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/229591/"
]
} |
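A minimal Python sketch of the flow in the answer above; send_mail() is a hypothetical helper, and only a hash of the token is stored so that a leaked reset table cannot be used directly. The uniform reply hides whether the address is registered.

import hashlib, secrets, time

reset_requests = {}   # sha256(token) -> (email, expiry); stands in for the reset_password table

def request_reset(email, account_exists):
    token = secrets.token_urlsafe(32)
    if account_exists:
        reset_requests[hashlib.sha256(token.encode()).hexdigest()] = (email, time.time() + 3600)
        send_mail(email, f"https://example.com/reset?token={token}")     # hypothetical helper
    return "If that address is registered, a reset link has been sent."  # same reply either way

def redeem_token(token):
    entry = reset_requests.pop(hashlib.sha256(token.encode()).hexdigest(), None)
    if entry is None or entry[1] < time.time():
        return None        # unknown or expired token
    return entry[0]        # the account whose password may now be changed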
233,619 | I don't understand this part of the Rainbow table attack. In all my Google searches, it says that a hacker uses a rainbow table on password hashes. But how does the hacker obtain the password hashes in the first place? I have rephrased this question from a previous question which was closed: How is Salting a password considered secure, when the hacker already has access to user password Database? If the hacker already has the password hashes, can't he just use them to hack the system? | The news is full of examples of leaked databases (this is just the most recent results). The How: The vast majority of cases involve unsecured databases/backups (across pretty much all technologies: S3, mongodb, cassandra, mysql, etc....). These are usually due to configuration errors, bad defaults, or carelessness. What data is leaked: These generally provide at least read-only access to some or all of the data contained in the database, including usernames and hashed-and-salted passwords. These dumps include a lot of private user records. Plaintext passwords (or using a simple hash such as md5 ) are even more problematic because that data can be used in credential stuffing attacks (by trying the same username/password combinations on different websites), potentially accessing even more data. What to do with a password hash: If an attacker has access to a hashed and salted password, they cannot just provide this to the server to authenticate. At login time, the server computes hash(salt + plaintext_password) and compares it with the value stored in the database. If the attacker attempts to use the hash, the server will just compute hash(salt + incoming_hash) , resulting in a wrong value. One scenario that could spell a lot of trouble is client-side-only password hashing. If the client computes and sends hash(salt + plaintext_password) into the login endpoint, then the stored hash can be used to login. This alone shows how dangerous that is to do. There are some algorithms that offload some of the work to the client (such as SCRAM ) but they involve a more thorough client-server exchange to prevent exactly this scenario. Password storage security is worried about attackers deriving the real password from the stored value. It is not concerned with other vectors of attack against the server. | {
"source": [
"https://security.stackexchange.com/questions/233619",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/237064/"
]
} |
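A small Python sketch of the "what to do with a password hash" paragraph above, using a fast hash purely for illustration (real systems should use bcrypt, scrypt or Argon2): replaying the stolen hash fails because the server hashes whatever it receives one more time.

import hashlib

def H(salt, secret):
    return hashlib.sha256((salt + secret).encode()).hexdigest()

salt = "9f8e7d"                              # per-user salt stored next to the hash
stored = H(salt, "correct horse")            # what a leaked database would contain

print(H(salt, "correct horse") == stored)    # True: legitimate login with the real password
print(H(salt, stored) == stored)             # False: the stolen hash just gets hashed again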
233,645 | I have a Microsoft Account linked to a Microsoft Authenticator app for 2FA purposes. Every time I log in, it first sends me the Authenticator request, but I can always click "Other ways to sign in" and then choose "Use my password instead", which then prompts me for the good old password, and logs me in successfully. But doesn't that negate the point of having 2FA at all? I wouldn't expect this mix of cargo cult and security theater from a major corporation. Or did I misunderstand something? | You didn't actually set up 2FA. You set up your authenticator as an alternative method of single-factor authentication. This is clear from the wording in your first screenshot: "... to sign in without a password". If it didn't ask you for a password in the first place, it's probably not 2FA; the password is one of the two factors. The way I read this question it seemed like you'd gotten that prompt after entering your password, because that's when any second-factor authentication prompt would appear, but it looks like that's not what happened. Go to https://account.live.com/proofs/manage/additional and click "Set up two-step authentication" if you actually want 2FA. You will still be able to "remember" trusted devices after you've completed the two-step auth on them, but any time you try to sign in using a new device (or a private browser, etc.) it should ask for both factors. | {
"source": [
"https://security.stackexchange.com/questions/233645",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/111286/"
]
} |
233,785 | Unless someone has my private ssh key, how is leaving an AWS instance open to 0.0.0.0, but only on port 22 via ssh, insecure? The ssh key would be distributed to a small set of people. I prefer to not need to indicate their source IP addresses in advance. I do see another similar question, "SSH brute force entry in aws ec2 instance", which says "If you disabled password based login via SSH, then it is very hard to brute force an SSH login using a private key". Maybe this covers it? I just want to double-check, since in the security world you do not get a second chance. | The answer depends on your risk appetite. Restricting access to the SSH port to only known IP addresses reduces the attack surface significantly. Whatever issue might arise (private key leaks, 0-day in SSH, etc.), it can only be exploited by an attacker coming from those specific IP addresses. Otherwise the attacker can access the port from anywhere, which is especially bad in case of an unpatched SSH vulnerability with an exploit available in the wild. It is up to you to decide how important the system and its data are to you. If it is not that critical, the convenience of an SSH port open to the world might be appropriate. Otherwise, I would recommend limiting access, just in case. Severe 0-days in SSH do not pop up on a daily basis, but you never know when the next one will. | {
"source": [
"https://security.stackexchange.com/questions/233785",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/237301/"
]
} |
233,795 | It's well known that GET requests with ?xx=yy arguments embedded
can be altered in transit, and therefore are insecure. If I change the request to POST, and use HTTPS, then the parameters
are in the body of the message, which is encrypted, and therefore
difficult to hack, correct? Two more cases concern me. Suppose GET style parameters were added
to a POST request - would those parameters be reliably ignored? What about some sort of security downgrade attack? If the URL manipulator
forces HTTPS transactions to fail, and then the client/server "helpfully"
downgrade to HTTP, which would allow the unencrypted POST body to be
manipulated. | TL;DR: HTTPS provides encryption, and it's the only thing protecting the parameters. It's well known that GET requests with ?xx=yy arguments embedded can be altered in transit, and therefore are insecure. If you are not using encryption, everything is insecure: HTTP, Telnet, FTP, TFTP, IRC, SNMP, SMTP, IMAP, POP3, DNS, Gopher... If I change the request to POST... ...it does not change anything at all. and use HTTPS... HTTPS changes everything. Any HTTP request not protected by TLS is not protected. No matter if you use GET, POST, PUT, if it's a custom header, none changes a thing. For example, this is a GET request: GET /test?field1=value1&field2=value2 HTTP/1.1
Host: foo.example
Accept: text/html And this is a POST request: POST /test HTTP/1.1
Host: foo.example
Content-Type: application/x-www-form-urlencoded
Content-Length: 27
field1=value1&field2=value2 What is the difference? On the GET request, the parameters are on the first line, and on the POST, the parameters are on the last line. Just that. The technical reasons behind GET or POST are not the point here. Suppose GET style parameters were added to a POST request - would those parameters be reliably ignored? It depends entirely on the application. On PHP, for example, if the application expects $username = $_POST['username'], sending it as a GET parameter changes nothing at all, as the application will get the POST parameter. What about some sort of security downgrade attack? If the URL manipulator forces HTTPS transactions to fail, and then the client/server "helpfully" downgrade to HTTP, which would allow the unencrypted POST body to be manipulated. Not easy for properly configured servers. If they use the HTTP Strict Transport Security header, it forces the client to only access the site using HTTPS, even if the user forces HTTP and port 80. The browser will helpfully upgrade to HTTPS, not the other way. Even on servers that do not use HSTS headers, if the first access is done via HTTPS, it's not trivial to downgrade to HTTP. The attacker must send a faked certificate, and the client must accept the faked certificate in order for an HTTPS connection to be redirected to HTTP. But if the attacker succeeds at this, he will usually keep using HTTPS, as the client already accepted his fake certificate anyway. | {
"source": [
"https://security.stackexchange.com/questions/233795",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4384/"
]
} |
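A minimal Python sketch of sending the Strict-Transport-Security header mentioned above; server.crt and server.key are placeholder files, and the header only takes effect because the response itself is delivered over HTTPS.

import http.server, ssl

class HSTSHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # Tell browsers to refuse plain HTTP for this host for the next year.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        super().end_headers()

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")
httpd = http.server.HTTPServer(("0.0.0.0", 443), HSTSHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()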
233,920 | I quit my job to start my own SaaS product. I’m now looking to hire my first employee (another developer). I will be taking appropriate legal precautions to protect my IP, but I’m wondering what other reasonable actions that I can take to further protect my code / data. The last thing that I want happen is what happened to Tesla where someone dumped the source code onto iCloud and ran off with it to a competitor. I know that it is practically impossible to prevent this 100% from happening and that I need to make sure that I hire quality people and offer meaningful pay and have the appropriate legal documents signed. Apart from this, what else can I do to protect myself from inside threats? I am pouring in my entire life’s savings into this and I will be devastated to lose what I spent the better part of 2 years coding. Here’s what I’ve thought of so far: Buy a work laptop for them Encrypt the hard drive (like with Bitlocker) Disable all USB ports Create a non-admin / limited user account with no install permissions and just the IDEs (e.g. Visual Studio) installed. I use Windows 10 for most development with the exception of a Mac for the iOS portion of the app development. Install some kind of employee logging software. Disable access to file hosting websites. Somehow detect and stop when a certain folder is being uploaded or copied somewhere? Somehow make the git repository only accessible from that machine. Install some kind of remote admin management system? Azure Active Directory or something? This must be a common problem for businesses but I must be searching for the wrong thing because I can’t seem to find a guide anywhere on this issue. | For the most part this is not a technical problem but a human problem. So while technology has a role to play it has limits. If the employee will be working from home supervision is more difficult. If you'll be monitoring his/her activity you don't want to be in breach of applicable privacy laws. The computer has to be secured obviously but the rest of the environment is important too. If you have a corporate LAN there should be adequate protections like an IDS/firewall. But the equipment is often useless without somebody keeping an eye on the logs and the alerts. Since you mentioned Visual Studio, the developer may need to be at least a local admin to work in optimal conditions. If you cripple their environment they may be tempted and even forced to find workarounds and defeat your security measures which is what you want to avoid. I'm afraid we all have to trust other people and take risks. The more you monitor your employees, the more you make it obvious to them that you don't trust them and make them feel untrustworthy. At some point the surveillance effort becomes counter-productive because you frustrate and demotivate them. They may become less productive, less loyal. Security training may be beneficial too. The employee could be honest and acting in good faith but vulnerable to social engineering, and unwittingly jeopardize the company and its assets. Naïveté can be as dangerous as malicious intent. I would say that many developers lack cybersecurity awareness. Perhaps you should order a penetration test against your company and learn from it. Thus your security posture will improve and you'll be better equipped to fend off attacks. Employees are often the weakest link but you should also consider the threat of hackers and unethical competitors. 
In other words, don't focus too much on your employees, but develop a 360° security approach for your company. Physical security is important too. A lost laptop should be no big deal if the hard drive is encrypted and has a strong password. But your backups should be in a safe place. Consider the risk of burglary. Yes, backups are extremely important. Make sure you have a solid backup plan in place, and test it from time to time. Prepare a disaster recovery plan. What would happen if your office burns with all your computer equipment? You need to protect your source code but also plan for business continuity. Hint: insurance. If you have valuable IP you could consider applying for patents. Again, this is a lawyer's job here. Probably you can find insurance to cover the risk. The question is whether it's worth paying for a low risk. I would also offer shares or some equity in the company. Then your employees have less incentive to go rogue and sabotage your enterprise. To sum up: there are so many possible risks, I think you are putting too much emphasis on the insider threat. You are more likely to get hacked than to sack someone for misconduct.
Your employees must be your allies and considered as such - not as potential foes. | {
"source": [
"https://security.stackexchange.com/questions/233920",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/148970/"
]
} |
234,060 | I recently jumped onto the hypetrain for an unnamed email service and am currently on my way to update all my accounts on various websites to get most of my (future) data off Google's Gmail. During this adventure I came across a couple user-flows of changing your e-mail address which I would like to share (amounts like "many" or "a few" are purely subjective, I did not count): No questions asked The email address is just changed without any confirmation emails, second password check, or spellchecking (two input fields). The email address is the main login method to this account with some sensitive data. Any person with malicious intent will not be stopped from taking over my account if they change the email address and afterward change my password. Confirmation of new email What I feel like the method used by most platforms: You will receive a confirmation email to the new address you provide. This will assure you typed in the email correctly, but will not stop anyone from changing the main login method though. Confirmation through old address Very few platforms send an email to the old address to check if I am the actual owner of this account. If I click the link in the mail or enter a number they send me, the address is changed. Confirmation through old and of new address Just once I had to confirm with my old address that I am the owner of the account and got another email to the new address to check that it does indeed exist. Looking back at it, it feels like the usual UX vs security conflict. While method 1 provides the most comfortable flow, I see the most issues with it, as already pointed out.
Having to confirm the old address and the new one is kind of a hassle, but it's the best way of those listed to keep the account of your users in their own hands. Are there other common methods I am not aware of? What is generally considered best practice? | The problem I see with confirming the old email address is that sometimes people change address because they cannot access the old one anymore. For example, the old address might have expired (and maybe even reassigned to someone else!). Or they might just have forgotten the password to the old address, and have no alternative ways to prove their identity. Or the old address belonged to a company where they used to work, and they don't have access to it anymore. Or it might have been blocked or terminated for for other reasons (violation of ToS, DoSed, filled with spam and practically unusable, etc.) So the way I see it, you have two ways to log in to an account: using your password, or using your email account to request a password reset. Therefore the email account is an alternative way to authenticate. Now, whenever you make changes to the ways you can authenticate, you should verify your identity. And if you can't rely on the old email account (that you are about to change) the obvious solution seems to ask for your current password before you can change the address. Ask for your current password (it's a valid verification of your identity) to be able to change the email address. Verify you have access to the new email address and, if verified, make it the new default one. Maybe do something with the old address that has been changed. What you do with the old account, in my opinion, has some pros and cons. You could decide to: Send a notice to the old email , for example: "Your email address for accessing Website Example has been changed, if you didn't expect this your account might have been compromised, etc." You might also provide a link to recover the possibly compromised account, with a token that expires after some time. The problem with this option is that if the old account has been reassigned to someone else, you would not want to let them know too much information (and definitely not give them a direct recovery option) Just throw away the old email once it's been successfully updated, without sending any notices. The problem with this is that if an attacker manages to steal a user's password, they can take over their account completely and lock them out. Don't send anything to the old email, but keep it in the database for some time, for support or recovery, in case the real owner of a compromised account (with the old email) shows up asking for help. The problem with this option is that the real owner, in case the account is compromised, might not realize it until... how long? | {
"source": [
"https://security.stackexchange.com/questions/234060",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/125927/"
]
} |
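The three-step flow described in the answer above (re-authenticate with the current password, confirm the new address, then optionally notify the old one) can be sketched in a few lines. Everything here is a hypothetical illustration: the function names, the in-memory User record, and the injected send_mail callable are assumptions rather than any particular framework's API, and a real application would use a dedicated password-hashing library such as bcrypt or Argon2 instead of the bare PBKDF2 call shown.

```python
import hashlib
import secrets
from dataclasses import dataclass
from typing import Callable, Optional

def _hash_password(password: str, salt: bytes) -> bytes:
    # Placeholder KDF for the sketch; a real system would use bcrypt/Argon2.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

@dataclass
class User:
    email: str
    salt: bytes
    password_hash: bytes
    pending_email: Optional[str] = None
    pending_token: Optional[str] = None

def request_email_change(user: User, current_password: str, new_email: str,
                         send_mail: Callable[[str, str], None]) -> None:
    # Step 1: re-authenticate with the current password. The old mailbox may
    # no longer be reachable, so it cannot be the thing we rely on here.
    if not secrets.compare_digest(_hash_password(current_password, user.salt),
                                  user.password_hash):
        raise PermissionError("re-authentication failed")
    # Step 2: don't switch yet. Record a pending change and ask the *new*
    # address to prove it exists and belongs to the requester.
    user.pending_email = new_email
    user.pending_token = secrets.token_urlsafe(32)
    send_mail(new_email, f"Confirm your new address with token {user.pending_token}")

def confirm_email_change(user: User, token: str,
                         send_mail: Callable[[str, str], None]) -> None:
    if not user.pending_token or not secrets.compare_digest(token, user.pending_token):
        raise PermissionError("invalid confirmation token")
    old_email, user.email = user.email, user.pending_email
    user.pending_email = user.pending_token = None
    # Step 3 (optional, with the trade-offs discussed above): tell the old
    # address what happened so the legitimate owner can react in time.
    send_mail(old_email, "The login e-mail on your account was changed. "
                         "If this wasn't you, contact support immediately.")
```

Passing send_mail in as a callable keeps the sketch self-contained and testable; in practice the pending token would also carry an expiry time and the change would be persisted in whatever user store the application already has.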
234,160 | I'm seeing a lot of tech support scam videos on YouTube, which made me think: do legitimate tech support companies use remote control for regular customer service calls? I remember calling Lenovo tech support from their website a while back (I double-checked it was their official site because I'm paranoid) and they had to use remote control software to check my PC. I reasoned it was the same as handing over your PC to a repair shop, as long as you know it's legitimate. Now I'm wondering: do they even use this type of software? What are the security flaws/implications of letting them do it? Is it fine as long as we can see our screen and retain control of the cursor? | Yes, it is normal for legitimate tech support to use remote support tools. It's far easier than trying to blindly walk someone through a complicated series of technical steps. Companies like TeamViewer exist for this reason. The risks of the software are: having a persistent "back door" into a system (though most software has security measures to limit this); vulnerabilities in the software itself that could be exploited by others; and a malicious tech support user abusing legitimate access to cause harm. There are several functions in remote support tools besides cursor control that could also create secondary problems, like being able to upload and download files. As long as all that is enabled is "remote viewing" or "screen share", your risks are limited. The more control you give, the higher your risks. | {
"source": [
"https://security.stackexchange.com/questions/234160",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/172653/"
]
} |
234,251 | I've read articles suggesting that passwords will eventually go the way of the dinosaur, only to be replaced by biometrics, PINs, and other methods of authentication. This piece claims that Microsoft, Google, and Apple are decreasing password dependency because passwords are expensive (to change) and present a high security risk. On the other hand, Dr. Mike Pound at Computerphile claims that we will always need passwords (I think this is the correct video). But as this wonderful Security StackExchange thread notes, biometrics are not perfect. Granted, the criticisms are roughly six years old, but they still stand. Moreover, and perhaps I have a fundamental misunderstanding of how biometric data is stored, but what if this information is breached? Changing a password may be tedious and expensive, but at least it can be changed. I'm uncertain how biometric authentication addresses this problem--as I cannot change my face, iris, fingerprint, etc.--or if it needs to address this problem at all. Are those who argue that we can eliminate passwords prematurely popping champagne bottles, or are their projections correct? | First of all, let's keep in mind that vendors of biometric solutions have a vested interest in badmouthing passwords to promote their own products and services. There is money at stake. They have something to sell to you, but that doesn't mean you will be better off after purchasing their stuff. So one should not take those claims from vendors at face value. Moreover, and perhaps I have a fundamental misunderstanding of how
biometric data is stored, but what if this information is breached?
Changing a password may be tedious and expensive, but at least it can
be changed. I'm uncertain how biometric authentication addresses this
problem--as I cannot change my face, iris, fingerprint, etc.--or
if it needs to address this problem at all. This is precisely the biggest problem with biometrics. The compromised 'tokens' cannot be revoked. Breaches have already happened on a large scale. A devastating occurrence that will have consequences for many years to come is the OPM data breach. Faces cannot be protected. They literally are public knowledge. Lots of people have their face on the Internet nowadays. Fingerprints can be lifted off a glass. These are not secrets. On top of that, the collection of biometric data is a formidable enabler for the mass surveillance of individuals. Even the most democratic governments cannot be trusted. Technology also changes the nature of government and social interactions - not always in a good way. We have to consider the trade-offs: what you have to gain versus what you could possibly lose. Is the convenience worth the risk? Not everyone is convinced. So it is not just a technical issue but a societal issue with enormous implications. Hint: China is the benchmark. The rate of false positives and false negatives is also a problem. Some people cannot be enrolled at all because of their physical characteristics. A password is unambiguous: you either know it or you don't. Biometrics is a calculation of probability (a small code illustration of this difference follows this entry). Relying on biometrics alone is not wise for critical applications. Hence the emergence of multi-factor authentication. As an example, 3-factor authentication would combine: something you have (for example a smart card), something you are (this is where biometrics comes into play), and something you know (for example a password). It is fair to say that biometrics are gaining momentum in some markets/applications without eliminating passwords altogether. It does not have to be a zero-sum game. | {
"source": [
"https://security.stackexchange.com/questions/234251",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/234275/"
]
} |
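To make the "unambiguous password versus probabilistic biometric" point in the answer above concrete, here is a toy comparison of the two decision models. It is purely illustrative: the cosine-similarity matcher and the 0.93 threshold are invented for the example and do not describe how any real biometric product works.

```python
import hashlib
import hmac

def password_matches(stored_hash: bytes, salt: bytes, attempt: str) -> bool:
    # Exact yes/no decision: the derived hash either matches or it doesn't.
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(stored_hash, candidate)

def biometric_matches(enrolled: list, sample: list, threshold: float = 0.93) -> bool:
    # Probabilistic decision: score how similar a fresh sample is to the
    # enrolled template and accept anything above a threshold. Lowering the
    # threshold increases false accepts, raising it increases false rejects;
    # no setting eliminates both.
    dot = sum(a * b for a, b in zip(enrolled, sample))
    norm = (sum(a * a for a in enrolled) ** 0.5) * (sum(b * b for b in sample) ** 0.5)
    return (dot / norm if norm else 0.0) >= threshold
```

The first function can only answer "correct" or "incorrect"; the second always trades false accepts against false rejects through its threshold, which is exactly why the answer treats biometrics as a calculation of probability rather than a secret you either know or don't.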