Columns: source_id (int64), question (string), response (string), metadata (dict)
123,045
Note: I'm not asking whether this password scheme is the best one (of course it isn't); I'm asking about its theoretical or practical security relative to the optimal password scheme, given the algorithms commonly used in authentication and encryption. As I say below, mathematically this scheme (as I've defined it here) is as secure as what people are told to do (but rarely do; though that is irrelevant here), but I don't know whether there are weird aspects of authentication or encryption that invalidate a mathematical analysis based on entropy alone.

So I had this interesting thought. Since password-protected systems often require users to change their password frequently, many people just increment a counter appended to some prefix, such as: awefjio;1, awefjio;2, ... I wonder whether using such a family of passwords is as secure as choosing a completely random new password each time, as long as the prefix is uniformly randomly chosen with entropy equal to that of each random new password plus about 10 bits. I ignored the extra bits from the counter, since it might be guessable within a small error margin given the frequency at which users are required to change their password. Mathematically this password scheme is good enough, because the extra 10 bits forces the attacker to spend up to 1024 times as long cracking the password as for the completely random one, assuming one does not change the password more than 1024 times. (Here I'm assuming the attacker somehow has a hash of the password, including any salt needed.)

Now I know one obvious disadvantage to this, which is that an attacker who somehow gets hold of the current password will know all previous passwords. Nevertheless, this seems completely irrelevant in most practical situations, where once logged in the user has full access to his/her account and never needs the old passwords. Hence my question: is there any concrete reason why users should not use this scheme for changing passwords? I'm guessing that if the answer is positive, it will depend on the authentication or encryption algorithms involved, in which case I'm fine with restricting the question to the most commonly used algorithms.

To be clear, I'm only asking about the situation where users have no choice but to change their passwords at some fixed frequency, and do not have the option to use a password manager (such as on a workplace computer). They therefore have to decide on some scheme. This question is about whether one particular scheme is better than another. Choose a fixed constant n. I have in mind n ≥ 64 at minimum, if that matters.

a. The first scheme is to choose a new independent random string with n-bit entropy every time one needs to change the password.

b. The second scheme is to choose a fixed random string with (n+10)-bit entropy and append a counter that is incremented every time one needs to change the password.

The question is whether (b) is at least as secure as (a) for common types of password-protected systems.
If an attacker has found out your password, he can access the system up until you change the password. Changing passwords often prevents attackers who already have your password from having undetected access indefinitely. Now, if your password is secret-may16 and the attacker is locked out when you change your password, he is certainly going to try secret-june16 as the password. So changing the password in a predictable way is not secure.
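To make this concrete, here is a minimal, hypothetical sketch (not from the original post) of how an attacker who has learned one old password in the "prefix + counter" family could enumerate the obvious follow-up candidates. The regular expression and candidate count are illustrative assumptions, not a real attack tool.

```python
import re

def candidate_next_passwords(leaked_password: str, max_counter: int = 1024):
    """Given one leaked password of the form <prefix><counter>, guess likely successors."""
    match = re.match(r"^(?P<prefix>.*?)(?P<counter>\d+)$", leaked_password)
    if not match:
        return []  # no trailing counter to increment
    prefix = match.group("prefix")
    start = int(match.group("counter"))
    # Every later password in the family is just the prefix plus a larger counter,
    # so the attacker's search space collapses to at most `max_counter` guesses.
    return [f"{prefix}{n}" for n in range(start + 1, start + 1 + max_counter)]

print(candidate_next_passwords("awefjio;1")[:5])
# ['awefjio;2', 'awefjio;3', 'awefjio;4', 'awefjio;5', 'awefjio;6']
```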
{ "source": [ "https://security.stackexchange.com/questions/123045", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102172/" ] }
123,058
I am trying to see if a webapp transmits passwords in some form of cleartext. The problem is that the web app is NOT using HTTPS! This is why I am wondering whether my password is sent in clear text. I've found a cookie in my browser containing the following: usr=waytodoor&mdp=3C5115ECB18725EBFFF4BB11F9D17798358477D6E11E0188D54BAB80C86D38D41A16F76D8D0B253445774484FB1AE22E9FC13E257CFB9E7B7466F1BA5417EA004FCBB4E7965CA571819146A148EED15A2AEA6E47D9DF338B534264FBC99A57952212629BA79BB34BE9C8C60B73F1F681EE872C64 From what I can tell, usr is my user name, and mdp ("mot de passe", French for password) relates to my password. So, is the mdp field of the cookie my password (and, if that's the case, how could I "decrypt" it?), or just some kind of token? The password is 7 chars long. I tried to convert it from hex or from base64, but no luck. EDIT: Included a statement that HTTPS is not being used. EDIT 2: The website is a website for students. Passwords are given by the school at the beginning of the year. We can't change them. I suspect that some students are ARP-spoofing the school network. I try to disconnect as fast as possible from the website to invalidate the cookies, but I wanted to know if they can find the password from the cookie.
There are two reasons to ask if your password is being encrypted: You are worried about the security of the site. You are worried about the security of your password. Regarding site security, with no HTTPS, there is effectively none. You should consider every communication with the site as public and assume that an attacker can pretend to be you. Just use the site with care. Regarding the security of your password, without SSL, it doesn't really matter. Someone can steal your session cookie and pretend to be you without knowing your password. So be sure not to reuse the password on other sites (or reuse any password, ever) to prevent a password exposure on this site from compromising your accounts on other sites. Edit: In response to your concern about ARP spoofing, without SSL it may be possible for them to establish a MITM. Once they do that, they can see the cookie. Without deeper inspection of the web site, I cannot tell you if the cookie leaks your password. Perhaps it is securely encrypted, perhaps not. That said, once they have a MITM, they can alter the JavaScript that is sent to your browser. This would allow them to alter what is sent on the wire, thereby getting your password. And, while I can't be certain without further examination, that cookie looks to me like a pass-the-hash vulnerability. If that is the case, then there is no need for them to steal your password, as the cookie's value is as good as a password. All of this boils down to: without SSL, there is no security.
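As a side note, the asker's hex/base64 checks can be reproduced in a few lines of Python. This is only a sanity check, not from the original post, under the assumption that the mdp value might be a plain encoding of a 7-character password; it cannot tell you whether the value is actually ciphertext, a hash, or a token.

```python
import base64
import binascii

mdp = ("3C5115ECB18725EBFFF4BB11F9D17798358477D6E11E0188D54BAB80C86D38D4"
       "1A16F76D8D0B253445774484FB1AE22E9FC13E257CFB9E7B7466F1BA5417EA00"
       "4FCBB4E7965CA571819146A148EED15A2AEA6E47D9DF338B534264FBC99A5795"
       "2212629BA79BB34BE9C8C60B73F1F681EE872C64")

# A 7-character password written as hex would only be 14 hex digits long;
# this value is 232 hex digits (116 bytes), so it is clearly not a simple
# hex encoding of the password itself.
print(len(mdp), "hex digits =", len(mdp) // 2, "bytes")

raw = binascii.unhexlify(mdp)      # decodes, but the bytes are opaque binary
print(raw[:16])

b64 = base64.b64decode(mdp)        # also "decodes", but again to meaningless bytes
print(b64[:16])
```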
{ "source": [ "https://security.stackexchange.com/questions/123058", "https://security.stackexchange.com", "https://security.stackexchange.com/users/82691/" ] }
123,170
Is it a bad idea to post a photo of your keyboard to social media? Can I look at a photo of a keyboard and determine the password of an account? Assuming a certain (set of) password(s) is the most commonly typed character sequence on a given keyboard: Is the resolution in the photo of that keyboard sufficient to determine the most frequently used set of keys, by analyzing the grease patterns on them? If I know the most frequently used keys, is a brute force attack now feasible, since I can limit the size of the dictionary? This question is inspired by a time I saw a door protected by a numeric keypad, where the paint was missing on three of the keys. The room number was three digits, none of which had paint on their respective keys. Unsurprisingly, the combination was the room number reversed.
In some cases yes, you can guess the most frequently used keys by the wear marks. That's how I know that apparently I use the L, M, N, A and E keys a lot - those keys are now just black, the letter is faded. And one special key being significantly more used than the others - unless it's "{", "}" or ";" and you happen to be a programmer - could allow an attacker to include it in a brute-force attempt, or to exclude other keys (the worn keyboard pictured in the original post is definitely NOT mine, but still). But most people don't use the keyboard for just their passwords, and the wear pattern is also influenced by the stroke direction, angle and pressure - keys farther down the keyboard will be pressed differently from those nearer. The keys I wear out faster appear to be exactly under the hand that controls them, and now that I've noticed, I hit them harder than the rest (OK, I'm also a horrible typist). The keyboard might hint (weakly, at that) at what the most frequently typed word, or anagrams thereof, might be. But that's not the same as your password except in very specific cases (e.g. an entry keypad). In even more specific cases, such as a heat-conducting entry keypad, or a FIR or strong-UV picture taken immediately after typing so that residual heat, fluorescence or phase interference from skin oils can be appreciated, you might be able to get something. But an ordinary picture conveys no such information. So my opinion would be that keyboard pictures are mostly harmless. On the other hand, I sometimes see Post-Its with letters and numbers on them attached to monitors and on boards behind selfied people, so I'd also say that it's always a very good idea to review the photos (as well as whatever else) you post to social media, looking at the goods with an attacker's eye:

- inside views of (broken/defective/jimmied?) locks and/or brand names useful to research approaches
- hints about location (this might be relevant to determine whether you're holidaying abroad, and to estimate how long your home will be vacant)
- valuable items (which might give people ideas either about them, or about your wealth, which also could give people ideas)
- discrepancies between what's shown and what you told anybody who might get to know, such as an employer, insurance company, significant other(s), etc.

At least once, some months after this answer was posted, the last case actually happened.
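To quantify the "limit the size of the dictionary" point from the question, here is a small illustrative calculation (my own addition, with assumed numbers): if wear marks reveal which keys are used, the attacker only has to try arrangements of those keys instead of the full keyspace.

```python
from itertools import permutations

# Keypad example from the question: a 3-digit code, and paint worn off 3 keys.
full_keyspace = 10 ** 3                      # any 3 digits: 1000 codes
worn_keys = "739"                            # hypothetical worn digits
candidates = [''.join(p) for p in permutations(worn_keys, 3)]
print(full_keyspace, "->", len(candidates), "candidates:", candidates)

# Keyboard example: a 7-character password, but only 5 keys show heavy wear.
full = 26 ** 7                               # lowercase letters only, for simplicity
reduced = 5 ** 7                             # characters drawn from the 5 worn keys
print(f"search space shrinks by a factor of {full / reduced:,.0f}")
```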
{ "source": [ "https://security.stackexchange.com/questions/123170", "https://security.stackexchange.com", "https://security.stackexchange.com/users/109774/" ] }
123,234
After that case in which the Brazilian government arrested a Facebook VP because end-to-end encryption and the lack of server-side message storage in WhatsApp made it impossible to prove a connection with a drug case, it's become pretty common for friends of mine to start conversations about what cryptography is and why we should use it on a daily basis. The same applies to the iPhone terrorist encryption case in which the FBI broke in. For non-techie friends, it's easy to understand the basics of cryptography. I have managed to explain to them the basics, public key vs. private key, what end-to-end encryption means during communication (your data is not stored encrypted, but it is "scrambled" during data exchange), all the core concepts without entering into more technical terms like AES, MD5, SSL, PGP, hardware encryption acceleration, TPMs, etc. They like to have encryption on their phones, but they always come up with the following concept: "If terrorists/criminals could be caught by not having cryptography in our world, I would not blame data surveillance by governments and companies, nor the lack of cryptography in our communications/data storage." I explained that this point of view is somewhat twisted (a knife can be used to commit crimes, but its primary use is as a tool), but I didn't keep their attention. Is there a better way to explain the value of cryptography to end users in our modern world? (The Snowden and Assange stories seem like fairy tales to them too.)

Compendium: some of the explanations/concepts that didn't work so far:

Would you let the government have a copy of your house key? People tend to separate data from house access, and they clearly would say "no, I do not want the government to have a copy of my house key and watch me doing private stuff. But if they are looking for a terrorist/criminal, it's fine to break the door." For them, it's okay since they don't break into your house while you are pooping. The existence of a "master key" in the encryption world is fine to them: "My information is encrypted, but it could be turned into plain text again in case of terrorism/crime."

Would you let others trace your life based on what you do online? "But Google already does that based on emails and searches..." This mostly shocks me, because they are "with the flow" and they aren't bothered by data mining. Worse, people tend to trust Google way too much.

What about the privacy of your communications? What if you are talking dirty with your boy(girl)friend? "I don't talk about things that would harm others (criminally speaking), so I don't mind being MITM'd." Again, it's fine to them if a conversation about their sexual routine is recorded, if the intent is to investigate criminal activity in their city.

The knife paradox. You can see on their faces that this is a good one, but instead they say that "knives aren't as dangerous as secret information being traded between criminals, so it's okay that knives are misused by criminals sometimes".
"If lack of encryption allows FBI to catch terrorists, then lack of encryption allows criminals to loot your emails and plunder your bank account." The rational point here is that technology is morally neutral. Encryption does not work differently depending on whether the attacker is morally right and the defender morally wrong, or vice versa. It is all fear-driven rhetoric anyway, so don't use logic; talk about what most frightens people, personally. And people fear most for their money.
{ "source": [ "https://security.stackexchange.com/questions/123234", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
123,401
Many Android devices, including the Google Nexus line, are now receiving monthly security patches via OTA updates, accompanied by the Android Security Bulletins. However, these updates are often released in what is known as a "staggered roll out," where the update becomes available to more devices (of the same model) over time, instead of becoming available to all users instantaneously. I understand that for feature updates, gradual roll outs allow errors or bugs to be fixed before all users receive the new software. However, for security patches, wouldn't a staggered release make it much easier for blackhat hackers to use the now-public vulnerabilities against users whose devices have not yet received the OTA, even though a patch for their device model is already available? For example, I have a Google Nexus phone that is supposed to be among the first to receive all Android updates. The latest Android security update was publicly released on May 4, 2016, along with its source code; it is already May 16, 2016, and my Nexus device is still telling me that my "system is up to date," even when I click the "check for update" button. Of course, I can download the latest firmware images and flash them manually; but shouldn't security updates be made available to all devices of the same model as quickly as possible once they become public? Edit: Thank you for the very thoughtful responses. While this question is intended to be widely applicable to different devices, my specific concern is the intentional "staggered roll out" to identical devices of the same model, once a patch has already been developed for that specific model. For example, the May 2016 security update was released for the Google Nexus 6P at the beginning of the month, but as of May 16 not all Nexus 6P devices have received the OTA update. Google releases security patches for Nexus devices every month, and each month some devices receive them a few weeks later than other devices of the same model.
Excellent question. Yes, your understanding is correct, as is your rationale behind it. Staggering roll outs for new features often makes good sense. Staggering roll outs for security patches is rarely a good idea. As you pointed out, this gives even more opportunity for the vulnerabilities to be exploited. Perhaps even more importantly, the patches can be quickly reverse engineered to develop exploits in a rapid fashion. Microsoft publicly releases its patches on the second Tuesday of the month (and also sometimes on the fourth Tuesday). This has commonly been referred to as "Patch Tuesday". There's a reason we call the next day "Exploit Wednesday". It's unfortunate that a significant chunk of the Android ecosystem has not learned from this phenomenon.

Updates: Several knowledgeable people have pointed out the potential impact on internet infrastructure, including fears of overloading the entire internet. The volume of internet traffic is monumental; security patches, even extremely large ones, are tiny drops in the bucket. Microsoft releases large patches to hundreds of millions of users on the same day each month, and they have yet to "crash the internet". Netflix, YouTube, and Twitch stream videos to millions of people every day, and even with their combined traffic, they have yet to "crash the internet". On the other hand, Android patches are predominantly (but not exclusively) delivered to wireless users. There are solutions to any potential issues:

- Provide users with a choice of when to download the patches. This provides numerous benefits: it does not disrupt the user's workflow; it creates traffic staggering due to human interaction and decision variability; it allows the user to wait until they are connected to a higher-bandwidth system (perhaps at work, their university, or home WiFi); and it allows the user, at their own risk, to wait and see if others report problems with the patches.
- When distributing patches to specific regions known to have limited infrastructure that could be impacted, stagger the patches over the minimal number of days needed to avoid overloading that infrastructure.

In regard to the Google Nexus 6P security updates specifically not being released to all users promptly, that's simply a poor choice by Google that is not in the best interest of their customers. Compared to the massive volume of internet traffic, those patches are minuscule. On top of that, that device is relatively rare in the Android ecosystem. This further supports the statement that releasing the patches to all customers at once would not harm any internet providers. Even the entire Google Nexus product line comprises only a tiny part of the Android world. As a product line, however, there could be a little impact on infrastructure in select regions. As such, the following methodology, combined with the recommendations outlined above, will minimize infrastructure impact while maximizing patch distribution:

- Release zero-day exploit patches immediately
- Release scheduled updates on different days each month, a different day for each product
- If a product has significant market share that could reasonably impact infrastructure in a region, stagger the roll out over the minimal number of days required to avoid infrastructure overload, in that region only

Finally, according to your statements, it has been over two weeks since Google initially released those patches for the Google Nexus 6P. That's more than enough time to know if their patches are causing havoc. I have found no documentation from Google recognizing or apologizing for a bad batch of patches, nor anecdotal evidence of any serious problems. One could make the argument that staggering patches over a few days could possibly be reasonable in order to detect flawed patches and to reduce traffic load. But leaving customers unpatched for weeks is unreasonable, unnecessary, and not an effective policy from an information security standpoint. In conclusion, based on the statements above, and your statement that Google has not rolled out the security patches to your device, my conclusion is that Google, by not delivering security patches to all affected Google Nexus 6P customers, is making a poor decision and doing a disservice to their customers.
{ "source": [ "https://security.stackexchange.com/questions/123401", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46454/" ] }
123,461
I'm working on the onboarding functionality of my web app and at some point I'm asking a new user to choose a username. When the user gets to this point, I have his/her email and a cellphone number that is validated with an SMS and he/she has passed Google's no CAPTCHA reCAPTCHA. I need to check if the username one chooses already exists and if it does, then the user is prompted to choose another one. Upon browsing the web, I came across some pages that say implementing such functionality also provides attackers with a tool to check if a username exists. What are the best practices for checking if a username exists?
This source says that it is almost impossible to avoid user enumeration in this situation and delaying an attacker is the best you can do: If you are a developer you might be wondering how you can protect your site against this kind of attack. Well, although it's virtually impossible to make an account signup facility immune to username enumeration, it is however possible to avoid automated username enumeration attacks against it by implementing a CAPTCHA mechanism. However, I might have missed something - I'm also curious if anybody else has a better solution.
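To illustrate what "delaying the attacker" can look like in practice, here is a minimal, hypothetical sketch of a username-availability check that throttles repeated lookups per client. The thresholds, the in-memory store, and the function names are my own illustrative assumptions, not an established best practice or an API from the original post.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_CHECKS_PER_WINDOW = 5

_recent_checks = defaultdict(deque)   # client id (e.g. IP) -> timestamps of checks
_taken_usernames = {"alice", "bob"}   # stand-in for a real database lookup

def username_available(client_id: str, username: str) -> bool:
    """Return True if the username is free; raise if the client is checking too often."""
    now = time.monotonic()
    checks = _recent_checks[client_id]
    while checks and now - checks[0] > WINDOW_SECONDS:
        checks.popleft()               # forget checks older than the window
    if len(checks) >= MAX_CHECKS_PER_WINDOW:
        # Too many lookups: make bulk enumeration slow and noisy
        raise RuntimeError("rate limit exceeded; require CAPTCHA or back off")
    checks.append(now)
    return username.lower() not in _taken_usernames

print(username_available("203.0.113.7", "carol"))   # True
print(username_available("203.0.113.7", "alice"))   # False
```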
{ "source": [ "https://security.stackexchange.com/questions/123461", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9708/" ] }
123,672
Recently, I had a Mac which fried its video logic board. Luckily, Apple had concluded that this was a design flaw and was fixing the affected models for free (see more here). However, I did not find this page for a while, and during that time had to think about recovering my data. So, I looked around the interwebs and found single-user mode. When the computer is off, press the power button while holding down the Command and S keys. Keep holding them down, and instead of booting to the Apple loading screen, it boots to the underlying Unix terminal. Once there, you can enter the following commands:

mount -uw /
cd /Users/
ls

And all of the users' home folders are displayed. Continuing to cd into these folders and ls to view contents, you can browse all of the users' files without needing a password. I then found that you are also able to plug in a USB stick and copy files to it (or from it), or perform actions on the files such as move and delete. While this was helpful for recovering data from my fried Mac, how is this a good idea? If I ever got hold of a friend's MacBook and it was locked, I could just shut it down, boot into single-user mode and mess with their files - or even make a copy of them to a USB stick for later use. Macs are used by many people, a lot of whom have very important files that they need to protect. This obviously isn't a bug, as Apple has a support article on how to enter single-user mode. I also know that one of the original purposes of single-user mode is to reset your password if you lost it, but giving access to the entire computer through the command line does not seem like a good way to go about it. So, is this a problem? Is single-user mode bad? As far as I can see, it is a security hole, but I could be missing something.
Physical access is total access, right? How is this any worse than a boot CD or yanking the hard drive and popping it into another system? Not that I'm a fan of OSX or this particular feature, but if someone has physical access to a computer with an unencrypted disk, they have access to everything on that disk anyway, so single user mode doesn't make that any worse, either.
{ "source": [ "https://security.stackexchange.com/questions/123672", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99577/" ] }
123,706
I'm currently writing a web application, and my client asked me if it would be possible to suggest a valid URL to the user when they accidentally make a typo in the URL bar. An example would go like this: Bob navigates to 'https://www.example.com/product'. The web server is unable to find the route '/product', but knows that the route '/products' does exist. The web server suggests that Bob navigate to '/products' instead. Bob navigates to '/products' and continues browsing the website. This example would give Bob a better user experience. However, it led me to wonder whether this is considered bad practice, as the server might expose URLs the admin of the website might not want to show publicly.
If Bob is trying to type products and mistypes product, he already knows there's a URL in the website for products and so you're not telling him anything he doesn't know. If you don't suggest URLs that shouldn't be public, you won't have any issues. Why use a 404 message though, and not do an immediate redirect?
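One way to get the convenience without leaking anything sensitive is to only ever suggest routes from an explicit allowlist of public URLs. The following is a minimal sketch of that idea using Python's standard library; the route list and cutoff value are illustrative assumptions, not part of the original question.

```python
import difflib

# Only routes that are meant to be public go in this list; admin or internal
# routes are simply never candidates for a suggestion.
PUBLIC_ROUTES = ["/products", "/pricing", "/contact", "/about"]

def suggest_route(requested_path: str) -> str | None:
    """Return the closest public route to a mistyped path, or None if nothing is close."""
    matches = difflib.get_close_matches(requested_path, PUBLIC_ROUTES, n=1, cutoff=0.8)
    return matches[0] if matches else None

print(suggest_route("/product"))     # '/products'
print(suggest_route("/admin-pnel"))  # None -- internal routes are never suggested
```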
{ "source": [ "https://security.stackexchange.com/questions/123706", "https://security.stackexchange.com", "https://security.stackexchange.com/users/59639/" ] }
123,717
Today I was watching a video on 'Ethical Hacking' where, while discussing hardware attacks, the narrator said: "Removing RAM or components from a desktop or a laptop" (a screenshot of the slide accompanied the original post). I understand that removing stuff like storage drives is a security risk, but removing RAM? The most it can do is slow down the system, so how else is that a security risk?
RAM is used to store sensitive non-persistent information in a lot of cases. Encryption keys would be a common example. Sometimes it is possible to remove RAM and place it into another device to dump the contents - often with the aid of liquid nitrogen. For more information, see the Wikipedia article for Cold Boot Attack .
{ "source": [ "https://security.stackexchange.com/questions/123717", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69385/" ] }
123,946
Let me be clear first and foremost: I do not think installing a backdoor in security algorithms is a good idea. Backdoors undermine trust in the software and in the company that provides the service. That being said, I do agree that encryption provides a certain measure of protection for criminals and malicious actors against legitimate inquiries by authorities, which in turn can lead to dangerous situations. I am also aware that cryptography is an INCREDIBLY complex subject matter, which should be left to professionals with many years of experience. That said, in the question below, I will attempt to leave the creation of the cryptographic algorithm to the pros. Now, the main question. Imagine a scheme like this:

1. A new algorithm is developed with security equal to the current standards. In effect, the algorithm that replaces AES would be just as hard to crack as AES if you do not know the key.
2. This algorithm generates one extra unique decryption key when used. This key is then sent via a secure channel (i.e. HTTPS or equivalent) to an NGO with the sole duty of guarding these keys. As soon as the tool gets confirmation that the key has been delivered, the tool securely deletes it. The key is always different and strong enough that brute-forcing is not feasible. In addition, the encryption software will require a usable internet connection to the NGO when encryption is started, to ensure that the key can be sent.
3. Once the key arrives, it is stored in an offline, airgapped database that can only be accessed from a single room with rigorous safety measures. In addition, the database and the machine it is located on have tamper protection, similar to the tamper protection on bank transports: any access that's out of the ordinary, like too many requests within a certain period or too many faulty requests, and the machine gets wiped.
4. When a legitimate law enforcement organization has need of a key to decrypt, it sends a formal request to the NGO. The NGO first analyzes the request based on its importance. The NGO allows decryption only when the suspect is strongly incriminated by other evidence, and only in the case of terrorism, murder or abuse of a minor (which are probably the only widely accepted reasons for public opinion).
5. If the NGO allows decryption, a trusted employee of the NGO goes to the room the database is accessible from and downloads the key onto a read-only medium with similar tamper protection to the database. This medium is then handed over to the law enforcement organization that originally requested it. At this point, normal law enforcement will take over.

Assuming all of the above is feasible, what problems can arise from this system?
In addition to the points mentioned by Lucas Kauffman, I would elaborate on point two: "2. This algorithm generates one extra unique decryption key when used. This key is then sent via a secure channel (i.e. HTTPS or equivalent) to an NGO with the sole duty of guarding these keys. As soon as the tool gets confirmation that it is delivered, the tool securely deletes this key." What would stop someone from implementing the algorithm but omitting the part where the second key gets sent to the NGO? Such an implementation would still output the normal key, making it completely compatible with other users of the cryptosystem. The only way to prevent this would be to make the algorithm closed source and safeguard its implementation from reverse engineering through intense obfuscation. But nobody in the security community who isn't completely out of their mind would ever trust an algorithm which isn't open to public peer review. Also keep in mind that the people law enforcement wants to find are those who are already breaking the law. Criminals would have no qualms about encrypting their information with any of the other cryptosystems which are currently available, even if doing so were declared illegal. Unless, of course, the punishment for using non-government-approved encryption software is just as harsh as the punishment for terrorism, murder or abuse of a minor, but that would be hard to justify IMO. That means you would create an expensive key escrow infrastructure and put a legal shackle on millions of citizens and companies regarding the software they are allowed to use, without affecting any of the people you actually want to affect.
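Here is a minimal sketch of the answer's core point, using symmetric encryption from the widely used Python `cryptography` package as a stand-in for the hypothetical algorithm. The `send_key_to_ngo` function is a made-up placeholder; the point is that a client that simply skips that call still produces ciphertext that is indistinguishable from, and fully interoperable with, the "compliant" version.

```python
from cryptography.fernet import Fernet  # pip install cryptography

def send_key_to_ngo(key: bytes) -> None:
    """Placeholder for the escrow upload; a rogue client just never calls this."""
    print("escrow copy sent (pretend this went over HTTPS to the NGO)")

def encrypt_compliant(plaintext: bytes) -> tuple[bytes, bytes]:
    key = Fernet.generate_key()
    send_key_to_ngo(key)                 # the escrow step is a bolt-on side effect
    return key, Fernet(key).encrypt(plaintext)

def encrypt_rogue(plaintext: bytes) -> tuple[bytes, bytes]:
    key = Fernet.generate_key()
    # ...escrow step silently omitted; nothing about the output reveals this...
    return key, Fernet(key).encrypt(plaintext)

for encrypt in (encrypt_compliant, encrypt_rogue):
    key, token = encrypt(b"meet at dawn")
    # Any recipient who has the key can decrypt either ciphertext identically.
    assert Fernet(key).decrypt(token) == b"meet at dawn"
```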
{ "source": [ "https://security.stackexchange.com/questions/123946", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34161/" ] }
124,002
We are generating random numbers 16 digits in length. One option that was put forward was to generate four random numbers of 4 digits each and concatenate them instead of just generating a single 16 digit random number. The reason accompanying the suggestion was that it would be harder to predict the next number in case there was an issue with the random number generator. So is the concatenated random number better than a single random number?
So is the concatenated random number better than a single random number? If the random generator really produces random data then it will not matter. ... it would be harder to predict the next number in case there was an issue with the random number generator. If the issue is that the random generator is not actually very random, then it might even be better for an attacker to get as many recent outputs as possible, because then the behavior could be easier to predict. Of course this assumption depends highly on the internals of the random generator, so no general answer is possible. But in general: if you need really good random data you should use a proper random generator. Your method will not improve the quality of the output if the random generator is bad, i.e. it stays predictable. If you actually don't need true random data, but only want to make sure that you get somewhat random data without a bias, you should be careful, because depending on how exactly you do it, your method might add a bias to the output.
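As a practical illustration of "use a proper random generator", here is a small sketch using Python's `secrets` module (a CSPRNG). It also shows that, with a good generator, one 16-digit draw and four concatenated 4-digit draws cover exactly the same space; the zero-padding detail is an assumption about how the digits are meant to be used.

```python
import secrets

def random_digits(n: int) -> str:
    """n decimal digits from a cryptographically secure generator, zero-padded."""
    return str(secrets.randbelow(10 ** n)).zfill(n)

single = random_digits(16)                                  # one 16-digit draw
concatenated = "".join(random_digits(4) for _ in range(4))  # four 4-digit draws

# Both strings are 16 digits and, with a good generator, uniformly distributed
# over the same 10**16 possibilities -- concatenation adds nothing.
print(single, concatenated)
assert len(single) == len(concatenated) == 16
```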
{ "source": [ "https://security.stackexchange.com/questions/124002", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52211/" ] }
124,076
To begin with, I am not very computer savvy. I am an older person with an older computer and a 2003 Windows XP using Google Chrome for a browser. (If anyone is old enough to remember when Windows first came out, and remembers their hologram security seal with a baby touching a computer screen - that baby was my son.) I am widowed, and without resources for help with computer problems, so I would greatly appreciate some help. But you might have to explain things in common english! We have a wireless router provided by AT&T with WPA2. I have had a suspicion for a while that a neighbor is a hacker. Recently, if I log onto my computer late at night, it tells me that I don't have an internet connection. When I check the wireless network, my network doesn't even appear. What appears is a network that I don't recognize with a 5 bar signal strength. If I refresh the network list, my network appears (with signal bars), then the bars disappear on the unknown network, and I get a connection. This must mean that someone is hijacking my network, or hacking into my computer at night - right?
It is unlikely that a hacker stealing internet access will have the sophistication (or need) to make the wireless network change between different names. It is more likely that someone or some device nearby installed a new wireless network that happens to broadcast on the same channel as yours (there are only 3 or 4 non-overlapping ones to choose from) and has a higher signal strength. Depending on the signal strength of your router and the capability of the wireless card, the latter may fail to detect your network until the other one goes quiet. One solution is to reconfigure your router to use a different channel. This option is normally under the "wireless" or "advanced wireless" section in the web configuration interface of the router. Another is to find out who operates the other network and ask them to reduce power (it is possible they're violating FCC rules). You'll find a WiFi analyser on a smartphone very handy for both solutions. It can tell you which networks are using which channel and the make of the router (useful for hunting them down). On Android, there are many options; I think the best one is Wifi Analyzer. For iOS, the only option is Apple's own AirPort Utility (see the guide).
{ "source": [ "https://security.stackexchange.com/questions/124076", "https://security.stackexchange.com", "https://security.stackexchange.com/users/111485/" ] }
124,086
With all the news about hacking banks and stealing money from banks over SWIFT , while the vulnerabilities weren't directly related to SWIFT, some questions arise: Are software components of the SWIFT network certified by any external organization? Have they undergone rigorous security tests? (I'm worried they'd be available to a narrow audience, and may not be subject to enough scrutiny or security testing.) Knowing that SWIFT was started many years ago, was the software architecture designed with security in mind? Is the architecture available for security review?
{ "source": [ "https://security.stackexchange.com/questions/124086", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81374/" ] }
124,131
On a long haul flight, I imagine that charging a phone (in flight mode) with the inbuilt USB port on the head rest would be a security risk. Could I mitigate that risk by taking a regular USB cable and cutting the data (but not the power) cables? Or does the USB protocol need a data handshake to begin charging? Or is there another better solution?
Could I mitigate that risk by taking a regular USB cable and cutting the data (but not the power) cables? Or does the USB protocol need a data handshake to begin charging? Such a cable does exist, so a data handshake must not be required. Such cords are discussed on some Stack Exchange sites: "Micro USB cables that only charge but no data, no mounting etc (Samsung Galaxy S)" and "How can I tell charge only usb cables from usb data cables". So yes, using such a cable or making one using a DIY approach could mitigate a risk that depends on the 2 data pins. Of course, a different kind of attack where unexpected power is sent, possibly with the intent to damage the device, would still be possible.
{ "source": [ "https://security.stackexchange.com/questions/124131", "https://security.stackexchange.com", "https://security.stackexchange.com/users/45228/" ] }
124,262
I am on a website where I need to pay for something. This website has the following warning in the top left: This site uses a weak security configuration (SHA-1 signatures) so your connection may not be private Should I go ahead and enter my card details and pay for something on this site? What are the security risks? Extra Info: I am using Google Chrome on a Windows 10 Machine. In internet Explorer I get the following Message:
It's a bad sign, but it is still very unlikely that the connection is being eavesdropped on. The website appears to have a valid certificate signed by a certificate authority, but it is signed with a weak and obsolete hash algorithm. What does that mean? It means the connection is encrypted and a passive eavesdropper still cannot listen in. But a determined attacker with access to lots of processing power could generate a fake certificate for this website and use it to impersonate the website. So it is possible you aren't actually on the website you think you are, but are instead on one controlled by a hacker. Such an attack would, however, require quite a lot of resources and additionally require control of a router between you and the website. But even if we assume that no attack is taking place, we should keep in mind what impression this makes. SHA-1 has been obsolete for quite a while now. If the admins of that site still don't bother to update, that's quite a bad sign for their general competence. It could mean that they are also quite lax regarding other aspects of the security of their website. The final decision about what information you provide them with is yours to make.
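If you want to check for yourself which hash algorithm signs a site's certificate, a quick way is to fetch the certificate and inspect it. This is a minimal sketch using Python's standard `ssl` module together with the third-party `cryptography` package; the hostname is a placeholder.

```python
import ssl
from cryptography import x509  # pip install cryptography

host = "example.com"  # placeholder hostname

# Fetch the server's certificate in PEM form and parse it.
pem = ssl.get_server_certificate((host, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# For a SHA-1-signed certificate this prints "sha1"; modern sites show "sha256" or better.
print(cert.signature_hash_algorithm.name)
```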
{ "source": [ "https://security.stackexchange.com/questions/124262", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87457/" ] }
124,300
According to SecurityWeek, Microsoft is banning common passwords , and they will dynamically update their list: Microsoft says it is dynamically banning common passwords from Microsoft Account and Azure AD system. […] Microsoft is seeing more than 10 million accounts being attacked each day, and that this data is used to dynamically update the list of banned passwords. Is this list based on actual passwords for other people's accounts or just passwords used in brute force attempts? Could a secure system be built that checked password updates against other people's existing passwords and rejected the update if the password was too common?
We, at Microsoft, are banning the passwords most commonly used in the attacks and nearby variants. We aren't basing this on our user populations, who (because of the system) don't share these passwords unless the attacks change. The attack lists generally derive from studying breaches. Attackers are smart enough to look at lists to figure out high probability passwords, then do their brute force, etc. around those high frequency words. We look at the same lists plus the attack patterns to determine our ban lists. Hope this helps.
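As an illustration of what banning "the passwords most commonly used in the attacks and nearby variants" can look like, here is a small, hypothetical sketch. The normalization rules (lowercasing, stripping trailing digits/punctuation, undoing common leetspeak) and the tiny ban list are my own assumptions for demonstration; they are not Microsoft's actual algorithm or list.

```python
import re

BANNED = {"password", "letmein", "qwerty", "iloveyou"}  # tiny illustrative list

LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

def normalize(candidate: str) -> str:
    """Reduce a password to a canonical form so near-variants map to the same string."""
    base = candidate.lower()
    base = re.sub(r"[\d!#%^&*()_+=-]+$", "", base)  # drop trailing digits/punctuation
    return base.translate(LEET)                     # then undo common leetspeak

def is_banned(candidate: str) -> bool:
    return normalize(candidate) in BANNED

for pw in ("P@ssw0rd1", "LetMeIn!!", "correct horse battery staple"):
    print(pw, "->", is_banned(pw))
```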
{ "source": [ "https://security.stackexchange.com/questions/124300", "https://security.stackexchange.com", "https://security.stackexchange.com/users/56961/" ] }
124,532
I noticed that Google's "I am not a robot" reCAPTCHA forces me to select the correct images on my computer. I installed a virtual machine and tried there. Same thing. Used a proxy. Same thing too. Then I used another computer on the same network (same public IP), but this time the reCAPTCHA doesn't force me to solve it. It just checks itself when I click it. Very curious behaviour. I repeated the process a couple of times with a few days in between, and some computers never need to solve the reCAPTCHA, while others (including brand-new virtual machines) behind a proxy need to. I even tried a new browser in a fresh new VM. I'm on a home network, not an enterprise network. I am confused about what triggered reCAPTCHA into thinking it needs to double-check me even when I'm using a new virtual machine behind a proxy. On computers where it isn't suspicious, I can delete all the cookies, history and caches, visit a website, and reCAPTCHA just lets me go without any concerns. So it can't be based solely on my past activity. On the other hand, if I do solve the reCAPTCHA and register for an account on a website, the website is missing all the functionality for registered users. Also, when I'm presented with a CAPTCHA, even on brand-new VMs, the functionality for registered users is limited. This leads me to think that reCAPTCHA sends information about what it thinks of a specific user to the website owner. Is this documented behaviour?
Google tries to figure out if you are a bot or not. If it's in doubt, it serves you a CAPTCHA to check. Exactly how this is done is part of Google's secret sauce, and I don't think they will tell you. But here are some ingredients I guess they mix together:

- Your IP: Has it been identified as a bot already? Is it a Tor exit node?
- The resources you load: A simple bot does not load styles or images, since it does not need them. That is a telltale sign that someone is not human (or, as JDługosz points out in comments, blind).
- Sign-in: Are you signed in to a Google account? Does that account appear to belong to a real person?
- Your behaviour: A human scrolls down the page, moves the mouse around, takes some time between pushing down the mouse button and releasing it. A human does not click the dead center of the checkbox every time. All of this could be mimicked by a good bot, but it is not easy.
- Your history: Google knows a lot of your browsing history. Bots usually don't have a browsing history.

Figuring out exactly why you need to solve the CAPTCHA sometimes, but not others, is not easy. I could imagine that a fresh virtual machine has a browser fingerprint - installed fonts, plugins, etc. - that is very common and therefore fishy enough for Google to flag you for a CAPTCHA. If you are behind a proxy, perhaps others have used it as well for non-legitimate activities. That you don't get a CAPTCHA when you clear your cookies is surprising. I don't understand why - then Google knows very little about you and should assume you need a CAPTCHA to be on the safe side. Perhaps they do some advanced browser fingerprinting so they still know who you are? Do note that all of this is speculation. If you want more speculation, have a look at "How does new Google reCAPTCHA work?".
{ "source": [ "https://security.stackexchange.com/questions/124532", "https://security.stackexchange.com", "https://security.stackexchange.com/users/109689/" ] }
124,633
Let's say someone does an ARP spoofing or DNS poisoning attack to redirect traffic to their own web server. If the real site has an SSL/TLS certificate, would that stop the hacker from redirecting let's say google.com to their own server? Doesn't the web server determine whether to connect via HTTP or HTTPS? And DNS lookup is done before they connect to the server. Couldn't they just tell the client to connect via HTTP instead of HTTPS?
The decision on whether to use HTTP or HTTPS is the client's. If the user goes directly to http://example.com , an attacker could simply hijack that connection and perform a man-in-the-middle attack. If the user goes directly to https://example.com , then the attacker must spoof the SSL/TLS connection somehow; doing so without showing the user an invalid certificate warning requires the attacker to have access to a Certificate Authority's private key. This situation should never happen. Without this, the user's browser would reject the connection, not allowing the attacker to redirect. In the case of Google and a number of other websites, they set the HTTP Strict Transport Security (HSTS) header, which causes the user's browser to cache a rule saying that they should never ever visit the site via plaintext HTTP, even if the user asks for it or Google itself redirects to a HTTP URL. The browser will automatically re-write the URL to HTTPS, or block the request entirely. This also prevents the user from clicking through a certificate warning in most browsers; the option simply isn't there.
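For reference, the HSTS behaviour described above is driven by a single response header. Here is a minimal sketch of a server emitting it, using only Python's standard library; the max-age value and the toy handler are illustrative, not a production configuration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Tell the browser: for the next year, never talk to this host over plain HTTP,
        # including all subdomains. Browsers cache this rule after the first HTTPS visit.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello over HTTPS\n")

if __name__ == "__main__":
    # In a real deployment this would sit behind TLS; HSTS sent over plain HTTP is ignored.
    HTTPServer(("127.0.0.1", 8443), HSTSHandler).serve_forever()
```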
{ "source": [ "https://security.stackexchange.com/questions/124633", "https://security.stackexchange.com", "https://security.stackexchange.com/users/112034/" ] }
124,682
My bank went through a major redesign of their customer online banking system recently. The way security is managed across the platform was also reviewed. The password I am now able to set to log in is forced to be 6 digits long and numerical. This goes strongly against what I thought to be a secure password policy. On the other hand, I trust my bank to know what they are doing. Could you help me understand how good this policy is?

- Compared to common practices in the sector.
- From a more general IT security point of view.
- As a customer: how much should I be worried that my account may be easy to compromise?

Notes: The user name is the ID card number, which is almost public data. Someone entering my account is still not able to make a payment before it goes through another security mechanism (which we will assume to be good).
Unusual? Yes. Crazy? No. Read on to understand why... I expect your bank has a strong lockout policy, for example, three incorrect login attempts locks the account for 24 hours. If that is the case, a 6-digit PIN is not as vulnerable as you might think. An attacker that tried three PINs every day for a whole year would still only have about a 0.1% chance of guessing the PIN. Most websites (Facebook, Gmail, etc.) use either email addresses or user-selected names as the user name, and these are readily guessable by attackers. Such sites tend to have a much more relaxed lockout policy, for example, three incorrect logins locks the account for 60 seconds. If they had a stronger lockout policy, hackers could cause all sorts of trouble by locking legitimate people out of their accounts. The need to keep accounts secure with a relaxed lockout policy is why they insist on strong passwords. In the case of your bank, the user name is a 16-digit number - your card number. You do generally keep your card number private. Sure, you use it for card transactions (online and offline) and it is in your wallet in plaintext - but it is reasonably private. This allows the bank to have a stronger lockout policy without exposing users to denial-of-service attacks. In practical terms, this arrangement is secure. If your house mate finds your card, they can't access your account because they don't know the PIN. If some hacker tries to bulk hack thousands of accounts, they can't, because they don't know the card numbers. Most account compromises occur because of phishing or malware, and a 6-digit PIN is no more vulnerable to those attacks than a very long and complex password. I suspect that your bank has no more day-to-day security problems than other banks that use normal passwords. You mention that transactions need multi-factor authentication. So the main risk of a compromised PIN is that someone could view your private banking details. They could see your salary, and your history of dodgy purchases. A few people have mentioned that a 6-digit PIN is trivially vulnerable to an offline brute force attack. So if someone stole the database, they could crack your hash and get your PIN. While that is true, it doesn't greatly matter. If they cracked your PIN they could log in and see your banking history - but not make transactions. But in that scenario they can see your banking history anyway - they've already stolen the database! So while this arrangement is not typical, it appears that it is not so crazy after all. One benefit it may have is that people won't reuse the same password on other sites. I suspect they have done this for usability reasons - people complained that they couldn't remember the long, complex passwords that the site previously required.
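The ~0.1% figure in this answer is easy to reproduce. Here is a short worked calculation (my own addition) under the stated assumptions: a 6-digit PIN, three guesses per day, and a lockout that caps the attacker at that rate for a year.

```python
pin_space = 10 ** 6          # 000000 .. 999999
guesses_per_day = 3
days = 365

total_guesses = guesses_per_day * days   # 1,095 distinct guesses in a year
p_success = total_guesses / pin_space    # each guess rules out one PIN

print(f"{total_guesses} guesses -> {p_success:.4%} chance over a whole year")
# 1095 guesses -> 0.1095% chance over a whole year
```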
{ "source": [ "https://security.stackexchange.com/questions/124682", "https://security.stackexchange.com", "https://security.stackexchange.com/users/26970/" ] }
124,714
I have two servers with two web applications in PHP, and I need them to communicate securely. Here is my scope: I need application 1 to make a POST to application 2, but on application 2 I need to make sure that the POST comes from application 1. What is the best way to achieve this? I was thinking of some alternatives, like a hashed key, and encrypting the information with a one-minute window during which it is valid, so if I get a POST with a date more than one minute older than the current date, it's an invalid POST. My biggest worry is a man-in-the-middle attack.
I need application 1 to make a POST to application 2, but on application 2 I need to make sure that the POST comes from application 1.

For the purpose of this post, I'm going to call application 1 the Client, and application 2 the Server. Let's start with:

My biggest worry is a man-in-the-middle attack.

Ensure that when the Client makes a connection to the Server, the Client confirms the identity of the Server, and ensure there is no man in the middle (MITM). This is the same problem browsers face when trying to make a secure connection for online banking or other sensitive operations. This is easily secured using a standard HTTPS certificate (which would use TLS encryption). The connection is secured by using asymmetric cryptography, such that the only way to accomplish a MITM would be to steal the private key from the Server's filesystem. A certificate can be issued from a Certificate Authority with renewal every few years, or you could create a self-signed certificate stored on the Client system at no cost.

Now, with Server identity established and MITM successfully thwarted, you need only verify the identity of the Client. There are several ways to proceed. Pick one:

1. Shared secret: simply include a password on the Client machine, and the Server must verify that it matches. This is the most straightforward way to verify client identity, and is safe thanks to HTTPS.
2. Client certificate as part of the TLS connection: While HTTPS provides a means for client certificates, I find this to be more complicated than necessary.
3. Time of day with encryption: Create yourself a separate RSA private and public key. Store the public key on the Client, and the Server will use the private key to decrypt. The Client will encrypt the time of day, and the Server will verify that it is correct (within range).
4. Challenge-response: In this case, a public RSA key is stored on the Server, and the private key on the Client. The Server sends a random string to the Client, along with a request ID, and the Client must be able to decrypt the info and send it back with the matching request ID. If decryption was successful, then Client identity is verified. The benefit this has over the others is that, supposing an attacker successfully hacks your server and downloads files, he will not be able to impersonate a Client without a more advanced hack (though he would be a step closer to MITM).

With solutions 1, 2 and 3, simply downloading the appropriate files from the Server filesystem would provide the necessary info to impersonate a Client. Solutions 3 and 4 are best accompanied by protection against replay attacks. Solution 1 is not compatible with replay attack prevention. However, in your use case, using HTTPS, I think replays are only an issue after a successful hack of the Server filesystem thereby permitting MITM.

Bonus - IP address: If possible, have the Server verify that the connection originates from a pre-determined Client IP address. If the Client IP address is verified not to change, this bonus protection will provide a great deal of protection against any attacks on the above solutions I have provided.
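To make option 1 (and the replay-window idea from the question) concrete, here is a minimal sketch in Python of a shared-secret scheme where the Client signs the body plus a timestamp with HMAC-SHA256 and the Server rejects stale or tampered requests. The header names, the 60-second window, and the secret value are illustrative assumptions; a PHP implementation would use hash_hmac() and hash_equals() in the same way.

```python
import hmac
import hashlib
import time

SHARED_SECRET = b"replace-with-a-long-random-secret"  # deployed to both applications
MAX_SKEW_SECONDS = 60

def sign_request(body: bytes) -> dict:
    """Client side: produce headers proving knowledge of the secret at this time."""
    timestamp = str(int(time.time()))
    mac = hmac.new(SHARED_SECRET, timestamp.encode() + b"." + body, hashlib.sha256)
    return {"X-Timestamp": timestamp, "X-Signature": mac.hexdigest()}

def verify_request(body: bytes, headers: dict) -> bool:
    """Server side: reject requests that are stale or whose signature doesn't match."""
    try:
        timestamp = int(headers["X-Timestamp"])
    except (KeyError, ValueError):
        return False
    if abs(time.time() - timestamp) > MAX_SKEW_SECONDS:
        return False  # outside the one-minute validity window
    expected = hmac.new(SHARED_SECRET, str(timestamp).encode() + b"." + body,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers.get("X-Signature", ""))

headers = sign_request(b'{"action":"sync"}')
print(verify_request(b'{"action":"sync"}', headers))      # True
print(verify_request(b'{"action":"delete"}', headers))    # False - body was tampered with
```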
{ "source": [ "https://security.stackexchange.com/questions/124714", "https://security.stackexchange.com", "https://security.stackexchange.com/users/112119/" ] }
124,717
I am writing a page for our website which describes the measures we take to keep our customer's information secure. On this page one section describes how we keep their passwords secure. We are using Secure Password Storage v2.0 which is an implementation of PBKDF2. We are using the hash algorithm SHA256, 64000 iterations, and 24 bytes for our random salt. I'm not really sure this matters so much, other than that I just don't want people to come with pitchforks raised thinking we are encrypting the passwords. Is it correct to say "The passwords which we store cannot be decrypted"? I worry that it implies that the passwords can never be cracked, which simply isn't true. However, I do want to emphasize to our users that our system to store passwords is secure enough that they shouldn't ever have to worry about it (as long as they pick a sufficiently unique password of course), even in the case of the entire password database being stolen. Other options I have considered are "The method used to store passwords cannot be reversed" or "In the case of a breach, your password should not be retrievable" but I find saying they can't be decrypted to be more understandable and to the point, especially since some people may not even realize that passwords aren't stored in plaintext.
"We believe the secrecy of your passwords is very important, which is why we have implemented measures that strongly protect them while stored on our servers. Once you submit your password we convert it using a cryptographic function (salted PBKDF2-SHA256 with 64,000 iterations for the tech savvy), so even if attackers are able to breach our site they won't immediately learn your password. This method of storing passwords makes it significantly harder for your password to be cracked by the bad guys. Choosing a unique and good password/passphrase choice paired with this technology can help prevent unauthorized access to your account. Even our employees won't know your original password." Similar to Adam's suggestion, just expanded to cover more of your concerns.
{ "source": [ "https://security.stackexchange.com/questions/124717", "https://security.stackexchange.com", "https://security.stackexchange.com/users/68680/" ] }
124,723
I've been considering a VPN tunnel, specifically PIA. These types of companies make bold claims of complete anonymity and privacy, with no one in the world (ISPs, governments, school or corporate network monitors, no one) being able to know the contents of your web traffic. Simply put, are there any hidden gotchas? How can such a thing exist for such a relatively low price and truly be that secure? And of course, let's ignore the gotchas of "VPNs log your traffic" (let's assume you've chosen one that doesn't) and "the encryption could always be broken" (let's take difficult-to-execute exploits off the table). I'm concerned about this for a variety of reasons. I don't want to be profiled by corporations; I have political and social leanings that I don't want to be uncovered both by governments and by just anybody; I don't want my peer-to-peer activity throttled or inspected; etc. I just don't want my activity used against me by anybody. I am, of course, doing only ethical things :) A narrower version (tell me if I should ask a separate question): are VPNs effective at hiding activity from "localized" gateways like an ISP or a school/corporate network?
"We believe the secrecy of your passwords is very important, which is why we have implemented measures that strongly protect them while stored on our servers. Once you submit your password we convert it using a cryptographic function (salted PBKDF2-SHA256 with 64,000 iterations for the tech savvy), so even if attackers are able to breach our site they won't immediately learn your password. This method of storing passwords makes it significantly harder for your password to be cracked by the bad guys. Choosing a unique and good password/passphrase choice paired with this technology can help prevent unauthorized access to your account. Even our employees won't know your original password." Similar to Adam's suggestion, just expanded to cover more of your concerns.
{ "source": [ "https://security.stackexchange.com/questions/124723", "https://security.stackexchange.com", "https://security.stackexchange.com/users/72781/" ] }
124,775
Recently, a server I've been managing has come under a few attacks, a risk you take when hosting a web server. The firewall has been set up properly to only allow connections through the ports used. The thing is, there was brief discussion about blacklisting all IPs from certain countries that don't fit into the scope of the website, meaning the idea is to automatically blacklist anyone from several countries where some attacks originate but legitimate users don't. Is it practical to auto-blacklist users by Geo-IP from regions that don't normally use the website? We've been thinking of having this limit for at least back-end ports, meaning only the countries that house authorized people are allowed through.
It's essentially a business decision, rather than a security one. The risks from a business perspective are that you lose users from that country, or who are accessing the site from VPNs located in that country, and that, whilst really unlikely, it's theoretically possible for IP assignments to change, meaning that if you didn't keep these blocks maintained and updated with the latest assignments, you might accidentally block legitimate users from target countries, who happen to have been given IPs from a pool previously assigned to a blocked country. From a security point of view, it can reduce the volume of attacks, and increase the costs to an attacker of targeting your site (since they need to get machines from specific countries, rather than any machines). It tends to make sense when you have a regionally restricted product - think of shops where goods are only shipped within a specific country, competitions which only accept entries from people in a given region, or systems which work in conjunction with physical businesses which have a limited range (e.g. deliveries to a national chain store, so there would be no way for a user elsewhere to benefit from the service). In those cases, it tends to be easier to justify the risks, since there is no way people from other countries can use the service (and it wouldn't be hard to include neighbouring countries in case of edge cases - a Portuguese business might include Spanish IP ranges, just in case, say). It makes less sense when you have an information business, or a digital product. In these cases, you might end up getting more unwanted traffic, as people who want to obtain the product resort to VPNs within allowed countries. Think artificial restrictions such as film releases staggered around the world, TV shows with months of delay before being shown outside the country of origin, or game releases. You can obtain country specific IP lists from sites such as http://www.ipdeny.com/ipblocks/ and then choose whether to use a whitelist approach ("we only deliver to southern Italy, so will only allow Italian and Vatican City IP addresses") or a blacklist approach ("we see lots of attacks from Australia, so will block all Australian IP addresses"). (Please note, all countries are randomly selected and should not be taken as approval or disapproval of given countries.)
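If you do go the whitelist/blacklist route and want to experiment at the application layer, here is a rough PHP sketch of the whitelist approach, assuming an IPv4 CIDR zone file downloaded from a source like ipdeny.com; the file path and response message are placeholders, and in production this kind of filtering is normally done at the firewall (ipset/iptables or the load balancer) rather than in application code:

    <?php
    // Illustrative IPv4 country whitelist check against a CIDR zone file.
    function ip_in_cidr(string $ip, string $cidr): bool {
        [$subnet, $bits] = explode('/', $cidr);
        $mask = -1 << (32 - (int)$bits);
        return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
    }

    function ip_in_country_list(string $ip, string $zoneFile): bool {
        $lines = file($zoneFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        foreach ($lines as $cidr) {
            if (ip_in_cidr($ip, trim($cidr))) {
                return true;
            }
        }
        return false;
    }

    // Whitelist: only serve visitors whose IP appears in the allowed zone file.
    if (!ip_in_country_list($_SERVER['REMOTE_ADDR'], '/etc/geo/allowed-country.zone')) {
        http_response_code(403);
        exit('Access restricted by region.');
    }

A linear scan like this is fine for a demo but slow for large lists; a real deployment would load the ranges into the firewall or a faster lookup structure instead.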
{ "source": [ "https://security.stackexchange.com/questions/124775", "https://security.stackexchange.com", "https://security.stackexchange.com/users/45233/" ] }
125,051
Here's my relatively layman's view of the issue. Many websites tout multifactor authentication (MFA) as an enormous boost to the security of users' accounts, and it can be if implemented properly . However, it seems that some sites will only prompt the user for their MFA AFTER they enter their password correctly. I've only tested this with gmail.com and outlook.com , but given that these are two huge email providers, I imagine they're only two of many perpetrators. The reason this is (at least on the face of things) such a huge security flaw is that it can allow crackers to guess a user's password until they're presented with the prompt for MFA, at which point they know they've got the user's password. It seems like websites will brush this off, saying, "But since the user has MFA, the cracker can't get into their account." What they seem to forget is that the user likely has accounts on other websites, and quite possibly uses the same password for that site. So now the cracker may have access to all the user's accounts across the web, many of which probably don't have MFA implemented, leaving the user completely vulnerable to attacks. Are there any flaws in my argument or assumptions that would make this a non-issue? If not, then why do huge companies like Google and Microsoft not fix this issue?
If I'm understanding your question properly, the attack you are proposing is to brute-force passwords against a server like this, then once it shows you the MFA screen, go try that password on other websites that this user has accounts on. This is a great question! Good find! But you seem to be overlooking two points: This is no weaker than not having MFA, which also confirms the correct password ... by letting you in. No hacker in their right mind will try brute-forcing a password against a live server, which typically rate-limits you to like 5 guesses per second, or, in the case of the big providers like Gmail or Outlook, has complex fraud-detection systems that do automatic IP-blocking of suspicious activity. 99.999...% of the time, password brute-forcing is done against password hashes stolen directly from the database, against which you can guess (m|b)illions of passwords per second. So while I agree with you that there is the potential for some data leakage here, I think the risk is minimal, and far outweighed by the user inconvenience of having to fumble with their OTP fob just to find out that they typo'd their password. Update addressing comments since this has become a hot network question: There are two types of multi-factor authentication (aka "2FA" or "MFA") that really need to be thought about separately: SMS or Push Notification 2FA: when you get to the 2FA screen it sends a code to your device that you have to type in. For many users, this is probably the only type of 2FA that you've been exposed to. The attack described in the question will not work in this case because the user will receive a 2FA code they did not request and they'll know something's wrong. Moreover, doing the 2FA step regardless of whether the password is correct is actually harmful in this case because: An attacker could potentially cause the user to get a huge monthly data / SMS bill, or crash their device by filling its memory with notifications. It also leaks which users have 2FA enabled, and which are easy targets. "Offline" 2FA using code-generator tokens, apps, or public-key enabled smart cards / USB sticks. This is the kind of 2FA that government, military, and corporations use. So while it's less visible to end-users, it's by far the more important type of 2FA because of the value of the data it's protecting. In this case, there is no "built-in" notification to the user when an attacker gets to their 2FA screen. And usually all users are required to use 2FA, so there's no harm in leaking which users have 2FA enabled, because it's all of them. Imagine this scenario for Case 2: a corporate VPN that sits on top of the Windows Active Directory. Public-facing VPNs get hammered on all day long by password guessers, so there's nothing unusual about those logs. But if I can have the user's password confirmed by the VPN's 2FA screen, then I can walk up to their laptop and log in confident that it will not lock out the Windows account - which would certainly get noticed by the user / IT. The question correctly points out a security hole: the pattern of "got to the 2FA screen and entered nothing / entered something incorrect" should certainly be flagged as more severe than your standard "incorrect username/password" and should notify the end-user to retire their password.
{ "source": [ "https://security.stackexchange.com/questions/125051", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99083/" ] }
126,188
I wanted to buy a Librem Purism 13 because I care about my privacy and generally wanted a laptop to test Linux on. However, I was advised against it because it uses Intel i5 processors which contain binary blobs . From what I understand binary blobs are parts of code which cannot be read, and you are not really sure what they do. They could for example extract what you are doing on your computer and send this information somewhere. Is this possible and if so how much of a risk is this? Wouldn't almost all recent computers be compromised? Is there any laptop that is completely open source with no binary blobs in it?
Summary: There's probably some BS marketing going on, but on the whole they probably are making the most privacy-respecting laptop they can. Other answers mention other brands of privacy-focused laptops that avoid Intel chips in favour of 100% libre hardware, but you take a big performance and cost hit for doing it, because, well, Intel is the market leader for a reason. Who do you trust? Unless you go out into the woods with a hatchet and build yourself a life, you've got to place trust in somebody. Based on the descriptions on their page, the company Purism seems like a reasonably good group to place trust in (which you seem comfortable with if you're already considering buying their laptop). They clearly put a lot of scrutiny on their component suppliers with regards to privacy. Assuming of course that it's not all B***S***t marketing, which I have no way to know for sure. (See the "Additional Thought" at the bottom for more on this.) Intel As for binary blobs and Intel, this is actually a deeper question than you realize. "Binary Blobs" refers to software that's provided to you in binary (executable) form, but you have no access to the source code, or any good way to inspect it. Intel is a hardware manufacturer, so while they may have some binary blobs of software, what about hardware that's provided to you in chip form with no access to the designs? Do you think Intel allowed Purism to inspect the blueprints for all of the chips on an i5 board? Of course not, that's billions of dollars worth of intellectual property! This debate around Intel, privacy and black-box hardware is currently a hot topic with Intel's RdRand instruction - an assembly instruction for retrieving a random number from an Intel hardware-RNG chip on the motherboard. I was recently at a crypto conference and overheard one of the designers of the RdRand chip having a discussion with another attendee that went something like this: Attendee: "RdRand is a blackbox chip. Will you release the source designs for security audit?" Intel engineer: "No, of course not, that's protected intellectual property." Attendee: "Intel is an American company, how do we know the American government hasn't forced you to include backdoors?" Intel engineer: "Well, I guess I can't prove it to you, but I designed it and I can assure you that there aren't any." Attendee: "Forgive me if that's not good enough. I'm going to continue doing random numbers in software." Intel engineer: "Your loss, I guess." So, should we trust them? At the end of the day, unless you live in the woods, you have to trust somebody. I personally think the Intel corporation is very security-aware - having attended multiple crypto conferences with them, and while they are under American law (I'm not American BTW), I think they are at least as trustworthy as any other closed-source hardware vendor. Ken Thompson's 1984 paper "Reflections on Trusting Trust" showed that trojans can be injected at the compiler level in a way that's almost impossible to detect, so even inspected open source code is not guaranteed to be trojan-free. At the end of the day, you've got to trust the people, not the code. Do you trust Intel? Do you trust Purism to be scrutinizing their suppliers as well as they are able to? Nothing will ever be 100% provably secure, but Purism's products are certainly better than your standard laptop from Best Buy.
Additional thought: The Purism Librem product pages say: Welcome to the beautiful laptop that was designed chip-by-chip, line-by-line, to respect your rights to privacy, security, and freedom. "Line-by-line" .... right, sure. The linux kernel itself is about 16 million lines . The Librem ships with either their Debian-based PureOS, or Qubes OS, which will both contain tens of millions more lines, plus bootloaders and firmware, plus all the apps in the Debian repositories. You want me to believe that Purism's 6 developers have personally inspected every single line of code for insidious, hard-to-catch backdoors ? And have also inspected all the compilers used looking for self-replicating trojans ? Please. Over-zealous marketing. That said, if you take Ken Thompson's "Trust the authors, not the code" philosophy, and we decide that we trust the Debian devs, and the Intel engineers (I have decided to trust them out of convenience), and we trust Purism to apply the "Trust the people, not the code" philosophy appropriately, then we're probably OK.
{ "source": [ "https://security.stackexchange.com/questions/126188", "https://security.stackexchange.com", "https://security.stackexchange.com/users/113581/" ] }
126,261
I was wondering whether a cookie can carry a virus (or any security-threatening code). In some sense it is similar to a download. So by simply visiting a site, could I get harmed?
You can put any text strings into a cookie, so in theory you could put some kind of code there. But for code to do any harm something needs to run it. The web browser does not interpret the content of cookies as code and does not try to run it, so cookies should not be dangerous. (If you have heard cookies being referenced in security related discussions, it is probably in relation to privacy and not viruses.) In theory there could be a bug in the browser that makes it possible to craft a special cookie that somehow fools the browser to run it, e.g. by causing a buffer overflow. Such a bug is quite unlikely in a major browser, and if you could find one it would be considered a big deal. So I would not worry about cookies infecting me with a virus. However it is possible to be infected by malware from just visiting a website. This is called "drive by downloads" and is nowadays a common method to spread viruses. The vector that is exploited for this is generally not cookies though, but plugins like Java or Flash.
{ "source": [ "https://security.stackexchange.com/questions/126261", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103485/" ] }
126,533
I'm looking at the event description for the key-signing party at an upcoming BSD conference , and it's mentioned that I shouldn't bring my computer in to the event: Things to bring no computer What risks does bringing a computer into a key-signing party pose?
Quote from Wikipedia : Although PGP keys are generally used with personal computers for Internet-related applications, key signing parties themselves generally do not involve computers, since that would give adversaries increased opportunities for subterfuge . Rather, participants write down a string of letters and numbers, called a public key fingerprint, which represents their key. The fingerprint is created by a cryptographic hash function, which condenses the public key down to a string which is shorter and more manageable. Participants exchange these fingerprints as they verify each other's identification. Then, after the party, they obtain the public keys corresponding to the fingerprints they received and digitally sign them. another one from openwest : If you bring a computer, please keep it in your bag and powered down during the party. This is for security measures to prevent the spread of malicious software, the misplacement of private keys, and damaged or misplaced equipment .
{ "source": [ "https://security.stackexchange.com/questions/126533", "https://security.stackexchange.com", "https://security.stackexchange.com/users/17156/" ] }
126,569
Someone connected their Android phone to my MacBook and it made me think if this has put my MacBook at risk. It was for 3 seconds and I was in control of the MacBook the whole time.
Yes. Android devices have the capability to act as basically any USB device. This opens the door to all kinds of BadUSB attacks, like the Rubber Ducky attack, which types in scripts very fast (almost unnoticeable by the user) by acting as a keyboard (HID, a human interface device). The phone could also act as a network device and set up a man-in-the-middle (MITM) position. These two attacks work by emulating normal USB devices. USB exploits specific to the OS or platform may also be used. If you want to try these out, you can look at NetHunter. https://en.wikipedia.org/wiki/NetHunter https://nakedsecurity.sophos.com/2014/08/02/badusb-what-if-you-could-never-trust-a-usb-device-again/
{ "source": [ "https://security.stackexchange.com/questions/126569", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15034/" ] }
126,596
In my organization, a group policy exists that prevents users from writing data to a USB stick. This always seemed like a bit of a pointless PITA to me, just "one of those things...". I forget about it as often as not since I can download data from my thumb drive when I need to (I only remember it again when I try to take a file home). Recently, my responsibilities were expanded to include participation in state & federal audits and I've noticed that some form of the following bullet point is usually a highly-promoted feature when asserting compliance: Users cannot download sensitive data to a USB drive. Is this actually an improvement to security on its own? There are still a significant number of ways to extract data from the company domain (I've certainly used more than a few work-arounds simply for convenience.) At first I thought it could maybe be argued that it prevents low-skilled, high-frequency attacks. But with the proliferation of personal fileshare tools (that may or may not need to be unblocked by company policy - e.g. a user has both a personal & corporate GitHub account) does this sort of policy still provide additional security? Or is it just a placebo for minimally literate auditors?
One of the main reasons behind the prohibition of writing data to USB drives (I had this explained to me once) is not to prevent employees from stealing sensitive information. If they wanted to do that , they would have no end of workarounds, up to printing QR codes on A4 sheets. Rather, it is to prevent employees from saving sensitive information on USB drives in good faith, only to have those USB drives lost or stolen . Often in such cases you'll find that it's OK to save data on specific encrypted thumb drives that can't be decrypted in case of theft or loss: e.g. biometric-lock drives(*) or encrypted file systems that will be "seen" as physical fixed hard drives instead of removable hard drives, thereby circumventing a "cannot write to removable drives" security policy. Windows 7+ has explicit settings that enable treating BitLocked devices (possibly with corporate keys) differently from plain USB devices . (In my case, that was Windows XP Pro SP3 some time ago, I had a USB key with a TrueCrypt volume on it, and I was allowed to take some work at home on it. I was under orders not to copy the files anywhere else and not to use the USB drive for any other purpose. This kind of precautions is clearly directed against accidental leaks, not intentional ones). For the same reasons, some popular file sharing sites, or sites and apps that could allow file sharing, might be blocked by company firewalls. Again, not so much to stop espionage, but to prevent people from growing too careless (at least in the powers-that-be's opinion) with corporate information. (*) Note Biometric lock thumb drives and (especially) hard disks aren't necessarily secure - or even encrypted, or encrypted correctly . If the memory can be physically separated from the locking part (easy for most HDD enclosures, conceivable for many USB thumb drives) someone can try to read it directly, maybe just to "reclaim" the memory itself. After all, in case of failure the finder will have just lost some time. Once he or she has the hands on the readable memory, simple curiosity might be enough to take a peek and maybe even try and decrypt it before reformatting and repurposing. With luck, the device is vulnerable and just googling its make and model will allow someone to recover the necessary tools and/or knowledge.
{ "source": [ "https://security.stackexchange.com/questions/126596", "https://security.stackexchange.com", "https://security.stackexchange.com/users/113967/" ] }
126,718
I had a job interview yesterday where they asked what the only scenario where a one-time pad can be broken would be, my answer to which was "when the key distribution process is not secure enough". They praised my answer, but they asked me another question: Why would you even use a one-time pad if the key distribution is 100% secure? Why not simply send the plain-text message since you are sure that distribution is 100% secure? What is the correct answer to that question?
You can distribute the key now and send the message later . Suppose you are a spy sent on a mission behind enemy lines. You take the key with you (secure distribution) and when you discover a secret you can securely send it using the One-Time pad.
{ "source": [ "https://security.stackexchange.com/questions/126718", "https://security.stackexchange.com", "https://security.stackexchange.com/users/61758/" ] }
126,768
I am looking for existing protocols for a group chat with two things: End to end encrypted. Just what you would expect: messages are only decipherable by the chat members and message tampering is detected. It should not encrypt each message for each member individually. The Signal Protocol does this, turning group chats into many one-on-one chats, which is not a proper, scalable solution to the problem. With potentially hundreds of members in a group, even encrypting an encryption key for each member is a considerable downside. Every new member may receive everyone's public key upon joining, and any group key(s) must be rotated when a member leaves. This scales reasonably enough, and there might not be a way around it without compromising security, so this is allowed and does not count as 'encrypting every message for everyone'. I've looked for existing protocols, but came up with zero results that meet these requirements. I thought I read about something a few years ago where the group derived a common key and used that or something, but I cannot find anything like that. Signal, WhatsApp and Allo use the Signal Protocol which violates requirement #2 . Tox has some extensive documentation but somehow I can't seem to find how encryption happens in a group chat. Another source even claims "you can't make groups with end-to-end" (though I am fairly sure they are mistaken). And finally a bunch of other popular applications such as Mumble and XMPP cannot do end-to-end to begin with, or simply do not support group chats such as Telegram , Ricochet and ZRTP .
Let me try to sum up what the landscape of end-to-end encrypted messaging protocols for group chat looks like: Protocols like PGP have been around for some time and offer "group messaging" by simply encrypting the content with a randomly generated symmetric key and then encrypting that key asymmetrically with the public keys of each of the recipients. These protocols only send the encrypted content once but encrypt the encryption key to each of the members of the group. Note that similarly to PGP, this approach does not provide any perfect forward secrecy, deniability or conversation integrity (and thus no transcript consistency). OTR was introduced to address some of the shortcomings of PGP, improving on perfect forward secrecy, conversation integrity and deniability. Ian Goldberg, the author of OTR, also wrote a paper on a multi-party variant of the protocol, named mpOTR. mpOTR was designed with the XMPP transport in mind and is inherently synchronous in its design, meaning that each group member is expected to be online at any time to negotiate new keying material. The described protocol does not provide in-session perfect forward secrecy and has not been largely deployed. N+1Sec is a similar protocol with some improvements. Note that these protocols have a lot of algorithmic complexity and tend to scale badly, especially when you add latency into the mix. Then you have a whole class of protocols that we simply call N-times protocols because they just send each message N times, once per recipient. These protocols have the advantage of reusing an existing one-to-one protocol, which is really convenient when you already have a channel that gives you nice features such as asynchronous perfect forward secrecy. The group structure is not a cryptographic concept in this case, losing on the cryptographic guarantees but lowering algorithmic complexity. The Open Whisper Systems blog has a great post about why Signal does this instead of mpOTR-style messaging. This class of protocols violates your second requirement since they are what we call “client-side fan-out”, where the client encrypts and sends out all of the different messages. There exists an optimisation of Signal's approach, adopted by WhatsApp and described in their whitepaper as Sender Keys, which has “server-side fan-out”. It uses the N-times approach on setup, but after the first message each member of the group can send a single message to the group. This protocol has perfect forward secrecy by using a hash ratchet (but does not provide perfect future secrecy). Transcript consistency is enforced by the server side (because of the server-side fan-out), but not from a cryptographic perspective. These are the types of protocols that I've seen being implemented. There are challenges, both in usability and in crypto research, on how to combine asynchronicity with perfect future secrecy and transcript consistency in the group setting. If you want a protocol that answers both of your requirements, I think something like the Sender Keys variant of the Signal protocol is what you're looking for.
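To make the first, PGP-style pattern concrete, here is a hedged PHP sketch using OpenSSL primitives: the message body is encrypted once with a random content key, and only that key is wrapped for every member. Key management, signatures and forward secrecy are deliberately omitted; function and field names are made up for the example.

    <?php
    // Illustrative "encrypt once, wrap the key per member" group message (PHP >= 7.1).
    function encrypt_for_group(string $plaintext, array $recipientPublicKeysPem): array {
        $key = random_bytes(32);                  // one-time content-encryption key
        $iv  = random_bytes(12);
        $tag = '';
        $ciphertext = openssl_encrypt($plaintext, 'aes-256-gcm', $key,
                                      OPENSSL_RAW_DATA, $iv, $tag);

        $wrappedKeys = [];
        foreach ($recipientPublicKeysPem as $memberId => $pem) {
            openssl_public_encrypt($key, $wrapped, $pem, OPENSSL_PKCS1_OAEP_PADDING);
            $wrappedKeys[$memberId] = base64_encode($wrapped);   // one entry per member
        }

        return [
            'iv'   => base64_encode($iv),
            'tag'  => base64_encode($tag),
            'body' => base64_encode($ciphertext),   // sent once, same for everyone
            'keys' => $wrappedKeys,                 // grows linearly with group size
        ];
    }

Note how the wrapped-key list grows with the group while the ciphertext is sent only once; that is exactly the trade-off (and the lack of forward secrecy) described for the PGP-style approach above.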
{ "source": [ "https://security.stackexchange.com/questions/126768", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10863/" ] }
126,769
I've been practicing in security-related topics and I came upon this problem which I don't understand at all. You receive a form with one input named pass, and this is the code you need to bypass:

    <?php
    error_reporting(0);
    session_save_path('/home/mawekl/sessions/');
    session_start();
    echo '<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>';
    echo 'This task is as hard to beat as the castle which you can see on the bottom.<br>';
    echo '<pre>';
    include('./ascii.txt');
    echo '</pre><br>';
    $_SESSION['admin_level']=1;
    if (!isset($_POST['pass']) || md5($_POST['pass'])!='castle')
    {
        echo '<b>Wrong or empty password.</b><br>';
        $_SESSION['admin_level']=0;
    }

If it enters the final if statement, you lose (need to make it so $_SESSION['admin_level'] stays at 1). Any help is appreciated, thanks! Clarification: I can't edit the code I posted. It's a challenge. All I can do is send a password through an input whose name is "pass". Yes, I know md5 is supposed to return a 32-char long string. That's the challenge.
Try sending a HEAD request. I'm assuming that with ascii.txt included, the output of the script is just over a nice number like 4096 bytes, a common output_buffering value. Once the script has written output_buffering bytes, it needs to flush the output buffer before continuing. Normally this works fine and the script continues, but if the request type is HEAD, there's nowhere for the output to go and execution stops, hopefully in the middle of the "wrong password" message which means admin_level is never set back to 0. If ascii.txt is a file you can control, you'll have to tweak the size so the numbers work out to exceed output_buffering while writing the "wrong password" message. Here's a paper on this technique . It might not always be applicable based on the PHP version/config, but the last example in the paper is so similar to this problem that I'd expect it to be the intended solution.
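If you want to try this against the challenge, one simple way to issue the HEAD request is with PHP's cURL extension; the URL below is a placeholder, and the cookie jar just keeps the PHPSESSID so the session can be reused afterwards with normal requests:

    <?php
    // Illustrative HEAD request; if the trick works, the session created here
    // keeps admin_level = 1 because the script dies while trying to flush output.
    $ch = curl_init('http://target.example/challenge.php');
    curl_setopt_array($ch, [
        CURLOPT_NOBODY         => true,                       // send HEAD instead of GET
        CURLOPT_HEADER         => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_COOKIEJAR      => __DIR__ . '/cookies.txt',   // save the PHPSESSID
    ]);
    echo curl_exec($ch);                                      // response headers only
    curl_close($ch);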
{ "source": [ "https://security.stackexchange.com/questions/126769", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114173/" ] }
126,774
My understanding of memory in a computer is that it is stored in blocks that are over written only when the computer finds it convenient. so if a program asks to write in a certain block rather than clearing that block, which takes time, the computer will just redefine where that block is so that it can use an empty block and worry about clearing other blocks latter. This is fine but with password storage that means (if I understand correctly) that even though a password may be stored in encrypted form, a non-encrypted form of the password is lying around somewhere in the "uncleared" part of my computers memory. Please explain to me either why this is not the case or why this is not a security issue. Thanks for the help.
{ "source": [ "https://security.stackexchange.com/questions/126774", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114170/" ] }
126,819
Until today I managed to avoid paying online with my credit card (I'm weird, I know...) but somehow I managed to do it. Today though I had to book a hotel room (on booking.com) that required a deposit. I entered the card issuer, card number and card expiration date. The card issuer is basically redundant since from the number you can get the issuer. So basically, with two basic pieces of information (credit card number and expiry date) anyone can pull money from my account. If someone has some spy glasses and looks at my credit card over my shoulder while I pay for some groceries they can then pay for stuff with my credit card. How is this thing safe?
Booking.com doesn't take a deposit or any payment from you; what you're filling in is a reservation form. The card details are used as a form of payment identity in case (a) you don't turn up and they need proof you intended to stay, or (b) you stay and run off without paying when checking out. The hotel still requires a present card for payment, or the CVV to do a card-not-present transaction, or cash if you choose to pay that way instead. The bigger question of "is this secure" is more complicated. The simplest way to think about it is that there are a number of security controls in place to help prevent fraud, at various stages in the process (website, payment processor, bank), but even if these all fail the bank is insured against fraud, so you will get your money back if you use an appropriate card type. In general, credit cards offer superior and faster fraud protection in comparison to debit/bank cards.
{ "source": [ "https://security.stackexchange.com/questions/126819", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114223/" ] }
126,932
I'm wondering if it's safe to black out sensitive information from a picture just by using Microsoft Paint ? Let's take in this scenario that EXIF data are stripped and there is no thumbnail picture, so that no data can be leaked in such a way. But I'm interested in whether there is any other attack, that can be used in order to retrieve hidden information from the picture?
As mentioned in the answers to a very similar question , scribbling over part of an image will destroy the original pixels, assuming that your editor doesn't store any layers or undo history in the saved image. (Paint doesn't.) There are some things to watch out for, though: The width of the blanked region places an upper bound on the length of the secret data The height of the region could tell attackers whether the text representation of the data has ascenders or descenders (like in the letters b and p ) Any spaces in the blanked region provide information about the relative lengths of the data's parts/words (mentioned in David Schwartz's comment ) If you use a blur rather than a plain opaque rectangle/brush, a determined attacker could try lots of different possibilities in the image to see what text(s) get close to your image when blurred. Some effects can be undone almost perfectly, so make sure the one you use involves a lot of randomness or actual data destruction (e.g. a blocky pixellization). Of course, Paint doesn't have any special effects, so you should be fine. One possible thing to be wary of is JPEG compression artifacts around the secret data, which could be used to get clues about the shape of the text. It never hurts to overwrite more information than necessary when you're concerned about secrecy. (This attack isn't a problem if the image never went through JPEG compression before your redaction.)
{ "source": [ "https://security.stackexchange.com/questions/126932", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46520/" ] }
127,018
I have developed a web site during an internship, using a database with accounts. I used the crypt() method from PHP, with a secured algorithm plus salt (found on the web with a lot of feedback). Obviously, I'd like to talk about it in my report but, would it be a security issue since that report will be made public? I think that it wouldn't matter because the method itself is pretty secure, but maybe, if the attacker knows about the algorithm, it would make it a little bit easier to brute force.
Should you make the algorithm public? Trying to hide implementation details (such as which hashing algorithm you use) to preserve security is the very definition of security through obscurity . There is broad consensus that obscurity should not be your only line of defense. If you need to keep your hash algorithm a secret, you are doing it wrong and need to pick a better hashing algorithm. When you use a good algorithm, there is no reason not to tell the world about it since they won't be able to crack your hashes anyway. Also note that in your case the salt will give you away. If someone gets hold of your database, they will be able to read what algorithm was used from that. So obscurity does not make brute forcing harder here. Advertising a weak scheme, however, might encourage attackers. A strong one could have the opposite effect. The point Mike Goodwin makes in his answer should also be taken into account. Is crypt() secure? The relevant question to ask is therefore if crypt() is secure enough. Let's have a look at the PHP manual : password_hash() uses a strong hash, generates a strong salt, and applies proper rounds automatically. password_hash() is a simple crypt() wrapper and compatible with existing password hashes. Use of password_hash() is encouraged. Some operating systems support more than one type of hash. In fact, sometimes the standard DES-based algorithm is replaced by an MD5-based algorithm. The standard DES-based crypt() returns the salt as the first two characters of the output. It also only uses the first eight characters of str , so longer strings that start with the same eight characters will generate the same result (when the same salt is used). The function uses different algorithms depending on how you format the salt. Some of the algorithms are very weak, and the strong ones might not be available on all systems. So depending on the algorithm used, there are a number of problems here: For some algorithms crypt() only applies one round of hashing. That is too fast, and will enable a brute force attack. Under some circumstances crypt() will use MD5, which is known to be weak. Only using the first eight characters completely nullifies the benefits of long passwords. I therefore suggest that you switch to password_hash() . It lets you use bcrypt - a tried and tested algorithm. Then you can proudly tell the world about your hashing scheme.
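As a hedged illustration of that recommendation, here is a minimal password_hash()/password_verify() sketch; the PDO connection, table and column names are assumptions for the example, not part of the original question:

    <?php
    // Illustrative bcrypt storage via PHP's password API (PHP >= 5.5).
    function store_new_password(PDO $pdo, int $userId, string $password): void {
        $hash = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]); // salt is embedded in the hash
        $stmt = $pdo->prepare('UPDATE users SET password_hash = ? WHERE id = ?');
        $stmt->execute([$hash, $userId]);
    }

    function check_login(PDO $pdo, string $username, string $password): bool {
        $stmt = $pdo->prepare('SELECT password_hash FROM users WHERE username = ?');
        $stmt->execute([$username]);
        $hash = $stmt->fetchColumn();
        return $hash !== false && password_verify($password, $hash);
    }

password_needs_rehash() can additionally be used at login time to transparently upgrade hashes created under older cost settings.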
{ "source": [ "https://security.stackexchange.com/questions/127018", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114394/" ] }
127,092
I am looking to make a clean install of a Debian system on my home desktop. To clarify, I am switching from Windows and wish to use it as my day-to-day home OS - I'm not going to be running any servers or anything like that. I also have reason to believe that some members of my household (who have physical access to my machine) would try to gain access to it, and look through my data or possibly even install a keylogger. For the purpose of this question, please ignore the social aspects, except for the fact that I cannot act openly confrontational, so e.g. locking my room to prevent anyone accessing my PC is not an option. The people I want to protect against are technologically literate; they know their way around linux even if they may lack much experience with it, and if something can be found with some googling and takes maybe an hour or two of messing around then it's most likely going to get attempted. That said, I am pretty certain that acquiring specialist equipment is not something they would bother with, which means that I don't have to worry about most hardware attacks, e.g. a keyboard keylogger or bug on my mobo / RAM sniffer / whatever. One other thing is that I have a Windows 7 system to which they have admin access (so it can be considered compromised). This is one of the reasons I am switching to Linux; however, I'd like to keep a dual-boot system rather than removing Windows outright. I am aware that this would allow an attacker to outright nuke my Linux partition, and that is a risk I'm willing to take. I am not concerned with securing my Windows system. I am aware it's compromised and don't really care what happens to it. As I mentioned, other people have accounts on my Windows system and occasionally use it (for legitimate reasons!). I am certainly looking to secure my Linux installation, but preventing access to Windows has no point unless it contributes to the security of the Linux part of my machine. In fact, I'd rather avoid limiting access to Windows if possible because I don't want to appear paranoid or create conflict in the household. Full-disk encryption will prevent anyone from actually accessing my data from outside my Linux installation itself, which should then take care of both the Windows system and even make booting from a USB drive mostly useless (I am quite certain that the people in question do not have the resources or the motivation to decrypt a well-encrypted drive). I will also need to password-protect the single-user mode, of course. What other things would I need to do to secure my system? I am handy with the command line and willing to get my hands dirty, but I have limited Linux experience and fragmentary knowledge of computer security. The choice of Debian is largely arbitrary and I would have no problem trying out a different distro if it would be better in my case. If there's anything I've missed, or if you have tips on things I mentioned (e.g. best practices for disk encryption?), then I would be glad to hear them. I do not believe this question is a duplicate because all of the other questions I found on securing Linux on this site concern themselves with remote attackers and protection against viruses and exploits. They certainly have good answers but that is not the kind of information I am looking for here. Another question has been brought to my attention when my post was flagged as duplicate. 
However, that one asks in general whether their machine is secure when others have physical access to it; the answers to it generally boil down to "Physical access = game over" and provide some tips to mitigate various attacks (including things such as rearview mirrors on your monitor). Many of those tips are not applicable here, since I am aware that unlimited physical access means the machine isn't mine anymore in theory, and hence I provide some limitations to the attackers in my threat model which fit my personal scenario.
Use a strong and difficult password for the root user. Secondly, always log in and work as another user with no administrative rights (and also a strong password). Enable the BIOS password option. Every time you power on your computer, the BIOS itself will ask you for a password before even booting. It will also prevent anyone from applying changes to the BIOS setup. Encrypt every partition of your hard drive (check cryptsetup for Debian - if it can't encrypt your Windows partition too, use TrueCrypt (from Windows for your Windows)). Watch out for external hardware devices connected to your PC (like USB sticks or hubs) that you haven't used before. Someone might have plugged in a keylogger or something. Always lock or power off the machine when you are away. Software hardening: Install gufw (a GUI for the iptables firewall, which is pre-installed) and block incoming traffic. Also install rkhunter and check your system for known rootkits and other threats from time to time. That's all I can think of right now. If you have any questions feel free to comment below.
{ "source": [ "https://security.stackexchange.com/questions/127092", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114460/" ] }
127,111
Can headphones transmit malware? My friend borrowed my headphones (a pair of Apple EarPods) and plugged them into his Android mobile phone for a few minutes in order to listen to a voice message. Would it be dangerous if I plug it into my phone afterwards (since I wonder whether the headphone can store malware which would eventually go into my phone)? Would it be possible to "factory reset" my headphone (just like doing so in iOS)?
I doubt there is a way to store any information (thus transfer information) on regular headphones. Some more advanced models (such as noise canceling) have some processing ability and firmware, but I don't see it as a viable attack vector.
{ "source": [ "https://security.stackexchange.com/questions/127111", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114489/" ] }
127,238
I am not a cryptographer. Maybe that is why I don't see the issues with integrating PGP into SMTP. In my head: Lea requests the server of Luke's domain jedi.com to tell her the public key of [email protected] (The request includes an encryption method perhaps). She gets back the key PUBLIC . Then she encrypts the message and Luke can decrypt it easily. It's not that hard, why isn't it standard for years?
It's not that hard, why isn't it standard for years? Because that would not have solved the problem that PGP is trying to solve. PGP is end-to-end encryption, so if there is any way for the SMTP server to subvert the encryption, then the scheme fails. In the case of the scheme you proposed, suppose Alice ([email protected]) wants to send a private message to Bob ([email protected]). Using the scheme you proposed, Alice's mail client or Alice's SMTP server fetches Bob's public key by making a TLS connection to dave.com. This is fine as long as dave.com is honest and actually returns Bob's public key. But dave.com could have been configured by dave.com's operator to return a forged public key created by Eve, or Eve could hack into dave.com and set this up. Now Alice's mail client/mail server would happily accept Eve's certificate, thinking the public key is Bob's. In this model, the operator of dave.com can intercept any of Bob's emails. Now, as long as dave.com is honest, this still protects against third-party spoofing. Why don't we do this anyway if this protects at least against third-party snooping? Mainly because SMTPS also provides the same level of protection, while also being much simpler. If MITM by the mail server operator is not your concern, you can already very well secure your emails by ensuring you both use SMTPS. Note that the difficulty of end-to-end encryption isn't about fetching public keys. Most email clients that support PGP also support automatically fetching public keys from LDAP or HKP. The difficulty of end-to-end encryption is verifying the public keys. There is no known method of verifying public keys that is fully transparent to the users and fully secure. The Web of Trust or Certificate Authority model comes closest, but the Web of Trust comes with a lot of caveats and the Certificate Authority model relies on a third party to do the verification.
{ "source": [ "https://security.stackexchange.com/questions/127238", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114622/" ] }
127,269
I'm working on one smaller system where it will be required to enter One-Time-Password (OTP, not to be confused with a One-Time-Pad ) to download sensitive files which will be delivered to the user. The primary idea was, to send an OTP via SMS within a pre-set time limit. But since the SMS in not enough secure, I started to think about any other alternative for delivering an OTP to my users. I can suggest to my users to use Signal and then deliver them information, but in that case I'm afraid that it will be too complicated for them. Any idea or should I follow my primary idea?
Why can't you use TOTP or HOTP which is standard and supported by most authenticator apps? When people register for your service they need to enroll their authenticator app by scanning a QR code which contains the secret seed used to generate codes. On subsequent visits the site prompts them to enter codes generated by the app, without any network access since codes are generated locally on the device. As a bonus, since you're using standard protocols your users may already have a compatible authenticator app installed, and if not, could nudge them into using the app for more services (their Google account, etc). In the end, users are more secure and everyone wins.
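For reference, TOTP itself is small enough to sketch directly. The following is an illustrative RFC 6238-style implementation in PHP (SHA-1, 30-second steps, 6 digits); the Base32 secret shown is a made-up example, and real code should validate the Base32 input and handle enrolment/storage properly:

    <?php
    // Illustrative Base32 decoder (RFC 4648 alphabet) for authenticator seeds.
    function base32_decode_rfc(string $b32): string {
        $alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
        $b32  = strtoupper(rtrim($b32, '='));
        $bits = '';
        foreach (str_split($b32) as $c) {
            $bits .= str_pad(decbin(strpos($alphabet, $c)), 5, '0', STR_PAD_LEFT);
        }
        $out = '';
        foreach (str_split($bits, 8) as $byte) {
            if (strlen($byte) === 8) {
                $out .= chr(bindec($byte));
            }
        }
        return $out;
    }

    // Illustrative TOTP: HMAC-SHA1 over a 30-second counter, dynamically truncated.
    function totp(string $secretB32, ?int $timestamp = null, int $digits = 6, int $period = 30): string {
        $counter    = intdiv($timestamp ?? time(), $period);
        $binCounter = pack('N*', 0, $counter);          // 8-byte big-endian counter
        $hmac       = hash_hmac('sha1', $binCounter, base32_decode_rfc($secretB32), true);
        $offset     = ord($hmac[19]) & 0x0f;
        $code = ((ord($hmac[$offset]) & 0x7f) << 24) |
                (ord($hmac[$offset + 1]) << 16) |
                (ord($hmac[$offset + 2]) << 8) |
                ord($hmac[$offset + 3]);
        return str_pad((string)($code % (10 ** $digits)), $digits, '0', STR_PAD_LEFT);
    }

    // Example: verify a submitted code, tolerating one step of clock drift.
    $secret    = 'JBSWY3DPEHPK3PXP';                    // hypothetical Base32 seed
    $submitted = $_POST['otp'] ?? '';
    $ok = false;
    foreach ([-1, 0, 1] as $drift) {
        if (hash_equals(totp($secret, time() + $drift * 30), $submitted)) {
            $ok = true;
        }
    }

Because the codes are generated locally from the shared seed and the current time, no network access is needed on the client, which is the property the answer highlights.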
{ "source": [ "https://security.stackexchange.com/questions/127269", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46520/" ] }
127,288
One of the major up-and-coming MFA methods is U2F, which relies on an initial key exchange and challenge-response mechanism. It's a relatively new protocol, and is only starting to see more widespread adoption, notably among big web entities like Google, but it's not the first easy-to-use, key-exhchanging, challenge-responding mechanism out there; in fact, two come to mind quite easily: SSH, which has been around since 1995 and is available on essentially every Linux and BSD box set up since 2000, with growing adoption on Windows via add-on software in older versions and built-in software in newer versions; and PGP, which has been around since 1991, and is actually included on some of the newer Yubikeys (albeit, controversially, with a closed-source implementation in the latest generation), as well as on millions of PCs worldwide, with plenty of high-quality, actively-maintained implementations and libraries for a slew of OSes. It seems like it would make perfect sense to use either of these widely-available protocols/standards (respectively) as an MFA mechanism for more than just SSHing into a remote machine or encrypting email; so why haven't either gained any traction where U2F is booming?
Let's check out what PGP and SSH actually offer for this purpose: PGP: Client must install PGP software which is not installed by default in the majority of the systems. Client must create a PGP key pair. Then he must send the public key to the server so that the server can use it later for validation. When authenticating with 2FA the server will send a challenge which the client must sign with its private key and send the signed challenge back as response. Of course the client must protect its key against theft, maybe with a password. SSH: Client must install the SSH software which is not installed by default in the majority of the systems. Must create key pair and send the public part to the server. When authenticating with 2FA against some web service the client must create a SSH connection to a related server and the server must merge the successful authentication using SSH and the login to the website somehow together, maybe with some additional token the client has to give after using SSH. And oops, there might be a firewall in the way blocking SSH. And of course client must protect key against theft. Thus essentially both solutions boil down to: initially install some software create a static key pair and publish the public part (this might be integrated into the software for convenience but is currently not) somehow get a challenge from the server and somehow send the signed challenge back. And somehow the server must integrate validation of the challenge into the authentication process. "somehow" because there is no already established process for this which integrates everything with the authentication flow used in web applications. and of course the client must protect its key The same procedure can be much easier done with a client site TLS certificate. This still leaves creation of the certificate as a major problem (but this is possible within the browser too today) but at least the validation is integrated into the HTTPS protocol already. Additionally I cannot see how these solution provide a better user experience or integration experience than existing 2FA solutions. They are not easy to use , require additional software, require new ways to integrate with the server side etc. And they do not provide a better security either. So why care and not take the newer solutions which were designed with usability and server integration in mind? Apart from that the current cheap 2FA solutions make use of a mobile phone. These provide usually a better security architecture than current PC's. And they are an additional hardware device the user must have access to which makes the protection offered by 2FA stronger.
{ "source": [ "https://security.stackexchange.com/questions/127288", "https://security.stackexchange.com", "https://security.stackexchange.com/users/17156/" ] }
127,296
Short Question: Is it important for a sniffing device to be in the broadcasting range of a client (NOT THE ROUTER) to sniff its packets? Complete Scenario: I assume that a wireless sniffing tool, like airodump-ng, is able to capture the packets from a client because the client sends its packets in all the directions in a wireless connection (unlike a wired connection where the packets travel through the wire in a certain direction). Now, let's say we have a router A with a broadcasting range of 50m radius. 20m east of it is a client, B, connected to it. B has a range of 25m. And the sniffing device, say C, is placed 20m to the west of A. Is it, anyhow, possible for C to sniff B's packets?
{ "source": [ "https://security.stackexchange.com/questions/127296", "https://security.stackexchange.com", "https://security.stackexchange.com/users/107891/" ] }
127,649
While testing a recent adtech integration I noticed something I can't explain. The iPhone uses two IP addresses. Seemingly one for HTTP and one for HTTPS. To further confuse things it only happens when the device is not on wifi. Although, the only carrier I've confirmed it happening with is AT&T. FWIW, this does not happen with Verizon Can anyone explain why this would be the case? Example: http://ipof.in/json returns a different IP address than https://ipof.in/json . They appear to be owned by the same carrier (AT&T), as well as both public, but are wildly different (107.77.212.XXX vs 166.216.157.XXX). It's also worth noting that the response from ipof.in contains a timestamp. Nothing is being cached. I receive similar results with similar service www.ip4.com , etc.
I am just going to take a guess here. Your telephone data carrier may have an optimizing or caching proxy for content whose IP address appears in your JSON result. As the proxy has no visibility into encrypted HTTPS packets, it cannot proxy the content, so it may be routing directly with your public (routable) IP address. If this is the case, your phone has one IP address but the carrier's routing shows different origin IP addresses at ipof.in.
{ "source": [ "https://security.stackexchange.com/questions/127649", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8318/" ] }
127,655
I was curious if it's possible to protect against an SQL injection attack by removing all spaces from a string input? I have been reading up on SQL injection at OWASP , but they don't mention anything about removing spaces, so I was curious why it would or would not work. I was looking at this question , which asks about trim , and the top answer says this: No, adding a trim will not prevent sql injection. trim only removes space on the outside of your string. Select * from aTable where name like '%'+@SearchString+'%' If you @SearchString held something like '' update aTable set someColumn='JackedUpValue' where someColumn like ' Then when you put it all together and execute it dynamically you would get Select * from aTable where name like '%' update aTable set someColumn='JackedUpValue' where someColumn like '%' However, if you took that search string update aTable set someColumn='JackedUpValue' where someColumn like and performed the operation shown in this question , wouldn't you get updateaTablesetsomeColumn='JackedUpValue'wheresomeColumnlike which should not execute, right? I'm curious if there is any form of SQL injection that could defeat this. Are there one-word dangerous commands? If this can be defeated, would removing spaces at least help a bit with defense? Note: I'm asking this question purely out of curiosity, not as a way to circumvent using the "proper" methods of SQL injection prevention.
No. Removing spaces would not prevent SQL injection, as there are many other ways to make the parser process your input. Let's look at an example. Imagine that you had a URL which used user-supplied input unsafely in a query: http://example/index.php?id=1 => SELECT * from page where id = 1 In your example the attacker would use spaces: http://example/index.php?id=1%20or%201=1 => SELECT * from page where id = 1 or 1=1 . Removing the spaces would collapse the injection into a string. Now let's imagine that the attacker used another form of whitespace, i.e. tabs: http://example/index.php?id=1%09or%091=1 => SELECT * from page where id = 1 or 1=1 . Removing the spaces would still allow the injection through. Depending on the technology in use, the attacker could replace the spaces with /**/ %00 %09 %0a %0d or any number of Unicode characters that cause tokenization by the SQL parser. While the referenced example removed more than just spaces, the /**/ variant takes advantage of SQL comments, which are not whitespace, to cause tokenization. You would still be vulnerable. The only reliable way to prevent SQL injection is to use parameterized queries.
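As a concrete illustration of that last sentence, here is a hedged sketch using Python's built-in sqlite3 module; the table, column and variable names are invented for the example, and the same placeholder idea applies to any other driver or database.

```python
# Parameterized-query sketch: the driver sends the value separately from the
# SQL text, so whitespace tricks, tabs, or /**/ comments in user input never
# reach the parser as SQL tokens.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO page (id, title) VALUES (1, 'home')")

user_supplied = "1 or 1=1"          # hostile input from the URL
# Vulnerable pattern (string concatenation) -- do NOT do this:
#   conn.execute("SELECT * FROM page WHERE id = " + user_supplied)
# Safe pattern: a placeholder plus a bound parameter.
rows = conn.execute("SELECT * FROM page WHERE id = ?", (user_supplied,)).fetchall()
print(rows)   # [] -- the whole string is treated as a single value, not as SQL
```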
{ "source": [ "https://security.stackexchange.com/questions/127655", "https://security.stackexchange.com", "https://security.stackexchange.com/users/106817/" ] }
127,667
I just got a pop-up after having logged on to Gmail. It said it was from https://googleads.g.doubleclick.net and asked for username and password. What should I do about this? Has anyone else seen this? I did press cancel, nothing happened. The only add-on I have installed is HttpRequester.
This seems unlikely but not unthinkable. From the information in your question and the supplied screenshot, it seems that the Google ad domain was, or currently is, compromised. What to do now? Firstly , make sure that you have antivirus and anti-spyware software installed and that this software (including your operating system) is up to date. It is a good idea to let your antivirus and anti-spyware software run a full system scan. Secondly , even if you didn't fill in your credentials, I'd recommend changing your password as soon as possible, with an ad blocker (like Adblock Plus , Adblock , uBlock Origin or similar) installed and enabled in your browser. It is recommended to enable two-factor authentication on your Google account (if you haven't done so already) to prevent (future) leaked credentials from being misused. Thirdly , contact Google about this and supply them with details like your IP, the URL, screenshots, date/time and as much information as you have. You can contact Google about this at "goo.gl/vulnz" or check https://www.google.com/about/appsecurity Additional information Alternative explanation Another explanation for this could be (although this would be unlikely and amateurish for a company like Google) that the developers overlooked a mistake in the development, testing and release process. Also (as mentioned in different comments) this result could possibly be caused by some kind of man-in-the-middle attack (like a hacked proxy) or a malicious browser extension. Why change your password if you didn't fill in your credentials? The site showed an unusual but visible "basic auth" prompt from an external domain. Assuming that the domain was compromised (at least until proven otherwise), the attackers could just as well have included other code that was not directly visible. Maybe something persistent in the cache? Maybe some kind of malware? Since we can't exclude those possibilities either, and since a password change or a virus/malware scan won't hurt anyone, these are extra measures, just to be sure. Is doubleclick.net really owned by Google? Yes! As described in this Wikipedia article and as shown in the WHOIS information of doubleclick.net. Registrant Organization: Google Inc.
{ "source": [ "https://security.stackexchange.com/questions/127667", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115097/" ] }
127,934
In my work as a developer, I sometimes need username/password combos from the customer to make sure the settings on 3rd-party services are correct for the website/application we are building. E.g. a payment provider that we use on a game website. What would be a good way to ask for their username and password so that it stays secure? If I send them an email, for sure they are just going to send me back a plain-text email with the password, but that is not secure. UPDATE: I see my question got a bit misunderstood, so I will add an example: Suppose I am a developer for CoolSoft, a software company. Another company (let us call them Games, Inc.) wants us to create a website for them. This website will use 3rd-party services for payment and customer support which we need to integrate with. Games, Inc. creates accounts with the payment provider and the customer support provider. But they are totally non-technical and we need to have their credentials to set the correct callback URLs, etc. How can they safely send their password to us so we can fix the settings on their accounts (we never meet in person)?
You don't. When you teach users to give their username and password to someone, you train them to be vulnerable to phishing or other social engineering attacks. Instead, design the system in a way that an administrator can view and edit these settings without requiring the user's credentials. When you are in a situation where you really need to see things from the perspective of the user to troubleshoot a problem, ask the user to type in the password for you and then let them show you the problem. This can be done either in person or with a remote administration tool. When you are in a situation where the customer has credentials you need in your application to integrate with a 3rd-party solution, develop your application in a way that gives a non-technical user an easy-to-use interface for setting these credentials. You will need that anyway in case the customer needs to change them while you are unavailable. The user won't have to enter them until the application is deployed on their own servers. During development you should use a test account on your end to interface with the 3rd parties. You definitely don't want to incur costs for your customer because you ran some tests against the 3rd-party payment interfaces which didn't work out the way you expected.
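To illustrate the point about a configurable credentials setup, here is a small, hypothetical sketch (names invented, not from the answer) of an application that reads the customer's payment-provider credentials from its own environment at deployment time, so they never pass through the developers.

```python
# Illustrative sketch: third-party credentials are read from configuration
# at runtime, so the customer can enter or rotate them without ever
# handing them to the developers. Variable names and the PaymentClient
# class are made up.
import os

class PaymentClient:
    def __init__(self, merchant_id: str, api_key: str):
        self.merchant_id = merchant_id
        self.api_key = api_key   # never logged, never committed to the repo

def build_payment_client() -> PaymentClient:
    # During development these point at the developer's own test account;
    # in production the customer (or their admin UI) sets the values.
    return PaymentClient(
        merchant_id=os.environ["PAYMENT_MERCHANT_ID"],
        api_key=os.environ["PAYMENT_API_KEY"],
    )
```

The same code then runs against the developer's test account or the customer's live account simply by changing the configured values.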
{ "source": [ "https://security.stackexchange.com/questions/127934", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102496/" ] }
127,984
Is there a publicly accessible website which will only accept TLS 1.2 connections, so that I can test whether my application can successfully and securely connect to it? Background: I have an old VB.NET application running on Windows Server 2008 R2 (64-bit). It has code like this: Dim req As New MSXML2.ServerXMLHTTP30 req.open("POST", "https://example.com", False) From what I've read, ServerXMLHTTP uses SChannel and you can't control the protocols used at the application level. Windows Server 2008 R2 should support TLS 1.2, so I suspect the app will just work, but I'd like to verify by connecting to a site which only accepts TLS 1.2.
The website https://badssl.com/ supports various versions of TLS on different subdomains, so you can test lots of variations there! This subdomain and port only supports TLSv1.2: https://tls-v1-2.badssl.com:1012/ This subdomain and port only supports TLSv1.1: https://tls-v1-1.badssl.com:1011/ This subdomain and port only supports TLSv1.0: https://tls-v1-0.badssl.com:1010/ and more. And if that domain disappears for some reason, the source for it is here on GitHub
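Independent of the asker's VB.NET/SChannel stack, here is a small sketch (Python 3 standard library, assuming a reasonably recent OpenSSL and that the badssl hosts above are still online) that connects to the TLS-1.2-only endpoint and prints the protocol version actually negotiated:

```python
# Check which TLS version a client negotiates against a badssl.com test host.
import socket
import ssl

def negotiated_version(host: str, port: int) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

print(negotiated_version("tls-v1-2.badssl.com", 1012))  # expect 'TLSv1.2'
```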
{ "source": [ "https://security.stackexchange.com/questions/127984", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115399/" ] }
127,988
The CA issues certificates to clients/servers. Whenever a request is made by the client/server, the certificate is used to verify the identity. Now, if a certificate needs to be revoked then who does it and how? Does the client/server "mark" it as revoked? Or does the CA do it? If the CA does it then how does it know when to revoke what? Also, is there a field on the certificate that tells that it is revoked? Or is there a blacklist of certificates maintained somewhere that contains all the revoked certificates?
The answer by user2320464 is good, but I'd like to expand on it. Summary: The certificate holder generally does not manage their own revocation information, because the whole point of revocation is to announce that the holder of this certificate is not trustworthy. The rightful owner of the cert needs to be able to declare the cert revoked, but in a way that an attacker who also has the private key can't "undo" the revocation. If the owner of the cert is themselves no longer trustworthy, you need a way to revoke the cert even against their will. The best solution we've found is for a 3rd party to track the revocation info, usually the CA that issued the cert. They maintain blacklists, and can also respond to online requests for "is cert #28277392 revoked?". Why revocation is ugly Revocation really is the ugly duckling in the certificate world; we haven't yet found an elegant way to handle it. The reason it's hard is that there are seemingly conflicting requirements: When a server hands its cert to a client, the client should be able to tell if the cert is revoked. The server shouldn't be responsible for generating or passing on revocation information for its own cert. That's like asking the dog to go check if the cookies are still there, and not lie about it. So the CA needs to dynamically generate revocation info for each cert (or at least track it and publish the blacklist hourly or daily). One of the beauties of certificates is that they contain all the data needed to validate the signature, which you can do offline and very fast. But revocation has to be real-time; you can't just put it in the initial cert and forget about it, because you never know at what point it will be revoked. If the client has to contact the CA directly for every cert validation then it's impossible to validate a cert offline, and you add network lag. There are two main revocation mechanisms in use today: CRL and OCSP (see below), but neither is a really elegant solution. Why certificates get revoked Your comment Does the client/server "mark" it as revoked? leads me to believe that you don't fully understand why a certificate would get revoked. There are typically three broad reasons for revoking a cert: The rightful owner of the cert is abusing it. Typically this would be a sub-CA who is issuing certificates they shouldn't. Their cert gets revoked to tell the world not to trust them anymore. The server is decommissioned, or the cert is no longer needed for whatever reason. Compromise of the private key corresponding to the certificate. We can no longer trust anything signed by, or encrypted for, this cert because we no longer know if we're talking to the rightful owner or the hacker. In case 2) the owner of the cert could flag their own cert for revocation in the way you're suggesting, but in the other two cases the attacker could just use an older version of the cert from before it was revoked. So revocation really needs to be handled by a 3rd party, out of the control of the cert holder. Usually it's done by the CA that issued the certs. To revoke a cert, you typically contact the CA, prove you are who you say you are, and request that they revoke the cert. I think it varies from CA to CA what level of proof they need before they will revoke a cert for you - this is to prevent an attacker from requesting a cert be revoked as a denial-of-service.
For end-user certs like a server cert, revocation is often automated, and signing the network message with the cert's private key is good enough (i.e. a cert can revoke itself). For revoking high-value certs like a sub-CA, there is a lot of IT work and cleanup to be done, end-user certs to be migrated and re-issued, fines to be paid, etc., so a revocation will be a major incident involving many people from both companies. How certificates get revoked is there a blacklist of certificates maintained somewhere that contains all the revoked certificates? Yes. The simplest mechanism is called a Certificate Revocation List (CRL) , which is exactly what you expect: a list of the serial numbers of all revoked certs. Each CA is responsible for tracking the revocation status of all certs that it has issued and publishing a new CRL hourly or daily. The CRL is signed by the CA key, so it is tamper-proof. It's just a .crl file that you can download, pass around, whatever. This can be used semi-offline: as long as you connect and refresh it once every 24 hours, you can use it offline (but of course, you have no way to know if you're talking to a compromised cert until your next CRL refresh). The more complex "cloud-friendly" mechanism is called the Online Certificate Status Protocol (OCSP) . The client opens a connection directly to the CA and asks Client: "Hey, I've got cert #9388038829188 that you issued, is it still good?" CA: "Yup, it's still good". This solves the delay problem with CRLs, but requires the client to be online, and adds network lag to the crypto process. In 2011 a system called OCSP Stapling was introduced that allows the server to pre-fetch the OCSP response from the CA, say once every 5 minutes, and bundle it with the cert when handing it to the client. This, among other things, speeds up the client's crypto when validating the certificate because it has a local copy of everything it needs; no new network connections are required. This is considered an elegant solution for TLS (i.e. https, ftps, ssh, vpn, etc.) where you are opening a connection to a server that has internet access, but it does not solve revocation for non-TLS uses of certificates, like logging into Windows with a PKI smartcard, launching code-signed software (like any mobile app), or opening a signed PDF document, all of which I would like to still work even if I'm not connected to the internet. How it gets passed to the end user is there a field on the certificate that tells that it is revoked? Yes: in an X.509 certificate (like SSL) the address where you can find the CRL goes in the CRL Distribution Point field, and the OCSP address goes in the Authority Information Access field. For example, the cert for *.stackexchange.com that is protecting this page contains: [1]CRL Distribution Point Distribution Point Name: Full Name: URL=http://crl3.digicert.com/sha2-ha-server-g5.crl [2]CRL Distribution Point Distribution Point Name: Full Name: URL=http://crl4.digicert.com/sha2-ha-server-g5.crl [1]Authority Info Access Access Method=On-line Certificate Status Protocol (1.3.6.1.5.5.7.48.1) Alternative Name: URL=http://ocsp.digicert.com URL=http://cacerts.digicert.com/DigiCertSHA2HighAssuranceServerCA.crt
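If you want to read those two pointers out of an arbitrary certificate yourself, here is a hedged sketch using a recent version of the third-party cryptography package; the file name cert.pem is just an assumption, and either extension may be absent from a given certificate.

```python
# Print where a certificate says its revocation info lives.
from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    crl_dp = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
    for dp in crl_dp.value:
        for name in dp.full_name or []:
            print("CRL Distribution Point:", name.value)
except x509.ExtensionNotFound:
    print("no CRL Distribution Points extension")

try:
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
    for desc in aia.value:
        if desc.access_method == AuthorityInformationAccessOID.OCSP:
            print("OCSP responder:", desc.access_location.value)
except x509.ExtensionNotFound:
    print("no Authority Information Access extension")
```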
{ "source": [ "https://security.stackexchange.com/questions/127988", "https://security.stackexchange.com", "https://security.stackexchange.com/users/26702/" ] }
128,254
I got a notification on Facebook: " (a friend of mine) mentioned you in a comment". However, when I clicked it, Firefox tried to download the following file: comment_24016875.jse This is an obfuscated script which seems to download an executable ( autoit.exe ) and run it. This is the part I managed to deobfuscate: ['Msxml2.XMLhttp', 'onreadystatechange', 'readyState', 'status', 'ADODB.Stream', 'open', 'type', 'write', 'position', 'read', 'saveToFile', 'close', 'GET', 'send', 'Scripting.FileSystemObject', 'WScript.Shell', 'Shell.Application', '%APPDATA%\\', 'ExpandEnvironmentStrings', 'Mozila', 'https://www.google.com', 'http://userexperiencestatics.net/ext/Autoit.jpg', '\\autoit.exe', 'http://userexperiencestatics.net/ext/bg.jpg', '\\bg.js', 'http://userexperiencestatics.net/ext/ekl.jpg', '\\ekl.au3', 'http://userexperiencestatics.net/ext/ff.jpg', '\\ff.zip', 'http://userexperiencestatics.net/ext/force.jpg', '\\force.au3', 'http://userexperiencestatics.net/ext/sabit.jpg', '\\sabit.au3', 'http://userexperiencestatics.net/ext/manifest.jpg', '\\manifest.json', 'http://userexperiencestatics.net/ext/run.jpg', '\\run.bat', 'http://userexperiencestatics.net/ext/up.jpg', '\\up.au3', 'http://whos.amung.us/pingjs/?k=pingjse346', '\\ping.js', 'http://whos.amung.us/pingjs/?k=pingjse3462', '\\ping2.js', ''] Is this an exploit on Facebook? Is it possible that my friend got a virus which targets their contacts by tagging them on malicious links? Should I report this to Facebook? If so, how?
This is typical obfuscated JavaScript malware which targets the Windows Script Host to download the rest of the payload. In this case, it downloads what appears to be mainly a Chrome extension ( manifest.json and bg.js ), the autoit Windows executable, and some autoit scripts which install them. All of these files are given .jpg extensions on the (likely compromised) server where they are hosted, to be less conspicuous. The malware appears to be partially incomplete or otherwise underdeveloped, or perhaps based on some other malware (the quality is very low). Many of the autoit scripts don't actually do anything, and what appears to be a ZIP meant to contain a Firefox extension is actually empty. The autoit scripts are a ton of includes combined into a single file, but only one (ekl) actually has a payload at the end. The one active autoit script which runs on infection replaces the Chrome, IE, and possibly other browser shortcuts with a shortcut to Chrome carrying the arguments necessary to run the malicious Chrome extension. The Chrome extension is mainly how this malware is being propagated. It does some nasty things like blacklisting antivirus software domains and sending Facebook messages automatically. There was actually a webservice back end at http://appcdn.co/datajs serving scripts which would be injected into any page a user visited, based on the URL currently being viewed, which was how the Facebook messages were being posted. This service is now offline, likely taken down. Is this an exploit on Facebook? Not exactly, more like abuse of Facebook. Facebook's code hasn't been exploited; your friend just has an infected browser phishing their contacts on their behalf. Is it possible that my friend got a virus which targets their contacts by tagging them on malicious links? Yep, that's exactly how this malware is spreading itself. Should I report this to Facebook? If so, how? Yes, see How to Report Things in the Facebook help center. Getting the following URLs taken offline by contacting their hosts would also be good. http://userexperiencestatics.net/ext/Autoit.jpg http://userexperiencestatics.net/ext/bg.jpg http://userexperiencestatics.net/ext/ekl.jpg http://userexperiencestatics.net/ext/ff.jpg http://userexperiencestatics.net/ext/force.jpg http://userexperiencestatics.net/ext/sabit.jpg http://userexperiencestatics.net/ext/manifest.jpg http://userexperiencestatics.net/ext/run.jpg http://userexperiencestatics.net/ext/up.jpg http://whos.amung.us/pingjs/?k=pingjse346 http://whos.amung.us/pingjs/?k=pingjse3462 http://appcdn.co/datajs Unfortunately, CloudFlare still has not taken the userexperiencestatics.net URLs down even though I contacted them shortly after posting this answer, and I don't know who is actually hosting these files. CloudFlare just emailed me to say they have restricted access to the files and will notify the host. UPDATE: After I and likely others reported the .jse URL to Google, they appear to have taken the file down. If you find any more copies, those should also be reported. It seems people have been receiving the files from numerous sources. MORE INFO: This malware and post are getting a lot of attention, so I'll add some more info to address people's questions: Will this file automatically run when downloaded? Probably not, unless you have configured your browser to do so. It is meant to trick you into opening it. Can it infect my phone or a non-Windows computer? As far as I know, Windows is the only OS which can run this malware.
As I mentioned, it uses the Windows Script Host. I don't believe even Windows Phone is vulnerable, though I don't know much about Windows Phone. UPDATE ON RANSOMWARE: Previously it was assumed the autoit scripts contained ransomware; however, after further inspection this appears not to be the case. There is just a bunch of unused crypto functions obscuring the actual payload, which I've mostly deobfuscated to this. UPDATE ON CHROME EXTENSION: The unpacked Chrome extension code can be viewed here. Details on what it did are integrated above. UPDATE FOR JSE SCRIPT: My de-obfuscated comment_24016875.jse script can be viewed here.
{ "source": [ "https://security.stackexchange.com/questions/128254", "https://security.stackexchange.com", "https://security.stackexchange.com/users/61951/" ] }
128,305
I'd like to skip the argument about Linux being more secure than Windows, and purely focus on the security boundaries between Host OS and VMs. Is Qubes OS theoretically more secure than say - Windows 10 running a few isolated activity related VMs (Banking, Work, Home use etc.)? And if you think it is, why is that? I realize things like clipboard redirection aren't possible and prompting to copy files is a nice feature in Qubes. But that could also be turned off in Windows.
Joanna Rutkowska, leader of the Qubes project, does a great job of documenting the concepts on which Qubes relies. I therefore strongly suggest you get the information at the source, and in particular read the two following documents: Software compartmentalization vs. physical separation (Or why Qubes OS is more than just a random collection of VMs) How is Qubes OS different from... Qubes not only brings a user-experience improvement compared to running several VMware instances, it also brings more fine-grained isolation. To explain it roughly, the model you describe is like putting a set of smaller boxes (the VMs) inside a single big box (the host system). All VMs will go through the host system to access any device (network, USB, DVD reader, etc.), and the host system both controls the VMs, the devices and the user's interface, and directly faces the Internet. The idea behind Qubes is not to store the small boxes inside an overpotent big box, but instead to configure the small boxes in a kind of virtual local network so that, together, they look like a big box without being one and without using one. The need to look like something users already know is important for user adoption. But behind the scenes, all parts of the system are meant to be isolated from each other. Among the main differences is the fact that the user interface is not facing the network and has no Internet connection. The VM dedicated to facing the network is isolated from the rest of the VMs by another VM dedicated to firewalling. Qubes 3.0 brought a long-awaited feature allowing a VM dedicated to USB devices. To see this from an attacker's point of view: If I want to hack your Windows-based solution, all I have to do is manage to exploit your Windows host (single point of failure). Once I get it, I get power over everything, and this should be relatively easy since it is facing the network, allowing a wide range of possibilities from remote exploits to reverse-shell trojans. If I want to hack Qubes, I have no choice but to start from a guest position since neither Xen nor the main Dom0 domain has any direct link to the outside world, and from there find a way to migrate from guest to guest, or manage to exploit the Xen core, or reach the user interface running in Dom0 (knowing that the guests have their X server replaced by a specially designed hardened display server precisely to avoid such a possibility; all inter-VM communication in general has been carefully designed to reduce the exposed area to a minimum), and then build the appropriate tunnels in order to still be able to communicate with my malicious software (getting in is not sufficient, you also want data to be able to go out, which is trivial on a network-facing system but much harder on isolated guest systems). I just want to add that while Qubes OS is the most well-known and most documented as being, as far as I know, the only open-source initiative implementing this concept, the concept itself is not something completely new and revolutionary. Polyxene for instance is a proprietary system taking exactly the same approach in order to secure defense-level desktop systems. I say this just to highlight the fact that discussing such technology goes beyond discussing open-source and proprietary OSs.
{ "source": [ "https://security.stackexchange.com/questions/128305", "https://security.stackexchange.com", "https://security.stackexchange.com/users/108391/" ] }
128,341
I have to produce a screenshot of a web page, and want to make sure others will know without any doubt that this screenshot has been produced today. That is, I would like to embed today's date in the screenshot as irrefutable proof the screenshot has been made exactly today. Is there any way?
If you want to prove to others that you took the screenshot on a specific date and not later, you will not be able to do it yourself; you will have to rely on some commonly trusted third party. For low-importance issues, this can be accomplished by simply posting the image on some well-known public service where the date when the image was posted will be shown. Be sure to check beforehand that this service does not offer any possibility to modify the picture without altering the upload date! For higher-importance issues, you will have to contact a bailiff or a notary. By being present during the screenshot, they will be in a position to vouch for the date and the conditions under which it was taken. For instance, I've read about such a procedure being used when someone wants to keep a proof of the existence of a security flaw valid even after the flaw itself has been corrected. However, if you go that way I would strongly recommend checking the Law StackExchange website before committing yourself to anything.
{ "source": [ "https://security.stackexchange.com/questions/128341", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115624/" ] }
128,347
Is there a way I can distribute a public decryption key such that the encryption key cannot be computed from it, so that anyone who reads one of my encrypted messages can be quite sure that it was originally written by me? So basically it is the traditional asymmetric encryption-decryption scheme, but in reverse.
What you are describing is, for all practical purposes, a digital signature scheme, and it already exists in every mainstream asymmetric cryptosystem. You publish a verification key (the "public decryption key" in your wording) and keep the signing key private; deriving the private key from the public one is computationally infeasible. Anyone can then verify that a message was produced by the holder of the private key, which is exactly the authenticity guarantee you are after. Note that a signature by itself does not make the message confidential; it proves origin and integrity, not secrecy, so if you also need confidentiality you combine signing with ordinary encryption (sign-then-encrypt), as PGP/GPG does. In practice, don't design this yourself: use an existing, well-reviewed implementation such as GnuPG, or a library offering RSA-PSS, ECDSA or Ed25519 signatures.
{ "source": [ "https://security.stackexchange.com/questions/128347", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115630/" ] }
128,412
I'm no techie and would like your expertise in understanding this. I recently read a detailed article on SQLi for a research paper. It strikes me as odd. Why do so many data breaches still happen through SQL injection? Is there no fix?
There is no general fix for SQLi because there is no fix for human stupidity. There are established techniques which are easy to use and which fix the problem (especially parameter binding), but one still has to actually use them. And many developers are simply not aware of security problems. Most care that the application works at all and don't care much about security, especially if it makes things (even slightly) more complex and comes with additional costs such as testing. This kind of problem is not restricted to SQLi; you'll find it with buffer overflows, certificate checking, XSS, CSRF... It is more expensive to do secure programming because of the additional testing and the additional expertise needed by the (thus more expensive) developers. And as long as the market prefers things cheap and does not care much about security, you get the cheap and insecure solutions. And while security by design helps a lot to make things better, developers often work around this design because they don't understand it and it just gets in their way.
{ "source": [ "https://security.stackexchange.com/questions/128412", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115725/" ] }
128,422
Over the last few days, I've been hearing a lot about the petition to (practically) "repeat" the Brexit referendum, and I noticed that it is an online petition . I noticed that the "sign petition" form just requires a name, email address and postcode, and then you'll receive an email to confirm your signature. Is this system secure? Can't someone just create a bot that will fill in the form continuously (and confirm the email) using a different address each time? Looking at the privacy page , it also appears that it doesn't even have a captcha system, and the email verification is the only "system" to prevent bots. Last but not least, it doesn't have a system to ensure that the signer is British.
The petitions site is purely a mechanism to see whether there may be high enough numbers to support something, and if so, that something will be discussed in Parliament. There are some checks and balances (for example, 80,000 fake votes were identified and removed) but there is no need for a strong level of trust here, as nothing is decided by any of these petitions. For the re-running of the European Exit referendum, there are around 4 million signatories, and even with some level of fraud, the government already knows that it is something they need to discuss. In summary - can it be trusted to the level required for this purpose? Almost certainly. Can it be trusted to have no fraud? No.
{ "source": [ "https://security.stackexchange.com/questions/128422", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92434/" ] }
128,462
There are lots of papers concerning car hacking. It is often done with physical access (by the OBD interface for example), sometimes without ( Remote Exploitation of an Unaltered Passenger Vehicle ). The only case of exploitation I've read about is the theft of BMW cars . Are there some other cases of exploitation in real life by villains (or by governments, which can be pretty much the same)?
As reported on Wired in March 2010: More than 100 drivers in Austin, Texas found their cars disabled or the horns honking out of control, after an intruder ran amok in a web-based vehicle-immobilization system normally used to get the attention of consumers delinquent in their auto payments. Other than that, not much, as Sophos says: The dangers of cyber attacks on cars has all been theoretical so far: at this point, there’ve been no real-world attacks, as far as we know. Only security researchers have managed to send cars into the weeds.
{ "source": [ "https://security.stackexchange.com/questions/128462", "https://security.stackexchange.com", "https://security.stackexchange.com/users/106454/" ] }
128,543
Does bandwidth alone determine how many packets a router or server can handle before the server is overrun and goes offline?
Imagine a post office. It has an entrance, and a counter with a clerk who deals with the customers and their packets. The clerk is a multi-tasking talent with a lot of arms to deal with packets on the counter. The counter has a certain width, so only a certain number of customers can be processed at the same time. The clerk has a small address book with addresses of where to send the packets. Some pages are blank, and he can write in new addresses. Behind the counter and the clerk are a certain number of shelves, where the assistant of the clerk puts packets until they can be further processed and sent. There are now several cases for a DDoS / DoS attack in this metaphor. Case 1 - SW Bug One mean customer steps forward, leans over to the clerk and shows the clerk a packet with the address "Vietnam" on it - which triggers the clerk's PTSD 1 . This is something only a few people knew about. The clerk starts sobbing uncontrollably, sits in a corner and is of no further use to the post office. It has to close early, until the owner of the office comes, says some magic words and restarts the office. The word "Vietnam" in this case stands metaphorically for a bug that the mean customer abuses. Since usually only one packet is needed to freeze the office, this might not count as a DDoS, but it is certainly a DoS. Case 2 - Bandwidth The post office has only one door, through which all the customers and the packets have to squeeze themselves, incoming and outgoing alike. If an attacker wants to abuse this, he might gather a lot of unusually big customers and/or packets to enter the post office all at the same time. Now the doors are too tight and new customers can't enter while other customers can't leave the store. This halts the post office, until the clerk puts up a rule at the entrance: "You must be smaller than this to enter". Everyone trying to enter the office while being too big won't be allowed in. The doors here stand for the bandwidth; the more bandwidth you have, the more customers can enter / leave at the same time. Since this is pretty dynamic, it's sometimes hard to find the exact point at which the store has to halt. Case 3 - CPU Load Whenever a customer comes to the clerk, he has to greet them, take the packet and process it. Usually he has to check whether the address is correct and the packet is not purple (purple packets are not standardized packets and therefore bad!). If the clerk is really paranoid, he has to open the packet, look to make sure that no contraband is inside, test whether the packet smells like anthrax, etc. So depending on the rules for the post office, the clerk has to check a lot of parameters. If an attacker wants to abuse this, he can gather a lot of people with a lot of small packets. The clerk has to process EVERY packet, which creates a lot of overhead. Because of this, the customers outside start to pile up, and some even start to curse and leave the line, since they don't have all the time in the world to wait in this line forever. So CPU load usually has to do with processing in this metaphor. Case 4 - Memory The clerk has worked in the post office for quite some time, so he has memorized several addresses. He also has some addresses written in his address book. An attacker can abuse this by gathering a lot of people and telling them to write unusual addresses on their packets. So each time the clerk processes one of those mean customers, he has to write down another address in his book. The book soon gets filled, and now he is in trouble.
How should he process new addresses? Kick out old ones? He has no idea and maybe stops his processing altogether. The address book stands for the memory, which is finite. Case 5 - Connections The counter has a certain width, as mentioned. Since the clerk is a multi-tasking talent, he can deal with 3000 customers at once, but not more. The attacker can abuse this fact by gathering a lot of people and telling them to rush to the counter. Now there are 5000 people demanding to get their packets processed, causing a lot of stress for the clerk with his mere 3000 arms. Case 6 - Disks (and other resources) The clerk has to store the packets on a shelf behind his counter. This is the job of his assistant. Some assistants are faster than others, so it can sometimes happen that the clerk can deal with 3000 customers at once while the assistant has to slowly carry himself and the packet to the correct part of the shelf. So while the clerk certainly is fast enough, the poor assistant is not. The assistant in this case can be the server's disks or other components. Conclusion: DDoS / DoS attacks can use a lot of different methods and parameters to stop or slow the service of a server / server cluster. Therefore it depends on a lot of factors: which software you use, how much bandwidth you have, the CPU, memory, etc. [1] PTSD is no joke, but it fits this metaphor nicely. I apologize for using it in a joking fashion.
{ "source": [ "https://security.stackexchange.com/questions/128543", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95987/" ] }
128,581
I'm doing a redesign for a client who's understandably concerned about security after having been hacked in the past. I had initially suggested using a simple PHP include for header and footer templates and a contact form they wanted. They are reluctant because they were advised by their hosting company that using PHP is a security concern which might allow someone to break into cPanel and gain control of the site. This, to me, sounds about like telling someone to never drive so they won't be in a car accident. My gut instinct is that the host is trying to shift blame onto the client for security flaws in their own system. Also, the server still has PHP installed, whether or not we use it, so I'm questioning how much this actually reduces the attack surface... But since I'm not a security expert, I don't want to stick my foot in my mouth. I told my client that to process the contact form they're going to need some form of dynamic scripting. (False?) They asked if I could just use PHP on that one page. Would this be measurably safer, or is it the equivalent of locking your car doors and leaving the window rolled down? How much truth is there to the claim that using any PHP script, no matter how simple, is an inherent security problem? We're on shared hosting with no SSL. Is it reasonable to assume we got hacked due to using PHP? Will we be any safer if we don't use it, but can't uninstall it? Because if not, we have other problems. (Would the answer be different for any other language?)
It's not so much that PHP itself has security problems (assuming it gets the needed security updates); it's that there exists a lot of popular PHP-based software with rampant security problems. You could fault PHP for being a language that gives you enough rope to hang yourself, but the real problem is just how prevalent vulnerable PHP code actually is. One need look no further than the Stack Overflow PHP tag to find PHP newbies writing horrifically vulnerable code based on some atrocity of an old tutorial. Additionally, a significant amount of popular PHP software known for rampant security flaws is based on very old code and coding practices. Many of these old practices are considered bad practice precisely because of their inherent security problems. This, to me, sounds about like telling someone to never drive so they won't be in a car accident. Pretty much, yes. Better advice might be along the lines of "don't drive an old car with no airbags". My gut instinct is that the host is trying to shift blame onto the client for security flaws in their own system. Not necessarily. If a user uses the same password for the WordPress site and cPanel, compromising the WordPress password also compromises cPanel. That would be the fault of the user. Hackers rarely need to get that far though, and just use a PHP shell. I told my client that to process the contact form they're going to need some form of dynamic scripting. (False?) Not necessarily true. You could use a 3rd-party service to handle the mail sending. The service then handles the dynamic server scripting and takes over the security implications. There are numerous such services available with varying feature sets, and they are popular for powering contact forms on statically generated sites. How much truth is there to the claim that using any PHP script, no matter how simple, is an inherent security problem? Some, but not much. PHP does involve some active code, in both PHP itself and the server software which executes it. If there were ever a security vulnerability in that process which did not depend on specific PHP code, it could be exploited. While that risk is tiny, it's a risk a server with no such support does not have.
{ "source": [ "https://security.stackexchange.com/questions/128581", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115691/" ] }
128,654
I am currently working on a web application with a significant security risk attached to its function. We're using Microsoft Identity Framework to handle user logins, with the system forcing strong passwords and registration having the extra layer of email confirmation being required before first use. We have a feeling that this is not entirely sufficient. One of our competitors uses a two-step login system with a password followed by the user entering three digits from a six-digit PIN by drop down. There is a suggestion that we should copy this. Personally, I'm uncomfortable with implementing such a solution without better understanding its pros and cons versus the alternatives. It strikes me that an extra data layer entry which is immune from keylogging is not a significant extra piece of security. Surely if an attacker already has an email/password combination, almost any conceivable way they could have obtained this will also result in them having the PIN? The obvious alternative to strengthening security is to have a two-factor authentication via SMS. This will incur a cost, but if security is paramount it would seem to actually add significant protection to the system over a PIN, which I believe will to add almost none. What's the point of having an extra PIN authentication? Does it have any advantages over 2-factor authentication? EDIT: The proposed PIN solution would issue the user with a randomly generated 6 digit number via email. When logging on to the site they would first have to enter a password (which may, of course, be stored by the browser). If successful, they would then be challenged to enter three randomly selected digits from the six (i.e. enter the first, second and fifth characters from your PIN) via drop down box. On reflection, I guess this does at least stop unauthorized access via someone relying on a stored password from the browser.
I find it hard to see what security benefits this could provide. In multifactor authentication, the point is to use different factors — i.e., "something you know", "something you have", "something you are". Just repeating the same factor twice seems a bit pointless. But let me speculate some about what the purpose could be. 1. Stop keyloggers Only dumb malware tries to get passwords by blindly logging key strokes. Requiring the use of a drop-down menu may protect against some malware, but in the end trying to hide user input when the computer is already infected is a losing game. See this question for a related discussion. In the end, I think the benefits are small here. 2. Increase entropy If you add a six-digit PIN to the password, you get 10^6 times as many combinations to brute force, or almost 20 extra bits of entropy, right? (Or 10^3 times, i.e. about 10 bits, if you only count the three digits entered.) Yeah, but why not just require a longer password? Perhaps you want to split it in two to make one part (the PIN) truly random and thereby give some protection to users who pick weak passwords. But what does this protect against? For online attacks, you should already have rate limiting in place to deal with this. For offline attacks, you would need to hash the PIN together with the password to get any benefit. But since you can log in providing only three out of six digits, they don't seem to be doing this (unless they keep 20 hashes for all possible digit combinations). 3. Limit the effect of stolen passwords Let's say your password gets stolen (say, in a phishing attack). If the attack is only performed once, the attacker will only get half of the PIN. She will therefore not be able to easily log in if she is asked for other digits than the ones she got. I don't see this as a big benefit. Just repeat the attack a couple of times, or attempt to log in multiple times (or from different IPs) until you are prompted for the digits you have. Drawbacks It makes users more likely to write the PIN (and perhaps the password while they are at it) down on a post-it or in an email. You cannot log in using only a password manager. Why make it harder for people who use safe methods to manage their passwords? Conclusion I can't see any security benefits that would motivate having the user memorise an extra PIN and go through the hassle of picking numbers from drop-down menus. To me this smells of security theater . But perhaps I am missing something. Edit: Yes, I did miss something. See supercat's answer . It's a very good point, but I'm not sure I would think it is worth it anyway.
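A quick back-of-the-envelope check of the numbers used above; the figures come from the answer itself, the script is only illustrative.

```python
# Sanity-check of the entropy and combination counts quoted above.
import math

full_pin_combinations = 10 ** 6            # six decimal digits
asked_digit_combinations = 10 ** 3         # only three digits are ever typed
print(math.log2(full_pin_combinations))    # ~19.9 -> "almost 20 extra bits"
print(math.log2(asked_digit_combinations)) # ~10.0 -> "about 10 bits"

# Number of ways to choose which three of the six positions are requested,
# i.e. how many per-combination hashes the server would have to store.
print(math.comb(6, 3))                     # 20
```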
{ "source": [ "https://security.stackexchange.com/questions/128654", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52575/" ] }
128,667
I would like to ask about this encryption method that I found: USPTO patent and it is related to this question here: A service that claims beyond army level encryption and Unseen.is encryption claims revisited with their proprietary, patented “xAES” algorithm . Didn't see any updates on this matter for a long time, so after I had found the patent had appeared online, wanted to ask you experts what do you think about this? Have we found an quantum computing resistant encryption method for the future generations? Thank you in advance. Example chapter from the patent documentation: [0020] While the example above uses the simple Caesar cipher in association with a key for encryption, more complex encryption algorithms such as NTRU, Advanced Encryption Standard (AES), and extended Advanced Encryption Standard (xAES), also use a key as mentioned above in order to encrypt and decrypt data. It should be noted that the encryption algorithm 106 may use any one of these encryption algorithms in accordance with an embodiment of the present invention. The keys associated with these encryption algorithms are significantly more complex than the Caesar cipher and have considerably more characters. Nonetheless, these advanced encryption algorithms use the same principles as the Caesar cipher during encryption and decryption processes. More specifically, each of these encryption algorithms processes data using the encryption algorithm and a key during encryption and decryption. However, the key used with these encryption algorithms have a finite number of bytes. In many instances, these encryption algorithms use a key having 256 bytes, or 32 characters, that are generated using a random number generator. Based on this finite number of keys, unauthorized third parties may correctly guess the key and then, in conjunction with the encryption algorithm, decrypt the encrypted content. In other words, unauthorized third parties may use the key with the encryption algorithms noted above to decrypt encrypted data.
No, there is no reason to believe this is a quantum-resistant breakthrough. A patent is not a security proof: the USPTO examines novelty, not cryptographic strength, and the paragraph you quote does not describe any new mathematics at all; it merely lists NTRU and AES alongside "xAES" and restates how keyed ciphers work. It also mixes up its own units (the quoted text speaks of "a key having 256 bytes, or 32 characters", but 32 characters is 256 bits, not bytes), which is not encouraging. The security community only trusts algorithms that have been published in full and survived years of public cryptanalysis; a proprietary, unreviewed cipher marketed with claims like "beyond army level encryption" is the classic snake-oil warning sign, as discussed in the earlier questions you link. As for quantum resistance: of the algorithms named, NTRU is a genuine lattice-based candidate for post-quantum public-key cryptography, while symmetric ciphers such as AES are only weakened, not broken, by Grover's algorithm (AES-256 retains roughly 128-bit security against a quantum adversary). Nothing in the patent shows that "xAES" adds anything beyond that.
{ "source": [ "https://security.stackexchange.com/questions/128667", "https://security.stackexchange.com", "https://security.stackexchange.com/users/116093/" ] }
128,779
I've read that using Intermediate CA certificates is more secure because this way the Root CA is offline. So, if the Intermediate is compromised, it does not impact the Root CA. What I understand is that doing this: Allows the CA to revoke the Intermediate CA certificate. Thus, server certificates issued under the compromised Intermediate CA certificate are invalidated. The Root CA can issue new Intermediate CA certificates, which in turn can create new server certificates. But, anyway, the CA must issue new Intermediate CA certificates and revoke the old ones... so the only benefit that I can find is that the CA issues different Intermediate certificates for different purposes. So the "universe" of compromised certificates is smaller than if the Root CA had signed all of the certificates. Is that correct? Is there another benefit?
Yes, the number of compromised certificates is much larger with a Root Certificate compromise. But it's not just the number of certificates: getting a new root certificate deployed after a compromised root is massively more difficult than replacing the certificates whose intermediates are compromised. For starters, replacing the Root Certificate of a public CA, even in a normal scenario, involves lots of paperwork and audits. In the scenario of a compromised root, the CA needs to convince software vendors (browsers and OS) to re-add their new Root Certificate to the default trust store. In the fallout of a leak, the CA has pretty much lost all the trust that had been built over the years, and vendors would rightly be skeptical about the capability of the CA and the viability of the CA's business going forward. At the very least, vendors would demand re-auditing and a lot of additional paperwork before allowing the new Root Certificate Authority. Vendors would then need to deploy the new trusted certificate. This is extremely hard to do in a short time. People don't upgrade their browsers often enough. Some software like browsers has mechanisms to quickly broadcast revoked root certificates, and some software vendors have processes to rush out a release when a critical security vulnerability is found in their product; however, you can be almost sure that they would not consider adding a new root to warrant a rushed update. Nor would people rush to update their software to get the new root. All of this comes on top of having to re-sign and reissue the certificates. There have been a number of intermediate certificate compromises (e.g. Comodo) where the CA quickly handled the situation and walked away without any major consequences. The closest we have ever come to a root certificate compromise of a public CA was with DigiNotar . DigiNotar went bankrupt in the weeks following the public disclosure of the compromise.
{ "source": [ "https://security.stackexchange.com/questions/128779", "https://security.stackexchange.com", "https://security.stackexchange.com/users/86588/" ] }
129,078
I watched a hak5 YouTube video in which they made people's phones connect to their WiFi Pineapple (a Wi-Fi honeypot), and it recorded the Wi-Fi hotspots each phone had previously connected to, and there were many such historical hotspots. I am wondering how they did it. If they simulated an SSID, would they have to replace it with another one seconds later? Here's the YouTube video
You can't simply force a client to connect; you have to trick it! As long as the device's Wi-Fi is on, it keeps sending probe requests, searching for your previously connected networks. Using software like airodump-ng , you can easily sniff out those probes. The attacker can then create an evil twin using the BSSID and ESSID gathered from the probes. When the device sees the pre-saved network come up again, it re-connects, thinking it is the legitimate network. However, there is something to be aware of: this only works out of the box with open networks. It does not work with secured networks unless you already know the passphrase. Important note: the attack can also be carried out against pre-saved networks that use a passphrase, if the passphrase is weak enough to be cracked. The steps are: Create the evil twin network using the same BSSID, ESSID AND the same auth type as the spoofed network (WPA, WPA2 or WEP) The client will try to connect to the network, providing the passphrase challenge The authentication obviously won't work, but you can sniff the challenge Using some cracking tool, you can crack the phrase Re-create the network using the recovered passphrase Now the client will connect to the spoofed network even if it has some level of security, so it's always a good idea to use strong passphrases!
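To make the probe-request part concrete, here is a small illustrative sketch that passively lists the SSIDs nearby devices are probing for, using the third-party scapy package instead of airodump-ng. It assumes a wireless interface already in monitor mode (called wlan0mon here) and should only be used against devices and networks you are authorized to audit.

```python
# Passive probe-request sniffer sketch, assuming scapy and a monitor-mode
# interface named wlan0mon (adjust to your setup).
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11ProbeReq, Dot11Elt

def show_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq) and pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        if ssid:  # broadcast probes carry an empty SSID
            print(f"{pkt[Dot11].addr2} is probing for {ssid!r}")

sniff(iface="wlan0mon", prn=show_probe, store=False)
```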
{ "source": [ "https://security.stackexchange.com/questions/129078", "https://security.stackexchange.com", "https://security.stackexchange.com/users/109925/" ] }
129,098
Due to the Lenovo firmware ThinkPwn bug I'm trying to understand privileges and rings. If the kernel is Ring 0 and SMM (System Management Mode) is Ring -2, what could be in between that is Ring -1?
The "rings" nomenclature (0-3) you usually see these days started with the requested privilege level field in segment selectors as part of the design of x86 protected mode. Back in the day, it was possible to make exclusive sections of the memory space called segments. In "real mode" it was necessary since you only had 20-bit addressable memory. When protected mode came along it still offered segmentation, but also privilege levels. Levels 0-2 are "supervisor" level and can do most things. Rings 1-2 cannot run privileged instructions but this is the only real limit; otherwise they are as privileged as ring 0. Ring 3 meanwhile is "user mode". If you have your segment selector set to point to this ring, you require the help of the kernel via some system call interface in order to do anything requiring privileged CPU or memory access. These days, it's pretty much required in 64-bit x86 to not use segmentation. However, segment selectors are still there - all segments just overlap and cover the entire address space. So, the original purpose of ring 0-3 was to isolate privilege between user mode code and the kernel and stop user mode code walking all over system control structures. Then virtualization became a thing on x86 and Intel/AMD decided to add hardware support for it. This requires a piece of supervisor (hypervisor) code to set up some control structures (called VMCS) defining the virtual machines and then call vmenter and handle vmexit i.e. conditions on which the virtual machine needs help from the hypervisor. This piece of code is referred to as "ring -1". There is no such actual privilege level, but since it can host multiple kernels all of which believe they have ring 0 access to the system, it makes sense. System Management Mode is another beast with special instructions. Firmware (your BIOS) sets up a SMM handler to handle System Management Interrupts - configurable depending on what the firmware wants to be notified of. When these events are triggered, the OS (or even hypervisor) is suspended and a special address space is entered. This area is supposed to be invisible to the OS itself, while executing on the same processor. Hence "ring -2", since it is more privileged than a hypervisor would be. You'll also hear "ring -3" mentioned here and there in reference to Intel ME or AMD's PSP. This is a second processor running a separate firmware (Intel I believe uses ARC SoC processors) capable of doing anything it likes to the primary system. Ostensibly this is to provide IPMI/remote management of hardware type functionality. It can run whenever there is power to the hardware regardless of whether the main system is powered on or not - its purpose, as I say, would be to power on the main system. From a security perspective, the lower ring you can get yourself into, the more undetectable you can make yourself. The bluepill research was about hiding from an OS the fact it was truly running in a VM. Later research has been done on SMM persistence. SMM persistence for example would potentially allow you to reinstall your malware even on a complete wipe of the hard disk and reinstall. Intel ME potentially opens up an always on persistent networked chip to install malware on the main target. I've stuck to Intel chips here but you should be aware other platforms work differently. For example, ARM chips have "supervisor" and "user" modes, amongst others.
{ "source": [ "https://security.stackexchange.com/questions/129098", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35666/" ] }
129,273
If I have already set up a 301 redirect from all the inner HTTP pages to HTTPS, why should I use HSTS as well?
Let's look at the scenario with a 301 redirect first. The victim sends a request to http://example.com (as pointed out in comments, this could be because of SSLStrip or because the user just entered example.com in the URL bar and browsers default to HTTP). If there is no MitM attack they get a 301 response back, and are redirected to https://example.com . But if there is a MitM, the attacker can just choose not to send the 301, and instead serve a (possibly modified) copy of the site over HTTP. The weak point here is that an initial HTTP connection (without TLS) is made, and the attacker can modify that traffic. However, if you had used HSTS, the browser would know (assuming that the victim had visited the site previously) that the page should always be served over HTTPS - no HTTP request would ever have been sent, and the browser would just send a request to https://example.com . The attacker cannot MitM the TLS connection, so the attack is prevented. (The fact that browsers cache 301 responses makes this a bit more complicated in practice. For more info on that, see bonsaiviking's great answer or this question. The short story is that the cached 301 might help a bit, but it does not take you all the way that HSTS does.)
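To make the difference concrete, here is a hedged sketch of how a site might combine the 301 redirect with the HSTS header. Flask is just an assumed example framework; the Strict-Transport-Security header name and the max-age directive are standard, but everything else is illustrative.

```python
# Illustrative Flask sketch: the 301 redirect alone leaves the first
# plain-HTTP request exposed, while the HSTS header tells returning
# browsers to skip HTTP entirely.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # The redirect only helps clients that actually reach us unmodified
    # over HTTP; a MitM can intercept this very request.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # After one successful HTTPS visit, the browser refuses to send plain
    # HTTP requests to this host for max-age seconds (here, one year).
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```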
{ "source": [ "https://security.stackexchange.com/questions/129273", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114400/" ] }
129,352
I have just received a report from a penetration test, and one of the recommendations was to strengthen passwords. However, I realized that no passwords were provided to the testers, and I wanted to find out whether it is possible to determine the strength of a password without actually knowing that password.
I can think of two ways they could have come up with the information that led to that conclusion. They ran the net accounts /domain command on a user's computer, which dumped the password complexity requirements for your organization (this assumes Windows / Active Directory). They successfully brute forced (or guessed) user passwords because they were weak. Recent password dumps like LinkedIn's have provided a trove of real-world passwords that pen-testers have been using in the field to try to crack passwords. Without further information it's hard to say how they came to that conclusion (we have no idea what the red team did or what was in scope), but those two ways are how I would assume they did it.
{ "source": [ "https://security.stackexchange.com/questions/129352", "https://security.stackexchange.com", "https://security.stackexchange.com/users/58796/" ] }
129,357
We know that UEFI measures the integrity of the OS bootloader image every time we power on our computer if Secure Boot is enabled. With the growing number of attacks and discoveries of UEFI vulnerabilities, the following questions arise: I want to know whether there is a part of the UEFI specification that measures the integrity of the UEFI firmware itself before the Secure Boot process, so it can prevent flashing attacks that alter the firmware. Also, I want to know whether the integrity of the firmware of the other devices attached to the computer is measured. My concern here is that if your firmware gets compromised (via a flashing exploit), the installed malware is able to perform any task, thus tricking the early Secure Boot protocol.
{ "source": [ "https://security.stackexchange.com/questions/129357", "https://security.stackexchange.com", "https://security.stackexchange.com/users/116793/" ] }
129,436
I was looking at the domain information on who.is for a website ( poaulpos.net ) that Chrome connects to whenever I visit a specific old Tech Times article about Thunderstrike 2, a Mac firmware attack ("Thunderstrike 2 Is The Latest Nightmare Of Mac Owners"). I have Little Snitch, an application-based firewall, so I blocked it the first time Chrome attempted to connect to it. My question is very basic: clicking on the diagnostics tab of any who.is entry automatically runs a ping and a traceroute on the website. Is that more or less like visiting the website by typing the hostname into your browser and letting it load? The website in question was only registered yesterday and its contact information is WhoisGuard protected. I'm pretty paranoid. I'm suspicious of the website and am worried that somehow I could have allowed something malicious onto my laptop by attempting to reach the website via the who.is diagnostics tab.
They are not the same at all. A ping request is an ICMP packet which, by default, just sends null data to check whether the host is up (you can change the parameters being sent; read more here). When you visit a website in the browser you are using the HTTP protocol, which requests data, so you have a client/server setup (data is served to the client from the server upon a request sent over HTTP). Either way, if you are not the one sending the request, but rather the service you are using (the who.is diagnostics page) sends it, there is no trace back to you and therefore no security risk for you.
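A small sketch of the difference, assuming a Unix-like system and using example.com as a placeholder host (not the site from the question): the ping exchanges only ICMP echo packets, while the HTTP request actually downloads content from the server.

```python
# Rough illustration only; "example.com" is a placeholder and "ping -c"
# is the Linux/macOS flag (Windows uses -n).
import subprocess
import urllib.request

host = "example.com"

# ICMP echo request: no application data from the site is fetched or run.
subprocess.run(["ping", "-c", "1", host], check=False)

# HTTP GET: establishes a TCP connection and downloads the page body,
# which is what exposes a browser to whatever the server chooses to serve.
with urllib.request.urlopen(f"http://{host}", timeout=5) as resp:
    body = resp.read()
    print(f"Fetched {len(body)} bytes over HTTP")
```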
{ "source": [ "https://security.stackexchange.com/questions/129436", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87836/" ] }
129,456
I was looking at my bank website's certificate and discovered that it's issued to multiple websites, and the bank's domain name is only listed among many other domains in the 'alternative names' field. Is this considered a bad practice? What risks are their customers subjected to? (man in the middle? impersonation?)
It is a certificate of a content delivery network – a U.S. company, Incapsula Inc. – intercepting your whole communication with the bank. The certificate itself does not pose a direct risk to customers' data, but: Is this considered a bad practice? Unlike the other answers, I would say it is not normal, and the situation indicates a certain level of incompetence on the bank's side. According to Incapsula's pricing plans, your bank might be using the $59/month plan, while the company offers custom SSL for $299/month (feature "Supports custom SSL certificates") and a real plan for enterprises. Even if the bank pays more to the CDN, the bank is using functionality aimed at professional blogs and not using the features their plan/agreement offers. Your bank may be violating privacy laws in your country by letting a company from another country, under different legislation, process customers' personally identifiable data. What risks are their customers subjected to? (man in the middle? impersonation?) The data is encrypted between your browser and the CDN's endpoint. The private key is (hopefully) stored only on the CDN's servers, so there is no risk that other companies from the list could impersonate the bank. As long as security standards are met on the link between the CDN and the bank, MitM is technically not possible either.
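If you want to check this yourself, a short Python sketch like the following lists the hostnames sharing a certificate; example.com is a placeholder for the bank's domain, and the approach simply reads the same subjectAltName entries the browser also sees.

```python
# Quick inspection of which hostnames share a certificate (sketch only;
# "example.com" is a placeholder for the bank's domain).
import socket
import ssl

host = "example.com"
ctx = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

# subjectAltName is a tuple of ("DNS", "name") pairs; a shared CDN
# certificate will list many unrelated domains here.
for kind, name in cert.get("subjectAltName", ()):
    print(kind, name)
```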
{ "source": [ "https://security.stackexchange.com/questions/129456", "https://security.stackexchange.com", "https://security.stackexchange.com/users/116881/" ] }
129,474
I have created a master key with two subkeys: one for signing and the other for encryption. Finally, I have exported the two subkeys to a new machine. How can I tell the new machine to consider the master as "ultimate", even if it is absent from the machine? Does it matter?
You can set every key to ultimate trust by opening the key editing command line with gpg --edit-key [key-id] and running the trust command. You will then be prompted to select the trust level: Please decide how far you trust this user to correctly verify other users' keys (by looking at passports, checking fingerprints from different sources, etc.) 1 = I don't know or won't say 2 = I do NOT trust 3 = I trust marginally 4 = I trust fully 5 = I trust ultimately m = back to the main menu Your decision? Obviously, 5 is the proper decision to achieve ultimate trust. Finally, run save to commit the changes and exit GnuPG. The same commands apply to both GnuPG 1.4 and GnuPG 2 (and newer). Ultimate trust enables a key to introduce trust into the OpenPGP web of trust; in other words, all ultimately trusted keys act as starting points for trust paths. You should set your own keys to ultimate trust, but you usually will not do so for others'.
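For scripting the same change non-interactively, GnuPG can also import ownertrust values directly; the sketch below drives that from Python. The fingerprint is a placeholder, and the value 6 corresponds to ultimate trust in the ownertrust format. Treat this as an assumed convenience alternative to the interactive trust command, not as part of the original answer.

```python
# Non-interactive alternative to the interactive "trust" command.
# The fingerprint below is a placeholder; replace it with your own key's
# full fingerprint. Ownertrust value 6 means "ultimate".
import subprocess

fingerprint = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder

# gpg --import-ownertrust reads "FINGERPRINT:TRUSTVALUE:" lines on stdin.
subprocess.run(
    ["gpg", "--import-ownertrust"],
    input=f"{fingerprint}:6:\n",
    text=True,
    check=True,
)
```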
{ "source": [ "https://security.stackexchange.com/questions/129474", "https://security.stackexchange.com", "https://security.stackexchange.com/users/74788/" ] }
129,499
There are a ton of questions on here that make reference to the eip : How can I partially overwrite the EIP in order to bypass ASLR? Unable to overwrite EIP register Do I always have to overwrite EIP to get to write on the stack in a buffer overflow? Etc. What is the EIP? How is it used, both as an exploit target and in benign code?
EIP is a register in 32-bit x86 architectures. It is the "Extended Instruction Pointer": it tells the computer where to go to fetch and execute the next instruction, and so it controls the flow of a program. Research assembly language to get a better understanding of how registers work. Skull Security has a good primer.
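To connect this to the exploit questions linked above, here is a hedged sketch of how EIP is typically located and overwritten in a classic 32-bit stack overflow, using the pwntools helper library. The crash value and the target address are placeholders; the point is only to show where EIP enters the picture, not to describe any particular binary.

```python
# Sketch of how EIP shows up in a 32-bit stack overflow, using pwntools
# (an assumed helper library; all concrete values are placeholders).
from pwn import cyclic, cyclic_find, p32

# 1. Send a De Bruijn pattern; the program crashes and a debugger shows
#    EIP overwritten with 4 bytes of the pattern, e.g. 0x61616164 ("daaa").
pattern = cyclic(200)

# 2. The offset of those bytes reveals how much padding precedes the
#    saved return address on the stack.
offset = cyclic_find(0x61616164)

# 3. A payload then places a chosen address exactly where the saved
#    return address sits, so the function's "ret" pops it into EIP.
new_eip = 0xDEADBEEF  # placeholder target address
payload = b"A" * offset + p32(new_eip)
print(f"offset = {offset}, payload length = {len(payload)}")
```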
{ "source": [ "https://security.stackexchange.com/questions/129499", "https://security.stackexchange.com", "https://security.stackexchange.com/users/116128/" ] }
129,683
Is image blurring an unsafe method to obfuscate information in images? I.e., is it possible to "de-blur" the image, if you know the algorithm and the setting, or by trial & error? For instance, the image below is the Google logotype blurred with the Photoshop CS6 Gaussian Blur filter @ a radius of 59.0 pixels. For the naked eye, it could be difficult to figure out the blurred content. But could the blurring be "reverse engineered" to reveal the original image, or at least something that is recognizable?
Is it possible to "de-blur" the image, if you know the algorithm and the setting, or by trial & error? Here, I assume we are only considering images which were blurred using a filter applied to the image, and not as a result of a poor capture (motion/optical blur). Deblurring definitely is possible, and you will see support in many image processing tools. However, blurring intentionally reduces the amount of information in the image, so to truly get back the original image could require "brute force", whereby a (humungously) large number of candidate images are generated, which all "blur" to the same final image. Different types of blur have different losses, but it is possible to reverse all of them (albeit expensively). The cost of deblurring and the number of possible outcomes depends on the number of passes taken by the blur filter, and the number of neighbors considered while blurring. Once deblurred, many tools and services should be able to automatically remove many of the outcomes, based on knowing what type of image it is. For instance, this blog post talks about why blurring content with a low amount of entropy (e.g. check books) is much less secure than blurring something like a human face. In short, it is indeed possible to get back an image that if "blurred" will result in the same image that you provided. But you cannot guarantee that the deblurred image is the only valid deblurred version (you will need some domain knowledge and image analysis like matching edges, objects making semantic sense). For the naked eye, it could be difficult to figure out the blurred content. But could the blurring be "reverse engineered" to reveal the original image, or at least something that is recognizable? It is possible that blurring does not fundamentally transform the "signature" of an image, such that the histogram is similar and allows matching. In your case, the human eye can actually make out that this could have been the Google logo (familiar colors) but the histogram is quite different. Google itself can't identify the image and you can study the histogram and color clusters using this online tool -- the images are quite different. If probably would be safer if you were to choose to black out the sensitive content (see post here ) I wish these things weren't possible (e.g. I used to try to go as fast as possible near speed traps so that motion blur would hide my number plates, but it never works anymore). Tools to deblur are fairly common now (e.g. Blurity ) though they don't work as well with small computer-generated images (less information) as they do with photographs (see sample of what I recovered). In terms of more references, the first chapter of Deblurring Images: Matrices, Spectra, and Filtering by Per Christian Hansen, James G. Nagy, and Dianne P. O’Leary is a really good introduction. It talks about how noise and other factors make recovery of the exact original image impossible: Unfortunately there is no hope that we can recover the original image exactly! but then goes about describing how you can get a close match. This survey compares different techniques used in forensic image reconstruction (it's almost 20 years old, so it focuses on fundamentals). Finally, a link to Schneier's blog where this is discussed to some detail.
{ "source": [ "https://security.stackexchange.com/questions/129683", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91840/" ] }
129,724
Let's say I have access to the private portion of an RSA key-pair. How can I check whether this key has an associated passphrase or not?
The keyfile will have a different header if it is password protected. Here's the top of a key without a passphrase: -----BEGIN RSA PRIVATE KEY----- MIIEogIBAAKCAQEA3qKD/4PAc6PMb1yCckTduFl5fA1OpURLR5Z+T4xY1JQt3eTM And here's the top of a key which is passphrase-protected: -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: DES-EDE3-CBC,556C1115CDA822F5 AHi/3++6PEIBv4kfpM57McyoSAAaT2ECxNOA5DRKxJQ9pr2D3aUeMBaBfWGrxd/Q Unfortunately, that only works when looking at the files. I know of no way for a server to tell whether the keys being presented to it are protected with a passphrase, even though the server is the most useful place to be able to leverage that sort of information.
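If you need to check many keys, the same header test is easy to script. This sketch only covers the traditional PEM format shown above (modern OpenSSH-format keys look different), and the path is a placeholder.

```python
# Small sketch automating the header check described above; the path is a
# placeholder and this only applies to traditional PEM-formatted RSA keys.
def key_is_encrypted(path: str) -> bool:
    with open(path, "r") as f:
        header = [next(f, "") for _ in range(3)]
    # Passphrase-protected PEM keys carry Proc-Type/DEK-Info lines right
    # after the BEGIN line; unprotected keys go straight to base64 data.
    return any(line.startswith(("Proc-Type:", "DEK-Info:")) for line in header)

print(key_is_encrypted("/home/user/.ssh/id_rsa"))  # placeholder path
```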
{ "source": [ "https://security.stackexchange.com/questions/129724", "https://security.stackexchange.com", "https://security.stackexchange.com/users/116320/" ] }
129,898
I don't have any experience or scientific knowledge in security; I just wanted to ask whether this is possible because I am interested in it. What if I encrypt data so that every password decrypts it, but only the right one produces something other than pointless data clutter? The same could be done with a login: false login data leads to fake, dummy accounts, and only the right login details get you to the real account. Wouldn't this be a far better method of encryption, because you couldn't just try out passwords but would have to look at the outcome to see whether it was the right one?
The answer always depends on your threat model. Security is always woven into a balance between security and usability. Your approach inconveniences the hackers trying to break into the account, but also inconveniences a user who merely mistypes their password. If the fake account is believable enough to fool an attacker, it may also be believable enough to fool a valid user. That could be very bad. This may be desirable in extremely high risk environments. If you had to store nuclear secrets out in the open on the internet, having every failed password lead you to an account that has access to fake documents which don't actually reveal national secrets could be quite powerful. However, for most cases it is unnecessary. You also have to consider the alternatives. A very popular approach is to lock the account out after N attempts, which basically stops all brute force attempts cold, and has usability behaviors that most users are willing to accept.
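For comparison, the lockout alternative mentioned above is only a few lines of logic. This is an in-memory sketch under the assumption that a real system would persist the counters and add time-based resets or escalating delays.

```python
# Minimal "lock out after N failed attempts" sketch (illustrative only).
MAX_ATTEMPTS = 5
failed = {}  # username -> consecutive failed attempts

def try_login(username: str, credentials_valid: bool) -> str:
    # credentials_valid stands in for whatever password check the
    # application performs.
    if failed.get(username, 0) >= MAX_ATTEMPTS:
        return "locked"
    if credentials_valid:
        failed[username] = 0
        return "ok"
    failed[username] = failed.get(username, 0) + 1
    return "denied"
```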
{ "source": [ "https://security.stackexchange.com/questions/129898", "https://security.stackexchange.com", "https://security.stackexchange.com/users/117354/" ] }
129,917
There are websites which claim to do "DNS leak tests". I don't care so much about DNS leaks but am curious, how does the website know which DNS server I am using? I guessed it would be a header sent in the HTTP request but wasn't able to find it. Example website: https://www.dnsleaktest.com/ EDIT: there are plenty of other examples found through a google search.
This is a DNS resolution trick that could also be performed using non-HTTP protocols, but in this case it is performed using random hostnames and zero-pixel images via HTTP. Look at the source code of the page and you will see a series of random 10-character subdomains requested in several URLs. These are unique hostnames which neither your computer nor your ISP, nor more importantly your DNS provider, will have cached in local DNS. When these unique hostnames hit your DNS provider, it has to request them from the testing company's name servers, which then correlate the IP address of the resolver making the unique DNS request and, via a quick lookup, tell you the name of the company that made the request. A list of randomized host names, all for the same domain: ixc9a5snm4.dnsleaktest.com rhl50vm36o.dnsleaktest.com 4xov3y3uvc.dnsleaktest.com 2n5t99gbzp.dnsleaktest.com 6mzklkved4.dnsleaktest.com d6z20e9c2x.dnsleaktest.com These can be found in the following HTML (your hostnames will be different): <img width=0 height=0 src="https://ixc9a5snm4.dnsleaktest.com">. <img width=0 height=0 src="https://rhl50vm36o.dnsleaktest.com">. <img width=0 height=0 src="https://4xov3y3uvc.dnsleaktest.com">. <img width=0 height=0 src="https://2n5t99gbzp.dnsleaktest.com">. <img width=0 height=0 src="https://6mzklkved4.dnsleaktest.com">. <img width=0 height=0 src="https://d6z20e9c2x.dnsleaktest.com">.
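From the client side, the essence of the trick can be reproduced in a few lines: freshly generated random hostnames cannot be cached anywhere, so each lookup has to travel through your configured resolver to the test company's authoritative server. Whether the names actually resolve does not matter; the lookup itself is the leak. The domain reuses the one from the answer purely as an example.

```python
# Generates unique hostnames and forces your resolver to look them up,
# mimicking what the zero-pixel images do in the browser.
import random
import socket
import string

def random_label(n: int = 10) -> str:
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

for _ in range(3):
    host = f"{random_label()}.dnsleaktest.com"
    try:
        # Your configured DNS resolver must perform this lookup for you,
        # revealing itself to the authoritative name server.
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror:
        print(host, "-> did not resolve (the query was still sent)")
```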
{ "source": [ "https://security.stackexchange.com/questions/129917", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115462/" ] }
129,930
From this article: http://www.bbc.com/news/business-36762962 Apparently, it takes us 45 seconds on average just to confirm who we are. But by using computers to identify our voices, this authentication process can be cut to 15 seconds on average, saving the bank pots of cash and us lots of hassle. Citi has just begun rolling out this kind of voice biometrics authentication for its 15 million Asian banking customers, starting in Taiwan, Australia, Hong Kong and Singapore. Citi uses the "free speech" method to begin a more natural conversation with the customer immediately... Free speech has another advantage: it's harder to fake a realistic conversation using recordings. With the passphrase method it's plausible that fraudsters could record a customer's voice as he or she says the phrase and then use this high-quality recording to try to spoof their way through security in future. The drawback with the system is that banks need to obtain customers' permission before recording voiceprints. From 2018, the European Union's General Data Protection Regulation will require organisations to say what data they collect on you, for which purposes, and to obtain your explicit consent. Some customers say no, but usually only around a quarter, says Ms Thomson. Citi's Asian efforts seem to bear this out, with a 75% uptake so far. And as the technology gets cheaper over the next five years, we could soon be talking to parking meters, vending machines, robot hotel concierges and driverless taxis, to pay for things and check in. People's voices are public, which means they are easy to record/process/reproduce, so how can that system be more secure than answering the usual secret security questions? (That last paragraph scares me...)
This will be abused, and instead of password dumps we will see attackers trading voice sample dumps and building huge databases of identified voice samples from public documents and places like YouTube. There are also other issues here that make this a bad choice. There's no plausible deniability if someone is coerced into authenticating an attacker. With a PIN you might be able to say you forgot your code; you can't say you forgot your voice. There's also the inability to protect what is being said from prying ears and recording devices. ATM makers learned this the hard way, and now we have shielding to help hide which numbers are being pressed. Attacks similar to ones that already exist will occur. Finally, is it too far-fetched for an attacker to gather a large enough sample of someone's voice to have software capable of saying whatever the attacker wants in that person's own voice? I think the technology could still take off simply because of a cool factor or because a big enough company forces it, but I'm not convinced this is the best security control for the task at hand, and I share your concern.
{ "source": [ "https://security.stackexchange.com/questions/129930", "https://security.stackexchange.com", "https://security.stackexchange.com/users/40414/" ] }
129,972
I keep reading about hackers accessing laptop and home security camera systems. Most home users are using SOHO routers meaning they are on a private IP range behind a NAT. I realize NATs aren't designed for security, but if an IP camera is behind one, how can a hacker access it? Would it require either the camera's software or the router to have a security hole (0 day, bad code) or end-user misconfiguration (weak/default password, open/mapped ports) that would let the attacker in?
Typically this happens in a few scenarios: The end-user puts the device in the DMZ because they want to access it remotely and can't be bothered trying to figure out port-forwarding rules. This might happen if a user is torrenting or has a NAS or other device they want to access from the public internet. The user has allowed direct access to the device via ingress port-forwarding or firewall rules, which makes it accessible to probes/attacks. Again, this scenario might be common with a torrenting user or the owner of a NAS or DVR that they want to access from the outside world. An internal device on the network has been compromised and has returned a reverse shell to the attacker. Since most (if not all) SOHO routers allow all egress traffic, this connection is allowed to take place. The attacker then moves laterally through the network via the originally compromised machine. The edge router itself is compromised and allows access from the outside world. There are a variety of ways of bypassing NAT, but the aforementioned are probably the most common attack scenarios.
{ "source": [ "https://security.stackexchange.com/questions/129972", "https://security.stackexchange.com", "https://security.stackexchange.com/users/117440/" ] }
130,095
Why would I need a RADIUS server if my clients can connect and authenticate with Active Directory? When do I need a RADIUS server?
Why would I need a RADIUS server if my clients can connect and authenticate with Active Directory? RADIUS is an older, simple authentication mechanism which was designed to allow network devices (think: routers, VPN concentrators, switches doing Network Access Control (NAC)) to authenticate users. It doesn't have any sort of complex membership requirements; given network connectivity and a shared secret, the device has all it needs to test users' authentication credentials. Active Directory offers a couple of more complex authentication mechanisms, such as LDAP, NTLM, and Kerberos. These may have more complex requirements - for example, the device trying to authenticate users may itself need valid credentials to use within Active Directory. When do I need a RADIUS server? When you have a device to set up that wants to do simple, easy authentication, and that device isn't already a member of the Active Directory domain: Network Access Control for your wired or wireless network clients Web proxy "toasters" that require user authentication Routers which your network admins want to log into without setting up the same account each and every place In the comments @johnny asks: Why would someone recommend a RADIUS and AD combination? Just a two-step authentication for layered security? A very common combo is two factor authentication with One Time Passwords (OTP) over RADIUS combined with AD. Something like RSA SecurID , for example, which primarily processes requests via RADIUS. And yes, the two factors are designed to increase security ("Something you have + Something you know") It's also possible to install RADIUS for Active Directory to allow clients (like routers, switches, ...) to authenticate AD users via RADIUS. I haven't installed it since 2006 or so, but it looks like it's now part of Microsoft's Network Policy Server .
{ "source": [ "https://security.stackexchange.com/questions/130095", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115025/" ] }
130,119
I'm working on a project where we need to keep a few specific files easily accessible, but encrypted. My idea for accomplishing this is to have the files available in the project's public website, encrypted to all of the members' PGP keys. I know that in theory, this means that only those members can decrypt it, and the files are safe, but is there a risk I'm missing?
There is always a risk that any given cipher will be broken at some point and data like this will become truly public. So yes, there are some risks, but that doesn't mean you aren't making a reasonable security trade-off. A few things you may want to consider: What is your worst-case scenario if the data goes public, and are there implications of it going public that you might not be aware of? Are there any time-based factors to this data, such as the data only being useful for a year, a week, etc.? Are there any regulatory, legal, or ethical implications of this data going public? Can you add additional security controls so that it's not just one control protecting the data? Do the people you want to share this data with need ALL of the data, or could they satisfy their needs with a smaller subset? Is data masking (replacing sensitive data with known fake data) an option here, which would provide additional security? Where will it be decrypted and where will the decryption keys be stored? Is the passphrase used to decrypt it easy to brute force? Etc... Millions of other questions go here. Nothing is 100% secure; everything is a trade-off, so you need to look at the decision from a few different angles first. Generally, my advice when I see a single security control is to tell you that you need additional layers of security controls rather than just one. So I would have to advise you to consider additional controls, but again, even then I don't know what you are protecting, and if it's just a collection of Internet cat photos then hey, maybe just using GPG is good enough... (no offense to GPG of course; it's a great tool, but cat photos are everywhere)
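For reference, the basic control being discussed is a single GnuPG invocation that encrypts the file to every member's key; the sketch below drives it from Python, with placeholder key IDs and filenames standing in for the real project members and documents.

```python
# Encrypt one file to multiple recipients with GnuPG (placeholders only).
import subprocess

recipients = ["alice@example.org", "bob@example.org"]  # placeholder key IDs

cmd = ["gpg", "--encrypt", "--output", "manual.pdf.gpg"]
for r in recipients:
    cmd += ["--recipient", r]
cmd.append("manual.pdf")  # placeholder filename

# Only holders of one of the listed private keys can decrypt the result.
subprocess.run(cmd, check=True)
```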
{ "source": [ "https://security.stackexchange.com/questions/130119", "https://security.stackexchange.com", "https://security.stackexchange.com/users/28481/" ] }
130,151
We talked about password hashing and salting in class today. Our professor had a very different understanding of the use case of salts from mine and said that you might not store the salt at all and just check every login attempt against all possible salts, authorizing the login if one matches. I don't really see any point whatsoever in this, since my service would now be much more vulnerable to brute force attacks (send one password, the server checks many), but I guess it is a little more resistant to pure dictionary attacks against the hashed password. So is there any use case where people would actually do this?
Not storing the salt is bad advice. The main purpose of a salt is that each user password has to be attacked individually. If you do not store the salt then, as you said, you need to try every single salt combination in order to validate the password. If you need to check every single salt combination, this means that the salt cannot be too long (you mentioned 12 bits in a comment). If the salt is not long, it means that the salt will be repeated for many users. If it's repeated for many users, it means the attacker will be able to attack many users at the same time, which will save the attacker time. By doing that, you nearly completely defeat the purpose of using a salt.
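For contrast, here is a minimal sketch of the conventional approach the answer argues for: a long random salt generated per user, stored alongside the hash, with a slow key derivation function on top. The iteration count is an illustrative figure, not a recommendation.

```python
# Per-user random salt, stored next to the hash (sketch only).
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # 128-bit salt, unique per user, kept with the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

Because every user has a different salt, an attacker with the database has to attack each hash individually instead of testing one guess against all users at once.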
{ "source": [ "https://security.stackexchange.com/questions/130151", "https://security.stackexchange.com", "https://security.stackexchange.com/users/117604/" ] }
130,531
In the past day or so, I've received two emails from different .gov addresses that purport to be from Intuit and encourage me to click on a link to "restore access to your QuickBooks account." They aren't pretending to be government addresses. If I click reply, the mail-to field says [email protected], just as the email states in the From section. I have forwarded these emails to the FTC's recommended address for phishing scams, but I'd like to let the agencies involved know that they have compromised accounts. Is there an easy way to do that? I looked on the website of one agency, but there wasn't any contact that seemed right (Texas agriculture.gov). Would emailing that person (at the compromised address) directly be useful? I don't want to spend all day on this, since I'm working. I just want to give someone a heads up. I was hoping there was a "report misused government email" contact box somewhere, but I can't find anything like that.
It is likely that the From header has been forged. I get emails from fake .govs quite often; mostly they end up in my spam filter. The hyperlink within is either unique, allowing tracking, or simply delivers malware. Most of the time I just ignore these. If you believe that the header is not forged, then you can typically contact the agency by Googling their name plus "Webmaster" or "Contact Us" or other such things. Do not use the phishing email; it is almost certainly fake.
{ "source": [ "https://security.stackexchange.com/questions/130531", "https://security.stackexchange.com", "https://security.stackexchange.com/users/61169/" ] }
130,562
I have read that Kerberos is used for authenticating users who wish to access services on various servers in an enterprise network, but I still do not understand the purpose of Kerberos. Why doesn't the system admin just create a user account for each user on each server, so that the users can use their username and password to access whatever resources they wish to access? Why is such an elaborate authentication protocol necessary?
Why doesn't the system admin just create a user account for each user on each server, so that the users can use their username and password to access whatever resources they wish to access? Imagine you have 50 users and 50 servers. For the sake of simplicity, suppose that all 50 users are supposed to have access to all 50 servers, they all have the same privileges on all servers, and we never have to change these privileges. This is 2,500 password entries in total that need to be created. Adding a new user When you add a new user, the admin has to go to all 50 servers and add that one user to all of them. This would need the admin to run the create user command on all 50 servers. That bit can be automated, but the part that cannot be safely automated is that the user would need to enter their password 50 times, once per server. Why this latter problem? Because password authentication systems are designed so that they do not retain a copy of the password, not even an encrypted copy. So when the user enters their new password on server #1, that server forgets the password right away, and they'd have to enter it yet again for server #2, and #3, etc. Adding a new server Maybe now you're thinking that you could write a script that: Runs on server X; Asks the new user to enter their new password; Asks the administrator for their password; Logs in to servers #1 through #50, and creates the user account on each of them with the same password. That's a bit dodgy, but even if you go for it, you're still left with another, bigger problem. Suppose your company buys a new server, and we now need our 50 users to have access on this server. Well, since none of the servers contains a copy of the users' passwords, the administrator would have to go to each user and have them enter a password for their account on the new server. And there's no getting around this unless you kept copies of users' passwords, which again is an insecure practice. Centralized password database To solve these problems you really need your user/password database to be centralized in one place, and to have all the servers consult that database to authenticate usernames and passwords. This is really the bare minimum you need for a manageable multiuser/multiserver environment. Now: When you add a new user, all the servers automatically pick it up from the central database. When you add a new server, it automatically picks up all the users from the central database. There is a variety of mechanisms for doing this, although they still fall short of what Kerberos offers. Kerberos Once you have such a centralized database it makes sense to buff it up with more features, both for ease of management and for convenience. Kerberos adds the ability that, when a user has successfully logged in to one service, that service "remembers" the fact and is able to "prove" to other services that this user logged in successfully. So those other services, presented with such proof, will let the user in without asking them for a password. This offers a mix of convenience and safety: Users don't get asked for their password over and over when they've already entered it successfully once that day. When you have programmers who write programs that access multiple servers, and those servers require authentication, the programs can now authenticate to those servers without having to present a password, by reusing the credentials of the server account they run under.
The latter is important because it minimizes or kills an insecure but all too common practice: automated programs that require you to put copies of usernames and passwords in configuration files. Again, a key idea of secure password management is do not store copies of passwords, not even encrypted copies .
{ "source": [ "https://security.stackexchange.com/questions/130562", "https://security.stackexchange.com", "https://security.stackexchange.com/users/112259/" ] }
130,574
I am developing an IE11-based application running on a web server inside our corporate network. The application makes calls to a 3rd-party API which requires client certificates and a username/password in the URL. The URL uses SSL. In the application I am using AJAX to send the URL. This all seems to work: I am prompted for a certificate, and credential verification seems to work. My concern is building a login form and persisting the user credentials if the login succeeds. The only way I can test the credentials is to run a query and see if the result string returned is marked as successful. As far as I can tell, there is no mechanism in the API to return a session token. I think I have to keep the username and password in the client session if I want to make further URL calls in the session. I was thinking of using sessionStorage objects, but it seems like this could be a bad idea. I am no security expert and would appreciate any guidance on whether using sessionStorage is a reasonable approach. Is there a more secure way to do this, considering the limitations of the 3rd-party API?
{ "source": [ "https://security.stackexchange.com/questions/130574", "https://security.stackexchange.com", "https://security.stackexchange.com/users/118082/" ] }
130,619
I know this is a really dumb question and I am certainly talking complete rubbish, but let me explain: We have a long SHA-256 hash, e.g.: 474bfe428885057c38088d585c5b019c74cfce74bbacd94a7b4ffbd96ace0675 (256bit) Now we use the long hash to create one SHA-1 and one MD5 hash and combine them. E.g.: c49e2143913627ea178e66571189628a41526bf3 (SHA-1; 160bit ) + 6bb225025371aaa811748da8e011773d (MD5; 128bit ) So now we have this: c49e2143913627ea178e66571189628a41526bf36bb225025371aaa811748da8e011773d (SHA-1 + MD5; 160bit + 128bit = 288bit) And it is longer than the input string, with 288 bits instead of 256 bits. So did we actually increase the entropy? In short, it is this: hash = sha256(input) # 256bit result = sha1(hash) + md5(hash) # 288bit (That is supposed to be pseudo-code; I don't know if it is valid though.) What is the error in reasoning here? I am sure you cannot increase entropy/string length in this way... Edit: Also important: did I possibly even decrease the entropy this way, or did it stay the same?
And it is longer than the input string, with 288bit instead of 256bit. So did we actually increased the entropy? No, you did not increase the entropy. In this context, "entropy" basically refers to the probability of any particular guess about the content or value being correct. If I tell you that I have hashed a single lowercase US English letter's ASCII representation using SHA256, and that the hash is hexadecimal ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb, then the entropy of that value isn't 256 bits, but rather closer to five bits (log2(26) ~ 4.7) because you only need to make at most 26 guesses to arrive at the conclusion that the letter I hashed was a . (For completeness, what I really did was printf 'a' | sha256sum -b on a UTF-8 native system.) The entropy, thus, can never be greater than that of the input. And the input is, at best, the initial hash, which has 256 bits worth of entropy. (It could have less, if the string you hashed has less than 256 bits of entropy and the attacker can guess that and its value somehow.) Each hash calculation can be assumed to be O(1) when the input size is fixed. So by concatenating the SHA-1 and MD5 hashes of a string that is a SHA-256 hash, you can never get more entropy than 256 bits' worth. All you are doing is making it longer, and possibly obscuring its origin. Now, in some situations, using multiple hashes actually makes sense. For example, many Linux package managers use and validate multiple, different hashes for the same file. This isn't done to increase the entropy of the hash value, though; it's done to make finding a valid collision harder, because in such a situation the preimage or collision attack must work equally against all hashes used. Such an attack against a modern cryptographic hash is already difficult enough for most purposes; mounting a similar attack against several different hashes simultaneously would be orders of magnitude more difficult.
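The point is easy to verify with the standard library: the guessing step below recovers the low-entropy input despite the 256-bit output (it reproduces the hash quoted in the answer), and the 288-bit concatenation is fully determined by the 256-bit hash, so it cannot contain any additional entropy.

```python
# Quick illustration of the answer's point (standard library only).
import hashlib
import string

target = hashlib.sha256(b"a").hexdigest()  # the "secret" 256-bit-looking hash

# 26 guesses recover the low-entropy input despite the 256-bit output.
for c in string.ascii_lowercase:
    if hashlib.sha256(c.encode()).hexdigest() == target:
        print("input was:", c)

# The 288-bit concatenation is computed entirely from the 256-bit hash,
# so an attacker who can guess the hash can reproduce it exactly.
combined = hashlib.sha1(target.encode()).hexdigest() + hashlib.md5(target.encode()).hexdigest()
print(len(combined) * 4, "bits of output, still at most 256 bits of entropy")
```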
{ "source": [ "https://security.stackexchange.com/questions/130619", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91425/" ] }
130,732
The scenario: We have a login system for a web application that requires a plaintext password and 3 images (from a collection of images that the users select during registration, the images are provided by the site). A keylogger will catch only the keystrokes and not the selected user images, right? Is this enough to defeat keyloggers?
Nope. Keyloggers can often also do screen capturing and mouse-coordinate logging, so the attacker can still see which image the user selects. Another kind of two-factor authentication, for which the user needs two devices (e.g. laptop and phone), would be more secure. Another good alternative is a YubiKey, a device which generates a one-time password each time it is used. That way the hacker/keylogger can't guess the next password.
{ "source": [ "https://security.stackexchange.com/questions/130732", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69877/" ] }
130,790
About two months ago I decided to use a VPN all the time (it's launched at startup) for various reasons, privacy being the first one. But recently I realized that if you agree to share your location when an HTML5 geolocation prompt pops up in Firefox, they can still get my location pretty accurately (to within about 3 km), even though my VPN IP address places me in another country. Since I'm on a fixed computer without Wi-Fi, how can they know my location?
I just checked this with my VPN on http://html5demos.com/geo . Although I VPN through Germany, it still shows my nearby location in London. If you read https://www.mozilla.org/en-US/firefox/geolocation/ , you will see: If you consent, Firefox gathers information about nearby wireless access points and your computer's IP address. Then Firefox sends this information to the default geolocation service provider, Google Location Services, to get an estimate of your location. As @Aria noted, Google Location Services uses their collection of WiFi APs to try to pinpoint your location. I assume they have a global list of AP SSIDs through the Google Street View project as well as their Android devices. Edit: FYI, the captured request that is sent to Google contains a full list of nearby APs; you might want to launch a proxy and check for yourself. Also, the fact that you are on a computer without WiFi doesn't mean that nearby APs aren't stored somewhere within your computer (cache, registry, logged-in Google/Firefox profiles, etc.)
{ "source": [ "https://security.stackexchange.com/questions/130790", "https://security.stackexchange.com", "https://security.stackexchange.com/users/118292/" ] }
130,896
Recently, I discovered a security flaw in a business website. This website has a password-protected "Partners Area", and like many websites it provides a form to reset the user's password. When a user asks for a password reset for his nickname, a new password is sent to their email address and that password becomes effective immediately. The problem is (if this wasn't already a problem) that the new password is a fixed one, the same for all users. So an attacker can easily get access to any account. Now, the only operations a user can do within their Partners Area are: View/change email address Change password Download some manuals and utilities (it's definitely not classified stuff) Fill out a repair form (then the process will continue by email) Download logos and images for marketing purposes The only things I see for a malicious attacker to exploit are: Prevent future access by a legitimate user (who will probably be able to regain it right after a phone call) Discover information about who the company's customers are (by guessing random nicknames and looking at their email addresses); anyway, it's not something someone would keep secret. Even though I am always very disturbed by things like this, in this case I must admit that it might not be a big deal. Are flaws like this acceptable compromises, in a context where not much harm can be caused? Since I think someone misunderstood a detail: that website belongs to an external company. I have no role in the development of that website, and no control over any decision about it.
Your question is: Are security flaws acceptable if not much harm can derive from them? The answer is yes, if that is decided by the business with an understanding of the consequences. What you are doing is called a risk assessment. For each risk you must highlight the consequences for your company if it materializes. Based on that assessment, you (you = someone who has the power to make the business decision) have three choices: you can accept it - deciding that the costs of fixing it are not worth the consequences; you can mitigate it - fix it to the point where you can accept the consequences; you can insure against it - effectively offloading the risk to someone else. As you can imagine, there are several hot areas in a risk assessment. The first one is the assessment of the consequences and the probability. There are numerous books and articles about how to do that; at the end of the day this is based on vigorous hand waving and experience. The output is never like the one in the books ("we have a 76% probability of this happening, which will cost us 126,653 €") but rather "well, I feel that this is a risk we should take care of". Note that the "consequences" part may sometimes be quantifiable (loss of profit for online commerce, for instance) but usually is not (loss of image for your company, for instance). Besides the dubious theoretical aspects of risk assessments, there is one huge benefit you should always take advantage of: you put a risk on the table and it must be dealt with somehow. This is not only a cover for the place where the back loses its noble name, it is the right tool to highlight where information security efforts should go. It also raises your visibility (there are not so many proactive cases where you can raise your visibility) and forces you to take a hard, deep, pragmatic look at what is important and what is not.
{ "source": [ "https://security.stackexchange.com/questions/130896", "https://security.stackexchange.com", "https://security.stackexchange.com/users/118432/" ] }
131,010
When you first connect to an SSH server that is not contained in your known_hosts file, your SSH client displays the fingerprint of the public key that the server presented. I found from this question here that as a client you are able to specify within ssh_config which one of the public key pairs from the host's /etc/ssh/ directory you would like. From the ssh_config man page I found that the current defaults are as follows: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], ecdsa-sha2-nistp256,ecdsa-sha2-nistp384, ecdsa-sha2-nistp521,ssh-ed25519, ssh-rsa,ssh-dss Only recently my SSH server has been sending me an ECDSA fingerprint instead of an RSA one, and I was wondering which algorithm I should choose, if it even matters. This article claims that ECDSA is the old elliptic-curve DSA implementation that is known to have severe vulnerabilities. Should I be using RSA or the newer ed25519 algorithm?
Currently, RSA is still recommended as a gold standard (strong, widely compatible), and ed25519 is good (independent of NIST, but not compatible with all old clients). The server usually provides several different host key types, so it is targeting compatibility. The order of priority in the client config runs from the stronger algorithms to the more compatible ones. Frankly, for you, as an end user, it should not matter. Some of the keys might have some security concerns, but none of them is considered completely broken at reasonable lengths in a way that could allow a man-in-the-middle attack or something similar. The article mentions "severe vulnerabilities" but does not say anything specific. If an algorithm had such a severe vulnerability, nobody would use it, support it, or recommend it. Without any reference, it is pretty hard to comment on your concerns.
{ "source": [ "https://security.stackexchange.com/questions/131010", "https://security.stackexchange.com", "https://security.stackexchange.com/users/115369/" ] }
131,056
Recently I saw the following screenshot on Twitter, describing an obviously terrible password policy: I wonder what is worse for password strength: having no password policy at all, or a poor password policy (like the one described in the screenshot)?
The question is: worse for what? With the policy you posted, the possible passwords number fewer than 64⁸ (~2.8*10¹⁴). In practice, very many passwords will probably be [a-z]*6[0-9][special char] (e.g. aabaab1!) and similar passwords. All possible passwords with the same characters and length less than 8 are just 64⁷+64⁶+64⁵+..., which is ~4.5*10¹². That's a lot less than the number of passwords of length 8, so allowing them doesn't increase security much. (Allowing longer passwords would obviously increase security a lot.) Without any policy, many people will also use bad passwords, sure. But an attacker can't be sure about that. Also, some people will use better passwords. Without any policy, an attacker can never be sure how hard it is to crack a password. If you give me a DB of password hashes, I might try to crack it. But if there are no results, after some time I might stop. With the policy in place, I can be sure that after 64⁸ tries I have all the passwords. I would say a bad password policy is worse. But it depends on the attack scenario. With a dump of password hashes and no password policy, it is very likely easier to crack some password but harder to crack most passwords. With a bad policy like the one given, it is easier to crack all the passwords but, most likely, slightly harder to crack the first one. If your concern is how hard it is to crack your own password: without any policy you can use a secure 20-character random password if you want. With the policy in place, you are forced to use an insecure password. (In practice, it's more relevant how the passwords are stored and so on, but that's out of scope here.) So as a user of the website/service, a bad password policy is a lot worse. If you look at it from an organizational standpoint, a bad password policy is way worse than none. No password policy means that probably nobody thought about it (not great that they don't think about security, but OK...). But a bad password policy means somebody thought about it and came up with that crap. This means they probably have no clue about security. In the end, if you can implement a bad password policy, you can also implement a good one, so there is never an excuse for a bad password policy. Just changing the policy to "A password has to be at least 8 characters" would increase security a lot and isn't hard to do.
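The arithmetic in the first paragraph is easy to check; the few lines below reproduce the ~2.8*10¹⁴ and ~4.5*10¹² figures and their ratio.

```python
# Verifying the password-count arithmetic from the answer.
exactly_8 = 64 ** 8                            # passwords of length exactly 8
shorter   = sum(64 ** n for n in range(1, 8))  # all lengths 1-7 combined

print(f"length 8:   {exactly_8:.1e}")   # ~2.8e14
print(f"lengths<8:  {shorter:.1e}")     # ~4.5e12
print(f"ratio:      {exactly_8 / shorter:.0f}x")  # roughly 63x
```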
{ "source": [ "https://security.stackexchange.com/questions/131056", "https://security.stackexchange.com", "https://security.stackexchange.com/users/72031/" ] }
131,106
Today I logged in to pay my cellphone bill, and I found that the site has disabled paste functionality in the password field. I'm a webdev and I know how to fix this, but for a regular user it is REALLY annoying having to type a random password like o\&$t~0WE'kL . I know it is normal to make users type the password when creating an account, but is there any reason to disable pasting passwords during login?
There is no substantial security benefit to disallowing pasted passwords; on the contrary it is likely to weaken security by discouraging the use of password managers to generate and autofill randomized passwords. While some password managers are capable of overriding pasting restrictions, the point still stands that users should not be forced to type their password by hand. Excerpt from a relevant WIRED article : Websites, Please Stop Blocking Password Managers. It’s 2015 But what’s crazy is that, in 2015, some websites are intentionally disabling a feature that would allow you to use stronger passwords more easily—and many are doing so because they wrongly argue it makes you safer. Here’s the problem: Some sites won’t let you paste passwords into login screens, forcing you, instead, to type the passwords out. This makes it impossible to use certain kinds of password managers that are one of the best lines of defense for keeping accounts locked down.
{ "source": [ "https://security.stackexchange.com/questions/131106", "https://security.stackexchange.com", "https://security.stackexchange.com/users/40463/" ] }
131,164
I try to follow account security best-practices (strong random passwords, password manager, multi-factor authentication, etc.) but I still find myself worried about potential compromises to my accounts, in particular financial accounts (e.g., banks, investments) or accounts that could lead to financial account access (e.g., email, phone). It got me thinking. For people whose net worth is on the order of $10M or $100M or $1B, what additional precautions should they take? Special arrangements with financial institutions? Payments to security specialist firms to monitor/manage all accounts? This question is related, but the asker didn't understand basic security precautions so the answers tended to be rather basic. One of the comments points out that some rich people completely disregard their security: Worked for a guy once, owned a decent sized company (and a bunch of smaller ones), with a personal worth of mid-nine figures, and his password for everything was 'bob'. Three guesses what his first name was. Which is why I'm asking: what should high net worth individuals do to secure their financial account access? I'm assuming there must be something, given they're a much larger target than Ye Olde Pleb. Googling around, it seems that some financial institutions offer RSA SecurID. The obvious risk to that is still a phone call to their support team saying you lost it, but it's something. Bonus question: How much do those extra security measures cost? I.e., at what point in my Inevitable-Rise-to-be-Richer-than-God do I seek out such precautions?
The best security measure is quite simple. Don't use accounts that allow your money to be easily stolen via the Internet. Happily, high net worth individuals have used systems like this since before the Internet was a thing, and are in fact particularly likely to choose them regardless. While banks certainly aren't perfect at protecting money from theft, historically they do a massively better job of it than the individuals who have money to protect, high net worth or otherwise. First of all, when you're a billionaire, your money is not in a checking account. Most of it, in fact, isn't liquid at all. It's ownership of corporations, real estate, things that are difficult if not impossible to steal in any but the most convoluted ways. For the assets that are somewhat liquid, the vast majority of them are still going to be investments. And they're not going to be in an E-Trade account with a web login. They're going to be held by an investment bank that caters specifically to high net worth individuals, managed by private bankers whose job it is to know what's in your portfolio and how it's performing at all times. It simply isn't accessible to outside threat actors.
{ "source": [ "https://security.stackexchange.com/questions/131164", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20587/" ] }
131,260
In 2013, a Citibank employee had a bad performance review that ticked him off. The results were devastating: Specifically, at approximately 6:03 p.m. that evening, Brown knowingly transmitted a code and command to 10 core Citibank Global Control Center routers, and by transmitting that code, erased the running configuration files in nine of the routers, resulting in a loss of connectivity to approximately 90 percent of all Citibank networks across North America. Now, there is a question about securing a network against attacks from the inside, but that question explicitly excludes insiders going rogue. There is also a question about protecting a database against insiders, but that concerns high-tier problems. I also read What is the procedure to follow against a security breach?, but most answers there assume the insider is an employee who has already been fired. I'm asking about someone who hasn't been fired yet. They might have had a poor performance review, but haven't been terminated yet. They might just be unhappy about something their partner did, or they might have gotten upset about something. The problem I'm describing here is a large company where a user who is unhappy about their job snaps on a certain day and issues system-breaking commands that they have full privileges to issue. Things like wiping machines, physically damaging essential infrastructure, and so on: purely technical interference, nothing like leaking emails or secrets. The aim is just to do as much damage as possible to the infrastructure to go out with a bang. The article gives a few cursory mentions of things to do, but nothing really concrete. What things can be done to prevent sudden rogue insiders from negatively impacting essential infrastructure using techniques they're privileged to use?
Two-man rule - configure your systems so that all privileged access requires two people. This could be a physical control - privileged access can only come from the NOC, and inside the NOC people physically enforce the rule. More practical would be a scripting system. Sys-admins don't directly have root access, but they can submit scripts to be run as root. They will only be run after a separate person has reviewed and approved the script. There would still need to be a method for SSH access in an emergency - and the two-man rule could be maintained in that case using physical controls. The NSA implemented this after the Snowden leaks. I have never seen a full two-man system in any of the commercial or government systems I have audited - although I have seen various partial attempts. Update - there's more information on how to implement this on a separate question .
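As a rough illustration of the scripting approach, the submission wrapper could refuse to run anything that a second, different person has not signed off on. The paths, file format and sudo invocation below are made up for the sketch; a real deployment would also need signed approvals and audit logging:

    import hashlib
    import json
    import subprocess
    import sys
    from pathlib import Path

    APPROVALS = Path("/etc/script-approvals")  # hypothetical approval store

    def is_approved(script_path: str, submitter: str) -> bool:
        """Allow a script only if an approval exists for this exact content
        hash and the recorded approver is someone other than the submitter."""
        digest = hashlib.sha256(Path(script_path).read_bytes()).hexdigest()
        approval_file = APPROVALS / f"{digest}.json"
        if not approval_file.exists():
            return False
        approval = json.loads(approval_file.read_text())
        return approval.get("approver") not in (None, "", submitter)

    if __name__ == "__main__":
        script, submitter = sys.argv[1], sys.argv[2]
        if not is_approved(script, submitter):
            sys.exit("refused: no independent approval recorded for this script")
        subprocess.run(["sudo", "/bin/bash", script], check=True)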
{ "source": [ "https://security.stackexchange.com/questions/131260", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34161/" ] }
131,638
Everywhere a question like this is asked, I see people responding that (in a scenario where a card is used) the card does some processing with the data it receives/generates some data when it receives a signal. How is this possible without power? And even if that's the case, why can't every NFC tag in, let's say, credit cards just be cloned because there are no variables in them and only static data? You'd think those RFID tags could be copied and used for transactions.
Because the cards contain a chip which is powered by a coil. The coil is not really an antenna, but half of a transformer. Think of your regular mobile charger. It contains a transformer that converts the voltage from 230V or 120V AC to 5V DC. This is done by having a coil magnetize some iron, and this iron magnetizes the "receiving coil". If you draw current from the receiving coil, the primary coil will also draw more current.

Now, let's go back to the "passive" card. The reader is one half of a transformer, and the card is the other half, but this transformer creates a magnetic field in the air instead of magnetizing iron. When you put the card close to the reader, the reader and card become a full transformer, and thus the card can be powered, as if it were connected to a battery. For the reader to transmit information to the card, the reader only needs to vary the frequency or amplitude of the AC voltage that powers the primary coil. The card can sense this and act on that information. For the card to send information back to the reader, the card simply short-circuits its own antenna via a transistor and a resistor. This will, just like in the mobile charger, cause the primary coil, i.e. the coil in the reader, to consume more current, and the reader can sense this (by feeding the primary reader coil through a resistor and measuring the voltage over that resistor) and read the data the chip sends.

This means that half-duplex bidirectional communication is possible with RFID, so the chip can do anything a contact smart card can. And as you know, a contact smart card with a security chip that can securely store a key, and only perform operations with the key, is impossible to "clone" or "copy" because the key cannot be extracted. That's the security of smart cards: they cannot be cloned, and that's why they are preferred over magnetic stripe cards. The same applies to wireless/contactless RFID cards.
{ "source": [ "https://security.stackexchange.com/questions/131638", "https://security.stackexchange.com", "https://security.stackexchange.com/users/119208/" ] }
131,687
I'm a contractor for a few companies. I build and host their systems on servers I rent from a popular international host. I store the system code on a popular, internationally hosted version control system. There are a mix of authentication techniques at various points, most of them near-best practices. However, I also layer on some obscurity. SSH is hidden, some things are encrypted in non-obvious ways. Alone these wouldn't be valuable but alongside the real security, I fend off most serious threats. One of my clients got a data protection process request from one of their clients today: a huge government organisation. They obviously take this red tape stuff pretty seriously and have sent us a long questionnaire that asks for specific security detail. Not just about the data we collect but also where it's secured, how it's secured, where the locks are and who has the keys. The last few of those things are the sort of stuff I keep hidden from my clients, not to mention theirs. As the person with all the keys, I'm very conscious of this overused but accurate comic: Currently my clients' clients don't know about me. Not really. But if we comply with their request, anybody with access to this request, suddenly knows who I am, where I am, what I have access to. If you wanted to break into their portion of this system, you come after me. And you can go deeper. Part of this request mentions Access Control Policies and gives an example that explains where (exactly, geographically) a private key is stored. If you follow this through every system that tangentially touches the data they submit, they have a map of every system we use and know who to come to (or hack) to gain access to the whole thing. It unsettles me. My question is, in your experience, is there a way to comply with security procedure requests that doesn't target specific people, computers or even ports? Very little of this stuff is actual legal requirement. We already meet data protection act guidelines... But again, being a large government agency, their drive to tick boxes seems several powers greater than any other organisation. Just a couple of clarifications. My client has details on the system. They have no direct access to server operations. They have access to the version control system and receive encrypted data backups on a very regular schedule and have a document (and encryption keys) that explains how to replace the system I run with one of their own in the event of my untimely demise. We have discussed the overview but the exact details are under physical lock and key. We aren't dealing with launch codes. Names, contact information and IP addresses. I didn't expect this would be relevant to the question but people are bringing up the two-man rule. That's way over the top here. This is technically sub-PCI-DSS. I am developer and operations for this small company (as well as others). Many of you are talking about me being the weak point. I am. One wrench and you have the data I have. That's why I'm asking. Please, rather than just pointing this out, some suggestions on what to do about these things on a small scale would be more useful. I can't be the only devop on the planet who tangentially deals with governments.
The request for this information may not only be for a security audit, but also a process audit, and it sounds like it might be well founded, since: If you wanted to break into their portion of this system, you come after me. What would happen if you were "hit by a truck" tomorrow? If you are the only one that knows the systems, your clients and theirs would certainly have a big problem. Ideally, someone else needs to have access also, and it should be documented who that is and how to contact them. Depending on the requirements, it may be possible for the holder of the keys to be a company rather than a person, perhaps with a primary and secondary contact named. The primary contact could likely also be someone at your client, and they probably should have all the documentation for how to access their systems in an emergency if you are not available. Also, you may be able to provide a report with certain things redacted. They likely are more interested in knowing that the non-redacted documentation exists somewhere and that the proper people have access to it when needed. They probably won't care that the port numbers, etc. are blacked out, as long as they can be sure that they exist in a document somewhere.
{ "source": [ "https://security.stackexchange.com/questions/131687", "https://security.stackexchange.com", "https://security.stackexchange.com/users/12764/" ] }
131,697
The application is an Android client for a remote service on which all users have accounts and can make purchases of non-consumable items. Each purchase is tied to the account they were purchased on. The current purchase scheme is as follows: The client initiates the purchase with the Google Play in-app billing service. The purchase request contains the payload which uniquely identifies the account. When the purchase is successfully completed, the client gets back the purchase data which contains the payload and the purchase token. Also it has a signature for the received data which is currently not checked though. The client sends the purchase token to the server. At this point it has to be authenticated. The server receives the purchase token. The server authenticates with the Google Play service using its service account key and receives the purchase data for the given token. The data contains the original payload and the information regarding the purchase. The server validates the purchase data: it is verified that the payload matches the currently logged in user id, the purchase is not yet consumed, and has successfully been paid for. If the verification is successful, it is marked that the client has purchased the product on the server and the client receives a confirmation, otherwise the client receives a rejection. What could go wrong with this scheme? For instance, it worries me that the signature is not used anywhere. However, I don't see what use is in it if the server receives the data directly from Google Play. At worst, it can receive a wrong token, but the payload contains the user id, so the purchase will be rejected if it has been made for another account. Sending the same token twice is also useless since the product is non-consumable: it either has been purchased or hasn't. But maybe I'm missing something here or there is some other flaw I don't see?
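For concreteness, the server-side check in steps 5-7 could be sketched roughly as below using the google-api-python-client package; the package/product identifiers, the payload-equals-user-id convention and the exact response fields are assumptions to verify against the current androidpublisher v3 documentation:

    # Sketch only: verify a purchase token server-side against Google Play.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",  # hypothetical service account key file
        scopes=["https://www.googleapis.com/auth/androidpublisher"],
    )
    publisher = build("androidpublisher", "v3", credentials=creds)

    def verify_purchase(package_name, product_id, token, expected_user_id):
        purchase = publisher.purchases().products().get(
            packageName=package_name, productId=product_id, token=token
        ).execute()
        return (
            purchase.get("purchaseState") == 0          # 0 = purchased/paid
            and purchase.get("consumptionState") == 0   # 0 = not yet consumed
            and purchase.get("developerPayload") == expected_user_id
        )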
{ "source": [ "https://security.stackexchange.com/questions/131697", "https://security.stackexchange.com", "https://security.stackexchange.com/users/119274/" ] }
131,708
Windows 10 stores its backups in a protected folder, e.g. E:\System Volume Information , and only the SYSTEM account has full access to it. Does this make it safe from ransomware that encrypts files? Even when running 'As administrator' I cannot seem to access that folder, so it looks pretty safe, right? And what about the Windows 10 File History feature, is that safe? Just a note: @SilverlightFox is right, backups are not stored in the System Volume Information folder, as I thought they were, but rather in a folder having the same name as the computer, e.g. E:\MYCOMPUTER where E can be any drive. By default, this folder is accessible by the SYSTEM user and the Administrators group.
If the ransomware gains administrator access to your computer then it can damage any backups that the Windows machine may have created on that computer. If the ransomware only acquires non-administrator access (e.g. if you use a non-admin account for web browsing) then those backups will be safe. The best thing is to back up to a removable storage device. (Surely Windows has an option for this.) Keep this device in a safe place, separated from your computer. Not only will your backups be protected from malware this way, you will also still have your backups in the event of physical theft or hardware failure. You can also back your files up to a separate server, as long as that server has been properly configured (and well secured) so that ransomware-damaged backups do not overwrite the originals.
{ "source": [ "https://security.stackexchange.com/questions/131708", "https://security.stackexchange.com", "https://security.stackexchange.com/users/119285/" ] }
132,899
My school has recently asked us to submit our MAC address to the school along with our designated name to be used to connect to the Wi-Fi. Previously this wasn't needed. I would like to ask what kind of information they can collect from this. Would they be able to track our browsing history or more? What if I use Tor Browser? Would it have any effect? If they can track me, what measures can I take to prevent them from invading my privacy?
I think you should ask why they want the MAC address, not necessarily for privacy reasons; "why do you need the MAC address?" is a reasonable question to put to them. Firstly, they will already have the MAC addresses of all the individuals who connect to the WiFi: any device connecting to the WiFi reveals its MAC address, since it appears in every frame it sends (ARP traffic included). They may think locking down WiFi to known MAC addresses is a good security measure. It isn't really, because I can obtain your MAC address if both of us are in the same Starbucks on the same WiFi, and I can then spoof your MAC address quite easily. So as a security measure this is not great. They may want to track your activity. They can do this already without asking for your MAC; giving them the MAC address just lets them map it to an individual more easily. They can get a history of MAC and IP addresses from logs, and their NAT can keep a history of IP addresses and ports and map those back to the MAC address. If you use Tor, they will be able to say you used Tor, but not see the content. So, I would ask why they want the MAC address, but giving it out is not really going to affect you, unless of course on your home WiFi or somewhere else you are using the MAC address as a method to identify yourself, since MAC addresses can be easily spoofed.
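To illustrate how easy spoofing is: generating a plausible replacement MAC is a couple of lines, and applying it is then a single macchanger or ip link command. The interface name and tooling are assumptions; this is only a sketch:

    import random

    def random_unicast_laa_mac() -> str:
        """Generate a random locally-administered, unicast MAC address.
        (Bit 1 of the first octet set = locally administered; bit 0 clear =
        unicast.) Apply it with e.g. `ip link set dev wlan0 address <mac>`."""
        first = (random.getrandbits(8) & 0b11111100) | 0b00000010
        rest = (random.getrandbits(8) for _ in range(5))
        return ":".join(f"{octet:02x}" for octet in (first, *rest))

    print(random_unicast_laa_mac())  # e.g. 06:3f:a2:9c:11:d4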
{ "source": [ "https://security.stackexchange.com/questions/132899", "https://security.stackexchange.com", "https://security.stackexchange.com/users/120484/" ] }
132,949
I'm trying to ruin a scammer's day (fake check scam). I have his phone number. I am able to call (and record) and text him. He is also willing to mail items to my house. What can I do to find out more about him using social engineering or through service providers? How might I be able to build a case against him, or scare or stop him from continuing to scam people? I know I can call the FCC to report it, but I feel like that won't do anything. I also understand that I likely won't be able to get very far in following his trail.
Inform law enforcement. You can't "build a case" because as a layman you likely have no idea about proper police procedure. Most evidence you collect will likely be inadmissible in court. You might in fact also break laws while you collect evidence (for example, recording phone calls without the consent of the other party can be illegal in some places). Attempting to scare them is a really bad idea. Remember that you are dealing with a criminal. That criminal might be violent or have contacts with violent criminals. So when you try to intimidate them, they might turn that around and start to intimidate you instead.
{ "source": [ "https://security.stackexchange.com/questions/132949", "https://security.stackexchange.com", "https://security.stackexchange.com/users/105613/" ] }
133,025
I am responsible for a website and I am questioning the logic of some 'Industry standard' policies that I have been asked to comply with. When a user fails to log into the website, they get a message telling them the login details are incorrect, but we do not specify whether it is the username or the password that is incorrect. If the user then realizes they have forgotten their password (should have used a password manager), they can click on the forgotten password link, enter their email address and press enter. They are then shown a message telling them that if they have an account, the account has been locked and a password reset email has been sent to their email account. So far, there is no leakage of information about which credential is incorrect or whether an account exists for the email provided. To improve the user's experience, I suggested that when the user clicks on the forgotten password link, if an email address is available in the login field, we should copy it into the email address field on the forgotten password page. However, I have been told this is against 'Industry Standards', even though nobody can show me anything to back this up! I understand that simply adding it to the URL would be bad, as it would allow people to cycle through email addresses and lock people's accounts out, causing inconvenience to the users. What security vulnerability, if any, would make this bad practice? Edit: An update for any new readers. There have been some great answers on here and I am grateful for all of the input. The process had a flaw in the lockout on forgotten password, and this will be amended. When an account is locked out, the message still gives the generic "email address or password incorrect" message, so the user would not know whether the email address was a valid one.
None as such The change that you are proposing seems only to be a user experience change. Yes, someone can say that it will make it user friendly for malicious users to lock normal users out of the system but that's not on you. The bigger problem for you is your policy of locking the user's account when they click on forgot password. It makes it very easy for a malicious user to lock you out of your account. They just have to go to forgot password -> Enter the valid email id -> Hit the reset button.
{ "source": [ "https://security.stackexchange.com/questions/133025", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99777/" ] }
133,065
TL;DR : What are the security implications of using oauth2 for authentication? I'm building an app (site A) that allows users to perform operations on another website (site B) through a simpler interface. Site B provides an API that implements OAuth2.0 for authorization of my app. I was thinking I could avoid storing passwords and having users get yet-another-account at my app by piggybacking on site B's authentication. Of course I've read the literature bashing on the use of plain OAuth for authentication , but I fail to see how this is bad from a security perspective. They seem to mostly focus on the practicality and (lack of) generality of the solution. Here's the scheme I had in mind: User Sally loads site A and clicks "login" She's redirected to site B's authorization page, where she either authenticates or has an active session If she authorizes site A to access site B on her behalf, she is redirected back to site A with an authorization code Site A gets the authorization code and exchanges it for an access token (and a refresh token) through site B's API. Site A asks site B for Sally's user_id and logs her in with that ID The tokens are stored in a database for use by a backend, which does all the real work on site B. I'll note that I'm using the so-called "server-side flow" here. Also, the authorization_code returned in the third step is a short-lived one-use code that's tied to A's client_id and can only be used with the corresponding client_secret . When Sally logs in from another device the process repeats and the new tokens are stored. The only problem I see is that the user will be asked to authorize my app on every login instead of just the first time, but that's not a problem at the moment. I'd also be asking for a new token when I actually have a valid one in store. While a bit impractical, this isn't a problem at the moment (*) . What I'm failing to see is how this is bad from a security perspective. With such a scheme: What are the security issues for the user? And for the app? I feel like I'm missing something. (*) I won't have a big userbase, just a few whitelisted users. When (if) the app grows I plan to integrate it into a bigger site that uses a real OpenID Connect provider. I just want to keep it simple, small and focused during this pilot-test
Note : If you are looking for something like OAuth2, but for authentication, you should use OpenId Connect instead. OAuth2 is meant for a user to authorize an application to load the user's resources from some resource provider. In other words: OAuth2 is a mechanism for delegation of authorization . The protocol does not support authentication (although it is commonly misused for exactly that). The security hole is in the assumption you make in the 5th bullet point. You say: Site A asks site B for Sally's user_id and logs her in with that ID While in reality it should read: Site A asks site B for the user_id from the user-data that the access_token grants access to. Figure 1: OAuth flow for (confidential) clients. If all goes as planned, the access_token is, indeed, from the user you redirected to B for authentication. But: there is no guarantee that is the case. In fact, any (malicious) website that the user has previously granted the right to access the user's data (using OAuth2 with B ), can get a valid authorization_code from B and send it to you, in bullet point 3. In other words, if I run a website which asks users for their permission to access their resources at B using OAuth2, I can impersonate all those users at all websites which misuse OAuth2 (with B as OAuth2 Authorization server) for authentication. The 'problem' with OAuth2 is that the authorization_code is not generated for a specific client_id . So if you receive an authorization_code , you can not be sure if B issued the authorization_code you received to you, or to some other service. Which is deemed acceptable for authorization but is absolutely unacceptable for authentication . Update : As to your comment: (and I restrict A to only accept one from users it previously redirected to B, but this is still unauthenticated) I believe that you are here adding an extra precaution, which is not mandatory in the OAuth protocol. As such, it cannot be relied upon.
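To see the difference in code terms, an OpenID Connect ID token carries an aud claim naming the client it was issued to, and the client must check it. A minimal sketch with the PyJWT library, assuming an RSA-signed token and made-up client/issuer identifiers:

    import jwt  # PyJWT; assumes the provider's RSA public key was fetched already

    def validate_id_token(id_token: str, provider_public_key: str) -> dict:
        """Reject tokens that were not issued to *this* client (site A).
        A bare OAuth2 authorization code or access token has no such
        audience binding, which is the hole described above."""
        claims = jwt.decode(
            id_token,
            provider_public_key,
            algorithms=["RS256"],
            audience="site-a-client-id",            # hypothetical client_id of A
            issuer="https://site-b.example/oauth",  # hypothetical issuer (B)
        )
        return claims  # claims["sub"] is the stable user identifier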
{ "source": [ "https://security.stackexchange.com/questions/133065", "https://security.stackexchange.com", "https://security.stackexchange.com/users/59224/" ] }
133,205
I sometimes use a free Wifi service to get access to the internet. Like most/all providers of services like this, this service employs a captive portal . So if you try to make a HTTP request (request a web page) in your browser and you are not authorised on the the network, you will only ever see the page which says "Welcome to Free -wifi network. here are our terms etc..." I'm not interested in abusing the service for free wifi - I want to know what information (HTTP transactions) is exposed/insecure when I use this service and what persistent changes they are making to my system, i.e. hiding cookies in my browser (by using anonymous hosts). So I connected to their wifi access point, at which point they assign my computer an IP address and DNS service etc. $ nmcli device show wlp2s0 IP4.ADDRESS[1]: 192.168.12.199/22 IP4.GATEWAY: 192.168.12.1 IP4.ROUTE[1]: dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000 IP4.DNS[1]: 123.123.xxx.xxx IP4.DNS[2]: 123.123.xxx.xxx I then attempt a HTTP request - try to "browse to a webpage" while not yet authorised. (Being HTTP this request/response will be unencrypted, no identity check on the server handling the request [via. SSL certificate authority confirmation via public / private key signatures etc]) $ curl 'http://placeimg.com/' -H 'Host: placeimg.com' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -v -L * Trying 216.243.142.217... * Connected to placeimg.com (216.243.142.217) port 80 (#0) > GET / HTTP/1.1 > Host: placeimg.com > < HTTP/1.1 302 Found < Location: http://1.1.2.1:80/reg.php?ah_goal=auth-tc.html&ah_login=true&url=E2B8F357812412D8 < * Ignoring the response-body * Connection #0 to host placeimg.com left intact * Issue another request to this URL: 'http://1.1.2.1:80/reg.php?ah_goal=auth-tc.html&ah_login=true&url=E2B8F35792412D8' * Connection 0 seems to be dead! * Closing connection 0 * Trying 1.1.2.1... * Connected to 1.1.2.1 (1.1.2.1) port 80 (#1) > GET /reg.php?ah_goal=auth-tc.html&ah_login=true&url=E2B8F8 HTTP/1.1 > Host: 1.1.2.1 > User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0 > Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > Accept-Language: en-US,en;q=0.5 > < HTTP/1.1 200 OK < Date: Mon, 08 Aug 2016 02:50:16 GMT < Connection: keep-alive < Transfer-Encoding: chunked < Content-type: text/html; charset=utf-8 < <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta name="viewport" content="width=device-width"/> <title>Login</title> <h1>WiFi - Free Internet Access</h1> This service is provided primarily for the purposes of accessing Email and light web browsing .. .... </div> </body> </html> I've truncated and redacted the output, but essentially its seems, when my browser (curl) makes a request the network is providing access to a real DNS server, I tested that further by running this -while connected to the service. $dig +short google.com 203.118.141.95 203.118.141.123 ... (basically real ip addresses for google.com) So its seems the service is providing real, truthful information about DNS, But even though the IP address my browser is addressing seems valid, the HTTP response coming back, is not from the intended server (its a server that has been inserted by the WiFi service). 
The response is a 302 telling my browser to make a new request to the 1.1.2.1/reg.php server and URL; the response from that server/URL is the "captive portal" page. When I make an HTTPS request: $ curl 'https://google.com/' -H 'Host: google.com' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' -v -L * Trying 216.58.220.142... * connect to 216.58.220.142 port 443 failed: Connection refused * Trying 2404:6800:4006:800::200e... * Immediate connect fail for 2404:6800:4006:800::200e: Network is unreachable * Trying 2404:6800:4006:800::200e... * Immediate connect fail for 2404:6800:4006:800::200e: Network is unreachable * Failed to connect to google.com port 443: Connection refused * Closing connection 0 curl: (7) Failed to connect to google.com port 443: Connection refused The request outright fails - I'm assuming this is because my computer won't accept a connection from a remote host it can't authenticate? So my question is: is this a "man in the middle" type interception at the TCP/IP level? In other words, when the network stack on my computer tries to establish a TCP connection with the placeimg.com server by starting a 3-way handshake, is the WiFi service's gateway or network node responding to my SYN with a SYN/ACK and saying "yes, I'm that server you wanted to talk to - let's complete our TCP connection", and is it then, once the connection is established, pre-programmed to automatically send the HTTP redirect? After that, because my browser is actually generating the next request to 1.1.2.1 itself, all the "funny business" can stop and it's all normal HTTP/HTML behaviour from that point forward (i.e. standard form filling and submissions as you would on any normal website). If not, how is this gateway intercepting and re-routing my requests while I am an "unauthenticated" user?
So my question is, is this a "man in the middle" type interception at the TCP/IP level? Yes, this is a man-in-the-middle type interception, which is easy for the access point because it is actually in the middle of your connection to the internet. Such redirects to the captive portal are easily done with a packet filter. The usual way is that once you've authorized (logged in, accepted terms, whatever...), a temporary rule is added to the packet filter of the access point which takes precedence over the redirect rule and allows direct access to the internet.
{ "source": [ "https://security.stackexchange.com/questions/133205", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73523/" ] }
133,239
We know that to slow down password cracking in case a password database leak, passwords should be saved only in a hashed format. And not only that, but hashed with a strong and slow function with a possibility to vary the number of rounds. Often algorithms like PBKDF2 , bcrypt and scrypt are recommended for this, with bcrypt seemingly getting the loudest votes, e.g. here , here and here . But what about the SHA256 and SHA512 based hashes implemented in at least glibc ( description , specification ) and used by default at least on some Linux distributions for regular login accounts? Is there some reason not to use them, or to otherwise prefer bcrypt over the SHA-2 based hashes? Of course bcrypt is significantly older (1999) and thus more established, but the SHA-2 hashes are already nine years old by now (2007), and scrypt is even younger by a bit (2009), but still seems to be mentioned more often. Is it just an accepted practice, or is there some other reason? Are there any known weaknesses in the SHA-2 based hashes, or has anyone looked? Note: I specifically mean the multi-round password hashes described by the linked documents and marked with the codes $5$ and $6$ in crypt hashes, not a single round of the plain SHA256 or SHA512 hash functions . I have seen the question this is marked as a possible duplicate of. The answers there have no mention of the SHA256-crypt / SHA512-crypt hashes, which is what I am looking for.
The main reason to use a specific password hashing function is to make life harder for attackers, or, more accurately, to prevent them from making their own life easier (when compared to that of the defender). In particular, the attacker may want to compute more hashes per second (i.e. try more passwords per second) with a given budget by using a GPU. SHA-256, in particular, benefits a lot from being implemented on a GPU. Thus, if you use SHA-256-crypt, attackers will be more at an advantage than if you use bcrypt, which is hard to implement efficiently on a GPU. See this answer for some discussion of bcrypt vs PBKDF2. Though SHA-256-crypt is not PBKDF2, it is similar enough in its performance behaviour on GPUs, so the same conclusions apply. The case for SHA-512 is a bit less clear because existing GPUs are much better at using 32-bit integers than 64-bit, and SHA-512 uses mostly 64-bit operations. It is still expected that modern GPUs allow more hashes per second than CPUs (for a given budget) with SHA-512-crypt, which again points at bcrypt as the better choice.
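For comparison, both constructions are easy to generate and inspect from Python; the rounds/cost values below are arbitrary examples, the crypt module is Unix-only (and deprecated since Python 3.11), and bcrypt here is the third-party bcrypt package:

    import crypt    # standard library, Unix only
    import bcrypt   # third-party package

    password = "correct horse battery staple"

    # glibc-style SHA-512-crypt ($6$...), with an explicit work factor
    sha512_hash = crypt.crypt(password,
                              crypt.mksalt(crypt.METHOD_SHA512, rounds=100_000))

    # bcrypt ($2b$...), cost 12 means 2**12 iterations of its GPU-unfriendly core
    bcrypt_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt(rounds=12))

    print(sha512_hash)   # e.g. $6$rounds=100000$<salt>$<hash>
    print(bcrypt_hash)   # e.g. b'$2b$12$<salt and hash>'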
{ "source": [ "https://security.stackexchange.com/questions/133239", "https://security.stackexchange.com", "https://security.stackexchange.com/users/118457/" ] }