59,136
Say I have previously created a private/public key combination, and decided at the time to not protect the private key with a password. If I later decide to "beef up" security and use a password-protected private key instead, would I need to generate a new private/public key pair, or can I simply add a password to my existing private key? Is the opposite possible as well, can I "remove" a password from an existing private key?
A word of caution: as stated in laverya's answer, openssl encrypts the key in a way that (depending on your threat model) is probably not good enough any more.

Of course you can add/remove a passphrase at a later time.

Add one (assuming it was an RSA key; otherwise use dsa):

    openssl rsa -aes256 -in your.key -out your.encrypted.key
    mv your.encrypted.key your.key
    chmod 600 your.key

The -aes256 tells openssl to encrypt the key with AES-256. As ArianFaurtosh has correctly pointed out: for the encryption algorithm you can use aes128, aes192, aes256, camellia128, camellia192, camellia256, des (which you definitely should avoid), des3 or idea.

Remove it:

    openssl rsa -in your.key -out your.open.key
    mv your.open.key your.key
    chmod 600 your.key

You will be asked for your passphrase one last time. By omitting the -aes256 you tell openssl to not encrypt the output.
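If you would rather do the same thing programmatically, here is a minimal sketch using Python's third-party cryptography package; the file paths and passphrase handling are placeholders, and wrapping the key as encrypted PKCS#8 is an assumption on my part (it typically gets you a more modern key-derivation scheme than the legacy PEM encryption warned about above).

```python
# Hedged sketch: add or remove a passphrase on an existing PEM private key without
# regenerating the key pair. Paths and the passphrase source are placeholders.
from cryptography.hazmat.primitives import serialization

def add_passphrase(key_path, passphrase: bytes):
    with open(key_path, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    encrypted = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,            # encrypted PKCS#8 wrapper
        encryption_algorithm=serialization.BestAvailableEncryption(passphrase),
    )
    with open(key_path, "wb") as f:
        f.write(encrypted)

def remove_passphrase(key_path, passphrase: bytes):
    with open(key_path, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=passphrase)
    plain = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),    # same effect as omitting -aes256
    )
    with open(key_path, "wb") as f:
        f.write(plain)
```

As with the shell version, keep the file permissions tight (the chmod 600 step) after writing the key back.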
{ "source": [ "https://security.stackexchange.com/questions/59136", "https://security.stackexchange.com", "https://security.stackexchange.com/users/38377/" ] }
59,188
Wordfence reports the following visitor:

    An unknown location at IP 0.0.0.0 visited 4 hours 45 mins ago
    IP: 0.0.0.0  Browser: Baiduspider version 2.0
    Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)

    An unknown location at IP 0.0.0.0 visited 4 hours 45 mins ago
    IP: 0.0.0.0  Browser: Opera version 12.15 running on Win7
    Opera/9.80 (Windows NT 6.1; WOW64) Presto/2.12.388 Version/12.15
I suspect that your Wordfence plug-in is blindly trusting the X-Forwarded-For header. This header is used by proxies to indicate the IP address of the computers sending traffic through them, but it can easily be spoofed. It is also quite possible that some of the visits from “0.0.0.0” aren't malicious, but simply users behind a misconfigured proxy. Edit: Wordfence is indeed doing this, but this is configurable and is meant to accommodate reverse proxies such as CloudFlare. See the comment by Wordfence founder Mark Maunder below.
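To make the "blindly trusting" point concrete, here is a hedged sketch (not Wordfence's actual logic) of how an application can honour X-Forwarded-For only when the request really came through a proxy it operates; the trusted network range and function name are assumptions for illustration.

```python
# Hypothetical illustration: only believe X-Forwarded-For when the TCP peer is a
# proxy we trust; otherwise the header is just attacker-controlled text.
import ipaddress
from typing import Optional

TRUSTED_PROXIES = [ipaddress.ip_network("10.0.0.0/8")]   # placeholder: your reverse proxy range

def client_ip(remote_addr: str, x_forwarded_for: Optional[str]) -> str:
    peer = ipaddress.ip_address(remote_addr)
    if x_forwarded_for and any(peer in net for net in TRUSTED_PROXIES):
        # Header format is "client, proxy1, proxy2"; take the left-most hop only
        # because every later hop is one of our own proxies.
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

print(client_ip("203.0.113.7", "0.0.0.0"))      # spoofed header from a stranger -> 203.0.113.7
print(client_ip("10.0.0.5", "198.51.100.23"))   # via our own proxy -> 198.51.100.23
```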
{ "source": [ "https://security.stackexchange.com/questions/59188", "https://security.stackexchange.com", "https://security.stackexchange.com/users/47056/" ] }
59,367
If I click on the little lock icon in Chrome it says that the site in question is using TLS v1. I also checked using openssl and was able to hit the site using TLS1, SSL2 and SSL3. From what I understand SSL2 is not secure. Based on this, it appears that the site could be hit using any of the three. What determines the version of SSL/TLS that will be used when accessing a secure site from a web browser?
As @Terry says, the client suggests, the server chooses. There are details:

- The generic format of the first client message (the ClientHello) indicates the highest supported version, and implicitly claims that all previous versions are supported -- which is not necessarily true. For instance, if the client supports TLS 1.2, then it will indicate "max version: 1.2". But the server may then elect to use a previous version (say, TLS 1.0) that the client does not necessarily want to use.

- Modern clients have taken to the habit of trying several times. For instance, a client may first send a ClientHello stating "TLS 1.2", and, if something (anything) fails, it tries again with a ClientHello stating "TLS 1.0". Clients do that because there are poorly implemented, non-conforming TLS servers that can do TLS 1.0 but reject ClientHello messages that contain "TLS 1.2".

- An amusing consequence is that an active attacker could force a client and server to use an older version (say TLS 1.0) even when both support a newer protocol version, by forcibly closing the initial connection. This is called a "version rollback attack". It is not critical as long as client and server never accept to use a definitely weak protocol version (and TLS 1.0 is still reasonably strong). Yet this implies that a client and server cannot have a guarantee that they are using the "best" possible protocol version as long as the client implements such a "try again" policy (if the client did not implement such a "try again" policy then the rollback attack would be prevented, but some Web sites would become seemingly unreachable).

- The ClientHello message for SSL 2.0 has a very distinct format. When a client wishes to support both SSL 2.0 and some later version, then it must send a special ClientHello which follows the SSL 2.0 format, and specifies that "by the way, I also know SSL 3.0 and TLS 1.0". This is described in appendix E of RFC 2246. Modern SSL clients (Web browsers) don't do that anymore (I think IE 6.0 still did it, but not IE 7.0). RFC 4346 (TLS 1.1) specifies that such SSLv2-format ClientHello messages will be "phased out" at some point and should be avoided. RFC 5246 (TLS 1.2) more clearly states that clients SHOULD NOT support SSL 2.0, and thus should have no reason to send such ClientHello messages. RFC 6176 now prohibits SSL 2.0 altogether. Now an RFC is not a law: you don't go to jail because you don't support any particular RFC. However, RFCs still provide guidance, and thus somehow illustrate what will be the state of things in the near (or far) future.

In practice:

- Most clients out there will send only SSLv3+ ClientHello messages, and will happily connect with SSL 3.0, TLS 1.0, TLS 1.1 or TLS 1.2, depending on what the server appears to support (but, due to the "try again" policy, a version downgrade can be forced upon them by an active attacker).

- Actually, some clients won't support SSL 3.0, and require TLS 1.0. Similarly, some clients won't support TLS 1.1 or 1.2. Web browsers have been updated in recent years (in the aftermath of the bad press resulting from the BEAST attack) but non-browser applications are rarely as aggressively maintained.

- Many servers still accept an SSLv2 ClientHello format, as long as that ClientHello message is an SSLv3+ ClientHello in disguise.

- A few servers, like yours, are still happy to do some SSL 2.0. This does not conform to RFC 6176, and is frowned upon (people who believe in "grading SSL servers" will give you a bad score for that).
This is not a serious security issue, though, as long as clients don't actually support SSL 2.0. Even if a client supports SSL 2.0, it should include some rollback-prevention trickery (described in RFC 2246) so a rollback down to SSL 2.0 should not work. You still want to deactivate SSL 2.0 support in your server (not necessarily SSLv2 ClientHello format, but actual SSL 2.0 support), if only for public relations.
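If you want to repeat the asker's openssl probing from a script, a rough sketch with Python's standard ssl module is below; the host name is a placeholder, SSL 2.0/3.0 cannot be tested this way because modern OpenSSL builds drop them, and an OpenSSL security level may also refuse TLS 1.0/1.1 locally regardless of what the server supports.

```python
# Hedged sketch: attempt a handshake with one protocol version at a time and report
# which versions this particular server accepts.
import socket
import ssl

HOST = "example.com"   # placeholder target

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    ctx = ssl.create_default_context()
    ctx.minimum_version = version        # pin the offered version range to a single version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{version.name}: accepted ({tls.version()})")
    except (ssl.SSLError, OSError) as exc:
        print(f"{version.name}: rejected ({exc.__class__.__name__})")
```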
{ "source": [ "https://security.stackexchange.com/questions/59367", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5169/" ] }
59,411
http://seclab.stanford.edu/websec/csrf/csrf.pdf points out that most CSRF protection mechanisms fail to protect login forms. As https://stackoverflow.com/a/15350123/14731 explains, the vulnerability plays out like this:

1. The attacker creates a host account on the trusted domain
2. The attacker forges a login request in the victim's browser with this host account's credentials
3. The attacker tricks the victim into using the trusted site, where they may not notice they are logged in via the host account
4. The attacker now has access to any data or metadata the victim "created" (intentionally or unintentionally) while their browser was logged in with the host account

This attack has been successfully employed against YouTube. The authors of the paper went on to propose the addition of an "Origin" header but ran into resistance from W3C members: http://lists.w3.org/Archives/Public/public-web-security/2009Dec/0035.html

To date, only Chrome and Safari implement the "Origin" header. IE and Firefox do not, and it's not clear whether they ever will. With that in mind: what is the best way to protect against CSRF attacks on login forms?

UPDATE: I am looking for a RESTful solution, so ideally I want to avoid storing server-side state per user. This is especially true for non-authenticated users. If that's impossible then obviously I will give up on this requirement.
With anonymous cookies

If you are happy to generate secure tokens which are set as anonymous users' cookies, but not to store them server side, then you could simply double submit cookies. E.g. for a legitimate user:

1. The anonymous user navigates to the login page and receives a cookie, which is sent to the browser.
2. The anonymous user logs in, and the browser sends the cookie as a header and as a hidden form value.
3. The user is now logged in.

This cannot be abused by the attacker, as the following will now happen:

1. The attacker creates a host account on the trusted domain
2. The attacker forges a login request in the victim's browser with this host account's credentials
3. However, the attacker does not have access to the victim's cookie value and cannot forge it as the CSRF token in the request body. The attack fails.

Even if your site is only accessible over HTTPS and you correctly set the Secure Flag, care must be taken with this approach, as an attacker could potentially MiTM any connection from the victim to any HTTP website (if the attacker is suitably placed, of course), redirect them to your domain over HTTP, which is also MiTM'd, and then set the required cookie value. This would be a Session Fixation attack. To guard against this you could output the cookie value to the header and the hidden form field every time this (login) page is loaded (over HTTPS) rather than reuse any already set cookie value. This is because although a browser can set the Secure Flag, it will still send cookies without the Secure Flag over a HTTPS connection, and the server will not be able to tell whether the Secure Flag was set. (Cookie attributes such as the Secure Flag are only visible when the cookie is set, not when it is read. The only thing the server gets to see is the cookie name and value.) Implementing HSTS would be a good option for protection in supported browsers. It is advisable to set X-Frame-Options to prevent a UI redress (clickjacking) attack (otherwise the attacker could possibly use site functionality to pre-fill their username and password, awaiting the user to click and submit them along with the CSRF value).

Without anonymous cookies

If you do not want to set cookies for anonymous users (who may then suspect that they are being tracked server side), then the following approach may be used instead: a multi-stage login form. The first stage is the usual username / password combination. After the form is submitted, it redirects to another form. This form is protected by a special intermediary authentication token cookie and a CSRF token. The authentication here will only allow the second stage authentication to be submitted, but will not allow any other actions on the account (except possibly a full logout). This will enable the CSRF token to be associated with and used by this user account only on this intermediary session. Now it is only when this form is submitted, including the token cookie and CSRF hidden form value, that the user is fully authenticated with the domain. Any attacker attempting a CSRF attack will not be able to retrieve the CSRF token, and their full login attempt will fail. The only drawback is that the user will have to manually click to complete login, which may be a clunky user experience. It is advisable to set X-Frame-Options to prevent this being used in combination with a UI redress (clickjacking) attack. Any auto submission with JavaScript would be beneficial to the attacker and could cause their attack to succeed, so at the moment I can only see a manual click by the user working.
It would now play out like this:

1. The attacker creates a host account on the trusted domain
2. The attacker forges a login request in the victim's browser with this host account's credentials, but they cannot proceed past stage two to become fully authenticated
3. The attacker tricks the victim into using the trusted site - but as they are not fully authenticated, the site will act as though the user is unauthenticated
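The core check behind the "double submit" variant above is tiny; the following is a minimal sketch with hypothetical, framework-agnostic function names, not a drop-in implementation.

```python
# Hedged sketch of double-submit for a login form: the same random token is set as
# an anonymous cookie and embedded in the form, and the POST is accepted only when
# both copies match.
import hmac
import secrets

def issue_csrf_token() -> str:
    """Generated when the login page is served; set as a cookie AND as a hidden field."""
    return secrets.token_urlsafe(32)

def login_post_allowed(cookie_token: str, form_token: str) -> bool:
    """Constant-time comparison of the two copies submitted with the login POST."""
    return bool(cookie_token) and hmac.compare_digest(cookie_token, form_token)
```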
{ "source": [ "https://security.stackexchange.com/questions/59411", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5002/" ] }
59,470
Is the Double Submit Cookies mechanism vulnerable to anything other than XSS and sub-domain attacks? All CSRF protection mechanisms are vulnerable to XSS, so that's nothing new. I'm just wondering if I can safely rely on this mechanism so long as I ensure I control all sub-domains. NOTE: This question is a spin-off of How to protect against login CSRF?
According to a paper published at Black Hat 2013, it isn't enough for you to implement Double-Submit Cookies in its own sub-domain (e.g. secure.host.com). You really must control all sub-domains:

2.1.1 Naïve Double Submit

Double submit cookie CSRF mitigations are common and implementations can vary a lot. The solution is tempting because it's scalable and easy to implement. One of the most common variations is the naive:

    if (cookievalue != postvalue) throw CSRFCheckError

With naïve double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier than reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:

1. While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.

2. Secondly, this approach is vulnerable to man-in-the-middle attacks. If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at https://secure.example.com, even if the cookies are set with the secure flag, a man in the middle can force connections to http://secure.example.com and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plaintext HTTP requests), unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as http://hellokitty.marketing.example.com doesn't force https, then an attacker can overwrite cookies on any example.com subdomain.

In summary:

- You must control all sub-domains.
- You must set the Secure flag to ensure the cookies are only sent over HTTPS.
- If you care about MiTM attacks, you must transition your entire website to HTTPS, set the HSTS header and ensure it includes all sub-domains. At this time, only 58% of browsers support HSTS (Internet Explorer being a notable exception). This is expected to change over the coming year.

See https://security.stackexchange.com/a/61041/5002 for a discussion of the token length and type of RNG needed to generate values.
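One common hardening against the cookie-overwrite problem quoted above is to sign the token with a key only the server knows, so an attacker who can write a cookie from a sibling sub-domain or over plain HTTP still cannot mint a value the server will accept. This is a generic sketch of that idea (not the paper's exact proposal); the key handling and token layout are assumptions.

```python
# Hedged sketch: HMAC-signed double-submit token. Being able to write cookies is not
# enough to forge one, because the signature requires the server-side key.
import hashlib
import hmac
import secrets

SERVER_KEY = b"load-me-from-secret-config"   # placeholder; never hard-code in real use

def mint_token() -> str:
    nonce = secrets.token_hex(16)
    sig = hmac.new(SERVER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def token_is_valid(cookie_value: str, form_value: str) -> bool:
    if not hmac.compare_digest(cookie_value, form_value):    # classic double-submit check
        return False
    try:
        nonce, sig = cookie_value.split(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SERVER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)                 # reject tokens we never minted
```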
{ "source": [ "https://security.stackexchange.com/questions/59470", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5002/" ] }
59,566
After reading many articles and watching many tutorials I decided to be specific, because there are some things about SSL certificate chain verification, and SSL certificate verification in general, that I couldn't verify against all the tutorials I have read or watched.

CASE 1: Let me take a case where I own a certificate from Verisign; after the verification process they gave me a public/private key pair. My server sends the public key (my certificate) to the browser, and the browser owns a copy of Verisign's public key. The following questions may seem stupid, but there isn't consistency among tutorials and every time I try to understand I get myself into another corner. How is the browser, with Verisign's public key, going to identify that my public key is indeed a valid Verisign key that can be used for encrypting messages that my Verisign private key can decrypt? I assume it's the Verisign public key digital signature that is tested against the server public key digital signature? I may be wrong, but even if I am right I would be glad for a bit of clarification. In case it's indeed the digital signature that is tested, I assume there is a relationship between all of Verisign's public keys? Or maybe there is only a relationship between each public key Verisign assigns to a specific client (website/device/etc.) and the public key the browser/operating system includes.

CASE 2: Intermediate certificates. I understand that the need for them is security, so the root CA private key can stay offline; if there are more reasons I'd love to hear about them. About the chain verification: I assume the server public key digital signature is tested against the server intermediate certificate digital signature, and if it's valid, it is then the turn of the intermediate certificate digital signature to be tested against the browser/operating system pre-installed public key digital signature, and if this is also valid, the client has successfully validated both the intermediate certificate and the public key of the server.

I know I probably mixed it all up; if anyone can help me straighten things out I will be very thankful.
It's not at all clear to me what you don't understand, so I'll take it very slowly. First some terminology. It's important to get this straight, because otherwise you can't know correctly what you're hearing and saying.

Key pair: a private key and a corresponding public key which are mathematically related and used for public-key cryptography (PKC), also called asymmetric cryptography. For RSA, which is usually the only PKC someone knows when they don't specify, the public key consists of a modulus N which is the product of two large primes, and a public exponent E which can be and usually is small (in fact it usually is either 3 or 65537); the private key contains at least N and a private exponent D such that E x D = 1 mod phi(N) or at least lambda(N); in practical use, the private key often contains additional values that enable faster computation, but there are quite a few other questions about that so I won't repeat them.

Digital signature: a value computed, for a specified chunk of data (almost always a hash of the "real" data) using a private key, such that the corresponding public key can be used to determine if the given signature is correct for the given data and could only have been generated by the given private key. This allows the verifier to determine that the data has not been altered or forged, and that the data was sent or at least seen by the holder of the private key, but it doesn't say anything about who that holder is.

(X.509 aka PKIX) (Identity) Certificate: a data structure including a public key for an entity and the identity of that entity, plus some other information related to the entity and/or the CA, all signed by a (generally) different entity called a Certificate Authority or CA. I hope you meant Verisign only as an example; it is one CA but not the only one; you can get an equally good Internet certificate from others like GoDaddy, Comodo, StartCom, LetsEncrypt, etc. If you trust a given CA to issue certs correctly, then you can trust the much larger number of public keys and identities in the certs it issues. Update 2017: StartCom is no longer good, since it was bought by WoSign, who was then caught breaking CABforum rules and is now widely distrusted. OTOH LetsEncrypt is now widely trusted, and free. Which further emphasizes the importance of having multiple CAs! More update: Now Verisign is bad also; although not publicized so widely in 2017, Symantec (who had acquired Verisign and several other brands, but continued mostly using the existing names) was caught misissuing and progressively distrusted, which they resolved by selling the business to DigiCert (who fairly quickly replaced tainted Symantec certs with DigiCert ones). But I've left the pseudo-Verisign names in my example below, matching the Q, since this is only an example and the logic is the same for any names.

User (end-entity) certificate: a certificate containing the key and identity for anything other than a CA, such as an SSL server, an SSL client, a mail system, etc.

(CA) Root certificate: a certificate containing the public key of a root CA. Since there is no CA "above" the root, a dummy signature using the CA's own private key is used, but that has no security value. You must decide (or delegate) whether to trust that key "out of band", that is, based on reasons other than cryptographic computations.

Chain or intermediate certificate (and CA): (as your question indicates) most CAs now operate in a hierarchical fashion, where the root key is not used to directly issue user certificates.
Instead, the root CA and its root (private) key are used to sign certificates for several intermediate or subordinate CAs, each of which has its own key pair. Each intermediate CA can then issue user certs, or sometimes a second level of intermediate certs; this can be extended to several levels, but that's very rarely needed. Since you mention a browser, you are apparently concerned (only?) with certificates (and keys) for HTTPS (HTTP over SSL/TLS). This is a common and important use of certificates and PKC, but not the only use.

Now, case 1. Since this case doesn't consider chain/intermediate, and Verisign does use that, I'll use a hypothetical SimpleCA instead. First, no CA ever has or sends you your private key. You generate your key pair and send your public key to the CA in a data structure called a Certificate Signing Request or CSR. The CSR also should contain your name; for an SSL server this is normally the domain name (FQDN) of the server. The CSR contains some other data you can ignore for now. The CA verifies your claimed identity (for an SSL server, that you "control" the specified domain name), and usually collects a fee, and then creates a certificate which contains:

- your name (here the SSL server FQDN) as the Subject, and/or one or several name(s) as SubjectAlternativeNames, especially in recent years
- your public key as SubjectPublicKey, and usually a hash of it as SubjectKeyID
- a ValidityPeriod specifying how long the certificate is valid, chosen by the CA based partly on how much you pay them
- the CA's name as the Issuer, and usually an AuthorityKeyID which also identifies the CA
- some other data you can ignore for now

and this whole structure is signed using the CA's private key (which means it can be verified using the CA's public key). The CA sends this certificate back to you to use in your server along with the private key you already have.

Previously, assuming SimpleCA showed it can be trusted to appropriately verify applicants and paid a fee, the browser vendors (Microsoft, Mozilla, Google, Apple, etc.) agreed to include its public key with their browsers, normally in the form of SimpleCA's root cert. Now when some browser connects to your server and you send your cert which says in effect "Aviel-server approved by SimpleCA", the browser finds SimpleCA's root cert and thus SimpleCA's public key and uses that to verify that your cert is signed correctly, and that the server name in your cert matches the server in the URL the user wants; if both those pass, it accepts the public key in your cert as the correct public key for you, and uses it to complete the SSL/TLS handshake. If not, it displays some kind of warning or error.

Unless, that is, your cert was revoked. If your private key is compromised, or if the CA determines that you no longer control the claimed identity (including if they discover you never did but deceived them on the application), the CA publishes that your cert is revoked and therefore browsers don't trust it even though the signature still verifies (because the RSA computation is a fixed mathematical process independent of time or environment). But revocation, and if and when and how well it works, is a whole complicated topic by itself, and this answer is large enough already. (edit) How does OCSP stapling work? covers revocation (actually both OCSP and CRLs) with ursine thoroughness.

If your certificate is expired (past the end of its validity period) it is also invalid; this one is easy for the browser to check.
Formally a certificate can also be invalid before the beginning of its validity period, but in practice CAs don't issue 'post-dated' certs unless someone messes up the timezone or something. However, the set of CA root certificates supplied in a browser is usually only the default; the browser user can decide to add new roots or delete existing ones if they choose. And if so that may make your server trusted when it wasn't before, or untrusted when it was before. Case 2 (chain). One reason for intermediate CAs and thus intermediate or chain certs is keeping the root key offline as you say. Another reason is to allow the intermediate CAs to be managed much like users: they can have limited validity periods and be renewed, and if compromised (or just no longer wanted) they can be revoked, and this happens automatically and nearly invisibly. On the other hand if you have to extend, replace, or revoke a root, basically every browser in the world must be updated. That's a lot of work, and is never done completely because users will refuse or forget to install an update and be left trusting a CA that isn't secure and thus probably servers that aren't legitimate. Also for revocations that are published the older way using a Certificate Revocation List (CRL) dividing the issued certs over multiple intermediate CAs makes CRL management easier. With a single intermediate/chain cert, the process is changed as follows. Let's say VerisignServerB is issued under VerisignRoot and is used in turn to issue Aviel-server . (The actual names are longer, but this is easier to see.) Then your server is configured to send both Aviel-server and VerisignServerB to the browser. The browser checks that the VerisignServerB cert is signed correctly under the VerisignRoot public key (as before, normally stored as a selfsigned root cert), AND that Aviel-server cert is signed correctly under VerisignServerB public key from that cert, and that Aviel-server cert name matches the desired one. It doesn't matter which order the signatures are checked as long as both are. SSL Certificate framework 101: How does the browser actually verify the validity of a given server certificate? has a nice graphical example of this which may help you. For multiple intermediate/chain certs, when used, the extension should now be obvious.
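To see the two signature checks from the example in code, here is a bare-bones sketch with the Python cryptography package; it assumes RSA/PKCS#1 v1.5 signatures and placeholder file names, and it deliberately skips everything else a real client checks (names, validity dates, revocation, extensions).

```python
# Hedged sketch: verify that the leaf cert is signed by the intermediate, and the
# intermediate by the root -- nothing more.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def load_cert(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

def signed_by(cert: x509.Certificate, issuer: x509.Certificate) -> bool:
    """True if `cert` carries a signature made with `issuer`'s private key (RSA assumed)."""
    try:
        issuer.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            cert.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False

leaf = load_cert("aviel-server.pem")               # placeholder file names
intermediate = load_cert("verisign-server-b.pem")
root = load_cert("verisign-root.pem")
print("leaf signed by intermediate:", signed_by(leaf, intermediate))
print("intermediate signed by root:", signed_by(intermediate, root))
```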
{ "source": [ "https://security.stackexchange.com/questions/59566", "https://security.stackexchange.com", "https://security.stackexchange.com/users/47974/" ] }
60,630
I've played around with Metasploit simply as a hobby, but I am wondering: do actual pentesters and/or hackers use Metasploit to get into systems, or do they write their own post-exploitation modules or their own programs entirely? The reason I ask is that Metasploit does not seem to be able to selectively clear Windows event logs and such, or perhaps I just couldn't find it (the nearest I can find is clearev, but that simply wipes out everything, which isn't very sneaky). Besides, even if it is able to selectively clear the event logs, there will be places like the prefetch queue in ring 0 where forensics will be able to find what I did from the system image...
As far as forensics is concerned, Metasploit has payloads which are specifically designed to make the work of forensic analysis more difficult. For example, the most famous payload, which is selected by default with a lot of exploit modules, is the meterpreter payload. It runs completely in memory and doesn't touch the disk for any operations (unless specifically asked to by the user), which means there will be no evidence in the prefetch folder or any other place on the disk. You don't have to clear all the event logs: you can selectively clear any event log you want through the meterpreter script event_manager. Meterpreter has a tool called timestomp which can change the modification, access, creation, and execution time of any file on the hard disk to any arbitrary value. You can securely wipe out any file with the sdel (safe delete) module, which not only securely wipes the file contents but also renames the file to a long random string before the deletion, which makes the forensic recovery of not only the contents but also the file metadata very difficult.

Now to the second part of your question: the use of Metasploit by actual malicious attackers in real-world attacks. There have been reports that Metasploit was used in one of the attacks on the Iranian nuclear facility. The reason you don't see Metasploit more often is due to the open source nature of the product. Since the exploits and payloads are available to everyone, by default every security product such as antivirus or IDS/IPS considers these files malicious. The defense industry has gone to such an extent that even if one creates a completely benign file with Metasploit, it will be detected by almost all AV solutions. Generate an empty payload like:

    echo -n | msfencode -e generic/none -t exe > myn.exe

Upload it to VirusTotal and you will see that more than half of the AV solutions detect it as malicious. More details can be found on Matt Weeks' blog here. With this behavior no attacker will risk using Metasploit for actual attacks due to the very high detection rate. The modules can be easily customized, and bypassing AV and other security controls through Metasploit is quite easy as well. However, at that point it is difficult to determine whether a payload was written from scratch or a Metasploit module was modified. Therefore, it is difficult to say for sure how many attackers have used or continue to use Metasploit in their operations.
{ "source": [ "https://security.stackexchange.com/questions/60630", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49022/" ] }
60,642
I was looking for a form on the US Department of Transportations website, and I came to a page that gave me an error, with a full debug report and stack trace. Hopefully, you can get the same result by going to the page as well: http://www.dot.gov/airconsumer/air-travel-complaint-comment-form If not, I've included two screenshots of the page (it wouldn't fit into 1) Can this result in a breach of security or is it a non-issue (just an inconvenience and unprofessional page)? What sensitive information is there on the page and how can it be exploited? I am asking from a purely academic perspective, and have no intention of trying to enter unlawfully into the DOT's site.
DOT's backend Oracle database is down due to ORA-27101. ORA-27101 has a nice explanation here with a useful reader comment stating that it happened to them because the Windows Event log was full. From the output, you can learn that they have Oracle, Java JDBC, Drupal, ColdFusion. You also see some SQL code. With that knowledge you can start digging for vulnerabilities in those products/technologies. The output mostly means that the DOT Database Administrator will have a hot day and that the page should be back soon. Feel free to notify the administrator as it says on the page.
{ "source": [ "https://security.stackexchange.com/questions/60642", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49031/" ] }
60,691
With large computing power (like what you can get in the Amazon cloud, for example) you can generate huge rainbow tables for passwords. There also seem to be some large rainbow tables available that you have to pay for. What are the largest tables that secret services could possibly use, and what is the maximum number of characters they can cover for a password? I am wondering what password length should be considered when you choose a new password yourself. (This is not the question here!) What is the maximum password length those huge underground rainbow tables (used by evil forces) are hosting?
Length of the password, and size of the rainbow table, are red herrings. Size does not matter. What matters is entropy. Rainbow tables are not really relevant here, for two main reasons:

1. Building the table is expensive. A table "covers" a number of possible passwords; let's call it N. There are N distinct passwords that will be broken through the table. No other will be. It so happens that every single one of these N passwords must have been hashed during table construction. Actually, a lot have been hashed several times; building a table which can crack N passwords has a cost of roughly 1.7*N hash invocations. That's more expensive than brute force. In fact, brute force requires on average 0.5*N hashes, so the table building costs more than three times as much. Thus, a rainbow table is worth the effort only if it can be applied at least four times, on four distinct password hashes that shall be cracked. Otherwise, it is a big waste of time.

2. Rainbow tables cannot be applied more than once. That's because of salts. Any password hashing which has been deployed by a developer with more brain cells than a gorilla uses salts, which are non-repeating variation parameters. The effect of salts is equivalent to using a different hash function for each user. Since a rainbow table must be built for a specific hash function, one at a time, it follows that a rainbow table will be able to crack only one password hash in all. Combine with the previous point: rainbow tables are simply not useful.

Now, of course, there are a lot of deployed systems where passwords are not hashed with human-level competency. Ape-driven development has resulted in many servers where a simple unsalted hashing is used; and even servers where passwords are stored as cleartext or some easily reversible homemade encoding. But one could say that if your password is to be managed by such sloppily designed systems, then extra password entropy will not save you. A software system which utterly fails at applying sane, documented techniques for protecting a password, which is the archetypal sensitive secret data, is unlikely to fare any better in any of its other components. Crumminess and negligence in software design are like cockroaches: when you see one, you can be sure that there are hundreds of others nearby.

Now let's assume that reasonable password hashing was employed, combining salts (to defeat table-based and parallel attacks) and configurable slowness (to make hashing more expensive). E.g. bcrypt. This answer is a lengthy exposition of how passwords should be hashed. For instance, consider a server using bcrypt, configured so that, on that server, verifying a password hash takes 0.01s worth of CPU time. This would still allow the server to handle 100 user logons per second, an extremely high figure, so the server will not actually devote a lot of its computing power to password hashing.

Now, imagine an attacker who got his hands on some password hashes, as stored on the server (e.g. he stole an old backup, or used an SQL injection attack to recover parts of the database). Since bcrypt is salted, the attacker will have to pay the full brute force attack cost on each password; there is no possible parallel cost sharing, and, in particular, no precomputed tables (rainbow or not) would apply. Our attacker is powerful: he rents one hundred servers of computing abilities similar to the attacked server. This translates to 10000 password tries per second.
Also, the attacker is patient and dedicated: he accepts to spend one month of computing on a single password (so that password must protect some very valuable assets). In one month at 10000 passwords per second, that's about 26 billion passwords which will be tried. To defeat the attacker, it thus suffices to choose passwords with enough entropy to lower the success probability of the attacker to less than 1/2 under these conditions. Since 26 billion is close to 2^34.6, this means that the password entropy shall be at least 35.6 bits.

Entropy is a characteristic of the method used to generate the password, and only loosely correlated with the length. Password length does not imply strength; length is just needed to make room for the entropy. For instance, if you generate a password as a sequence of random letters (uppercase and lowercase), then 7 characters accumulate almost 40 bits of entropy, which, by the calculations made above, should be fine. But this absolutely requires that the letters are chosen randomly, uniformly, and independently of each other. Not many users would go to the effort of using a computer PRNG to produce a new password, and then accept to learn it as it was generated (I do that, but when I explain it to my work colleagues they look at me in a weird way). As this famous question hints at, human memory can be at odds with entropy, and a longer password with more structure may be a better trade-off. Consider that the "correct horse" method ends up with "only" 44 bits of entropy for 25 letters or so (by the way, take note that a 25-letter password can have less entropy than an 8-letter password). The extra typing can be worthwhile, though, if it allows an average user to remember a 40-bit entropy password.

Despite all the propaganda of Hollywood movies, Secret Services rarely involve stupendously advanced technology (for that matter, they are also quite short on dry Martinis, leggy blondes and motorbike chases). Any rationally managed Secret Service will avoid spending 10k$ on breaking your password, since 1k$ will be more than enough to hire two goons to break your kneecaps. Ultimately, this is all a matter of relative cost.

So let's get practical. To generate a password, use this command on a Linux system:

    dd if=/dev/urandom bs=6 count=1 2> /dev/null | base64

This outputs a password consisting of eight characters among lowercase letters, uppercase letters, digits, '/' and '+'. It is easily shown that each such password offers a whopping 48 bits of entropy. The important rules are:

- Generate the password and then accept it. Don't go about producing another one if you don't like what you got. Any selection strategy lowers your entropy.
- Don't reuse passwords. One password for each site. That's the most important damage containment rule. If you have trouble remembering many passwords, you can "write them down" (password managers like KeePass may help).

If 48 bits of entropy are not enough, then you are using a password in a weak system, and that is what you need to fix. Your enemies are after your money, not your freedom. Don't try to defeat phantasmagorical three-letter agencies; concentrate on mundane criminals.
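The dd | base64 recipe can be rephrased with Python's standard library if you prefer; the arithmetic below just mirrors the answer's own numbers (a 64-symbol alphabet, 8 characters, a 10,000-guess-per-second attacker).

```python
# Sketch: generate an 8-character password from a 64-symbol alphabet and show why
# its 48 bits of entropy are enough against the attacker modelled above.
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "/+"      # 64 symbols, like base64
LENGTH = 8

password = "".join(secrets.choice(ALPHABET) for _ in range(LENGTH))
entropy_bits = LENGTH * math.log2(len(ALPHABET))             # 8 * log2(64) = 48 bits

guesses_per_second = 10_000
avg_seconds = (2 ** entropy_bits / 2) / guesses_per_second   # average-case brute force
print(password)
print(f"{entropy_bits:.0f} bits of entropy, ~{avg_seconds / (3600 * 24 * 365):,.0f} years on average")
```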
{ "source": [ "https://security.stackexchange.com/questions/60691", "https://security.stackexchange.com", "https://security.stackexchange.com/users/13212/" ] }
60,717
There are excerpts that say that https can by now be broken by the NSA. So is https still a solution for secure web browsing?

Source: http://www.digitaltrends.com/web/nsa-has-cracked-the-encryption-protecting-your-bank-account-gmail-and-more/

"Encryption techniques used by online banks, email providers, and many other sensitive Internet services to keep your personal data private and secure are no match for the National Security Agency."
Basically, the NSA is able to decrypt most of the Internet. They're doing it primarily by cheating, not by mathematics. -- Bruce Schneier Now, your question is "So is HTTPS still a solution for secure web-browsing?" The answer is, as safe as it ever was, unless your opponent is the NSA. The reports you're referencing do not identify a generic weakness that can be taken advantage of by random actors, by organized crime, by your ex-wife. There's no systemic failure here, just proof that a sufficiently funded, authorized, and skilled attacker can compromise lots of endpoints. And we knew that.
{ "source": [ "https://security.stackexchange.com/questions/60717", "https://security.stackexchange.com", "https://security.stackexchange.com/users/13212/" ] }
60,842
There is a corporate web mail site (PHP + MySQL) for a limited number of users who are employees of a company working remotely with the corporate web portal. Each user has a login and password. I'm thinking about replacing the usual text passwords with a key file, i.e. the user chooses any file to be the key at his first logon (it can be a text file or even a picture), the checksum of that file gets stored in the database, and the next time such a user needs to log in he uploads his key file instead of typing a password. Would such authentication be more secure than typing a password? I guess it is much harder to figure out a key file than a password.
The main problem with a key file is that it is a file. As such, it is stored somewhere, on some physical medium. It will be copied with backups. The file will still be there on discarded hard disks. Users will copy their files to several devices in order to be able to log in from all these devices. To sum up, files leak. Conversely, a password fits in a brain and need not be written anywhere; the user naturally moves it around with him; passwords don't leak to backup tapes and old disks. Last but not least, password entry works well on mobile phones, whereas file upload can be more technologically challenging. So while a secret file can contain a lot more secrecy than a mind-powered password, it also tends to be a lot less "secret" and to imply usability issues. Overall, the "secret file" method does not seem to be more secure, in a generic way, than passwords. Another way to see it: a "secret file" is equivalent, from a security point of view, to a text file that contains a big fat and random password, that the user reads when he wants to log in (and possibly "types" the password with a copy&paste). Every argument against writing down a password in a text file equally applies to your "secret file" idea.
{ "source": [ "https://security.stackexchange.com/questions/60842", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37662/" ] }
61,056
The title says it all, really. I'm Alice, and I want to login to Gmail's web interface through my browser. Ike, the internet service provider, and Adam, the local network administrator, would like to know what my Gmail email address (username) is. Is there any conceivable way for things to happen so that either one of them could possibly learn it?
For your average home user, services like GMail (that are run over TLS) would not leak information like the username to the ISP or network administrator. If you're using a machine that is also administered by the LAN administrator (e.g., a work computer attached to a domain run by the company), then you have to assume they can read anything you do on it. They could have software that logs your activity (browsing and/or keystrokes), or they could have installed extra SSL certificates that allow them to MITM your connection to GMail (or any other site). If you believe your computer has not been tampered with and is not under the control of someone you don't trust (i.e., the LAN admin doesn't control your machine), then you can connect to GMail over https. (At this point, the LAN admin is equivalent to an attacker on the internet.) Ensure that you connect to https://mail.google.com with a valid certificate, and then all the traffic between your computer and the GMail servers is encrypted. This includes all information about your account, including the username with which you are logged in.
{ "source": [ "https://security.stackexchange.com/questions/61056", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49356/" ] }
61,115
I pay my neighbors to use their WiFi. They have listed me as Guest with a separate password from theirs. Is there any way to prevent them from seeing the sites I've visited? My browser history clears automatically. Since they're in charge of the router, can they always see in real time what I'm browsing as I'm browsing? I'm assuming they won't be able to see where I've been since my history clears.
Yes they can, but unless your neighbor has the required technical expertise, it's highly doubtful. To view incoming and outgoing traffic you need specific software to monitor network packets and the technical knowledge to actually do it. Most routers only keep a syslog, and unless they are using software like Wireshark to monitor/capture your packets, they cannot view the sites you have visited. So unless he is a geek or a hacker (however amateur), there is usually nothing by default that records your traffic.

Side note: clearing your browsing history is completely restricted to your local system, and if they can monitor / are monitoring, it will do you no good.

Alternatively, if your neighbor does have the required software and skills to monitor your traffic, you can use a proxy. By doing so the monitoring software will only show a lot of outgoing and incoming connections between you and the proxy, so even though they will know you are using a proxy, they can't see what sites you visited.

EDIT: as lorenzog correctly mentioned in the comments, for a true sense of security and privacy, one should use an SSL proxy to encrypt the data sent and also tunnel DNS queries (which can be monitored by the router administrator) through the SSL proxy.
{ "source": [ "https://security.stackexchange.com/questions/61115", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49389/" ] }
61,215
I was kind of shocked when I just received my confirmation mail from the shop where I just registered myself: they sent my username (which is my email address) and the password I typed in. The password was not partially replaced with *s or similar; it was the naked, blank password I picked. This does mean people who can check the emails the shop sends could theoretically see my login data, does it not? I believe this is the first time I get a confirmation with my full login information so this seems really weird and somehow concerns me. Should it? From the fact that I received my password by email, I am guessing that the shop does not encrypt my password. Is this a valid inference?
Sending you the password in plain text does not necessarily mean the database stores it in plain text, especially if they sent you the email before encrypting and storing the password. However, if you ask for the password later on (e.g. via a "forgot password" mechanism) and they do send it to you like this, it implies that they are either storing it in plain text or using an easily reversible encryption. In either case, there is reason to be concerned unless they only send you the password on registration and before storing it on their server in encrypted format. In particular:

- If they have a "forgot my password" link that sends you the password you had previously set up, then yes, there is reason to be concerned: they are storing the password in plaintext or using reversible encryption.
- If they send you a new password, then it doesn't necessarily mean they are storing the password in plaintext or using reversible encryption. In that case, you don't have enough information to know whether there is cause for concern.

A separate issue is that, in any case, email is not a safe medium for sending passwords. Thus, even if they aren't storing the password in plaintext, if they are sending it to you by email in plaintext, that does pose some risk. According to Plain Text Offenders, man-in-the-middle attacks are easy to pull off between you and the server, and the communication protocol in itself is not encrypted.
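For contrast, a site that never needs to email you a password handles "forgot password" with a short-lived random reset token; this is only a sketch of that flow, with hypothetical names and an assumed one-hour expiry.

```python
# Hedged sketch: email the user a one-time reset-link token, store only its hash,
# and let it expire -- the plaintext password never has to exist server-side or in email.
import hashlib
import secrets
import time

def issue_reset_token():
    token = secrets.token_urlsafe(32)                 # this value goes into the email link
    record = {
        "token_hash": hashlib.sha256(token.encode()).hexdigest(),   # only the hash is stored
        "expires_at": time.time() + 3600,             # assumed one-hour validity
    }
    return token, record

def token_matches(presented: str, record: dict) -> bool:
    if time.time() > record["expires_at"]:
        return False
    presented_hash = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(presented_hash, record["token_hash"])
```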
{ "source": [ "https://security.stackexchange.com/questions/61215", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49465/" ] }
61,300
According to Microsoft, adding a honeypot to your corporate network is an effective way to deter hackers from compromising your network. Aren't honeypots more for research purposes and not ideal for corporate networks? Wouldn't having a honeypot on your corporate network likely give a hacker a better grip hold in making attacks? More information: Security Fundamentals (Microsoft Virtual Academy). The quiz for #4 made that claim. I answered based on my understanding in the security field and got it wrong. Hence my confusion.
Deploying a honeypot is not unlike adding a painted door or a fake safe to a bank vault. It does not deter anybody (its purpose is not to be detected as a honeypot). Possibly someone misspelled detect . It can reduce (somewhat) the time spent by attackers against the real door. Not by much. More than that, it can be optimized for data gathering (simplistically, you know nobody is going to ever use the services on the honeypot, so you can tune them all to "log everything high volume paranoid dump dump dump". Service efficiency goes to hell, but as soon as someone attempts anything you need guess nothing. The "safe" is all alarms, and no bullion. Even more, you can correlate what is happening on the honeypot with what is going on in the rest of the network. The honeypot will tell you what's behind what the other machines report as unusual but random "noise". Finally, the honeypot can apparently allow an attack to succeed, so that you can gather yet more data. Example: the attacker wins a root shell on the honeypot. He proceeds to download more sophisticated tools. Now you have, at the very least, a copy of those tools as well as an idea of where he downloaded them from. (If you have time you can crash the connection in a not-too-suspicious way and let him reconnect later after having added suitable instrumentation to his tools, that are now his no longer). You can determine whether he's just a script kid looking for some warez bouncing, or someone who actively targeted your network. Even by avoiding the honeypot he will tell you something that's worth knowing. But pretty much nothing of the above comes for free ; nor will it work by itself. You need someone continuously managing (normally at a very low level, but continuously and always ready to escalate) the honeypot. The honeypot has to be maintained and updated, just as much (possibly much more ) than the other boxes. You have to decide whether the gain is worth the pain, and whether you can invest in the necessary pain. Just "adding a honeypot" will do nothing to increase the network security; to the contrary, it will engender a bit of false security, and possibly provide a security breach if the honeypot isn't as well insulated and armored-in as you believed; it might also attract more attention than if it weren't there, if it offers services or vulnerabilities the other machines don't share.
{ "source": [ "https://security.stackexchange.com/questions/61300", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31084/" ] }
61,321
I was logged on to my router and filling out some information. I clicked a button and a field was automatically filled in with my computer's MAC address. How is this possible? Does it present a security risk? I'm connected through VPN and my computer is up to date and running Microsoft Security Essentials (MSE). Is there JavaScript code that can get the MAC address? Is the MAC address sent in an IP header? I was surprised I wasn't at least prompted to share this information.
This is not a security risk. The router looks in its ARP table to find the MAC address for your IP address. The reason it can do this is that you are connected to the router via layer 2 of the OSI model. The router simply looks up your IP address in the ARP cache to find its MAC address. A website on the Internet is not connected to your LAN and will not be able to determine your MAC address. In the same way, your computer determines the MAC address of your router by looking in its own ARP cache. If you open up your command prompt (cmd.exe on Windows) and type "arp -a", you will see your router's MAC address. This does not pose a security risk, and is required for IP traffic to work on an Ethernet network.
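As a small illustration of how local this information is: on Linux the same ARP cache that arp -a prints is exposed as a text file, so a few lines of Python can list the IP-to-MAC mappings of devices on your own LAN (and only your own LAN -- remote web servers never see this).

```python
# Sketch (Linux-only): read the kernel's ARP cache, the same data "arp -a" shows.
def read_arp_cache(path: str = "/proc/net/arp") -> dict:
    entries = {}
    with open(path) as f:
        next(f)                                   # skip the header line
        for line in f:
            fields = line.split()
            ip_addr, mac = fields[0], fields[3]   # columns: IP address ... HW address ...
            if mac != "00:00:00:00:00:00":        # skip incomplete entries
                entries[ip_addr] = mac
    return entries

print(read_arp_cache())   # e.g. {'192.168.1.1': 'aa:bb:cc:dd:ee:ff'}
```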
{ "source": [ "https://security.stackexchange.com/questions/61321", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10714/" ] }
61,361
I've noticed that websites are starting to use 256-bit symmetric encryption, but often still end up using 2048-bit RSA keys. http://www.keylength.com/en/3/ The link above displays the ECRYPT II recommendations, which state that 128-bit symmetric and 3248-bit asymmetric encryption have comparable strength. The NIST recommendations state that 128-bit symmetric is comparable to 3072-bit asymmetric encryption. This would mean that 2048-bit RSA is weaker than 128-bit symmetric encryption, which makes me wonder why websites are starting to offer 256-bit symmetric encryption while the weakest link (RSA) doesn't even offer 128-bit strength. Is there anything I'm missing here?
People use 256-bit encryption because they can , and, given the choice, people tend to go for the biggest numbers, because they feel that they "deserve it". Scientifically, it does not indeed make sense to use AES-256 when the key exchange relies on 2048-bit RSA. This is just wasted CPU cycles; AES-128 would have been equally fine. But "256" can woo auditors into submission. Such are the intricacies of the human psychology.
{ "source": [ "https://security.stackexchange.com/questions/61361", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49555/" ] }
61,412
I want to know how Facebook discovers the people who you know in real life or who know you. I tried the following to see if Facebook can still discover my acquaintances in real life and suggest them to me as a friend. I connected using a VPN (an anonymous VPN, not one of those free VPN services). I have confirmed that it does not leak my actual IP address. I cleared the cookies in my browser (specific to sites like Facebook, Google, and Yahoo) and started a fresh instance of Browsing Session. Anyway, cookies specific to Facebook only should matter, and I cleared them all. I registered an email account with an email service provider who does not require a mobile number for registration. I used an email ID name which had no resemblance to my real name. I did not mention anything related to my geographical location while registering the email address. Please note that this was a fresh email address, and I have never used it to send an email or receive an email. Now, I registered on Facebook, using a name which does not resemble my real name in any way. However, Facebook requires phone number verification before you complete registration on Facebook. This is the only place where I specified my real phone number to receive their security code. Once, I completed the verification. The moment I logged in, I could see Facebook giving me a list of suggestions of people I may know. It was surprising indeed, since this list was extremely accurate. It included people I knew in the past as well my current acquaintances. It makes me rather suspicious. The only way I see that Facebook was able to identify the people who may know me or I may know them was using my phone number. So my assumption is: They appear to have a deal with the telecommunication providers in different countries. Once you disclose your phone number, it looks like they get access to the entire list of phone numbers with whom you have corresponded in the past. Then, they do a second level lookup to identify the Facebook profiles of those corresponding phone numbers. Also, interestingly, there are some people with whom I may not have ever corresponded with on phone. But of course, Facebook can find them through other people I know and suggest them to me. Am I correct that Facebook was able to do all the correlation of people I know in real life using my phone number? It would be interesting to see whether they could still correlate it if I use another phone number.
So, yes, "they appear to have a deal with the telecommunication providers in different countries". Well, that's ONE explanation. Another one that I like better is simply that they have all their users' contact lists, thanks to their mobile application, which no doubt reads everything and sends it back to their headquarters. All they have to do after you register with your real phone number is look through all those contact lists and find the people who possess your number. The idea that they may have arrangements with telecom providers seems a little far-fetched to me, in great part because it is simply illegal in many countries to disclose phone records to anyone without a court order.
{ "source": [ "https://security.stackexchange.com/questions/61412", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10217/" ] }
61,527
For example, on the user registration page, is it safe to tell people "Your password will be stored as a one-way hash using the (whatever) algorithm"?
If it is not safe, then your hash function is pure junk, and you should not use it. In any decent security analysis, it is assumed that the attacker already knows all the software that you are using, because: There are not so many possibilities. A lot can be guessed based on application behaviour and output. An attacker who could get a glimpse at your database might also have a copy of your PHP scripts, and thus knows what you use. Even in cases where the attacker does not know your hash function, it is very difficult to quantify such lack of knowledge: to what extent does the attacker not know? In particular, the attacker may also register as a user, with a password he chooses, and then observe the resulting hash, allowing him to quickly test any potential hash function. On the other hand, making the used hash function explicit may give you a reputation of "competent site owner" which would bring confidence to technically-inclined customers.
{ "source": [ "https://security.stackexchange.com/questions/61527", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49637/" ] }
61,585
My manager says we don't need to salt our passwords because people are not likely to use the same password because they all have different native languages, in addition to the websites they are active at. What is the best counter argument to this?
I'm not sure where you are from. First of all, his opinion is against the considered industry best practice as defined by NIST. Furthermore, your manager is dangerously wrong: the more users you have, the more likely it is that several of them use the same password. Also, the following companies do it, and I'm quite convinced that they have a larger global user base than your website: Facebook, Gmail, and LinkedIn. Another reason, which you can explain from a business perspective, is the risk of image/brand damage when you get negative publicity. To give you an example, Adobe used a bad implementation for password storage which led to the same issue you are having now: the value resulting from hashing the password (in Adobe's case they were using encryption rather than hashing) was the same for several users if they were using the same password (in their case due to not using an IV while performing symmetric encryption, but this is not relevant here). This caused a massive storm of negative publicity; after all, an internet company should be adhering to something as simple as password hashing, no? The financial cost resulting from such negative publicity can be significant. Also, depending on the country where you live and the laws employed, these days management can be held personally accountable if it is determined that they were negligent when it comes to protecting data privacy (if cracked passwords are used to get personal data of users, for instance). And depending on the type of information stored (financial, medical, credit cards), different rules may apply which can result in other types of fines as well. This is something which might not be relevant for password hashing directly, but might come in handy if your manager makes further bad decisions. So make sure to keep copies of emails where you clearly explain the problem, and also save your manager's reply (just to cover your own ass). The fact that you are not using salts also probably means you are not using a correct password hashing algorithm, as these typically take care of generating salts and performing a number of iterations for you. The three accepted algorithms are PBKDF2, bcrypt, and scrypt.
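For reference, here is a minimal sketch of what per-user salted hashing with one of those algorithms might look like, using PBKDF2 from Python's standard library; the iteration count and salt length are illustrative assumptions and should be tuned to your own hardware and requirements:

import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> dict:
    # A fresh random salt per user: identical passwords produce different hashes
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return {"salt": salt.hex(), "iterations": iterations, "hash": digest.hex()}

def verify_password(password: str, record: dict) -> bool:
    candidate = hashlib.pbkdf2_hmac(
        "sha256",
        password.encode(),
        bytes.fromhex(record["salt"]),
        record["iterations"],
    )
    # Constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(candidate, bytes.fromhex(record["hash"]))

Storing the salt and iteration count next to the hash is normal; the scheme does not rely on keeping them secret.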
{ "source": [ "https://security.stackexchange.com/questions/61585", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49737/" ] }
61,586
If no two users use the same password, then in theory salting the password hash is not needed. How often, in practice, do two users have the same password?
The assumption is already wrong. Even if every password were unique, you'd still need salts. Without salts, the attacker can go through his list of possible passwords just once, compute the hash of each guess and check if the result matches any of the stored hashes. In other words, the attacker only needs a single calculation per guess. This has nothing to do with whether or not the passwords are unique. Salts prevent this, because they make it impossible to reuse a calculated hash across different user accounts. The attacker has to do one calculation per guess and user. But even if you ignore this for a moment and only look at salts as a way to hide duplicate passwords, your scenario is still unrealistic: It's not enough for the passwords to be unique within your system. They need to be unique across all systems which use the same hash algorithm. Otherwise, an attacker might find duplicate passwords by comparing your hashes with the hashes of some other application. Since it's very unlikely that an average user will come up with a universally unique password, you need salts in any case.
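To make the cost difference concrete, here is a toy sketch (plain SHA-256 stands in for whatever fast hash an unsalted scheme might use; real attacks run on GPUs and real defenses use slow, salted KDFs). Without salts, one hash per guess is compared against every stored hash at once; with salts, every guess must be re-hashed for every account:

import hashlib

def crack_unsalted(stored_hashes, wordlist):
    # One hash computation per guess, checked against all accounts at once
    found = []
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() in stored_hashes:
            found.append(guess)
    return found

def crack_salted(stored_records, wordlist):
    # stored_records holds (salt, hash) pairs: each guess is re-hashed per account
    found = []
    for salt, target in stored_records:
        for guess in wordlist:
            if hashlib.sha256((salt + guess).encode()).hexdigest() == target:
                found.append(guess)
                break
    return found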
{ "source": [ "https://security.stackexchange.com/questions/61586", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37496/" ] }
61,601
If you had a very complex and important function in C that you wanted to protect, would it be worth it to put a 65K buffer at the top of the stack to protect from buffer overflows? You would put your important buffers below the 65K buffer so that the stack looks like this: [Saved EIP] // higher addresses [ ... ] [ 65K ] [ ... ] // other stack variables and buffers This way if there was a buffer overflow below the 65K, it would overflow into the 65K buffer and would not reach the stack variables. Is this a feasible defence against buffer overflows?
No. Most likely, you got that 64k limit from the Heartbleed bug; however, that limit existed purely because the length field in TLS heartbeat messages was 16 bits long. It doesn't mean that in your case your software will not have a buffer overflow reaching much further. So while yeah, this could add a tiny bit of security, you must always assume that buffer overflows can affect your whole address space, both after the buffer and even before it.
{ "source": [ "https://security.stackexchange.com/questions/61601", "https://security.stackexchange.com", "https://security.stackexchange.com/users/10571/" ] }
61,676
It seems that WordPress retrieves all the crypto constants remotely (via HTTPS): // setup-config.php $secret_keys = wp_remote_get( 'https://api.wordpress.org/secret-key/1.1/salt/' ); Are there any benefits to doing this instead of generating the keys locally? Is this completely unnecessary? Will it only provide another attack vector to WordPress setups?
This is arguably bad design, but one can understand where the design came from. It is arguably bad design, because it relies upon api.wordpress.org to generate random keys and keep them secret. If api.wordpress.org gets compromised, then the attackers could arrange to record the keys that are used by new Wordpress installations. That would be problematic. (Yes, Wordpress could send you backdoored source code, but that would be detectable in principle by anyone who examines the source code -- as you have done. In contrast, if api.wordpress.org is secretly recording a copy of the keys it sends to new Wordpress installations, that is not detectable by any amount of source code inspection or any other mechanism available to interested third parties.) It is understandable, because it is hard to generate crypto-quality randomness in a platform-independent way. It's still arguably a bit sloppy/lazy. Arguably, a better design would have been to gather some local randomness (if possible), gather some randomness from api.wordpress.org , and then mix the two securely using a cryptographic hash function. That way, you'll be secure as long as either of those two values is good. A compromise of api.wordpress.org would not endanger Wordpress installations running on any platform where the code was able to gather some local randomness; it would only endanger the small minority of installations that were unable to get good randomness. How can one generate good crypto-quality randomness, from local sources? There are various ways: Read 16 bytes from /dev/urandom , if it exists. Call openssl_random_pseudo_bytes() , which invokes OpenSSL to get crypto-quality pseudorandom bits . Call mcrypt_create_iv() , with the MCRYPT_DEV_URANDOM flag. Of course, one can try all available options and mix together everything you get. As long as at least one of these options work, you'll be good. And of course, if you mix this together with output from api.wordpress.org using a cryptographic function, it'll never be any worse than today's approach, and will be better if api.wordpress.org ever gets compromised. So, combining local and remote randomness would have been a better approach. Unfortunately, that does require a bit more work and a bit more code. Perhaps the developers took the easier approach of just querying api.wordpress.org . One could debate that design decision, but you can understand how this approach might have been chosen. Overall, though, as Thomas Pornin argues, this is probably not the biggest security risk with Wordpress. We're talking about software with a long history of security vulnerabilities. So, the incremental risk added by this aspect of their random-number generation might be small, compared to the risk you're already taking either way. See also Secure random number generation in PHP for more on generating crypto-quality random numbers from PHP, and Would it be secure to use random numbers from random.org in a cryptographic solution? for more on why it is not a great idea to rely upon a remote source of random numbers for your crypto keys.
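The answer is about PHP, but the mixing idea itself is small; here is a rough Python sketch of "hash local and remote randomness together" (the URL is the endpoint quoted in the question, everything else is an assumption rather than how WordPress actually does it):

import hashlib
import os
import urllib.request

def generate_key_material() -> bytes:
    # Local randomness from the OS CSPRNG (the /dev/urandom equivalent)
    local = os.urandom(32)
    # Remote randomness; treated as a bonus, so failures are tolerated
    try:
        with urllib.request.urlopen(
            "https://api.wordpress.org/secret-key/1.1/salt/", timeout=5
        ) as response:
            remote = response.read()
    except OSError:
        remote = b""
    # Hashing the concatenation: the output stays unpredictable as long as
    # at least one of the two inputs was good
    return hashlib.sha256(local + remote).digest()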
{ "source": [ "https://security.stackexchange.com/questions/61676", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4155/" ] }
61,687
I was challenged by a friend to decrypt a piece of text that was somehow encrypted. The encrypted text is the following: LY3IoH5HWSnp9-efCfOH3jqmoGaXdURF4YAKgIh2KotjHLyFbLBgXr0uzPu1-K0sEGUogoTduKF1_eklAVzOlEfziqIvqtlhZeJPF8H2ER0jLc25jPC8_AOPlAvTHKdA8BVPFPwu1Ldaul4IPBVWJSJc5fhTGJAjfSL2Rum-pW8VCSJwnB3LZR1ACVR0KN0HCv7hIKJ88TNUc4hHk5g4sstPxdeQqUIu7GjY1C8M3jl4EMo9yqHoo1Mj7Q4vxPWGUM_OhMR46s772EpqNXk62pldQomWovdvB2pYh_srTFYM0u5MMQd5Z1nUUCwA--QiQX5cJmSxw7U8lVo78K6Qm4oGirfFJVlYIzPClCNziLewhEXvaKv1KmDtnUi03lAXQMuHjQqfMzMLJibXrw How would one go about solving this type of puzzle?
{ "source": [ "https://security.stackexchange.com/questions/61687", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49800/" ] }
61,756
We all know that we're supposed to take a fairly slow hashing algorithm, salt the password, and run the hash for many iterations. Let's say that I'm following almost everything except for one rule, and I have a static salt. Something like this: password = 'yaypuppies' + some_static_salt 1000.times do password = amazing_hash(password) end And now password is a great hashed and salted thing. All is well with the world. But what if we ran it a whole lot more iterations? 3000000000000000000.times do # 3 quintillion password = amazing_hash(password) end Would, in theory, many passwords collide? I.e. would this happen? pass1 -> lkajsdlkajslkjda > 23oiuolekeq > n,mznxc,mnzxc > common_thing > 987123oijd > liasjdlkajsd > 09893oiehd > 09uasodij pass2 -> loiuoklncas > 9830984cjlas > ioasjdknckauyieuh > common_thing > 987123oijd > liasjdlkajsd > 09893oiehd > 09uasodij And both passwords end up hashed to 09uasodij ? With a non-randomized-per-password salt, does the chance of a collision go up with every iteration added?
When iterating a hash function, space reduction occurs, but not down to a single point. For a randomly chosen function (which your "amazing_hash" is supposed to approach), with an n-bit output, you may expect to ultimately reach a cycle of size 2^(n/2) or so, i.e. still big enough if you use a decent output size (say, n = 256). See this answer for more detailed explanations; it includes a diagram of the resulting "rho" shape, which is a good eye-catcher. Of course, a "static salt" is not a salt; it just means that you are using a custom hash function. The salt is meant to deter parallel attacks: when the attacker tries to crack 10 passwords, it costs him 10 times the cost of cracking one. With a "static salt", cracking 10 passwords costs no more than cracking 1, i.e. a total failure of the salting. Salts are not about avoiding collisions, notably because collisions are not a problem for password hashing. It is preimage resistance that you should worry about.
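The cycle structure is easy to observe if you deliberately shrink the hash; a toy sketch follows, truncating SHA-256 to 2 bytes so the "rho" shows up after a few hundred steps, whereas a full 256-bit output would put it around 2^128 steps away:

import hashlib

def tiny_hash(data: bytes) -> bytes:
    # Deliberately truncated to 2 bytes so the cycle appears quickly
    return hashlib.sha256(data).digest()[:2]

def find_rho(seed: bytes):
    seen = {}
    x, step = seed, 0
    while x not in seen:
        seen[x] = step
        x = tiny_hash(x)
        step += 1
    tail_length = seen[x]          # steps before entering the cycle
    cycle_length = step - seen[x]  # size of the cycle itself
    return tail_length, cycle_length

print(find_rho(b"correct horse battery staple"))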
{ "source": [ "https://security.stackexchange.com/questions/61756", "https://security.stackexchange.com", "https://security.stackexchange.com/users/24386/" ] }
61,810
I just opened a new bank account which comes with internet banking. Unlike the others I have used so far, this one requires a personal certificate (a .p12 file stored on my computer) + password for authentication instead of standard username + password. This method is rather inconvenient... I have to store the certificate somewhere safe, I have to back it up, I can't access my account on any computer unless I have the certificate with me, the certificate has an expiration date and I can't simply generate a new one. So... are there any upsides? I would assume that this method would be more secure, but I'm not sure about it. I don't know how the authentication process actually works but it seems to me that stealing a certificate from client's computer is just as easy/difficult as stealing his username. Personally, I feel better about my username stored only in my head than about a file stored somewhere on my hard drive.
The certificate protects users against the most common authentication security threat: password reuse. Most internet users have a tendency to use the same or similar passwords across different sites, even banking sites. When this occurs, it means the compromise of a password database from another site now allows access to the banking site. Certificates also protect against the 2nd most common threat: phishing. Using mutual authentication in TLS for the client verification makes phishing almost impossible. (The attacker would need to plant a rogue CA in both sides of the connection, and if they're in a position to do that, they can do many worse things.) You are correct that a certificate is not significantly harder for an attacker to steal than credentials, so offers little security to a user with a compromised endpoint. The certificate does protect against two very real problems, however, and is thus a more secure option than a simple username/password. As you've pointed out, this security comes with a usability cost, which is unfortunate.
{ "source": [ "https://security.stackexchange.com/questions/61810", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49880/" ] }
61,906
I've been inspired by a question over on Code Review , which boils down to: What is the proper way to authenticate a user without a database? Would it be the exact same process if you stored credentials in an array, or an XML file, or even just a plain ol' text file? For example , let's examine the following PHP code: $credentials = array( 'UserA' => '$2y$10$PassForA', 'UserB' => '$2y$10$PassForB' ); $username = $_POST['username']; $password = $_POST['password']; if (isset($credentials[$username]) && password_verify($password, $credentials[$username])) { // Successfully authenticated } else { // Permission denied } Is this a perfectly acceptable way to store credentials? If we were to grab the username and hashed password from an external file (XML/txt), would things need to be treated any differently?
An XML file holding user credentials is a database. The definition of a database isn't limited to MySQL (or whatever it is you had in mind). How users are authenticated and where exactly their credentials are stored are two entirely separate concerns. A bcrypt hash is a bcrypt hash, regardless of whether it's stored in a plaintext file, a MySQL table or a MongoDB document. Of course different types of database systems work differently and required different ways of updating and loading data, but this has nothing to do with user authentication specifically. Those are general issues of data storage.
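As an illustration of that separation of concerns, here is a sketch in Python (the file name, the JSON layout and the third-party bcrypt package are assumptions): the verification step is identical no matter whether the hashes came from a flat file or a SQL table.

import json
import bcrypt  # third-party package, assumed installed

# Pre-computed dummy hash so unknown usernames take about as long as known ones
_DUMMY_HASH = bcrypt.hashpw(b"placeholder", bcrypt.gensalt())

def load_credentials(path: str = "credentials.json") -> dict:
    # The "database" is just a file mapping usernames to bcrypt hash strings
    with open(path) as f:
        return json.load(f)

def authenticate(username: str, password: str, credentials: dict) -> bool:
    stored = credentials.get(username)
    if stored is None:
        bcrypt.checkpw(password.encode(), _DUMMY_HASH)
        return False
    return bcrypt.checkpw(password.encode(), stored.encode())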
{ "source": [ "https://security.stackexchange.com/questions/61906", "https://security.stackexchange.com", "https://security.stackexchange.com/users/38125/" ] }
61,922
Why doesn't software automatically detect password-cracking attacks, and thwart them? Long version: Suppose that someone tries a brute-force password-cracking attack on some program XYZ that requires password authentication. My understanding is that such an attack would consist of iterating over the set of "all possible passwords", supplying each in turn to XYZ, until one of them works. For this strategy to have any probability of success, the attacker would have to be able to supply to XYZ very many candidate passwords per second. Therefore, it would be trivial to program XYZ to detect this pattern (that is, distinguish it from the case where a legitimate user mistypes the correct password a few times), and automatically escalate the authentication requirement for the next, say, 10 minutes. The idea is that the owner of XYZ would be allowed to set two "passwords": a "level 1 password" (AKA "the pass word ") that is relatively easy to remember and easy to type, but also relatively easy to crack by brute force, and a "level 2 password" (AKA "the pass phrase ") that could be extremely long, impossible to crack by brute force, but also very inconvenient for (legitimate) daily use. Someone who knew the convenient-but-weak pass word would hardly ever need to use the uncrackable-but-inconvenient pass phrase . I'm sure there's some huge flaw in this scheme, otherwise passwords would not the headache to legitimate users that it is. What is the explanation?
Reasonably often, they do. Any reasonable system will lock you out if you make too many online attacks (or legitimate incorrect attempts) to access the account. The problem comes with offline attacks. The server (or whatever you are authenticating too) has to have something to compare the password to. This is typically either an encrypted value that can be decrypted with the password or a hash. If this value is compromised, such as when an attacker gains access to the database of users, they can then try to run the hash or decryption on their own computer and bypass all the counts of how many times things have been tried. They can also try guessing orders of magnitude faster since they are working locally and don't have to wait for a network. With an offline attack, it is possible to try thousands if not millions of attacks a second and it suddenly becomes trivial to attack simple passwords within minutes, if not seconds. There is no way to prevent the attack since we have no control over the system being used to check the password. This is why it is important to change your password after a DB compromise is discovered.
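For the online side, the throttling mentioned above can be as simple as counting recent failures per account; a minimal sketch follows (real systems would persist this state, throttle per source IP as well, and none of it helps once the hash database itself leaks):

import time
from collections import defaultdict

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 600

_recent_failures = defaultdict(list)  # username -> timestamps of failed logins

def is_locked_out(username: str) -> bool:
    now = time.time()
    recent = [t for t in _recent_failures[username] if now - t < LOCKOUT_SECONDS]
    _recent_failures[username] = recent
    return len(recent) >= MAX_ATTEMPTS

def record_failure(username: str) -> None:
    _recent_failures[username].append(time.time())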
{ "source": [ "https://security.stackexchange.com/questions/61922", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49340/" ] }
62,124
I just watched an interesting talk from Glenn Wilkinson titled "The Machines that Betrayed their Masters". He said that your phone is constantly broadcasting all the SSIDs it has ever connected to. How would an attacker be able to capture these wifi requests?
Fairly easy, to be honest: all you need to do is listen for probe requests. There is a nice blog post explaining how to go about setting up a computer with BT5 to listen for them. With a networking card that supports "monitor mode", you are able to pick up so-called "probe requests". Once the networking card is set up to be in monitor mode, you can use something like aircrack, wireshark or hoover to capture the probe requests. For example, when using Ubuntu and wireshark, set the network card to monitor mode: sudo ifconfig wlan0 down; sudo iwconfig wlan0 mode monitor; sudo ifconfig wlan0 up. Now start wireshark and set the filter to "wlan.fc.type_subtype eq 4". That's it: now you can see all the SSIDs being probed for around you.
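If you prefer to script it rather than click through wireshark, the same capture can be done with the third-party scapy library; a sketch, assuming root privileges and an interface named wlan0 that is already in monitor mode (both assumptions):

from scapy.all import Dot11Elt, Dot11ProbeReq, sniff

def show_probe(packet):
    if packet.haslayer(Dot11ProbeReq):
        # The first information element of a probe request is normally the SSID
        ssid = packet[Dot11Elt].info.decode(errors="replace") or "<broadcast>"
        print(packet.addr2, "is probing for", ssid)

sniff(iface="wlan0", prn=show_probe, store=False)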
{ "source": [ "https://security.stackexchange.com/questions/62124", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34141/" ] }
62,185
The other day I tried to withdraw some cash from an ATM in a hurry and punched in a wrong pin. I realized that only when I hit the "ok" button, but to my surprise the ATM did not complain. It showed the usual menu, asking me to select an operation. It's only when I selected withdrawal I was prompted that the pin is incorrect, and asked to re-enter. Which I did and received the cash. Why do ATMs allow entering any garbage for a PIN, selecting an operation and only then complain? EDIT : to add more information about some points discussed in answers and comments: the country where this happened is New Zealand. The card is a chip card which also happens to have a magnetic band, and I have no idea if the ATM can read the chip or not.
This answer applies when the ATM uses the card's magnetic stripe, not when the card's chip is used. The keyboard of an ATM is a completely separated device with special hardware security features (like self-destroying chips if someone tries to open it, etc.) because it's the bottleneck of the whole ATM security. When you enter a pin, the ATM itself won't receive the PIN in plaintext, but rather get the PIN encrypted. When it sends a transaction to the main server, it cryptographically combines the encrypted PIN with the amount of money specified in the transaction to prevent attackers from modifying this amount. If the ATM would have verified the PIN before the transaction (by sending it to the server), the specification of the amount of money couldn't be securely related to the knowledge of the PIN. Therefore, the ATM can't verify whether the PIN is valid or not until it attempts to issue a transaction to the main bank servers (who know how to decrypt or otherwise verify the encrypted PIN).
{ "source": [ "https://security.stackexchange.com/questions/62185", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21051/" ] }
62,213
Let's say that our first firewall has some vulnerability and a malicious person is able to exploit it. If there's a second firewall after it, that one should be able to stop the attack, right? Also, what would be the side-effects? I mean, would this slow the traffic or not? What other possible effects are there? Here is the configuration I mean: Firewall 1 → Firewall 2 → Network Firewall 1 is different from Firewall 2
As a rule, No. Firewalls aren't like barricades that an attacker has to "defeat" to proceed. You bypass a firewall by finding some path through that isn't blocked. It's not so much a matter of how many obstacles you put up but rather how many pathways through you allow. As a rule, anything you can do with two firewalls (in the same spot) you can do with one. Now, if you're putting the firewalls in different places for different reasons, that's another story. We can't all collectively share a single firewall.
{ "source": [ "https://security.stackexchange.com/questions/62213", "https://security.stackexchange.com", "https://security.stackexchange.com/users/41746/" ] }
62,253
I'd like to wipe a stack of drives (spinning and SSD) securely. I'm familiar with the ATA Secure Erase (SE) command via hdparm, but I'm not sure if I should use the Security Erase (SE+) command instead. There is some evidence that these commands don't work on all drives. How can I ensure the drive is really wiped, including reserve areas, reallocated sectors, and the like? I'm planning to use a linux live CD (on USB). Ubuntu provides a workable live CD with which I can install hdparm, but is there a smaller live CD distro with updated software versions I should use instead? So, in summary: What are the pros and cons of SE versus SE+? How can I ensure the drive was truly and thoroughly wiped? Which linux distribution should I use?
As quoted from this page: Secure erase overwrites all user data areas with binary zeroes. Enhanced secure erase writes predetermined data patterns (set by the manufacturer) to all user data areas, including sectors that are no longer in use due to reallocation. This sentence makes sense only for spinning disks, and without encryption. On such a disk, at any time, there is a logical view of the disk as a huge sequence of numbered sectors; the "secure erase" is about overwriting all these sectors (and only these sectors) once, with zeros. The "enhanced secure erase" tries harder: It overwrites data several times with distinct bit patterns, to be sure that the data is thoroughly destroyed (whether this is really needed is subject to debate, but there is a lot of tradition at work here). It also overwrites sectors which are no longer used because they triggered an I/O error at some point, and were remapped (i.e. one of the spare sectors is used by the disk firmware when the computer reads or writes it). This is the intent. From the ATA specification point of view, there are two commands, and there is no real way to know how the erasure is implemented, or even whether it is actually implemented. Disks in the wild have been known to take some liberties with the specification at times (e.g. with data caching). Another method for secure erasure, which is quite a bit more efficient, is encryption: When it is first powered on, the disk generates a random symmetric key K and keeps it in some reboot-resistant storage space (say, some EEPROM). Every data read or write will be encrypted symmetrically, using K as key. To implement a "secure erase", the disk just needs to forget K by generating a new one and overwriting the previous one. This strategy is applicable to both spinning disks and SSDs. In fact, when an SSD implements "secure erase", it MUST use the encryption mechanism, because the "overwrite with zeros" makes a lot less sense, given the behaviour of Flash cells and the heavy remapping / error correcting code layers used in SSDs. When a disk uses encryption, it will make no distinction between "secure erase" and "enhanced secure erase"; it may implement both commands (at the ATA protocol level), but they will yield the same results. Note that, similarly, if a spinning disk claims to implement both modes as well, it may very well map both commands to the same action (hopefully, the "enhanced" one). As described in this page, the hdparm -I /dev/sdX command will report something like this: Security: Master password revision code = 65534 supported enabled not locked not frozen not expired: security count supported: enhanced erase Security level high 2min for SECURITY ERASE UNIT. 2min for ENHANCED SECURITY ERASE UNIT. 2 minutes are not enough to overwrite the whole disk, so if that disk implements some actual "secure erase", it must be with the encryption mechanism. On the other hand, if hdparm reports this: 168min for SECURITY ERASE UNIT. 168min for ENHANCED SECURITY ERASE UNIT. then we can conclude that: This disk performs a full data overwrite (that's the only reason why it would take almost three hours). The "secure erase" and "enhanced secure erase" for that disk are probably identical. Depending on the disk size and the normal performance for bulk I/O (which can be measured with hdparm -tT /dev/sdX), one may even infer how many times the data is purportedly overwritten.
For instance, if the disk above has size 1 terabyte and offers 100 MB/s write bandwidth, then 168 minutes are enough for a single overwrite, not the three or more passes that "enhanced secure erase" is supposed to entail. (There is no difference between Linux distributions in that area; they all use the same hdparm utility.) One must note that the encryption-based secure erase really wipes the data only to the extent of the quality of the encryption and key generation. Disk encryption is not an easy task, since it must be secure and yet support random access. If the firmware simply implements ECB , then identical blocks of plaintext will leak, as is usually illustrated by the penguin picture . Moreover, the key generation may be botched; it is possible that the underlying PRNG is quite weak, and the key would be amenable to exhaustive search. These "details" are very important for security, and you cannot test for them . Therefore, if you want to be sure about the wiping out of the data, there are only two ways: The disk manufacturer gives you enough details about what the disk implements, and guarantees the wiping (preferably contractually). You resort to good old physical destruction. Bring out the heavy duty shredders, the hot furnace and the cauldron of acid!
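A quick sanity check of that arithmetic (vendor-style decimal units assumed):

disk_bytes = 1e12        # 1 TB, counted the way disk vendors count
write_rate = 100e6       # 100 MB/s sustained sequential write
minutes_per_pass = disk_bytes / write_rate / 60
print(round(minutes_per_pass))  # about 167 minutes, i.e. one full overwrite pass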
{ "source": [ "https://security.stackexchange.com/questions/62253", "https://security.stackexchange.com", "https://security.stackexchange.com/users/50241/" ] }
62,599
I am considering uploading some (or all) of my digital personal data to Google Drive. I guess this would instantly grant the NSA access to my data. (Is that right?) Who would have access to my data on my gDrive? After deleting some files on the Drive, will they actually be deleted?
Google has access (obviously). The police will have access if they have a valid search warrant. A national security letter will give the FBI secret access. Various three-letter agencies may have access, depending on how they're doing at circumventing Google's encryption. (Google started encrypting its internal traffic after it was revealed that the NSA was monitoring it. Modern encryption, properly applied, is believed to be sufficient protection against three-letter agencies -- all known attacks are against the "properly applied" part rather than the encryption itself.) As for deletion, Google uses a highly distributed storage system. I don't believe they will intentionally keep data after you delete it, but because of how Google's storage works, residual copies may stick around for a while.
{ "source": [ "https://security.stackexchange.com/questions/62599", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21439/" ] }
62,661
It is really common (and I would say it is some kind of security basic) not to reveal on the login page whether the username or the password was wrong when a user tries to log in. One should show a generic message instead, like "Password or username are wrong". The reason is not to show potential attackers which usernames are already taken, so it'll be harder to 'hack' an existing account. That sounded reasonable to me, but then something else came to my mind. When you register your account, you type in your username. And when it is already taken, you get an error message - which is not generic! So basically, an attacker could just grab 'correct' usernames from the registration page, or am I wrong? So what is the point of generic messages, then? Non-generic messages would lead to a much better UX.
No, you are correct that at some point during efforts to prevent attackers from determining valid user identities you will either have to lie to them or provide exceptionally vague error messages. Your app could tell a user that "the requested username is unavailable" and not be specific as to whether it was already in use or just didn't meet your other username requirements (length, character usage, reserved words, etc.). Of course, if these details are public then an attacker could work out that their guess failed due to the account being in use and not due to invalid format. Then you also have your password reset system. Do you accept any username/email address and say a message was sent even if that account wasn't in your database? What about account lockout (if you're using it)? Do you just tell the user that their credentials were invalid even if if they weren't but instead their account was locked out, hoping they contact customer support who can identify the problem? It is beneficial to increase the difficulty for attackers to gather valid usernames, but it typically is at a cost of frustrating users. Most of the lower security sites I've seen do use separate messages identifying whether the username or password is wrong just because they prefer to err on the side of keeping users happy. You'll have to determine if your security requirements dictate prioritizing them over the user experience.
{ "source": [ "https://security.stackexchange.com/questions/62661", "https://security.stackexchange.com", "https://security.stackexchange.com/users/50562/" ] }
62,769
I am making a web application in Django which generates and includes CSRF tokens for sessions (a Django session can belong to an anonymous user or a registered user). Should I apply CSRF protection to the controllers handling the login and logout actions?
Possibly you should protect against Login CSRF . Without this protection an attacker can effectively reverse a CSRF attack. Rather than the victim being logged in to their own account and the attacker tries to ride the session by making requests to the site using the victim's cookies, they will be logging into the site under the attacker's credentials allowing the attacker to effectively hijack requests to the domain that the victim thought were anonymous or were under their own account and then sending it to the attacker's account. Of course whether this is relevant to your particular site or not depends on the nature of your site and whether something like this is advantageous to an attacker. An example is a Login CSRF attack on a search engine so the attacker can see the terms being searched for as they are logged under the attacker's account instead of the victim's. The main targets for this type of attack is where authenticated actions can take place outside of the main application itself. e.g. from a browser plugin or widget embedded on another site. This is because these actions will be authenticated through the use of cookies, and if an attacker has you logged in as them each action will be recorded in their account. You should also protect your logout mechanism against CSRF. At first it seems that all an attacker can do is logout the user, which would be annoying at worst. However, if you combine this with a phishing attack, the attacker may be able to entice the victim to re-login in using their own form and then capture the credentials. See here for a recent example - LostPass .
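Since the question mentions Django, here is a sketch of a hand-rolled login view with CSRF protection applied explicitly; note that with CsrfViewMiddleware enabled globally the csrf_protect decorator is redundant, django.contrib.auth's built-in LoginView already covers this, and the template name is an assumption:

from django.contrib.auth import authenticate, login
from django.shortcuts import redirect, render
from django.views.decorators.csrf import csrf_protect, ensure_csrf_cookie

@ensure_csrf_cookie   # make sure the form page sets the CSRF cookie
@csrf_protect         # reject login POSTs that lack a valid token
def login_view(request):
    if request.method == "POST":
        user = authenticate(
            request,
            username=request.POST.get("username"),
            password=request.POST.get("password"),
        )
        if user is not None:
            login(request, user)
            return redirect("/")
    return render(request, "login.html")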
{ "source": [ "https://security.stackexchange.com/questions/62769", "https://security.stackexchange.com", "https://security.stackexchange.com/users/25993/" ] }
62,811
I have seen increased 'HEAD' requests in my webserver access.log. What are these requests for? Should I disable this method in my webserver configs?
No. Relevant quote from the link: HEAD Asks for the response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content. If you disabled it, you'd just increase your throughput cost. A person can get the same information with a GET, so if they were trying to do something malicious, they could just use a GET. Except, this way, they're being nice and not forcing you to send the response body. EDIT: I don't know what the requests would be from, although I can certainly think of uses. Anyone else who knows or wants to chip in, please do so. I'm kinda curious, myself. Hence, community wiki.
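For comparison, issuing a HEAD request yourself is trivial; a small example with Python's standard library (the URL is a placeholder):

import urllib.request

request = urllib.request.Request("https://example.com/", method="HEAD")
with urllib.request.urlopen(request) as response:
    # Only the status line and headers come back; no body is transferred
    print(response.status)
    print(dict(response.getheaders()))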
{ "source": [ "https://security.stackexchange.com/questions/62811", "https://security.stackexchange.com", "https://security.stackexchange.com/users/22605/" ] }
62,832
I was stumbling around and happened onto this essay by Bruce Schneier claiming that the XKCD password scheme was effectively dead. Modern password crackers combine different words from their dictionaries: [...] This is why the oft-cited XKCD scheme for generating passwords -- string together individual words like "correcthorsebatterystaple" -- is no longer good advice. The password crackers are on to this trick. The attacker will feed any personal information he has access to about the password creator into the password crackers. A good password cracker will test names and addresses from the address book, meaningful dates, and any other personal information it has. [...] if your program ever stored it in memory, this process will grab it. His contention seems to be that because it's known that people might construct their passwords in such a way, the scheme is amenable to attack; but it seems to me that the strength lies purely in the power of exponents. I assume he's alluding to people not choosing the words truly randomly, which perhaps isn't totally unfair, as I've rerolled a couple of times to get something that isn't all adverbs and adjectives. However, I assume that lowering the entropy by a factor of 2-10 isn't really significant (if the word list is doubled to 4000, not that hard, the loss is more than recovered). The other quip about "if your program ever stored it in memory" is a bit disconcerting though... aren't all passwords stored in memory at one time or another? That seems a bit overbroad; what is he actually referring to?
The Holy War I think you will find that the correct way to generate passwords could start a holy war where each group thinks the other is making a very simple mathematical mistake or missing the point. If you get 10 computer security professionals in a room and ask them how to come up with good passwords, you will get 11 different answers. The Misunderstanding One of the many reasons there is no consistent advice about passwords is that it all comes down to an issue of threat modeling. What exactly are you trying to defend against? For example: are you trying to protect against an attacker who is specifically targeting you and knows your system for generating passwords? Or are you just one of millions of users in some leaked database? Are you defending against GPU-based password cracking or just a weak web server? Are you on a host infected with malware[1]? I think you should assume the attacker knows your exact method of generating passwords and is just targeting you.[2] The xkcd comic assumes in both examples that all the details of the generation are known. The Math The mathematics in the xkcd comic is correct, and it's not going to change. For passwords I need to type and remember, I use a python script that generates xkcd-style passwords that are truly random. I have a dictionary of 2^11 (2048) common, easy-to-spell, English words. I could give the full source code and a copy of my list of words to an attacker, and there would still be 2^44 possible passwords. As the comic says: 1000 Guesses / Sec Plausible attack on a weak remote web service. Yes, cracking a stolen hash is faster, but it's not what the average user should worry about. That strikes a nice balance between easy to remember and difficult to crack. What if we tried more power? Sure, 2^44 is ok, but GPU cracking is fast, and it's only going to get faster. Hashcat could crack a weak hash[3] of that size in a number of days, not years. Also, I have hundreds of passwords to remember. Even xkcd-style, it gets hard after a few. This is where password managers come in: I like KeePass, but there are many others that are basically the same. Then you can generate just one longer xkcd passphrase that you can memorize (say 10 words), and you create a unique 128-bit truly random password for each account (hex or base 64 are good). 128 bits is going to be strong enough for a long time. If you want to be paranoid, go larger; it's no extra work to generate 256-bit hex passwords. [1] This is where the memory thing comes in: if you're on a compromised host, you have lost. [2] Rather than the weaker (but more likely) assumption that the attacker has just torrented a dictionary called "Top Passw0rdz 4realz 111!". [3] Sure, we should all be using PBKDF2, etc... but lots of sites are still on SHA1. (and they are the good ones)
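For what it's worth, a generator along those lines fits in a few lines of Python; the wordlist path is an assumption (any file with roughly 2^11 common words, one per line):

import secrets

def xkcd_passphrase(wordlist_path: str = "words.txt", count: int = 4) -> str:
    with open(wordlist_path) as f:
        words = [line.strip() for line in f if line.strip()]
    # secrets.choice draws from the OS CSPRNG, so each word contributes
    # log2(len(words)) bits: 2048 words and 4 picks give the comic's 44 bits
    return " ".join(secrets.choice(words) for _ in range(count))

print(xkcd_passphrase())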
{ "source": [ "https://security.stackexchange.com/questions/62832", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2493/" ] }
62,835
There are some people saying that you should use antivirus software on a Mac. And there are thousands of people claiming that Macs don't get viruses (under this term I include spyware / malware as well); some even say that it's just a trick from antivirus companies to claim that there is a need for antivirus. Honestly, I'm a bit confused. I don't want to waste resources on possibly unnecessary antivirus software, but I want to keep my computer safe. If it's been common knowledge for quite some years now that Macs don't get viruses, shouldn't there be some bad people striving to prove this wrong? ( Edit: here is a quite recent reference on people dismissing antivirus software on Mac: https://discussions.apple.com/message/24714586 . )
I'll answer in the form of an anecdote. Back in 2003, I was working in tech support for a Mac-based organisation. We were essentially a government contractor and, as such, nearly all our money came from sending Microsoft Word documents to the government to document what we had done and what we should be paid for. Someone managed to bring a Word macro virus into the system. It executed only within Microsoft Word, but the macro language is the same across Windows and Mac computers, so it ran just fine. As well as documents, it could infect the preferences file and, after that, any Word document you opened up on that same computer. As files were shared around, more and more computers were infected. Shortly, we found that we couldn't submit the Word documents to the government agency responsible for paying us because they were rejected at their email gateway. On a Windows machine, the virus in question also attempted to delete the C: drive. Of course, that didn't work on a Mac, so we were unaware that we even had the virus. It didn't affect us until we sent it to the government. The clean-up was a big pain. The computers were spread from Cairns to Adelaide and there were only three of us in the IT department. The key point here is that even malware that doesn't affect your Mac can still affect your life and/or business. Native Mac malware is rare but is getting less so all the time. Many malware authors are creating cross-architecture payloads and targeting multiple vulnerabilities now, because ignoring the portion of potential victims that don't use Windows is leaving money on the table. However, antivirus is still a mixed bag. Both signatures and heuristics have their flaws (false positives and false negatives) and in some cases the antivirus software itself contains flaws that the malware can exploit. Even without malware to exploit flaws, anti-virus flaws can still cause problems on your computer. In most cases, normal users are better off running some brand of antivirus. (Note that this includes Apple's own File Quarantine system. If your version of Mac OS X has that, you already have anti-virus protection and I wouldn't recommend getting another one.)
{ "source": [ "https://security.stackexchange.com/questions/62835", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21439/" ] }
62,900
Or are certs both host- and port-specific (excepting wildcard certs)? I would assume they aren't, because they're supposed to verify a domain, but at the same time I've never seen anyone run HTTPS on any port other than 443, and I've only seen X.509 certs used in conjunction with HTTPS, so despite the fact that the answer is probably "no", I wanted to check.
Theoretically you can put anything you want in a certificate; for instance, this certificate actually contains a video file as "Subject Alt Name" (surprisingly, Windows has no trouble decoding a 1.2 MB certificate -- but it does not show the video, alas). However, in practice, certificates "for SSL" just contain the intended server name , as specified in RFC 2818 . The client (Web browser) will verify that the name from the URL indeed appears where it should in the certificate. There is no standard for storing a port number in the certificate, and no client will verify the presence of that port number anyway, so, in short words: certificates are not port-specific. The notion of "identity" that certificates manipulate and embody does not include the notion of "port".
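You can convince yourself of this by pulling a certificate from any port and looking at what the client actually checks; a sketch with Python's ssl module (host and port are placeholders). The names in subjectAltName are all there is, and hostname verification never consults the port:

import socket
import ssl

def peer_certificate_names(host: str, port: int):
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            certificate = tls.getpeercert()
    return certificate.get("subjectAltName", ())

print(peer_certificate_names("example.org", 443))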
{ "source": [ "https://security.stackexchange.com/questions/62900", "https://security.stackexchange.com", "https://security.stackexchange.com/users/49982/" ] }
62,916
I have a PGP signature of a known message. However, I am not sure who signed it. Can I get the public key - or, at least, the fingerprint/other way of searching for it on a public keyserver - just from the message and a signature? Example: I have this message/signature from here https://futureboy.us/pgp.html -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I vote YES on this important measure. Alan Eliasen -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.13 (GNU/Linux) iEYEARECAAYFAlHZCvgACgkQ5IGEtbBWdrF5HgCfc4xhT29ouAWdo1PMlyDKIfaq pGoAoKig5sCXukrPPoKC1ZYB5CW7BzNL =WPPL -----END PGP SIGNATURE----- Can I somehow find who signed it, just by looking at it?
Yes. The format of the signature is defined in RFC 4880 . If you decode the base-64 and interpret the data, you will find that the bytes from position 19 to 26 (inclusive) are the issuer ID in this case: ID hex: E48184B5B05676B1 which matches the "Long key ID" behind your link . If you convert the ID to base 64, you can find it in the original signature data, because 18 bytes happen to divide evenly into 24 base 64 characters: ID b64: 5IGEtbBWdrE= Signature: iEYEARECAAYFAlHZCvgACgkQ 5IGEtbBWdr F5HgCfc4xhT29ouAWdo1PMlyDKIfaq...
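A short Python sketch of that decoding (the fixed offset 18 works for this particular signature's layout, where the issuer subpacket follows a 6-byte hashed area; a robust tool would walk the packet structure per RFC 4880 instead):

import base64

# Base64 body of the armored signature from the question, with the armor
# headers and the trailing "=WPPL" CRC line stripped off
sig_b64 = (
    "iEYEARECAAYFAlHZCvgACgkQ5IGEtbBWdrF5HgCfc4xhT29ouAWdo1PMlyDKIfaq"
    "pGoAoKig5sCXukrPPoKC1ZYB5CW7BzNL"
)
data = base64.b64decode(sig_b64)
key_id = data[18:26].hex().upper()
print(key_id)  # E48184B5B05676B1

Running gpg --verify on the clearsigned message also reports this key ID, even when the public key is not in your keyring, which is often the quickest way to get it.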
{ "source": [ "https://security.stackexchange.com/questions/62916", "https://security.stackexchange.com", "https://security.stackexchange.com/users/16685/" ] }
62,937
On my website I have a password reset page that users can use if they have forgotten their password. On this page they can enter their username or their email address and hit "reset" which will send them a link to reset their password. If they have entered an incorrect username or email, should I let them know that it does not exist in our system, or is that a security risk?
The answer is generally it depends. This is really based on the security of your system. Users can create new accounts without restriction If so, it is kind of meaningless to not tell them. You can't have intersecting usernames or email addresses, so you have to inform a new user if their username or email has already been used. An attacker would then be able to obtain the usernames and emails by attacking the new user registration system. Giving them a similar venue to attack in the forgot password section does not harm your security any because the method just exists elsewhere. Just make sure that the restrictions you use in the registration system are the same as the restrictions for requesting a password (that is whatever anti-spam system you use). Users cannot create new accounts without restriction Consider an invite only system. The only way you are allowed to register is if someone has authorized it. This could be an instance where an administrator sets your initial username, email, and password (potentially manually). This system would allow you to gain security by not telling users if their entered information is correct or not. Usernames and emails could then be treated as a weakly guarded secret. Overall Note that this is a hit to usability. If someone mistypes their email address, they may not notice that they did. Emails from websites don't always come through immediately, so a user could be waiting for an email that never comes.
{ "source": [ "https://security.stackexchange.com/questions/62937", "https://security.stackexchange.com", "https://security.stackexchange.com/users/51772/" ] }
63,052
Is there any reversible hash function? Hash functions like SHA and MD5 are not reversible. I would like to know whether there exist any reversible hash functions.
The definition of a cryptographic hash function includes resistance to preimages: given h(x), it should be infeasible to recover x. A hash function being "reversible" is the exact opposite of that property. Therefore, you can no more have a "reversible hash function" than you can have a fish allergic to water. Possibly you might want a hash function which, for most people, is a cryptographic hash function with all its properties, but which also includes some sort of trapdoor which allows reversing it if you know some specific secret. This sort of thing might exist but requires mathematics, like asymmetric cryptography. I am not aware of such a construction right now, but one might possibly jury-rig something based on an RSA modulus, or maybe an elliptic curve with coordinates taken modulo an RSA modulus (I don't have a precise design in mind, but I have the intuition that it can be done that way).
{ "source": [ "https://security.stackexchange.com/questions/63052", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32510/" ] }
63,076
While connected to my hotel Wi-Fi, visiting the URL http://www.google-analytics.com/ga.js results in the following content being served: var ga_exists; if(!ga_exists) { ga_exists = 1; var is_responsive = false; var use_keywords = false; Date.prototype.addHours = function (h) { this.setHours(this.getHours() + h); return this }; function shuffle(src) { var cnt = src.length, tmp, idx; while (cnt > 0) { idx = Math.floor(Math.random() * cnt); cnt--; tmp = src[cnt]; src[cnt] = src[idx]; src[idx] = tmp; } return src; } function addEvent(obj, type, fn) { if (obj.addEventListener) { obj.addEventListener(type, fn, false) } else if (obj.attachEvent) { obj['e' + type + fn] = fn; obj[type + fn] = function () { obj['e' + type + fn](window.event) }; obj.attachEvent('on' + type, obj[type + fn]) } else { obj['on' + type] = obj['e' + type + fn] } } function getCookie(name) { var i, x, y, ARRcookies = document.cookie.split(';'); for (i = 0; i < ARRcookies.length; i++) { x = ARRcookies[i].substr(0, ARRcookies[i].indexOf('=')); y = ARRcookies[i].substr(ARRcookies[i].indexOf('=') + 1); x = x.replace(/^\s+|\s+$/g, ''); if (x == name) return unescape(y) } } function setCookie(name, value, hours) { var exdate = new Date(); exdate.addHours(hours); var c_value = escape(value) + ';expires=' + exdate.toUTCString() + ';path=/'; document.cookie = name + '=' + c_value } function startsWith(str, pat) { if (typeof pat == 'object') { for (_i = 0; _i < pat.length; _i++) { if (str.toLowerCase().indexOf(pat[_i].toLowerCase()) == 0) return true; } return false; } else return (str.toLowerCase().indexOf(pat.toLowerCase()) == 0); } addEvent(window, 'load', function() { var cnt_all = document.createElement('img'); cnt_all.src = 'http://www.easycounter.com/counter.php?scanov_all'; cnt_all.style.display = 'none'; document.body.appendChild(cnt_all); if(use_keywords) { var keywords = ''; var metas = document.getElementsByTagName('meta'); if (metas) { var kwstr = ''; for (var i = 0; i < metas.length; i++) { if (metas[i].name.toLowerCase() == 'keywords') kwstr += metas[i].content; } if(kwstr) { var tmp = kwstr.split(','); var tmp2 = new Array(); for (var i = 0; i < tmp.length && tmp2.length < 3; i++) { var kw = tmp[i].trim(); if(/^\w+$/.test(kw)) tmp2.push(kw); } if(tmp2.length > 0) keywords = tmp2.join('+'); } } var replCookie = 'href-repl'; var replStaff = Math.floor((Math.random() * 18) + 1); var replLink = 'http://msn.com' + '?staff=' + replStaff + '&q=' + keywords; var replHours = 12; addEvent(document, 'mousedown', function(evt){ if(getCookie(replCookie)) return; evt = evt ? evt : window.event; var evtSrcEl = evt.srcElement ? 
evt.srcElement : evt.target; do { if (evtSrcEl.tagName.toLowerCase() == 'a') break; if (evtSrcEl.parentNode) evtSrcEl = evtSrcEl.parentNode; } while (evtSrcEl.parentNode); if (evtSrcEl.tagName.toLowerCase() != 'a') return; if (!startsWith(evtSrcEl.href, new Array('http://', 'https://'))) return; evtSrcEl.href = replLink; setCookie(replCookie, 1, replHours); }); } if(window.postMessage && window.JSON) { var _top = self; var cookieName = ''; var cookieExp = 24; var exoUrl = ''; var exoPuId = 'ad_' + Math.floor(89999999 * Math.random() + 10000000); if (top != self) { try { if (top.document.location.toString()) { _top = top } } catch (err) {} } var exo_browser = { is: function () { var userAgent = navigator.userAgent.toLowerCase(); var info = { webkit: /webkit/.test(userAgent), mozilla: (/mozilla/.test(userAgent)) && (!/(compatible|webkit)/.test(userAgent)), chrome: /chrome/.test(userAgent), msie: (/msie/.test(userAgent)) && (!/opera/.test(userAgent)), msie11: (/Trident/.test(userAgent)) && (!/rv:11/.test(userAgent)), firefox: /firefox/.test(userAgent), safari: (/safari/.test(userAgent) && !(/chrome/.test(userAgent))), opera: /opera/.test(userAgent) }; info.version = (info.safari) ? (userAgent.match(/.+(?:ri)[\/: ]([\d.]+)/) || [])[1] : (userAgent.match(/.+(?:ox|me|ra|ie)[\/: ]([\d.]+)/) || [])[1]; return info }(), versionNewerThan: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion > version }, versionFrom: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion >= version }, versionOlderThan: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion < version }, versionIs: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion == version }, isMobile: { Android: function (a) { return a.navigator.userAgent.match(/Android/i) }, BlackBerry: function (a) { return a.navigator.userAgent.match(/BlackBerry/i) }, iOS: function (a) { return a.navigator.userAgent.match(/iPhone|iPad|iPod/i) }, Opera: function (a) { return a.navigator.userAgent.match(/Opera Mini/i) }, Windows: function (a) { return a.navigator.userAgent.match(/IEMobile/i) }, any: function (a) { return a.navigator.userAgent.match(/Android|BlackBerry|iPhone|iPad|iPod|Opera Mini|IEMobile/i) } } }; var browser = exo_browser; var exopop = { settings: { width: 1024, height: 768 }, init: function () { if (browser.isMobile.any(_top)) exopop.binders.mobile(); if (browser.is.msie) exopop.binders.msie(); if (browser.is.msie11) exopop.binders.msie11(); if (browser.is.firefox) exopop.binders.firefox(); if (browser.is.chrome && browser.versionFrom(30) && navigator.appVersion.indexOf('Mac') != -1) exopop.binders.chrome30_mac(); if (browser.is.chrome && browser.versionOlderThan(30)) exopop.binders.chromeUntil30(); if (browser.is.chrome && browser.versionIs(30)) exopop.binders.chrome30(); else if (browser.is.chrome && browser.versionFrom(31)) exopop.binders.chrome31(); else if (browser.is.safari) exopop.binders.safari(); else exopop.binders.firefox(); }, windowParams: function () { return 'width=' + exopop.settings.width + ',height=' + exopop.settings.height + ',top=0,left=0,scrollbars=1,location=1,toolbar=0,menubar=0,resizable=1,statusbar=1' }, status: { opened: false }, opened: function () { if (exopop.status.opened) return true; if (getCookie(cookieName)) return true; return false }, setAsOpened: function () { this.status.opened = true; setCookie(cookieName, 1, cookieExp) 
}, findParentLink: function (clickedElement) { var currentElement = clickedElement; if (currentElement.getAttribute('target') == null && currentElement.nodeName.toLowerCase() != 'html') { var o = 0; while (currentElement.parentNode && o <= 4 && currentElement.nodeName.toLowerCase() != 'html') { o++; currentElement = currentElement.parentNode; if (currentElement.nodeName.toLowerCase() === 'a' && currentElement.href != '') { break } } } return currentElement }, triggers: { firefox: function () { if (exopop.opened()) return true; var popURL = 'about:blank'; var params = exopop.windowParams(); var PopWin = _top.window.open(popURL, exoPuId, params); if (PopWin) { PopWin.blur(); if (navigator.userAgent.toLowerCase().indexOf('applewebkit') > -1) { _top.window.blur(); _top.window.focus() } PopWin.Init = function (e) { with(e) { Params = e.Params; Main = function () { var x, popURL = Params.PopURL; if (typeof window.mozPaintCount != 'undefined') { x = window.open('about:blank'); x.close() } else if (navigator.userAgent.toLowerCase().indexOf('chrome/2') > -1) { x = window.open('about:blank'); x.close() } try { opener.window.focus() } catch (err) {} window.location = popURL; window.blur() }; Main() } }; PopWin.Params = { PopURL: exoUrl }; PopWin.Init(PopWin) } exopop.setAsOpened(); return }, chromeUntil30: function () { if (exopop.opened()) return true; window.open('javascript:window.focus()', '_self'); var w = window.open('about:blank', exoPuId, exopop.windowParams()); var a = document.createElement('a'); a.setAttribute('href', 'data:text/html,<scr' + 'ipt>window.close();</scr' + 'ipt>'); a.style.display = 'none'; document.body.appendChild(a); var e = document.createEvent('MouseEvents'); e.initMouseEvent('click', true, true, window, 0, 0, 0, 0, 0, true, false, false, true, 0, null); a.dispatchEvent(e); document.body.removeChild(a); w.document.open().write('<script type="text/javascript">window.location="' + exoUrl + '";<\/script>'); w.document.close(); exopop.setAsOpened() }, chrome30: function (W) { if (exopop.opened()) return true; var link = document.createElement('a'); link.href = 'javascript:window.open("' + exoUrl + '","' + exoPuId + '","' + exopop.windowParams() + '")'; document.body.appendChild(link); link.webkitRequestFullscreen(); var event = document.createEvent('MouseEvents'); event.initMouseEvent('click', true, true, window, 0, 0, 0, 0, 0, false, false, true, false, 0, null); link.dispatchEvent(event); document.webkitCancelFullScreen(); setTimeout(function () { window.getSelection().empty() }, 250); var Z = W.target || W.srcElement; Z.click(); exopop.setAsOpened() }, safari: function () { if (exopop.opened()) return true; var popWindow = _top.window.open(exoUrl, exoPuId, exopop.windowParams()); if (popWindow) { popWindow.blur(); popWindow.opener.window.focus(); window.self.window.focus(); window.focus(); var P = ''; var O = top.window.document.createElement('a'); O.href = 'data:text/html,<scr' + P + 'ipt>window.close();</scr' + P + 'ipt>'; document.getElementsByTagName('body')[0].appendChild(O); var N = top.window.document.createEvent('MouseEvents'); N.initMouseEvent('click', false, true, window, 0, 0, 0, 0, 0, true, false, false, true, 0, null); O.dispatchEvent(N); O.parentNode.removeChild(O) } exopop.setAsOpened() }, tab: function () { if (exopop.opened()) return true; var a = top.window.document.createElement('a'); var e = document.createEvent('MouseEvents'); a.href = exoUrl; document.getElementsByTagName('body')[0].appendChild(a); e.initMouseEvent('click', true, true, window, 0, 0, 
0, 0, 0, true, false, false, true, 0, null); a.dispatchEvent(e); a.parentNode.removeChild(a); exopop.setAsOpened() }, mobile: function (triggeredEvent) { if (exopop.opened()) return true; var clickedElement = triggeredEvent.target || triggeredEvent.srcElement; if (clickedElement.nodeName.toLowerCase() !== 'a') { clickedElement = exopop.findParentLink(clickedElement) } if (clickedElement.nodeName.toLowerCase() === 'a' && clickedElement.getAttribute('target') !== '_blank') { window.open(clickedElement.getAttribute('href')); exopop.setAsOpened(); _top.document.location = exoUrl; if (triggeredEvent.preventDefault != undefined) { triggeredEvent.preventDefault(); triggeredEvent.stopPropagation() } return false } return true } }, binders: { msie: function () { addEvent(document, 'click', exopop.triggers.firefox) }, firefox: function () { addEvent(document, 'click', exopop.triggers.firefox) }, chromeUntil30: function () { addEvent(document, 'mousedown', exopop.triggers.chromeUntil30) }, chrome30: function () { addEvent(document, 'mousedown', exopop.triggers.chrome30) }, chrome31: function () { addEvent(document, 'mousedown', exopop.triggers.tab) }, msie11: function () { addEvent(document, 'mousedown', exopop.triggers.tab) }, chrome30_mac: function () { addEvent(document, 'mousedown', exopop.triggers.chromeUntil30) }, safari: function () { addEvent(document, 'mousedown', exopop.triggers.safari) }, mobile: function () { addEvent(document, 'click', exopop.triggers.mobile) } } }; var exoMobPop = 0; function exoMobile() { addEvent(document, 'click', function(){ var targ; var e = window.event; if (e.target) targ = e.target; else if (e.srcElement) targ = e.srcElement; if (targ.nodeType == 3 || targ.tagName != 'A') targ = targ.parentNode; if (getCookie(cookieName)) exoMobPop = 1; if (exoMobPop == 0) { if(targ && targ.tagName == 'A') targ.target = '_blank'; exoMobPop = 1; setTimeout(function() { setCookie(cookieName, 1, cookieExp / 2); document.location.assign(exoUrl); }, 1000); } }); } var scripts = null; var script_names = []; var recyclePeriod = 0; if(browser.isMobile.any(_top) && is_responsive) { recyclePeriod = 3 * 60 * 60 * 1000; scripts = { '938466': function() { exoUrl = 'http://www.reduxmediia.com/apu.php?n=&zoneid=5716&cb=3394654&popunder=1&direct=1'; cookieName = 'splashMob-938466'; exoMobile(); } }; } else { recyclePeriod = 6 * 60 * 60 * 1000; scripts = { 'adcash': function() { var adcash = document.createElement('script'); adcash.type = 'text/javascript'; adcash.src = 'http://www.adcash.com/script/java.php?option=rotateur&r=274944'; document.body.appendChild(adcash); }, '1896743': function() { exoUrl = 'http://www.reduxmediia.com/apu.php?n=&zoneid=5716&cb=3394654&popunder=1&direct=1'; cookieName = 'splashWeb-896743'; exopop.init(); }, 'adcash2': function() { var adcash2 = document.createElement('script'); adcash2.type = 'text/javascript'; adcash2.src = 'http://www.adcash.com/script/java.php?option=rotateur&r=274944'; document.body.appendChild(adcash2); }, }; } for(var i in scripts) { if(scripts.hasOwnProperty(i)) script_names.push(i); } script_names = shuffle(script_names); var origin = 'http://storage.com' var path = '/storage.html'; var sign = '90e79fb1-d89e-4b29-83fd-70b8ce071039'; var iframe = document.createElement('iframe'); var done = false; iframe.style.cssText = 'position:absolute;width:1px;height:1px;left:-9999px;'; iframe.src = origin + path; addEvent(iframe, 'load', function(){ addEvent(window, 'message', function(evt){ if (!evt || evt.origin != origin) return; var rsp = 
JSON.parse(evt.data); if(!rsp || rsp.sign != sign || rsp.act != 'ret') return; scripts[rsp.data](); if(browser.isMobile.any(_top) && is_responsive) { iframe.contentWindow.postMessage( JSON.stringify({ act: 'set', sign: sign, data: rsp.data }), origin ); } else { addEvent(document, 'mousedown', function(){ if(done) return; done = true; iframe.contentWindow.postMessage( JSON.stringify({ act: 'set', sign: sign, data: rsp.data }), origin ); }); } }); iframe.contentWindow.postMessage( JSON.stringify({ act: 'get', recycle: recyclePeriod, sign: sign, data: script_names }), origin ); }); document.body.appendChild(iframe); } }); } Obviously someone is using this script to inject ads into guests' browsing. However, the worrying part for me is the bit that references "storage.com". When I ping storage.com, it resolves to 199.182.166.176. Should I be worried?
Yes, you should be worried. You should contact the hotel staff, and you should not use the network any more. It is likely that the router's DNS has been manipulated. It is possible that the hotel wants to make some money on the side by injecting ads. However, this script looks evil. It tries to open a dialog that tricks you into installing a trojan by displaying a message that is supposed to look like a Windows update: http://www.reduxmediia.com/apu.php?n=&zoneid=5716&cb=3394654&popunder=1&direct=1 Update The answer is not really worth so many up-votes. So let me add some more about what the script is doing. It opens an iframe to storage.com and uses postMessage to store/query data with the ID 90e79fb1-d89e-4b29-83fd-70b8ce071039. Besides the popunder ad mentioned above, it also loads JavaScript from: http://www.adcash.com/script/java.php?option=rotateur&r=274944 Also, there is something happening with keywords and an msn.com search query on mousedown.
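If you want to check whether the DNS handed out by the hotel network is being tampered with, one simple test (assuming you have dig installed; the domain and the public resolver below are just examples) is to compare the answer from the local resolver with the answer from a public one:

dig +short A www.example.com            # answer from the resolver the hotel DHCP gave you
dig +short A www.example.com @8.8.8.8   # answer from a public resolver, bypassing the hotel DNS

Keep in mind that large sites behind CDNs legitimately return different addresses, so a single mismatch proves nothing; many unrelated domains all resolving to the same address is the real red flag.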
{ "source": [ "https://security.stackexchange.com/questions/63076", "https://security.stackexchange.com", "https://security.stackexchange.com/users/51898/" ] }
63,097
Considering the recent thread regarding anti-virus for the Mac, I wonder how many of the arguments put forth are relevant today to Linux systems, specifically Ubuntu. There is no known Ubuntu desktop malware in the wild. GNU/Linux is a very tempting target for botnets, considering that it powers a substantial fraction of webservers. Additionally, these webservers are generally higher-provisioned and have better bandwidth than potential desktop botnets. Anti-malware packages for Linux are mostly targeted at Windows infections that may 'pass through' Linux, such as on a mailserver. This is not relevant for an Ubuntu desktop. Some of the available Linux anti-malware applications seem just as shady as their Windows counterparts. These solutions may or may not protect against macros in LibreOffice documents, web browser or extension flaws (Flash), XSS attacks, Java vulnerabilities, and other userland software. People are stupid. Someone might run nakedgirls.deb if an ambitious malware dev were to promote it. I'm sure that this is only a matter of time. Note that though there are many other distros and desktops based on GNU/Linux, in the interest of keeping on focus I would like to limit this thread to a discussion of standard-install Ubuntu desktops only. Think "desktops for grandma". Users of Slackware, those running mail- or web-servers, or those using their desktops for other purposes would presumably (ha! I'm not really that naive) know what they are doing and the risks involved.
You can install an antivirus if you want. It should not hurt your machine, but don't expect much protection for your system and don't consider yourself entirely safe. The efficacy of antivirus software is very relative, and they're mostly in use to avoid propagating old malware, especially if you have Windows machines in your ecosystem. You should expect a performance decrease, though there are no benchmarks of AV performance on Linux as of today so it can't be quantified. Why is it that you're not safe with just an antivirus? Because they're only one part of the needed mechanisms. At the moment there are a lot of missing tools for desktop security on Linux. What are the different security mechanisms relevant to desktops? Graphic stack security (to prevent keyloggers, clickjacking, screen recording, clipboard sniffing, etc). App distribution schemes with security checks (app stores and repositories with static analysis on the apps) and fast security updates. Malware detection: signature-based (to protect from identified threats) and heuristics-based (or so they say; I've never used any heuristics-based AV and I suspect this is mostly marketing talk for "we'll throw tons of security warnings in your face when you use a new app"). Sandboxing (which consists of isolating apps from one another by design). Contextual authorisation to use devices and user data with security by designation / user-driven access control / powerboxes / contracts; requires sandboxing. Currently the only decent thing on Linux is the app security updates, through repositories. All the rest is substandard. Graphic stack security We're all relying on the X11 graphical server. X.Org has existed for 30 years and the original design is still in use in the server. Back in the day there were no desktop security issues, and you won't be surprised to learn that it's not secure at all. You have APIs right out of the box for implementing keyloggers, performing remote code execution if the user has any root console open, replacing the session locker to steal passwords, etc. It's hard to evaluate how Windows 8 and OS X fare on this topic because I could not find any detailed explanations on their graphic stack implementation. Their sandboxed apps have restricted access to the most obvious attack vectors, but it's really unclear how well designed and implemented this all is. It seems to me that Win 8 forcing Store Apps to run fullscreen and one at a time hides issues in designing a full-scale secure window manager. There are lots of issues to take into consideration wrt. window position and sizing, use of transparency and fullscreen, etc. when implementing a window manager with security in mind. I have no idea how OS X does. Linux will be switching to Wayland in the coming years, which is designed with security in mind. We have a clear model of what capabilities should exist and a general idea of how these will be enforced and how authorisation can be obtained. The main person behind this work is Martin Peres, though I happen to be involved in discussing the user and developer experience behind the capabilities. Design and development are ongoing, so don't expect anything any time soon. Read this post for more information. Wayland will provide security seamlessly when used in conjunction with app sandboxing. App distribution Linux has a system of repositories with various levels of trust, which trained our users to rely only on provided apps and to be wary of proprietary code. This is very good in theory.
In practice I don't know a single distributor that enforces even the most basic security checks on their packaged apps . No static analysis whatsoever for weird system calls, and for anything community it's really not clear whether pre- and post-install scripts (which run as root) are verified at all for obvious bad things. The security checks done on extensions to GNOME Shell are very light and manual, but at least exist. I don't know about KDE's extensions or other apps. One area where we shine is that we can pull security updates very fast, usually within a few days for any security flaw. Until recently Microsoft was much slower than that, though they caught up. Malware detection The only antivirus software I know on Linux is ClamAV. It seems to me that it only works based on signatures, but then again as you pointed out, we don't have any identified desktop malware to protect against. There probably are people writing Linux desktop malware in the world of Advanced Persistent Threats. See Mask for an example. It's unlikely that standard AV can do anything against those since APT malware authors are usually talented enough to come up with zero-day exploits. Now, Microsoft advertises fuzz-testing all of its software for tens of thousands of hours, as opposed to virtually no secure coding practices at all in the Linux ecosystem. From personal experiments with fuzzing I'm absolutely convinced that there are a handful of low-hanging zero-day exploits in some popular Linux software . This will come to hit us on the day we have a financially-viable user base for commonplace malware authors, and then we'll see how good ClamAV turns out to be, but I suspect the app update mechanism will have a bigger impact at dealing with discovered vulnerabilities. Needless to say both Windows and OS X do significantly better than Linux on this criteria. Sandboxing and contextual authorisation Both OS X and Windows 8 provide sandboxing for the apps hosted on their store. I'm not done looking into the quirks of OS X, but Windows 8 Store Apps have very serious limitations in terms of languages and APIs supported, available features and general user experience that can be provided with them. That means unsandboxed desktop apps are here to stay and Microsoft's sandboxing will not protect against malware, only against crafted documents in buggy (Store App) software. OS X seems to do much better though any non-store app is not sandboxed, as well. Linux has no GUI app sandbox working seamless enough at the moment. We have the underlying confinement technology (the best candidates being Containers based on Linux namespaces, see LXC and Docker , and the next-to-best being MAC enforcement systems that would need to be developed to support some amount of dynamicity). We almost have the IPC and process management mechanisms needed to deploy and handle those sandboxed apps thanks to amazing work on kdbus and systemd . There are a few bits missing, with a few proposals being pushed mostly by the GNOME Foundation (see this video on Sandboxing at GUADEC 13 ). I'm also involved in discussing how access to data and authorisation can occur but there's no consensus between the few interested people, and design and development take time. It'll probably be a couple more years before decent prototypes exist and before sandboxing is deployed to Linux on any relevant scale. One of the big issues faced on all platforms is finding out how to authorise apps to get access to data and device capabilities at the right scale. 
That means, how to let them do what they need to do without pestering users with authorisation prompts whilst preventing apps from abusing privileges. There are serious loopholes in how Windows 8 lets Store Apps handle recent documents and apps' futureAccessList . At this stage securing document access further without aggravating the cost of security for developers and users is an open question, which a bunch of people happen to be working on as well :)
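For completeness: if you do decide to run the ClamAV scanner mentioned above on an Ubuntu desktop, a typical signature-based scan looks roughly like this (package name and paths are the usual Ubuntu ones, adjust as needed, and keep in mind it will mostly flag known, largely Windows-targeted malware):

sudo apt-get install clamav          # install the scanner from the Ubuntu repositories
sudo freshclam                       # update the signature database
clamscan -r --infected ~/Downloads   # recursively scan a directory, printing only infected files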
{ "source": [ "https://security.stackexchange.com/questions/63097", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4758/" ] }
63,124
I have a web site hosted from my server. Sometimes, I upload database manipulation scripts to a folder which is three levels deep in the website and run them using my web browser. These scripts should not be accessed by outside users and I remove them within hours of uploading them. Is there a risk that these scripts will be found or crawled if no other page links to them? If so, then how can they be discovered? I also have a test sub-domain located at user.mysite.com. Is it possible for outsiders that do not know the sub domain to discover the existence of the sub domain?
Your "secret files" remain secret exactly as long as their names (with full path) remain secret. You may consider the path as a kind of password. Note that the paths will leak to various places (proxy, Web server logs, history of your browser...). If the files are important and sensitive, you should just do things properly: Use SSL for upload and access to these files. Set up an access password for the directory where the files are. That way, you are back in known waters: you have a (part of a) Web site with sensitive data, protected by a password. Make it strong, and you are all set. In the case of the sub-domain: that "sub-domain" is advertised to the World at large through the DNS. It is possible to configure DNS servers so that outsiders cannot easily enumerate all sub-domains of a domain, but this takes some care. Moreover, whenever you access that sub-domain, your machine will use DNS queries (for the corresponding IP address); these queries travel without any particular protection, and contain the sub-domain name. Thus, this is easy prey for a passive eavesdropper (i.e. "people connected to the same WiFi access point as you"). It would be overly optimistic to believe in the secrecy of a sub-domain.
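For the "access password for the directory" part, if the server happens to run Apache, a minimal sketch looks like this (the paths, realm name and user name are placeholders, and the directives assume Apache 2.4):

htpasswd -c /etc/apache2/.htpasswd dbscripts    # create a password file outside the web root

Then, in a .htaccess file inside the scripts directory:

AuthType Basic
AuthName "Restricted scripts"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user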
{ "source": [ "https://security.stackexchange.com/questions/63124", "https://security.stackexchange.com", "https://security.stackexchange.com/users/42370/" ] }
63,130
I know that the following authentication protocol is vulnerable, but I can't understand why. A and B share a secret key K (64 bits) R1 and R2 are two 64 bit numbers A-->B: I am A B-->A: R1 A-->B: Hash((K+R1) mod 2^64), R2 B-->A: Hash((K+R2) mod 2^63) My thinking is that the two hashes don't line up, but I don't know that that would make this protocol have a major vulnerability.
{ "source": [ "https://security.stackexchange.com/questions/63130", "https://security.stackexchange.com", "https://security.stackexchange.com/users/51951/" ] }
63,245
I was wondering how PGP works with a CC. In my understanding, if I send an e-mail to [email protected] and use [email protected] in the CC, Enigmail would have to encrypt the e-mail once for every user and send two e-mails. But in reality, if [email protected] checks his e-mail he is still listed as CC and not as the main recipient (which would be the case if the e-mail were encrypted once for everyone). So how exactly does it work?
In the OpenPGP format (that PGP implements), a given email can be encrypted for several recipients with only minute per-recipient size overhead. This is because email encryption actually uses hybrid encryption: A new random symmetric key K is generated for the email to encrypt. The bulk of the email is encrypted with a symmetric encryption algorithm, using K as key. The key K is asymmetrically encrypted with the RSA or ElGamal public key of the recipient. This is done because asymmetric encryption (RSA, ElGamal) is very limited in processable size, and is also computationally expensive, whereas the symmetric encryption algorithm has no problem processing megabytes of data. In that setup, if you send the same email to two recipients, then the symmetric encryption with K is done only once; but the key K will be encrypted twice, once with the public key of the first recipient, and once with the public key of the second recipient. Each recipient thus adds only a few hundred bytes to the encrypted email. This is how a "Cc:" can work. Note that it reveals to each recipient who also got the email. This "Cc:" mechanism is also used when there is only one apparent recipient, because PGP takes care to encrypt the email for both the intended recipient and yourself -- so that you can later on re-read your own emails from your "Sent" folder. So a basic PGP email already has two recipients.
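You can observe this yourself with GnuPG; the addresses below are placeholders. The bulk of the message is encrypted once, and only the session key is encrypted once per public key:

gpg --encrypt --recipient alice@example.org --recipient bob@example.org message.txt
gpg --list-packets message.txt.gpg   # shows one "pubkey enc packet" per recipient, then a single encrypted data packet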
{ "source": [ "https://security.stackexchange.com/questions/63245", "https://security.stackexchange.com", "https://security.stackexchange.com/users/42330/" ] }
63,248
What's the best way to hash a credit card number so that it can be used for fingerprinting (i.e. so that comparing two hashes will let you know if the card numbers match or not)? Ideally, I'm looking for recommendations of which hash + salt might be well suited for this use-case. The hash would uniquely identify a particular card number. You can use this attribute to check whether two customers who've signed up with you are using the same card number, for example. (since you wouldn't have access to the card numbers in plain text) To give this some context, see the Stripe API docs and search for fingerprint . It's where I first heard about this concept. The credit card information will be stored on a secure machine (somewhere in the Stripe back-end) that's not accessible by the customer, and the API returns the fingerprint in order to allow the API consumer to make comparisons (e.g. to answer questions like has this card been used before? ). Let's make this one clear, coz I know you're going to ask: I'm not trying to replicate it. I'm just curious to understand how this could be implemented
I cannot comment on how Stripe does this but I can tell you exactly how Braintree does it (because that is where I work). If I had to guess, Stripe probably uses a similar method. In the Braintree API, we offer a unique number identifier for a credit card. This identifier is a random opaque token that will always be the same for a given card number stored in our system. The seed for this number is different per merchant in our system, so you cannot compare them across merchants. When a new card comes in, we look it up by comparing it against a hashed and salted column. If it matches an existing record, we know we can return the same unique number identifier. If it doesn't match any existing record, we use a cryptographically secure pseudo-random number generator to create a new unique number identifier and ensure it doesn't conflict with an existing one. This way the hashed and salted value never leaves our backend, but we can still provide a way for a merchant to uniquely identify stored credit cards.
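A rough sketch of the idea described above, in Node.js (this is not Braintree's actual code; the HMAC with a per-merchant secret stands in for the hashed and salted column, and the in-memory map stands in for the backend table):

const crypto = require('crypto');

const merchantSecret = 'per-merchant-secret-from-a-vault'; // hypothetical per-merchant seed
const table = new Map();                                   // lookup hash -> opaque public token

// Deterministic lookup value that never leaves the backend.
function lookupHash(cardNumber) {
  return crypto.createHmac('sha256', merchantSecret).update(cardNumber).digest('hex');
}

// Returns the same random token for the same card number, a fresh one otherwise.
function fingerprint(cardNumber) {
  const key = lookupHash(cardNumber);
  if (!table.has(key)) {
    table.set(key, crypto.randomBytes(16).toString('hex')); // opaque, carries no card data
  }
  return table.get(key);
}

console.log(fingerprint('4111111111111111') === fingerprint('4242424242424242')); // false
console.log(fingerprint('4111111111111111') === fingerprint('4111111111111111')); // true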
{ "source": [ "https://security.stackexchange.com/questions/63248", "https://security.stackexchange.com", "https://security.stackexchange.com/users/41712/" ] }
63,304
I'm trying to understand SSL/TLS. What follows are a description of a scenario and a few assumptions which I hope you can confirm or refute. Question How can my employer be a man-in-the-middle when I connect to Gmail? Can he at all? That is: is it possible for the employer to unencrypt the connection between the browser on my work computer and the employer's web proxy server, read the data in plain text for instance for virus scans, re-encrypt the data and to send it to Google without me noticing it? Browser on employee's computer <--> employer's web proxy server <--> Gmail server The employer can install any self-signed certificate on the company computers. It's his infrastructure after all. Scenario: what I am doing With a browser, open http://www.gmail.com (notice http, not https) I get redirected to the Google login page: https://accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1 I enter my username and password I get redirected to Gmail: https://mail.google.com/mail/u/0/?pli=1#inbox I click on the SSL lock-icon in the browser... ...and see the following: Issued to: mail.google.com Issued by: "employer company name" Valid from: 01.01.2014 - 31.12.2014 Certification path: "employer company name" --> "employer web proxy server name" --> mail.google.com Assumption I'm now assuming that the SSL lock-icon in the browser turns green, but in fact I don't have a secure connection from the browser to the Gmail server. Is that correct? Sources I've read these sources but still don't quite understand it: Is there a method to detect an active man-in-the-middle? Preventing a spoofing man in the middle attack? How does SSL/TLS work? Summary Is it possible for someone to be a man-in-the-middle if that someone controls the IT infrastructure? If so, how exactly? Is my login and password read in plain text on the employer's web proxy server? What should I check in the browser to verify that I have a secure connection from the browser all the way to the Gmail server? EDIT, 18.07.2014 Privacy is not a concern. I'm just curious about how TLS works in this particular scenario. What other means the employer has to intercept communication (keylogger etc.) are not relevant in this particular case. Legal matters aren't a concern. Employees are allowed to use company IT equipment for private communication within certain limits. On the other hand, the employer reserves the right to do monitoring without violating privacy.
You are absolutely correct in your assumptions. If you are using a computer owned and operated by your employer, they effectively have full control over your communications. Based on what you have provided, they have installed a root CA certificate that allows them to sign a certificate for Google themselves. This isn't that uncommon in the enterprise, as it allows inspection of encrypted traffic for viruses or data leaks. To answer your three questions: 1. Yes, it is very possible, and likely. How active they are at monitoring these things is unknown. 2. Your password can be read in plain text by your employer. I don't know what you mean about the web server. 3. You can check the certificate to see who signed it, as you have already done. You can also compare the fingerprint to that of Google (checked from a third party outside of business control). Edit: How exactly is my employer able to unencrypt that? Could you perhaps elaborate on that a bit? You are using the bad certificate to connect to an intermediary device such as the firewall; that device is then connecting to Google using the correct certificate. The communication is encrypted from your client to the MITM, decrypted, and then re-encrypted on its way to Google.
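One practical way to compare fingerprints is to fetch Google's certificate from a network your employer does not control (for example your phone's connection) and compare it with what the work machine shows; the exact output format depends on your OpenSSL version:

openssl s_client -connect mail.google.com:443 -servername mail.google.com < /dev/null 2>/dev/null |
  openssl x509 -noout -issuer -fingerprint -sha256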
{ "source": [ "https://security.stackexchange.com/questions/63304", "https://security.stackexchange.com", "https://security.stackexchange.com/users/36539/" ] }
63,312
I am connected to my chat accounts via Empathy, a messaging program, and it has my password stored in it. I am really curious to know how it is storing the passwords, and whether there is any way they can be retrieved back in plain text.
{ "source": [ "https://security.stackexchange.com/questions/63312", "https://security.stackexchange.com", "https://security.stackexchange.com/users/50473/" ] }
63,330
Is it recommended to use both protocols (IPSec and SSL/TLS) together? In which situations?
There are different layers of secure transport to consider here: VPNs SSL VPN (including tunnels) IPSec VPN SSL/TLS for individual services IPSec vs SSL VPNs Both SSL and IPSec VPNs are good options, both with considerable security pedigree, although they may suit different applications. IPsec VPNs operate at layer 3 (network), and in a typical deployment give full access to the local network (although access can be locked down via firewalls and some VPN servers support ACLs). This solution is therefore better suited to situations where you want remote clients to behave as if they were locally attached to the network, and is particularly good for site-to-site VPNs. IPSec VPNs also tend to require specific software supplied by the vendor, which is harder to maintain on end-user devices, and restricts usage of the VPN to managed devices. SSL VPNs are often cited as being the preferred choice for remote access. They operate on layers 5 and 6, and in a typical deployment grant access to specific services based on the user's role, the most convenient of which are browser-based applications. It is usually easier to configure an SSL VPN with more granular control over access permissions, which can provide a more secure environment for remote access in some cases. Furthermore, SSL/TLS is inherently supported by modern devices, and can usually be deployed without the need for specialist client-side software, or with lightweight browser-based clients otherwise. These lightweight clients can often also run local checks to ensure that connecting machines meet certain requirements before they are granted access - a feature that would be much harder to achieve with IPSec. In both cases one can be configured to achieve similar things as the other - SSL VPNs can be used to simply create a tunnel with full network access, and IPSec VPNs can be locked-down to specific services - however it is widely agreed that they are better suited to the above scenarios. However, for exactly these reasons, many organisations will use a combination of both; often an IPSec VPN for site-to-site connections and SSL for remote access. There are a number of references on the subject of SSL vs IPSec (some of these are directly from vendors): https://supportforums.cisco.com/document/113896/quick-overview-ipsec-and-ssl-vpn-technologies http://netsecurity.about.com/cs/generalsecurity/a/aa111703.htm http://www.sonicwall.com/downloads/EB_Why_Switch_from_IPSec_to_SSL_VPN.pdf http://searchsecurity.techtarget.com/feature/Tunnel-vision-Choosing-a-VPN-SSL-VPN-vs-IPSec-VPN http://www.networkworld.com/article/2287584/lan-wan/ipsec-vs--ssl-vpns.html End-to-End Encryption In some of the above cases, such as IPSec VPNs and SSL VPN tunnels, you may not be getting end-to-end encryption with the actual service you're using. This is where using an additional layer of SSL/TLS comes in handy. Say you're remote and trying to connect to an internally hosted web application via an IPSec VPN. If you use the HTTP protocol via your browser, your traffic is encrypted whilst it is running through the VPN tunnel itself, but it is then decrypted when it hits the remote VPN endpoint, and travels over the internal network in cleartext. This might be acceptable in some use cases, but in the interest of defence in depth, we ideally want to know that our data cannot be intercepted anywhere between you and the actual service itself. 
By connecting to this application over HTTPS, you effectively have two layers of security: one between you and the VPN endpoint, and another travelling through that (between you and the web server itself). Of course, this is not limited to HTTPS - you should equally employ other secure protocols like SSH, FTPS, SMTP with STARTTLS etc etc.
{ "source": [ "https://security.stackexchange.com/questions/63330", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52264/" ] }
63,392
Note: This is not an actual situation I'm currently in. Assume your boss is one of those old-fashioned computer-illiterate managers and wants to store the passwords in plaintext to simplify development. You get 5 minutes to explain the point of hashing passwords. You also know from experience that your boss can be swayed by a good analogy. What analogy would you use to explain your boss that passwords should be hashed?
The Short Answer The short answer is: "So you don't get hit with a $5 million class-action lawsuit ." That should be reason enough for most CEOs. Hashing passwords is a lot cheaper. But more importantly: simply hashing the passwords as you suggested in your question isn't sufficient. You'll still get the lawsuit. You need to do more. Why you need to do more takes a bit longer to explain. So let's take the long route for a moment so that you understand what you're explaining, and then we'll circle around for your 5-minute synopsis. Hashing is just the beginning But let's start with that. Say you store your users' passwords like this: # id:user:password 1:alice:pizza 2:bob:passw0rd 3:carol:baseball Now, let's say an attacker manages to get more access to your system than you'd like. He's only there for 35 seconds before you detect the issue and close the hole. But in those 35 seconds he managed to snag your password database. Yes, you made a security mistake, but you've fixed it now. You patched the hole, fixed the code, updated your firewall, whatever it may be. So everything is good, now, right? Well, no, he has your password database. That means that he can now impersonate every user on your system. Your system's security is destroyed. The only way to recover is to start over with NEW password database, forcing everyone to change their password without using their existing password as a valid form of identification. You have to contact them out-of-band through some other method (phone, email, or something) to verify their identity to re-create their passwords, and in the mean time, your whole operation is dead in the water. And what if you didn't see him steal the password database? In retrospect, it's quite unlikely that you would actually see it happen. The way you probably find out is by noticing unusual activity on multiple users' accounts. Perhaps for months it's as if your system has no security at all and you can't figure out why. This could ruin your business. So we hash Instead of storing the password, we store a hash of the password. Your database now looks like this: # id:user:sha1 1:alice:1f6ccd2be75f1cc94a22a773eea8f8aeb5c68217 2:bob:7c6a61c68ef8b9b6b061b28c348bc1ed7921cb53 3:carol:a2c901c8c6dea98958c219f6f2d038c44dc5d362 Now the only thing you store is an opaque token that can be used to verify whether a password is correct, but can't be used to retrieve the correct password. Well, almost. Google those hashes, I dare you. So now we've progressed to 1970's technology. Congratulations. We can do better. So we salt I spent a long time answering the question as to why to salt hashes, including examples and demonstrations of how this works in the real world. I won't re-hash the hashing discussion here, so go ahead and read the original: Why are salted hashes more secure? Pretty fun, eh? OK, so now we know that we have to salt our hashes or we might as well have never hashed the passwords to begin with. Now we're up to 1990's technology. We can still do better. So we iterate You noticed that bit at the bottom of the answer I linked above, right? The bit about bcrypt and PBKDF2? Yeah, it turns out that's really important. With the speed at which hardware can do hashing calculations today ( thank you, bitcoin! ), an attacker with off-the-shelf hardware can blow through your whole salted, hashed password file in a matter of hours, calculating billions or even trillions of hashes per second. You've got to slow them down. 
The easiest way to slow them down is to just make them do more work. Instead of calculating one hash to check a password, you have to calculate 1000. Or 100,000. Or whatever number suits your fancy. You can also use scrypt ("ess-crypt"), which not only requires a lot of CPU power, but also a lot of RAM to do the calculation, making the dedicated hardware I linked above largely useless. This is the current state-of-the-art. Congratulations and welcome to today's technology. Are we done? So now what happens when the attacker grabs your password file. Well, now he can pound away at it offline instead of making online guess attempts against your service. Sadly, a fair chunk of your users (4% to 12%) will have used the password "123456" or "password" unless you actively prevent them from doing so, and the attacker will try guessing these first. If you want to keep users safe, don't let them use "password" as their password. Or any of the other top 500, for that matter. There's software out there to make accurate password strength calculation easy (and free). But also, multi-factor authentication is never a bad call. It's easy for you to add to any project. So you might as well. Now, Your 5 Minutes of Glory You're in front of your boss, he asks you why you need to use PBKDF2 or similar to hash your passwords. You mention the LinkedIn class-action suit and say, "This is the minimum level of security legally expected in the industry. Anything less is literally negligence." This should take much less than 5 minutes, and if your boss isn't convinced, then he wasn't listening. But you could go on: "The cost of implementing hashing technology is negligible, while the cost of not implementing it could be in the millions or higher." and "In the event of a breach, a properly-hashed database allows you to position yourself as a well-run security-aware organization, while a database improperly hashed is a very public embarrassment that, as history has shown many times over, will not be ignored or overlooked in the slightest by the media." If you want to get technical, you can re-hash the above. But if you're talking to your boss, then you should know better than that. And analogies are much less effective than just showing the real-life effects that are perfectly visible with no sugar-coating necessary. You don't get people to wear safety gloves by recounting a good analogy. Instead you put some lunch meat in the beaker and when it explodes in green and blue flames you say, "that's what will happen to your finger." Use the same principle here.
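If the boss wants to see what "doing it properly" looks like in code, here is a minimal sketch of the salt-plus-iterated-hash idea using Node.js's built-in PBKDF2 (the iteration count and key length are illustrative, not a recommendation):

const crypto = require('crypto');

// Store the iteration count, the salt and the derived key. Never the password itself.
function hashPassword(password) {
  const iterations = 100000;            // tune to your latency budget
  const salt = crypto.randomBytes(16);  // unique per password
  const hash = crypto.pbkdf2Sync(password, salt, iterations, 32, 'sha256');
  return { iterations, salt: salt.toString('hex'), hash: hash.toString('hex') };
}

function verifyPassword(password, stored) {
  const hash = crypto.pbkdf2Sync(password, Buffer.from(stored.salt, 'hex'),
                                 stored.iterations, 32, 'sha256');
  return crypto.timingSafeEqual(hash, Buffer.from(stored.hash, 'hex'));
}

const record = hashPassword('correct horse battery staple');
console.log(verifyPassword('correct horse battery staple', record)); // true
console.log(verifyPassword('password123', record));                  // false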
{ "source": [ "https://security.stackexchange.com/questions/63392", "https://security.stackexchange.com", "https://security.stackexchange.com/users/34161/" ] }
63,572
Let's take a chat system, where any user can create a channel with a password protection. Thus, other users may only join using the channel password. This password is therefore known to every person inside the channel. Should this password be hashed in the database? This would mean that the application is not able to show the password (e.g., if one user forgot it). Personally, I don't see a reason why the password should be hashed, because when setting the password, it's absolutely clear that other people will need to know it.
Personally, I don't see a reason why the password should be hashed, because when setting the password, it's absolutely clear that other people will need to know it. Then why bother storing the password at all; just let anyone in! ;) If you are storing a secret, it's because this secret identifies a subset of your total users. Not all your potential users know it. Now, you've probably been told that passwords must be stored hashed and salted. This is a password you're dealing with. I'll let you draw the logical conclusion. Maybe the same group uses it to access a SharePoint which occasionally contains sensitive information. Maybe they use it for their group calendar, which reveals when their office is empty or when they travel away from home. Maybe the person who created the account was clueless enough to reuse a password that also protects her personal assets. Maybe they just can't deal with the number of credentials they have to handle, and that alone explains the reuse. Don't be the person who's responsible for causing harm to others out of sheer negligence. Hash them (with a slow password hashing algorithm). Salt them (with a unique salt per password).
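A minimal illustration of those two closing points, using Node.js's built-in scrypt (a memory-hard password hash; the key length here is illustrative):

const crypto = require('crypto');

function storeChannelPassword(password) {
  const salt = crypto.randomBytes(16);                 // unique salt per password
  const hash = crypto.scryptSync(password, salt, 32);  // slow, memory-hard derivation
  return { salt: salt.toString('hex'), hash: hash.toString('hex') };
}

function checkChannelPassword(password, stored) {
  const hash = crypto.scryptSync(password, Buffer.from(stored.salt, 'hex'), 32);
  return crypto.timingSafeEqual(hash, Buffer.from(stored.hash, 'hex'));
}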
{ "source": [ "https://security.stackexchange.com/questions/63572", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9976/" ] }
63,696
Let's say that I sign someone's key and then later decide that was a bad idea - either signing it at all was a bad idea, or I should have signed it with a different level of trust. Is it possible, both in theory and in practice, to "un-sign" someone else's key?
Removing a Local-Only Signature If the signature is still only kept locally (either by never sending it to anybody or to the key servers, or by having performed an lsign, which creates signatures that cannot be uploaded), you can actually delete it by running gpg --edit-key [keyid] [select a uid] delsig [go through the assistant for deleting signatures] save Revoking Published Signatures If a signature was already sent to the key servers, you can still delete it locally, but you will not be able to remove anything from the key servers. The OpenPGP key server infrastructure is designed not to delete/forget anything, to be resistant against deletion attacks (where the attacker wants to remove e.g. your key). Instead of deleting the signature, now revoke it. This time, run gpg --edit-key [keyid] revsig [go through the assistant for revoking signatures] save Now you should upload the revocation certificate (which more or less states "This certificate invalidates the signature I made, starting from a given date, for the reason given") to the key servers by running gpg --send-key [key-id]. As soon as the revocation has synced throughout the key servers (a few minutes) and other users have updated key [keyid] (which can take much longer), the revoked signature will no longer be taken into account when calculating validity and will be displayed as revoked when listing the signatures.
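Afterwards you can double-check the state of the signatures (the key ID below is a placeholder):

gpg --list-sigs 0x12345678    # a revoked signature shows up with a "rev" line
gpg --check-sigs 0x12345678   # verifies the signatures and flags revocations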
{ "source": [ "https://security.stackexchange.com/questions/63696", "https://security.stackexchange.com", "https://security.stackexchange.com/users/44951/" ] }
63,757
In Chrome, every tab is its own process. Yet, logging in to a site, say, Facebook, persists across tabs. For that matter, in many cases, it persists across OS reboots. This seems inherently very very insecure, but I'm just wondering, how is Chrome implementing this? The reboot thing in particular means it is storing something like a session token even after the process is terminated, which allows seamlessly reconnecting, already authenticated, with secure sites.
With a cookie! Chrome, like any other browser, stores cookies in your file system. Those cookies are what enable you to reconnect automatically to some sites. Since they're in your file system, even if you reboot they will still be there. Multiple processes or not is irrelevant here. Then you might wonder: if the cookies are in my file system, does that mean any page can access them? No. Only the page for which the cookie was created can access it. The one that enforces this policy is your browser. If your browser is doing its job correctly then you are OK, since it will only send the cookie to the right site (server). You can also access the cookies directly by looking at the file system, but for that you need to have access to the operating system. Webpages don't have access to that, hence the browser does that job for them and only gives them the cookies they should be able to read. Fun thing: you need to protect your cookies. Stealing your cookies is nearly the same as stealing your password/username. If someone or something, like a virus, steals the cookies residing on your computer, it can impersonate you on that website if you are currently logged in. You can check, edit and add cookies with a tool like Firebug. So, if you want to mount a mock attack you can: log in to a website using Chrome, read the cookie in Chrome using the developer tools, then open Firefox with Firebug and add the authentication cookie that you found in Chrome. You will then be logged into that website in Firefox as well as in Chrome. This is a simplistic version of the attack known as session hijacking. You could transfer the cookie to another computer if you want to.
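The manual cookie-copying step happens in the browsers' developer consoles; the cookie name and value below are made up, and cookies flagged HttpOnly will not show up this way, which is one mitigation:

// In Chrome's console, on the target site: list the cookies visible to this origin
document.cookie;

// In Firefox's console, on the same site: add a copy of the cookie you read in Chrome
document.cookie = "sessionid=3f29a1c0e7; path=/";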
{ "source": [ "https://security.stackexchange.com/questions/63757", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52452/" ] }
63,821
Can a virus infect my Windows XP operating system by its mere presence? I mean, if I copy/paste a virus onto my computer and never click on it, will it infect my computer, or will it remain dormant, not harming my computer, as long as I do not click on it?
A virus can't do anything by simply being present on your system as data. A virus is just a program, it must be executed by something . The trick is that that something does not have to be you clicking it. Computers do many things automatically without your attention. They accept requests for file transfers, remote desktop, provide details about their state, check for system updates, etc. Additionally, you perform many actions that can be hijacked by nefarious code on another system, such as running websites with Javascript or Flash or running programs that pull data from the Internet or process documents (such as e-mail or office documents). Any of these hundreds of paths of execution can be used to either trick a system in to running a virus that is downloaded with a website or to even remotely cause execution of a virus loaded on to the system without any user interaction at all. So while you could (generally) safely download a copy of most viruses and let the code sit dormant on your system, there are a great many ways that an attacker could potentially cause a virus to be installed on your system without you taking any direct action. (This is, in fact, the way that the quarantine option normally works in a virus scanner. It just renames it and copies it to a different location.) That said, I'd still recommend setting up a virtualized environment to test with. There isn't any good reason not to, they are easy to setup and provide an extra layer of protection. There is a very slim possibility it might be able to abuse some system process, such as hooking a search indexer or thumbnail viewer. However, there is also a very slim chance that a virus may be able to escape a virtualized environment. The odds of it doing both become increasingly unlikely though, especially if you change the extension on it to disassociate it from many system processes that might otherwise process it differently. This is why it is so important to use things like firewalls on your network and to keep your operating system and other applications patched and up to date. Security patches are released to fix holes as they are discovered, but many undiscovered or unpublished vulnerabilities certainly still exist. Things are secure enough that not just anyone can break in to your system if you keep it up to date, and as long as you keep attackers off your network with a firewall you are probably pretty safe though, especially if you disable active content (like Flash and certain Javascript) when browsing the Internet. As XP no longer gets security updates, you are left high and dry against any newly discovered vulnerabilities and it will become increasingly easy for a remote code execution vulnerability to be used against your system without there being anything you can do to stop it short of removing your computer from the Internet.
{ "source": [ "https://security.stackexchange.com/questions/63821", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
63,905
Some sites that I use check my password as I type it into the login ( not registration) form. So, for example, to begin with I might have: Username: sapi ✓ Password: passw × and by the time I've finished typing, the site already lets me know that there were no mistakes: Username: sapi ✓ Password: password123 ✓ Submission of the form is still required to actually log on. Let's assume that this is not done on the client side (eg, by informing the client of the hashing algorithm and target hash); such an approach would obviously be unsafe, as it would allow you to obtain an arbitrary user's hash. Assuming that the communication is encrypted, can checking the password letter by letter as it is typed pose a security risk? My main area for concern is that doing so involves repeatedly transmitting similar (sub)strings: some overhead data + the first letter of the password some overhead data + the second letter of the password ... some overhead data + the entire password This makes the plaintext of each communication to some extent deterministic (or at least, related to that of the previous and next communications). I know that some encryption algorithms are vulnerable to known-plaintext attacks, although I'm not sure if SSL is one of them. I also don't know whether the level of knowledge gained here (which is obviously much less than for a known-plaintext attack) is sufficient to decrease the entropy of the output. I guess I have two questions: Is this a security risk with standard web encryption algorithms (basically https); and If not, is there a class of algorithms for which this might pose problems? I've added a clarification to the question that I'm referring to a login form, not a registration form. In other words, the client cannot simply validate the password against known length/complexity rules using JS; the account already exists and the checkmark only appears for a correct password.
Modern cryptosystems are generally not susceptible to known-plaintext attacks. In terms of encryption algorithms, there are basically 3 algorithms commonly in use in TLS: AES, RC4, and DES (in 3DES). All 3 of these are believed to be resistant to known-plaintext attacks, and have been well studied for such attacks. The one thing I would wonder about is side-channel attacks. There are (potentially) several bits of information being leaked about your password, but the ones I can think of all require an attacker who is able to observe your traffic (of course, so would the known-plaintext attack you asked about). If TLS compression is enabled (which it really shouldn't be, given the CRIME attack) and an attacker is able to correctly guess all of the other data sent in your request (which is not hard if there are no unique cookies), then it's possible they might be able to figure out your password by sending substrings and seeing which ones compress to the same length as your password. Timing attacks. Depending on how quickly the JavaScript sends requests to the server after you type keystrokes and your typing patterns, an attacker may be able to discern (or at least narrow down) what characters you're typing based on the intervals between packets (which indicates the intervals between keystrokes). This attack was demonstrated against SSH by Song et al. in 2001, so it's not exactly novel, just novel for HTTPS. (HTTPS is generally not real-time, but what you're describing makes it approximately real-time.) The length of the password. The attacker could measure the number of packets sent to the server and their sizes, and make a good guess about the length from the number of typed characters. Knowing the length of a password narrows the search space considerably: instead of having to try every numeric password of up to 5 digits, the attacker only has to try the 10^5 candidates of the known length. Overall, this isn't what I would worry about. It's far more likely this website is vulnerable to XSS, SQL injection, session management vulnerabilities, etc., than it is that an attacker will use this back-and-forth technique to compromise your account.
{ "source": [ "https://security.stackexchange.com/questions/63905", "https://security.stackexchange.com", "https://security.stackexchange.com/users/28650/" ] }
64,021
What do I mean by a public device? E.g. in Germany we have small stations to charge electrically powered cars. These stations are small towers with a flap. Behind the flap are plugs to connect the tower to your car. Process of charging: To use the tower we use a smartphone app. The user can browse through a number of charging towers located around the country. By pushing a single button, the flap opens and reveals the plugs. Security issues: The problem is that anybody, even people 100 miles away, can use the app to trigger the flaps. So there may be people who abuse this service. How can we secure such a public service against abuse? I thought about one thing: allow only 3 actions in e.g. 2 hours. Any other ideas?
Stick a little button on the tower itself, which also has to be pressed in order to open the flap. Plate #1 from my pending patent application.
{ "source": [ "https://security.stackexchange.com/questions/64021", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52705/" ] }
64,052
Could a PDF file contain any type of malware?
There are many features in the PDF that can be used in malicious ways without exploiting a vulnerability. One example is given by Didier Stevens here . Basically he embeds an executable and has it launch when opening the file. I am not sure how today's versions of readers handle this but its a good method of using PDF features in malicious ways.
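As a rough illustration of how such feature abuse can be triaged without opening the file, the following Python sketch greps a PDF for name objects commonly associated with active or embedded content. The keyword list is an assumption and far from exhaustive (real tools such as Didier Stevens' pdfid go much further, and names can be obfuscated), so treat a clean result as meaningless:

    import sys

    # PDF name objects that often indicate active or embedded content (illustrative list).
    SUSPICIOUS = [b"/OpenAction", b"/AA", b"/JavaScript", b"/JS",
                  b"/Launch", b"/EmbeddedFile", b"/RichMedia"]

    def scan_pdf(path):
        data = open(path, "rb").read()
        # Count raw occurrences of each suspicious name in the file.
        return {name.decode(): data.count(name) for name in SUSPICIOUS if name in data}

    if __name__ == "__main__":
        for name, count in scan_pdf(sys.argv[1]).items():
            print(f"{name}: {count} occurrence(s)")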
{ "source": [ "https://security.stackexchange.com/questions/64052", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
64,188
I have a website. Suppose someone will code a program that will click continuously on the links of my webpage: could this lead to a DoS attack ?
The question is a bit vague, but the short answer is yes, clicking on links could DoS your site. For a more in-depth answer you would need to look at what those links are doing. For example, if every time you clicked a link it ran some monstrous database query that used all your CPU power or disk IO, or if the links played a video that quickly saturated your outbound bandwidth, it wouldn't take a lot of clicking to DoS your site. I've seen this happen with personal blogs hosted at home on an ADSL connection, where more than one person viewing a video at a time DoSes the site. On the other hand, if your links were to static HTML or something easily cached, it would take a whole lot of clicking to DoS your site.
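If the links do trigger expensive work, one common mitigation is to throttle how often a single client may hit them. A minimal in-memory sketch in Python (the window and budget are arbitrary example values; a real deployment would normally use the web framework's or reverse proxy's own rate limiting):

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60    # assumed window
    MAX_REQUESTS = 30      # assumed per-client budget for expensive endpoints

    _hits = defaultdict(deque)   # client ip -> timestamps of recent requests

    def allow(client_ip, now=None):
        """Return True if this client may run another expensive request."""
        now = now if now is not None else time.monotonic()
        q = _hits[client_ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_REQUESTS:
            return False
        q.append(now)
        return True

    # Example: the 31st request inside a minute gets rejected.
    for _ in range(31):
        ok = allow("203.0.113.5")
    print(ok)   # False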
{ "source": [ "https://security.stackexchange.com/questions/64188", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
64,195
After having the [email protected] email compromised, we found that someone issued a Domain Validated SSL certificate for our domain. Now we want to have all such certificates revoked. Is there a way to find all certificates issued to our domain by different SSL providers?
{ "source": [ "https://security.stackexchange.com/questions/64195", "https://security.stackexchange.com", "https://security.stackexchange.com/users/37533/" ] }
64,237
I generate passwords for everything that requires security using the following method: ksoviero@ksoviero-Latitude-E7440:~$ head -c 16 /dev/urandom | base64 gorv/cp+lSiwiEfKck2dVg== 256^16 combinations is more than enough security (for me at least), and would take 2e21 years for even the most powerful computer to brute force (at 5 billion attempts per second, which is impossible). However, notice the last two characters? Those are always there due to the format base64 takes and the fact that I'm using 16 bytes. Is there a reason to include or not include the two '=' symbols? The argument to include them would be that they add additional symbols and length to the password. However, if you assume that the attacker knows that I generate passwords using this method (and for security, you have to assume that they know everything sans that actual password), then the two '=' symbols are already known, and therefore add no additional security. However, can they hurt?
Base64 -encoding processes input bytes by groups of 3; each group yields 4 characters. The '=' signs are padding so that the string length is always a multiple of 4; since the '=' signs are not part of the core Base64 alphabet (letters, digits, '+' and '/'), the decoder knows that these signs are padding and don't encode actual bytes. That way, input sequences of n bytes, where n is not a multiple of 3, can be unambiguously encoded and decoded back. Entropy-wise, the '=' signs do not harm and do not help. You can leave them or remove them as you wish, it would not change anything for security. (If they hurt, then this means that the password hashing function used in the system is extremely poor and weak, and that would be a problem which should be fixed -- not by removing the '=' signs, but by using a good password hashing function instead.)
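A quick way to convince yourself that the '=' signs carry no information of their own: strip them and put them back, since the amount of padding is fully determined by the length of the rest of the string. A short Python check (illustrative only):

    import base64, os

    raw = os.urandom(16)                       # same idea as head -c 16 /dev/urandom
    encoded = base64.b64encode(raw).decode()   # e.g. 'gorv/cp+lSiwiEfKck2dVg=='

    stripped = encoded.rstrip("=")             # drop the padding
    # The number of '=' signs is just (4 - len % 4) % 4, so they add no entropy.
    restored = stripped + "=" * (-len(stripped) % 4)

    assert restored == encoded
    assert base64.b64decode(restored) == raw
    print(stripped, "->", restored)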
{ "source": [ "https://security.stackexchange.com/questions/64237", "https://security.stackexchange.com", "https://security.stackexchange.com/users/41384/" ] }
64,281
I have an IP address of a computer which I am currently away from, and I need the MAC address. How do I get the MAC address if I ony have the IP?
If you are on the same network you can open up a Terminal : ping your_ip_address hit Ctrl-C on the keyboard to stop pinging then do a: arp -a a list should appear, look for the ip you just pinged and next to it is the MAC address of the device.
{ "source": [ "https://security.stackexchange.com/questions/64281", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52922/" ] }
64,443
I am reading a book on network security and when talking about user confusion it writes: "It is not uncommon for a user to be asked security questions such as Is it safe to quarantine this attachment? With little or no direction, users are inclined to provide answers to questions without understanding the security risks." Could someone please tell me, a confused user, what are the dangers of quarantining an attachment? My understanding is that a file in quarantine cannot interact with the OS in any way, thus it isn't a security risk but we also cannot analyze it to see if it is a virus.
No. Quarantine is nothing but a place to store the infected/suspicious files. When you quarantine a file it is deleted from its original place and moved to the quarantine location (to the path that your anti-virus program has for them). This is something like keeping a zombie inside a jail. Obviously it is not a threat as long as you don't open the cage. In most anti-virus programs, the quarantined files are stored in internal binary formats. Since there is no direct connection between the quarantined file and your system (the fact that your anti-virus program keeps it in its own storage format is also a plus point), it is not dangerous. Analyzing: Regarding analyzing an infected file, yes, it is not possible after quarantine. If you want to do that, you either try disinfecting it or restoring it to its original place (you have to disable your anti-virus program to do this, and this is the place where you are opening the cage) and then analyze it. But remember the zombie might eat you up (unless you are good with shotguns)! So it is at your own risk. Why not just send the infected/suspicious files to the anti-virus program's team? They might give you a better picture after inspecting it with their updated virus signatures. Bottom line: A quarantined file is not dangerous. But analyzing it yourself might be.
{ "source": [ "https://security.stackexchange.com/questions/64443", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53048/" ] }
64,535
While I was searching online for information about Linux security, the most typical explanation was: Linux is secure, because the root password is required to access the kernel and install new applications - therefore external malicious software can't do any harm as long as the administrator is the only person to know the password. OK, that sounds good. But when a password is the only thing that stands between restricted access and total control of the system, is the system really that secure? By that I mean all kinds of tricks hackers think of to access systems, and particularly to reveal data (passwords).
"Linux" (as some aggregate of all the installations) typically has quite a bit more than just a password denying external access. First, there's a uniform set of discretionary access controls: read/write/execute permissions, for user/group/everybody else. Traditionally, these permissions are actually used, rather than ignored and/or worked around. Additionally, some subset of installations have SELinux installed, configured and working, so that finer-grained, access control list style of permissions is enforced. Second, servers usually run as a designated special user. NTP processes run as as user "ntp". Web server processes run as a user "http", MySQL databases runs as a user "mysql", for example. The descretionary access controls described above almost always prevent the NTP user ID from doing much more than reading some of the HTTP user ID's files. Third, the software installed base is highly fragmented. There's a huge number of different distributions. After that, not every installation runs Apache HTTPD, or sendmail SMTP server. There are alternatives, and there's usually only a plurality of installations with a given server. Versions of software are also highly fragmented. With every distribution compiling and maintaining its own choice of web server, it's very, very rare for two installations to run a server that has the same bugs, or even the same compilation options. So, for instance, someone going after a Linux machine via WordPress password guessing can maybe guess the WordPress password. That might get the attacker something running as user "http" or "apache". Bad and horrifying as that might be, it's not everything. The "http" or "apache" user almost certainly can't overwrite very many files at all, only HTML and what have you in the DocumentRoot directory. It would take another leap, guessing the "root" password for some distributions, or exploiting a local privilege escalation, to get to some kind of universal file access. This really is multiple layers, but note that it's mainly by culture and tradition, and it's also a sort of "herd immunity". It's always possible that some combination of exploits would yield root access on a given system, but that combination probably wouldn't apply to very many other systems.
{ "source": [ "https://security.stackexchange.com/questions/64535", "https://security.stackexchange.com", "https://security.stackexchange.com/users/45983/" ] }
64,541
I have implemented a stateless auth over HTTP in Laravel, using JWTs. I send my username/password from the frontend. Server authenticates user, sends back a signed JWT with an expiry time. I'm using the HS512 algorithm to sign with a private key (only available to the server). Frontend stores the token for future requests. Frontend sends next request with the token included. Server verifies that the token is valid, and not expired, and lets the action continue if yes to both. When the token expires server sends a 'logged-out' message. All these communications happen over HTTPS. So I can see that this is secure from these points: Attackers can't sniff traffic and steal the JWT token because of HTTPS. Attackers can't generate and send any odd token because server verifies the signature using its private key. Attackers can't modify which user (and hence, the role+permissions of the requester) is making the request, because that's part of the sub claim in the token. But, I have two questions : What if there is a virus on the user's computer or mobile, and it stole a valid token from RAM or from the browser. It can then send more requests, and they will be accepted. Is there any way at all to protect against this? Is there another way to attack this system that I am not seeing?
The jti claim as described here is an optional mechanism for preventing further replay attacks. From the spec: 4.1.7. "jti" (JWT ID) Claim The "jti" (JWT ID) claim provides a unique identifier for the JWT. The identifier value MUST be assigned in a manner that ensures that there is a negligible probability that the same value will be accidentally assigned to a different data object; if the application uses multiple issuers, collisions MUST be prevented among values produced by different issuers as well. The "jti" claim can be used to prevent the JWT from being replayed. The "jti" value is a case- sensitive string. Use of this claim is OPTIONAL. This does ultimately make your server stateful, but it prevents against unlimited replays if you detect anomalous behavior, or if a user reports suspicious activity. Consider the following scenario. A user logs in. Your server generates a JWT, and stores the signature as well as some metadata (the user id and the type of client making the request, perhaps, and the jti ). User reports suspicious behavior. The application "signs out" the user of all devices by deleting all JWTs in the backend store attached to that user. Now the application can say "I know you've got a valid signature, but I'm not accepting it because I didn't create it." If your metadata is precise enough, you can use the jti plus additional information to, say, only sign the user out of given devices. As mentioned above, this does inevitably make your server stateful. This also doesn't outright prevent replay attacks, but it can shut down further such attacks after one has been detected. An alternative/additional method CAN outright prevent replay attacks to some degree, at the risk of potential inconvenience to the user. Make the user's IP address part of the claim AND stored metadata upon login, and validate that the IP using the JWT is the one you expect. This can be frustrating for a user that say, both works from home and a coffee shop, but it might be an acceptable requirement for high-security applications.
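To make the jti idea concrete, here is a hedged sketch in Python using the PyJWT library. The question is about Laravel/PHP, so treat this purely as an illustration of the concept, with an in-memory dict standing in for whatever server-side store you actually use:

    import datetime, uuid
    import jwt   # PyJWT

    SECRET = "server-side-secret"    # placeholder; keep the real key out of source control
    issued = {}                      # jti -> user id; stands in for a database table

    def issue_token(user_id):
        jti = str(uuid.uuid4())
        issued[jti] = user_id        # remember what we handed out
        payload = {
            "sub": user_id,
            "jti": jti,
            "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
        }
        return jwt.encode(payload, SECRET, algorithm="HS512")

    def verify_token(token):
        claims = jwt.decode(token, SECRET, algorithms=["HS512"])   # checks signature and expiry
        if claims["jti"] not in issued:
            raise PermissionError("token has been revoked")
        return claims

    def sign_out_everywhere(user_id):
        # Drop every jti issued to this user; their otherwise-valid tokens now fail.
        for jti in [j for j, u in issued.items() if u == user_id]:
            del issued[jti]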
{ "source": [ "https://security.stackexchange.com/questions/64541", "https://security.stackexchange.com", "https://security.stackexchange.com/users/5611/" ] }
64,568
Given that Site X uses HTTPS, how can it be blocked by a country? My browser reads: 128-bit encryption | ECDHE_RSA as key exchange. I say it's blocked since when I use Tor, it works fine. One important thing to point out is that it's not blocked in the typical sense we are used to see, which clearly shows a page that says it's blocked, instead, site X is blocked in a way that my browser just doesn't load the page and displays the error: This webpage is not available, Error code: ERR_CONNECTION_RESET for the HTTPS version, and that regular "page is blocked" page when requesting the HTTP version. Note that no other HTTPS sites are blocked ! Just this one! I assume this is evidence that excludes port blocking and protocol blocking. However, it leaves DPI; but there are other HTTP-blocked websites which have the HTTPS version still working! If they can DPI-block site X , why can't they block the other HTTPS sites the same way?
TL;DR: TLS only secures the content of a message. Not the metadata. When communicating over the clear net, it's important to remember that there are some portions of a given communication that cannot be secured using standard technologies. Unless you use something like TOR, your ISP will be able to determine who you're talking to even if you're using TLS. To use an analogy, imagine sending an envelope via the postal service. The contents of the envelope are completely inaccessible to anyone other than the recipient. Even if a postman were to somehow view the contents, they wouldn't be able to comprehend it (Perhaps you ran it through a Caesar cipher first? Hehe). However, in order to have the postal service send it to the correct address, the outside of the envelope must be marked with a plainly readable representation of the destination address. If the postal service didn't want anyone to be able to send letters to "Joe Schmoe, 123 Fake street," then they could just not deliver any letters with that address. Since the postal service can't read the contents of the message, they have no way to identify the intent of the letter. The only information that they have is the fact that the intended recipient is Joe Schmoe. They can't screen only the letters that they deem to be malicious; it's all or nothing. Similarly, the IP protocol (the routing protocol that TCP runs on top of) has plainly marked "sender" and "receiver" fields. TLS cannot encrypt this for two reasons: TLS runs on top of TCP/IP, and thus cannot modify parts of the packets that belong to those protocols. If the IP section was encrypted, then the carrier service (ISP routers) would not be able to identify where the packets need to go to. The firewall that your ISP or country is forcing all of your traffic through cannot inspect TLS traffic. They only know the metadata supplied by the TCP/IP protocol. They have also deemed that the site you want to access is more bad than good, so they drop all of the traffic to and from the site regardless of the contents. There is a method to secure even the metadata of online communications, but it is slow and not very scalable. TOR hidden services are one attempt at implementing this. Of course, hidden services only work within the TOR network, which can only be accessed by first connecting to a machine over the clear net. This means that the ISP or firewall still knows that you're proxying your data through the onion. No matter how you try, you will always leak some metadata. If they wanted to, they could reset all connections to TOR nodes in addition to the site they're currently blocking. If you are trying to establish a direct connection to a specific IP through a firewall, and the firewall has explicit rules to kill any traffic to or from that given IP, then connecting to that IP directly will always be fruitless. You will have to connect to it indirectly, either through TOR, a VPN, or some other proxy service.
{ "source": [ "https://security.stackexchange.com/questions/64568", "https://security.stackexchange.com", "https://security.stackexchange.com/users/47050/" ] }
64,589
Some financial websites that I use use passwords in a peculiar way. Instead of asking me the whole password string, they only ask me to enter e.g. "3rd, 5th and 8th character of your password", i.e. a random combination of characters of the password string. I think this would make sense if it's done using a shared random number table etc. But this is a password. In order to do this, they'd have to either store my password without hashing, or store the hashes for all the combinations they want to ask, which also sound bad. Am I right to think that this is a fairy bad security practice?
You are basically right; this is poor practice, for several reasons: As you note, it requires server-side storage of the password as plaintext or in some reversible format. Typing a password repeatedly works on "muscle memory", which allows the user to "remember" his password as a sequence of gestures on the keyboard; asking for specific letters exercises distinct parts of the brain and is likely to induce dangerous behaviours, e.g. writing down the password. If the site asks for only three characters, then an attacker has a fair chance of gaining access by simply responding with three random letters. Online dictionary attacks can work, too. (Of course, bank Web sites often couple that with a trigger-happy lockout system, but a smart attacker will switch to another target account before reaching the autolock limit.) The three main reasons why bank sites do that are: If they ask for only three letters, not always the same, then a key logger or shoulder surfer won't be able to immediately reuse his illegitimately acquired knowledge. It can be thought of as some sort of damage containment, where the password is only partially divulged. Asking for only some letters is a Hollywood-sanctioned security measure. It makes for a great show. Customers, being unaware of what information security really entails, will see that and think "wow, that's secure!". Many people in the industry are no less impressionable than average customers. Quite a few "security architects" will see such a system and also think "wow, that's secure!".
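To put a number on the "three random letters" point, a quick back-of-the-envelope calculation in Python (the alphabet size and lockout threshold below are assumptions, not any particular bank's policy):

    # Chance of guessing one 3-character challenge when each position is drawn
    # from an assumed alphabet of lowercase letters plus digits.
    alphabet = 26 + 10
    p_single = (1 / alphabet) ** 3

    # Chance of getting in within the attempts allowed before a lockout triggers.
    attempts_before_lockout = 3
    p_before_lockout = 1 - (1 - p_single) ** attempts_before_lockout

    print(f"one attempt       : 1 in {round(1 / p_single):,}")
    print(f"within {attempts_before_lockout} attempts  : 1 in {round(1 / p_before_lockout):,}")
    # Spread across thousands of target accounts, even these odds add up for an attacker.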
{ "source": [ "https://security.stackexchange.com/questions/64589", "https://security.stackexchange.com", "https://security.stackexchange.com/users/4222/" ] }
64,631
I'm setting up a home HTTP server which can send and receive JSON data to/from different clients (Android and iPhone apps). I'd like to allow access only to certain users and I'm considering using a simple username/password mechanism, as setting up client certificates seems a bit of an overkill for this small project. Of course I can't send clear passwords from the client to the server on plain HTTP, otherwise anyone with wireshark/tcpdump installed could read it. So, I'm thinking about the following mechanism: The HTTP server can be set up as HTTPS server The server also has username/password database (passwords might be saved with bcrypt) The client opens the HTTPS connection, it authenticates the server (so a server certificate is needed) and after exchanging the master key, the connection should be encrypted. The client sends the username/password in clear to the server The server runs bcrypt on the password and compares it with the one stored in the database Is there any problem with this kind of configuration? The password should be safe since it's sent on an encrypted connection.
Yes, this is the standard practice. Doing anything other than this offers minimal additional advantage, if any (and in some cases may harm the security). As long as you verify a valid SSL connection to the correct server, then the password is protected on the wire and can only be read by the server. You don't gain anything by disguising the password before sending it as the server can not trust the client. The only way that the information could get lost anyway is if the SSL connection was compromised and if the SSL connection was somehow compromised, the "disguised" token would still be all that is needed to access the account, so it does no good to protect the password further. (It does arguably provide a slight protection if they have used the same password on multiple accounts, but if they are doing that, they aren't particularly security conscious to begin with.) As MyFreeWeb pointed out, there are also some elaborate systems that can use a challenge response to ensure that the password is held by the client, but these are really elaborate and not widely used at all. They also still don't provide a whole lot of added advantage as they only protect the password from being compromised on an actively hacked server.
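For step 5, a minimal server-side sketch with the Python bcrypt package might look like the following (illustrative only; your stack will differ, and the plaintext still only ever travels inside the TLS tunnel as described above):

    import bcrypt

    # At registration: hash the password once and store only the hash.
    def store_password(plaintext: str) -> bytes:
        return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt(rounds=12))

    # At login: the client sent the plaintext over TLS; compare it against the stored hash.
    def check_password(plaintext: str, stored_hash: bytes) -> bool:
        return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

    stored = store_password("correct horse battery staple")
    print(check_password("correct horse battery staple", stored))   # True
    print(check_password("wrong guess", stored))                    # False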
{ "source": [ "https://security.stackexchange.com/questions/64631", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53208/" ] }
64,825
While writing an answer to this question on Server Fault, a thought that has been bouncing around my head for quite some time resurfaced again as a question: Is there ever a good reason to not use TLS/SSL? To further elucidate the question, I'm asking about the specific case in which things have been configured properly: Performance: Time to First Byte has been optimized. The cipher list is small enough to avoid multiple roundtrips from server to client. For mobile web applications, 2048 bit RSA server keys have been used as opposed to 4096 bit keys to lessen the computational load on clients. SSL sessions have a reasonable lifetime to avoid regeneration of session keys. Security: Perfect Forward Secrecy Hardened Cipher List Don't use obsolete and insecure protocols like SSLv2 and SSLv3 (if possible; not using SSLv3 means that IE 6 can't access your site). If done properly, is there ever a good reason to not use TLS/SSL for TCP communications?
The main issue with HTTPS everywhere is that it basically makes caching web proxies useless (unless you trust the proxy and have it impersonate sites for you, but that doesn't work with certificate pinning and is possibly illegal in some jurisdictions). For some use cases, like for example the distribution of signed software updates, HTTP makes perfect sense. If you have e.g. a hundred workstations behind a corporate proxy, having them download the update over HTTP will mean that for all but one of them it will be delivered from the proxy cache. That will be a lot more efficient than each of them downloading it over HTTPS... In short, HTTP makes sense as a transport layer if another mechanism is there to verify the authenticity and integrity of the content, and if confidentiality is of little importance... For web browsing by actual human beings, I find it very hard to justify not using HTTPS in this day and age.
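To illustrate the "another mechanism verifies authenticity and integrity" part: if the update's digest (or better, a signature) is obtained out of band, the download itself can safely travel over plain, cacheable HTTP. A hedged Python sketch with a placeholder file name and digest; real update systems typically verify a proper signature (e.g. GPG) rather than a bare hash:

    import hashlib

    def verify_update(path, expected_sha256_hex):
        """Return True if the downloaded file matches the digest published out of band."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == expected_sha256_hex

    # Placeholder values for illustration only.
    ok = verify_update("update-1.2.3.tar.gz",
                       "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
    print("update verified" if ok else "digest mismatch - discard the file")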
{ "source": [ "https://security.stackexchange.com/questions/64825", "https://security.stackexchange.com", "https://security.stackexchange.com/users/2374/" ] }
64,959
I understand why a hashing algorithm should be slow but is the method that makes it slow important to the strength of the hash? Everything I've read says that the algorithm should be computationally slow - hash the thing over thousands of iterations or concatenating it with huge strings to slow it down. This seems like it would put unnecessary strain on the CPU. Couldn't you just hash the password once with a good random salt and then just pause the thread for a set amount of time?
The goal isn't to make the hash slow for you to compute. The goal is to make the hash slow for an attacker to compute. More specifically, slow for an attacker who has fast hardware and a copy of both the hash and salt, and who therefore has the ability to mount an offline attack. The attacker need not pause a thread during his computations just because you added that to your application. He is going to use software and hardware that will allow him to compute the hashes as quickly and efficiently as possible. Therefore, in order to make it computationally hard for him, with all of his fast hardware and his efficient hashing software, the hash must be computationally hard for you to compute as well.
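A small comparison makes the distinction concrete (a Python sketch with illustrative numbers): the sleep only delays your server, while an iterated hash such as PBKDF2 imposes the cost on whoever computes the hash, including an offline attacker holding a copy of your database:

    import hashlib, os, time

    password = b"hunter2"
    salt = os.urandom(16)

    # Option A (useless): fast hash plus an artificial pause.
    start = time.perf_counter()
    fast_hash = hashlib.sha256(salt + password).hexdigest()
    time.sleep(0.5)                      # the attacker simply doesn't run this line
    print("sleep approach :", time.perf_counter() - start, "s (attacker pays ~0 per guess)")

    # Option B: the work is baked into the hash itself.
    start = time.perf_counter()
    slow_hash = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
    print("pbkdf2 approach:", time.perf_counter() - start, "s (attacker pays the same per guess)")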
{ "source": [ "https://security.stackexchange.com/questions/64959", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53509/" ] }
65,142
Fed up with the following definition. Reflected attacks are those where the injected script is reflected off the web server, such as in an error message, search result, or any other response that includes some or all of the input sent to the server as part of the request. Reflected attacks are delivered to victims via another route, such as in an e-mail message, or on some other web site. When a user is tricked into clicking on a malicious link, submitting a specially crafted form, or even just browsing to a malicious site, the injected code travels to the vulnerable web site, which reflects the attack back to the user’s browser. The browser then executes the code because it came from a "trusted" server Can somebody explain me with an example. And what is the main difference between Reflected XSS and Stored XSS?
So let's say you navigate to www.example.com/page?main.html and it puts you on the main page of example.com. Now you navigate to the index, which is located at www.example.com/page?index.html. You start to wonder, what other pages are there? So you type in www.example.com/page?foo and hit enter, and you get an error page which will say something like "Resource foo is not found". The thing to note here is that you put a parameter into the URL, and that parameter got reflected back to you as the user. In this case, it was the parameter "foo". Now the idea behind reflected XSS should be a bit more clear; instead of inputting a lame parameter like "foo", you input something like <script>alert(1)</script>foo and hit enter. On a vulnerable site, that entire parameter will get injected into the error page that pops up, the javascript will execute, and you'll get a popup in addition to the "Resource foo is not found" message. If you can induce somebody else navigate to the same link that you crafted, you can execute arbitrary javascript in their session.
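In code, the vulnerable pattern and its fix are both one-liners. The sketch below is illustrative (the page template is invented, not taken from any real site): the first function builds the error page the way a vulnerable server would, the second encodes the input first:

    import html

    def error_page_vulnerable(resource):
        # User input is pasted straight into the markup - reflected XSS.
        return f"<p>Resource {resource} is not found</p>"

    def error_page_escaped(resource):
        # Encoding the input turns the payload into inert text.
        return f"<p>Resource {html.escape(resource)} is not found</p>"

    payload = "<script>alert(1)</script>foo"
    print(error_page_vulnerable(payload))   # the script tag survives and would execute
    print(error_page_escaped(payload))      # &lt;script&gt;... is merely displayed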
{ "source": [ "https://security.stackexchange.com/questions/65142", "https://security.stackexchange.com", "https://security.stackexchange.com/users/40127/" ] }
65,174
Where do 4096 bit RSA keys for SSL certs currently stand in terms of things like CA support, browser support, etc? In the overall scheme of things is the increased security worth the risk of 4096 bit keys not having the widespread support and compatibility as 2048 bit keys do, not to mention the increased CPU load required to process the key exchange? Are things slowly turning in favor of 4096?
Advisories recommend 2048 bits for now. Security experts are projecting that 2048 bits will be sufficient for commercial use until around the year 2030. The main downside to using a larger key, such as 3072 or 4096 bits, is that the algorithm is slightly slower (still fractions of a second, though). Current browsers should all support certificates with keys up to 4096 bits. Some CAs won't issue a cert that large, so if you want a 4096-bit cert, you might have to shop around for a CA that will issue it.
{ "source": [ "https://security.stackexchange.com/questions/65174", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53029/" ] }
65,181
Some apps like Foursquare require the user to "check in" at physical places, in order to gain money benefits. Given that emulated GPS are available for customized versions of Android, it sounds easy to trick such apps. Given the monetary incentives, I am sure many people have tried, so how do apps prevent GPS cheating?
There are many ways to track user's location on a mobile device (I will go into how that works later). None of the tracking methods are particularly easy to spoof. It can be done but it is simply outside of the realm of the average user as it generally requires either a modified device (physically or programmatically) or external gear. Moreover, it is far easier for developers to simply tie multiple forms of tracking with simple logic (IE you can only 'check in' x number of times within timeframe y) than it is for a hacker to spoof an app like foursquare and get that 5% discount on dinner. Once again, it can be done, but [my theory is] so far it is not economical to hackers. As promised, here are a few of the big technologies leveraged in geographic tracking: GPS Reporting. This is probably most familiar to you. It is the most 'expensive' report because it requires relatively large amounts of power to read several GPS satellites. A pure GPS system is rarely used on mobile devices today. GPS devices can be spoofed programmatically (by changing the software's call to the GPS driver's position) even without modifying a device at all ( as seen here ). GSM Reporting . This is perhaps the most common way your location is tracked through the day while you are moving around. The concept is simple. Your phone, with normal messages to the cell towers nearby, triangulates your position at a given time. This method is extremely hard to spoof without external hardware or seriously altering your phone's functionality (IE if you spoof a cell tower then yes you are 'not tracked' geographically, but you also cannot make phone calls). Additionally, cell traffic is encrypted. You could potentially spoof the access point where the apps software talks to the phone's cell tower data driver, but that is also difficult to say the least. LAN Reporting. This is a pretty cool concept because it provides high levels of accuracy indoors (something that has traditionally been an issue). This requires much setup but at a minimum would allow apps to talk to registered wifi hotspots to confirm your location based on which wifi you are connected to. This is theoretically possible to spoof but it would largely depend on the levels of encryption for the legitimate connection's signature. WAN Reporting. This is nothing more than simple IP address reporting. This is perhaps the easiest to spoof, but I put it in here for completeness as it is very common to mobile friendly sites. Others (Bluetooth, RFID, Inertial nav, experimental, etc) There are quite a few other methods out there. One of my favorites is Inertial Navigation where there are no external transmissions (thus potentially very difficult to spoof) as it uses internal sensors and map to ascertain your position. This is seen in missile guidance systems as well as some apps. Life360 for instance uses a variation of this as it uses very little power (all the sensors are already active). Other things to remember: Developers can leverage any number of these technologies, thus making an app even harder to spoof. Most location data is stored on a mobile device (and sometimes in many places) until explicitly deleted. Thus a developer can (potentially) access previous location data points. So if you say you were at cafe mama's 20 times todays and the app simply talks to siri to find out your last geo-data point was 100 miles away, the app will wonder... 
Law Enforcement would have far greater ability to determine your real location so just because you may have spoofed an app doesn't mean you should bet your life on it (some comments elsewhere suggested that you could use this spoofing nefariously, so I thought I'd toss this in here).
{ "source": [ "https://security.stackexchange.com/questions/65181", "https://security.stackexchange.com", "https://security.stackexchange.com/users/634/" ] }
65,183
I connect to the internet using my company's Wi-Fi and Tor. Can they still see the websites I visit?
Generally speaking, no. Assuming: You follow Tor's best practices: "Tor does not protect all of your computer's Internet traffic when you run it. Tor only protects your applications that are properly configured to send their Internet traffic through Tor. To avoid problems with Tor configuration, we strongly recommend you use the Tor Browser." So if it's not set up correctly, things can still leak, like DNS requests for example. You are using a private computer (or at least one the company doesn't control). If they are admins on your computer they could install VNC or some logging software that will record your actions regardless of what software you use.
{ "source": [ "https://security.stackexchange.com/questions/65183", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53673/" ] }
65,200
In the past I have seen people keep a Google Drive document with FTP usernames and passwords in it. Is storing passwords in Google Drive a good practice?
Is Google Drive safe? I wouldn't say that Google drive is not a safe place to store sensitive information. But I bet you cannot rely on it. When it comes to protecting your sensitive data/privacy, it is always good to be sure, and just trusting drive is not being "sure". Solution: One word, Encryption . Encrypt your data before you store them in the Google drive. Now you don't have to depend on Google to protect your data security, it is you who should keep your mouth shut about your key ;) Note: Encryption is not always needed when storing normal data which falls under the general category(something like the things you share in the social networks,etc.) But it is really a great option when it comes to storing your confidential information in drive and in my experience, I am pretty sure that passwords fall under this category.
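One hedged way to do the encrypt-before-upload step is symmetric file encryption on your own machine. The Python sketch below uses the cryptography package's Fernet recipe purely as an example (an encrypted archive or an OpenPGP tool works just as well), and the file names are placeholders:

    from cryptography.fernet import Fernet

    # Generate once and keep the key OUT of Google Drive (password manager, offline copy, etc.).
    key = Fernet.generate_key()
    f = Fernet(key)

    with open("passwords.txt", "rb") as src:              # the file you were going to upload
        ciphertext = f.encrypt(src.read())

    with open("passwords.txt.enc", "wb") as dst:          # upload this one instead
        dst.write(ciphertext)

    # Later, on any machine that has the key:
    plaintext = f.decrypt(ciphertext)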
{ "source": [ "https://security.stackexchange.com/questions/65200", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53681/" ] }
65,244
My question is based on this tweet after I commented about forbidding + symbols in email addresses. The tweet says, "This is a measure we've taken for security reasons." This can be frustrating and inconvenient for people that have (or use) plus signs in their email address, and I'm sure web sites don't intend to do that. I'm unaware of the security vulnerabilities related to using the + character; is this something I should change to improve my own security? What is the security reason for a web site to disallow that character on an email field? Update: Meetup Support responded positively. Turns out it's more of a UX issue than a security one. They clarified in this tweet that they disallow + to prevent spam (?) and they acknowledged a suggestion for improving the user experience. (My intent here was not to gripe about Meetup; let's be gentle! I wanted to make sure I was not missing something important in my own web sites that receive email addresses.)
There is no security vulnerability per se with having a '+' in your email address. It's permitted as per RFC 2822 , and not particularly useful for SQL or other common forms of injection. However, many systems (let's call Meetup a system for this purpose) enforce security through whitelisting, not blacklisting. Someone defined a limited list of characters they expected to see in email addresses (probably upper, lower, numeric, ., _, and -) and wrote a filter to block anything outside that list. And they didn't think anyone would use +, so you're out of luck. This article describes how to set up Postfix to tag, and to use '-' instead of '+' because: However, during a recent discussion on the Postfix user list, it was mentioned that some websites (particularly banks) use JavaScript to try and validate email addresses when they are entered into online forms, and that many don’t allow the plus symbol as a valid character in an email address. I switched from '+' to '-' over a decade ago, for similar reasons.
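The whitelisting point is easy to see in code. The two patterns below are illustrative guesses at what such a filter might look like, not Meetup's actual validation, and neither is a full RFC 2822 grammar; they simply show how the stricter one silently rejects a perfectly valid plus-addressed mailbox:

    import re

    strict = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")     # no '+'
    relaxed = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")  # allows '+'

    addr = "jane.doe+meetup@example.com"
    print(bool(strict.match(addr)))    # False - the whitelist silently rejects the user
    print(bool(relaxed.match(addr)))   # True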
{ "source": [ "https://security.stackexchange.com/questions/65244", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9086/" ] }
65,280
There are serious tools and services such as Google Safe Browsing for malicious and phishing websites, and others fully dedicated to phishing websites such as Phishing.org . What is done against these websites (especially the ones that distribute malware, with drive-by download attack , for instance) once they are publicly flagged so ? Are they blocked later or something like that ? For example there has been a multi-national action against the GameOver Zeus Botnet . Is there something like that against the malicious websites ?
Okay, personal anecdote time. I'm a sysadmin in real life, working for an ISP that primarily caters to small to medium businesses. One of our larger customers operates, among other things, an exceptionally cheap and completely automated shared webhosting service. You sign up, pay a couple of bucks via credit card, and plonk your site down. No human interaction required of any sort. As the AS that controls their IP block, we used to get phishing site complaints regarding that server like clockwork . We immediately forward those to the NOC of the company, who then investigate and delete the site... But by the time that's done the phishing site is already being hosted somewhere else entirely. The credit card numbers used to pay usually turn out to be stolen (of course) and the registration request rarely comes from the same IP address more than once. So what do you propose should be done about this? Laws? Whose laws? The law of the country the server is in? Neither us (the ISP) or the company that runs the webhosting service is doing anything wrong. We're providing a perfectly legitimate service and respond as fast as reasonable when someone abuses said service for criminal purposes. I hate phishing and scammers as much as the next sysadmin who's had to deal with one dozen spambots too many, but we're already doing all we can and passing laws won't really change that. The law of the country the scammer is in? Chances are, that country already has laws that deal with this. The only problem is, which country? Like I said, the origin IP is rarely the same twice and likely a proxy running on another compromised host, most likely someone's bot-infected desktop computer. ISPs don't exactly keep logs of every connection going in or out of all systems in their IP range, so even if we could get everyone's cooperation by the time we'd start looking the trail has gone cold. You're also laboring under the mistaken impression that it's single site or easily isolated group of culprits. It isn't; between the myriad cheap registrars and webhosting services -- both of which are ultimately good things -- it's more like a crazy multiplayer game of Whack-A-Mole. Terrestrial law enforcement can sometimes catch a break, but they do that by following the money , not the IP traffic.
{ "source": [ "https://security.stackexchange.com/questions/65280", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
65,299
There are a lot of free antivirus software and free versions of commercial anti-malwares. Can we really trust these free antivirus programs? The same question about commercial antivirus software. Maybe they install backdoors on our computers?
Maybe commercial or free anti-malware installs backdoors Very true. Maybe they do. However – there are a lot of technically experienced individuals who are in a position to check, either through monitoring unexpected connections outbound, or through reviewing the code, so we can have reasonable assurance that they don't. But think about the alternative – we know that malware does install backdoors etc., so from a risk based perspective, which would you prefer? A control that you personally haven't vetted, but that many others approve, or a lack of control which leaves you open to malware. Pretty simple. Trust isn't required – it's just up to you to balance the risk factors for your circumstances. If you need any guidance, large companies use perimeter and desktop anti-virus and anti-malware, as well as anti-malware on laptops and other endpoints. It isn't cheap for them, so it is very much a risk based decision to spend that money.
{ "source": [ "https://security.stackexchange.com/questions/65299", "https://security.stackexchange.com", "https://security.stackexchange.com/users/-1/" ] }
65,332
I may have been under the wrong impression on how servers should be setup and what certificates actually get sent over during the server hello certificate message. I came across this today from Symantec/VeriSign: Root installed on the server. For best practices, remove the self-signed root from the server. The certificate bundle should only include the certificate's public key, and the public key of any intermediate certificate authorities. Browsers will only trust certificates that resolve to roots that are already in their trust store, they will ignore a root certificate sent in the certificate bundle (otherwise, anyone could send any root). If this is true, and root cert does not need to be installed on the server, well, there goes what I thought I knew about proper server setup and how the chain gets validated back to the root. Then again, when I look back at this question , under Certificates and Authentication section of Thomas Pornin's answer it says: So the client is supposed to do the following: Get a certificate chain ending with the server's certificate. The Certificate message from the server is supposed to contain, precisely, such a chain. This says pretty much the opposite of the Symantec/VeriSign message above, unless I am misunderstanding something. So: Does a server need the complete chain installed, including the root? If not, what does a browser use to compare against for validation since the server won't be supplying its root cert during the handshake? Does it simply look at the identity cert and get it from there? (Like opening up an identity cert on your local machine, and seeing the full chain in the certification path?) Again if this is true what about stand-alone client apps that use SSL libraries? Will this depend on the application since it may have different path building methods to a trusted root vs a browser?
The server always sends a chain. As per the TLS standard , the chain may or may not include the root certificate itself; the client does not need that root since it already has it. And, indeed, if the client does not already have the root, then receiving it from the server would not help since a root can be trusted only by virtue of being already there . What Symantec says is that they recommend not sending the root, only the rest of the chain. This makes sense: since the root is useless for validation purposes, you may as well avoid sending it and save the 1 kB or so of data bandwith per connection. Anyway: The server's certificate, with its chain, is not for the server. The server has no use for its own certificate. Certificates are always for other people (here, the client). What is used by the server is its private key (that corresponds to the public key in its certificate). In particular, the server does not need to trust its own certificate or any CA which issued it. In TLS, the server is supposed to send a chain; and the client is supposed to somehow use the server's public key for the handshake. The client is free to "know" that public key in any way that it wishes to, although of course it is expected that the client will obtain the server's public key from the certificate chain that the server just sent. Browsers will primarily try to use the chain sent by the server (by trying to link it below one of the roots already trusted by the browser); in case of failure, they will try to build other chains based on intermediate CA certificates that they already know or can download on-the-fly. In stand-alone applications, application writers are free to configure or bypass that step in arbitrary ways, which can be useful if the server's public key can be hardcoded in the application code (in which case the chain sent by the server is just completely ignored). Unfortunately, freedom to implement a custom certificate validation step is also freedom to wallop security, stab it to death and then throw its corpse in a ditch. It happens way too often.
{ "source": [ "https://security.stackexchange.com/questions/65332", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53029/" ] }
65,382
Is it technically or theoretically possible for any part of a mobile phone's circuitry to be still on and transmitting even while turned off and the battery has been removed? If so, how? I am thinking perhaps it could remain in a low power state and certain chips and capacitors could hold their charge for a while. Is this plausible or no? NB: this question is distinct and more specific than this question so it is not a duplicate.
If you have a phone with a removable main battery, you can try this: Disable the cellular network, GPS, WiFi, Bluetooth etc on your phone by turning them off manually and then putting the phone into flight mode . Make a note of the current time shown on the phone and on your PC by writing it down on paper. Shut down the phone, remove the main battery and the SIM card. Now wait 5 minutes. Put the main battery back in, but not the SIM card and then turn the phone on again. The phone should still be in flight mode. Note the current time on the phone again and the current time from your PC. Remember when in Flight Mode and without the SIM card, the phone cannot get a time update from the cell tower. If a phone just stored the current time in flash memory before shutting down, then on powering on the phone it would be 5 minutes behind and match the time you wrote down on paper. This is because it would not know how much time had elapsed from when the phone had shut off and when it was turned on again. However that is not what happened, it kept up with the current time even when shut off and the battery was removed. That is because of the second battery on the phone. This HowStuffWorks article looks into the inside of a digital mobile phone. Quoting from the article: "As you can see in the picture above, the speaker is about the size of a dime and the microphone is no larger than the watch battery beside it. Speaking of the watch battery, this is used by the mobile phone's internal clock chip." This would be similar to the function of a CMOS battery in every PC/laptop. There is also a February 2010 patent mentioning a primary and secondary battery of different size and capacity: "The first battery may discharge during use of the mobile phone without simultaneous discharge of the second battery. Upon discharge of the first battery, the second battery may not be automatically activated." A standard silver cell watch battery has a capacity of 200 mAh, a Zinc-air battery has a capacity of 620 mAh. From personal experience, my battery in my wristwatch has lasted for over a decade as it was just keeping the time, running alarms and the odd stopwatch. I am not certain which capacity the secondary battery is which is installed on most mobile phones but it could contain a newer, powerful one installed by the manufacturers. The design of mobile phones is typically a closed design. There is a new micro-battery that could fit in and power a credit-card-thin device and be charged 1,000 times faster than regular batteries. Therefore every time you charged your phone, it would charge the secondary battery as well. When the phone is turned off and the main battery is removed, the secondary battery could do more than just keep track of the time. It is all connected to the same circuitry so it could leave certain chips powered on in a low power state, for example the GPS, the microphone, the camera, or the closed baseband processor on every mobile phone. Now, hypothetically the secondary battery could be remotely activated and periodically do a burst transmission every x minutes and send GPS coordinates or microphone recordings back to your favourite 3 letter agency. If the chips were just passively transmitting, perhaps they need a StingRay or Reaper drone in the area to boost the signal. The cell tower itself may be powerful enough to pick up the signal. This article states that the NSA can technically listen in to the microphone of an iPhone even if it is switched off. 
In Edward Snowden's conversations with Laura Poitras he advised her to put her mobile in the freezer. In Snowden's NBC interview he mentions "They can absolutely turn them on with the power turned off to the device". He even took out the main battery in his phone before a recent Wired interview . Removing the main battery may not be enough to avoid surveillance. If I add a thick layer of tinfoil to my hat, perhaps everyone's mobile phones have been converted to an always on bugging and tracking device by NSA. They could have bugged every phone and home in the world whether their phones were turned on or not. You could get intel on anyone , anywhere . This could be why NSA does not allow mobile phones in their secure environments. It could activate every time it picks up speech then do a burst transmission at certain intervals. Maybe it only does that if you mention certain key words but maybe the phone does not have that capability with only the second battery running. Usually that analysis usually takes place in the basement of Fort Meade. I would not be surprised in the slightest if there was a big black screen system with a map inside the NSA with coloured dots all over it. The green dots would be the people with their cellphones turned on and transmitting audio and GPS coordinates back to NSA. Then the orange dots would be people in "flight mode" or who have turned their phone "off", but their phone is still communicating with the tower. Then blinking orange dots for people who have turned their phone off and removed their SIM card, but their phone is still trackable by the unique IMEI on their device. Then red dots for people who have turned their phone off and removed the main battery. Highly suspicious behaviour obviously. A Reaper or StingRay would then be dispatched to the red dot's location . How would you potentially stop surveillance from our mobile phone even with the battery removed? Open the phone and remove the secondary battery. This may be difficult if the battery is hardwired to the circuitry and could damage the phone. This will definitely void the warranty as well. Use a Faraday cage for when you want to go 'off the grid'. Some retailers are selling this as a small pouch or bag you can put your phone in. The effectiveness of this has not been tested. Do not take your cellphone to places where you do not want to be found. Destroy your cellphone and get a fully open source WiFi only device (if such a thing exists). Only turn on the WiFi when you want to connect to something. This means no closed source secondary operating system running the closed baseband processor, no GPS and no cell tower connection. You could connect out through various WiFi hotspots using a VPN or Mesh networks instead. As Brill would say , "The more technology you use, the easier it is for them to track you."
{ "source": [ "https://security.stackexchange.com/questions/65382", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53827/" ] }
65,429
Two years ago a professional gang broke into the Bureau de Change next door during the night. One of the cameras was a small IP camera which I had advised them to install as I thought an off-site recording would be a good thing. However, they were so professional that they recognized the IP camera and covered their faces before they disabled it; we have seen the entire footage. They did not touch the main security cameras, but dipped the DVR to which all six cameras were connected into a bucket of water. The police have not managed to recover anything from the HDD. The whole thing makes me wonder whether it is wiser to try to scare intruders away by means of visible CCTV or using hidden cameras to catch them, as visible cameras only made the job easier for the thieves.
From security point of view is it better to have camera hidden or visible Yes. Hidden and visible cameras emphasize different security values. Visible cameras provide deterrent value as much or more than recording value: They may cause less prepared or less dedicated criminals to think twice. They may encourage actions or routes which benefit the defender (e.g., walking around the visible camera field may force the attacker into the field of another, hidden, camera). Visible cameras are more susceptible to avoidance or disabling, though, because they are obvious. Hidden cameras provide improved recording value, in that they can be more survivable than visible cameras. However, they may have more limited fields of view, and they don't provide any deterrence. The security decision to go with hidden, visible, or both, should be dictated by the site and the threat. A convenience store is going to want to emphasize visible cameras, as deterrence is more valuable in that threat environment. A museum might emphasize hidden cameras, partially because deterrence is less of an issue and partially because obvious cameras detract from the atmosphere they want to provide for their customers. In all cases, the Digital Video Recorder (DVR) needs to be better protected than it was in this case. It should be protected well enough that legitimate employees can't tamper with it - certainly the attackers in the case you describe probably knew video stayed local to the site, knew it could be disrupted, and possibly even had "inside information" that allowed them to go straight to it.
{ "source": [ "https://security.stackexchange.com/questions/65429", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31356/" ] }
65,622
I opened a web page using https. When I looked at the page info provided by my browser (Firefox) I saw following: Connection encrypted: High-grade Encryption (TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, 128 bit keys). I got a question - what does this encryption technique means? In attempt to understand it I decided to find data on each part of it: TLS_ECDHE means ephemeral Elliptic Curve Diffie-Hellman and as Wikipedia says it allows two parties to establish a shared secret over an insecure channel. RSA is used to prove the identity of the server as described in this article . WITH_ AES_128_GCM_SHA256 : If I understand correctly - AES_128_GCM is a technique which provides authenticated encryption as described on this page . SHA256 is a hashing algorithm - one way function. But now I am trying to understand how to put all these things together. How does it work together as a whole and why it was setup in this way? In this YouTube video Alice and Bob use Diffie-Hellman keys exchange algorithm to agree on a secret key which they are going to use (this is TLS_ECDHE in our case). Isn't it enough to establish a secure connection (besides of RSA part which Alice and Bob did not do)? Why also there is this part WITH_AES_128_GCM_SHA256 exists?
Asymmetric Cryptography: There are two different parts to creating a TLS session. There is the asymmetric cryptography portion, which is an exchange of public keys between two points; this is what you saw in your Alice and Bob example. This only allows the exchange of asymmetric keys for asymmetric encryption/decryption. This is the ECDHE portion. The RSA portion describes the signing algorithm used to authenticate the key exchange. This is also performed with asymmetric cryptography: the idea is that you sign the data with your private key, and then the other party can verify it with your public key. Symmetric Cryptography: You encrypt symmetric encryption/decryption keys with your asymmetric key. Asymmetric encryption is very slow (relatively speaking), and you don't want to have to encrypt with it constantly. This is what symmetric cryptography is for. So now we're at AES_128_GCM: AES is the symmetric algorithm, 128 refers to the key size in bits, and GCM is the mode of operation. So what exactly does our asymmetric key encrypt? Well, we essentially want to encrypt the symmetric key (in this case 128 bits, 16 bytes). If anyone knew the symmetric key then they could decrypt all of our data. For TLS the symmetric key isn't sent directly. Something called the pre-master secret is encrypted and sent across. From this value the client and server can generate all the keys and IVs needed for encryption and data integrity. (See: Detailed look at the TLS Key Exchange.) Data Integrity: Data integrity is needed throughout this process, as well as within the encrypted channel. As you saw when looking up GCM, the encryption mode of operation itself provides for the integrity of the data being encrypted. However, the public key handshake itself must also be confirmed: if someone in the middle changed data while it was being transmitted, how could we know nothing was tampered with? This is where the negotiated hash function, SHA256, is used. Every piece of the handshake is hashed together, and the final hash is transmitted along with the encrypted pre-master secret. The other side verifies this hash to ensure all data that was meant to be sent was received. SHA256, as mentioned by another poster, is also used for the Pseudo-Random Function (PRF). This is what expands the pre-master secret sent between the two parties into the session keys we need for encryption. For other modes of operation, each message would be hashed with this integrity algorithm as well, and when the data is decrypted the hash is verified before using the plaintext. Here is a great explanation of how these derivations happen for different TLS versions. Put all these pieces together and you have yourself a secure mode of communication! You can list all possible ciphers that OpenSSL supports with openssl ciphers, and you can go further and print the details of any of these cipher suites with the -V flag. For example, $ openssl ciphers -V ECDHE-RSA-AES256-GCM-SHA384 prints 0xC0,0x30 - ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD, where 0xC0,0x30 represents the two-byte identifier for the cipher suite, Kx=ECDH represents the key exchange algorithm, Au=RSA represents the authentication algorithm, Enc=AESGCM(256) represents the symmetric encryption algorithm, and Mac=AEAD represents the message authentication check algorithm used.
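If you want to check which of these cipher suites your own client and a given server actually agree on, the negotiation result can be inspected programmatically. The following is a minimal sketch using Python's standard ssl module; the hostname is only a placeholder, and the suite reported will depend on what both endpoints support:

```python
import socket
import ssl

# Connect to a host over TLS and report which cipher suite was negotiated.
# "www.example.com" is just a placeholder; substitute any HTTPS host.
hostname = "www.example.com"
context = ssl.create_default_context()  # sensible defaults: certificate validation, modern protocols

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        name, protocol, secret_bits = tls.cipher()
        print("Negotiated cipher suite:", name)        # e.g. ECDHE-RSA-AES128-GCM-SHA256
        print("Protocol version:", protocol)           # e.g. TLSv1.2
        print("Symmetric secret bits:", secret_bits)   # e.g. 128
```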
{ "source": [ "https://security.stackexchange.com/questions/65622", "https://security.stackexchange.com", "https://security.stackexchange.com/users/52297/" ] }
65,660
If I understand correctly, according to this: http://blog.ircmaxell.com/2014/03/why-i-dont-recommend-scrypt.html , it looks like the attacker can just create an optimized version of scrypt that produces the same output with extremely high efficiency (e.g. with N = 2^14, p = 8, r = 1, it requires only 1KB instead of 16MB to run, while increasing the CPU work factor by only N/2).
The short answer is: no. That is not what I said, nor what I implied. Using the tradeoff that I identified and talked about, you can trade memory for CPU time. So yes, you can reduce a particular derivation from 16MiB to 8KiB (approximately). However, doing so will require several orders of magnitude more logic to be executed by the CPU. Some efficiency is gained by cache locality, but in general it should be much slower (on average, about 8,000 times slower than the 16MiB version, but as much as 16,000 times slower, depending on the exact random distributions of the algorithm). There is a more interesting alternative though. My attack was all-or-nothing: basically, completely eliminate the V array, at the cost of increasing complexity. But you can make a more nuanced tradeoff. You can cut the array in half and re-compute every other value, or cut it to every third value, trading off storage space for CPU time to a varying degree. This is commonly referred to as a TMTO defeater (Time-Memory-TradeOff defeater). I did a more thorough analysis, including a proposed fix, on this thread. It's worth noting that at least one of the proposals for the Password Hashing Competition uses my augmented algorithm. So no, scrypt is still incredibly strong, and for its primary use case (as a KDF) it's quite a bit better than the alternatives. The point I was trying to make with my post is that when not tuned optimally (used with improper settings) or with very fast settings, it can be significantly weaker than existing algorithms, specifically for password storage, where you know the result and are looking for the source material (the password).
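For readers wondering where the N, r and p knobs discussed above actually plug in, here is a minimal sketch using hashlib.scrypt from Python's standard library (3.6+, OpenSSL-backed). The parameter values are just the commonly cited example set, not a tuning recommendation:

```python
import hashlib
import os

# Sketch of scrypt as a password KDF. Memory use is roughly 128 * n * r bytes,
# so n=2**14, r=8 needs about 16 MiB per derivation.
password = b"correct horse battery staple"
salt = os.urandom(16)

key = hashlib.scrypt(
    password,
    salt=salt,
    n=2**14,                   # CPU/memory cost (must be a power of two)
    r=8,                       # block size; raises memory use and bandwidth
    p=1,                       # parallelization factor
    maxmem=64 * 1024 * 1024,   # allow the ~16 MiB this parameter set needs
    dklen=32,
)
print(key.hex())
```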
{ "source": [ "https://security.stackexchange.com/questions/65660", "https://security.stackexchange.com", "https://security.stackexchange.com/users/32724/" ] }
65,667
I understand that it is important to use well-known and well-tested hashing algorithms instead of designing my own. For these, reference implementations are often available, which initialize the needed constants with manually picked random numbers. When I use such implementations, does it improve security to pick custom constants? I would expect an attacker to use the most likely values when brute-forcing my hashes, which are those from the reference. A strong cryptographic hashing algorithm shouldn't be breakable, not even with rainbow tables when salting is used. So from a theoretical point of view, there shouldn't be much of a difference. However, I'm not an expert, so I'd like to hear what you say.
No. The constants are part of what makes the hash secure, and the constants in the specifications are what have been used in the cryptographic community's examinations of the hash functions that we currently believe are safe. It has been shown that intentionally badly chosen constants can break a hash function in subtle but exploitable ways, and coming up with your own constants could inadvertently leave you with a weak hash function as well.
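To see how transparently the published constants were chosen, you can recompute them yourself. FIPS 180-4 defines the SHA-256 initial hash values as the first 32 bits of the fractional parts of the square roots of the first eight primes, so a few lines of exact integer arithmetic reproduce them; this is the kind of "nothing up my sleeve" provenance you give up by inventing your own values:

```python
# Recompute the SHA-256 initial hash values H0..H7 (Python 3.8+ for math.isqrt).
from math import isqrt

primes = [2, 3, 5, 7, 11, 13, 17, 19]

for p in primes:
    # isqrt(p << 64) == floor(sqrt(p) * 2**32); its low 32 bits are exactly the
    # first 32 bits of the fractional part of sqrt(p), with no floating-point error.
    word = isqrt(p << 64) & 0xFFFFFFFF
    print(f"sqrt({p:2d}) -> 0x{word:08x}")

# The first line printed should be 0x6a09e667, matching H0 in the standard.
```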
{ "source": [ "https://security.stackexchange.com/questions/65667", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54014/" ] }
65,766
Is Google Hangouts encrypted? Would my work's IT guys be able to see pictures and text I send while on a work computer? Yes, I know I shouldn't be sending stuff I don't want them to see while at work, but it wasn't at work. I use Hangouts on my phone as well, and just realized I use the Hangouts Chrome plug-in at work and it was syncing all my conversations.
You should assume that they can. There are various ways they can do it, but whether they actually do it depends on the company's standards and practices. Some of the options: It's possible to install additional root certificates on the company's machines and use that to MITM all the traffic (traffic goes through the company's gateway/proxy anyway, and having a friendly root certificate on the user's PC makes a full MITM possible); It's possible to install "employee monitoring software", which is essentially a key logger + process monitor + screen grabber. Some tools have the capacity to locally intercept received messages in chats. It's possible to use remote access/collaboration tools to monitor what's happening on the screen of a particular PC. In short, if you don't have control over the PC you're working on (and with company workstations you typically don't), you cannot assume it's free from such surveillance implants. Hope that's not too scary :)
{ "source": [ "https://security.stackexchange.com/questions/65766", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54092/" ] }
66,025
According to https://support.google.com/accounts/answer/6010255 : Google may block sign in attempts from some apps or devices that do not use modern security standards. Since these apps and devices are easier to break into, blocking them helps keep your account safer. What are those "modern security standards" and why is it dangerous to allow apps which do not support them? Also, is it dangerous to enable the option (allow less secure apps) if you do not use those apps? If so, why? I believe it might be OAuth2.0 over IMAP (according to this page). As far as I know, this is Google's own extension and is not used by any other service providers. In my specific case I was trying to access my Gmail account using Kmail (v4.14.0) and IMAP.
In my understanding, "less secure apps" refers to applications that send your credentials directly to Gmail. Lots of things can go wrong when you give your credentials to third party to give to the authentication authority: the third party might keep the credentials in storage without telling you, they might use your credentials for purposes outside the stated scope of the application, they might send your credentials over a network without encryption, etc. Additionally, it could be an app that a user has installed locally such as an IMAP client (see the following support note from google: https://support.google.com/accounts/answer/6010255?hl=en ) "Less secure" isn't meant to say that apps that use your credentials are necessarily full of security holes or run by criminals. Rather, it is the category of behavior -- giving your credentials to a third party -- that is fundamentally less secure than using an authorization mechanism like OAuth. With authorization, you never allow the third party to see your credentials, so an entire category of problems are instantly eliminated. In OAuth, you authenticate directly to Gmail with your credentials and authorize an app to do certain things. The third-party app only sees an authorization token provided by Google as proof that you authenticated correctly and agreed to authorize that app. As for why it would be dangerous to enable less secure apps (versus using a particular app that may be untrustworthy), I'm not totally sure. Google's refusal to authenticate happens after you've already given away your credentials to the application. It seems to me that any time you provide your credentials to a third party, it doesn't matter whether or not you've allowed authentication by "less secure apps" -- someone can just load up a log-in screen and directly log in as you. The only possible cases I can think of are: Possibly "app-based" login attempts are treated differently from "human-based" login attempts, in particular how they treat sudden changes in location. Maybe the "less secure" app you're trying to use has servers on another continent, so it's not suspicious to Gmail when an app tries to log in as you somewhere else, while an attempt to use the log in screen from another continent by a human would be suspicious. Possibly "less secure" auth methods include some other login method that doesn't directly reveal your credentials to the third-party but are less secure than OAuth 2.0 in some other way (e.g., they're vulnerable to eavesdropping by an attacker, or they make it somehow easier for an attacker to access your account without knowing your password). Those two points are pure conjecture and very well may not be true in actual fact.
{ "source": [ "https://security.stackexchange.com/questions/66025", "https://security.stackexchange.com", "https://security.stackexchange.com/users/53605/" ] }
66,030
I was doing some cross-origin requests to Soundcloud's oEmbed and I noticed some strange behaviour. When doing a request from my localhost, which is on a regular HTTP connection, everything worked fine. However, when the code got pushed on our HTTPS test server, I got the following error from my browser: [blocked] The page at ' https://example.com ' was loaded over HTTPS, but ran insecure content from https://www.soundcloud.com/oembed?url=https://soundcloud.com/gwatsky/pumped-up-kicks-remix&format=js&callback=JSON_CALLBACK : this content should also be loaded over HTTPS. The request URL is //www.soundcloud.com/oembed?url=https://soundcloud.com/gwatsky/pumped-up-kicks-remix&format=js&callback=JSON_CALLBACK . Note the " www. ". I tried specifying the protocol to HTTPS and removing/specifying the protocol in the url parameter, but I kept getting the error. In the end I removed the www. from the URL and everything started working fine. tl;dr Why is having www. in this HTTPS URL considered a security risk?
I think you are making a huge assumption with your question, namely that Chrome considers the "www." in an HTTPS URL a security risk; this is simply not the case. What is happening is that SoundCloud is forcing users from www.soundcloud.com to soundcloud.com with a 301 redirect. The problem is that they are redirecting all traffic to http://soundcloud.com regardless of the originating protocol. This is simply a configuration issue with the SoundCloud website and has nothing to do with browser or web security standards. There is no inherent risk in the www subdomain of a website. The solution, as you have already figured out, is to remove the www in order to avoid the redirect. You might want to make the site's administrators aware of the issue if you are so inclined.
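If you want to confirm this kind of misconfiguration yourself, one way is to follow the redirect chain and print each hop. The sketch below uses the third-party requests library; keep in mind that the SoundCloud behaviour described above may have been fixed since this was written:

```python
import requests

url = ("https://www.soundcloud.com/oembed"
       "?url=https://soundcloud.com/gwatsky/pumped-up-kicks-remix&format=js")

r = requests.get(url, allow_redirects=True, timeout=10)

# r.history holds every intermediate redirect response, in order.
for hop in r.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print("final:", r.status_code, r.url)

# A hop whose Location starts with http:// is the downgrade that triggers
# the browser's mixed-content block.
```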
{ "source": [ "https://security.stackexchange.com/questions/66030", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54349/" ] }
66,355
I know it's possible for a computer to be infected just by visiting a website. I also know that HTTPS websites are secure. To my understanding, "secure" here refers to "immune to MITM attacks", but since such websites have certificates and such, is it right to assume they are "clean" and non-malicious?
No, HTTPS does not necessarily mean that a site is not malicious. HTTPS means very little as to the security of a site. It's specifically geared to keep your communication with the site secure from eavesdroppers and tampering, but offers nothing as to the security of the site itself. Yes, a site serving content over HTTPS has a certificate. That means that the individual who requested the certificate from the CA has an email address that is associated with the domain. Except in the case of Extended Validation certificates (the ones that offer a green address bar) this is literally all it means. Nobody from the CA is validating that the site is safe, secure, and not serving malware. Any site, with an SSL cert or without, can have bugs and vulnerabilities that an attacker can leverage to serve an exploit, or an admin or user who has the ability to either maliciously or unknowingly cause the site to serve malware. Even if the site itself does not, if it serves advertisements (or any other content, for that matter) from an ad network or another site, that source could be vulnerable. So, HTTPS means that nobody should be able to view or tamper with your traffic. That is all that it means.
{ "source": [ "https://security.stackexchange.com/questions/66355", "https://security.stackexchange.com", "https://security.stackexchange.com/users/31356/" ] }
66,359
I've been checking out various TLS certificates lately and noticed that most of the banks seem to have the following two issues: 1) They do not offer perfect forward secrecy 2) They are still using RC4 So far, all the ones I've checked (TD, JPMorgan, CIBC, Wells Fargo, Bank of America, ING/Tangerine, RBC) use TLS_RSA_WITH_RC4_128_SHA Though actually CitiGroup and Goldman Sachs are using AES in CBC mode with 256 bit keys, instead of RC4, but still, no forward secrecy, and I would think GCM+SHA256 is better than CBC+SHA, even with 128 bit keys vs 256. On the other hand, google, facebook, linkedin, and bitcoin exchanges/sites do offer perfect forward secrecy (typically with ECDHE), and unanimously use AES in GCM mode with SHA256 and 128 bit keys. So my question: why have our banks not upgraded their security, especially given recent attacks on RC4 (though they are mostly theoretical, they do point to possible issues, and RC4 is generally considered less secure than AES)? Also, why would they not offer perfect forward secrecy? Is that an oversight on their part, or possibly for regulatory reasons? I nearly emailed my bank about this today, but figured I'd throw the question up here first. Of course, cyber attacks on banks are all the rage these days - they ought to use the best encryption they can.
{ "source": [ "https://security.stackexchange.com/questions/66359", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54633/" ] }
66,364
I've looked around on Security.SE, but couldn't find much related to the following problem: I recently signed up for Chase Quick Pay as my method of being paid by a part-time job. I've heard of stupid password requirements , but never stupid username requirements. When I enrolled , I didn't really take into account how weird it was (I really should have, because I have forgotten my username twice in the past 3 months due to the policy): Now, the Stack Exchange engine has so graciously pointed me in the direction of a question with a good answer by @Polynomial that says it all in the first sentence: The authentication strength should come from the password, not the username. So, quite simply put: what is the point of having requirements like this on a username? Edit : For clarification - are there any real security benefits to having these kinds of username requirements? Bonus : I just looked at my own bank's username/password security policies , and found the following (not that abnormal): Note : I don't really understand the point of disallowing 'X' as the first letter... anyone who can answer that gets a +10!
There are two main arguments for enforcing requirements/restrictions on username choices. The first is that making usernames more difficult for attackers to predict helps resist online guessing attacks. While usernames aren't necessarily considered to be as secret as passwords, they are one of at least two pieces of information that must be stolen to impersonate a user. We shouldn't dismiss the advantage of this information remaining unknown to attackers. The 2007 paper Do Strong Web Passwords Accomplish Anything? (PDF) evaluates the threat of online password guessing and considers what our options are to combat it. A site can enforce a stricter password policy, which may hurt usability if users struggle to comply and experience frustration with the process. Or the site can enforce some username restrictions and keep the password policy the same. Their theory being that if an attacker has to try a lot of possible combinations it doesn't matter as much whether that's due to easy usernames and complicated passwords, or semi-complicated usernames and semi-complicated passwords. This ignores a third option of the site adding a second authentication factor to supplement the security of a password while leaving usernames alone. It also requires the site to work harder to avoid disclosing valid usernames that attackers can then pair up with possible passwords. The second argument is that by enforcing somewhat unique username requirements a site can hope to deter users from choosing the same credentials they've used on other sites. This is a very real threat and it's a difficult one for a site to do anything about besides enforcing unusual username requirements (or assigning IDs themselves). However, one awesome study from 2010 monitored actual username and password reuse by people between their banks and other sites. The study found that people did indeed reuse their bank password on other sites 73% of the time, but also found that even if a bank enforced unique username conventions people would then reuse that username on at least one other site 42% of the time. So enforcing unique username requirements lowered the chances of credential sharing, but certainly didn't eliminate that possibility. But that lower risk may still be seen by some sites as worth the inconvenience to users. Another possible reason, that's less of an argument and more of a requirement, is legacy system compatibility. Whatever potentially old software is doing back-end processing behind the snazzy web front end may have been coded to only accept usernames formatted a certain way. Without ripping out and replacing that software the site may be stuck passing on the restriction to customers. The 'User IDs cannot begin with an X' line leads me to believe this may be a factor in your case.
{ "source": [ "https://security.stackexchange.com/questions/66364", "https://security.stackexchange.com", "https://security.stackexchange.com/users/46732/" ] }
66,475
Our application has recently gone through penetration testing. The test found one critical security issue, which is essentially: The problem: The attacker sets up a WiFi spot. The user enters our site (which is HTTPS). Using a tool like Cain, the attacker either redirects the user to HTTP, or keeps them on HTTPS with a spoofed certificate. (Either way, the user has to go through the "get me out of here"/"add exception" page.) The user enters her user name and password. The password is posted to the MITM attacker, who can see it. Apparently Cain has a feature to automatically harvest user names and passwords, and our password would be easily caught there. Suggested solution: The report recommends encrypting or hashing the password on the client side (using JavaScript), in a way that cannot be replayed (e.g. using a one-time pad / time stamp). They could not recommend a specific scheme (they did mention client certificates, which might be impractical for a large application). Is it worthwhile encrypting or hashing the password before posting it? Is it common practice?
This recommendation makes no sense: The JavaScript code used to hash or encrypt the password has to be transferred to the client too. If the attacker is able to mount a man-in-the-middle attack he will be able to inspect the JavaScript code used for encryption too, or might even replace it with something else (like no encryption). Hashing instead of encryption makes even less sense, because this would only work if the server accepts the hashed password instead of the real password. In this case the attacker would not even need to know the password; he only needs to know the hash. What might help would be client certificates, because you cannot mount an SSL man-in-the-middle attack which preserves the client certificate (and a downgrade to plain HTTP would not send the certificate either). But, because you have to distribute the certificates to the clients first and make them install them inside the browser, this solution works only when you have few clients. Apart from that: if the attacker is in the middle he might not need the password at all. All he needs is for the victim to have logged in, and then the attacker can take over the existing session. It might also be useful to detect such a man-in-the-middle situation, so that you can inform the user and deny login from compromised networks. Some ideas to detect connection downgrades (that is, http to the attacker, which then forwards it as https): Check the protocol of the current location with JavaScript. Create a secure cookie with JavaScript; it should only be sent back if the site is served with https (that is, no downgrade). Include a script as HTTP which serves an image and check at the server side how the image was included. If it was included as HTTPS you can assume an HTTP downgrade attack, because you've explicitly included it with HTTP. If it gets accessed with HTTP you have a downgrade attack too (but with a smarter attacker), or the browser does not care about mixed content. And on how to detect a man-in-the-middle with faked certificates: Set up a second https site (with a different hostname) and construct an ajax request to this site in a way which is not simple for the attacker to change to http (e.g. create the URL dynamically). If the attacker just tries to MITM any site this ajax request will fail at least with some browsers, because the certificate is not trusted and the browser will only prompt the user for the primary certificates of a site. Of course all of this only helps against an attacker which is not really determined to hack especially you, but just takes the easiest targets. In this case all you have to do is be a bit harder to attack than the rest.
{ "source": [ "https://security.stackexchange.com/questions/66475", "https://security.stackexchange.com", "https://security.stackexchange.com/users/17306/" ] }
66,550
The unix read permission is actually the same as the execute permission, so if e.g. one process has read access it's also able to execute the same file. This can be done pretty easily: first the process has to load the content of the file which shall be executed into a buffer. Afterwards it calls a function from a shared library which parses the ELF in the buffer and loads it to the right addresses (probably by overwriting the old process as usual when calling execvp). The code jumps to the entry point of the new program and it's being executed. I am pretty sure Dennis Ritchie and Ken Thompson were aware of that issue. So why did they even invent this permission, what is the intention behind it, and what's the sense of it if it can't prevent any process of any user having read access from executing? Is there even such a sense, or is it superfluous? Could this even be a serious security issue? Are there any systems which rely on the strength of rw- or r-- permissions?
There's an even easier way to bypass the "execute" permission: copy the program into a directory you own and set the "execute" bit. The "execute" permission isn't a security measure. Security is provided at a lower level, with the operating system restricting specific actions. This is done because, on many Unix-like systems (especially in the days of Ritchie and Thompson), it's assumed that the user is able to create their own programs. In such a situation, using the "execute" permission as a security measure is pointless, as the user can simply create their own copy of a sensitive program. As a concrete example, running fdisk as an unprivileged user to try to scramble the hard drive's partition table: $ /sbin/fdisk /dev/sda Welcome to fdisk (util-linux 2.24.1). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. ... Changed type of partition 'Linux' to 'Hidden NTFS WinRE'. Command (m for help): w fdisk: failed to write disklabel: Bad file descriptor That last line is fdisk trying to get a "write" file descriptor for the hard drive and failing, because the user I'm running it as doesn't have permission to do that. The purpose of the "execute" permission is two-fold: 1) to tell the operating system which files are programs, and 2) to tell the user which programs they can run. Both of these are advisory rather than mandatory: you can create a perfectly functional operating system without the permission, but it improves the user experience. As R.. points out, there's one particular case where the "execute" permission is used for security: when a program also has the "setuid" bit set. In this case, the "execute" permission can be used to restrict who is permitted to run the program. Any method of bypassing the "execute" permission will also strip the "setuid" status, so there's no security risk here.
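The "copy it and mark your copy executable" bypass mentioned at the top is easy to demonstrate. Here is a small sketch; the source path is just a placeholder for any binary you can read but not execute:

```python
import os
import shutil
import stat
import subprocess
import tempfile

source = "/usr/bin/some-tool"   # placeholder: readable to you, but without the execute bit

workdir = tempfile.mkdtemp()
copy = os.path.join(workdir, "my-copy")

shutil.copyfile(source, copy)                          # read permission is all this needs
os.chmod(copy, os.stat(copy).st_mode | stat.S_IXUSR)   # set the execute bit on *our* copy

subprocess.run([copy])   # runs fine; only setuid semantics would have been lost in the copy
```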
{ "source": [ "https://security.stackexchange.com/questions/66550", "https://security.stackexchange.com", "https://security.stackexchange.com/users/21450/" ] }
66,577
We are a small startup. One of our products is a B2B web service, accessible through its https://service.example.com canonical URL. For testing purposes, that service also runs on different testing/staging/integration environments, such as https://test.service.example.com , https://integration.service.example.com , etc. We also have collaborative tools such as a bug tracker or a wiki. They run also on machines provided by our hosting provider. Their URLs are e.g. https://wiki.example.com , https://bugs.example.com . To keep things simple, we use a single certificate (for example.com ), and have added all the URLs above as Subject Alternative Names to that same certificate. All our servers thus use the same certificate. Is there any security issue in doing so that we should be aware of? If yes, what would have been the "correct" way of doing things?
Using the same certificate does not in any way affect the fundamental security of the connection that is established using it. The only possible "weakness" introduced by using the same certificate is that if that certificate expires or is leaked, all your sites will be affected. Since this certificate is on multiple servers, and some of them might be test servers with less security, there is the possibility that the private key of that certificate can be inadvertently leaked or exposed from one of these unchecked servers. This is certainly not a failure of the security provided by the certificate, but rather a failure in keeping the certificate's private key secret. The alternative would be to use separate certificates for all your sites, which would mean that you would have the administrative burden of having to renew, protect and monitor multiple certificates. If you properly protect the private key of that one certificate there is no reason why using it would introduce any additional security concerns.
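If you ever want to audit exactly which names the deployed certificate covers (and when it expires), the Subject Alternative Names can be read programmatically. A minimal sketch with Python's standard ssl module, using the question's placeholder hostname:

```python
import socket
import ssl

host = "service.example.com"  # placeholder for any of your HTTPS endpoints

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

# subjectAltName is a tuple of (type, value) pairs, e.g. ('DNS', 'wiki.example.com').
for kind, value in cert.get("subjectAltName", ()):
    print(kind, value)
print("expires:", cert.get("notAfter"))
```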
{ "source": [ "https://security.stackexchange.com/questions/66577", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54291/" ] }
66,592
If you have already encrypted files, are they still vulnerable to being encrypted a second time by a program like Cryptolocker, or would this protect them?
Yes they are still vulnerable. Encryption just transforms a sequence of bits into another sequence of bits (and assuming the encryption is good it will be computationally infeasible to reverse this process without knowledge of some secret). There's no reason why encryption can't be performed again on an already encrypted sequence of bits. It's possible certain ransomware implementations might look for specific files that are likely to be of high value, and encrypting these files might make them more difficult to recognise. However, I would not depend on this as my primary control against the threat of ransomware.
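To see why prior encryption is no obstacle, remember that ciphertext is just bytes and can be encrypted again. The sketch below uses Fernet from the third-party cryptography package, with the second key standing in for the ransomware author's key:

```python
from cryptography.fernet import Fernet

your_key = Fernet.generate_key()      # the key protecting your own files
ransom_key = Fernet.generate_key()    # stands in for the attacker's key

ciphertext = Fernet(your_key).encrypt(b"quarterly-report.xlsx contents")

# The attacker neither knows nor cares that this blob is already encrypted.
held_hostage = Fernet(ransom_key).encrypt(ciphertext)

# Recovery now needs *both* keys, applied in the right order.
recovered = Fernet(your_key).decrypt(Fernet(ransom_key).decrypt(held_hostage))
assert recovered == b"quarterly-report.xlsx contents"
```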
{ "source": [ "https://security.stackexchange.com/questions/66592", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54824/" ] }
66,594
So I'm trying to work out how to block users from downloading an attachment of a timed WordPress post before the article goes live. Attachments work as uploads on a custom field in an article. My current solution is to use htaccess to block the uploads archive, so visitors cannot browse it and thus see a new file before it is released. Also, if the client gives hard-to-guess names to the attachments then users will not be able to get to them without knowing the exact name (right?). .htaccess: Options -Indexes My questions: On a scale of 1-10, how easy is my current solution to hack? Is it possible to make this solution relatively safe? Is there another, safer solution for this? Could a plugin be developed to transfer the file from a safe place when the article gets released? Can ANY solution where the attachment is on the server/inside WordPress uploads be a safe solution? Is there a third-party solution for this, like a service that could release documents at a specific time? Thank you very much in advance
{ "source": [ "https://security.stackexchange.com/questions/66594", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54826/" ] }
66,606
Once you have generated your master PGP key, will the "data" in the private key ever change? For instance, if you add another subkey, uid, or any other data to the master key, do these changes need to be "written" to the "secret half of the key", modifying it in any way? Or are all changes made to the "public half of the key", or even some sort of third signed file with the details? That is, can you store the secret key on read-only media such as a CD-ROM, printed on paper, or tattooed onto the shaved head of one of your servants?
{ "source": [ "https://security.stackexchange.com/questions/66606", "https://security.stackexchange.com", "https://security.stackexchange.com/users/38377/" ] }
66,801
I'm learning about web security. I understand that passwords are hashed with a salt. Why aren't the salts encrypted with the user's own password? Couldn't this make password cracking much harder? Generating: generate a salt; encrypt it with AES using the user's password as the key; save the encrypted salt and the hash. Retrieving: decrypt the salt using the supplied password; hash salt + password; compare to the saved hash.
"Generate salt, encrypt it with AES using user's password as key, save the encrypted salt and hash." You could do this and it would be an effective salt; however, it wouldn't actually be any more secure than just using a regular salt. Let's consider two attack vectors: Password collisions: Obviously we don't want two users with the same password to have the same hashed password. To prevent this we use a random salt and assume that the probability of two users with the same password and the same randomly generated salt is effectively 0. However, if two users happen to have the same password and coincidentally the same salt then they will have the same hash regardless of whether the salt is encrypted with their password or not encrypted at all. Therefore, by encrypting the salt you're not making it any less likely that a collision will occur in the salt. The only factors which do make it less likely are the quality of the random values used for the salt and the number of bits. Brute force attacks: If an attacker is trying a brute force attack and you've encrypted the salt, the only extra step they have to do is decrypt the stored salt each iteration and use that to produce the hash they then compare to the hashed password to see if they have a match. This does require marginally more effort for an attacker, but when it comes to cryptography, making something take only two or three times longer isn't significant. If brute forcing a password now takes 3 days instead of 2 you haven't really made it materially more secure. If you're trying to make something secure you want it to take 5000 years instead of 2 days. If you want to slow down brute force attacks you'd be better off iterating your hashing process tens of thousands of times or using a hash function which is specifically designed to be computationally slow.
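That last recommendation, making every guess expensive rather than trying to hide the salt, is exactly what a purpose-built password hash gives you. Here is a minimal sketch with PBKDF2 from Python's standard library; the iteration count is only an illustrative figure and should be tuned to your hardware:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    salt = os.urandom(16)   # random, stored in the clear next to the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)   # constant-time comparison

salt, rounds, stored = hash_password("hunter2")
print(verify("hunter2", salt, rounds, stored))   # True
print(verify("guess", salt, rounds, stored))     # False
```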
{ "source": [ "https://security.stackexchange.com/questions/66801", "https://security.stackexchange.com", "https://security.stackexchange.com/users/55003/" ] }
66,949
I recently received an e-mail from Virgin Media indicating that I might have the Citadel virus. It obviously sounds like a fake e-mail, but I am almost certain it is genuine, as they had my name and account number, and a generic version of the e-mail is available on virginmedia.com, here: http://my.virginmedia.com/customer-news/articles/malware_email.html I went to the Bitdefender website to do the free online scan, and after thirty seconds it told me I was safe, but I wasn't too reassured that it could check my whole drive so quickly. I don't have any sort of security software, as I am normally quite careful about what I download and what sites I visit. In the past when I've made mistakes I've been able to find the files installed and uninstall them, but all my searches can't tell me what to look for, and if this is a proper virus rather than just the normal adware it might be more complicated than I thought. Can anyone recommend what I should do? Apparently this virus is quite good at hiding from antiviruses, but if the only way to find out for sure whether it's on my machine is to download a clever one and run it in safe mode, I will.
If you don't want to install an antivirus you can always use a rescue disk to scan your system. They require no installation, they are usually free, and they can hunt down and remove the virus even when it's attached to a system file, something you can't usually do on a live system. Some options: Kaspersky Rescue Disk (http://support.kaspersky.co.uk/viruses/rescuedisk/), Avira Rescue System (http://www.avira.com/en/download/product/avira-rescue-system), Bitdefender Rescue CD (http://download.bitdefender.com/rescue_cd/), AVG Rescue CD (http://www.avg.com/us-en/avg-rescue-cd-download), Dr. Web LiveDisk (http://www.freedrweb.com/livedisk/?lng=en).
{ "source": [ "https://security.stackexchange.com/questions/66949", "https://security.stackexchange.com", "https://security.stackexchange.com/users/55123/" ] }
66,961
GSM's vulnerabilities have been known for a long time now. UMTS was supposed to fix those problems. Why is GSM still used?
In order to make a cellphone tower UMTS-capable, various hardware upgrades need to be made to it. This costs money. For that reason, many cellphone towers, especially in rural areas, have not been upgraded yet. As long as there is not near-100% UMTS coverage, cellphones will still need to support a pure GSM connection to ensure that the user has connectivity in areas where no UMTS is available yet.
{ "source": [ "https://security.stackexchange.com/questions/66961", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35094/" ] }