source_id | question | response | metadata |
---|---|---|---|
15,259 | Chrome extensions, just like those in other browsers, often get quite extensive access to your browser data. In fact, most extensions I've installed require access to: Your data on all websites Your tabs and browsing activity And this got me wondering what that implies, exactly. Let's say somebody writes an evil extension and calls it "I-KNOW-EVERYTHING-YOU-DO, and an RSS Reader" (he's evil, but also honest). I really like to have an RSS reader, so I install this. I see this big warning about the extension requiring access to all of my data, but then again, so does every other extension, so I gladly grant this access. Thinking worst-case scenario, what can this extension do? Could it: Send a list of all the websites I visit to the maker? Capture data I input into forms? (like my personal data, passwords, etc.) See how long I have been on a website, and which pages I have visited? Access cookies? Access other files on my computer? (I guess not, given the sandbox environment, but I'm still wondering) Do anything worse? | Send a list of all the websites I visit to the maker? Yes Capture data I input into forms? (like my personal data, passwords, etc.) Yes See how long I have been on a website, and which pages I have visited? Yes Access cookies? Updated; see the following comment from Bryan Field for this one. Bryan Field: Great answer, except for number 4. Cookies without the httponly flag can be accessed for sure; beyond that I don't know. I would add that it is likely that the extension could manually call, for example, your Gmail page and get all your emails, even if you do not have Gmail open while the extension is running. You need only to be logged in and it can call those pages. So even if the httponly cookies cannot be directly viewed (number 4), it doesn't really matter, because the cookies can still be indirectly and effectively used. Access other files on my computer? (I guess not, given the sandbox environment, but I'm still wondering) No – like you say, the sandbox will prevent that. Do anything worse? Read (and send) data on all the pages you visit. Some more details on why this access is often, but not always, needed are discussed in this question: Why do Chrome extensions need access to 'all my data' and 'browsing activity'? | {
"source": [
"https://security.stackexchange.com/questions/15259",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9804/"
]
} |
15,452 | I have a pet peeve with PHP's PDO class . As you can see in the Warning note: it reveals the database password by default if an exception is not caught (when display_errors is on, which is a likely situation on a production server run by novice webmasters). Here's my exchange with a PHP core dev. Me: It is quite unusual to reveal passwords since MySQL itself conceals passwords for debugging or when an error occurs. For example, the typical output of MySQL is "Access denied for user 'root'@'localhost' (using password: YES)"; it never reveals the password. As we know, many novice programmers or end-users who use third-party software that relies on PDO probably don't care about display_errors being turned off. I think this is a security risk. For example, a malicious user can run a spider and search for websites with the string "Fatal error: Uncaught exception 'PDOException'" and they can now harvest passwords. I hope you consider this. PHP Core Dev: Yes, running production server with display_errors=on and printing out backtraces with argument is a security risk. That's why you should never do it. It appears that the PHP core dev would rather rely on the end-user to turn off error displays. Granted, they added a warning, but it still seems unacceptable given that MySQL itself (the one PDO is abstracting) is not revealing the password and yet PDO prefers to reveal it. Do you think the PHP core devs are right to reveal passwords on error? | This question is loaded, but I completely agree on the answer: no, passwords should not be revealed in error messages. Passwords should never be written out anywhere without the explicit consent of the user. Consider the following roles and how they might access the wrong password that the user just typed: The user. That's the only role who should be able to know what she just typed. Fine. Someone shoulder-surfing. Most password entry forms obscure the password into **** ; showing a password on-screen defeats this protection. This is irrelevant if the password is not shown in a production environment (but more on this below). An administrator of the system where the password was. A rogue admin can generally obtain the password by inserting some surreptitious logging code; however, this is an active attack, at risk of being detected. Even if the password is only shown in development configurations, it means the code is there and can be reactivated with an innocent-looking change of many configuration variables at once. It is common practice to hide passwords from administrators: they are not logged by any non-web general-public software that I've ever seen; system administrators and IT support staff routinely avert their eyes when a user is typing a password in their presence. A developer of the application to whom a bug report has been made. That role should never have access to passwords. Including passwords in error traces, even if they are only shown to those who have the most use from the traces, is not good. An attacker stealing a backup of error traces, or being able to see a backtrace due to a misconfiguration (such as accidentally setting display_errors=on ). The value of the wrong password as an asset depends on what it is. If it is a typo on the correct password, the wrong password is practically as valuable as the correct password. If the password is a password for another site (oops, I typed my production site password in the user login form on the test environment), it is practically as valuable as a credential to that site. 
Revealing the wrong password has a high risk of disclosing a high-value asset. The developer's response is deeply unsatisfactory: Yes, running production server with display_errors=on and printing out backtraces with argument is a security risk. That's why you should never do it. First, there is a strong security concern even in a development environment, even if the logs are only ever shown to those who would legitimately see them. Second, everything “is a security risk”, but some risks are worse than others. Backtraces can reveal confidential information that may be assets in themselves or may become one step in an attack path. That's not as bad as handing out passwords on a silver platter. Zeroth and foremost, this response shows a very narrow view of security: “if you use my software correctly, you won't have any direct risk”. Even if that was true (it isn't), security is a holistic concern. It is difficult to ensure that every component of a larger system is used precisely as the author of that component intended. Hence components must be robust — this is also known as “defense in depth”. A rule like “never log passwords” is simpler than “don't show backtraces to those (who?) who shouldn't see them, and turn them off anyway (and if you need them, too bad)”. There might be a difficulty in recognizing what is a password and what isn't. It can be argued that to hide passwords creates the expectation that passwords will always be hidden, which is a bad thing if they aren't. I think the success rate is more than enough to justify making a best effort here. (A short sketch of this failure mode, with Python standing in for PHP, follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/15452",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9317/"
]
} |
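The entry above is about PHP's PDO, but the failure mode is not PHP-specific: any stack that renders uncaught exceptions together with argument values (verbose debug pages, cgitb-style traceback dumps) can leak the credentials passed to the connect call. The following is only a hypothetical Python sketch of the general principle the answer argues for, keep secrets out of error output and catch exceptions where credentials are in scope; connect() here is a stand-in, not a real database driver.

# Hypothetical sketch: connect() stands in for a real driver call that receives the password.
def connect(dsn, user, password):
    raise ConnectionError("could not connect to %s" % dsn)

def open_db(dsn, user, password):
    try:
        return connect(dsn, user, password)
    except ConnectionError:
        # Re-raise a sanitized error naming the DSN and user but never the password,
        # so whatever renders the failure has no secret left to display.
        raise RuntimeError("database connection failed for %r on %r" % (user, dsn))

try:
    open_db("db.example.com:3306/app", "app_user", "s3cret")
except RuntimeError as exc:
    print(exc)   # prints the sanitized message only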
15,532 | Microsoft allows a CA to use Cryptography Next Generation (CNG) and advises of incompatibility issues for clients that do not support this suite. Here is an image of the default cryptography settings for a 2008 R2 CA. This machine is a non-domain connected Standalone CA: Here are the installed providers. The CNG providers are marked with a # sign My intent is to have a general-purpose offline Root-CA and then several Intermediate CAs that serve a specific purpose (MSFT-only vs Unix vs SmartCards etc) What are the ideal settings for a Root Certificate with an expiration of 5, 10, and 15 years? CSP Signing Certificate Key Character Length Since this is a RootCA, do any of the parameters affect low-powered CPUs (mobile devices)? | Note: This is a (very very long) compendium of various recommendations and actions that Microsoft, NIST, and other well respected PKI and cryptography experts have said. If you see something that requires even the slightest revision, do let me know. Before I get into configuring the CA and its subs, it's good to know that even though MSFT's CryptoAPI requires a self-signed root, some non-MSFT software may follow RFC 3280 and allow any CA to be the trusted root for validation purposes. One reason may be that the non-MSFT software prefers a lower key length. Here are some configuration notes & guidance on setting up a CA ROOT and the Subs: Storing the CA's Private Key Best: Store the key on an HSM that supports key counting. Every time the CA's private key is used, the counter will be increased. This improves your audit profile. Look for FIPS140 Level 3 or 4 Good: Store the private key on a smart card. Though I'm unaware of any smart card that offers key counting, enabling key counting may give you unexpected results in the event log Acceptable: Store the private key in Windows DPAPI. Ensure that these keys and the Key Enrollment agent don't end up in Roaming Credentials . See also: How to enumerate DPAPI and Roaming Credentials Key Length Don't use 1024 as a key length... NIST phased it out in 2011, and MSFT won't ever add it into your Trusted Root CA store since it won't meet the minimum accepted technical criteria. Root CAs that support legacy apps should never be larger than 2048 bits. Reason: MSFT Support sees many cases where Java apps or network devices only support key sizes of up to 2048 bits . Save the higher bit lengths for CAs that are constrained for a specific purpose (Windows vs network devices) etc. The NIST recommends 2048 or 3072 bits. ECC is supported, though it may cause issues with device interoperability. Plan for the strongest possible encryption (key length) throughout the PKI, otherwise expect mixed security benefits . Mobile clients have issues (high CPU) or incompatibility with large keys Expiration The algorithm & key length can have a bearing on how long you want certificates to be valid, because they effectively determine how long it might take an attacker to crack them, i.e. the stronger the cryptography, the longer you might be prepared to have certificates valid for. One approach is to establish the longest validity you'll require for end entity certificates, double it for the issuing CAs, and then double it again for the root CA (in a two-tier hierarchy). With this approach you would routinely renew each CA certificate when half of its lifetime is reached - this is because a CA can't issue certificates with an expiry date after that of the CA certificate itself. 
Suitable values can only really be determined by your organisation & security policy, but typically a root CA would have a certificate lifetime of 10 or 20 years. If you're concerned about compatibility, set the expiration date below 2038. This is due to systems that encode a date as seconds since January 1, 1970 in a signed 32-bit integer. Read more about this issue here. Choosing a Hash You may want to imitate the Federal PKI Management Authority and set up two PKI roots: one modern SHA-256 root for all devices that support it, and one legacy SHA-1 root. Then use cross certificates to map between the two deployments. Review this SHA-2 compatibility list for Microsoft software Notably: Windows 2003 and XP clients may need a patch for SHA2 algorithms, which include SHA256, SHA384, and SHA512. See more technical information Authenticode and S/MIME with SHA2 hashing is not supported on XP or 2003 "Regarding SHA-224 support, SHA-224 offers less security than SHA-256 but takes the same amount of resources. Also SHA-224 is not generally used by protocols and applications. The NSA's Suite B standards also do not include it." source "Do not use SHA2 suite anywhere in the CA hierarchy if you plan to have XP either trust the certificate, validate the certificate, use the certificate in chain validation, or receive a certificate from the CA. Even though XP SP3 allows validation of certificates that use SHA2 in the CA hierarchy, and KB 968730 allows limited enrollment of certificates that are signed by a CA using SHA2, any use of discrete signatures blocks out XP entirely." ( source ) Choosing a Cryptographic Provider View this list of providers for more information Enable random serial number generation As of 2012, this is required if you use MD5 as a hash. It's still a good idea if SHA1 or greater is used. Also see this Windows 2008R2 "how to" for more information. Create a Certificate Practice Statement A certificate practice statement is a statement of the practices that IT uses to manage the certificates that it issues. It describes how the certificate policy of the organization is interpreted in the context of the system architecture of the organization and its operating procedures. The IT department is responsible for preparing and maintaining the certificate practice statement. ( source ) NOTE: In some situations, such as when digital signatures are used on binding contracts, the certificate practice statement can also be considered a legal statement about the level of security that is provided and the safeguards that are being used to establish and maintain the security level. For assistance writing a CPS statement, here is a Microsoft-produced "Job Aid" Best Practice: Although it is possible to put freeform text into this field (see notice below), the ideal solution is to use a URL. This allows the policy to be updated without reissuing the certificates; it also prevents unneeded bloating of the certificate store. [LegalPolicy]
OID = 1.3.6.1.4.1.311.21.43
Notice = "Legal policy statement text"
URL = "http://www.example.microsoft.com/policy/isspolicy.asp" Certificate Policies Also known as issuance policies, or assurance policies (in MSFT), this is a self defined OID that describes the amount of trust one should put into your certificate (high, med, low, etc). See this StackExchange answer for how to properly use this field. Ensure Application Policies and EKU Policies match Application Policies is an optional Microsoft convention. If you are issuing certificates that include both application policy and EKU extensions, ensure that the two extensions contain identical object identifiers. Enable CRL checking Normally, a Windows Server 2003 CA will always check revocation on all certificates in the PKI hierarchy (except the root CA certificate) before issuing an end-entity certificate. To disable this feature, use the following command on the CA, and then restart the CA service: certutil –setreg ca\CRLFlags +CRLF_REVCHECK_IGNORE_OFFLINE CRL Distribution Point Special Guidance for Root CAs This is optional in a Root CA, and if done incorrectly it may expose your private key . All CRL publication is done manually from an offline RootCA to all other sub-CA's. An alternative is to use an audio cable to facilitate one-way communication from the Root to Sub CA's It is perfectly acceptable to have the Root CA issue different CRL locations for each issued certificate to subordinate CAs. Having a CRL at the root is a best practice if two PKIs trust each other and policy mapping is done. This permits the certificate to be revoked. Getting the CRL "right" is pretty important since it's up to each application to do the CRL check. For example, smart card logon on domain controllers always enforce the revocation check and will reject a logon event if the revocation check cannot be performed or fails. Note If any certificate in the chain cannot be validated or is found to be revoked, the entire chain takes on the status of that one certificate. A self-signed root CA should not list any CDPs. Most windows applications don't enable the CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT flag and therefore ignore the CDP ( this is the default validation mode ). If the flag is enabled, and the CDP is blank for the self signed root cert, no error is returned. Don't use HTTPS and LDAPS. These URLs are no longer supported as distribution point references. Reason is that HTTPS and LDAPS URLs use certificates that may or may not be revoked. The revocation checking process can result in revocation loops when HTTPS or LDAPS URLs are used. To determine if the certificate is revoked, the CRL must be retrieved. However, the CRL cannot be retrieved unless the revocation status of the certificates used by HTTPS or LDAPS is determined. Consider using HTTP instead of LDAP- Although AD DS enables publication of CRLs to all domain controllers in the forest, implement HTTP instead of LDAP for revocation information publication. Only HTTP enables the use of the ETag and Cache-Control: Max-age headers providing better support for proxies and more timely revocation information. In addition, HTTP provides better heterogeneous support as HTTP is supported by most Linux, UNIX, and network device clients. Another reason to not use LDAP is because the revocation window to be smaller. When using AD LDAP to replicate CA information, the revocation window couldn't be less than the time for all sites in AD to get the CA update. Oftentimes this replication could take up to 8 hours... 
that is 8 hours until a smartcard user's access is revoked. 'Todo: the new recommended CRL refresh time is: ?????` Make all the URLs highly available (aka don't include LDAP for external hosts). Windows will slow down the validation process for up to 20 seconds and retry the failed connection repeatedly at least as frequently as every 30 min. I suspect that Pre-fetching will cause this to occur even if the user isn't actively using the site. Monitor the size of your CRL. If the CRL object is so large that CryptoAPI is not able to download the object within the allotted maximum timeout threshold, a “revocation offline” error is returned and the object download is terminated. Note: CRL distribution over HTTP with ETAG Support may cause issues with IE6 when using Windows 2003 / IIS6, where the TCP connection is continually reset. (Optional) Enable Freshest CRL : This non-critical extension lists the issuers and locations from which to retrieve the delta CRLs. If the “Freshest CRL” attribute is neither present in the CRL nor in the certificate, then the base CRL will be treated as a regular CRL, not as part of a base CRL/delta CRL pair. The Microsoft CA does not put the “Freshest CRL” extension into issued certificates. However, it is possible to add the “Freshest CRL” extension to an issued certificate. You would have to write code to add it to the request, write a custom policy module, or use certutil –setextension on a pending request. For more information about advanced certificate enrollment, see the “Advanced Certificate Enrollment and Management” documentation on the Microsoft Web site Warning If delta CRLs are enabled at a CA, both the base CRL and delta
CRL must be inspected to determine the certificate’s revocation
status. If one of the two, or both, are unavailable, the chaining
engine will report that revocation status cannot be determined, and an
application may reject the certificate. CRL Sizing and maintenance (CRL Partitioning) The CRL will grow 29 bytes for every certificate that is revoked. Accordingly, revoked certificates will be removed from the CRL when the certificate reaches its original expiration date. Since renewing a CA cert causes a new/blank CRL to be generated, Issuing CAs may consider renewing the CA with a new key every 100-125K certificates to maintain a reasonable CRL size. This issuance number is based on the assumption that approximately 10 percent of the issued certificates will be revoked prior to their natural expiration date. If the actual or planned revocation rate is higher or lower for your organization, adjust the key renewal strategy accordingly. More info Also consider partitioning the CRL more frequently if the expiration is more than 1 or two years, as the likelihood of revocation increases. The drawback to this is increased startup times, as each cert is validated by the server. CRL Security Precautions If using a CRL, don't sign the CRL with MD5. It's also a good idea to add randomization to the CRL signing key. Authority Information Access This field allows the Certificate validation subsystem to download additional certificates as needed if they are not resident on the local computer. A self-signed root CA should not list any AIA locations ( see reason here ) A maximum of five URLs are allowed in the AIA extension for every certificate in the certificate chain. In addition, a maximum of 10 URLs for the entire certificate chain is also supported. This limitation on the number of URLs was added to mitigate the potential use of “Authority Info Access” references in denial of service attacks. Don't use HTTP S and LDAP S . These URLs are no longer supported as distribution point references. Reason is that HTTPS and LDAPS URLs use certificates that may or may not be revoked. The revocation checking process can result in revocation loops when HTTPS or LDAPS URLs are used. To determine if the certificate is revoked, the CRL must be retrieved. However, the CRL cannot be retrieved unless the revocation status of the certificates used by HTTPS or LDAPS is determined. Enable OCSP Validation The OCSP responder is conventionally located at: http://<fqdn of the ocsp responder>/ocsp . This url needs to enabled in the AIA. See these instructions for Windows. Do know that full OCSP validation is off by default (though it should be "on" for EV certs according to the specification). In addition, enabling OCSP checking does add latency to the initial connection More secure systems will want to enable OCSP monitoring on the client or the server side OCSP Cache duration All OCSP actions occur over the HTTP protocol and therefore are subject to typical HTTP proxy cache rules. Specifically the Max-age header defines the maximum time that a proxy server or client will cache a CRL or OCSP response before using a conditional GET to determine whether the object has changed. Use this information to configure the web server to set the appropriate headers. Look elsewhere on this page for AD-IIS specific commands for this. Define a policy in issued certificates The parent CA defines whether or not to allow CA certificate policies from sub CAs. It is possible to define this setting when a issuer or application policy needs to be included in a sub CA. Example polices include an EKU for SmartCards, Authentication, or SSL/Server authentication. 
Beware of certificates without the Certificate Policies extension as it can complicate the Policy Tree. See RFC 5280 for more information Know that policy mappings can replace other policies in the path There is a special policy called anypolicy that alters processing There are extensions that alter anypolicy If you use certificate policies, be sure to mark them as critical otherwise the computed valid_policy_tree becomes empty, turning the policy into a glorified comment. Monitor the DN length enforcement The original CCITT spec for the OU field says it should be limited to 64 characters. Normally, the CA enforces x.500 name length standards on the subject extension of certificates for all requests. It is possible that deep OU paths may exceed normal length restrictions. Cross Certificate Distribution Points This feature assists where environments need to have two PKIs installed, one for legacy hardware/software that doesn't support modern cryptography, and another PKI for more modern purposes. Restrict the EKU In contrast with RFC 5280 that states “in general, [sic] the EKU extension will appear only in end entity certificates." it's a good idea to put constraints on the CA Key usage . A typical stand-alone CA certificate will contain permissions to create Digital Signatures, Certificate Signing, and CRL signing as key values. This is part of the issue with the FLAME security issue. The MSFT smart card implementation requires either of the following EKUs and possibly a hotfix Microsoft smart card EKU Public Key Cryptography for the Initial Authentication (PKINIT) client Authentication EKU, as defined in the PKINIT RFC 4556 It also has interesting constraints around validating EKU (link tbd). If you're interested in having any EKU restrictions you should see this answer regarding OIDs and this regarding contrained certificates Use caution with Basic Constraints "Path" The Basic Constraint should describe if the certificate is an "end entity" or not . Adding a path constraint to a intermediate CA may not work as expected since it's an uncommon configuration and clients may not honor it. Qualified Subordination for Intermediate CAs To limit the types of certificates a subCA can offer see this link , and this one If qualified subordination is done, revoking a cross signed root may be difficult since the roots don't update the CRLs frequently. Authority Key Identifier / Subject Key Identifier Note If a certificate’s AKI extension contains a KeyID, CryptoAPI requires the issuer certificate to contain a matching SKI. This differs from RFC 3280 where SKI and AKI matching is optional . (todo: Why would someone choose to implement this?) Give the Root and CA a meaningful name People will interact with your certificate when importing it, reviewing imported certificates, and troubleshooting. MSFT's recommended practice and requirement is that the root has a meaningful name that identifies your organisation and not something abstract and common like CA1. This next part applies to names of Intermediate/subCA's that will be constrained for a particular purpose: Authentication vs Signing vs Encryption Surprisingly, End users and technicians who don't understand PKI's nuances will interact with the server names you choose more often than you think if you use S/MIME or digital signatures (etc). 
I personally think it's a good idea to rename the issuing certificates to something more user friendly such as "Company Signer 1" where I can tell at a glance Who is the signature going to come from (Texas A&M or their rival) What is it used for? Encryption vs Signing It's important to tell the difference between a message that was encrypted between two parties, and one that was signed. One example where this is important is if I can get the recipient to echo a statement I send to them. User A could tell user B "A, I owe you $100". If B responded with an echo of that message with the wrong key, then they effectively digitally notarized (vs just encrypting) a fictitious $100 debt. Here is a sample user dialog for S/MIME . Expect similar UIs for Brower based certificates. Notice how the Issuer name isn't user friendly. Alternate Encodings Note: Speaking of names, if any link in the chain uses an alternate encoding, then clients may not be able to verify the issuer field to the subject. Windows does not normalize these strings during a comparison so make sure the names of the CA are identical from a binary perspective (as opposed to the RFC recommendation). High Security/Suite B Deployments Here is information regarding the Suite B algorithms supported in Windows 2008 and R2 ALGORITHM SECRET TOP SECRET Encryption:
Advanced Encryption Standard (AES): 128 bits (Secret), 256 bits (Top Secret). Digital Signature:
Elliptic Curve Digital Signature Algorithm (ECDSA): 256-bit curve (Secret), 384-bit curve (Top Secret). Key Exchange:
Elliptic Curve Diffie-Hellman (ECDH): 256-bit curve (Secret), 384-bit curve (Top Secret). Hashing:
Secure Hash Algorithm (SHA) SHA-256 SHA-384 For Suite B compliance, the ECDSA_P384#Microsoft Software Key Service Provider as well as the 384 key size and SHA384 as the hash algorithm may also be selected if the level of classification desired is Top Secret. The settings that correspond with the required level of classification should be used. ECDSA_P521 is also available as an option. While the use of a 521 bit ECC curve may exceed the cryptographic requirements of Suite B, due to the non-standard key size, 521 is not part of the official Suite B specification. PKCS#1 v2.1 XP clients and many non-windows systems do not support this new signature format. This should be disabled if older clients need to be supported. More Info I would only recommend using it once you move to ECC algorithms for asymmetric encryption. ( source ) Protect Microsoft CA DCOM ports The Windows Server 2003 CA does not enforce encryption on the ICertRequest or ICertAdmin DCOM interfaces by default. Normally, this setting is not required except in special operational scenarios and should not be enabled. Only Windows Server 2003 machines by default support DCOM encryption on these interfaces. For example, Windows XP clients will not by default enforce encryption on certificate request to a Windows Server 2003 CA. source CNG private key storage vs CSP storage If you enroll Certificate Template v3, the private key goes into the CNG private key storage on the client computer. If you enroll Certificate Template v2 or v1, the private key goes into CSP storage. The certificates will be visible to all applications in both cases, but not their private keys - so most applications will show the certificate as available, but will not be able to sign or decrypt data with the associated private key unless they support CNG storage. You cannot distinguish between CNG and CSP storages by using the Certificate MMC. If you want to see what storage a particular certificate is using, you must use CERTUTIL -repairstore my * (or CERTUTIL -user -repairstore my * ) and take a look at the Provider field. If it is saying "... Key Storage Provider", than it is CNG while all other providers are CSP. If you create the initial certificate request manually (Create Custom Request in MMC), you can select between "CNG Storage" and "Legacy Key" where legacy means CSP.
The following is my experience-based list of what does not support CNG - you cannot find an authoritative list anywhere, so this arises from my investigations over time: EFS
(user encryption certificates): not supported in Windows 2008/Vista, supported in Windows 7/2008 R2. VPN/WiFi client (EAP-TLS, PEAP client) on Windows 2008/7:
not supported with user or computer certificate authentication. TMG 2010:
server certificates on web listeners. Outlook 2003:
user email certificates for signatures or encryption. Kerberos:
Windows 2008/Vista- DC certificates System Center Operations Manager 2007 R2 System Center Configuration Manager 2007 R2 SQL Server 2008 R2- Forefront Identity Manager 2010 Certificate Management More information on CNG compatibility is listed here (in Czech, though Chrome handles the auto-translation well) Smart Cards & Issuing CAs If you plan on giving users a second smart card for authentication, use a second issuer CA for that. Reason: Windows 7 requirements Use the Windows command CERTUTIL -viewstore -enterprise NTAuth for troubleshooting Smartcard logins. The local NTAuth store is the result of the last Group Policy download from the Active Directory NTAuth store. It is the store used by smart card logon, so viewing this store can be useful when troubleshooting smart card logon failures. Decommissioning a PKI Tree If you deploy two PKI trees, with the intent to decommission the legacy tree at some point (where all old devices have become obsolete or upgraded) it may be a good idea to set the CRL Next Update field to Null. This will (should?) prevent the continual polling for new CRLS to the clients. The reasoning is that once the PKI is decommissioned, there will be no more administration, and no more revoked certs. All remaining certs are simply left to expire. More information on PKI decommissioning available here | {
"source": [
"https://security.stackexchange.com/questions/15532",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
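As a concrete companion to the entry above, the snippet below is a minimal, hypothetical sketch of a self-signed root that reflects several of the recommendations: a meaningful name, 4096-bit RSA, SHA-256 signing, a randomized serial number, validity kept below the 2038 rollover, a critical basicConstraints CA flag, and no CDP/AIA extensions on the self-signed root. It uses the third-party Python cryptography package rather than the Microsoft tooling the answer describes, and the names and dates are placeholders, not recommendations in themselves.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Corp Root CA 1")])
root_cert = (
    x509.CertificateBuilder()
    .subject_name(name)                                  # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())          # randomized serial number
    .not_valid_before(datetime.datetime(2024, 1, 1))
    .not_valid_after(datetime.datetime(2037, 12, 31))    # stays below the 2038 rollover
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())                          # SHA-256 signature, no CDP/AIA added
)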
15,574 | Windows seems to be saving my credentials for a variety of applications (terminal servers, etc) and I'd like to purge this data. How can I backup and purge this data? | The utility to delete cached credentials is hard to find. It stores both certificate data and also user passwords. Open a command prompt, or enter the following in the run command rundll32.exe keymgr.dll,KRShowKeyMgr Windows 7 makes this easier by creating an icon in the control panel called "Credential manager" | {
"source": [
"https://security.stackexchange.com/questions/15574",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
15,578 | So if someone steals my IDs (e.g. I didn't tear up a piece of paper that had these on it and I threw it away on the street), like my: Full name Permanent address Birth date/place My mother's name Identification card number Telephone number What bad things could happen with my IDs? | The utility to delete cached credentials is hard to find. It stores both certificate data and also user passwords. Open a command prompt, or enter the following in the run command rundll32.exe keymgr.dll,KRShowKeyMgr Windows 7 makes this easier by creating an icon in the control panel called "Credential manager" | {
"source": [
"https://security.stackexchange.com/questions/15578",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9616/"
]
} |
15,585 | Can a .sh file be a virus or something harmful? Is it like .exe files on Windows? If yes, can someone read this script ** and tell me if it safe and does it affect GnuPG security? It gives me an option to encrypt files with a right-click on Ubuntu. ** [mod note: link removed since we no longer analyse code] | .sh files are shell scripts. They are analogous to .bat files ( cmd scripts) under Windows. All of these (shell scripts, cmd scripts, .exe Windows executables, Linux executables (which usually have no extension)) are executable programs; if you run one, it can do anything you can do. So yes, shell scripts can be harmful. Treat a shell script (or a Perl script, or a Python script, or a Ruby script, etc.) with the same suspicion you would treat any other application. It's a bit harder to hide malware in a shell script without looking suspicious, because this is a script which can be read by people with knowledge of the scripting language. But it is not much harder; few people, even with the technical knowledge, would bother to review the code, so you could hope to go unnoticed. As a practical matter, there is less malware for Linux floating around than for Windows. This is probably mainly because Linux has a lot less of a market share than Windows on the desktop, so the payback for writing Linux malware is less. Also, there is a long-ingrained culture of sharing little improvements to the system in the Linux world, more so than in the Windows world; so the balance of probability says that this is someone sharing a little improvement and not malware. But it could be malware posing as a little improvement. In the end, you need to decide whether you can trust the site where you're getting this application, or the people who recommended this site. Favor programs that come from your distribution (i.e. that you can install from the software center), as they have undergone some review. Now regarding this specific program: I had a quick look, and it looks benign. I didn't see anything that would store your password anywhere without telling you or that would do things on your computer other than what it's advertised to do. Note that I only did a 2-minute review, which any remotely clever malware writer could get past. The program looks reasonably well-written. I wouldn't necessarily recommend this program unless you feel a pressing need that isn't addressed by packages in the Ubuntu distribution. Ubuntu comes with the seahorse GUI frontend to GPG (there is also kgpg for KDE users). You may also want to install seahorse-nautilus (or seahorse-plugin in older versions) for Nautilus integration. | {
"source": [
"https://security.stackexchange.com/questions/15585",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8780/"
]
} |
15,653 | I currently have a network set up with WPA2 and AES encryption, the password is 8 characters long but was randomly generated and contains no dictionary words. However I'm concerned about the increasing power of computers and their ability to crack handshakes, as such I was considering increasing the length. I'm aware that I can go up to 63 characters if I were extremely paranoid, but unfortunately I have to type this password into Android phones and other devices so I'd rather keep it reasonably short to allow for it to be easily typed. Would a 16-character random password be enough to secure a WPA2 encrypted network? What is the current recommendation for password lengths, especially for wireless networks and what password length would be sufficient to protect my network against a standard attack? | Yes, 16 characters is more than sufficient , if they are randomly generated using a cryptographic-strength PRNG. If you use lower-case, upper-case, and digits, and if you generate it truly randomly, then a 16-character password has 95 bits of entropy. That is more than sufficient. Actually, 12 characters is sufficient ; that gives you 71 bits of entropy, which is also more than sufficient for security against all of the attacks that attackers might try to attack your password. Once your password is 12 characters or longer, the password is extremely unlikely to be the weakest link in your system. Therefore, there's not much point choosing a longer password. I see people who recommend using a 60-character password, but I don't think there's any rational basis for doing so. My view is that usability is very important: if you make the security mechanism too hard to use, people will get annoyed and may be more reluctant to use it in the future, which isn't good. A secure mechanism that isn't used isn't doing anyone any good. That's why I prefer to choose a shorter password, like 12 characters or 16 characters in length, as it is perfectly adequate and more usable than a monstrous 60-character beast. Be careful how you choose the password. You need to use a cryptographically-strong PRNG, like /dev/urandom . For instance, here is a simple script I use on Linux: #!/bin/sh
# Make a 72-bit password (12 characters, 6 bits per char)
dd if=/dev/urandom count=1 2>/dev/null | base64 | head -1 | cut -c4-15 Don't try to choose passwords yourself. Human-chosen passwords are typically easier to guess than a truly random password. One very important caveat: There are other issues as well, beyond password length. It is very important that you turn off WPS , as WPS has major security holes . Also, I recommend that you use WPA2; avoid WPA-TKIP, and never use WEP. | {
"source": [
"https://security.stackexchange.com/questions/15653",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10377/"
]
} |
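Here is a Python equivalent of the /dev/urandom shell pipeline in the entry above, for anyone generating the passphrase from a script. The secrets module draws from the operating system's CSPRNG, and the entropy arithmetic matches the figures quoted in the answer (roughly 95 bits for 16 characters drawn from 62 symbols, roughly 71 bits for 12).

import math
import secrets
import string

alphabet = string.ascii_letters + string.digits       # 62 symbols
length = 16

passphrase = "".join(secrets.choice(alphabet) for _ in range(length))
entropy_bits = length * math.log2(len(alphabet))       # ~95.3 bits for 16 chars, ~71.5 for 12

print(passphrase, "%.1f bits of entropy" % entropy_bits)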
15,790 | It's still a commonly recommended way of hashing passwords, even if its insecurity had been proven in 1996: Therefore we suggest that in the future MD5 should no longer be implemented in applications like signature schemes, where a collision-resistant hash function is required. According to our present knowledge, the best recommendations for alternatives to MD5 are SHA-1 and RIPEMD-160. (The Status of MD5 After a Recent Attack, CryptoBytes, RSA Laboratories, VOLUME 2, NUMBER 2 — SUMMER 1996) Even after this study, and all upcoming papers about its defects, it has been recommended as a password hashing function in web applications, ever since. It is even used in some Certification Authorities digital signature (according to rhmrisk link below ) What is the reason why this message digest algorithm has not been prohibited for security purposes? Links: http://viralpatel.net/blogs/java-md5-hashing-salting-password/ https://rmhrisk.wpengine.com/?p=60 | To complement the good answer from @D.W.: for password hashing, MD5 is no more broken than any other hash function (but don't use it nonetheless) . The full picture: MD5 is a cryptographic hash function which, as such, is expected to fulfill three characteristics: Resistance to preimages: given x , it is infeasible to find m such that MD5(m) = x . Resistance to second-preimages: given m , it is infeasible to find m' distinct from m and such that MD5(m) = MD5(m') . Resistance to collisions: it is infeasible to find m and m' , distinct from each other, and such that MD5(m) = MD5(m') . MD5 is thoroughly broken with regards to collisions , but not for preimages or second-preimages. Moreover, the 1996 attack (by Dobbertin) did not break MD5 at all; it was a "collision on the compression function", i.e. an attack on one of the internal elements of MD5, but not the full function. Cryptographers took it as a warning flag, and they were right because the actual collision attack which was published in 2004 (by Wang) was built from the findings of Dobbertin. But MD5 was broken only in 2004, not 1996, and it was a collision attack . Collisions are not relevant to password hashing security. Most usages of a hash function for password hashing depend on either preimage resistance, or on other properties (e.g. how well the hash function work when used within HMAC , something which cannot be reduced to any of the properties above). MD5 has actually been "weakened" with regards to preimages, but only in a theoretical way, because the attack cost is still billions of billions of times too expensive to be really tried (so MD5 is not "really" broken with regards to preimages, not in a practical way). But don't use MD5 anyway . Not because of any cryptographic weakness, but because MD5 is unsalted and very fast . That's exactly what you do not want in a password hashing function. People who "recommend MD5" for password hashing just don't know any better, and they are a testament to a Truth which you should always keep in mind: not everything you find on the Internet is correct and trustworthy. Better solutions for password hashing are known, and have been used and deployed for more than a decade now. See this answer for details and pointers. | {
"source": [
"https://security.stackexchange.com/questions/15790",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5367/"
]
} |
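To make the "unsalted and very fast" point from the entry above concrete, here is a small Python sketch contrasting a bare MD5 digest with a salted, deliberately slow key-derivation function. PBKDF2 appears only because it ships in the standard library; bcrypt, scrypt, or Argon2, as recommended in the linked answer, need third-party packages, and the iteration count shown is an arbitrary illustrative value.

import hashlib
import os

password = b"correct horse battery staple"

# What the question describes: unsalted and extremely cheap to brute force.
weak = hashlib.md5(password).hexdigest()

# Salted and deliberately slow: every guess costs the attacker many iterations.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(weak)
print(salt.hex(), strong.hex())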
15,822 | The recent, widely publicized security incident where millions of LinkedIn passwords were exposed reminded me to tighten up my password practices. I'm looking at several password managers now and I'm especially curious about LastPass . They write on their homepage : LastPass is an evolved Host Proof hosted solution, which avoids the
stated weakness of vulnerability to XSS as long as you're using the
add-on. LastPass strongly believes in using local encryption, and
locally created one way salted hashes to provide you with the best of
both worlds for your sensitive information: Complete security, while
still providing online accessibility and syncing capabilities. We've
accomplished this by using 256-bit AES implemented in C++ and
JavaScript (for the website) and exclusively encrypting and decrypting
on your local PC. No one at LastPass can ever access your sensitive
data. We've taken every step we can think of to ensure your security
and privacy. How can I be sure that the bolded part is true? Is the method they describe capable of actually doing what they promise, can it prevent them from accessing my passwords? And how could I verify that they're actually doing what they're promising and not transmitting my password in any form they can access to their servers? | There is a way to see if LastPass is doing what they're saying. Use the Non-binary Chrome, Firefox, Opera, or Safari extension. This is 100% JavaScript and open in the sense that you can see it -- you can use network sniffing with a proxy (e.g. Paros) to see that the sensitive data is encrypted with AES-256-CBC from data generated from a key created with the number of rounds of PBKDF2-SHA256 you have setup on your account: http://helpdesk.lastpass.com/security-options/password-iterations-pbkdf2/ and this is done locally on your machine only. Then simply don't update/upgrade your extension until you want to audit it again. You could also audit the way we interact with the binary extension to decide if you trust that. That's a bit extreme for most people, but a number of people and organizations have audited LastPass and liked what they found. LastPass is always helpful to anyone wishing to audit, feel free to contact us if you'd like help. LastPass knows that it's perfectly reasonable to trust but verify, and encourage you to do so. There's a reason we tell people to utilize the extensions rather than the website: the extensions can't change as easily as the website could thus making them more secure. Source: I work for LastPass. | {
"source": [
"https://security.stackexchange.com/questions/15822",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10468/"
]
} |
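This is not LastPass's actual code; it is only a hypothetical Python sketch of the general pattern the quoted text describes (derive a key locally with PBKDF2-SHA256, then encrypt locally with AES-256-CBC), written with the third-party cryptography package. The salt, iteration count, and vault contents are placeholders rather than LastPass's real parameters.

import os
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

master_password = b"master password typed locally by the user"
salt = b"user@example.com"        # placeholder salt for the illustration

# Derive a 256-bit key locally; only ciphertext would ever leave the machine.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
key = kdf.derive(master_password)

iv = os.urandom(16)
padder = padding.PKCS7(128).padder()
plaintext = padder.update(b"example.com : hunter2") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
vault_blob = iv + encryptor.update(plaintext) + encryptor.finalize()
print(vault_blob.hex())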
15,865 | What are the various advantages of using extended validation (EV) certificates over normal certificates, which also provide a comparatively high degree of encryption, like RC4 128-bit? I know that the browser shows a green flag for EV certs. But is there any other benefit than just that? | Extended Validation certificates are intended to show the user more visibly the institution to which they were issued. The technical aspects of the certificates themselves are combined with visual cues in the user interface of the application verifying them: the green bar and a visible name next to the location bar in the browser. For example, the EV certificate at http://www.paypal.com/ will make the browser show a green bar and display "PayPal, Inc." next to it. This is designed not only to link the certificate to the domain owner (like standard domain-validated certificates do), but also to link it to a more physical institution (here, PayPal, Inc.). To do this, the CA must verify that the named institution is indeed the one owning the domain. Ultimately, this is more about making a more authenticated link between the domain name and the company name than making "more secure" certificates. From a cipher suite point of view (which is what determines the encryption algorithm and key size), EV certificates are no different from DV certificates (blue bar). Stepping back a little, you need to realise that the effectiveness of HTTPS relies on the user checking that it's used correctly. (The server has no way to find out whether the client is the victim of a MITM attack otherwise, unless using client certificates too.) This means that the users have to: check that HTTPS is used when they expect it to be, check that there are no warnings, check that the website they're using is indeed the one they're intending to visit, which leads to a couple of sub-points: checking that it's the domain name they expect, checking that the domain name belongs to the company they expect. EV certificates are intended to solve that last sub-point. If you already know that amazon.com belongs to Amazon.com, Inc. or that google.com belongs to Google Inc., you don't really need them. I'm not personally convinced that this approach completely works, since they can be misused (see NatWest/RBS example below) and some CAs seem to propagate vague (and potentially misleading) information as to what they really are, in an effort to promote them. In general, if your users already know that your domain name is yours, you don't really need one. Here are more details from a previous answer I gave to a similar question : [...] The domain-validated certificates guarantee you that the certificate
was issued to the owner of that domain. No more, but no less (I'm
assuming the validation procedure was correct here). In many cases,
this is sufficient. It all depends on whether the website you are
promoting needs to be linked to an institution that is already well
known off-line. Certificates that are validated against an
organisation (OV and EV certs) are mainly useful when you need to tie
the domain to a physical organisation too. For example, it's useful for an institution that was initially known
via its building (e.g. Bank of America) to be able to say that a
certificate for bankofamerica.com is indeed for the place where you've
given your physical money. In this case, it makes sense to use an OV
or EV certificate. This can also be useful if there is ambiguity
regarding which institution is behind the domain name (e.g. apple.com and apple.co.uk ), which is even more important if the similar domain
name is owned by a rival/attacker using the name similarity for bad
purposes. In contrast, www.google.com is what defines Google to the public;
Google has no need to prove that google.com belongs to the real
Google. As a result, it's using a domain-validated certificate (same
for amazon.com ). Again, this is really useful if the user knows how to check this.
Browsers don't really help here. Firefox just says "which is run by
(unknown)" if you want more details about the cert at www.google.com ,
without really saying what is meant by this. Extended-validation certificates are an attempt to improve this, by
making the organisation-validation procedure more strict, and by
making the result more visible: green bar and more visible
organisation. Unfortunately, this is sometimes used in a way that increases
confusion, I think. Here is an example that you can check by yourself:
one of the large UK banks (NatWest) uses the https://www.nwolb.com/ for its on-line banking services. It's far from obvious that the
domain name belongs to NatWest (who also own the more logical natwest.co.uk name, by the way). Worse, the extended validation (if
you check the name next to the green bar) is done against "Royal Bank
of Scotland Group plc". For those who follow financial news, it makes sense because both RBS
and NatWest belong to the same group, but technically, RBS and NatWest
are competitors (and both have branches on the high street in the UK
-- although that's going to change). If your user doesn't have that extra knowledge about which groups trade under which name, the fact
that a certificate is issued to the name of a potential competitor
should ring alarm bells. If, as a user, you saw a certificate on gooooogle.com issued to Microsoft or Yahoo, however green the bar is,
you should not treat this as Google's site. One point to bear in mind with EV certificates is that their
configuration is hard-coded into the browsers . This is a compile-time
setting, which cannot be configured later on (unlike normal trusted
certificate stores, where you could add your own institutional CA
cert, for example). From a more cynical point of view, some could
consider this as a convenient way for the main players to keep a
strong position in the market. | {
"source": [
"https://security.stackexchange.com/questions/15865",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6862/"
]
} |
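One practical way to see the difference described in the entry above is to pull a site's certificate and look at its subject: OV/EV certificates carry the validated organisation name there, while DV certificates typically show little more than the domain. A small Python sketch follows; the host name is only an example.

import socket
import ssl

host = "www.paypal.com"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

# The subject is a sequence of RDNs; flatten it into a dict for display.
subject = dict(rdn[0] for rdn in cert["subject"])
print(subject.get("organizationName"), "/", subject.get("commonName"))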
15,910 | In this interview posted on Krebs on Security , this question was asked and answered: BK: I’ve heard people say, you know this probably would not have
happened if LinkedIn and others had salted the passwords — or added
some randomness to each of the passwords, thus forcing attackers to
expend more resources to crack the password hashes. Do you agree with
that? Ptacek: That’s actually another misconception, the idea that the
problem is that the passwords were unsalted. UNIX passwords, and
they’ve been salted forever, since the 70s, and they have been cracked
forever. The idea of a salt in your password is a 70s solution. Back
in the 90s, when people broke into UNIX servers, they would steal the
shadow password file and would crack that. Invariably when you lost
the server, you lost the passwords on that server. Ptacek doesn't really explain why this is the case--he only says that salt has not prevented this type of attack in the past. My understanding is that salt can only prevent pre-computation of password hashes because of the space needed to store the precomputed hashes. But if you have compromised the system, you will have both the salt and the hash. So the time to dictionary attack the hash does not change significantly (just an extra concatenation of the salt to the dictionary word). Is my understanding correct? | Krebs follows up on this question, and Ptacek does clarify what he meant: BK: Okay. So if the weakness isn’t with the strength of the cryptographic algorithm, and not with the lack of salt added to the hashed passwords, what’s the answer? Ptacek: In LinkedIn’s case, and with many other sites, the problem is they’re using the wrong kind of algorithm. They use a cryptographic hash, when they need to use a password hash. In the next couple of paragraphs, he also elaborates on the reasons for it. The long and the short of it is that SHA1, with or without salt, is far too fast to be used as a password hash. It is so fast, that when computed using a GPU or something similar, you can brute force 10s of thousands of hashes per second. As is elaborated on later in the interview, LinkedIn should have been using bcrypt , which is an adaptive hash that would have slowed the brute force time down to the order of 10s of hashes per second. | {
"source": [
"https://security.stackexchange.com/questions/15910",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4015/"
]
} |
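A rough way to see the speed gap the answer above is pointing at, on a single CPU core and with the caveat that GPU rigs are orders of magnitude faster still: count salted SHA-1 guesses per second versus the cost of one deliberately slow PBKDF2 hash. PBKDF2 stands in here only because it is in the Python standard library; the answer's recommendation is bcrypt, and the iteration count is illustrative.

import hashlib
import os
import time

salt = os.urandom(16)

start, guesses = time.time(), 0
while time.time() - start < 1.0:
    hashlib.sha1(salt + b"password%d" % guesses).digest()
    guesses += 1
print("~%d salted SHA-1 guesses in one core-second" % guesses)

start = time.time()
hashlib.pbkdf2_hmac("sha256", b"password", salt, 600_000)
print("one PBKDF2 hash (600k iterations) took %.2f s" % (time.time() - start))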
15,934 | How does a typical user's run-of-the-mill Facebook/Twitter/Gmail/AIM/etc. account get hacked? Is it simply a matter of a weak password? Are they typically the victim of phishing or other social engineering attacks? Is malware typically involved? Is it the result of compromising the systems holding account credentials? All of the above? What methods do the bad guys employ? What are the prevalent trends in methods used to gain access to these accounts? I see friends get their Facebook or AIM accounts hacked, and without knowing how they were likely hacked, I have no idea how to advise them and can't really explain the nature of the problem to them. | @p____h already answered pretty well most of what occurs when an account is hacked, but I wanted to add my two cents regarding a recent hack of a Gmail account that is very interesting to read about! It's the recent CloudFlare attack . This is just AMAZING; the attacker used 4 flaws in various services, not only CloudFlare's: AT&T was tricked into redirecting my voicemail to a fraudulent voicemail box Google's account recovery process was tricked by the fraudulent voicemail box and left an account recovery PIN code that allowed my
personal Gmail account to be reset A flaw in Google's Enterprise Apps account recovery process allowed the hacker to bypass two-factor authentication on my CloudFlare.com
address; and CloudFlare BCCing transactional emails to some administrative accounts allowed the hacker to reset the password of a customer once
the hacker had gained access to the administrative email account. I highly recommend you look at the blog post, and if you are in a hurry, just read the infographic that shows the sequence of events. This attack would be worth a short movie! | {
"source": [
"https://security.stackexchange.com/questions/15934",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5363/"
]
} |
16,020 | I'm learning wireless penetration testing. It really is amazing. But it made me wonder, what about mobile phones? They are also means of wireless communication. So, our entire voice must be in the air surrounding us. So, What makes it difficult to intercept? By the way, is there any standard like 802.11 for Wi-Fi, for telecommunication over mobile phones? | For telecommunications, check out GSM, CDMA, TDMA, and EDGE. The two competing protocols in the United States are GSM and CDMA. The resources linked below are lacking when it comes to CDMA, but using site:defcon.org and site:blackhat.com in your Google searches will turn up some presentations. For interception of GSM, I refer you to a white paper Intercepting GSM traffic from the BlackHat conference: Intercepting GSM traffic - Black Hat Briefing - Washington D.C., Feb 2008 Abstract: This talk is about GSM security. We will explain the
security, technology and protocols of a GSM network. We will further
present a solution to build a GSM scanner for 900 USD. The second part
of the talk reveals a practical solution to crack the GSM encryption
A5/1. The corresponding video of the presentation: DeepSec 2007: Intercepting GSM traffic Also a talk on cellular privacy and the Android platform: DEFCON 19: Cellular Privacy: A Forensic Analysis of Android Network Traffic (w speaker) and a whitepaper on the Lawful Interception for 3G and 4G Networks (though see first comment on this answer): Lawful Interception for 3G and 4G Networks - White Paper by AQSACOM This document will first provide a brief description of the various
evolutions of public mobile networks that have been commercially
deployed, followed by a discussion on the evolution toward the newer
“long term evolution” technologies. We then discuss possible
configurations for lawful interception of the evolving mobile
networks, followed by descriptions of approaches to 3G / 4G
interception solutions now available from Aqsacom. And a SANS article on GSM security: The GSM Standard: an overview of its security Also note that smart phones typically just automatically connect to networks with SSIDs they remember. Sniff the airwaves for beacons that they are sending out and set up an evil access point with a matching SSID. Launch a remote attack across the network or man-in-the-middle the device and launch a client-side attack appropriate to the device. | {
"source": [
"https://security.stackexchange.com/questions/16020",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1152/"
]
} |
16,085 | How can I get the public key of a webpage like verisign , etc. using HTTPS protocol? | This command will show you the certificate (use -showcerts as an extra parameter if you want to see the full chain): openssl s_client -connect the.host.name:443 This will get the certificate and print out the public key: openssl s_client -connect the.host.name:443 | openssl x509 -pubkey -noout If you want to dig further, this question might be of interest. | {
"source": [
"https://security.stackexchange.com/questions/16085",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10639/"
]
} |
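For readers who would rather do the same thing from code than from the openssl command line, here is a rough PHP sketch using the streams and OpenSSL extensions; the host name is only an example and error handling is omitted.
<?php
// Fetch a site's certificate over TLS and print its public key (sketch, minimal error handling).
$host = 'www.example.com'; // example host
$context = stream_context_create(['ssl' => ['capture_peer_cert' => true]]);
$client  = stream_socket_client("ssl://$host:443", $errno, $errstr, 30,
                                STREAM_CLIENT_CONNECT, $context);
// The peer certificate captured during the TLS handshake.
$params = stream_context_get_params($client);
$cert   = $params['options']['ssl']['peer_certificate'];
// Extract and print the public key in PEM form.
$pubKey  = openssl_pkey_get_public($cert);
$details = openssl_pkey_get_details($pubKey);
echo $details['key'];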
16,086 | Is there any alternative for cryptography? I heard a lot about quantum cryptography, but is this the only stuff which have a chance to exist in the future? Are there any other kinds of cryptography? | This command will show you the certificate (use -showcerts as an extra parameter if you want to see the full chain): openssl s_client -connect the.host.name:443 This will get the certificate and print out the public key: openssl s_client -connect the.host.name:443 | openssl x509 -pubkey -noout If you want to dig further, this question might be of interest. | {
"source": [
"https://security.stackexchange.com/questions/16086",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10564/"
]
} |
16,253 | How does Google Maps determine my location? I've gotten some understanding of Google Maps' geolocation methods from here: http://friendlybit.com/js/geolocation-and-google-maps/ In the newer browsers (all but IE6, IE7, or IE8) may ask you for your
positioning information from the browser. It usually shows up as a bar
at the top of the browser. The browser then gathers two specific forms
of positioning information from your computer: your IP address and the
signal strength of any wireless network near you. That information is
then sent, if you approve it, to Google, which returns the coordinates
you are at the moment. [...] If your wireless receiver is turned off, or you’re at a stationary
computer, all calculations are based on the IP number. These kind of
lookups are quite arbitrary and inaccurate, I just get to the nearest
big city when trying to use it over a non-wireless line. But mobile
connections are slowly taking over landlines, so I guess this problem
will solve itself automatically. According to this article, Google only uses my IP address if I am using a desktop. However, when I use a VPN to go online (and I can confirm that another IP geolocation service shows me as being on another continent), Google Maps is still able to accurately show my location. How does this work? | If you consent, Firefox gathers information about nearby wireless access points and your computer’s IP address. Then Firefox sends this information to the default geolocation service provider... https://www.mozilla.org/en-US/firefox/geolocation/ Firefox knows the IP address, which is used to connect to the VPN provider. Many geolocation services, however, only look at the IP address they see from the server side. By the way: with Java installed, a website can read the local IP address without asking for permission: (new Socket("example.com", 80)).getLocalAddress().getHostAddress() Here example.com needs to be replaced with the website's own host name to obey the same-origin policy.
"source": [
"https://security.stackexchange.com/questions/16253",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10732/"
]
} |
16,354 | I'm looking at two comparable pieces of software which encrypt data on disk using a passphrase. One uses PBKDF2 to generate the encryption key from a passphrase, while the other uses two rounds of SHA256. What's the difference? Is one preferred over the other? | The difference is that PBKDF2 is slow by design, whereas SHA256 is a good hash function but is not slow by design. So if someone were to try a lot of different possible passphrases (say the whole dictionary, then each word with a digit appended, then each word with a different capitalisation, then two dictionary words, etc.), this process would be much slower with PBKDF2. But if your passphrase is truly secure, that is, very long, pretty random and out of reach of any systematic enumeration process, then it makes no practical difference, except that an attacker may spend more resources trying to break your passphrase (or maybe less if they decide to give up sooner). | {
"source": [
"https://security.stackexchange.com/questions/16354",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10790/"
]
} |
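A small PHP sketch contrasting the two schemes from the answer above. The iteration count, salt and password are placeholder values; the point is only that PBKDF2's cost is tunable, while two rounds of SHA-256 always cost roughly two hash calls.
<?php
// Contrast "two rounds of SHA-256" with PBKDF2-HMAC-SHA256 (illustrative values only).
$password   = 'example-passphrase';
$salt       = random_bytes(16);
$iterations = 200000; // arbitrary; tune so derivation takes as long as you can tolerate
// Scheme 1: double SHA-256 (fast, so cheap for an attacker to brute-force).
$start = microtime(true);
$fast  = hash('sha256', hash('sha256', $salt . $password, true), true);
printf("double SHA-256: %.6f s\n", microtime(true) - $start);
// Scheme 2: PBKDF2 (the same primitive, but iterated to a configurable cost).
$start = microtime(true);
$slow  = hash_pbkdf2('sha256', $password, $salt, $iterations, 32, true);
printf("PBKDF2 (%d iterations): %.6f s\n", $iterations, microtime(true) - $start);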
16,453 | Let's take lulzsec as an example; they registered lulzsecurity.com. There are two problems that I don't understand how they solved: They had to pay for it. Tracking down money is generally much easier than tracking down IP addresses. I assume they didn't use stolen credit cards (with all the attention they received, people would have quickly found out and taken away their domain).. And even with prepaid credit cards it's relatively easy to find out who bought it, with security cameras/etc. They had to have played by ICANN's rules - again, because of the attention they received, if they hadn't people would have found out and they would have lost the domain. This means giving valid contact information. | Here is one method of purchasing a domain name pretty close to anonymously. Use Tor . Understand its weaknesses Buy a prepaid credit card in cash, specifically one not requiring activation or signature. Randomly generate a full alias to use during online registration. Register an account at a domain registrar. Use the prepaid credit card to buy a domain. Repeat for other needed services. Note that 2. requires non-anonymous interaction and is therefore the riskiest. Let's try another path. Use Tor . Understand its weaknesses Randomly generate a full alias to use during online registration. Earn some Bitcoins anonymously online, thus seeding without human contact . Chose a domain registrar and DNS host that supports Bitcoins Repeat for other needed services. | {
"source": [
"https://security.stackexchange.com/questions/16453",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5152/"
]
} |
16,595 | When someone says that a particular digital certificate (like an SSL cert) has been "signed with a key", what does that imply? Does that mean the certificate simply includes a key that should be used for further message exchanges? Does that mean that the cert itself is encrypted and can only be decrypted with that key? Does it imply something else? Thanks in advance. | Ideally, it means that someone looked at the certificate and decided that it is correct and legitimate. Once they've done that, they want to tell people "Hey, I've verified that this certificate is good. I trust it". To do this, they use their signing key to sign the certificate. Now when someone gets the certificate they can see who signed the certificate. If they trust one of the signers, they can trust the certificate itself. This is the basis of Web Of Trust in PKI . The actual signing probably depends on what kind of certificate it is. I think this is a useful read . A digital certificate consists of three things: A public key. Certificate information. ("Identity" information about
the user, such as name, user ID, and so on.) One or more digital signatures. Typically the "one or more digital signatures" part is done by listing a set of encrypted hashes of the certificate. So when you want to sign a certificate, you would compute the hash of the certificate, encrypt it using your private signing key, and add it to the cumulative list of digital signatures. | {
"source": [
"https://security.stackexchange.com/questions/16595",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5073/"
]
} |
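A short PHP sketch of the sign-then-verify idea described in the answer above, using a freshly generated key pair. In real X.509 the signed blob is the DER-encoded certificate body rather than an arbitrary string, and the key parameters here are just example values.
<?php
// Sign some data with a private key, then verify it with the matching public key (sketch).
$data = 'certificate body would go here';
// Generate an example RSA key pair (parameters are illustrative).
$key = openssl_pkey_new(['private_key_bits' => 2048, 'private_key_type' => OPENSSL_KEYTYPE_RSA]);
// Signing: hash the data and produce a signature with the private key.
openssl_sign($data, $signature, $key, OPENSSL_ALGO_SHA256);
// Verification: anyone holding the public key can check the signature.
$pub = openssl_pkey_get_details($key)['key'];
$ok  = openssl_verify($data, $signature, $pub, OPENSSL_ALGO_SHA256);
echo $ok === 1 ? "signature valid\n" : "signature invalid\n";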
16,824 | While selecting unique passwords for each purpose is a great idea, in practice this rarely happens. Therefore many select passwords from a personal pool of passwords that are easily remembered. When authenticating into systems that are used infrequently it is very probable that a number of passwords from such pool are tried sequentially. Alternatively, failed passwords are very close to the actual password in case of a typo. Since almost nobody describes the password policy in effect, including how rejected passwords are handled, should one start assuming that these are collected in a database that is sold to the highest bidder? Is there an implementation guidance? What usually happens with a candidate password when this is rejected? Are they being logged, immediately discarded or left to hang around until garbage collected? Are failed password handling procedures part of any audited controls? It seems that there are plenty of implementation requirements and recommendations regarding how valid passwords should be handled, but vague regarding rejected password values. EDIT I will try to list here the various implementations that log failed login security credentials in order to get a feel about how widespread this procedure is: Content Management Systems: Joomla via Login Failed Log plugin This Small Plug-in collect logs about each failed login attempt of your Joomla site’s administrator and sends an email about each of those to the super administrator of the site with the username, password, ip address and error. KPlaylist v1.3 and v1.4 - a free PHP system that makes your music collection available via the Internet. is logging the usernames and passwords of failed login attempts in the apache error log Drupal 5.x before version 5.19 and Drupal 6.x before version 6.13 . When an anonymous user fails to login due to mistyping his username or password, and the page he is on contains a sortable table, the (incorrect) username and password are included in links on the table. If the user visits these links the password may then be leaked to external sites via the HTTP referrer. Standalone software Reporting Server included with Symantec Client Security 3.1 and SAV CE 10.1 The administrator password for Symantec Reporting Server could be disclosed after a failed login attempt. Linux: OpenSSH via modified auth-passwd.c ; using PAM via overloading pam_sm_authenticate function EDIT #2 It seems that there is a consensus, and recording the failed passwords or PINS is regarded as a serious/major security risk, nevertheless as far as I know, the following standards provide no guidance, audited procedure or controls that specifically address this risk: PCI-DSS : Passwords procedures addressed in 8.4. and 8.5. (failed passwords are protected only during transmission; after validation not considered passwords, therefore not required to be protected) FIPS140-2 : Authentication addressed in 4.3 (life-cycle of failed authentication data only partially addressed) | Logging the value of a failed password attempt (cleartext or otherwise) seems like a security anti-pattern. I've never seen a web app that does this, and I'm not aware of any default system services such as SSH that do either. As pointed out by @tylerl below, most systems simply log meta-information about an access attempt (e.g. username, time, perhaps IP address, etc.). Why This Should Be a Security Anti-Pattern Offhand, I can think of three reasons why logging the value of a failed password attempt is a bad idea: 1. 
User Typos It's extremely common for people to mistype a password by one or two characters. Examining a log file of failed attempts would make many of these easy to figure out, especially if you could contrast a sequence of failed attempts with a successful auth. 2. Guess-and-Check Many people have two-or-three passwords they cycle through for everything. Consequently, when they forget which password they used on a given site, they just cycle through all of them until they find a match. This would make it trivial to hack their accounts on other sites. 3. Log Bloat Storing failed passwords serves no useful purpose for the vast majority of authentication services in production today. While there may be some edge cases, for most people, storing this data is simply throwing away disk space. On Relevant Legislation / Standards I don't know of any standards (PCI, HIPAA, etc.) that specifically address procedures for storing failed login attempts, but I think that granted the above facts a good legal argument could be made for why the same standards that apply to storing passwords in general should also apply to failed-password attempts as well. In other words, you could make a legal argument that a failed-password is still categorically a password, and as such it is subject to the same standards. While I'm certainly not a lawyer, I wouldn't want a judge to have to decide whether or not I was negligent or in violation of industry standards because failed passwords were stored in cleartext and consumers suffered the consequences. I don't think that would end with a favorable decision. I agree with the OP that it might be useful for the various standards bodies to address this issue specifically (if they indeed haven't already). To that end, my suggestion would be to create a compliance standard of not storing the value of failed password attempts at all. | {
"source": [
"https://security.stackexchange.com/questions/16824",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10923/"
]
} |
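A minimal PHP sketch of the logging practice the answer above recommends: record metadata about the failed attempt, never the submitted password. The log destination and field names are arbitrary.
<?php
// Log a failed login attempt without ever writing the submitted password anywhere (sketch).
function log_failed_login(string $username): void
{
    $entry = [
        'event'    => 'login_failed',
        'username' => $username,                        // who the attempt was for
        'ip'       => $_SERVER['REMOTE_ADDR'] ?? 'cli', // where it came from
        'time'     => date('c'),                        // when it happened
        // Deliberately no 'password' field: the wrong password is still a secret.
    ];
    error_log(json_encode($entry));
}
// Example usage inside an authentication routine:
// if (!password_verify($_POST['password'], $storedHash)) {
//     log_failed_login($_POST['username']);
// }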
16,939 | I work on a tiny company, it's literally me (the programmer) and the owner. The owner has asked me to encrypt several fields in a database to protect the customers data. This is a web application that helps law firms manage their data, so basically it stores persons and lawsuits information (who is being sued, why, for how much). He considers this sensitive information that should not be easily seen. His fear is "I don't want unauthorized people to see this information". Only other law firms could be interested in this data, so this shouldn't be as important as credit cards, for example. I've read a lot about this on the web and I was thinking on simply using symmetric encryption on these fields, so that the performance isn't that bad. The key would be stored on the server. However, this thread on stackoverflow says this is a bad idea. I don't understand how encrypting the fields and saving the key on server can be so useless. I don't fear the discs being stolen because they are on Amazon EC2. I'm no security expert, but in my opinion, if anything could go wrong, I'd say the database leaks. Even then, the important information would be encrypted. Now if the guy managed to even hack to my EC2 server, well, I guess then there would be little to no protection I could do to help this. As we are a tiny company, we only have one server doing everything, from serving the pages to storing the data. My question is, considering we can only afford one server, is encrypting those fields with a symmetric key, which is saved on this server, ok? | General comments. It sounds like it would be helpful for you and your boss to learn some basic security concepts, before proceeding. Security is a specialized field. You wouldn't ask a random person on the street to perform open-heart surgery on you; and you shouldn't expect an average software developer to know how to secure your systems. I sense some misconceptions here. For instance, it sounds like your boss has equated security with cryptography. But this is a mistake. As Bruce Schneier has emphasized, Cryptography is not magic pixie dust that you can sprinkle on a system to make it secure . And as Roger Needham once famously said, If you think cryptography will solve your problem, either you don't understand cryptography, or you don't understand your problem . When securing a computer system, one important concept is the threat model . This means you need to think carefully about what kinds of attacks and adversaries you are trying to stop, and what you aren't. A failure to think through the threat model clearly can lead to security theater : security mechanisms that look good on first glance, but actually are woefully inadequate in practice. Good security management often comes down to risk management : careful analysis of what are the most serious risks, and then devising strategies to mitigate or manage those particular risks. It is also important to understand that security is a weakest-link property: the security of your system is only as strong as the weakest link . A vulnerability in any one part of the system can compromise the security of the entire system. This means that there's no one answer that's going to be sufficient to protect your system; instead, to defend your system, you have to get security right in a number of places. Diving into details. It sounds like your goals are to prevent unauthorized disclosure of sensitive data. If so, you're going to need to focus on a number of items. 
There's no one simple magic silver bullet that is going to solve this for you; you are going to need to work on application security generally. Let me suggest some things that should be priorities for you, if I've understood your goals correctly: Application security. You need to start studying up on web application security. It doesn't matter how much crypto you throw at the problem; if an attacker can find a security hole in your application code, you are hosed. For background on web application security, OWASP has many excellent resources. Make sure you learn about the OWASP Top Ten, about XSS, SQL injection, input sanitization/validation, output escaping, whitelisting, and other concepts. Access control. Your web application needs to have solid access controls, to ensure that one user of your system cannot access information of another user (without authorization). The details of this will depend upon the specifics of your particular system, so if you want additional help on this, you'll probably need to post a separate question with more details about your application and your current strategy for access control. Authentication. Your web application will need a way to authenticate its users. The standard least-effort scheme is to just use a username and password. However, this has serious limitations in practice that are well-understood. If users choose their own passwords, they often choose poor passwords, and this can subvert the security of your system. Secure software development lifecycle. You need to integrate security into the software development process. When you work out the software architecture, you should be thinking about the security requirements and performing threat modelling and architectural risk analysis. When writing code, you need to know about common implementation errors that can breach security and make sure to avoid them. After the software is built, you need to test its security and constantly evaluate how you are doing at security. When you deploy the software, your operations folks need to know how to manage it securely. Microsoft has some excellent resources on the secure software development lifecycle (SDL). See also BSIMM for more. Security assessment. If you are concerned about security, I suggest having the security of your application assessed. A simple starting point might be to have someone perform a pentest of your web application, to check for certain kinds of common errors. This is by no means a guarantee of security, but sometimes it can help serve as a wakeup call if there are many major problems present. You might look at WhiteHat Security's services; there are also many others who will perform web pentesting. If you are getting the sense that this is not a trivial undertaking, I apologize, but that is indeed the case. On the other hand, the good news is that there are a lot of resources out there, and moreover, you don't need to become an expert-level security guru: you just need to become familiar with some basic concepts and some common security mistakes in web programming, and that will take care of most of your needs. | {
"source": [
"https://security.stackexchange.com/questions/16939",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11100/"
]
} |
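To make two of the cheapest wins from the answer above concrete (parameterised queries and output escaping), here is a tiny PHP sketch; the table, column and sample values are invented for the example.
<?php
// Two of the basics from the answer above: parameterised SQL and HTML output escaping (sketch).
$pdo = new PDO('sqlite::memory:');            // example DSN; any PDO driver works the same way
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE lawsuits (id INTEGER PRIMARY KEY, defendant TEXT)');
// Parameterised INSERT: user-supplied text never becomes part of the SQL string itself.
$insert = $pdo->prepare('INSERT INTO lawsuits (defendant) VALUES (:defendant)');
$insert->execute([':defendant' => $_POST['defendant'] ?? "O'Brien <script>alert(1)</script>"]);
// Parameterised SELECT, same idea.
$select = $pdo->prepare('SELECT defendant FROM lawsuits WHERE id = :id');
$select->execute([':id' => 1]);
$row = $select->fetch(PDO::FETCH_ASSOC);
// Output escaping: anything echoed into HTML is entity-encoded, which blocks stored XSS.
echo htmlspecialchars($row['defendant'], ENT_QUOTES, 'UTF-8');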
17,026 | Why do the military and government not use special operating systems? I mean, if, instead of using a generic operating system based on Windows, Linux or Mac, they created their own operating system, they would be much more secure. Or am I mistaken? | There are a number of reasons why building their "own OS" is not a viable option. 1. Research Cost Building a new OS from the ground up, without the use of any existing code, would require significant research. Even today, there are only four or five popularly used kernels, like Unix, the Linux kernel, BSD, XNU and Windows NT. 2. Security through obscurity It's a proven concept that security through obscurity rarely helps. Yes, it's a new OS, so no "hacker" knows how it works, but it is a fact that over time information about it will be revealed through ex-employees or disgruntled employees, maybe even through the researchers themselves. Being a "custom" OS, it will have security issues of its own, and no one apart from the original researchers would be able to identify or fix them. 3. COST, COST and COST Even if such an OS were made, a dedicated maintenance team would have to be kept to patch various issues. You'd additionally need to tailor defensive software, etc. to work perfectly on that machine; any vulnerabilities in that software, if it were ported or emulated, would just carry over. In order to make custom software, the OS specifications would have to be disclosed, so we'd need to make custom office software, mail clients, etc. Ultimately, it's just not viable to make an OS and use it solely for defense. As I've said earlier, security through obscurity rarely helps; it just makes attacks more time-consuming, and at the end of the day the cost largely outweighs the benefits. | {
"source": [
"https://security.stackexchange.com/questions/17026",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11153/"
]
} |
17,044 | If I encrypt some data with a randomly generated Key and Initialization Vector, then store all three pieces of information in the same table row; is it necessary to encrypt the IV as well as the Key? Simplified table structure: Encrypted data Key (encrypted using a second method) IV (encrypted?) Please assume that the architecture and method are necessary: The explanation behind it is lengthy and dull. | Update, 2022: This answer is now 10 years old. Its advice is correct, but you should not use CBC mode in new designs today. Instead use an AEAD such as ChaCha20-Poly1305 or AES-GCM, and put the IV in the associated data so that it is authenticated. While CBC is not strictly broken , it's really easy to shoot yourself in the foot while trying to use it in a real-world implementation, and improper implementation can easily lead to a complete break of your design. Real world AES-CBC implementations (even those written by experienced security-conscious developers) frequently fall victim to padding oracle attacks and other side-channel issues. Using AES-CBC securely requires significantly more cryptographic engineering work than just using an AEAD. The less cryptographic engineering work you have to do, the less likely it is that you'll introduce a vulnerability. If you just want an easy life, libsodium 's secretbox API will take care of the cryptographic decision-making and implementation details for you. You provide a message, a nonce (IV), and a key, and it'll encrypt/decrypt and authenticate the data securely. It also has APIs for securely generating keys & IVs. There are libsodium bindings for most programming languages, so you're not limited to just C/C++. I would highly recommend libsodium to anyone building production systems. Original answer below. From Wikipedia : An initialization vector has different security requirements than a key, so the IV usually does not need to be secret . However, in most cases, it is important that an initialization vector is never reused under the same key . For CBC and CFB, reusing an IV leaks some information about the first block of plaintext, and about any common prefix shared by the two messages. You don't need to keep the IV secret, but it must be random and unique. The IV should also be protected against modification. If you authenticate the ciphertext (e.g. with a HMAC) but fail to authenticate the IV, an attacker can abuse the malleability of CBC to arbitrarily modify the first block of plaintext. The attack is trivial: xor the IV with some value (known as a "tweak"), and the first block of plaintext will be xor'd with that same value during decryption. | {
"source": [
"https://security.stackexchange.com/questions/17044",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10835/"
]
} |
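A small PHP sketch of the libsodium secretbox approach recommended in the updated answer above: the nonce (IV) is generated randomly, stored next to the ciphertext in the clear, and the whole message is authenticated. Key handling is simplified for illustration.
<?php
// Authenticated encryption with libsodium's secretbox: random nonce stored in the clear (sketch).
$key   = sodium_crypto_secretbox_keygen();                 // keep this secret
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // unique per message, not secret
$plaintext  = 'account data to protect';
$ciphertext = sodium_crypto_secretbox($plaintext, $nonce, $key);
// Store nonce + ciphertext together; the nonce needs no encryption, only uniqueness.
$stored = $nonce . $ciphertext;
// Decryption splits them back out; tampering with either part makes this return false.
$nonce2      = substr($stored, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$ciphertext2 = substr($stored, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
var_dump(sodium_crypto_secretbox_open($ciphertext2, $nonce2, $key));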
17,192 | The culprit in this case is a particular (and particularly large) bank that does not allow special characters (of any sort) in their passwords: just [a-Z 1-9]. Is there any valid reason for doing this? It seems counterproductive to stunt password strength like this, especially for a system protecting such valuable information. The only thing I have been able to come up with is a weak attempt to thwart SQL injections, but that would assume the passwords are not being hashed (which I sure hope isn't true). | One explanation I haven't seen here is that many financial institutions are tightly integrated with older systems and are bound to the limitations of those systems. The irony is that I have seen systems that were built to be compatible with older systems; now the older systems are gone, but the policy must still exist for compatibility with the newer system that was built to be compatible with the old one. (The lesson here is that if you have to be compatible with an old system, allow for some future elimination of those limitations.) | {
"source": [
"https://security.stackexchange.com/questions/17192",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2355/"
]
} |
17,207 | What is nowadays (July 2012) the recommended number of bcrypt rounds for hashing a password for an average website (storing only name, emailaddress and home address, but no creditcard or medical information)? In other words, what is the current capability of the bcrypt password cracking community? Several bcrypt libraries use 12 rounds (2^12 iterations) as the default setting. Is that the recommended workfactor? Would 6 rounds not be strong enough (which happens to be the limit for client-side bcrypt hashing in Javascript, see also Challenging challenge: client-side password hashing and server-side password verification )? I have read answer https://security.stackexchange.com/a/3993/11197 which gives an in-depth discussion how to balance the various factors (albeit for PBKDF2-SHA256). However, I am looking for an actual number. A rule of thumb. | I think the answer to all of your questions is already contained in Thomas Pornin's answer . You linked to it, so you presumably know about it, but I suggest that you read it again. The basic principles are: don't choose a number of rounds; instead, choose the amount of time password verification will take on your server, then calculate the number of rounds based upon that. You want verification to take as long as you can stand. For some examples of concrete numbers, see Thomas Pornin's answer. He suggests a reasonable goal would be for password verification/hashing to take 241 milliseconds per password. (Note: Thomas initially wrote "8 milliseconds", which is wrong -- this is the figure for a patience of one day instead of one month.) That still lets your server verify 4 passwords per second (more if you can do it in parallel). Thomas estimates that, if this is your goal, about 20,000 rounds is in the right ballpark. However, the optimal number of rounds will change with your processor. Ideally, you would benchmark how long it takes on your processor and choose the number accordingly. This doesn't take that long; so for best results, just whip up the script and work out how many rounds are needed to ensure that password hashing takes about 240 milliseconds on your server (or longer, if you can bear it). | {
"source": [
"https://security.stackexchange.com/questions/17207",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11197/"
]
} |
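The benchmarking script suggested in the answer above might look roughly like this in PHP: keep raising the bcrypt cost until a single hash takes at least your target time on the machine that will actually verify passwords. The 0.24 s target mirrors the figure quoted above; the starting cost is arbitrary.
<?php
// Find the lowest bcrypt cost that reaches a target verification time (sketch).
$target = 0.24; // seconds per hash, per the answer above
$cost   = 8;    // arbitrary starting point
do {
    $cost++;
    $start = microtime(true);
    password_hash('benchmark-password', PASSWORD_BCRYPT, ['cost' => $cost]);
    $elapsed = microtime(true) - $start;
} while ($elapsed < $target && $cost < 31); // 31 is bcrypt's maximum cost in PHP
printf("Use cost %d (~%.0f ms per hash on this machine)\n", $cost, $elapsed * 1000);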
17,391 | Is it possible within the limits of the X.509 specification to mark an intermediate CA as trusted for a specific purpose, e.g. to verify a VPN, HTTPS etc. server key, just like it would work with a root CA? My VPN client(s) all have a way to explicitly provide a trusted CA certificate, which is used to verify the VPN server's authenticity. As long as I provide the root CA certificate, this works as expected - the certificate is trusted. (The intermediate certs are provided as part of the TLS handshake.) However, I'm using an intermediate CA, and would very much like to provide that certificate, instead of the root CA. In my understanding of X.509, that should work: The VPN server's key is signed by the intermediate CA, and as far as I understand X.509, that's all that is required to establish a trusted chain. But in practice, it doesn't work: My VPN client doesn't connect. In addition to VPN, I've tried this with 802.1X/EAPOL authentification, and multiple clients - with the same results: providing the root CA cert to the client works; providing my intermediate CA cert doesn't. Is that by design, or is it just how most implementations work? (I use a TLS based VPN, but as I've also tried it with 802.1X and TTLS, it seems to be related to TLS or X.509, and not to my specific VPN client/server architecture.) Update: I've fond an OpenSSL commit that implements adding non-self-signed CA certificates as trust anchors. Unfortunately, this is not yet included in any release version, so all the proposed workarounds in the comments still apply. Update 2: OpenSSL now contains this option in the release version, starting from 1.0.2. The corresponding flag for the command line client is partial_chain , and the programmatic flag seems to be X509_V_FLAG_PARTIAL_CHAIN . Additionally, I recently had to verify server certificates in Java: At least in the JDK 1.8 version of the Sun JSSE provider's SSL implementation, adding a leaf certificate to the default TrustManager works without any special configurations, and verification succeeds as if the root CA had been provided. | A root CA is actually an illusion. In X.509 , there are trust anchors . A trust anchor is, mostly, a name and a public key, which you know a priori and that you trust. Representation of that name and that public key as a "certificate file" (traditionally self-signed) is just a convenient way to keep the trust anchor as a bunch of bytes. As per X.509, a CA is "intermediate" only if you do not trust it a priori; it is not an intrinsic property of the CA. What you need to do is convince your software to consider the CA as a trust anchor. One possible trick is to reencode the relevant data (the CA name and public key) as a (purportedly) self-signed certificate. Note that the self-signature is just a Tradition; it is there mostly because the file format for a certificate has a mandatory field for a signature. For a "root" CA, this signature has no significance (it makes little sense to verify it, since it would buy you nothing security-wise). Therefore, applications which use root CA certificates rarely verify that the signature is "self". Therefore, you could build a custom "self-signed" certificate with the name of your "intermediate CA" as both SubjectDN and IssuerDN . For the signature, just put random junk bytes of approximately the right size (256 bytes for a 2048-bit signature). Then, try to set this pseudo-self-signed certificate as a root: chances are that it will work with your VPN. 
Alternatively, create your own root CA, and emit an extra certificate for the intermediate CA (you do not need the cooperation of the intermediate CA for that, you just need its name and public key, and you have these, in the intermediate CA certificate). | {
"source": [
"https://security.stackexchange.com/questions/17391",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5701/"
]
} |
17,407 | I have tried to run a vulnerability scanning script (Uniscan 6.0) on some websites and then I found a site which is exploitable with this following path. (included a word "invalid" , params/website are both censored) http://www.website.com/index.php?param1=invalid../../../../../../../../../../etc/passwd/././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././.
/./././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././.¶m2=value¶m3=value For my next step, I really want to understand what exactly happen so I'm trying to manually exploit it. (I took a look at some tutorials about LFI) ../../../../../../../../../../../../../../../etc/passwd&... invalid../../../../../../../../../../../../../../../etc/passwd&... ../../../../../../../../../../../../../../../etc/passwd%00&... ../../../../../../../../../../../../../../../etc/passwd/././&... ../../../../../../../../../../../../../../../etc/passwd%00/././%... but they didn't work except the first very long path, what is going on? What php-code should I use?
And how could that long path bypass that vulnerable PHP code? The following information may be helpful. < HTTP/1.1 200 OK
< Date: Thu, 19 Jul 2012 19:46:03 GMT
< Server: Apache/2.2.3 (CentOS)
< X-Powered-By: PHP/5.1.6
< Set-Cookie: PHPSESSID=[blah-blah]; path=/
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Vary: Accept-Encoding
< Content-Length: 2094
< Content-Type: text/html | Fascinating! @catalyze has dug up a truly intriguing, lovely situation here. I wanted to take the time to summarize what's going on here, on this site. (Full credits to @catalyze and Francesco "ascii" Ongaro; I'm just summarizing what they explained.) Summary. This is not an everyday LFI attack. Instead, this is something more unusual and clever. Here we have a vulnerability that cannot be exploited through standard LFI methods; you need more trickiness to work out how to exploit it. Background. First, I need to tell you two facts about PHP's file handling that were discovered by Francesco "ascii" Ongaro and others: Fact 1. You can add stuff to the end of a filename. Everyone knows that /./etc/passwd is just another way to refer to the /etc/passwd file. But, here are some you may not have known about. On PHP, it turns out that /etc/passwd/ also refers to the /etc/passwd file: trailing slashes are stripped off. Wild, huh? This doesn't work on base Unix, so it is a bit surprising that PHP would accept such a filename, but it appears that PHP is itself stripping off trailing slashes before opening the file. You can append any number of trailing slashes: /etc/passwd//// is also OK. And, you can append ./ (as many times as you want). For instance, /etc/passwd/. , /etc/passwd/./ , and /etc/passwd/././. all refer to /etc/passwd . Go nuts! PHP doesn't care. Fact 2. Long paths are truncated. On most PHP installations, if the filename is longer than 4096 bytes, it will be silently truncated and everything after the first 4096 bytes will be discarded. No error is triggered: the excess characters are simply thrown away and PHP happily continues on. The attack. Now I am ready to describe the attack. I'll show you the vulnerable code, why standard LFI attacks don't work, and then how to build a more-clever attack that does work. The result explains what @catalyze saw in his pentest. The vulnerable code. Suppose we have code that looks something like this: <?php
include("includes/".$_GET['param1'].".php");
?> This looks like a local file include (LFI) vulnerability, right? But the situation is actually a bit trickier than it may at first appear. To see why, let's look at some attacks. Standard attacks. The standard, naive way to try to exploit this LFI vulnerability is to supply a parameter looking something like ?param1=../../../../var/www/shared/badguy/evil . The above PHP code will then try to include the file includes/../../../../var/www/shared/badguy/evil.php . If we assume that the file /var/www/shared/badguy/evil.php exists and is controlled by the attacker, then this attack will succeed at causing the application to execute malicious code chosen by the attacker. But this only works if the attacker can introduce a file with contents of his choice onto the filesystem, with a filename ending in .php . What if the attacker doesn't control any file on the filesystem which ends in .php ? Well, then, the standard attacks will fail. No matter what parameter value the attacker supplies, this is only going to open a filename that ends with the .php extension. A more sophisticated attack. With the additional background facts I gave you earlier, maybe you can see how to come up with a more sophisticated attack that defeats this limitation. Basically, the attacker chooses a very long parameter value, so that the constructed filename is longer than 4096 bytes long. When the filename is truncated, the .php extension will get thrown away. And if the attacker can arrange for the resulting filename to refer to an existing file on the filesystem, the attacker is good. Now this might sound like a far-fetched attack. What are the odds that we can find a filename on the filesystem whose full path happens to be exactly 4096 bytes long? Maybe not so good? This is where the background facts come into play. The attacker can send a request with ?param1=../../../../etc/passwd/./././././<...> (with the ./ pattern repeated many thousands of times). Now look at what filename gets included, after the prefix is prepended and the .php file extension is added: it will be something like includes/../../../../etc/passwd/./././././<...>.php . This filename will be longer than 4096 bytes, so it will get truncated. The truncation will drop the file extension and leave us with a filename of the form includes/../../../../etc/passwd/./././././<...> . And, thanks to the way PHP handles trailing slashes and trailing ./ sequences, all that stuff at the end will be ignored. In other words, this filename will be treated by PHP as equivalent to the path includes/../../../../etc/passwd . So PHP will try to read from the password file, and when it finds PHP syntax errors there, it may dump the contents of the password file into an error page -- disclosing secret information to an attacker. So this technique allows to exploit some vulnerabilities that otherwise could not be exploited through standard methods. See the pages that @catalyze links to for a more detailed discussion and many other examples. This also explains why @catalyze was not able to exploit the attack by sending something like ?param1=../../../../etc/passwd : a .php extension got added on, and the file /etc/passwd.php did not exist, so the attack failed. Summary. Peculiarities in PHP's handling of file paths enable all sorts of subtle attacks on vulnerabilities that otherwise would appear unexploitable. For pentesters, these attack techniques may be worth knowing about. 
For developers, the lesson is the same: validate your inputs; don't trust inputs supplied by the attacker; know about classic web vulnerabilities, and don't introduce them into your code. | {
"source": [
"https://security.stackexchange.com/questions/17407",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11373/"
]
} |
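For completeness, a defensive PHP sketch of how the vulnerable include from the answer above could be rewritten: map the request parameter onto a whitelist instead of concatenating it into a path. The page names are invented for the example.
<?php
// Safe replacement for include("includes/".$_GET['param1'].".php"): whitelist the value (sketch).
$pages = [
    'home'    => 'includes/home.php',    // example entries; list every legitimate page explicitly
    'contact' => 'includes/contact.php',
];
$param = $_GET['param1'] ?? 'home';
if (!isset($pages[$param])) {
    http_response_code(404);
    exit('Unknown page');
}
include $pages[$param]; // only values from the whitelist can ever reach include()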
17,421 | If you expect to store user password securely, you need to do at least the following: $pwd=hash(hash($password) + salt) Then, you store $pwd in your system instead of the real password. I have seen some cases where $pwd contains the salt itself. I wonder whether the salt should be stored separately, or is it OK if an attacker gets the hashed value and the salt at the same time. Why? | TL;DR - You can store the salt in plaintext without any form of obfuscation or encryption, but don't just give it out to anyone who wants it. The reason we use salts is to stop precomputation attacks, such as rainbow tables . These attacks involve creating a database of hashes and their plaintexts, so that hashes can be searched for and immediately reversed into plaintext. For example*: 86f7e437faa5a7fce15d1ddcb9eaeaea377667b8 a
e9d71f5ee7c92d6dc9e92ffdad17b8bd49418f98 b
84a516841ba77a5b4648de2cd0dfcb30ea46dbb4 c
...
948291f2d6da8e32b007d5270a0a5d094a455a02 ZZZZZX
151bfc7ba4995bfa22c723ebe7921b6ddc6961bc ZZZZZY
18f30f1ba4c62e2b460e693306b39a0de27d747c ZZZZZZ Most tables also include a list of common passwords: 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8 password
e38ad214943daad1d64c102faec29de4afe9da3d password1
b7a875fc1ea228b9061041b7cec4bd3c52ab3ce3 letmein
5cec175b165e3d5e62c9e13ce848ef6feac81bff qwerty123 *I'm using SHA-1 here as an example, but I'll explain why this is a bad idea later. So, if my password hash is 9272d183efd235a6803f595e19616c348c275055 , it would be exceedingly easy to search for it in a database and find out that the plaintext is bacon4 . So, instead of spending a few hours cracking the hash (ok, in this case it'd be a few minutes on a decent GPU , but we'll talk about this later) you get the result instantly. Obviously this is bad for security! So, we use a salt. A salt is a random unique token stored with each password. Let's say the salt is 5aP3v*4!1bN<x4i&3 and the hash is 9537340ced96de413e8534b542f38089c65edff3 . Now your database of passwords is useless, because nobody has rainbow tables that include that hash. It's computationally infeasible to generate rainbow tables for every possible salt. So now we've forced the bad guys to start cracking the hashes again. In this case, it'd be pretty easy to crack since I used a bad password, but it's still better than him being able to look it up in a tenth of a second! Now, since the goal of the salt is only to prevent pre-generated databases from being created, it doesn't need to be encrypted or obscured in the database. You can store it in plaintext. The goal is to force the attacker to have to crack the hashes once he gets the database, instead of being able to just look them all up in a rainbow table. However, there is one caveat. If the attacker can quietly access a salt before breaking into your database, e.g. through some script that offers the salt to anyone who asks for it, he can produce a rainbow table for that salt as easily as he could if there wasn't one. This means that he could silently take your admin account's salt and produce a nice big rainbow table, then hack into your database and immediately log in as an admin. This gives you no time to spot that a breach has occurred, and no time to take action to prevent damage, e.g. change the admin password / lock privileged accounts. This doesn't mean you should obscure your salts or attempt to encrypt them, it just means you should design your system such that the only way they can get at the salts is by breaking into the database. One other idea to consider is a pepper . A pepper is a second salt which is constant between individual passwords, but not stored in the database. We might implement it as H(salt + password + pepper) , or KDF(password + pepper, salt) for a key-derivation function - we'll talk about those later. Such a value might be stored in the code. This means that the attacker has to have access to both the database and the sourcecode (or webapp binaries in the case of ASP .NET, CGI, etc.) in order to attempt to crack the hashes. This idea should only be used to supplement other security measures. A pepper is useful when you're worried about SQL injection attacks, where the attacker only has access to the database, but this model is (slowly) becoming less common as people move to parameterized queries . You are using parameterized queries, right? Some argue that a pepper constitutes security through obscurity, since you're only obscuring the pepper, which is somewhat true, but it's not to say that the idea is without merit. Now we're at a situation where the attacker can brute-force each individual password hash, but can no longer search for all the hashes in a rainbow table and recover plaintext passwords immediately. So, how do we prevent brute-force attacks now? 
Modern graphics cards include GPUs with hundreds of cores. Each core is very good at mathematics, but not very good at decision making. It can perform billions of calculations per second, but it's pretty awful at doing operations that require complex branching. Cryptographic hash algorithms fit into the first type of computation. As such, frameworks such as OpenCL and CUDA can be leveraged in order to massively accelerate the operation of hash algorithms. Run oclHashcat with a decent graphics card and you can compute an excess of 10,000,000,000 MD5 hashes per second. SHA-1 isn't much slower, either. There are people out there with dedicated GPU cracking rigs containing 6 or more top-end graphics cards, resulting in a cracking rate of over 50 billion hashes per second for MD5. Let me put that in context: such a system can brute force an 8 character alphanumeric password in less than 4 minutes . Clearly hashes like MD5 and SHA-1 are way too fast for this kind of situation. One approach to this is to perform thousands of iterations of a cryptographic hash algorithm: hash = H(H(H(H(H(H(H(H(H(H(H(H(H(H(H(...H(password + salt) + salt) + salt) ... ) This slows down the hash computation, but isn't perfect. Some advocate using SHA-2 family hashes, but this doesn't provide much extra security. A more solid approach is to use a key derivation function with a work factor. These functions take a password, a salt and a work factor. The work factor is a way to scale the speed of the algorithm against your hardware and security requirements: hash = KDF(password, salt, workFactor) The two most popular KDFs are PBKDF2 and bcrypt . PBKDF2 works by performing iterations of a keyed HMAC (though it can use block ciphers) and bcrypt works by computing and combining a large number of ciphertext blocks from the Blowfish block cipher. Both do roughly the same job. A newer variant of bcrypt called scrypt works on the same principle, but introduces a memory-hard operation that makes cracking on GPUs and FPGA -farms completely infeasible, due to memory bandwidth restrictions. Update: As of January 2017, the state-of-the-art hashing algorithm of choice is Argon2 , which won the Password Hashing Competition. Hopefully this gives you a nice overview of the problems we face when storing passwords, and answers your question about salt storage. I highly recommend checking out the "links of interest" at the bottom of Jacco's answer for further reading, as well as these links: The Definitive Guide to Forms-Based Website Authentication The Open Web Application Security Project (OWASP) Similar answer on StackOverflow | {
"source": [
"https://security.stackexchange.com/questions/17421",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11415/"
]
} |
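A compact PHP sketch of the storage scheme the answer above ends up recommending: a per-user random salt handled automatically by password_hash, plus an optional application-wide pepper applied via HMAC before hashing. The pepper shown is a placeholder that would live in code or configuration, never in the database.
<?php
// Salted, work-factored password storage with an optional pepper (sketch).
const PEPPER = 'placeholder-pepper-value-kept-out-of-the-database'; // example only
function hash_password(string $password): string
{
    // HMAC the password with the pepper first, then let bcrypt add a random per-user salt.
    $peppered = hash_hmac('sha256', $password, PEPPER);
    return password_hash($peppered, PASSWORD_BCRYPT, ['cost' => 12]); // cost is illustrative
}
function check_password(string $password, string $storedHash): bool
{
    $peppered = hash_hmac('sha256', $password, PEPPER);
    return password_verify($peppered, $storedHash);
}
// The stored hash already contains the salt and cost, so only one column is needed.
$stored = hash_password('hunter2');
var_dump(check_password('hunter2', $stored)); // bool(true)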
17,434 | I want to know more about how the WEP (Wired Equivalent Privacy) protocol for wireless security works. From this Wikipedia article I have got a basic idea. But what is the initialization vector? Is the connecting device authenticated only once at the beginning, or is some kind of token sent for each request (equivalent to cookies for authentication in Gmail, Yahoo, etc.)? I have tried to set up WEP security for my Wi-Fi. As per instructions from my ISP, I did the following: Network Authentication - Open
WEP Encryption - Enabled
Current Network Key -1
Encryption Key -64 bit
Network Key 1 -abcdefghij (10 characters)
Network Key 2 -
Network Key 3 -
Network Key 4 - What are these network keys? | The initialization vector in WEP is a 24-bit random value that is used to seed the RC4 algorithm. RC4 is a stream cipher. This means that for each bit of plaintext, it produces one bit of keystream and xors the two to generate the ciphertext. The keystream is simply a stream of random numbers, generated from the RC4 algorithm. In the most basic operation of a stream cipher, the algorithm is seeded with a key, such that the same key will always produce the same stream of random numbers. Since both the client and server know the key, they can produce the same keystream. This allows the client to xor the plaintext with the keystream to produce the ciphertext, and the server to xor the ciphertext with the keystream to produce the plaintext again. The problem with this is that a key is only a few tens of bits long, but the plaintext may be gigabytes. After a large number of bits have been produced by RC4, the random numbers become predictable, and may even loop back round to the start. This is obviously undesirable, because a known-plaintext attack would be able to compute the keystream (k = c xor p) and use it to decrypt new messages. In order to solve this problem, an IV was introduced to complement the seed. The IV is a random 24-bit value that is changed periodically, in an attempt to prevent re-use of the keystream. Unfortunately, 24 bits is quite small, and the IV often wasn't generated in an unpredictable way, allowing attackers to guess future IVs and use them to deduce the key. Further attacks involved actively injecting packets into the network, tricking the access point into issuing lots of new IVs, which allowed attackers to crack WEP in minutes or seconds. Further reading: Fluhrer, Mantin and Shamir attack WEP Flaws How WEP cracking works (PDF) | {
"source": [
"https://security.stackexchange.com/questions/17434",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8201/"
]
} |
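A toy PHP sketch of why keystream reuse is fatal for any stream cipher, WEP included: XOR-ing two ciphertexts produced with the same keystream cancels the keystream out completely. The keystream here is just random bytes standing in for RC4 output under one key and IV.
<?php
// Demonstrate the "two-time pad" problem behind WEP's weak IV scheme (sketch).
$p1 = 'ATTACK AT DAWN!!';
$p2 = 'RETREAT AT DUSK!';               // same length as $p1 for simplicity
$keystream = random_bytes(strlen($p1)); // stand-in for RC4 output under one key and IV
// Encrypt both messages with the *same* keystream (this is what IV reuse causes).
$c1 = $p1 ^ $keystream;
$c2 = $p2 ^ $keystream;
// An eavesdropper XORs the two ciphertexts: the keystream cancels, leaving p1 XOR p2.
var_dump(($c1 ^ $c2) === ($p1 ^ $p2)); // bool(true)
// And with any known plaintext, the keystream itself falls out: k = c xor p.
var_dump(($c1 ^ $p1) === $keystream);  // bool(true)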
17,506 | In the question, substitute scope with state . Since there is an answer based on the wrong question, I'll let everything below as is. The answer is useful nonetheless. This question refers to the current draft v30 of OAuth2 and GitHub's implementation thereof. GitHub's "Web application flow" is more or less an implementation of Authorization Code Grant as described in the spec. The client (the Web application) directs the user to a special page on GitHub which asks whether the user wants to allow the application access to his or her resources. If this is confirmed, the user is redirected back to the client which then uses a temporary code to retrieve the OAuth token for future use. If the client provided a scope parameter for the user's request to GitHub, the redirect contains that parameter as well. If the scope is some secret only known to the client, the client can be sure that nobody else created that request, i.e., that the user has not been the victim of a CSRF attack. But is that really necessary? If we choose not to use a scope parameter and the user is indeed the victim of a CSRF attack, he or she must still accept the question asked by GitHub whether the client is allowed access to the user's information. This step cannot be skipped. Indeed the spec says [The] authorization server authenticates the resource owner and obtains an authorization decision (by asking the resource owner or by establishing approval via other means). If the attacker uses other techniques like clickjacking to trick the user into accepting the request, I reckon all bets are off anyway and the scope won't protect the user either. In conclusion: Against what threat does the scope actually protect the user? | State Parameter The scope parameter is not used to secure the authentication request against CSRF attacks (see below). But there is another parameter called "state", which matches your description. [Asking the user] This step cannot be skipped. I am afraid, this assumption is not correct. It is very common, that the user is only asked for permission, when he uses a client application for the first time. After that the server remembers the client application and grants access automatically. Indeed the spec says [The] authorization server authenticates the resource owner and obtains an authorization decision (by asking the resource owner or by establishing approval via other means). Other means may include that the application is trusted by the server (e. g. owned by the same company) or that the user's decision was saved. So without a state parameter, the attacker can trick a user to log in to an application, that is known to the user in principle or generally trusted by the server. Scope Parameter The scope parameter is used to indicate a list of permissions , that are requested by the client: The authorization and token endpoints allow the client to specify the scope of the access request using the "scope" request parameter. For example permissions for a social network may include: post_to_my_wall send_notification post_to_my_friends_wall read_my_age The value of the scope parameter is expressed as a list of space-
delimited, case sensitive strings. The strings are defined by the
authorization server. The server may provide all requested permissions or a modified list (for example because the user denied some permissions): If the issued access token scope is different from the one requested by the client, the authorization server MUST include the "scope" response parameter to inform the client of the actual scope granted. Source: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-30#section-3.3 | {
"source": [
"https://security.stackexchange.com/questions/17506",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11526/"
]
} |
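For comparison with the discussion above, a minimal sketch of how a client would use the state parameter (not scope) to bind the callback to the browser session. The session dict, the CLIENT_ID placeholder and the error handling are illustrative assumptions rather than any particular framework's API, and only the parameters relevant here are shown.
import hmac
import secrets
def build_authorize_url(session):
    # remember an unguessable state value in the user's session before redirecting
    session["oauth_state"] = secrets.token_urlsafe(32)
    return ("https://github.com/login/oauth/authorize"
            "?client_id=CLIENT_ID&state=" + session["oauth_state"])
def handle_callback(session, returned_state, code):
    expected = session.pop("oauth_state", "")
    # constant-time comparison; a mismatch means this session never started the request
    if not hmac.compare_digest(expected, returned_state):
        raise ValueError("state mismatch, discarding the authorization code")
    return code   # only now is it safe to exchange the code for a token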
17,664 | One of my clients has asked me to modify the login page that their board members use to access their materials. The problem here is that the person previously responsible for this has left no documentation behind and the passwords are encrypted with some (seemingly simple) algorithm. I have access to the hashes, and I know what the passwords are. The javascript is using the hash as their password. My thought is that if I can figure out what the algorithm is that I can create new accounts to suit their request. Is there a way I can check to see what the algorithm is? The user is prompted to select their name from a drop down menu and their password is associated with the hash beside their name in the HTML code. The form option looks like this (note that N denotes a numerical value and L denotes a letter). <option value='Username|NNNNN|LLLLLLLL'>Username The actual script that parses the value looks like this <SCRIPT LANGUAGE="JavaScript">
<!-- Begin
var params = new Array(4);
var alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHI";
function check(form) {
which = form.memlist.selectedIndex;
choice = form.memlist.options[which].value + "|";
if (choice == "x|") {
alert("Please Select Your Name From The List");
return;
}
p = 0;
for (i = 0; i < 3; i++) {
a = choice.indexOf("|", p);
params[i] = choice.substring(a, p);
p = a + 1;
}
h1 = makehash(form.pass.value, 3);
h2 = makehash(form.pass.value, 10) + " ";
if (h1 != params[1]) {
alert("Incorrect Password!");
return;
};
var page = "";
for (var i = 0; i < 8; i++) {
letter = params[2].substring(i, i + 1)
ul = letter.toUpperCase();
a = alpha.indexOf(ul, 0);
a -= (h2.substring(i, i + 1) * 1);
if (a < 0) a += 26;
page += alpha.substring(a, a + 1);
};
top.location = page.toLowerCase() + ".html";
}
function makehash(pw, mult) {
pass = pw.toUpperCase();
hash = 0;
for (i = 0; i < 8; i++) {
letter = pass.substring(i, i + 1);
c = alpha.indexOf(letter, 0) + 1;
hash = hash * mult + c;
}
return (hash);
}
// End -->
</script> Is there anyway I can reverse engineer this so that I can create new user accounts? | The algorithm here is: function makehash(pw, mult) { // Password and... multiplier?
pass = pw.toUpperCase(); // Case insensitivity
var hash = 0;
for (i = 0; i < Math.min(pass.length, 8); i++) { // only the first 8 characters count (the original always runs 8 iterations, padding short passwords)
c = pass.charCodeAt(i) - 64; // A = 1, B = 2, etc., the same as alpha.indexOf(letter) + 1 in the original
hash *= mult;
hash += c;
}
return hash;
} I cleaned the code up a bit and added some comments. Whoever wrote this is utterly incompetent in the fields coding, security and mathematics. Anyway, it is no "official" algorithm like MD5 or AES, but homebrew and incredibly fault-intolerant. It accepts only letters, is case-insensitive, and ignores all characters after the first 8. I would highly recommend upgrading everyone's password hash. See also: How to securely hash passwords? By the way, here is the rest of the code with some formatting: var params=new Array(4);
var alpha="ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHI";
function check(form) {
which = form.memlist.selectedIndex;
choice = form.memlist.options[which].value + "|";
if (choice == "x|") {
alert("Please Select Your Name From The List");
return;
}
p = 0;
for (i = 0; i < 3; i++) {
a = choice.indexOf("|", p);
params[i] = choice.substring(a, p);
p = a + 1;
}
h1 = makehash(form.pass.value, 3);
h2 = makehash(form.pass.value, 10) + " ";
if (h1 != params[1]) {
alert("Incorrect Password!");
return;
}
var page = "";
for (var i = 0; i < 8; i++) {
letter = params[2].substring(i, i + 1)
ul = letter.toUpperCase();
a = alpha.indexOf(ul, 0);
a -= h2.substring(i, i + 1) * 1; // the "* 1" coerces the string digit to a number
if (a<0)
a+=26;
page += alpha.substring(a, a + 1);
};
top.location=page.toLowerCase() + ".html";
} I would add comments, but I'm not sure if it's worth it to find any reason in this mess. | {
"source": [
"https://security.stackexchange.com/questions/17664",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4386/"
]
} |
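To create the new accounts the question asks about, the two hash values and the encoded page name can be generated outside the browser. The sketch below ports the quoted JavaScript to Python; the function names are made up, and it assumes the target page name is exactly 8 letters (without the .html suffix), as the original script expects.
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHI"
def makehash(pw, mult):
    # port of the page's makehash(): 8 iterations, letters map to 1..26, anything else to 0
    pw = pw.upper()
    h = 0
    for i in range(8):
        ch = pw[i:i + 1]
        c = ALPHA.find(ch) + 1 if ch else 1   # JS "".indexOf() returns 0, so missing chars count as 1
        h = h * mult + c
    return h
def make_option_value(username, password, page):
    h1 = makehash(password, 3)                # the NNNNN field compared at login
    h2 = str(makehash(password, 10)) + " "    # its digits shift each page-name letter
    enc = ""
    for i in range(8):
        digit = h2[i:i + 1]
        shift = int(digit) if digit.strip() else 0   # JS coerces " " and "" to 0
        enc += ALPHA[(ALPHA.find(page[i].upper()) + shift) % 26]
    return "%s|%d|%s" % (username, h1, enc)
# e.g. make_option_value("NewUser", "SECRETPW", "homepage") -> paste into an <option value='...'>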
17,719 | I'm debating if I should support cookieless sessions in my web app. It would look something like this: http://www.example.com/(S(lit3py55t21z5v55vlm25s55))/orderform.aspx Since the URL is never constant, I don't think it's possible for a CSRF attack to occur. What other attacks am I openening myself up to? ... are there mitigations to those attacks? For example, can I set an expire header (or similar) to prevent the URL from appearing in browser history? | The basics First, I assume you understand the most basic session ID security right: you are using an ID with sufficient entropy, and you use transport level security (HTTPS). Any approach to session ID (URL, cookies, whatever) that does not get those right is vulnerable, your question is specifically about ID in URL, so I will not discuss that further. Web-browser leaks The most obvious risk of ID leak is with the Referer HTTP header. The simple solution to this is either: to forbid outgoing all links ( <a href and others), all external embedded content ( <img> , <object> ...) and other external dependencies ( <script> ...), or to change all outgoing links to go to an internal redirection page (which could be another security issue, depending on how you implement this). Many Web application have zero outgoing links, and no content (esp. HTTP resources) from other users (which often has potential security/privacy issues, like Web bugs in emails). This issue is usually described as a serious risk, but I disagree; if well understood it is easily managed. User-behaviour related leaks Voluntary sharing of URL with other users Another practical issue is that users could fail to recognise the URL as the access key to the page, and share it with others , via email or in a forum. The risk is even higher if the document is public but the user might have some privileged access to it (like in SE), so sending the URL is the most obvious things to do to share a page with a friend. When the page is by nature private and contains sensitive personal informations, such as the interface to manage your account, the risk is lower, but still exits. I have personally witness more than one case of non-public URL being posted on a public forum , where the URL designates a web page random users could not ever legitimately access: link apparently to a webmail: the link was not working, I presume because the session was cookies based and not URL based; link to the configuration page of a modem-box in the administration Web application of the ISP; the administration application is on a public Web server and requires authentication (this allows ISP clients to view or change some of their account information from anywhere). Giving others access to this interface includes: giving bank account information, ability change account password, change account email contact address, buy optional features, terminate account ... or just click the "disconnect" button to terminate the session. The issue I am pointing out here is not the web application design, but that the behaviour of a few uneducated users is to copy URL and post it even when the account manages some serious stuff (not just a lame forum account). In the second example, it clearly shows that some user simply to not think about what they are doing: the shared URL was either useless for people willing to help, or a security risk. 
This is just simple logic, and zero technical knowledge is needed to understand it: either a piece of information is sufficient to access some other information, or it is not, so obviously this link was either useless or a serious risk. Usually when such a URL is posted in a public place, some nice guy sees the URL, copies it and, if the session is still valid, clicks "disconnect"; then he explains that serious issue to the user. Most people are honest, but there is a risk. Sharing of screen shots (revealing URL) Another risk is when users want to share some data shown on one administrative web page (and not their access to such a page), so they make screen shots of the administrative web page, failing to realise that the cryptic information in the URL bar is a secret. Making the URL very long so that it is not visible on the user's screen lowers this risk. This case is very different, as the sharing of information is voluntary, but the fact that it contains the URL is not realised by the user. Technical and user issues Some technical issues exist with ID in URL; other issues exist with ID in HTTP cookies. In both cases it is up to the developer to understand the issues and fix them. A user expectations issue exists with the "secret in URL" approach. The issue is not that the user can see the secret information, it is the fact that the URL bar does not usually contain a secret. You could think of a social engineering attack where the user is convinced to install the (very nice) Firebug plug-in, then activate it, go to the cookie panel and copy session cookies from there. But even a user willing to do everything you tell him might not be able to correctly accomplish each of these steps! Copying the content of the URL is not just very easy (downloading malware on the Web and installing it is easy enough for many users too); the user is used to copying and sending URLs. A more experienced user may have explained to a beginner how to send a link to a news article. The beginner might assume he can do that on any website and any webapp (even if the more experienced user never told him anything like that). The problem here is that you can review your webapp design, you can get help from security experts and pay them to do a complete analysis of your application (complete from design to every line of code), but you cannot do a security analysis of your users (unless this webapp is only intended for your highly qualified employees and security consultants). Combination? You could combine both ID in cookies and ID in URL, if: both IDs are different, independent cryptographic random numbers and each ID contains sufficient entropy for secure protection if the other ID is leaked. | {
"source": [
"https://security.stackexchange.com/questions/17719",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
17,767 | Can someone explain to me in what consists the Four-way Handshake in WPA-Personal (WPA with Pre-Shared Key), which informations are being sent between AP and client, how is it possible to find the AP Pre-Shared Key from these informations after we capture the Four-way Handshake. | This book is a very good resource on wireless security. This section explains the details of the four-way handshake, but you really need to read the whole chapter to understand it. Both WPA2-PSK and WPA2-EAP result in a Pairwise Master Key (PMK) known to both the supplicant (client) and the authenticator (AP). (In PSK the PMK is derived directly from the password, whereas in EAP it is a result of the authentication process.) The four-way WPA2 handshake essentially makes the supplicant and authenticator prove to each other that they both know the PMK, and creates the temporal keys used to actually secure network data. Capturing the four-way handshake will not divulge the PMK or PSK (since capturing the handshake is trivial over wireless this would be a major vulnerability). The PMK isn't even sent during the handshake, instead it is used to calculate a Message Integrity Check (MIC). You basically need to perform a dictionary or bruteforce attack on the handshake until you find a password which results in the same MIC as in the packets. | {
"source": [
"https://security.stackexchange.com/questions/17767",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11427/"
]
} |
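As background to the answer above, the PMK in WPA2-PSK is derived from the passphrase and the SSID with PBKDF2 (4096 iterations of HMAC-SHA1, 256-bit output), which is exactly what makes offline dictionary attacks on a captured handshake possible. A short sketch, with made-up passphrase and SSID values:
import hashlib
def wpa2_psk_pmk(passphrase, ssid):
    # IEEE 802.11i: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
print(wpa2_psk_pmk("correct horse battery staple", "HomeNetwork").hex())
# a cracker repeats this per candidate passphrase, then recomputes the handshake
# MIC and compares it with the captured one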
17,774 | I have some files I was given by my teacher at University, I could chase him up, but I may as well try getting blood from a stone, his response rate isn't great and I completed my degree a year ago! They're pdf files stored inside password protected zip files. The passwords are networking related, have upper and lowercase and numbers, but no special characters as far as I remember, and some are permutations of each other "passwordL1", "l2Password" etc. What are the different encryption algorithms employed by .zip files? How can I determine the protection in use on my zip files? Where can I find good papers and tools, which will ultimately give me back the pdfs which are annoyingly hidden by the password? | If you haven't already looked at it there's a couple of sources I'd recommend for this. John the ripper with the community jumbo patch supports zip cracking. If you look at the supported modes there's some options (including the basic brute-force) for cracking zip passwords. Elcomsoft have good zip crackers including guaranteed recovery under some circumstances There are also some companies like this one who appear to have GPU accelerated zip cracking, which could speed things up depending on your hardware. In terms of the approach it sounds like a dictionary based attack with mutation rules(so changing the dictionary with things like leet speak rules) would be the best bet, particularly if you've got the idea that the words would come from a specific domain. Straight brute-force would likely not be a good idea as it tends to top out around 8 characters (unless you're throwing a lot of CPU/GPU power at it) | {
"source": [
"https://security.stackexchange.com/questions/17774",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1644/"
]
} |
17,791 | I want to hide files inside a picture. The files may be music or video. Is it technically possible to do this? If so, how? I searched on Google to find methods. Please suggest some methods for beginners. I don't know where to start, so please guide me. Important note: Modifying the size of an image up to a certain extent e.g.,
before hiding image size: 1MB and after hiding image size can be up to 4MB. I referred to the following URLs: martinolivier.com/open/stegoverview.pdf www.garykessler.net/library/steganography.html www.jjtc.com/ihws98/jjgmu.html pcplus.techradar.com/.../secrets-of-steganography/ - United Kingdom These are some examples, sir. | It is possible to hide files in other files. For pictures you can use the least significant bits of a RGB pixel definition. A pixel has 3 bytes defining its color. Light Sea Green is defined by: 32,178,170 (R,G,B) This translates to binary: 00100000,10101100,10101010 When we change the last bit of these, the color in an image does not change significantly. Therefor we can use the Least Significant bit of every color value of the pixel. This gives us 3 bits per pixel we can use. So take a text, convert it to its binary representation and then write an algorithm that changes the LSB of every R,G and B value in the picture to the bit of that text. If you have a text of 128 bits long, you will need 128/3 pixels to hide that text. Lets say I have a text who's binary representation is: 01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100 00100001 00100000 This text is 13 bytes long, meaning there are 13*8=104 bits. We know we can hide up to 3 bits in a pixel, so 104/3= 34.666, so we need 35 pixels. So if we have a picture we'll use the first 35 pixels. To show you how it works, I'll give an example with two pixels. We can hide 6 bits in there, the first six bits of our text is 010010 Our pixels are: pixel1,R: 00010101
pixel1,G: 01011111
pixel1,B: 10111100
pixel 2,R: 10010001
pixel 2,G: 00010101
pixel 2,B: 11011100 Now we can just change the last bit for every color value to the representative bit of the text: pixel1,R: 00010101 ---> 00010100 (changes to 0)
pixel1,G: 01011111 ---> 01011111 (remains the same)
pixel1,B: 10111100 ---> 10111100 (remains the same)
pixel 2,R: 10010001 ---> 10010000 (changes to 0)
pixel 2,G: 00010101 ---> 00010101 (remains the same)
pixel 2,B: 11011100 ---> 11011100 (remains the same) If we want to extract the text from the image, we just look at the LSB of the new pixels, we get: P1 R: 0
P1 G: 1
p1 B: 0
P2 R: 0
P2 G: 1
P2 B: 0 This is our row: 010010 | {
"source": [
"https://security.stackexchange.com/questions/17791",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11679/"
]
} |
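A compact Python sketch of the LSB embedding described above, using Pillow to edit pixels. The file names are placeholders, and for brevity it stores the payload with no length header or end marker, which a real tool would need in order to extract reliably.
from PIL import Image
def embed(in_path, out_path, payload):
    img = Image.open(in_path).convert("RGB")
    pixels = img.load()
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    w, h = img.size
    assert len(bits) <= w * h * 3, "payload too large for this image"
    idx = 0
    for y in range(h):
        for x in range(w):
            channels = list(pixels[x, y])
            for c in range(3):
                if idx < len(bits):
                    channels[c] = (channels[c] & ~1) | bits[idx]   # overwrite the LSB only
                    idx += 1
            pixels[x, y] = tuple(channels)
            if idx >= len(bits):
                img.save(out_path)
                return
embed("cover.png", "stego.png", b"Hello World! ")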
17,798 | I'm researching for a small talk about websecurity and I found one article about the formspring hack, which made me curious. They claim to have used SHA-256 + salt We were able to immediately fix the hole and upgraded our hashing mechanisms from sha-256 with random salts to bcrypt to fortify security (July 10th, 2012) … nevertheless the attackers claimed they have found ~200k passwords within the 400k published datasets (They have 11m users, it's IMHO likely that most of them have been copied). About half of the 400,000 hashes have already been reconstructed by password crackers. (July 11th, 2012) This looks suspicious to me. I know that without using PKCS#5 or similar techniques, the security of SHA-256 is limited, but how much calculation power would they need to find so many passwords so quick? My guess is that formspring was lying about the hashing. Has someone some insights on that? | None of the existing answers cover the critical part of this question to my satisfaction: what about the salts? If just the password hash values were posted, other crackers can't possibly know: The actual per-password (supposedly random, per the source) salt value. How the salt is mixed with the password in the code. All they have is the final, resulting hash! There are lots of ways the password and salt can be combined to form the hash: sha256(pass + salt)
sha256(salt + pass)
sha256(sha256(pass) + sha256(salt))
sha256(pass + sha256(salt))
sha256(sha256(...(salt + pass + salt)...)) But if the salt is the recommended 8 characters of pure randomness … sha256("monkey1" + "w&yu2P2@")
sha256("w&yu2P2@" + "monkey1") … this means a "typical" 7 or 8 character password becomes extremely difficult to brute force, because it is effectively 15 or more characters! Furthermore, to crack password hashes that you know are salted, unless you also have the salt, you have no other choice except brute force! Based on research I did using GPU cracking , I achieved 8213.6 M c/s using two high end ATI cards brute force cracking MD5 password hashes. In my benchmarking this meant I could try: all 6 character password MD5s 47 seconds
all 7 character password MD5s 1 hour, 14 minutes
all 8 character password MD5s ~465 days
all 9 character password MD5s fuggedaboudit Note that SHA-256 is 13% of the speed of MD5 on GPUs using Hashcat . So multiply these numbers by about 8 to see how much longer that would take. If the salts were not known , that means you're essentially brute forcing the equivalent of 12+ character passwords. That is far beyond the realm of any known computational power. Now if you want to argue that … The original crackers also obtained the salts, but chose not to post them. The original crackers also have the source code (or it's open source) so they know how the passwords are salted, but chose not to post that information. Formspring is lying and their passwords were not salted or salted improperly such that the salts had no effect. … then yes, cracking 200k of 400k password hashes in a few days is easily possible. | {
"source": [
"https://security.stackexchange.com/questions/17798",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11685/"
]
} |
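To make the role of the salt concrete: with a random salt per user, the same password yields unrelated hashes, and a cracker who does not have the salt cannot simply hash a wordlist once. The sha256(salt + password) construction below is just one of the combinations listed above, not necessarily what Formspring used, and as the answer notes a real system should prefer bcrypt or PBKDF2.
import hashlib
import os
def hash_password(password, salt=None):
    salt = salt or os.urandom(8)    # the "8 characters of pure randomness" per user
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()
salt1, h1 = hash_password("monkey1")
salt2, h2 = hash_password("monkey1")
print(h1 == h2)                     # False: same password, different salts
# knowing the salt, one hash per candidate is enough to test it:
salt, stored = hash_password("monkey1")
print(hashlib.sha256(salt + b"monkey1").hexdigest() == stored)   # True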
17,816 | Let's say a user is logging into a typical site, entering their username and password, and they mistype one of their inputs. I have noticed that most, if not all, sites show the same message (something along the lines of "Invalid username or password") despite only one input being wrong. To me, it seems easy enough to notify the user of which input was wrong, and this got me wondering why sites do not do it. So, is there a security reason for this, or is it just something that has become the norm? | If a malicious user starts attacking a website by guessing common username/password combinations like admin/admin, the attacker would know that the username is valid if it returns a message of "Password invalid" instead of "Username or password invalid". If an attacker knows the username is valid, he could concentrate his efforts on that particular account using techniques like SQL injection or bruteforcing the password. | {
"source": [
"https://security.stackexchange.com/questions/17816",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11710/"
]
} |
17,852 | I know malware can be gotten by downloading and running stuff, but is there a real possibility of just viewing a webpage or clicking a link and getting one? Assuming using only Firefox / Chrome and only the Flash plugin. Perhaps I should rephrase the question like this: How many drive-by viruses have been discovered in the past couple of years for Firefox, Chrome, and the Flash plugin? | You can lookup vulnerabilities at http://cve.mitre.org/ . "CVE is a dictionary of publicly known information security vulnerabilities and exposures." A rough search of: Firefox, returns 888 Chrome, returns 729 Flash, returns 371 Further filtering of the severity of these would need to be done, but this gives an upper bound of found vulnerabilities. http://web.nvd.nist.gov/view/vuln/search allows for the filtering based off of time period, with only CVE checkbox selected, searches of 3 years, and 3 months gives the following respectively: Firefox, returns 391, 64 Chrome, returns 653, 80 Flash, returns 227, 16 | {
"source": [
"https://security.stackexchange.com/questions/17852",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11765/"
]
} |
17,922 | How do YubiKeys work? Are there any alternatives? Here is a picture of one: | As I understand it, Yubikey acts like a USB keyboard. You plug it in your computer, place the cursor in a form field, press the button on the Yubikey, and it sends out a text string of 44 characters to the computer like you are typing those 44 characters. The computer doesn't know the difference between you typing it or the Yubikey generating it. A website like a Wordpress site with Yubikey plugin, or the Lastpass addon in Firefox, or any other website that has a Yubikey option, has a login form with username, password, and Yubikey password. You enter your username and password, place the cursor in the Yubikey field, then press the Yubikey button, and it enters the Yubikey password into the field. Then the form is submitted, and the Yubikey is validated in the Yubicloud. The website checks if the entered Yubikey password is valid. The Yubikey itself does not connect to the Yubicloud. It's just a device generating a string sending it out acting like a keyboard, and it does not connect to the internet or anything except as that keyboard. Before all this works, you need to update your account on the website to use Yubikey. That means you need to link your key to the account. That way the Yubicloud can check the generated code and validate it against your account. The website of course needs to implement the Yubikey functionality, which is available as a free service for website owners. If the Yubikey gets lost, you can use the normal recovery methods the website has to recover your account and disable the Yubikey. Normally this means that you get a password recovery link via email, and that link disables the Yubikey function in your account. I mailed Yubikey support to see if this answer is correct. They said this explanation was correct, except that it explained only one part of the way the key works. The other answers here don't give any real explanation. Even the Linuxjournal article doesn't explain it this way. The accepted answer gives a black-box answer - not what I was looking for when I opened this page. I hope this answer gives a better explanation and writing it made me understand the Yubikey better. | {
"source": [
"https://security.stackexchange.com/questions/17922",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6402/"
]
} |
17,931 | It seems that they are mutually exclusive, as disabling one gives me the other, and vice versa. Two-factor auth for my ssh servers sounds really nice, so is there any way to accomplish this? | With recent Fedora and RHEL 6 releases, you can use RequiredAuthentications2 pubkey,password to require both pubkey and password authentication. Usually this is done to require pubkey and 2-factor authentication token, not the user's password. Update: Now on RHEL / CentOS 7, and any system with a recent version of OpenSSH, you can use: AuthenticationMethods "publickey,password" "publickey,keyboard-interactive" It's also possible to use the Match directive to exclude IPs or Users. | {
"source": [
"https://security.stackexchange.com/questions/17931",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8121/"
]
} |
17,940 | I usually generate strong passwords using various online tools . Some time ago I mentioned it to a friend of mine and he was shocked that I do such dangerous thing. Is it really so unsafe to generate passwords online? Could there be generated some kind of cookie that tracks where I pasted it? | No. It is not safe to generate passwords online. Don't do it! In theory there are some ways that one could perhaps build a password generator that is not so bad (e.g., run it in Javascript, on your local machine, and so forth). However, in practice, there are too many pitfalls that an average user cannot be expected to detect. Consequently, I do not recommend it. For instance, an average user has no way to vet whether the password generator does indeed ensure that the password never leaves your site. The average user has no way to verify that the web site is not keeping a copy of your password. The average user has no way to verify that the password generation code is using good entropy (and Javascript's Math.random(), which is the obvious thing to use for this purpose, is not a great pseudorandom number generator). | {
"source": [
"https://security.stackexchange.com/questions/17940",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11838/"
]
} |
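The safer alternative hinted at above is to generate passwords locally rather than in someone else's web page. In Python that is a few lines with the secrets module (length and character set below are arbitrary choices):
import secrets
import string
def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from the OS CSPRNG, unlike random.random() or JavaScript's Math.random()
    return "".join(secrets.choice(alphabet) for _ in range(length))
print(generate_password())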
17,949 | What is the reason that most websites limit to 16 characters? I would have thought the longer the password the more difficult it makes it for someone to crack it? Is it something to do with hash collisions? | If you are to abide by CWE-521: Weak Password Requirements . Then all passwords must have a min and max password length. There are two reasons for limiting the password size. For one, hashing a large amount of data can cause significant resource consumption on behalf of the server and would be an easy target for Denial of Service. Especially if the server is using key stretching such as PBKDF2. The other concerns is hash length-extension attacks or the prefixing attack against MD5. However If you are using a hash function that isn't broken, such as bcrypt or sha-256 , then this shouldn't be a concern for passwords. IMHO 16 bytes is far too small . bcrypt has a built-in cap of 72 characters, which is probably a reasonable size for a heavy hash function. Key Stretching used by these functions creates the possibility of an Algorithmic Complexity Attack or ACA. | {
"source": [
"https://security.stackexchange.com/questions/17949",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11842/"
]
} |
17,979 | How secure is sending passwords through email to a user, since email isn't secured by HTTPS. What is the best way to secure it? Should i use encryption? | You should never send passwords in the clear, nor should you store them in the clear. You should hash them using a slow one-way cryptographic hash such as bcrypt or PBKDF2. If a user forgets their password, you offer them a "reset password" function, which sends a one-time reset link to their account. A scheme such as the following is reasonable: Hash all passwords using a salt plus bcrypt / PBKDF2 . See my reasoning here . (EDIT, March 2019: use Argon2 ) Validate the hashes upon login. If a user forgets their password, send them a secure one-time reset link, using a randomly generated reset token stored in the database. The token must be unique and secret, so hash the token in the database and compare it when the link is used. Enforce that a token can only be used to reset the password of the user who requested it. Once the token is used, it must be deleted from the database and must not be allowed to be used again. Have all password-equivilent tokens, including reset tokens, expire after a short time, e.g. 48 hours. This prevents an attacker exploiting unused tokens at a later date. Immediately display a form to allow the user to set a new password. Do not use temporary random generated passwords! Do all of this over SSL. I highly suggest reading through The Definitive Guide to Forms-Based Website Authentication for a full set of guidelines on how to build secure login systems. | {
"source": [
"https://security.stackexchange.com/questions/17979",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9759/"
]
} |
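A rough sketch of the reset-token flow recommended above: generate a random token, store only its hash together with an expiry, and compare in constant time when the link is used. The RESET_TOKENS dict stands in for a real database table; the names and the 48-hour lifetime follow the answer but are otherwise arbitrary.
import hashlib
import hmac
import secrets
import time
RESET_TOKENS = {}   # user_id -> (sha256(token), expires_at)
def issue_reset_token(user_id, ttl_seconds=48 * 3600):
    token = secrets.token_urlsafe(32)
    RESET_TOKENS[user_id] = (hashlib.sha256(token.encode()).hexdigest(), time.time() + ttl_seconds)
    return token    # e-mail this inside the one-time reset link, over HTTPS
def redeem_reset_token(user_id, token):
    token_hash, expires_at = RESET_TOKENS.get(user_id, ("", 0))
    ok = time.time() < expires_at and hmac.compare_digest(
        token_hash, hashlib.sha256(token.encode()).hexdigest())
    if ok:
        del RESET_TOKENS[user_id]   # tokens are single use
    return ok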
17,994 | When creating a hash with PBKDF2, it allows the developer to choose the size of the hash. Is longer always better? Also, what about the size of the random salt? Should that be the same size as the hash? EDIT: Particularly in hashing passwords. | For the hash function, you want to use a function for which the most efficient platform type (the one which will produce the more hash computations per second and per dollar) is the machine that you intend to use (i.e. a PC). That's because you are in a weapon race with the attacker, and the attacker can buy other kinds of hardware to get an edge over you (such as a GPU). GPU are very good at 32-bit arithmetics, but not at 64-bit arithmetics, whereas a PC (in 64-bit mode) will be quite fast at the latter. Thus, use a hash function which runs on 64-bit arithmetic operations. This points at SHA-512 . PBKDF2 is a key derivation function : it produces an output of configurable size. For password hashing, you want the size to be large enough to deter generic preimage attacks (i.e. trying random passwords until a match is found), so this would need, say, at least 80 bits of output. If only for making the security more convincing for the unwary, and also for aesthetics, go for the next power of 2: a 128-bit output . It is not useful to go beyond that. The salt must be unique -- as unique as possible. An easy way to achieve unique salt values is to generate salts with a cryptographically strong PRNG : probability of reusing a salt value will be sufficiently low to be neglected if these random salts are large enough, and "large enough" means "128 bits". So use random 128-bit salts . Personally, I prefer bcrypt over PBKDF2 for password hashing. | {
"source": [
"https://security.stackexchange.com/questions/17994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11890/"
]
} |
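The concrete parameters recommended above (SHA-512 underneath, a 128-bit random salt, a 128-bit output) map directly onto a PBKDF2 call; the iteration count below is only an example and should be tuned to your own hardware.
import hashlib
import os
def derive(password, salt=None, iterations=200000):
    salt = salt or os.urandom(16)    # 128-bit random salt from a CSPRNG
    dk = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations, dklen=16)
    return salt, dk                  # store the salt, the iteration count and the 128-bit hash
salt, dk = derive("correct horse battery staple")
print(salt.hex(), dk.hex())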
18,036 | I have configured sshd on an Ubuntu server to use key authentication and it is working fine. I had to disable password authentication for key authentication to work. Server is always accessed via remote terminals or putty. Now all user accounts are able to login with the authentication key and passphrase. But now I want to create only one new user without key authentication. So how should I go about doing this in such a way that does not hamper other users who are using key authentication. | You can use Match in sshd_config to select individual users to alter the PasswordAuthentication directive for. Enter these Match rules at the bottom of sshd_config file ( generally /etc/ssh/sshd_config ) Match User root,foo,bar
PasswordAuthentication no
Match User Rishee
PasswordAuthentication yes This would give root, foo and bar key authentication, and Rishee password authentication. An alternative is to match by negation, like this: PasswordAuthentication no
Match User *,!root
PasswordAuthentication yes In this case, everyone except root gets password authentication. Note: The *, syntax is necessary, as wildcard and negation syntax is only parsed in comma-separated lists. You can also match by group: Match Group usergroup
PasswordAuthentication no Reason for entering Match at the bottom of the file: If all of the criteria on the Match line are satisfied, the keywords on the following lines override those set in the global section of the config file, until either another >Match line or the end of the file | {
"source": [
"https://security.stackexchange.com/questions/18036",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9837/"
]
} |
18,086 | So I'm not familiar at all with IT Security, but I'm a bit curious about something. I was watching a TV show and at one point, a virus spreads through an office. They investigate and find out that the virus was encoded in a video and it was "activated" when the video was played. So my question is, is this possible? Could that actually happen? Once again, I'm not at all familiar with either IT Security or video encoding/codecs, so forgive my ignorance. EDIT: Thanks for all your answers. They were very interesting and insightful. If you're interested, the show in reference was White Collar Season 3 Episode 7 "Taking Account". | Yes, that's possible. The malware probably wouldn't be embedded in the video itself, but the video file would be specially crafted to exploit a vulnerability in the codec or media player, to gain code execution. The exploit would then download a file and run it, infecting the machine. These types of exploits have been common amongst popular document formats, e.g. PDF. Their proliferation makes them a good target for exploit writers, because people use them a lot and assume they're safe. At the end of the day, any file type could potentially contain an exploit, since an application that runs executable code is involved at some point. Exploits like this are usually buffer overflow attacks, which alter control flow by overwriting data structures outside the normal memory range of a buffer. More info: Buffer overflows on OWASP Buffer overflow protection Exploit Writing 101 on CoreLAN | {
"source": [
"https://security.stackexchange.com/questions/18086",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10984/"
]
} |
18,197 | Why shouldn't we create our own security schemes? I see a lot of questions around here about custom crypto and custom security mechanisms, especially around password hashing. With that in mind, I'm looking for a canonical answer, with the following properties: Easy for a newbie to understand. Clear and explicit in why rolling your own is a bad idea. Provides strong examples. Obligatory xkcd. | You can roll your own, but you probably will make a major security mistake if you are not an expert in security/cryptography or have had your scheme analyzed by multiple experts. I'm more willing to bet on an open-source publicly known encryption scheme that's out there for all to see and analyze. More eyes means more likely that the current version doesn't have major vulnerabilities, as opposed to something developed in-house by non-experts. From Phil Zimmermann's (PGP creator) Introduction to Cryptography (Page 54) : When I was in college in the early 70s, I devised what I believed was a brilliant
encryption scheme. A simple pseudorandom number stream was added to the
plaintext stream to create ciphertext. This would seemingly thwart any
frequency analysis of the ciphertext, and would be uncrackable even to the
most resourceful government intelligence agencies. I felt so smug about my
achievement. Years later, I discovered this same scheme in several introductory
cryptography texts and tutorial papers. How nice. Other cryptographers had
thought of the same scheme. Unfortunately, the scheme was presented as a
simple homework assignment on how to use elementary cryptanalytic
techniques to trivially crack it. So much for my brilliant scheme. From this humbling experience I learned how easy it is to fall into a false sense
of security when devising an encryption algorithm. Most people don’t realize
how fiendishly difficult it is to devise an encryption algorithm that can
withstand a prolonged and determined attack by a resourceful opponent. (This question has more discussion of the above quote.) If you are not convinced of "Don't Roll Your Own [Cryptography/Security]", then you probably are not an expert and there are many mistakes you likely will make. Is your application robust against: Timing Attacks . E.g., to the nanoseconds do completely-bad keys and partially-bad keys take the same amount of time in the aggregate to fail? Otherwise, this timing information can be exploited to find the correct key/password. Trivial Brute Force Attacks ; e.g., that can be done in within seconds to years (when you worry about it being broken within a few years). Maybe your idea of security may be a 1 in a billion (1 000 000 000) chance of breaking in (what if someone with a bot net tries a few billion times?). My idea is to aim for something like 1 in ~2 128 ( 34 000 000 000 000 000 000 000 000 000 000 000), which is roughly ten million billion billion times more secure and completely outside the realm of guessing your way in. Attacks on user accounts in parallel; e.g., you may hash passwords with the same (or worse no) 'salt' on all password hashes in the database like what happened with the leaked LinkedIn hashes. Attack any specific account trivially simply. Maybe there was a unique random salt with each simply hashed (e.g., MD5/SHA1/SHA2) password, but as you can try billions of possible passwords on any hash each second, so using common password lists, dictionary attacks, etc. it may only take an attacker seconds to crack most accounts. Use strong cryptographic hashes like bcrypt or PBKDF2 to avoid or key-strengthen regular hashes by a suitable factor (typically 10 (3-8) ). Attacks on guessable/weak "random" numbers. Maybe you use microtime/MT-rand or too little information to seed the pseudo-random number like Debian OpenSSL did a few years back . Attacks that bypass protections. Maybe you did hashing/input validation client side in web application and this was bypassed by the user altering the scripts. Or you have local application that the client tries running in a virtual machine or disassembles to reverse engineer it/alter the memory/ or otherwise cheat somehow. Other attacks, including (but not attempting to be a complete list) CSRF , XSS , SQL injection , network eavesdropping, replay attacks , Man in the Middle attacks , buffer overflows , etc. Best protections very quickly summarized. CSRF: require randomly generated CSRF tokens on POST actions; XSS: always validate/escape untrusted user-input before inputting into the database and displaying to user/browser. SQLi: always use bound parameters and limit how many results get returned. Eavesdropping: encrypt sensitive network traffic. Replay: put unique one-time nonces in each transaction. MitM: Web of Trust/Same as site last visited/Certificate issued by trusted CA. Buffer overflows: safe programming language/libraries/executable space protection/etc). You are only as strong as your weakest exploitable link. Also just because you aren't rolling your own scheme, doesn't mean your scheme will be secure, it's quite likely that the person who created what you rolled out was not an expert, or created an otherwise weak scheme. | {
"source": [
"https://security.stackexchange.com/questions/18197",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
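To illustrate the first pitfall in the list above, timing attacks: a plain == on a secret value may return as soon as the first byte differs, leaking how much of a guess was correct, while the standard library already offers a constant-time comparison. A minimal sketch:
import hmac
def insecure_check(supplied, expected):
    return supplied == expected      # may short-circuit on the first differing byte
def constant_time_check(supplied, expected):
    # hmac.compare_digest takes the same time whether the first or the last byte differs
    return hmac.compare_digest(supplied.encode(), expected.encode())
print(constant_time_check("deadbeef", "deadbeef"))   # True
print(constant_time_check("deadbeef", "deadbeee"))   # False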
18,225 | Does the software in the Mars Curiosity Rover have any security features built-in? I can't imagine how someone would hack into it, but if the rover does indeed have some protection against malicious hackers, what kind of attacks would it be protecting itself against? | I used to be a Command Controller (CC) at the Laboratory for Atmospheric and Space Physics (LASP) ( http://lasp.colorado.edu/ ). I was one of the people who would sit in front of the console during the times when spacecraft were visible to the ground stations. I would read/record telemetry to ensure spacecraft health and often send up new commands that would be executed by the spacecraft. In order to communicate with the ground stations (for both data and voice if necessary), the Mission Operation Center (MOC) at LASP had to have a connection to NASA's "red net". I am not sure if this was a LASP term or a NASA term. A google search turned up little information. In order to be even let in the room with access to the "red net" background checks were required. Then, in order to actually interact with the console, you had to be a certified CC, or a CC in training being overseen by a certified CC. All CC activity is always overseen by a "Flight Controller" (FC) or even a "Flight Director" (FD). In the training to become a CC, we all had to know exactly the packet structure of the communication protocol for every spacecraft we operated. While there were most certainly checksums in the protocol, I don't believe that there was any sort of encryption, authentication or verification of the data received by the spacecraft. The spacecraft are always designed to be very fault tolerant, and have fallback modes in case the RF communication is corrupted or there are other "single bit errors". Error detection and correction is a fundamental feature of RF spacecraft communication. I also worked on one deep space mission, though not as a CC. Anything not in earth orbit would require much larger antennas and likely what NASA calls the "Deep Space Network" ( http://deepspace.jpl.nasa.gov/dsn/ ). This makes an attack even more challenging. The risks as I see them today are several fold. I am not sure if they have since been fixed as I haven't worked at LASP in many years. I am also completely unaware of the design of anything but a few scientific missions. The worst things I think an attacker could do would be: Threaten to deorbit the spacecraft, or even simply waste precious propulsion fuel, for mischeif or for ransom. Threaten to try to change the orbit which could possibly cause a collision with other spacecraft -- again, for mischeif or ransom. Otherwise, I am not sure what could be gained by an attacker. Here are the vectors I see that may make an attack possible: Forged communication with the spacecraft. This would have to be done with knowledge of the spacecraft's ephemeris and with the ability to establish communication with the spacecraft. The ephemeris is fairly easy to obtain, but getting control of a ground station may require that the attacker be a state actor. Man in the middle (MITM) attacks between the ground station and MOC. Getting onto the NASA "red net" would be highly challenging. This is the same network that the space station operates on. However, once there, it might be possible to somehow become a MITM and pass on good or forged telemetry to the MOC while sending arbitrary commands to the ground station (ignoring any commands sent by the MOC). 
This would also require that the attacker have fairly vast resources and prior knowledge. In either of these cases, the payoff would likely not be worth the reward. Then again, I only worked with "smallish" scientific missions. It might certainly be worth the risk for one country to "steal" a military spacecraft from another as the cost for a country to design, launch and maintain such a spacecraft is likely much greater than the cost of the attack. I imagine these have much stronger security though. To answer your question, I don't know how secure the Mars Curiosity mission is. However, due to the distance, it probably operates solely on the DSN ground stations (very large antennas, of which there are only a handful in the world). Any attacker would either have to commandeer one of these stations, or build his own (which would be hard to hide). Further, the security of the communications between the ground stations and mission operations is certainly a top priority for NASA and JPL. In summary, communications reliability is a much greater concern when designing spacecraft -- it's really hard to talk to stuff so far away, with so much interference in-between and the constant bombardment of radiation that can affect just about anything in the process. While there may or may not be any sort of encryption, authentication or verification in the RF communication between the ground station and the spacecraft, the ability to actually interfere would likely only be available to nation-states. It would also be hard to hide who would be behind such an attack and the political fallout would likely be immense. | {
"source": [
"https://security.stackexchange.com/questions/18225",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10984/"
]
} |
18,281 | For example if I use WinRAR to encrypt a file and put a password on the archive how secure is it? I keep a personal journal and am thinking of doing this, or is there a better way? It's just one huge .docx file. | Summary: yes, but use VeraCrypt instead. From the documentation : WinRAR offers you the benefit of industry strength archive encryption using AES (Advanced Encryption Standard) with a key of 128 bits. So yes, the data is encrypted. This is only one of the elements of security, however. Another important element is how the key is derived from the password: what kind of key strengthening is performed? The slower the derivation of the key from the password, the more costly it is for an attacker to find the password (and hence the key) by brute force. A weak password is toast anyway, but good key strengthening can make the difference for a reasonably complex but still memorable password . WinRAR uses 262144 rounds of SHA-1 with a 64-bit salt , that's good key strengthening. An academic paper has been written on the security of WinRAR: On the security of the WinRAR encryption feature by Gary S.-W. Yeo and Raphael C.-W. Phan (ISC'05). Quoting from the abstract (I haven't read the full text, it doesn't seem to be accessible without paying): In this paper, we present several attacks on the encryption feature provided by the WinRAR compression software. These attacks are possible due to the subtlety in developing security software based on the integration of multiple cryptographic primitives. In other words, no matter how securely designed each primitive is, using them especially in association with other primitives does not always guarantee secure systems. Instead, time and again such a practice has shown to result in flawed systems. Our results, compared to recent attacks on WinZip by Kohno, show that WinRAR appears to offer slightly better security features. The advantage of using the encryption built into the RAR format is that you can distribute an encrypted RAR archive to anyone with WinRAR, 7zip or other common software that supports the RAR format. For your use case, this is irrelevant. Therefore I recommend using a software that is dedicated to encryption. The de facto standard since you're using Windows was TrueCrypt . TrueCrypt provides a virtual disk which is stored as an encrypted file. Not only is this more secure than WinRAR (I trust TrueCrypt, which is written with security in mind from day 1, far more than any product whose encryption is an ancillary feature), it is also more convenient: you mount the encrypted disk by providing your password, then you can open files on the disk transparently, and when you've finished you unmount the encrypted disk.
Sadly, TrueCrypt is no longer in active development, but its successor VeraCrypt is. VeraCrypt is based on TrueCrypt and is compatible with the old TrueCrypt containers. Out of curiosity, can what someone writes in their journal be used to incriminate them in court? This depends on the jurisdiction, but in general, yes, as they say in the movies, anything you say or write can be used against you. You may be legally compelled to reveal encryption keys, and may face further charges if you refuse. | {
"source": [
"https://security.stackexchange.com/questions/18281",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10714/"
]
} |
18,290 | Is it possible to ensure that a string is hashed always the same way no matter which language you use to do it (Java, vb.net... ) and no matter what operative system you are? | Hash functions are deterministic : same input yields the same output. Any implementation of a given hash function, regardless of the language it is implemented in, must act the same. However , note that hash functions take sequences of bits as input. When we "hash a string", we actually convert a sequence of characters into a sequence of bits, and then hash it. There begins the trouble. Consider the string "café" : among all the possible conversions to bits, all of the following are common: 63 61 66 e9 ISO-8859-1 ("latin-1")
63 61 66 c3 a9 UTF-8
63 61 66 65 cc 81 UTF-8 (NFD)
ef bb bf 63 61 66 ca a9 UTF-8 (with BOM)
ef bb bf 63 61 66 65 cc 81 UTF-8 (NFD with BOM)
63 00 61 00 66 00 e9 00 UTF-16 little-endian
00 63 00 61 00 66 00 e9 UTF-16 big-endian
ff fe 63 00 61 00 66 00 e9 00 UTF-16 little-endian (with BOM)
fe ff 00 63 00 61 00 66 00 e9 UTF-16 big-endian (with BOM)
63 00 61 00 66 00 65 00 01 03 UTF-16 little-endian (NFD)
00 63 00 61 00 66 00 65 03 01 UTF-16 big-endian (NFD)
ff fe 63 00 61 00 66 00 65 00 01 03 UTF-16 little-endian (NFD with BOM)
fe ff 00 63 00 61 00 66 00 65 03 01 UTF-16 big-endian (NFD with BOM) and all will yield very different hash values when processed with a given hash function. You have to be very precise about what you do when dealing with cryptographic functions; every bit counts. | {
"source": [
"https://security.stackexchange.com/questions/18290",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9565/"
]
} |
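The point above is easy to check by hashing the 'same' string under a few of those encodings and normalization forms; every digest comes out different. SHA-256 is an arbitrary choice here:
import hashlib
import unicodedata
s = "café"
for label, data in [
    ("latin-1", s.encode("latin-1")),
    ("utf-8", s.encode("utf-8")),
    ("utf-8 NFD", unicodedata.normalize("NFD", s).encode("utf-8")),
    ("utf-16-le", s.encode("utf-16-le")),
]:
    print(label, data.hex(), hashlib.sha256(data).hexdigest()[:16])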
18,294 | Each physical user will have about 20 tables in the database. When a new user registers, another 20 tables will be created with access only for their new MySQL account. Data does not have to be shared between users. Is this logic more secure than simply having just 20 tables for all the users?
And what about performance will be dead or not ? EDIT I just want to be sure that a user who likes to "play" with injections and finds a way around will not be able to access other user data. | {
"source": [
"https://security.stackexchange.com/questions/18294",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6205/"
]
} |
18,476 | I just started to create a new web application. In the documentation, it is written that I have to prepare for the situation where users have disabled cookies. This is not the first time I have read this condition. Can anyone explain me why users want to disable cookies in their browsers? | Cookies have, historically, been a source of numerous security and privacy concerns. For example, tracker cookies can be used to identify which websites you've visited and what activities you've done on them: Site A includes hidden iframe that points at a tracker service. Tracker service issues a cookie that identifies you, and logs your visit. Site B includes the same hidden iframe . Tracker service recognises your cookie, and logs that visit too. Site A and Site B pay the tracker to get information about what other sites their users visited. This is just one application. There are other ways to use tracker cookies, some of which allow all sorts of nasty attacks such as identity theft. Another problem is cookie-stealing, which can be used to hijack insecure (i.e. non-HTTPS) sessions. Using an exploit (e.g. XSS) a page might manage to post another site's cookies back to itself, allowing an attacker to steal your session ID. Turning off cookies prevents this. Due to these problems, users often disable cookies or block them on certain sites for increased privacy and security. | {
"source": [
"https://security.stackexchange.com/questions/18476",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12123/"
]
} |
18,552 | In the wake of the recent Mat Honan story I decided to try out two-factor authentication on my Google account. But in order to keep using it with Exchange, the Android OS, Google Talk and Google Chrome you have to create application-specific passwords. Summary of the procedure Let me get a few things straight. Do I understand the security implications of application-specific passwords correctly? Google does not automatically disable app-specific passwords when they are suddenly used out of their expected context (e.g. to access e-mail even though it was set up for Chrome sync). I have to generate additional passwords that all give immediate access to my account, bypassing two-factor authentication entirely. The higher the number of application-specific passwords the higher the chances are of a brute force attack succeeding. These passwords have a fixed length and don't contain numbers or symbols, which make them more susceptible to brute force attacks than a password with unknown length containing letters, numbers and symbols. Assuming that I want to keep using features like IMAP access (which would force me to make at least one app-specific password), would I be better or worse off using two-factor authentication? | You wrote (emphasis mine): The higher the number of application-specific passwords the higher the chances are of a brute force attack succeeding. These passwords have a fixed length and don't contain numbers or symbols, which make them more susceptible to brute force attacks than a password with unknown length containing letters, numbers and symbols . Short answer: Not in any practical way. Long answer: Do the math: 16 lower case letters allows 26^16 different passwords, that is more than 10^22 = 10 × 1000^7 = ten sextillion possible passwords. If the password is chosen randomly with equal probabilities (we have no reason to believe it is not the case), the odds of breaking the password by brute force are negligible , even if Google does not notice the attack and does not take any counter measure. Even with 100 application specific passwords for one Google account, there is no way anyone would try this attack. The "susceptibility" to brute force attacks is zero. And it is much easier on many smart phones to type a password made of only lower case letter than a combination of letters and digit or mixed-case letters (for the same number of possible passwords). You also wrote: Google does not automatically disable app-specific passwords when they are suddenly used out of their expected context (e.g. to access e-mail even though it was set up for Chrome sync). That is the only real security issue here. | {
"source": [
"https://security.stackexchange.com/questions/18552",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10074/"
]
} |
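To put numbers on the brute-force argument in the answer above, here is a quick Python check; the guess rate is an assumed figure for illustration only, not a measured limit for Google's servers.
keyspace = 26 ** 16                      # all 16-letter lower-case passwords
guesses_per_second = 10_000              # assumption: a very generous online guessing rate
years = keyspace / guesses_per_second / (365 * 24 * 3600)
print(f"{keyspace:.2e} possible passwords")       # about 4.4e22
print(f"{years:.2e} years to exhaust the space")  # about 1.4e11 years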
18,556 | How do Address Space Layout Randomisation (ASLR) and Data Execution Prevention (DEP) work, in terms of preventing vulnerabilities from being exploited? Can they be bypassed? | Address Space Layout Randomisation (ASLR) is a technology used to help prevent shellcode from being successful. It does this by randomly offsetting the location of modules and certain in-memory structures. Data Execution Prevention (DEP) prevents certain memory sectors, e.g. the stack, from being executed. When combined it becomes exceedingly difficult to exploit vulnerabilities in applications using shellcode or return-oriented programming (ROP) techniques. First, let's look at how a normal vulnerability might be exploited. We'll skip all the details, but let's just say we're using a stack buffer overflow vulnerability. We've loaded a big blob of 0x41414141 values into our payload, and eip has been set to 0x41414141 , so we know it's exploitable. We've then gone and used an appropriate tool (e.g. Metasploit's pattern_create.rb ) to discover the offset of the value being loaded into eip . This is the start offset of our exploit code. To verify, we load 0x41 before this offset, 0x42424242 at the offset, and 0x43 after the offset. In a non-ASLR and non-DEP process, the stack address is the same every time we run the process. We know exactly where it is in memory. So, let's see what the stack looks like with the test data we described above: stack addr | value
-----------+----------
000ff6a0 | 41414141
000ff6a4 | 41414141
000ff6a8 | 41414141
000ff6aa | 41414141
>000ff6b0 | 42424242 > esp points here
000ff6b4 | 43434343
000ff6b8 | 43434343 As we can see, esp points to 000ff6b0 , which has been set to 0x42424242 . The values prior to this are 0x41 and the values after are 0x43 , as we said they should be. We now know that the address stored at 000ff6b0 will be jumped to. So, we set it to the address of some memory that we can control: stack addr | value
-----------+----------
000ff6a0 | 41414141
000ff6a4 | 41414141
000ff6a8 | 41414141
000ff6aa | 41414141
>000ff6b0 | 000ff6b4
000ff6b4 | cccccccc
000ff6b8 | 43434343 We've set the value at 000ff6b0 such that eip will be set to 000ff6b4 - the next offset in the stack. This will cause 0xcc to be executed, which is an int3 instruction. Since int3 is a software interrupt breakpoint, it'll raise an exception and the debugger will halt. This allows us to verify that the exploit was successful. > Break instruction exception - code 80000003 (first chance)
[snip]
eip=000ff6b4 Now we can replace the memory at 000ff6b4 with shellcode, by altering our payload. This concludes our exploit. In order to prevent these exploits from being successful, Data Execution Prevention was developed. DEP forces certain structures, including the stack, to be marked as non-executable. This is made stronger by CPU support with the No-Execute (NX) bit, also known as the XD bit, EVP bit, or XN bit, which allows the CPU to enforce execution rights at the hardware level. DEP was introduced in Linux in 2004 (kernel 2.6.8), and Microsoft introduced it in 2004 as part of WinXP SP2. Apple added DEP support when they moved to the x86 architecture in 2006. With DEP enabled, our previous exploit won't work: > Access violation - code c0000005 (!!! second chance !!!)
[snip]
eip=000ff6b4 This fails because the stack is marked as non-executable, and we've tried to execute it. To get around this, a technique called Return-Oriented Programming (ROP) was developed. This involves looking for small snippets of code, called ROP gadgets, in legitimate modules within the process. These gadgets consist of one or more instructions, followed by a return. Chaining these together with appropriate values in the stack allows for code to be executed. First, let's look at how our stack looks right now: stack addr | value
-----------+----------
000ff6a0 | 41414141
000ff6a4 | 41414141
000ff6a8 | 41414141
000ff6aa | 41414141
>000ff6b0 | 000ff6b4
000ff6b4 | cccccccc
000ff6b8 | 43434343 We know that we can't execute the code at 000ff6b4 , so we have to find some legitimate code that we can use instead. Imagine that our first task is to get a value into the eax register. We search for a pop eax; ret combination somewhere in any module within the process. Once we've found one, let's say at 00401f60 , we put its address into the stack: stack addr | value
-----------+----------
000ff6a0 | 41414141
000ff6a4 | 41414141
000ff6a8 | 41414141
000ff6aa | 41414141
>000ff6b0 | 00401f60
000ff6b4 | cccccccc
000ff6b8 | 43434343 When this shellcode is executed, we'll get an access violation again: > Access violation - code c0000005 (!!! second chance !!!)
eax=cccccccc ebx=01020304 ecx=7abcdef0 edx=00000000 esi=7777f000 edi=0000f0f1
eip=43434343 esp=000ff6ba ebp=000ff6ff The CPU has now done the following: Jumped to the pop eax instruction at 00401f60 . Popped cccccccc off the stack, into eax . Executed the ret , popping 43434343 into eip . Thrown an access violation because 43434343 isn't a valid memory address. Now, imagine that, instead of 43434343 , the value at 000ff6b8 was set to the address of another ROP gadget. This would mean that pop eax gets executed, then our next gadget. We can chain gadgets together like this. Our ultimate goal is usually to find the address of a memory protection API, such as VirtualProtect , and mark the stack as executable. We'd then include a final ROP gadget to do a jmp esp equivalent instruction, and execute shellcode. We've successfully bypassed DEP! In order to combat these tricks, ASLR was developed. ASLR involves randomly offsetting memory structures and module base addresses to make guessing the location of ROP gadgets and APIs very difficult. On Windows Vista and 7, ASLR randomises the location of executables and DLLs in memory, as well as the stack and heaps. When an executable is loaded into memory, Windows gets the processor's timestamp counter (TSC), shifts it by four places, performs division mod 254, then adds 1. This number is then multiplied by 64KB, and the executable image is loaded at this offset. This means that there are 256 possible locations for the executable. Since DLLs are shared in memory across processes, their offsets are determined by a system-wide bias value that is computed at boot. The value is computed as the TSC of the CPU when the MiInitializeRelocations function is first called, shifted and masked into an 8-bit value. This value is computed only once per boot. When DLLs are loaded, they go into a shared memory region between 0x50000000 and 0x78000000 . The first DLL to be loaded is always ntdll.dll, which is loaded at 0x78000000 - bias * 0x100000 , where bias is the system-wide bias value computed at boot. Since it would be trivial to compute the offset of a module if you know ntdll.dll's base address, the order in which modules are loaded is randomised too. When threads are created, their stack base location is randomised. This is done by finding 32 appropriate locations in memory, then choosing one based on the current TSC shifted and masked into a 5-bit value. Once the base address has been calculated, another 9-bit value is derived from the TSC to compute the final stack base address. This provides a high theoretical degree of randomness. Finally, the location of heaps and heap allocations are randomised. This is computed as a 5-bit TSC-derived value multiplied by 64KB, giving a possible heap range of 00000000 to 001f0000 . When all of these mechanisms are combined with DEP, we are prevented from executing shellcode. This is because we cannot execute the stack, but we also don't know where any of our ROP instructions are going to be in memory. Certain tricks can be done with nop sleds to create a probabilistic exploit, but they are not entirely successful and aren't always possible to create. The only way to reliably bypass DEP and ASLR is through a pointer leak. This is a situation where a value on the stack, at a reliable location, might be used to locate a usable function pointer or ROP gadget. Once this is done, it is sometimes possible to create a payload that reliably bypasses both protection mechanisms.
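As a toy illustration of the executable-image randomisation just described (this is not real Windows code; the tsc argument is a stand-in for the timestamp counter value):
import random
def image_base_offset(tsc):
    # Mirrors the described computation: shift the TSC by four places,
    # take it mod 254, add 1, then multiply by 64 KB.
    return (((tsc >> 4) % 254) + 1) * 0x10000
offsets = {image_base_offset(random.getrandbits(64)) for _ in range(100_000)}
print(len(offsets))   # at most 254 distinct offsets - the small space the text rounds to 256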
Sources: Windows Internals 5th Edition - Mark Russinovich An Analysis of ASLR in Windows Vista - Symantec ASLR on Wikipedia DEP on Wikipedia Further reading: Stack-based exploit writing - CoreLAN Bypassing stack cookies, SafeSEH, SEHOP, HW DEP and ASLR - CoreLAN Bypassing ASLR/DEP - exploit-db | {
"source": [
"https://security.stackexchange.com/questions/18556",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
18,615 | Countries rely on computer infrastructure for a huge percentage of communications and military management, as well as utilities like electricity. Unfortunately, nuclear attacks release massive electromagnetic pulses that damage these systems. Normally this isn't much of an issue, since everything in the pulse radius is reduced to ash, but high-altitude nuclear explosions (HANE) can cause serious electronic damage without the hassle of mass death and destruction. Ignoring the ridiculous improbability, I'd like to know how infrastructure critical to a nation during wartime might be secured against such damage. Since this is purely theoretical, I'd like to see sources for all information. I don't mind what countries you focus on, as a nuclear war is likely to affect all of them. I know this is a pretty outlandish question, but it's been on my mind for a while and I'd love to see some well-sourced answers on the subject. P.S. Yes, I did just find a legitimate use for the nuclear-bomb tag. | The Internet at large is designed to resist nuclear blasts. At least, it was a design goal of its immediate predecessor, ARPANET . There is no secret: to survive loss of components, you must have redundancy. In the context of nuclear blasts, this means that there must exist several paths for data between any two machines, and the paths should be as geographically separate as possible. Mathematically, given an assumed blast radius r of 50 miles (for a nuclear-powered EMP , this is a rather low estimate), and two machines A and B and two paths between A and B , then the following should hold: for any two points M and N where M is on path 1 and N is on path 2, and the distance between M and N is less than r , then either M or N (or both) is no farther than r from either A or B . In plain words, the two paths never come closer than 50 miles from each other, except at both extremities (the two paths necessarily join at A and B ). The packet routing nature of ARPANET, then the Internet, allows for such redundancy. Extra points to radio links, in particular satellites: the link between a base station and a satellite cannot be permanently disrupted by a nuclear blast in between. The blast may induce a high ionization of the upper layers of atmosphere, so communications may be temporarily jammed, especially for longer wave lengths; satellites work in the GHz band and should have less trouble than, say, FM. Also, geostationary satellites tend to be relatively high on the horizon (at least from southern USA -- much less so from, say, Moscow) so getting a blast between a base station in Atlanta and a geostationary satellite which is roughly over the Americas entails detonating the thing over US territory, at which point Atlanta itself is in big trouble. Transoceanic cables should also be fine: 3 miles of water are a Hell of a shield. And they offer lower latency than geostationary satellites (the ping time with a remote server through a geostationary satellite cannot be lower than half a second, because 4*36000 = 144000 km); latency is a problem for flying drones. Lower-altitude satellites are more difficult to use (from the point of view of a base station, they move a lot and often go beyond the horizon) and are in range of Anti-satellite missiles . Optic fiber is more resilient to EMP than copper links, and the US military have studied that for more than 35 years . The weak part of an optic fiber link would be repeaters: the devices which pick up the signal and re-emit it stronger. 
You need some of these in any long-range cable. But at least this reduces the problem to building anti-radiation bunkers at regular intervals. Actually, a bigger problem may be electricity. An EMP will imply high surges in the grid. For instance, the US grid has trouble resisting bad weather . And, of course, redundancy of network links is not sufficient: you also need to duplicate the servers (data storage, computing elements). You already need to do that to survive floods and earthquakes and even simpler events like a server room burning down . EMP resistance is just more of the same, on a slightly larger scale. | {
"source": [
"https://security.stackexchange.com/questions/18615",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
18,666 | While shopping for a basic SSL cert for my blog, I found that many of the more well-known Certificate Authorities have an entry-level certificate (with less stringent validation of the purchaser's identity) for approximately $120 and up. But then I found that Network Solutions offers one of these lower-end certs for $29.99 (12 hours ago it was $12.95) with a 4-year contract. Is there any technical security reason that I should be aware of that could make me regret buying the lowest-end certificate? They all promise things like 99% browser recognition, etc. I'm not asking this question on SE for comparison of things like the CA's quality of support (or lack thereof) or anything like that. I want to know if there is any cryptographic or PKI reason so avoid a cert which costs so little. It, like others, says that it offers "up to 256 bit encryption". | For the purposes of this discussion there are only a couple differences between web signing certificates: Extended vs standard validation (green bar). Number of bits in a certificate request (1024/2048/4096). Certificate chain. It is easier to set up certificates with a shorter trust chain but there are inexpensive certs out there with a direct or only one level deep chain. You can also get the larger 2048 and 4096 bit certs inexpensively. As long as you don't need the extended validation there is really no reason to go with the more expensive certificates. There is one specific benefit that going with a larger vendor provides - the more mainline the vendor, the less likely they are to have their trust revoked in the event of a breach. For example, DigiNotar is a smaller vendor that was unfortunate enough to have their trust revoked in September 2011. | {
"source": [
"https://security.stackexchange.com/questions/18666",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12057/"
]
} |
18,677 | The company I work for needs a system to perform monthly credit card charges to customer accounts. Customers will be able to update their credit card information from an online interface written in PHP (which will be presented through HTTP over SSL). The monthly charges will be run manually through a password-protected admin area of the same interface, which will basically amount to a batch call to Authorize.Net's API. My coworkers want to store the (encrypted) credit card information in a MySQL database. They plan to encrypt the credit card numbers with PHP's mcrypt extension , presumably using Rijndael/AES or Twofish. The subject of key management was glossed over, but the following two options were mentioned: storing keys in the database, with a separate key for each credit card entry; or storing a single encryption key for all credit card entries in a PHP file as a variable (similar to how PHP applications such as WordPress and MediaWiki store database credentials). How can I convince them not to go this route? We will be using Authorize.net for payment processing, so I mentioned their Customer Information Manager service as an alternative to storing credit card information ourselves. However, I don't know the subject well enough to make a compelling argument (my arguments were "none of us are security experts" and "I wouldn't feel comfortable with a company storing my credit card information in this manner"). If I am unable to convince them to use a 3rd party service like Customer Information Manager , what should I keep in mind in order to keep customers' credit card information safe? In particular, how should I manage encryption keys? Should there be a separate encryption key for each customer entry? Where can I store the keys so I can decrypt the credit card information before sending the transactions over to Authorize.Net? The two options mentioned above don't seem very sound to me (but again, I don't know the subject well enough to make a compelling argument against either). Update : I found someone in the company familiar with PCI DSS compliance, so I'm working with him to make sure this gets done right. However, I would still appreciate answers to my questions above (both to improve my knowledge and to help others in a similar situation). Update 2 : Success! Another developer sat down and read the PCI DSS guideline and decided that it's a bad idea to store the information ourselves after all. We will be using Authorize.Net's Customer Information Manager service. | Storing card numbers means you must comply with the requirements of PCI-DSS, or you risk fines and breach of your merchant account contract. PCI-DSS has an enormous set of requirements - some sensible, some onerous, some of questionable usefulness - and the cost of complying with it, and certifying that you've complied with it, can be very high. Download the PCI-DSS 2.0 standard and be afraid. In particular, neither of the suggested approaches are PCI-DSS compliant. Keys mustn't be stored unprotected or in the same place as the cardholder data they protect. If you must implement this in-house, you need to isolate the components that touch card numbers from everything else, so that PCI-DSS only applies to a limited number of systems. Look at tokenisation solutions to help you keep your cardholder data exposure to a minimum. If you don't do this right you can easily drop all of your servers (or even all your corporate desktops!) into PCI scope, at which point compliance becomes difficult-to-impossible. 
Authorize.net offer automated recurring billing services. For the love of sanity, use that. | {
"source": [
"https://security.stackexchange.com/questions/18677",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12219/"
]
} |
18,680 | This article and this search suggest that the 32-bit word 0x41414141 is associated to security exploits. Why is 0x41414141 associated to security exploits? | It's nothing fundamental. It's just a historical convention, like using foo as the name of a variable when you have no clue what to name it. In more detail: The simplest way to test for a buffer overflow is to type a long string of A's (AAAAAAAA...) into a text field, and see what happens. If the program crashes, it might be vulnerable. If the program crashes and a debugger shows 0x41414141 in the program counter, ooh boy, you hit pay dirt: the program is almost surely vulnerable. (Remember, the ASCII code for 'A' is 0x41 in hex, so 0x41414141 is what you'd see if you looked at the byte-level representation of a string of A's in a hex editor.) Why A's? No reason at all; they're just the first letter in the alphabet. So, this is a quick-and-dirty test that pentesters sometimes use. But of course, there's nothing special about 0x41414141. Douglas Adams fans could type in a long string of B's, and then look for 0x42424242. That'd be equally effective, and even more fun. I gotta remember to use that one in my next hacking demo..... | {
"source": [
"https://security.stackexchange.com/questions/18680",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11902/"
]
} |
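A quick sanity check of the byte-level claim in the answer above, using only the standard library (byte order doesn't matter here, since all four bytes are equal):
import struct
value, = struct.unpack("<I", b"AAAA")   # four 'A' bytes as a 32-bit little-endian integer
print(hex(value))                       # 0x41414141 - what lands in eip after the overflow
value, = struct.unpack("<I", b"BBBB")   # the Douglas Adams variant
print(hex(value))                       # 0x42424242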
18,853 | We all know we should be using SSL whenever we collect passwords or other sensitive information. SSL provides two main benefits: Encryption: The data can't be read by a middle-man while in transit. Protection against MITM attacks: A man in the middle can't pretend to be a server, since they can't produce a CA-signed certificate for the server. If I'm downloading an application, I'm probably going to run it at some point, maybe even as root. Some programs will be signed, but many aren't. Shouldn't downloads be done over SSL so that I know that it hasn't been tampered with during transit? If somebody steals my password, that's bad. But if somebody plants a keylogger on my computer, that's way, way worse. | Note that the part of this answer below the horizontal line was written in 2012, back when the question was asked. At the time, only a small part of the web used HTTPS. Since then, most of the web has switched to HTTPS, so the short answer is “they are”. This answer is still relevant insofar as it explains what HTTPS does and does not secure, in the context of application downloads. Because HTTPS is not very well suited to securing downloads of large public files. For this use case, it's slow and not that useful. There are reasons for not using HTTPS well beyond incompetence or unawareness. HTTPS doesn't fully solve the problem . If you're getting your application straight from the vendor's website, HTTPS does ensure the authenticity of the application. But if you're getting your application from a third party (e.g. mirrors of free software), HTTPS only protects the connection with the third party. A package signature scheme works better: it can protect the whole chain from the vendor. Application distribution requires end-to-end protection and HTTPS doesn't provide that . HTTPS uses more bandwidth . The overhead per download is minimal if you don't take caching into account. This is the spherical cow of “HTTPS doesn't cost more”: if you use SSL, you can't cache data except at the SSL endpoints. Application downloads are cachable in the extreme: they're large files that many people download. HTTPS is overkill . The confidentiality of an application download is rarely an issue; all we need is authenticity. Sadly, HTTPS doesn't provide authenticity without also providing confidentiality. Authenticity is compatible with caching, but confidentiality isn't. HTTPS requires more resources on the server. Google mail got it down to a 1% overhead and a 2% bandwidth overhead, but this is for a very different use case. The Gmail frontend servers do more than mindlessly serve files; a file server doesn't need a powerful CPU in the first place (it's very strongly IO-bound), so the overhead is likely to be significantly larger. The same goes for memory overhead: a file server needs very little memory per session in the first place, almost all of its memory is a disk cache. Getting the resource usage down requires a serious amount of work . HTTPS wouldn't help many people . The security-conscious will check the hash provided by the vendor ( that should be over HTTPS). The non-security-conscious will blithely click through the “this connection is insecure” message (there are so many badly configured servers out there that many users are trained to ignore HTTPS errors). And that's not to mention dodgy CAs who grant certificates that they shouldn't.
If you want to make sure that you're getting the genuine application, check its signature, or check its hash against a reference value that you obtain with a signature (for example over HTTPS). Good vendors make this automatic. For example, Ubuntu provides GPG signatures of its installation media . It also provides the hashes over HTTPS (sadly not linked from anywhere near the download page as far as I can see). After that, the software installation tool automatically checks that packages come with a valid signature. See How to use https with apt-get? Note: if you're getting the application directly from the vendor (as opposed to via some package repository or application marketplace), then HTTPS does provide protection. So if you're a vendor providing your application directly for download on your website, do protect it with HTTPS! | {
"source": [
"https://security.stackexchange.com/questions/18853",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6813/"
]
} |
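To make the hash check mentioned in the answer above concrete, a minimal sketch follows; the file name and reference digest are placeholders for the installer you downloaded and the value the vendor publishes (ideally obtained over HTTPS or verified via a GPG signature).
import hashlib
def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
reference = "0123456789abcdef..."               # placeholder: the vendor-published SHA-256
if sha256_of("installer.iso") == reference:     # placeholder file name
    print("Hash matches the published value")
else:
    print("MISMATCH - do not run this file")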
18,878 | I have a PDF with important information that may contain malware. What would be the best way to view it? | Document-based exploits are directed not at the document itself, but rather at some vulnerability in the viewer. If you view the document in a program that isn't vulnerable (or in a configuration that inhibits the vulnerability), then you won't be exploited. The real issue is knowing whether or not your viewer is vulnerable, which usually means knowing specifically what the exploit is. But there are alternate PDF viewers such as foxit or even Google chrome's built-in viewer that do not necessarily have the same vulnerabilities as Adobe's official viewer. This is not necessarily true for all vulnerabilities, so it's important to understand what you're getting in to ahead of time. EDIT If you find yourself frequently dealing with potentially malicious materials, it would be very wise to set up a hardened virtual environment. I'd recommend booting into a Linux system and running your target OS (usually Windows) in Virtualbox or a similar environment. Save a snapshot of the virtual OS, and then revert to that snapshot after you're done interacting with the malicious content. Also, it's not a bad idea to run the host Linux environment from a read-only installation (i.e. Live-CD). | {
"source": [
"https://security.stackexchange.com/questions/18878",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
18,919 | Note this question is related, except this one is about free SSL certs. There are providers who are offering totally free entry-level SSL certs (like StartSSL). I was wondering if they are technically the same thing as the paid ones (at least with the entry-level SSL certs like RapidSSL and PositiveSSL)? I do understand that extended/organization SSL is a different category, but if you only need entry-level SSL certs, are the free ones technically the same as the paid entry-level variants? Moreover, if they are technically the same, why would you want to pay for something that's available free? | At the byte level, X.509 is X.509 and there is no reason why the free SSL certificates would be any better or worse than the non-free -- the price is not written in the certificate. Any certificate provider can fumble the certificate generation, regardless of whether he gets paid for it or not. The hard part of a certificate is outside of it: it is in the associated procedures , i.e. everything that is in place to manage the certificates: how the key holder is authenticated by the CA, how revocation can be triggered and corresponding information propagated, what kind of legal guarantee is offered by the CA, its insurance levels, its continuity plans... For the certificate buyer, the big value in a particular CA is where the CA succeeded in placing its root key (browsers, operating systems...). The vendors (Microsoft, Mozilla...) tend to require quite a lot of administrativia and legal stuff from the CA before accepting to include the CA root key in their products, and such things are not free. Therefore, a CA which could get its root key distributed but emits certificates for free has a suspicious business plan. This is why the free-cert dealers also offer paid certificates with some extra characteristics (certs which last longer, certs with wildcard names, extra authentication procedures...): at some point, the CA operators must have an incoming cash flow. But, ultimately, that's the CA problem, not yours. If they are willing to give away certificates for free and Microsoft is OK with including their root key as a "trusted by default key" then there is no problem for you in using such certificates. Edit: and now there is Let's Encrypt , which is a free CA that got accepted by major browsers. Their business plan is not suspicious -- in fact, they don't have a business plan at all. They operate as a non-profit entity and they live from donations. They found a nice niche: they got buy-in from major browser vendors who went on a crusade to kill non-HTTPS Web, and needed a free certificate issuer to convince admins of small Web sites to switch; and now, no browser vendor may leave because it would make them look complacent with regards to security. | {
"source": [
"https://security.stackexchange.com/questions/18919",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9317/"
]
} |
18,994 | I'm trying to figure out the best practices for accessing my devbox from a public terminal. Most sources I've found recommend protecting the box with some kind of two factor authentication, such as adding a "command=" prefix to the authorized_key that forces entering some sort of one-time-password. My question is, why is this necessary? Say I've got a pass-phrase encrypted ssh key sitting on the thumb drive. I insert the drive, enter my passphrase and connect: in order to get into my system you've got to have the key file and the pass-phrase. Say there's a keylogger on the public box. Now an intruder has my passphrase, but my key was never copied to the local system. Now, of course once my usb drive is connected it's possible for the key to be copied off, but is this really likely? How many keyloggers also slurp SSH keys off usb drives? How many malicious-types are watching their honeypots 24/7 waiting for activity that looks like an ssh connection? To me it seems like a pass-phrase encrypted key is two factor. Can someone explain how it isn't? | A passphrase-encrypted key provides two-factor authentication, but only if used correctly. It is easy for the user to misuse the key, providing only a single factor, and the server cannot detect incorrect usage. Hence a passphrase-encrypted key cannot be considered two-factor without additional assumptions. From the point of view of the system as a whole, the passphrase-encrypted key provides two factors, but from the point of view of the server, there is only one factor, which is the private key. The password is what-you-know. However, the password is not visible to the server. The server does not know if you used a weak password or no password at all. In any case, typing a password on a machine which may be running a keylogger is not valid use of a password for authentication. The key file is what-you-have, but only if you do not copy it willy-nilly. Strictly speaking, it's the USB stick where the key file is stored that is a something-you-have authentication factor. The key file itself stops being an authentication factor once you allow for it to be copied off the stick. The scenario that you describe, where you copy the key onto a machine that you do not control, is not valid usage. It transforms what you have into what the attacker also has. If the attacker can install a keylogger on that machine, he can also install a program that makes a copy of the content of every removable media that's inserted into it. What you have must be tied to an actual physical object that is not accessible to the attacker. A key stored on your own laptop or smartphone is fine. A smartcard inserted into a smartcard slot is fine (for normal smartcard usage, where the secrets do not leave the card). A USB stick inserted into a public machine does not provide an authentication factor. And yes, there is off-the-shelf malware that grabs the content of removable media. Depending on where you plug in, it may be more or less common than keyloggers (though I'd expect the two to often go together). The attacker who installs a removable disk imager may be after authentication data, or possibly after other confidential documents. There is a resale market for corporate secrets (confidential documents, contact lists, etc.) which fosters this kind of malware, and grabbing authentication data is an easy side benefit.
With a user who may insert his USB stick into a public machine and type his password there, the passphrase-encrypted key provides zero authentication factor. | {
"source": [
"https://security.stackexchange.com/questions/18994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12369/"
]
} |
19,018 | I have an internet connection with a static IP address. Almost all staff in my office know this IP address. Should I take any extra care to protect myself from hackers? | It depends. Think of your IP address as the same kinda thing as a real address. If a criminal knows the address of a bank, what can they do? It completely depends on what security is in place. If you've got a firewall running (e.g. Windows Firewall) or are behind a NAT router, you're probably safe. Both of these will prevent arbitrary incoming traffic from hitting your computer. This stops most remote exploits. My suggestions: Enable Windows firewall, or whatever firewall is available on your OS of choice. Keep up to date with patches for your OS. These are critical! Keep up to date with patches for your browser and any plugins (e.g. Flash) Keep up to date with patches for your applications (e.g. Office, Adobe PDF, etc.) If you're running any internet-facing services (e.g. httpd) on your machine, keep those up to date and configure their security appropriately. Install a basic AV package if you're really worried. Microsoft Security Essentials (MSE) is a great choice for Windows, because it's free, unintrusive and not much of a performance hog. | {
"source": [
"https://security.stackexchange.com/questions/19018",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10834/"
]
} |
19,023 | I want to be able to tell how many keys per second, using RSA 1024-bit keys, can be checked on a standard Pentium 4 system. How can I use this to determine decryption performance, and possibly remaining time? | It depends. Think of your IP address as the same kinda thing as a real address. If a criminal knows the address of a bank, what can they do? It completely depends on what security is in place. If you've got a firewall running (e.g. Windows Firewall) or are behind a NAT router, you're probably safe. Both of these will prevent arbitrary incoming traffic from hitting your computer. This stops most remote exploits. My suggestions: Enable Windows firewall, or whatever firewall is available on your OS of choice. Keep up to date with patches for your OS. These are critical! Keep up to date with patches for your browser and any plugins (e.g. Flash) Keep up to date with patches for your applications (e.g. Office, Adobe PDF, etc.) If you're running any internet-facing services (e.g. httpd) on your machine, keep those up to date and configure their security appropriately. Install a basic AV package if you're really worried. Microsoft Security Essentials (MSE) is a great choice for Windows, because it's free, unintrusive and not much of a performance hog. | {
"source": [
"https://security.stackexchange.com/questions/19023",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12377/"
]
} |
19,096 | I want to know whether my browser is using SSL or TLS connection if I see HTTPS. I want to know for IE, Firefox, Chrome and Safari. I want to know the protocol version. | There are several protocol versions : SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1 and TLS 1.2. Internally, TLS 1.0/1.1/1.2 are SSL 3.1/3.2/3.3 respectively (the protocol name was changed when SSL became a standard ). I assume that you want to know the exact protocol version that your browser is using. Internet Explorer According to what is described on this blog post , Internet Explorer can display the protocol version information. Just hit File->Properties or Right-click -> Properties , and a window would open, under Connection , you'd see something like: TLS 1.2, RC4 with 128 bit encryption (High); RSA with 2048 bit
exchange Firefox As of today, Firefox supports TLS 1.0, TLS 1.1 and TLS 1.2. You can see the negotiated protocol version if you click the padlock icon (on the left of the URL), then More Information and then under the Technical Details . Chrome Chrome can display the version.
On earlier versions of Chrome, click on the padlock icon ; a popup appears, which contains some details, including the protocol version. example: (verified on version 21.0.1180.82) The connection uses TLS 1.0 On later versions of Chrome, this information is in the security tab of the developer tools . (Credit to nickd ) Opera Opera shows the protocol version in a way similar to Chrome: click on the padlock icon, then click on the "Details" button. e.g. (verified on version 12.01): TLS v1.0 256 bit AES (1024 bit DHE_RSA/SHA) Others For browsers which do not show the information, you can always obtain it by running a network analyzer like Wireshark or Network Monitor : they will happily parse the public headers of the SSL/TLS packets, and show you the version (indeed, all of the data transfers in SSL/TLS are done in individual "records" and the 5-byte header of each record begins with the protocol version over two bytes). And, of course, the actual protocol version is a choice of the server, based on what the server is configured to accept and the maximum version announced by the client. If the server is configured to do TLS 1.0 only then any connection which actually happens will use TLS 1.0, necessarily. ( Edit: I have incorporated some information from the comments; done a few tests myself. Feel free to enhance this answer as needed.) | {
"source": [
"https://security.stackexchange.com/questions/19096",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12413/"
]
} |
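For completeness, the negotiated version can also be read programmatically. The short Python check below (Python 3.5+ for SSLSocket.version()) reports what the Python client negotiates with the server, which is not necessarily the version your browser picked; for the browser itself, use the methods in the answer above or a network analyzer.
import socket, ssl
host = "www.example.com"                 # substitute the site you want to test
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())             # e.g. "TLSv1.2" or "TLSv1.3"
        print(tls.cipher())              # negotiated cipher suite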
19,155 | Do security images such as those presented upon logging into banks provide any tangible security benefits, or are they mostly theater? Per my understanding, if somebody is phishing your users, it's also trivial for them to proxy requests from your users to your authentic website and retrieve the security image with whatever credentials the user provides before receiving the image (e.g., the username). If this is the case, security images seem to provide no additional security, and may actually be harmful if they help convince users that a malicious website is legitimate. Am I missing something? | Great question! As it happens, I can present experimental data on this question -- and the data is fascinating. (I noticed that some of the answers contain speculation from first principles about how much security these security images offer. However, the data turns out to have some surprises for all of us!) Experimental methodology. "Security images" have been evaluated in a user study, conducted with ordinary users who were asked to perform online banking in the lab. Unbeknownst to them, some of them were 'attacked' in a controlled way, to see whether they would behave securely or not and whether the security images helped or not. The researchers evaluated two attacks: MITM attack: The researchers simulated a man-in-the-middle attack that strips off SSL. The only visible indication of the attack is that lack of a HTTPS indicator (no HTTPS in the address bar, no lock icon, etc.). Security image attack: The researchers simulated a phishing attack. In this attack, it looks like the users are interacting with the real bank site, except that the security image is missing. In its place, the attack places the following text: SiteKey Maintanance Notice:
Bank of America is currently upgrading our award winning SiteKey feature. Please contact customer service if your SiteKey does not reappear within the next 24 hours. I find this a brilliant attack. Rather than trying to figure out what security image to show to the user, don't show any security image at all, and just try to persuade the user that it's OK that there is no security image. Don't try to defeat the security system where it is strongest; just bypass the entire thing by undermining its foundation. Anyway, the researchers then proceeded to observe how users behaved when they were attacked in these ways (without their knowledge). Experimental results. The results? The attacks were incredibly successful. Not a single user avoided the MITM attack; every single one who was exposed to the MITM attack fell for it. (No one noticed that they were under attack.) 97% of those exposed to the security image attack fell for it. Only 3% (2 out of 60 participants) behaved securely and refused to log in when hit with this attack. Conclusions. Let me attempt to draw some lessons from this experiment. First, security images are ineffective . They are readily defeated by very simple attack techniques. Second, when assessing what security mechanisms will be effective, our intuitions are not reliable . Even expert security professionals can draw the wrong conclusions. For instance, look at this thread, where some folks have taken the view that security images add some security because they force the attacker to work harder and implement a MITM attack. From this experiment, we can see that this argument does not hold water. Indeed, a very simple attack (clone the website and replace the security image with a notice saying the security image feature is currently down for maintenance) is extremely successful in practice. So, when the security of a system depends upon how users will behave, it is important to conduct rigorous experiments to evaluate how ordinary users will actually behave in real life. Our intuitions and "from-first-principles" analyses are not a substitute for data. Third, ordinary users don't behave in the way security folks sometimes wish they would . Sometimes we talk about a protocol as "the user will do such-and-such, then the server will do thus-and-such, and if the user detects any deviation, the user will know he is under attack". But that's not how users think. Users don't have the suspicious mindset that security folks have, and security is not at the forefront of their mind. If something isn't quite right, a security expert might suspect she is under attack -- but that's usually not the first reaction of an ordinary user. Ordinary users are so used to the fact that web sites are flaky that their first reaction, upon seeing something odd or unusual, is often to shrug it off and assume that the Internet (or the web site) isn't quite working right at the moment. So, if your security mechanism relies upon users to become suspicious if certain cues are absent, it's probably on shaky grounds. Fourth, it's not realistic to expect users to notice the absence of a security indicator , like a SSL lock icon. I'm sure we've all played "Simon Says" as a kid. The fun of the game is entirely that -- even when you know to look out for it -- it is easy to overlook the absence of the "Simon Says" cue. Now think about a SSL icon. 
Looking for the SSL icon is not the user's primary task, when performing online banking; instead, users typically just want to pay their bills and get the chore done so they can move on to something more useful. How much easier it is to fail to notice its absence, in those circumstances! By the way, you might wonder how the banking industry has responded to these findings. After all, they emphasize their security images feature (under various marketing names) to users; so how have they reacted to the discovery that the security image feature is all but useless in practice? Answer: they haven't. They still use security images. And if you ask them about their response, a typical response has been something of the form "well, our users really like and appreciate the security images". This tells you something: it tells you that the security images are largely a form of security theater. They exist to make users feel good about the process, more than to actually protect against serious attacks. References. For more details of the experiment I summarized above, read the following research paper: The Emperor's New Security Indicators . Stuart E. Schechter, Rachna Dhamija, Andy Ozment, and Ian Fischer. IEEE Security & Privacy 2007. (edit: updated link ) | {
"source": [
"https://security.stackexchange.com/questions/19155",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12407/"
]
} |
19,473 | On DigiCert's page, they advertise a 2048 bit SSL with a 256 bit encryption: http://www.digicert.com/256-bit-ssl-certificates.htm What exactly is the difference here and why are two encryption bits being referenced? Here's a screenshot of the ad: On Geotrust's Premium SSL ad, they advertise it as: Security: domain control validation, strong 256-bit encryption, 2048-bit root So what's the difference between 256 bit encryption and 2048 bit root? Hope that clarifies the question. | The 2048-bit is about the RSA key pair: RSA keys are mathematical objects which include a big integer, and a "2048-bit key" is a key such that the big integer is larger than 2^2047 but smaller than 2^2048 . The 256-bit is about SSL. In SSL, the server key is used only to transmit a random 256-bit key ( that one does not have mathematical structure, it is just a bunch of bits); roughly speaking, the client generates a random 256-bit key, encrypts it with the server's RSA public key (the one which is in the server's certificate and is a "2048-bit key"), and sends the result to the server. The server uses its private RSA key to reverse the operation, and thus obtain the 256-bit key chosen by the client. Afterwards, client and server use the 256-bit key to do symmetric encryption and integrity checks, and RSA is not used any further for that connection. See this answer for some more details. This setup is often called "hybrid encryption". This is done because RSA is not appropriate for bulk encryption, but symmetric encryption cannot do the initial public/private business which is needed to get things started. (SSL can do the key exchange with other algorithms than RSA so I have simplified the description a bit in the text above, but that's the gist of the idea.) | {
"source": [
"https://security.stackexchange.com/questions/19473",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12572/"
]
} |
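The hybrid-encryption idea in the answer above can be sketched with the third-party Python cryptography package; this illustrates the concept only and is not the real TLS handshake (TLS adds key derivation, handshake authentication and much more).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
# "Server" side: a 2048-bit RSA key pair (the kind of key behind the certificate).
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
# "Client" side: pick a random 256-bit key and send it wrapped under RSA.
session_key = AESGCM.generate_key(bit_length=256)
wrapped = server_key.public_key().encrypt(session_key, oaep)
# "Server" side: unwrap it; both ends now share the 256-bit key for bulk encryption.
unwrapped = server_key.decrypt(wrapped, oaep)
nonce = os.urandom(12)
ciphertext = AESGCM(unwrapped).encrypt(nonce, b"application data", None)
print(AESGCM(session_key).decrypt(nonce, ciphertext, None))   # b'application data'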
19,616 | I'm new to the realm of HTTP requests and security and all that good stuff, but from what I've read, if you want your requests and responses encrypted, use HTTPS and SSL, and you'll be good. Someone in a previous question posted a link to this app http://www.charlesproxy.com/ which shows that it is actually possible to sniff HTTPS requests, and see the request and response in PLAIN text. I tried this with the facebook.com login, and I was indeed able to see my username AND password in plain text. It was too easy. What's going on? I thought that was the whole point of HTTPS - to encrypt requests and responses? | This is explained in their page on SSL proxying , perhaps not with enough explanations. A proxy is, by definition, a man-in-the-middle : the client connects to the proxy, and the proxy connects to the server. SSL does two things: It ensures the confidentiality and integrity of the established connection. It performs some verification of who you are connecting to. It's the second part that's important, and seemingly broken, here: you're sitting at your browser, and surprised that your browser is connecting to the proxy whereas you expected it to connect to Facebook. Technically, the proxy is not sniffing the HTTPS traffic, it's relaying it. Your browser knows that it's connected to Facebook because the site has a certificate that says “I am really www.facebook.com ”. Public-key cryptography , by means that I will not get into here, ensures that only the holder of the private key can initiate a valid connection with this certificate. That's only half the battle: you only have the server's claim that it really is www.facebook.com and not randomhijacker.com . What your browser does is additionally check that the certificate has been validated by a certificate authority . Your browser or operating system comes with a list of certificate authorities that it trusts. Again, public-key cryptography ensures that only the CA can emit certificates that your browser will accept. When you connect to the proxy, your browser receives a certificate that says “I am really www.facebook.com ”. But this certificate is not signed by a CA that your browser trusts by default. So: either you received a warning about an insecure HTTPS connection, which you clicked through to see the content at https://www.facebook.com/ ; or you added the CA that signed the proxy's certificate (“Charles's CA certificate”) to the list of CAs that your browser trusts. Either way, you told your browser to trust the proxy. So it does. An SSL connection is not secure if you start trusting random strangers. Recommended reading for further information: when is it safe to click through an SSL warning message? Does an established ssl connection mean a line is really secure SSL with GET and POST How do I check that I have a direct SSL connection to a website? Does https prevent man in the middle attacks by proxy server? How is it possible that people observing an HTTPS connection being established wouldn't know how to decrypt it? How can end-users detect malicious attempts at SSL spoofing when the network already has an authorized SSL proxy? | {
"source": [
"https://security.stackexchange.com/questions/19616",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12633/"
]
} |
19,620 | I'm currently in the process of building a JavaScript SPA and have been researching how to secure it. There is currently a RESTful API that is being completely interacted with through AJAX. We also have mobile clients that interact with this API, and currently it only supports HTTP BASIC Authentication over SSL. The JavaScript app will also communicate exclusively over SSL, but BASIC Auth won't cut it as that would involve storing the password (or a derivative of it) on the client. Lastly, the SPA app will be pure JavaScript and HTML, served on the same server as the RESTful API, but without any server-side framework. Goals : No server-side framework for the javascript client (it's just another client). Maintain statelessness of the RESTful API, for the typical reasons (scalability, fault tolerance, simplified deployment, etc) Any state should be maintained by the client. For the purposes of this question, this means the login credentials. Login state maintained by the client must be secure and be resistant to session hijacking and similar attacks. What I've come up with is based on my research of OAuth and similar schemes (Amazon, etc). The user will login using an HTTP POST over SSL. The server will compute a hash as follows: HMAC(key, userId + ":" + ipAddress + ":" + userAgent + ":" + todaysDateInMilliseconds) This token will be returned to the client and supplied with every subsequent request in place of the userName and password. It will most likely be stored in localStorage or a cookie. Is this secure? My motivation for choosing the userId,ipAddress,todaysDateInMilliseconds is to create a token that is valid for only today, but does not require database lookup for every request AND is safe to be stored on the client. I cannot trust that the key will not be compromised, thus the inclusion of IP Address in an attempt to prevent session hijacking. Let me include the following link from a related post on StackExchange because I think it addresses a lot of the issues I'm trying to solve: REST and Stateless Session Ids After the initial feedback here I've decided to use only the first two octets of the IP address to handle clients behind proxies and mobile clients better. It's still not perfect, but it's a tradeoff for some additional security. | The service offered by the token is that the server will somehow recognize the token as one of its own. How can the server validate an HMAC-based token ? By recomputing it, using its secret HMAC key and the data over which HMAC operates . If you want your token to be computed over the userID, password, IP and date, then the server must know all that information. However, you do not want your server to store the password, and the client will not send it back with each request. How can your system work, then ? The basic idea, however, is sound: Users "log in" by any way which you see fit. Upon such login, the server sends a cookie value, to be sent back with each subsequent request (that's what cookies do). The cookie contains the user ID, the date it was issued, and a value m = HMAC(K, userID || date || IP) . When the server receives a request, it validates the cookie: the userID and date are from the cookie itself, the source IP is obtained from the Web server layer, and the server can recompute the value m to check that it matches the one stored in the cookie. You could replace the whole cookie with a random session ID, if the server has some (temporary) storage space.
Indeed, the server could remember the mapping from a random session ID to the user-specific information (such as his name and IP address); old session ID can be automatically expired, so the storage space does not grow indefinitely. The cookie described above is just a way to offload storage on the client itself. Note: using the IP address may imply some practical issues. Some clients are behind proxies, even load-balanced proxies, so not only is the client IP address possibly "hidden" (from the server, you see the proxy's address, not the client's address) but the IP address you obtain server-side could move around erratically (if two successive requests from the client have gone through distinct proxies in a proxy farm). | {
"source": [
"https://security.stackexchange.com/questions/19620",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12634/"
]
} |
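A compact sketch of the cookie construction described in the answer above; the field layout, key handling and expiry policy are illustrative choices, not a drop-in implementation (in particular, keep the real key out of source code).
import hashlib, hmac, time
SECRET_KEY = b"replace-with-a-long-random-server-side-secret"   # placeholder
def issue_cookie(user_id, ip):
    issued = str(int(time.time()))
    mac = hmac.new(SECRET_KEY, f"{user_id}:{issued}:{ip}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{issued}:{ip}:{mac}"
def verify_cookie(cookie, request_ip, max_age=86400):
    user_id, issued, ip, mac = cookie.rsplit(":", 3)
    expected = hmac.new(SECRET_KEY, f"{user_id}:{issued}:{ip}".encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(mac, expected)
            and ip == request_ip
            and time.time() - int(issued) < max_age)
cookie = issue_cookie("alice", "203.0.113.7")
print(verify_cookie(cookie, "203.0.113.7"))                          # True
print(verify_cookie(cookie.replace("alice", "mallory"), "203.0.113.7"))  # False: MAC no longer matches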
19,676 | I have developed a backend REST API for a mobile app and I am now looking to implement token-based authentication for it to avoid having to prompt the user to login on every run of the app. What I had in mind was on the initial request the user sends their credentials using Basic authentication over SSL. Once the server authenticates the credentials it creates a secure token and sends it back to the user so they can use it in subsequent requests until the token either expires or is revoked. I am looking for some advice as to how I can generate a token which won't be susceptible to things like MoM/Replay attacks as well as ensuring the data stored within the token cannot be extracted out. I am going to use the following approach to generate the token which I think would prevent any data from being extracted from it. However, I still need to make sure it's not vulnerable to other attacks. The API will only be accessible over SSL but I am not sure if I can rely solely on this from a security perspective. | The "authentication token" works by how the server remembers it. A generic token is a random string; the server keeps in its database a mapping from emitted tokens to authenticated user names. Old tokens can be removed automatically in order to prevent the server's database from growing indefinitely. Such a token is good enough for security as long as an attacker cannot create a valid token with non-negligible probability, a "valid token" being "a token which is in the database of emitted tokens". It is sufficient that token values have length at least 16 bytes and are produced with a cryptographically strong PRNG (e.g. /dev/urandom , CryptGenRandom() , java.security.SecureRandom ... depending on your platform). It is possible to offload the storage requirement on the clients themselves. In the paragraph above, what "memory" should the server have of a token ? Namely the user name, and the date of production of the token. So, create your tokens like this: Server has a secret key K (a sequence of, say, 128 bits, produced by a cryptographically secure PRNG). A token contains the user name ( U ), the time of issuance ( T ), and a keyed integrity check computed over U and T (together), keyed with K (by default, use HMAC with SHA-256 or SHA-1). Thanks to his knowledge of K , the server can verify that a given token, sent back by the user, is one of its own or not; but the attacker cannot forge such tokens. The answer you link to looks somewhat like that, except that it talks about encryption instead of MAC, and that's: confused; confusing; potentially insecure; because encryption is not MAC. | {
"source": [
"https://security.stackexchange.com/questions/19676",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10441/"
]
} |
19,681 | I checked the data transmission of an HTTPS website (gmail.com) using Firebug. But I can't see any encryption to my submitted data (username and password). Where does SSL encryption take place? | The SSL protocol is implemented as a transparent wrapper around the HTTP protocol. In terms of the OSI model , it's a bit of a grey area. It is usually implemented in the application layer, but strictly speaking is in the session layer. Think of it like this: Physical layer (network cable / wifi) Data link layer (ethernet) Network layer (IPv4) Transport layer (TCP) Session layer (SSL) Presentation layer (none in this case) Application layer (HTTP) Notice that SSL sits between HTTP and TCP. If you want to see it in action, grab Wireshark and browse a site via HTTP, then another via HTTPS. You'll see that you can read the requests and responses on the HTTP version as plain text, but not the HTTPS ones. You'll also be able to see the layers that the packet is split into, from the data link layer upwards. Update : It has been pointed out (see comments) that the OSI model is an over-generalisation and does not fit very well here. This is true. However, the use of this model is to demonstrate that SSL sits "somewhere" in between TCP and HTTP. It is not strictly accurate, and is a vague abstraction of reality. | {
"source": [
"https://security.stackexchange.com/questions/19681",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10834/"
]
} |
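One way to see the layering described in the answer above directly from code: with Python's standard library you first open an ordinary TCP socket, then wrap it in TLS, and only then speak HTTP through it. The host name example.com is a placeholder and the snippet is purely illustrative.

import socket
import ssl

host = "example.com"
context = ssl.create_default_context()                    # certificate + hostname checks enabled

with socket.create_connection((host, 443)) as tcp_sock:   # transport layer: plain TCP
    with context.wrap_socket(tcp_sock, server_hostname=host) as tls_sock:   # TLS on top of TCP
        # Only now is the application protocol (HTTP) spoken, over the TLS channel.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() +
                         b"\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))                         # arrives already decrypted

A packet capture of this connection (for example in Wireshark) shows only TLS records after the wrap, while tools that hook in above this layer, like Firebug in the question, still see the plaintext form data before it is encrypted.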
19,687 | I have been under attack last week and I was able to trace down the attacker and get his IP address. The attacker was located in Germany but I live out of Europe? From your experience what is the best way to report an international cyber crime? is it worth reporting at all? | There's always an "Abuse" email address on the whois of a netblock for reporting misuse of an IP address. You can use http://whois.domaintools.com/ to do a whois lookup to get the address. Is it worth your time? That's your call. Will it lead to anything? Nothing you'll ever see. But many of the sites I fix come from people who were first alerted of the problems on their server by someone sending the hosting company an "abuse" notification email. So it definitely can make a difference. Note that the IP address you track down is almost never the attacker himself but rather a hacked server or computer that he's using as a relay. So keep that in mind. You're not alerting the authorities on the whereabouts of a miscreant; you're notifying someone that his computer has been compromised. EDIT TO ADD You give the information you have to the appropriate authority, and then you're done. That it. As a rule, hosting companies will not share personal information of their clients unless you are local law enforcement with the appropriate warrant or court order. It's their liability if they do otherwise. Don't expect a follow-up report from them, don't expect names or arrests or anything more than an acknowledgement that they heard you -- sometimes not even that. These companies often deal with dozens of these reports a week or more. Their abuse team will deal with it, and they appreciate your assistance as they want to keep their network clean, and your report will probably trigger several days worth of activity. But they have a clear-cut policy that they follow to the letter for liability reasons, and it intentionally doesn't include reporting back the original reporter. Nothing against you specifically. Also, remember that though you found the hacker, It's almost certainly not his account on the server. | {
"source": [
"https://security.stackexchange.com/questions/19687",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10233/"
]
} |
19,705 | It is well known that SHA1 is recommended over MD5 for hashing, since MD5 is practically broken: a lot of collisions have been found.
With the birthday attack, it is possible to get a collision in MD5 with 2^64 complexity and with 2^80 complexity in SHA1. It is known that there are algorithms that are able to crack both of these in far less time than a birthday attack takes. My question is: is MD5 considered insecure only because it is easy to produce collisions? Because looking at both, producing collisions in SHA1 is not that difficult either. So what makes SHA1 better? Update 02/2017 - https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html | Producing SHA-1 collisions is not that easy. It seems reasonable that the attack which has been described on SHA-1 really works with an average cost of 2^61, much faster than the generic birthday attack (which is in 2^80), but still quite difficult (doable, but expensive). That being said, we do not really know what makes hash functions resistant (see for instance this answer for a detailed discussion). With a lot of hand-waving, I could claim that SHA-1 is more robust than MD5 because it has more rounds and because the derivation of the 80 message words in SHA-1 is much more "mixing" than that of MD5 (in particular the 1-bit rotation, which, by the way, is the only difference between SHA-0 and SHA-1, and SHA-0 collisions have been produced). For more of the same, look at SHA-256, which is much more "massive" (many more operations than SHA-1, yet with a similar structure), and currently unbroken. It is as if there were a minimal number of operations for a hash function to be secure, for a given structure (but there I am moving my hands at stupendous speed, so don't believe that I said anything really scientific or profound). | {
"source": [
"https://security.stackexchange.com/questions/19705",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11218/"
]
} |
19,714 | Fred Cohen determined theoretically that virus detection is an undecidable problem. Can anyone provide an intuitive explanation for that? Background Fred Cohen experimented with computer viruses and confirmed Neumann's postulate and investigated other properties of malware such as detectability, self-obfuscation using rudimentary encryption, and others. His 1988 doctoral dissertation was on the subject of computer viruses | Sure. In Cohen's famous result, he says that a perfect virus detector should emit an alarm if and only if the input program can ever act like a virus (i.e., infect your machine and do damage). Consider the following program: f();
infect_and_do_damage(); where f() is some harmless function, and infect_and_do_damage() is a viral payload that infects your machine and does all sorts of damage (wipes your hard disk, steals all your money, whatever). Let's consider what a perfect virus detector should say about this program: If f() can return, this is a virus and the virus detector should emit an alarm. On the other hand, if f() always enters an infinite loop and never returns, then the second line is dead code, infect_and_do_damage() will never be invoked, this program will never act like a virus, and the virus detector should not set off any alarms. So, the problem of determining whether this code is a virus is equivalent to the problem of determining whether the function f() can ever halt. That's the famous halting problem, which is known to be undecidable. In other words, detecting whether a program is a virus is at least as hard as detecting whether a program will halt. Thus, both problems are undecidable. Note that this is a purely theoretical result. Undecidability is a purely theoretical construct. The fact that a problem is undecidable is not the end of the conversation; it is merely the beginning of the conversation. In practice, there are various ways to attempt to deal with undecidability: e.g., try to write a solution that is probabilistically correct, even if it is not always correct on all programs; try to find a solution that works for the set of programs you're likely to find in practice, even if it doesn't work on all programs; allow the solution to occasionally answer "I don't know" or to err on the side of declaring a program a virus (or err on the side of false negatives); and so on. So you should not treat this as a definitive statement that virus detection is impossible -- just because the problem is undecidable doesn't mean it is necessarily impossible to find a good-enough solution in practice. But it does identify some fundamental barriers to building a perfect virus detector. | {
"source": [
"https://security.stackexchange.com/questions/19714",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10193/"
]
} |
19,763 | This Wikipedia article describing AES says: Related-key attacks can break AES-192 and AES-256 with complexities 2^176 and 2^99.5, respectively. Does this mean that AES-256 is actually a weaker form of encryption than AES-192?
I’m currently writing a small password-manager program; which one should I use? It would also be cool if someone could explain the weakness of AES-256 compared to AES-192. | Related-key attacks are interesting mathematical properties of algorithms, but have no practical impact on the security of encryption systems, as long as they are used for what they were designed for, i.e. encryption (and not, for instance, as building blocks for hash functions). Bigger is not necessarily better. There is no practical need for using a 256-bit key over a 192-bit key or a 128-bit key. However, AES with a 128-bit key is slightly faster (this is not significant in most applications) so there can be an objective reason not to use bigger keys. Also, AES-128 is more widely supported than the other key sizes (for instance, outside of the USA, AES-128 is available by default with Java, while bigger key sizes must be explicitly activated). None of AES-128, AES-192 or AES-256 is breakable with today's (or tomorrow's) technology (if they are applied properly, that is). Try to work out what 2^99.5 is: it is... somewhat large. | {
"source": [
"https://security.stackexchange.com/questions/19763",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12683/"
]
} |
19,860 | We have a merchant website that uses Autorize.net's CIM and AIM. Our users may have multiple credit cards so we'd want to give them opportunity to distinguish between credit cards that they use on site. Currently we think about storing cardholder name, 4 last digits of CC number and its expiration date. What are the minimum requirements that should be held to store this sensitive data? Edit: PCI DSS says: The primary account number is the defining factor in the applicability of PCI DSS requirements. PCI DSS requirements are applicable if a primary account number (PAN) is stored, processed, or transmitted. If PAN is not stored, processed or transmitted, PCI DSS requirements do not apply. So cardholder name and expiration date can be stored without being compliant. But what about 4 last digits of PAN? | Cardholder name, 4 last digits of CC number and its expiration date are all NOT sensitive data. The cardholder name and expiration date only require protection if you are storing them with the full primary account number, not the truncated 4 digit number. If you are storing, processing, or transmitting cardholder data then you must meet all of the other PCI DSS requirements that kaushal mentions, but for the items you listed, you don't need to do anything special to protect them. See pages 7 and 8 of the PCI DSS for more information on this: https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf | {
"source": [
"https://security.stackexchange.com/questions/19860",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5501/"
]
} |
19,906 | After all these articles circulating online about md5 exploits , I am considering switching to another hash algorithm. As far as I know it's always been the algorithm of choice among numerous DBAs. Is it that much of a benefit to use MD5 instead of (SHA1, SHA256, SHA384, SHA512), or is it pure performance issue? What other hash do you recommend (taking into consideration data-bound applications as the platform)? I'm using salted hashes currently (MD5 salted hashes). Please consider both md5 file hashes and password hashes alike. | MD5 for passwords Using salted md5 for passwords is a bad idea. Not because of MD5's cryptographic weaknesses, but because it's fast. This means that an attacker can try billions of candidate passwords per second on a single GPU. What you should use are deliberately slow hash constructions, such as scrypt, bcrypt and PBKDF2. Simple salted SHA-2 is not good enough because, like most general purpose hashes, it's fast. Check out How to securely hash passwords? for details on what you should use. MD5 for file integrity Using MD5 for file integrity may or may not be a practical problem, depending on your exact usage scenario. The attacks against MD5 are collision attacks, not pre-image attacks. This means an attacker can produce two files with the same hash, if he has control over both of them. But he can't match the hash of an existing file he didn't influence. I don't know if the attacks applies to your application, but personally I'd start migrating even if you think it doesn't. It's far too easy to overlook something. Better safe than sorry. The best solution in this context is SHA-2 (SHA-256) for now. Once SHA-3 gets standardized it will be a good choice too. | {
"source": [
"https://security.stackexchange.com/questions/19906",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12661/"
]
} |
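As a concrete illustration of the "deliberately slow" advice in the answer above, here is a minimal PBKDF2 sketch using only the Python standard library. The iteration count and salt size are illustrative assumptions to be tuned per deployment; bcrypt or scrypt would serve equally well.

import hashlib
import hmac
import os

ITERATIONS = 600_000     # tune so one hash costs tens of milliseconds on your hardware

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)                                            # unique per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest                                              # store both with the user record

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)                    # constant-time compare

For file integrity, by contrast, a fast hash is exactly what you want (for example hashlib.sha256 over the file contents); the slowness requirement applies only to password storage.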
19,911 | With the advent of CRIME, BEAST's successor , what possible protection is available for an individual and/or system owner in order to protect themselves and their users against this new attack on TLS? | This attack is supposed to be presented 10 days from now, but my guess is that they use compression . SSL/TLS optionally supports data compression. In the ClientHello message, the client states the list of compression algorithms that it knows of, and the server responds, in the ServerHello , with the compression algorithm that will be used. Compression algorithms are specified by one-byte identifiers, and TLS 1.2 (RFC 5246) defines only the null compression method (i.e. no compression at all). Other documents specify compression methods, in particular RFC 3749 which defines compression method 1, based on DEFLATE , the LZ77-derivative which is at the core of the GZip format and also modern Zip archives. When compression is used, it is applied on all the transferred data, as a long stream. In particular, when used with HTTPS, compression is applied on all the successive HTTP requests in the stream, header included. DEFLATE works by locating repeated subsequences of bytes. Suppose that the attacker uses some JavaScript code which can send arbitrary requests to a target site (e.g. a bank) and runs on the attacked machine; the browser will send these requests with the user's cookie for that bank -- the cookie value that the attacker is after. Also, let's suppose that the attacker can observe the traffic between the user's machine and the bank (plausibly, the attacker has access to the same LAN of Wi-Fi hotspot than the victim; or he has hijacked a router somewhere on the path, possibly close to the bank server). For this example, we suppose that the cookie in each HTTP request looks like this: Cookie: secret=7xc89f+94/wa The attacker knows the Cookie: secret= part and wishes to obtain the secret value. So he instructs his JavaScript code to issue a request containing in the body the sequence Cookie: secret=0 . The HTTP request will look like this: POST / HTTP/1.1
Host: thebankserver.com
(...)
Cookie: secret=7xc89f+94/wa
(...)
Cookie: secret=0 When DEFLATE sees that, it will recognize the repeated Cookie: secret= sequence and represent the second instance with a very short token (one which states "previous sequence has length 15 and was located n bytes in the past); DEFLATE will have to emit an extra token for the '0'. The request goes to the server. From the outside, the eavesdropping part of the attacker sees an opaque blob (SSL encrypts the data) but he can see the blob length (with byte granularity when the connection uses RC4; with block ciphers there is a bit of padding, but the attacker can adjust the contents of his requests so that he may phase with block boundaries, so, in practice, the attacker can know the length of the compressed request). Now, the attacker tries again, with Cookie: secret=1 in the request body. Then, Cookie: secret=2 , and so on. All these requests will compress to the same size (almost -- there are subtleties with Huffman codes as used in DEFLATE), except the one which contains Cookie: secret=7 , which compresses better (16 bytes of repeated subsequence instead of 15), and thus will be shorter. The attacker sees that. Therefore, in a few dozen requests, the attacker has guessed the first byte of the secret value. He then just has to repeat the process ( Cookie: secret=70 , Cookie: secret=71 , and so on) and obtain, byte by byte, the complete secret. What I describe above is what I thought of when I read the article, which talks about "information leak" from an "optional feature". I cannot know for sure that what will be published as the CRIME attack is really based upon compression. However, I do not see how the attack on compression cannot work. Therefore, regardless of whether CRIME turns out to abuse compression or be something completely different, you should turn off compression support from your client (or your server). Note that I am talking about compression at the SSL level. HTTP also includes optional compression, but this one applies only to the body of the requests and responses, not the header, and thus does not cover the Cookie: header line. HTTP-level compression is fine. (It is a shame to have to remove SSL compression, because it is very useful to lower bandwidth requirements, especially when a site contains many small pictures or is Ajax-heavy with many small requests, all beginning with extremely similar versions of a mammoth HTTP header. It would be better if the security model of JavaScript was fixed to prevent malicious code from sending arbitrary requests to a bank server; I am not sure it is easy, though.) Edit 2012/09/12: The attack above can be optimized a bit by doing a dichotomy. Imagine that the secret value is in Base64, i.e. there are 64 possible values for each unknown character. The attacker can make a request containing 32 copies of Cookie: secret=X (for 32 variants of the X character). If one of them matches the actual cookie, the total compressed length with be shorter than otherwise. Once the attacker knows which half of his alphabet the unknown byte is part of, he can try again with a 16/16 split, and so on. In 6 requests, this homes in the unknown byte value (because 2 6 = 64). If the secret value is in hexadecimal, the 6 requests become 4 requests (2 4 = 16). Dichotomy explains this recent twit of Juliano Rizzo . Edit 2012/09/13: IT IS CONFIRMED. The CRIME attack abuses compression, in a way similar to what is explained above. 
The actual "body" in which the attacker inserts presumed copies of the cookie can actually be the path in a simple request which can be triggered by a most basic <img> tag; no need for fancy exploits of the same-origin-policy. | {
"source": [
"https://security.stackexchange.com/questions/19911",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/493/"
]
} |
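The length leak at the heart of the answer above is easy to reproduce offline with zlib alone. The sketch below is purely illustrative: a made-up cookie value, no TLS and no network, just the observation that the guess sharing the longest prefix with the secret compresses best, which is the signal the attacker reads from ciphertext lengths.

import zlib

SECRET_HEADER = b"Cookie: secret=7xc89f+94/wa"    # what the victim's browser appends

def compressed_len(attacker_bytes: bytes) -> int:
    # Attacker-chosen bytes and the secret header end up in one compressed stream,
    # just as they would inside an SSL-compressed HTTPS request.
    return len(zlib.compress(attacker_bytes + b"\r\n" + SECRET_HEADER))

for guess in "0123456789abcdef":
    print(guess, compressed_len(b"Cookie: secret=" + guess.encode()))

In a typical run the correct guess ("7") compresses to the same size or, usually, one byte less than every wrong guess; real attacks add incompressible padding to push the output across a byte boundary when the difference is hidden, and they read these lengths off the encrypted records rather than calling zlib themselves.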
19,930 | In detail here's the problem: I'm building an Android app, which consumes my REST API on the back-end. I need to build a Registration and Login API to begin with. After searching with Google for a while, I feel like there are only two approaches that I can take. During Registration, I use https and get the user's credentials; save it in my DB against the username(server side). During Login, again I use https , and ask the user's credentials; verify the hashed password on the DB and return him a session ID, which I'm planning to never expire unless he Logs Out. Further any other API calls (GET/POST) that the user makes, will be accompanied with this session ID so that I can verify the user. But in the above approach I'm forced to use https for any API call, else I'm vulnerable to Man in The Middle Attack , i.e. if anyone sniffs over my session ID, he can reconstruct similar GET/POST requests which I wouldn't want. Am I right with the above assumption? The second option is to follow the path of Amazon Web Services , where I use public/private key authentication. When a user registers I use a https API to save his/her credentials in the DB. From then on I use the user's hashed password as the private key. Any further API calls that the user makes will be having a hashed blob of the request URL using the user's private key. On the server side I reconstruct the hash using the saved private key. If the hash is a match I let the user do his task, else reject. In this option I need to use https only for the registration API. The REST can go on on http . But here the disadvantage is, that I'm forced to host my Registration API in a separate Virtual Directory (I'm using IIS and I'm not sure if I can host both http and https APIs in the same Virtual Directory). Hence I'm forced to develop the Registration API in a separate project file. Again Am I right with the above assumption? Edit: I'm using ASP.NET MVC4 to build the web API. The reason I'm reluctant to use https for all of my REST API calls is that I feel it's not lightweight and creates more network payload, which may not be best suited for a mobile app. Further encryption/decryption and extra handshake required may further affect a mobile's battery Life? Or is it not significant? Which of the above two approaches would you suggest? PS: We went with Https everywhere, and it was the best decision. More of that on my blog . | I'd go with SSL/TLS everywhere (since you control both sides, forcing TLS 1.2 should be feasible). It's relatively simple to use, and you get a lot of security features for free. For example if you don't use SSL, you'll need to worry about replay attacks. If you're worried about performance, make sure session resumption is supported by both the server and the client. This makes later handshakes much cheaper. Since the handshake is quite expensive in SSL compared to the encryption of the actual data, this reduces overall load considerably. If you use HTTP 2, you can even send multiple requests over a single connection, that way you avoid the complete TCP and SSL handshake overhead on later requests. | {
"source": [
"https://security.stackexchange.com/questions/19930",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13006/"
]
} |
19,969 | If we want both encryption and compression during transmission then what will be the most preferable order. Encrypt then compress Compress then encrypt | You should compress before encrypting. Encryption turns your data into high-entropy data, usually indistinguishable from a random stream. Compression relies on patterns in order to gain any size reduction. Since encryption destroys such patterns, the compression algorithm would be unable to give you much (if any) reduction in size if you apply it to encrypted data. Compression before encryption also slightly increases your practical resistance against differential cryptanalysis (and certain other attacks) if the attacker can only control the uncompressed plaintext, since the resulting output may be difficult to deduce. EDIT: I'm editing this years later because this advice is actually poor in an interactive case. You should not compress data before encrypting it in most cases. A side-channel attack method known as a "compression oracle" can be used to deduce plaintext data in cases where the attacker can interactively cause strings to be placed into an otherwise unknown plaintext datastream. Attacks on SSL/TLS such as CRIME and BREACH are examples of this. | {
"source": [
"https://security.stackexchange.com/questions/19969",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11801/"
]
} |
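A quick way to convince yourself of the ordering argument in the answer above: compress the same data before and after "encryption". Since ciphertext from a modern cipher is indistinguishable from random bytes, os.urandom output stands in for the encrypted plaintext in this purely illustrative size comparison.

import os
import zlib

plaintext = b"to be or not to be, that is the question. " * 200

compress_first = zlib.compress(plaintext)                        # patterns still present: big reduction
compress_after_encrypt = zlib.compress(os.urandom(len(plaintext)))   # no patterns left to exploit

print(len(plaintext), len(compress_first), len(compress_after_encrypt))
# The random stand-in comes out slightly larger than the input,
# while compressing first shrinks it dramatically.

The edit in the answer still applies: if an attacker can mix chosen data with secrets in the compressed-then-encrypted stream, prefer not to compress at all.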
20,059 | I had never thought about this situation before; I may be completely wrong, but I am going to have to ask for clarification anyway.
When a communication starts with a server, during the client handshake, the client receives a copy of the public key (CA signed). Now, the client has complete access to this public key, which is signed by the CA. Why can't an attacker set up his own server and use this public key, which would essentially make a victim believe that he is the actual server? The attacker of course does not have the private key to decrypt the communication, but that does not stop the handshake from happening. Since the certificate is signed, when this certificate reaches the victim's browser, it's going to say that the cert is indeed fine. What am I missing here? | The handshake includes these (rough) steps: The server sends its public key. The client encrypts setup info with that public key, and sends it back to the server. The server decrypts the client's submission and uses it to derive a shared secret. Further steps use that shared secret to set up the actual encryption to be used. So the answer to your question is: since an imposter can't perform step 3 (since it doesn't have the private key), it can never move on to step 4. It doesn't have the shared secret, so it can't complete the handshake. | {
"source": [
"https://security.stackexchange.com/questions/20059",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11218/"
]
} |
20,129 | I was reading HMAC on wikipedia and I was confused about a few points. Where do I use HMAC? Why is the key part of the hash? Even if someone successfully used a "length-extension attack", how would that be useful to the attacker? | A message authentication code (MAC) is produced from a message and a secret key by a MAC algorithm. An important property of a MAC is that it is impossible¹ to produce the MAC of a message and a secret key without knowing the secret key. A MAC of the same message produced by a different key looks unrelated. Even knowing the MAC of other messages does not help in computing the MAC of a new message. An HMAC is a MAC which is based on a hash function . The basic idea is to concatenate the key and the message, and hash them together. Since it is impossible, given a cryptographic hash, to find out what it is the hash of, knowing the hash (or even a collection of such hashes) does not make it possible to find the key. The basic idea doesn't quite work out, in part because of length extension attacks , so the actual HMAC construction is a little more complicated. For more information, browse the hmac tag on Cryptography Stack Exchange , especially Why is H(k||x) not a secure MAC construction? , Is H(k||length||x) a secure MAC construction? and HMAC vs MAC functions . There are other ways to define a MAC, for example MAC algorithms based on block ciphers such as CMAC . A MAC authenticates a message. If Alice sees a message and a MAC and knows the associated secret key, she can verify that the MAC was produced by a principal that knows the key by doing the MAC computation herself. Therefore, if a message comes with a correct MAC attached, it means this message was seen by a holder of the secret key at some point. A MAC is a signature based on a secret key, providing similar assurances to a signature scheme based on public-key cryptography such as RSA-based schemes where the signature must have been produced by a principal in possession of the private key. For example, suppose Alice keeps her secret key to herself and only ever uses it to compute MACs of messages that she stores on a cloud server or other unreliable storage media. If she later reads back a message and sees a correct MAC attached to it, she knows that this is one of the messages that she stored in the past. An HMAC by itself does not provide message integrity. It can be one of the components in a protocol that provides integrity. For example, suppose that Alice stores successive versions of multiple files on an unreliable media, together with their MACs. (Again we assume that only Alice knows the secret key.) If she reads back a file with a correct MAC, she knows that what she read back is some previous version of some file she stored. An attacker in control of the storage media could still return older versions of the file, or a different file. One possible way to provide storage integrity in this scenario would be to include the file name and a version number as part of the data whose MAC is computed; Alice would need to remember the latest version number of each file so as to verify that she is not given stale data. Another way to ensure integrity would be for Alice to remember the MAC of each file (but then a hash would do just as well in this particular scenario). ¹ “Impossible” as in requiring far more computing power than realistically possible. | {
"source": [
"https://security.stackexchange.com/questions/20129",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
20,134 | It's described very well by this diagram. It seems like the process used is convoluted and more round-about than it needs to be. Why is an intermediate random key generated for the payload's encryption and then transmitted with the message after its own encryption using the recipient's public key, instead of just using the recipient's public key directly on the message? Isn't it the same, as far as security properties go? | RSA is not used directly due to several reasons: RSA encrypts only messages with a limited size. With a 1024-bit RSA key, RSA (as per PKCS#1 ) can process only 117 bytes of data. To encrypt more than that, one would have to do some chaining, i.e. split the data to encrypt into several 117-byte blocks and encrypt them separately. This is routinely done for symmetric encryption (this is called "modes of operation" ) but it is not that easy to do securely, and nobody quite knows how to do a secure mode of operation for RSA. Hybrid encryption allows for efficient multi-recipient data. You symmetrically encrypt the data with a key K , then you encrypt K with the RSA key of each recipient. When you send a 3 MB file to 10 people, you would prefer to compute and send an encrypted email of size 3.01 MB, rather than ten 3 MB emails... RSA enlarges your data. With a 1024-bit RSA key, you encrypt at most a 117-byte chunk, but you get 128 bytes on output, so that's a 10% enlargement. On the other hand, symmetric encryption incurs only constant size increase. RSA encryption and decryption are fast, but not very fast. Doing a lot of RSA could prove problematic in high-bandwidth contexts (it would be fine for emails, with today's machines, not for a VPN). The fourth reason is the most often quoted, but actually the least compelling of the four. | {
"source": [
"https://security.stackexchange.com/questions/20134",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12803/"
]
} |
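For illustration, here is roughly what the hybrid construction described above looks like with the third-party Python cryptography package. This is a bare sketch and an assumption on my part; real OpenPGP implementations use their own packet formats and add signing, key management and more.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_pub = recipient_key.public_key()
message = b"imagine 3 MB of mail body here"

# 1. Fresh random session key; the bulk data is encrypted symmetrically (fast, constant overhead).
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, message, None)

# 2. Only the small session key is RSA-encrypted, once per recipient.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_pub.encrypt(session_key, oaep)

# Decryption reverses the two steps.
recovered = recipient_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered).decrypt(nonce, body, None) == message

For ten recipients only step 2 is repeated ten times, which is the multi-recipient argument made in the answer; the 3 MB body is encrypted once.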
20,187 | https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-30#section-10.12 says: The client MUST implement CSRF protection [...] typically accomplished by requiring any request sent to the redirection URI endpoint to include a value that binds the request to the user-agent's authenticated state (e.g. a hash of the session cookie [...] It doesn't say much about the implementation, though. It only puts some light on how the CSRF works: A CSRF attack against the client's redirection URI allows an attacker to inject their own authorization code or access token, which can result in the client using an access token associated with the attacker's protected resources rather than the victim's (e.g. save the victim's bank account information to a protected resource controlled by the attacker) But use of the word "rather" rather makes the statement worthless. I am thinking how to implement the "state" in GAE (using Webapp2). It would be easiest starting at how a hacker could use a CSRF against OAuth2. I found only one good article about the matter: "Cross Site Request Forgery and OAuth2" . Unfortunately, while this blog post is well written, there's not much information beyond explaining the OAuth2. The examples don't work, and I don't know Spring. Still, I found one interesting recommendation there: the server connecting to an OAuth2 provider should store "state" as a random session key (e.g. "this_is_the_random_state":"this_doesn't_matter") , and not a value under a static key (e.g. "state":"random_state_string"). My question is, what's the sane implementation of the "state"? Should the randomly generated state be hashed, or can the same value be stored and sent to the OAuth2 provider? Is there a difference here if the session backend is secure cookies or a server-side storage technology (e.g. in GAE Memcache, or database)? Should state be stored as a key as suggested? Should state has validity period, or is session (if there is one) lifetime enough? | Let's walk through how this attack works. The Attack I visit some client's website and start the process of authorizing that client to access some service provider using OAuth The client asks the service provider for permission to request access on my behalf, which is granted I am redirected to the service provider's website, where I would normally enter my username/password in order to authorize access Instead, I trap/prevent this request and save its URL I somehow get you to visit that URL instead. If you are logged-in to the service provider with your own account, then your credentials will be used to issue an authorization code The authorization code is exchanged for an access token Now my account on the client is authorized to access your account on the service provider So, how do we prevent this using the state parameter? Prevention The client should create a value that is somehow based on the original user's account (a hash of the user's session key, for example). It doesn't matter what it is as long as it's unique and generated using some private information about the original user. This value is passed to the service provider in the redirect from step three above Now, I get you to visit the URL I saved (step five above) The authorization code is issued and sent back to the client in your session along with the state parameter The client generates a state value based on your session information and compares it to the state value that was sent back from the authorization request to the service provider. 
This value does not match the state parameter on the request, because that state value was generated based on my session information, so it is rejected. Your Questions Should the randomly generated state be hashed or can same value be stored and sent to OAuth2 provider? The point is that the attacker should not be able to generate a state value that is specific to a given user. It should be unguessable. Is there a difference here, if session back-end is secure-cookies or a server-side storage (in GAE Memcache or database)? I don't think this matters (if I understand you correctly) Should state be stored as a key as suggested? I don't know what this means. Should state has validity period, or session (if there is one) lifetime is enough? Yes, state should have an expiration. It doesn't necessarily have to be tied to the session, but it could be. | {
"source": [
"https://security.stackexchange.com/questions/20187",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9134/"
]
} |
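A minimal sketch of one common way to implement the state handling described above: an unguessable per-session random value, stored server-side, checked and consumed on the callback. The session dict and URLs are assumptions standing in for whatever framework you use.

import hmac
import secrets

def begin_authorization(session: dict) -> str:
    state = secrets.token_urlsafe(32)                 # unguessable, bound to this user's session
    session["oauth_state"] = state
    return ("https://provider.example/authorize?response_type=code"
            "&client_id=CLIENT_ID&redirect_uri=https%3A%2F%2Fclient.example%2Fcb"
            "&state=" + state)

def handle_callback(session: dict, returned_state: str, code: str) -> None:
    expected = session.pop("oauth_state", None)       # single use
    if expected is None or not hmac.compare_digest(expected.encode(), returned_state.encode()):
        raise PermissionError("state mismatch: possible CSRF, discarding authorization code")
    # Only now is it safe to exchange `code` for an access token.

Making the stored value single-use, as the pop above does, and expiring it after a few minutes covers the validity-period part of the question; whether sessions live in secure cookies or in server-side storage does not change the scheme.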
20,219 | I am helpless against some kiddy with backtrack who repeatedly uses aireplay-ng to deauthenticate legitimate users on my Wifi work network. I captured and analyzed the network traffic on my Wifi work network, and I noticed a remarkable amount of 802.11 deauth packets. I realize it may not be possible to catch him, or even know where the attack came from. I just want to know: Is there any way to prevent such an attack? | Realistically, you cannot stop a bad guy from sending deauthentication packets. Instead, you should focus on ensuring you are resilient to a deauth attack. Make sure your network is configured in a way that the deauth attack doesn't enable an attacker to compromise your network. To do that, you need to make sure you are using WPA2. If you are using a pre-shared key (a passphrase), make sure the passphrase is very long and strong. If it is not already, change it immediately! If you are not using WPA2, fix that immediately! The primary reason why bad guys send deauth packets is that this helps them execute a dictionary attack against your passphrase. If a bad guy captures a copy of the initial handshake, they can try out various guesses at your passphrase and test whether they are correct. Sending a deauth packet forces the targeted device to disconnect and reconnect, allowing an eavesdropper to capture a copy of the initial handshake. Therefore, standard practice of many attackers who might try to attack your wireless network is to send deauth packets. If you are seeing many deauth packets, that is a sign that someone may be trying to attack your wireless network and guess your passphrase. Once the attacker has sent a deauth packet and intercepted the initial handshake, there are tools and online services that automate the task of trying to recover the passphrase, by guessing many possibilities. (See, e.g., CloudCracker for a representative example.) The defense against this kind of attack is to ensure your passphrase is so long and strong that it cannot possibly be guessed. If it's not already long and strong, you need to change it right away, because someone is probably trying to guess it as we speak. (The other reason a bad guy might send deauth packets is as an annoyance. However, as most users probably won't even notice, it's not a very effective annoyance.) To learn more, see these resources: How does deauthing work in aireplay-ng? Can someone get my WPA2 password with honeypots? | {
"source": [
"https://security.stackexchange.com/questions/20219",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12661/"
]
} |
20,270 | I am searching for a software that would encrypt data and then delete it (as opposed to just deleting or wiping). With that, even if the file is recovered, you still have to break the encryption to get the data. I suppose manually I could encrypt the files through GPG and then delete them but I was wondering if there a more automated option through specialized software. | I understand why you are asking and you are kinda on the right track. You think that if you encrypt the data before shredding it that if someone was able to reconstruct the data all they would get is encrypted data. It's a good thought, however when you encrypt a file it makes an encrypted copy of the file on another part of the file system. The original still has to be shredded anyway, either manually or automatically by the utility. So by encrypting it then shredding the encrypted file you're actually leaving 2 copies of the file rather than one, causing a net decrease in security. | {
"source": [
"https://security.stackexchange.com/questions/20270",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13129/"
]
} |
20,294 | Context When developing desktop applications, you will occasionally have to store credentials somewhere to be able to authenticate your application. An example of this is a Facebook app ID + secret; another one is MySQL credentials. Storing these in plain text in the application's source code doesn't provide any true security, since it isn't too much hassle to reverse engineer a program. Gathering the credentials from a server won't do the trick either, since hackers can easily perform requests themselves.
Neither will encryption of the stored credentials make any difference since the application will need access to the decryption key to be able to use the credentials at all. Question How can one store application specific credentials securely? Preferably cross-OS. Note: The relevant language is Java, however, I believe (think) that this is a language agnostic question. | Never hardcode passwords or crypto keys in your program. The general rule of thumb is: the only credentials you should store on a user's machine are credentials associated with that user , e.g., credentials that enable that user to log into his/her account. You should not store your developer credentials on the user's machine. That's not safe. You have to assume that anything stored on the user's machine is known by the user, or can easily be learned by the user. (This is the right assumption: it is not hard to reverse-engineer an application binary to learn any keys or secrets that may be embedded in it.) Once you understand this general principle, everything becomes easy. Basically, then you need to design the rest of your system and your authentication protocol so that client software can authenticate itself using only those credentials that are safe to store on the client. Example 1. Suppose you have a Facebook app ID and key, associated with your app (i.e., associated with your developer account). Do you embed the app ID and key in the desktop software you ship to users? No! Absolutely not. You definitely don't do that, because that would allow of any of your users to learn your app ID and key and submit their own requests, possibly damaging your reputation. Instead, you find another way. For instance, maybe you set up your own server that has the app ID and key and is responsible for making the requests to the Facebook platform (subject to appropriate limitations and rate-limiting). Then, your client connects to your server. Maybe you authenticate each client by having each user set up his/her own user account on your server, storing the account credentials on the client, and having the client authenticate itself using these credentials. You can make this totally invisible to the user, by having the client app generate a new user account on first execution (generating its own login credentials, storing them locally, and sending them to the server). The client app can use these stored credentials to connect in the future (say, over SSL) and automatically log in every subsequent time the app is executed. Notice how the only thing stored on a user's machine are credentials that allow to log into that user's account -- but nothing that would allow logging into other people's accounts, and nothing that would expose developer app keys. Example 2. Suppose you write an app that needs to access the user's data in their Google account. Do you prompt them for their Google username and password and store it in the app-local storage? You could: that would be OK, because the user's credentials are stored on the user's machine. The user has no incentive to try to hack their own machine, because they already know their own credentials. Even better yet: use OAuth to authorize your app. This way your app stores an OAuth token in its app-local storage, which allows your app to access the user's Google account. It also avoids the need to store the user's Google password (which is particularly sensitive) in the app's local storage, so it reduces the risk of compromise. Example 3. 
Suppose you're writing an app that has a MySQL database backend that is shared among all users. Do you take the MySQL password and embed it into the app binary? No! Any of your users could extract the password and then gain direct access to your MySQL database. Instead, you set up a service that provides the necessary functionality. The client app connects to the service, authenticates itself, and sends the request to the service. The service can then execute this request on the MySQL database. The MySQL password stays safely stored on the server's machine, and is never accessible on any user's machine. The server can impose any restrictions or access control that you desire. This requires your client app to be able to authenticate to the service. One way to do that is to have the client app create a new account on the service on first run, generate a random authentication credential, and automatically log in to the service every subsequent time. You could use SSL with a random password, or even better yet, use something like SSL with a unique client certificate for each client. The other rule is: you don't hardcode credentials into the program. If you are storing credentials on the user's machine, store them in some private location: maybe a configuration file or in a directory, preferably one that is only readable by this particular app or this particular user (not a world-readable file). | {
"source": [
"https://security.stackexchange.com/questions/20294",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13224/"
]
} |
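To illustrate the "generate a per-user credential on first run and keep it in app-private storage" pattern from Example 1 above, here is a small sketch. The file location and the registration endpoint are assumptions; on platforms that offer one, the OS keychain/keystore is a better home for this value.

import os
import secrets
from pathlib import Path

CRED_FILE = Path.home() / ".myapp" / "device_credential"      # hypothetical app-private path

def load_or_create_credential() -> str:
    if CRED_FILE.exists():
        return CRED_FILE.read_text().strip()
    credential = secrets.token_urlsafe(32)                     # belongs to this user only
    CRED_FILE.parent.mkdir(parents=True, exist_ok=True)
    fd = os.open(CRED_FILE, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)   # owner-readable only
    with os.fdopen(fd, "w") as f:
        f.write(credential)
    # On first run, register this credential with the backend over TLS,
    # e.g. POST https://api.example.com/register (hypothetical endpoint).
    return credential

Nothing in the shipped binary or in this file lets one user impersonate another user or the developer, which is the rule the answer keeps coming back to.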
20,371 | I'm looking for an answer that explains the issue from the prespective of a developer/engineer. What is exactly being exploited and why does it work? Please note that I am not looking for instructions on how to exploit nor I am interested in reproducing it in any way; I'm merely curious about the technical background behind it. Some Relevant Links: Microsoft Security Advisory 2757760 CVE-20012-4969 on MITRE CVE-20012-4969 on NVD CVE-20012-4969 on CVE Details | CVE-2012-4969 , aka the latest IE 0-day, is a based on a use-after-free bug in IE's rendering engine. A use-after-free occurs when a dynamically allocated block of memory is used after it has been disposed of (i.e. freed). Such a bug can be exploited by creating a situation where an internal structure contains pointers to sensitive memory locations (e.g. the stack or executable heap blocks) in a way that causes the program to copy shellcode into an executable area. In this case, the problem is with the CMshtmlEd::Exec function in mshtml.dll . The CMshtmlEd object is freed unexpectedly, then the Exec method is called on it after the free operation. First, I'd like to cover some theory. If you already know how use-after-free works, then feel free to skip ahead. At a low level, a class can be equated to a memory region that contains its state (e.g. fields, properties, internal variables, etc) and a set of functions that operate on it. The functions actually take a "hidden" parameter, which points to the memory region that contains the instance state. For example (excuse my terrible pseudo-C++): class Account
{
int balance = 0;
int transactionCount = 0;
void Account::AddBalance(int amount)
{
balance += amount;
transactionCount++;
}
void Account::SubtractBalance(int amount)
{
balance -= amount;
transactionCount++;
}
} The above can actually be represented as the following: private struct Account
{
int balance = 0;
int transactionCount = 0;
}
public void* Account_Create()
{
Account* account = (Account*)malloc(sizeof(Account));
account->balance = 0;
account->transactionCount = 0;
return (void*)account;
}
public void Account_Destroy(void* instance)
{
free(instance);
}
public void Account_AddBalance(void* instance, int amount)
{
((Account*)instance)->balance += amount;
    ((Account*)instance)->transactionCount++;
}
public void Account_SubtractBalance(void* instance, int amount)
{
((Account*)instance)->balance -= amount;
((Account*)instance)->transactionCount++;
}
public int Account_GetBalance(void* instance)
{
return ((Account*)instance)->balance;
}
public int Account_GetTransactionCount(void* instance)
{
return ((Account*)instance)->transactionCount;
} I'm using void* to demonstrate the opaque nature of the reference, but that's not really important. The point is that we don't want anyone to be able to alter the Account struct manually, otherwise they could add money arbitrarily, or modify the balance without increasing the transaction counter. Now, imagine we do something like this: void* myAccount = Account_Create();
Account_AddBalance(myAccount, 100);
Account_SubtractBalance(myAccount, 75);
// ...
Account_Destroy(myAccount);
// ...
if(Account_GetBalance(myAccount) > 1000) // <-- !!! use after free !!!
ApproveLoan(); Now, by the time we reach Account_GetBalance , the pointer value in myAccount actually points to memory that is in an indeterminate state. Now, imagine we can do the following: Trigger the call to Account_Destroy reliably. Execute any operation after Account_Destroy but before Account_GetBalance that allows us to allocate a reasonable amount of memory, with contents of our choosing. Usually, these calls are triggered in different places, so it's not too difficult to achieve this. Now, here's what happens: Account_Create allocates an 8-byte block of memory (4 bytes for each field) and returns a pointer to it. This pointer is now stored in the myAccount variable. Account_Destroy frees the memory. The myAccount variable still points to the same memory address. We trigger our memory allocation, containing repeating blocks of 39 05 00 00 01 00 00 00 . This pattern correlates to balance = 1337 and transactionCount = 1 . Since the old memory block is now marked as free, it is very likely that the memory manager will write our new memory over the old memory block. Account_GetBalance is called, expecting to point to an Account struct. In actuality, it points to our overwritten memory block, resulting in our balance actually being 1337, so the loan is approved! This is all a simplification, of course, and real classes create rather more obtuse and complex code. The point is that a class instance is really just a pointer to a block of data, and class methods are just the same as any other function, but they "silently" accept a pointer to the instance as a parameter. This principle can be extended to control values on the stack, which in turn causes program execution to be modified. Usually, the goal is to drop shellcode on the stack, then overwrite a return address such that it now points to a jmp esp instruction, which then runs the shellcode. This trick works on non-DEP machines, but when DEP is enabled it prevents execution of the stack. Instead, the shellcode must be designed using Return-Oriented Programming (ROP) , which uses small blocks of legitimate code from the application and its modules to perform an API call, in order to bypass DEP. Anyway, I'm going off-topic a bit, so let's get into the juicy details of CVE-2012-4969! In the wild, the payload was dropped via a packed Flash file , designed to exploit the Java vulnerability and the new IE bug in one go. There's also been some interesting analysis of it by AlienVault . The metasploit module says the following: This module exploits a vulnerability found in Microsoft Internet Explorer (MSIE). When rendering an HTML page, the CMshtmlEd object gets deleted in an unexpected manner, but the same memory is reused again later in the CMshtmlEd::Exec() function, leading to a use-after-free condition. There's also an interesting blog post about the bug, albeit in rather poor English - I believe the author is Chinese. Anyway, the blog post goes into some detail: When the execCommand function of IE execute a command event, will allocated the corresponding CMshtmlEd object by AddCommandTarget function, and then call mshtml@CMshtmlEd::Exec() function execution. But, after the execCommand function to add the corresponding event, will immediately trigger and call the corresponding event function. Through the document.write("L") function to rewrite html in the corresponding event function be called. 
Thereby lead IE call CHTMLEditor::DeleteCommandTarget to release the original applied object of CMshtmlEd , and then cause triggered the used-after-free vulnerability when behind execute the msheml!CMshtmlEd::Exec() function. Let's see if we can parse that into something a little more readable: An event is applied to an element in the document. The event executes, via execCommand , which allocates a CMshtmlEd object via the AddCommandTarget function. The target event uses document.write to modify the page. The event is no longer needed, so the CMshtmlEd object is freed via CHTMLEditor::DeleteCommandTarget . execCommand later calls CMshtmlEd::Exec() on that object, after it has been freed. Part of the code at the crash site looks like this: 637d464e 8b07 mov eax,dword ptr [edi]
637d4650 57 push edi
637d4651 ff5008 call dword ptr [eax+8] The use-after-free allows the attacker to control the value of edi , which can be modified to point at memory that the attacker controls. Let's say that we can insert arbitrary code into memory at 01234f00 , via a memory allocation. We populate the data as follows: 01234f00: 01234f08
01234f04: 41414141
01234f08: 01234f0a
01234f0a: cccccccc // int3 breakpoint We set edi to 01234f00 , via the use-after-free bug. mov eax,dword ptr [edi] results in eax being populated with the memory at the address in edi , i.e. 01234f00 . push edi pushes 01234f00 to the stack. call dword ptr [eax+8] takes eax (which is 01234f00 ) and adds 8 to it, giving us 01234f08 . It then dereferences that memory address, giving us 01234f0a . Finally, it calls 01234f0a . The data at 01234f0a is treated as an instruction. cc translates to an int3 , which causes the debugger to raise a breakpoint. We've executed code! This allows us to control eip , so we can modify program flow to our own shellcode, or to a ROP chain. Please keep in mind that the above is just an example , and in reality there are many other challenges in exploiting this bug. It's a pretty standard use-after-free, but the nature of JavaScript makes for some interesting timing and heap-spraying tricks, and DEP forces us to use ROP to gain an executable memory block. Anyway, this was fun to research, and I hope it helps. | {
"source": [
"https://security.stackexchange.com/questions/20371",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11358/"
]
} |
20,406 | The CRIME attack taught us that using compression can endanger confidentiality. In particular, it is dangerous to concatenate attacker-supplied data with sensitive secret data and then compress and encrypt the concatenation; any time we see that occurring, at any layer of the system stack, we should be suspicious of the potential for CRIME-like attacks. Now the CRIME attack, at least as it has been publicly described so far, is an attack on TLS compression. Background: TLS includes a built-in compression mechanism, which happens at the TLS level (the entire connection is compressed). Thus, we have a situation where attacker-supplied data (e.g., the body of a POST request) gets mixed with secrets (e.g., cookies in the HTTP headers), which is what enabled the CRIME attack. However there are also other layers of the system stack that may use compression. I am thinking especially of HTTP compression . The HTTP protocol has built-in support for compressing any resources that you download over HTTP. When HTTP compression is enabled, compression is applied to the body of the response (but not the headers). HTTP compression will be enabled only if both the browser and the server support it, but most browsers and many servers do, because it improves performance . Note that HTTP compression is a different mechanism from TLS compression; HTTP compression is negotiated at a higher level of the stack, and only applies to the body of the response. However, HTTP compression can be applied to data that is downloaded over a SSL/TLS connection, i.e., to resources downloaded via HTTPS. My question: Is HTTP compression safe to use, on HTTPS resources? Do I need to do something special to disable HTTP compression of resources that are accessed over HTTPS? Or, if HTTP compression is somehow safe, why is it safe? | It seems risky to me. HTTP compression is fine for static resources, but for some dynamic resources served over SSL, it seems like HTTP compression might be dangerous. It looks to me like HTTP compression can, in some circumstances, allow for CRIME-like attacks. Consider a web application that has a dynamic page with the following characteristics: It is served over HTTPS. HTTP compression is supported by the server (this page will be sent to the browser in compressed form, if the browser supports HTTP compression). The page has a CSRF token on it somewhere. The CSRF token is fixed for the lifetime of the session (say). This is the secret that the attack will try to learn. The page contains some dynamic content that can be specified by the user. For simplicity, let us suppose that there is some URL parameter that is echoed directly into the page (perhaps with some HTML escaping applied to prevent XSS, but that is fine and will not deter the attack described). Then I think CRIME-style attacks might allow an attacker to learn the CSRF token and mount CSRF attacks on the web site. Let me give an example. Suppose the target web application is a banking website on www.bank.com , and the vulnerable page is https://www.bank.com/buggypage.html . Suppose the bank ensures that the banking stuff is only accessible by SSL (https). And, suppose that if the browser visits https://www.bank.com/buggypage.html?name=D.W. , then the server will respond with a HTML document looking something vaguely like this: <html>...<body>
Hi, D.W.! Pleasure to see you again. Some actions you can take:
<a href="/closeacct&csrftoken=29238091">close my account</a>,
<a href="/viewbalance&csrftoken=...">view my balance</a>, ...
</body></html> Suppose you are browsing the web over an open Wifi connection, so that an attacker can eavesdrop on all of your network traffic. Suppose that you are currently logged into your bank, so your browser has an open session with your bank's website, but you're not actually doing any banking over the open Wifi connection. Suppose moreover that the attacker can lure you to visit the attacker's website http://www.evil.com/ (e.g., maybe by doing a man-in-the-middle attack on you and redirecting you when you try to visit some other http site). Then, when your browser visits http://www.evil.com/ , that page can trigger cross-domain requests to your bank's website, in an attempt to learn the secret CSRF token. Notice that Javascript is allowed to make cross-domain requests. The same-origin policy does prevent it from seeing the response to a cross-domain request. Nonetheless, since the attacker can eavesdrop on the network traffic, the attacker can observe the length of all encrypted packets and thus infer something about the length of the resources that are being downloaded over the SSL connection to your bank. In particular, the malicious http://www.evil.com/ page can trigger a request to https://www.bank.com/buggypage.html?name=closeacct&csrftoken=1 and look at how well the resulting HTML page compresses (by eavesdropping on the packets and looking at the length of the SSL packet from the bank). Next, it can trigger a request to https://www.bank.com/buggypage.html?name=closeacct&csrftoken=2 and see how well the response compresses. And so on, for each possibility for the first digit of the CSRF token. One of those should compress a little bit better than the others: the one where the digit in the URL parameter matches the CSRF token in the page. This allows the attacker to learn the first digit of the CSRF token. In this way, it appears that the attacker can learn each digit of the CSRF token, recovering them digit-by-digit, until the attacker learns the entire CSRF token. Then, once the attacker knows the CSRF token, he can have his malicious page on www.evil.com trigger a cross-domain request that contains the appropriate CSRF token -- successfully defeating the bank's CSRF protections. It seems like this may allow an attacker to mount a successful CSRF attack on web applications, when the conditions above apply, if HTTP compression is enabled. The attack is possible because we are mixing secrets with attacker-controlled data into the same payload, and then compressing and encrypting that payload. If there are other secrets that are stored in dynamic HTML, I could imagine that similar attacks might become possible to learn those secrets. This is just one example of the sort of attack I am thinking of. So, it seems to me that using HTTP compression on dynamic pages that are accessed over HTTPS is a bit risky. There might be good reasons to disable HTTP compression on all resources served over HTTPS, except for static pages/resources (e.g., CSS, Javascript). | {
"source": [
"https://security.stackexchange.com/questions/20406",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/971/"
]
} |
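A toy Python model of the compression side-channel described in the answer above: the page mixes a secret token with attacker-chosen input, and the attacker observes only the compressed size, just as an eavesdropper sees TLS record lengths. The token value and page markup are made up.

import zlib

SECRET_TOKEN = "29238091"   # hypothetical CSRF token embedded in the page

def compressed_length(attacker_input: str) -> int:
    # Build the page the way the vulnerable server would, compress it the way
    # HTTP/TLS compression would, and measure only the resulting length.
    page = (
        "<html><body>Hi, {name}! "
        '<a href="/closeacct&csrftoken={token}">close my account</a>'
        "</body></html>"
    ).format(name=attacker_input, token=SECRET_TOKEN)
    return len(zlib.compress(page.encode()))

right = compressed_length("closeacct&csrftoken=" + SECRET_TOKEN)   # guess equals the secret
wrong = compressed_length("closeacct&csrftoken=" + "00000000")     # same length, wrong guess

print("correct guess:", right, "bytes")   # should come out a few bytes smaller:
print("wrong guess  :", wrong, "bytes")   # the duplicated string deduplicates away

The real attack recovers the token one character at a time by watching for the same small differences, and needs extra padding tricks because DEFLATE packs its output at the bit level; the sketch only shows why a matching guess compresses better.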
20,411 | As a whitehat pentester I often wonder about the darkside. I see myself working in the office, and imagine that there is someone just like me in China or Romania or in their parent's basement that is pretty much doing the exact same thing, but hurting people for money or just for the "lulz". Similar to this great question: " What are the career paths in the computer security field? ". What is the job scene like on the darkside? What kind of crazy job opportunities await the amoral hacker? Do all of the "common roles" covered in AviD's answer have a darkside counterpart? What about the gray areas of computer security? ( Which some of us consider to be just plain-old blackhat ) | I can't comment on the actual job scene, but I do know a bit about the statistics of cybercrime. In terms of financial gain, the stats are quite interesting. In terms of profit, the top three are as follows: Pay-per-click advertising fraud - Wasn't so much of a profit-maker until recently, but blackhats seem to have focused on this method more intensely since the spam market got saturated. It's estimated that larger botnets, with numbers in the millions, generate up to $100k per day . Email spam - Still very profitable, but is a highly saturated market. Spam botnets have to be huge in order to turn a worthwhile profit, which makes them a top target for whitehats. More intelligent anti-spam systems also make it very difficult to target every-day users (e.g. hotmail, gmail, etc). It's estimated that the BredoLab botnet generated around $139k per month for the owner. Carding / skimming - The old standard. Whilst the effort required to capture credentials is reasonably easy, it's more labour-intensive and risky to actually use the cards. However, the pay-off generated by a usable card can be large - hundreds or thousands of dollars per success. However, don't think it's all sunshine and rainbows and high-class hookers! Most large botnet operators and card frausters get caught, and a lot of them go to prison: The writer of BredoLab got 4 years in prison. 3 people were arrested and are awaiting sentencing regarding the Mariposa botnet. As part of the FBI crackdown on the Zeus trojan, around 100 people were arrested and several received jail sentences. The Srizbi botnet caused an entire hosting company to be shut down by the government, and further arrests were made. The writer of the Akbot botnet got arrested, but was later released due to problems with the case. Lucky escape! Hundreds of people are arrested every year for credit card fraud. | {
"source": [
"https://security.stackexchange.com/questions/20411",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/975/"
]
} |
20,434 | What is the opinion of scholars, cryptographers on NSA Suite A? Containing unpublished algorithms. Could it really be that much better than the published algorithms besides being obscure and not documented publicly? | My opinion (and I am a cryptographer -- I have a shiny diploma which says so) is that: We cannot speculate on unknown algorithms, because they are, well, unknown. NSA is like all secret services in the World, they really love secrecy and will practice it for the sake of it. So the fact that their algorithms are not published is in no way indicative of some particular strength or weakness of the said algorithms. It is entirely plausible that the unpublished algorithms are indeed distinct from publicly known algorithms such as AES or RSA. It is also entirely plausible that "Suite A" and "Suite B" are, in fact, identical. At some point, to use some algorithms, you must have implementations , and these things do not grow on trees. Having your own algorithms is thus expensive. If I were a US taxpayer, I would be slightly dismayed at the misuse of my tax money, if it turned out that NSA spent it on developing and maintaining custom algorithms instead of reusing perfectly fine ones like the AES. There most probably are some people with power to decide a lot of things in the NSA, who believe that not publishing algorithms increases their security. Such people exist everywhere . It does not make them right, though. There is no better security than "cannot break it", which is what we already have with (properly used) AES, RSA, DH, ECC... The NSA could know of faster algorithms which are as secure as the public ones; however, it would be hard to beat the performance of hardware-accelerated AES , unless they have their own CPU foundry. The danger in security by obscurity is in believing that it works well. It may induce people to feel safe with homemade algorithms, because they would assume that the obscurity will hide the weaknesses of their algorithms. However, if you use good algorithms with published and well-studied protocols (i.e. AES, SSL...) then there is no harm done in not saying that you do. | {
"source": [
"https://security.stackexchange.com/questions/20434",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13249/"
]
} |
20,461 | I have a cronned Perl script that connects to our database and does various kinds of lookups and integrity checks. The original script was written by someone long ago. My job is to make some changes to it. But I really don't like staring at the username="foo", password="bar" parameters hard-coded in there for accessing the database. There's got to be a more secure way of doing this. All I could think of to do for now is to comment out the cron job, delete the line in the script that had the password, and start brainstorming about how to make this more secure. But meanwhile the things the script does have to be done by hand. Any ideas? | The standard way is to put the credentials into a config file, and attempt to protect the config file from being more readable than the perl file. This offers a moderate increase in security; for example, the code may be in source control and accessible to developers, the config file wouldn't be. The code needs to be in the web server's cgi root, and possibly downloadable under certain misconfigurations, and the config file needn't be. The ambitious way is to reversibly encrypt the credentials and put them into a config file. Of course, anything reversibly encrypted can be decrypted. The BladeLogic application did this, and it was trivial (<1 day) for me to de-compile their Java enough to find out the function to decrypt credentials and use it to decrypt them to my satisfaction. Not a mark against them; that's just the name of the reversibly encrypted game. Another option is to use OS-based authorization in concert with strictly limited database restrictions. For example, limit the database client user's access to a set of stored procedures to limit the potential for abuse, and allow that user to access the database without a password. This doesn't work if you're doing client-server over the network, which limits how often it's useful. Also, people tend to look more askance at "passwordless" OS-user access than they do at writing the password down willy-nilly. It is not completely logical, but there are standards that say all database users must have passwords, so that's that. | {
"source": [
"https://security.stackexchange.com/questions/20461",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12057/"
]
} |
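A minimal sketch of the "credentials in a protected config file" approach from the answer above, written in Python for brevity (the original script is Perl, but the pattern is identical). The path, section, and key names are made up.

import configparser
import os
import stat

CONFIG_PATH = "/etc/myapp/db.conf"   # readable only by the account the cron job runs as

def load_db_credentials(path: str = CONFIG_PATH) -> dict:
    # Refuse to run if the file is group- or world-readable: the whole point is
    # that the config file must be less exposed than the script itself.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        raise PermissionError("%s is readable by group/other (mode %o)" % (path, mode))

    parser = configparser.ConfigParser()
    parser.read(path)
    section = parser["database"]
    return {"user": section["username"], "password": section["password"]}

if __name__ == "__main__":
    creds = load_db_credentials()
    print("loaded credentials for user:", creds["user"])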
20,473 | I know that scripting languages ( Perl , Ruby , Python , javascript, and even Lua!!!) are most suitable for hacking and penetration testing. My question is: What is it that makes those languages suitable? From what I know, they are slower than other languages, and operate at a higher abstraction level, which means they are too far from the hardware. The only reason I could think is because of their advanced string manipulation capabilities, but I believe that other languages have such capabilities. | Languages are useful for doing things . What type of things it's suitable for completely depends on the type of language, the frameworks available for it, what OSes have interpreters / compilers for it, etc. Let's look at the ones you've mentioned: Perl Scripting language General purpose Available on most *nix OSes since the '90s. Great for quick hacks and short scripts. Ruby Scripting language General purpose Cross-platform Object-oriented Reflective (can see its own structure and code) Good for dynamic frameworks Python Scripting language General purpose Cross-platform Designed for clear and readable source code Huge framework of libraries JavaScript Scripting language Web-based Cross-platform (available on every major browser) So what makes these particularly good for pentesting? Well, most pentesting involves writing up quick throw-away tools to do a specific job for a specific test. Writing such a tool in C or C++ every time you want to do a quick job is cumbersome and time-consuming. Furthermore, they tend to produce platform-specific binaries or source that requires platform-specific compilation, rather than cross-platform scripts that just run . Scripting languages give you the flexibility to produce such tools quickly and easily. For example, Ruby and Python are popular for more complex tasks because they have comprehensive libraries, whereas Perl is popular for quick data processing hacks. JavaScript is commonly utilised as a simple browser-based language that everyone has access to. Other languages such as C tend to be used for more low-level tasks that interface with the OS. Now, the other side of the coin is languages used as payloads. This is where the line gets blurred, because requirements are so varied. For attacking Windows boxes, any payload that has no dependencies outside of what the OS provides is useful. This might be C, C++, VBScript, x86 asm, C# / VB.NET (.NET 2.0 is on most machines these days), etc. For attacking Linux boxes you might use C, C++, bash scripts or Perl. Java is also common for cross-platform attacks. At the end of the day, pick the language that you find best for the job! | {
"source": [
"https://security.stackexchange.com/questions/20473",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10504/"
]
} |
20,497 | To prevent buffer overflows, there are several protections available, such as canary values, ASLR, DEP, and NX. But where there is a will, there is a way. I am researching the various methods an attacker could use to bypass these protection schemes. It seems there is no single place where clear information is provided.
These are some of my thoughts. Canary - An attacker could figure out the canary value and use that in his buffer injection to fool the stack guard from detecting an exploit DEP, NX - If there are calls to VirtualAlloc(), VirtualProtect() , the attacker could try to redirect code to these functions and disable DEP, NX on the pages that he wants to inject arbitrary code on. ASLR - No clue . How do ASLR and DEP work? | Canary Stack canaries work by modifying every function's prologue and epilogue regions to place and check a value on the stack respectively. As such, if a stack buffer is overwritten during a memory copy operation, the error is noticed before execution returns from the copy function. When this happens, an exception is raised, which is passed back up the exception handler hierarchy until it finally hits the OS's default exception handler. If you can overwrite an existing exception handler structure in the stack, you can make it point to your own code. This is a Structured Exception Handling (SEH) exploit, and it allows you to completely skip the canary check. DEP / NX DEP and NX essentially mark important structures in memory as non-executable, and force hardware-level exceptions if you try to execute those memory regions. This makes normal stack buffer overflows where you set eip to esp+offset and immediately run your shellcode impossible, because the stack is non-executable. Bypassing DEP and NX requires a cool trick called Return-Oriented Programming . ROP essentially involves finding existing snippets of code from the program (called gadgets) and jumping to them, such that you produce a desired outcome. Since the code is part of legitimate executable memory, DEP and NX don't matter. These gadgets are chained together via the stack, which contains your exploit payload. Each entry in the stack corresponds to the address of the next ROP gadget. Each gadget is in the form of instr1; instr2; instr3; ... instrN; ret , so that the ret will jump to the next address on the stack after executing the instructions, thus chaining the gadgets together. Often additional values have to be placed on the stack in order to successfully complete a chain, due to instructions that would otherwise get in the way. The trick is to chain these ROPs together in order to call a memory protection function such as VirtualProtect , which is then used to make the stack executable, so your shellcode can run, via an jmp esp or equivalent gadget. Tools like mona.py can be used to generate these ROP gadget chains, or find ROP gadgets in general. ASLR There are a few ways to bypass ASLR: Direct RET overwrite - Often processes with ASLR will still load non-ASLR modules, allowing you to just run your shellcode via a jmp esp . Partial EIP overwrite - Only overwrite part of EIP, or use a reliable information disclosure in the stack to find what the real EIP should be, then use it to calculate your target. We still need a non-ASLR module for this though. NOP spray - Create a big block of NOPs to increase chance of jump landing on legit memory. Difficult, but possible even when all modules are ASLR-enabled. Won't work if DEP is switched on though. Bruteforce - If you can try an exploit with a vulnerability that doesn't make the program crash, you can bruteforce 256 different target addresses until it works. Recommended reading: Corelan - Chaining DEP with ROP Corelan - Bypassing Stack Cookies, SafeSeh, SEHOP, HW DEP and ASLR ASLR/DEP bypass whitepaper (PDF) | {
"source": [
"https://security.stackexchange.com/questions/20497",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11218/"
]
} |
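To make the DEP-bypass discussion above a bit more concrete, here is a heavily simplified Python sketch of how such a payload is typically packed: padding up to the saved return address, then a chain of gadget addresses, then the shellcode that the VirtualProtect call has just made executable. Every address and offset below is a placeholder (real chains come out of gadget-finding tools such as mona.py), and the chain omits the argument setup a real VirtualProtect call needs.

import struct

def p32(value: int) -> bytes:
    return struct.pack("<I", value)      # little-endian 32-bit, as on 32-bit x86 Windows

OFFSET_TO_RET = 112                      # placeholder: bytes of padding before the saved return address

payload  = b"A" * OFFSET_TO_RET
payload += p32(0x10015442)               # placeholder gadget: POP EAX ; RETN
payload += p32(0x1002A0C4)               # placeholder: IAT entry holding &VirtualProtect
payload += p32(0x100301B3)               # placeholder gadget: MOV EAX,[EAX] ; RETN
payload += p32(0x1000F7E9)               # placeholder gadget: PUSHAD ; RETN (invokes VirtualProtect)
payload += p32(0x10090C83)               # placeholder gadget: JMP ESP (land in the shellcode)
payload += b"\xcc" * 32                  # placeholder shellcode (int3 breakpoints)

print("payload length:", len(payload), "bytes")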
20,541 | I read at crackstation not to use these variants of bcrypt ($1$, $2$, $2a$, $2x$, $3$), but I've used bcrypt ($2a$) in various sensitive implementations recently. Can any security expert clarify why ($2y$, $5$, $6$) are recommended instead of ($1$, $2$, $2a$, $2x$, $3$), which is the original version proposed by Niels Provos, and how they differ? bcrypt is a key derivation function for passwords designed by Niels Provos and David Mazières, based on the Blowfish cipher, and presented at USENIX in 1999. Besides incorporating a salt to protect against rainbow table attacks, bcrypt is an adaptive function: over time, the iteration count can be increased to make it slower, so it remains resistant to brute-force search attacks even with increasing computation power. | 2 - the original bcrypt, which was deprecated because of a security issue long before bcrypt became popular.
2a - the official bcrypt algorithm, and also an insecure implementation of it in crypt_blowfish.
2x - suggested marker for hashes created by the insecure algorithm, for compatibility.
2y - suggested new marker for the fixed crypt_blowfish.
So 2a hashes created by the original algorithm or the Java port are fine, and identical to 2y hashes created by crypt_blowfish; but 2a hashes created by the old crypt_blowfish are insecure.
5 is sha256crypt.
6 is sha512crypt.
The shaXXXcrypt algorithms are inspired by bcrypt but use SHA-2 instead of Blowfish as the underlying hash function, in order to satisfy compliance requirements in the USA. | {
"source": [
"https://security.stackexchange.com/questions/20541",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12661/"
]
} |
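A quick way to check which of the markers discussed above your own library emits, sketched with the Python bcrypt package (an OpenBSD-derived implementation); the cost factor of 12 is just an example.

import bcrypt   # pip install bcrypt

password = b"correct horse battery staple"

hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(hashed.decode())                # e.g. $2b$12$<22-char salt><31-char hash>

marker = hashed.split(b"$")[1]        # b"2b", b"2y", b"2a", ...
print("variant marker:", marker.decode())

# An up-to-date implementation emits $2b$ or $2y$; $2a$ is fine when it comes from
# a correct implementation, but not from the old, buggy crypt_blowfish.
assert bcrypt.checkpw(password, hashed)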
20,591 | I have always wondered why it is that bad to reuse old passwords; it should not be the end of the world if we happen to use an old password that we previously used. After all, I believe most of the time that we change our passwords isn't because of real threats (it will usually be because our internal paranoia). But while it is true that at some point, one of those threats will be real, and we will succeed in saving our accounts, is not likely that the hacker will store the unsuccessful password and try again. But as I always say, companies don't bother thousands of users for no good reason, the pros are clearly outweighing the cons for them . However, I am involuntarily covering important details with my fingers when I take this photo, anyone that have the complete picture mind to tell me what did I miss? (I have really large fingers.) | The first question is: why do some services require passwords to be periodically changed. The answer is "Risk Mitigation". Corporate governance requires IT security policies to be defined in accordance to a risk management plan. One of the question that risk management plans ask is how can one mitigate a risk if it occurs. In the context of passwords, the question is how can we limit the damage of a password leak. If the system administrator is aware of the leak then users can be notified and other steps can be taken. To reduce the damage cause by a password leak of which the administrator is not aware, the lifetime of passwords is limited so that any leaked password can be used only for a short period of time. So services require periodic changing of passwords. The problem is that users really don't like changing their passwords. So what users used to do when forced to change their password was to change it twice - once to some temporary password and then a second time back to the original password. This of course nullifies the purpose of the policy to require passwords to be changed. So the next thing administrators did was store the last two passwords and check that the new password is different than the previous two passwords. The wily users countered that by changing the password three times - two temporary passwords and back to the original password. You might think that users wouldn't go to all that trouble just in order to not change passwords, but this is what actually happened. A administrator friend of mine once compared the hashed passwords in his system after a year and found that almost all passwords were the same - despite the fact that that password policy forced users to change passwords every three months. So the administrators started storing the last 10 passwords. And the users countered by using a fixed password plus a single changing digit at the end for a cycle of 10 passwords. And thus we've reached the situation today where many systems store all previous passwords. Having said all that, the real value of these policies is dubious. Human beings have a limited capacity for remembering passwords and if it's wasted on remembering these rapidly changing passwords it can't be used to keep different passwords on different sites (which is much more important). | {
"source": [
"https://security.stackexchange.com/questions/20591",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13323/"
]
} |
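A minimal sketch of the "remember the last N password hashes" policy described above. A real system would use a dedicated password hash such as bcrypt; PBKDF2 is used here only to keep the sketch self-contained, and all names and values are illustrative.

import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class PasswordHistory:
    def __init__(self, keep: int = 10):
        self.keep = keep
        self.entries = []                          # (salt, digest) pairs, newest last

    def was_used_before(self, password: str) -> bool:
        return any(
            hmac.compare_digest(hash_password(password, salt), digest)
            for salt, digest in self.entries
        )

    def set_password(self, password: str) -> None:
        if self.was_used_before(password):
            raise ValueError("new password matches one of the last %d passwords" % self.keep)
        salt = os.urandom(16)
        self.entries.append((salt, hash_password(password, salt)))
        self.entries = self.entries[-self.keep:]   # keep only the last N hashes

history = PasswordHistory(keep=10)
history.set_password("hunter2!2012Q3")
try:
    history.set_password("hunter2!2012Q3")         # direct reuse is rejected...
except ValueError as err:
    print(err)
history.set_password("hunter2!2012Q4")             # ...but a one-character "rotation" slips through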
20,660 | I've noticed that there are a good number of sites (Google, Twitter, Wikipedia ) that are serving up every page over HTTPS . I can understand given that everyone is concerned over privacy now, but has there been some sort of best practice/impetus for this change? Perhaps it's one of those things that it's just easier to use, because you get certain guarantees out-of-the-box? It's been explained to me that it could be because of privacy concerns that were emphasized when the Firesheep Firefox plugin was released at Toorcon 12, but that was two years ago, and I seem to recall major sites making the switch to HTTPS-exclusive more recently. | HTTPS is the easiest solution to a large number of security problems: Every form of Man In The Middle attack is completely impractical over an HTTPS connection (you'd need to either break SSL or hack into a certificate authority). This includes protecting your users while they're on public wifi. If any page isn't secure, and a user clicks a "login" link (which is presumably HTTPS), an attacker could replace it with a link to an insecure version that steals passwords. The only secure way to do this is either to serve the whole site over HTTPS or make sure users pay attention to the URL bar. Only one of those two options is possible. Since all of your pages are secure, you don't need to think about which pages are secure and which ones aren't (user clicks login -> redirect to HTTPS version -> user logs in -> redirect back to HTTP -> user goes to their profile -> redirect to HTTPS...). Modern browsers give mixed content warnings if an HTTPS page contains insecure content (styles, scripts, images, etc.). Most browsers treat this kind of page as if it's not secure at all (showing the scary red URL box). The easiest way to make sure you never run into this problem is to just serve all of your content over HTTPS. If you're HTTPS-only, you can enable HTTP Strict Transport Security to further reduce vulnerability to MITM attacks (once a user has been to your site once, their browser will always choose the HTTPS version, even if directed to a http:// URL). Honestly, I don't know why anyone doesn't enable HTTPS. It's completely trivial and it can be free . | {
"source": [
"https://security.stackexchange.com/questions/20660",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6433/"
]
} |
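A minimal sketch of the two server-side pieces mentioned above -- redirecting plain HTTP to HTTPS and sending an HSTS header -- using Flask purely for brevity; the max-age value and route are examples, and behind a reverse proxy you would also need to make request.is_secure reflect the original scheme.

from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts_header(response):
    # Tell returning browsers to use HTTPS for the next year, without asking.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "served over HTTPS only"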
20,706 | I am learning the basics of SSH protocol. I am confused between the contents of the following 2 files: ~/.ssh/authorized_keys : Holds a list of authorized public keys for servers. When the client connects to a server, the server authenticates the client by checking its signed public key stored within this file ~/.ssh/known_hosts : Contains DSA host keys of SSH servers accessed by the user. This file is very important for ensuring that the SSH client is connecting the correct SSH server. I am not sure what this means. Please help. | The known_hosts file lets the client authenticate the server, to check that it isn't connecting to an impersonator. The authorized_keys file lets the server authenticate the user. Server authentication One of the first things that happens when the SSH connection is being established is that the server sends its public key to the client, and proves (thanks to public-key cryptography ) to the client that it knows the associated private key. This authenticates the server: if this part of the protocol is successful, the client knows that the server is who it claims it is. The client may check that the server is a known one, and not some rogue server trying to pass off as the right one. SSH provides only a simple mechanism to verify the server's legitimacy: it remembers servers you've already connected to, in the ~/.ssh/known_hosts file on the client machine (there's also a system-wide file /etc/ssh/known_hosts ). The first time you connect to a server, you need to check by some other means that the public key presented by the server is really the public key of the server you wanted to connect to. If you have the public key of the server you're about to connect to, you can add it to ~/.ssh/known_hosts on the client manually. By the way, known_hosts can contain any type of public key supported by the SSH implementation, not just DSA (also RSA and ECDSA). Authenticating the server has to be done before you send any confidential data to it. In particular, if the user authentication involves a password, the password must not be sent to an unauthenticated server. User authentication The server only lets a remote user log in if that user can prove that they have the right to access that account. Depending on the server's configuration and the user's choice, the user may present one of several forms of credentials (the list below is not exhaustive). The user may present the password for the account that he is trying to log into; the server then verifies that the password is correct. The user may present a public key and prove that he possesses the private key associated with that public key. This is exactly the same method that is used to authenticate the server, but now the user is trying to prove its identity and the server is verifying it. The login attempt is accepted if the user proves that he knows the private key and the public key is in the account's authorization list ( ~/.ssh/authorized_keys on the server). Another type of method involves delegating part of the work of authenticating the user to the client machine. This happens in controlled environments such as enterprises, when many machines share the same accounts. The server authenticates the client machine by the same mechanism that is used the other way round, then relies on the client to authenticate the user. | {
"source": [
"https://security.stackexchange.com/questions/20706",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11855/"
]
} |
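A sketch of how the two files come into play from a client's point of view, using the paramiko library; the host name, user name, and key path are placeholders.

import paramiko

client = paramiko.SSHClient()

# Server authentication: check the key the server presents against the entries
# already recorded in ~/.ssh/known_hosts, and reject servers that are not listed.
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.RejectPolicy())

# User authentication: offer a private key; the server accepts the login only if
# the matching public key appears in that account's ~/.ssh/authorized_keys.
client.connect(
    "server.example.com",
    username="alice",
    key_filename="/home/alice/.ssh/id_ed25519",
)

stdin, stdout, stderr = client.exec_command("hostname")
print(stdout.read().decode().strip())
client.close()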
20,803 | How does SSL work? I just realised we don't actually have a definitive answer here, and it's something worth covering. I'd like to see details in terms of: A high level description of the protocol. How the key exchange works. How authenticity, integrity and confidentiality are enforced. What the purpose of having CAs is, and how they issue certificates. Details of any important technologies and standards (e.g. PKCS) that are involved. | General SSL (and its successor, TLS ) is a protocol that operates directly on top of TCP (although there are also implementations for datagram based protocols such as UDP). This way, protocols on higher layers (such as HTTP) can be left unchanged while still providing a secure connection. Underneath the SSL layer, HTTP is identical to HTTPS. When using SSL/TLS correctly, all an attacker can see on the cable is which IP and port you are connected to, roughly how much data you are sending, and what encryption and compression are used. He can also terminate the connection, but both sides will know that the connection has been interrupted by a third party. In typical use, the attacker will also be able to figure out which hostname you're connecting to (but not the rest of the URL): although HTTPS itself does not expose the hostname, your browser will usually need to make a DNS request first to find out what IP address to send the request to. High-level description of the protocol After building a TCP connection, the SSL handshake is started by the client. The client (which can be a browser as well as any other program such as Windows Update or PuTTY) sends a number of specifications: which version of SSL/TLS it is running, what ciphersuites it wants to use, and what compression methods it wants to use. The server identifies the highest SSL/TLS version supported by both it and the client, picks a ciphersuite from one of the client's options (if it supports one), and optionally picks a compression method. After this the basic setup is done, the server sends its certificate. This certificate must be trusted by either the client itself or a party that the client trusts. For example, if the client trusts GeoTrust, then the client can trust the certificate from Google.com because GeoTrust cryptographically signed Google's certificate. Having verified the certificate and being certain this server really is who he claims to be (and not a man in the middle), a key is exchanged. This can be a public key, a "PreMasterSecret" or simply nothing, depending on the chosen ciphersuite. Both the server and the client can now compute the key for the symmetric encryption whynot PKE? . The client tells the server that from now on, all communication will be encrypted, and sends an encrypted and authenticated message to the server. The server verifies that the MAC (used for authentication) is correct and that the message can be correctly decrypted. It then returns a message, which the client verifies as well. The handshake is now finished, and the two hosts can communicate securely. For more info, see technet.microsoft.com/en-us/library/cc785811 and en.wikipedia.org/wiki/Secure_Sockets_Layer . To close the connection, a close_notify 'alert' is used. If an attacker tries to terminate the connection by finishing the TCP connection (injecting a FIN packet), both sides will know the connection was improperly terminated. The connection cannot be compromised by this though, merely interrupted. Some more details Why can you trust Google.com by trusting GeoTrust? 
A website wants to communicate with you securely. In order to prove its identity and make sure that it is not an attacker, you must have the server's public key . However, you can hardly store all keys from all websites on earth, the database would be huge and updates would have to run every hour! The solution to this is Certificate Authorities, or CA for short. When you installed your operating system or browser, a list of trusted CAs probably came with it. This list can be modified at will; you can remove whom you don't trust, add others, or even make your own CA (though you will be the only one trusting this CA, so it's not much use for public website). In this CA list, the CA's public key is also stored. When Google's server sends you its certificate, it also mentions it is signed by GeoTrust. If you trust GeoTrust, you can verify (using GeoTrust's public key) that GeoTrust really did sign the server's certificate. To sign a certificate yourself, you need the private key, which is only known to GeoTrust. This way an attacker cannot sign a certificate himself and incorrectly claim to be Google.com. When the certificate has been modified by even one bit, the sign will be incorrect and the client will reject it. So if I know the public key, the server can prove its identity? Yes. Typically, the public key encrypts and the private key decrypts. Encrypt a message with the server's public key, send it, and if the server can repeat back the original message, it just proved that it got the private key without revealing the key. This is why it is so important to be able to trust the public key: anyone can generate a private/public key pair, also an attacker. You don't want to end up using the public key of an attacker! If one of the CAs that you trust is compromised, an attacker can use the stolen private key to sign a certificate for any website they like. When the attacker can send a forged certificate to your client, signed by himself with the private key from a CA that you trust, your client doesn't know that the public key is a forged one, signed with a stolen private key. But a CA can make me trust any server they want! Yes, and that is where the trust comes in. You have to trust the CA not to make certificates as they please. When organizations like Microsoft, Apple, and Mozilla trust a CA though, the CA must have audits; another organization checks on them periodically to make sure everything is still running according to the rules. Issuing a certificate is done if, and only if, the registrant can prove they own the domain that the certificate is issued for. What is this MAC for message authentication? Every message is signed with a so-called Message Authentication Code , or MAC for short. If we agree on a key and hashing cipher, you can verify that my message comes from me, and I can verify that your message comes from you. For example with the key "correct horse battery staple" and the message "example", I can compute the MAC "58393". When I send this message with the MAC to you (you already know the key), you can perform the same computation and match up the computed MAC with the MAC that I sent. An attacker can modify the message but does not know the key. He cannot compute the correct MAC, and you will know the message is not authentic. By including a sequence number when computing the MAC, you can eliminate replay attacks . SSL does this. You said the client sends a key, which is then used to setup symmetric encryption. What prevents an attacker from using it? 
The server's public key does. Since we have verified that the public key really belongs to the server and no one else, we can encrypt the key using the public key. When the server receives this, he can decrypt it with the private key. When anyone else receives it, they cannot decrypt it. This is also why key size matters: The larger the public and private key, the harder it is to crack the key that the client sends to the server. How to crack SSL In summary : Try if the user ignores certificate warnings; The application may load data from an unencrypted channel (e.g. HTTP), which can be tampered with; An unprotected login page that submits to HTTPS may be modified so that it submits to HTTP; Unpatched applications may be vulnerable for exploits like BEAST and CRIME; Resort to other methods such as a physical attack; Exploit side channels like message length and the time taken to form the message; Wait for quantum attacks . See also: A scheme with many attack vectors against SSL by Ivan Ristic (png) In detail: There is no simple and straight-forward way; SSL is secure when done correctly. An attacker can try if the user ignores certificate warnings though, which would break the security instantly. When a user does this, the attacker doesn't need a private key from a CA to forge a certificate, he merely has to send a certificate of his own. Another way would be by a flaw in the application (server- or client-side). An easy example is in websites: if one of the resources used by the website (such as an image or a script) is loaded over HTTP, the confidentiality cannot be guaranteed anymore. Even though browsers do not send the HTTP Referer header when requesting non-secure resources from a secure page ( source ), it is still possible for someone eavesdropping on traffic to guess where you're visiting from; for example, if they know images X, Y, and Z are used on one page, they can guess you are visiting that page when they see your browser request those three images at once. Additionally, when loading Javascript, the entire page can be compromised. An attacker can execute any script on the page, modifying for example to whom the bank transaction will go. When this happens (a resource being loaded over HTTP), the browser gives a mixed-content warning: Chrome , Firefox , Internet Explorer 9 Another trick for HTTP is when the login page is not secured, and it submits to an https page. "Great," the developer probably thought, "now I save server load and the password is still sent encrypted!" The problem is sslstrip , a tool that modifies the insecure login page so that it submits somewhere so that the attacker can read it. There have also been various attacks in the past few years, such as the TLS renegotiation vulnerability , sslsniff , BEAST , and very recently, CRIME . All common browsers are protected against all of these attacks though, so these vulnerabilities are no risk if you are running an up-to-date browser. Last but not least, you can resort to other methods to obtain the info that SSL denies you to obtain. If you can already see and tamper with the user's connection, it might not be that hard to replace one of his/her .exe downloads with a keylogger, or simply to physically attack that person. Cryptography may be rather secure, but humans and human error is still a weak factor. According to this paper by Verizon , 10% of the data breaches involved physical attacks (see page 3), so it's certainly something to keep in mind. | {
"source": [
"https://security.stackexchange.com/questions/20803",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5400/"
]
} |
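A concrete version of the MAC illustration above (the "58393" in the answer is just a made-up number), using HMAC-SHA256 with a sequence number mixed in, which is roughly what a TLS record MAC does; real TLS derives separate MAC keys during the handshake and increasingly uses AEAD ciphers instead.

import hashlib
import hmac

key = b"correct horse battery staple"          # the shared key from the example

def tag(sequence_number: int, message: bytes) -> bytes:
    # Mix a sequence number into the MAC so that replayed records are detected.
    data = sequence_number.to_bytes(8, "big") + message
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(sequence_number: int, message: bytes, received: bytes) -> bool:
    return hmac.compare_digest(tag(sequence_number, message), received)

t = tag(0, b"example")
print(t.hex())

print(verify(0, b"example", t))    # True: authentic message
print(verify(0, b"examp1e", t))    # False: message was tampered with
print(verify(1, b"example", t))    # False: replayed under the wrong sequence number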
20,806 | As Mikko Hypponen highlighted in a recent tweet , anti-Virus companies collect and store both samples of malicious software and samples of legitimate software: This got me to thinking, what differentiates that from software piracy? If I download the torrent for Adobe Photoshop CS then I am pirating that software - deemed by many countries to be illegal. If anti-virus software uploads that same copy of Adobe Photoshop CS to their servers for analysis then how is that any different? It's possible that this is a moot point - software vendors understand and allow this particular type of 'piracy' or in certain countries this comes under "fair use". But what about malware authors? A piece of malware is just a piece of software and the author of that software is afforded the same rights as any software author. Could a malware author take the anti-virus company to court for 'pirating' their software? This doesn't just apply to software like the BlackHole Kit but what about state sponsored malware like Stuxnet ? So this is my question. Are anti-virus companies afforded special protection from copyright laws and if not could these be used against them, especially by corporate or state malware authors? | First, I would seriously question whether a malware author would ever bring suit against an antivirus vendor, since it would require admission of a serious crime. But let's suppose that the malware author has already been charged for creation and use of the malware and therefore has nothing to lose by fully admitting authorship. Copying software for malware analysis seems like a textbook case of fair use (under U.S. law, anyway). Of course, U.S. fair use is a defense that is usable only once you already engaged in a lawsuit (and its parameters are notoriously vague), but let's indulge in a little armchair speculation about how such a lawsuit might resolve. To take the fair use criteria one by one: Purpose and character of use: The use of the copy is legally transformative , which means that it creates something new, instead of merely attempting to recreate the original. Here, the analysts are producing a malware assessment based on the original software. They're not creating a copy just to have an extra copy; they use the copy to produce something novel. This factor heavily favors the analysts. Nature of the copied work : The piece of malware is a published, creative work that rightfully enjoys copyright protection. This factor favors the malware authors. Amount and substantiality: The analysts use the whole software in their analyses. This factor favors the malware authors. Effect upon work's value: Virtually none, which favors the analysts. In fact, the work has little legitimate market, since its primary use is illegal. (While it may be the case that AV vendors reduce the value of malware by building defenses against it, this is not the same as harm caused by creating a substitute work . Wikipedia sumarizes it aptly: "Courts recognize that certain kinds of market harm do not oppose fair use, such as when a parody or negative review impairs the market of the original work." ) While factors #2 and #3 are in favor of the malware authors, the transformative use of the malware lends tremendous legal weight to the fair use argument in favor of the analysts. Only a judge can make a final ruling on fair use, but I suspect that a reasonable judge would rule in favor of copying malware for analytic purposes. 
N.B.: This answer considers the legality of copying from a limited, copyright-only perspective. There are other statutes beyond copyright (e.g., ACTA, DMCA) that may be violated when copying malware or legitimate software without permission from the copyright holder. Even if a use is protected by fair use, the fair use defense protects against copyright infringement only, not other violations that may also occur during (or be necessary for) the act of a fair use. (For example, you may want to include a few seconds of a movie in a video report for your cinematography class, but your copy of the movie only plays in a proprietary player that does not allow exporting snippets of the film. If you download a tool for circumventing the "no exporting of snippets" restriction of your player, then you have violated the DMCA, even though your ultimate goal was probably fair use.) In short : the analysts' copyright infringement is probably legally defensible under fair use, but analysts may still be in violation of other statutes that are separate from traditional copyright. | {
"source": [
"https://security.stackexchange.com/questions/20806",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1363/"
]
} |