source_id | question | response | metadata |
---|---|---|---|
94,390 | For a Diffie–Hellman (D-H) key exchange (TLS) the server generates a prime p and a generator g , which is a primitive root modulo p. When setting up a webserver with SSL/TLS (e.g. nginx) one can use a directive ssl_dhparam dhparam4096.pem The dhparam4096.pem file can be generated using openssl dhparam -out dhparam4096.pem 4096 What exactly is the purpose of these D-H Parameters? Can they be public? (i.e. can I publish my dhparam4096.pem file?) Here are the contents of my dhparam4096.pem file: That seems to be a hexadecimal representation of a 4096bit integer, is that correct? -----BEGIN DH PARAMETERS-----
MIICCAKCAgEAsb8mBiOtYJZ3XR7f/rCXQwpAAnUPXf7l9pwjYxxy30A7rIzMTrsz
bXuhhEmzroJcDqKbu2nIzOBNO6HuyQ3n9x/ZbY5kiEt6q7UrB5jC9LwyPASZhd/F
6xLC7yqFs9sdCaXzuyMS4Ep5sPH6lOvftdsuGZZF9HriTepv/lpD1cPtab+3vHZX
IELVc2WBwVzvNRGd/SQB8RJ+NNF8IseCV/pb/tb67O1p7sn+JsD5xgNB7Lte/XjD
QBXv86aNuI2Z6zAuCiQsQ4rJuWcdnyAz0+zW9DRHz0osB1+IfHYcf4tNmvMKbC8E
u/vI+Q2WsMXkbTyhWibV2zH8cXdfsj5xpIgtbxm4G1ELGFgqyX9LD0IddXE7Md86
qwoKSTBNOyCIEZwwNfKDXY0b7zzObv7b3//J7gM323bAcm9g3uVaYBxF7ITd/jGm
AxpnF55mfhAOYemPZtNinnPAdvqf6BhZe29wfVC1yCIhg7ec9spRaFn2GgW0eL3d
q/+Ux8DJBtzKE10YyLa7bh1Dhml/xtA7rpqLL4+jg5c6lLEvSGmAZtm879NYS0za
33/2LN0/KB4z46Ro5hwqq3UIIe8oDsxdlNGb0mb9F0lKw5//J24PK/t11qMt8yuG
oKUE7TkDfwXlEBxd/ynW2/kLIjhG1JG55Vz8GWet8+ZGzfl/VQeUb9MCAQI=
-----END DH PARAMETERS----- | What exactly is the purpose of these DH Parameters? These parameters define how OpenSSL performs the Diffie-Hellman (DH) key exchange. As you correctly stated, they include a field prime p and a generator g . The point of being able to customize these parameters is to allow everyone to use his / her own parameters. This can be used to prevent being affected by the Logjam attack (which doesn't really apply to 4096-bit field primes). So what do they define? A Diffie-Hellman key exchange operates as follows (for TLS 1.2 and before 1 ): The server Bob uses these parameters to calculate B=g^b mod p . He sends (B,g,p) to the client Alice who computes A=g^a mod p on her own along with K=B^a mod p . She sends A to Bob and he computes K=A^b mod p . As A^b=g^(a*b)=g^(b*a)=B^a mod p holds, both parties will agree on a shared key. The parameters p and g define the security of this key exchange. A larger p will make finding the shared secret K a lot harder, defending against passive attackers. And why do you have to pre-compute them? Finding the prime p means finding a value for p for which p=2q+1 holds, with q being a prime. p is then called a safe prime . Finding such primes is really computationally intensive and can't be afforded on each connection, so they're pre-computed. Can they be public? Yes , there is no risk in publishing them. In fact, they're sent out as part of every key exchange that involves Diffie-Hellman (DH). There are even a few such parameter sets standardized, for example in RFC 5114 . The only possible problem with publishing them is that a powerful attacker may be interested in performing some pre-computations on them, enabling him to perform the Logjam attack . However, as your parameters use a 4096-bit field prime p, this isn't a risk. To explain why publishing them isn't a risk, you may want to take a look at the above key-exchange description and note that the parameters are only used as a base for the computations, but all the secrets ( a,b ) are completely independent of g,p . 1: For TLS 1.3, the client first guesses the parameters of the server from a standardized set. Then it sends an A for each of these parameter sets to the server, who then either responds with a B of his own along with the choice of parameter set, or responds with a list of the parameters actually supported - which may include the custom-generated ones this Q&A is all about. (A toy numeric sketch of this exchange follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/94390",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60931/"
]
} |
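The modular arithmetic described in the answer above is easy to reproduce. Below is a toy sketch in Python (my own illustration, not part of the original answer): the values of p and g are placeholder textbook numbers, whereas a real exchange would use the 4096-bit prime from dhparam4096.pem and exponents drawn from a CSPRNG.

```python
# Toy walk-through of the DH steps from the answer above (illustration only).
import secrets

p, g = 23, 5                      # placeholder public parameters; dhparam holds real ones

b = secrets.randbelow(p - 2) + 1  # Bob's (server's) secret exponent
B = pow(g, b, p)                  # server sends B (together with g and p) to the client

a = secrets.randbelow(p - 2) + 1  # Alice's (client's) secret exponent
A = pow(g, a, p)                  # client sends A back to the server

K_client = pow(B, a, p)           # K = B^a mod p
K_server = pow(A, b, p)           # K = A^b mod p
assert K_client == K_server       # both ends derive the same shared secret K
```

An eavesdropper sees p, g, A and B, but recovering K requires solving a discrete logarithm, which is what a large p protects against.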
94,432 | With the arguments expressed in this answer , there is a few seconds' delay between when a user enters an incorrect password and when he/she actually learns that the password was incorrect. This security solution is implemented in an operating system (here an elementary OS ) and in console commands like sudo etc. Should I implement the same mechanism in my website or web service? Or may I safely assume that the typical delays in exchanging information between browser and server will be enough to block brute-force attacks (it takes more than one second even on a local system between pushing the Login button on my site and an incorrect-password message being returned; this seems long enough AFAIK). There is a similar question on this matter. However, both answers ( this and this ) do not really answer my question or are even a bit off-topic. I'm not asking about suspension or temporary locking of a user account after each failed login (and thus arguments about a lockout preventing the real user from logging in don't apply). I'm talking only about a possible delay before displaying the login form again. | I assume that your intention with the failure delay is to prevent brute force attacks: if an attacker is trying to guess a user's password, she will first fail many times; if we can make those failures take a substantial amount of time longer, then it will make the attack an order of magnitude harder, and thus unlikely to succeed (in a reasonable time frame). However. This assumes that the attacker waits patiently between login attempts. And, as we all know, cyberhackers are a very polite and patient folk. That said, some attackers may choose NOT to wait, and simply send many requests in parallel. If a login attempt doesn't receive a response immediately, the attacker can interpret that as a failed attempt, kill that request, and move on to the next one. An extra benefit of this scheme is that it would be that much easier for an attacker to create unreasonable load on the server (just send a lot of failing login requests, each one will occupy a thread for several seconds...), and possibly even succeed in DoSing the server. In fact, the main problem with this solution is that it precisely does NOT prevent many parallel requests. If an attacker is attempting an online brute force attack, she is not going to sit in front of the keyboard typing in many passwords, one after the other, as fast as Hugh Jackman possibly can - if that were the case, you would only be at risk if the user has a dead simple password. In reality, she will have a script or automated tool send (almost) as many requests as the server can handle, and then keep going. The risk is not that someone will try 30 different passwords a minute, it's 1000 passwords a minute, or 10,000. The only way to prevent that is user throttling / account lockout / incremental suspension - call it what you will, this is all basically the same idea of allowing X number of login attempts per account within a given timeframe. So even though this doesn't cut down to "a single login attempt at a time", it does get close enough. (A minimal per-account throttling sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/94432",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11996/"
]
} |
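The answer above lands on per-account throttling (X failed attempts per account per time window) as the defence that actually works against parallel guessing. A minimal in-memory sketch of that idea is below; the names and limits are my own assumptions, and a real deployment would keep the counters in a shared store (database or Redis) rather than process memory.

```python
# Minimal per-account throttling sketch (illustrative names and limits).
import time
from collections import defaultdict

WINDOW_SECONDS = 300           # look at the last 5 minutes
MAX_FAILURES = 10              # allow at most 10 failed logins per account in that window

_failures = defaultdict(list)  # username -> timestamps of recent failed attempts

def login_allowed(username: str) -> bool:
    now = time.time()
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent
    return len(recent) < MAX_FAILURES

def record_failure(username: str) -> None:
    _failures[username].append(time.time())
```

Unlike a fixed response delay, this bounds the total number of guesses per account no matter how many requests the attacker sends in parallel.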
94,502 | I realize something like this would be the holy grail of communications security and have never heard anything to suggest it has been done. I'm just wondering. A lot of mathematicians and computer scientists spend time doing proofs for theoretical situations. Has anyone done a proof to rule out the possibility of ever doing a key-exchange (or some other encryption setup) over an open or already compromised line? To be clear, I mean: A and B are communicating while C has the potential to be listening (ideally C is not listening in real time and is only logging A&B for action later). Is it impossible for A and B to ever exchange information over the compromised line to set up a new encrypted line that C can't immediately crack? Last Clarification : I'm realizing public key may technically satisfy the problem by not transferring the whole method of encryption over the line. Is there anything else you can do, even if only theoretical? This feels like more of a loophole than directly solving things. | If the attacker is only passively listening to the connection then a Diffie-Hellman key exchange can be done to create a common key known only to the communication peers. But, if the attacker can not only listen to the connection but also actively modify the transferred data, then the attacker might mount a man-in-the-middle attack and claim to be the expected communication partner for both A and B. This can only be prevented if A can identify B before starting the encrypted communication and thus knows that it exchanged the key with the expected partner. For this it needs a secure way to verify the identity even if the line is compromised. This can be done with public-key cryptography. But of course it needs some kind of prior knowledge about the expected identity of B, i.e. either A knows B already (direct trust) or it knows somebody who knows B, etc. (trust chain). You'll find an implementation of all of this with SSL/TLS and the associated PKI . This is used with https in browsers, and the necessary trust anchors for building the trust chain are the public certificate authorities which are either known by the operating system or by the browser. For more information see How does SSL/TLS work? . | {
"source": [
"https://security.stackexchange.com/questions/94502",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81448/"
]
} |
94,529 | Not sure if I should report it here, but within my website I collect each request in a DB, and from time to time view these records. Among the data collected are user agent, requested url, referrer (i.e. previous) url, time, and others. Today I found the following user agent of a bot (which seems to hide this): (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko IP: 181.114.49.183 Most requests come at 1-2 second intervals, and the most interesting part is which urls it wants my server to check: | /wp/wp-login.php |
| /blog/wp-login.php |
| /site/wp-login.php |
| /cms/wp-login.php |
| /section/wp-login.php |
| /wp-admin/wp-login.php |
| /wordpress/wp-login.php | (from my DB) The referrer in each case is null (meaning there was no referrer; it expected to find a webpage at this uri). Why might a bot want to check whether there is a WordPress installed on a website, and what its url is (my website doesn't use WordPress, or any other CMS)? To me, it's like searching for a possible way to get into an admin panel of the website. And I do have an admin panel which I created myself. Should I do anything about it? | Every computer with a public IP gets this kind of attention permanently. There's nothing you can do to stop it (I once tried to complain to the provider owning the offender's IP, never got a reply and gave up). What you can do is to make sure you're well protected against a possible attack (this bot seemed to look for WordPress, but there are others looking for Apache, SSH, you name it). A few rules: Expose as few services as possible. If you don't need SSH, FTP, etc., disable it. For the services you expose (the web server in your case) make sure you install security patches regularly. If your service has some form of authentication (like WordPress admin page), be sure to pick a strong random password. Online bots usually check for default passwords and extremely weak combinations like root/r00t, but I wouldn't risk using any dictionary word, or anything shorter than 12-16 characters. If you want to stop wasting resources on people who try to guess your password (assuming you have a good password) you can install Fail2Ban which bans an IP address for 10 minutes after 6 failed login attempts, rendering password guessing scripts impractically slow. Of course, you can configure the ban delay and the number of attempts to your liking. For services which are intended for a specific group of users (you, your company etc), you can also use other techniques like port knocking and limiting access to IP ranges you are likely to use to access your services (your country only, your ISP, your work ISP etc.). (A small log-scanning sketch of this banning idea follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/94529",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60289/"
]
} |
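To make the Fail2Ban idea above concrete, here is a rough Python sketch that scans an access log for requests to paths that should not exist on this site and flags IPs that hit them repeatedly. The log format, path list and threshold are assumptions for illustration; in practice you would simply deploy Fail2Ban (or equivalent) rather than a hand-rolled script.

```python
# Hypothetical log-scan sketch: count probe requests per IP and flag repeat offenders.
from collections import Counter

PROBE_PATHS = ("/wp-login.php", "/wp-admin", "/xmlrpc.php")   # assumed probe signatures
THRESHOLD = 5                                                 # assumed ban threshold

def suspicious_ips(log_lines):
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        # Assumes common/combined log format: client IP first, request path seventh.
        if len(parts) > 6 and any(p in parts[6] for p in PROBE_PATHS):
            hits[parts[0]] += 1
    return [ip for ip, count in hits.items() if count >= THRESHOLD]

# Example: suspicious_ips(open("/var/log/nginx/access.log"))
```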
94,783 | When visiting some websites like http://www.monip.org or http://ip-api.com , I get the following result: Your current IP Address
- IP: 197.158.x.x
- Internal IP: 192.168.x.x I understand that I can see my public IP address (197.158.x.x). What I can't figure out is that how come my internal IP address is visible through the Internet ? Those websites do not seem to use a Flash plugin, Java applet or other scripts. My ISP is performing a NAT of my internal IP address in order to access the Internet: 3G Wireless Modem [192.168.x.x] -------- ISP [NAT to 197.158.x.x] ------- Internet So how is it possible for a website to see my internal IP address? | The most likely source of this information is your browser's WebRTC implementation. You can see this in the source code of ip-api.com. From https://github.com/diafygi/webrtc-ips , which also provides a demo of this technique: Firefox and Chrome have implemented WebRTC that allow requests to STUN servers be made that will return the local and public IP addresses for the user. These request results are available to javascript, so you can now obtain a users local and public IP addresses in javascript. It was recently noted that the New York Times was using this technique to help distinguish between real visitors and bots (i.e. if the WebRTC API is available and returns valid info, it's likely a real browser). There are a couple of Chrome extensions that purport to block this API, but they don't seem to be effective at the moment. Possibly this is because there aren't yet the hooks in the browser, as that GitHub README alludes to: Additionally, these STUN requests are made outside of the normal XMLHttpRequest procedure, so they are not visible in the developer console or able to be blocked by plugins such as AdBlockPlus or Ghostery. | {
"source": [
"https://security.stackexchange.com/questions/94783",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81698/"
]
} |
94,793 | How can we identify which root CA client used when there are multiple root CAs on the server? We can compare the public keys of the client certificate and the root certificate but if we have many root certificates this is an unnecessary overhead. Is there any way to find out from the client certificate (x.509) which root CA (alias) is used? Edited to add this clarification: if the intermediate certificates in the certificate chain are not available/accessible and if the same CA issued all the multiple root certificates(e.g. different tenants), is there any other approach to match the incoming client certificate to the corresponding root certificate on the server? | The most likely source of this information is your browser's WebRTC implementation. You can see this in the source code of ip-api.com. From https://github.com/diafygi/webrtc-ips , which also provides a demo of this technique: Firefox and Chrome have implemented WebRTC that allow requests to STUN servers be made that will return the local and public IP addresses for the user. These request results are available to javascript, so you can now obtain a users local and public IP addresses in javascript. It was recently noted that the New York Times was using this technique to help distinguish between real visitors and bots (i.e. if the WebRTC API is available and returns valid info, it's likely a real browser). There are a couple of Chrome extensions that purport to block this API, but they don't seem to be effective at the moment. Possibly this is because there aren't yet the hooks in the browser, as that GitHub README alludes to: Additionally, these STUN requests are made outside of the normal XMLHttpRequest procedure, so they are not visible in the developer console or able to be blocked by plugins such as AdBlockPlus or Ghostery. | {
"source": [
"https://security.stackexchange.com/questions/94793",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81709/"
]
} |
94,806 | Let's say you have a server at shop.example.com and you are requesting a certificate from some trusted CA like Comodo (they say they'll issue it within few minutes online). First of all, how can they ensure that you really are an admin of shop.example.com ? Perhaps it was an attacker who requested this certificate to be able to perform a man-in-the-middle attack. And secondly, let's say it really was you, and you got that certificate. What prevents the same attacker requesting a certificate for the same domain from some other CA? Is there a common storage of all requests so that CAs can detect such duplicates easily? Or is there some other means of preventing it? Can someone also easily get a certificate for shop.example.net and hope for some mistyping users? | A CA is supposed to make sure that the certificates it issues contain only truthful information. How they do that is their business; serious CA are supposed to publish detailed "Certification Practice Statements" that document their procedures. In practice, when you want to buy a certificate for a www.myshop.com domain, the CA "challenges" you, so that you demonstrate that you indeed have control of that domain. Some classic methods include: The CA sends you a piece of random data, to include as a host name in the domain. You thus demonstrate your control of the DNS related to the domain. The CA sends you a piece of random data that you must put for download (over plain HTTP) from the www.myshop.com site. You thus show that you control the main server that corresponds to the domain. The CA sends that piece of random data over an email sent to [email protected] . You return back that data to them, thereby demonstrating that you can read the emails sent to the administrator of the domain. None of these mechanisms is really strong; and, moreover, it suffices for an attacker to succeed in fooling one CA among the hundred or so that are trusted by default by usual Web browsers. Nevertheless, fraudulent certificates appear to be a rarity (say, once or twice per year, worldwide), so one has to admit that these authentication mechanisms, however flimsy they look, must be good enough for the job. As for mistyped domain, it is a very classic method which is commonly handled by either buying those mistyped domains, or unleashing your lawyers on whoever tries to buy a domain that seems to be "too close" to your own domain. | {
"source": [
"https://security.stackexchange.com/questions/94806",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36623/"
]
} |
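As a rough illustration of the second challenge type described in the answer above (random data fetched over plain HTTP from the domain being validated), here is a Python sketch of what the CA-side check could look like. The URL path and function names are assumptions; real-world validation such as ACME's HTTP-01 adds well-known paths, key authorisations and retries from multiple vantage points.

```python
# Simplified domain-control check: give the applicant a random token, then fetch
# it back from the domain over plain HTTP and compare. (Illustrative only.)
import secrets
import urllib.request

def issue_challenge() -> str:
    return secrets.token_urlsafe(32)          # random data handed to the applicant

def check_challenge(domain: str, token: str) -> bool:
    url = f"http://{domain}/.well-known/ca-challenge/{token}"   # assumed path
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode().strip() == token
    except OSError:
        return False
```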
94,993 | I have a Cordova app that transforms some images to base64. This violates CSP with this message: Refused to load the image
'data:image/svg+xml;charset=US-ASCII,%3C%3Fxml%20version%3D%221.0%22%20encod…E%3C%2Fg%...%3C%2Fsvg%3E'
because it violates the following Content Security Policy directive:
"default-src 'self'". Note that 'img-src' was not explicitly set, so
'default-src' is used as a fallback. According to this answer , I can simply add data: to my Content-Security-Policy meta, but I would very much like to know if this is safe. data: does not specify an origin and therefore I fear it's unsafe. | This is a great question, and I commend you for taking the time to think about this from a security perspective rather than knee-jerk implementing the solution from the link you sent. Yes, as you have feared, use of data: in a CSP directive is unsafe, since this allows for XSS vulnerabilities to be opened up, as data: can handle any URI. This is spelled out in Mozilla's CSP Documentation and in this W3C Working Draft . There is no way in CSP to specify "allow only SVG images to be embedded via data URIs, but not any other type of URI". CSP just lets you specify data: . As a best practice I would endeavor to address the root issue about the images being provided as base64 and see if that can be done another way so as not to require modification of the CSP directive. | {
"source": [
"https://security.stackexchange.com/questions/94993",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/74787/"
]
} |
95,029 | At some point I told a friend that it's dangerous to reveal your birth date (kind of like your social security number or your mother's maiden name), because it's a crucial piece of information for identity theft. However, I'm not sure what exactly an identity thief could do if the only non-public information he had about me was my birth date. (I'd consider my name, and probably my address, to be public here.) How and why exactly is revealing your birth date itself dangerous? Note that I'm not asking why knowing it in combination with other personal information (e.g. SSN) can be dangerous. I'm asking why even knowing it in isolation is dangerous.
What kinds of things could an ID thief do with just my birth date? Can he, for example, open a bank account? Recover a bank password? Open a credit card? Take a car loan? etc. (I'm assuming the country is the United States of America.) | The issue is not the birthday itself, the issue is that unfortunately a lot of companies and websites are still using it for verification purposes. This is certainly bad practice and lots of companies are changing their policies just for that reason. Banks sometimes used it for retrieving a password, but in recent years they too have changed their procedures significantly (depending on the bank you use). So, to answer your question of whether it is safe to reveal your birth date: preferably you only reveal it when it is really necessary (e.g. ask each time whether they really need it); the information is not considered secret, and you will be required to disclose it on occasion for legitimate purposes. As with any personal information, the best thing to do is to disclose it on as few occasions as possible. On the other hand, if identity theft does happen and the culprit turns out to have been able to retrieve information or perform an action by just giving your birthday (which is easy to find), then most likely the company will be liable for not adequately protecting your personal information or for being too negligent in their verification process. Of course this will mean you will have to deal with it (which is a nuisance and very time-consuming). | {
"source": [
"https://security.stackexchange.com/questions/95029",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5613/"
]
} |
95,124 | I came across an old (>3 years) account information list which had been leaked to the web. The list included thousands (>10,000) of account details from a service or services. Apparently the event was a small-scale news item back in the day, so there's not too much to do now, even if the one page I found were removed from the web right now. However, the list included my account name and hashed password, which could be easily cracked to plain text. I haven't used the password in years, but I still use a rather similar one on some websites. I also use the account name every now and then. I haven't come across anything suspicious on my accounts, as far as I remember. My question is: what should I do to prevent any possible upcoming harm? I'm unsure of what I should do. I guess I should at least think about these things: Other accounts using the user name and/or a (similar) password; Search engines - is there any way to globally remove the page/pages appearing in searches? | First of all, you should make sure that you don't use that password, or a derivative, anywhere that you care about . This is most important if you use the same username or email address, but still something that you should do for completely unrelated accounts. If a password is crackable, it may have been incorporated into wordlists already. And even if not, hey - it was crackable in the first place! Second, it seems prudent to go over the rest of the account information in that dump, and see if there's anything that you consider sensitive or private. What you do if you find such information will depend on circumstance, but I don't see why you should stop at just checking the password. As for trying to get that information removed from the web? You could try, but it's really, really hard to purge information once it's been posted. Especially since password dumps are likely to be shared on all sorts of places which don't respect polite requests. If your password is the most important thing in the dump, I'd just assume that the password is compromised, and instead focus on limiting the damage that can be done. | {
"source": [
"https://security.stackexchange.com/questions/95124",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81980/"
]
} |
95,132 | I was just informed of the StageFright vulnerability in Android devices. A specially crafted MMS message can gain access to data on the phone; so presumably it's a buffer overflow with subsequent privilege escalation. Details have not yet been disclosed, but the practical question is: how can common users defend against an attack using this vulnerability? It seems that not opening MMS messages would be the most important part. Are there other steps that end users should take to protect themselves from this vulnerability? | You should disable the automated downloading of media files through SMS/MMS; there are multiple messaging apps that use this. Depending on which one you use, disable this in that app's settings. For Google Messenger: More can be found here . Besides that, don't open any messages containing multimedia files from someone you don't know or trust, as you can still download the file manually and trigger it that way. Note that the SMS/MMS part is not the real threat here, it's just a way of getting malicious media files onto your phone and getting them to execute without user input. The actual threat is in the way media files are being processed. So receiving & viewing a media file through other channels will be just as dangerous. | {
"source": [
"https://security.stackexchange.com/questions/95132",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5405/"
]
} |
95,165 | Digital Trends describes the Stagefright Vulnerability thus: The exploit in question happens when a hacker sends a MMS message containing a video that includes malware code. What’s most alarming about it is that the victim doesn’t even have to open the message or watch the video in order to activate it. The built-in Hangouts app automatically processes videos and pictures from MMS messages in order to have them ready in the phone’s Gallery app. How is it that a video file that hasn't been 'played' yet can be used to execute malicious code? Would anyone be able to give a more low-level explanation of how this is possible? P.S.: To protect yourself just make sure to disable all the 'Auto-Download' features for MMS in your messaging apps (Hangout, Messaging, etc.). | The details will be released on the 5th of august. However, on the Cyanogenmod github repository there are several interesting details that appear to be related : it appears that certain fields in 3GPP video metadata are vulnerable to buffer overflow attacks. In short, a 3GPP video can be given a string of metadata that, at first, exceeds a certain length, and in the end includes machine code that lands in memory that is off-limits to the application. Update: Cyanogenmod has released a patch for this vulnerability. | {
"source": [
"https://security.stackexchange.com/questions/95165",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/79164/"
]
} |
95,178 | I have a fresh install of Arch Linux on a RaspberryPi model B. I'm setting up OpenVPN and using easy-rsa with OpenSSL 1.0.2d to generate initial keys and certificates. All went fine until I ran ./build-dh (script here ). It was 24 hours later when I wrote this . I have previously configured OpenVPN on other devices and the same RaspberryPi, but under Raspbian. And I don't remember this command ever taking so long. Last time I used a 2048-bit key and it took about an hour. Now I'm trying with a 4096-bit key and it's been more than a day. In fact it's been another 12 hours since I reinitiated all my settings, enabled the built-in hardware random number generator and tried again. But it's still ongoing. cat /proc/sys/kernel/random/entropy_avail returns values in the range of 3000-3100. Does anyone have any previous experience with this? How do I check that it's not just stuck in a loop? | If openssl uses a lot of CPU then it is not blocked waiting for "entropy". OpenSSL is actually sane in that respect, and uses a cryptographically secure PRNG to extend an initial seed into as many bits as it needs. When you use dhparam , OpenSSL not only generates DH parameters; it also wants to assert its social status by taking care to use for the modulus a so-called "strong prime", which is useless for security but requires an awful lot more computational effort. A "strong prime" is a prime p such that ( p -1)/2 is also prime. The prime generation algorithm looks like this: Generate a random odd integer p . Test whether p is prime. If not, loop. Test whether ( p -1)/2 is prime. If not, loop. Random odd 4096-bit integers have a probability of about 1/2000 of being prime, and since both p and ( p -1)/2 must be prime, this will require, on average, generating and testing for primality about 4 million odd integers. This is bound to take some time. When going from 2048-bit to 4096-bit, the density of strong primes is divided by 4, and the primality tests will also be about 4 times slower, so if generating a 2048-bit DH modulus takes 1 hour on average, the same machine with the same software will use an average of 16 hours for a 4096-bit DH modulus. These are only averages; each individual generation may be faster or slower, depending on your luck. The reasonable solution would be to add the -dsaparam option. openssl dhparam -dsaparam -out /etc/ssl/private/dhparam.pem 4096 This option instructs OpenSSL to produce "DSA-like" DH parameters ( p is such that p -1 is a multiple of a smaller prime q , and the generator has multiplicative order q ). This is considerably faster because it does not need to nest the primality tests, and thus only thousands, not millions, of candidates will be generated and tested. As far as academics know, DSA-like parameters for DH are equally secure; there is no actual advantage to using "strong primes" (the terminology is traditional and does not actually imply some extra strength). Similarly, you may also use a 2048-bit modulus, which is already very far into the "cannot break it zone". The 4096-bit modulus will make DH computations slower (which is not a real problem for a VPN; these occur only at the start of the connection), but won't actually improve security. To some extent, a 4096-bit modulus may woo auditors, but auditors are unlikely to be much impressed by a Raspberry-Pi, which is way too cheap anyway. (A small sketch of the nested primality search follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/95178",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/78130/"
]
} |
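The nested primality loop the answer describes is short to write down. The sketch below is my own illustration (the function name and bit size are arbitrary); it uses sympy's isprime and a small modulus so it finishes quickly, whereas at 4096 bits this is exactly the multi-hour search openssl dhparam performs on a Raspberry Pi.

```python
# Sketch of the "safe prime" search: keep drawing random odd candidates p until
# both p and (p-1)/2 test as prime. Run at a small bit size for demonstration.
import secrets
from sympy import isprime   # any Miller-Rabin style primality test would do

def find_safe_prime(bits: int = 128) -> int:
    while True:
        p = secrets.randbits(bits) | (1 << (bits - 1)) | 1   # random odd, full bit length
        if isprime(p) and isprime((p - 1) // 2):             # p prime AND (p-1)/2 prime
            return p

print(hex(find_safe_prime()))
```

The -dsaparam route avoids the nested test entirely, which is why it is so much faster.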
95,226 | Is this an exploit of the email system of some sort? I receive a great deal of spam every day, and my accounts in Yahoo and Gmail sometimes show me spam emails from the future. Yes, the future. Sometimes the date is a day ahead of when they send it, which could mean they are a day ahead via time zones, but sometimes I have also seen a week ahead, and rarely I find emails sent a month in the future. Are these spammers exploiting a part of email that allows them to spoof the timestamp of the email? I have always been curious how they are managing to send me mail from the future. | That's not an "exploit"; it's rather just the way e-mail works. Datetime, sender, receiver, and all other headers of an e-mail message can be set by the sender to whatever value he wishes; mail protocols make no security check on them. Hence, spoofing the sender of an e-mail (as spammers, scammers, and phishers often do) is child's play. As Priyank correctly said, if you look at the full headers of the message you received you'll see that only the first hop (the sender) bears a date in the future; all the other hops (the MTAs between the sender and you) are correctly timestamped with the actual date. (A small header-checking sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/95226",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/17478/"
]
} |
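The header comparison suggested in the answer above can be automated. Below is a rough Python sketch (the function name is mine, and the Received-header parsing is deliberately simplistic) that flags a message whose own Date: header is well ahead of every timestamp the receiving MTAs added.

```python
# Compare the sender-controlled Date: header with the MTA-stamped Received: times.
from email import message_from_string
from email.utils import parsedate_to_datetime

def date_looks_forged(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    if msg["Date"] is None:
        return False
    claimed = parsedate_to_datetime(msg["Date"])
    hop_times = []
    for received in msg.get_all("Received", []):
        try:
            # The timestamp conventionally follows the last ';' in a Received header.
            hop_times.append(parsedate_to_datetime(received.rsplit(";", 1)[-1].strip()))
        except (TypeError, ValueError):
            continue
    # Forged if the claimed date is more than an hour ahead of every real hop.
    return bool(hop_times) and all((claimed - t).total_seconds() > 3600 for t in hop_times)
```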
95,245 | Please Consider : English is my second language. On the Security Now! podcast episode 518 ( HORNET: A Fix for TOR? ), at the 27:51 mark Steve Gibson quotes an example of vulnerable code in C/C++: "[...] one of them [problems with vulnerable code] is creating a new array
of a certain size [...]. And the fix is 'of a certain size + 1'. So,
[...] it [the vulnerable code] was just one byte too short. Probably a
NULL terminator, so that when you fill the array with size objects,
you would have one extra byte of NULL that would guarantee NULL
termination, and that would prevent that string from being overrun . But
that's not what the coder did: they'd forgotten the ' + 1 ' [...]" I understand what he means: when you create an array, you need to allow one extra byte for the NULL termination byte. What I would like to achieve with this post is to get a pointer for further research into the impact of having an array whose last byte is not the byte terminator; I don't understand the full implications of such negligence, and how this could lead to an exploit. When he says that having the NULL termination "would prevent that string from being overrun", my question is "how is it overrun in cases where the NULL termination character is neglected?". I understand that this is a huge topic and therefore do not to impose on the community too comprehensive of an answer. But if anyone could be kind enough to provide some pointers for further reading, I would be very appreciative and happy to go and do the research myself. | String Termination Vulnerability Upon thinking about this more, using strncpy() is probably the most common way (that I can think of) that could create null termination errors. Since generally people think of the length of the buffer as not including \0 . So you'll see something like the following: strncpy(a, "0123456789abcdef", sizeof(a)); Assuming that a is initialized with char a[16] the a string will not be null terminated. So why is this an issue? Well in memory you now have something like: 30 31 32 33 34 35 36 37 38 39 61 62 63 64 65 66
e0 f3 3f 5a 9f 1c ff 94 49 8a 9e f5 3a 5b 64 8e Without a null terminator standard string functions won't know the length of the buffer. For example, strlen(a) will continue to count until it reaches a 0x00 byte. When is that, who knows? But whenever it finds it it will return a length much larger than your buffer; lets say 78. Lets look at an example: int main(int argc, char **argv) {
char a[16];
strncpy(a, "0123456789abcdef", sizeof(a));
// ... lots of code passes, functions are called...
// ... we finally come back to array a ...
do_something_with_a(a);
}
void do_something_with_a(char *a) {
int a_len = 0;
char new_array[16];
// Don't know what the length of the 'a' string is, but it's a string so lets use strlen()!
a_len = strlen(a);
// Gonna munge the 'a' string, so lets copy it first into new_array
strncpy(new_array, a, a_len);
} You've now just written 78 bytes to a variable that only has 16 bytes allocated to it. Buffer Overflows A buffer overflow occurs when more data is written to a buffer than is allocated for that buffer. This is no different for a string except that many of the string.h functions rely on this null byte to signal the end of a string. As we saw above. In the example we wrote 78 bytes to a buffer that is only allocated for 16. Not only that, but it's a local variable. Which means that the buffer has been allocated on the stack. Now those last 66 bytes that were written, they just overwrote 66 bytes of the stack. If you write enough data past the end of that buffer you'll overwrite the other local variable a_len (also not good if you use it later), any stack frame pointer that was saved on the stack, and then the return address of the function. Now you have really gone and screwed things up. Because now the return address is something completely wrong. When the end of do_something_with_a() is reached, bad things happen. Now we can add a further to the example above. void do_something_with_a(char *a, char *new_a) {
int a_len = 0;
char new_array[16];
// Don't know what the length of the 'a' string is, but it's a string so
// lets use strlen()!
a_len = strlen(a);
//
// By the way, copying anything based on a length that's not what you
// initialized the array with is horrible horrible coding. But it's
// just an example.
//
// Gonna munge the 'a' string, so lets copy it first into new_array
strncpy(new_array, a, a_len);
// 'a_len' was on the stack, that we just blew away by writing 66 extra
// bytes to the 'new_array' buffer. So now the first 4 bytes after 16
// has now been written into a_len. This can still be interpreted as
// a signed int. So if you use the example memory, a_len is now 0xe0f33f5a
//
// ... did some more munging ...
//
// Now I want to return the new munged string in the *new_a variable
strncpy(new_a, new_array, a_len);
// Everything burns
} I think my comments pretty much explain everything. But at the end you've now written a huge amount of data into an array most likely thinking that you're only writing 16 bytes. Depending on how this vulnerability manifests itself this could lead to exploitation via remote code execution. This is a very contrived example of poor coding, but you can see how things can escalate quickly if you're not careful when working with memory, and copying data. Most of the time the vulnerability will not be this obvious. With large programs you have so much going on that the vulnerability might not be easy to spot, and could be triggered by code multiple function calls away. For more on how buffer overflows work . And before anyone mentions it, I ignored endianess when referencing the memory for the sake of simplicity Further Reading Full Description of the Vulnerability Common Weakness Enumeration (CWE) entry Secure Coding Strings Presentation (PDF automatically downloads) University of Pittsburgh - Secure Coding C/C++: String Vulnerabilities (PDF) | {
"source": [
"https://security.stackexchange.com/questions/95245",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
95,291 | A friend of mine managed to expose both id_rsa and id_rsa.pub file from his server via apache. I mentioned to him that this is serious security issue but he brushed this off like it was no big deal. Since I don't possess enough security knowledge to explain to him why this is bad can someone help me out here? Or maybe my friend is right and it is not as bad as I imagine it to be? | I suppose that these files are for "his SSH key" as a client. Revealing the public key ( id_rsa.pub ) has no consequence: when we call it a public key we mean it. The private key ( id_rsa ) is of course the problem. The way you use, as a client, your private key, is to push the corresponding public key into the server, in the .ssh/authorized_keys of the target account. When the public key is in this file, then whoever knows the private key may log into the server under that name. That's how the public/private keys work on the client side in SSH. So there are basically three possible, rational reasons that would allow your friend to disregard the disclosure of his id_rsa file: Maybe he never pushed the public key anywhere. When he logs into a server, he does it by typing his account password for that server, always. If his key pair is never used , then revealing the private key is harmless. But then, why would he have such a key pair? Possibly, all the servers he connects to, with private key authentication, are located on a private network, with only non-hostile users, and strong isolation from the outside world. It is conceivable that the private key is protected by a password (or "passphrase" in SSH terminology), which means that it really is encrypted with a key derived from the password; and your friend has great trust in the strength of his password. Note that even if the private key is unprotected (or protected with a guessable password) and grants access to some servers reachable by attackers, then what the attackers can do is log in to these servers in the name of your friend (which is already a big problem). This does NOT grant to attackers the power to do a Man-in-the-Middle attack (MitM is double-impersonation, so a MitM attacker must know the private keys on both client and server); they cannot either decrypt past or future sessions that they eavesdrop on, or alter data of ongoing sessions (notably, the asymmetric keys in SSH are used for authentication, but the key exchange uses Diffie-Hellman ). | {
"source": [
"https://security.stackexchange.com/questions/95291",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68154/"
]
} |
95,304 | Do you know of any good free platform in which we can practice different attack vectors and vulnerabilities such as XSS, SQL injection, low-level hacking, reverse engineering, etc.? For example, I liked Smash The Stack back then. (Note: it's not recommended to get into the Tags section , Just SSH to the server and start to play). | Depending on what you call "online", a simple Google search on "damn vulnerable" will reveal the existence of freely downloadable applications of even full OS, meant for, indeed, learning all the ways software can be horribly vulnerable. One of them is Damn Vulnerable Web App , which is, you guessed it, a damn vulnerable Web app. There also used to be a full OS called Damn Vulnerable Linux ; it is apparently discontinued (though of course lack of security patches was the point of it) but this question discusses replacements. These are not "online machines" for you to hack, but you can download them and install them on a virtual machine on your own computer, which can be done for free (there are good free VM solutions, e.g. VirtualBox ) and is a lot more flexible than an online target; it will teach you more since you can modify it and reset it at will. All these resources are subject to obsolescence, modification and replacement, so the important point of this answer is to give the correct keywords for searching. And these keywords are "damn vulnerable". | {
"source": [
"https://security.stackexchange.com/questions/95304",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6350/"
]
} |
95,325 | In the bits that I've searched about this, I've seen a few people declare as word-of-god that you should only sanitize outputs and not inputs. Why? Would it not be safer to cover both ends? | When you sanitize input, you risk altering the data in ways that might make it unusable. So input sanitization is avoided in cases where the nature of the data is unknown. For instance, perhaps some special characters hold significance in the data and stripping them means destroying that significance. A scenario like this may be that your system stores data that may later be pulled out into a third party system, and in that system those characters hold meaning. By stripping them you've altered the data in a significant way. For instance, perhaps the string is used as a key to look up a record in the third party system and by stripping the symbol you alter the key such that the record cannot be found. Input sanitization can be used when that nature of the data is known and sanitization would not adversely affect the data in anyway. Your decision to sanitize input data is in part a business decision. Will third party system depend on input exactly as it is provided? If so, it's probably not a good idea. However, you may be able to shape expectations such that the third parties understand that you will be sanitizing input data based on a specified criteria that you share with them. | {
"source": [
"https://security.stackexchange.com/questions/95325",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82170/"
]
} |
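To make the distinction concrete, here is a small Python illustration of the "encode at output, not at input" idea (the sample string and function are my own): the stored value keeps every character the user typed, and escaping is applied only when rendering for one specific context - HTML here - so a different consumer of the same data can apply its own encoding.

```python
# Store the raw value; escape only at the point of output, per output context.
import html

stored_comment = 'I <3 the "O\'Brien & Sons" dashboard <script>alert(1)</script>'

def render_html(value: str) -> str:
    return f"<p>{html.escape(value, quote=True)}</p>"

print(render_html(stored_comment))
# <p>I &lt;3 the &quot;O&#x27;Brien &amp; Sons&quot; dashboard &lt;script&gt;alert(1)&lt;/script&gt;</p>
```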
95,703 | While surfing a news website on my mobile, I received a virus infection alert warning that triggers my phone to vibrate incessantly. The alert looks like the following: I didn't expect my phone to vibrate, and the alert is able to tell me the model of my phone (first panel) and the OS (second panel). Clicking the back button causes another warning to pop up (third panel). I almost wanted to follow the instructions on the second panel to install what looks like an anti-virus. But luckily, I was able to calm my nerves sufficiently to realize that this is scare-ware served through an ad-server and that the anti-virus could be the actual virus. Given that the HTML5 vibrate function is a new feature that people hardly encounter on websites, it would not be a surprise if people fell prey to this tactic. Is the HTML5 vibration feature a security vulnerability? Should mobile browsers enable such a feature on websites by default? | A popup was used to show the alert. Does this mean that the popup feature introduces vulnerabilities? Then by that line of reasoning JavaScript is the source of all problems. There are people who actually think that JS is an important vector for attacks and block it on untrusted websites with extensions like NoScript. Many features can be misused, and it is up to the people creating the standards, browsers and even websites to judge what is bad and to change the standards or implement mitigations. Of course those people can be wrong and some feature can be unexpectedly used to attack users. A nice example is the browser's console which is very often used to trick users into pasting JS code that attacks the user. This helped Facebook worms to propagate with great success. Facebook noticed this and introduced this message in the console: This vibrate function might trick some users into thinking that it is actually the OS showing the alert, but I think the latest mobile browsers do a good job of showing the user that he is still inside the browser. In this case, the message from the browser is clear enough "The page at andro-apps.com says:" If this becomes an important vector for attack, I'm sure the browser manufacturers will notice that and will make changes to reduce the impact. | {
"source": [
"https://security.stackexchange.com/questions/95703",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9312/"
]
} |
95,771 | Looking through error logs I found lots of requests to a web-app where the URL contains: /if(now()=sysdate(),sleep(10),0)/*'XOR(if(now()=sysdate(),sleep(10),0))OR'"XOR(if(now()=sysdate(),sleep(10),0))OR"*/ I read that this could be part of an attack on websites developed in PHP, which we don't use. Also, we're using an ORM to query our databases. So is this an SQL injection attempt? | This is most likely a blind SQL injection, testing whether you're vulnerable to SQL injection by checking whether your server takes the specified time or more to reply to the request. This is not actually modifying any data nor exposing anything; it's just checking whether you're vulnerable. It's also worth noting that this specifically targets MySQL databases, as the if and sleep syntaxes used here are specific to that DB engine. If the attack is isolated, you were probably "probed" by an automated vulnerability scanner that is preparing a large-scale attack, so if your webapp is not vulnerable you have nothing to worry about. However, if you receive more weird requests with different attack patterns, you could be the specific target of those attackers, and should take actions to prevent all attacks that may succeed. See Time-Based Blind SQL Injection Attacks for more information. (A short parameterized-query sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/95771",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82483/"
]
} |
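The reason an ORM (or any parameterized query) makes this probe harmless is that user-supplied text is bound as a value rather than concatenated into the SQL string. A small illustration follows; sqlite3 is used only because it ships with Python, and the table and column names are made up for the demo.

```python
# Bound parameters: the hostile string is treated as data, never as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, slug TEXT)")
conn.execute("INSERT INTO articles (slug) VALUES ('hello-world')")

user_input = "if(now()=sysdate(),sleep(10),0)"   # the hostile fragment from the logs

rows = conn.execute("SELECT id FROM articles WHERE slug = ?", (user_input,)).fetchall()
print(rows)   # [] - no match, and nothing from the input is executed as SQL

# The vulnerable pattern is string concatenation, e.g.:
#   conn.execute("SELECT id FROM articles WHERE slug = '" + user_input + "'")
```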
95,781 | Many PDFs are distributed as encrypted PDFs to lock out some of their functionality (eg printing, writing, copying). However, PDF cracking software is available online, which usually cracks the PDF passwords in less than 1 second. It doesn't make sense that the PDF system is so easy to crack if Adobe implemented a proper encryption techniques in their document security, and it looks like that there is some major implementation error in their PDF encryption scheme that allows documents to be unlocked with trivial amounts of work. What is the security scheme used in such locked PDF files, and why do these PDF password removers take so little time to defeat it? | There are two types of PDF protection: Password-based encryption and User-Interface restrictions. You are describing the second type of protection, namely the missing permission to copy-and-paste, to print and so on. If there are user-interface restrictions placed on a PDF file, the viewer still needs to decrypt the contents to display it on your screen, so you are not in an "password-based encryption" scenario where you are missing a key to decrypt the document, but in a "DRM" scenario where you trust that the applications that are able to decrypt the file (based on static knowledge like master keys) do only the things the author wants them to do. Nothing prevents computer experts reverse engineering how the legitimate application decrypts the data (no password needed), and performing the decryption themselves. After having the document decrypted, rights may be "adjusted" to e.g. include printing permission or the decrypting application can do things (like copy all bitmap images) itself. Adobe tries to prevent "rogue applications" that allow you to circumvent the usage restrictions by their license on the PDF specification: They revoke the license to use the (claimed) intellectual property in that specification for applications that do not obey the usage restrictions. AFAIK some open source tools have or had a build switch for whether the usage restrictions should be obeyed or not. This makes a perfect starting point for people selling "PDF deprotector" software. In the case described above, the "user password" is the empty string. PDF readers are required to try to an empty user password if a protected PDF file is opened. Only if that fails the password validity check is the user asked for a password. begueradj describes the key derivation in his answer, and as you see, the "DRM permissions" (/P entry) enters the key derivation, so if you just "fix the permissions" in a protected PDF file, a conformant reader will derive the wrong key and fail to open the document. On the other hand, if a PDF file is completely protected by a password (even against opening), the user password is no longer empty, and this type of PDF protection is reasonably secure. | {
"source": [
"https://security.stackexchange.com/questions/95781",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/63600/"
]
} |
95,820 | A mail made it through the spam filter and I wonder what its purpose is. It is not spam.
Tracking? But how? Who? And why? In the source code there are these weird passages like ... =EA=85=9F
=EA=8F=92 who benefits how?
No links, nothing else in this email. Delivered-To: [email protected]
Received: by 10.28.158.140 with SMTP id h134csp1731559wme;
Mon, 3 Aug 2015 04:22:13 -0700 (PDT)
X-Received: by 10.55.41.195 with SMTP id p64mr24023265qkp.5.1438600933481;
Mon, 03 Aug 2015 04:22:13 -0700 (PDT)
Return-Path: <[email protected]>
Received: from nm38-vm9.bullet.mail.bf1.yahoo.com (nm38-vm9.bullet.mail.bf1.yahoo.com. [72.30.239.25])
by mx.google.com with ESMTPS id j34si16595518qkh.82.2015.08.03.04.22.12
for <[email protected]>
(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
Mon, 03 Aug 2015 04:22:13 -0700 (PDT)
Received-SPF: pass (google.com: domain of [email protected] designates 72.30.239.25 as permitted sender) client-ip=72.30.239.25;
Authentication-Results: mx.google.com;
spf=pass (google.com: domain of [email protected] designates 72.30.239.25 as permitted sender) [email protected];
dkim=pass [email protected];
dmarc=pass (p=REJECT dis=NONE) header.from=yahoo.com
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1438600932; bh=2Le9dnlRHEHV2DHi6g9XBTAZHFuEvLsr8SjC/C2a2+Y=; h=Date:From:Reply-To:To:Subject:From:Subject; b=ab5c6U0O35AE1JHNL7n1OB10kVvCjIPh5ilkWw5ct2nWs6w4b9CSkyaBQKibdqI3gbQB+NQo8/FINRQMjloHxunlRa91MRWQEZ48S3EUOH65D4b7tVMyfs4pB+VSJb/8ohLwDFs0nFS5V9S55M1DD3o+WqLOkwb49ijxE8J9enDY8jtLWaJ7RZ794nZcvRH3a3Y4r31Y3zahRUVmKQKc2vvPDOrEbncmu2PEJOhcJEELTQcc1MXtaVWHzspmyPZBuBVzvd4cvvYStguk7p5UL9kvyLWG3ZyhaPyDGfbt0egQcFropcb6Xw3ttdikVlC7YYVipZUgzp/IzajFZks6jw==
Received: from [66.196.81.170] by nm38.bullet.mail.bf1.yahoo.com with NNFMP; 03 Aug 2015 11:22:12 -0000
Received: from [98.139.212.241] by tm16.bullet.mail.bf1.yahoo.com with NNFMP; 03 Aug 2015 11:22:12 -0000
Received: from [127.0.0.1] by omp1050.mail.bf1.yahoo.com with NNFMP; 03 Aug 2015 11:22:12 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: [email protected]
X-YMail-OSG: V0Jf1mgVM1nboRi87_16P3wYo7hVU_Wr4wYa8QonNjb6jD1sZDPz1QMe5617lEj
.KTslKteP6Aay2J5FC1JdWzUFlVlqBbvFsFsuumiJcZNTt05csrlKh1v3H5Gzb0ArIimMooZB3WF
V4xucEAi6v6l.Dx4G6r66fHLgmvW_3nukrV5HBBj49nHgUkd6ZWNWvVJ..pnsjI3WTLyo_B3PKTC
tvyVuliPBVKPv4oDLkFbiAcS6czdirjBw04SDlyXyz6zVVvgyrFQx8Jxu7Z0yEfA18KRNWlrn4kd
Ozgpri8uHm.hdcj.DYlF5lVANlBACmDfsboQOL9Ma69nsNeWvRGVoDrxYGsXCfOT13yAfXLLdf_c
KwEOEIXQcfnWY5tWHHqhLPaEJM36vGb7PrSVPjbGFvuGxO.a66wkphgI_Gn3rcXkXGBluiVveg5O
_KFt15xpsEM1nd7kvyyBo2M2GJn_A_GuD_0KNoPKrk8Gtorh9Z7TdSW.0WtU80P8m6vsRydyp2u9
7H14-
Received: by 76.13.27.197; Mon, 03 Aug 2015 11:22:12 +0000
Date: Mon, 3 Aug 2015 11:22:11 +0000 (UTC)
From: Shawn <[email protected]>
Reply-To: Shawn <[email protected]>
To: <removed to protect privacy>
Message-ID: <[email protected]>
Subject: fdihkesdhlffljrks djssldhfvkljdelsfkah
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="----=_Part_120230_1658237110.1438600931848"
Content-Length: 1531
------=_Part_120230_1658237110.1438600931848
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
dolomite fiddle armpits moribunditygNIt's=EA=85=9FShawn=EA=80=BCby=EA=8F=92=
the=EA=87=91way.famished nonsalaried artichokes deadlockingAaI'm=EA=87=8Bex=
cited=EA=8D=BEabout=EA=91=8Fyour=EA=89=AFanswer))symbiotes perspire
------=_Part_120230_1658237110.1438600931848
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
<html><body><div style=3D"background-color:#ccdd15;display:block;color:#ccd=
d15;"><div style=3D"font-family: Unfeignedly, Gascony, Pancakes;font-size:5=
px;">dolomite fiddle armpits moribundity<div>gN<div style=3D"font-size:20px=
;color:#455e81;display:inline-block">It's</div>=EA=85=9F<strong style=3D"fo=
nt:20px normal;color:#455e81">Shawn</strong>=EA=80=BC<div style=3D"color:#4=
55e81;font-size:20px;display:inline-block">by</div>=EA=8F=92<em style=3D"fo=
nt:20px normal;color:#455e81">the</em>=EA=87=91<em style=3D"font:20px norma=
l;color:#455e81">way.</em></div>famished nonsalaried artichokes deadlocking=
<div>Aa<i style=3D"color:#455e81;font:20px normal">I'm</i>=EA=87=8B<span st=
yle=3D"color:#455e81;font-size:20px">excited</span>=EA=8D=BE<big style=3D"f=
ont-size:20px;color:#455e81">about</big>=EA=91=8F<strong style=3D"font:20px=
normal;color:#455e81">your</strong>=EA=89=AF<i style=3D"color:#455e81;font=
:20px normal">answer))</i></div>symbiotes perspire</div></div></body></html=
>
------=_Part_120230_1658237110.1438600931848-- | This is spam -- but possibly the spammer was not very good at spamming. The '=EA' bits are Quoted-Printable , an encoding for bytes into ASCII characters. '=EA=85=9F' thus stands for bytes of values 0xEA, 0x85 and 0x9F, in that order; this is the UTF-8 encoding for 'ꅟ' (that's U+A15F YI SYLLABLE NDEX, one of the symbols of Yi script ). Whoever sent that email hopes that your mail reader software will not include a Yi font, and thus display the character as a space. The point of using such symbols is to try to confuse antispam filters: the filter may try to react on the sentence "It's xxx by the way" (for random names instead of "xxx"); the extra characters may make this filter fail. Chances are that the spam, being sent by the million, will use random characters from unusual sets (like Yi glyphs). The random words ("fiddle", "armpits"...) serve the same purpose: to evade detection, especially by Bayesian spam filters . Note that the extra words are "hidden" in the HTML view, by being displayed with a very small font and with the same colour as the background. All of this is very spammish, and since your spam filter let the mail flow, then the spammer actually won this round: his evasive maneuvers worked, and your spam filter was defeated. Now, what can be the point of all this ? The point of spam is to trigger some reaction from the spammee. This can be "clicking on a link" but it could also be "send an email in response". I can make several conjectures: It has been pointed out (e.g. in this study ) that the business model of most spammers requires pinpointing stupid people. For the spammer, sending out millions of spams costs about nothing; however, when a spammee answers, a human agent of the spammer must read and respond, and there things become very expensive for the spammer. Thus, what the spammer really wants is that the few people who actually get hooked on the initial spam will be ready to believe the most fantasmagorical stories. Along that hypothesis, the spam you received might be a way to find the people who are dumb enough to believe that the sender is really named Shawn, and are ready to talk to Shawn. Spammers are (technically) human beings, with all the flaws that this entails. The spammer uses a spamming tool but may be bad at using it. I often receive spams that greet me as "Hello %RANDUSER", an occurrence that can only be explained by a spammer who should be reading the documentation for his spamming tool. | {
"source": [
"https://security.stackexchange.com/questions/95820",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82526/"
]
} |
95,972 | I'm creating an HTTP REST service which will be available over TLS only. For authentication purposes I plan to generate a JWT token for every user using HMAC HS256. I need a secret key for the HMAC. What are the requirements for that secret key? Do I need a long string of random characters? Or a fixed-length string? Or what? | I've added my answer here as I feel the existing ones don't directly address your question enough for my liking. Let's look at RFC 4868 (regarding IPsec; however, it covers the HMAC-SHA256 function you intend to use - emphasis mine): Block size: the size of the data block the underlying hash algorithm
operates upon. For SHA-256, this is 512 bits , for SHA-384 and
SHA-512, this is 1024 bits. Output length: the size of the hash value produced by the
underlying
hash algorithm. For SHA-256, this is 256 bits , for SHA-384 this
is 384 bits, and for SHA-512, this is 512 bits. As WhiteWinterWolf notes , longer than B (block size) is discouraged because the value must be hashed using SHA-256 first (i.e. 512 bits in this case) and less than L (output length) is discouraged (256 bits in this case). However, a 256 bit key is overkill as anything that is 128bits or greater cannot be brute forced in anyone's current lifetime, even if every computer in the world was working on cracking it. Therefore I'd recommend a 128 bit key, generated with a cryptographically secure pseudo random number generator (CSPRNG). If you want to store this as text then a 128 bit key can be represented by generating a random 32 character length hex string, or alternatively you could generate 16 random bytes and then run them through a base64 function. | {
"source": [
"https://security.stackexchange.com/questions/95972",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/53488/"
]
} |
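A minimal sketch of the key-generation advice in the answer above, using only Python's standard library: 16 random bytes (128 bits) from a CSPRNG, stored as 32 hex characters or as base64, then used to sign a message with HMAC-SHA256. The message being signed is a placeholder rather than the output of any particular JWT library, and the variable names are illustrative.

```python
import base64
import hashlib
import hmac
import secrets

# 16 random bytes = 128 bits, from a CSPRNG (secrets wraps os.urandom)
key_bytes = secrets.token_bytes(16)

# Two equivalent ways to store the key as text
key_hex = key_bytes.hex()                        # 32 hex characters
key_b64 = base64.b64encode(key_bytes).decode()   # base64 form

# Signing an arbitrary message with HMAC-SHA256 (what HS256 does under the hood)
message = b"header.payload"                      # placeholder, not a real JWT
tag = hmac.new(key_bytes, message, hashlib.sha256).hexdigest()

# Verification should use a constant-time comparison
assert hmac.compare_digest(tag, hmac.new(key_bytes, message, hashlib.sha256).hexdigest())
print(key_hex, key_b64, tag)
```

Note that hmac.compare_digest avoids the timing side channel a plain == comparison could introduce.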
95,997 | The email message looks like this. This person used a church member's name and used a totally different email address. It went like this: Hi! How are you? (then gives a bad web address) Gary Mecham Then, below that in the bottom left corner it says "Sent from my iPhone, with a fraudulent web address" The email domain is totally different from the church member's actual domain. I sent an email to the people (that I knew) that this imposter sent this to to warn them that it is NOT who they think it is. I work at a church. That's where I received the email, on the church's computer. I did click on the link, but we have McAfee Internet Security software on our computer and, quickly, it gave a warning message not to go to that website. So, of course, I didn't. Can I report this to the FCC? Or who? It's not really a consumer spam, but I think it's an email to get people to click on a website that probably has a virus in it that will attack their computer. | I've added my answer here as I feel the existing ones don't directly address your question enough for my liking. Let's look at RFC 4868 (regarding IPSec, however it covers the HMAC-SHA256 function you intend to use - em mine ): Block size: the size of the data block the underlying hash algorithm
operates upon. For SHA-256, this is 512 bits , for SHA-384 and
SHA-512, this is 1024 bits. Output length: the size of the hash value produced by the
underlying
hash algorithm. For SHA-256, this is 256 bits , for SHA-384 this
is 384 bits, and for SHA-512, this is 512 bits. As WhiteWinterWolf notes , longer than B (block size) is discouraged because the value must be hashed using SHA-256 first (i.e. 512 bits in this case) and less than L (output length) is discouraged (256 bits in this case). However, a 256 bit key is overkill as anything that is 128bits or greater cannot be brute forced in anyone's current lifetime, even if every computer in the world was working on cracking it. Therefore I'd recommend a 128 bit key, generated with a cryptographically secure pseudo random number generator (CSPRNG). If you want to store this as text then a 128 bit key can be represented by generating a random 32 character length hex string, or alternatively you could generate 16 random bytes and then run them through a base64 function. | {
"source": [
"https://security.stackexchange.com/questions/95997",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82716/"
]
} |
96,004 | What makes Linux so different than Windows in terms of anti-virus needs? My question is not if I should get an anti-virus for my Linux . I perfectly understand why an AV is important. I would like to understand if there are conceptual (technical) differences which make Linux less vulnerable than Windows (comparing for example Ubuntu 14 and Windows 7). | There are several reasons why Windows is so heavily inflated with anti-virus products. (I am pointing to out-of-the-box (OOTB) experiences). Windows users are, by default, local administrators, so any social engineering done on Windows can usually lead to an execution of software. Modern Linux has users set-up as low-privilege local users. It requires your password to elevate privilege. Windows tried to simplify as many things as possible including security and looking back at its history their butchering ( Windows Vista anyone?) of security controls left their user-base numb to constant false positives about software. The proverbial "Do you want to install this software? Do you REALLY want to install this software?" lead to just click-throughs or disabling UAC . Software repositories vs standalone installs: Linux has had software repositories forever and they provide a good mechanism for installing software. These are usually signed, approved, software being protected by companies with budgets for security following standards for security. (I know about the breaches to repositories in the past, but this is generally good). Windows users are used to pulling sources from everywhere and installing on their system, unsigned or not. Users generally have different mindsets: Windows is an all-purpose, all-user platform. It generally tries to solve everyone's problems and in doing so, OOTB doesn't protect the user like it should. This why Microsoft pushes so hard to force every piece of software to be signed by a "trusted signer". There's plenty of debate on this, but generally from a security standpoint this is smart; Microsoft just happens to have a track record that leaves trust to be desired. Linux users are generally technical and the systems are usually server systems. That's why software usually comes with GPG keys and/or SHA/ MD5 hash for comparison, as these are from a Linux administrator perspective, de-facto processes for installing software. I know many Linux users who ignore this, but I have yet to see a Windows administrator even think about it. So it does go beyond market share. Expansion: I will address a few things from the comments (which have valid points.) Repositories: From an OOTB experience modern Linux distributions have pre-signed packages which are more for identifying that a package works with the distribution, but also proves a secure method for verification. Other package management system have been discussed such as pip and npm which are independent of the distributions themselves and are servers to install specific packages for their particular programming language. It can be argued that there is no inherent way for verification on these systems. This is primary because Linux has a philosophy of programs doing one specific thing and doing it well. This is typically why multiple tools are used such as using GPG or PGP for integrity. Script Downloads cURL | sh has been mentioned and are truly no different than clicking on a .exe after you have downloaded the file. To point out, cURL is a CLI tool for transferring data. It can do authentication, but it doesn't do verification specifically. 
UAC vs sudo Lastly, here are a few things about these two security features. UAC is an approval process for untrusted software installation. A user who has local administrator rights simply gets a yes or no prompt (the behavior can be changed, but that's not the default). I am still looking to see if this behavior has changed on Windows 8+, but I haven't seen anything on it. Sudo is a fine-grained privilege elevation system. By default it's essentially the same thing as UAC, but it can be configured far more precisely to limit what a user is allowed to do. | {
"source": [
"https://security.stackexchange.com/questions/96004",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
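The answer above notes that Linux administrators habitually check downloads against published GPG signatures or SHA/MD5 digests. As a small illustration of the checksum half of that habit, here is a hedged Python sketch; the filename is a placeholder and the digest you compare against must come from the vendor (ideally fetched over HTTPS or verified with GPG).

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large downloads don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder filename; compare the printed value against the digest the
# vendor publishes alongside the download.
print(sha256_of("package.tar.gz"))
```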
96,121 | Are the spelling and grammar mistakes in phishing emails done on purpose? Is there some wisdom behind it? Or they are simply indicative of the fact that they've been written by someone who does not natively speak English? | This may well be for the same reason as many scammers rely on the tired old 'Nigerian Prince' strategy: by self-selecting for gullible targets, they can be more efficient . In phishing, as in scams, sending the initial batch of emails is the easy part. The hard part is coaxing information out of the target (which can require a concerted exchange of emails). That can represent a significant investment of time. As a result, it's really important to ensure that the people you correspond with may actually give you the information that you're after. It can therefore be advantageous to send a badly-drafted email, on the basis that the people who respond to that are likely to be gullible enough to be phished. (I would probably draw a distinction between these broad, drag-net approaches and targeted phishing, where you're much more likely to see carefully-crafted and legitimate-looking emails.) | {
"source": [
"https://security.stackexchange.com/questions/96121",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82823/"
]
} |
96,134 | I just read that "ransom" attacks are on the rise - where the attacker uses a vulnerability to enable them to encrypt files and demand money for the key. Why is this any different to a disk failure, where the solution is "get the backup"? | Most people don't have backups. Most people who do have backups, haven't tested them to make sure they work. The real difference between disk failure and ransomware is that paying the ransom is cheaper than paying a data-recovery company, and is more likely to get your data back. | {
"source": [
"https://security.stackexchange.com/questions/96134",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55509/"
]
} |
96,178 | If I encrypt all my files can I get "attacked" by ransom attacks? Because my files are already encrypted, they cannot access them, so I should be safe or am I wrong? Also, if someone could tell me how this encryption works, I would be really thankful. I already read some articles on Wikipedia and it states there that the encryption does not work while booting (in the English article it's called Cold-Boot-Attack), so would it be possible to get access to the files somehow when booting? Not that I need it now, but you never know. | TLDR: Any file encryption does not protect you against Ransom Attack. We can consider two scenarios: You encrypt your files with some tools (e.g. encrypted zip), You have encrypted whole partition (Truecrypt, dm-crypt etc.). In the first case, even if you have encrypted your files they can be encrypted again by ransomware. And then you won't be able to decrypt them. Bad situation. In the second case, ransomware lives in the computer's runtime (while you're using it), therefore it has an access to decrypted files on your computer. The disk partition is decrypted on boot up and encrypted again when you shutdown your machine. Again, bad situation. A file encryption does not protect you against ransomware. The Cold boot attack is a bit different story and you shouldn't consider it here to not confuse yourself. I've tried to explain it in the easy way, I hope I helped somehow :) To protect against ransomware you can (should!) do at least these three things: Do not visit malicious sites. Backup important stuff (on a separate, unplugged drive) :) You can also install some antivirus, EMET etc. The likelihood of being successfully ransom-attacked will for sure decrease. | {
"source": [
"https://security.stackexchange.com/questions/96178",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82859/"
]
} |
96,377 | Why do people use IP address bans (e.g. to block a malicious user from an internet service) when IP addresses change often? For example, we turn our router off every night so our IP address often changes in the morning. Furthermore, often a simple power-cycle is enough to change the IP address. Thus IP address bans are relatively ineffective. On the other hand, banning IP addresses can cause a lot of grief for innocent users who are using the former IP addresses of a malicious user, and sometimes a range of IP addresses is banned thus causing the banning of innocent users to affect even more people. So why are IP address bans still used? P.S. I am referring specifically to long-term bans. I perfectly understand the advantages of short-term bans e.g. to put a block on a spam or DoS attack, or other situations where briefly disrupting the malicious traffic is beneficial. | IP address bans have flaws as you mention, but I think the primary reason they are used is simply that there aren't really any better alternatives. Other identifying features, like browser user agent, cookies, browser fingerprint, etc. are even easier to spoof or circumvent. There are plenty of extensions you can use to change your user agent or fingerprint, and cookies can simply be cleared. For example, we turn our router off every night so our IP address
often changes in the morning. Furthermore, often a simple power-cycle
is enough to change the IP address. Thus IP address bans are
relatively ineffective. The ease with which you can change your IP address depends heavily on the ISP. For instance, back when I had Verizon DSL, my IP address would change each time I turned the modem off and back on just like what you describe. But after switching to Comcast , my IP address has not changed for the entire two years I've been with them, even after multiple power outages and modem restarts. So the "router reboot" workaround won't necessarily work for everyone. Another thing you should consider is that even if you're one of those people who can change your IP address with a reboot, you're likely still getting an IP address from a fairly limited pool of addresses. This is because ISPs generally don't assign addresses completely randomly; they divide their service area into smaller areas (e.g. neighborhoods), and then allocate a small range of addresses to assign to customers in each area. So if there was a really persistent and problematic user, a site administrator could ban the entire address range (though this could cause significant problems for other users as you mention). Side note: It's worth mentioning that there are other ways of masking your IP address that get around this problem, like using a VPN service or Tor . Some sites, like Wikipedia, try to block all IP addresses of known public proxies to counter this. On the other hand, banning IP addresses can cause a lot of grief for
innocent users who are using the former IP addresses of a malicious
user, and sometimes a range of IP addresses is banned thus causing the
banning of innocent users to affect even more people. Yes, IP address bans are a blunt tool and this is one of the problems inherent with them. This is especially the case when an IP address is shared by hundreds or thousands of users in the same building, or even a large part of an entire nation via carrier-grade NAT . It is the responsibility of site administrators to minimize the effects of IP address bans on legitimate users. Various measures can be taken - for instance, you could make an effort to identify IP addresses are shared and make sure those IP addresses are only banned for short periods, or make it so that users with a certain minimum reputation can still log in from banned IP addresses and remain unaffected by them. If done right, IP address bans can be very effective at blocking unwanted users while having minimal impact on legitimate ones. | {
"source": [
"https://security.stackexchange.com/questions/96377",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/79645/"
]
} |
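The answer above points out that ISPs allocate addresses from small per-area pools, so administrators sometimes ban a whole range rather than one address. A minimal sketch of such a check with Python's standard ipaddress module; the ban list and client addresses are invented documentation-range values.

```python
import ipaddress

# Hypothetical ban list: single addresses and whole provider ranges (CIDR)
banned_networks = [
    ipaddress.ip_network("203.0.113.42/32"),   # one persistent abuser
    ipaddress.ip_network("198.51.100.0/24"),   # a whole neighbourhood pool
]

def is_banned(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in banned_networks)

print(is_banned("198.51.100.7"))   # True: falls inside the banned /24
print(is_banned("192.0.2.10"))     # False
```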
96,505 | Assume the following: I am doing all my work, including the one with sensitive information on Google Docs. No documents are stored on local hard drive. I am not using the Google Drive App, just doing everything via web interface I am using incognito mode all the time. There is no one "looking over my shoulder", or manages to install any keylogger kind of software So, my question is, can FBI or government agencies still be able to get hold of my files stored on Google Docs by confiscating my laptop? What I am asking is can my laptop still give me away if I use online cloud storage? We all know that even if I delete a Microsoft Word file even from Recycle Bin, there are chances that the file can still be retrieved as long as the investigators have the hard disk. Note that this question is very different from this one here , because we are not talking about accessing the deleted data from HDD per se , but rather, accessing the data that is stored purely on the cloud and only exists temporarily on web browser | When you store data on Google Docs, as you may already know, it is not encrypted at all. I read that everything you upload to Google Docs and similar services are not only yours anymore because you agree Google to own them too. This means, at first glance, Google has already access to your files since Google confesses that by itself: When you upload, submit, store, send or receive content to or through
our Services, you give Google (and those we work with) a worldwide
license to use , host , store , reproduce , modify , create derivative works (such as those resulting from translations, adaptations or other
changes we make so that your content works better with our Services), communicate , publish , publicly perform , publicly display and distribute such content. Also, I am not American but from my understanding of the NSL , the FBI can access your data on Google Docs without even warning you. In your case, I'd rather prefer to encrypt the data before storing it on Google Docs after choosing a strong algorithm and you found a secure way to protect your private key. Update: More than 24 hours after the question was asked, a user modified the title of the question giving it an other meaning than the original one: First time I answered your question, you were online around fourty minutes while my answer was the only one available. You answered only to two comments of @schroeder but nothing else even after you received different answers dealing with different aspects of your question. Even one of your 2 comments was too confusing because you said a thing and its opposite: if the government agency subpoena Google, then my data wouldn't be safe, am I right? No, I don't worry that the government agency will do that – You did not react to accept it or comment it to ask something. After few users said that your question's title is not the same as its content, I asked you clarification via a comment I deleted (because it is too chatty already) but you did not react either. Following your silence to different answers and comments and your refusal to clear the ambiguity of your question, I need to update my answer by focusing on an other side you may wanted to ask, for further readers as the posts here are intended to last. I had to focus on your question's title because good answers to so many questions as yours already exist here : Is it possible to recover securely deleted data from a hard drive using forensics? How to recover securely deleted data How does forensic software detect deleted files Is data-remanence a concern in RAM? How can I reliably erase all information on a hard drive? Prove that you deleted the file Ensure data doesn't linger after being deleted I'm leaving my job and want to erase as many personal details etc. as possible; any tips? Overwriting hard drive to securely delete a file? Secure file deletion vs wiping free space And the list of similar questions with good answers is still so long on this website. This is just a short list I picked myself. And we all know that when you delete a file from Windows' Recycle bin (trash), it becomes invisible to your operating system only because the file allocation table does not point to it anymore, otherwise the file exist somewhere in your hard drives and you can get it back (if the OS did not override it especially if it has been a long age since you deleted it) There is also a point I want to add to those answers about the possibility to FBI to recover your data. It is the easiest way but the answers I read elsewhere here forget it frequently: The easiest way for them is to use the Restore System feature of Windows. That way they could, by chance, get back your browsing history, cookies and even your secret files that you uploaded to Google Docs. By chance, I mean all depends -for the files, for instance- on the restore point you set. This is the easiest way. Also, other answers to your question mentioned that your coockies, browsing history and browser's cache are a way for the FBI to get your data. 
That is true even if you cleared these elements from your browser, because all the FBI needs is to find where your browser stores its cache. P.S. If the FBI is interested in checking your browsing history (supposing you hid some of your data on a secret WordPress blog, for example), they can find where you surfed (your blog) by checking your DNS cache, because your computer uses DNS servers to resolve hostnames to IP addresses and such queries are temporarily stored in your DNS cache. When you clear your browser history, your DNS cache is not touched. You can try this command yourself: ipconfig /displaydns displays the contents of the DNS resolver cache. Do not forget that the FBI can also check your router logs (even if this functionality is deactivated by default on most routers). This can be useful to them if you downloaded data you are not allowed to access, if you stored your secret data elsewhere before uploading it to Google Docs, or simply because it proves you used Google Docs. One more thing to mention: your browsing history can still be detected if your ISP, the government, or whoever else decides to cache your list of browsed sites. Finally, some operating system files such as Index.dat (a hidden file) contain all of the websites that you have ever visited. Every URL and every web page is listed there. | {
"source": [
"https://security.stackexchange.com/questions/96505",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9308/"
]
} |
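The answer above suggests encrypting data yourself before handing it to Google. A hedged sketch of that idea using the third-party cryptography package (Fernet, an authenticated AES construction); key handling is deliberately omitted and the filenames are placeholders.

```python
# pip install cryptography   (third-party package, not the standard library)
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this somewhere safe, NOT in the cloud
f = Fernet(key)

with open("secret-notes.txt", "rb") as fh:       # placeholder filename
    ciphertext = f.encrypt(fh.read())

with open("secret-notes.txt.enc", "wb") as fh:   # this is the blob you upload
    fh.write(ciphertext)

# Later, after downloading the blob back from the cloud:
plaintext = f.decrypt(ciphertext)
```

The trade-off is the one the answer implies: if you lose the key the documents are gone, and server-side features such as search or in-browser editing stop working on the encrypted blobs.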
96,664 | So the traffic is encrypted to a website so the password is safe during transmission and also if the website is hacked then the database only contains hashes. But couldn't the hacker create a server side script to store the usernames and passwords used to login to the website in a file since they are decrypted and are not already hashed? This wouldn't get every user however it would allow them to view users passwords who try to log onto the site while it is compromised. Wouldn't it be better to hash passwords at the client side? | The weakness you allude to is real. An important point is that once the server is compromised, the attacker has little incentive to grab passwords that grant access to that server -- he is already in the place. However, human users have the habit of reusing passwords, and that is a big problem, because a reused password means that compromises on one server tend to "propagate", exactly in the way you explain: the attacker grabs password P for user U , and guesses (alas correctly, most of the time) that the same user U will have used the same password on some other servers. Hashing client-side is a nice idea, but there are details : Whatever the client sends to the server, be it the password itself or the hashed password, grants access. It is password-equivalent . If the server stores these hashes "as is" in its database, then it is in the same situation as a server storing plaintext passwords, and that's bad. Thus, the server MUST also do some hashing. Client-side hashing occurs only if there is code on the client side that does the hashing. In a Web context, this means JavaScript. Unfortunately, JavaScript is poor at cryptographic implementations; in particular, it is quite slow, so the client-side hashing will not be able to use the many iterations that are usually needed for good password hashing (see this for details). Also, this would not work for clients without JavaScript. Still in a Web context, the client-side hashing, if it occurs, is done by JavaScript code sent by the server . If the server has gone under hostile control, then it may send malicious JavaScript that pretends to do the hashing, but does not. The user will be none the wiser. Thus, the root problem is not solved. Client-side hashing is usually envisioned under the name of "server relief", not to increase security against evil servers, but to allow the server to handle many concurrent users without spending all its CPU on a lot of hashing. JavaScript being what it is, this is not commonly done nowadays. | {
"source": [
"https://security.stackexchange.com/questions/96664",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83308/"
]
} |
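A sketch of the point made in the answer above: whatever the client submits is password-equivalent, so the server must still run it through a slow, salted hash before storing it. Only the standard library is used; the iteration count and the framing are assumptions for illustration, not a drop-in scheme.

```python
import hashlib
import hmac
import os

def store(client_submitted: bytes) -> tuple[bytes, bytes]:
    """What the server keeps: a random salt and PBKDF2 of whatever the client sent."""
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", client_submitted, salt, 600_000)
    return salt, stored

def verify(client_submitted: bytes, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", client_submitted, salt, 600_000)
    return hmac.compare_digest(candidate, stored)

# Even if the client pre-hashed the password in JavaScript, that pre-hash is
# what grants access, so it gets the same treatment as a plaintext password.
client_value = hashlib.sha256(b"correct horse battery staple").digest()
salt, record = store(client_value)
assert verify(client_value, salt, record)
```

The iteration count here is an assumption; pick it from current PBKDF2 guidance, or use bcrypt, scrypt, or Argon2 where available.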
96,713 | Windows 10 is perhaps the most Internet-connected and cloud-centric operating system released by Microsoft to date. This, of course, has caused many users to be concerned about how the OS respects their privacy (or doesn't). Multiple sources are now claiming that this OS reports user data to Microsoft which could be violating the users' assumptions of privacy. (A couple of examples are linked below.) How legitimate are these concerns and claims? Is Microsoft actually collecting data about Windows 10 users' location and activity? Are they actually authorized to do so, simply by a user's acceptance of the EULA? I'm aware that Windows 10 sends malware files to Microsoft for analysis. This is a common and generally-accepted practice for most antivirus products, and antivirus is known to be integrated into this OS. What about the other information? TechWorm - Microsoft’s Windows 10 has permission to watch your every move BoingBoing - Windows 10 automatically spies on your children and sends you a dossier of their activity | Microsoft Windows Pre-Release Preview (aka Windows Insiders) Privacy Statement, January 2015 : (no longer applies) When you acquire, install and use the Program software and services,
Microsoft collects information about your use of the software and
services as well as about the devices and networks on which they
operate. Examples of data we may collect include your name, email
address, preferences and interests; location, browsing, search and
file history; phone call and SMS data; device configuration and sensor
data; voice, text and writing input; and application usage. For
example, when you: install or use Program software and services, we may collect information about your device and applications and use it for purposes
such as determining or improving compatibility (e.g., to help devices
and apps work together), when you use voice input features like speech-to-text, we may collect voice information and use it for purposes such as improving
speech processing (e.g., to help the service better translate speech
into text), when you open a file, we may collect information about the file, the application used to open the file, and how long it takes to use it
for purposes such as improving performance (e.g., to help retrieve
documents more quickly), or when you input text, handwrite notes, or ink comments, we may collect samples of your input to improve these input features, (e.g.,
to help improve the accuracy of autocomplete and spellcheck). This is so serious that even some political parties here in France that have nothing to do with technologies denounced Microsoft Windows 10 practices. A member claimed that the statement above does not concern the shipped version of Windows 10. Well: We have not been provided any proof that Microsoft removed all those monitoring modules of its Windows 10 beta version in the final release. And, since Windows is closed-source, there's no way for us to check ourselves. The media has reported a history of Microsoft spying as its practice (e.g. Microsoft, China clash over Windows 8, backdoor-spying charges , also NSA Built Back Door In All Windows Software by 1999 ). For the shipped version of Windows 10, we can see the same information with smoother words: Privacy Statement Additionally, after the release of the shipped version of Microsoft Windows 10, this is what was written in Microsoft Windows 10 Privacy Policy: We will access, disclose and preserve personal data, including your
content (such as the content of your emails, other private
communications or files in private folders), when we have a good faith
belief that doing so is necessary to protect our customers or enforce
the terms governing the use of the services, Only by the start of this August, and after lot of organizations and even political parties complained about Windows 10 being a spyware, Microsoft changed its privacy policy statement to softer terms to which I linked to. But is this change of policy statement followed by retrieving Windows 10 from the market and replacing it by a new one? Of course not. Note that the last paragraph I quoted is only still available in external websites including famous newspapers by the start of this August (which thing means after Microsoft started already to sell its Windows 10), but we do not find this paragraph anymore in the updated version of the privacy policy statement anymore. So Microsoft removed it already. Update: From Windows 10 feedback, diagnostics, and privacy: FAQ (shipped version of Windows 10, NOT Pre-Release Preview), we can also read regarding Diagnostics Tracking Service : As you use Windows, we collect performance and usage information that
helps us identify and troubleshoot problems as well as improve our
products and services. We recommend that you select Full for this
setting. Basic information is data that is vital to the operation of Windows. This data helps keep Windows and apps running properly by
letting Microsoft know the capabilities of your device, what is
installed, and whether Windows is operating correctly. This option
also turns on basic error reporting back to Microsoft. If you select
this option, we’ll be able to provide updates to Windows (through
Windows Update, including malicious software protection by the
Malicious Software Removal Tool), but some apps and features may not
work correctly or at all. Enhanced data includes all Basic data plus data about how you use Windows, such as how frequently or how long you use certain features
or apps and which apps you use most often. This option also lets us
collect enhanced diagnostic information, such as the memory state of
your device when a system or app crash occurs, as well as measure
reliability of devices, the operating system, and apps. If you select
this option, we’ll be able to provide you with an enhanced and
personalized Windows experience. Full data includes all Basic and Enhanced data, and also turns on advanced diagnostic features that collect additional data from your
device, such as system files or memory snapshots, which may
unintentionally include parts of a document you were working on when a
problem occurred. This information helps us further troubleshoot and
fix problems. If an error report contains personal data, we won’t use
that information to identify, contact, or target advertising to you.
This is the recommended option for the best Windows experience and the
most effective troubleshooting. Note that only on Enterprise Edition one can turn Diagnostics Tracking Service off totally. Diagnostics Tracking Service available in Windows 8.1, Windows Server 2012 R2, Windows 7 Service Pack 1 (SP1), and Windows Server 2008 R2 SP1 and Windows 10. The quoted paragraphs concern the Diagnostics Tracking Service mechanism in which other modules, apart from Telemetry, are included. Diagnostics Tracking Service consists in these files: telemetry.asm-windowsdefault.json diagtrack.dll utc.app.json utcresources.dll Note that the answer below claiming that nothing private is collected by Windows 10 as a qualified user may listen to the traffic of his Windows operating system is wrong. It is impossible to know what Windows collects and sends permanently. Windows does not stop sending information on his/her behalf as this study shows: Even when told not to, Windows 10 just can’t stop talking to Microsoft. But still what the official documentation describes is not very good for the user such as when Windows takes system files or MEMORY SNAPSHOTS , which may unintentionally include PARTS OF A DOCUMENT YOU WERE WORKING ON on when a problem occurred (From: What are the privacy and security implications of Windows Telemetry ) | {
"source": [
"https://security.stackexchange.com/questions/96713",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
96,761 | When I was reading a page on Wikipedia several months ago (December 2014) I saw what looked like a pop-up window from BT , but I soon realized that when I closed the page the pop-up disappeared. I then opened Firebug and inspected the box and saw that it was actually part of the webpage itself and that clicking its confirmation button would take me somewhere I didn't want to go. It was very cleverly disguised to look real. I've never before seen anything like this. Is this the only case or has this been known to happen? I've provided a screenshot to show what it looked like: | Assuming that you are coming from a BT connection, it's possible that this is part of the BT parental controls program. There is a discussion of a similar looking pop-up here , which seems to tie into what you're seeing, and also a thread here on the BT site which has a link to a process to turn off that setting. To test this theory you could log into your account and opt-out of parental controls. I wouldn't advice doing it from any link presented in the pop-up in case it is malicious though. Once you've done that try accessing the same page to see if the issue re-occurs. Another way to test it would be to access the site over HTTPS as then they shouldn't be able to inject any content unless they've installed a trusted root certificate in your browser, for which you would have needed to install BT software on the affected system. As to your original questions about criminals injecting content into legitimate websites, this is a common vector of attack, either via compromise of the website (e.g. through exploitation of outdated software) or via content injected into the site such as adverts (this is common enough to have its own Wikipedia definition " malvertising ") As to whether it could happen to Wikipedia, I'm not aware of that occuring and with Wikipedia being such a well trafficked site, I'd expect it to be a large target. | {
"source": [
"https://security.stackexchange.com/questions/96761",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83279/"
]
} |
96,765 | Let's say someone is logging onto Dropbox in the library, but the computer has a keylogger on it that steals their password. Fortunately, they are using two factor authentication, which could have prevented this... almost. But there's a checkbox "trust this computer". The user doesn't check it, but the keylogger is able to check it silently, giving them a token which gives permanent access to their account. Even if this isn't the main purpose of two factor authentication, wouldn't its utility be massively improved without this checkbox? There are other ways to allow the user to trust the computer they are on, such as texting them two codes, one for temporary access and another for permanent. That way the decision of whether to allow permanent access would be entirely up to the user, and the keylogger wouldn't be able to interfere with it. | Assuming that you are coming from a BT connection, it's possible that this is part of the BT parental controls program. There is a discussion of a similar looking pop-up here , which seems to tie into what you're seeing, and also a thread here on the BT site which has a link to a process to turn off that setting. To test this theory you could log into your account and opt-out of parental controls. I wouldn't advice doing it from any link presented in the pop-up in case it is malicious though. Once you've done that try accessing the same page to see if the issue re-occurs. Another way to test it would be to access the site over HTTPS as then they shouldn't be able to inject any content unless they've installed a trusted root certificate in your browser, for which you would have needed to install BT software on the affected system. As to your original questions about criminals injecting content into legitimate websites, this is a common vector of attack, either via compromise of the website (e.g. through exploitation of outdated software) or via content injected into the site such as adverts (this is common enough to have its own Wikipedia definition " malvertising ") As to whether it could happen to Wikipedia, I'm not aware of that occuring and with Wikipedia being such a well trafficked site, I'd expect it to be a large target. | {
"source": [
"https://security.stackexchange.com/questions/96765",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1371/"
]
} |
96,798 | If there's one thing the Internet is good at, it's keeping track money and knowing where it has moved. Ransomware attacks typically request payment for a decryption key. How is it with such an easily tracked transaction that authors of ransomware aren't prosecuted more often? | They use alternative methods from traditional credit card/wire transactions, namely prepaid cards and crypto-currency like bitcoin. This is because it's much easier to stay hidden this way. This article outlines more information if you're interested: TeslaCrypt: Following the Money Trail and Learning the Human Costs of Ransomware | {
"source": [
"https://security.stackexchange.com/questions/96798",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/82232/"
]
} |
96,814 | I am quite new to creating services using PHP-cURL. I want to offer some data/service to only registered domains. I know that I can identify these domains using app ID and API keys for data encryption. However, I was thinking about different types of threats. What if some of my customers give their credentials to another domain? How can I validate caller domain (or any other solution) so that the credentials will be used only by that domain? I really tried to detect exact domain name that is calling my service, but I fail every time, because anyone can set a fake domain name in the header. I was thinking about using IP, but that would be a really complex solution. Also, I don't want to limit calls. I just want to be sure about which domain I am giving data. | They use alternative methods from traditional credit card/wire transactions, namely prepaid cards and crypto-currency like bitcoin. This is because it's much easier to stay hidden this way. This article outlines more information if you're interested: TeslaCrypt: Following the Money Trail and Learning the Human Costs of Ransomware | {
"source": [
"https://security.stackexchange.com/questions/96814",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83446/"
]
} |
96,985 | I'm working on a web app that notifies users on whether or not the JavaScript that they entered is malicious. I'm using this article ( Examples of malicious javascript ) for reference. Is it possible to create an equation with coefficients -coefficients that include occurrences of uncommon characters- that can give a strong indication that a certain piece of code is malicious/benign? If not, is there another method that I could use? | You are trying to fulfill something impossible. If it is that easy, web malware would be dead few decades ago. If you want to use mathematical tools to track malicious JavaScript code, you need first to know which features are employed by JavaScript malware. Once you understood these features, you may guess that it will be impossible to factor anything meaningful in one or several mathematical equations; so let's throw a glance over the employed and common features of JavaScript attacks: Server side polymorphism Literally meaning many shapes, polymorphism is a technique used by malware authors to evade signatures based detectors. Polymorphism is qualified as being server sided when the engine which produces several but different copies of the malware is hosted on a compromised web server (Server-Side Polymorphism:
Crime-Ware as a Service Model (CaaS)) . simulated metamorphic encryption generator (SMEG) version 1.0 was the first engine developed to implement the notion of polymorphism for computer viruses on the early 1990's ( Parallel analysis of polymorphic viral code using automated deduction system ) Code obfuscation The other common feature you may find in malicious JavaScript code is that obfuscation is always used. This common factor -obfuscation- does not make even things simpler: because innocuous JavaScript code also uses obfuscation (for instance, some developers for example do not want their personal pretty JavaScript function to be understood by others as you can easily read HTML and JS pages codes). Along with server side polymorphism, code obfuscation is a widely used technique by malware authors to circumvent antivirus scanners. A myriad of techniques could be used to obfuscate JavaScript codes such as string reversing, Unicode and base 64 encoding, string splitting and document object model (DOM) interaction ( Malware with your Mocha? Obfuscation and anti-emulation tricks
in malicious JavaScript. ). Code unfolding Code unfolding is the mechanism with which a new code is introduced at run time. In JavaScript, this is made concrete by invoking functions like document.write() and eval() in order to execute obfuscated portions of code and functions. ( Weaknesses in Defenses Against Web-Borne Malware ) Heap spray This attack targets mainly web browsers. The user controllable data can corrupt the heap by a remote execution code if the miscreant has compromised the user's computer to the point he can have access to this vulnerable memory area ( BuBBle: A Javascript Engine Level Countermeasure against Heap-Spraying Attacks ) Drive-by download Drive-by download attacks consist in downloading and and executing or installing malicious programs without the user's consent. Such attacks occur by exploiting browsers' vulnerabilities, their add-ons or plugins such as ActiveX controls or unpatched useful software such as Acrobat Reader and Adobe Flash Player ( Drive-by download attacjs: effect and detection methods, MSc Information Security ) Multi execution paths It is possible to trigger an action only if certain conditions are fulfilled. Such circumstances could be the arrival of a given date or the existence of a file on the system on which the malware is intended to be executed. An other quick and well known example could be a denial of service attack that must be fired only if the number of the botnet's nodes has reached a certain value. That
is the notion of multi execution paths ( Exploring Multiple Execution Paths for Malware Analysis ) Implicit conditionals This technique is mainly used against dynamic approach detectors. The main idea for this process is to execute a set of instructions by hiding the condition that fires it ( Weaknesses in Defenses Against Web-Borne. Malware ) Given these common features and tactics used by JaaScript malware, if you want to detect this type of malware as you asked, you need first to study the state of the art of the methods used to detect that. Various methods have been developed so as to detect web (JavaScript) malware. We can divide them into two main categories as follows: Machine learning based classifiers Features : HTML and JavaScript codes distinguishing features extraction. These features are then evaluated to train a machine learning for classifier generation. The premise of this approach is that malicious webpages are likely to be different from benign ones (Thesis: Effective Analysis, Characterization, and Detection of Malicious Web Pages ) Advantages : Lightweight approach, useful to deal with a bulk of websites analysis. Drawbacks : Obsolete against obfuscated JavaScript code and totally useless against new malicious code patters or zero attacks. Dynamic methods Features : Based on the dynamic behavior analysis, these techniques are implemented using either proxies where a page is rendered to the visitor only after its safety is checked, or a sandboxing environment relying on honeyclients (Same thesis: Effective Analysis, Characterization, and Detection of Malicious Web Pages ). Advantages : Efficient against zero day attacks and obfuscated code. Drawbacks : Resources and time consuming. Sandboxing environments rely on low interaction honeyclients which themselves are based on virus signatures, and thus suffer from the same disadvantages as the static methods' ones. What you have tried to do belongs to the first category. Now, after you are well informed about all this, it can be useful for you to study some available tools dedicated for this purpose in order to implement your own technique. So let me mention you three important tools among so many others: Zozzle Zoozle relies on Bayesian classification abstract syntax tree (AST) . It is legitimately classified as mostly static web malware detector because it embeds another engine that supervises the JavaScript code execution at run time. Its authors claim that it has a very low false positive rate of 0.0003% and is able to process over one megabyte of HTML and
JavaScript code per second. This tool is intended to be used as a browser plugin; its aim is to protect browsers against heap spray attack. It is time to point out how ZOZZLE operates. How ZOZZLE operates? The following figure summarizes its core ( ZOZZLE: Fast and Precise In-Browser JavaScript Malware Detection ): Extraction and labeling phase: The classifier needs training data. This data is extracted from obfuscated JavaScript code. Instead of developing an efficient de-obfuscation technique, Compile function interception calls is performed. Compile function is located in jscript.dll library. It is a smart way to obtain plain JavaScript code because it is called each time <SCRIPT> and <IFRAME> tags, or eval() and document.write() functions have been called, which thing defines also the code context. Each code context is saved on the hard drive for further analysis. Feature selection : JavaScript AST is used to tag each labeled context code for its safety or malignancy. The features are pre-selected using this formula: Where: A : malicious context with feature B : benign context with feature C : malicious context without feature D : benign context without feature Classification : The Bayesian classifier is used for classification because even if it seems obsolete, in practice it gives good results and it is not time consuming. Profiler Profiler follows the static schema to detect web malware. It combines static features analysis of HTML and JavaScript code, including unified resource locator (URL)s. Then it uses machine learning techniques to teach a classifier that decides if a webpage embeds malicious content or not. Suspicious webpages are not processed by this tool. It rather forwards them to third party
technologies such as Wepawet ( Prophiler: A Fast Filter for the Large-Scale Detection of Malicious Web Pages ) SpyProxy SpyProxy follows the dynamic analysis principles. It monitors the active content of webpages within a virtual machine before deciding to render them to the visitor or not. The architecture of SpyProxy is illustrated through this figure ( SpyProxy: Execution-based Detection of Malicious Web Content ): ( a ): The proxy performs a static analysis over the requested page. In the case it judges is likely to be malicious, if forwards it to the virtual machine. basically only pages with active
content are forwarded to the virtual machine (VM). ( b ): The virtual machine loads the malicious pages to monitor their activities. ( c ): Only benign pages are rendered back to the proxy which forwards them in turn to the user's browser. Iceshield ICESHIELD performs in-line dynamic code analysis using a set of heuristics to verify attack attempts. Its authors take an inventory of the attacks that usually target the DOM properties of a website that are performed by injecting JavaScript into the website's source code. ICESHIELD supervises the running JavaScript code by predefining a set of rules related to functions calls and
applying heuristics on them in the hope to determinate whether the script is malicious or not ( IceShield: Detection and Mitigation of Malicious Websites with a Frozen DOM ). | {
"source": [
"https://security.stackexchange.com/questions/96985",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83638/"
]
} |
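The ZOZZLE walkthrough above defines four counts (A, B, C, D) for feature selection, but the formula itself appears to have been an image that did not survive extraction. As a hedged stand-in, here is the usual chi-squared score for a 2x2 contingency table, which is the kind of statistic the description implies; the exact normalisation in the cited paper may differ, and the counts below are invented.

```python
def chi_squared(a: int, b: int, c: int, d: int) -> float:
    """2x2 contingency chi-squared score.
    a = malicious contexts with the feature, b = benign contexts with it,
    c = malicious contexts without it,       d = benign contexts without it."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c) ** 2 / denom

# Invented counts for one candidate AST feature
score = chi_squared(a=120, b=3, c=30, d=2000)
print(score)   # a high score means the feature separates malicious from benign contexts
```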
97,064 | I have always heard that email is an insecure method of communication; I assume this has something to do with the email protocol itself. But when sending an email from one Gmail account to another, Google has complete control over how the email is transmitted, and Google seems decently concerned about information security. So it seems that they could , if they wanted, turn Gmail-to-Gmail messages into a secure communication channel and that this would probably be in their best interest. So have they done this? If not, why not? Is Gmail-to-Gmail communication still insecure (for the purpose of, say, sending a credit card or social security number to a trusted recipient)? | Email is historically considered insecure for two reasons: The SMTP network protocol is unencrypted unless STARTTLS is negotiated, which is effectively optional The mail messages sit unencrypted on the disk of the source, destination, and any intermediate mail servers Google mail servers all speak STARTTLS if possible, so for gmail-to-gmail the transmission step shouldn't be a concern. However, the sending server stores an unencrypted copy of the email in your Sent folder. The receiving server stores an unencrypted copy in the recipient's Inbox. This leaves them open to various threats: Rogue Google employees reading that email Google choosing to read that email despite their assurances to the contrary Governments forcing Google to hand over that email Hackers breaking into Google and accessing that email If you can trust everything to go right, then gmail-to-gmail is perfectly secure. But you can't always expect everything to go right. For these reasons, the security and privacy community long ago reached the stance that only end-to-end email encryption is secure . That means the email remains encrypted on server disks and is decrypted when you're reading it, and never stored decrypted. There have been an enormous number of comments, so let me expand/clarify a few things. End-to-end encryption - in the context of email, when I say end-to-end encryption I mean something like PGP, where the message is encrypted until it reaches the recipient's email client, and only decrypted to be read. Yes, this means it can't be searched on the server, and often also means it doesn't remain "backed up" on the server either. This is a case where security and functionality are at odds; pick one. Security and privacy community - unlike many Information Security topics, email security is one that extends out to other communities. The question of what stateful inspection in a firewall means is not something often extended out to interest others, for example. But email security is of direct, significant interest to Human rights workers Whistleblowers Insurgents Forget about credit card data, there are people trying to communicate with email whose lives, and the lives of their families, depend upon the security of the email. So as there are phrases in the comments below like "depends upon what your standards are for 'secure'", "sufficiently motivated adversary", "there is an illusion of security at the email-level" - am I being too strong to say the server can't be trusted? Not for people whose lives are at stake. That's why the phrase "email is insecure" has been the mantra of the privacy movement for 20 years. Trusting the server - In the US, " your cap for liability for unauthorized charges on a credit card is $50 " so you may well be happy trusting the server with your credit card. 
If you're cheating, on the other hand, you might lose a lot more as the result of leaving unencrypted email on the server . And will your service provider shut their doors to protect your privacy ? Probably not. STARTTLS - STARTTLS is SSL for email; it uses the same SSL/TLS cryptographic protocol to encrypt email in transit. However, it is decidedly less secure than HTTPS for several reasons: STARTTLS is almost always "opportunistic", meaning that if the client asks and the server supports it, they'll encrypt; if either of those things are not true, the email will quietly go through unencrypted. Self-signed, expired, and otherwise bogus certificates are generally accepted by email senders, so STARTTLS provides confidentiality but almost none of the authentication. It's relatively trivial to Man-In-The-Middle email if you can get in between servers on the network. | {
"source": [
"https://security.stackexchange.com/questions/97064",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45294/"
]
} |
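The answer above explains that SMTP encryption hinges on the optional STARTTLS extension. A minimal standard-library sketch that asks a mail server whether it advertises STARTTLS; the hostname is a placeholder, and many networks block outbound port 25, so treat it purely as an illustration.

```python
import smtplib

host = "mx.example.org"   # placeholder MX host; outbound port 25 is often blocked

with smtplib.SMTP(host, 25, timeout=10) as server:
    server.ehlo()
    if server.has_extn("starttls"):
        server.starttls()          # upgrade the connection to TLS
        server.ehlo()              # re-identify over the now-encrypted channel
        print("STARTTLS negotiated")
    else:
        print("Server did not offer STARTTLS; mail would travel in the clear")
```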
97,157 | Well, it all started when I wanted to take the sim card out of my tablet and I was about to shut it down so that I could take the simcard out.
But then a thought raced through my mind, "What if I take out the simcard before I turn off the tablet and go to google and see whether I can still search for something?" So I quickly took out the simcard and I typed a word in the browser I have never searched for before(I checked my history to confirm) and....... It loaded and displayed a webpage! I clicked on a website very quickly and it also began to load it was almost through but before it fully opened the tablet quickly turned itself off. This proves a theory I once had:Sim cards are identified on their individual operator networks using the IMSI. Mobile network operators connect mobile phone calls and communicate with their market SIM cards using their IMSIs using a certain format.(Actually that's the factual part not the theory part I was talking about) (Now the theory part) That IMSI format that is stored in the simcard is accessed by the phone as it transmits that data using a radio link to a cellular network .Now as the device sends this data it stores the same data along with the encryption key of the simcard on its RAM. Since the device here is the middle guy he can store the information being transferred without the simcard or the carrier. Meaning when the device obtains the encryption key it stores it on its RAM as it passes it to the mobile operator requesting access and authentication and once the mobile is granted access to the operator's network it stores the encryption key because the encryption key is used to encrypt all further communications between the device and the network hence having no need for the simcard anymore. Which explains why when I removed my simcard from the device I was able to surf the net. But as a security procedure(one that worked terribly slow allowing me to surf the net for a brief period of time) the tablet turned itself off, and since it was stored in the RAM once it turns off the information is lost. But isn't this a great security concern? What if someone made a program that overrides the devices settings allowing the stored information to be continually used without having the simcard? Basically what I'm asking is that, isn't it a great security concern for devices to function this way? | As you found out, a SIM card is only required for initializing a connection to the mobile carrier and is not required anymore until the device loses the connection and needs to reconnect (which happens very frequently with mobile devices when you move them around). Your device might power down when the SIM card is removed, but there is no good reason why it must do that. But cloning a SIM card is not as easy as you think. Every SIM card stores an unique Authentication Key which is only known to the network carrier. This key can not be read through normal means. During the connection process, the carrier sends a random number to the device. The SIM card then uses a cryptographic function which takes that random number and the authentication key as inputs and outputs a new number based on these. This function happens inside the SIM card , not on the device, so the device never processes the authentication key. That number is then sent to the network carrier. The same happens on the carriers side, and when the numbers don't match the connection attempt is aborted. The calculation is (relatively) cryptographically secure, so it is not (easily) possible to reverse-engineer the authentication from observing which random number gets which response from the SIM card. It has some vulnerabilities, though . | {
"source": [
"https://security.stackexchange.com/questions/97157",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83736/"
]
} |
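The answer above describes the GSM-style challenge-response: the carrier sends a random number, the SIM combines it with the secret authentication key inside the card, and only the result leaves the card. Below is a minimal Python sketch of that idea; HMAC-SHA256 is used purely as a stand-in for the real in-card algorithms (A3/A8), and the key sizes and variable names are illustrative assumptions, not the actual GSM parameters.

import hashlib
import hmac
import os

KI = os.urandom(16)  # secret Ki: shared only by the SIM card and the carrier

def sim_response(ki: bytes, challenge: bytes) -> bytes:
    # Stand-in for the algorithm executed *inside* the SIM; the handset never sees ki.
    return hmac.new(ki, challenge, hashlib.sha256).digest()[:4]

def carrier_authenticates(ki_on_record: bytes) -> bool:
    rand = os.urandom(16)                      # RAND challenge sent to the device
    sres_from_device = sim_response(KI, rand)  # device forwards RAND to the SIM and relays the answer
    return hmac.compare_digest(sres_from_device, sim_response(ki_on_record, rand))

print(carrier_authenticates(KI))              # True: the carrier's copy of Ki matches
print(carrier_authenticates(os.urandom(16)))  # False: a clone without the real Ki fails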
97,249 | I have read from multiple sources that it might be better to have a password composed of several random words since this is easier to remember than a random sequence of characters. For example this article from Thomas Baekdal . I even see this xkcd comic quite often. Now, I read this article about a new tool called brainflayer, currently target Bitcoin wallets, that can guess 130000 passwords a second. This makes Bitcoin brainwallets useless. I wonder if a similar tool could be used against all passwords and are passwords such as "this is fun" really as safe as Thomas Baekdal claims? | I wrote brainflayer and gave a talk about it at DEFCON. Neither Thomas Baekdal's article nor XKCD's comic apply well to modern offline attacks. I read Thomas's article and his FAQ about it, and it may have been marginally reasonable when he wrote it, it no longer is. A key point is that password cracking attacks have gotten much better since then. Q: If I cannot write "this is fun" because of the spaces, can I not just write "thisisfun"? A: Absolutely not! The reason why "this is fun" is 10 times more secure, is simply because it is much longer (11 characters). By removing the spaces, you reduce the length and the complexity substantially. The spaces are effectively special characters, which in itself makes the password much more secure. Use "this-is-fun" instead. Password crackers don't try long brute force attacks much - it's all about cracking ROI. A smart cracker will try word combinations with various delimiters, so using spaces, hyphens, underscores or nothing all ends up providing about the same security. Today's cracking methods use wordlists - which can include phrases - and large corpuses of previously compromised passwords along with popularity. This is combined with rule-based permutation and statistical models. Ars Technica posted a great article detailing modern techniques mid-2013, and attacks only get better. I am also of the (possibly controversial) opinion that it is pointless to talk about guesses per second for offline attacks. A much better way of thinking about it is guesses per dollar . If you want to be pedantic you could add a one-time guesses per second per dollar cost, but the operational cost will tend to dominate. Brainflayer's upper bound on operational cost is 560M guesses per dollar, based on EC2 spot instance benchmarks - with zero one-time cost. It's possible to make these costs many orders of magnitude higher with a "harder" hash function like bcrypt , scrypt , PBKDF2 or, once it is finalized, Argon2 . | {
"source": [
"https://security.stackexchange.com/questions/97249",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83825/"
]
} |
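The "guesses per dollar" framing in the answer above is easy to make concrete. The sketch below reuses the quoted upper bound of 560M guesses per dollar to estimate the worst-case cost of exhausting a few passphrase spaces; the word-list sizes are illustrative assumptions, not measurements of any real cracking run.

GUESSES_PER_DOLLAR = 560_000_000  # upper bound quoted in the answer (EC2 spot instances)

def cost_to_exhaust(keyspace: int) -> float:
    """Worst-case dollars to try every candidate once."""
    return keyspace / GUESSES_PER_DOLLAR

scenarios = {
    "3 words from a 10,000-word list": 10_000 ** 3,
    "4 words from a 10,000-word list": 10_000 ** 4,
    "6 Diceware words (7,776-word list)": 7776 ** 6,
}

for name, space in scenarios.items():
    print(f"{name:36s} ~${cost_to_exhaust(space):,.0f}")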
97,296 | Does the act of requiring certain criteria for passwords make them easier to brute-force? It's always seemed to me that when websites limit the use of "insecure" passwords, it might make it easier for the passwords to be brute-forced because it removes the need for attackers to check any of those passwords. The most basic of these requirements (and probably the most common) would be the need for a password to be 8 characters or longer. It has been discussed in a few other topics: Do password complexity requirements reduce security by limiting search space? Insecure to require numbers in passwords? Doesn't imposing a minimum password length make the password weaker by reducing the number of possible combinations? The general consensus on this is that requiring longer/harder passwords doesn't necessarily make them easier to crack because most of the passwords it is not allowing aren't probable passwords anyway (because the great majority of people wouldn't use a random string of characters). I still feel like most people are probably using a password that is the required length or only 1 or 2 characters over the limit. Assuming the use of only alphanumeric characters, requiring 8+ characters removes about 3.5 trillion password possibilities (most of them would just be random gibberish). This leaves ~13 quadrillion passwords that are 8-9 characters. My main question is: Would it make more than a negligible difference in security for websites to only have password requirements sometimes? Example: Maybe 1/100 attempts to create a password would not need to meet a certain criteria, which would require attackers to test all passwords because of the possibility that the password is less than 8 characters | One related question that you missed in your list is this one: How critical is it to keep your password length secret? The accepted answer there (disclaimer: mine) shows that if you have a password scheme which allows all 95 printable ascii characters, then the key space ramps insanely quickly every time you increase the length of the password by 1. You can check all the passwords up to length N in about 1% of the time that it'll take you to check only passwords of length N+1. By rejecting any password shorter than some cutoff length, you give up far less than 1% of your key space. So, I strongly second @Iszi in saying The benefit gained by forcing increased length far outweighs the number of possible passwords lost. Next point: let's get out of the idea that 8-characters is long for a password. It is not. You say "~13 quadrillion passwords" as if that's a big number. It is not. According to this article (which is a great read btw) his password cracking rig could make 350 billion guesses per second, so every single one of your ~13 quadrillion passwords can cracked one-by-one in ~10 hours. And that's on 2013 hardware, GPUs have come up a lot in power since then. My opinion is that websites can squabble about who has the better password requirements, but they are all far too weak . Our ability to crack passwords is growing WAY faster than our ability to remember longer ones. This is because security is clashing with usability. Try telling anybody who's not a tech nerd that they need to memorize a 32-character password that doesn't contain any English words, and a different one for each account they have! You'll be laughed at and then ignored. Websites that try to enforce anything better than pathetic password policies have to deal with mountains of angry customers. 
The solution is to do away with passwords altogether and move towards strong 2-factor type authentication, where offline cracking isn't feasible. Unfortunately companies have only been seriously thinking about alternatives to passwords for less than a decade and the offerings are far from polished (they are plagued with convenience and usability problems which are preventing mass adoption), so in the meantime we get to continue having these useless debates comparing one mostly useless password scheme against another. End opinion. | {
"source": [
"https://security.stackexchange.com/questions/97296",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83862/"
]
} |
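The "about 1%" claim in the answer above falls straight out of the arithmetic: with 95 printable characters, all passwords of length up to N together amount to roughly 1/94 of the passwords of length exactly N+1. A few lines of Python make the ratio visible.

ALPHABET = 95  # printable ASCII characters

def keyspace_up_to(n: int) -> int:
    """Number of candidate passwords of length 1 through n."""
    return sum(ALPHABET ** k for k in range(1, n + 1))

for n in range(6, 13):
    all_shorter = keyspace_up_to(n)
    exactly_next = ALPHABET ** (n + 1)
    print(f"length <= {n}: {all_shorter:.3e}   "
          f"length {n + 1}: {exactly_next:.3e}   "
          f"ratio: {all_shorter / exactly_next:.2%}")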
97,333 | Around a year ago I have asked a question about the weakest factor of authentication . I have had some good answers that convinced me as I always imagined the authentication process in my head as some employee in a high security facility trying to get access to his office by entering his pin or someone trying to login into his PC by entering his password but the answers make little sense if we were talking about a vehicle. Car keys can get easily lost or stolen by a stranger you met in some pub but it's highly unlikely that you shout your password while you are sleep talking It's a big hassle and an expensive process to change your car keys; Passwords are very easy to change. As you can tell from the other question, the biggest issues with passwords (according to the answers I received) were: If someone has your password, you may not be able to tell that they are actively exploiting that knowledge. Passwords enable random guessing, offline dictionary search, and other attacks. Well... That's true if someone were spying on your system, but if a stranger had your car keys I don't think they would return your car and if they did, you will be able to tell that someone else had access to your car. Having the car locked for 5 minutes after three failed attempts is a pretty good solution. Are you in hurry to go to work? Get inside the house and get the master physical key; having a master physical key that overrides the password system is a good rescue solution, but not when you carry it with you all the time. Carrying the authentication secret in your head is safer than carrying it in your pocket. Few other things that come to my mind which makes me wonder why I've never seen a car with a password You can always use your car as a getaway car in a bank robbery and you later claim that you have lost the keys and it was not you; you can't do that with a password. A similar idea has been introduced by an infosec expert got turned down the other day on Dragons' Den even when he has invented a nice combination of a device that get attached to the car engine and a mobile app. The mobile app is superior to your physical key and you can't start the car without the app, even if you have the key. Peter Jones attacked the idea based on the fact that your mobile might run out of charge; the authentication system of the car would never run out of charge as it gets powered by the car battery; it's replaceable, protected and if it's down, the car is down anyway and you can't blame the authentication system. | Poor password choices The primary threat that a car lock protects against is theft of the car or of objects inside the car. Most theft is opportunistic, not targeted: go to a parking lot, try multiple cars until you find a poorly protected one. With passwords or PIN, you know that many people are going to pick password or 1234 or for the more paranoid their date of birth. Locking a car after failed attempts doesn't matter: the thief will just try the three most likely values on each car then move on. Additionally, force-locking the car after failed attempts would be annoying if your kid starts mashing the buttons. Shoulder surfing Typing a password is vulnerable to shoulder surfing. It's hard to duplicate a physical key solely from pictures (it can be done, but only with precise enough pictures). It's impossible for an unaided human to duplicate a physical key. It's easy for an unaided human to remember the PIN they've just seen somebody type. 
Pass by someone in a parking lot, note the PIN, see them the next day/week around the same time, profit. Loaning I can loan my car keys to someone. When they give me back the key, I can be reasonably confident that they no longer have access to my car. Sure, they might have duplicated the keys, but that requires time (if they only borrow the car for a short time, I know they haven't done it), and if I trust them enough to loan my car, I probably trust them not to copy the keys. If there's a single password to open the car, then if I let someone use my car, they have access forever. This can be solved by having multiple passwords to open the car, of course. But that adds another set of difficulties. One is that the key space might need to be larger: with a small key space such as a 4-digit PIN, the probability of an uninformed guess can become non-negligible with multiple valid codes. A bigger difficulty is that this requires Joe Random to do key management. Joe Random's VCR blinks 12:00 since the last power failure. (Maybe not anymore with DVR that have an Internet connection.) Joe Random understands physical tokens — if I have the object in my hand, I control it — but not password management. | {
"source": [
"https://security.stackexchange.com/questions/97333",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
97,342 | If you install some Linux distribution on your computer, is there a possibility that its owners may spy on you? For example, can Ubuntu, Kali or Arch Linux send data to their owners about what you are doing? Speaking about Kali, I guess it would be interesting for the NSA or any other organization to know why you need it and what you are doing with it... It is just a matter of security for them... | Any time you execute code acquired from someone that you haven't fully reviewed and it runs on an Internet-connected system, there is a risk that the person who wrote or deployed that code could transmit data about your usage to another system. That's true regardless of the OS. So yes, it's possible. The question then becomes "has this happened in the past", and "is it likely to happen in the future". The only case that springs to mind where people may have been sending data from a Linux distro to 3rd parties inadvertently was the Ubuntu Shopping Lens, which could be regarded as spyware and was by some. Outside of that, I'm not aware of any instances of large-scale spying from the mentioned Linux distros. As you say, pentest distros like Kali are obviously an inviting target, but then their users are more likely to notice an indiscriminate transmission of data from their systems. Ultimately it comes down to trust. By executing code belonging to someone else you are trusting them (think of trust in this context as "the power to betray") with any data you enter into that system. How you establish trust in a system is a really good question and one which, as far as I can see, is far from answered. | {
"source": [
"https://security.stackexchange.com/questions/97342",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83902/"
]
} |
97,377 | If I want to save a hash of a certain number of bits, but my hashing algorithm gives me more bits than I need, then what is the safest way to shorten it? Should I: Just remove the first or last few bits XOR the first or last few bits with the part I'm keeping Do something else Does it matter how I'm shortening it, or does it depend on the hash algorithm I'm using? Does this reduce the security of the hash (ex.: is a 128-bit hash trimmed to 64 bits worse than a 64-bit hash of the same algorithm)? | Simply truncating a hash is the common and accepted way to shorten it. You don't need to do anything fancy. There are plenty of questions here, and on crypto.stackexchange about whether doing this reduces the strength of the hash (see the list of related questions at the bottom). The answer is that No, truncating a hash does not reduce its strength (apart from the obvious that shorter hashes have more collisions ). According to @Reid's answer in [2] and @ThomasPornin's answer in [3], the idea of truncating hashes is fully supported by NIST, in fact SHA-224 is just SHA-256 truncated, SHA-384 is just SHA-512 truncated, etc. RELATED QUESTIONS [1] truncated hash for message authentication? [2] Is truncating a SHA512 hash to the first 160 bits as secure as using SHA1? [3] Should I use the first or last bits from a SHA-256 hash? | {
"source": [
"https://security.stackexchange.com/questions/97377",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
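Truncation as discussed above really is just keeping a prefix of the digest; no mixing or XOR-folding is needed. A short hashlib illustration follows, cutting a SHA-256 digest down to the 64 bits mentioned in the question (the input string is arbitrary).

import hashlib

data = b"example message"

full = hashlib.sha256(data).digest()  # 256-bit digest
short = full[:8]                      # keep only the first 64 bits (8 bytes)

print("full SHA-256 :", full.hex())
print("64-bit prefix:", short.hex())

# Note: at 64 bits, collisions cost roughly 2**32 work (birthday bound) while
# second preimages cost roughly 2**64; shorter hashes simply collide more often.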
97,416 | I have downloaded a couple of Virtual Machine images from http://modern.ie , in order to test different versions of Internet Explorer. When running these images, a complete Windows environment is started, with a default user account. The details for the user account are displayed in clear text on the Windows desktop background: What is the reason for complicating the password (e.g. Passw0rd! instead of just password ), when the password is clearly visible to anyone, and is supposed to be public? | Reason may be: This Windows has implemented a strong password policy, thus the user MUST HAVE a "strong" password. | {
"source": [
"https://security.stackexchange.com/questions/97416",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/40091/"
]
} |
97,550 | Say I have the a website with the following code on it: <input type="text" id="search-text" name="query" value="?" /> Double quotes aren't escaped so I can break out of the value attribute, however, I can't break out of the HTML tag itself as '<' and > are being filtered out. My goal here is to get a javascript popup to appear. There's the onfocus attribute so I guess if someone clicked on the text input box a javascript popup could appear. However is there a way to make a javascript popup appear when the page first loads? | Try this: " onfocus="alert(1)" autofocus=" It will expand to: <input type="text" id="search-text" name="query" value="" onfocus="alert(1)" autofocus="" /> Which will cause an alert box, demonstrating XSS. | {
"source": [
"https://security.stackexchange.com/questions/97550",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15922/"
]
} |
97,556 | The Secure Socket Tunneling Protocol is a VPN protocol developed by Microsoft. It sends traffic over an SSL version 3 connection. Since SSLv3 is vulnerable to the POODLE attack, does this mean that SSTP traffic is vulnerable? This answer says that: The main and about only plausible scenario where [the conditions
required for the attack] are met is a Web context but would this include web browsing over an SSTP VPN? | Try this: " onfocus="alert(1)" autofocus=" It will expand to: <input type="text" id="search-text" name="query" value="" onfocus="alert(1)" autofocus="" /> Which will cause an alert box, demonstrating XSS. | {
"source": [
"https://security.stackexchange.com/questions/97556",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13154/"
]
} |
97,706 | TL;DR: Perhaps I've gone overboard with my question's detail, but I wanted to be sure the question was clear since the topic seems very broad. But here it is. The word "smartest" is meant fundamentally, not literally. Is a server infrastructure fundamentally possible which the smartest person can't breach? Background: I've read articles about the servers of massive banks (or cheating sites) being compromised, and in one article based on an interview with an internet security company interested in the case, a specialist claimed that there are highly skilled criminal organizations especially in China and Russia which have vast resources, tools, and some of the "best" hackers in the world at their disposal, and this specialist claimed that there was "no system on earth" (connected to the web, of course) which they couldn't compromise with the resources available to them. (Web-Server) Information Security is Like Chess: I'm not much of a Chess player, and I'm not much of an Info Sec. expert, but I am a programmer who writes server software, and I'm interested in this. Disregarding any factors of Chess that might null my scenario, such as maybe the person who moves first has an advantage or something of the sort, imagine Information Security as being a game of Chess between two of the best Chess players in the world. Classic: If you and I were to play a game of Chess, the one which possesses greater skill, knowledge, and intelligence regarding the game of Chess will win. Programmed Scenario 1: Or perhaps, if we play the game digitally, the one of us who writes the smartest chess-playing software will win. Programmed Scenario 2: Or, and here's the key, perhaps it's possible for us to both be so good at both Chess and programming that we both write Chess-playing computer programs so good that neither of our programs can win, and the game ends in a stalemate. Consider a server infrastructure, for example a banking server, or application server which must communicate with clients on the web, but which must not allow criminal parties to break into its data stores. The security of this server infrastructure could either be like Programmed Scenario 1 , meaning no matter what, whoever has the best software and knowledge of Information Security, the people who invent the security strategies for example, will always have a chance to break through a server infrastructure's defense, no matter how secure. No perfect defense is fundamentally possible. Or it could be like Programmed Scenario 2 , where it's fundamentally possible to develop a server infrastructure which uses a security strategy that (fundamentally) cannot be bested by a smarter program. A perfect defense is fundamentally possible. The Question So which one is it? | "No perfect defense is fundamentally possible." In chess, you have 64 squares, 2 people playing, and one set of immutable, commonly known rules. In server infrastructures, there are an untold number of assets and ways to approach those assets, an unknown number of people playing, and rules that change constantly with players purposely seeking to bend, break, or bypass the rules. Consider 2 elements that will prove my point: zero days and chocolate bars . Firstly, zero days change the rules while the game is being played. While one side gains the benefit of this element, the other side is unaware of the advantage and is possibly still unable to counter these attacks, even if they are eventually known. Each zero day is a new rule that is unevenly applied to the game. 
Even if a "perfected security strategy" can be devised and perfectly applied, zero days can mean that the strategy is built upon unknown weaknesses that might never be known to the defending side. Secondly, chocolate bars can do more to break the security of an infrastructure than any other element. What I mean is that people can be bribed or enticed to "switch sides" and grant advantage to the opposing side, sometimes for something as small as a chocolate bar (studies show). Phishing, bribes, data leakage, etc. are all part of the human side of the game that technology cannot account for entirely. As long as there is a human with power in the infrastructure, there will always exist that weakness to the system. What to do? In history, we see multiple situations where a massive attempt at defence was defeated by something small and unforeseen (e.g. the Great Wall of China's gates opened to a concubine who was a double agent for the Mongols). The goal, as defenders, is not to mount the perfect defence, but rather to design a resilient and transparent infrastructure where attacks can be seen quickly and responded to completely. Not taller walls, but more alert militia. Not unshakable foundations but a replaceable architecture. | {
"source": [
"https://security.stackexchange.com/questions/97706",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81923/"
]
} |
97,739 | We have set the set X-Frame-Options in the header as ALLOW-FROM same origin but there is a requirement to open below page from some 3rd party website. Do you see security issue here? HTTP::header replace X-Frame-Options "SAMEORIGIN" | "No perfect defense is fundamentally possible." In chess, you have 64 squares, 2 people playing, and one set of immutable, commonly known rules. In server infrastructures, there are an untold number of assets and ways to approach those assets, an unknown number of people playing, and rules that change constantly with players purposely seeking to bend, break, or bypass the rules. Consider 2 elements that will prove my point: zero days and chocolate bars . Firstly, zero days change the rules while the game is being played. While one side gains the benefit of this element, the other side is unaware of the advantage and is possibly still unable to counter these attacks, even if they are eventually known. Each zero day is a new rule that is unevenly applied to the game. Even if a "perfected security strategy" can be devised and perfectly applied, zero days can mean that the strategy is built upon unknown weaknesses that might never be known to the defending side. Secondly, chocolate bars can do more to break the security of an infrastructure than any other element. What I mean is that people can be bribed or enticed to "switch sides" and grant advantage to the opposing side, sometimes for something as small as a chocolate bar (studies show). Phishing, bribes, data leakage, etc. are all part of the human side of the game that technology cannot account for entirely. As long as there is a human with power in the infrastructure, there will always exist that weakness to the system. What to do? In history, we see multiple situations where a massive attempt at defence was defeated by something small and unforeseen (e.g. the Great Wall of China's gates opened to a concubine who was a double agent for the Mongols). The goal, as defenders, is not to mount the perfect defence, but rather to design a resilient and transparent infrastructure where attacks can be seen quickly and responded to completely. Not taller walls, but more alert militia. Not unshakable foundations but a replaceable architecture. | {
"source": [
"https://security.stackexchange.com/questions/97739",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/66678/"
]
} |
97,825 | I've been reading in the last couple of days about CORS and in a lot of places it's mentioned as it is a "Security" feature to help the world from cross domain forgery. I still don't see the benefit and the reasoning for CORS. Ok, browsers will do a preflight request / server will validate the origin. But an attacker can easily create an HttpRequest top-bottom with whatever Headers(Origin) he wants and he will get access to the resource. How is CORS helping and what's the benefit of it? | I'll start my answer by saying that many people misunderstand the Same Origin Policy and what CORS brings to the table. Some of the up-voted answers already here are stating that the Same Origin Policy prevents cross-site requests, and therefore prevents CSRF . This is not the case. All the SOP does is prevent the response from being read by another domain (aka origin). This is irrelevant to whether a "classic" CSRF attack is successful or not. By "classic" I'm referring to the types of request that were possible before CORS came about. That is, the types of request that can be sent via HTML forms as well as XHR (e.g. GET or POST without custom headers). The only time the SOP comes into play with "classic" CSRF is to prevent any token from being read by a different domain. Of course, now we have CORS and all sorts of cross-domain requests are possible such as PUT and DELETE, CORS does in fact protect against these by requiring a pre-flight. However, generally speaking CORS is not providing greater net benefit because the reason this functionality is available in the first place is due to CORS. All CORS does is relax the SOP when it is active. It does not increase security (except perhaps allowing cross-domain resource sharing to be standardised and prevent developers from introducing flaws with something like JSONP), it simply allows some exceptions to take place. Some browsers with partial CORS support allow cross site XHR requests (e.g. IE 10 and earlier), however they do not allow custom headers to be appended. In CORS supported browsers the Origin header cannot be set, preventing an attacker from spoofing this. I mentioned domains were different origins. Origins can also differ by port and protocol when talking about AJAX requests (not so much with cookies). Finally, all of the above has nothing to do with forged requests coming directly from an attacker, for example with curl. Remember, the attacker needs to use the victim's browser for their attack. They need the browser to automatically send its cookies. This cannot be achieved by a direct curl request as this would only be authenticating the attacker in this type of attack scenario (the category known as "client-side attacks"). The benefit of CORS is that it allows your domain to allow reads from another trusted domain. So if you have http://data.example.org you can set response headers to allow http://site.example.com to make AJAX requests and retrieve data from your API. | {
"source": [
"https://security.stackexchange.com/questions/97825",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/84334/"
]
} |
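To make the last paragraph of the answer concrete, here is a minimal, standard-library-only Python sketch of a server that relaxes the same-origin policy for one trusted origin via CORS response headers. The hostnames reuse the answer's example domains; the handler is an illustration, not production code.

from http.server import BaseHTTPRequestHandler, HTTPServer

TRUSTED_ORIGIN = "http://site.example.com"   # the only origin allowed to read responses

class ApiHandler(BaseHTTPRequestHandler):
    def _cors_headers(self):
        # Only emit the allow-origin header for the origin we trust.
        if self.headers.get("Origin") == TRUSTED_ORIGIN:
            self.send_header("Access-Control-Allow-Origin", TRUSTED_ORIGIN)
            self.send_header("Vary", "Origin")

    def do_OPTIONS(self):
        # Pre-flight for "non-simple" requests (PUT, DELETE, custom headers).
        self.send_response(204)
        self._cors_headers()
        self.send_header("Access-Control-Allow-Methods", "GET, PUT, DELETE")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.end_headers()

    def do_GET(self):
        body = b'{"data": "only readable cross-origin by the trusted site"}'
        self.send_response(200)
        self._cors_headers()
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ApiHandler).serve_forever()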
97,856 | The novel Daemon is frequently praised for being realistic in its portrayal rather than just mashing buzzwords. However, this struck me as unrealistic: Gragg's e-mail contained a poisoned JPEG of the brokerage logo. JPEGs were compressed image files. When the user viewed the e-mail, the operating system ran a decompression algorithm to render the graphic on-screen; it was this decompression algorithm that executed Gragg's malicious script and let him slip inside the user's system—granting him full access. There was a patch available for the decompression flaw, but older, rich folks typically had no clue about security patches. Is there such a thing? Is this description based on some real exploit? This was published in December 2006. Is it sensible to say "the operating system" was decompressing the image to render it? Note this has nothing to do with security of PHP image uploading scripts. I'm asking about the decoding process of displaying a JPEG , not scripts taking input from remote users, nor files misnamed as .jpeg . The duplicate flagging I'm responding to looks poor even for a buzzword match; really nothing alike other than mentioning image files. | Is there such a thing? Absolutely. Feeding malicious input to a parser is one of the most common ways of creating an exploit (and, for a JPEG, "decompression" is "parsing"). Is this description based on some real exploit? It might be based on the Microsoft Windows GDI+ buffer overflow vulnerability : There is a buffer overflow vulnerability in the way the JPEG parsing
component of GDI+ (Gdiplus.dll) handles malformed JPEG images . By
introducing a specially crafted JPEG file to the vulnerable component,
a remote attacker could trigger a buffer overflow condition. ... A remote, unauthenticated attacker could potentially execute arbitrary
code on a vulnerable system by introducing a specially crafted JPEG
file. This malicious JPEG image may be introduced to the system via a malicious web page, HTML email, or an email attachment. . This was published in December 2006. The GDI+ JPEG parsing vulnerability was published in September 2004. Is it sensible to say "the operating system" was decompressing the image to render it? Sure; in this case, it was a system library that required an OS vendor patch to correct it. Often such libraries are used by multiple software packages, making them part of the operating system rather than application-specific. In actuality, "the email application invoked a system library to parse a JPEG," but "the operating system" is close enough for a novel. | {
"source": [
"https://security.stackexchange.com/questions/97856",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/80219/"
]
} |
97,920 | Cloudflare offers 3 free SSL options: Flexible SSL, Full SSL, and Full Strict SSL. The article “ CloudFlare’s great new features and why I won’t use them ” explores the shortcomings of the Flexible and Full (non-strict) SSL options. The Full Strict SSL option encrypts clients’ connections to Cloudflare, and also Cloudflare’s connection to origin server — for which a Valid CA signed certificate is required. However, even with this option selected for your site, Cloudflare must be trusted, as they are the middleman, receiving the data, decrypting it and then encrypting it on its way to the origin server — and vice versa. Additionally, they must be trusted to actually require valid certificates from origin servers. So, this setup allows Cloudflare to monitor, record, and modify any traffic between clients and the origin servers. The fact that they can do this is a huge security concern, is it not? It cheapens the SSL system by appearing (to an average user) that your connection to the site you are visiting is secure cryptographically end-to-end, rather than the reality, whereby trust in Cloudflare is required. How could Cloudflare offer SSL without requiring users to trust them? | From what I understand, no, Cloudflare couldn't work any other way. Cloudflare analyses the connection before passing it to your webserver to ensure that it's correct and coming from a legitimate client. In order to do this, it needs to be able to see the contents of each packet from and to your server. With SSL/TLS, each packet is encrypted and therefore not visible to Cloudflare. It needs to be able to decrypt any traffic before it can analyse it. To do that, it needs to have the private key for the cert used to encrypt the traffic. The only way around this I see, is if Cloudflare sold its application that you could then self host. That still requires trusting the application (it may forward information to Cloudflare's servers, for instance), but at least it wouldn't be hosted elsewhere and totally out of your control. This would be a trade off, as you'd lose Cloudflare's distributed network. You would still have some benefits (anything implemented in software e.g. SQL injection protection), but would lose anything relying on the large network capacity (e.g. some DDoS protection). | {
"source": [
"https://security.stackexchange.com/questions/97920",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76527/"
]
} |
97,964 | I've been doing some research into some DRM solutions, specifically "self-protecting containers". One example of this is DigiBox. Normally, the protected data is encrypted in some kind of container. However, once the data is in use, it is decrypted in memory. What is stopping me from copying that data from memory and into another file on the harddisk? For example, if I had some sort of Word document that is protected. Can't I click save-as? Or does some DRM solutions work with Word to stop this? | According to the E-book A survey of complex object technologies for digital libraries , DigiBox seems to be a container format that can contain different file types (although it was mostly used for PDFs). The basic concepts here are: The file is encrypted in a way that it's relatively difficult to read without special software (i.e you can't just read these PDFs with any PDF reader, it has to be "DigiBox complient reader software") The special software then goes to some lengths to prevent you from saving a copy of the file in a non DigiBox format, although you're right - it's not really feasible to protect the contents from being dumped out of memory by a seasoned professional. For example, if I had some sort of Word document that is protected. Can't I click save-as? Or does some DRM solutions work with Word to stop this? You're right - the software that you're using to read the file needs to be complicit in the enforcement of DRM. I think the main point here is not to make it impossible to break the DRM on the file, but to make it so difficult that the average (read: not technically skilled) person would rather just pay for it. | {
"source": [
"https://security.stackexchange.com/questions/97964",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69582/"
]
} |
97,986 | I've heard about Ghostery, a browser extension/plugin that blocks web trackers. But according to this link it sells our data. Are add-ons and plugins open source in Firefox? Is there another alternative to Ghostery? | You can prevent Ghostery from selling your data by opting out of the Ghost Rank feature. The feature is opt-in, so if you didn't already opt in there is nothing you need to do. It is then safe for you to use. Using a clone of Ghostery which is identical in every aspect except not having the Ghost Rank feature would make no practical difference from running Ghostery without opting in to Ghost Rank. If your intention is to actively punish Ghostery for their evil data trading by boycotting them, then you will achieve nothing. They already gain nothing from you commercially when you run Ghostery with Ghost Rank disabled. If anything you help them by uninstalling Ghostery because you no longer consume any of their resources. But if you are really looking for an alternative option: the classical method is to edit your operating systems hosts file and forward the hostnames of known trackers and advertising networks to 0.0.0.0 . There are recommended blocklists available which you can find with a websearch (I can't vouch for their quality, so I won't recommend any specific ones). The advantage is that it doesn't just block advertisements in one web browser, but in all web browsers you have installed and in any other applications which might access these hosts for whatever reason. The drawback is that you will have to maintain your blocklist manually. | {
"source": [
"https://security.stackexchange.com/questions/97986",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/84509/"
]
} |
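The hosts-file technique mentioned at the end of the answer can be scripted. The sketch below appends entries that send tracker hostnames to 0.0.0.0; the hostnames are placeholders rather than a curated blocklist, the path shown is the usual Linux/macOS location (Windows keeps it under C:\Windows\System32\drivers\etc\hosts), and editing the file requires administrative privileges.

# Illustrative only: the hostnames below are placeholders, not a real blocklist.
BLOCKLIST = [
    "tracker.example.com",
    "ads.example.net",
]

HOSTS_FILE = "/etc/hosts"   # typical Linux/macOS path; needs root to modify

def append_block_entries(path: str, hostnames: list) -> None:
    with open(path, "r+", encoding="utf-8") as hosts:
        existing = hosts.read()
        for name in hostnames:
            if name not in existing:          # avoid duplicate entries
                hosts.write(f"0.0.0.0 {name}\n")

if __name__ == "__main__":
    append_block_entries(HOSTS_FILE, BLOCKLIST)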
97,990 | A simple question, where do you place PHP files on a server? I can't seem to find any good information on this. Obviously public files should be in the document root somewhere/somehow, since they need to be accessed, but what about the rest of the files, all the classes etc? I am attempting to refactor a single page web app with a PHP backend, where most of the files are within the document root, but there are some sensitive files (with passwords etc) above the document root. I was trying to setup autoloading which led me to folder layout. This split layout feels quite messy, is there a better way to do it? My thought is it should follow the principle of least privilege, if those files don't need to be public then they shouldn't be. Why is this something that is not right at the start of PHP the right way ? I'm using Apache here, it would seem Node is better in that everything is private unless you point your router at it. Is there is a standard way of handling this? If so how to implement it on apache? | You can prevent Ghostery from selling your data by opting out of the Ghost Rank feature. The feature is opt-in, so if you didn't already opt in there is nothing you need to do. It is then safe for you to use. Using a clone of Ghostery which is identical in every aspect except not having the Ghost Rank feature would make no practical difference from running Ghostery without opting in to Ghost Rank. If your intention is to actively punish Ghostery for their evil data trading by boycotting them, then you will achieve nothing. They already gain nothing from you commercially when you run Ghostery with Ghost Rank disabled. If anything you help them by uninstalling Ghostery because you no longer consume any of their resources. But if you are really looking for an alternative option: the classical method is to edit your operating systems hosts file and forward the hostnames of known trackers and advertising networks to 0.0.0.0 . There are recommended blocklists available which you can find with a websearch (I can't vouch for their quality, so I won't recommend any specific ones). The advantage is that it doesn't just block advertisements in one web browser, but in all web browsers you have installed and in any other applications which might access these hosts for whatever reason. The drawback is that you will have to maintain your blocklist manually. | {
"source": [
"https://security.stackexchange.com/questions/97990",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/63569/"
]
} |
97,994 | I've been reading a lot of questions about security and hash functions on this site and others. I am not an expert, just a curious mind, and in my understanding the more iterations used to hash a password, the better, and you are supposed to get one output for one input (modulo collisions with broken functions). In this regard, the output won't be the same if the number of iterations varies. But what I don't get is how a hacker who tries to find the plaintext knows how many iterations were used. He has to know it, or did I miss something? | The number of rounds is often stored with the password hash. For example, using bcrypt: $2a$10$oEuthjiY8HJp/NaBCJg.bu76Nt4eY4jG/S3sChJhZjqsCvhRXGztm The 10 indicates the work factor, effectively adding 10 bits of entropy in terms of hashing time to brute force. 2^10 = 1024 rounds. It is stored with the hash in case of the need to increase the work factor due to Moore's law. If your system had a secret work factor that is the same for all accounts, this could be used as an additional security measure, much like a pepper. In fact, in the case of an attacker being able to set their own password and view the hash, you would need to combine it with a pepper, otherwise they would be able to determine the number of iterations in play from viewing the resulting hash. However, it is more complex to increase this in future than it is when the work factor is stored with each separate password, as you'll need to instead store an indicator of which iteration configuration version was used for each password. You could, though, have a secret work factor that is added to the value stored with the hash. VeraCrypt has recently added a PIM (Personal Iteration Multiplier) feature that can be used as an additional secret to protect your encrypted data. | {
"source": [
"https://security.stackexchange.com/questions/97994",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/84519/"
]
} |
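The "$2a$10$..." string in the answer is easy to reproduce. A small sketch using the third-party bcrypt package (assumed to be installed, e.g. via pip install bcrypt) is below; note how the cost factor and salt travel inside the stored value, so the verifier never needs a separate record of the iteration count. Current versions of the library emit a "$2b$" prefix rather than "$2a$".

import bcrypt

password = b"correct horse battery staple"

# Work factor 10 -> 2**10 rounds; the "10" ends up encoded in the hash itself.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=10))
print(hashed)          # e.g. b'$2b$10$<22-char salt><31-char hash>'

# Verification re-reads the cost and salt from the stored value:
print(bcrypt.checkpw(password, hashed))        # True
print(bcrypt.checkpw(b"wrong guess", hashed))  # False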
98,282 | I was wondering about differences between using OpenSSl or Keytool and generating certificates by them. I think Keytool is used by Java community but for someone who is not remotely connected to Java uses OpenSSL for generating stuff. I mean public key, private key and certificates. I assume with using OpenSSL you have to generate keys and certificates individually but keytool can generate a keystore that will give you a sort of briefcase that can contain keys+certificates. Question is: is what I think correct or not? If it matters, I use windows. And I am referring to: How do I use OpenSSL with the Java Keytool? | In short, they're both crypto key generation tools, but keytool has the additional feature of manipulating Java's preferred key storage file format, the KeyStore. Java strongly prefers to work with keys and certificates that are stored in a KeyStore (also called a TrustStore when it's only got certificates in it). It is possible , but not trivial, to get Java to work with straightforward PEM/CER/CRT/PKCS/etc files, so for all intents and purposes if you're coding crypto in Java you're going to use a KeyStore. Keytool is a tool that comes with Java that works with KeyStores - it can create KeyStores and manipulate keys and certificates inside them. It can also create keys and sign certificates . So it is both a key generation and a KeyStore-file-administration tool. OpenSSL works with standard formats (PEM/CER/CRT/PKCS/etc) but does not manipulate KeyStore files. It is possible to generate a key and/or certificate with OpenSSL, and then import that key/cert into a KeyStore using keytool , but you can't put the key/cert into the KeyStore directly using OpenSSL. (OpenSSL also has a wider array of functionality than keytool - performing symmetric encryption, acting as an SSL network client and server, handling more formats. It just doesn't happen to speak KeyStore.) | {
"source": [
"https://security.stackexchange.com/questions/98282",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29160/"
]
} |
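The "generate with OpenSSL, then import with keytool" route mentioned in the answer normally goes through a PKCS#12 bundle. The sketch below simply drives the two command-line tools from Python; the file names, alias and password are placeholders, and it assumes openssl and keytool are available on the PATH.

import subprocess

# Placeholders: substitute your own files, alias and passwords.
KEY, CERT, P12, JKS = "server.key", "server.crt", "server.p12", "keystore.jks"
PASSWORD = "changeit"

# 1. Bundle the OpenSSL-generated key and certificate into a PKCS#12 file.
subprocess.run([
    "openssl", "pkcs12", "-export",
    "-inkey", KEY, "-in", CERT,
    "-name", "myserver", "-out", P12,
    "-passout", f"pass:{PASSWORD}",
], check=True)

# 2. Import the PKCS#12 bundle into a Java KeyStore with keytool.
subprocess.run([
    "keytool", "-importkeystore",
    "-srckeystore", P12, "-srcstoretype", "PKCS12",
    "-srcstorepass", PASSWORD,
    "-destkeystore", JKS, "-deststorepass", PASSWORD,
], check=True)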
98,363 | We use sha1sum to calculate SHA-1 hash value of our packages. Clarification about the usage: We distribute some software packages, and we want users to be able to check that what they downloaded is the correct package, down to the last bit. The SHA-1 cryptographic hash algorithm has been replaced by SHA-2 since SHA-1 is known to be considerably weaker. Can we still use sha1sum ? Or should we replace it with sha256sum , or sha512sum ? | I suppose you "use sha1sum " in the following context: you distribute some software packages, and you want users to be able to check that what they downloaded is the correct package, down to the last bit. This assumes that you have a way to convey the hash value (computed with SHA-1) in an "unalterable" way (e.g. as part of a Web page which is served over HTTPS). I also suppose that we are talking about attacks here, i.e. some malicious individual who can somehow alter the package as it is downloaded, and will want to inject some modification that will go undetected. The security property that the used hash function should offer here is resistance to second-preimages . Most importantly, this is not the same as resistance to collisions. A collision is when the attacker can craft two distinct messages m and m' that hash to the same value; a second-preimage is when the attacker is given a fixed m and challenged with finding a distinct m' that hashes to the same value. Second-preimages are a lot harder to obtain than collisions. For a "perfect" hash function with output size n bits, the computational effort for finding a collision is about 2 n /2 invocations of the hash function; for a second-preimage, this is 2 n . Moreover, structural weaknesses that allow for a faster collision attack do not necessarily apply to a second-preimage attack. This is true, in particular, for the known weaknesses of SHA-1: right now (September 2015), there are some known theoretical weaknesses of SHA-1 that should allow the computation of a collision in less than the ideal 2 80 effort (this is still a huge effort, about 2 61 , so it has not been actually demonstrated yet); but these weaknesses are differential paths that intrinsically require the attacker to craft both m and m' , therefore they do not carry over second-preimages. For the time being, there is no known second-preimage attack on SHA-1 that would be even theoretically or academically faster than the generic attack, with a 2 160 cost that is way beyond technological feasibility, by a long shot . Bottom-line: within the context of what you are trying to do, SHA-1 is safe, and likely to remain safe for some time (even MD5 would still be appropriate). Another reason for using sha1sum is the availability of client-side tools: in particular, the command-line hashing tool provided by Microsoft for Windows (called FCIV ) knows MD5 and SHA-1, but not SHA-256 (at least so says the documentation)(*). Windows 7 and later also contain a command-line tool called "certutil" that can compute SHA-256 hashes with the "-hashfile" sub-command. This is not widely known, but it can be convenient at times. That being said, a powerful reason against using SHA-1 is that of image : it is currently highly trendy to boo and mock any use of SHA-1; the crowds clamour for its removal, anathema, arrest and public execution. By using SHA-1 you are telling the world that you are, definitely, not a hipster. 
From a business point of view, it rarely makes any good not to yield to the fashion du jour , so you should use one of the SHA-2 functions, e.g. SHA-256 or SHA-512. There is no strong reason to prefer SHA-256 over SHA-512 or the other way round; some small, 32-bit only architectures are more comfortable with SHA-256, but this rarely matters in practice (even a 32-bit implementation of SHA-512 will still be able to hash several dozens of megabytes of data per second on an anemic laptop, and even in 32-bit mode, a not-too-old x86 CPU has some abilities at 64-bit computations with SSE2, which give a good boost for SHA-512). Any marketing expert would tell you to use SHA-512 on the sole basis that 512 is greater than 256, so "it must be better" in some (magical) way. | {
"source": [
"https://security.stackexchange.com/questions/98363",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/24842/"
]
} |
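For the distribution scenario in the question (users checking that a downloaded package matches the published digest), a short hashlib sketch follows. The package name and expected digest are placeholders; swapping the algorithm name for "sha1" or "sha512" changes nothing else in the code.

import hashlib

def file_digest(path: str, algorithm: str = "sha256", chunk_size: int = 1 << 20) -> str:
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values for illustration.
PACKAGE = "package-1.2.3.tar.gz"
PUBLISHED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

if __name__ == "__main__":
    actual = file_digest(PACKAGE)
    print("OK" if actual == PUBLISHED else f"MISMATCH: {actual}")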
98,425 | I recently got a phishing mail of unusually bad quality; "Please imediatly sign in under the following link, as we are your bank, you know"-ish. The link points to an unconvincing URL with .php at the end. I was asking myself why they might use a PHP script instead of just faking the look of the given page and submitting entered data to a form. I don't really want to click the link to find out, but I would love to get to know what this PHP file is about to do. Is there a way of downloading the script, such as you could with the client-sided JavaScript? Or am I not able to access the PHP file, as it is executed by the server? Are there other ways of analyzing this file and its behavior without any danger? | If the server is configured correctly, you cannot download a PHP file. It will be executed when called via the webserver. The only way to see what it does is to gain access to the server via SSH or FTP or some other method. This is because PHP is a serverside language, all the actions are performed on the server, then the result is sent to your browser (which is client side). If you are afraid to mess with your system, you could use Virtualbox with an Ubuntu VM and open the page from there. If you take a snapshot of the VM after installing, and before doing dangerous things, you can later go back to that snapshot and undo anything the script could have done to the VM. | {
"source": [
"https://security.stackexchange.com/questions/98425",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56061/"
]
} |
98,436 | I think I have a new denial-of-service attack on TLS, and I'd like you to check whether this is a real vulnerability. I recently learned about the proper way to drop an authenticated client session (see this question ).
According to RFC 5246 & RFC 5077 this is done by sending a fatal alert. But nothing is said about whether full session resumption is really required for such dropping or not, which means that actually any client may purge authenticated sessions of any other client, knowing just the sessionID. The steps are: connect to server and send client_hello with target sessionID wait for server answer, which will include server_hello, change_cipher_spec & finished send UNENCRYPTED fatal alert with any value RFC 5246 says ( §7.2.2 ): Upon transmission or receipt of a fatal alert message, both parties
immediately close the connection. Servers and clients MUST forget any
session-identifiers, keys, and secrets associated with a failed
connection. Thus, any connection terminated with a fatal alert MUST
NOT be resumed. This works perfectly well in any connection & cipher state, including NULL. I have just checked the actual OpenSSL implementation. The vulnerability (if it is a vulnerability) is present. What would community say? Am I to push this issue as real vulnerability or is it just the way it should be and nothing to worry about? | To summarize: It may work, or not, depending on how the server manages its cache for session parameters. The RFC are not consistent. It is not a "real" vulnerability. TLS sessions were initially meant to be an optimization, to avoid client and server doing the full handshake with its "heavy asymmetric crypto" for each connection (the actual cost of such crypto is often overrated but that's not the point). Existing, deployed clients and servers tend to remember sessions for some time, then forget them when it becomes inconvenient to keep remembering (typically, on the server side, when the RAM buffer dedicated to such storage is full, old sessions are evicted). The idea is still that client and server may still do a full handshake, transparently, when needed; the session resumption is opportunistic. Some deployed systems rely on session resumption to work a bit more reliably; in particular, Web-based applications with client authentication with a smart card: using the smart card implies making a signature with the card, which has a small computational cost (say 1 second) and a high user cost (the human user may have to type a PIN code). However, even in these cases, it is understood that session parameters are stored in RAM only, so if the client browser is closed and then reopened, no session will be resumed and a full handshake will occur. RFC 5246 , in section 7.2.2 , contains this paragraph: Error handling in the TLS Handshake protocol is very simple. When
an error is detected, the detecting party sends a message to the
other party. Upon transmission or receipt of a fatal alert
message, both parties immediately close the connection. Servers
and clients MUST forget any session-identifiers, keys, and secrets
associated with a failed connection. Thus, any connection
terminated with a fatal alert MUST NOT be resumed. This is a blanket statement meant, informally, to deter some as yet unspecified brute force attacks relying on the attacker "trying out" a lot of session resumptions. There are a few enlightening points that must be made about that prescription: Historically, sessions had also to be "forgotten" when a connection was not properly closed (with an explicit close_notify ). However, existing Web servers close connection abruptly, so Web browsers have adapted by keeping sessions "alive" even when they were terminated in a way that TLS-1.0 would frown upon. This notion of forgetting the session upon a fatal alert does not work on the server when session tickets are used, since, by definition, a server with session tickets does not manage his own memory. In that respect, RFC 5077 and RFC 5246 are inconsistent: this "MUST NOT be resumed" cannot be enforced by a server that uses session tickets. It so happens that most SSL implementations are reluctant to break laws of physics in order to comply to RFC. Even without sending an alert, a "fatal condition" can always be forced by an attacker: it suffices for the attacker to open a connection by himself, reusing the same session ID as a connection used by the genuine client. The server will try to resume the session ( ServerHello , ChangeCipherSpec , Finished ) then expect the ChangeCipherSpec then Finished from the client. Since the client is the attacker and the attacker does not know the master key for the session he is purporting to resume, his Finished message won't decrypt properly, triggering a bad_record_mac from the server. If section 7.2.2 is to be followed, then the server should then forget the session. It shall be pointed out that the "kill the session by failed Finished " attack expressed in the previous paragraph may be easier to pull off than the one you are describing, since it involves only observing a genuine connection (to get the session ID), not modifying that connection. The third point above is important because it shows that until it has received a properly encrypted-and-MACed Finished message from the client, the server has no proof that it is really talking to the real client. Arguably, the session is not "resumed" until that point, and therefore should not be "invalidated" for any erroneous condition occurring before that point (be it an explicit alert from the client, or a MAC failure, or anything else). However, the RFC is not clear about that either, so what really happens really depends on how the server manages its cache, at the whims of the server implementor. The vulnerability is not "real" in that any attacker who is in position to fiddle with client connections can already do a lot more harm by, for instance, responding to the client's ClientHello message with a synthetic alert message or simply random junk that will convince the client that there is no working SSL/TLS server on the other side -- a much more comprehensive denial-of-service than simply making the server and client spend a couple of milliseconds of CPU for a full handshake. | {
"source": [
"https://security.stackexchange.com/questions/98436",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/84868/"
]
} |
98,458 | I accidentally, the other day, went to a dodgy site, and today I discovered that in my cache there lay a text file. And after reading the contents of the file, I have deduced that it is the source code for some potentially new malware which exploits a vulnerability in a security program. Now I want to submit it to my anti-virus vendor, however, do I just submit it as a text file (as they would have to compile it in order to get a virus signature and I don't know how automated the process is), or should I stick it in my IDE, build it and then send the built version to them, or how should I send it to them? In the most safe and responsible manner that is. | I'm afraid your compiled binary will differ a lot from the actual malware that can be found in the wild. Different compilers and command-line flags will produce completely different binaries, and the malware binary may be further optimized/obfuscated using additional tools or even manually. Submitting them your compiled binary is likely to be counter-productive and will only waste everyone's time. Instead, if you can't directly submit the source code file (because their form expects a binary, etc), try to get in touch with a human and give them the source. | {
"source": [
"https://security.stackexchange.com/questions/98458",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/-1/"
]
} |
99,886 | I'm trying to learn the best way to implement a secure web and e-mail server. Getting SSL certificates is a must, but what happens with the private keys? I've seen that you have to store them on the server, but does it imply that my hosting provider will have access to them? If I owned the hardware, I would simply encrypt the hard drive, but in this cloud/shared/VPS services world it doesn't seem to be possible. Is there a workaround? | Yes, your hosting provider is necessarily able to see your SSL private key, if the fancy takes him to do so. This is because that SSL key is used by his software running on his machines. (This still holds in the case of a hosted virtual machine -- in practice, a malicious host could simply take a snapshot of your running VM and analyse it at his leisure, and you would not know it.) But note that, for the very same reason, the hosting provider can see all your site contents, i.e. everything that the SSL protects, so the possible exposure of the private key does not substantially change things here. If you had your own hardware, and the hosting provider simply rented space, power and network bandwidth, then you might hope for some level of privacy and security against the provider. The provider would still have physical access to your hardware (it is located at his premises, not yours) but breaking into a physical machine takes a bit more effort than making a VM snapshot, and, more importantly, is hard to do without leaving physical traces. It really depends on how much the evil hosting provider is intent on doing things discreetly. For very sensitive machines (e.g. a Certification Authority), you could rent an isolated cage, with padlocks for which you have the key (not the provider), and with in-cage security cameras that continuously send pictures to your remote control facility. That way, you could gain some reasonable assurance that the provider is not trying to physically break into your machines. Of course, this kind of setup tends to be expensive. | {
"source": [
"https://security.stackexchange.com/questions/99886",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86367/"
]
} |
99,892 | There are several login forms (e.g. Google's) on the internet where you first enter your login name and then, once that's submitted, you get to enter your password. One of the advantages of this is, I suppose, that the server can pull up an image only it knows and display it to the user to help foil simple phishing schemes. My question is why nobody does it the other way around - first ask for a password and then for the login name. I can see the obvious answer (that the password might not be unique and therefore the server wouldn't know whose anti-phishing-images to display), but that doesn't convince me. Immediately disabling accounts which share passwords or at least forcing users to change their passwords to a unique string next time they log in might be ways around that and as an aside it would solve the "123456" password phenomenon. Another issue I can see is that in a phishing scenario where a user enters his password correctly and then notices that the wrong images are displayed to him, he's already given up his password and all that remains for the phisher to do is to identify who this password belongs to. What I'd like to know is whether the login-then-password sequence is mostly due to convention or user interface considerations or whether there are other security issues with reversing the sequence to ask for the password first (besides the two I've mentioned). | A lone password is not necessarily verifiable by itself. In particular, if the server does things properly, then it stores not the passwords themselves, but the output of a password hashing function computed over the password. A password hashing function (as opposed to a mere hash function) includes some extra features, including a salt (for very good security-related reasons). Verifying a password then requires knowledge of the corresponding salt. Since the salt is instance-specific (the point of the salt is that distinct users have their own salts), the user identity is required (the user identity is used as an indexing key for the salt). | {
"source": [
"https://security.stackexchange.com/questions/99892",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86371/"
]
} |
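To illustrate the point made in the answer to question 99,892 above - that the per-user salt can only be looked up once the username is known - here is a minimal, hypothetical sketch in Python. It uses PBKDF2 from the standard library purely as an example of a password hashing function; the salt is stored next to the hash and is indexed by the username, which is why the login name has to be collected before (or together with) the password.

import hashlib, hmac, os

users = {}  # username -> (salt, hash); a stand-in for a real user database

def register(username, password):
    salt = os.urandom(16)  # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    users[username] = (salt, digest)

def verify(username, password):
    record = users.get(username)
    if record is None:
        return False
    salt, expected = record  # the salt is only reachable via the username
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, expected)
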
99,906 | Is it possible for RAM to retain any data after power is removed? I don't mean within a few minutes such as cold boot attacks, but rather 24 hours plus. Working with classified systems, the policy always seems to treat RAM the same as disks: it must be removed and disposed of according to classification. Is this a myth which has become standard practice or is there really a data security risk present? I am assuming regular PC RAM designs over the past 20 years. | This 2013 article analyses retention time for several DRAM chips. Among the relevant information, one may list the following: Retention time depends on a lot of things, including the values of neighbouring bits. A DRAM bit is a potential well, and it loses its contents by moving charges from or into neighbouring areas, so whether there is room in these neighbours matters. Temperature is very important for retention time (which is why cold-boot attacks insist on cold: if you plunge the machine into liquid nitrogen, you can keep the charges in place for substantially longer). At room temperature, typical retention time is counted in milliseconds, at best a few seconds, and, more importantly, the discharge is exponential in nature (it goes as e^(-Ct) for some constant C), as could be expected (capacitors also work that way). So the remaining charge after 2 minutes will be half that after 1 minute; after 10 minutes you are down to a thousandth of the initial charge; after 20 minutes, a millionth; after 30 minutes, a billionth. To sum up: 24 hours... forget it. You won't find meaningful data in DRAM that has been kept unpowered, at room temperature, after 24 hours (even if the room is, say, in Canada). This is for DRAM, where a stored bit can be envisioned as a charged capacitor. This is the kind of RAM commonly found in PCs for the last 20 years. There also exists SRAM, where each bit is stored as the current state of a bistable circuit that consists of 6 transistors. SRAM is substantially faster than DRAM; it is also a lot more expensive. In PCs, SRAM is used for cache (usually integrated in the CPU). Without power, SRAM loses any trace of its contents within microseconds. There are some stories about bits being "burned" into RAM when the same value is stored for a long time in a specific location in a chip. To the best of my knowledge, these stories are exactly that: stories. They come from "thought by analogy", by people who think of RAM in the same way as they think about CRT displays (which could have "burn-in" effects, hence the development of "screensavers"). I am not aware of any case where such stories were ever substantiated. But fears and doubts are powerful forces that cannot always be dispelled by the strongest logic. | {
"source": [
"https://security.stackexchange.com/questions/99906",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83006/"
]
} |
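A rough back-of-the-envelope companion to the exponential discharge described in the answer to question 99,906 above. The one-minute half-life below is an assumption chosen only to match the answer's illustration ("the remaining charge after 2 minutes will be half that after 1 minute"); real retention depends heavily on temperature and the particular chip. The base-10 logarithm of the remaining fraction is printed because the 24-hour value is far too small to represent as an ordinary floating-point number.

import math

HALF_LIFE_S = 60.0  # assumed: charge halves roughly every minute at room temperature

def remaining_log10(seconds):
    # log10 of the fraction of the original charge left after `seconds`
    return (seconds / HALF_LIFE_S) * math.log10(0.5)

for label, t in [("1 minute", 60), ("10 minutes", 600), ("30 minutes", 1800), ("24 hours", 86400)]:
    print(f"{label:>10}: about 10^{remaining_log10(t):.1f} of the charge left")
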
100,100 | Hard drive in question has sensitive unencrypted data but has failed and no longer responds so can't be wiped. I'd like to physically destroy the said hard drive (3-1/2" desktop, spinning platter drive) before discarding it. What "home remedies" are good options? EDIT: To the close voters: None of the other similar questions talk about techniques one can perform at your everyday home (hard drive degaussers, industrial shredders etc). IMHO this question is similar but uniquely distinct. EDIT2: We're not talking about corporate data, national security data or personal banking data. Encrypted backups mostly, with some unencrypted personal identifiable information when the said drive was used to migrate data. | You want fast and simple? Step 1: Try and take it apart. If you have the right screwdrivers, great, if not, just go to the next step. EDIT2: Also use sandpaper on the platters before smashing them. It's very hard to smash into small enough pieces, and very hard to sand afterwards. If you can spend a bit of money, there are also dedicated kits, such as DiskStroyer which provide instructions. Apparently, they also provide a magnet and screwdrivers. Step 2: Have at it with the biggest, heaviest metal hammer you have. Hit the platters a few times and it should shatter. ( EDIT: NB: Make sure you smash the logic board (all the green stuff) up decently as well. Modern HDDs have 32-64 MBs of cache, and SSHDs have around 8 GB, and we don't want anyone to get a single bit) Step 3: Find a big magnet and go over the disk a few times. Step 4: Find a really hot flame, and melt the data off. A good gas flame can get up to 1200 °C, easily enough to demagnetise even the toughest materials. And you're done! Send your now thoroughly unusable drive into the bin, or a recycling center, or whatever else you do to dispose of electronics. EDIT: To be completely honest, I would do this to an encrypted drive as well, with the logic that any drive needing encryption should be disposed securely to prevent the exploitation of vulnerabilities in the encryption discovered in the future. | {
"source": [
"https://security.stackexchange.com/questions/100100",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/13914/"
]
} |
100,124 | I am thinking of a way to prevent unauthorized copying or recording of data by photographing screens. I also think that if the content of a screen is understandable to the eye of a living human, it is also photographable by any mobile phone. Thus, I think, in the general case, the answer is "no". Am I right? Extension, problem details: As I explained in comments, the primary security objective is to protect sensitive data from the employee working on it. A secondary objective is to protect it from non-employees (who have thus generally passed much easier security criteria). The effective solution to this problem is clearly social and obvious (no-cam policy, etc). The goal here is to find the (admittedly narrow) possibilities of technical defenses, if there are any. | There are mainly two kinds of people to consider in this question: The person working on the computer. This person is your employee; they went through your HR screening and abide by your policies. They have been trusted to access and work with some data. Due to this, since they need to see, no technical measure can prevent them from taking photographs (using a phone, a pen camera, ...), taking notes or remembering what they saw. The people around the computer. The computer could be a laptop in an airport or at a customer site, a desktop at a front desk etc.; the other people may be unknown people, customers, or even other employees. Here the issue is not the same, and for this use-case you can buy privacy screen filters. These filters reduce the viewing angle of the screen, ensuring that only the person right in front of the screen can see its content (this person being obviously assumed to lock the computer when not in front of the screen). | {
"source": [
"https://security.stackexchange.com/questions/100124",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/35683/"
]
} |
100,129 | We have a mobile app for iOS and Android available in the Apple and Google Play stores. The app communicates with our server’s Web Services over HTTPS. We have attackers able to spoof the app traffic. This probably means our attackers are decrypting and monitoring the HTTPS-encrypted Web Services request/response with a tool such as Fiddler . Our next step is to encrypt inside the mobile app. We encrypt the Web Service payload in both directions using AES in CBC mode with HMAC. This means both our server and the downloadable mobile app have a shared secret key (actually, at least two). The problem is that it’s quite possible to reverse-engineer iOS or Android mobile apps and harvest any secret keys. To be sure, we have a higher level of security. It takes more expertise to reverse-engineer a mobile app than it takes to observe HTTPS traffic via Fiddler. We may not be a big enough target for anyone to bother with harvesting our secret keys. But we’d like to grow to the point where the attack is worthwhile! Any key-exchange protocol that I’m aware of requires the mobile app to have a secret key somewhere in the process. With the app available as a free download from Apple App Store and Google Play, I simply have no way to ensure its secret remains secret. Arxan.com claims to provide an Enterprise-grade solution which appears to be a sufficiently-complex process of hiding the keys from would-be attackers. I’m not yet ready to ask them about pricing! Looked at one way, my question is, “How do I protect a secret key in a mobile app in the wild?” I have seen two clear answers: You can’t. Use HTTPS. However, we do this and it’s clear to us that our traffic is still being monitored. So, I think my question becomes one of Authentication as opposed to Authorization: On the server side, how do I distinguish between an attacker pretending to be our mobile app, and legitimate traffic from our app? Assuming that we are encrypting and correctly using HMAC, a valid message tells us: The message has not been altered. The sender is in possession of our shared secret key(s). Everything to this point depends on the secret key not being compromised. The only answer I’ve thought of so far is traffic analysis. We detect brute-force attack traffic and certain other traffic profiles. We detect various forms of replay attack. We could thus infer that our secret key HAS been compromised, if it ever is, and invalidate the key (thus invalidating all mobile installs in the wild). But it stands to reason that if an attacker can figure out one secret key, that attacker can figure out the next key as well. Nor do we want to invalidate our entire legitimate installed-app user base! We can install a certificate inside our app, but does that solve the problem? And if so, how? Don’t we have the same issue of the app being reverse-engineered and spoofed? EDIT: What are we trying to protect? Our attacker can run a password list against us by spoofing our app's User Login web sequence. Since our attacker is able to observe HTTPS traffic as it exits the app, our attacker knows how to construct the web service request and authentication. We can encrypt the login (and other) information before it leaves the app. This should prevent both sniffing and spoofing, assuming our attacker does not possess the secret keys used in the encryption. Given that inbound (to the server) traffic is more important to protect, Public Key encryption would be a possibility. Only the server could decrypt it. 
However, since the key is public, our attacker could still run a password list pretending to be the app. So what are we trying to protect? We're trying to protect ourselves from an attacker breaking into user accounts via our web services. Specifically, how might we protect ourselves from an attacker sufficiently motivated to reverse-engineer our mobile app in the wild? SECOND EDIT: Let's try restating the problem. How might we design a robust and secure Web Services API? It must be able to withstand observation by a would-be attacker. We must be able to distinguish and reject spurious messages from a would-be attacker. Specifically, we want to: Prevent our attacker from using our Web Services API to run a password list and potentially log in to a member account. We have other brute-force detection measures in place, but we'd like to directly secure our API from such attack. Prevent our attacker from using our Web Services API to probe our server for other attack possibilities. | I think I can help resolve your concerns. So what are we trying to protect? We're trying to protect ourselves from
an attacker breaking into user accounts via our web services. Specifically, how might we protect ourselves from an attacker sufficiently motivated to reverse-engineer our mobile app in the wild? You can't. It's that simple. Trying to do the impossible is what's causing your concern. Instead, design and implement your server under the assumption that the client may be your app or it may be some unknown malicious client. If you can't secure your app with these assumptions, then you will need to buy into shipping an insecure app (likely a bad choice) or only have clients in locations where you have physical control (also likely a bad choice). Let me address some of your specific statements: I have to assume mobile banking apps have this figured out. But they are not so likely to share that knowledge here! No. This is wrong. I'm not sure who you think these secretive mobile banking app developers are but I can assure you that many on this site have developed banking apps, mobile or otherwise, and helped secure them. Our attacker can run a password list against us by spoofing our app's User Login web sequence. Since our attacker is able to observe HTTPS traffic as it exits the app, our attacker knows how to construct the web service request and authentication. Absolutely right. The attacker can execute a brute-force password attack on your server. Techniques such as account lockout, exponential delay, and CAPTCHA will help defend against this. We can encrypt the login (and other) information before it leaves the app. This should prevent both sniffing and spoofing, assuming our attacker does not possess the secret keys used in the encryption. Given that inbound (to the server) traffic is more important to protect, Public Key encryption would be a possibility. Only the server could decrypt it. Trust SSL. It is well tested and works well. There's no need to have app-based encryption to protect against a MiTM attack. I know it is freaky to hook up a debugging proxy like Fiddler and see your supposedly encrypted traffic in clear text, but it's not really a security problem. It's the user's data, let them see it if they want. And since the server was written with the concept that it can never know what app the client is running, your protocol should be able to withstand examination by a would-be attacker. Note that you must still do proper SSL validation to prevent a 3rd-party MiTM attack. So skip the HMAC and client-side app encryption when using SSL. Instead, write a robust and secure API. Not because I say so, but because there is no alternative. Your own analysis proved this. EDIT: Prevent our attacker from using our Web Services API to run a password list and potentially log in to a member account. We have other brute-force detection measures in place, but we'd like to directly secure our API from such attack. I previously mentioned account lockouts, though the OWASP guide seems less favorable on them due to the fact that they can be used in a DOS. They did mention delaying the response on repeated failed attempts. The guide goes on to say: Although brute-force attacks are difficult to stop completely, they are easy to detect because each failed login attempt records an HTTP 401 status code in your Web server logs. It is important to monitor your log files for brute-force attacks - in particular, the intermingled 200 status codes that mean the attacker found a valid password.
While nobody wants their users to be hacked, recognizing and responding to a successful password attack will dramatically reduce the cost of the attack. Monitoring can also allow you to (temporarily) lock out IP addresses to terminate malicious users (though a strong attack will likely come from many IP addresses, making this hard to do). Two-factor authentication reduces the impact of a stolen password. Prevent our attacker from using our Web Services API to probe our server for other attack possibilities. Sorry but you can't really do this without physical security. You can embed secrets in your app, obfuscate it, etc... but none of that really makes the app more secure; it just raises the cost of an attack, as those can be reverse engineered. So whether you implement those strategies or not, you must still design your API as if an attacker has access to your secrets and code. I'm a big fan of both SAST and DAST. Run static analyses and resolve all high/critical issues before every deployment. The same for a dynamic scan. Vulnerability scanners can also check your site for known vulnerabilities and bad configuration. | {
"source": [
"https://security.stackexchange.com/questions/100129",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86589/"
]
} |
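The "exponential delay" defence mentioned in the answer to question 100,129 above can be sketched roughly as follows. This is a hypothetical, in-memory illustration only: a real service would persist the failure counters in shared storage, combine the delay with lockout/CAPTCHA and per-IP rate limiting, and reset counters after some time. The constants and function names are invented for the example.

import time

failed_attempts = {}  # username -> consecutive failures (in-memory stand-in)
BASE_DELAY = 0.5      # seconds
MAX_DELAY = 60.0      # cap so the induced delay cannot itself be abused

def throttled_login(username, password, check_credentials):
    delay = min(BASE_DELAY * 2 ** failed_attempts.get(username, 0), MAX_DELAY)
    time.sleep(delay)  # doubles on every consecutive failure for this account
    if check_credentials(username, password):
        failed_attempts.pop(username, None)  # reset the counter on success
        return True
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return False
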
100,268 | AgeUK (and others) warn about making phone calls directly after receiving a scam call and advise you to "wait for the line to clear": Use a different phone if you can, or wait 5 to 10 minutes after the cold call if using the same phone - just in case they waited on the line. How does this work? Why can't the telephone network fix this? Does the scammer require specialized equipment or does this work from any landline phone? | This is because of Called Subscriber Held (CSH). This is not specific to telephony in the UK but rather a line state applicable on PSTNs caused by the person who made the call not hanging up. The person from which the call originates must hang up for the call to disconnect as it is the person from which the call originates that is paying for the bill. The person receiving the call may hang up, but this will not disconnect the call unless the originating caller also hangs up. Usually once a CSH condition is detected, a timer will start which will clear the call after a specified period of time (for example 3 minutes). Back in ye olden days when people had phone handsets connected to base stations with a wire(!) this feature meant they could put the phone down and move to another room without it dropping the call. :) Using the recall (R) button on most phones will put any current call into an on hold state and give you a dial tone. | {
"source": [
"https://security.stackexchange.com/questions/100268",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86541/"
]
} |
100,380 | Maybe a noob question. I've tried searching but maybe my search terms were too specific. I've just conducted a pen test on a client's site. The site allowed me to download .htaccess just by simply browsing to directory where it was stored; www.example.com/dir/.htaccess What causes this and how can it be remediated? | It depends on the type of the webserver in question.
If it's Apache 2.2, it should contain something like this in the config file (usually in the "main" apache.conf): <Files ~ "^\.ht">
Order allow,deny
Deny from all
Satisfy all
</Files> If it's missing, that can cause the problem you described. The other typical cause of this is that the client have used Apache in the past, but switched to something else (e.g. Nginx) which does not use .htaccess and hence doesn't treat it in a special way. The solution in this case is webserver-specific, but it usually boils down to restricting access to files beginning with ".ht", or - if they are really not used - you can simply delete them. | {
"source": [
"https://security.stackexchange.com/questions/100380",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86895/"
]
} |
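As the answer to question 100,380 above notes, the fix is webserver-specific. For Nginx, which ignores .htaccess files entirely, the commonly used equivalent is a location block that denies access to any file whose name starts with ".ht". This is an illustrative snippet (not taken from the original answer) to place inside the relevant server block:

location ~ /\.ht {
    deny all;
}
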
100,388 | I have a Toshiba notebook, which came with Norton Internet Security.
In addition, I have installed Avast. I received a warning from Norton about a OS Attack: GNU Bash CVE-2014-6271 intrusion attempt. Norton "blamed" Avast, in my own computer, for the attempt. How should I regard the warning? More details follow. I was at a hotel. I had plugged an external long range Wi-Fi adapter for the first time in my PC. It was not operative since I do not have the drivers installed yet, but perhaps it was involved in the attack. The report is attached below. It mentions 4 items involving 2 different IPs. It is confusing about who were the attacker and attacked. OFFICE2 is the name of my computer (unless some device replicated it). My intention is to let go Norton when the trial period expires, but for the time being, both are active. The point in question here is beyond the convenience or not of having both active. I have found only one link with a similar case. It is brief, and in Arabic, I guess. Report: Category: Intrusion Prevention
Date & Time: 15/09/2015 01:20:45 p.m.
Risk: High
Activity: An intrusion attempt by OFFICE2 was blocked.
Status: Blocked
Recommended Action: No Action Required
IPS Alert Name: OS Attack: GNU Bash CVE-2014-6271
Default Action: No Action Required
Action Taken: No Action Required
Attacking Computer: "OFFICE2 (10.100.105.51, 56941)"
Attacker URL: 10.100.100.1/cgi-bin/a2/out.cgi
Destination Address: "10.100.100.1, 80"
Source Address: 10.100.105.51 (10.100.105.51)
Traffic Description: "TCP, Port 56941"
Network traffic from <b>10.100.100.1/cgi-bin/a2/out.cgi</b> matches the signature of
a known attack. The attack was resulted from \DEVICE\HARDDISKVOLUME4\PROGRAM FILES\AVAST
SOFTWARE\AVAST\AVASTSVC.EXE. To stop being notified for this type of traffic, in the
<b>Actions</b> panel, click <b>Stop Notifying Me</b>. | This is probably an issue with the co-existence of multiple realtime AV software products. One will treat the other one's activities as malicious. Avast is trying to scan your router/default gateway and Norton flags it as malicious here. You should uninstall one of the two as soon as possible. Running multiple AVs on the machine can have effects like low performance and more false positives; instead of increasing security, it decreases the security level. Instead of running multiple AVs you can supplement your favorite ONE with anti-malware tools like Malwarebytes Anti-Malware or Hitman Pro and observe safe internet usage practices. Also, when uninstalling either, be sure to use the removal tool provided by the vendor.
For Norton : Norton Removal Tool Using more than one anti-virus program is not advisable. Why? The
primary concern with doing so is due to Windows resource management
and significant conflicts that can arise especially when they are
running in real-time protection mode simultaneously. Even if one of
them is disabled for use as a stand-alone on demand scanner, it can
affect the other and cause conflicts. Anti-virus software components
insert themselves deep into the operating systems core where they
install kernel mode drivers that load at boot-up regardless of whether
real-time protection is enabled or not. Thus, using multiple
anti-virus solutions can result in kernel mode conflicts causing
system instability, catastrophic crashes, slow performance and waste
vital system resources. When actively running in the background while
connected to the Internet, each anti-virus may try to update their
definition databases at the same time. As the programs compete for
resources required to download the necessary files this often can
result in sluggish system performance or unresponsive behavior. When scanning engines are initiated, each anti-virus may interpret the
activity of the other as suspicious behavior and there is a greater
chance of them alerting you to a "false positive". If one finds a
virus or a suspicious file and then the other also finds the same,
both programs will be competing over exclusive rights on dealing with
that threat. Each anti-virus may attempt to remove the offending file
and quarantine it at the same time resulting in a resource management
issue as to which program gets permission to act first. If one
anti-virus finds and quarantines the file before the other one does,
then you may encounter the problem of both wanting to scan each
other's zipped or archived and update files and each reporting the
other's quarantined contents. This can lead to a repetitive cycle of
endless alerts that continually warn you that a threat has been found
after it has already been neutralized. Anti-virus scanners use virus definitions to check for malware and
these can include a fragment of the virus code which may be recognized
by other anti-virus programs as the virus itself. Because of this,
many anti-virus vendors encrypt their definitions so that they do not
trigger a false alarm when scanned by other security programs. Other
vendors do not encrypt their definitions and they can trigger false
alarms when detected by the resident anti-virus. Further, dual
installation is not always possible because most of the newer
anti-virus programs will detect the presence of another and may insist
that it be removed prior to installation. If the installation does
complete with another anti-virus already installed, you may encounter
issues like system freezing, unresponsiveness or similar symptoms as
described above while trying to use it. In some cases, one of the
anti-virus programs may even get disabled by the other. To avoid these problems, use only one anti-virus solution. | {
"source": [
"https://security.stackexchange.com/questions/100388",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76799/"
]
} |
100,451 | The Java plugin for web browsers is known to have many security issues, at least in the past. Google Chrome is not even supporting it any more, describing it as decrepit technology, and Firefox shows a little warning message near it. But is the JRE secure without the browser plugin? Are Java desktop, mobile and server applications as vulnerable as the Java plugin? | Yes - Java desktop and server applications are basically secure. When you run a desktop application - Skype, Picasa, whatever - you give that software full access to your computer. You have to trust the software. In contrast, when you run a Java applet in your web browser, the applet runs in a restricted environment called a sandbox. The sandbox exists so you do not have to trust the Java applet. Java has had a lot of vulnerabilities; almost all of them are "sandbox escapes". In other words, if you're running an old version of Java, a malicious applet can break out of the sandbox and take control of your computer. Not many technologies support sandboxes. In fact, there are only three common technologies where people routinely run untrusted software: Java, JavaScript and Flash. All of these have had many sandbox escape vulnerabilities, which demonstrates the difficulty of writing a secure sandbox. When you run Java on your desktop, or on a server, you trust the Java code you are running, so you are not relying on the sandbox. In that context the main concern is whether untrusted data can interfere with the application. For example, if you're talking to someone on Skype, could they send a malicious message that Skype mishandles, allowing them to take control of your computer? (I'm just using Skype as an example here.) There have been very few instances where bugs in the Java runtime would allow a desktop or server application to be hacked. Typically this happens because of bugs in the application code, not Java itself. | {
"source": [
"https://security.stackexchange.com/questions/100451",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/77192/"
]
} |
100,481 | I have some domains/websites as well as emails with Bluehost. Every time I need support, they need the last 4 characters of my main password for the account. They cannot tell me how they store the password, so I am intrigued by how they could safely store my password(s) and still see the last 4 characters. Do they see the full password in plain text? | There are several possibilities. They could be storing the full password in plaintext, and only displaying the last 4 characters to the support person. They could be hashing the password twice: once hashing the full password, and again with just the last 4. Then the support person types in the last 4 to see if it matches the hashed value. The problem with this is that it makes it easier to brute force the full password since the last 4 characters are in a separate hash, reducing entropy. They could be hashing the full password, and storing the last 4 in plaintext. Obviously this makes it much easier to brute force the password if an attacker who gains access to the password database knows the last 4 characters. They could be doing something else where the last 4 characters are stored in some way that's discoverable, such as the encryption that Mike Scott mentions below. If the secret to unlock the 4 characters can be discovered, this is as bad as plaintext. All scenarios are very bad, and greatly reduce the security of the system. It's not possible to know which scenario they're using, but each of them shows a lack of consideration for security breaches. I'd advise caution if this is a site where you care about your account being breached. | {
"source": [
"https://security.stackexchange.com/questions/100481",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/78980/"
]
} |
100,487 | I keep reading that Symmetric session key is used in SSL after handshake only for faster performance but isn't true a secondary session key either Symmetric or Asymmetric key generated by the client during run-time is needed to make the communication secure. Isn't it because the network traffic can be decrypted that is generated by the server using the private key which can be encrypted by any client which has a public key as long as they can sniff the traffic like the man-in-the-middle if no session key exists ? Can someone please clarify ? | There's several possibilities. They could be storing the full password in plaintext, and only
displaying the last 4 characters to the support person. They could be hashing the password twice. Once hashing the full password,
and again with just the last 4. Then the support person types in
the last 4 to see if it matches the hashed value. The problem with
this is that it makes it easier to brute force the full password
since the last 4 characters are in a separate hash, reducing entropy. They could be hashing the full password, and storing the last 4 in
plaintext. Obviously this makes it much easier to brute force the
password if an attacker gaining access to the password database knows the last 4 digits. Something else where the last 4 characters are stored in some way
that's discover able, such as encryption that Mike Scott mentions below. If the secret to unlock the 4 characters can be discovered, this is as bad as plaintext. All scenarios are very bad, and greatly reduce the security of the system. It's not possible to know which scenario they're using, but each of them shows a lack of consideration for security breaches. I'd advise caution if this is a site where you care about your account being breached. | {
"source": [
"https://security.stackexchange.com/questions/100487",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/87018/"
]
} |
100,488 | I am doing my own login authentication system, and I want to do a robust system. I want to decide which direction I should take to keep the user logged in. I have put a question for each way, and also what I think the pros/cons are. Cookie How safe is it to store a "User" object, with many properties
(including encrypted pass) in a base64 string? Disadvantage : Must be encrypted so the user can't easily manipulate data Advantage : Can be set for longer periods of time, and can persist during sessions. Session Variable How safe is to store sensitive plain text data? Disadvantage : Only lasts for the server session. Advantages : Cookie is automatically encrypted, and simpler to code than cookies. What are other advantages and disadvantages? | There's several possibilities. They could be storing the full password in plaintext, and only
displaying the last 4 characters to the support person. They could be hashing the password twice. Once hashing the full password,
and again with just the last 4. Then the support person types in
the last 4 to see if it matches the hashed value. The problem with
this is that it makes it easier to brute force the full password
since the last 4 characters are in a separate hash, reducing entropy. They could be hashing the full password, and storing the last 4 in
plaintext. Obviously this makes it much easier to brute force the
password if an attacker gaining access to the password database knows the last 4 digits. Something else where the last 4 characters are stored in some way
that's discover able, such as encryption that Mike Scott mentions below. If the secret to unlock the 4 characters can be discovered, this is as bad as plaintext. All scenarios are very bad, and greatly reduce the security of the system. It's not possible to know which scenario they're using, but each of them shows a lack of consideration for security breaches. I'd advise caution if this is a site where you care about your account being breached. | {
"source": [
"https://security.stackexchange.com/questions/100488",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/87016/"
]
} |
100,650 | I have a text file in which I store all my bank details. I compress and encrypt it with 7-Zip using the following parameters: Compression parameters: Archive format : 7z Compression level : Ultra Compression method : LZMA2 Dictionary size : 64 MB Solid Block size : 4 GB Number of CPU threads : 4 Encryption parameters: Encryption method : AES-256 Encrypt file names : True The password for the encryption is chosen such that it won't be found in any dictionary and is rather an almost random string (composed of 15-20 upper and lower case letters, numbers, and symbols). I do not store this password anywhere. Also, the filename of the text file is kept such that no one will be able to tell that the file is related to bank details at all. Is this secure enough, under the following scenarios? The attacker takes full control of the system, but does not know that this particular file is of any importance to him. The attacker is in possession of the file, and is actively trying to decrypt it, knowing that it has the bank details. | 7-Zip encryption (like that of other similar utilities) is designed to protect archived files. So, as long as the tool designers did their job well, you are safe in the second case (somebody getting his hands on the encrypted file and trying to crack it). However, such utilities are not designed to protect you against your first mentioned case (someone getting access to your account data on your machine and/or you accessing the file content regularly). Indeed, someone having gained full (or even just minimal, no need to escalate privileges) access to your system will see you use this file and will also be able to capture your keystrokes while you type your password. Even worse: an attacker will actually not even have to bother with this, since the file will most probably be present in clear form in your Windows Temp directory. So, for your first threat, I would definitely recommend you use a tool designed for this usage, like KeePass, which will avoid storing decrypted data in temporary files and will provide a minimum of protection when typing the password. | {
"source": [
"https://security.stackexchange.com/questions/100650",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/87164/"
]
} |
100,743 | This is my mouse. I used it with my old computer which was full of viruses. If I use this mouse with my new PC, can my new computer be infected by my mouse? | USB devices, as a rule, can in principle carry viruses. But that doesn't mean that all USB devices are capable of carrying viruses; it just means that if you don't know where the device came from, then you shouldn't plug it into your computer, even if it doesn't look like a device that could transmit a virus. That said, most mice (presumably including this one) don't contain any writable memory. So the mouse can't be modified by an infected computer. So if the mouse wasn't dangerous to begin with, then plugging it into a dangerous computer generally can't make it a dangerous mouse. Interestingly, this particular mouse is unusual in that it does actually have some amount of memory in the form of programmable macros, actually stored on-device. This makes the device slightly more suspect -- a malicious piece of software could theoretically overwrite your macros. How that might translate into subversive behavior is anyone's guess, but for run-of-the-mill malware infections, transmission by way of this particular macro function is quite unlikely, if for no other reason than because this mouse is not very common. There's some chatter about the possibility of overwriting the mouse's firmware so as to persist an attack. Flashing firmware was the basis for the Bad USB attack class. But this requires that the firmware be user-flashable. For most mice there's nothing to worry about. Adding a firmware modification feature to the USB connection is expensive and uncommon. But if you expect to see such a feature anywhere, it would be on overly complicated and expensive peripherals targeted at gamers. The anatomy of such an attack would be, almost certainly, to emulate a keyboard and inject a script of keystrokes when you're not looking -- see the USB Rubber Ducky for how this works. | {
"source": [
"https://security.stackexchange.com/questions/100743",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/87244/"
]
} |
100,945 | There is a system that, on a login form, presents about 40 boxes for password letters (to hide the password's actual length), and only random ones (the same amount each time) are editable. The explanation given is that this protects me from keyloggers. Seems legit. But does it mean my password is kept in plain text? Or is there any known hash algorithm that would permit this? Note: I'm not asking if it is a good or bad practice; I ask about the possibility / feasibility / method of hashing. | No - there's no practical way to hash your password in this case. For the website to work this way, the hash would have to allow you to verify that a 5-character subset of the password matches a certain value. That means you could brute force those 5 characters on their own, which is readily practical, and once you know 5 characters, it's easy to brute force the rest. This is somewhat similar to the weakness of LANMAN passwords. Partial passwords do have security benefits, especially against key loggers. Banking apps sometimes have a normal password that is hashed on the server, and an additional authentication code, which you only enter partially, and is NOT hashed on the server. This is considered completely acceptable, but becoming less common as multi-factor authentication is a better solution. In this particular case, it is not clear-cut that this is a bad idea. You need to weigh the risk of server compromise against the risk of client-side key loggers. In fact, it is the client that is at greater risk. So despite going against normal best practice, this website is not so dumb after all. | {
"source": [
"https://security.stackexchange.com/questions/100945",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32019/"
]
} |
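A quick worked estimate of the claim in the answer to question 100,945 above that a 5-character subset can be brute forced on its own. The guess rate is an assumption for illustration (roughly what a single modern GPU manages against a fast, unsalted hash); the point is only the order of magnitude.

import string

alphabet = len(string.ascii_letters + string.digits + string.punctuation)  # 94 printable characters
subset_length = 5
search_space = alphabet ** subset_length  # about 7.3 billion combinations
guesses_per_second = 1e9                  # assumed GPU rate against a fast hash

print(f"search space: {search_space:,}")
print(f"worst case:   {search_space / guesses_per_second:.0f} seconds")
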
100,951 | On web applications I build I don't hash the passwords. Instead I symmetrically encrypt the username using the user's password and store the result in the password field. I use RijndaelManaged on dotnet. On login I do it again and check if there is a match. My question is, how secure is this now that more and more databases are hacked? I do not know at any given time the user's real password. What the hacker would get from the db is the username encrypted with an unknown password. With hashing there is the rainbow tables problem, but not with this method. Does knowing the decrypted value (the username) in advance make it easier to find the password? | What you do really is hashing: you built a hashing function out of a block cipher. Incidentally, "encryption of a known value with the password as key" is how the traditional DES-based crypt scheme for Unix was designed. On the plus side, your construction includes the user name, which partially provides the effect of a salt: parallel cracking is discouraged because two users using the same password will end up with two distinct hash outputs. "Partially", because upon password change, the user keeps his username, so some form of parallelism is still feasible here; and, perhaps more importantly, the same username may be used in several Web apps (e.g. 'admin'), which makes precomputed tables worthwhile (and the point of the salt is to prevent parallelism, in particular the efficient use of precomputed tables). On the minus side: This is a homemade construction. 50 years of research in cryptography have taught us that homemade crypto is very rarely good. In fact, we can "know" that a given algorithm is correct only through years of review by other skilled researchers, and homemade schemes, by definition, never got much of a review. This is limited. Rijndael is a block cipher that can accept keys up to 256 bits; thus, if you use the password as key, then it must fit within 256 bits. This prevents using so-called "passphrases", which can be strong but easy-to-memorize passwords because they use a lot of space for their entropy (human brains don't like concentrated entropy, but are happy with diluted entropy). This is way too fast, and this is the biggest immediate problem in your scheme. With a basic PC, an attacker can try a lot of passwords per second, in particular since modern PCs have dedicated opcodes for AES. With a quad-core PC, one should be able to try up to about half a billion passwords per second. Not many user-chosen passwords can survive such an onslaught for long. If you are interested in how password hashing works, then read this. The short answer, though, is still the same: don't use a homemade scheme; use bcrypt. | {
"source": [
"https://security.stackexchange.com/questions/100951",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/87464/"
]
} |
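Since the answer to question 100,951 above ends with "use bcrypt", here is a minimal sketch of what that looks like with the widely used Python bcrypt package. The cost factor of 12 is just a reasonable illustrative choice; tune it to your hardware.

import bcrypt

def hash_password(password):
    # gensalt() embeds a random salt and the cost factor into the stored value
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def check_password(password, stored_hash):
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", stored))  # True
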
101,044 | I recently generated a new SSH key in the ed25519 format.
The public key is only 69 bytes long while my old RSA key is 373 bytes. From my perception, ed25519 is the more recent and more secure format. So why isn't longer better here? | No, longer is not better. Let me explain. In symmetric cryptography, keys are just bunches of bits, and all sequences of bits are valid keys. They have no internal structure. Provided that you use decent algorithms, the best possible attack on a key for symmetric encryption is brute force: the attacker tries all possible keys until he finds the right one. If the key has n bits, then there are 2^n possible keys, and the attacker, on average, will find the right one after trying half of them, i.e. 2^(n-1). Longer keys thus make brute force harder; in that case, longer is better. (Note that there is a limit to that; when keys are sufficiently long that brute force is no longer feasible, increasing the key length does not make things "more secure" in any meaningful way. So, for symmetric keys, longer is better until they are long enough, at which point longer is just longer.) RSA and EdDSA relate to asymmetric cryptography where things are completely different. A key for asymmetric cryptography is a mathematical object that has a specific internal structure; breaking the key consists in unravelling that structure, and can be done a lot more efficiently than trying out all possible private keys. Note these two points: Against brute force, what matters is not the length of the public key, but that of the private key, since what the attacker wants is the private key, not the public key. Brute force is not the most efficient attack against keys used in asymmetric cryptography. For RSA keys, the attack succeeds by factoring the modulus. Integer factorization is a long-studied problem; with the best known algorithm, breaking a 2048-bit RSA key (i.e., an RSA public key whose modulus is a 2048-bit integer) requires about 2^110 or so elementary operations. For EdDSA keys, the public key is a point P on an elliptic curve, such that P = xG where x is the private key (a 256-bit integer) and G is a conventional curve point. The best known algorithm for recovering x from P and G requires about 2^128 elementary operations, i.e. more than for a 2048-bit RSA key. In general, to break an n-bit elliptic curve public key, the effort is 2^(n/2). Breaking either key is way beyond that which is feasible with existing or foreseeable technology. But from an "academic" viewpoint, the EdDSA key is somewhat stronger than the RSA key; also, elliptic curves give you more security per bit (technically, we say that integer factorization is a sub-exponential problem). See this site for more information on that subject. | {
"source": [
"https://security.stackexchange.com/questions/101044",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8458/"
]
} |
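The size difference noted in question 101,044 above is easy to reproduce. The sketch below is only an illustration using the third-party Python "cryptography" package (nothing prescribed by the original answer); it generates one key of each type and prints the length of the comment-less OpenSSH encoding of the public keys. Ed25519 comes out far shorter while offering a higher estimated security level.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519, rsa

ed_pub = ed25519.Ed25519PrivateKey.generate().public_key()
rsa_pub = rsa.generate_private_key(public_exponent=65537, key_size=2048).public_key()

def openssh_bytes(pub):
    return pub.public_bytes(
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH,
    )

print("ed25519 :", len(openssh_bytes(ed_pub)), "bytes")
print("rsa-2048:", len(openssh_bytes(rsa_pub)), "bytes")
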
101,137 | I'm using the Ubuntu GNOME Linux distribution as my desktop. Recently, it became impossible for me to connect to the Internet with it. When I checked its network settings, I saw that the DNS address was 46.161.40.29 . When I searched this IP address on my other desktop (a Windows 7 machine), I found the article http://anti-hacker-alliance.com/index.php?details=46.161.40.29 . On my Linux machine, I set the DNS nameserver in /etc/resolv.conf to 8.8.8.8 and 8.8.4.4 , and was able to access the Internet. Note that every time I reboot my machine, I'm unable to connect to the Internet, and the DNS file ( /etc/resolv.conf ) is blank. My wired connection is also not showing in the network manager. I still see 46.161.40.29 in my wired connection's settings: Is my computer compromised? If so, what can an attacker do? Note: I'm using the D-Link DSL-2520u home modem, and the firmware version is v1.08. When I checked my DNS settings in the modem interface (192.168.1.1) it was pointing the primary DNS address to 188.42.254.137, and the secondary to 8.8.8.8. | Something in your environment has definitely been compromised. It seems more likely that your router has been compromised. You haven't provided much information, so I'm going to make some basic assumptions: You're at home You are behind a commercial router, provided by your ISP You haven't done anything to secure your router Your linux desktop is a DHCP client of the router. These devices often have default passwords that users never change and critical firmware vulnerabilities that go unpatched. As a DHCP client of the router, your Linux desktop is going to pull DNS information as part of its DHCP request, and so will see the behavior you've described above. Configuring other DNS servers in resolv.conf is only a workaround. I strongly suggest that you try to log in to your router (probably @ 192.168.1.1, based on your screenshot). I bet you won't be able to. You'll probably have to reset it to factory defaults, then log in. You'll want to secure it better - update firmware, change default passwords, and hope that's enough. For confirmation without logging into your router, check the DNS configuration on your Windows desktop. If it points to the same 46.161.40.29, then it's very likely the router. | {
"source": [
"https://security.stackexchange.com/questions/101137",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81855/"
]
} |
101,172 | Occasionally I will fail to hit Tab properly when entering a username/password combination. This results in me submitting username "myUsername$ecretPa$$word" along with a blank password. I always try to change my password shortly after doing this, fearing that it's entirely possible that someone maintaining the site is logging failed login attempts. It seems reasonable that someone (even a security-conscious admin) would consider logging attempted usernames both useful and safe. But if this happens my password would be stored in some unencrypted log somewhere, right along with my username. Is this a reasonable concern? Am I being too paranoid? | The short answer is that it is very, very likely that your concatenated username and password exist on an unencrypted log somewhere that a larger group of people would conceivably have access to than the restricted logs. You are not paranoid to change your password and should change it when this happens. | {
"source": [
"https://security.stackexchange.com/questions/101172",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/16510/"
]
} |
101,272 | So I just saw this picture on Imgur: http://imgur.com/gxRCrCM The intriguing thing about it is that the picture refers to an old Daft Punk song named "Face 2 Face". The image's MD5 is 6b0cc07a5c4d3d8fface2face79d8205 which, amazingly enough, contains the phrase face2face in it. How does one go about generating this type of hash? I always thought that one gets a totally different hash when even one byte of the message is modified. What kind of computing power is required to perform this trick? Of course, I am assuming this is not a mere coincidence. Also I'd love to know if there are other examples of such hashes, and what are some tools available for Linux or Windows? | 'face2face' is only 9 characters, i.e. 36 bits since we are using hexadecimal encoding. It suffices to generate many pictures with some internal variations (subtle variations that do not impact the graphical output) and hash them all until the target string is obtained. Since we are looking for a 36-bit pattern and accept that pattern wherever it appears in the 32-character output (24 possible positions), then the average number of pictures to produce and hash will be about 2^36/24, i.e. about 2.8 billion. Since a basic desktop PC can compute several (many) million MD5 hashes per second, this should be done in less than an hour with some decently optimized code. This has nothing to do with known weaknesses of MD5 with regards to collisions. The same could be done with SHA-1 or SHA-256. This has already been discussed in this question. | {
"source": [
"https://security.stackexchange.com/questions/101272",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56659/"
]
} |
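A brute-force search of the kind described in the answer to question 101,272 above could look like the sketch below. It is purely illustrative and rests on two assumptions: appending a few trailing bytes to the image file does not change how it is displayed (typical JPEG/PNG viewers ignore trailing data), and hashlib's copy() lets each attempt rehash only the short varying trailer rather than the whole image. Even so, pure Python will be noticeably slower than the decently optimized native code the answer has in mind for the expected ~2.8 billion attempts.

import hashlib
import itertools

def find_trailer(image_bytes, target="face2face"):
    base = hashlib.md5(image_bytes)  # hash the unchanging image data once
    for counter in itertools.count():
        h = base.copy()                          # resume from the precomputed state
        h.update(counter.to_bytes(8, "little"))  # varying trailer, ignored by viewers
        digest = h.hexdigest()
        if target in digest:
            return counter, digest

# hypothetical usage:
# with open("some_image.jpg", "rb") as f:
#     counter, digest = find_trailer(f.read())
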
101,276 | I am trying to understand how works various SSO technologies like SAML 2.0, OpenID Connect 1.0. In general, they work in a similar way providing tokens (XML, JSON) through Identity Provider to Service Provider. What I don't fully understand is how are these tokens secured so nobody can steal it and use it from different device in order to get authenticated session and impersonate. Also, how are these tokens stored? Are they stored as a cookie in a browser? If so, would it be possible to steal a token cookie and use it to impersonate somebody? How can I detect that the token is used in an unauthorised manner? I am looking for a practical view how are tokens secured and represented when authentication is established or when using as SSO, so authentication is not required by Identity Provider. | 'face2face' is only 9 characters, i.e. 36 bits since we are using hexadecimal encoding. It suffices to generate many pictures with some internal variations (subtle variations that do not impact the graphical output) and hash them all until the target string is obtained. Since we are looking for a 36-bit pattern and accept that pattern wherever it appears in the 32-character output (24 possible positions), then the average number of pictures to produce and hash will be about 2 36 /24, i.e. about 2.8 billion. Since a basic desktop PC can compute several (many) million MD5 hashes per second, this should be done in less than an hour with some decently optimized code. This has nothing to do with known weaknesses of MD5 with regards to collisions. The same could be done with SHA-1 or SHA-256. This has already been discussed in this question . | {
"source": [
"https://security.stackexchange.com/questions/101276",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/34132/"
]
} |
101,324 | In this question on software recommendations, the OP asks for an alternative to Google reCAPTCHA because "for a security reasons also we don't want to depend on any out side services". As far as I know, you ask Google for a CAPTCHA, you display it, you send the user’s input to Google who tell you if the input was correct or not. Where’s the possible insecurity? It's not like you send Google the user's log in, password, d.b.b, bank account number or any other sensitive information. I suppose that a truly determined attacker could use some man-in-the-middle attack, but why? CAPTCHAs are generally used to prove you are not a bot and not as login credentials. Am I missing something? Is the OP correct to be concerned or is he just going to cause himself a heap of extra work, possibly by trying to implement his own system which he believes to be safer than Google's because ... security through obscurity? | Well, there could be a couple of concerns: Denial of service: In case the Google reCaptcha system somehow goes down, your users will probably not be able to authenticate anymore. This could also happen if they implement some kind of update, which breaks the whole system. External JavaScript libraries: when using Google reCaptcha, you need to include some JS libraries . In case Google wants to execute some XSS attack, you have just made it a lot easier for them. Tracking: anyone clicking through the reCaptcha 'consents' to be tracked by Google . This may impact your user's privacy, as well as provide a means for Google to track traffic on your website. However, as mentioned in this answer , they probably care more about their reputation than about compromising your website. Although I doubt that when it comes to the 'tracking' concern. | {
"source": [
"https://security.stackexchange.com/questions/101324",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/4321/"
]
} |
101,385 | I always read that using the same password on multiple sites is a risk. I'm wondering what the real reason for this is. In my case, I use the same password on multiple sites everywhere. My password, however, is very strong, complicated and long; I saved it in a text file and copy it for each login on the sites I subscribe to. Does this method protect me from the risk, given that my password is so complicated and long? | Password reuse is a security bad practice because of a simple attack scenario like this one: Source: XKCD Even when reusing a long and complicated password, you are still facing the same threats as highlighted in the above schema. In addition to this, saving a password in a text file is an unsafe practice in that the privacy of your password also depends on the safety of your computer (think of malicious browser plugins you may install, drive-by download attacks installing spyware on your machine ...) and that of your network. You may also consider other scenarios, such as phishing attacks, in which the strength of your password plays no role. | {
"source": [
"https://security.stackexchange.com/questions/101385",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/87908/"
]
} |
101,390 | I'm changing my employer, and I'm about to leave my office computer. Due to internal regulations and my supervisor's orders, I'm unable to format the disk drive. I was hoping I would be able to do this, as I was using that computer partially for my private purposes. The computer is running Windows 7. Besides uninstalling any software that contains my personal data (like Google Chrome or Dropbox) and clearing everything in every browser cache / history, what other steps should I take or consider in order to leave my office computer without any personal data-related concerns? Note that I understand that formatting the drive is the best option here (and thus I really regret that such an option was taken from me). As far as I know and understand, using an office computer for private purposes isn't the best idea. A bad thing happened, however, so comments about that won't help me in this situation. As per comments: My computer must be fully usable after I leave the office and thus I can't simply trash my disk! :> And my Windows is not a part of a domain. | Note that if you were on an AD Domain, domain administrators would have had full access to your computer anyway. The usual caveats about physical access, unencrypted drives, etc. all apply, so this is not real security, but it will prevent subsequent users of your computer from getting easy access to your data. If you were not part of a domain, then the best you can do is create a new administrator account, and then delete your old account and profile from the new one. Make sure that the recycle bin has been emptied. If the Volume Snapshot Service is running, delete any volume shadow copies by running cmd with elevated privileges: vssadmin delete shadows /for=c: Finally, run the following command for each drive: cipher /w:c:\ where c: in both cases is the drive letter designation. This will wipe all free space, making it unrecoverable. See this answer for more information. | {
"source": [
"https://security.stackexchange.com/questions/101390",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11996/"
]
} |
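The "run the following command for each drive" step from the answer above can be scripted. A minimal sketch (assumes Windows and an elevated prompt; it simply shells out to the cipher command already quoted in the answer):

    import os
    import string
    import subprocess

    for letter in string.ascii_uppercase:
        root = f"{letter}:\\"
        if os.path.exists(root):              # crude check that the volume exists
            print(f"Wiping free space on {root} ...")
            # cipher /w: overwrites deallocated space only; existing files are untouched
            subprocess.run(["cipher", f"/w:{root}"], check=False)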
101,402 | I'm using http://acme.com/software/mini_httpd/ for my embedded system. Is it as "secure" as better-known web servers like Apache or lighttpd? Being a lesser-known web server means that it's less likely to be targeted by attacks, but maybe also less tested. | Update: As pointed out by Moti Korets here, my answer relates to "Vector Ultra Mini Httpd", not the "ACME mini_httpd" that I now realise was being asked about by the OP. Vector Ultra Mini Httpd: No, it is not secure. Even the latest version, v1.21, suffers from a serious stack-based buffer overflow, meaning that a remote attacker can gain control of your system. CVE here. ACME mini_httpd: See jimis' excellent answer here. | {
"source": [
"https://security.stackexchange.com/questions/101402",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70035/"
]
} |
101,504 | A law firm I've been in contact with has recently been broken into 3 times in the past 4 months. In spite of a number of laptops and other equipment containing sensitive information being stolen, the tech support company occasionally doing work for them has done nothing to safeguard against future theft. I proposed installing anti-theft software (such as Prey) and later considered establishing one system as an information-gathering honeypot with hidden keyloggers sending information to a number of email addresses. All physical security best practices aside, what other measures could possibly be implemented here? These are all Windows systems. I don't doubt the separate incidents are linked and being carried out by connected, if not the same, criminals. The firm has already been in contact with the local authorities. However, due to a lack of documentation on the stolen items, there are no serial numbers or additional helpful information on the missing devices besides make and model, so recovery seems a dead end; but at least they're aware of the incidents. | Consider getting a software product which fully encrypts your hard drives. Such software will prompt the user to enter the password used to encrypt the hard drive during boot. Without the correct password, the hard drive (including both the OS and any data) cannot be decrypted, the system won't boot and the user won't get any access to the data. In that case a thief might still be able to sell the hardware by nuking the disks, but won't have access to any sensitive information stored on it. The default solution for Windows is Microsoft BitLocker, which is already available out of the box in some editions of Windows. There are also other products on the market like Sophos SafeGuard or TrueCrypt. For recommendations on which product to use, consult Software Recommendations Stack Exchange. | {
"source": [
"https://security.stackexchange.com/questions/101504",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/88021/"
]
} |
101,520 | This morning, I noticed that a new Windows update was offered to me. It looks very suspicious to me: Here are the update details: gYxseNjwafVPfgsoHnzLblmmAxZUiOnGcchqEAEwjyxwjUIfpXfJQcdLapTmFaqHGCFsdvpLarmPJLOZYMEILGNIPwNOgEazuBVJcyVjBRL
Download size: 4,3 MB
You may need to restart your computer for this update to take effect.
Update type: Important
qQMphgyOoFUxFLfNprOUQpHS
More information:
https://hckSLpGtvi.PguhWDz.fuVOl.gov
https://jNt.JFnFA.Jigf.xnzMQAFnZ.edu
Help and Support:
https://IIKaR.ktBDARxd.plepVV.PGetGeG.lfIYQIHCN.mil Obviously, this seems way too fishy to install, but I would like to know more. Has everyone received this update (Google only has a couple of hits for this)? Could this be an attack? Is there a way to download the update data without installing it? I'm open to any ideas. I'm running Windows 7 Pro (64-bit). As @Buck pointed out below, the update is no longer available through Windows Update. I'm not sure how this question will be resolved. | The official communication from Microsoft at this time: "We incorrectly published a test update and are in the process of removing it." – a Microsoft spokesperson. I won't add commentary, but will update the answer as more information becomes available. | {
"source": [
"https://security.stackexchange.com/questions/101520",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/6822/"
]
} |
101,560 | What is the recommended way and best practice for sending private keys and SSL private keys? I was thinking of zipping up the files, then using gpg: gpg -c thefile.zip The problem then becomes: how do you send the passphrase used for encryption to the other end? Is there a better solution? | TL;DR: private keys are called private for a reason. You can secure private keys by not transmitting them at all. If you have shell access to the server they are used at, you simply generate them in situ. If the target device is too weak/underpowered to generate the keys, it is also too weak to use asymmetric encryption (this includes entropy sources). Of course, there are memory-constrained hardwired devices (smart cards, for instance), but you can use a physically secure link to move the keys into them without being connected to the Internet. A rather obscure corner case is asymmetric encryption used in a remote device (read: a space probe or a satellite), but the conditions are unlikely to be met in practice (one key compromised but another secure link still available). If you don't have secure shell access (SSH) to the target device, you generally cannot securely copy (SCP) the files. Using email to move the keys, even in an encrypted form, draws unwarranted attention to the message and leaves a trail in perhaps dozens of places across the Internet. Sending the keys means the people at the other end of the channel must trust you and you must trust them. That is not always a good proposition in business and other high-stakes environments (either party can forge communications with the private keys in hand). EDIT: For the stated use case of private keys for a hosting provider, you are still better off generating them in place. | {
"source": [
"https://security.stackexchange.com/questions/101560",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/15850/"
]
} |
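To illustrate the "generate them in situ" advice from the answer above: the key pair can be created directly on the machine that will use it, so the private half never crosses the network. A minimal sketch using the third-party cryptography package (an assumption; openssl run on the server would do equally well), with an illustrative passphrase and file names:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the key pair on the host that will use it
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

    # Store the private key locally, encrypted with a passphrase entered on this host
    pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"change-me"),
    )
    with open("server.key", "wb") as f:
        f.write(pem)

    # Only the public half (or a CSR derived from it) ever needs to leave the machine
    pub = key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    with open("server.pub", "wb") as f:
        f.write(pub)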
101,797 | Okay, I'll just begin with the question and then elaborate a bit below. It is: Why has the world's dominant maker of non-Apple smartphone operating systems, Google, still not adopted a straight-to-the-user model of distributing security updates for Android, instead sticking with the current, obviously deeply flawed approach of relying on phone makers and wireless carriers to rapidly test, approve, and distribute them? How substantial would the technical obstacles really be in moving Android to the direct-update model if Google genuinely wanted to do it? So, some additional not-good news came out yesterday regarding the fun Stagefright set of media player-component security bugs in Android. First, some newly-announced flaws in the same component appear to extend the scope of Android phones affected to virtually every one that's ever been made & sold. Second, there are apparently even more flaws in the same component that have been disclosed to Google and that are in the pipeline for public announcement soon, including some that will get a "critical" label. I'm not intimately familiar with the Android security scene, but this certainly seems to be regarded as the most important security event that Android has had to deal with so far. Of course, the discovery of awful vulnerabilities, on any platform, always leads to the next question: "When is my device going to be patched?" Unfortunately, the way things work now with Android security updates--Google pushes them to hardware makers and carriers, who have to sign off on them and be willing to distribute the updates to users--the vast majority of the 1 billion+ Android users can expect only one of two answers: --For the lucky ones: it will be months. (This report, also linked above, quotes an estimate that it usually takes somewhere from 9-18 months for patches from Google to wind their way through testing and various approvals to a user's phone. Now, one assumes that with Stagefright that will be hurried up to some degree, but still...) --For the unlucky ones: never. (Some Android makers barely provide any post-sale update servicing at all for their phones. Others seem to have an unwritten support period limit of maybe a year, or perhaps two, after which the user is out in the cold.) All of which raises the question: Why can't Google just do what companies who make operating systems for other computers--PCs and servers, for example--do, and bypass OEMs and service providers to deliver security updates straight to the user? Now, obviously there are probably both technical aspects to this and business aspects to this. I'm thinking more along the lines of technical aspects (though I admit that sometimes the two are less easy to separate from each other than one might think). In what ways could going to a process of Google directly issuing security updates--but not necessarily directly shipping any updates that involved anything beyond fixing security vulnerabilities--cause problems with hardware and/or software compatibility that could be troublesome enough for users, phone makers, carriers, and Google itself that that factor could outweigh the value of getting these patches out much more quickly and far, far more widely than they are likely to under the present system? Or is it really a slam dunk in favor of Google going to direct distribution, as 99% of security news reporters & commentators seem to think? 
| The crux of the problem is that with only a few notable exceptions, every phone ships with a fork of Android, not with the software written by Google. So Google can't push changes to Samsung's phones any more than FreeBSD can push changes to Apple's MacBooks. Android is Open Source, which is a bit unusual. This is the first time a major consumer operating system with this size of userbase (1.4 billion users and growing fast) has been an open source project rather than a centrally-controlled one. We're used to the idea of the creator of the OS being able to take responsibility for updating it. And as evidenced by this question, we somehow expect Google to be able to control Android the same way Microsoft controls Windows and Apple controls iOS. But by allowing companies like Samsung and Sony and Motorola to ship their own modified version of Android, Google gives up that control in a way that it can't get back. Samsung then takes over not only control of their own flavor of Android, but also responsibility for keeping it updated. And by allowing Verizon to fork Samsung's version, Samsung in turn sheds both control and responsibility to Verizon. Theoretically this all works; theoretically Verizon will be just as responsible and dependable as Google. Except when they're not. So there are three possible solutions. Either: Manufacturers could start taking more responsibility for their OSes. Since Samsung's Android belongs to Samsung, we get nowhere unless Samsung takes some initiative on keeping it updated. This may require some cooperation with companies like Verizon if Samsung has allowed them to fork the code as well. This is more or less the status quo, but with more wishful thinking. Google could take back control of Android. By switching to a closed-source license, they could impose licensing restrictions like requiring companies like Samsung to push patches within a limited timeframe. Of course, if Google went this route, there'd be no end of shouts about how "evil" and "anti-consumer" they were being, despite the fact that they're literally the only major player that is Open Source to begin with. Politically this is probably a no-go. Companies like Verizon and Samsung could voluntarily give control back to Google without being forced into it by a licensing agreement. This is the sort of utopia arrangement where companies decide to do the right thing of their own free will. Until a few weeks ago, this was the least likely of the three. But since the Stagefright mess, several companies have pledged to do more or less exactly this. So we'll see where it goes in the coming months and years. | {
"source": [
"https://security.stackexchange.com/questions/101797",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86410/"
]
} |
101,855 | Forensics books often recommend working on an image of the hard drive instead of the original drive. Should I take this precaution even if I use a write blocker?
If so, why? | Because a normal read operation on a disk presenting errors (physical or logical) may cause data corruption or destruction, and the drive may even write to the disk in order to recover bad blocks. You have to keep in mind that even a read operation may lead to physical damage or data modification. | {
"source": [
"https://security.stackexchange.com/questions/101855",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70136/"
]
} |
101,965 | I've come across several hosts that throw SSL3 handshake errors even though I explicitly request TLS 1.2. Why is this? Am I using the openssl client wrong? $ openssl s_client -tls1_2 -connect i-d-images.vice.com:443
CONNECTED(00000003)
140735150146384:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:s3_pkt.c:1472:SSL alert number 40
140735150146384:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:656:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1444078671
Timeout : 7200 (sec)
Verify return code: 0 (ok)
--- | In SSL/TLS, the client does not request a specific protocol version; the client announces the maximum protocol version that it supports, and then the server chooses the protocol version that will be used. Your client does not say "let's use TLS 1.2"; it says "I know up to TLS 1.2". A client may have its own extra requirements, but there is no room to state them in the ClientHello message. If the client wants to do TLS 1.2 only, then it must announce "up to TLS 1.2" in its ClientHello, and also close the connection if the server responds with a message that says anything other than "let's do TLS 1.2". In your case, things did not even reach that point: the server responded with a fatal alert 40 ("handshake_failure", see the standard). As @dave_thompson_085 points out, this is due to a lack of SNI: this is an extension by which the client documents in its ClientHello message the name of the target server. SNI is needed by some servers because they host several SSL-enabled sites on the same IP address, and need that parameter to know which certificate they should use. The command-line tool openssl s_client can send an SNI with an explicit -servername option. As @Steffen explained, SSL 3.0 and all TLS versions are quite similar and use the same record format (at least in the early stage of the handshake), so OpenSSL tends to reuse the same functions. Note that since the server does not respond with a ServerHello at all, the protocol version is not yet chosen, and SSL 3.0 is still, at least conceptually, a possibility at that early point of the handshake. | {
"source": [
"https://security.stackexchange.com/questions/101965",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9955/"
]
} |
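The -servername fix described in the answer above can also be reproduced outside of openssl: Python's ssl module sends the SNI extension when server_hostname is supplied. A minimal sketch (assumes Python 3.7+ and that the host is still reachable):

    import socket
    import ssl

    host = "i-d-images.vice.com"              # hostname from the question

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((host, 443)) as sock:
        # server_hostname= is what places the SNI extension in the ClientHello
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("negotiated:", tls.version())
            print("server subject:", tls.getpeercert().get("subject"))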
102,133 | Whenever I maximize the Tor browser, it shows a warning: Maximizing Tor Browser can allow websites to determine your monitor size, which can be used to track you... How can screen resolution or monitor size be used to track a person? | All Tor browser users are asked to surf pages using the default window size. So if you follow this practice, you are just like other users, and the screen resolution won't be used as a factor to identify you. From here, you can read an interesting comment that fits your question: Using an unusual screen resolution was sufficient to identify me
uniquely to panopticlick. With my portrait mode screen resolution of
1200 wide by 1920 high, the default window size of 1000x1765 was
unique, no resizing or maximizing needed. Visit the BrowserSpy webpage, which implements a demo where you can find out information about your screen, including width, height, DPI, color depth and font smoothing. Conclusion: do not distinguish yourself from others. Act like everybody else. | {
"source": [
"https://security.stackexchange.com/questions/102133",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/54717/"
]
} |
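A rough way to quantify why an unusual window size is identifying, in the spirit of the Panopticlick comment quoted above. The visit counts below are made up purely for illustration; the point is that a size reported by one visitor in a million carries about 20 bits of identifying information, while a common default-like size carries only one or two:

    import math

    TOTAL_VISITS = 1_000_000
    # Hypothetical counts of reported browser window sizes (other sizes omitted)
    window_sizes = {
        "1000x900": 400_000,     # a common default-like size
        "1366x641": 250_000,
        "1920x955": 120_000,
        "1000x1765": 1,          # the unusual portrait-mode size from the quote
    }

    for size, count in window_sizes.items():
        p = count / TOTAL_VISITS
        bits = -math.log2(p)
        print(f"{size:>10}: {p:.4%} of visits -> {bits:.1f} bits of identifying information")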
102,137 | Bulletproof Hosting Providers (BPHS) allow servers containing illegal porn, malware, organized (cyber)crime and major spam, and all of this on the WWW? If Spamhaus can block the entire subnet of such a BPHS, why can't the government? I.e., how can a BPHS combat blocking and tracking? EDIT:
In light of Trend Micro's paper I found 3 comprehensible methods there: using a multilayered VPS setup with an Nginx reverse proxy is, I must say, feasible. But I didn't understand these two: 1. How can Cloudflare function as a whitelisted proxy for a website? 2. What kind of BPHS can afford to move its operations physically every time they are blacklisted? | All Tor browser users are asked to surf pages using the default window size. So if you follow this practice, you are just like other users, and the screen resolution won't be used as a factor to identify you. From here, you can read an interesting comment that fits your question: Using an unusual screen resolution was sufficient to identify me
uniquely to panopticlick. With my portrait mode screen resolution of
1200 wide by 1920 high, the default window size of 1000x1765 was
unique, no resizing or maximizing needed. Visit the BrowserSpy webpage, which implements a demo where you can find out information about your screen, including width, height, DPI, color depth and font smoothing. Conclusion: do not distinguish yourself from others. Act like everybody else. | {
"source": [
"https://security.stackexchange.com/questions/102137",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76663/"
]
} |
102,283 | I work at an office typing numbers into a computer. It gets quite boring sometimes, so I go on Netflix while using their WiFi on my phone. Can they tell what apps I'm using, based on network traffic? | The short answer is yes. If there is any logging on their WiFi router, they might not be able to see exact apps, but they'll be able to see the server domains/hostnames that you're connecting to. You can also look at these questions and answers: Is there a way for my ISP or LAN admin to learn my Gmail address? Can an employer see cellular network traffic routed through company-owned device? Long answer: Secure WiFi only encrypts traffic up to the Access Point. It's decrypted there, and the traffic can be monitored. If it's an enterprise router, it's more than capable of logging specific types of traffic. If your work has a firewall in its network, then it's even more likely that your employer has the capability to monitor traffic. These do have to be configured, and your employer has to care, but it's very possible. Domain Name System (DNS) requests contain the hostname that you're trying to reach, and these requests are sent before an SSL/TLS channel can be established and secured (source). If you're using SSL/TLS, it's possible to see the hostname of the server that you're connecting to in two ways. The first is the server's certificate: the common name generally needs to match the domain name that you're browsing to. The second is the Server Name Indication (SNI) extension, which carries the target hostname in plaintext in the TLS ClientHello. Can they link this back to you? This really depends on a lot of factors. If they're really determined, they'll be able to track down the MAC address of your device, the IMEI, and other mobile phone identifiers. This is a lot of work, and in the end still might not link you to the traffic. In all honesty, if they care, it's just easier to block the traffic than it is to try and track you down. | {
"source": [
"https://security.stackexchange.com/questions/102283",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/88793/"
]
} |
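To make the DNS point in the answer above concrete: hostnames in DNS queries cross the local network in plaintext before any TLS is set up, so whoever operates the network can simply read them. A minimal sniffing sketch (assumes the third-party scapy package, administrator/root privileges, and a network you are authorized to monitor):

    from scapy.all import DNS, DNSQR, IP, sniff

    def show_query(pkt):
        # qr == 0 marks a DNS query (as opposed to a response)
        if pkt.haslayer(IP) and pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
            print(pkt[IP].src, "asked for", pkt[DNSQR].qname.decode())

    sniff(filter="udp port 53", prn=show_query, store=False)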
102,434 | Seems like a relatively obvious way to prevent (software) keylogging would be to force only the current (in-focus) app to be able to receive keystrokes. There could be a way to make explicit exceptions for macro apps etc. Querying the exception list would make finding a keylogger trivial. Is there any reason operating systems don't enforce this policy by default? | Because it wouldn't help. Most keyloggers are installed at the operating-system level, and the operating system needs to have access to the keystrokes. Alt-Tab program switching, using Ctrl-Alt-Del to terminate malfunctioning programs, and detecting keyboard activity to keep your screensaver from activating all require the OS to see keystrokes. There's also the minor matter that if you eliminated OS access to the keyboard, every application would need to have a complete set of keyboard drivers built into it. | {
"source": [
"https://security.stackexchange.com/questions/102434",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/88948/"
]
} |
102,491 | For a number of reasons, including remote accessibility, my company would like to move all of our accounting records, account applications, and marketing materials to Google's Apps for Work Drive. At first glance, this sounds like a pretty bad idea, but I don't have the firsthand knowledge or reasons to argue a case against it. The information stored includes banking information, names, and addresses, as well as SSNs, mostly in the form of PDF documents. Can we legally host our files with Google Drive, who would be responsible for the costs associated with a breach in access, and is the amount of risk worth the increased accessibility? Documents provided by Google support: Security and Compliance Whitepaper Certifications Summary Audit & Certification Summary Certificate HIPAA Business Associate Amendment | Google Drive is no more or less safe than any other web-based service with a single logon. Your company must decide for itself whether it is willing to put the data online (albeit behind Google's authentication). At the very least, I'd recommend that 2-factor authentication is used and that any data travelling outside the organisation is encrypted. Google Drive is presumably fairly secure, but as we saw with iCloud, people can (and do) sometimes get access to systems they shouldn't be able to access. One piece of advice a tutor at university gave me was: Treat anything that isn't behind your firewall as though it was on a USB pen you'd just left on a train. Meaning: assume that it may fall into the wrong hands, and ensure that you've taken sufficient precautions to make it useless to them. (In fact, I'm a fan of treating things that are behind my firewall the same...) Edit: To add to the "liability" question: This is mostly a matter of what is stated in the contracts and agreements (EULA, ToS, etc.) between you and Google, and potentially between you and any third parties the data belongs to. Note that this doesn't just include clients/customers; it also includes your staff, if you are storing their personally identifiable information in the cloud - so this could not only cause financial issues, but also destroy employee trust if there is a breach! Your bank may also refuse to reimburse any money lost if bank details are stored in the cloud, as this could be considered negligence. In general, though: unless specifically stated, at least some liability will always remain with you. Some liability may or may not fall upon Google for data breaches etc., but this will depend on the agreement. You will still be liable for any third-party data, however, inasmuch as you have chosen to entrust it to a third party. If there were a data breach, it would then become a (likely very long and drawn out!) legal question, and would revolve around whether either Google or yourself were negligent, the nature of your agreement, and whether you both took any and all reasonable steps to protect that data. I think the crux of your question is "Will Google take responsibility for the security of data on Google Drive", to which the answer is "No, probably not". But I am not a lawyer, nor do I play one on TV. | {
"source": [
"https://security.stackexchange.com/questions/102491",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/89014/"
]
} |
102,536 | I just got a new job at a medium-sized (~100 employees) company and one of the first things I was told is that I cannot use my own computer, because I need to be able to connect to their network, access files, etc. I didn't think that made much sense because, to my knowledge, as long as I'm on their network, I should be able to access anything I need to. So I asked my friend this question, who told me it might be a security thing. Could there be a security-related reason as to why I'm required to use my employer's machine? | So this is an interesting question, with a few points on why you not only should WANT to do this, but should do this for your own safety and security. It helps first if you understand the company's point of view before we talk about how it can benefit you. Why would a company want to do this? Many reasons. It ensures that your computer can access the network, do what it needs to do, and function how they need it to at a baseline. This way the IT department can maintain it easily, quickly, and up to standard. Can they make me do this? YES THEY CAN! They are having you work on their property, with their property, to make sure it works properly. This way you can actually do your job. Should I do this? Oh god yes. This lets you pass the buck if needed. Now, if something that is supposed to work doesn't work, it isn't your fault. Maintenance becomes a breeze, because if your files are backed up to a safe place (any installers you used as well), then if something really bad happens they can restore it from a disk image and have your computer back to you in a matter of a few hours instead of days. If you leave the company for any reason, you don't have to relinquish your personal computer to them for drive scrubbing or making sure you don't take any company software or intellectual property with you. IT has no claim to touch your personal computer for any reason. For security reasons, you can make sure that your work computer is up to their standards, and any potential breach won't be your fault, but their bad policies' fault. And here's the whammy: It keeps you safe from the company! By using the company computer, your own personal information won't be on the company network, and you can keep your private life away from your work life. This is a big advantage because you can make sure that your own data is your own data. | {
"source": [
"https://security.stackexchange.com/questions/102536",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70687/"
]
} |
102,873 | We all know the story of the USB drive left outside a power plant, which was found by a worker and inserted into a computer to see the contents, which then allowed a hack to ensue. Here is my question: how? I get that code is executed, but how? I would really like to be able to do this (for my own curiosity, of course). I have always had a good grasp on security and how to make things secure, etc., but things like viruses, trojans, USB drivers... how are they activated with little human interaction? I would really like to learn about these things. I am a programmer/sysadmin, so I would like to knock up a script, but having never been taught this or done it before, I don't know how or where to begin. I would really like a big discussion on this with as much information as possible. | Take a look at this USB keyboard: "But that's not a keyboard! That's a USB drive, silly!" Actually, no. It looks like a USB drive to you, but when it gets connected to a computer, it will report that it is a USB keyboard. And the moment it is installed, it will start typing key sequences you programmed on it beforehand. Any operating system I know automatically trusts USB keyboards and installs them as trusted input devices without requiring any user interaction the moment they are connected. There are various payloads available for it. For example, there is one which types the keyboard input to open a shell, launches WGET to download a binary from the Internet, and runs it. | {
"source": [
"https://security.stackexchange.com/questions/102873",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69594/"
]
} |