source_id | question | response | metadata |
---|---|---|---|
74,067 | I have a recurring DNS problem that has been plaguing our users, occasionally causing their laptops to append our company's domain to the end of all DNS queries. The problem only occurs when users are offsite and it appears to be fairly random. It will work one day and then, out of the blue, it will show the invalid entry. This affects mostly Windows XP users but has recently been seen on Vista as well. Here is an example using nslookup. C:\Users\Username>nslookup www.yahoo.com
Server: Linksys
Address: 192.168.0.1
Non-authoritative answer:
Name: www.yahoo.com.EXAMPLE.COM
Address: 192.0.2.99 I have replaced the IP address that is reported with a placeholder but I can tell you that what it returns is the default *. entry on our Network Solutions configuration. Since obviously www.yahoo.com.EXAMPLE.COM doesn't exist this makes sense. I believe the user's internal equipment is functioning properly. Internally we run a Windows 2k3 Active Directory w/ Windows based DHCP and DNS servers. Eventually the problem resolves itself usually over a couple of hours or a number of reboots. Has anyone seen this behavior before? | If you launch nslookup and turn on debugging you'll see that Windows always tries to append its suffix first. C:\>nslookup
Default Server: itads.example.com
Address: 0.0.0.0
> set debug=true
> www.yahoo.com
Server: itads.example.com
Address: 0.0.0.0
------------
Got answer:
HEADER:
opcode = QUERY, id = 2, rcode = NXDOMAIN
header flags: response, auth. answer, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 1, additional = 0
QUESTIONS:
www.yahoo.com.example.com, type = A, class = IN
AUTHORITY RECORDS:
-> example.com
ttl = 3600 (1 hour)
primary name server = itads.example.com
responsible mail addr = itads.example.com
serial = 12532170
refresh = 1200 (20 mins)
retry = 600 (10 mins)
expire = 1209600 (14 days)
default TTL = 3600 (1 hour)
------------
------------
Got answer:
HEADER:
opcode = QUERY, id = 3, rcode = NOERROR
header flags: response, want recursion, recursion avail.
questions = 1, answers = 4, authority records = 0, additional = 0
QUESTIONS:
www.yahoo.com, type = A, class = IN
ANSWERS:
-> www.yahoo.com
canonical name = www.wa1.b.yahoo.com
ttl = 241 (4 mins 1 sec)
-> www.wa1.b.yahoo.com
canonical name = www-real.wa1.b.yahoo.com
ttl = 30 (30 secs)
-> www-real.wa1.b.yahoo.com
internet address = 209.131.36.158
ttl = 30 (30 secs)
-> www-real.wa1.b.yahoo.com
internet address = 209.191.93.52
ttl = 30 (30 secs)
------------
Non-authoritative answer:
Name: www-real.wa1.b.yahoo.com
Addresses: 209.131.36.158, 209.191.93.52
Aliases: www.yahoo.com, www.wa1.b.yahoo.com As you can see above my machine tried to look for www.yahoo.com.example.com first, and the DNS server responded NXDOMAIN (entry not found). You can confirm this by running nslookup www.yahoo.com. (note the dot at the end of .com!) and you'll see that it is resolved normally. What's happening is that your external DNS server is responding that they have an entry for "www.yahoo.com.example.com" and is returning your IP address for the root of your site. I'm not sure what service you use but I'm guessing that you have a wildcard mapping that tells your server to respond to any unknown query with a valid response, rather than returning NXDOMAIN . You'll need to double check your settings for the server and confirm that it is only set to respond to queries for entries it actually has ( example.com , www.example.com , mail.example.com , etc.). Remember that DNS works by checking the configured server and working its way up from there. The DNS query can take a path like the following pattern (of course this is just a example, it is probably wrong): Machine -> Local Router DNS (linksys) -> ISP DNS -> (2nd ISP DNS?) -> Root Server DNS -> TLD DNS -> Your External DNS server. Someone along that path is saying that www.yahoo.com.example.com exists. Chances are it's your external DNS server. EDIT I figured I'd include one more tidbit about the randomness you mention. If this is really happening sporadically you may have a misconfigured external DNS server or their ISP could be providing a DNS hijacking service. Unfortunately I've seen more and more residential ISPs provide a "search service" for invalid domain names. Since almost all end users use their ISP DNS servers, the ISPs are now starting to redirect invalid domain entries to a search page - one usually laden with ads, irrelevant links and a small "Did you mean www.example.com?" with some results that may or may not be related to the domain name. I know that Verizon and Comcast are starting to do this, I believe Quest is starting to as well. Another possibility is OpenDNS, since they provide the same "search for a related domain" if it doesn't exist (it's their revenue after all). My problem with suggesting that as the problem, though, is the fact that you say it's returning the address of your root record, which none of these would do if they were trying to search for it, they'd give you an IP of one of their web servers to handle the search. | {
"source": [
"https://serverfault.com/questions/74067",
"https://serverfault.com",
"https://serverfault.com/users/10481/"
]
} |
74,158 | Is there an option to put the password on the line as well with sftp? linux~ $ sftp [email protected]:/DIRECTORY_TO_GO_TO/ Like this linux~ $ sftp [email protected]:/DIRECTORY_TO_GO_TO/ -p PASSWORD? | As others have mentioned, a command-line password should be the last resort. However, if nothing else is possible, one can use sshpass: sshpass -p <password> sftp user@host (a slightly safer variant is sketched after this entry). | {
"source": [
"https://serverfault.com/questions/74158",
"https://serverfault.com",
"https://serverfault.com/users/4629/"
]
} |
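A follow-up to the sshpass suggestion in 74,158 above: passing the password with -p makes it visible to anyone who can read the process list. If sshpass must be used at all, a slightly safer sketch (assuming sshpass is installed; host and path are the placeholders from the question):

```sh
# Store the password in a file only you can read, then let sshpass read it from there
chmod 600 ~/.sftp_pass
sshpass -f ~/.sftp_pass sftp [email protected]:/DIRECTORY_TO_GO_TO/

# Or pass it via the SSHPASS environment variable instead of the command line
export SSHPASS='PASSWORD'
sshpass -e sftp [email protected]:/DIRECTORY_TO_GO_TO/
```

Key-based authentication remains the better answer whenever the remote side allows it.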
74,176 | Does SFTP use port 21 or port 22? | While TCP port 22 is generally the right answer, this assumes that SSH is configured to use the standard port and not an alternative one. As SFTP runs as a subsystem of SSH, it runs on whatever port the SSH daemon is listening on, and that is administrator-configurable (see the sketch after this entry for how to check). | {
"source": [
"https://serverfault.com/questions/74176",
"https://serverfault.com",
"https://serverfault.com/users/4629/"
]
} |
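To make the answer to 74,176 concrete, here is a hedged sketch of finding the port the SSH daemon (and therefore SFTP) is using and connecting to a non-standard one; port 2222 is just an example:

```sh
# On the server: what is sshd configured/listening on?
grep -i '^Port' /etc/ssh/sshd_config   # no Port line means the default, 22
ss -tlnp | grep sshd                   # or: netstat -tlnp on older systems

# On the client: connect to SFTP on a non-standard port
sftp -P 2222 user@host                 # newer OpenSSH clients
sftp -oPort=2222 user@host             # works on older clients too
```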
74,672 | What are the advantages of checking the "Enable IO APIC" option in VirtualBox? While I can't find any information on advantages when I google it, two disadvantages are clear. First, it can break older Windows VMs if it is disabled after installation. Second, it reduces VM performance. Yet, I noticed that it is enabled by default when installing Ubuntu 64bit. | Here is the quote from VirtualBox documentation : Enable I/O APIC Advanced Programmable Interrupt Controllers (APICs) are a newer x86 hardware feature that have replaced old-style Programmable Interrupt Controllers (PICs) in recent years. With an I/O APIC, operating systems can use more than 16 interrupt requests (IRQs) and therefore avoid IRQ sharing for improved reliability. Note : Enabling the I/O APIC is required for 64-bit guest operating systems, especially Windows Vista; it is also required if you want to use more than one virtual CPU in a virtual machine. However, software support for I/O APICs has been unreliable with some operating systems other than Windows. Also, the use of an I/O APIC slightly increases the overhead of virtualization and therefore slows down the guest OS a little. Warning : All Windows operating systems starting with Windows 2000 install different kernels depending on whether an I/O APIC is available. As with ACPI, the I/O APIC therefore must not be turned off after installation of a Windows guest OS. Turning it on after installation will have no effect however. In addition, you can turn off the Advanced Configuration and Power Interface (ACPI) which VirtualBox presents to the guest operating system by default. ACPI is the current industry standard to allow operating systems to recognize hardware, configure motherboards and other devices and manage power. As all modern PCs contain this feature and Windows and Linux have been supporting it for years, it is also enabled by default in VirtualBox. It can be turned off on the command line; e see the section called “VBoxManage modifyvm”. | {
"source": [
"https://serverfault.com/questions/74672",
"https://serverfault.com",
"https://serverfault.com/users/22895/"
]
} |
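For 74,672 above, the quoted documentation mentions VBoxManage modifyvm; a short sketch of toggling the setting from the command line ("My VM" is a placeholder name, and the VM must be powered off first):

```sh
VBoxManage showvminfo "My VM" | grep -iE 'ioapic|acpi'   # check the current values
VBoxManage modifyvm "My VM" --ioapic on                  # needed for 64-bit guests and >1 vCPU
VBoxManage modifyvm "My VM" --acpi on                    # keep ACPI on for Windows guests
```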
74,696 | I'm backing up a Linux server and storing it on another server. I began with a simple rsync -aPh --del server.example.com:/ /mnt/backup Then someone pointed out that I shouldn't back up /proc, since you don't want to restore the /proc of one server on another. Is there anything else that I should/shouldn't include? For instance, what about /sys? | This really depends on how you are going to restore your system. If you will rebuild, then you only need the configuration/data files for your services (e.g. /etc, /opt, /var, /home). If you are after a full system restore, then you could omit /proc, /boot and /dev. Then you can install the minimum OS from your boot media and restore your system via your backup. Of course, the best backup is one that has been tested and verified. So omit what you don't think you need, try to restore in a VM, and verify you can get back your system using this data (an example exclude list is sketched after this entry). | {
"source": [
"https://serverfault.com/questions/74696",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
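Building on the answer to 74,696, one possible exclude list for a pull-style full-system rsync. This is only a sketch: the pseudo-filesystems are excluded by content so the empty mount points are still recreated, and the list should be adapted to the actual system:

```sh
rsync -aPh --del \
  --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
  --exclude='/tmp/*'  --exclude='/run/*' --exclude='/mnt/*' --exclude='/media/*' \
  server.example.com:/ /mnt/backup
```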
74,716 | Suppose I have a hard disk with data I don't want to expose to a third party. The warranty period for that disk still lasts. Now the disk starts malfunctioning. I can't use a disk wiping program on a malfunctioning disk - it just wouldn't work. If I run any destructive action on the disk - burn it, open and scratch it, smash it, whatever similar - the retailer will refuse to exchange the disk saying that the destructive actions void warranty. How can I destroy sensitive data on a failed disk without voiding warranty? | I would ask, what is the value of the data on the disk? If it's more than the cost of a new disk, then my preference would be to destroy the faulty disk and buy a new one. You could spend a lot of time trying to get the disk working long enough that you could do a proper erase, but is it worth it? And do you know that it's definitely worked? What if there were some bad sectors that you weren't able to erase properly and still contain some data, even if damaged. With the cost of hard drives today, if your data's valuable then buying a new disk is not a big expense. | {
"source": [
"https://serverfault.com/questions/74716",
"https://serverfault.com",
"https://serverfault.com/users/101/"
]
} |
74,822 | I'm using Process Explorer to monitor my windows server while it reconstructs some data. It's primarily a CPU intensive process, but I want to make sure it's not swapping. How can I tell if it is using Process Explorer? My initial guess is in the System Information window, it's Paging File Write Delta. Yes? No? I'm an idiot? *Screenshot is not of the server... just an example. alt text http://www.malwareinfo.org/bootcamp/img/ProcessExplorer2.jpg | "Pages Input / sec is the counter to watch, but you shouldn't worry about it "swapping" as windows does not use the page file like *nixes do. First you need to understand that windows pages in not out. I'm going to quote the relevent portion of Eric Lipperts blog post (lightly edited) since I can't say it any better myself: "RAM can be seen as merely a performance optimization. Accessing data in RAM, where the information is stored in electric fields that propagate at close to the speed of light is much faster than accessing data on disk, where information is stored in enormous, heavy ferrous metal molecules The operating system keeps track of what pages of storage from which processes are being accessed most frequently, and makes a copy of them in RAM, to get the speed increase. When a process accesses a pointer corresponding to a page that is not currently cached in RAM, the operating system does a “page fault”, goes out to the disk, and makes a copy of the page from disk to RAM, making the reasonable assumption that it’s about to be accessed again some time soon. The operating system is also very smart about sharing read-only resources. If two processes both load the same page of code from the same DLL, then the operating system can share the RAM cache between the two processes. Since the code is presumably not going to be changed by either process, it's perfectly sensible to save the duplicate page of RAM by sharing it. But even with clever sharing, eventually this caching system is going to run out of RAM. When that happens, the operating system makes a guess about which pages are least likely to be accessed again soon, writes them out to disk if they’ve changed, and frees up that RAM to read in something that is more likely to be accessed again soon. When the operating system guesses incorrectly, or, more likely, when there simply is not enough RAM to store all the frequently-accessed pages in all the running processes, then the machine starts “thrashing”. The operating system spends all of its time writing and reading the expensive disk storage, the disk runs constantly, and you don’t get any work done. This also means that "running out of RAM" seldom results in an “out of memory” error. Instead of an error, it results in bad performance because the full cost of the fact that storage is actually on disk suddenly becomes relevant. Another way of looking at this is that the total amount of virtual memory your program consumes is really not hugely relevant to its performance. What is relevant is not the total amount of virtual memory consumed, but rather, (1) how much of that memory is not shared with other processes, (2) how big the "working set" of commonly-used pages is, and (3) whether the working sets of all active processes are larger than available RAM. By now it should be clear why “out of memory” errors usually have nothing to do with how much physical memory you have, or how even how much storage is available. It’s almost always about the address space, which on 32 bit Windows is relatively small and easily fragmented. 
" A few additional points: dlls and program files are always only paged in, never out as they are already on disk ( and usually the first pages freed when physical ram gets low) you are far more likley to run out of free page table entries or have heavily fragemented memory than any other memory issues (other than overall poor performance as already mentioned even if you run with no page file you can still get page faults generally speaking looking at commited memory is more telling of how a process uses memory for a complete picture of how memory mannagement work in windows see The Virtual-Memory Manager in Windows NT if you think you have a memory issue I'd first suggest watching this presentation on troubleshooting windows memory Here's a great explaination of why sometimes you get "out of memory" when you are not thanks to memory fragmentation: See also Pushing the Limits of Windows: Physical Memory More on Virtual Memory, Memory Fragmentation and Leaks, and WOW64 RAM, Virtual Memory, Pagefile and all that stuff (microsoft support) Update: Windows 10 does something a bit different with memory and, over time you will see a process called "System and compressed memory" Windows 10 adds a "compression store" to the paging out list. This ram is USER memory that is owned by system (typically system only had kernel memory)
This memory is compressed in place for an average reduction to about 30%. This allows more pages to be stored in memory (for those of you doing the math that's 70% more space) Note that if the memory still has pressure then pages from the compression store (user mode System process space) can be placed on the modified list (compressed) which can then be written to the physical pagefile. The system will see they are from the system user mode space and compressed and won't try to put them back in the store. So on windows 10 systems it may look like system is inhaling ram but in fact it's just trying to be more efficient at using ram. Mac users have been using a similar feature since 2013, and newer versions of the Linux kernel employ a version of memory compression. This method of conserving memory is not only better, but already common among other operating systems. | {
"source": [
"https://serverfault.com/questions/74822",
"https://serverfault.com",
"https://serverfault.com/users/7033/"
]
} |
75,362 | Debugging a Nagios warning on ssh, I've discovered that gssapi-with-mic is causing long lags in authentication. I've turned it off, but what exactly am I missing? I gather that GSSAPI is a tool for authentication, but what about the -with-mic part? | Message Integrity Code . This is also called a Message Authentication Code, but that acronym gets used for other things, so MIC is less ambiguous. From that Wikipedia page: The term message integrity code (MIC) is frequently substituted for the term MAC, especially in communications, where the acronym MAC traditionally stands for Media Access Control. | {
"source": [
"https://serverfault.com/questions/75362",
"https://serverfault.com",
"https://serverfault.com/users/919/"
]
} |
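For 75,362, the usual way to avoid the GSSAPI-induced login delay is on the client side. A sketch of the relevant OpenSSH client options (verify against your version's ssh_config man page):

```sh
# One-off test from the command line
ssh -o GSSAPIAuthentication=no user@host

# Or permanently, in ~/.ssh/config (or /etc/ssh/ssh_config):
#   Host *
#       GSSAPIAuthentication no
```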
76,013 | For any URL with a plus sign (+) in the base URL (not the querystring), IIS7 and IIS7.5 (Windows Server 2008 and 2008 R2) do not appear to forward the URL to the default handler on an ASP.NET application. I started noticing the issue with a custom HTTP handler on *.html but I have the same issue with *.aspx . IIS6 (Server 2003) has no problem with these same URLs. To replicate the issue, in an ASP.NET site, I created a set of ASPX files that did a simple Response.Write with various names: test_something.aspx test_some+thing.aspx test_some thing.aspx The third file was a test to see if IIS7[.5] was treating plus symbols as spaces (as it would in the querystring); this does not appear to be the case. With all of these files in place, hitting http://somehost/test_some+thing.aspx or http://somehost/test_some%2bthing.aspx will work fine in IIS6 but 404 in IIS7/IIS7.5 before getting to any ASP.NET handler. Is there some configuration in IIS7/7.5 that I am missing to get it to "see" a plus sign in the URL without missing the final extension used to determine an HTTP handler? | After searching for more combinations of IIS and plus, it appears that IIS7[.5] is set up to reject URLs with a plus sign by default out of some fear of the use of that character; that symbol is still allowed in the querystring, though. The solution is to alter the requestFiltering attribute default on <system><webServer><security><requestFiltering> to allow doubly-encoded characters with a command line call (ultimately modifying your ASP.NET web.config): %windir%\system32\inetsrv\appcmd set config "Default Web Site" -section:system.webServer/security/requestFiltering -allowDoubleEscaping:true This may be a bit more dangerous than one prefers to be with their web site, but there didn't appear to be a way to be more specific than a blanket allow. The warnings were regarding the mismatching that could occur between using a plus in a URL and its typical translation as a space. It looks like the only other alternative is to stop using plus characters in your URLs at all. | {
"source": [
"https://serverfault.com/questions/76013",
"https://serverfault.com",
"https://serverfault.com/users/751/"
]
} |
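Two follow-ups to 76,013: the same appcmd tool can verify the change, and if a site-wide change feels too broad, the setting can be scoped to a single application path. A sketch, with "Default Web Site/myapp" as a hypothetical application path:

```cmd
REM Verify the current requestFiltering settings
%windir%\system32\inetsrv\appcmd list config "Default Web Site" -section:system.webServer/security/requestFiltering

REM Apply allowDoubleEscaping to one application only instead of the whole site
%windir%\system32\inetsrv\appcmd set config "Default Web Site/myapp" -section:system.webServer/security/requestFiltering -allowDoubleEscaping:true
```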
76,042 | Say that I setup a symbolic link: ln -s /root/Public/mytextfile.txt /root/Public/myothertextfile.txt is there a way to see what the target of myothertextfile.txt is using the command line? | Use the -f flag to print the canonicalized version. For example: readlink -f /root/Public/myothertextfile.txt From man readlink : -f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist | {
"source": [
"https://serverfault.com/questions/76042",
"https://serverfault.com",
"https://serverfault.com/users/1098/"
]
} |
76,715 | This is a Canonical Question about Active Directory domain naming. After experimenting with Windows domains and domain controllers in a virtual environment, I've realized that having an active directory domain named identically to a DNS domain is bad idea (Meaning that having example.com as an Active Directory name is no good when we have the example.com domain name registered for use as our website). This related question seems to support that conclusion , but I'm still not sure about what other rules there are around naming Active Directory domains. Are there any best practices on what an Active Directory name should or shouldn't be? | This has been a fun topic of discussion on Server Fault. There appear to be varying "religious views" on the topic. I agree with Microsoft's recommendation : Use a sub-domain of the company's already-registered Internet domain name. So, if you own foo.com , use ad.foo.com or some such. The most vile thing, as I see it, is using the registered Internet domain name, verbatim, for the Active Directory domain name. This causes you to be forced to manually copy records from the Internet DNS (like www ) into the Active Directory DNS zone to allow "external" names to resolve. I've seen utterly silly things like IIS installed on every DC in an organization running a web site that does a redirect such that someone entering foo.com into their browser would be redirected to www.foo.com by these IIS installations. Utter silliness! Using the Internet domain name gains you no advantages, but creates "make work" every time you change the IP addresses that external host names refer to. (Try using geographically load-balanced DNS for the external hosts and integrating that with such a "split DNS" situation, too! Gee-- that would be fun...) Using such a subdomain has no effect on things like Exchange email delivery or User Principal Name (UPN) suffixes, BTW. (I often see those both cited as excuses for using the Internet domain name as the AD domain name.) I also see the excuse "lots of big companies do it". Large companies can make boneheaded decisions as easily (if not moreso) than small companies. I don't buy that just because a large company makes a bad decision that somehow causes it to be a good decision. | {
"source": [
"https://serverfault.com/questions/76715",
"https://serverfault.com",
"https://serverfault.com/users/3295/"
]
} |
76,766 | Here is the error I get when booting up Apache2: * Starting web server apache2
apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
[Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
[Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
[Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
[Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
[Wed Oct 21 16:37:26 2009] [warn] NameVirtualHost *:80 has no VirtualHosts I first followed this guide on setting up Apache to host multiple sites: http://www.debian-administration.org/articles/412 I then found a similar question on ServerFault and tried applying the solution, but it didn't help. Here is an example of my final VirtualHost config: <VirtualHost *:80>
ServerAdmin [email protected]
ServerName www.xxx.com
ServerAlias xxx.com
# Indexes + Directory Root.
DirectoryIndex index.html
DocumentRoot /var/www/www.xxx.com
# Logfiles
ErrorLog /var/www/www.xxx.com/logs/error.log
CustomLog /var/www/www.xxx.com/logs/access.log combined
</VirtualHost> with the domain X'd out to protect the innocent :-) Also, I have the conf.d/virtual.conf file mentioned in the guide looking like this: NameVirtualHost * The odd thing is that everything appears to work fine for two of the three sites. | The IP addresses named with NameVirtualHost have to match the IP address in each VirtualHost element. Example: NameVirtualHost *:80
NameVirtualHost *:81
<VirtualHost *:80>
# ...
</VirtualHost>
<VirtualHost *:81>
# ...
</VirtualHost>
# This will not work!
<VirtualHost *>
# ...
</VirtualHost> Read the Apache Virtual Host documentation for details. | {
"source": [
"https://serverfault.com/questions/76766",
"https://serverfault.com",
"https://serverfault.com/users/2662/"
]
} |
76,875 | I need to run a script that takes a long time to execute, or I just want it to run forever. I can't just SSH to my machine, because when I disconnect it stops running. Is there any way to run a script that isn't dependent on the shell that started it? I'm using Ubuntu 9.04. | You can run the command with the nohup command before it. You can also run it in 'screen', which will allow you to reattach the terminal. For example: ssh myserver 'nohup bash myscript.sh' Or just ssh in and run the nohup command. It should keep running even when you disconnect. This is because nohup will intercept the SIGHUP signal (hangup). Screen is a bit more involved, but for the 20 minutes it might take you to learn the basics, it is one of the most useful tools out there. Here is a tutorial (basic screen usage is sketched after this entry). | {
"source": [
"https://serverfault.com/questions/76875",
"https://serverfault.com",
"https://serverfault.com/users/10157/"
]
} |
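Since the answer to 76,875 recommends screen without showing it, here is a minimal usage sketch ("longjob" is just an example session name):

```sh
screen -S longjob        # start a named screen session
bash myscript.sh         # run the long task inside it
# detach with Ctrl-A then D; the script keeps running after you log out
screen -ls               # list sessions
screen -r longjob        # reattach later, even from a new SSH login
```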
76,884 | On our 2008 R2 domain, I have a strange entry in the list of folder replica. \\?\C:\Windows\SYSVOL\domain dc1 Enabled SYSVOL Share
C:\Windows\SYSVOL\domain dc2 Enabled SYSVOL Share
C:\Windows\SYSVOL\domain dc3 Enabled SYSVOL Share
C:\Windows\SYSVOL\domain dc4 Enabled SYSVOL Share While everything seems to work fine, I'm trying to decide if the first entry is something I should worry about. Cheers, Stephen. | You can run the command with the nohup command before it. You can also run it in 'screen', which will allow you to reattach the terminal. For example: ssh myserver 'nohup bash myscript.sh' Or just ssh in and run the nohup command. It should keep running even when you disconnect. This is because nohup will intercept the SIGHUP signal (hangup). Screen is a bit more involved, but for the 20 minutes it might take you to learn the basics, it is one of the most useful tools out there. Here is a tutorial. | {
"source": [
"https://serverfault.com/questions/76884",
"https://serverfault.com",
"https://serverfault.com/users/16651/"
]
} |
77,162 | Is there any way to get pgrep to give me all the info about each process that ps does? I know I can pipe ps through grep but that's a lot of typing and it also gives me the grep process itself which I don't want. | pgrep 's output options are pretty limited. You will almost certainly need to send it back through ps to get the important information out. You could automate this by using a bash function in your ~/.bashrc . function ppgrep() { pgrep "$@" | xargs --no-run-if-empty ps fp; } Then call the command with. ppgrep <pattern> | {
"source": [
"https://serverfault.com/questions/77162",
"https://serverfault.com",
"https://serverfault.com/users/8625/"
]
} |
77,710 | A friend is talking with me about the problem of bit rot - bits on drives randomly flipping, corrupting data. Incredibly rare, but with enough time it could be a problem, and it's impossible to detect. The drive wouldn't consider it to be a bad sector, and backups would just think the file has changed. There's no checksum involved to validate integrity. Even in a RAID setup, the difference would be detected but there would be no way to know which mirror copy is correct. Is this a real problem? And if so, what can be done about it? My friend is recommending zfs as a solution, but I can't imagine flattening our file servers at work, putting on Solaris and zfs.. | First off: Your file system may not have checksums, but your hard drive itself has them. There's S.M.A.R.T., for example. Once one bit too many got flipped, the error can't be corrected, of course. And if you're really unlucky, bits can change in such a way that the checksum won't become invalid; then the error won't even be detected. So, nasty things can happen; but the claim that a random bit flipping will instantly corrupt you data is bogus. However, yes, when you put trillions of bits on a hard drive, they won't stay like that forever; that's a real problem! ZFS can do integrity checking every time data is read; this is similar to what your hard drive already does itself, but it's another safeguard for which you're sacrificing some space, so you're increasing resilience against data corruption. When your file system is good enough, the probability of an error occurring without being detected becomes so low that you don't have to care about that any longer and you might decide that having checksums built into the data storage format you're using is unnecessary. Either way: no, it's not impossible to detect . But a file system, by itself, can never be a guarantee that every failure can be recovered from; it's not a silver bullet. You still must have backups and a plan/algorithm for what to do when an error has been detected. | {
"source": [
"https://serverfault.com/questions/77710",
"https://serverfault.com",
"https://serverfault.com/users/1920/"
]
} |
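Related to the ZFS point in 77,710: on a system that does run ZFS, the on-demand integrity check is a scrub. A sketch, where "tank" is a hypothetical pool name:

```sh
zpool scrub tank        # read every block in the pool and verify its checksums
zpool status -v tank    # shows scrub progress and any checksum errors found/repaired
```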
78,048 | I know that 127.0.0.1 ~ 127.255.255.254 are the loopback IP addresses for most modern operating systems, and these IP addresses can be used to refer to our own computer. But what's 0.0.0.0? It seems it also refers to the local computer, so what's the difference ? And, could you explain the following IP connections for me: | The only thing is that you're not saying "all addresses should have access" -- that's done in your firewall(s) and/or the server software and/or other security layers like tcpwrappers. 0.0.0.0, in this context, means "all IP addresses on the local machine" (in fact probably, "all IPv4 addresses on the local machine"). So, if your webserver machine has two IP addresses, 192.168.1.1 and 10.1.2.1, and you allow a webserver daemon like apache to listen on 0.0.0.0, it will be reachable at both of those IP addresses. But only to what can contact those IP addresses and the web port(s). Note that, in a different context (routing) 0.0.0.0 usually means the default route (the route to "the rest of" the internet, aside from routes in your local network etc.). | {
"source": [
"https://serverfault.com/questions/78048",
"https://serverfault.com",
"https://serverfault.com/users/60424/"
]
} |
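To make the 0.0.0.0 discussion in 78,048 concrete, a quick way to see both meanings on a Linux box (exact command names vary slightly between distributions):

```sh
# Listening sockets: which address is each daemon bound to?
ss -tlnp                # or: netstat -tlnp on older systems
#   0.0.0.0:80    -> listening on every IPv4 address of the machine
#   127.0.0.1:80  -> reachable only from the machine itself

# Routing-table meaning: 0.0.0.0/0 is the default route
ip route show default   # or: route -n
```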
78,089 | How can I find out the name/IP address of the AD domain controller on my network? | On any computer that has DNS configured to use AD's DNS server, do: Start -> Run -> nslookup set type=all
_ldap._tcp.dc._msdcs.DOMAIN_NAME Replace DOMAIN_NAME with the actual domain name e.g. example.com . Read more here . | {
"source": [
"https://serverfault.com/questions/78089",
"https://serverfault.com",
"https://serverfault.com/users/18682/"
]
} |
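A couple of alternatives to the nslookup method in 78,089, runnable from a domain-joined Windows machine (nltest may require the support tools on older Windows versions; example.com is a placeholder):

```cmd
REM The DC that authenticated the current session:
echo %LOGONSERVER%
REM Locate a DC for the domain, with name and IP address:
nltest /dsgetdc:example.com
REM The same SRV lookup as a one-liner:
nslookup -type=SRV _ldap._tcp.dc._msdcs.example.com
```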
78,343 | When renting a dedicated server, how can one be certain that he/she is not getting a VPS or some other virtual machine variant instead of a true dedicated hardware box? Which checks can be run (assuming it is a Linux box) to detect such a case? | First of all, physical machines tend to have more memory than VPSs, so question anything with 512MB or less. Secondly, you can check several things to find a VPS. You'll commonly find virtual machines have surprisingly basic-looking hardware in them. For example, KVM has a "Cirrus Logic GD 5446" graphics card, and VMware used to have an RTL8129 network card. This is so most OS installation media has drivers for the virtual devices. The facter (part of Puppet) virtual.rb script has several useful techniques for finding out what type of machine you're running. OpenVZ: look for /proc/vz/veinfo. Xen: look for one of /proc/sys/xen, /sys/bus/xen or /proc/xen. vserver: look for s_context or VxID in /proc/self/status. VMware or Parallels: run lspci and look for a VMware VGA adapter, or run dmidecode and look for mention of VMware or Parallels. KVM: run lspci and look for "RAM memory: Qumranet, Inc. Virtio memory balloon" (a few more of these checks are sketched after this entry). | {
"source": [
"https://serverfault.com/questions/78343",
"https://serverfault.com",
"https://serverfault.com/users/14117/"
]
} |
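A few more in-guest checks along the lines of 78,343, runnable from a shell on the machine in question (systemd-detect-virt only exists on newer distributions; dmidecode usually needs root):

```sh
systemd-detect-virt                  # prints "none" on bare metal, otherwise the hypervisor type
dmidecode -s system-product-name     # e.g. "VMware Virtual Platform", "KVM", "VirtualBox"
lspci | grep -iE 'vmware|virtio|virtualbox|xen'
[ -d /proc/xen ] && echo "looks like a Xen guest"
[ -e /proc/vz/veinfo ] && echo "looks like an OpenVZ container"
```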
78,351 | Say I have a server and a client. I need to create a connection from the client to a website through the server, as if it were a proxy. Is it possible to do this using an SSH tunnel, or do I have to install some proxy service on the server? | You can do this using ssh: ssh -L 80:remotehost:80 user@myserver You will then have a tunnel from your local port 80 to port 80 on remotehost, which does not have to be the same host as myserver.
To make that transparent you should add an entry to the hosts file. If you don't do that vhosts will not work.
If you want a SOCKS-proxy connection you could also use ssh -D 5000 user@myserver This will create a SOCKS-proxy on localhost port 5000 which routes all requests through myserver. | {
"source": [
"https://serverfault.com/questions/78351",
"https://serverfault.com",
"https://serverfault.com/users/10157/"
]
} |
78,365 | Greetings, I'm using vpnc for a VPN client. I'm also doing some tricky things with route to make sure I can still access my local network, etc. etc. (the particulars here are not very important). Sometimes I get the routing table so jacked up I get ping: sendto: Network is unreachable for urls that should otherwise resolve. Currently, if I restart Mac OS X then everything is back to normal. What I'd like to do is reset the routing tables to the "default" (e.g. what it is set to at boot) without a whole system reboot. I think that step 1 is route flush (to remove all routes). And step 2 needs to reload all of the default routes. Any thoughts on how to do this? (e.g. what is step 2?) EDIT Also, I'm noticing another symptom is traceroute also fails on the address in question. For instance: traceroute the.good.dns.name traceroute: bind: Can't assign requested address | You need to flush the routes . Use route -n flush several times . Afterwards add your routes with route add. | {
"source": [
"https://serverfault.com/questions/78365",
"https://serverfault.com",
"https://serverfault.com/users/20179/"
]
} |
78,367 | We've got several RHEL5 development servers, one for each developer. Each server is developer's own sandbox, with Subversion checkout available through Samba shares (having RHEL5 clients is out of option, corporate policy requires Windows XP). Now several developers have notebooks as main development boxes and would like to have their code available even when there's no network connectivity, e.g. in the presentation room or at home. R/W preferred, of course, but R/O would do too. I'm thinking about some system with transparent persistent caching, lie a virtual drive which would synchronize with the original share when online and replace the network drive when offline. There are possibly other solutions, what would you recommend? EDIT: Judging from the comments, I notice how difficult it is to explain what we are doing. I'll just try again. There is a central SVN repository and developers' devel-boxes to work on. However, they are not allowed to use those boxes as Linux clients directly, but instead they need to use Windows XP as development client because of corporate restrictions. So the webserver stays on the RHEL5 devel-box (which is reference platform), checkout stays there too and is shared to the developer via Samba (exclusively!). There is no misunderstanding of what SVN is and isn't, it's just that the checkouts are located on the server and not on the client. Because of that, they are not available online, but it's desired that they are -- and this has been the essence of my question. Tell me if this is easier to understand :) | You need to flush the routes . Use route -n flush several times . Afterwards add your routes with route add. | {
"source": [
"https://serverfault.com/questions/78367",
"https://serverfault.com",
"https://serverfault.com/users/120/"
]
} |
79,043 | I have to configure a MySQL server to act as a replication-master. I modified my.cnf to activate binary logs, but now in order to reload configuration I have to reload the service with /etc/init.d/mysqld restart .
The problem is that the server receives several queries per second and I don't want to lose all the data that could arrive in the meantime. Is there a way to reload the my.cnf configuration file without restarting the service? | MySQL specifically: The options in my.cnf are system variables. These variables are either dynamic (can be changed at runtime) or not dynamic. The ones that are dynamic can be changed at run time with the SET variable syntax. You can see the variables with SHOW VARIABLES; (see the sketch after this entry). But according to this link in the manual, the binary log option is not dynamic. So it looks like you have to restart. You might want to wait for someone who knows MySQL a little better than myself to confirm this, however. Daemons in general: In Linux, /etc/init.d/ holds scripts that start and stop daemons (services). Since these are scripts, you can view them with a text editor. Many of these scripts will take a reload argument. Looking at my mysql script, reload as an argument uses the mysqladmin command. The manual for mysqladmin under reload says: "reload: Reload the grant tables." So it looks like, in general, this isn't for configuration changes, but rather for changes in privileges (maybe the equivalent of the FLUSH PRIVILEGES command?). | {
"source": [
"https://serverfault.com/questions/79043",
"https://serverfault.com",
"https://serverfault.com/users/11420/"
]
} |
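For 79,043, a quick sketch of checking a variable and changing a dynamic one at runtime. Binary logging itself is not dynamic in the MySQL versions discussed here, so the SET GLOBAL line uses a different variable purely as an example:

```sh
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'log_bin';"          # is binary logging on?
mysql -u root -p -e "SET GLOBAL max_connections = 300;"              # a dynamic variable: takes effect immediately
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"  # confirm the new value
```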
79,077 | Is it safe to use an EBS volume while a snapshot is being created? I've currently got a 100GB EBS volume mounted. I am in the process of snapshotting it. Goodness, it's slow!! It's going to end up taking more than 45 minutes to snapshot. My question: is the EBS volume already copied and just being saved somewhere? Or is the snapshot actively copying from my mounted volume right now? Basically, if I start using it before the snapshot completes, am I hosed? I just can't believe it takes this long to copy. There really isn't even 100GB in use. It's more like 25GB. | You're safe to use the volume once you have triggered the snapshot, even if it's still in a pending state according to AWS - see this post. If you're taking a snapshot for the first time, it probably will take a while as it has to make a full copy to the region-wide S3 bucket, but remember, it's incremental after the first one has been stored, so subsequent snapshots should be a lot faster. NOTE: You can't create a volume out of a snapshot which is in a pending state. You'll get the error "Snapshot is in invalid state" if you do this. So please make sure to wait until the snapshot is in the "available" state. | {
"source": [
"https://serverfault.com/questions/79077",
"https://serverfault.com",
"https://serverfault.com/users/9271/"
]
} |
79,453 | Update: I got it working now. Jim Zajkowski's answer helped me detect that my /etc/init.d/couchdb reboot calls weren't actually rebooting the instance. After I manually killed the CouchDB processes and started a new instance, it picked up the required BindAddress change. I have installed CouchDB via aptitude install couchdb From my server, I can connect via telnet localhost 5984 and execute RESTful commands. When I try to access the server from another machine on our network or from a machine external of our network, I get a The connection was reset error. I've set up port forwarding on the router, and the server is otherwise accessible via Apache, Tomcat, SSH, etc. I'm new to Linux/Ubuntu, so I wasn't sure if there was a default firewall blocking the connection, so I ran: iptables -A INPUT -p tcp --dport 5984 -j ACCEPT but it didn't help. Here is the dump from running iptables -L -n -v Chain INPUT (policy ACCEPT 2121K packets, 1319M bytes)
pkts bytes target prot opt in out source destination
70 3864 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5984
9 1647 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 1708K packets, 1136M bytes)
pkts bytes target prot opt in out source destination I assume the bytes showing as transfered for 5984 are due to my localhost connection. Here is the dump from running netstat -an | grep 5984 tcp 0 0 127.0.0.1:5984 0.0.0.0:* LISTEN I configured couch.ini to have "BindAddress=0.0.0.0" and rebooted, so it should be listening on all interfaces. When I run "sudo /etc/init.d/couchdb stop" then run netstat, however, I still see the above entry. It looks like CouchDB isn't actually stopping at all. This may explain my problem, because it means it may mean that CouchDB never actually rebooted and never picked up the BindAddress change. I manually killed the CouchDB process and started it up again. Now netstat shows: tcp 0 0 127.0.0.1:5984 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:5984 127.0.0.1:35366 TIME_WAIT I still can't connect though, even from another machine on the LAN. | What does netstat -an | grep 5984 say? Does it say 127.0.0.1:5984 or *:5984? If it's 127.0.0.1, then CouchDB is still bound to the loopback interface only and needs to be set to listen on all interfaces.
"source": [
"https://serverfault.com/questions/79453",
"https://serverfault.com",
"https://serverfault.com/users/2662/"
]
} |
79,485 | I run a (remotely hosted) virtual Server with Windows 2008 Server for a client.
Initially, it had 10 GB of space.
During the course of a few weeks - during which nothing was done on the machine except normal work using a web-based ticket system - Windows began to fill up its infamous "winsxs" directory so much that in the end, the hard disk was full and we had to order another 5 GB.
Now, three weeks later, these 5 GB have been consumed by winsxs as well, and again I can't work on the machine. Winsxs is now 8 GB big, the rest of the windows directory 5 GB. I have found various sources on the web that describe the same problem. Apparently, Windows 2008 stores all language versions for all DLLs it downloads in the normal updating process. Just deleting stuff there is described as mortally dangerous as it contains vital components. I have not found any kind of tool or instructions to identify and remove those files that are no longer needed. What can I do?
Is this normal behaviour and, if it is, how do other servers with equally limited space manage? Is there something I can turn off or on? Of the pre-defined server roles, only "File services" (or whatever it's called in English, it's a Swiss server) is activated. In addition, I have installed Apache, MySQL, and Subversion. Automatic updates are activated. Edit: The problem persists. Note: I am aware that the WinSxS directory consists mainly of symlinks and that users often panic looking at its size. Still, out of 15 GB of space, I have 1.5 MB used by programs and data, and nothing left. I'm glad I can even access the damn machine. *I have already freed up 1 GB of data, which Windows filled again within 24 hours. It's like in a horror movie.
What I have tried: Installing SP2 (which comes with compcln.exe) is not an option, as the disk space is not enough for even that. There is no vsp1clean.exe on the machine, probably because SP1 has already been merged into the system. In fact, there exists no file named *cln.exe anywhere. There are no shadow copies. Shadow copies are not active. As far as I can tell, there are no system restore points active. The only server role activated is "file server". The standard "cleanup" function (right-click on C: drive) is offering me a baffling 2 MB in trash contents and temporary internet files. Using one of the "cleanup winsxs" scripts around is not an option for me, they all look too shady. I can't find anything directly from Microsoft addressing this issue. | The WinSxS directory doesn't take up nearly the space reported by Explorer since it uses Hard Links to physical files, not actual files. Explorer just has issues reporting the size of hard links. This article on Disk Space (referenced here http://aspoc.net/archives/2008/11/20/winsxs-disk-space-and-windows-7/ ) has a great explanation on the WinSxS directory. As to your actual disk usage problem - You can try to run COMPCLN.EXE to see if you can clean up any old service pack and hot fix files, which should help quite a bit. I would also look at any logging directories to see if theres something else going on. | {
"source": [
"https://serverfault.com/questions/79485",
"https://serverfault.com",
"https://serverfault.com/users/24210/"
]
} |
79,917 | I'm looking into buying a few rolls of bulk Cat6 cable to wire up a few cubicles to a test rack we have. One of the choices is PVC cable versus Plenum cable. What is the difference? Will this make any difference with crimping or crosstalk/interference? | There's no telecommunication difference (e.g. noise, crimping, termination), just the sheathing. The difference is an electrical code safety issue. Regular network cable (i.e. non-plenum) is flammable, can catch fire, can spread fire, and emit toxic fumes when burning. Plenum quality cable is required for use if you run your cable in air handling spaces (i.e. air ducts), or if you're running them between floors . Plenum cable is fire-resistant (it will melt, but not support combustion or carry a flame). It also emits less toxic fumes when it does burn. If you look at some network cable, you can see which type it is: CMP : Plenum rated cable (plenum means " air handling space ") CMR : Riser rated cable (riser means " between floors ") LSZH : Low smoke zero halogen rated cable CM/CMG/CMx : General purpose cable PVC : Unrated cable CMP burning: CMR burning: Notes CAT6 is still very precise about how the cables are terminated. Sloppy termination will reduce the capacity of the run. This is generally not the case with CAT5. The linked wikipedia article, has good diagrams illustrating what is plenum space and what isn't. Generally, you should use plenum cable in a dropped ceiling unless you are certain that it is not air-handling space. If a duct in your drop ceiling is missing a section: your drop ceiling suddenly, inadvertently, becomes an air-handling space. See also Wikipedia: Plenum cable Videos of different types of network cables burning | {
"source": [
"https://serverfault.com/questions/79917",
"https://serverfault.com",
"https://serverfault.com/users/3454/"
]
} |
80,014 | While looking around for some plugs to use for running and crimping some bulk Cat-6 cable, I noticed the online sites show RJ-45 plugs for Cat-5e and separate ones for Cat-6. Is there actually any difference between the two? The Cat-6 plugs mention having an "insert", but does this really matter? Last time I needed to crimp cable, it was for Cat-5e, which is why I'm asking this now. Thanks! | "It depends" . To the best of my knowledge, the standards themselves do not mandate any changes to the plugs. I would guess -- but I'm not 100% sure -- that the standards are mainly concerned with externally observable characteristics of the cabling such as crosstalk and attenuation, and leave the internal implementation details mostly up to each manufacturer. Having said that, the following comes to mind: 23 gauge copper wiring (thicker wires) is more common in Cat6 installations than in Cat5 in my experience. If the wires are thicker, the plugs are different. More and more manufacturers are updating their cabling systems, both to allow faster cabling work, and to ensure more consistent and/or higher quality. Many cabling systems now use a little 'form' to hold the wires in place before the plug. This is to minimize crosstalk and noise near the plug (where the cable is un-twisted, and much more susceptible to interference). See John Gardeniers answer regarding stranded / solid wiring ; these should use different plugs. Solid wiring is often used in building wiring. Regarding OP's link to a no-name plug, I think it's mostly marketing. While there can and should be differences in how plugs are designed, in the no-name space I don't think you'll find a consistent set of differences between no-name Cat5 and Cat6 plugs. Here is a little video that shows how some modern structured cabling systems use an insert / form. The same brand uses a "smart connector" for the 8P8C plugs as well. But this is a name-brand structured cabling system . Cabling systems will typically be installed by certified installators, and be validated end-to-end after installation by measuring that they meet or exceed an agreed level of performance. | {
"source": [
"https://serverfault.com/questions/80014",
"https://serverfault.com",
"https://serverfault.com/users/3454/"
]
} |
80,478 | When you want to have public key based ssh logins for multiple machines, do you use one private key, and put the same public key on all of the machines? Or do you have one private/public key pair for each connection? | I use one key per set of systems that share a common administrative boundary. This limits the number of machines that get popped if a key is compromised, whilst not completely overwhelming my capacity to store and manage several thousand keys. Different passphrases on each key means that even if all your private keys are stolen and one key is compromised, the rest don't go down the toilet with it. Also, if you do do something stupid (like copy a private key onto an untrusted machine), again you don't have to rekey everything, just the machines associated with that key. | {
"source": [
"https://serverfault.com/questions/80478",
"https://serverfault.com",
"https://serverfault.com/users/15623/"
]
} |
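A sketch of the per-boundary approach described in 80,478, using hypothetical host groups; the key type and filenames are only examples (older OpenSSH versions may need -t rsa instead of ed25519):

```sh
# One key per administrative boundary, each with its own passphrase
ssh-keygen -t ed25519 -f ~/.ssh/id_work_prod -C "work production"
ssh-keygen -t ed25519 -f ~/.ssh/id_personal  -C "personal machines"

# ~/.ssh/config selects the right key per host group:
#   Host *.prod.example.com
#       IdentityFile ~/.ssh/id_work_prod
#   Host *.home.lan
#       IdentityFile ~/.ssh/id_personal
```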
80,486 | We have a vCenter 4.0 Server with two ESX 4.0 hosts. We have a vCenter Server 4 Foundation license and a vSphere 4 Advanced 6CPU license. Licenses have been assigned to the server and hosts (each host has two CPUs leaving 2 unassigned on the vSphere license.) The licenses are applied successfully via the vCenter license management interface, then moments later each host displays the following error on the Summary page: Configuration Issues
License assignment on the host fails. Reasons: The license key can not be assigned. What causes this error and how can it be resolved? | I use one key per set of systems that share a common administrative boundary. This limits the number of machines that get popped if a key is compromised, whilst not completely overwhelming my capacity to store and manage several thousand keys. Different passphrases on each key means that even if all your private keys are stolen and one key is compromised, the rest don't go down the toilet with it. Also, if you do do something stupid (like copy a private key onto an untrusted machine), again you don't have to rekey everything, just the machines associated with that key. | {
"source": [
"https://serverfault.com/questions/80486",
"https://serverfault.com",
"https://serverfault.com/users/370/"
]
} |
80,727 | VMware and many network evangelists try to tell you that sophisticated (=expensive) fiber SANs are the "only" storage option for VMware ESX and ESXi servers. Well, yes, of course. Using a SAN is fast, reliable and makes vMotion possible. Great. But: Can all ESX/ESXi users really afford SANs? My theory is that less than 20% of all VMware ESX installations on this planet actually use fiber or iSCS SANs. Most of these installation will be in larger companies who can afford this. I would predict that most VMware installations use "attached storage" (vmdks are stored on disks inside the server). Most of them run in SMEs and there are so many of them! We run two ESX 3.5 servers with attached storage and two ESX 4 servers with an iSCS san. And the "real live difference" between both is barely notable :-) Do you know of any official statistics for this question? What do you use as your storage medium? | I do a lot of VMware consulting work and I'd say that the percentages are closer to 80% of the installed base use high availability shared storage (FC, iSCSI or high end NAS) and a lot of my clients are SME's. The key factor that I've found is whether the business treats its server up time as critical or not, for most businesses today it is. You certainly can run very high performance VM's from direct attached storage (a HP DL380 G6 with 16 internal drives in a RAID 10 array would have pretty fast disk IO) but if you are building a VMware or any other virtualized environment to replace tens, hundreds, or thousands of servers, then you are insane if you aren't putting a lot of effort (and probably money) into a robust storage architecture. You don't have to buy a high end SAN for the clustering functions - you can implement these with a fairly cheap NAS (or a virtualized SAN like HP\Lefthand's VSA) and still be using certified storage. However if you are using shared storage and it doesn't have redundancy at all points in the SAN\NAS infrastructure then you shouldn't really be using it for much more than testing. And redundancy is (at a minimum) dual (independent) HBA's\storage NICs in your servers, dual independent fabrics, redundant controllers in the SAN, battery backed cache\cache destaging, redundant hot swappable fans and power supplies etc, RAID 5\6\10\50 and appropriate numbers of hot spares. The real live difference between your systems is that if one of your standalone systems catastrophically fails you have a lot of work to do to recover it and you will incur downtime just keeping it patched. With clustered SAN attached systems, patching the hypervisors, or even upgrading hypervisor hardware, should result in zero downtime. A catastrophic server failure simply brings the service down for the length of time that it takes to reboot the VM on a separate node (at worst) or if you have Fault Tolerance covering those VMs then you have no downtime at all. | {
"source": [
"https://serverfault.com/questions/80727",
"https://serverfault.com",
"https://serverfault.com/users/5787/"
]
} |
80,767 | What are named and default instances?
What is/are the difference(s) between them?
Why are they used? | According to Microsoft, regarding named vs. default instances: Client applications connect to an instance of Microsoft SQL Server 2005 to work with a SQL Server database. Each SQL Server instance is made up of a distinct set of services that can have unique settings. The directory structure, registry structure, and service name all reflect the specific instance name you identify during setup. An instance is either the default, unnamed instance, or it is a named instance. When SQL Server 2005 is installed in the default instance, it does not require a client to specify the name of the instance to make a connection. The client only has to know the server name. A named instance is identified by the network name of the computer plus the instance name that you specify during installation. The client must specify both the server name and the instance name when connecting. By default, SQL Server installs in the default instance unless you specify an instance name. SQL Server Express, however, always installs in a named instance unless you force a default installation during setup. | {
"source": [
"https://serverfault.com/questions/80767",
"https://serverfault.com",
"https://serverfault.com/users/24753/"
]
} |
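To illustrate the connection difference described in 80,767, a sketch using the sqlcmd client; server and instance names are placeholders:

```cmd
REM Default (unnamed) instance: the server name alone is enough
sqlcmd -S MYSERVER
REM Named instance: server name plus instance name
sqlcmd -S MYSERVER\SQLEXPRESS
REM Or address an instance directly by TCP port
sqlcmd -S MYSERVER,1433
```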
80,862 | I would like to create a loop that repeats an ncftp transfer if it returns an error. I'm a little unsure how the exit code variable can be used in a loop.
Would something like this work? until [$? == 0]; do
ncftpput -DD -z -u user -p password remoteserver /remote/dir /local/file
done | I found the basis for this elegant loop elsewhere on serverfault. It turns out there is no need to save the exit code, as you can test directly on the command itself: until ncftpput -DD -z -u user -p password remoteserver /remote/dir /local/file; do
echo Transfer disrupted, retrying in 10 seconds...
sleep 10
done | {
"source": [
"https://serverfault.com/questions/80862",
"https://serverfault.com",
"https://serverfault.com/users/22153/"
]
} |
80,939 | I'm trying to determine who was recently logged into a specific machine in my office. So I used last, but wtmp begins yesterday (Monday) around 14:30. I was hoping to find info stretching back to Sunday, at least. Is there any way to get that info without plodding through the authorization log file? | Presumably your wtmp file has been rotated, so try last -f /var/log/wtmp.1 or last -f /var/log/wtmp.0 to read the previous files. If those don't work, ls /var/log/wtmp* and see if they're called something else. If they're compressed (.gz extension), decompress 'em. If they're not there, find whoever set up the bollocks rotation scheme and give them a solid foot-punch to the pantaloons. There's no reason not to keep at least a few weeks of wtmp logs. | {
"source": [
"https://serverfault.com/questions/80939",
"https://serverfault.com",
"https://serverfault.com/users/9735/"
]
} |
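A follow-up note on the compressed case above: last cannot read gzipped files directly, so a decompressed copy is the usual workaround (the wtmp.1.gz name is an assumption and depends on the rotation scheme): zcat /var/log/wtmp.1.gz > /tmp/wtmp.1
last -f /tmp/wtmp.1
rm /tmp/wtmp.1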
81,022 | Let's say I have 2 sites(Superuser and Serverfault) running from their own Apache virtual host on one box. The 2 sites are powered by Django and are running on Apache with mod-wsgi. A typical configuration file for one of the site will look like the following: WSGIDaemonProcess serverfault.com user=www-data group=www-data processes=5 The host is a linux machine with 4GB of RAM running Ubuntu. Can anyone suggest the number of processes I should specify above for my 2 sites? Let's assume they have the same traffic as the actual Superuser and Serverfault sites. | Well, how much traffic do the actual Superuser and Serverfault sites have? Hypotheticals aren't much use if they don't have enough info to make the answer easier... Your worst-case process count should be the peak number of requests per second you want the site to be able to handle, divided by the number of requests per second that one process can handle if all those requests are made to your slowest action (so the reciprocal of the processing time of that action). Add whatever fudge factor you think is appropriate, based on the confidence interval of your req/sec and time measurements. The average case count is the same, but you divide the req/sec by the weighted mean of your requests per second figure for each action (the weight is the percentage of requests you expect to hit that particular action). Again, fudge factors are useful. The actual upper bound of how many processes you can run on the machine is dictated by the upper amount of memory each process takes; spool up one process, then run a variety of memory-hungry actions (ones that retrieve and process a lot of data, typically) against it with a realistic data set (if you just use a toy data set for testing, say 50 or 100 rows, then if one of your actions retrieves and manipulates every row in the table it won't be a good measurement for when that table grows to 10,000 rows) to see what the memory usage balloons out to. You can artificially constrain your per-process memory usage with a script that reaps workers that reach a certain memory usage threshold, at the risk of causing nasty problems if you set that threshold too low. Once you've got your memory use figure, you deduct some amount of memory for system overhead (I like 512MB myself), deduct a pile more if you've got other processes running on the same machine (like a database), and then some more to make sure you don't run out of disk cache space (depends on your disk working set size, but again I'd go with no less than 512MB). That's the amount of memory that you divide by your per-process memory usage to get the ceiling. If the number of processes you need to service your peak load is greater than the number of processes you can fit on the box, you need more machines (or to move the database to another machine, in the simplest case). There you are, several years of experience scaling websites distilled into one small and simple SF post. | {
"source": [
"https://serverfault.com/questions/81022",
"https://serverfault.com",
"https://serverfault.com/users/16033/"
]
} |
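As a worked example of the arithmetic above, with every number invented purely for illustration: at a peak of 40 requests/second and a slowest action of 0.5 seconds, one process handles 2 requests/second at worst, so the worst case is 40 / 2 = 20 processes; with 4096 MB of RAM, 512 MB reserved for the system, 512 MB kept free for disk cache and roughly 120 MB per process, the ceiling is (4096 - 512 - 512) / 120, about 25 processes, so the 20 workers fit on one box.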
81,028 | How many remote connections can be made in windows server 2008. I want to work with 2 my friends on a same machine. But windows server 2008 forces, second user to logout when 3rd user try to login. | Well, how much traffic do the actual Superuser and Serverfault sites have? Hypotheticals aren't much use if they don't have enough info to make the answer easier... Your worst-case process count should be the peak number of requests per second you want the site to be able to handle, divided by the number of requests per second that one process can handle if all those requests are made to your slowest action (so the reciprocal of the processing time of that action). Add whatever fudge factor you think is appropriate, based on the confidence interval of your req/sec and time measurements. The average case count is the same, but you divide the req/sec by the weighted mean of your requests per second figure for each action (the weight is the percentage of requests you expect to hit that particular action). Again, fudge factors are useful. The actual upper bound of how many processes you can run on the machine is dictated by the upper amount of memory each process takes; spool up one process, then run a variety of memory-hungry actions (ones that retrieve and process a lot of data, typically) against it with a realistic data set (if you just use a toy data set for testing, say 50 or 100 rows, then if one of your actions retrieves and manipulates every row in the table it won't be a good measurement for when that table grows to 10,000 rows) to see what the memory usage balloons out to. You can artificially constrain your per-process memory usage with a script that reaps workers that reach a certain memory usage threshold, at the risk of causing nasty problems if you set that threshold too low. Once you've got your memory use figure, you deduct some amount of memory for system overhead (I like 512MB myself), deduct a pile more if you've got other processes running on the same machine (like a database), and then some more to make sure you don't run out of disk cache space (depends on your disk working set size, but again I'd go with no less than 512MB). That's the amount of memory that you divide by your per-process memory usage to get the ceiling. If the number of processes you need to service your peak load is greater than the number of processes you can fit on the box, you need more machines (or to move the database to another machine, in the simplest case). There you are, several years of experience scaling websites distilled into one small and simple SF post. | {
"source": [
"https://serverfault.com/questions/81028",
"https://serverfault.com",
"https://serverfault.com/users/24845/"
]
} |
81,165 | In IIS 7 on Windows Server 2008, application pools can be run as the "ApplicationPoolIdentity" account instead of the NetworkService account. How do I assign permissions to this "ApplicationPoolIdentity" account. It does not appear as a local user on the machine. It does not appear as a group anywhere. Nothing remotely like it appears anywhere. When I browse for local users, groups, and built-in accounts, it does not appear in the list, nor does anything similar appear in the list. What is going on? I'm not the only one with this problem: see Trouble with ApplicationPoolIdentity in IIS 7.5 + Windows 7 for an example. "This is unfortunately a limitation of the object picker on Windows Server 2008/Windows Vista - as several people have discovered it already, you can still manipulate the ACL for the app-pool identity using command line tools like icacls ." | Update: The original question was for Windows Server 2008, but the solution is easier for Windows Server 2008 R2 and Windows Server 2012 (and Windows 7 and 8). You can add the user through the NTFS UI by typing it in directly. The name is in the format of IIS APPPOOL\{app pool name}. For example: IIS APPPOOL\DefaultAppPool. IIS APPPOOL\{app pool name} Note: Per comments below, there are two things to be aware of: Enter the string directly into the "Select User or Group" and not in the search field. In a domain environment you need to set the Location to your local computer first. Reference to Microsoft Docs article: Application Pool Identities > Securing Resources Original response: (for Windows Server 2008) This is a great feature, but as you mentioned it's not fully implemented yet. You can add the app pool identity from the command prompt with something like icacls, then you can manage it from the GUI. For example, run something like this from the command prompt: icacls c:\inetpub\wwwroot /grant "IIS APPPOOL\DefaultAppPool":(OI)(CI)(RX) Then, in Windows Explorer, go to the wwwroot folder and edit the security permissions. You will see what looks like a group (the group icon) called DefaultAppPool. You can now edit the permissions. However, you don't need to use this at all. It's a bonus that you can use if you want. You can use the old way of creating a custom user per app pool and assigning the custom user to disk. That has full UI support. This SID injection method is nice because it allows you to use a single user but fully isolate each site from each other without having to create unique users for each app pool. Pretty impressive, and it will be even better with UI support. Note: If you are unable to find the application pool user, check to see if the Windows service called Application Host Helper Service is running. It's the service that maps application pool users to Windows accounts. | {
"source": [
"https://serverfault.com/questions/81165",
"https://serverfault.com",
"https://serverfault.com/users/10023/"
]
} |
81,362 | How can I configure yum to use some repository which has git rpms? | Use the EPEL (Extra Packages for Enterprise Linux) repository. The easiest way to enable it is by installing the epel-release package. Here's how if you have RHEL 5 x86_64: [root@localhost]# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-5.noarch.rpm
[root@localhost]# yum install git | {
"source": [
"https://serverfault.com/questions/81362",
"https://serverfault.com",
"https://serverfault.com/users/14257/"
]
} |
81,402 | We have several domains all pointing their MX records at mail.ourdomain.com, an internal mail server. We are looking to outsource our email to a new supplier, who would like us to use mail.newsupplier.com; their mail server. We'd rather not change all of the domain names to point to that MX record; several aren't in our control, and it would mean attempting to get many parties to change their MX records at the same time, which seems problematic. Simpler would be to repoint mail.ourdomain.com at the IP for the new supplier. The problem is that our supplier isn't able to guarantee that IP will be fixed. My question is, therefore: is changing mail.ourdomain.com to CNAME to mail.newsupplier.com an acceptable solution? (For the record, only the email is moving, so we'd want to leave www.ourdomain.com and everythingelse.ourdomain.com unchanged.) I've found several messages warning of the dangers of CNAMES in MX records, but I can't quite find someone talking about this particular setup, so any advice will be greatfully received. | According to RFC 1123, the MX record cannot point to a CNAME. If I were in your situation, I would setup mail.ourdomain.com as an A record pointing to the new suppliers IP address and then quickly work on changing all MX records over to the correct data. Then address why changing MX records is so difficult in your organization. That being said, most mail servers will still submit mail to a CNAME; however, you can't be guaranteed of it. | {
"source": [
"https://serverfault.com/questions/81402",
"https://serverfault.com",
"https://serverfault.com/users/24986/"
]
} |
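A quick sanity check of such a setup, using the placeholder names from the question; the CNAME query should return nothing if the MX target is a plain A record rather than a CNAME: dig +short MX ourdomain.com
dig +short CNAME mail.ourdomain.com
dig +short A mail.ourdomain.com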
81,544 | Since I use the *nix command screen all day, and I couldn't find anyone starting this question, I figured it should be started. You know the drill: community wiki, one answer per features so we all can vote. | I love using it for connecting to serial consoles, i.e. screen /dev/ttyS0 19200 This command simply opens up a connection to serial port 0 (ttyS0) with a baud speed of 19200 | {
"source": [
"https://serverfault.com/questions/81544",
"https://serverfault.com",
"https://serverfault.com/users/11086/"
]
} |
81,609 | I just added the following to my Apache config file: AddOutputFilterByType DEFLATE text/html text/plain text/xml How do I check if it is actually working? Nothing on the browser tells me if the page contains gzipped content. | An alternative way to quickly check the headers of the HTTP response would be to use curl . For instance, if the Content-Encoding header is present in the response, then mod_deflate works: $ curl -I -H 'Accept-Encoding: gzip,deflate' http://www.example.org/index.php
[...]
Content-Encoding: gzip
[...] If you run the above command without the -H 'Accept-Encoding: gzip,deflate' part, which implies that your HTTP client does not support reading compressed content, then Content-Encoding header will not be present in the response. Hope this helps. | {
"source": [
"https://serverfault.com/questions/81609",
"https://serverfault.com",
"https://serverfault.com/users/24457/"
]
} |
81,689 | Are there any downsides to giving Application Pools multiple Worker Processes in IIS? They seem really easy to enable and (almost) everything I’ve read seems to suggest they’re good... so why doesn’t IIS give each App Pool 10+ Worker Processes? There must be some detrimental effects, right? | You're right to be suspicious. Web Gardens having no downside is a massive myth, they can cause you no end of problems, but many people still don't even know when they should be used. According to Chris Adams (from the IIS team) there is only a single reason you would want to use a Web Garden: To give applications, that are not CPU-bound but execute long running requests, the ability to scale and not use up all threads available in the worker process. There are lots of reasons why they can be bad, however, it is a common misconception that there's no downside. They increase system overheads (they don't share cache), they don't share sessions (the user can lose their session if they're switched over to another process), InProc can get messed up. In short, they're actually, more often than not, a lot of trouble, and you shouldn't be using one without good reason. Read Chris's full explanation: http://blogs.iis.net/chrisad/archive/2006/07/14/1342059.aspx Further reading: http://weblogs.asp.net/owscott/why-you-shouldn-t-use-web-gardens-in-iis-week-24 | {
"source": [
"https://serverfault.com/questions/81689",
"https://serverfault.com",
"https://serverfault.com/users/23950/"
]
} |
81,723 | What is the difference between SAN, NAS and DAS? | First it is best to define the difference between a block device and a filesystem. This is more easily grasped if you are familiar with UNIX because it makes an objective distinction between the two things. Still, the same applies to Windows. A block device is a handle to the raw disk, such as /dev/sda for a disk or /dev/sda1 for a partition on that disk. A filesystem is layered on top of the block device in order to store data. You can then mount this, such as mount /dev/sda1 /mnt/somepath . With those terms in mind it's then easier to see the distinction between the following. DAS is a block device from a disk which is physically [directly] attached to the host machine. You must place a filesystem upon it before it can be used. Technologies to do this include IDE, SCSI, SATA, etc. SAN is a block device which is delivered over the network. Like DAS you must still place a filesystem upon it before it can be used. Technologies to do this include FibreChannel, iSCSI, FoE, etc. NAS is a filesystem delivered over the network. It is ready to mount and use. Technologies to do this include NFS, CIFS, AFS, etc. | {
"source": [
"https://serverfault.com/questions/81723",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
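The distinction shows up most clearly at mount time: a DAS or SAN device arrives as a block device that still needs a filesystem, while a NAS export mounts as-is. Device, server and export names below are made up for illustration: mkfs.ext3 /dev/sdb1                                    # DAS/SAN: format the block device first
mount /dev/sdb1 /mnt/blockstore                        # ...then mount the new filesystem
mount -t nfs filer.example.com:/export/data /mnt/nas   # NAS: mount the exported filesystem directly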
81,736 | We are moving servers to another facility with different block of IP addresses. Will we need to get new SSL certificates issued and installed once the move has taken place? If so, is there any way to get prepared for this before the server is moved instead of waiting for it to boot up to then go through the process of requesting from IIS, going to certificate vendor, etc? | Most (I think ALL) SSL certificates are domain-name-based, so there should be no need to get a new certificate as long as the hostname of the server will be the same after the move. It will require a DNS change, timed with the move, however. | {
"source": [
"https://serverfault.com/questions/81736",
"https://serverfault.com",
"https://serverfault.com/users/22037/"
]
} |
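An easy way to double-check what the existing certificate is actually issued to, and when it expires, before the move (the file path is just an example): openssl x509 -in /path/to/certificate.crt -noout -subject -dates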
81,746 | I have an encrypted FAT volume (for compatibility) containing a private key file and other sensitive data. I want to connect to my server through SSH using my private key, but of course, as FAT doesn't support file permissions, it ignores my key, saying its permissions are too open. So currently I'm copying it somewhere else on my hard drive with 0600 permissions, using it and then securely erasing it, but it's a pain. Is there a way to bypass the permission check for this particular ssh/scp command? Edit: To clarify, it was a TrueCrypt volume on OS X. On the solution: The accepted answer below solved my problem (using an SSH key file located on a TrueCrypt volume with Mac OS X), but it is a workaround. It looks like there is no way to "bypass the SSH key file permission check". | Adding the key from stdin worked for me: cat /path/to/id_rsa | ssh-add -k - | {
"source": [
"https://serverfault.com/questions/81746",
"https://serverfault.com",
"https://serverfault.com/users/25106/"
]
} |
81,876 | We use Windows "Remote Desktop" to log into server machines. At the moment, I am getting the following error message: The terminal server has exceeded the maximum number of allowed connections. Now, the cause is obvious (2 other people are logged on right now!). I recall that in the past I solved this by logging on to some other machine in the same domain and then going to some admin tool which I cannot recall. From there I could see who was logged in and remotely terminate their session (assuming I had sufficient privileges) -- thereby freeing up one of the connections. Does anyone know how to do this? | You can use Terminal Services Manager under Administrative Tools. If you prefer a command-line solution, you can use this to list RDP sessions: query session /server:servername To reset a session, look for the relevant session ID in the "ID" column of the output from the above command, then use: reset session <sessionid> /server:servername | {
"source": [
"https://serverfault.com/questions/81876",
"https://serverfault.com",
"https://serverfault.com/users/15829/"
]
} |
82,306 | If I have 3 domains, domain1.com, domain2.com, and domain3.com, is it possible to set up a default virtual host to domains not listed? For example, if I would have: <VirtualHost 192.168.1.2 204.255.176.199>
DocumentRoot /www/docs/domain1
ServerName domain1
ServerAlias host
</VirtualHost>
<VirtualHost 192.168.1.2 204.255.176.199>
DocumentRoot /www/docs/domain2
ServerName domain2
ServerAlias host
</VirtualHost>
<VirtualHost 192.168.1.2 204.255.176.199>
DocumentRoot /www/docs/everythingelse
ServerName *
ServerAlias host
</VirtualHost> If you register a domain and point it to my server, it would default to everythingelse showing the same as domain3. Is that possible? | Yes, that should work, except ServerAlias should be "*", with ServerName set to an actual hostname. You might need to make sure that VirtualHost is the very last loaded... | {
"source": [
"https://serverfault.com/questions/82306",
"https://serverfault.com",
"https://serverfault.com/users/22230/"
]
} |
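To verify which virtual host Apache will actually use as the catch-all, dump the parsed configuration; the output marks the default server for each address and shows the order in which the vhosts were loaded: apachectl -S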
82,626 | Right now, I make everyone do ~/.vimrc and put their settings there. How can I make a global, default .vimrc for new users? | usually by creating /etc/vimrc or /etc/vim/vimrc. Depends on your version of vim and linux/unix | {
"source": [
"https://serverfault.com/questions/82626",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
82,801 | I've occasionally lost my config file "/etc/mysql/my.cnf", and want to restore it. The file belongs to package mysql-common which is needed for some vital functionality so I can't just purge && install it: the dependencies would be also uninstalled (or if I can ignore them temporarily, they won't be working). Is there a way to restore the config file from a package without un- ar -ing the package file? dpkg-reconfigure mysql-common did not restore it. | dpkg -i --force-confmiss mysql-common.deb will recreate any missing configuration files, ie /etc/mysql/my.cnf in your case. | {
"source": [
"https://serverfault.com/questions/82801",
"https://serverfault.com",
"https://serverfault.com/users/12097/"
]
} |
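If the original .deb is not at hand, the same dpkg option can usually be passed through apt instead; this is a sketch and assumes the package is still available from the configured repositories: apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" mysql-common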
82,857 | I have scheduled a cron job to run every minute but sometimes the script takes more than a minute to finish and I don't want the jobs to start "stacking up" over each other. I guess this is a concurrency problem - i.e. the script execution needs to be mutually exclusive. To solve the problem I made the script look for the existence of a particular file (" lockfile.txt ") and exit if it exists or touch it if it doesn't. But this is a pretty lousy semaphore! Is there a best practice that I should know about? Should I have written a daemon instead? | There are a couple of programs that automate this feature, take away the annoyance and potential bugs from doing this yourself, and avoid the stale lock problem by using flock behind the scenes, too (which is a risk if you're just using touch). I've used lockrun and lckdo in the past, but now there's flock (1) (in newish versions of util-linux) which is great. It's really easy to use: * * * * * /usr/bin/flock -n /tmp/fcj.lockfile /usr/local/bin/frequent_cron_job | {
"source": [
"https://serverfault.com/questions/82857",
"https://serverfault.com",
"https://serverfault.com/users/21415/"
]
} |
83,508 | Can anyone tell me—in a nutshell—what the purpose of these two directories are in Debian? /etc/apache2/sites-enabled
/etc/apache2/sites-available I notice that diffing sites-available/000-default and sites-enabled/default shows they are identical. What gives? | sites-available contains the apache config files for each of your sites. For example: <VirtualHost *:80>
ServerName site.mysite.com
ServerAdmin [email protected]
DirectoryIndex index.php
DocumentRoot /home/user/public_html/site.mysite.com/public
LogLevel warn
ErrorLog /home/user/public_html/site.mysite.com/logs/error.log
CustomLog /home/user/public_html/site.mysite.com/logs/access.log combined
</VirtualHost> When you want to add a new site (for example, site.mysite.com), you add it here, and use: a2ensite site.mysite.com To enable the site. Once the site is enabled, a symlink to the config file is placed in the sites-enabled directory, indicating that the site is enabled. | {
"source": [
"https://serverfault.com/questions/83508",
"https://serverfault.com",
"https://serverfault.com/users/25736/"
]
} |
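The reverse operation is symmetric, and either change needs a reload to take effect: a2dissite site.mysite.com
/etc/init.d/apache2 reload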
83,522 | Software developers have the concept of "dogfooding", which is where they personally use the software that they are developing, often on a regular basis. For some projects, the direct interaction it provides can be invaluable in debugging the system. So I ask the community: What is the system administration equivalent to dogfooding? | I don't think there'll be as clear an answer as for programming, but a couple partial answers come to mind: Using a PC that's set up from a standard image the same as anyone else. Running with user privs. most of the time, elevating only when necessary. Another thought: Ask a close friend or relative to go through your documentation and follow it and tell you honestly if it's clear. | {
"source": [
"https://serverfault.com/questions/83522",
"https://serverfault.com",
"https://serverfault.com/users/23300/"
]
} |
83,856 | Is there any way to configure a user on a Linux box (Centos 5.2 in this case) so that they can use scp to retrieve files, but can't actually login to the server using SSH? | DEPRECATED : Please note the following answer is out of date. rssh is no longer maintained and is no longer a secure method. rssh shell ( http://pizzashack.org/rssh/ ) is designed for precisely this purpose. Since RHEL/CentOS 5.2 doesn't include a package for rssh, you might look here to obtain an RPM: http://dag.wieers.com/rpm/packages/rssh/ To use it just set it as a shell for a new user like this: useradd -m -d /home/scpuser1 -s /usr/bin/rssh scpuser1
passwd scpuser1 ..or change the shell for an existing one like this: chsh -s /usr/bin/rssh scpuser1 ..and edit /etc/rssh.conf to configure rssh shell - especially uncomment allowscp line to enable SCP access for all rssh users. (You may also want to use chroot to keep the users contained in their homes but that's another story.) | {
"source": [
"https://serverfault.com/questions/83856",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
]
} |
83,874 | I have the following data in my DNS zone file for my domain: $ORIGIN mydomain.com.
@ IN A 208.X.Y.Z
mail IN A 208.X.Y.Z
... etc.. What does the @ line mean? I know what an A record is, but what is a host with an at sign (@)? | RFC 1035 defines the format of a DNS zone file. ... on page 35 you'll find: @ A free standing @ is
used to denote the current origin. This means that @ is a shortcut for the name defined with $ORIGIN . You can find more information on $ORIGIN here , which is an excerpt from Pro DNS and BIND , published by Apress. | {
"source": [
"https://serverfault.com/questions/83874",
"https://serverfault.com",
"https://serverfault.com/users/58/"
]
} |
84,291 | What algorithm does Windows use to decide which DNS Server it will query in order to resolve names? Let's say I have several interfaces, all active, some with no dns server specified, some told to determine it automatically, and some with it specified manually (in interface ipv4 AND interface ipv6). I'm asking for an answer to this general question hoping that I know how to solve a more specific problem in Windows Vista - I have two interfaces, one a lower metric and a DNS server specified manually. nslookup uses THIS DNS server and resolves the names correctly. However, all other applications fail to resolve the name unless I manually specify a DNS server for the other interface, which the applications then use. nslookup also uses the DNS server specified for this other interface once it is specified. Thanks | If I'm not mistaken, it's determined by the NIC binding order in the Advanced Settings in the network connections folder. You can verify it by changing the binding order of the various NIC's and running nslookup as a test. To expand on my answer, citing the article that Evan linked , here is an excerpt from said article: The DNS Client service queries the DNS servers in the following order: The DNS Client service sends the name query to the first DNS server on the preferred adapter’s list of DNS servers and waits one second for a response. If the DNS Client service does not receive a response from the first DNS server within one second, it sends the name query to the first DNS servers on all adapters that are still under consideration and waits two seconds for a response. If the DNS Client service does not receive a response from any DNS server within two seconds, the DNS Client service sends the query to all DNS servers on all adapters that are still under consideration and waits another two seconds for a response. If the DNS Client service still does not receive a response from any DNS server, it sends the name query to all DNS servers on all adapters that are still under consideration and waits four seconds for a response. If it the DNS Client service does not receive a response from any DNS server, the DNS client sends the query to all DNS servers on all adapters that are still under consideration and waits eight seconds for a response. The preferred adapter in step 1 being the adapter that's listed first in the binding order. | {
"source": [
"https://serverfault.com/questions/84291",
"https://serverfault.com",
"https://serverfault.com/users/25946/"
]
} |
84,430 | Here's what I'd like to automate: 00 08 * * * psql -Uuser database < query.sql | mail [email protected] -s "query for `date +%Y-%m-%dZ%I:%M`" Here's the error message: /bin/sh: -c: line 0: unexpected EOF while looking for matching ``'
/bin/sh: -c: line 1: syntax error: unexpected end of file | From crontab(5) : The ``sixth'' field (the rest of the line) specifies the
command to be run. The entire command portion of the line, up
to a newline or % character, will be executed by /bin/sh
or by the shell specified in the SHELL variable of the
crontab file. Percent-signs (%) in the command, unless escaped
with backslash (\), will be changed into newline characters,
and all data after the first % will be sent to the command as
standard input. There is no way to
split a single command line onto multiple lines, like the shell's
trailing "\". Just add backslashes before % signs: 00 08 * * * psql -Uuser database < query.sql | mail [email protected] -s "query for `date +\%Y-\%m-\%dZ\%I:\%M`" | {
"source": [
"https://serverfault.com/questions/84430",
"https://serverfault.com",
"https://serverfault.com/users/2788/"
]
} |
84,521 | I'm using puppet to admin a cluster of debian servers. I need to change the timezone of each machine on the cluster. The proper debian way to do this is to use dpkg-reconfigure tzdata . But I can only seem to change it if I use the dialog. Is there some way to automate this from the shell so I can just write an Exec to make this easy? If not, I think the next best way would probably be to have puppet distribute /etc/timezone and /etc/localtime with the correct data across the cluster. Any input appreciated! | You need to specify the frontend as `noninteractive' and it will save your current settings. dpkg-reconfigure will take the current system settings as gospel, so simply change your timezone the way you would normally and run it with the non-interactive flag e.g. for me to change to "Europe/Dublin" where I am: # echo "Europe/Dublin" > /etc/timezone
# dpkg-reconfigure -f noninteractive tzdata Obviously this allows you to use puppet/cfengine as you like to distribute /etc/timezone also. EDIT: after @gertvdijk comment pointing to https://bugs.launchpad.net/ubuntu/+source/tzdata/+bug/1554806 and @scruss answer you will probably have to do it like this in most modern distributions: $ sudo ln -fs /usr/share/zoneinfo/Europe/Dublin /etc/localtime
$ sudo dpkg-reconfigure -f noninteractive tzdata | {
"source": [
"https://serverfault.com/questions/84521",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
84,685 | We're about to get our first sysadmin to look after a multitude of SQL Servers which have previously been awkwardly looked after by a mixture of the developers and IT support. It's long overdue, and we've been trying to persuade the higher-ups to agree to one for years. Well, finally they did but the salary we were able to offer wasn't exactly inspiring to say the least. Nevertheless, we've snagged one somehow. What I'd like to know is what are the early signs to look out for that a new sysadmin doesn't really know what they're doing or what dangerous habits to look for, with particular focus on SQL Server. I'm a little nervous that our bargain-basement hunting may not work out too well, which has been the case with other roles. Any thoughts, please? | Take this first part with a grain of salt, because it is perhaps influenced by my having worked as a contractor for so many years. Consider looking at a contractor if your ability to pay is such that you can't attract top talent in a full-time capacity. If you're paying too little and asking for too much you're going to either get poorly skilled employees, employees who have glaring defects that may not be skill-related (poor interpersonal skills, substance abuse issues, etc), or you'll end up with a "revolving door" position where employees work for awhile and leave for better pay. If your company is hung-up on both paying too little and needing someone for a given period of time, as opposed to fulfill a given set of tasks, then you're probably in a hopeless scenario. Likewise, if the tasks will keep a full-time employee busy and the company is planning on paying too little, then it's also hopeless. You will get what you pay for in the long run, one way or another. My guess is that you don't really have a full-time need, and the company could probably spend the planned salary, or less, on a contractor who would do everything you need. A contractor is much easier to "get rid of" if the relationship is a "bad fit". A contractor can typically be much more flexible than a full-time employee re: work logistics (weekends, evenings, etc). A good contractor is going to treat your company's needs with a very high degree of skill and care because they know how easily your company can sever the relationship and look elsewhere. This is going to sound really trite, but more than any of the other items below, pay attention to your sysadmin's ability to communicate with others. Basic writing and speaking skills are important, and do a lot to indicate the state of the mental processes occurring "behind the scenes". A sysadmin's work should involve communication with other IT and non-IT employees, and an ability to communicate effectively is essential. Having an ability to form analogies and communicate abstract concepts is certainly nice "icing on the cake", but if your sysadmin can't even write complete sentences or speak complete thoughts then it's hopeless already. There are points in everybody else's answers that ring true to me re: a "bad fit" (be it an employee or a contractor). I've been the guy who helps companies bridge the gap between firing a bad sysadmin and hiring a replacement, and I've seen a number of bad scenarios play out. (Being the person who is changing passwords, looking for "back doors", etc, while the sysadmin is over in the CEO's office being fired is fun work, but stressful, too.) 
Some "IT specific" nasty attitudes I've seen (cribbing from some parts of other posters' answers here, unashamedly) in dysfunctional situations include: Rip everything out and start over : It's one thing to identify something that's a "ticking time bomb" and take care of it, but often in IT I run into (often immature and entry-level) sysadmins who seek to "build an empire" in their image, and obsess over removing old infrastructure for the sake of installing new. It's one thing to make a business case supported by facts and ROI projections, but I've seen this particular dysfunction as being nothing more than a strong personal drive to replace systems for the sake of replacement. I can't tell you that : These are the sysadmins who, while taking a strong personal ownership stake in their work, go too far and become overly possessive, secretive, and paranoid. The computers belong to the business, not the sysadmin. Failure to document work, disclose passwords, or be open about how systems work (or fail) isn't a good sign. I've heard some sysadmins cite "security" as a reason to be secretive, but security by obscurity isn't security. I've also heard sysadmins with this attitude say things like "Yeah, but if I give the passwords to so-and-so they'll just screw it up." Usually, this is accompanied by a veiled or overt statement of fear for being blamed if something goes wrong subsequent to the disclosure. If a business is so unstable that this fear is justified the sysadmin would do better to leave and find another job than to play games with secrecy. Blame somebody / everybody / anybody else : These are the sysadmins that constantly cite third parties, their predecessor, or the users experiencing issues as the cause of problems. Certainly, there are issues caused by all these factors, but a pattern of consistent and repeated finger-pointing is a bad sign. We've all had to deal with hardware errata, software bugs, and users creating problems for themselves. Being able to identify one of these sources as a root cause to an issue doesn't make it finger-pointing. Being unwilling to investigate an issue and identify a root cause, though, combined with the reaction of vaguely waving hands and saying "It's gotta be that buggy Windows / Linux / Cisco router / etc..." is cause for concern. Power trip : These are the sysadmins who delight and setting up roadblocks for users because of a personal agenda or a perceived business agenda. Again, it's one thing to place limitations on users for justifiable business reasons. It's quite another, though, to be the "preventer of IT services" simply for the mad power rush of being able to control others. I've seen this particular dysfunction extend into really nasty things like "e-stalking" of employees by reading their email, covertly performing screen / session captures, listening to phone calls, and just generally being a "creepy" person to others. Policies don't apply to me : Often combined with the "power trip" attitude, these are the sysadmins who refuse to be subject to the IT policies that they, themselves, otherwise enforce or dictate. While it can be benign and harmless, I've seen this cause nasty situations like threatened sexual harassment litigation (a sysadmin surfing and prominently displaying work-inappropriate content). Sysadmins are in trusted positions, and need to maintain an attitude of professionalism. Part of that attitude means playing by the same rules and being accountable like everyone else. 
Just because we have the ability to perform activities "off the record" w/ our elevated access permissions and rights doesn't mean that we should do it. Can't admit weakness : It takes a strong person to say "I don't know the answer to that, but I can find it for you." Everyone has gaps in their knowledge and experience. This particular dysfunction often results in situations where a sysadmin ends up vastly over their head. It's important to take calculated risks in career development, and it can be said that great personal growth occurs when people "bite off more than they can chew" and succeed. On the other hand, great expense (or outright failure) for a business could easily occur when a sysadmin decides to tackle important issues like disaster recovery or IT security and fails for lack of ability. Managers who unreasonably disallow their employees access to third-party resources / training / support can help to create this kind of culture. No one should be penalized for admitting that they don't know how to do something while expressing a willingness to help find the right answer (or, even better, learning how to do it themselves). These are my toys : This is the sysadmin who treats the business IT infrastructure as an exciting toy. It's one thing to identify a particularly interesting technology that happens to fulfill a business need well, but it's quite another to influence a business to spend money on technology for the unstated purpose of being something fun to play with. I've seen situations where sysadmins became enamored with a given technology and decides to bring that technology in to solve a problem not because it's suited to the business need, but because it's something they'd like to play with. I've seen this happen all kinds of things: fiber optics, virtualization, SAN gear, wireless networking, etc. Management should keep this in check as much as possible, but non-technical managers may always understand if a given technology really is something the business needs or not. I've always done it this way : This is the sysadmin who is dead set in their ways. Usually, I've found this combined with an attitude of "I don't want to learn about new things", too. Our field is changing. Some of the work that we did 10 years ago is automated today, and some of it remains the "same old, same old". Everything about our industry is constantly being revised, updated, and refreshed. Best practices change more slowly, but even they change too. It's unreasonable to expect that every sysadmin will keep up with the "cutting edge" of technology, but it's also unacceptable for a sysadmin to languish in years-old technology showing no sign of interest in updating skills. If a business is a growing concern, its IT operation should be forward-looking. (Obviously, there's a balance here, too. You can tip the scales too far and end up in a "these are my toys" scenario, as well...) No understanding of business : Business "does IT" because it helps in doing business efficiently. Any other use of IT in business is counter-productive. Too often I've seen sysadmins who are unaware of basic concepts of accounting and business (revenue less expenses equals profit, etc). I would never expect a sysadmin to be an expert in accounting, but I would expect them to understand the basic way in which a business incurs expenses for the purpose of turning a profit. 
In poor economic times, especially, it's nice to have your sysadmin understand where the money comes from and why the business makes the decisions it does related to where the money goes. A sysadmin who believes that IT stands apart from the "business" part of the business isn't an asset. No desire for continuity : In today's occupational culture, it's should be assumed that we'll all work for a variety of employers. Our job today isn't, statistically, going to be our job forever. A good sysadmin should prepare documentation not because "they might get hit by a bus", but because their eventual replacement will need it. An unwillingness to prepare documentation because of perceived "job security" reeks, to me, of an individual who has no desire for upward mobility. I don't work for a single employer anymore, but if I did I'd be planning for what I was going to be doing next, and keeping documentation up to date so that my replacement will have better time of it (just like I'd like from my predecessor at my next job). | {
"source": [
"https://serverfault.com/questions/84685",
"https://serverfault.com",
"https://serverfault.com/users/15681/"
]
} |
84,750 | To monitor HTTP traffic between a server and a web server, I'm currently using tcpdump . This works fine, but I'd like to get rid of some superfluous data in the output (I know about tcpflow and wireshark , but they're not readily available in my environment). From the tcpdump man page: To print all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets. tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' This command sudo tcpdump -A 'src example.com and tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' provides the following output: 19:44:03.529413 IP 192.0.32.10.http > 10.0.1.6.52369: Flags [P.], seq 918827135:918827862, ack 351213824, win 4316, options [nop,nop,TS val 4093273405 ecr 869959372], length 727 E.....@....... ....P..6.0.........D...... __..e=3...__HTTP/1.1 200 OK
Server: Apache/2.2.3 (Red Hat)
Content-Type: text/html; charset=UTF-8
Date: Sat, 14 Nov 2009 18:35:22 GMT
Age: 7149 Content-Length: 438 <HTML>
<HEAD>
<TITLE>Example Web Page</TITLE>
</HEAD>
<body> <p>You have reached this web page ...</p>
</BODY>
</HTML> This is nearly perfect, except for the highlighted part. What is this, and -- more importantly -- how do I get rid of it? | tcpdump prints complete packets. The "garbage" you see is actually the TCP packet headers. You can certainly massage the output with e.g. a Perl script, but why not use tshark, the textual version of wireshark, instead? tshark 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' It takes the same arguments as tcpdump (same library), but since it's an analyzer it can do deep packet inspection, so you can refine your filters even more, e.g. tshark 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
-R'http.request.method == "GET" || http.request.method == "HEAD"' | {
"source": [
"https://serverfault.com/questions/84750",
"https://serverfault.com",
"https://serverfault.com/users/12585/"
]
} |
84,815 | I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name , but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, s3cmd doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data? | The AWS CLI now supports the --query parameter, which takes JMESPath expressions. This means you can sum the size values given by list-objects using sum(Contents[].Size) and count them with length(Contents[]) . This can be run using the official AWS CLI as below; it was introduced in Feb 2014: aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]" | {
"source": [
"https://serverfault.com/questions/84815",
"https://serverfault.com",
"https://serverfault.com/users/11475/"
]
} |
84,821 | I want to proxy requests from an SSL site via a non-SSL site. My Apache httpd.conf looks like this: <VirtualHost 1.2.3.4:80>
ServerName foo.com
ProxyPass / https://bar.com/
</VirtualHost> So, when I visit http://foo.com , I expect apache to make a request to https://bar.com and send me the the page it fetched. Instead, I get a 500 error, and in the error log, I see: [error] proxy: HTTPS: failed to enable ssl support for 4.3.2.1:443 (bar.com) Presumably I'm missing a directive here. Which might it be? Never mind the security implications. I fully understand the risks. | You'll need mod_ssl , mod_proxy and optionally mod_rewrite . Depending on your distribution and Apache version you may have to check if mod_proxy_connect and mod_proxy_http are loaded as well. The directives for enabling SSL proxy support are in mod_ssl: <VirtualHost 1.2.3.4:80>
ServerName foo.com
SSLProxyEngine On
SSLProxyCheckPeerCN on
SSLProxyCheckPeerExpire on
ProxyPass / https://secure.bar.com
ProxyPassReverse / https://secure.bar.com
</VirtualHost> IIRC you can also use: RewriteRule / https://secure.bar.com [P] # don't forget to setup SSLProxy* as well | {
"source": [
"https://serverfault.com/questions/84821",
"https://serverfault.com",
"https://serverfault.com/users/6800/"
]
} |
84,890 | I have installed Debian Lenny, phpMyAdmin and Postfix. When using the phpMyAdmin GUI and accessing any table with data, I get: Can't create/write to file '/tmp/#sql_xxxx.MYI' (Errcode: 13) Running perror 13 says: OS error code 13: Permission denied I find the tmpdir like so: mysqladmin -p variables | grep -w tmpdir
| tmpdir | /tmp Now that means that mysql cannot write to /tmp. Setting the permissions to 777 fixes that, but it doesn't feel right that I have to do that. Is there a better way/fix? Should I change the value of tmpdir in /etc/mysql/my.cnf ? | It looks like your permissions on /tmp are wrong. They really should be read/write/execute for everyone with the sticky bit set. chmod 1777 /tmp The sticky bit adds some restrictions to how other users interact with files not created or owned by them, so there's no reason to worry. If you wish, you may also create a separate directory owned and writable by the mysql user and specify that directory in my.cnf to be used instead of the system-wide /tmp. | {
"source": [
"https://serverfault.com/questions/84890",
"https://serverfault.com",
"https://serverfault.com/users/23929/"
]
} |
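If a dedicated directory is preferred over the shared /tmp, the steps would be roughly as follows; the directory name is arbitrary, and the tmpdir setting belongs under the [mysqld] section: mkdir /var/lib/mysql-tmp
chown mysql:mysql /var/lib/mysql-tmp
chmod 750 /var/lib/mysql-tmp
# then set  tmpdir = /var/lib/mysql-tmp  under [mysqld] in /etc/mysql/my.cnf and restart:
/etc/init.d/mysql restart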
84,963 | I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script... # Establish a clean slate
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F # Flush all rules
iptables -X # Delete all chains
# Disable routing. Drop packets if they reach the end of the chain.
iptables -P FORWARD DROP
# Drop all packets with a bad state
iptables -A INPUT -m state --state INVALID -j DROP
# Accept any packets that have something to do with ones we've sent on outbound
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Accept any packets coming or going on localhost (this can be very important)
iptables -A INPUT -i lo -j ACCEPT
# Accept ICMP
iptables -A INPUT -p icmp -j ACCEPT
# Allow ssh
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Allow httpd
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# Allow SSL
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Block all other traffic
iptables -A INPUT -j DROP For context, this machine is a Virtual Private Server Web app host. In a previous question , Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? If I need to not block ICMP, how could I go about locking it down more? | ICMP is way, way more than "traceroute" and "ping." It is used for feedback when you run a DNS server (port unreachable) which, in a modern DNS server, may actually help select a different machine to query faster. ICMP is also, as was mentioned above, used for path MTU discovery. Chances are your OS sets "DF" (don't fragment) on TCP packets it sends. It is expecting to get an ICMP "fragmentation required" packet back if something along the path fails to handle that size of packet. If you block all ICMP, your machine will have to use other fallback mechanisms, which basically use a timeout to detect a PMTU "black hole" and will never optimize correctly. Additionally, you should ask yourself why you want to block ICMP. What specifically are you attempting to prevent here? It's pretty clear you don't understand what ICMP is used for, which is rather common. I'd be extremely cautious in blocking something you don't fully understand. To make it even harder to learn about this, many common firewall books say "block ICMP" -- it's clear their authors have never read an RFC or had to solve issues surrounding such advice. It's bad advice to block all ICMP. Now, rate limiting it can also hurt. If your machine is busy, or even if it's not, you can get a good amount of ICMP traffic. My web server probably gets about 10-100 ICMP packets per minute, most of which is PMTU discovery. Even if someone chose to attack my server with ICMP packets of some type, it's really not that big of a deal. If your machine accepts even one TCP connection (ssh, http, mail, etc) chances are that's a bigger attack vector than misunderstood ICMP ever will be. | {
"source": [
"https://serverfault.com/questions/84963",
"https://serverfault.com",
"https://serverfault.com/users/11478/"
]
} |
85,027 | I'm having trouble getting an FTP server setup on Windows 7. I've added the service using Control Panel -> Programs -> Turn Windows features on and off. I can see the service has started in Control Panel -> Services. But then when I fire up a Windows command-line window, cmd , I get Not connected. , C:\Users\mattf>ftp localhost
ftp> ls
Not connected.
ftp> open localhost
ftp> ls
Not connected.
ftp> dir
Not connected.
ftp> quit
C:\Users\mattf> And that's as far as I've got. I have no idea why this isn't working - could it be firewall settings? | I just replicated your results. Contrary to Phoebus' comment, it appears you manage Windows 7 FTP sites with the same 7.5 management console as the web services. Also, it appears as if Microsoft does not create an FTP site when the FTP service is created (as was done in the past). After you've installed the FTP Service and IIS Management Console, perform the following steps. Run Administrative Tools | Internet Information Services (IIS) Manager Expand the local machine. Right-click Sites and Add FTP Site. Call it "Default FTP Site" with a path of "C:\inetpub\ftproot"; hit next Enable Start FTP site automatically, select Allow SSL; hit next Enable Anonymous Authentication; hit Finish You should now be able to FTP to localhost. You may choose different options, but the options described above work for me and are very similar to the default options in IIS 6 FTP. Note, you may also need to enable the FTP server in the firewall. For that use the following command. netsh advfirewall firewall set rule group="FTP Server" new enable="yes" | {
"source": [
"https://serverfault.com/questions/85027",
"https://serverfault.com",
"https://serverfault.com/users/19810/"
]
} |
85,078 | Possible Duplicate: How to use DNS to redirect domain to specific port on my server My web application is running on myserver.mydomain:10000
I would like to make it available on the intranet as mywebapp.mydomain. Reading Forward port to another Ip/port , I have looked into rinetd, but I don't fully understand how I can achieve my goal: create a cname alias mywebapp --> myserver on the name server run rinetd on myserver, redirecting port 80 to 10000 ?!? That would redirect all http traffic. I seem to have a gap in my understanding. Can anyone help me ? | If you don't want to create another IP, then all you can do is install a reverse http proxy on the main IP and a name based virtual host to route the traffic using mod_proxy. Here is how you can do it with apache, almost any http server can do it, other popular alternatives are squid, nginx, lighthttpd, etc. Listen IP_ADDR:80
NameVirtualHost IP_ADDR:80
<VirtualHost IP_ADDR:80>
ServerName yourname.yourdomain
ProxyPass / http://localhost:10000/
ProxyPassReverse / http://localhost:10000/
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/85078",
"https://serverfault.com",
"https://serverfault.com/users/26210/"
]
} |
85,083 | My 'Download Master' client tries to connect to this port on my new Asus router.
Without succes of course, because it does not seem to be running. Any idea? Thanks | If you don't want to create another IP, then all you can do is install a reverse http proxy on the main IP and a name based virtual host to route the traffic using mod_proxy. Here is how you can do it with apache, almost any http server can do it, other popular alternatives are squid, nginx, lighthttpd, etc. Listen IP_ADDR:80
NameVirtualHost IP_ADDR:80
<VirtualHost IP_ADDR:80>
ServerName yourname.yourdomain
ProxyPass / http://localhost:10000/
ProxyPassReverse / http://localhost:10000/
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/85083",
"https://serverfault.com",
"https://serverfault.com/users/26212/"
]
} |
85,087 | We have ~50 machines which connect via a 3G modem from Option (either the Globesurfer Icon , Icon 401 or Icon 7.2 ) to the network, for some reason from the telco they will be dropped (signal issue, tower, butterfly flapping it's wings - the telco is not much help here). After the drop, the machines fail to reconnect. The error message that comes up is Cannot load phonebook. Error 1722 RPC is unavailable and checking the event log the following issue us listed there: Event Type: Error Event Source: MSDTC Client Event Category: (10) Event ID: 4427 Date: 2009/11/12 Time: 02:31:02 PM User: N/A Computer: TERMINAL Description: Failed to initialize the needed name objects. Error Specifics: d:\xpsp\com\com1x\dtc\dtc\msdtcprx\src\dtcinit.cpp:215,
Pid: 3500 No Callstack,
CmdLine: C:\WINDOWS\system32\dllhost.exe /Processid:{02D4B3F1-FD88-11D1-960D-00805FC79235} Data: 0000: 05 40 00 80 .@. The same issue will appear in the event log occur when trying to access the COM+ snap-in in the control panel. The solution is to reinstall MSDTC by doing the following: net stop msdtc msdtc -uninstall Delete the msdtc registry key msdtc -resetlogs msdtc -install net start msdtc This is on Windows XP Embedded SP 3. What I am trying to find is the cause of the corruption of msdtc, but I am not sure where to start. Updates (17/11) The solution above, which reinstalls MSDTC, works in so that MSDTC is no longer corrupted and the machines can reconnect - however it does not correct the reconnection issue permanently. The machines can reconnect for a while (yet to determine how long or what changes) and then will fail - however without the MSDTC corruption this time. (18/11) Testing the machines with a network connection, there issue never occurs. It would seem to indicate that the cause is something in the 3G modem. (19/11) Tried upgrading the drivers to the latest versions with no change. Was also recommended to change the MTU to 1354, which has also not helped. | If you don't want to create another IP, then all you can do is install a reverse http proxy on the main IP and a name based virtual host to route the traffic using mod_proxy. Here is how you can do it with apache, almost any http server can do it, other popular alternatives are squid, nginx, lighthttpd, etc. Listen IP_ADDR:80
NameVirtualHost IP_ADDR:80
<VirtualHost IP_ADDR:80>
ServerName yourname.yourdomain
ProxyPass / http://localhost:10000/
ProxyPassReverse / http://localhost:10000/
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/85087",
"https://serverfault.com",
"https://serverfault.com/users/103/"
]
} |
85,470 | Why does my server show total used free shared buffers cached
Mem: 12286456 11715372 571084 0 81912 6545228
-/+ buffers/cache: 5088232 7198224
Swap: 24571408 54528 24516880 I have no idea on calculating the memory in linux. I think it says that 5088232 is used where as 7198224 is free, meaning it is actually consuming 5GB of RAM? | Meaning of the values The first line means: total : Your total (physical) RAM (excluding a small bit that the kernel permanently reserves for itself at startup); that's why it shows ca. 11.7 GiB , and not 12 GiB, which you probably have. used : memory in use by the OS. free : memory not in use. shared / buffers / cached : This shows memory usage for specific purposes, these values are included in the value for used . The second line gives first line values adjusted. It gives the original value for used minus the sum buffers+cached and the original value for free plus the sum buffers+cached , hence its title. These new values are often more meaningful than those of first line. The last line ( Swap: ) gives information about swap space usage (i.e. memory contents that have been temporarily moved to disk). Background To actually understand what the numbers mean, you need a bit of background about the virtual memory (VM) subsystem in Linux. Just a short version: Linux (like most modern OS) will always try to use free RAM for caching stuff, so Mem: free will almost always be very low. Therefore the line -/+ buffers/cache: is shown, because it shows how much memory is free when ignoring caches; caches will be freed automatically if memory gets scarce, so they do not really matter. A Linux system is really low on memory if the free value in -/+ buffers/cache: line gets low. For more details about the meaning of the numbers, see e.g. the questions: In Linux, what is the difference between "buffers" and "cache" reported by the free command? Why does Red Hat Linux report less free memory on the system than is actually available? Changes in procps 3.3.10 Note that the output of free was changed in procps 3.3.10 (released in 2014). The columns reported are now "total", "used", "free", "shared", "buff/cache", "available" , and the meanings of some of the values changed, mainly to better account for the Linux kernel's slab cache. See Debian Bug report #565518 for the motivation, and What do the changes in free output from 14.04 to 16.04 mean? for more details information. | {
"source": [
"https://serverfault.com/questions/85470",
"https://serverfault.com",
"https://serverfault.com/users/26336/"
]
} |
85,602 | Quick question, but Googling has not revealed an answer. When I do iptables -L , it seems to lag on displaying items where I have limited the source to internal IPs 192.168.0.0/24. The whole listing takes about 30 seconds to display. I just want to know: Does this affect the speed of my incoming connections, or is this simply a side effect of having all these ranges within my iptables rules? Thanks! | Include the -n option so it doesn't try to use DNS to resolve names for every IP address, network and port. Then it will be fast. | {
"source": [
"https://serverfault.com/questions/85602",
"https://serverfault.com",
"https://serverfault.com/users/23914/"
]
} |
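As a quick illustration of the -n suggestion in the answer above (the chains and rules shown will of course differ per host), the listing can be run as:
iptables -L -n
iptables -L -n -v --line-numbers
The -n flag suppresses the reverse DNS and port-name lookups that cause the delay; -v and --line-numbers are optional and merely add packet counters and rule positions to the output.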
85,893 | (I have already read How can I test a new cron script ? .) I have a specific problem (cron job doesn't appear to run, or run properly), but the issue is general: I'd like to debug scripts that are cronned. I am aware that I can set up a * * * * * crontab line, but that is not a fully satisfactory solution. I would like to be able to run a cron job from the command line as if cron were running it (same user, same environment variables, etc.). Is there a way to do this? Having to wait 60 seconds to test script changes is not practical. | Here's what I did, and it seems to work in this situation. At least, it shows me an error, whereas running from the command line as the user doesn't show the error. Step 1 : I put this line temporarily in the user's crontab: * * * * * /usr/bin/env > /home/username/tmp/cron-env then took it out once the file was written. Step 2 : Made myself a little run-as-cron bash script containing: #!/bin/bash
/usr/bin/env -i $(cat /home/username/tmp/cron-env) "$@" So then, as the user in question, I was able to run-as-cron /the/problematic/script --with arguments --and parameters This solution could obviously be expanded to make use of sudo or such for more flexibility. Hope this helps others. | {
"source": [
"https://serverfault.com/questions/85893",
"https://serverfault.com",
"https://serverfault.com/users/4153/"
]
} |
85,992 | I have a Mac OS X machine (Mac mini running 10.5) with Remote Login enabled. I want to open the sshd port to the Internet to be able to login remotely. For security reasons I want to disable remote logins using passwords, allowing only users with a valid public key to login. What is the best way to set this up in Mac OS X? | After a little trial and error, I found the answer myself. These options need to be set in /etc/sshd_config : PasswordAuthentication no
ChallengeResponseAuthentication no Only changing one of them is not enough. | {
"source": [
"https://serverfault.com/questions/85992",
"https://serverfault.com",
"https://serverfault.com/users/3208/"
]
} |
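A small, optional check that the change took effect, assuming sshd has been restarted after editing the file; the user and host names here are placeholders:
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no user@example.com
With PasswordAuthentication and ChallengeResponseAuthentication both disabled as above, this attempt should fail with a permission-denied error instead of prompting for a password, while a normal key-based login continues to work.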
86,048 | What are the critical files I need to backup from GPG? I guess my private key would qualify of course, but what else? | The most critical are your secret/private keys: gpg --export-secret-keys > secret-backup.gpg secret-backup.gpg is then the file to keep safe. Otherwise the ~/.gnupg/ directory contain all private and public keys(secring.gpg and pubring.gpg respectively) as well as configuration and trustdb which could be convenient to have stored. | {
"source": [
"https://serverfault.com/questions/86048",
"https://serverfault.com",
"https://serverfault.com/users/21938/"
]
} |
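For completeness, a sketch of restoring such a backup on another machine, using the same filename as in the answer above:
gpg --import secret-backup.gpg
gpg --list-secret-keys
The first command imports the exported secret keys back into the keyring and the second confirms they are present. If you did not also copy the trustdb, you may want to re-create your ownertrust, for example by exporting it on the old machine with gpg --export-ownertrust and importing it with gpg --import-ownertrust.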
86,049 | I am working on software and configuration for a device that uses Linux. I have been asked to provide an administrative interface that requires no knowledge of Linux to use and that has restricted capabilities. I believe that the best way to handle this will be to use a console login (either through ssh or with a serial port) to present a menu of options. Let's say I have a program called admin_menu that handles all my desired tasks. So, I should be able to create a line in /etc/password like so: admin:x:230:235:Administrative Interface:/home/admin:/local/sbin/admin_menu Are there any special considerations I need to be aware of when I create my admin_menu program? Direct answers are good, but pointers to good docs are even better. General areas that I want more information on are: What environment variables can I expect to be set? Which ones will I need to set myself? Are there any special considerations when spawning child processes? What happens upon termination of my menu process? Do I use STDIO for interaction with the console or some other interface? | The most critical are your secret/private keys: gpg --export-secret-keys > secret-backup.gpg secret-backup.gpg is then the file to keep safe. Otherwise the ~/.gnupg/ directory contain all private and public keys(secring.gpg and pubring.gpg respectively) as well as configuration and trustdb which could be convenient to have stored. | {
"source": [
"https://serverfault.com/questions/86049",
"https://serverfault.com",
"https://serverfault.com/users/24453/"
]
} |
86,478 | Say I have a directory of files at /home/user1/dir1 and I want to create a tar with only "dir1" as the leading directory: /dir1/file1
/dir1/file2 I know I can first cd to the directory cd /home/user1/
tar czvf dir1.tar.gz dir1 But when writing scripts, jumping from directory to directory isn't always favorable. I am wondering is there a way to do it with absolute paths without changing current directories? I know I can always create a tar file with absolute paths INSIDE and use --strip-components when extracting but sometimes extra path names are extra private information that you don't want to distribute with your tar files. Thanks! | tar -C changes directory tar -C /home/user1/ -cvzf dir1.tar.gz dir1 btw, handy one for keeping track of changing directories... use pushd and popd. pushd .
cd /home/user1
tar cvfz dir1.tar.gz dir1
popd | {
"source": [
"https://serverfault.com/questions/86478",
"https://serverfault.com",
"https://serverfault.com/users/26699/"
]
} |
86,532 | I have an ASP.NET application that I am trying to convert to an ASP.NET 4 application. The application is fairly simple. I have created a new web application in IIS 7.5 pointing to the directory that the ASP.NET application exists in. When I attempt to execute the application, but entering http://localhost:[port] into my browser, I receive the following error: Error Summary HTTP Error 500.21 - Internal Server Error
Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list Most likely causes: Managed handler is used; however, ASP.NET is not installed or is not installed completely. There is a typographical error in the configuration for the handler module list. | I have the same problem when try publishing SL App using VS2010 although there is no prob before with .NET 3.5SP1 and VS2008. So try run this ( %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i ) as described here forums.iis.net/t/1149449.aspx and here www.gotknowhow.com/articles/fix-bad-module-managedpipelinehandler-in-iis7 and It works now. So the problem is ASp>NET 4.0 has not properly installed, huuu... :) | {
"source": [
"https://serverfault.com/questions/86532",
"https://serverfault.com",
"https://serverfault.com/users/26712/"
]
} |
86,674 | How does a site like rambler serve dynamic content so fast? Even faster than Yahoo (which has a server in my country- SE Asia; rambler does not). Is this purely Nginx’s capability? Where should I be looking into to learn about such capabilities? Pretty much a newbie here, I believe that serverfault.com if served from Nginx will be much faster the IIS 7 (assuming db access time to be same in both the case). Is this a fair assumption? Edit: Post from Karl using Nginx in front of IIS7 | You may see this presentation for an overview of nginx internals. The main difference is asynchronous handling of requests instead of using threads as Apache does. You may have a look at this documentation as well. | {
"source": [
"https://serverfault.com/questions/86674",
"https://serverfault.com",
"https://serverfault.com/users/26763/"
]
} |
86,747 | I'm trying to block various IP addresses from every site that I have hosted from an server running Windows 2008 and IIS7. I've found various information about how to do this using Deny rules from "IPv4 Address and Domain Name Deny Rules (IIS 7)" in the Features View of the IIS7 manager ( http://technet.microsoft.com/en-us/library/cc733090(WS.10).aspx ), but I don't have any icon that reads like that. How do I get that UI in my IIS manager? | OK, so it turns out, the Role has to be added. I went to Server Manager > Roles > Add Role Services. Under Security node in the Role Services tree there is an option for IP and Domain Restrictions. Checking that installed the role services and now my IIS Manager has an icon for "IPv4 Address and Domain Restrictions". Feels like this should be installed by default. ***Note: the installer warns about a restart but I was not prompted for one after the install completed. My sites all stayed up during the install as well. | {
"source": [
"https://serverfault.com/questions/86747",
"https://serverfault.com",
"https://serverfault.com/users/26793/"
]
} |
87,035 | I would like to run an Oracle script through SQL Plus via a Windows command prompt. The script does not contain an "exit" command, but I would still like SQL Plus to exit, returning control to the command prompt on completion of the script. My goal is to do this without modifying the script. Is this possible? | Another way is to use this command in the batch file: echo exit | sqlplus user/pass@connect @scriptname | {
"source": [
"https://serverfault.com/questions/87035",
"https://serverfault.com",
"https://serverfault.com/users/7685/"
]
} |
87,472 | There are fields on my server's control panel like this Minute - Hour - Day of month - Month - Day of the week - Command How can I create a cron job runs on first day of the month with this fields? | This will run the command foo at 12:00AM on the first of every month 0 0 1 * * /usr/bin/foo This article describes the various fields, look to the bottom of the page: http://en.wikipedia.org/wiki/Cron To add this to your cron file, just use the command crontab -e | {
"source": [
"https://serverfault.com/questions/87472",
"https://serverfault.com",
"https://serverfault.com/users/27113/"
]
} |
87,774 | mount shows mount devices like: /dev/mapper/VolGroup01-LogVol00 on /var type ext3 (rw) or /dev/mapper/VolGrp_backups-backups on /mnt/backups type ext3 (rw) but iostat uses dm- notation. like dm-0 , dm-1 and so on. Where can I find a way to know which is which? | ls -l /dev/mapper/* , the device minor number (field 6 of what ls -l outputs) corresponds to the number in dm-\d+ . | {
"source": [
"https://serverfault.com/questions/87774",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
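As an illustration of that mapping, with hypothetical output (the timestamps and major/minor numbers are made up), fields 5 and 6 of ls -l are the device's major and minor numbers, and the minor is the N in dm-N:
ls -l /dev/mapper/VolGroup01-LogVol00
brw-rw---- 1 root disk 253, 0 Nov 30 10:15 /dev/mapper/VolGroup01-LogVol00
Here the minor number is 0, so this volume corresponds to dm-0 in iostat output. Running dmsetup ls should also list each device-mapper name together with its (major, minor) pair, which can be used as a cross-check.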
87,927 | Looking at the Ethernet entry on Wikipedia, I can't figure out how it's indicated how long the Ethernet frame is. The EtherType/Length header field apparently can indicate either a frame type or an explicit length, and I'm guessing that in the case of a frame type, it has to do some other logic to figure out how long the packet is. For example, if the EtherType field is 0x0800, that indicates an IPv4 payload, and so the receiving NIC would have to examine the first 32 bits of the payload to find the length of the IP packet, and therefore to figure out the total length of the Ethernet frame, and know when to look for the end-of-frame checksum and interframe gap. Does this sound correct? I also looked at the IEEE 802.3 spec for Ethernet (part 1, anyway) which seems to corroborate this, but it's pretty opaque. | The Physical Coding Sublayer is responsible for delimiting the frames, and sending them up to the MAC layer. In Gigabit Ethernet, for example, the 8B/10B encoding scheme uses a 10 bit codegroup to encode an 8-bit byte. The extra two bits tell whether a byte is control information or data. Control information can be Configuration, Start_of_packet, End_of_packet, IDLE, Carrier_extend, Error_propagation. That is how a NIC knows where a frame start and ends. This also means that the length of the frame is not known before it has fully decoded, analogous to a NULL-terminated string in C. | {
"source": [
"https://serverfault.com/questions/87927",
"https://serverfault.com",
"https://serverfault.com/users/10376/"
]
} |
87,933 | To compile something, I needed the zlib1g-dev package to be installed so I launched an apt-get install zlib1g-dev . apt-get informed me nicely that the package was already auto-installed because of an other package, and that it understands that I want it installed explicitly now : # apt-get install zlib1g-dev
zlib1g-dev is already the newest version.
zlib1g-dev set to manually installed. My compilation done, I don't need it any more explicitly, so I want to revert its status to the previous one : auto-installed. This way it will be pruned automatically when it will not be needed any more with a simple apt-get autoremove . I cannot do an apt-get remove zlib1g-dev since some packages still depends on it. So how may I revert the package zlib1g-dev installation state to auto-installed ? I know that I might edit /var/lib/apt/extended_states by hand from Package: zlib1g-dev
Auto-Installed: 0 to Package: zlib1g-dev
Auto-Installed: 1 ... but it just doesn't feel right. | Aptitude can help you when you initially install the package: aptitude install "zlib1g-dev&M" Or, after you have installed the package: aptitude markauto "zlib1g-dev" Edit: If you do not have aptitude, you can use apt-mark auto zlib1g-dev | {
"source": [
"https://serverfault.com/questions/87933",
"https://serverfault.com",
"https://serverfault.com/users/1813/"
]
} |
88,000 | How do I tell if apache is running (or configured to run) as prefork or worker? | The MPM is configured at compile time. One way to figure it out afterwards is to list the compiled-in modules; that list will include the chosen MPM. The listing can be accomplished by running the apache binary with the -l flag. andreas@halleck:~$ apache2 -l
Compiled in modules:
core.c
mod_log_config.c
mod_logio.c
worker.c
http_core.c
mod_so.c
andreas@halleck:~$ Here we find the module worker.c, hence I'm running the worker MPM. | {
"source": [
"https://serverfault.com/questions/88000",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
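Where the binary supports it, the -V flag is a possible alternative that names the MPM directly rather than leaving you to spot it in the module list (the binary may be called apache2, httpd or apachectl depending on the distribution):
apache2 -V | grep -i mpm
httpd -V | grep -i mpm
On most builds this prints a line such as Server MPM: Worker or Server MPM: Prefork.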
88,001 | I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (Apart from a warning about Windows Firewall and opening ports which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error: Could not continue scan with NOLOCK due to data movement. The installation still runs to completion when I press ok. However, at the end, it states that the following services "failed": database engine services sql server replication full-text search reporting services How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc) is missing? I know from my programming experience that locks are for concurrency control and the Microsoft help on this issue points to changing my query's lock/transactions in a certain way to fix the issue. But I am not touching any queries? Also, now that I have installed the app, when I login, I keep getting this message: TITLE: Connect to Server
------------------------------
Cannot connect to MSSQLSERVER.
------------------------------
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
------------------------------
BUTTONS:
OK
------------------------------ I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors. I think these two errors are related. Thanks | The MPM is configured at compile time. One way to figure it out afterwards is to list compiled in modules. That list will include the chosen MPM. The listing can be accomplished running the apache binary, with the -l flag. andreas@halleck:~$ apache2 -l
Compiled in modules:
core.c
mod_log_config.c
mod_logio.c
worker.c
http_core.c
mod_so.c
andreas@halleck:~$ Here we find the module worker.c, hence I'm running the worker MPM. | {
"source": [
"https://serverfault.com/questions/88001",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
88,064 | My LAN has 50 Windows hosts. At the Windows command line I try
ping to get the IP address of a running Windows machine. The question is how to get hostname of a specific IP address in the same Windows workgroup? Another question is how to know the hostname of Windows machine from a Linux box if I have an IP address? Which command do you use? I have one host running Kubuntu 9.04. | If you want to determine the name of a Windows machine without DNS, you should try Nbtstat . But that will only work on Windows: For example, NBTSTAT -A 10.10.10.10 On Linux, you should try nmblookup that does nearly the same. | {
"source": [
"https://serverfault.com/questions/88064",
"https://serverfault.com",
"https://serverfault.com/users/9401/"
]
} |
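A short sketch of the two lookups mentioned in the answer, with a placeholder address; nmblookup ships with the Samba client tools, so it may need to be installed first:
NBTSTAT -A 192.168.0.10
nmblookup -A 192.168.0.10
Both issue a NetBIOS node-status query against the given IP and list the machine's NetBIOS names, which normally include the Windows hostname and its workgroup.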
88,510 | I would like to be able to view the scripts/triggers associated with a package due for upgrade so that I can tell, for example, whether it will result in the web server being restarted. I can't find an aptitude option to show me that (or apt/dpkg); the best I can get is the contents (files). Is there some combination of simulate/verbose/contents switches that I have missed that will show this? Additionally, if a package results in something happening - like a service restart - that I don't want to happen right now, is there a way to install the package without running some or all of the scripts? | You can print the control file and some other information with dpkg -I package.deb , or use dpkg -e package.deb to extract only the control information files. Also, you can do a dry run to see what dpkg would do with --dry-run : dpkg --dry-run -i package.deb | {
"source": [
"https://serverfault.com/questions/88510",
"https://serverfault.com",
"https://serverfault.com/users/3654/"
]
} |
88,830 | We are running virtual servers on our Windows server. I noticed that one of the servers won't connect, and when connected through the virtual machine interface we found that the server is up and running and we can access the network/internet from within the server, but no outsider can connect to it. We removed the virtual network interface and added a new one (which generates a new MAC address for the virtual network interface) and then the server was accessible. The same problem occurred both in VMware and in Hyper-V, not at the same time but with a gap of 3-4 days. I want to know: is it possible that two network interfaces on the same LAN with the same MAC address but different IPs can create a problem? | Hell yes, an unreservedly bad idea - they NEED to be unique. | {
"source": [
"https://serverfault.com/questions/88830",
"https://serverfault.com",
"https://serverfault.com/users/13630/"
]
} |
89,114 | I can find my IP address using ifconfig or hostname -i command. But how do I find my Public IP? (I have a static public IP but I want to find it out using unix command) | curl ifconfig.me curl ifconfig.me/ip (for just the ip) curl ifconfig.me/all (for more info, takes time) For more commands visit: http://ifconfig.me/#cli_wrap | {
"source": [
"https://serverfault.com/questions/89114",
"https://serverfault.com",
"https://serverfault.com/users/16842/"
]
} |
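If curl is not available, one commonly used DNS-based alternative (not part of the answer above, just another well-known option) is to ask OpenDNS which address your query appears to come from:
dig +short myip.opendns.com @resolver1.opendns.com
This needs the dig utility (packaged as dnsutils or bind-utils) and outbound DNS access, but it avoids relying on an HTTP service.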
89,399 | I've set up SSL for my domain and it works from Apache perspective. The problem is that accessing my domain over HTTPS sometimes results in timeouts. When it doesn't work, it takes some time to access my website over HTTP but it never times out. Why does this happen for HTTPS and is there a way to control timeout time for HTTPS? My configuration: Apache 2.2.11 on CentOS 5 NameVirtualHost *:443
<VirtualHost *:443>
SuexecUserGroup foo
DocumentRoot /home/mydomain/www/
ServerName example.com
SSLEngine on
SSLProtocol -all +TLSv1 +SSLv3
SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
SSLCertificateFile /path/example.com.com.crt
SSLCertificateKeyFile /path/example.com.key
SSLVerifyClient none
SSLProxyVerify none
SSLVerifyDepth 0
SSLProxyVerifyDepth 0
SSLProxyEngine off
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
<Directory "/home/mydomain/www">
SSLRequireSSL
AllowOverride all
Options +FollowSymLinks +ExecCGI -Indexes
AddHandler php5-fastcgi .php
Action php5-fastcgi /cgi-bin/a.fcgi
Order allow,deny
Allow from all
</Directory>
<Directory "/var/suexec/mydomain.com">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>
</VirtualHost> EDIT It's a self-signed certificate. When visiting my domain works, it results in a SSL warning saying that the certificate is not trusted but accepting it lets me see the website over HTTPS. | I found the cause of this problem. Port 443 was closed in my firewall configuration. It worked sometimes because my IP was added to firewall as a safe one. That's why it did not work for other IPs. All I had to do is open port 443 in firewall and it works just fine :) | {
"source": [
"https://serverfault.com/questions/89399",
"https://serverfault.com",
"https://serverfault.com/users/10705/"
]
} |
89,654 | From the shell and without root privileges, how can I determine what Red Hat Enterprise Linux version I'm running? Ideally, I'd like to get both the major and minor release version, for example RHEL 4.0 or RHEL 5.1, etc. | You can use the lsb_release command on various Linux distributions: lsb_release -i -r This will tell you the Distribution and Version and is a little bit more accurate than accessing files that may or may not have been modified by the admin or a software package. As well as working across multiple distros. For RHEL, you should use: cat /etc/redhat-release | {
"source": [
"https://serverfault.com/questions/89654",
"https://serverfault.com",
"https://serverfault.com/users/13433/"
]
} |
89,655 | I have a PC and two embedded Linux-based systems that I am trying to chain together through IP forwarding.
They span two separate local networks. Both networks are wired. I can ping 10.10.10.6 from the PC and ping 10.10.30.1 from target 1.
The problem occurs when I run an application on target 1 that generates a healthy (< 3 MBps) amount of UDP traffic, directed toward the PC.
The performance of the system can not keep up and I see the RX dropped packet count continuously increase on target 0's 10.10.10.4 interface. There is plenty of CPU cycles on both targets. So the question is, why is target 0 dropping so many packets? Is there something I can do with the configuration to get the through put up? Increase buffer sizes? +-----------+ +--------------------------------------+ +-------------------+
| PC |-->| target 0 |-->| target 1 |
|10.10.30.1 | | eth1 =10.10.30.2 eth0 =10.10.10.4 | | eth0 = 10.10.10.6 |
+-----------+ +--------------------------------------+ +-------------------+ target 0 has two ethernet interfaces and ip_forwarding = 1 And I have the following route's setup:
on the PC: Active Routes:
Network Destination Netmask Gateway Interface Metric
10.10.10.6 255.255.255.255 10.10.30.2 10.10.30.1 1
10.10.30.0 255.255.255.0 10.10.30.1 10.10.30.1 20 and on target 0: Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.30.0 * 255.255.255.0 U 0 0 0 eth1
10.10.10.0 * 255.255.255.0 U 0 0 0 eth0
default 10.10.10.6 0.0.0.0 UG 0 0 0 eth0
default 10.10.30.1 0.0.0.0 UG 0 0 0 eth1 and on target 1: Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.30.1 10.10.10.4 255.255.255.255 UGH 0 0 0 eth0
10.10.10.0 * 255.255.255.0 U 0 0 0 eth0
default 10.10.10.4 0.0.0.0 UG 0 0 0 eth0 Edit:
target 0 # ip link list | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 target 1 # ip link list | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 Both connections have been confirmed to be set at 100Mbps Full duplex. Another thought: As it appears the input to target 0 is getting flooded, can the switch between the two targets, buffer the packets and then flood target 0 with a big burst of data? Update: By replacing the 10/100/1000 switch with an old 10M half duplex hub the system is working fine, no dropped packets. In the bad condition, ethtool register dump indicated RX_DROP count continuously increasing. This is indicating the Processor/Linux is not able to remove the data quickly enough from the ethernet chip. With the 10M hub this is no longer a problem. So this sounds to me like Linux is not configured correctly to keep up the higher(but not that high) data rate. I would like to understand why the OS/processor can't keep up? | You can use the lsb_release command on various Linux distributions: lsb_release -i -r This will tell you the Distribution and Version and is a little bit more accurate than accessing files that may or may not have been modified by the admin or a software package. As well as working across multiple distros. For RHEL, you should use: cat /etc/redhat-release | {
"source": [
"https://serverfault.com/questions/89655",
"https://serverfault.com",
"https://serverfault.com/users/27670/"
]
} |
89,681 | My ASP.NET MVC application on my development box is running wild; I can't even connected to localhost. In order to know what is the problem, I want to find the log file and examine it. Where is the location of the ASP.NET Server log file? I couldn't find it in event viewer, so I don't know where else to look | ASP.NET uses IIS logging, so it's really an IIS question. Though there is some detailed info in Event Viewer for some types of events. In IIS6 (and prior), this is located in %SystemRoot%\system32\logfiles , and in IIS7, this is located in %SystemDrive%\inetpub\logs\LogFiles . In both cases, it will be placed in a subfolder called W3SVC{Id} . The Id is the site Id. You can find it by clicking on "Web Sites" in IIS Manager and the site ID will show in that view. | {
"source": [
"https://serverfault.com/questions/89681",
"https://serverfault.com",
"https://serverfault.com/users/1605/"
]
} |
89,773 | I have an nginx log file, and I want to find out the market share of each major browser version. I am not interested in minor versions and operating systems. I would like to get something like this: 100 IE6
99 IE7
20 IE8
200 FF2
300 FF3 I know how to get the list of user agents from the file, but I want to aggregate the list to see only the major versions of the browsers. Is there a tool that does it? | awk -F'"' '/GET/ {print $6}' /var/log/nginx-access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn awk(1) - selecting full User-Agent string of GET requests cut(1) - using first word from it sort(1) - sorting uniq(1) - count sort(1) - sorting by count, reversed PS. Of course it can be replaced by one awk / sed / perl / python /etc script. I just wanted to show how rich unix-way is. | {
"source": [
"https://serverfault.com/questions/89773",
"https://serverfault.com",
"https://serverfault.com/users/175/"
]
} |
89,955 | I've installed a stock mysql 5.5 installation, and while I can connect to the mysql service via the mysql command, and the service seems to be running, I cannot connect to it through spring+tomcat or from an external jdbc connector. I'm using the following URL: jdbc:mysql://myserver.com:myport/mydb with proper username/password, but I receive the following message: server.com: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. the driver has not received any packets from the server. and tomcat throws: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) Which seems to be the same issue as if I try to connect externally. | This can happen for a variety of reasons. I just saw it myself a few weeks ago but I can't remember what the fix was for me. 1) Verify the address mysql is bound to, it's probably 127.0.0.1 (only) which I believe is the default (at least on standard Ubuntu server). You'll have to comment out the bind-address
parameter in my.cnf to bind to all available addresses (you can't choose multiple, it's one or all). 2) If it is bound to 127.0.0.1 and you can't connect using "localhost", make sure it's not resolving to the IPv6 localhost address instead of IPv4. (or just use the IP address) 3) Double and triple-check the port that mysql is listening on. 4) Make sure you're using the right JDBC connector for your JDK. 5) Make sure you're not doing something really silly like starting mysql with --skip-networking. I think my first suggestion has the most promise...in fact I think that's where I saw it recently...I was trying to connect to mysql remotely (also on Ubuntu 8.04). | {
"source": [
"https://serverfault.com/questions/89955",
"https://serverfault.com",
"https://serverfault.com/users/11669/"
]
} |
90,400 | How to find all Debian managed configuration files which have been changed from the default? | To find all Debian managed configuration files which have been changed from the default you can use a command like this. dpkg-query -W -f='${Conffiles}\n' '*' | awk 'OFS=" "{print $2,$1}' | md5sum -c 2>/dev/null | awk -F': ' '$2 !~ /OK/{print $1}' Edit (works with localized systems): dpkg-query -W -f='${Conffiles}\n' '*' | awk 'OFS=" "{print $2,$1}' | LANG=C md5sum -c 2>/dev/null | awk -F': ' '$2 !~ /OK/{print $1}' | sort | less Edit (works with packages with OK in the filename): dpkg-query -W -f='${Conffiles}\n' '*' | awk 'OFS=" "{print $2,$1}' | LANG=C md5sum -c 2>/dev/null | awk -F': ' '$2 !~ /OK$/{print $1}' | sort | less | {
"source": [
"https://serverfault.com/questions/90400",
"https://serverfault.com",
"https://serverfault.com/users/27905/"
]
} |
90,725 | I was reading through some of the notes on Google's new public DNS service : Performance Benefits Security Benefits I noticed under the security section this paragraph: Until a standard system-wide solution to DNS vulnerabilities is universally implemented, such as the DNSSEC2 protocol, open DNS resolvers need to independently take some measures to mitigate against known threats. Many techniques have been proposed; see IETF RFC 4542: Measures for making DNS more resilient against forged answers for an overview of most of them. In Google Public DNS, we have implemented, and we recommend, the following approaches: Overprovisioning machine resources to protect against direct DoS attacks on the resolvers themselves. Since IP addresses are trivial for attackers to forge, it's impossible to block queries based on IP address or subnet; the only effective way to handle such attacks is to simply absorb the load. That is a depressing realization; even on Stack Overflow / Server Fault / Super User, we frequently use IP addresses as the basis for bans and blocks of all kinds. To think that a "talented" attacker could trivially use whatever IP address they want, and synthesize as many unique fake IP addresses as they want, is really scary! So my question(s): Is it really that easy for an attacker to forge an IP address in the wild? If so, what mitigations are possible? | As stated by many others, IP headers are trivial to forge, as long as one doesn't care about receiving a response. This is why it is mostly seen with UDP, as TCP requires a 3-way handshake. One notable exception is the SYN flood , which uses TCP and attempts to tie up resources on a receiving host; again, as the replies are discarded, the source address does not matter. A particularly nasty side-effect of the ability of attackers to spoof source addresses is a backscatter attack. There is an excellent description here , but briefly, it is the inverse of a traditional DDoS attack: Gain control of a botnet. Configure all your nodes to use the same source IP address for malicious packets. This IP address will be your eventual victim. Send packets from all of your controlled nodes to various addresses across the internet, targeting ports that generally are not open, or connecting to valid ports (TCP/80) claiming to be part of an already existing transaction. In either of the cases mentioned in (3), many hosts will respond with an ICMP unreachable or a TCP reset, targeted at the source address of the malicious packet . The attacker now has potentially thousands of uncompromised machines on the network performing a DDoS attack on his/her chosen victim, all through the use of a spoofed source IP address. In terms of mitigation, this risk is really one that only ISPs (and particularly ISPs providing customer access, rather than transit) can address. There are two main methods of doing this: Ingress filtering - ensuring packets coming in to your network are sourced from address ranges that live on the far side of the incoming interface. Many router vendors implement features such as unicast reverse path forwarding , which use the router's routing and forwarding tables to verify that the next hop of the source address of an incoming packet is the incoming interface. This is best performed at the first layer 3 hop in the network (i.e. your default gateway.) Egress filtering - ensuring that packets leaving your network only source from address ranges you own. 
This is the natural complement to ingress filtering, and is essentially part of being a 'good neighbor'; ensuring that even if your network is compromised by malicious traffic, that traffic is not forwarded to networks you peer with. Both of these techniques are most effective and easily implemented when done so in 'edge' or 'access' networks, where clients interface with the provider. Implementing ingress/egress filtering above the access layer becomes more difficult, due to the complexities of multiple paths and asymmetric routing. I have seen these techniques (particularly ingress filtering) used to great effect within an enterprise network. Perhaps someone with more service provider experience can give more insight into the challenges of deploying ingress/egress filtering on the internet at large. I imagine hardware/firmware support to be a big challenge, as well as being unable to force upstream providers in other countries to implement similar policies... | {
"source": [
"https://serverfault.com/questions/90725",
"https://serverfault.com",
"https://serverfault.com/users/1/"
]
} |
90,737 | Apparently it's a URL shortener. It resolves just fine in Chrome and Firefox. How is this a valid top-level domain? Update: for the people saying it's browser shenanigans, why is it that: http://com./ does not take me to: http://www.com/ ? And, do browsers ever send you a response from some place other than what's actually up in the address bar? Aside from framesets and things like that, I thought browsers tried really hard to send you content only from the site in the address bar, to help guard against phishing. | Basically, someone has managed to convince the owners of the ccTLD 'to.' (Tonga?) to assign the A record to their own IP address. Quite a coup in the strange old world of URL shorteners. Normally these top-levels would not have IP addresses assigned via a standard A record, but there is nothing to say that the same could not be done to .uk, .com, .eu, etc. Strictly speaking there is no reason to have the '.' specified, though it should prevent your browser from trying other combinations like 'to.yourdomain.com' first, and speed up the resolution of the address. It might also confuse browsers, as there is no dot, but Safari at least seems to work ok with it. | {
"source": [
"https://serverfault.com/questions/90737",
"https://serverfault.com",
"https://serverfault.com/users/8806/"
]
} |
90,908 | What situations are you thinking are good candidates for a service like this? I have been concerned about our ISP's DNS - they are redirecting to advertising pages, and showing other signs of questionable integrity. I was considering OpenDNS - but wasn't feeling that they were going to be much better - and heard mixed things about them. Our operation is quite small, so I don't want anything too complicated. And I certainly don't want a bunch of extra headaches. | Looks like Google will be a good fit when you want a DNS that conforms to RFC 1034, and when you aren't all tinfoil-hat about Google. OpenDNS hijacks your unresolved DNS queries and redirects you to advertising. This breaks the NXDOMAIN response. However, their claim to fame is that they provide user-definable filtering at the DNS level. Frankly, few things piss me off more than a DNS provider that hijacks NXDOMAIN, so I'll probably be switching over to Google for my personal stuff. And hey, hard to get DNS IPs that are easier to remember! (8.8.8.8 and 8.8.4.4) | {
"source": [
"https://serverfault.com/questions/90908",
"https://serverfault.com",
"https://serverfault.com/users/2379/"
]
} |
91,285 | We have one particular SQL Server 2008 query (not a stored proc, but the same SQL string -- executes every 5 minutes) that intermittently caches a very bad query plan. This query normally runs in a few milliseconds, but with this bad query plan, it takes 30+ seconds. How do I surgically remove just the one bad cached query plan from SQL Server 2008, without blowing away the entire query cache on the production database server? | I figured out a few things select * from sys.dm_exec_query_stats will show all the cached query plans. Unfortunately, no SQL text is shown there. However, you can join the SQL text to the plans like so: select plan_handle, creation_time, last_execution_time, execution_count, qt.text
FROM
sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text (qs.[sql_handle]) AS qt From here it's pretty trivial to add a WHERE clause to find the SQL I know is in the query, and then I can execute: DBCC FREEPROCCACHE (plan_handle_id_goes_here) to remove each query plan from the query plan cache. Not exactly easy or convenient, but it appears to work.. edit: dumping the entire query cache will also work, and is less dangerous than it sounds, at least in my experience: DBCC FREESYSTEMCACHE ('ALL') WITH MARK_IN_USE_FOR_REMOVAL; | {
"source": [
"https://serverfault.com/questions/91285",
"https://serverfault.com",
"https://serverfault.com/users/1/"
]
} |
91,482 | I am installing a MySql server in Ubuntu desktop. I could connect MySql Query Browser to it when specifying the Server Hostname as localhost , but when I replace it by the machine's IP it stops working (even if the MySql Query Browser runs in the same machine). All I have done so far is removing the iptables, but it seems it have nothing to do with it. Here is the error message Could not connect to host '192.168.0.2'. MySQL Error Nr. 2003 Can't connect to MySQL server on '192.168.0.2' (111) Click the 'Ping' button to see if there is a networking problem. The ping is ok, though | You'll have to bind your mysqld to a IP different from 127.0.0.1. Open your my.cnf (/etc/mysql/my.cnf usually) and change the line that says bind = 127.0.0.1 to whatever IP your machine uses to connect to the outside world. 0.0.0.0 binds to all IP addresses. Don't forget to restart your mysqld after that change. | {
"source": [
"https://serverfault.com/questions/91482",
"https://serverfault.com",
"https://serverfault.com/users/958/"
]
} |
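A minimal sketch of the change described in that answer, assuming the usual Debian/Ubuntu file layout; the directive is normally called bind-address and lives under the [mysqld] section of /etc/mysql/my.cnf:
bind-address = 0.0.0.0
sudo /etc/init.d/mysql restart
Binding to 0.0.0.0 listens on every interface, while binding to a single address such as 192.168.0.2 limits MySQL to that interface. The MySQL account you connect with must also be allowed to connect from a non-localhost host (i.e. not defined only as 'user'@'localhost').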