478,150
I've been using nginx without any problem on Windows for the last few months. Today when I tried to start it up, I got this error: nginx: [emerg] bind() to 0.0.0.0:80 failed (10013: An attempt was made to access a socket in a way forbidden by its access permissions) Why did this start happening all of a sudden? I didn't change any configs or anything.
Check Skype. Skype automatically updated itself, and turned the "use port 80" option back on. It's in Settings -> Advanced.
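If Skype turns out not to be the culprit, a quick way on Windows to see which process has grabbed port 80 is something like the following (the PID value is just an example of what netstat might report):
netstat -ano | findstr :80
tasklist /FI "PID eq 1234"
The first command lists listeners on port 80 with their PID; the second resolves that PID to a process name.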
{ "source": [ "https://serverfault.com/questions/478150", "https://serverfault.com", "https://serverfault.com/users/6180/" ] }
478,558
I have a small Linux server (Debian Squeeze) which runs a Samba server configured to share some folders with some Windows machines. While trying to delete one of the directories from Windows I received a "Cannot delete folder" error. When I tried to delete the directory from the Linux console I got a similar error: # rm dir-name -rf rm: cannot remove `dir-name': Directory not empty I listed the contents of the directory and found a file named .fuse_hidden followed by a hex number (000bd8c100000185). # ls -la dir-name -rwxrwxrwx 1 root root 5120 Feb 13 11:46 .fuse_hidden000bd8c100000185 I tried to delete the .fuse_hidden file, but a new file was created instantly (note the hex number change). # rm dir-name/.fuse_hidden000bd8c100000185 # ls -la dir-name -rwxrwxrwx 1 root root 5120 Feb 13 11:46 .fuse_hidden000bd8c100000186 I also tried using Midnight Commander to delete the file with no success. Other solutions I have found so far involve a GUI, and I've only got a console. Any suggestions are appreciated.
This is similar to what happens when you delete a file that another system has open on an NFS mount. The problem is that the file has been removed from the filesystem while its "link count" was >1, meaning that other processes are still holding it open. Log in to the system where the file physically resides. (no network mount) Execute lsof dir-name/.fuse_hidden000bd8c100000185 to find out what processes are holding the file handle open. Terminate those processes if it makes sense to, or figure out what steps you can perform to "gracefully" release the open file handle without terminating the process. Normally, when you delete a file on your local filesystem that another process has open, the OS complies with your request and removes it from the directory tree, but the inode that tree points to is still considered in use by the operating system. Every time a file is opened, its "link count" increments by one, and the space is only truly released when that link count hits zero. When you run into a problem of this nature, it means that the OS has for whatever reason decided to not remove that file from the directory tree: usually because it has reason to believe that it still needs to be accessed by things that can't utilize the direct inode number. It might initially seem to comply, but behind the scenes the OS renames it to have a hidden dot-prefix so that it is still accessible with some form of filesystem path addressing. The space will still be freed when the link count hits zero, but that object will remain in the directory until the links are gone.
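For illustration, assuming the hidden file from the question, the sequence looks roughly like this (the PID is whatever lsof reports, and the hex suffix will keep changing):
lsof dir-name/.fuse_hidden000bd8c100000186    # identify the process holding the handle open
kill <PID>                                    # or stop/restart that service gracefully
rm -rf dir-name                               # succeeds once nothing holds the file open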
{ "source": [ "https://serverfault.com/questions/478558", "https://serverfault.com", "https://serverfault.com/users/158866/" ] }
478,564
My dedicated provider did the following: wget S03HvTechAccess > /dev/null 2>&1 mv S03HvTechAccess /etc/rc3.d/ > /dev/null 2>&1 chmod 755 /etc/rc3.d/S03HvTechAccess > /dev/null 2>&1 and it shows: /usr/bin/openvt -c 8 /bin/bash What is openvt? It mentions you can log in without a password. How does that work, and how do you connect to it?
{ "source": [ "https://serverfault.com/questions/478564", "https://serverfault.com", "https://serverfault.com/users/112405/" ] }
478,636
[root@localhost ~]# cat /etc/issue Fedora release 17 (Beefy Miracle) Kernel \r on an \m (\l) [root@localhost ~]# uname -a Linux localhost.localdomain 3.6.10-2.fc17.i686 #1 SMP Tue Dec 11 18:33:15 UTC 2012 i686 i686 i386 GNU/Linux [root@localhost ~]# tcpdump -i p3p1 -n -w out.pcap -C 16 tcpdump: out.pcap: Permission denied Why do I get this error? What should I do?
I tried this on CentOS 5 and got the same result, even in /tmp or root's folder. From the tcpdump man page: when used with the -Z option (enabled by default), privileges are dropped before opening the first savefile. Because you specified -C, the savefile is only opened after that privilege drop, so creating it fails with a permission denied error unless the unprivileged capture user can write to that directory; the simple fix is to specify -Z user. # strace tcpdump -i eth0 -n -w out.pcap -C 1 fstat(4, {st_mode=S_IFREG|0644, st_size=903, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2aea31934000 lseek(4, 0, SEEK_CUR) = 0 read(4, "root:x:0:root\nbin:x:1:root,bin,d"..., 4096) = 903 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2aea31934000, 4096) = 0 setgroups(1, [77]) = 0 setgid(77) = 0 setuid(77) = 0 setsockopt(3, SOL_SOCKET, SO_ATTACH_FILTER, "\1\0\0\0\0\0\0\0\310\357k\0\0\0\0\0", 16) = 0 fcntl(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 recvfrom(3, 0x7fff9563d35f, 1, 32, 0, 0) = -1 EAGAIN (Resource temporarily unavailable) fcntl(3, F_SETFL, O_RDWR) = 0 setsockopt(3, SOL_SOCKET, SO_ATTACH_FILTER, "\1\0\17\0\0\0\0\0P\327\233\7\0\0\0\0", 16) = 0 open("out.pcap", O_WRONLY|O_CREAT|O_TRUNC, 0666) = -1 EACCES (Permission denied) write(2, "tcpdump: ", 9tcpdump: ) = 9 write(2, "out.pcap: Permission denied", 27out.pcap: Permission denied) = 27 write(2, "\n", 1 ) = 1 exit_group(1) = ? You can see from the strace result above that tcpdump dropped its privileges to the user and group pcap (77). # grep 77 /etc/group pcap:x:77: # grep 77 /etc/passwd pcap:x:77:77::/var/arpwatch:/sbin/nologin From the tcpdump man page, -C: # man tcpdump -C Before writing a raw packet to a savefile, check whether the file is currently larger than file_size and, if so, close the current savefile and open a new one. Savefiles after the first savefile will have the name specified with the -w flag, with a number after it, starting at 1 and continuing upward. The units of file_size are millions of bytes (1,000,000 bytes, not 1,048,576 bytes). **Note that when used with -Z option (enabled by default), privileges are dropped before opening first savefile.** # tcpdump --help tcpdump version 3.9.4 libpcap version 0.9.4 Usage: tcpdump [-aAdDeflLnNOpqRStuUvxX] [-c count] [ -C file_size ] [ -E algo:secret ] [ -F file ] [ -i interface ] [ -M secret ] [ -r file ] [ -s snaplen ] [ -T type ] [ -w file ] [ -W filecount ] [ -y datalinktype ] [ -Z user ] [ expression ] Specify a specific user with -Z user: # tcpdump -i eth0 -n -w out.pcap -C 1 -Z root tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes 35 packets captured 35 packets received by filter 0 packets dropped by kernel
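If you would rather keep the privilege drop instead of using -Z root, an alternative is to write the capture files to a directory the unprivileged capture user can write to (the pcap user/group follows the account shown above; the path is just an example):
mkdir -p /var/tmp/captures
chown pcap:pcap /var/tmp/captures
tcpdump -i p3p1 -n -C 16 -w /var/tmp/captures/out.pcap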
{ "source": [ "https://serverfault.com/questions/478636", "https://serverfault.com", "https://serverfault.com/users/158897/" ] }
479,443
I have Nginx + php5-fpm . Several times per hour my website gets stuck and in the log file I see the following: WARNING: [pool www] server reached pm.max_children setting (5), consider raising it. The /etc/php5/fpm/pool.d/www.conf file contains the following configuration: pm = dynamic pm.max_children = 5 pm.start_servers = 2 pm.min_spare_servers = 1 pm.max_spare_servers = 3 Server: AMD Opteron™ 3280, Octo-Core, 8x 2.4 GHz, 16 GB DIMM (DDR3). I have no idea what numbers I should put in the www.conf file for this server. Can somebody help me? Thanks
There are many possible reasons why your PHP-FPM would reach max_children . The most common ones are: a lot of parallel requests from your clients, slow execution of the PHP scripts, or a very low max_children setting. Looking at the specs of your machine, assuming nothing other than PHP+Nginx is running, I think you could set it much higher than 5. You say you have 8 cores; Nginx usually needs much less CPU than PHP, so with 5 children you will probably never be able to use all of them. I usually set it to something like the number of cores x 2 or number of cores x 4 , depending on the memory consumption of your PHP scripts.
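A rough, memory-based way to pick the number is sketched below (the process name follows the php5-fpm setup in the question and may differ on your system; the 100 MB per worker figure is only an example, measure your own):
ps --no-headers -o rss -C php5-fpm | awk '{sum+=$1; n++} END {printf "avg %.0f MB over %d workers\n", sum/n/1024, n}'
# pm.max_children is then roughly (RAM left over for PHP) / (average worker size), e.g. 12000 MB / 100 MB = 120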
{ "source": [ "https://serverfault.com/questions/479443", "https://serverfault.com", "https://serverfault.com/users/145932/" ] }
479,460
Is it possible to find the command line of a running process from its PID? The output of /proc/${PID}/cmdline seems to strip the space characters, so it is hard to read.
From: https://stackoverflow.com/questions/993452/splitting-proc-cmdline-arguments-with-spaces cat /proc/${PID}/cmdline | tr '\000' ' ' cat /proc/${PID}/cmdline | xargs -0 echo
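If you would rather not read /proc directly, a procps-style ps can print the same thing already space-separated:
ps -p ${PID} -o args=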
{ "source": [ "https://serverfault.com/questions/479460", "https://serverfault.com", "https://serverfault.com/users/158757/" ] }
479,945
Assume an environment with a puppet-managed cluster of different servers - various hardware, software, operating systems, virtual/dedicated, etc. Would you choose meaningful hostnames (mysqlmaster01..99, mysqlslave001..999, vpnprimary, vpnbackup, etc.) or would you prefer meaningless hostnames such as characters from a book or movie? The problem I see with meaningful hostnames is that names usually represent a single service and if a server has more than one purpose it gets really messy (especially if server roles change often). Isn't mapping a service name to an IP address and maintaining that mapping what DNS is supposed to do? What are the advantages and drawbacks of both approaches and what actual problems have you had to tackle with the approach you chose?
Once upon a time I had an opportunity to decide on a naming scheme. So I went round and asked my developers, who after all were the people who had to work with these names on a day-to-day basis, whether they preferred functional names (that is, names which represent, in some encoded form, the purpose of the machine) or mnemonic names (that is, names drawn from some pre-existing human naming scheme, which contained no implicit content about the machine's purpose). Out of 38 developers, 37 preferred mnemonic names; only one preferred functional names. So I named them all after rivers (there's a very large pool of possible names, and many of them are short, easy to remember, and quick to type). The human brain is pretty well-designed for attaching meaning to names. If you provide names that are memorable, people will pretty quickly remember what those names are used for, and use them. If you use names drawn from some common background (eg rivers, elements, stars, counties, drinks, you get the idea) it helps people to immediately recognise a company hostname when they come across it; otherwise statements like "all the email ended up on betelgeuse " can be a bit confusing. Conversely, my developers felt that in previous jobs they had had a really hard time remembering exactly what pr1ms001 was. But I should add that we used CNAMEs in the internal DNS to provide a functional name to mnemonic name mapping, so if you really found it easier to remember that the main mail server at the first cluster at the PR site was pr1ms001 , then the DNS would let you know that that was currently orwell . Also, that let us have many functional names per machine, so as long as you always used the functional name relevant to the function you were working on, you could be sure that pr1imap001 would always point to the IMAP server, even if we moved that functionality from orwell to rhine . And when hudson died, we could change the name of the replacement without affecting operational functions, so that we never had the "do you mean new hudson or old hudson ?" confusion.
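A sketch of what that functional-to-mnemonic mapping can look like in an internal zone file (host names taken from the answer, the address is illustrative):
orwell       IN A      192.0.2.10
pr1ms001     IN CNAME  orwell
pr1imap001   IN CNAME  orwell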
{ "source": [ "https://serverfault.com/questions/479945", "https://serverfault.com", "https://serverfault.com/users/61658/" ] }
480,208
I use the ssh [email protected] -p 1234 -D 9898 command for tunneling, and I set Firefox's SOCKS5 IP to 127.0.0.1 and its port to 9898. It works successfully, but in the terminal I get errors in the output: channel 39: open failed: connect failed: Connection timed out channel 41: open failed: connect failed: Connection timed out channel 42: open failed: connect failed: Connection timed out channel 43: open failed: connect failed: Connection timed out channel 44: open failed: connect failed: Connection timed out This occurs periodically. What is this? Is it a problem? What can I do?
I have experienced similar issues. If you are tunneling with Firefox through ssh, some http connections can simply time out due to server load or improper configuration. When the connection actually does time out, you'll get an error message like the one you indicated. You can suppress these messages with the following command: ssh [email protected] -p 1234 -D 9898 -q From the man page ssh(1): -q Quiet mode. Causes most warning and diagnostic messages to be suppressed. Suppressing these messages will keep the warnings from messing up your ssh or screen sessions.
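The same thing can be made permanent per host in ~/.ssh/config instead of remembering the flag (host and ports taken from the question):
Host myserver.com
    Port 1234
    DynamicForward 9898
    LogLevel QUIET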
{ "source": [ "https://serverfault.com/questions/480208", "https://serverfault.com", "https://serverfault.com/users/152804/" ] }
480,241
I'm having trouble configuring nginx. I'm using nginx as a reverse proxy. I want to send all my requests to my first server. If the first server is down, I want to send requests to the second server. In short, how can I have a failover solution without load balancing?
What you want is an active+passive setup. Here's an example nginx conf snippet to get you going: upstream backend { server 1.2.3.4:80 fail_timeout=5s max_fails=3; server 4.5.6.7:80 backup; } server { listen 80; server_name whatevs.com; location / { proxy_pass http://backend; } } So, 'normally', all requests will go to host 1.2.3.4. If we get three failures to that box, then 4.5.6.7 will take over.
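If you also want individual failed requests retried against the backup, rather than only marking the primary down after repeated failures, something along these lines can go inside the location block (the timeout value is only an example):
location / {
    proxy_pass http://backend;
    proxy_next_upstream error timeout http_502 http_503;
    proxy_connect_timeout 2s;
}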
{ "source": [ "https://serverfault.com/questions/480241", "https://serverfault.com", "https://serverfault.com/users/160679/" ] }
480,271
In the Installing RVM manual I see a lot of lines starting with '\': Install RVM with ruby: $ \curl -L https://get.rvm.io | bash -s stable --ruby I'd think it is just a typo, but they repeat it many times. So what is the reason?
There's no error, it's a little hack to avoid using a curl shell alias if any exists. These work too: 'curl' (...) "curl" (...) /usr/bin/curl (...) command curl (...) command -p curl (...)
{ "source": [ "https://serverfault.com/questions/480271", "https://serverfault.com", "https://serverfault.com/users/103132/" ] }
480,291
I am proposing an Azure environment with the following: VM SQL Server for core relational data Table Storage for bulk data I want to mirror the SQL Server database to another server so that reports can be run on that server to minimize the load on the primary database, and so it can serve as a failover server in case the primary server goes down. In order to achieve these 2 objectives I would also need to mirror the Azure Table Storage too. I can't seem to find any information on this. Is this even possible?
{ "source": [ "https://serverfault.com/questions/480291", "https://serverfault.com", "https://serverfault.com/users/158789/" ] }
480,371
Based on a previous question, I installed ipmitool ( yum install ipmitool ). Even after a reboot, though, I get the following error when trying to run ipmitool power status : Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory Unable to get Chassis Power Status Is this an OS/hardware issue (CentOS 6.3 x64 on a hosted machine in a remote datacenter - unsure on hardware vendor)? Or have I missed something more elemental in installing ipmitool ?
You probably need to load the IPMI kernel modules: modprobe ipmi_devintf modprobe ipmi_si You can add these to /etc/modules to have them loaded automatically (just list the module names): ipmi_devintf ipmi_si
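To check whether that fixed it, load the modules and look for the device node (note the question is CentOS, where the persistent-load mechanism differs from Debian's /etc/modules; on CentOS 6 an executable script under /etc/sysconfig/modules/ is one common approach):
modprobe ipmi_devintf
modprobe ipmi_si
ls -l /dev/ipmi0 /dev/ipmidev/0
ipmitool power status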
{ "source": [ "https://serverfault.com/questions/480371", "https://serverfault.com", "https://serverfault.com/users/2321/" ] }
480,551
From the logrotate manpage: It will not modify a log more than once in one day unless the criterion for that log is based on the log's size. According to the man page, logrotate should rotate the file if the configuration is based on the log's size. But my file is not getting rotated even though the file size is greater than 100k. Can somebody point out what the issue is? My configuration: /home/jetech/work/lampstack-5.3.9-0/apache2/logs/access_log { copytruncate compress # dateext rotate 365 size 100k olddir /home/jetech/work/lampstack-5.3.9-0/apache2/old_logs notifempty nomail missingok }
How do you know the file is not getting rotated? On a Debian 6 Linode I have, in the default configuration logrotate was only scheduled by cron to run once per day, and at a very odd time at that. If only run once per day, naturally it'll only have one opportunity per day to look at the configuration, do the comparisons and perform the rotations required. So, are you sure you're actually running your logrotate? Might want to check your /etc/cron* and /etc/cron*/* to see when and how often logrotate is scheduled to run. For example, if logrotate script is present in /etc/cron.daily , then you may want to move it to /etc/cron.hourly , or, if hourly is not good enough, create a file in /etc/cron.d/ with the following content, to run logrotate every 10 minutes: */10 * * * * root /usr/sbin/logrotate /etc/logrotate.conf
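You can also see what logrotate would decide for your file without waiting for cron at all; -d is a dry run that only prints decisions, -f forces a rotation (the second path is a placeholder for wherever your snippet actually lives):
logrotate -d /etc/logrotate.conf
logrotate -vf /etc/logrotate.d/your-apache-snippet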
{ "source": [ "https://serverfault.com/questions/480551", "https://serverfault.com", "https://serverfault.com/users/155488/" ] }
482,730
I need to specify a boot order for processes to start. I have 389 Directory Server and Samba running on Fedora 18. How can I have the network services boot, then 389 DS, then Samba? Is there a GUI to manage this in Fedora? I have enabled Samba to start with systemctl enable smb.service . I have also enabled 389 DS with systemctl enable dirsrv.target .
Use systemctl edit smb.service to update the dependencies. After=dirsrv.target - Will ensure the smb.service is started after dirsrv.target. For robustness, (which will be worth while if you're tinkering with this stuff) you may also wish to include some of the following: Requires=dirsrv.target - Activate dirsrv.target when smb.service is activated. Will cause smb.service to fail if dirsrv.target fails. Wants=dirsrv.target - Activate dirsrv.target when smb.service is activated. Won't cause smb.service to fail if dirsrv.target fails. BindsTo=dirsrv.target - If dirsrv.target is deactivated, deactivate smb.service. Source: http://www.freedesktop.org/software/systemd/man/systemd.unit.html systemd-ui provides a GUI for systemd. Gives a good view of the state of systemd but you'll still have to use a text editor to modify the unit files.
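For example, the drop-in that systemctl edit smb.service creates (typically /etc/systemd/system/smb.service.d/override.conf) could end up containing just:
[Unit]
After=dirsrv.target
Wants=dirsrv.target
followed by systemctl daemon-reload (if you edited the file by hand) and systemctl restart smb.service to pick it up.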
{ "source": [ "https://serverfault.com/questions/482730", "https://serverfault.com", "https://serverfault.com/users/118588/" ] }
482,733
Every 5 or so days (including just now) I get a barrage of timeout errors from my webapp. If I look at what my SQL instance is doing in CloudWatch, it reports this: Freeable space: http://cl.ly/NBRM DB Connections: http://cl.ly/NBLH Write throughput: http://cl.ly/NBFs Read IOPS: http://cl.ly/NBp3 Write IOPS: http://cl.ly/NAre Queue: http://cl.ly/NBA7 What the heck is happening? I don't believe it's traffic related. How do I find out what happened? Update: incremental backups are taken every 5 mins, and daily backups are done at 4am (i.e. not when this happens) Thanks
{ "source": [ "https://serverfault.com/questions/482733", "https://serverfault.com", "https://serverfault.com/users/7802/" ] }
482,907
In ssh_config , one can choose to export some environment variables to the host using SendEnv . Is there also a way to force a given value for this variable, per host? For example, would it be possible to export variable $FOO with value bar only when connecting to host example.com ?
You can't give a specific value for an environment variable in ssh_config , but you can certainly send the existing environment variable only to specific hosts. Host example.com SendEnv FOO To complete the chain: FOO=bar ssh [email protected] Finally, the remote server must have the environment variable listed in AcceptEnv in its sshd_config . AcceptEnv FOO
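As a side note, newer OpenSSH clients (the SetEnv option, added around OpenSSH 7.8; check your version) can pin the value itself in the client config, so the FOO=bar prefix is no longer needed; the server still needs the AcceptEnv line:
Host example.com
    SetEnv FOO=bar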
{ "source": [ "https://serverfault.com/questions/482907", "https://serverfault.com", "https://serverfault.com/users/29328/" ] }
482,913
When the accuracy of a DNS cache is in question, dig +trace tends to be the recommended way of determining the authoritative answer for an internet facing DNS record. This seems to be particularly useful when also paired with +additional , which also shows the glue records. Occasionally there seems to be some disagreement on this point -- some people say that it relies on the local resolver to look up the IP addresses of the intermediate nameservers, but the command output offers no indication that this is happening beyond the initial list of root nameservers. It seems logical to assume that this wouldn't be the case if the purpose of +trace is to start at the root servers and trace your way down. (at least if you have the right list of root nameservers) Does dig +trace really use the local resolver for anything past the root nameservers?
This is obviously a staged Q&A, but this tends to confuse people often and I can't find a canonical question covering the topic. dig +trace is a great diagnostic tool, but one aspect of its design is widely misunderstood: the IP of every server that will be queried is obtained from your resolver library . This is very easily overlooked and often only ends up becoming a problem when your local cache has the wrong answer for a nameserver cached. Detailed Analysis This is easier to break down with a sample of the output; I'll omit everything past the first NS delegation. ; <<>> DiG 9.7.3 <<>> +trace +additional serverfault.com ;; global options: +cmd . 121459 IN NS d.root-servers.net. . 121459 IN NS e.root-servers.net. . 121459 IN NS f.root-servers.net. . 121459 IN NS g.root-servers.net. . 121459 IN NS h.root-servers.net. . 121459 IN NS i.root-servers.net. . 121459 IN NS j.root-servers.net. . 121459 IN NS k.root-servers.net. . 121459 IN NS l.root-servers.net. . 121459 IN NS m.root-servers.net. . 121459 IN NS a.root-servers.net. . 121459 IN NS b.root-servers.net. . 121459 IN NS c.root-servers.net. e.root-servers.net. 354907 IN A 192.203.230.10 f.root-servers.net. 100300 IN A 192.5.5.241 f.root-servers.net. 123073 IN AAAA 2001:500:2f::f g.root-servers.net. 354527 IN A 192.112.36.4 h.root-servers.net. 354295 IN A 128.63.2.53 h.root-servers.net. 108245 IN AAAA 2001:500:1::803f:235 i.root-servers.net. 355208 IN A 192.36.148.17 i.root-servers.net. 542090 IN AAAA 2001:7fe::53 j.root-servers.net. 354526 IN A 192.58.128.30 j.root-servers.net. 488036 IN AAAA 2001:503:c27::2:30 k.root-servers.net. 354968 IN A 193.0.14.129 k.root-servers.net. 431621 IN AAAA 2001:7fd::1 l.root-servers.net. 354295 IN A 199.7.83.42 ;; Received 496 bytes from 75.75.75.75#53(75.75.75.75) in 10 ms com. 172800 IN NS m.gtld-servers.net. com. 172800 IN NS k.gtld-servers.net. com. 172800 IN NS f.gtld-servers.net. com. 172800 IN NS g.gtld-servers.net. com. 172800 IN NS b.gtld-servers.net. com. 172800 IN NS e.gtld-servers.net. com. 172800 IN NS j.gtld-servers.net. com. 172800 IN NS c.gtld-servers.net. com. 172800 IN NS l.gtld-servers.net. com. 172800 IN NS d.gtld-servers.net. com. 172800 IN NS i.gtld-servers.net. com. 172800 IN NS h.gtld-servers.net. com. 172800 IN NS a.gtld-servers.net. a.gtld-servers.net. 172800 IN A 192.5.6.30 a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30 b.gtld-servers.net. 172800 IN A 192.33.14.30 b.gtld-servers.net. 172800 IN AAAA 2001:503:231d::2:30 c.gtld-servers.net. 172800 IN A 192.26.92.30 d.gtld-servers.net. 172800 IN A 192.31.80.30 e.gtld-servers.net. 172800 IN A 192.12.94.30 f.gtld-servers.net. 172800 IN A 192.35.51.30 g.gtld-servers.net. 172800 IN A 192.42.93.30 h.gtld-servers.net. 172800 IN A 192.54.112.30 i.gtld-servers.net. 172800 IN A 192.43.172.30 j.gtld-servers.net. 172800 IN A 192.48.79.30 k.gtld-servers.net. 172800 IN A 192.52.178.30 l.gtld-servers.net. 172800 IN A 192.41.162.30 ;; Received 505 bytes from 192.203.230.10#53(e.root-servers.net) in 13 ms The initial query for . IN NS (root nameservers) hits the local resolver, which in this case is Comcast. ( 75.75.75.75 ) This is easy to spot. The next query is for serverfault.com. IN A and runs against e.root-servers.net. , randomly selected from the list of root nameservers we just got. It has an IP address of 192.203.230.10 , and since we have +additional enabled it appears to be coming from the glue. Since it is not authoritative for serverfault.com, this gets delegated to the com. TLD nameservers. 
What isn't obvious from the output here is that dig did not derive the IP address of e.root-servers.net. from the glue. In the background, this is what really happened: tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 02:03:43.301022 IP 192.0.2.1.59900 > 75.75.75.75.53: 63418 NS? . (17) 02:03:43.327327 IP 75.75.75.75.53 > 192.0.2.1.59900: 63418 13/0/14 NS k.root-servers.net., NS l.root-servers.net., NS m.root-servers.net., NS a.root-servers.net., NS b.root-servers.net., NS c.root-servers.net., NS d.root-servers.net., NS e.root-servers.net., NS f.root-servers.net., NS g.root-servers.net., NS h.root-servers.net., NS i.root-servers.net., NS j.root-servers.net. (512) 02:03:43.333047 IP 192.0.2.1.33120 > 75.75.75.75.53: 41110+ A? e.root-servers.net. (36) 02:03:43.333096 IP 192.0.2.1.33120 > 75.75.75.75.53: 5696+ AAAA? e.root-servers.net. (36) 02:03:43.344301 IP 75.75.75.75.53 > 192.0.2.1.33120: 41110 1/0/0 A 192.203.230.10 (52) 02:03:43.344348 IP 75.75.75.75.53 > 192.0.2.1.33120: 5696 0/1/0 (96) 02:03:43.344723 IP 192.0.2.1.37085 > 192.203.230.10.53: 28583 A? serverfault.com. (33) 02:03:43.423299 IP 192.203.230.10.53 > 192.0.2.1.37085: 28583- 0/13/14 (493) +trace cheated and consulted the local resolver to obtain the IP address of the next hop nameserver instead of consulting the glue. Sneaky! This is usually "good enough" and won't cause a problem for most people. Unfortunately, there are edge cases. If for whatever reason your upstream DNS cache is providing the wrong answer for the nameserver, this model breaks down entirely. Real world example: domain expires glue is repointed at registrar redirection nameservers bogus IPs are cached for ns1 and ns2.yourdomain.com domain is renewed with restored glue any caches with the bogus nameserver IPs continue to send people to a website that says the domain is for sale In the above case, +trace will suggest that the domain owner's own nameservers are the source of the problem, and you're one call away from incorrectly telling a customer that their servers are misconfigured. Whether it's something you can (or are willing to) do something about is another story, but it's important to have the right information. dig +trace is a great tool, but like any tool, you need to know what it does and doesn't do, and how to troubleshoot the issue manually when it proves insufficient. Edit: It should also be noted that dig +trace will not warn you about NS records that point at CNAME aliases. This is a RFC violation that ISC BIND (and possibly others) will not attempt to correct. +trace will be completely happy to accept the A record it gets from your locally configured nameserver, whereas if BIND were to be performing full recursion it would be rejecting the entire zone with a SERVFAIL. This can be tricky to troubleshoot if glue is present; this will work just fine until the NS records are refreshed , then suddenly break. Glueless delegations will always break BIND's recursion when a NS record points at an alias.
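When you need to rule the local cache out entirely, you can also follow the referral chain by hand, querying each level directly and never asking the local resolver for the next hop (server names follow the example above; substitute whatever the previous step returns):
dig @a.root-servers.net serverfault.com NS +norecurse
dig @a.gtld-servers.net serverfault.com NS +norecurse
dig @<nameserver from the previous answer> serverfault.com A +norecurse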
{ "source": [ "https://serverfault.com/questions/482913", "https://serverfault.com", "https://serverfault.com/users/152073/" ] }
483,339
I have the following grants for a user/database mysql> SHOW GRANTS FOR 'username'@'localhost'; +---------------------------------------------------------------------------+ | Grants for username@localhost | +---------------------------------------------------------------------------+ | GRANT USAGE ON *.* TO 'username'@'localhost' IDENTIFIED BY PASSWORD 'xxx' | | GRANT ALL PRIVILEGES ON `userdb`.* TO 'username'@'localhost' | +---------------------------------------------------------------------------+ To enable external access to the database, I need to change localhost to % . One way to do this is REVOKE all permissions and set it again. The problem is, that there is a password set which I don't know, so if I revoke the permission, I can't set it back. Is there a way to change the hostname localhost to % (and back again) without revoking the permission itself?
If you've got access to the mysql database, you can change the grant tables directly: UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='username'; ...and an analogous UPDATE -statement to change it back. Also you might need to make changes to the mysql.db table as well: UPDATE mysql.db SET Host='%' WHERE Host='localhost' AND User='username'; and then flush to apply the privileges: FLUSH PRIVILEGES;
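On reasonably recent MySQL versions there is also RENAME USER, which moves the account, along with its privileges, to the new host part without editing the grant tables by hand (worth verifying on your version before relying on it):
RENAME USER 'username'@'localhost' TO 'username'@'%';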
{ "source": [ "https://serverfault.com/questions/483339", "https://serverfault.com", "https://serverfault.com/users/6366/" ] }
483,465
There are plenty of resources out there about this topic, but none that I found covers this slightly special case. I have 4 files: privatekey.pem certificate.pem intermediate_rapidssl.pem ca_geotrust_global.pem And I wish to import them into a fresh keystore. Some sites suggest using the DER format and importing them one by one, but this failed because the key is not recognized. Another site suggested a special "ImportKey" class to run for the import, and this worked until I saw that the chain is broken, i.e. the chain length on the certificate is 1, ignoring the intermediate and CA. Some sites suggest PKCS7, but I can't even get a chain from that. Others suggest the PKCS12 format, but as far as my tests went, that also failed to preserve the whole chain. Any advice or hints are much welcome.
This may not be perfect, but I had some notes on my use of keytool that I've modified for your scenario. Import the root and intermediate CA certificates to an existing Java keystore (each certificate needs its own alias, or keytool will refuse the second import): keytool -import -trustcacerts -alias root -file ca_geotrust_global.pem -keystore yourkeystore.jks keytool -import -trustcacerts -alias intermediate -file intermediate_rapidssl.pem -keystore yourkeystore.jks Combine the certificate and private key into one file before importing. cat certificate.pem privatekey.pem > combined.pem This should result in a file resembling the below format. BEGIN CERTIFICATE ... END CERTIFICATE BEGIN RSA PRIVATE KEY ... END RSA PRIVATE KEY Import a signed primary certificate & key to an existing Java keystore: keytool -import -trustcacerts -alias yourdomain -file combined.pem -keystore yourkeystore.jks
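If the combined-PEM import gives you trouble (keytool can be picky about taking private keys that way), another common route is to build a PKCS12 file with OpenSSL and convert it; file names are the ones from the question, the alias is arbitrary:
openssl pkcs12 -export -in certificate.pem -inkey privatekey.pem -certfile intermediate_rapidssl.pem -name yourdomain -out keystore.p12
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore yourkeystore.jks
keytool -list -v -keystore yourkeystore.jks    # check the reported chain length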
{ "source": [ "https://serverfault.com/questions/483465", "https://serverfault.com", "https://serverfault.com/users/151401/" ] }
483,576
I asked my hoster to add three subdomains all pointing to the IP of the A record. It seems he simply added a wildcard DNS record because any random subdomain resolves to my IP now. This is OK for me from a technical point of view, since there are no subdomains pointing anywhere else. Then again I don't like him not doing what I asked for. And so I wonder whether there are other reasons to tell him to change that. Are there any? The only negative I found is that someone could link to my site using http://i.dont.like.your.website.mywebsite.tld .
If you ever put a computer in that domain, you will get bizarre DNS failures, where when you attempt to visit some random site on the Internet, you arrive at yours instead. Consider: You own the domain example.com . You set up your workstation and name it. ... let's say, yukon.example.com . Now you will notice in its /etc/resolv.conf it has the line: search example.com This is convenient because it means you can do hostname lookups for, e.g. www which will then search for www.example.com automatically for you. But it has a dark side: If you visit, say, Google, then it will search for www.google.com.example.com , and if you have wildcard DNS, then that will resolve to your site, and instead of reaching Google you will wind up on your own site. This applies equally to the server on which you're running your web site! If it ever has to call external services, then the hostname lookups can fail in the same way. So api.twitter.com for example suddenly becomes api.twitter.com.example.com , routes directly back to your site, and of course fails. This is why I never use wildcard DNS.
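You can see the effect for yourself from any machine in that domain (example.com is the placeholder used above); with a wildcard in place, any name under your domain resolves, including the ones produced by the search-path behaviour just described:
host www.google.com.example.com    # "succeeds" and points at your own site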
{ "source": [ "https://serverfault.com/questions/483576", "https://serverfault.com", "https://serverfault.com/users/154298/" ] }
483,650
My server is sending spam email and I am not able to find out which script is sending it. The emails were all from nobody@myhost, so in cPanel/WHM I disabled the nobody user from sending emails. Now at least they are not going out, but I keep receiving these bounces. This is the mail I get: A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed: [email protected] Mail sent by user nobody being discarded due to sender restrictions in WHM->Tweak Settings ------ This is a copy of the message, including all the headers. ------ Return-path: <[email protected]> Received: from nobody by cpanel.myserver.com with local (Exim 4.80) (envelope-from <[email protected]>) id 1UBBap-0007EM-9r for [email protected]; Fri, 01 Mar 2013 08:34:47 +1030 To: [email protected] Subject: Order Detail From: "Manager Ethan Finch" <[email protected]> X-Mailer: Fscfz(ver.2.75) Reply-To: "Manager Ethan Finch" <[email protected]> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="----------1362089087512FD47F4767C" Message-Id: <[email protected]> Date: Fri, 01 Mar 2013 08:34:47 +1030 ------------1362089087512FD47F4767C Content-Type: text/plain; charset="ISO-8859-1"; format=flowed Content-Transfer-Encoding: 7bit These are my Exim logs: 2013-03-01 14:36:00 no IP address found for host gw1.corpgw.com (during SMTP connection from [203.197.151.138]:54411) 2013-03-01 14:36:59 H=() [203.197.151.138]:54411 rejected MAIL [email protected]: HELO required before MAIL 2013-03-01 14:37:28 H=(helo) [203.197.151.138]:54411 rejected MAIL [email protected]: Access denied - Invalid HELO name (See RFC2821 4.1.1.1) 2013-03-01 14:37:28 SMTP connection from (helo) [203.197.151.138]:54411 closed by DROP in ACL 2013-03-01 14:37:29 cwd=/var/spool/exim 2 args: /usr/sbin/exim -q 2013-03-01 14:37:29 Start queue run: pid=12155 2013-03-01 14:37:29 1UBBap-0007EM-9r ** [email protected] R=enforce_mail_permissions: Mail sent by user nobody being discarded due to sender restrictions in WHM->Tweak Settings 2013-03-01 14:37:29 cwd=/var/spool/exim 7 args: /usr/sbin/exim -t -oem -oi -f <> -E1UBBap-0007EM-9r 2013-03-01 14:37:30 1UBHFp-0003A7-W3 <= <> R=1UBBap-0007EM-9r U=mailnull P=local S=7826 T="Mail delivery failed: returning message to sender" for [email protected] 2013-03-01 14:37:30 cwd=/var/spool/exim 3 args: /usr/sbin/exim -Mc 1UBHFp-0003A7-W3 2013-03-01 14:37:30 1UBBap-0007EM-9r Completed 2013-03-01 14:37:32 1UBHFp-0003A7-W3 aspmx.l.google.com [2607:f8b0:400e:c00::1b] Network is unreachable 2013-03-01 14:37:38 1UBHFp-0003A7-W3 => [email protected] <[email protected]> R=lookuphost T=remote_smtp H=aspmx.l.google.com [74.125.25.26] X=TLSv1:RC4-SHA:128 2013-03-01 14:37:39 1UBHFp-0003A7-W3 Completed 2013-03-01 14:37:39 End queue run: pid=12155 2013-03-01 14:38:20 SMTP connection from [127.0.0.1]:36667 (TCP/IP connection count = 1) 2013-03-01 14:38:21 SMTP connection from localhost [127.0.0.1]:36667 closed by QUIT 2013-03-01 14:42:45 cwd=/ 2 args: /usr/sbin/sendmail -t 2013-03-01 14:42:45 1UBHKv-0003BH-LD <= [email protected] U=root P=local S=1156 T="[cpanel.server.com] Root Login from IP 122.181.3.130" for [email protected] 2013-03-01 14:42:45 cwd=/var/spool/exim 3 args: /usr/sbin/exim -Mc 1UBHKv-0003BH-LD 2013-03-01 14:42:47 1UBHKv-0003BH-LD aspmx.l.google.com [2607:f8b0:400e:c00::1a] Network is unreachable 2013-03-01 14:42:51 1UBHKv-0003BH-LD => [email protected] R=lookuphost T=remote_smtp H=aspmx.l.google.com [74.125.25.27] X=TLSv1:RC4-SHA:128 
2013-03-01 14:42:51 1UBHKv-0003BH-LD Completed 2013-03-01 14:43:22 SMTP connection from [127.0.0.1]:37499 (TCP/IP connection count = 1) 2013-03-01 14:43:23 SMTP connection from localhost [127.0.0.1]:37499 closed by QUIT Is there any way to find which script, or which user, is generating those?
Linux Malware Detect ( http://www.rfxn.com/projects/linux-malware-detect/ ) is quite easy to install :). Go to that link and download http://www.rfxn.com/downloads/maldetect-current.tar.gz (the link to the file is located at the very top of the web page). Then unpack the archive and cd into the newly created directory in your terminal. In that directory run sudo ./install.sh , which will install the scanner on your system. To perform the scan itself, run sudo /usr/local/sbin/maldet -a / . The -a option here means that you want to scan all the files; use -r instead to scan only recent ones. The / specifies the directory where the scan should be performed, so just change it to any directory you want.
{ "source": [ "https://serverfault.com/questions/483650", "https://serverfault.com", "https://serverfault.com/users/162259/" ] }
483,657
/assets - file1.mp3 - file2.mp3 ... - fileX.mp3 (millions of files) What I want to prevent is direct access to the content when the user is not logged in, e.g. http://domain.com/assets/file1.mp3 Ideally the URL to the asset file would change every time a user logs in, using his session, something like http://domain.com/assets/51303ca30479c7a79b75373a/file1.mp3 How can I enforce that? So the goal is to change the URLs often and to check authentication before serving the actual binary file. I understand I can do something like this: RewriteRule ^(.*\.mp3)$ /path/to/auth.php?i=$1 But I prefer not to use PHP to process the auth. Is there an elegant way to handle this with Apache alone? Thanks, Dmitry
{ "source": [ "https://serverfault.com/questions/483657", "https://serverfault.com", "https://serverfault.com/users/108649/" ] }
483,660
We are planning to use Zmanda/Amanda Community backup in our organisation. The problem we are facing is that our servers are scattered across different data centers around the globe, and some are in client networks, so we don't have seamless access to them. We only have an SSH connection to the clients, and after reading through the documentation and forums, I understand that Amanda uses a port range and port 10080 to take backups, which is not possible in this case. Our network topology is something like this: Can someone suggest how we can configure Amanda in this scenario, or point out if I've missed something?
{ "source": [ "https://serverfault.com/questions/483660", "https://serverfault.com", "https://serverfault.com/users/141436/" ] }
483,798
How do I configure nginx to return http status code 429 (Too Many Requests) instead of the default 503 (Service Unavailable) when throttling/rate limiting? FYI, I'm using nginx as a reverse proxy with the HttpLimitReqModule. The draft spec for 429 status code is RFC6585 . This (closed) question on stackexchanged shows that it is possible to use the error_page directive. However, I don't want to return a 429 if there really is a server problem (not the customer hitting us too much) and the server should be returning 503 Service Unavailable. Any suggestions?
Good news, with version 1.3.15 http://mailman.nginx.org/pipermail/nginx/2013-March/038306.html we have the "limit_req_status" and "limit_conn_status" directives. I just tested them on Gentoo Linux (note that you need to have the limit_req and limit_conn modules compiled in). With these settings I think you can achieve what you've asked for: limit_req_status 429; limit_conn_status 429; I have verified this with a quick: ab2 -n 100000 -c 55 "http://127.0.0.1/api/v1" On which most requests failed after activating the directive, due to the high request rate and the configured limit in nginx: limit_req zone=api burst=15 nodelay;
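For context, a minimal sketch of where those directives sit relative to the zone and limit_req lines mentioned above (the zone name, size and rate are illustrative):
http {
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_status 429;
    server {
        location /api/ {
            limit_req zone=api burst=15 nodelay;
        }
    }
}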
{ "source": [ "https://serverfault.com/questions/483798", "https://serverfault.com", "https://serverfault.com/users/162163/" ] }
483,938
What happens when I assign multiple security groups to an instance? Is it permissive in the sense that the traffic is allowed in if any one of the security groups allows it. OR is it restrictive in the sense that every security group must allow the traffic in for it to be passed in? For example, lets say I have a class of instances that will only ever talk to other instances in the same account. I also have a class of instances that will only accept traffic via HTTP (port 80). Is it possible to restrict access to internal instances and only via HTTP by creating and applying two security groups: An "internal" security group. Allow all traffic in from other members of that security group on all ports for all transports (TCP, UDP, ICMP) Create an "http" security group. Allow all traffic into port 80 via TCP from any source. OR am I forced to create a single security group that allows traffic from port 80 where the source is itself?
Permissive. According to AWS here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#security-group-rules If there is more than one rule for a specific port, we apply the most permissive rule. For example, if you have a rule that allows access to TCP port 22 (SSH) from IP address 203.0.113.1 and another rule that allows access to TCP port 22 from everyone, everyone has access to TCP port 22.
{ "source": [ "https://serverfault.com/questions/483938", "https://serverfault.com", "https://serverfault.com/users/61173/" ] }
483,941
Is there any documentation or resource describing how to generate and host a profile for an OpenVPN client to import? Ideally would like my users to not have to separately fetch a .zip file of the .ovpn + certs, extract it to the proper directory, tweak their .ovpn, etc.
Apparently an inline configuration has been supported since OpenVPN 2.1, allowing you to put your certs and keys all in a single configuration file. But the documentation about how to create this configuration file was not added until the recent release of 2.3. See the INLINE FILE SUPPORT section of the OpenVPN man page for more info. client proto udp remote openvpnserver.example.com port 1194 dev tun nobind key-direction 1 <ca> -----BEGIN CERTIFICATE----- # insert base64 blob from ca.crt -----END CERTIFICATE----- </ca> <cert> -----BEGIN CERTIFICATE----- # insert base64 blob from client1.crt -----END CERTIFICATE----- </cert> <key> -----BEGIN PRIVATE KEY----- # insert base64 blob from client1.key -----END PRIVATE KEY----- </key> <tls-auth> -----BEGIN OpenVPN Static key V1----- # insert ta.key -----END OpenVPN Static key V1----- </tls-auth> The docs for the config file are the same as the docs for the commandline options: OpenVPN allows any option to be placed either on the command line or in a configuration file. Though all command line options are preceded by a double-leading-dash ("--"), this prefix can be removed when an option is placed in a configuration file.
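A small sketch of how such a file can be assembled on the server so users only ever download a single .ovpn (base-client.conf is a hypothetical file holding the non-inline directives shown above; the key and cert file names follow the example):
{
  cat base-client.conf
  echo '<ca>';       cat ca.crt;      echo '</ca>'
  echo '<cert>';     cat client1.crt; echo '</cert>'
  echo '<key>';      cat client1.key; echo '</key>'
  echo '<tls-auth>'; cat ta.key;      echo '</tls-auth>'
} > client1.ovpn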
{ "source": [ "https://serverfault.com/questions/483941", "https://serverfault.com", "https://serverfault.com/users/16277/" ] }
484,082
I have setup a new email server and now I need to test that Clam Antivirus is scanning messages correctly. How should I do this in a safe and controlled way?
The easiest way would be to use an EICAR test file. Create a text file and add in the following code: X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H* More information here
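Before pushing it through the mail path you can confirm the engine itself flags the file (the file name is whatever you saved the string as; the exact signature name reported varies by ClamAV version):
clamscan eicar.txt
Then email yourself the same file as an attachment through the new server and check the mail log for the reject or quarantine action.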
{ "source": [ "https://serverfault.com/questions/484082", "https://serverfault.com", "https://serverfault.com/users/139604/" ] }
484,091
How is it that some servers don't allow resuming downloads? Where, in terms of server configuration, do you have to enable or disable this feature? Is it an HTTP config? Or does it have something to do with your TCP connection? Or both? Of course the problem isn't specific to HTTP; FTP and HTTPS must have the same kind of setting, right? Is there any workaround to this issue? I'm just looking for a platform-independent answer to this question; you might answer with reference to your experience with specific platforms (Windows-IIS, Linux-Apache or whatever). PS. 1- I'm not asking about hosted file-sharing services (like rapidshare, etc... ) whose business model depends on such a feature. As far as I understand it, they change their URL every time you ask for a file. But take for example http://ocw.yale.edu . It doesn't allow resuming your downloads after you become disconnected from its servers. Another example is TED videos: once your connection is lost for whatever reason, you have to start over. 2- And I'm not asking whether you are able to resume your downloads on the client side. For the sake of this question, suppose that on the client side I have the tools to resume my downloads.
{ "source": [ "https://serverfault.com/questions/484091", "https://serverfault.com", "https://serverfault.com/users/121855/" ] }
484,208
I was reading about DNS some days ago and learned how the requests are processed. If you surf to www.example.com , then a request will go to the Root Name Servers to see who owns that .com address, then another request will go to another, more local, DNS server to see who owns the example.com address and so on. How is it technically possible that the 13 Root Name Servers can handle all requests done by earth's billions of Internet users simultaneously without being overloaded leading to a Denial-of-Service?
They're 13 highly available clusters of servers, not simply 13 servers. Among other things, root nameserver operators are required to have enough capacity to handle three times their normal traffic load ( RFC 2870 ). This leads to rather large clusters. However, the root nameservers only serve responses for the top level domains themselves, i.e. com. , net. , uk. , ae. , etc., and the nameservers which query the root can cache this information up to 48 hours , which dramatically reduces the load at the root nameservers. This leads to smaller clusters. The root nameservers are in over 130 physical locations in 53 countries; with only 13 server names, this is done through the magic of IPv4 anycast. The root nameservers also have their own web site , which you may find interesting reading.
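Two quick ways to poke at this yourself (any of the thirteen letters works; hostname.bind is a conventional CHAOS-class query that many, but not all, operators answer to reveal which anycast instance you actually reached):
dig @k.root-servers.net com. NS +norecurse     # the TLD referral that resolvers cache
dig @k.root-servers.net hostname.bind CH TXT +short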
{ "source": [ "https://serverfault.com/questions/484208", "https://serverfault.com", "https://serverfault.com/users/111542/" ] }
484,475
I have the following rules on our server within UFW: To Action From -- ------ ---- 22 ALLOW 217.22.12.111 22 ALLOW 146.200.200.200 80 ALLOW Anywhere 443 ALLOW Anywhere 22/tcp ALLOW 109.104.109.0/26 The first two rules are our internal IP's which we want to ensure can always SSH in (port 22). The next two rules are to allow HTTP and HTTPS viewing from any IP addresses anywhere. The final rule is to allow SSH from our code deployment system. I set a ufw default deny rule up but it doesn't appear to be showing. Should I also have a final rule which denies everything? If I add a deny everything rule, does the order the rules appear above make a difference? Presumably if this list gets longer adding another allow rule above a deny rule is impossible, meaning I'll have to remove and re-add some rules?
If you're interested in reordering your UFW rules, this is one way to do it. $ sudo ufw status numbered To Action From -- ------ ---- [ 1] 22 ALLOW IN Anywhere [ 2] 80 ALLOW IN Anywhere [ 3] 443 ALLOW IN Anywhere [ 4] 22 (v6) ALLOW IN Anywhere (v6) [ 5] 80 (v6) ALLOW IN Anywhere (v6) [ 6] 443 (v6) ALLOW IN Anywhere (v6) [ 7] Anywhere DENY IN [ip-to-block] Say you accidentally added a rule to the end, but you wanted up top. First you will have remove it from the bottom (7) and add it back. $ sudo ufw delete 7 Note, be careful of removing multiple rules one after another, their position can change! Add back your rule to the very top (1): $ sudo ufw insert 1 deny from [ip-to-block] to any
{ "source": [ "https://serverfault.com/questions/484475", "https://serverfault.com", "https://serverfault.com/users/41698/" ] }
484,896
The alternatives command (package chkconfig ) on RHEL/Fedora manages symlinks which link a generic name to one of the alternative implementations. For example, mta group of symlinks can be provided by Sendmail and Postfix (to implement i.e. sendmail command): alternatives --display mta While I can --display a group of symlinks, I need to guess its name first (i.e. mta ). Can I simply list all possible configurable symlink groups (like mta ) to pick from? The reason is that I forget some group names occasionally.
On Debian (but not Fedora or RHEL), to see a list of all "master alternative names": update-alternatives --get-selections --get-selections list master alternative names and their status. And for each of those listed, you can run --list $ALTERNATIVE_NAME , e.g. update-alternatives --list editor --list name Display all targets of the link group. If you would like to see a list of all alternatives in their respective groups, you could run the following in fish shell: for alternative in (update-alternatives --get-selections) echo $alternative update-alternatives --list (echo $alternative | cut -d" " -f1) echo end | pager The (ba|z)?sh syntax should be something similar. To change the alternatives, run sudo update-alternatives --config $ALTERNATIVE_NAME
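A rough bash equivalent of that fish loop (same commands underneath, so Debian-only as noted above):
update-alternatives --get-selections | while read -r name _; do
    echo "== $name"
    update-alternatives --list "$name"
    echo
done | less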
{ "source": [ "https://serverfault.com/questions/484896", "https://serverfault.com", "https://serverfault.com/users/134406/" ] }
485,006
I have a web site with an admin subdirectory that is protected by integrated Windows authentication. Works flawlessly from remote PCs. But when I attempt to access these pages on the server itself, I get an authorization failure. I'm using the proper hostname, not localhost. Tried Chrome and IE, same result. Any suggestions?
You are almost certainly running into the Windows loopback check that was introduced with IIS 5.1. This is a security feature to avoid certain types of reflection attacks against the system. Microsoft has a KB article describing workarounds. They basically boil down to modifying the registry to either disable the loopback check, or to allow certain hostnames (e.g. your local host name or site name) to back-connect. You can quickly disable the check via PowerShell: New-ItemProperty HKLM:\System\CurrentControlSet\Control\Lsa -Name "DisableLoopbackCheck" -Value "1" -PropertyType dword Below are Microsoft's official instructions. Note that although the below instructions indicate a reboot, I've found that IE usually picks up the change right away. Method 1: Specify host names (Preferred method if NTLM authentication is desired) Set the DisableStrictNameChecking registry entry to 1 . Click Start , click Run , type regedit , and then click OK . In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 Right-click MSV1_0 , point to New , and then click Multi-String Value . Type BackConnectionHostNames , and then press ENTER . Right-click BackConnectionHostNames , and then click Modify . In the Value data box, type the host name or the host names for the sites that are on the local computer, and then click OK . Quit Registry Editor, and then restart the IISAdmin service. Method 2: Disable the loopback check (less-recommended method) Set the DisableStrictNameChecking registry entry to 1 . Click Start , click Run , type regedit , and then click OK . In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa Right-click Lsa , point to New , and then click DWORD Value . Type DisableLoopbackCheck , and then press ENTER . Right-click DisableLoopbackCheck , and then click Modify . In the Value data box, type 1 , and then click OK . Quit Registry Editor, and then restart your computer. Addendum: To set the DisableStrictNameChecking registry entry to 1: Click Start , click Run , type regedit , and then click OK . In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters Right-click Parameters , point to New , and then click DWORD Value . Type DisableStrictNameChecking , and then press ENTER . Right-click DisableStrictNameChecking , and then click Modify . In the Value data box, type 1 , and then click OK . Quit Registry Editor, and then restart your computer.
{ "source": [ "https://serverfault.com/questions/485006", "https://serverfault.com", "https://serverfault.com/users/21146/" ] }
485,063
I'm getting a lot of requests turning up in our apache logs that look like this www.example.com:80 10.240.1.8 - - [06/Mar/2013:00:39:19 +0000] "-" 408 0 "-" "-" - There seems to be no request and no user agent. Has anyone seen this before?
Are you by any chance running your web servers in Amazon behind an Elastic Load Balancer? It seems they generate a lot of 408 responses due to their health checks . Some of the solutions from that forum thread: RequestReadTimeout header=0 body=0 This disables the 408 responses if a request times out. Change the ELB health check to a different port. Disable logging for the ELB IP addresses with: SetEnvIf Remote_Addr "10\.0\.0\.5" exclude_from_log CustomLog logs/access_log common env=!exclude_from_log And from this blog post : Adjust your request timeout to be 60 or above.
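Note that the RequestReadTimeout directive comes from mod_reqtimeout, so that module has to be enabled for the first workaround; on Debian/Ubuntu that would presumably be:
sudo a2enmod reqtimeout && sudo service apache2 restart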
{ "source": [ "https://serverfault.com/questions/485063", "https://serverfault.com", "https://serverfault.com/users/38/" ] }
485,487
My problem is that I need to set a few variables, and output a few lines, every time I log in to the ssh shell, and at the same time I have to be able to use sftp to transfer files via FileZilla. Now, as per the openssh FAQ at http://www.openssh.org/faq.html , if your startup scripts echo any kind of output, it interferes with sftp. So it either delays indefinitely, or errors out with a "Connection closed by server with exit code 128". I have tried solutions like moving .bashrc to .bash_profile, or using the following code in .bashrc: if [ "$TERM" != "dumb" ] then source .bashc_real fi And: if [ "$TERM" = "xterm" ] then source .bashc_real fi However, nothing works. My shell is bash, and I connect for sftp with FileZilla.
Try doing this instead if [ "$SSH_TTY" ] then source .bashc_real fi
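A related guard that is often used for the same problem checks whether the shell is interactive at all, via $- (a sketch; it sources the same .bashc_real file as the approach above):
case $- in
    *i*) source .bashc_real ;;
esac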
{ "source": [ "https://serverfault.com/questions/485487", "https://serverfault.com", "https://serverfault.com/users/161126/" ] }
485,597
I need to add a .pem cert file to my default CA cert bundle but I don't know where the default CA Cert bundle is kept. I need to append my new .pem file to this default bundle. I'd rather do that than specify my own location using --capath cURL clearly knows where to look but I don't see any cURL commands that reveal the location. Is there a command that will reveal this location? How can I find it? According to cURL: Add the CA cert for your server to the existing default CA cert bundle. The default path of the CA bundle used can be changed by running configure with the --with-ca-bundle option pointing out the path of your choice. Thanks
Running curl with strace might give you a clue. strace curl https://www.google.com |& grep open Lots of output, but right near the end I see: open("/etc/ssl/certs/578d5c04.0", O_RDONLY) = 4 which /etc/ssl/certs/ is where my certificates are stored.
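If the curl-config helper is installed (it ships with the libcurl development package), it may also print the bundle path curl was built with:
curl-config --ca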
{ "source": [ "https://serverfault.com/questions/485597", "https://serverfault.com", "https://serverfault.com/users/145544/" ] }
485,798
From time to time "my" server stalls because it runs out of both memory and swap space. (It keeps responding to ping but nothing more than that, not even ssh.) I'm told Linux does memory overcommitment, which as far as I understand is the same as what banks do with money: it grants processes more memory than is actually available, assuming that most processes won't actually use all the memory they ask for, at least not all at the same time. Please assume this is actually the reason why my system occasionally hangs; let's not discuss here whether or not this is the case (see What can cause ALL services on a server to go down, yet still responding to ping? and how to figure out ). So, how do I disable or drastically reduce memory overcommitment in CentOS? I've read there are two settings called vm.overcommit_memory (values 0, 1, or 2) and vm.overcommit_ratio but I have no idea where I have to find and change them (some configuration file hopefully), what values I should try, and whether I need to reboot the server to make the changes effective. And is it safe? What side effects could I expect? When googling for overcommit_memory I find scary things like people saying their server can't boot anymore.... Since what causes the sudden increase in memory usage is mysql, because of queries that are made by php, which in turn is called while serving http requests, I would expect just some php scripts to fail to complete and hence some 500 responses from time to time when the server is too busy, which is a risk I can take (certainly better than having the whole server become inaccessible and having to hard reboot it). Or can it really cause my server to be unable to reboot if I choose the wrong settings?
Memory overcommit can be disabled by vm.overcommit_memory=2 0 is the default mode, where kernel heuristically determines the allocation by calculating the free memory compared to the allocation request being made. And setting it to 1 enables the wizardry mode, where kernel always advertises that it has enough free memory for any allocation. Setting to 2, means that processes can only allocate up to a configurable amount ( overcommit_ratio ) of RAM and will start getting allocation failure or OOM messages when it goes beyond that amount. Is it safe to do so, no. I haven't seen any proper use case where disabling memory overcommit actually helped, unless you are 100% certain of the workload and hardware capacity. In case you are interested, install kernel-docs package and go to /Documentation/sysctl/vm.txt to read more, or read it online . If you set vm.overcommit_memory=2 then it will overcommit up to the percentage of physical RAM configured in vm.overcommit_ratio (default is 50%). echo 0/1/2 > /proc/sys/vm/overcommit_memory This will not survive a reboot. For persistence, put this in /etc/sysctl.conf file: vm.overcommit_memory=X and run sysctl -p . No need to reboot.
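As a concrete sketch (the ratio is only an example, not a recommendation): to switch to strict accounting and allow commits up to swap plus 80% of RAM, /etc/sysctl.conf would contain the following, applied with sysctl -p:
vm.overcommit_memory = 2
vm.overcommit_ratio = 80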
{ "source": [ "https://serverfault.com/questions/485798", "https://serverfault.com", "https://serverfault.com/users/80498/" ] }
486,406
I have mydomain.com that is hosted on an Azure VM instance called mymachine.cloudapp.net I need to configure DNS so that both www.mydomain.com and mydomain.com get mapped to the same host. I'm using GoDaddy as registrar. Currently GoDaddy offers me to create an empty ( @ ) A record, so that if I ping mymachine.cloudapp.net and resolve its VIP address I can store it in the A record. Unfortunately, if the VIP changes and I forget to re-ping I get mydomain.com unreachable, and that's normal. When I try to move that @ record to the CNAME section so it best points to the VM hostname, I get the following error: A record of a different type exists for the hostname @, could not create CNAME This occurs both if I delete the A record and write CNAME, and if there is no @ record in the A section. How can one set a @ CNAME record in a GoDaddy managed domain?
In short, you can't make the @ record a CNAME without deleting all other resource records for @, and you can't do that since some (like the NS records) are required for proper DNS functionality. This is one reason why providers such as Heroku tell you not to use naked domain names . You will need a host to perform the HTTP redirection from example.com to www.example.com for you, to which you will point A (and AAAA ) record for @. If your DNS is hosted with GoDaddy, then they have a free service that will do this for you. In your GoDaddy domain manager, look on the left hand side for "Forwarding" and click "Manage". Then set it to forward example.com to www.example.com and update your DNS to support the change. You should leave the Advanced Options at their defaults.
{ "source": [ "https://serverfault.com/questions/486406", "https://serverfault.com", "https://serverfault.com/users/64579/" ] }
486,518
As I was putting together a presentation for beginning Windows administration, I was struck with a question that I'm amazed I haven't asked sooner. I know that: AD is logically setup in sites to aid in replication and decreasing the latency of domain-necessary communications between client computers and domain services. Sites are defined by the subnets applied to them the _msdcs subdomain contains a hierarchy of SRV records for general lookup (_tcp) and for site-specific lookup (_sites) Computers somehow know what site they are in, or the domain controller decides transparently in some magic of DNS... or does it? This blog post hints that client computers in an AD network can "know" what site they are a member of. My question is, if this is the case, how do they find it out? If the client itself doesn't know, how does the DC aid the machine in the process of selecting the closest AD services to that client computer?
The answer is that the first time a client ever authenticates to Active Directory, it doesn't know what site it is in. When first joining the domain, the client makes general DNS and LDAP queries and gets a list of all the domain controllers in the domain, and it goes down the list, trying LDAP binds, and the first successful DC that it binds to - that is the first DC it authenticates with. After the client has joined the domain, Active Directory will tell the client which site it belongs to. Active Directory knows this because the administrator has put the IP subnet of the client in AD Sites & Services and associated it to a Site. Active Directory tells the client what its AD site is, and the client stores that in its own registry in the HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\DynamicSiteName registry value. That way, the next time the client boots up, it knows what site-specific DNS query to make so that it gets only the DCs that are in that site. Of course the full behavior is documented in KB247811 , but if you want to see it for yourself, you could run Wireshark or NetMon and do a packet trace, and then join a domain while the trace is running. You will see the exact sequence of DNS queries and LDAP binds. Subsequent DNS queries and LDAP binds are made to the site-specific sub-zones because the client has been told by AD what site it belongs to. The Netlogon service will periodically refresh its AD site info, so if you move to a different network, your client will get its new site automatically. This can be adjusted in the HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SiteNameTimeout registry value. ( Link )
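If you want to check this from a client without digging through the registry, the nltest utility can report the site the machine thinks it is in, and which DC it found (example.com is a placeholder for your domain):
nltest /dsgetsite
nltest /dsgetdc:example.com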
{ "source": [ "https://serverfault.com/questions/486518", "https://serverfault.com", "https://serverfault.com/users/62258/" ] }
487,159
Logging in to my Webmin control panel, I noticed that virtually all of my disk space is full. I searched for the ten largest files/ directories on my system and found that a file called ibdata1 is taking up around 94GB of space. It resides in my /var/lib/mysql directory. What does ibdata1 do? Am I safe to remove it? My assumption is that it's a dump of some kind, but that's just a wild guess.
The file ibdata1 is the system tablespace for the InnoDB infrastructure. It contains several classes for information vital for InnoDB Table Data Pages Table Index Pages Data Dictionary MVCC Control Data Undo Space Rollback Segments Double Write Buffer (Pages Written in the Background to avoid OS caching) Insert Buffer (Changes to Secondary Indexes) Please note ibdata1's place in the InnoDB Universe (on Right Side) You can separate Data and Index Pages from ibdata1 by enabling innodb_file_per_table . This will cause any newly created InnoDB table to store data and index pages in an external .ibd file. Example datadir is /var/lib/mysql CREATE TABLE mydb.mytable (...) ENGINE=InnoDB; , creates /var/lib/mysql/mydb/mytable.frm innodb_file_per_table enabled, Data/Index Pages Stored in /var/lib/mysql/mydb/mytable.ibd innodb_file_per_table disabled, Data/Index Pages Stored in ibdata1 No matter where the InnoDB table is stored, InnoDB's functionality requires looking for table metadata and storing and retrieving MVCC info to support ACID compliance and Transaction Isolation . Here are my past articles on separating table data and indexes from ibdata1 Oct 29, 2010 : My Original Post in StackOverflow Nov 26, 2011 : ERROR 1114 (HY000) at line 6308 in file & The table user_analysis is full Feb 03, 2012 : Scheduled optimization of tables in MySQL InnoDB Mar 25, 2012 : Why does InnoDB store all databases in one file? Apr 01, 2012 : Is innodb_file_per_table advisable? WHAT TO DO NEXT You can continue having ibdata1 stored everything, but that makes doing LVM snapshots real drudgery (my personal opinion). You need to use my StackOverflow post and shrink that file permanently. Please run this query: SELECT ((POWER(1024,3)*94 - InnoDBDiskDataAndIndexes))/POWER(1024,3) SpaceToReclaim FROM (SELECT SUM(data_length+index_length) InnoDBDiskDataAndIndexes FROM information_schema.tables WHERE engine='InnoDB') A; This will tell how much wasted space can be reclaimed after applying the InnoDB Cleanup.
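If you do go the innodb_file_per_table route, the setting itself is just one line under [mysqld] in my.cnf (sketch; it only affects tables created or rebuilt after the change, e.g. via ALTER TABLE ... ENGINE=InnoDB):
[mysqld]
innodb_file_per_table = 1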
{ "source": [ "https://serverfault.com/questions/487159", "https://serverfault.com", "https://serverfault.com/users/97732/" ] }
487,165
I am having a little trouble. I followed the tutorials step by step at rtcamp.com for wordpress multisite, only changing the ‘example.com’ to my domain name. I have the nginx helper plugin installed and it looks like the whole site and everything works great. (It’s a network of about 40 sites moved from a LAMP setup) I’m having an issue though. Even with the configuration using the fastcgi_cache, my page load time is extremely high. This is what it reads when I view the source code timestamp: <!--Cached using Nginx-Helper on 2013-03-12 00:21:19. It took 62 queries executed in 13.000 seconds. Visit http://wordpress.org/extend/plugins/nginx-helper/faq/ for more details --> This doesn't happen with the root site, only with the sub-directory sites. Here are my configs: This is the sites-available/example.com #move next 3 lines to /etc/nginx/nginx.conf if you want to use fastcgi_cache across many sites fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:500m inactive=60m; fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_cache_use_stale error timeout invalid_header http_500; server { #@DM - uncomment following line for domain mapping or you will need to add every mapped-domain to server_name list #listen 80 default_server; server_name example.com *.example.com ; #@DM - uncomment following line for domain mapping #server_name_in_redirect off; access_log /var/log/nginx/example.com.access.log; error_log /var/log/nginx/example.com.error.log; root /var/www/example.com/htdocs; index index.php index.html index.htm; #fastcgi_cache start set $skip_cache 0; # POST requests and urls with a query string should always go to PHP if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } # Don't cache uris containing the following segments if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") { set $skip_cache 1; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } if (!-e $request_filename) { rewrite /wp-admin$ $scheme://$host$uri/ permanent; rewrite ^(/[^/]+)?(/wp-.*) $2 last; rewrite ^/[^/]+(/.*.php)$ $1 last; } location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_pass 127.0.0.1:9000; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache WORDPRESS; fastcgi_cache_valid 60m; } location ~ /purge(/.*) { fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1"; } location ~* ^.+.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires max; } location = /robots.txt { access_log off; log_not_found off; } # location ~ /. 
{ deny all; access_log off; log_not_found off; } ##SITEMAP ABILTIES rewrite ^/sitemap_index\.xml$ /index.php?sitemap=1 last; rewrite ^/([^/]+?)-sitemap([0-9]+)?\.xml$ /index.php?sitemap=$1&sitemap_n=$2 last; } And my nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## Block spammers and other unwanted visitors ## include blockips.conf; #FastCGI fastcgi_intercept_errors on; fastcgi_ignore_client_abort on; fastcgi_buffers 8 32k; fastcgi_buffer_size 64k; fastcgi_read_timeout 120; fastcgi_index index.php; limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #} Help with this problem would be highly appreciated! :)
{ "source": [ "https://serverfault.com/questions/487165", "https://serverfault.com", "https://serverfault.com/users/164228/" ] }
487,170
I've recently begun an exercise in looking into SQL best practices in terms of performance. I've read that it is recommended to put MS SQL data files (mdf) and log files (ldf) on separate volumes. And this is how my organization has been doing it, when SQL was all on local server hard disks. Now that the company has purchased an Equallogic SAN, I'm wondering if this still necessary. Dell support tells me that any volume created on the SAN will be spanned across all drives in the RAID set. In this case, that is 14 spindles. Dell says there is no performance advantage to separating those mdfs and ldfs since the volume will already have max I/O across all those drives. Creating two volumes isn't increasing the number of spindles in use... Any thoughts or suggestions?
{ "source": [ "https://serverfault.com/questions/487170", "https://serverfault.com", "https://serverfault.com/users/164229/" ] }
487,185
We are moving to an office with a server closet that may not have sufficient depth to have a standard server rack. I found a vertical rack mount online (that mounts to the wall) that is 4U. Are there negative effects to mounting servers vertically instead of horizontally?
Certain specific case designs may have issues with mounting in other-than-horizontal attitudes, but there isn't anything inherent to server cases that would suggest this is bad. Bad case-designs would have parts vibrating loose after long periods without gravity to retain them, but this shouldn't be a problem with a major server vendor case. Most blade-servers are vertical in my experience!
{ "source": [ "https://serverfault.com/questions/487185", "https://serverfault.com", "https://serverfault.com/users/100102/" ] }
487,335
I'm working in a multiple-virtualhost environment. I've installed phpMyAdmin for MySQL remote control. The environment is configured as below: one.domain.com two.domain.com onlyphpmyadmin.domain.com Now, if I access any one of the three domains http://one.domain.com/phpmyadmin/ http://two.domain.com/phpmyadmin/ http://onlyphpmyadmin.domain.com/phpmyadmin/ the result is the same: access to phpMyAdmin is allowed. The goal is to obtain a situation like the one below http://one.domain.com/phpmyadmin/ --> access denied http://two.domain.com/phpmyadmin/ --> access denied http://onlyphpmyadmin.domain.com/phpmyadmin/ --> access allowed with no hack similar to <?php if($_SERVER['HTTP_HOST'] != 'onlyphpmyadmin.domain.com') die('access denied'); ... ?> on some phpMyAdmin file. Here is my phpMyAdmin configuration file: Alias /phpmyadmin /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> Options FollowSymLinks DirectoryIndex index.php <IfModule mod_php5.c> AddType application/x-httpd-php .php php_flag magic_quotes_gpc Off php_flag track_vars On php_flag register_globals Off php_admin_flag allow_url_fopen Off php_value include_path . php_admin_value upload_tmp_dir /var/lib/phpmyadmin/tmp php_admin_value open_basedir /usr/share/phpmyadmin/:/etc/phpmyadmin/:/var/lib/phpmyadmin/ </IfModule> </Directory> # Authorize for setup <Directory /usr/share/phpmyadmin/setup> <IfModule mod_authn_file.c> AuthType Basic AuthName "phpMyAdmin Setup" AuthUserFile /etc/phpmyadmin/htpasswd.setup </IfModule> Require valid-user </Directory> # Disallow web access to directories that don't need it <Directory /usr/share/phpmyadmin/libraries> Order Deny,Allow Deny from All </Directory> <Directory /usr/share/phpmyadmin/setup/lib> Order Deny,Allow Deny from All </Directory>
Remove the Alias declaration Alias /phpmyadmin /usr/share/phpmyadmin from the server context and put it in the relevant vhost context <VirtualHost *:80> ServerName onlyphpmyadmin.domain.com . . . Alias /phpmyadmin /usr/share/phpmyadmin </VirtualHost> It may be easier and preferable to just include the whole phpmyadmin config into the relevant vhost <VirtualHost *:80> ServerName onlyphpmyadmin.domain.com . . . include /path/to/phpmyadmin.conf </VirtualHost> and then remove that include from the server context and restart apache for the changes to take affect.
{ "source": [ "https://serverfault.com/questions/487335", "https://serverfault.com", "https://serverfault.com/users/53693/" ] }
487,463
I'm looking into rate-limiting using nginx's HttpLimitReqModule . However, requests are all coming from the same IP (a loadbalancer), with the real IP address in the headers. Is there a way to have nginx rate-limit based on the ip in the X-Forwarded-For header instead of the ip of the source?
Yes, a typical rate-limiting zone definition looks like: limit_req_zone $binary_remote_addr zone=zone:16m rate=1r/s; where $binary_remote_addr is the unique key for the limiter. You should try changing it to the $http_x_forwarded_for variable, which gets the value of the X-Forwarded-For header. Note that this will increase memory consumption, because $binary_remote_addr uses a compressed binary format for storing IP addresses and $http_x_forwarded_for does not. limit_req_zone $http_x_forwarded_for zone=zone:16m rate=1r/s;
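For completeness, the zone then has to be applied with a limit_req directive in a server or location block; a minimal sketch (the burst value is just an example):
location / {
    limit_req zone=zone burst=5;
}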
{ "source": [ "https://serverfault.com/questions/487463", "https://serverfault.com", "https://serverfault.com/users/164378/" ] }
488,486
I can't make lighttpd listen to port 80. ~# /etc/init.d/lighttpd start Starting web server: lighttpd2013-03-16 23:15:02: (network.c.379) can't bind to port: 80 Address already in use failed! Actually I have apache2 installed on my server, too (listening to port 80) but it is not active. I used netstat / netstat -npl but it wasn't helpful How can I figure out what is using the port?
Although people have got used to netstat for this kind of operation, it's good to know that Linux has another great (and actually superior) networking tool — ss . For example, to find out which process has opened port 80 you run it like so: sudo ss -pt state listening 'sport = :80' so there's no need to pipe through external filters. It has lots more useful options, so get yourself familiar with it. For completeness' sake, and since I recently came across man fuser , I can also mention: sudo fuser 80/tcp — this one also saves you from tinkering with cut / grep / awk … Keep in mind this notation is a shortcut; in case of ambiguity, you should use one of the namespaces allowed with -n … , like sudo fuser -n tcp 80 sudo lsof -n -sTCP:LISTEN -i:80 — as pointed out by @ wallenborn . While -n is not strictly required, it's strongly advised, since otherwise lsof does DNS resolution, which usually slows the output down terribly.
{ "source": [ "https://serverfault.com/questions/488486", "https://serverfault.com", "https://serverfault.com/users/150817/" ] }
489,140
I am looking for a utility to encrypt certain directories in Linux. I am not looking for any full disk encryption services, but simply to encrypt a few directories for the purposes of storing files in the cloud. Once retrieving them, I should have to decrypt them before they can be accessed. Looking to do this for a couple of directories (a few hundred GB in size). Any ideas? Preferably CLI based.
I use just GnuPG for this task. The folders get first packed into a TAR-GZ archive: tar czf files.tar.gz /path/to/my/files If not already done, you need to create a GPG private/public key-pair first: gpg --gen-key Follow the instructions. The defaults should be sufficiant for a first test. Something like this will appear: gpg (GnuPG) 2.0.18; Copyright (C) 2011 Free Software Foundation, Inc. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your selection? 1 RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (2048) 4096 Requested keysize is 4096 bits Please specify how long the key should be valid. 0 = key does not expire = key expires in n days w = key expires in n weeks m = key expires in n months y = key expires in n years Key is valid for? (0) Key does not expire at all Is this correct? (y/N) y GnuPG needs to construct a user ID to identify your key. Real name: File Encryption Key Email address: [email protected] Comment: File Encryption Key You selected this USER-ID: "File Encryption Key (File Encryption Key) " Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o You will be asked for a passphrase to the key. It's highly recommended to use a strong one. It is not needed for encryption of files anyway, so don't be worried about the batch use later. If everything is done, something like this will appear on your screen: We need to generate a lot of random bytes. It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy. We need to generate a lot of random bytes. It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy. gpg: key FE53C811 marked as ultimately trusted public and secret key created and signed. gpg: checking the trustdb gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u pub *****/******** 2013-03-19 Key fingerprint = **** **** **** **** **** **** **** **** **** **** uid File Encryption Key (File Encryption Key) sub *****/******** 2013-03-19 Now you may want export the public keyfile for importing it on other machines: gpg --armor --output file-enc-pubkey.txt --export 'File Encryption Key' The File Encryption Key is the name I entered during the key generation procedure. Now I'm using GnuPG on the newly created archive: gpg --encrypt --recipient 'File Encryption Key' files.tar.gz You now have a files.tar.gz.gpg file which is encrypted. You can decrypt it with the following command (you will be asked for your passphrase): gpg --output files.tar.gz --decrypt files.tar.gz.gpg That's the whole magic. Make sure you back up your key! And never forget your passphrase! If not backed up or forgotten, you have gigabytes of data junk! Backup your private key with this command: gpg --armor --output file-enc-privkey.asc --export-secret-keys 'File Encryption Key' Advantages None of the encrypters needs to know sensitive information about the encryption - encryption is done with the public key. 
(You can create the key pair on your local workstation and only transfer the public key to your servers) No passwords will appear in script files or jobs You can have as many encrypters on as many systems as you want If you keep your private key and the passphrase secret, everything is fine and very, very hard to compromise You can decrypt with the private key on Unix, Windows and Linux platforms using the specific PGP/GPG implementation No need for special privileges on encrypting and decrypting systems, no mounting, no containers, no special file systems
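As a small variation (a sketch reusing the key name from above), tar can be piped straight into gpg so the unencrypted archive never touches the disk, and reversed the same way:
tar czf - /path/to/my/files | gpg --encrypt --recipient 'File Encryption Key' --output files.tar.gz.gpg
gpg --decrypt files.tar.gz.gpg | tar xzf -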
{ "source": [ "https://serverfault.com/questions/489140", "https://serverfault.com", "https://serverfault.com/users/148168/" ] }
489,192
All of a sudden (read: without changing any parameters) my netbsd virtualmachine started acting oddly. The symptoms concern ssh tunneling. From my laptop I launch: $ ssh -L 7000:localhost:7000 user@host -N -v Then, in another shell: $ irssi -c localhost -p 7000 The ssh debug says: debug1: Connection to port 7000 forwarding to localhost port 7000 requested. debug1: channel 2: new [direct-tcpip] channel 2: open failed: connect failed: Connection refused debug1: channel 2: free: direct-tcpip: listening port 7000 for localhost port 7000, connect from 127.0.0.1 port 53954, nchannels 3 I tried also with localhost:80 to connect to the (remote) web server, with identical results. The remote host runs NetBSD: bash-4.2# uname -a NetBSD host 5.1_STABLE NetBSD 5.1_STABLE (XEN3PAE_DOMU) #6: Fri Nov 4 16:56:31 MET 2011 root@youll-thank-me-later:/m/obj/m/src/sys/arch/i386/compile/XEN3PAE_DOMU i386 I am a bit lost. I tried running tcpdump on the remote host, and I spotted these 'bad chksum': 09:25:55.823849 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 67, bad cksum 0 (->3cb3)!) 127.0.0.1.54381 > 127.0.0.1.7000: P, cksum 0xfe37 (incorrect (-> 0xa801), 1622402406:1622402421(15) ack 1635127887 win 4096 <nop,nop,timestamp 5002727 5002603> I tried restarting the ssh daemon to no avail. I haven't rebooted yet - perhaps somebody here can suggest other diagnostics. I think it might either be the virtual network card driver, or somebody rooted our ssh. Ideas..?
Problem solved: $ ssh -L 7000:127.0.0.1:7000 user@host -N -v -v ...apparently, ' localhost ' was not liked by the remote host. Yet, remote /etc/hosts contains: ::1 localhost localhost. 127.0.0.1 localhost localhost. while the local network interface is lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33184 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2 Sigh. so much for the bounty of 100rp I put on :)
{ "source": [ "https://serverfault.com/questions/489192", "https://serverfault.com", "https://serverfault.com/users/27213/" ] }
489,532
In RHEL, instead of using the service network restart command, how can I restart a particular network interface, let's say "eth1", with only one command? "Only one command" because that is also the only interface my ssh connection is working on. So if I'm about to use ifdown and then ifup , I will never be able to hit the ifup command, as my ssh connection gets terminated right after the ifdown eth1 command. So there should be a single command which allows me to bring down and then bring up, in one go, the interface which is serving my current ssh connection, so that I do not need to worry about losing the connection to my server entirely. Any ideas, please?
You can use: ifdown eth1 && ifup eth1 As a single command. The && just runs one command, then the other if the first command succeeds . If you are required to use sudo, make sure you use it before each command: sudo ifdown eth1 && sudo ifup eth1 As long as your interface is configured to have the necessary IP and route to match the current configuration, your ssh connection won't drop. If you're worried about using it on a production server that you don't have another method of access to, that's understandable. Though the command does exactly what you want, it's very easy to have a configuration error that is only noticed after running this command. If you don't have an alternate method of access (for example, out-of-band console, or SSHD running on another interface), it's safest not to do this . I use this technique often to perform a 'restart' of the interface, but I generally have a backup method of access available just in case when I do it.
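If you want a little extra insurance against the session dropping halfway through, one trick sometimes used is to detach the command from the ssh session so it finishes even if your shell goes away (a sketch):
nohup sh -c 'ifdown eth1 && ifup eth1' >/dev/null 2>&1 &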
{ "source": [ "https://serverfault.com/questions/489532", "https://serverfault.com", "https://serverfault.com/users/128300/" ] }
489,538
I'm having a strange issue with nginx and PHP-FPM. I have a server set up to serve downloads for my website. It's not very beasty, it's only a Celeron G530 and 4GB of RAM, because of this, I'm running nginx for its low overhead. The server is typically transfering at 30-40Mbps constantly and the port is 100Mbps. The problem is, when I'm requesting some PHP scripts from the server over HTTP the request often times out. I know the time limit in nginx is 60 seconds, and I've verified through the logs that it's hitting that time and closing the connection. I also have Munin running on the server to monitor things, and while this is still over HTTP, on the same server and under the same conditions, it's very quick and snappy with a page load taking no longer than 150ms. In my head it makes logical sense that the problem lies with PHP-FPM (as far as I know Munin uses Perl), but how can I check this? What can I do to drill down on the problem and see what the actual bottlenecks are? If it is PHP-FPM, what can I do to perhaps speed things up? It's not taking up a lot of CPU or RAM, and it's set up to use a socket connection rather than a TCP one with nginx. Thanks for any help.
{ "source": [ "https://serverfault.com/questions/489538", "https://serverfault.com", "https://serverfault.com/users/161427/" ] }
490,825
Similarly to hostname that can be changed in different ways: temporarily using the hostname command permanently using /etc/hostname (or /etc/sysconfig/network or /etc/HOSTNAME , these files are used by the init scripts) I want to change my domain name. I can use the domainname command, but is there a way to make it permanent across reboots? I think it can be configured in /etc/resolv.conf but this file is generally generated and I don't know exactly the difference between search and domain directives. And at what time exactly the information there is passed to the domainname program to set the domain name? Do you have any ideas on that? I'd like to be mostly compatible across distributions. So if if anyone has pointers on the different distributions flavours, I'd gladly accept them.
Set FQDN I'm using Debian 7 and this is what worked for me; thanks to Fernando Ribeiro . sudoedit /etc/hostname server # here's where you put the server's host name activate hostname sudo hostname -F /etc/hostname add domain name and address to the server sudoedit /etc/hosts 192.168.1.2 server.domain server VERIFY > hostname --short server > hostname --domain domain > hostname --fqdn server.domain > hostname --ip-address 192.168.1.2
{ "source": [ "https://serverfault.com/questions/490825", "https://serverfault.com", "https://serverfault.com/users/90931/" ] }
491,007
I need to redirect only http://shop.test.com to http://www.test.com/fedex-orders/ Just homepage. Nothing else. ie http://shop.test.com/?page=blog should NOT redirect.
location = / { return 301 http://www.test.com/fedex-orders/; } The use of = in location = / specifies that the URL must match / exactly, with nothing else preceding or following it.
{ "source": [ "https://serverfault.com/questions/491007", "https://serverfault.com", "https://serverfault.com/users/151023/" ] }
491,033
I'm having trouble with useradd when I'm moving /etc/passwd /etc/shadow /etc/group from /etc to /home and creating a symlink in order to have /etc/{passwd,shadow,group} respectively pointing to /home/{passwd,shadow,group} . I cannot create any user, and useradd outputs: root@client:/home# useradd testuser Adding user `testuser' ... Adding new group `testuser' (1000) ... groupadd: cannot open /etc/group By the way, the useradd output is root@client:/home# adduser testuser useradd: cannot open /etc/passwd
Why does useradd refuse to open a symlinked /etc/passwd ? To answer the question we need to take a look at the source code of useradd (I did this on Ubuntu 12.04, on Debian it may differ slightly): Find out which package owns /usr/sbin/useradd : $ dpkg-query -S /usr/sbin/useradd passwd: /usr/sbin/useradd Install the source: $ apt-get source passwd Reading package lists... Done Building dependency tree Reading state information... Done Picking 'shadow' as source package instead of 'passwd' (...) dpkg-source: info: extracting shadow in shadow-4.1.4.2+svn3283 dpkg-source: info: unpacking shadow_4.1.4.2+svn3283.orig.tar.gz dpkg-source: info: applying shadow_4.1.4.2+svn3283-3ubuntu5.1.diff.gz (...) cd to the source directory: $ cd shadow-4.1.4.2+svn3283/ Search the directory for useradd 's source file, which ideally should be called useradd.c : $ find . -name useradd.c ./src/useradd.c Bingo! Look for error message cannot open /etc/passwd (in fact I only search for cannot open , since the whole string doesn't return any results): $ grep -B 1 'cannot open' src/useradd.c (...) if (pw_open (O_RDWR) == 0) { fprintf (stderr, _("%s: cannot open %s\n"), Prog, pw_dbname ()); (...) -B 1 means print 1 line of leading context before the matching line. This is where the error message you see is being generated. Function pw_open controls whether /etc/passwd can be opened or an error should be thrown. pw_open is not a Linux syscall ( apropos pw_open doesn't return any results), so it is probably implemented within this package. Let's search for it. Tracing pw_open leads to: $ grep -R pw_open * (...) lib/pwio.c:int pw_open (int mode) (...) pw_open implementation is: $ grep -A 3 'int pw_open (int mode)' lib/pwio.c int pw_open (int mode) { return commonio_open (&passwd_db, mode); } Getting closer, but we're not there yet. commonio_open is our new objective. Search for commonio_open : $ grep -R commonio_open * (...) lib/commonio.c:int commonio_open (struct commonio_db *db, int mode) Open lib/commonio.c and scroll to function commonio_open : int commonio_open (struct commonio_db *db, int mode) { (...) fd = open (db->filename, (db->readonly ? O_RDONLY : O_RDWR) | O_NOCTTY | O_NONBLOCK | O_NOFOLLOW); Do you see O_NOFOLLOW ? This is the culprit (from man 2 open ): O_NOFOLLOW If pathname is a symbolic link, then the open fails. Summarizing, useradd.c uses pw_open , which in turn uses commonio_open , which opens /etc/passwd using syscall open with option O_NOFOLLOW , that rejects symbolic links. Although a symlink can be used as a replacement of a file in many (I'd say most) situations, useradd is quite picky and rejects it, probably because a symlinked /etc/passwd strongly suggests that /etc has been tampered with. Why should I leave passwd in /etc ? There are several files in /etc needed to boot and log in, for example (but not limited to): fstab , inittab , passwd , shadow and the init scripts in init.d/ . Any sysadmin expects those files to be there, not symlinked to /home or wherever. So even if you could, you should leave passwd in /etc . Furthermore, the filesystem structure in Linux is well defined, take a look at it here: http://www.pathname.com/fhs/pub/fhs-2.3.html . There is also a chapter for /etc . Moving things around is not recommended.
{ "source": [ "https://serverfault.com/questions/491033", "https://serverfault.com", "https://serverfault.com/users/166252/" ] }
491,222
I'm getting a very strange error when I try to launch SSMS from my taskbar. It seems to open in normal speed, but responds to any input with just a beep. CPU usage is 0% in Task Manager, and there is no [not responding] message. It doesn't respond to a right-click & close all windows command from the taskbar, but "End Task" from task Manager does kill it. When I launch from the command line and specify the server using the -S switch & use Windows Auth using -E it runs fine.
This might be a very simplistic answer, but this happened to me because a dialog it was attempting to present was off of the desktop and/or hidden behind something else. When it beeps like that, can you do the whole "Alt-Space m" and move something into focus?
{ "source": [ "https://serverfault.com/questions/491222", "https://serverfault.com", "https://serverfault.com/users/42580/" ] }
491,585
So I stumbled across svn color and thought it was something useful all our devs would appreciate. The read me for this says to put some code in your ~/.bash_profile , but I'm wondering how I might include this globally on the server, so it's a default for everyone. Is there some global .bash_profile I could add this to? Perhaps another way?
To include commands for all users on your system, use /etc/profile . From http://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files When Bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. Some distros also read /etc/profile.d/* by default, and you can put your customizations there. See http://bash.cyberciti.biz/guide//etc/profile.d
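For this particular svn-color case, a sketch on a distro that reads /etc/profile.d would be to drop the snippet into its own file (the source path below is a placeholder for wherever you keep the script):
sudo tee /etc/profile.d/svn-color.sh >/dev/null <<'EOF'
# load the svn color wrapper for every login shell
. /usr/local/share/svn-color/svn_color.sh
EOF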
{ "source": [ "https://serverfault.com/questions/491585", "https://serverfault.com", "https://serverfault.com/users/85879/" ] }
491,588
I create an open ad-hoc wlan by using iwconfig (I have the same issue with wpa_supplicant as well). there are 4 nodes on the network as seen on the figure below. The nodes run ubuntu 12.04 and debian squeeze, and have 3.7.1, 3.5 and 3.2 kernels. I use two different usb dongle brands (TP link and ZCN) that all have AR9271 chipset and ath9k_htc driver (here is lsusb output and ethtool output ). The problem I am experiencing is that two nodes ( 10.0.0.2 and 10.0.0.5 ) which have TP link usb wifi dongles can ping any node on the network, and vice-versa. However, the other nodes ( 10.0.0.6 and 10.0.0.7 ) that have ZCN wifi dongle cannot ping each other, but they have no problem communicating with TP-link wifi modules. tcpdump shows that 10.0.0.6 and 10.0.0.7 cannot see their arp-request, e.g. 20:37:52.470305 ARP, Request who-has 10.0.0.7 tell 10.0.0.6, length 28 20:37:53.463713 ARP, Request who-has 10.0.0.7 tell 10.0.0.6, length 28 20:37:54.463622 ARP, Request who-has 10.0.0.7 tell 10.0.0.6, length 28 20:37:55.472868 ARP, Request who-has 10.0.0.7 tell 10.0.0.6, length 28 20:37:56.463439 ARP, Request who-has 10.0.0.7 tell 10.0.0.6, length 28 20:37:57.463469 ARP, Request who-has 10.0.0.7 tell 10.0.0.6, length 28 but they are able to see and get reply from TP-link's modules. 20:39:23.634459 ARP, Request who-has 10.0.0.2 tell 10.0.0.6, length 28 20:39:23.634551 ARP, Reply 10.0.0.2 is-at 64:70:02:18:d4:6a (oui Unknown), length 28 20:39:23.636687 IP 10.0.0.6 > 10.0.0.2: ICMP echo request, id 572, seq 1, length 64 20:39:23.636809 IP 10.0.0.2 > 10.0.0.6: ICMP echo reply, id 572, seq 1, length 64 20:39:24.635497 IP 10.0.0.6 > 10.0.0.2: ICMP echo request, id 572, seq 2, length 64 20:39:24.635558 IP 10.0.0.2 > 10.0.0.6: ICMP echo reply, id 572, seq 2, length 64 20:39:28.651946 ARP, Request who-has 10.0.0.6 tell 10.0.0.2, length 28 20:39:28.654021 ARP, Reply 10.0.0.6 is-at 00:19:70:94:7c:8b (oui Unknown), length 28 My question is that what could be the reason that 10.0.0.6 and 10.0.0.7 cannot see the arp-request that they send each other? How can I find out the problem? If I add couple more nodes with ZCN wifi dongle on the network, these nodes are also not able to talk with each other, but they are fine with TP-link. Or if I swap the wifi modules, the nodes with ZCN have always problem but TP-link modules are fine. here is the /etc/network/interfaces , ifconfig , iwconfig , ip a , ip r , route outputs EDIT: I was suspecting if the problem is arp_filter related but /proc/sys/net/ipv4/conf/*/arp_filter is 0 on the all subdomains(*). If I add arp info of 10.0.0.6 and 10.0.0.7 manually on these nodes, tcpdump and wireshark does not show that they send ping to each other. If I ping the broadcast address (10.0.0.255 in my case), 10.0.0.6 and 10.0.0.7 are able hear it. EDIT2: Here is pcap files http://filebin.net/6cle9a5iae from 10.0.0.6 (ZCN module), 10.0.0.7 (ZCN module), and 10.0.0.5 (TP-link module that does not have problem). here is the ping outputs from 10.0.0.6 http://pastebin.com/swFP2CJ9 I captured the packages simultaneously. The link also includes ifconfig ; iwconfig ; and uname- a outputs for each node.
{ "source": [ "https://serverfault.com/questions/491588", "https://serverfault.com", "https://serverfault.com/users/166516/" ] }
493,071
I have a minimal CentOS 6.3, 64 bit acting as gateway with 4 NIC (1 Gbps), each bonded together one for public traffic and other for private, which performs NATing. It has 6 GB RAM and 4 logical cores. We have been using this for the past two years without any problems. I don't have any experience with hardware routers, but I have heard that they have less RAM and CPU and use flash disks. How can a box with low hardware configuration perform better (as in, handle more concurrent connections) than a machine with more RAM and CPU? What are the limiting factors, other than IOS using different methods to handle this?
ASICs . Instead of using a general purpose CPU and task-specific software, you can skip the software and just make the silicon handle the task directly. High performance networking hardware uses ASICs instead of software for the computationally heavy (but relatively logically simple) tasks of something like comparing an IP address to an enormous internet routing table, checking a CAM table for a switching decision, or checking a packet against an ACL. This makes an enormous difference in the speed of those time-sensitive operations, providing a significant advantage over a general-purpose CPU.
{ "source": [ "https://serverfault.com/questions/493071", "https://serverfault.com", "https://serverfault.com/users/133041/" ] }
493,090
We have a setup consisting of 60+ hosts (Solaris and Linux). We would like to develop/install a tool that helps us with password resets, account creation/deletion, and other common user account management tasks. We have looked at webmin , puppet and AD integration as potential solutions, but either they are too expensive, have too many holes (vulnerabilities), or our architecture does not permit such a deployment. So we are still looking. Our requirements are: 1. free and preferably OSS. 2. doesn't need to have a web UI; it could be a simple library/API that we can use to script up a user management tool. 3. works with Linux and Solaris hosts.
{ "source": [ "https://serverfault.com/questions/493090", "https://serverfault.com", "https://serverfault.com/users/166743/" ] }
493,213
Is there a way to temporarily disable public key authentication when ssh'ing, and use password authentication instead? I currently want to access remote server, but I'm using another laptop, not mine. Browsing that link , I found that the command ssh -o PreferredAuthentications=keyboard-interactive -o PubkeyAuthentication=no host1.example.org doesn't work everywhere. And yes, it doesn't work for me. I'm using: OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 Edit: I also tried to type ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no but still have "Permission denied (publickey)". So, is there a specific configuration to do in the remote server, for that command to work? Or, when that command will work as expected? Thanks a lot for advices.
If you want to bypass key authentication when logging to the server, just run: ssh -o PubkeyAuthentication=no user@host
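If you need this regularly for one machine, roughly the same thing can live in ~/.ssh/config on the laptop (sketch; host1.example.org is the host from the question):
Host host1.example.org
    PubkeyAuthentication no
    PreferredAuthentications password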
{ "source": [ "https://serverfault.com/questions/493213", "https://serverfault.com", "https://serverfault.com/users/166813/" ] }
495,723
On my SSD imaging (Source and Destination are 2 SSDs) I get 12GBpm using CloneZilla while with dd I get only 5GBpm. What makes Clonezilla so much faster than dd?
dd just reads from block 0 to block 99999 and copies the data. Clonezilla understands filesystems and understands when there is nothing to be copied (because that's empty space or data from a file that's been deleted). Once you know not to copy all the useless data, it is much easier to copy the real data. From the web page "For unsupported file system, sector-to-sector copy is done by dd in Clonezilla."
{ "source": [ "https://serverfault.com/questions/495723", "https://serverfault.com", "https://serverfault.com/users/110004/" ] }
495,726
Something less software related. :) I need to identify the antenna connector for two of my 3G modems. The connector looks like this: I do have a CRC9 antenna I would like to use, so I just have to find something like an adapter for it. The problem is that I live in Tanzania, where something like this is usually not available. Best regards and thanks, Michael
{ "source": [ "https://serverfault.com/questions/495726", "https://serverfault.com", "https://serverfault.com/users/158099/" ] }
495,914
I'm trying to set up a vagrant. Host is Ubuntu 12.10. Here's my vagrant file: Vagrant::Config.run do |config| config.vm.share_folder("v-root", "/vagrant", ".", :nfs => true) config.vm.network :bridged, :bridge => "eth0" config.vm.define "restserver" do |chefs_config| chefs_config.vm.box = "precise64" chefs_config.vm.box_url = "http://files.vagrantup.com/precise64.box" chefs_config.vm.host_name = "restserver" chefs_config.vm.network :hostonly, "192.168.20.50" chefs_config.vm.forward_port 80, 8080 config.vm.provision :chef_solo do |chef| chef.log_level = :debug chef.cookbooks_path = "cookbooks" chef.run_list.clear chef.add_recipe "apt" chef.add_recipe "base" chef.add_recipe "mongodb::default" chef.add_recipe "nginx" end end end The problem is that my internet access from within the vagrant is terrible. It's very slow. I think the routing tables might be messed up. Here's the output from route -n : Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 eth0 10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 192.168.20.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2 There are 2 routes to the default destination, although on the same NIC and to the same gateway. But perhaps this is causing an issue. At least that's what I thought but deleting the first default route doesn't help. I need host-only networking so the nfs share will work. NAT is used for the port forwarding, and I've added the bridged network to try to give this guest access to the internet. Has anyone any idea what's wrong? DNS is very slow to resolve, and it's slow to download anything from the internet.
Answer: Add the following to the vagrant config: config.vm.customize ["modifyvm", :id, "--natdnshostresolver1", "on"] See here for more: Vagrant / VirtualBox DNS 10.0.2.3 not working
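In case it helps to see it in context, here is a minimal sketch of where the line goes in the Vagrant 1.0-style file used in the question (the rest of the configuration is elided):

Vagrant::Config.run do |config|
  # force VirtualBox NAT to use the host's resolver instead of proxying DNS itself
  config.vm.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  # ... existing share_folder, network and provisioner settings ...
end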
{ "source": [ "https://serverfault.com/questions/495914", "https://serverfault.com", "https://serverfault.com/users/167715/" ] }
496,139
The other day, we notice a terrible burning smell coming out of the server room. Long story short, it ended up being one of the battery modules that was burning up in the UPS unit, but it took a good couple of hours before we were able to figure it out. The main reason we were able to figure it out is that the UPS display finally showed that the module needed to be replaced. Here was the problem: the whole room was filled with the smell. Doing a sniff test was very difficult because the smell had infiltrated everything (not to mention it made us light headed). We almost mistakenly took our production database server down because it's where the smell was the strongest. The vitals appeared to be ok (CPU temps showed 60 degrees C, and fan speeds ok), but we weren't sure. It just so happened that the battery module that burnt up was about the same height as the server on the rack and only 3 ft away. Had this been a real emergency, we would have failed miserably. Realistically, the chances that actual server hardware is burning up is a fairly rare occurrence and most of the time we'll be looking at the UPS the culprit. But with several racks with several pieces of equipment, it can quickly become a guessing game. How does one quickly and accurately determine what piece of equipment is actually burning up? I realize this question is highly dependent on the environment variables such as room size, ventilation, location, etc, but any input would be appreciated.
The general consensus seems to be that the answer to your question comes in two parts: How do we find the source of the funny burning smell? You've got the "How" pretty well nailed down: The "Sniff Test" Look for visible smoke/haze Walk the room with a thermal (IR) camera to find hot spots Check monitoring and device panels for alerts You can improve your chances of finding the problem quickly in a number of ways - improved monitoring is often the easiest. Some questions to ask: Do you get temperature and other health alerts from your equipment? Are your UPS systems reporting faults to your monitoring system? Do you get current-draw alarms from your power distribution equipment? Are the room smoke detectors reporting to the monitoring system? (and can they? ) When should we troubleshoot versus hitting the Big Red Switch? This is a more interesting question. Hitting the big red switch can cost your company a huge amount of money in a hurry: Clean agent releases can be into the tens of thousands of dollars, and the outage / recovery costs after an emergency power off (EPO, "dropping the room") can be devastating. You do not want to drop a datacenter because a capacitor in a power supply popped and made the room smell. Conversely, a fire in a server room can cost your company its data/equipment, and more importantly your staff's lives. Troubleshooting "that funny burning smell" should never take precedence over safety , so it's important to have some clear rules about troubleshooting "pre-fire" conditions. The guidelines that follow are my personal limitations that I apply in absence of (or in addition to) any other clearly defined procedure/rules - they've served me well and they may help you, but they could just as easily get me killed or fired tomorrow, so apply them at your own risk. If you see smoke or fire, drop the room This should go without saying but let's say it anyway: If there is an active fire (or smoke indicating that there soon will be) you evacuate the room, cut the power, and discharge the fire suppression system. Exceptions may exist (exercise some common sense), but this is almost always the correct action. If you're proceeding to troubleshoot, always have at least one other person involved This is for two reasons. First, you do not want to be wandering around in a datacenter and all of a sudden have a rack go up in the row you're walking down and nobody knows you're there. Second, the other person is your sanity check on troubleshooting versus dropping the room, and should you make the call to hit the Big Red Switch you have the benefit of having a second person concur with the decision (helps to avoid the career-limiting aspects of such a decision if someone questions it later). Exercise prudent safety measures while troubleshooting Make sure you always have an escape path (an open end of a row and a clear path to an exit). Keep someone stationed at the EPO / fire suppression release. Carry a fire extinguisher with you (Halon or other clean-agent, please). Remember rule #1 above. When in doubt, leave the room . Take care about your breathing: use a respirator or an oxygen mask. This might save your health in case of chemical fire. Set a limit and stick to it More accurately, set two limits: Condition ("How much worse will I let this get?"), and Time ("How long will I keep trying to find the problem before its too risky?"). 
The limits you set can also be used to let your team begin an orderly shutdown of the affected area, so when you DO pull power you're not crashing a bunch of active machines, and your recovery time will be much shorter, but remember that if the orderly shutdown is taking too long you may have to let a few systems crash in the name of safety. Trust your gut If you are concerned about safety at any time, call the troubleshooting off and clear the room. You may or may not drop the room based on a gut feeling, but regrouping outside the room in (relative) safety is prudent. If there isn't imminent danger you may elect bring in the local fire department before taking any drastic actions like an EPO or clean-agent release. (They may tell you to do so anyway: Their mandate is to protect people, then property, but they're obviously the experts in dealing with fires so you should do what they say!) We've addressed this in comments, but it may as well get summarized in an answer too -- @DeerHunter, @Chris, @Sirex, and many others contributed to the discussion
{ "source": [ "https://serverfault.com/questions/496139", "https://serverfault.com", "https://serverfault.com/users/81366/" ] }
496,149
My application seems to be having issues with memcached not stopping. I'm using php, and have specific keys expiring after every hour. However those keys & values are not repopulating anymore. When I run: /etc/init.d/memcached restart I get the following: Stopping memcached: [FAILED] Starting memcached: [ OK ] I have to run a killall memcached for memcached to stop. I then run a restart and everything is fine. I'm not exactly sure what is causing this, but I need memcached to be restarting every hour. Where should I be looking to find out what is causing this?
{ "source": [ "https://serverfault.com/questions/496149", "https://serverfault.com", "https://serverfault.com/users/167853/" ] }
496,150
I'm in the somewhat embarrassing position of having unintentionally deleted multiple TB of important data via Puppet, and I'm just trying to understand why this might have happened. Firstly, I'm pretty sure the reason it's gone (as in unrecoverable except via backups) is: File { backup => false } in my site.pp. The nodes were set up to hard mount something via NFS, so a mount point /mount, and a line in fstab like this: nfsserver:/mount /mount nfs <options> 0 0 I wanted to get rid of the mount, and replace it with a symlink to the same eventual location (though a different path). My puppet manifest looked like this: class symlinks::linkdirtest ( ) { file { '/mount': ensure => "link", target => "/anotherdir/mount", } mount { "/mount": ensure => "absent", } } This yielded the following when doing a puppet run: notice: /Stage[main]/Symlinks::Linkdirtest/File[/mount]: Not removing directory; use 'force' to override So, I duly (or stupidly) added: class symlinks::linkdirtest ( ) { file { '/mount': ensure => "link", target => "/anotherdir/mount", force => "true", } .... And lo and behold, puppet proceeded to consign the contents of the all-important mount to oblivion, while the mount point itself remained. Any idea why this might have happened? Thanks
{ "source": [ "https://serverfault.com/questions/496150", "https://serverfault.com", "https://serverfault.com/users/87359/" ] }
496,736
I've taken the following steps: Created a VPC (with a single public subnet) Added an EC2 instance to the VPC Allocated an elastic IP Associated the elastic IP with the instance Created a security group and assigned it to the instance Modified the security rules to allow inbound ICMP echo and TCP on port 22 I've done all this and I still can't ping or ssh into the instance. If I follow the same steps minus the VPC bits I am able to set this up without issue. What step am I missing?
To communicate outside of the VPC, each non-default subnet needs a routing table and an internet gateway associated with it (the default subnets get an internet gateway and a routing table by default). Depending on how you created the public subnet in the VPC, you may need to add them explicitly. Your VPC setup sounds like it matches Scenario 1 in the AWS VPC documentation: a private cloud (VPC) with a single public subnet, and an Internet gateway to enable communication over the Internet. You will need to add an internet gateway to your VPC and, in the public subnet's routing table, point 0.0.0.0/0 (the default route) at that internet gateway. There is a nice illustration of the exact network topology inside the documentation. Also, for more information, you can check the VPC Internet Gateway AWS documentation. Unfortunately it's a little messy and a non-obvious gotcha. For more details about connection issues, see also: Troubleshooting Connecting to Your Instance.
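If you prefer to script it, a hedged sketch with the AWS CLI; the IDs are placeholders, and the route table must be the one associated with your public subnet:

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx

The same three steps can also be done from the VPC console (Internet Gateways, then the subnet's route table).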
{ "source": [ "https://serverfault.com/questions/496736", "https://serverfault.com", "https://serverfault.com/users/25820/" ] }
496,983
Assuming hardware failure is not a factor, and the requirement of being able to update periodically, is it possible to never shutdown Linux? I typically do a full reboot after updates, especially kernel updates, but is there a way to keep my machine on and still do these? People always hear about incredible up-time, but how is that really possible if you must reboot after major updates. Maybe a different run level? But then how would the kernel update kick in?
Server/Box "uptime" is an illusion. Unless your objective is to have incredible uptime in order to prove some kind of point then I wouldn't focus on it. What matters is service availability . If you need a service to be available all the time then it might be useful to improve individual system uptime or it may well be simpler and more cost effective to create a cluster, for example, than to try and take the availability of a commodity server from 99% to 99.999%
{ "source": [ "https://serverfault.com/questions/496983", "https://serverfault.com", "https://serverfault.com/users/164245/" ] }
497,169
I am trying to install fail2ban on our Amazon EC2 Linux AMI (CentOS). I know that fail2ban is in the EPEL so I have done the following: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm sudo rpm -Uvh epel-release*rpm However, when I do that I get the following message: package epel-release-6-8.9.amzn1.noarch (which is newer than epel-release-6-8.noarch) is already installed Which implies to me that EPEL is already available but if I do: sudo yum install fail2ban I get: Loaded plugins: priorities, security, update-motd, upgrade-helper amzn-main | 2.1 kB 00:00 amzn-updates | 2.3 kB 00:00 Setting up Install Process No package fail2ban available. Error: Nothing to do I assume that I am misunderstanding something but how can I install from EPEL? EDIT: I have just done the following and found that the repo is not enabled: yum repolist all SO how do I enable a repo on EC2?
You should check that epel is enabled using yum repolist enabled If it's not then you can edit /etc/yum.repos.d/epel.repo and change the [epel] section enabled=0 to enabled=1 or use yum-config-manager --enable epel
{ "source": [ "https://serverfault.com/questions/497169", "https://serverfault.com", "https://serverfault.com/users/121800/" ] }
497,430
I have nginx with the following setup: server { listen 80; server_name site.com www.site.com; root /home/site/public_html; listen 443; #server_name site.com www.site.com; #root /home/site/public_html; ssl_certificate /root/site.pem; ssl_certificate_key /root/site.key; However, when I view the SSL connection I am getting: An error occurred during a connection to grewpler.com. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) I am using TrustWave Premium SSL as the SSL cert authority.
Solved. You need to add "ssl" to the end of the listen directive: listen 443 ssl;
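For reference, a minimal corrected server block along the lines of the config in the question (paths are the ones from the question, everything else elided):

server {
    listen 80;
    listen 443 ssl;
    server_name site.com www.site.com;
    root /home/site/public_html;
    ssl_certificate /root/site.pem;
    ssl_certificate_key /root/site.key;
}

Without the ssl flag, nginx speaks plain HTTP on port 443, so the browser's TLS handshake gets misread as an oversized record, which is what the ssl_error_rx_record_too_long message is complaining about.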
{ "source": [ "https://serverfault.com/questions/497430", "https://serverfault.com", "https://serverfault.com/users/112405/" ] }
497,438
Could anyone kindly provide the commands to completely reset the iptables (firewall) for Ubuntu 12.04 to its default "factory" setting? From what I understand, doing this wrong would cause one to be locked out of the linux box?
Set the default policy on the iptables to ACCEPT: iptables -P INPUT ACCEPT iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT Then flush the rules: iptables -F INPUT iptables -F OUTPUT iptables -F FORWARD Note, this will not affect alternate tables, NAT tables, PRE/POST routing tables, etc.
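If you also want the other tables back to an empty state, a short hedged addition (run it from a console session rather than over SSH, in case something goes wrong):

iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X

The -X deletes any user-defined chains left over after the flush.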
{ "source": [ "https://serverfault.com/questions/497438", "https://serverfault.com", "https://serverfault.com/users/168619/" ] }
498,500
I have the following /etc/hosts file on a ubuntu 12.04 machine 127.0.0.1 localhost 10.248.27.66 ec2-50-112-220-110.us-west-2.compute.amazonaws.com puppetmaster # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts However the host command does not resolve the name puppetmaster correctly, while the telnet command is does root@ip-10-248-34-162:/home/ubuntu# host puppetmaster Host puppetmaster not found: 3(NXDOMAIN) root@ip-10-248-34-162:/home/ubuntu# telnet puppetmaster 8140 Trying 10.248.27.66... Connected to ec2-50-112-220-110.us-west-2.compute.amazonaws.com. Escape character is '^]'. Why does the host command not resolve entries in /etc/hosts?
The host program uses libresolv to perform a DNS query directly, i.e., does not use gethostbyname . Most programs, when attempting to connect to another host, invoke the gethostbyname system call or a similar function. This function obeys the configuration of /etc/nsswitch.conf . This file has a line which in Ubuntu 12.04 defaults to the following: hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 which means that it will first use /etc/hosts , then fall back to DNS queries. If you want to perform a host lookup this way, you can do this with getent hosts . For example: $ getent hosts serverfault.com 198.252.206.16 serverfault.com I hope this helps.
{ "source": [ "https://serverfault.com/questions/498500", "https://serverfault.com", "https://serverfault.com/users/85577/" ] }
498,900
Is there anyway, with Linux, to purposely cause a block device to report an I/O error, or possibly simulate one for testing purposes?
Yes, there's a very plausible way to do this with device mapper. The device mapper can recombine block devices into a new mapping/order of your choosing. LVM does this. It also supports other targets (some of which are quite novel) like 'flakey' to simulate a failing disk and 'error' to simulate failed regions of disk. One can construct a device which deliberately has IO black holes on it which will report IO errors when crossed. First, create some virtual volume to use as a target and make it addressable as a block device. dd if=/dev/zero of=/var/lib/virtualblock.img bs=512 count=1048576 losetup /dev/loop0 /var/lib/virtualblock.img So, to start, this creates a 512M file that is the basis of our virtual block device which we will punch a 'hole' in. No hole exists yet though. If you were to mkfs.ext4 /dev/loop0 you'd get a perfectly valid filesystem. So, let's use dmsetup which, using this block device, will create a new device which has some holes in it. Here is an example first dmsetup create errdev0 0 261144 linear /dev/loop0 0 261144 5 error 261149 787427 linear /dev/loop0 261139 This will create a device called 'errdev0' (typically in /dev/mapper). When you type dmsetup create errdev0 it will wait for stdin and will finish on ^D being input. In the example above, we've made a 5 sector hole (2.5kb) at sector 261144 of the loop device. We then continue through the loop device as normal. This script will attempt to generate you a table that will place holes at random locations approximately spread out around 16Mb (although it's pretty random). #!/bin/bash start_sector=0 good_sector_size=0 for sector in {0..1048576}; do if [[ ${RANDOM} == 0 ]]; then echo "${start_sector} ${good_sector_size} linear /dev/loop0 ${start_sector}" echo "${sector} 1 error" start_sector=$((${sector}+1)) good_sector_size=0 else good_sector_size=$((${good_sector_size}+1)) fi done echo "${start_sector} $((${good_sector_size}-1)) linear /dev/loop0 ${start_sector}" The script assumes you have also created a 512Mb device and that your virtual block device is on /dev/loop0 . You can just output this data to a text file as a table and pipe it into dmsetup create errdev0 . Once you have created the device you can then begin to use it like a normal block device, first by formatting it and then by placing files on it. At some point you should come across some IO problems where you hit sectors that are really IO holes in the virtual device. Once you have finished, use dmsetup remove errdev0 to remove the device. If you want to make it more likely to get an IO error you can add holes more frequently or change the size of the holes you create. Note that putting errors in certain sections is likely to cause problems from the get-go, i.e. at 32Mb into a device you can't write a superblock, which ext normally tries to do, so the format won't work. For added fun -- you can actually just losetup then mkfs.ext4 /dev/loop0 and fill it with data. Once you've got a nice working filesystem on there, simply unmount the filesystem and add some holes using dmsetup and remount that!
{ "source": [ "https://serverfault.com/questions/498900", "https://serverfault.com", "https://serverfault.com/users/160212/" ] }
499,084
I'm trying to design a program that sends a text when a certain (non-periodic) event occurs. Right now, I'd like a script that finds when this event occurs, and then schedules a (cron-like) job that will send a text just before that even occurs. A more concrete example would look like this: Script A runs and detects the next time of the event Script A uses ??? to schedule Script B be to run at $time At $time, ??? calls script B which sends the text. The problem is, the event could be at a random time within 11 days, and it only happens once. Cron seems inappropriate for this -- I don't want this job to run more than once. So I guess (in short), is there a utility that provides for the delayed execution of a script that's not periodic?
Yup. It is called at . Example: echo 'logrotate -f /etc/logrotate.conf' | at '00:00'
{ "source": [ "https://serverfault.com/questions/499084", "https://serverfault.com", "https://serverfault.com/users/169382/" ] }
499,269
I have multipath IO configured server 2012 blade that shows warnings like the following during MPIO path failure: The IO operation at logical block address 0 for Disk 7 was retried. I know what is causing the warning to happen so I am not looking for the cause but what does this message actually mean? Does it mean that if this IO was a write operation then server actually lost data that it was trying to write? Thank you for any light you can shed on the meaning of this warning message.
No it does not mean that the data was lost. It simply means that the IRP (IO Request Packet) timed out while the IO System waited for it to complete, and so it was tried again. When a thread begins any IO operation, the IO manager creates an IRP to represent the operation as it passes through the system. The IRP gets stored in its initial state in a buffer/look-aside list, so that it can be retried if it fails the first time. That provides the atomicity that one would expect from any transactional system so that we can be more confident that you're not going to get a bunch of corrupted or incomplete data written to your disk. This event makes perfect sense in the event of an MPIO failure. Say Windows goes to read or write something from SAN storage. The request is dispatched, and at the same instant, I cut one of the cables to the SAN. That request is never going to complete, and so Windows will try the request again, only this time the request will follow the other path. These events also occur when the disks are overburdened or just really slow. You might notice these messages coincide with scheduled backups, etc. The disk might just be slow and busy, and some random IRP timed out and had to try again. The IRP could be getting stuck in an interrupt service routine, or a deferred procedure call, or whatever. I could see having a lot of IO filter drivers in your stack exacerbating this issue as well. It's not that this behavior did not occur just like this in previous versions of Windows, it's just that Microsoft apparently decided to surface these events in Win8/Server 2012. Edit: You can find the outstanding IRPs of a thread with a kernel debugger: kd> !irp 1a2b3c4d , where you previously found that address by issuing the command kd> !process 8f7d6c4a which will list all the IRPs associated to the threads associated with that process. kd> !process 0 0 to list all the processes running. Once you list the information about an IRP using the !irp command, you can easily spot which driver last handled the IRP because it will have a > pointing to it in the list. Then to get more information about what that driver was doing with that IRP, do a kd> !devobj 1a2b3c4d5e6f where that is the actual address of the device object. Then do a kd> dt 0x1a2b3c3c2b1a _CLASS_PRIVATE_FDO_DATA using the address of the PrivateFdoData structure you got. Now you're ready to dump the AllTransferPacketsList data structure you got from PrivateFdoData. The idea is, you're tracking down what driver was doing what with the IRP the last time it was seen. If the IRP is AWOL for too long, it's timed out and retried from the beginning. This can be caused by so many things... even a stray cosmic ray. But the important thing is that the transaction will be retried from the beginning, and it will not be considered complete until the IO manager says it is. Oh, and there's also thread-agnostic IO which is a completely different can of worms. :) For further reading on this topic, I highly recommend chapter 8, I/O System, of Windows Internals 6th edition, from Mark Russinovich, Margosis, et al. **Edit: ** I did finally find the official KB for this error: http://support.microsoft.com/kb/2819485/EN-US The IO operation should be retried 8 times, once per minute, until Windows gives up. Edit: As promised: https://docs.microsoft.com/en-us/archive/blogs/ntdebugging/interpreting-event-153-errors
{ "source": [ "https://serverfault.com/questions/499269", "https://serverfault.com", "https://serverfault.com/users/12890/" ] }
499,565
I was wondering if there is a way to change the default directory that I get put into after I SSH into my Ubuntu server. 99% of the time when I'm logging into my server, it is to access files within a specific directory: /var/www/websites Is there a config file that I can edit that will make sure I am put straight into this directory when I login?
There are four ways to achieve this: add cd /var/www/websites to the end of your .bash_profile . This is executed only for interactive logins (e.g. ssh). add cd /var/www/websites to the end of your .profile . This is more likely to be called by shells which are not bash (e.g. zsh). (Added from @Phil Hord's comment) add cd /var/www/websites to the end of your .bashrc . I use this one on our puppetmasters as I always want to be in /etc/puppet/environments/dkaarsemaker there instead of my homedir :-) Change your homedirectory on the server to /var/www/websites (this is not really a good idea)
{ "source": [ "https://serverfault.com/questions/499565", "https://serverfault.com", "https://serverfault.com/users/156809/" ] }
500,467
I have Apache2 with PHP + PHP-FPM configured according to: http://wiki.apache.org/httpd/PHP-FPM I am writing a script that will take a long time to execute on an internal Vhost, but keep getting timed out, everything runs flawlessly if the script executes in under 30 seconds. My apache log tells me: [Wed Apr 17 21:57:23.075175 2013] [proxy_fcgi:error] [pid 9263:tid 140530454267648] (70007)The timeout specified has expired: [client 58.169.202.172:49017] AH01075: Error dispatching request to :, referer: When trying to run the script I am given a 503 Service Unavailable after exactly 30 seconds of execution time. Logically this would mean I have a timeout directive or setting set to 30 seconds, but I have these in my Vhost's config: Timeout 600 <IfModule proxy_module> ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/home/pyrokinetiq/scripts/$1 timeout=600 ProxyTimeout 600 </IfModule> (php-fpm is running on port 9001 for me) I have also tried placing the Timeout and ProxyTimeout in httpd.conf with no difference. It seems there's another timeout setting somewhere that's specific to mod_proxy_fcgi , but I can't find it. I installed the Apache2 httpd from the official tarball, none of the mods seem to have come with any configuration files. If anyone can point me in the right direction it would be much appreciated.
I finally fixed this problem after testing several configuration parameters. I tested the solution twice, removing all previous changes. Only one parameter was needed for me to fix it. For the latest versions of httpd and mod_proxy_fcgi you can simply add timeout= to the end of the ProxyPassMatch line, e.g.: ProxyPassMatch ^/(.+\.php.*)$ fcgi://127.0.0.1:9000/<docroot>/$1 timeout=1800 For older versions it was a little more complicated, e.g.: <Proxy fcgi://127.0.0.1:9000> ProxySet timeout=1800 </Proxy> ProxyPassMatch ^/(.+\.php.*)$ fcgi://127.0.0.1:9000/<docroot>/$1 I needed to add the Proxy directive to set the timeout to 30 minutes. In some applications, usually when operating on a database, there are routines that can take more than 10 minutes to execute. I temporarily set the timeout to 30 minutes to ensure they finish. This is especially useful when using the installation wizard, which takes too much time (in my humble opinion). By the way, the initial input that helped me to solve this issue was found in the following URL address.
{ "source": [ "https://serverfault.com/questions/500467", "https://serverfault.com", "https://serverfault.com/users/169681/" ] }
500,764
I am building a provisioning script for a ubuntu vagrant vm , on a ubuntu host , both 12.10 64bit When installing the following packages: sudo apt-get -y install php5-xsl graphviz php-pear unison I get the warning: dpkg-reconfigure: unable to re-open stdin: No file or directory have tried searching but results are throwing up every other error with apt-get possible, can't find out how to supress the warning above. The installs work, but the warning above is causing error lines in the vagrant up stdout. Anybody any idea what could be the cause or how to suppress the warning
I got the error message to go away by putting the following in my provisioning script, prior to any apt-get calls: export DEBIAN_FRONTEND=noninteractive This makes debconf use a frontend that expects no interactive input at all, preventing it from even trying to access stdin .
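For example, at the top of the Vagrant provisioning script, before the package installs (the package list is the one from the question):

export DEBIAN_FRONTEND=noninteractive
apt-get -y install php5-xsl graphviz php-pear unison

Note that the variable only affects the current shell, so it has to be exported inside the script that the provisioner runs, not on the host.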
{ "source": [ "https://serverfault.com/questions/500764", "https://serverfault.com", "https://serverfault.com/users/91774/" ] }
501,562
I would like to know how to configure IIS 7.0 to allow the download of APK files? I found an article which tells me to add a new MIME type: File Name Extension: .apk MIME Type: application/vnd.android.package-archive Because In IIS there needs to be a MIME type added to allow IIS to support the .APK file type. Is this all that is needed? Thanks for any replies
Generally, adding a new MIME type should be all that's required: application/vnd.android.package-archive
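If you would rather not touch the server-wide settings, the same mapping can be added per-site in web.config; a minimal sketch:

<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".apk" mimeType="application/vnd.android.package-archive" />
    </staticContent>
  </system.webServer>
</configuration>

IIS 7 refuses to serve any static file whose extension has no MIME mapping, which is why .apk downloads fail until this is added.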
{ "source": [ "https://serverfault.com/questions/501562", "https://serverfault.com", "https://serverfault.com/users/170644/" ] }
502,593
I'm running a shell command at the end of a Jenkins deployment to restart a forever script: npm install && forever stop app.js && forever start -a -l /var/log/forever.log app.js When I run that as a user jenkins everything works fine and the console output from the build history also tells me that the forever script is running. However, the process stops right after the deployment is finished and the forever process is stopped. What causes this behavior and how can I fix it?
Jenkins kills all process spawn by the job. This can be disabled by setting the BUILD_ID environment variable to something else: export BUILD_ID=dontKillMe see https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller for details
{ "source": [ "https://serverfault.com/questions/502593", "https://serverfault.com", "https://serverfault.com/users/59170/" ] }
502,597
I have a VM (Xen based), I can boot up the machine but there are some warning during the boot up. Are they normal? If not, how to solved? md: Scanned 0 and added 0 devices. md: autorun ... md: ... autorun DONE. EXT3-fs: barriers not enabled kjournald starting. Commit interval 5 seconds EXT3-fs (xvda): mounted filesystem with writeback data mode VFS: Mounted root (ext3 filesystem) readonly on device 202:0. devtmpfs: mounted Freeing unused kernel memory: 688k freed Write protecting the kernel read-only data: 12288k Freeing unused kernel memory: 560k freed Freeing unused kernel memory: 780k freed init: Failed to create pty - disabling logging for job init: Temporary process spawn error: No space left on device init: Failed to create pty - disabling logging for job init: Temporary process spawn error: No space left on device init: Failed to create pty - disabling logging for job init: Temporary process spawn error: No space left on device init: Failed to create pty - disabling logging for job init: Temporary process spawn error: No space left on device init: ureadahead main process (1331) terminated with status 5 init: Failed to create pty - disabling logging for job init: Temporary process spawn error: No space left on device init: Failed to create pty - disabling logging for job init: Temporary process spawn error: No space left on device init: Failed to create pty - disabling logging for job init: Temporary process spawn error: No space left on device FATAL: Module nf_conntrack_ftp not found. FATAL: Module nf_nat_ftp not found. FATAL: Module nf_conntrack_netbios_ns not found. fsck from util-linux 2.20.1 mountall: Disconnected from Plymouth /dev/xvda: clean, 195668/2653056 files, 1402362/6160384 blocks When I run df -h after boot up, it is df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda 24G 6.0G 17G 27% / devtmpfs 493M 4.0K 493M 1% /dev none 99M 180K 99M 1% /run none 5.0M 0 5.0M 0% /run/lock none 494M 0 494M 0% /run/shm
{ "source": [ "https://serverfault.com/questions/502597", "https://serverfault.com", "https://serverfault.com/users/52746/" ] }
502,714
I'm almost positive everyone on here knows the meaning of 127.0.0.1. But, why is that ALWAYS localhost? Who picked that arbitrary IP? Why was that IP picked? Why not something more simple such as 1.0.0.0? Is there some special meaning to 127.0.0.1?
Jon Postel picked 127. Before the Internet Assigned Numbers Authority took over ( RFC 3232 ) around the time of his death ( RFC 2468 ), he was the "czar" of Internet address and port assignments, having essentially nominated himself for the task. ( RFC 349 ) Back in the early 1980s, when IPv4 as we know it was first being hashed out, existing networks were given "class A" address blocks in the 32-bit address space that would go into effect in 1983 ( RFC 801 ). Both the initial assignments and the 127 assignment you ask about, as well as the first definitions of "class A", "class B" and "class C" IP addresses, were first published in Postel's RFC 790 . (Note that "classes" were superseded by CIDR in RFC 1519 , now RFC 4632 .) In RFC 790, Postel defined 127 as "reserved". 127.rrr.rrr.rrr Reserved [JBP] Its first formal definition appears in RFC 990 , where it is defined as follows: The class A network number 127 is assigned the "loopback" function, that is, a datagram sent by a higher level protocol to a network 127 address should loop back inside the host. No datagram "sent" to a network 127 address should ever appear on any network anywhere. And again in RFC 1060 : (g) {127, <any>} Internal host loopback address. Should never appear outside a host. Thus, any address within 127.0.0.0/8 is to be considered loopback and be routed back to the local host. The current list of special-use IPv4 addresses is RFC 6890 , which obsoleted RFC 5735 , which in turn obsoleted RFC 3330 . RFC 5735 states: 127.0.0.0/8 - This block is assigned for use as the Internet host loopback address. A datagram sent by a higher-level protocol to an address anywhere within this block loops back inside the host. This is ordinarily implemented using only 127.0.0.1/32 for loopback. As described in [RFC1122], Section 3.2.1.3 , addresses within the entire 127.0.0.0/8 block do not legitimately appear on any network anywhere. Finally, in any IPv4 subnet , the lowest address is not usable as it represents the network route. So the first usable address in the subnet, and therefore the most commonly seen, is 127.0.0.1.
{ "source": [ "https://serverfault.com/questions/502714", "https://serverfault.com", "https://serverfault.com/users/171236/" ] }
502,849
I am using webmin (which uses yum) to install updates on my server, and it somestimes updates kernel as well including kernel-firmware and kernel-headers. Do I need to restart the server after a kernel update?
There are at least two reasons for rebooting: You probably want to use the advantages of the newer version (security fixes) Usually during a kernel update the module tree of the old kernel is removed. Thus if you (or some script) unload a module then the system cannot load it again because it finds only the newer one on disk (if at all) and this is compiled for a different kernel and thus cannot be loaded (at least usually).
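A quick way to see whether a reboot is actually pending on a CentOS-style system is to compare the running kernel with the newest one installed; a hedged sketch:

uname -r
rpm -q --last kernel | head -1

If the two versions differ, the machine is still running the old kernel and a reboot is needed to pick up the update.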
{ "source": [ "https://serverfault.com/questions/502849", "https://serverfault.com", "https://serverfault.com/users/51792/" ] }
503,513
I am having trouble understanding why we need to purchase SSL certificates when we can generate them locally using openSSL. What is the difference between the certificate I purchase and a test certificate I generate locally? Is it just a big scam?
One word - trust. The SSL certificate from a provider that your browser trusts means that they have at least done basic verification to say that you are who you say you are. Otherwise I could make my own certificates for google.com or yourbank.com and pretend to be them. Paid certificates do not provide any extra level of encryption over self signed (usually). But a self signed certificate will cause the browser to throw an error. Yes parts of SSL are a scam (a verisign certificate vs a geotrust where verisign are up to 100x more expensive) but not all of it. If this is all internal stuff, then there is no need for a paid certificate as you can employ your own trust methods (e.g. Do nothing, or perhaps just fingerprint checking).
{ "source": [ "https://serverfault.com/questions/503513", "https://serverfault.com", "https://serverfault.com/users/157990/" ] }
503,721
I have several game servers that use a certain .dll file to run. Sometimes I need to update the game servers but I don't wanna interrupt the games that are already running. Is there a way to replace the .dll file (it's locked by Windows) so the next instances of the game servers that use that file open the new version, and the old ones keep using the old version of that .dll until they are restarted? Is it safe to just unlock the file using one of those tools that do that and replace it?
Actually, you can and it usually works without any issue (although not always) What you do is rename the file without moving it (i.e. without changing the file location in the folder structure) and move the new file to the containing folder. That will keep the handles to the file valid and working so that pre-existing instances still will be able to access the file properly and new instances (or new handles) wil go to the new file. Obviously, if a program re-opens the same dll file and expects it to stay exactly the same (for instance, if there is resources to be loaded from the dll and that references to these resources is extracted from code running when the dll is loaded), this will cause issue but this is definitely not the norm.
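As an illustration only (the paths and file names are hypothetical), the swap from a Windows command prompt looks like this:

ren C:\GameServer\game.dll game.dll.old
copy /y C:\Staging\game.dll C:\GameServer\game.dll

Running instances keep their handles to the renamed game.dll.old, while newly started instances load the fresh game.dll; the old file can be deleted once the last old instance exits.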
{ "source": [ "https://serverfault.com/questions/503721", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
504,070
New to Puppet and Chef tools. Seems like the job that they are doing can be done with shell scripting. Maybe it was done in shell scripts until these came along. I would agree they are more readable. But, are there any other advantages over shell scripts besides just being readable?
A domain-specific language makes a big difference in the amount of code you write. For example, you could argue that there's not much difference between: chmod 640 /my/file and file { "/my/file": mode => 640, } but there's a great deal of difference between these: FILE=/my/file chmod 640 $FILE chown foo $FILE chgrp bar $FILE wget -O $FILE "http://my.puppet.server/dist/$FILE" # where the URL contains "Hello world" and file { "/my/file": mode => 640, owner => foo, group => bar, content => "Hello world", } What happens if the wget fails? How will your script handle that? And what happens if there's something after that in your script that requires $FILE to be there with the correct contents? You might argue that one could just put echo "Hello world" > $FILE in the script, except that in the first example the script must be run on the client, whereas puppet compiles all of this on the server. So if you change the content, you only have to change it on the server and it changes it for as many systems as you want to put it on. And puppet handles dependencies and transfer problems for you automatically. There's just no comparison - proper configuration management tools save you time and complexity. The more you try to do, the more shell scripts seem inadequate, and the more effort you will save by doing it with puppet.
{ "source": [ "https://serverfault.com/questions/504070", "https://serverfault.com", "https://serverfault.com/users/114503/" ] }
504,308
When configuring HA Proxy, how do you decide what values to assign to the timeouts? I've read a half dozen samples in various blogs, and everyone uses different timeouts and no one discusses why. HAProxy seems specifically worried about client, connect, and server, which HAPRoxy throws a warning about if you leave completely unset: While not properly invalid, you will certainly encounter various problems with such a configuration. To fix this, please ensure that all following timeouts are set to a non-zero value: 'client', 'connect', 'server'. The documentation is unhelpful in this regard: it suggests "slightly above multiples of 3 seconds" but not why you'd choose a multiple of 1 vs 100 or 42. The RPM I'm using (Amazon Linux repository) sets these defaults: timeout connect 10s timeout client 1m timeout server 1m Two of which are exact multiples of 3 seconds, violating the only official advice I've seen. If you don't have specific tuning advice, maybe an easier question is: what should I expect to go wrong with really short or really long timeouts?
The TCP RTO (retransmission timeout) starts at three seconds. (RFC 1122) If a transmitted packet hasn't had an acknowledgement returned in that time, then it's assumed to be lost and retransmitted. This is almost certainly what the author is referring to. (Note that the RTO gets tuned up or down dynamically by various algorithms, outside the scope of this question.) Keep in mind that this really only applies to connections between your frontend server and the clients (i.e. web users). In normal scenarios, the connections between HAProxy and your backend servers should be on a LAN and you should use much shorter timeouts, so that malfunctioning backends get taken out of service sooner. As for your web users, some of them may be on very high latency connections, such as satellite, and may experience higher than normal retransmits due to this. The RTT on a connection where a satellite is in use may exceed 2000 ms even if all is well. With all this in mind, you will generally want very short timeouts for timeout connect and very long ones for timeout client. For timeout server, this depends on your web application. When setting the timeout, consider the complexity of the web app being served, and how long it might take in the worst case to process a complex request. If in doubt, raise the value.
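Put together, a hedged starting point for the defaults section; the exact numbers are assumptions to tune for your own application, not values from the HAProxy documentation:

defaults
    timeout connect 5s
    timeout client  1m
    timeout server  1m

A short connect timeout surfaces dead backends quickly on a LAN, while the longer client/server values leave room for slow clients and slow application responses.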
{ "source": [ "https://serverfault.com/questions/504308", "https://serverfault.com", "https://serverfault.com/users/4786/" ] }
504,431
I would like to view the HTTP headers sent from Apache (listening on port 80) to Tomcat (on port 4080) in a Linux machine. According to Wikipedia , Header fields are colon-separated name-value pairs in clear-text string format. I've tried some variations of the following tcpdump command: $ sudo tcpdump -lnX dst port 4080 -c 10 11:29:28.605894 IP SOME_IP.33273 > SOME_IP.4080: P 0:49(49) ack 1 win 23 <nop,nop,timestamp 1191760962 509391143> 0x0000: 4500 0065 3a9f 4000 3f06 0084 628a 9ec4 E..e:.@.?...b... 0x0010: 628a 9c97 81f9 0ff0 9e87 eee0 144b 90e1 b............K.. 0x0020: 8018 0017 fb43 0000 0101 080a 4708 d442 .....C......G..B 0x0030: 1e5c b127 4845 4144 202f 6461 7070 6572 .\.'HEAD./dapper 0x0040: 5f73 6572 7669 6e67 2f41 644d 6f6e 6b65 _serving/AdMonke 0x0050: 793f y? The result was always the same - a strange mix of gibberish and English words (e.g. HEAD ). How can I view the headers in a human-readable format?
Here's a one-liner I came up with for displaying request and response HTTP headers using tcpdump (which should work for your case too): sudo tcpdump -A -s 10240 'tcp port 4080 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | egrep --line-buffered "^........(GET |HTTP\/|POST |HEAD )|^[A-Za-z0-9-]+: " | sed -r 's/^........(GET |HTTP\/|POST |HEAD )/\n\1/g' It limits cuts the packet off at 10Kb and only knows GET, POST and HEAD commands, but that should be enough in the majority of cases. EDIT : modified it to get rid of the buffers at every step to make it more responsive. Needs Perl and stdbuf now though, so use the original version if you don't have those: EDIT : Changed script port targets from 80 to 4080, to actually listen for traffic that went through apache already, instead of direct outside traffic arriving to port 80: sudo stdbuf -oL -eL /usr/sbin/tcpdump -A -s 10240 "tcp port 4080 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)" | egrep -a --line-buffered ".+(GET |HTTP\/|POST )|^[A-Za-z0-9-]+: " | perl -nle 'BEGIN{$|=1} { s/.*?(GET |HTTP\/[0-9.]* |POST )/\n$1/g; print }' Some explanations: sudo stdbuf -oL -eL makes tcpdump run line-buffered the tcpdump magic filter is explained in detail here: https://stackoverflow.com/questions/11757477/understanding-tcpdump-filter-bit-masking grep is looking for any lines with GET, HTTP/ or POST; or any lines that look like a header (letters and numbers followed by colon) BEGIN{$|=1} causes perl to run line-buffered s/.*?(GET |HTTP/[0-9.]* |POST )/\n$1/g adds a newline before the beginning of every new request or response
{ "source": [ "https://serverfault.com/questions/504431", "https://serverfault.com", "https://serverfault.com/users/10904/" ] }
504,626
I am trying to find some documentation or best practice guides for virtualization with respect to provisioning vCPUs per physical core (of a CPU). If it matters, I am looking at vmWare for the virtualization implementation. For example, an Intel Xeon CPU may have 4, 8, etc. cores. I am interested in learning more about provisioning beyond just one vCPU per one physical core. The vendor I am talking to definitely thinks that a single core can be provisioned into multiple vCPUs. What I commonly see in my research thus far is, "Well, it depends on your application." And in that case, my application is editing code, compiling/linking, testing, and configuration management. Of course not all of the VMs need to be configured with multiple vCPUs per core, but in the general case.
A single physical CPU can be utilized as many vCPUs. You rarely run out of CPU resources in virtualization solutions. RAM and storage are always the limiting factors... Remember, in VMware, CPU utilization is represented in MHz used, not cores... Unless you're pegging all of your virtual CPUs at 100% ALL OF THE TIME , I don't think your vendor is correct. Let's look at the following cluster of systems... 9 ESXi hosts. 160 virtual machines 104 physical CPU cores across the cluster. The average virtual machine profile is: 4 vCPU and 4GB to 18GB RAM. CPU can safely be oversubscribed... but remember, it can also be limited, reserved and prioritized at the VM level. from another active cluster - 3 hosts 42 virtual machines
{ "source": [ "https://serverfault.com/questions/504626", "https://serverfault.com", "https://serverfault.com/users/20665/" ] }
505,015
How I can use a variable name in file path ? ssl_certificate /home/ec2-user/.certificados/$server_name.crt; ssl_certificate_key /home/ec2-user/.certificados/$server_name.key;
You cannot use variables in every directive. ssl_certificate is treated as a literal string and is one of the many directives where variables are unsupported. To specify different certificates for hosts, you have to explicitly write it in a server block: server { server_name example.com; ssl_certificate /home/ec2-user/.certificados/example.com.crt; ssl_certificate_key /home/ec2-user/.certificados/example.com.key; # ... } server { server_name example.net; ssl_certificate /home/ec2-user/.certificados/example.net.crt; ssl_certificate_key /home/ec2-user/.certificados/example.net.key; # ... } # ... If you feel uncomfortable duplicating the configuration, create templates and generate the nginx configuration using those templates. See also http://nginx.org/en/docs/faq/variables_in_config.html .
{ "source": [ "https://serverfault.com/questions/505015", "https://serverfault.com", "https://serverfault.com/users/171413/" ] }
505,098
tail -f /var/log/nginx/error.log 2013/05/04 23:43:35 [error] 733#0: *3662 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 127.0.0.1, server: _, request: "GET /robots.txt HTTP/1.1", host: "kowol.mysite.net" HTTP/1.1", host: "www.joesfitness.net" 2013/05/05 00:49:14 [error] 733#0: *3783 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 127.0.0.1, server: _, request: "GET / http://www.qq.com/ HTTP/1.1", host: "www.qq.com" 2013/05/05 03:12:33 [error] 733#0: *4232 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 127.0.0.1, server: _, request: "GET / HTTP/1.1", host: "joesfitness.net" I am getting these from nginx error log, I don't have a "kowol" sub domain, I don't have any links to qq.com or joesfitness.net on my site. Whats going on? Edit: Nginx default config: server { listen 8080; ## listen for ipv4; this line is default and implied listen [::]:8080 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name _; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; deny all; } # Only for nginx-naxsi : process denied requests #location /RequestDenied { # For example, return an error code #return 418; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-cgi alone: fastcgi_pass 127.0.0.1:9000; #With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} }
It's a strange one all right, though I'm going to bet the problem is with: try_files $uri $uri/ /index.html; The problem here is that the second parameter here, $uri/ , causes each of the files in your index directive to be tried in turn. If none are found, it then moves on to /index.html , which causes the same location block to be re-entered, and since it still doesn't exist, you get an endless loop. I would rewrite this as: try_files $uri $uri/ =404; to return a 404 error if none of the index files you specified in the index directive exist. BTW, those requests you are seeing are Internet background noise . In particular, they are probes to determine whether your web server is an open proxy and can be abused to hide a malicious user's origin when he goes to perform malicious activity. Your server isn't an open proxy in this configuration, so you don't really need to worry about it.
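To confirm the fix, one hedged check (assuming you can reach nginx directly on port 8080 as configured above, and that the requested file doesn't exist on disk) is:

```bash
# Request a path that isn't on disk (robots.txt, going by the log above);
# expect a plain "404 Not Found" rather than a 500 from the redirection cycle
curl -i http://127.0.0.1:8080/robots.txt
```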
{ "source": [ "https://serverfault.com/questions/505098", "https://serverfault.com", "https://serverfault.com/users/167369/" ] }
505,300
How can I configure the ISC DHCP server to give an infinite lease time to all clients? man dhcpd: Lease Lengths DHCP leases can be assigned almost any length from zero seconds to infinity. What lease length makes sense for any given subnet, or for any given installation, will vary depending on the kinds of hosts being served. But dhcpd does not work at all with a zero lease-time value: ddns-update-style none; #option domain-name "dobisel.com"; option domain-name-servers 8.8.8.8,8.8.4.4; default-lease-time 0; <---- here max-lease-time 0; <----- here authoritative; log-facility local7; subnet 192.168.11.0 netmask 255.255.255.240 { range 192.168.11.2 192.168.11.14; option routers 192.168.11.1; option broadcast-address 192.168.11.15; option subnet-mask 255.255.255.240; }
It is not mentioned explicitly in the manpage, but setting the lease time to -1 in the options you mention, default-lease-time -1; max-lease-time -1; disables lease expiry, so the leases' expiration is effectively set "to infinity".
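A couple of optional checks, assuming a stock ISC dhcpd layout — the config and lease file paths vary by distribution:

```bash
# Validate the configuration before restarting the daemon
dhcpd -t -cf /etc/dhcp/dhcpd.conf

# After clients renew, infinite leases are typically recorded with "never"
# as their end time in the leases database
grep -B2 'never' /var/lib/dhcp/dhcpd.leases
```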
{ "source": [ "https://serverfault.com/questions/505300", "https://serverfault.com", "https://serverfault.com/users/152439/" ] }
505,929
A customer of ours makes industrial robots that run on very old, but stable, hardware and software. The only bottleneck has always been the hard drive in these moving machines. Due to constant movement (shocks etc.) HDDs normally don't survive beyond six months. So now we're trying to connect an SSD. The motherboard doesn't have a SATA connection (no surprise there) so we're using a SATA-to-IDE converter to connect it to the IDE port on the motherboard. This works and the BIOS recognizes the drive. Only problem is that it won't boot. It freezes on POST. In the BIOS (from the 1990s), we need to specify some values, called 'HEADS', 'SYL', 'CLUSTER', and 'LANDZ'. Unlike traditional HDDs, this drive obviously has no platters. Is there a way the drive mimics these things on IDE and can we somehow find out what these values should be for our specific drive? We have changed the values at random and sometimes it passes POST, sometimes it doesn't. If it does, however, it still doesn't boot and just says there's no drive connected. In short, does anyone have any experience connecting a SATA SSD to an old IDE motherboard and what can we do to make this work (if anything)?
I would use an industrial IDE SSD ...( another option ). It doesn't sound like you need much space, and there are SSDs made specifically for this purpose. I would NOT bother with IDE adapters and consumer-level SSDs for this application. If you do go for compact flash, again, try something that's purpose-built for the application.
{ "source": [ "https://serverfault.com/questions/505929", "https://serverfault.com", "https://serverfault.com/users/145776/" ] }
505,949
I am looking for a solution to move files that are more than a year old as of today. My log partition is getting full, but I cannot simply delete the files. They are needed for a long, long time. Anyway, one solution I came up with is: find /sourcedirectory -mtime 365 -exec mv "{}" /destination/directory/ \; Would this work? Asking because of the " -mtime 365 " - would this move the files that are a year old as of today to a new location? Thank you!
You're almost right. -mtime 365 will be all files that are exactly 365 days old. You want the ones that are 365 days old or more, which means adding a + before the number like this -mtime +365 . You may also be interested in the -maxdepth 1 flag, which prevents you from moving items in sub directories. If you want to be sure that you are only moving files, not directories, add -type f to the line. At the end of the line we add \; so that find knows that's the end of the command we are executing. So the line should be: find /sourcedirectory -maxdepth 1 -mtime +365 -type f -exec mv "{}" /destination/directory/ \; To be on the safe side, start by just doing a ls -l instead of mv - that way you can check in advance that you're getting exactly the files you want, before re-running it with mv, like this: find /sourcedirectory -maxdepth 1 -mtime +365 -type f -exec ls -l {} \;
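If it helps, a couple of read-only commands (assuming GNU find) to preview the selection before committing to the mv:

```bash
# Count how many files are older than a year before moving anything
find /sourcedirectory -maxdepth 1 -mtime +365 -type f | wc -l

# Optionally list the oldest few to eyeball the selection
find /sourcedirectory -maxdepth 1 -mtime +365 -type f -printf '%T@ %p\n' | sort -n | head
```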
{ "source": [ "https://serverfault.com/questions/505949", "https://serverfault.com", "https://serverfault.com/users/159984/" ] }
506,005
Running Ubuntu 12.04, I want to compare 2 directories, say folder1/ and folder2/, and copy any files that differ to folder3/. There are also nested files, so matching subdirectories should be copied as well. Is there a single command that would help me? I can get the full list of changed files by running: rsync -rcnC --out-format="%f" folder1/ folder2/ But rsync doesn't seem to have the ability to "export" these files to a different target directory. Can I pipe the list to cp or some other program, so that the files are copied and the directories are created as well? For example, I tried rsync -rcnC --out-format="%f" folder1/ folder2/ | xargs cp -t folder3/ but that doesn't preserve directories; it simply copies all the files directly into folder3/
Use --compare-dest. From the man page: --compare-dest=DIR - This option instructs rsync to use DIR on the destination machine as an additional hierarchy to compare destination files against doing transfers (if the files are missing in the destination directory). If a file is found in DIR that is identical to the sender's file, the file will NOT be transferred to the destination directory. This is useful for creating a sparse backup of just files that have changed from an earlier backup. first check your syntax with --dry-run rsync -aHxv --progress --dry-run --compare-dest=folder2/ folder1/ folder3/ Then once you're satisfied with the output: rsync -aHxv --progress --compare-dest=folder2/ folder1/ folder3/ this link has a good explanation of --compare-dest scope.
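As an independent sanity check that doesn't involve rsync at all, diff can list which files differ between the two trees; the result should roughly line up with what ends up in folder3/:

```bash
# Report files that differ, plus files present in only one of the trees
diff -qr folder1/ folder2/
```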
{ "source": [ "https://serverfault.com/questions/506005", "https://serverfault.com", "https://serverfault.com/users/131569/" ] }
506,098
I was hoping to find a way to increase bandwidth between two desktop switches I have, and I wondered if connecting them with two cables (or perhaps three) instead of just one might increase theoretical bandwidth (which I am currently not in danger of saturating yet anyways) From this question ( 2 ethernet connections between two Switches ), I kind of assume I can't do what I hope to do, but I would love specific confirmation. I assume typical cheap desktop switches would be unmanaged and thus useless and/or self-defeating when connecting them with more than one cable and trying to create a little more bandwidth between them. Is this one of the differences between managed and unmanaged switches? And if I had managed switches, would it work (connect them with two cables to essentially double the bandwidth)?
An unmanaged switch won't have the feature you're looking for and connecting two ports between both switches will create a switch loop, which will effectively render the switches and the network unusable. A managed switch should have the feature that you're looking for, which is called Link Aggregation (LAG). Before purchasing a managed switch make sure to verify that it does indeed support LAG.
{ "source": [ "https://serverfault.com/questions/506098", "https://serverfault.com", "https://serverfault.com/users/172955/" ] }
506,099
I want to output the following: Average CPU utilization across all cores, over the last n seconds, in a single percentage value. So if I have 4 CPUs and their combined user and system utilization over the last 10 seconds is: # not actual output CPU1 10% CPU2 20% CPU3 30% CPU4 40% I want to be able to get this output: 25 Since the average of those utilizations is 25%. What is the simplest one-liner to output this value? (Not being able to specify the duration is fine, as long as it's a reasonable default).
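One possible approach, assuming the sysstat package (which provides mpstat) is installed — note that this counts everything that is not idle, which is close to, but not strictly, user+system time:

```bash
# Sample all CPUs for 10 seconds and print average utilization as a single number
# (100 minus the %idle column of the "Average: all" line)
mpstat 10 1 | awk '/Average:/ && $2 == "all" {printf "%.0f\n", 100 - $NF}'
```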
{ "source": [ "https://serverfault.com/questions/506099", "https://serverfault.com", "https://serverfault.com/users/1843/" ] }
506,177
I'm looking for a simple way to know if a server is using the Server Name Indication SSL extension for its HTTPS certificate on a website. A method that uses either a browser or Unix command line is fine. Thanks!
SNI is initiated by the client, so you need a client that supports it. Unless you're on windows XP, your browser will do. If your client lets you debug SSL connections properly (sadly, even the gnutls/openssl CLI commands don't), you can see whether the server sends back a server_name field in the extended hello. Note that the absence of this field only means that the server didn't use the server_name in the client hello to help pick a certificate, not that it doesn't support it. So, in practice the easiest test is to simply try connecting. For this you need to know two names that resolve to the same IP, to which an ssl connection can be made. https is easiest as you can then simply browse to both names and see if you're presented with the correct certificate. There are three outcomes: You get a wildcard certificate (or one with a subjectAltName) which covers both names: you learn nothing You get the wrong certificate for at least one of them: either the server does not support SNI or it has been configured wrong You get two different certificates, both for the correct name: SNI is supported and correctly configured. A slightly more complicated test which will yield more info is to have wireshark open and capturing while browsing. You can then find the relevant packets by filtering for ssl.handshake. The screenshots below are an example of a client hello/server hello pair where SNI is supported: Again, of course the absence of a server_name field in the server hello does not indicate that SNI is not supported. Merely that the client-provided server_name was not used in deciding which certificate to use.
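From a Unix command line, an easy probe (assuming a reasonably recent OpenSSL; the two hostnames below are placeholders for names that resolve to the same IP) is to connect twice with different -servername values and compare the returned certificate subjects:

```bash
# Ask for each hostname explicitly via SNI and print the certificate subject;
# two different subjects for the same IP means SNI is working.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -subject
echo | openssl s_client -connect example.com:443 -servername example.net 2>/dev/null \
  | openssl x509 -noout -subject
```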
{ "source": [ "https://serverfault.com/questions/506177", "https://serverfault.com", "https://serverfault.com/users/172989/" ] }
506,190
I want to run a non-system-wide couchdb instance (as a local, unprivileged user). But couchdb needs to be able to read /etc/couchdb/*.ini, which are -rwxrwx--- 1 couchdb couchdb , even if I tell it to reset the configuration chain with -n . How do I convince couchdb it doesn't need to read those files?
{ "source": [ "https://serverfault.com/questions/506190", "https://serverfault.com", "https://serverfault.com/users/72124/" ] }
506,278
I changed the value of mysql's general_log_file variable to something else, and now I'm trying to change it back to what it was originally, /var/lib/mysql/ubuntu.log . But when I do: SET GLOBAL general_log_file = '/var/lib/msyql/ubuntu.log'; I get this error: ERROR 1231 (42000): Variable 'general_log_file' can't be set to the value of '/var/lib/msyql/ubuntu.log' What's going on?
ERROR 1231 (42000): Variable 'general_log_file' can't be set to the value of '/var/lib/msyql/ubuntu.log' What's going on? The simple answer is that this path doesn't exist. You typed too fast: there is a typo in the directory name ( msyql instead of mysql ), so it should be /var/lib/mysql/ubuntu.log .
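For example, from a shell — assuming you can connect as a user with the SUPER privilege:

```bash
# Set the corrected path and confirm it took effect
mysql -u root -p -e "SET GLOBAL general_log_file = '/var/lib/mysql/ubuntu.log';
                     SHOW VARIABLES LIKE 'general_log_file';"
```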
{ "source": [ "https://serverfault.com/questions/506278", "https://serverfault.com", "https://serverfault.com/users/149745/" ] }
506,462
and yes, I have 127.0.0.1 localhost myhost.mydomain.eu myhost.domain2.eu localhost.localdomain 127.0.1.1 myhost in my hosts file. What is wrong? Sendmail started to put this error into the log: May 9 19:08:54 myhost sm-mta[17103]: unable to qualify my own domain name (myhost) -- using short name Is this configuration OK?
Sendmail: short host name to FQDN via /etc/hosts entry. Reorder your /etc/hosts entries: 127.0.0.1 localhost localhost.localdomain 127.0.1.1 myhost.mydomain.eu myhost.domain2.eu myhost Sendmail should then qualify myhost to myhost.mydomain.eu (the leftmost name on the /etc/hosts line that contains myhost ).
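To verify, something along these lines should work — the service command and log path vary by distribution:

```bash
# The fully qualified name should now come back as myhost.mydomain.eu
hostname -f

# Restart sendmail and watch for the warning disappearing
service sendmail restart
tail -f /var/log/mail.log
```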
{ "source": [ "https://serverfault.com/questions/506462", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
506,465
Wondering if there is a limit to the number of files that can be stored inside a directory, in CentOS 6. There is one particular directory which could potentially have millions of subdirectories. Storage capacity aside, is there a limit to the number of files that can be contained in a directory? (I assume here that "file" can mean either a file or a directory). Thanks very much!
It depends on your filesystem. I'm going to assume it's ext4: The maximum number of files is global, not per directory, and it's determined by the number of inodes allocated when the filesystem was created. Try running the following command to see the number of inodes per filesystem. $ df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sdb2 7864320 388119 7476201 5% / The maximum number of subdirectories seems to be 64000 according to Wikipedia ( http://en.wikipedia.org/wiki/Ext4 ), but the kernel wiki ( http://kernelnewbies.org/Ext4 ) suggests that it is unlimited.
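A couple of quick checks, assuming the filesystem really is ext4 (the device name below is just the one from the df output above):

```bash
# Total and free inodes per mounted filesystem
df -i

# Inode count and ext4 features for a specific device (run as root);
# the dir_nlink feature is what lifts the old 64000-subdirectory cap
tune2fs -l /dev/sdb2 | egrep -i 'inode count|features'
```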
{ "source": [ "https://serverfault.com/questions/506465", "https://serverfault.com", "https://serverfault.com/users/144154/" ] }