source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
366,372 | I'm working with Apache2 and Passenger for a Rails project.
I would like to create a self-signed SSL certificate for testing purposes. sudo openssl rsa -des3 -in server.key -out server.key.new When I enter the above command, it says writing RSA key
Enter PEM pass phrase: If I do not enter the pass phrase, I'm getting the below error unable to write key
3079317228:error:28069065:lib(40):UI_set_result:result too small:ui_lib.c:869:You must type in 4 to 1024 characters
3079317228:error:0906406D:PEM routines:PEM_def_callback:problems getting password:pem_lib.c:111:
3079317228:error:0906906F:PEM routines:PEM_ASN1_write_bio:read key:pem_lib.c:382 Is it possible to generate an RSA key without giving a pass phrase, since I am not sure how the /etc/init.d/httpd script will start the HTTP server without human intervention (i.e. if I give a 4-character pass phrase, it expects me to provide it while starting the Apache HTTP server)? | If you are generating a self-signed cert, you can do both the key and cert in one command like so: openssl req -nodes -new -x509 -keyout server.key -out server.cert Oh, and what @MadHatter said in his answer about omitting the -des3 flag. | {
"source": [
"https://serverfault.com/questions/366372",
"https://serverfault.com",
"https://serverfault.com/users/112891/"
]
} |
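A minimal sketch of both approaches from the answer above; the file names, the certificate subject and the -days value are placeholders, not anything mandated by the answer:

```bash
# Strip the passphrase from an existing key: omitting -des3 writes the output unencrypted
openssl rsa -in server.key -out server.key.insecure

# Or generate an unencrypted key plus a self-signed cert in one step (-nodes = no passphrase)
openssl req -nodes -new -x509 -days 365 \
  -keyout server.key -out server.cert \
  -subj "/CN=myserver.example.com"
```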
366,392 | I have (for example) this log entry in dmesg output: [600711.395348] do_trap: 6 callbacks suppressed Is there a way to convert this 'dmesg' time to 'real' time, to know when this event happened? | It looks as if it was implemented recently for Quantal (12.10): see http://brainstorm.ubuntu.com/idea/17829/ . Basically, dmesg is reported to have a new switch -T, --ctime . Edit. As another extension on Ignacio's answer, here are some scripts to enhance dmesg output on older systems. ( Note: for the python version of the code shown there, one will want to replace < and > back to <> to make it usable again. ) Finally, for a single value like 600711.395348 one could do ut=`cut -d' ' -f1 </proc/uptime`
ts=`date +%s`
date -d"70-1-1 + $ts sec - $ut sec + $(date +%:::z) hour + 600711.395348 sec" +"%F %T" and get the event date and time in the local time zone. ( Please note that due to round-off errors the last second digit probably won't be accurate. ) Edit(2): Please note that -- as per Womble's comment below -- this will only work if the machine was not hibernated etc. ( In that case, one should instead look at the syslog configs in /etc/*syslog* and check the appropriate files. See also: dmesg vs /var/messages . ) | {
"source": [
"https://serverfault.com/questions/366392",
"https://serverfault.com",
"https://serverfault.com/users/104326/"
]
} |
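The same calculation wrapped up as a hypothetical helper script; it assumes GNU date and bc are installed, and, as noted above, is only accurate if the machine has not been suspended or hibernated:

```bash
#!/bin/bash
# Usage: ./dmesg2date.sh 600711.395348
stamp="$1"
uptime_s=$(cut -d' ' -f1 /proc/uptime)                 # seconds since boot
boot_epoch=$(echo "$(date +%s) - $uptime_s" | bc)      # approximate boot time as a Unix epoch
event_epoch=$(echo "$boot_epoch + $stamp" | bc)
date -d "@${event_epoch%.*}" +"%F %T"                  # print in the local time zone
```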
366,481 | Or any GUI SSH for Amazon EC2 Linux instance servers? I need to transfer files between two Linux virtual servers and currently I have PuTTY (which Amazon recommends). However, I am new to the server/virtual world and have no experience with commands. I was looking for a GUI for beginners like me so I can basically copy/paste or drag/drop folders into the server. Is there a friendly GUI out there for this? I was googling a bit and I found SuperPutty? Apparently it has the capabilities but is not fully developed. What would you recommend? | I would recommend WinSCP (I've been using it to transfer files to my virtual private server for years). | {
"source": [
"https://serverfault.com/questions/366481",
"https://serverfault.com",
"https://serverfault.com/users/112773/"
]
} |
366,575 | Is it possible to start LXC container inside another LXC container? | I'm going to dispel a few myths here. This is just a bad idea. I'm sorry. – Jacob Mar 5 at 20:30 I don't see how this is a bad idea. It's really just a chroot inside a chroot. On one hand, it could possibly decrease performance in some negligible manner (nothing compared to running a VM inside a VM). On the other hand, it's likely to be more secure (e.g. more isolated from the root host system and it's constituents). Do you actually have a real reason to do this? Please remember that questions here should be about actual problems that you face. – Zoredache Mar 5 at 21:52 I agree 100% with the poster's following comment. Furthermore, I think it's safe to assume that everybody who posts a question on here likely thinks that they have a real reason to do [ it ].. I think, that lxc should be able to simplify VM migration(and backup+recovery too). But I'm not sure about cases, when there is no access to host OS(cheap vps for example). – Mikhail Mar 6 at 11:17 I actually came across this question back in June when I was first diving into LXC for PaaS/IaaS projects, and I was particularly interested in the ability to allow users to emulate cloud environments for development purposes. LXCeption. We're too deep. – Tom O'Connor Mar 6 at 22:46 I laughed a little bit when I read this one, but that's not, at all, the case :) Anyway, I eventually set up a VirtualBox environment with a stock install of Ubuntu 12.04 LTS Server Edition after reading all this, thinking that this was 100% possible. After installing LXC, I created a new container, and installed LXC inside the container with apt-get. Most of the installation progressed well, but resulted in error eventually due to a problem with the cgroup-lite package, whose upstart job failed to start after the package had been installed. After a bit of searching, I came across this fine article at stgraber.org (the goodies are hiding under the "Container Nesting" section): sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-host-container
sudo wget https://www.stgraber.org/download/lxc-with-nesting -O /etc/apparmor.d/lxc/lxc-with-nesting
sudo /etc/init.d/apparmor reload
sudo sed -i "s/#lxc.aa_profile = unconfined/lxc.aa_profile = lxc-container-with-nesting/" /var/lib/lxc/my-host-container/config
sudo lxc-start -n my-host-container
(in my-host-container) sudo apt-get install lxc
(in my-host-container) sudo stop lxc
(in my-host-container) sudo sed -i "s/10.0.3/10.0.4/g" /etc/default/lxc
(in my-host-container) sudo start lxc
(in my-host-container) sudo lxc-create -n my-sub-container -t ubuntu
(in my-host-container) sudo lxc-start -n my-sub-container Installing that AppArmor policy and restarting the daemon did the trick (don't forget to change the network ranges, though!). In fact, I thought that particular snippet was so important that I mirrored it @ http://pastebin.com/JDFp6cTB just in case the article ever goes offline. After that, sudo /etc/init.d/cgroup-lite start succeeded and it was smooth sailing. So, yes, it is possible to start an LXC container inside of another LXC container :) | {
"source": [
"https://serverfault.com/questions/366575",
"https://serverfault.com",
"https://serverfault.com/users/112993/"
]
} |
367,003 | When are iptables byte counters reset? On reboot or other rotation? | On reboot, or whenever you ask them to be cleared with the -Z option to iptables. | {
"source": [
"https://serverfault.com/questions/367003",
"https://serverfault.com",
"https://serverfault.com/users/97576/"
]
} |
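A few standard iptables invocations for working with those counters; the rules file path is just an example:

```bash
# Show per-rule packet and byte counters without resetting them
iptables -L -v -n -x

# Print and zero the counters in one step
iptables -L -v -n -x -Z

# Counters are only preserved across a rule reload if you save/restore them explicitly
iptables-save -c > /etc/iptables.rules
iptables-restore -c < /etc/iptables.rules
```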
367,185 | We have a set of directories containing lucene indexes. Each index is a mix of different file types (differentiated by extension) eg: 0/index/_2z6.frq
0/index/_2z6.fnm
..
1/index/_1sq.frq
1/index/_1sq.fnm
.. (it's about 10 different extensions) We'd like to get a total by file extension, eg: .frq 21234
.fnm 34757
.. I've tried various combinations of du/awk/xargs but I'm finding it tricky to do exactly this. | For any given extension you can use find /path -name '*.frq' -exec ls -l {} \; | awk '{ Total += $5} END { print Total }' to get the total file size for that type. And after some thinking: #!/bin/bash
ftypes=$(find . -type f | grep -E ".*\.[a-zA-Z0-9]*$" | sed -e 's/.*\(\.[a-zA-Z0-9]*\)$/\1/' | sort | uniq)
for ft in $ftypes
do
echo -n "$ft "
find . -name "*${ft}" -exec ls -l {} \; | awk '{total += $5} END {print total}'
done Which will output the size in bytes of each file type found. | {
"source": [
"https://serverfault.com/questions/367185",
"https://serverfault.com",
"https://serverfault.com/users/113209/"
]
} |
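If GNU findutils is available, a single pass can total every extension at once. This is a sketch rather than the answer's script, and it assumes file names contain no tabs:

```bash
# Print "<size>\t<basename>" for each file, then sum by extension in awk
find . -type f -printf '%s\t%f\n' | awk -F'\t' '
  { n = split($2, a, ".")
    ext = (n > 1) ? "." a[n] : "(none)"
    total[ext] += $1 }
  END { for (e in total) printf "%s\t%d\n", e, total[e] }'
```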
367,192 | I'm using perfmon's "PhysicalDisk\% Idle Time" to determine when the disk is being used heavily. The question is, what's the best/quickest way to narrow down what was using the disk? I'm aware of the following perfmon counters but they each have issues: Memory\Pages/sec: useful if disk usage was due to paging, useless
otherwise. Process\IO Data Bytes/sec: includes non-disk IO as well
(eg network), doesn't include processes started after perfmon setup,
and it can be time consuming to match processes with their perfmon
id. Resource Monitor's Disk tab gives very useful information, but unfortunately it does not offer historical logging. It can not tell me why, for example, "% Idle Time" was 0 for 20 seconds at 10am. The information I'm after is: Which processes were using the disk the most? What files were they accessing? | | {
"source": [
"https://serverfault.com/questions/367192",
"https://serverfault.com",
"https://serverfault.com/users/40216/"
]
} |
367,205 | I have been facing this annoying error when trying to set up GitHub on a Mac; the OS version is Lion. Basically, I followed the steps as mentioned at this URL: http://help.github.com/mac-set-up-git/ I always get stuck at the step of executing this command "ssh -T [email protected]" I have tried to output the debugging message and below is the message log. The last message shows that it's due to an error 'Write failed: Broken pipe'. Please give me a solution to fix this error if you have ever encountered it before and were able to fix it. user-users-macbook:.ssh useruser$ ssh -vT [email protected]
OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: Connecting to github.com [207.97.227.239] port 22.
debug1: Connection established.
debug1: identity file /Users/useruser/.ssh/id_rsa type 1
debug1: identity file /Users/useruser/.ssh/id_rsa-cert type -1
debug1: identity file /Users/useruser/.ssh/id_dsa type -1
debug1: identity file /Users/useruser/.ssh/id_dsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2
debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.6
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host 'github.com' is known and matches the RSA host key.
debug1: Found key in /Users/useruser/.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/useruser/.ssh/id_rsa
debug1: Remote: Forced command: gerve thsonvt
debug1: Remote: Port forwarding disabled.
debug1: Remote: X11 forwarding disabled.
debug1: Remote: Agent forwarding disabled.
debug1: Remote: Pty allocation disabled.
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: Remote: Forced command: gerve thsonvt
debug1: Remote: Port forwarding disabled.
debug1: Remote: X11 forwarding disabled.
debug1: Remote: Agent forwarding disabled.
debug1: Remote: Pty allocation disabled.
debug1: Authentication succeeded (publickey).
Authenticated to github.com ([207.97.227.239]:22).
debug1: channel 0: new [client-session]
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LC_CTYPE = UTF-8
Write failed: Broken pipe | | {
"source": [
"https://serverfault.com/questions/367205",
"https://serverfault.com",
"https://serverfault.com/users/113219/"
]
} |
367,438 | There is a particular directory ( /var/www ), that when I run ls (with or without some options), the command hangs and never completes. There is only about 10-15 files and directories in /var/www . Mostly just text files. Here is some investigative info: [me@server www]$ df .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root
50G 19G 29G 40% /
[me@server www]$ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg_dev-lv_root
3.2M 435K 2.8M 14% / find works fine. Also I can type in cd /var/www/ and press TAB before pressing enter and it will successfully tab-completion list of all files/directories in there: [me@server www]$ cd /var/www/
cgi-bin/ create_vhost.sh html/ manual/ phpMyAdmin/ scripts/ usage/
conf/ error/ icons/ mediawiki/ rackspace sqlbuddy/ vhosts/
[me@server www]$ cd /var/www/ I have had to kill my terminal sessions several times because of the ls hanging: [me@server ~]$ ps | grep ls
gdm 6215 0.0 0.0 488152 2488 ? S<sl Jan18 0:00 /usr/bin/pulseaudio --start --log-target=syslog
root 23269 0.0 0.0 117724 1088 ? D 18:24 0:00 ls -Fh --color=always -l
root 23477 0.0 0.0 117724 1088 ? D 18:34 0:00 ls -Fh --color=always -l
root 23579 0.0 0.0 115592 820 ? D 18:36 0:00 ls -Fh --color=always
root 23634 0.0 0.0 115592 816 ? D 18:38 0:00 ls -Fh --color=always
root 23740 0.0 0.0 117724 1088 ? D 18:40 0:00 ls -Fh --color=always -l
me 23770 0.0 0.0 103156 816 pts/6 S+ 18:41 0:00 grep ls kill doesn't seem to have any effect on the processes, even as sudo. What else should I do to investigate this problem? It just randomly started happening today. UPDATE dmesg is a big list of things, mostly related to an external USB HDD that I've mounted too many times and the max mount count has been reached, but that is an unrelated problem I think. Near the bottom of dmesg I'm seeing this: INFO: task ls:23579 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff88041fc230c0 0 23579 23505 0x00000080
ffff8801688a1bb8 0000000000000086 0000000000000000 ffffffff8119d279
ffff880406d0ea20 ffff88007e2c2268 ffff880071fe80c8 00000003ae82967a
ffff880407169ad8 ffff8801688a1fd8 0000000000010518 ffff880407169ad8
Call Trace:
[<ffffffff8119d279>] ? __find_get_block+0xa9/0x200
[<ffffffff814c97ae>] __mutex_lock_slowpath+0x13e/0x180
[<ffffffff814c964b>] mutex_lock+0x2b/0x50
[<ffffffff8117a4d3>] do_lookup+0xd3/0x220
[<ffffffff8117b145>] __link_path_walk+0x6f5/0x1040
[<ffffffff8117a47d>] ? do_lookup+0x7d/0x220
[<ffffffff8117bd1a>] path_walk+0x6a/0xe0
[<ffffffff8117beeb>] do_path_lookup+0x5b/0xa0
[<ffffffff8117cb57>] user_path_at+0x57/0xa0
[<ffffffff81178986>] ? generic_readlink+0x76/0xc0
[<ffffffff8117cb62>] ? user_path_at+0x62/0xa0
[<ffffffff81171d3c>] vfs_fstatat+0x3c/0x80
[<ffffffff81258ae5>] ? _atomic_dec_and_lock+0x55/0x80
[<ffffffff81171eab>] vfs_stat+0x1b/0x20
[<ffffffff81171ed4>] sys_newstat+0x24/0x50
[<ffffffff810d40a2>] ? audit_syscall_entry+0x272/0x2a0
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b And also, strace ls /var/www/ spits out a whole BUNCH of information. I don't know what is useful here... The last handful of lines: ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=68, ws_col=145, ws_xpixel=0, ws_ypixel=0}) = 0
stat("/var/www/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/var/www/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC)
getdents(3, /* 16 entries */, 32768) = 488
getdents(3, /* 0 entries */, 32768) = 0
close(3) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 9), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3093b18000
write(1, "cgi-bin conf create_vhost.sh\te"..., 125cgi-bin conf create_vhost.sh error html icons manual mediawiki phpMyAdmin rackspace scripts sqlbuddy usage vhosts
) = 125
close(1) = 0
munmap(0x7f3093b18000, 4096) = 0
close(2) = 0
exit_group(0) = ? | Run strace ls /var/www/ and see what it hangs on. It's certainly hung on I/O -- that's what the D state in your ps output means (and since kill doesn't help, it's one of the uninterruptible I/O syscalls). Most hangs involve an NFS server that's gone to god, but based on your df that isn't the case here. A quick check of dmesg for anything related to filesystems or disks might be worthwhile, just in case. | {
"source": [
"https://serverfault.com/questions/367438",
"https://serverfault.com",
"https://serverfault.com/users/21307/"
]
} |
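A few extra commands that are often useful when chasing processes stuck in uninterruptible I/O (the D state the answer mentions); the PID is taken from the question's output:

```bash
# List D-state processes and the kernel function they are sleeping in
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

# Kernel hung-task warnings like the one quoted in the question
dmesg | grep -i "blocked for more than"

# Kernel stack of one stuck process (needs root)
cat /proc/23579/stack

# Timestamped trace to see exactly which syscall stalls
strace -f -tt -o /tmp/ls.trace ls /var/www/
```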
367,921 | $ ps | grep django
28006 ttys004 0:01.12 /usr/bin/python bin/django celeryd --beat
51393 ttys005 0:01.45 /usr/bin/python bin/django celeryd -l INFO
51472 ttys005 0:01.29 /usr/bin/python bin/django celeryd -l INFO
51510 ttys005 0:01.89 /usr/bin/python bin/django celeryd -l INFO
51801 ttys005 0:01.83 /usr/bin/python bin/django celeryd -l INFO
53470 ttys005 0:03.97 /usr/bin/python bin/django celeryd -l INFO
53780 ttys005 0:00.00 grep django Is there a way to prevent the last process (that is, the grep that was started at the same time as my ps command) from being reported? (I started trying to come up with a regex that would match the literal but not match itself, but that seemed, um, not the right approach...) | +1 for @jamzed's terse answer; however, the OP might need some explanation: ps | grep "[d]jango" With that regex you are launching a grep process whose own ps entry will not match the pattern, since the regexp matches "django" but not the literal string "[d]jango" . That way you exclude the process whose command line contains "[d]jango", which in this case is grep. The same trick can be applied to pgrep, egrep, awk, sed, etc. -- whichever command you use to define the regex. From man 7 regex A bracket expression is a list of characters enclosed in "[]". It nor‐
mally matches any single character from the list (but see below). If
the list begins with '^', it matches any single character (but see
below) not from the rest of the list. If two characters in the list
are separated by '-', this is shorthand for the full range of charac‐
ters between those two (inclusive) in the collating sequence, for exam‐
ple, "[0-9]" in ASCII matches any decimal digit. It is illegal(!) for
two ranges to share an endpoint, for example, "a-c-e". Ranges are very
collating-sequence-dependent, and portable programs should avoid rely‐
ing on them. | {
"source": [
"https://serverfault.com/questions/367921",
"https://serverfault.com",
"https://serverfault.com/users/68259/"
]
} |
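A quick illustration of the trick and a common alternative; "django" here is just the pattern from the question:

```bash
# The grep process's own command line contains "[d]jango", which the pattern
# [d]jango (a literal 'd' followed by "jango") does not match
ps aux | grep '[d]jango'

# pgrep avoids the problem entirely: -f matches the full command line, -l lists names
pgrep -fl django
```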
367,934 | I'm maintaining a heterogeneous network of Mac and Linux machines, so I decided to create a little Perl script to unify mounting strategies across machines. The current Linux implementation in /etc/fstab works fine: //myserverhere.com/cifs_share /mnt/cifs_share cifs
user,uid=65001,rw,workgroup=DEV,credentials=/root/.cifs 0 0 and /root/.cifs contains username=ouruser
password=ourpassword I tried translating that to a non-fstab format as follows: mount.cifs //myserverhere.com/cifs_share /mnt/cifs_share user,uid=65001,rw,workgroup=DEV,credentials=/root/.cifs But it doesn't seem to work. Can someone point out what I'm doing wrong please? Thanks in advance. Ismael Casimpan :) | Syntax of mount.cifs: mount.cifs {service} {mount-point} [-o options] You need to pass the options after the "-o". For example, with your given options, your command should be: mount.cifs //myserverhere.com/cifs_share /mnt/cifs_share \
-o user,uid=65001,rw,workgroup=DEV,credentials=/root/.cifs (I didn't test the options you gave.) | {
"source": [
"https://serverfault.com/questions/367934",
"https://serverfault.com",
"https://serverfault.com/users/67371/"
]
} |
368,038 | I would like to know the syntax to call Data Pump commands (expdp/impdp) logged in as 'sys as sysdba' from a remote machine. I know that when logged on to the machine which runs the database, I can use: expdp \"/ as sysdba\" However, I cannot find how to do this from a remote machine; for example, these do not work: expdp 'SYS@SID AS SYSDBA'
expdp "SYS AS SYSDBA"@SID In both cases, the error message is: LRM-00108: invalid positional parameter value [...] | expdp \"SYS@service AS SYSDBA\" This works for me (10.2 and 11.1), but you need either to define service in your tnsnames.ora or to use a proper SCAN. Generally, ORACLE_SID is a different identifier than the TNS service, but for simplicity they are often administratively set to the same value. | {
"source": [
"https://serverfault.com/questions/368038",
"https://serverfault.com",
"https://serverfault.com/users/5160/"
]
} |
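A sketch of what the pieces might look like; the TNS alias, host, service name and dump file are all placeholders:

```bash
# Define a TNS alias the client can resolve (adjust for your environment)
cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
EOF

# Escape the quotes so the shell passes the whole connect string through to expdp
expdp \"SYS@MYDB AS SYSDBA\" directory=DATA_PUMP_DIR dumpfile=full.dmp full=y
```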
368,054 | I want to run a bash subshell, (1) run a few commands, (2) and then remain in that subshell to do as I please. I can do each of these individually: Run command using -c flag: $> bash -c "ls; pwd; <other commands...>" however, it immediately returns to the "super" shell after the commands are executed. I can also just run an interactive subshell: Start new bash process: $> bash and it won't exit the subshell until I say so explicitly... but I can't run any initial commands. The closest solution I've found is: $> bash -c "ls; pwd; <other commands>; exec bash" which works, but not the way I wanted to, as it runs the given commands in one subshell, and then opens a separate one for interaction. I want to do this on a single line. Once I exit the subshell, I should return back to the regular "super"shell without incident. There must be a way~~ NB: What I am not asking... not asking where to get a hold of the bash man page not asking how to read initializing commands from a file... I know how to do this, it's not the solution I'm looking for not interested in using tmux or gnu screen not interested in giving context to this. I.e., the question is meant to be general, and not for any specific purpose if possible, I want to avoid using workarounds that sort of accomplish what I want, but in a "dirty" way. I just want to do this on a single line. In particular, I don't want to do something like xterm -e 'ls' | This can be easily done with temporary named pipes : bash --init-file <(echo "ls; pwd") Credit for this answer goes to the comment from Lie Ryan . I found this really useful, and it's less noticeable in the comments, so I thought it should be its own answer. | {
"source": [
"https://serverfault.com/questions/368054",
"https://serverfault.com",
"https://serverfault.com/users/113526/"
]
} |
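Two hedged variations on the same idea; the commands inside the substitution are only examples:

```bash
# Keep your normal environment by sourcing ~/.bashrc first, then run the extra commands
bash --init-file <(echo "source ~/.bashrc; ls; pwd")

# --rcfile is a synonym for --init-file
bash --rcfile <(echo "cd /var/log; ls")
```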
368,370 | How do I exclude directories when listing files in the current directory? ls . ^ will include directories in the listing. | Try this one: find . -maxdepth 1 -not -type d | {
"source": [
"https://serverfault.com/questions/368370",
"https://serverfault.com",
"https://serverfault.com/users/62866/"
]
} |
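Two ways to do it, depending on whether you prefer find or ls; the -printf variant assumes GNU find:

```bash
# find-based, printing bare names instead of ./name
find . -maxdepth 1 -not -type d -printf '%f\n'

# ls-based: -p appends "/" to directories, then filter those lines out
ls -p | grep -v /
```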
368,512 | This is a Canonical Question about Redundant DHCP Servers. Is it possible to have more than one DHCP server on the same LAN? What are the implications of doing this? What happens if there is more than one DHCP server available? How do my clients know which one to use? How can I have DHCP servers supplying addresses to more than one subnet\network segment? How can I configure multiple DHCP servers to supply addresses for the same subnet. | I’m assuming a basic knowledge of what DHCP does and how to configure your DHCP server of choice in this answer, but before we talk about multiple DHCP servers on the same network, let’s first of all quickly re-cap how clients receive IP addresses from DHCP at the most basic level. DHCP on a simple network works using the DORA principle. Discovery - the client broadcasts a message on the local network segment its connected to, to discover available DHCP servers. Offer - a suitably configured DHCP server receives a request from a client, and offers it an address from its pool of available addresses. Request - The client replies to the offer, requesting the address received in the Offer. Acknowledgement - The server acknowledges the request, marking the address as used in its pool of addresses, and informs the client of how long the address lease is valid for, and any other information needed. Any device on a network segment can be a DHCP server; it doesn't have to be the router or the domain controller or any other "special" device on the network. When the devices on your network first request an IP address or reach the end of their leases (or you force them to check their lease is still valid) they will simply broadcast a request for a DHCP server, and will accept an offer from the first DHCP server to reply . This is important to remember as we look at the options for multiple DHCP servers below. Multiple DHCP servers PT 1: Spanning multiple subnets. If you have several VLANs or physical network segments that are separated into different subnets, and you want to provide a DHCP service to devices in all those subnets then there are two ways of doing this. If the router / layer 3 switch separating them can act as a BOOTP/DHCP relay agent, then you can continue to keep all your DHCP server(s) in one or two central parts of your network and configure your DHCP server(s) to support multiple ranges of addresses. In order to support this, your router or layer 3 switch must support the BOOTP relay agent specification covered in section 4 of RFC 1542 . If your router does not support RFC 1542 BOOTP relay agents, or if some of your network segments are geographically dispersed over slow links, then you will need to place one or more DHCP server in each subnet. This ‘local’ DHCP server will only serve its own local segment’s requirements, and there is no interaction between it and other DHCP servers. If this is what you want then you can simply configure each DHCP server as a standalone server, with the details of the address pool for its own subnet, and not worry about any other DHCP servers on other parts of the network. This is the most basic example of having more than one DHCP server on the same network. Multiple DHCP servers PT 2: DHCP servers that serve the same network segment. 
When most people ask about “multiple DHCP Servers on the same network”, what they are usually asking for is this; they want more than one DHCP server issuing the same range of network addresses out to clients, either to split the load between multiple servers or to provide redundancy if one server is offline. This is perfectly possible, though it requires some thought and planning. From a “network traffic” point of view, the DORA process outlined at the start of this answer explains how more than one DHCP server can be present on a network segment; the client simply broadcasts a Discovery request and the first DHCP server to respond with an Offer is the ‘winner’. From the server’s point of view, each server will have a pool of addresses that it can issue to clients, known its address scope. DHCP servers that are serving the same subnet should not have a single “shared” scope, but rather they should have a “split” scope. In other words, if you have a range of DHCP addresses to issue to clients from 192.168.1.100 to 192.168.1.200, then both servers should be configured to serve separate parts of that range, so the first server might use parts of that scope from 192.168.1.100 to 192.168.1.150 and the second server would then issue 192.168.1.151 to 192.168.1.200. Microsoft's more recent implementations of DHCP have a wizard to make splitting your scope like this easy to do, described in a Technet article that might be worth looking at even if you're not using the Microsoft DHCP implementation, as it illustrates the principles talked about here quite nicely and this answer is already long enough. Splitting the scope – best practice One thing you’ll hear mentioned as best practice is the 80/20 rule for splitting a DHCP scope, which means that one server will serve 80% of the addresses in that scope and the other DHCP server, which is effectively ‘in reserve’ will serve 20% of the addresses. The idea behind splitting the addresses 80/20 is because 80% of the addresses available should hopefully be adequate for all the addresses needed on a subnet, and DHCP leases are typically issued for several days; so if your main DHCP server goes down for a few hours then it's unlikely that more than 20% of the machines on that subnet will need to renew their addresses during the downtime, making the 20% pool of addresses sufficient. This is still reasonable advice, but it assumes two things: That you can solve any problem with your “main” DHCP server quickly enough to avoid exhausting the small pool of addresses on your reserve DHCP server. That you’re not interested in load balancing. These days (as you can see from my examples) I tend to prefer 50/50 splits, which I think are a more realistic answer to the above points. Another thing to consider when creating your scopes on the DHCP servers is configuring the full scope into each server and excluding the range given out by the other DHCP server. This has the benefit of “self-documenting” the DHCP info for the full subnet on each DHCP server which will improve clarity for anyone else trying to understand what is going on, and also in the event of one of your DHCP servers being offline for some time, you can temporarily reconfigure the exclusion range on the other server to allow it to pick up the slack. 
Combining these ideas Lastly, it's worth remembering that you can combine the principles discussed above - you can place all your DHCP servers into one or more "central server" VLANs and use BOOTP relay agents on all your routers to send all DHCP requests from a very large and segmented network to a centralised DHCP service (which is what I do, see below). Or you can have DHCP servers distributed throughout your network, with a "main" DHCP server in its local subnet and a "reserve" DHCP server on a "nearby" network segment providing a small number of addresses as a backup - you could even have two DHCP servers in their own network segments configured to provide an 80/20 range of addresses for each other. The most sensible choice will depend on how your physical and logical networks map to each other. | {
"source": [
"https://serverfault.com/questions/368512",
"https://serverfault.com",
"https://serverfault.com/users/7783/"
]
} |
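As a concrete illustration of the split-scope idea, here is roughly what it might look like with ISC dhcpd; the subnet, router address and ranges are placeholders, and the same split can be expressed with exclusion ranges on a Windows DHCP server:

```bash
# --- on server A: serve 192.168.1.100-180 ---
cat > /etc/dhcp/dhcpd.conf <<'EOF'
subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;
  range 192.168.1.100 192.168.1.180;   # this server's share of the pool
}
EOF

# --- on server B: serve the remaining 192.168.1.181-200 ---
cat > /etc/dhcp/dhcpd.conf <<'EOF'
subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;
  range 192.168.1.181 192.168.1.200;   # the "reserve" share on the second server
}
EOF
```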
368,523 | UFW's man page mentions that it can set up iptables rate limiting for me: ufw supports connection rate limiting, which is useful for
protecting
against brute-force login attacks. ufw will deny connections if an IP
address has attempted to initiate 6 or more connections in the last 30
seconds. See http://www.debian-administration.org/articles/187 for
details. Typical usage is: ufw limit ssh/tcp Unfortunately this is all the documentation that I could find. I would like to stick with UFW, and not use more complicated iptables commands (to keep things "uncomplicated"). How would I use ufw to limit all incoming (so not outgoing) traffic on port 80 to 20 connections per 30 seconds? How would I disable rate limiting for ports 30000 to 30005? Is rate limiting enabled by default for all ports? | UFW is designed to be "uncomplicated," which in this case means you don't have control over the particulars of the rate to which connections are limited. If you want to dig into the Python source of UFW, you could find out how to tweak it. The appropriate information is (on my Ubuntu 10.04 system) in /usr/share/pyshared/ufw/backend_iptables.py Setting the timing issue aside, therefore, here are some answers to your rapid-fire questions at the end. Assuming 10.10.10.0/24 is your local network, this applies the default limiting rule to port 80/tcp incoming: ufw limit from any to 10.10.10.0/24 port http comment 'limit web' and 3. Rate limiting is not turned on by default. To add it to every (destination) port except the range you want, use this rule. Note that rules (even with ranges) are atomic units and cannot be split up. You cannot, for example, add a rule for any port, then delete a (nonexistent) rule for a particular range to remove it. limit is not an acceptable argument to ufw default , either. ufw limit from any to any port 0:29999,30006:65535 | {
"source": [
"https://serverfault.com/questions/368523",
"https://serverfault.com",
"https://serverfault.com/users/58603/"
]
} |
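If ufw's fixed 6-hits-per-30-seconds policy isn't flexible enough, the raw iptables pattern quoted in the question can be adapted directly (for example in ufw's before.rules). This is a sketch for 20 new connections per 30 seconds on port 80, not a built-in ufw feature:

```bash
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name WEB --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name WEB \
         --update --seconds 30 --hitcount 20 -j DROP
```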
368,602 | I have a CentOS machine and I've noticed that the server loses the correct time after a while. It is usually behind by several minutes some time after I have manually set the correct time. Is there a mechanism whereby I can update the server with the time from a specific time server? | Use the ntp daemon. Run yum install ntp and ensure that the service is started via ntsysv or chkconfig ntpd on . To get an immediate sync, run ntpdate time.apple.com (or something similar). | {
"source": [
"https://serverfault.com/questions/368602",
"https://serverfault.com",
"https://serverfault.com/users/111196/"
]
} |
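A slightly fuller version of the same setup for a pre-systemd CentOS; the pool.ntp.org servers are only examples, use whichever time servers you prefer:

```bash
yum -y install ntp
chkconfig ntpd on

# Optional: point the daemon at specific servers
cat >> /etc/ntp.conf <<'EOF'
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
EOF

ntpdate 0.pool.ntp.org   # one-off sync before starting the daemon
service ntpd start
```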
368,906 | I understand the advantages of using Chef and puppet in a multiserver environment. Its fantastic for enforcing and describing configuration across many servers. But lets say you have a single server, what advantage does chef-solo give you over simply manually configuring the server? I love chef, but I can't think of a reason why taking the time to setup chef-solo is worth the hassle on a single or even 2 server architecture, but apparently people do it. | Disclaimer: I am one of the developers of Puppet, another tool in the space. The advantages of using Chef on a single node are the same as using it on multiple nodes: you declare how the system should be, in a form that is easy to version control, backup, audit, and change. Chef will then go ahead and make sure your system stays that way: if something breaks, it fixes it. If something changes, it reverts it. You end up solving problems once , not every time they crop up. You also end up with a single place to look to understand the server. You don't need to go investigate the details of the HTTP configuration, you can just look in Chef. The cross-machine value of tools like Chef is there, but you get the vast majority of the benefits from getting them in place at all - even on a single machine. | {
"source": [
"https://serverfault.com/questions/368906",
"https://serverfault.com",
"https://serverfault.com/users/78152/"
]
} |
368,911 | My server has a limited number of concurrent processes (20) it can handle. To make sure I don't exceed I need to understand: When a user is waiting for a PHP script to finish loading, does the entire waiting duration count as one process? Most of the time waiting for the script to finish is communicating with a remote server via cURL... I believe most of the time is simply waiting for the server to respond with data. Does the whole time connected to the remote server count as a process? I do payment processing and need to make sure nobody gets cut off. Script are run thru mod_fcgid. | | {
"source": [
"https://serverfault.com/questions/368911",
"https://serverfault.com",
"https://serverfault.com/users/105350/"
]
} |
369,058 | YouTube, as we know, is massive. It has thousands of concurrent users streaming at least 2 megabytes per video. Obviously, that gets to be a lot of traffic... far too much for any one server. What networking technologies allow pushing 4 billion videos a day? | Scaling on the backend In a very simple setup, one DNS entry goes to one IP which belongs to one server. Everybody the world over goes to that single machine. With enough traffic, that's just too much to handle long before you get to be YouTube's size. In a simple scenario, we add a load balancer. The job of the load balancer is to redirect traffic to various back-end servers while appearing as one server. With as much data as YouTube has, it would be too much to expect all servers to be able to serve all videos, so we have another layer of indirection to add: sharding . In a contrived example, one server is responsible for everything that starts with "A", another owns "B", and so on. Moving the edge closer Eventually, though, the bandwidth just becomes intense and you're moving a LOT of data into one room. So, now that we're super popular, we move it out of that room. The two technologies that matter here are Content Distribution Networks and Anycasting . Where I've got these big static files being requested all over the world, I stop pointing direct links to my hosting servers. What I do instead is put up a link to my CDN server. When somebody asks to view a video, they ask my CDN server for it. The CDN is responsible for already having the video, asking for a copy from the hosting server, or redirecting me. That will vary based on the architecture of the network. How is that CDN helpful? Well, one IP may actually belong to many servers that are in many places all over the world. When your request leaves your computer and goes to your ISP, their router maps the best path (shortest, quickest, least cost... whatever metric) to that IP. Often for a CDN, that will be on or next to your closest Tier 1 network. So, I requested a video from YouTube. The actual machine it was stored on is at least iad09s12.v12.lscache8.c.youtube.com and tc.v19.cache5.c.youtube.com . Those show up in the source of the webpage I'm looking at and were provided by some form of indexing server. Now, from Maine I found that tc19 server to be in Miami, Florida. From Washington, I found the tc19 server to be in San Jose, California. | {
"source": [
"https://serverfault.com/questions/369058",
"https://serverfault.com",
"https://serverfault.com/users/100298/"
]
} |
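The last paragraph can be reproduced with a couple of DNS lookups; the cache hostnames below are the 2012-era ones quoted in the answer and may no longer resolve, so treat this purely as an illustration of location-dependent answers:

```bash
# Ask your local resolver
dig +short tc.v19.cache5.c.youtube.com
dig +short iad09s12.v12.lscache8.c.youtube.com

# Compare with other public resolvers to see geographically different results
dig +short tc.v19.cache5.c.youtube.com @8.8.8.8
dig +short tc.v19.cache5.c.youtube.com @208.67.222.222
```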
369,415 | How do I configure domain names in CentOS? I am connecting to the servers via an SSH remote terminal and I also have root credentials. Does configuring /etc/sysconfig/network and /etc/hosts suffice? It would be great to have some steps or configuration guides. | Four things to do: Add the hostname entry to /etc/hosts . Use the format detailed here . If your hostname is "your_hostname", type hostname your_hostname at a command prompt to make the change effective. Define the hostname in /etc/sysconfig/network to make this setting persist across reboots. Reboot the system, or restart services that depend on the hostname (cups, syslog, apache, sendmail, etc.) | {
"source": [
"https://serverfault.com/questions/369415",
"https://serverfault.com",
"https://serverfault.com/users/80943/"
]
} |
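Put together for a pre-systemd CentOS, with a placeholder hostname and IP address:

```bash
# Example host: web01.example.com at 192.0.2.10 (adjust to your own)
echo "192.0.2.10  web01.example.com  web01" >> /etc/hosts
hostname web01.example.com
sed -i 's/^HOSTNAME=.*/HOSTNAME=web01.example.com/' /etc/sysconfig/network
# ...then reboot, or restart the services that cached the old name (syslog, httpd, etc.)
```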
369,460 | This is a canonical question about setting up SPF records . I have an office with many computers that share a single external ip (I'm unsure if the address is static or dynamic). Each computer connects to our mail server via IMAP using outlook. Email is sent and received by those computers, and some users send and receive email on their mobile phones as well. I am using http://wizard.easyspf.com/ to generate an SPF record and I'm unsure about some of the fields in the wizard, specifically: Enter any other domains who may send or relay mail for this domain Enter any IP addresses in CIDR format for netblocks that originate or relay mail for this domain Enter any other hosts which can send or relay mail for this domain How stringent should SPF-aware MTA's treat this? the first few questions i'm fairly certain about... hope i have given enough info. | SPF records detail which servers are allowed to send mail for your domain. Questions 1-3 really summarise the whole point of SPF: You're supposed to be listing the addresses of all the servers that are authorised to send mail coming from your domain. If you don't have an exhaustive list at this time, it's generally not a good idea to set up an SPF record. Also a domain can only have one SPF record, so you'll need to combine all the information into a single record. The individual questions really just help break the list down for you. asks you for other domains whose mail servers may relay mail from you; if you have eg a secondary MX server at mail-relay.example.org, and that is the main mail server (MX record) for the domain example.org , then you should enter mx:example.org . Your SPF record should include your own domain's MX record under nearly all circumstances ( mx ). asks you for your ip netblocks. If you have colocated servers at 1.2.3.0/28, and your office address space is 6.7.8.0/22, enter ip4:1.2.3.0/28 ip4:6.7.8.0/22 . IPv6 space should be added as eg ip6:2a01:9900:0:4::/64 . if (eg) you also have a machine off in someone else's office that has to be allowed to send mail from your domain, enter that as well, with eg a:mail.remote.example.com . Your mobile phone users are problematic. If they send email by connecting to your mail server using eg SMTP AUTH, and sending through that server, then you've dealt with them by listing the mail server's address in (2). If they send email by just connecting to whatever mail server the 3G/HSDPA provider's offering, then you can't do SPF meaningfully until you have rearchitected your email infrastructure so that you do control every point from which email purporting to be from you hits the internet. Question 4 is a bit different, and asks what recipients should do with email that claims to be from your domain that doesn't come from one of the systems listed above. There are several legal responses, but the only interesting ones are ~all (soft fail) and -all (hard fail). ?all (no answer) is as useless as ~all (qv), and +all is an abomination. ~all is the simple choice; it tells people that you've listed a bunch of systems who are authorized to send mail from you, but that you're not committing to that list being exhaustive, so mail from your domain coming from other systems might still be legal. I urge you not to do that. Not only does it make SPF completely pointless, but some mail admins on SF deliberately configure their SPF receivers to treat ~all as the badge of a spammer. If you're not going to do -all , don't bother with SPF at all . 
-all is the useful choice; it tells people that you've listed the systems that are allowed to send email from you, and that no other system is authorized to do so, so they are OK to reject emails from systems not listed in your SPF record. This is the point of SPF, but you have to be sure that you have listed all the hosts that are authorized to originate or relay mail from you before you activate it . Google is known to advise that Publishing an SPF record that uses -all instead of ~all may result in
delivery problems. well, yes, it may; that is the whole point of SPF . We cannot know for sure why google gives this advice, but I strongly suspect that it's to prevent sysadmins who don't know exactly whence their email originates from causing themselves delivery problems. If you don't know where all your email comes from, don't use SPF . If you're going use SPF, list all the places it comes from, and tell the world you're confident in that list, with -all . Note that none of this is binding on a recipient's server; the fact that you advertise an SPF record in no way obliges anyone else to honour it. It is up to the admins of any given mail server what email they choose to accept or reject. What I think SPF does do is allow you to disclaim any further responsibility for email that claimed to be from your domain, but wasn't. Any mail admin coming to you complaining that your domain is sending them spam when they haven't bothered to check the SPF record you advertise that would have told them that the email should be rejected can fairly be sent away with a flea in their ear. Since this answer has been canonicalised, I'd better say a few words about include and redirect . The latter is simpler; if your SPF record, say for example.com , says redirect=example.org , then example.org 's SPF record replaces your own. example.org is also substituted for your domain in those look-ups (eg, if example.org 's record includes the mx mechanism, the MX lookup should be done on example.org , not on your own domain). include is widely misunderstood, and as the standard's authors note " the name 'include' was poorly chosen ". If your SPF record include s example.org 's record, then example.org 's record should be examined by a recipient to see if it gives any reason (including +all ) to accept your email . If it does, your mail should pass. If it doesn't, the recipient should continue to process your SPF record until landing on your all mechanism. Thus, -all , or indeed any other use of all except +all , in an include d record, has no effect on the result of processing. For more information on SPF records http://www.openspf.org is an excellent resource. Please don't take this the wrong way, but if you get an SPF record wrong, you can stop a significant fraction of the internet from receiving email from you until you fix it. Your questions suggest you might not be completely au fait with what you're doing, and if that's the case, then you might want to consider getting professional assistance before you do something that stops you sending email to an awful lot of people. Edit : thank you for your kind words, they're much appreciated. SPF is primarily a technique to prevent joe-jobbing , but some people seem to have started to use it to try to detect spam. Some of those may indeed attach a negative value to your having no SPF record at all, or an overbroad record (eg a:3.4.5.6/2 a:77.5.6.7/2 a:133.56.67.78/2 a:203.54.32.1/2 , which rather sneakily equates to +all ), but that's up to them and there's not much you can do about it. I personally think SPF is a good thing, and you should advertise a record if your current mail structure permits it, but it's very difficult to give an authoritative answer, valid for the entire internet, about how people are using a DNS record designed for a specific purpose, when they decide to use it for a different purpose. All I can say with certainty is that if you do advertise an SPF record with a policy of -all , and you get it wrong, a lot of people will never see your mail. 
Edit 2 : deleted pursuant to comments, and to keep the answer up-to-date. | {
"source": [
"https://serverfault.com/questions/369460",
"https://serverfault.com",
"https://serverfault.com/users/107611/"
]
} |
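For reference, a record built from the mechanisms discussed above might look like the commented line below; every address and hostname is a placeholder, and you can inspect any domain's published record with dig:

```bash
# Zone-file syntax (illustrative only):
#   example.com. 3600 IN TXT "v=spf1 mx ip4:192.0.2.0/28 a:mail.remote.example.com -all"

# See what a domain actually publishes
dig +short TXT example.com
```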
369,564 | I often hear people making statements such as "our MySQL server machine failed", which gives me the impression that they dedicate a single machine as their MySQL server (I guess they just install the OS and only MySQL on it). As a developer, not a sysadmin, I'm used to MySQL being installed as part of a LAMP stack together with the web server and PHP. Can someone explain to me: What's the point of installing MySQL on a separate server? It sounds like a waste of resources when I can add the entire LAMP stack there and additional servers as well. If the database is on a separate machine, how do the apps that need to use it connect to it? | When your application platform and your database are competing for resources, that's usually the first indication that you're ready for a dedicated database server. Secondly, high availability: setting up a database cluster (and usually, in turn, a load-balanced Web/application server cluster). I would also say security plays a large role in the move to separate servers, as you can have different policies for network access for each server (for example, a DMZ'ed Web server with a database server on the LAN). Access to the database server is over the network, i.e. where you usually specify "localhost" for your database host, you'd specify the host/IP address of your database server. Note: usually you need to modify the configuration of your database server to permit connections/enable listening on an interface other than the loopback interface. | {
"source": [
"https://serverfault.com/questions/369564",
"https://serverfault.com",
"https://serverfault.com/users/84281/"
]
} |
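A rough sketch of that last note for MySQL on a dedicated Linux box; paths, addresses and credentials are placeholders, and the my.cnf location varies by distribution:

```bash
# On the database server: listen on the LAN address instead of only 127.0.0.1
sed -i 's/^bind-address.*/bind-address = 192.0.2.20/' /etc/mysql/my.cnf
service mysql restart

# Still on the DB server, grant the web server host access (run inside the mysql client):
#   GRANT ALL ON appdb.* TO 'appuser'@'192.0.2.10' IDENTIFIED BY 'secret';

# On the application server, point at the DB host instead of localhost
mysql -h 192.0.2.20 -u appuser -p appdb
```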
369,872 | I have a script on an EC2 instance that remotely starts another instance. Once this instance has fully loaded (finished booting) I want it to automatically run a bash script, what would be the best way to do this? I need everything to be fully started, basically the bash script runs a image conversion script (using ImageMagick and executes the "wget" command a few times) Currently, the script is located here: /home/root/beginProcess.sh And I can start it manually by executing bash beginProcess.sh RHEL-6.2-Starter-EBS-i386 Also there is an EBS volume attached to this, if that helps, Thanks! | I'd suggest just using the user-data option to ec2-run-instances . It lets you give a script of some sort to the VM which will be run on first boot. If you're using ubuntu or debian, you can use cloud-init , which puts some nice polish on the process. If using cloud-init, you can use the [runcmd] section of the config file to specify arbitrary commands to run after boot. Thanks to SF user Eric Hammond for the user-data page. Check out his site - it has a wealth of information on AWS. Edit: After re-reading, it's not clear whether you wanted to run a command on initial boot or on every boot. The above instructions only apply to the initial boot. If you want to run a command on every boot, you have a couple options - you can run a command via the @reboot cron directive, or alternatively you can add the script to /etc/rc.local , which will be run each time the system boots. | {
"source": [
"https://serverfault.com/questions/369872",
"https://serverfault.com",
"https://serverfault.com/users/114124/"
]
} |
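A hedged example of the first-boot approach with the classic ec2-api-tools; the AMI ID and key name are placeholders, the script path is the one from the question, and the script only runs automatically on AMIs that execute user-data (e.g. via cloud-init):

```bash
cat > bootstrap.sh <<'EOF'
#!/bin/bash
bash /home/root/beginProcess.sh
EOF

ec2-run-instances ami-12345678 -k my-keypair --user-data-file bootstrap.sh

# For "run on every boot" instead, append to /etc/rc.local inside the instance:
echo "bash /home/root/beginProcess.sh" >> /etc/rc.local
```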
370,403 | I have a list of files with consecutive numbers as suffixes. I would like to copy only a range of these files. How can I specify a range as part of my cp command. $ls
P1080272.JPG* P1080273.JPG* P1080274.JPG* P1080275.JPG* P1080276.JPG* P1080277.JPG*
P1080278.JPG* P1080279.JPG* P1080280.JPG* P1080281.JPG* P1080282.JPG* P1080283.JPG* I would like to copy files from P1080275.JPG to P1080283.JPG with something similar to: $cp P10802[75-83].JPG ~/Images/. Is there a way to do this? | You were very close. Your question was almost the correct syntax: cp P10802{75..83}.JPG ~/Images | {
"source": [
"https://serverfault.com/questions/370403",
"https://serverfault.com",
"https://serverfault.com/users/9865/"
]
} |
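Two small things worth knowing about the brace form; the filenames are the ones from the question:

```bash
# Preview the expansion before copying
echo cp P10802{75..83}.JPG ~/Images/

# Brace expansion generates every name in the range whether or not the file exists,
# so cp will complain about any gaps; that is usually fine for a consecutive series.
cp P10802{75..83}.JPG ~/Images/
```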
371,150 | I'm trying to troubleshoot an obscure authentication error and need some background information. Is there any difference between how Windows (and programs like Outlook) process DOMAIN\username and [email protected] ? What are the proper terms for these two username formats? Edit : In particular, are there any differences in how Windows authenticates the two username formats? | Assuming you have an Active Directory environment: I believe the backslash format DOMAIN\USERNAME will search domain DOMAIN for a user object whose SAM Account Name is USERNAME. The UPN format username@domain will search the forest for a user object whose User Principle Name is username@domain. Now, normally a user account with a SAM Account Name of USERNAME has a UPN of USERNAME@DOMAIN, so either format should locate the same account, at least provided the AD is fully functional. If there are replication issues or you can't reach a global catalog, the backslash format might work in cases where the UPN format will fail. There may also be (abnormal) conditions under which the reverse applies - perhaps if no domain controllers can be reached for the target domain, for example. However: you can also explicitly configure a user account to have a UPN whose username component is different from the SAM Account Name and whose domain component is different from the name of the domain. The Account tab in Active Directory Users and Computers shows the UPN under the heading "User logon name" and the SAM Account Name under the heading "User logon name (pre-Windows 2000)". So if you are having trouble with particular users I would check that there aren't any discrepancies between these two values. Note: it is possible that additional searches are done if the search I describe above doesn't find the user account. For example, perhaps the specified username is converted into the other format (in the obvious way) to see if that produces a match. There must also be some procedure for finding accounts in trusted domains that are not in the forest. I don't know where/whether the exact behaviour is documented. Just to further complicate troubleshooting, Windows clients will by default cache information about successful interactive logons, so that you may be able to log into the same client even if your user account information in the Active Directory is inaccessible. | {
"source": [
"https://serverfault.com/questions/371150",
"https://serverfault.com",
"https://serverfault.com/users/8437/"
]
} |
371,316 | Part of a firewall on a server : iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 100 --hitcount 10 -j DROP When I search online I always see NEW being used in that rule but I'm having a hard time understanding why ESTABLISHED and RELATED aren't being used. Like this : iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED,RELATED -m recent --set
iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED,RELATED -m recent --update --seconds 100 --hitcount 10 -j DROP Can someone explain to me when exactly a NEW packet changes into ESTABLISHED and RELATED ? | Consider a NEW packet a telephone call before the receiver has picked up. An ESTABLISHED packet is their "Hello." And a RELATED packet would be if you were calling to tell them about an e-mail you were about to send them. (The e-mail being RELATED.) In case my analogy isn't so great, I personally think the man page handles it well: NEW -- meaning that the packet has started a new connection, or otherwise
associated with a connection which has not seen packets in both
directions, and ESTABLISHED -- meaning that the packet is associated with a connection
which has seen packets in both directions, RELATED -- meaning that the packet is starting a new connection, but is
associated with an existing connection, such as an FTP data transfer,
or an ICMP error. iptables(8) - Linux man page | {
"source": [
"https://serverfault.com/questions/371316",
"https://serverfault.com",
"https://serverfault.com/users/114606/"
]
} |
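For context, here is the shape of a typical stateful ruleset that uses all three states; this is a generic sketch, not the rules from the question:

```bash
# Allow replies and related traffic for connections we already track
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow brand-new SSH connections (the "phone call before pickup")
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

# Everything else is dropped
iptables -A INPUT -j DROP
```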
371,324 | I use this to connect to my local mysql server: <?
$host = "localhost";
$user = "root";
$pass = "";
$db1 = "mydb";
$db = mysql_connect($host,$user,$pass);
mysql_select_db($db1,$db);
?> But, I want to connect to a remote mysql server instead. I mean that site will be on the same server, but I'll connect to another mysql server. How can I do this? I'm on Ubuntu. | | {
"source": [
"https://serverfault.com/questions/371324",
"https://serverfault.com",
"https://serverfault.com/users/114610/"
]
} |
371,421 | SCP does not seem to preserve ownership stamps even if used with -p option. scp -p /mysql/serv/data_summary.* some_server:/mysql/test/ The files are owned by mysql and I want the same ownership to be assigned on the destination server. I need to copy files as root on both servers due to some admin issues. I can not change to mysql@ | Try to use rsync, it has a lot more benefits besides keeping ownership, permissions and incremental copies: rsync -av source 192.0.2.1:/dest/ination Besides that, since rsync uses ssh, it should work where scp works. | {
"source": [
"https://serverfault.com/questions/371421",
"https://serverfault.com",
"https://serverfault.com/users/16842/"
]
} |
371,691 | I have PostFix up and running on a CentOS box and would like to send mail from a Windows server on the same network out through the PostFix server. When I try to telnet from the Windows server into port 25 on the PostFix server currently the connection fails. Where do I set this up within PostFix/CentOS? Thanks in advance! | You will need to configure relaying. However, when postfix is running you should still be able to connect to port 25. Might there be a firewall blocking this connection? When you open main.cf, you need to add this directive: mynetworks=A.B.C.D example: mynetworks = 127.0.0.0/8 168.100.189.0/28
mynetworks = !192.168.0.1, 192.168.0.0/28
mynetworks = 127.0.0.0/8 168.100.189.0/28 [::1]/128 [2001:240:587::]/64 do not put 0.0.0.0 or you will become an open relay. | {
"source": [
"https://serverfault.com/questions/371691",
"https://serverfault.com",
"https://serverfault.com/users/37860/"
]
} |
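A minimal sketch of applying the relay change described above; the 192.168.1.0/24 range is an assumption and should be replaced with the subnet the Windows server sits on:
postconf -e 'mynetworks = 127.0.0.0/8 192.168.1.0/24'
# if main.cf still has the CentOS default inet_interfaces = localhost, Postfix only listens on loopback,
# which would also explain the failing telnet to port 25
postconf -e 'inet_interfaces = all'
postfix reload
If telnet to port 25 still fails after reloading, check iptables on the CentOS box as well.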
371,774 | I am just curious if you could use dig to check if a certain nameserver responds to recursive queries. Thanks! | Use dig and check the status of the RD and RA bits in the response. By default dig will send a recursive query ( RD set in the query header) unless you set the +norecurse command line flag. If the server supports recursive queries the response will have the "recursion available" RA bit set in the response headers. The RA bit is the diagnostic test for recursive query support. | {
"source": [
"https://serverfault.com/questions/371774",
"https://serverfault.com",
"https://serverfault.com/users/105101/"
]
} |
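For example, to check the RA bit from the command line (the nameserver and test domain are placeholders):
dig @ns1.example.com www.google.com A | grep 'flags:'
# a server that offers recursion answers with a header like:
#   ;; flags: qr rd ra; QUERY: 1, ANSWER: ...
# 'ra' present -> recursion available; 'ra' missing (often with a warning that
# recursion was requested but is not available) -> the server refuses recursive queries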
371,907 | I believe this is not possible, but someone I know insisted that it works. I don't even know what parameters to try, and I haven't found this documented anywhere. I tried http://myserver.com/~user=username&password=mypassword but it doesn't work. Can you confirm that it's not in fact possible to pass the user/pass via HTTP parameters (GET or POST)? | It is indeed not possible to pass the username and password via query parameters in standard HTTP auth. Instead, you use a special URL format, like this: http://username:password@myserver.com/ -- this sends the credentials in the standard HTTP "Authorization" header. It's possible that whoever you were speaking to was thinking of a custom module or code that looked at the query parameters and verified the credentials. This isn't standard HTTP auth, though, it's an application-specific thing. | {
"source": [
"https://serverfault.com/questions/371907",
"https://serverfault.com",
"https://serverfault.com/users/88/"
]
} |
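For example, with curl (hostname and credentials are placeholders); note the format is username:password@host, and the -u form keeps the password out of the URL:
curl 'http://username:password@myserver.com/'
curl -u username:password http://myserver.com/   # equivalent: curl builds the same Authorization header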
372,066 | I'm wondering if there is a way to query a DNS server and bypass caching (with dig ). Often I change a zone on the DNS server and I want to check if it resolves correctly from my workstation. But since the server caches resolved requests, I often get the old ones. Restarting or -loading the server is not really something nice. | You can use the @ syntax to look up the domain from a particular server. If the DNS server is authoritative for that domain, the response will not be a cached result. dig @ns1.example.com example.com You can find the authoritative servers by asking for the NS records for a domain: dig example.com NS | {
"source": [
"https://serverfault.com/questions/372066",
"https://serverfault.com",
"https://serverfault.com/users/98724/"
]
} |
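A small loop that does both steps above in one go (example.com is a placeholder):
for ns in $(dig +short example.com NS); do
    echo "== $ns"
    dig @"$ns" +short example.com A   # answers come from the authoritative server, never from a cache
done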
372,069 | I'm using Apache 2.2 running on Windows Server 2008 R2 as a WebDAV server for clients to upload large media files (roughly 100-2000MB). I am finding that when I have SSL enabled (openSSL 0.9.8o) and use HTTPS for the uploads the throughput is around 13Mbps but when I disable it and just use HTTP I get around 80Mbps. I can't understand why this is happening as my understanding was that the heavy SSL work was done at the beginning of the connection. If it helps the client that I am using is command line cURL and here is the command: curl -k -f -u digital:recorder -T 00320120321101048_ch1.mkv http://mediaserver/webdav/
curl -k -f -u digital:recorder -T 00320120321101048_ch1.mkv https://mediaserver/webdav/ Does anyone have any idea why the performance is so drastically affected by enabling SSL? Cheers. UPDATE: The problem does not exist on Windows 7 clients so this only happens on XP. This at least identifies that the issue is at the client end. I am running the exact same command line from both systems but it only affects WinXP. Does anyone know of why that might be? That XP is somehow crippling the SSL upload speed? I've run tests on Fedora Linux as well. So the issue has now been closer defined to be that the same version of cURL + OpenSSL uploading the same file to the same server is fast on Linux and Windows 7 but very slow on Windows XP. Can anybody help with this because I've really hit a brick wall! | You can use the @ syntax to look up the domain from a particular server. If the DNS server is authoritative for that domain, the response will not be a cached result. dig @ns1.example.com example.com You can find the authoritative servers by asking for the NS records for a domain: dig example.com NS | {
"source": [
"https://serverfault.com/questions/372069",
"https://serverfault.com",
"https://serverfault.com/users/114868/"
]
} |
372,143 | We are decommissioning around 40 Dell desktops. We would like to donate them but drives need to be wiped. What's the best approach to wipe them all as efficiently as possible? Is it standard practice to reinstall the OEM OS before donation or is this generally taken care of by the recipient? If I need to reinstall the OS, what's the best approach for imaging 3 different models? | Standard practice depends on how good a wipe you need. Fast wipe: Write one pass of zeros across the whole drive. Thorough wipe: Write alternating passes of zeros and ones across the whole drive at least twice. DoD Wipe: Write multiple (I believe the standard is 7?) passes of alternating ones and zeros across the whole drive. A tool like dban is probably the best way to accomplish this on a large number of systems. (Note that this assumes traditional (spinning magnetic) hard drives. SSDs are Different and Special ) Re: the OS, Typically I turn over the media and license keys to the organization I'm donating to, but leave the machine blank in the state it was after the wipe was completed. This lets the recipient decide what OS they want to install and go about it however they wish (and if the machines wind up in a technical-training setting they may want their students to install the OS themselves). If you install an operating system that requires a license (Windows) ensure that the license is with the machine (this covers your legal posterior). | {
"source": [
"https://serverfault.com/questions/372143",
"https://serverfault.com",
"https://serverfault.com/users/36918/"
]
} |
372,481 | Coming from an ubuntu perspective, if I want to check to see what additional packages will be installed/upgraded I can use apt-get --simulate install <package name> Is there something similar for yum? Our Red hat box (yum) is our production server, so I would like to see exactly what will be happening before I actually install some package. Couldn't really find a good solution, someone suggested: yum --assumeno install <package name> but this returned: Command line error: no such option: --assumeno yum version: 3.2.22 OS version: Red Hat Enterprise Linux Server release 5.6 (Tikanga) Any ideas or suggestions would be welcome. | you can do a yum install without the -y switch (if you use it): yum install <package> this will grab a list of packages and dependancies required. Before installing it will ask you if you want to install or not, just answer no and no changes will be made. Alternatively you can do yum deplist <package> to list all the dependancies of a package and see what needs to be installed without downloading or installing anything. | {
"source": [
"https://serverfault.com/questions/372481",
"https://serverfault.com",
"https://serverfault.com/users/57715/"
]
} |
372,677 | After entering shutdown now in terminal I get everything running normally and then: All processes ended within 2 seconds...done
INIT: Going single user
INIT: Sending processes the TERM signal
INIT: Sending processes the KILL signal
Give root password for maintenance(or.... I press Ctrl + D , and it shows me login screen Debian. Shutdown through GUI works properly. UPDATE 1 It seems some process hangs. Moreover, I've managed to power off the server through several retries. Recently I've installed only ntp and ntpdate, nothing more. I suppose it might be it conflicting with iptables. | You need to use the -h switch to halt the system. Default for shutdown is to switch to run level 1 (maintenance). shutdown -h now See man shutdown . | {
"source": [
"https://serverfault.com/questions/372677",
"https://serverfault.com",
"https://serverfault.com/users/129360/"
]
} |
372,733 | I'm having the following issue on a host using Apache 2.2.22 + PHP 5.4.0 I need to provide the file /home/server1/htdocs/admin/contents.php when a user makes the request: http://server1/admin/contents , but I obtain this message on the server error_log. Negotiation: discovered file(s) matching request: /home/server1/htdocs/admin/contents (None could be negotiated) Notice that I have mod_negotiation enabled and MultiViews among the options for the related virtualhost: <Directory "/home/server1/htdocs">
Options Indexes Includes FollowSymLinks MultiViews
Order allow,deny
Allow from all
AllowOverride All
</Directory> I also use mod_rewrite , with the following .htaccess rules: <IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^\./]*)$ index.php?t=$1 [L]
</IfModule> It seems very strange, but on the same box with PHP 5.3.6 it used to work correctly. I'm just trying an upgrade to PHP 5.4.0, but I cannot solve this negotiation issue. Any idea on why Apache cannot match contents.php when asking for content (which should be what mod_negotiation is supposed to do)? UPDATE: I noticed that mod_negotiation behaves correctly with files with extension different than .php: so if I'd have a file named /admin/contents.txt, I can access it regulary with the browser with /admin/contents url. So the problem is only for php files. Any clue on what could make the negotiation fail? | I found the solution. Very easy, indeed. I forgot to include the following: AddType application/x-httpd-php .php into apache mod_mime section into httpd.conf I was misled by the fact that php scripts were correctly working; however the negotiation was failing because mod_negotiation only looks for "interesting" (and known) file types. | {
"source": [
"https://serverfault.com/questions/372733",
"https://serverfault.com",
"https://serverfault.com/users/96041/"
]
} |
372,886 | Here's my abbreviated nginx vhost conf: upstream gunicorn {
server 127.0.0.1:8080 fail_timeout=0;
}
server {
listen 80;
listen 443 ssl;
server_name domain.com ~^.+\.domain\.com$;
location / {
try_files $uri @proxy;
}
location @proxy {
proxy_pass_header Server;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 120;
proxy_pass http://gunicorn;
}
} The same server needs to serve both HTTP and HTTPS, however, when the upstream issues a redirect (for instance, after a form is processed), all HTTPS requests are redirected to HTTP. The only thing I have found that will correct this issue is changing proxy_redirect to the following: proxy_redirect http:// https://; That works wonderfully for requests coming from HTTPS, but if a redirect is issued over HTTP it also redirects that to HTTPS, which is a problem. Out of desperation, I tried: if ($scheme = 'https') {
proxy_redirect http:// https://;
} But nginx complains that proxy_redirect isn't allowed here. The only other option I can think of is to define the two servers separately and set proxy_redirect only on the SSL one, but then I would have duplicate the rest of the conf (there's a lot in the server directive that I omitted for simplicity sake). I know I could also use an include directive to factor out the redundancy, but I really want to keep just one conf file without any dependencies. So, first, is there something I'm missing that will negate the problem entirely? Or, second, if not, is there any other way (besides including an external file) to factor out the redundant config information so that I can separate out the HTTP and HTTPS versions of the server config? | Well, I got a bit of inspiration and tried: proxy_redirect http:// $scheme://; Which seems to do the trick. This still seems hacky to me, though, so I still welcome any guidance on anything I might be doing wrong or less hacky ways to achieve the same result. | {
"source": [
"https://serverfault.com/questions/372886",
"https://serverfault.com",
"https://serverfault.com/users/99839/"
]
} |
372,975 | Yesterday I got a new computer as my homeserver, an HP ProLiant MicroServer.
Installed Arch Linux on it, with kernel version 3.2.12. After installing iptables (1.4.12.2 - the current version AFAIK) and changing the net.ipv4.ip_forward key to 1, and enabling forwarding in the iptables configuration file (and rebooting), the system cannot use any of its network interfaces. Ping fails with Ping: sendmsg: operation not permitted If I remove iptables completely, networking is okay, but I need to share the Internet connection to the local network. eth0 - wan NIC integrated on the motherboard (Broadcom NetXtreme BCM5723). eth1 - lan NIC in a pci-express slot (Intel 82574L Gigabit Network) Since it works without iptables(server can access the internet, and I can login with ssh from the internal network), I assume it has something to do with iptables. I do not have much experience with iptables, so I used these as reference (separate from each other of course...): wiki.archlinux.org/index.php/Simple_stateful_firewall#Setting_up_a_NAT_gateway revsys.com/writings/quicktips/nat.html howtoforge.com/nat_iptables On my previous server, I used the revsys guide to set up nat, worked like a charm. Anyone experienced anything like this before? What am I doing wrong? | The error message: Ping: sendmsg: operation not permitted means that your server is not allowed to send ICMP packets. You need to allow your server to send traffic via one or more of the configured interfaces. You can do this by: Set OUTPUT chain policy to ACCEPT to allow all outgoing traffic
from your box: sudo iptables -P OUTPUT ACCEPT Set OUTPUT chain policy to DROP and then allow selectively the type of traffic you need. This applies to all chains not only the OUTPUT chain. INPUT chain controls the traffic received by your box. FORWARD chain deals with traffic forwarded through the box. | {
"source": [
"https://serverfault.com/questions/372975",
"https://serverfault.com",
"https://serverfault.com/users/70559/"
]
} |
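For the NAT gateway described in the question above, a minimal working sketch (eth0 = WAN and eth1 = LAN as in the question; the chain policies and the decision to masquerade are assumptions):
sysctl -w net.ipv4.ip_forward=1
iptables -P OUTPUT ACCEPT                        # let the box itself send traffic (fixes the ping error)
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT    # LAN clients out to the Internet
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE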
372,987 | I install custom software in /usr/local/lib . How do I set the PATH and LD_LIBRARY_PATH in CentOS 6 system-wide to use /usr/local/lib . I realize there may be more than one way. What's the simplest and most standard way? | You can edit the file /etc/ld.so.conf and add your path /usr/local/lib to it
or create a new file in /etc/ld.so.conf.d/ like /etc/ld.so.conf.d/usrlocal.conf and put only the following line in it /usr/local/lib Then run ldconfig -v as root, and you're done. | {
"source": [
"https://serverfault.com/questions/372987",
"https://serverfault.com",
"https://serverfault.com/users/60317/"
]
} |
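Put together, and covering the PATH half of the question as well (the /usr/local paths are the usual convention; adjust if your layout differs, and note that profile.d scripts take effect at the next login):
echo '/usr/local/lib' > /etc/ld.so.conf.d/usrlocal.conf
ldconfig
# system-wide PATH addition for binaries installed under /usr/local
echo 'export PATH=$PATH:/usr/local/bin' > /etc/profile.d/local.sh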
373,052 | If I wanted to run two separate commands on one line, I could do this: cd /home; ls -al or this: cd /home && ls -al And I get the same results. However, what is going on in the background with these two methods? What is the functional difference between them? | The ; just separates one command from another. The && says only run the following command if the previous was successful cd /home; ls -al This will cd /home and even if the cd command fails ( /home doesn't exist, you don't have permission to traverse it, etc.), it will run ls -al . cd /home && ls -al This will only run the ls -al if the cd /home was successful. | {
"source": [
"https://serverfault.com/questions/373052",
"https://serverfault.com",
"https://serverfault.com/users/115236/"
]
} |
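A quick way to see the difference for yourself; the directory name is deliberately one that does not exist:
cd /nonexistent ; echo "this still prints"        # ';' ignores the failure
cd /nonexistent && echo "this never prints"       # '&&' stops at the first failure
cd /nonexistent || echo "'||' runs only on failure"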
373,314 | For the past few days, I couldn't update our apt-sources on Debian 5.0 (lenny). I get the following errors. W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/main/binary-amd64/Packages 404 Not Found [IP: 130.89.148.12 80]
W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/contrib/binary-amd64/Packages 404 Not Found [IP: 130.89.148.12 80]
W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/non-free/binary-amd64/Packages 404 Not Found [IP: 130.89.148.12 80]
W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/main/source/Sources 404 Not Found [IP: 130.89.148.12 80]
W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/contrib/source/Sources 404 Not Found [IP: 130.89.148.12 80]
W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/non-free/source/Sources 404 Not Found [IP: 130.89.148.12 80] How do I fix this problem? Edit: My current sources are: # Debian Lenny
deb http://ftp.de.debian.org/debian/ lenny main non-free contrib
deb-src http://ftp.de.debian.org/debian/ lenny main non-free contrib
# Debian Lenny Non-US
deb http://non-us.debian.org/debian-non-US lenny/non-US main contrib non-free
deb-src http://non-us.debian.org/debian-non-US lenny/non-US main contrib non-free
# Debian Lenny Security
deb http://security.debian.org/ lenny/updates main contrib non-free | lenny is superseded by squeeze , and its lifecycle ended on Feb. 6th this year . You'll get no updates from the core Debian community for lenny . Options: Upgrade to squeeze . Stay with lenny , remove the Debian FTP servers from sources.list and keep the packages as they are. There will be no security updates. Pin ( man apt_preferences ) necessary packages down to lenny and perform a partial upgrade, or pin all packages down to lenny and perform upgrades as needed. Leaves you with a partial system, and you are likely to get all kinds of errors, but might be necessary if neither upgrade nor keep-as-is are options. | {
"source": [
"https://serverfault.com/questions/373314",
"https://serverfault.com",
"https://serverfault.com/users/43929/"
]
} |
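If upgrading is not an option right away, the old lenny packages are still published on the Debian archive; a sources.list sketch (no security updates are provided there, and the release signatures may show up as expired):
deb http://archive.debian.org/debian/ lenny main contrib non-free
deb-src http://archive.debian.org/debian/ lenny main contrib non-free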
373,372 | I have set up a virtual machine running Windows XP on my Ubuntu laptop. Using the virt-manager GUI application, I can insert a CD in my drive and go to Details→IDE CDROM 1 and click on the Connect button. Then the CD becomes available in my virtual machine. How can I do the same through the command line? Obviously, I'd like to be able to disconnect from the command line too. Note: I can start the VM from the command line using virsh start testbed (testbed being the name of the domain/VM). | If you defined no CDROM when you created your virtual machine, you can attach the device even to a running domain (virtual machine) by running the following command: virsh attach-disk testbed /dev/sr0 hdc --type cdrom If you already defined a CDROM, but it pointed to an ISO image, in my experience, you can still run the same command. The hdc part needs to match the block device you have in the testbed virtual machine. When you want to point to an ISO image again, you replace /dev/sr0 to the filename on the host, something like virsh attach-disk testbed ~/virtio-win-0.1-22.iso hdc --type cdrom The documentation suggests using virsh update-device , but it is more labour to create an XML definition something like: <disk type='block' device='cdrom'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sr0'/>
<target dev='hdc' bus='ide'/>
<readonly/>
</disk> If you are into this way, save something like that into a file (say ~/cdrom-real.xml ) and then fire: virsh update-device testbed ~/cdrom-real.xml | {
"source": [
"https://serverfault.com/questions/373372",
"https://serverfault.com",
"https://serverfault.com/users/5818/"
]
} |
373,563 | Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array ). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator , recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been awhile since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128kb is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues. http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html For example, these are suggested changes for an HP Smart Array RAID controller: echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
blockdev --setra 65536 /dev/cciss/c0d0
echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios. | I've found that when I've had to tune for lower latency vs throughput, I've tuned nr_requests down from it's default (to as low as 32). The idea being smaller batches equals lower latency. Also for read_ahead_kb I've found that for sequential reads/writes, increasing this value offers better throughput, but I've found that this option really depends on your workload and IO pattern. For example on a database system that I've recently tuned I changed this value to match a single db page size which helped to reduce read latency. Increasing or decreasing beyond this value proved to hurt performance in my case. As for other options or settings for block device queues: max_sectors_kb = I've set this value to match what the hardware allows for a single transfer (check the value of the max_hw_sectors_kb (RO) file in sysfs to see what's allowed) nomerges = this lets you disable or adjust lookup logic for merging io requests. (turning this off can save you some cpu cycles, but I haven't seen any benefit when changing this for my systems, so I left it default) rq_affinity = I haven't tried this yet, but here is the explanation behind it from the kernel docs If this option is '1', the block layer will migrate request completions to the
cpu "group" that originally submitted the request. For some workloads this
provides a significant reduction in CPU cycles due to caching effects. For storage configurations that need to maximize distribution of completion
processing setting this option to '2' forces the completion to run on the
requesting cpu (bypassing the "group" aggregation logic)" scheduler = you said that you tried deadline and noop. I've tested both noop and deadline, but have found deadline win's out for the testing I've done most recently for a database server. NOOP performed well, but for our database server I was still able to achieve better performance adjusting the deadline scheduler. Options for deadline scheduler located under /sys/block/{sd,cciss,dm-}*/queue/iosched/ : fifo_batch = kind of like nr_requests, but specific to the scheduler. Rule of thumb is tune this down for lower latency or up for throughput. Controls the batch size of read and write requests. write_expire = sets the expire time for write batches default is 5000ms. Once again decrease this value decreases your write latency while increase the value increases throughput. read_expire = sets the expire time for read batches default is 500ms. Same rules apply here. front_merges = I tend to turn this off, and it's on by default. I don't see the need for the scheduler to waste cpu cycles trying to front merge IO requests. writes_starved = since deadline is geared toward reads the default here is to process 2 read batches before a write batch is processed. I found the default of 2 to be good for my workload. | {
"source": [
"https://serverfault.com/questions/373563",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
373,578 | How can i configure a shared config block for a set of locations? location / {
proxy_pass http://127.0.0.1:9000/;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache cache-test;
proxy_cache_valid 200 302 24h;
proxy_cache_valid 404 60s;
add_header X-Cache-Status $upstream_cache_status;
}
location /api/0.1/user{
proxy_cache_key /user/$http_authorization;
} Now if i try to access /api/0.1/user then i will get 404 because it doesn´t pass the request to 127.0.0.1:9000 | Create a common proxy config and include as-needed. /etc/nginx/api_proxy.conf proxy_pass http://127.0.0.1:9000/;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache cache-test;
proxy_cache_valid 200 302 24h;
proxy_cache_valid 404 60s;
add_header X-Cache-Status $upstream_cache_status; Your Host Config File ...
location /api/0.1/user {
include /etc/nginx/api_proxy.conf;
proxy_cache_key /user/$http_authorization;
}
... | {
"source": [
"https://serverfault.com/questions/373578",
"https://serverfault.com",
"https://serverfault.com/users/62005/"
]
} |
373,871 | I'm struggling with some iptables rules. I'm a newbie in iptables . I found some resources where I get the following command related to iptables . This is stored in a file that will be executed. [0:0] -A PREROUTING -s 10.1.0.0/24 -p tcp -m tcp --dport 81 -j DNAT --to-destination 10.1.0.6:3128 Can anybody explain me what does [0:0] mean? Also, some link related to this in iptables are welcome. Thanks in advance! P.S. If you need more rules, just let me know. | The [0:0] or [1280:144299] or whatever are the count of [ Packets : Bytes ] that have been trough the chain . They are saved when you run an iptables-save command and are used by the iptables-restore command to initialise the counters. The Packets and bytes values can be useful for some statistical purposes. Issuing an iptables-save command with the -c argument would then make it possible for us to reboot without breaking our statistical and accounting routines. (Quoted from Iptables Tutorial 1.2.2 - by Oskar Andreasson)
In short, restoring the iptables rules with the packet and byte counts specified will not affect the rule behavior; it just keeps a running count of the packets and bytes that match the rule. | {
"source": [
"https://serverfault.com/questions/373871",
"https://serverfault.com",
"https://serverfault.com/users/94558/"
]
} |
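To inspect or reset those counters on a running system:
iptables -L -v -n      # rule listing with packet and byte counters
iptables-save -c       # dump rules including the [packets:bytes] prefixes
iptables -Z            # zero the counters in all chains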
374,993 | Pretty basic question: how to PREPEND rules on IPTABLES rather than to APPEND? I have DROP statements at the bottom of my rules. I have a software to add new rules but adding rules after DROP statements isn't good. Every time I want to add a new rule, I have to flush the table (which is inefficient). Is there a way to prepend a rule i.e., add a rule to the top of the table rather than the bottom? Many thanks. | Use the -I switch: sudo iptables -I INPUT 1 -i lo -j ACCEPT This would insert a rule at position #1 in the INPUT chain. | {
"source": [
"https://serverfault.com/questions/374993",
"https://serverfault.com",
"https://serverfault.com/users/107249/"
]
} |
375,004 | I have this logrotate config and I am running on Ubuntu 10.04. /var/log/mysql/mysql-slow.log {
daily
rotate 3
compress
notifempty
missingok
create 660 mysql adm
postrotate
if test -x /usr/bin/mysqladmin && \
/usr/bin/mysqladmin ping &>/dev/null
then
/usr/bin/mysqladmin flush-logs
fi
endscript } I put this in /etc/logrotate.d yesterday and today the log was not rotated. Below are the things that I have done: I verified that the log is indeed in /var/log/mysql/mysql-slow.log mysqladmin lines work fine when run as root mysql is able to write to the mysql-slow.log When I did this: $ logrotate -d -f mysql-slow
reading config file mysql-slow
reading config info for /var/log/mysql/mysql-slow.log
Handling 1 logs
rotating pattern: /var/log/mysql/mysql-slow.log forced from command line (3 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/mysql/mysql-slow.log
log needs rotating
rotating log /var/log/mysql/mysql-slow.log, log->rotateCount is 3
dateext suffix '-20120329'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/mysql/mysql-slow.log.3.gz to /var/log/mysql/mysql-slow.log.4.gz (rotatecount 3, logstart 1, i 3),
renaming /var/log/mysql/mysql-slow.log.2.gz to /var/log/mysql/mysql-slow.log.3.gz (rotatecount 3, logstart 1, i 2),
renaming /var/log/mysql/mysql-slow.log.1.gz to /var/log/mysql/mysql-slow.log.2.gz (rotatecount 3, logstart 1, i 1),
renaming /var/log/mysql/mysql-slow.log.0.gz to /var/log/mysql/mysql-slow.log.1.gz (rotatecount 3, logstart 1, i 0),
renaming /var/log/mysql/mysql-slow.log to /var/log/mysql/mysql-slow.log.1
creating new /var/log/mysql/mysql-slow.log mode = 0660 uid = 20004 gid = 4
running postrotate script
running script (multiple) with arg /var/log/mysql/mysql-slow.log : "
if test -x /usr/bin/mysqladmin && \
/usr/bin/mysqladmin &>/dev/null
then
/usr/bin/mysqladmin flush-logs
fi
"
compressing log with: /bin/gzip
removing old log /var/log/mysql/mysql-slow.log.4.gz Where is the log that shows that logrotate was successful? I want to see if there is anything that would say that there was a problem. Any ideas on why the logrotate is not working? | A common issue is when you first setup a daily logrotate.d entry, it will not rotate the first day. When you use a time based rotation (daily/weekly/monthly) logrotate scribbles a date stamp of the last date it saw the file in /var/lib/logrotate/status (or /var/lib/logrotate.status on RHEL systems). The scribbled date becomes the reference date from that future runs of logrotate will use to compare 'daily' rotations. Since the default cron job runs daily, this is typically only a problem in daily jobs. You can avoid this problem two ways; run sudo logrotate -f /etc/logrotate.d/<my rotate job> This will scribble the date into the status file and rotate the logs Edit /var/lib/logrotate/status and add the line manually: "/var/log/my_special.log" 2013-4-8 setting it to today's or a prior date. Next run should cause it to run. | {
"source": [
"https://serverfault.com/questions/375004",
"https://serverfault.com",
"https://serverfault.com/users/50033/"
]
} |
375,090 | My Ubuntu 11.04 machine uses LUKS encryption for root, swap and home. A routine fsck -n revealed a set of errors I need to repair. fsck requires to unmount the partitions. Before luks I would simply boot from a USB stick and fix run fsck from there. What are the steps to do that for LUKS encrypted partitions? | The exact method depends on how you have setup luks, and if you have LVM on top of luks or if you just have a filesystem within the luks volume. If you don't have LVM in addition to luks then you would probably do something like this. cryptsetup luksOpen /dev/rawdevice somename
fsck /dev/mapper/somename
# or
cryptsetup luksOpen /dev/sda2 _dev_sda2
fsck /dev/mapper/_dev_sda2 If you used the LVM on LUKS option provided by the Debian/Ubuntu installer, then you'll need to start up LVM. So run vgchange -aly after opening the encrypted volume, then run fsck against the /dev/mapper/lvname . (If commands are missing, you may need to do apt-get install cryptsetup first. Similarly, if you need vgchange, do apt-get install lvm2 .) | {
"source": [
"https://serverfault.com/questions/375090",
"https://serverfault.com",
"https://serverfault.com/users/115864/"
]
} |
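Once fsck has finished, the reverse steps close everything down again before rebooting into the installed system (names follow the example above):
vgchange -an                     # only if you activated an LVM volume group
cryptsetup luksClose somename    # or _dev_sda2, matching the name used at luksOpen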
375,096 | A bash command outputs this: Runtime Name: vmhba2:C0:T3:L14
Group State: active
Runtime Name: vmhba3:C0:T0:L14
Group State: active unoptimized
Runtime Name: vmhba2:C0:T1:L14
Group State: active unoptimized
Runtime Name: vmhba3:C0:T3:L14
Group State: active
Runtime Name: vmhba2:C0:T2:L14
Group State: active I'd like to pipe it to something to make it look like this: Runtime Name: vmhba2:C0:T1:L14 Group State: active
Runtime Name: vmhba3:C0:T3:L14 Group State: active unoptimized
Runtime Name: vmhba2:C0:T2:L14 Group State: active
[...] i.e. remove every other newline I tried ... |tr "\nGroup" " " but it removed all newlines and ate up some other letters as well. thanks | can't test right now, but ... | paste - - should do it | {
"source": [
"https://serverfault.com/questions/375096",
"https://serverfault.com",
"https://serverfault.com/users/28678/"
]
} |
375,252 | I just want to setup a system wide environment variable, JAVA_HOME for all users, including root user. Requirements: accessible to normal users accessible to root always loaded, not only for bash (gnome-terminal does not start a bash by default) to work on Ubuntu, Debian and optionally Red Hat great if addition could be easily scripted | For Ubuntu, and possibly other *nix platforms, add a new script in /etc/profile.d named java.sh , such as: echo "export JAVA_HOME=/usr/lib/jvm/default-java" > /etc/profile.d/java.sh Other considerations that were ruled out: /etc/environment - works but is harder to maintain using other tools (or people will edit it); and /etc/profile - same drawbacks as /etc/environment | {
"source": [
"https://serverfault.com/questions/375252",
"https://serverfault.com",
"https://serverfault.com/users/10361/"
]
} |
375,316 | I am able to use the history command on CentOS to get list of previous inputted commands, however, if I do something like: !372 , history will attempt to run the referenced command. I need the previous run command to appear in at current cursor. Here's an example: [dev@home ~]$ previous_command_no_execute!372 | How about, put this on your command line: $ !372 Then press ESC followed by CTRL+E . This will autoexpand on the command line without actually running it. (also expands everything else on the line, including env vars) This only works on Bash, as far as I'm aware. | {
"source": [
"https://serverfault.com/questions/375316",
"https://serverfault.com",
"https://serverfault.com/users/115937/"
]
} |
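An alternative that needs no keystroke trick is the :p history modifier, which prints the expanded command instead of running it and appends it to the history, so a single press of the Up arrow recalls it for editing:
!372:p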
375,525 | We have a bastion server that we use to connect to multiple hosts, and our .ssh/config has grown to over a thousand lines (we have hundreds of hosts that we connect to). This is beginning to get a little unwieldy and I'd like to know if there is a way to break the .ssh/config file up into multiple files. Ideally, we'd specify somewhere that other files would be treated as an .ssh/config file, possibly like: ~/.ssh/config
~/.ssh/config_1
~/.ssh/config_2
~/.ssh/config_3
... I have read the documentation on ssh/config, and I don't see that this is possible. But maybe someone else has had a similar issue and has found a solution. | The ~/.ssh/config file don't have a directive for including other files, possibly related to SSH's check for file permissions. Suggestions around this can include a script to cat several changes together either on the system or via checkin hooks on a repository. One might also look into tools such as Puppet or Augeas. However you approach it, though, you'll have to concatenate individual files to be a single file from outside of the file. $ cat ~/.ssh/config_* >> ~/.ssh/config note: overwrite: > v.s. append: >> Update December 2017: From 7.3p1 and up, there is the Include option. Which allows you to include configuration files. Include
Include the specified configuration file(s). Mul‐
tiple pathnames may be specified and each pathname
may contain glob(3) wildcards and, for user config‐
urations, shell-like “~” references to user home
directories. Files without absolute paths are
assumed to be in ~/.ssh if included in a user con‐
figuration file or /etc/ssh if included from the
system configuration file. Include directive may
appear inside a Match or Host block to perform con‐
ditional inclusion. | {
"source": [
"https://serverfault.com/questions/375525",
"https://serverfault.com",
"https://serverfault.com/users/75925/"
]
} |
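With OpenSSH 7.3p1 or newer the split described above can be done natively; a sketch, where the config.d directory name is just a convention (relative paths are resolved under ~/.ssh):
# first line of ~/.ssh/config
Include config.d/*
# host snippets then live in files such as ~/.ssh/config.d/production, ~/.ssh/config.d/lab, ...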
375,981 | I have a chain appended with many rules like: > :i_XXXXX_i - [0:0]
> -A INPUT -s 282.202.203.83/32 -j i_XXXXX_i
> -A INPUT -s 222.202.62.253/32 -j i_XXXXX_i
> -A INPUT -s 222.202.60.62/32 -j i_XXXXX_i
> -A INPUT -s 224.93.27.235/32 -j i_XXXXX_i
> -A OUTPUT -d 282.202.203.83/32 -j i_XXXXX_i
> -A OUTPUT -d 222.202.62.253/32 -j i_XXXXX_i
> -A OUTPUT -d 222.202.60.62/32 -j i_XXXXX_i
> -A OUTPUT -d 224.93.27.235/32 -j i_XXXXX_i when I try to delete this chain with: iptables -X XXXX but got error like (tried iptables -F XXXXX before): iptables: Too many links. Is there a easy way to delete the chain by once command? | You can't delete chains when rules with '-j CHAINTODELETE' are referencing them. Figure out what is referencing your chain (the link), and remove that. Also, flush then kill. -F, --flush [chain] Flush the selected chain (all the chains in the table if none is given). This is equivalent to deleting all the rules one by one. -X, --delete-chain [chain] Delete the optional user-defined chain specified. There must be no references to the chain. If there are, you must delete or replace the
referring rules before the chain can be deleted. The chain must be empty, i.e. not contain any rules. If no argument is given, it will
attempt to delete every non-builtin chain in the table. | {
"source": [
"https://serverfault.com/questions/375981",
"https://serverfault.com",
"https://serverfault.com/users/115259/"
]
} |
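Step by step for the chain in the question (i_XXXXX_i), assuming the rules shown there:
iptables-save | grep -- '-j i_XXXXX_i'                  # list every rule that jumps to the chain
iptables -D INPUT  -s 224.93.27.235/32 -j i_XXXXX_i     # delete each referencing rule...
iptables -D OUTPUT -d 224.93.27.235/32 -j i_XXXXX_i     # ...repeating for the other addresses
iptables -F i_XXXXX_i                                   # then flush the chain itself
iptables -X i_XXXXX_i                                   # and now -X succeeds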
376,162 | Right now I have this config: location ~ ^/phpmyadmin/(.*)$
{
alias /home/phpmyadmin/$1;
} However, if I visit www.mysite.com/phpmyadmin (note the lack of trailing slash), it won't find what I'm looking for a 404. I assume because I don't include the trailing slash. How can I fix this? | It might be in the regular expression that you're using -- location ~ ^/phpmyadmin/(.*)$ The above will match /phpmyadmin/, /phpmyadmin/anything/else/here, but it won't match /phpmyadmin because the regular expression includes the trailing slash. You probably want something like this: location ~ /phpmyadmin/?(.*)$ {
alias /home/phpmyadmin/$1;
} The question mark is a regular expression quantifier and should tell nginx to match zero or one of the previous character (the slash). Warning: The community seen this solution, as is, as a possible
security risk | {
"source": [
"https://serverfault.com/questions/376162",
"https://serverfault.com",
"https://serverfault.com/users/33982/"
]
} |
376,699 | I'm using PostgreSQL (8.3) with multiple databases... I'm wondering if there is some way to log the queries made only in one of the databases (not all of them). Or to have one logfile per database... I know I can use log_line_prefix = "%d" to log the name of the database, and then filter, but that is not the issue. Should I maybe use a log_analyzer to get around this ? Do you have any recommendations ? thanks | Yes, this is possible, you can set the configuration parameter log_statement per database: ALTER DATABASE your_database_name
SET log_statement = 'all'; | {
"source": [
"https://serverfault.com/questions/376699",
"https://serverfault.com",
"https://serverfault.com/users/46872/"
]
} |
376,717 | I run a job every minute to reindex my site's content. Today, the search engine died, and when I logged in there were hundreds of orphan processes that had been started by cron. Is there another way using some kind of existing software that will let me execute a job every minute, but that won't launch another instance if that job doesn't return (i.e. because the search engine process has failed)? | The problem isn't really with cron - it's with your job. You will need to have your job interact with a lock of some description. The easiest way to do this is have it attempt to create a directory and if successful continue, if not exit. When your job has finished and exits it should remove the directory ready for the next run. Here's a script to illustrate. #!/bin/bash
function cleanup {
echo "Cleanup"
rmdir /tmp/myjob.lck
}
mkdir /tmp/myjob.lck || exit 1
trap cleanup EXIT
echo 'Job Running'
sleep 60
exit 0 Run this in one terminal then before 60 seconds is up run it in another terminal it will exit with status 1. Once the first process exits you can run it from the second terminal ... EDIT: As I just learned about flock I thought I'd update this answer. flock(1) may be easier to use. In this case flock -n would seem appropriate e.g. * * * * * /usr/bin/flock -n /tmp/myAppLock.lck /path/to/your/job Would run your job every minute but would fail if flock could not obtain a lock on the file. | {
"source": [
"https://serverfault.com/questions/376717",
"https://serverfault.com",
"https://serverfault.com/users/116449/"
]
} |
376,839 | On my website, I have a "hidden" page that displays a list of the most recent visitors. There exist no links at all to this single PHP page, and, theoretically, only I know of its existence. I check it many times per day to see what new hits I have. However, about once a week, I get a hit from a 208.80.194.* address on this supposedly hidden page (it records hits to itself). The strange thing is this: this mysterious person/bot does not visit any other page on my site. Not the public PHP pages, but only this hidden page that prints the visitors . It's always a single hit, and the HTTP_REFERER is blank. The other data is always some variation of Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b) ... but sometimes MSIE 6.0 instead of 7, and various other plug ins. The browser is different every time, as with the lowest-order bits of the address. And it's just that. One hit per week or so, to that one page. Absolutely no other pages are touched by this mysterious visitor. Doing a whois on that IP address showed it's from the New York area, and from the "Websense" ISP. The lowest order 8 bits of the address vary, but they're always from the 208.80.194.0 /24 subnet. From most of the computers that I use to access my website, doing a traceroute to my server does not contain a router anywhere along the way with the IP 208.80.*. So that rules out any kind of HTTP sniffing, I might think. How and why is this happening? It seems completely benign, but unexplainable and a little creepy. | Websense? Websense is in the business of classifying URLs and looking for "naughty" things on the Internet. Their products usually show up in corporate environments. I'd bet that you accessed your secret page of HTTP from a company that has Websense installed and they automatically added the page to their (presumably gargantuan) list of pages to troll checking for porn, warez, forums, etc. As for the varying header, I'm guessing their robot has all manner of possible banners to choose from an intentionally changes them up to mask itself from analysis and pretend it's not a bot. In fact, a quick Google search of FunWebProducts websense all but confirms the theory. | {
"source": [
"https://serverfault.com/questions/376839",
"https://serverfault.com",
"https://serverfault.com/users/105047/"
]
} |
376,894 | I am often on one computer in my house and I would like to SSH to another one, but often don't know the IP address of the one I want to connect to. Is there a way, from the command line, to scan the local network so I can find the computer I want to connect to? | Use " nmap " - this will tell you which hosts are up on a network, and indeed which have port 22 open. You could combine it with a few other tools (like grep) to produce more targeted output if need be. Note: do this only on YOUR network. Running up nmap or its equivalents on someone else's network is considered bad form. sudo nmap -p 22 192.168.0.0/24 | {
"source": [
"https://serverfault.com/questions/376894",
"https://serverfault.com",
"https://serverfault.com/users/10126/"
]
} |
376,899 | We're about to move a client from a "regular" Courier Mail Server at another provider's server to Google Apps with us. So at the same time, we're switching their DNS records to "point" to our web server and Google's MX records. How can we avoid e-mail getting lost in the transition? Should we shut down Courier beforehand so that it doesn't receive any e-mail until the DNS changes have propagated? What will happen to e-mail that is sent to Courier but not received? Also, what about users who don't check their e-mail very often, how do we move their e-mail from Courier to Google Apps when they're on the same domain? Should we temporarily move Courier to another domain? | Use " nmap " - this will tell you which hosts are up on a network, and indeed which have port 22 open. You could combine it with a few other tools (like grep) to produce more targeted output if need be. Note: do this only on YOUR network. Running up nmap or its equivalents on someone else's network is considered bad form. sudo nmap -p 22 192.168.0.0/24 | {
"source": [
"https://serverfault.com/questions/376899",
"https://serverfault.com",
"https://serverfault.com/users/7090/"
]
} |
376,900 | I am using VMware ESXi. In our team we use to provide snapshots for long term backup. Then we faced issues like memory spillover and the server got hang up. I started reading in VMware knowledgebase articles and everywhere. Everywhere it was recommended not to have snapshots for a long time. Even VMware advised to keep snapshots for maximum of three days. But our team kept asking us to have at least two permanent snapshots (till deleting the VM). Sometimes we may use the VM for a year). one snapshot is for fresh machine state. (So when we complete testing an application, we will revert back to fresh state and install another application) (If I did not allow that, I may often need to host the VM.) Next snapshot for keeping the VM in some state (maybe they would have found an issue and keep that state for some time. Or they may install prerequisites for the application and keep the machine ready for testing.) Logically, their needs seems to be fair. But if I allow that, I am to permit them to hold the snapshots for long time. We are not using our VM as a mail server or database server. Why is keeping snapshots for long time having an adverse effect? Why are snapshots considered as temporary backups, not real backups? | When a VM has an active snapshot, its virtual disk I/O is not performed on the VM's actual .VMDK files, but instead they are kept unchanged, and whatever changes in the VM is written to different physical files; this allows for the recovery of the previous VM state, but has three important side effects: Disk I/O for the VM is much slower. Those "delta" files keep growing over time, as more and more disk I/O is performed by the VM. When the snapshot is removed, the changes stored in the "delta" files have to be merged back into the main .VMDK files, and this is is very slow and time consuming if the snapshot has been active for a long time. It is indeed better to not keep active snapshots for a long time. If you need a long-term backup of a VM in a given state, you can just copy the VM somewhere else: this will have no performance impact on the VM, and you'll anyway end up using less disk space than what long-term snapshots would fill up over time. Also, having a copy of the VM stored in a different place will actually help you if you lose the VM: snapshots are stored together with the VM they belong to, and are only useful if the VM is available; they are totally useless in the case of an actual data loss (like a datastore crash), and thus can't be used as real backups. Here is some official documentation about snapshots: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1015180 | {
"source": [
"https://serverfault.com/questions/376900",
"https://serverfault.com",
"https://serverfault.com/users/105129/"
]
} |
377,170 | I'm running Ubuntu 11.10 - setting up NFS to share a directory among many other servers. Which ports are required to be opened on the firewall? | $ rpcinfo -p | grep nfs Port 111 (TCP and UDP) and 2049 (TCP and UDP) for the NFS server. There are also ports for Cluster and client status (Port 1110 TCP for the former, and 1110 UDP for the latter) as well as a port for the NFS lock manager (Port 4045 TCP and UDP). Only you can determine which ports you need to allow depending on which services are needed cross-gateway. | {
"source": [
"https://serverfault.com/questions/377170",
"https://serverfault.com",
"https://serverfault.com/users/80269/"
]
} |
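A sketch of matching iptables rules once the ports are pinned down; 10.0.0.0/24 stands in for the client subnet:
iptables -A INPUT -s 10.0.0.0/24 -p tcp -m multiport --dports 111,2049 -j ACCEPT
iptables -A INPUT -s 10.0.0.0/24 -p udp -m multiport --dports 111,2049 -j ACCEPT
With NFSv3 the auxiliary daemons (mountd, statd, lockd) pick random ports unless you pin them in the distribution's NFS configuration, so either fix those ports and open them too, or restrict by source address as above.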
377,221 | The "screen" refers to a program mentioned in How to reconnect to a disconnected ssh session . That is a good facility. But there is a question I'd really like to know. How do I know whether I'm running inside a "screen"? The difference is: If yes, I know I can safely close current terminal window, e.g., close a PuTTY window, without losing my shell(Bash etc) session. If no, I know I have to take care of any pending works before I close the terminal window. Better, I'd like this status to be displayed in PS1 prompt so that I can see it any time automatically. | (Stolen from " How can I tell whether I'm in a screen? " over on StackOverflow and authored by user jho . P.S. You can't vote for a duplicate across StackExchange sites.) Check $STY . If it's null, you're on a "real" terminal. If it contains anything, it's the name of the screen you're in. If you are not in screen: eric@dev ~ $ echo $STY
eric@dev ~ $ If you are in screen: eric@dev ~ $ echo $STY
2026.pts-0.ip-10-0-1-71 If you use tmux instead of screen, also check $TMUX . To add this to your prompt, add the following to your ~/.bashrc : if [ -n "$STY" ]; then export PS1="(screen) $PS1"; fi
if [ -n "$TMUX" ]; then export PS1="(tmux) $PS1"; fi | {
"source": [
"https://serverfault.com/questions/377221",
"https://serverfault.com",
"https://serverfault.com/users/41079/"
]
} |
377,248 | I work as an IT guy in a law firm. I am recently asked to make a system wherein all the outgoing emails coming from our server to our clients will be put on hold first and wait for approval before it gets sent to the client. Our mail server uses Exim (that's what it says in cPanel). I am planning to create filters where the outgoing emails will be forwarded to an editor account. Then, the editor will review and edit the contents of the email. When the editor already approves the email, it will then get sent to the client by the editor but still using the original sender in the "From:" and "Reply-To:" field. I found some pointers from this site => http://www.devco.net/archives/2006/03/24/saving_copies_of_all_email_using_exim.php . Once the filters are in place, I want to make a simple PHP interface for the editor to check the forwarded emails and edit them if necessary. The editor can then click on an "Approve" button that will finally deliver the message using the original sender. I'm also thinking that maybe a PHP-less system will be enough. The editor can receive the emails from his own email client edit them and simply send the email as if he is the original sender. Is my plan feasible? Will there be issues that I have overlooked? Does it have the danger of being treated as spam by the other mailservers since I'll be messing up the headers? Update: (April 6, 2012)
The above questions are probably vague so here is a more specific question:
1. Can I possibly forward ALL outgoing messages going OUTSIDE our domain to be sent to another address and NOT to the actual recipient? | (Stolen from " How can I tell whether I'm in a screen? " over on StackOverflow and authored by user jho . P.S. You can't vote for a duplicate across StackExchange sites.) Check $STY . If it's null, you're on a "real" terminal. If it contains anything, it's the name of the screen you're in. If you are not in screen: eric@dev ~ $ echo $STY
eric@dev ~ $ If you are in screen: eric@dev ~ $ echo $STY
2026.pts-0.ip-10-0-1-71 If you use tmux instead of screen, also check $TMUX . To add this to your prompt, add the following to your ~/.bashrc : if [ -n "$STY" ]; then export PS1="(screen) $PS1"; fi
if [ -n "$TMUX" ]; then export PS1="(tmux) $PS1"; fi | {
"source": [
"https://serverfault.com/questions/377248",
"https://serverfault.com",
"https://serverfault.com/users/116640/"
]
} |
377,348 | I'm taking to putting various files in /tmp , and I wondered about the rules on deleting them? I'm imagining it's different for different distributions, and I'm particularly interested in Ubuntu and Fedora desktop versions. But a nice general way of finding out would be a great thing. Even better would be a nice general way of controlling it! (Something like 'every day at 3 in the morning, delete any /tmp files older than 60 days, but don't clear the directory on reboot') | That depends on your distribution. On some system, it's deleted only when booted, others have cronjobs running deleting items older than n hours. On Ubuntu 14: using tmpreaper which gets called by /etc/cron.daily , configured via /etc/default/rcS and /etc/tmpreaper.conf . ( Credits to this answer ). On Ubuntu 16: using tmpfiles.d . ( Credits to this answer ). On other Debian-like systems: on boot (the rules are defined in /etc/default/rcS ). On RedHat-like systems: by age (RHEL6 it was /etc/cron.daily/tmpwatch ; RHEL7/RHEL8 and RedHat-like with systemd it's configured in /usr/lib/tmpfiles.d/tmp.conf , called by systemd-tmpfiles-clean.service ). On Gentoo /etc/conf.d/bootmisc . | {
"source": [
"https://serverfault.com/questions/377348",
"https://serverfault.com",
"https://serverfault.com/users/70093/"
]
} |
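For the "every day at 3 in the morning, older than 60 days" example in the question, a plain cron entry works on any of these distributions; this deletes only files, leaves directories alone, and stays on the /tmp filesystem:
# /etc/cron.d/tmpclean (assumed filename; cron.d entries include the user field)
0 3 * * * root find /tmp -xdev -type f -mtime +60 -delete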
377,598 | My Laptop and my workstation are both connected to a Gigabit Switch. Both are running Linux. But when I copy files with rsync , it performs badly. I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here? EDIT: I conducted some experiments. Write performance on the laptop The laptop has a xfs filesystem with full disk encryption. It uses aes-cbc-essiv:sha256 cipher mode with 256 bits key length. Disk write performance is 58.8 MB/s . iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s Read performance on the workstation The files I copied are on a software RAID-5 over 5 HDDs. On top of the raid is a lvm. The volume itself is encrypted with the same cipher. The workstation has a FX-8150 cpu that has a native AES-NI instruction set which speeds up encryption. Disk read performance is 256 MB/s (cache was cold). iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s Network performance I ran iperf between the two clients. Network performance is 939 Mbit/s iblue@raven $ iperf -c 94.135.XXX
------------------------------------------------------------
Client connecting to 94.135.XXX, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[ 3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec | Another way to mitigate high CPU usage while still keeping the functionality of rsync is to move from rsync/SSH to rsync/NFS. You could export the paths you want to copy from via NFS and then use rsync locally from the NFS mount to your destination location. In one test from a WD MyBook Live network disk, one or more rsyncs from the NAS on a Gigabit network towards 2 local USB disks would not copy more than 10MB/sec (CPU: 80% usr, 20% sys); after exporting over NFS and rsyncing locally from the NFS share to both disks I got a total of 45MB/sec (maxing out both USB2 disks) and little CPU usage. Disk utilization when using rsync/SSH was about 6% and using rsync/NFS was closer to 24%, while both USB2 disks were close to 100%. So we effectively moved the bottleneck from the NAS CPU to both USB2 disks.
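A minimal sketch of that setup (the hostnames, export path and mount point are illustrative, and the export line belongs on the NFS server):
# on the server, /etc/exports exports the source read-only to the client:
#   /share  192.168.0.10(ro,no_subtree_check)
# on the client, mount it and run rsync locally:
sudo mount -t nfs nas:/share /mnt/share
rsync -a --progress /mnt/share/ /data/backup/
sudo umount /mnt/share
Keep in mind that plain NFS has none of SSH's encryption, so this trade-off only makes sense on a trusted LAN. | {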
"source": [
"https://serverfault.com/questions/377598",
"https://serverfault.com",
"https://serverfault.com/users/116793/"
]
} |
377,617 | Following a discussion made HERE about how PHP-FPM consuming memory, I just found a problem in reading the memory in top command. Here is a screenshot of my top just after restarting PHP-FPM . Everything is normal: about 20 PHP-FPM processes, each consuming 5.5MB memory (0.3% of total). Here is the aged server right before restart of PHP-FPM (one day after the previous restart). Here, we still have about 25 PHP-FPM with double memory usage (10MB indicating 0.5% of total). Thus, the total memory used should be 600-700 MB. Then, why 1.6GB memory has been used? | TL;DR 1 Your server is within some kind of virtuozzo/openvz/ virtualization-du-jour container. Trying to make sense of memory usage is tilting at windmills. TL;DR 2 Linux ate your RAM! But that's okay, it does it to everyone. The Long Story Let's break it down! In the Mem: section we have: $n total : the amount of physical RAM in your machine $n used : how much memory is being consumed by Linux, not just the sum of the processes. $n free : How much RAM is not being consumed by Linux. This does not take into account that cached and buffered memory is in essence "free". $n buffers : buffer space is where blocks of disk I/O having been read or pending a write are stored. A buffer is a RAM representation of a single disk block. In the Swap: section we have: $n total : Self explanatory. Amount of disk space available to swap pages to. $n used : Self explanatory. How much disk swap space is used. $n free : Herp Derp. $n cache : Closely related to buffers above. It's actually part of the page cache and itself has no space on physical disk. Don't worry about the details for this conversation. The interesting part comes when you run free -m . You'll see three lines, and all of the numbers will correlate with top. I'll give my own PC as an example: total used free shared buffers cached
Mem: 8070 7747 323 0 253 5713
-/+ buffers/cache: 1780 6290
Swap: 5055 0 5055 The Mem row shows total RAM in megabytes ( $n total in top), how much is used ( $n used in top), how much is free ( $n free in top), how much is shared (ignore that), and now comes the good part! The buffers and cached columns in free -m correlate to, predictably, $n buffers and $n cache . But take a look at the second row of free -m , the one that starts with -/+ buffers/cache: . The math shows that the "real" used amount is (used - buffers - cached), and the "real" free amount is (total) minus that new used figure. What does all this mean? It means that Linux ate your RAM! The short story is that the Linux kernel gobbles up RAM as it is available to use for disk caching. There's nothing you can do about it unless you feel like trying to compile a custom kernel. Pro Tip: Don't. The RAM is really there and free for processes to use at their whim. That's what's meant by the -/+ buffers/cache: row in free -m . However, you're inside a non-hypervisor virtualization container, which makes things a bit squirrely. You simply can't take stock of your memory with byte accuracy at this point. However, you're not seeing any behavior that's terribly unusual. Keep Calm and Carry On.
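To make the arithmetic concrete with the numbers above, here is a small check against the older free(1) column layout shown in this answer (newer procps releases print an "available" column instead, so treat this as tied to that older layout):
free -m | awk '/^Mem:/ {
    total=$2; used=$3; buffers=$6; cached=$7
    real_used = used - buffers - cached
    printf "real used: %d MB, real free: %d MB\n", real_used, total - real_used
}'
With the example output above this prints roughly 1781 MB used and 6289 MB free, matching the -/+ buffers/cache row. | {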
"source": [
"https://serverfault.com/questions/377617",
"https://serverfault.com",
"https://serverfault.com/users/94757/"
]
} |
377,934 | I believe that the easiest answer for the first question is "No, you have "A" for this", but I accidentally set up a subdomain using a CNAME pointing to an IP address and it worked on a few computers in my office. I wonder how that was possible. Now, when I check it from home I get the following error: beast:~ viroos$ host somesubdomain.somedomain.com
Host somesubdomain.somedomain.com not found: 3(NXDOMAIN) I'm 100% sure it used to work at my office (currently it looks like it doesn't, but I'm checking it on a different machine). Therefore I'm not 100% sure whether it worked due to some special network setup or because I tested it just after adding the DNS entry. I know this story sounds a little crazy/incredible, but can someone help me solve this puzzle? //edit: I'm adding dig output ; <<>> DiG 9.6-ESV-R4-P3 <<>> somesubdomain.somedomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60224
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;somesubdomain.somedomain.com. IN A
;; ANSWER SECTION:
somesubdomain.somedomain.com. 67 IN CNAME xxx.xxx.xxx.xx1.
;; AUTHORITY SECTION:
. 1800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012040901 1800 900 604800 86400
;; Query time: 72 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Apr 10 00:11:01 2012
;; MSG SIZE rcvd: 136 | The data on a CNAME record must always be another DNS name - that's the whole point of a CNAME . As put succinctly by RFC 1034, the data in a CNAME should be: CNAME a domain name. While, if you're looking to point to an IP address, then your ticket is: A For the IN class, a 32 bit IP address CNAME is designed and implemented to be a DNS alias; it has no conception of having an IP address in that data field. As such, it's interpreted as an alias to another DNS name, as designed; after all, an IP address fits the syntax of a DNS name. So, for example's sake, let's say your DNS data is: somesubdomain.somedomain.com. 60 IN CNAME 192.0.2.1. The recursive DNS server that you're querying sees that the record is a CNAME , and figures that you'll want the actual data that it contains. No record other than the CNAME has been found, so there's no answer to give to the client. It tries to query a record for a hostname 192 , within the domain 0.2.1 . It doesn't have anything cached for that name, so it asks the root servers. They serve requests for TLDs like .com and .net , but this request is a request for .1 . They promptly respond that there is no such, and that's what the recurser sends on to you. The response that you're seeing in dig is your recursive name server saying, "well, the name you looked for pointed somewhere else, and that somewhere didn't exist - ask the root server if you don't believe me". So, yes, putting an IP address in a CNAME record is never valid, and I suspect that the systems that are working are functioning correctly through some other mechanism, like a hosts file or local name resolution - investigate their name resolution behavior. | {
"source": [
"https://serverfault.com/questions/377934",
"https://serverfault.com",
"https://serverfault.com/users/25218/"
]
} |
378,581 | I use nginx as a reverse proxy.
Whenever I update the config for it using sudo "cp -r #{nginx_config_path}* /etc/nginx/sites-enabled/"
sudo "kill -s HUP `cat /var/run/nginx.pid`" I face a brief downtime. How can I avoid that? | Run service nginx reload or /etc/init.d/nginx reload . It will do a hot reload of the configuration without downtime. If you have pending requests, then there will be lingering nginx processes that will handle those connections before they die, so it's an extremely graceful way to reload configs. Sometimes you may want to prepend the command with sudo .
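A cautious deploy sequence might look like this (the nginx_config_path placeholder mirrors the question's own deploy command; nginx -t validates the new config before any reload is attempted):
sudo cp -r "${nginx_config_path}"* /etc/nginx/sites-enabled/
sudo nginx -t && sudo service nginx reload
If nginx -t reports an error, the reload never runs and the old, working configuration stays in place. | {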
"source": [
"https://serverfault.com/questions/378581",
"https://serverfault.com",
"https://serverfault.com/users/117169/"
]
} |
378,595 | I've got a rather strange problem with sudo. Basically, it authenticates, but sometimes just doesn't launch the command provided. For example: liori@marvin:~$ sudo whoami
root
liori@marvin:~$ sudo whoami
root
liori@marvin:~$ sudo whoami
liori@marvin:~$ sudo whoami
liori@marvin:~$ sudo whoami
liori@marvin:~$ I wrote a test case which demonstrates this problem: liori@marvin:~$ sudo whoami; for i in `seq 100`; do echo -n ':' ; sudo whoami ; done ; echo
root
::::::::::::root
:::::root
:::::root
:::::::::::root
::::::::::root
::::::::::::::::::::::::::::::::root
:::root
::::root
::::root
:::root
:::root
::root
:root
:::::
liori@marvin:~$ Of course, the expected output is a series of lines, each one starting with exactly one colon character. I don't have any clue where to begin debugging this problem. For each attempt (whether the command was actually run or not), I get an entry in syslog: Apr 11 19:47:40 marvin systemd-logind[806]: New session c1079 of user root.
Apr 11 19:47:40 marvin systemd-logind[806]: Removed session c1079. This is Debian SID. I started observing this behavior few days ago, after a somewhat bigger update (I update this system maybe once a month), and after moving the system from one hard disk to another (using rsync -av --del ). Any help will be appreciated. | Run service nginx reload or /etc/init.d/nginx reload It will do a hot reload of the configuration without downtime. If you have pending requests, then there will be lingering nginx processes that will handle those connections before it dies, so it's an extremely graceful way to reload configs. Sometimes you may want to prepend with sudo | {
"source": [
"https://serverfault.com/questions/378595",
"https://serverfault.com",
"https://serverfault.com/users/16081/"
]
} |
378,939 | I always used to use the following command when copying from a server: rsync --progress -avze ssh user@host:/path/to/files ./here However, a friend of mine showed me that I can simply do: rsync --progress -avz user@host:/path/to/files ./here So the question is, if you do not need -e ssh , why is it there anyway? | Any time you need additional options to the ssh command beyond the user and host, then you need the -e flag. Perhaps the server you're connecting to has ssh listening on port 2222. rsync -e 'ssh -p 2222' /source usr@host:/dest As an alternative way of getting around this, there are 2 files you can use: /etc/ssh/ssh_config or ~/.ssh/config The config file uses the same format as ssh_config . It's just able to be configured on a per-user basis!
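As a sketch of the per-user config approach (the backupbox alias, hostname and port are illustrative), put the connection options in ~/.ssh/config once:
Host backupbox
    HostName host.example.com
    User usr
    Port 2222
Then the plain form needs no -e at all: rsync --progress -avz backupbox:/path/to/files ./here This keeps the rsync command short while ssh still uses the non-standard port. | {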
"source": [
"https://serverfault.com/questions/378939",
"https://serverfault.com",
"https://serverfault.com/users/116214/"
]
} |
378,999 | I need to run SSH commands remotely with the output displaying locally. But if the connection breaks I want the command to still run. I am not talking so much about logging in and executing but doing ssh user@remotehost 'commands && command etc' How can I ensure the command runs even if the connection breaks? | The best way to do this is using screen, which keeps the session open in a persistent way even if the connection dies (and if you want to start using it again you can do a screen -r and it will open it up again). Prefixing whatever command you want to run with screen (eg. ssh -t user@host screen command ) should do the job. If you want it to run in the background of the shell, you can also append an & to the whole thing. | {
"source": [
"https://serverfault.com/questions/378999",
"https://serverfault.com",
"https://serverfault.com/users/32056/"
]
} |
379,000 | Since one of our customers updated their server courier does not handle IMAP connections properly any more. POP3 works without any problems. When I try to test IMAP with telnet then it is always like this: $ telnet domain.com 143
Trying 188.40.46.214...
Connected to domain.com.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
01 LOGIN [email protected] test
Connection closed by foreign host. I enabled debugging in the authdaemond but the output does not really help much: Apr 12 23:10:04 servername authdaemond: received auth request, service=imap, authtype=login
Apr 12 23:10:04 servername authdaemond: authmysql: trying this module
Apr 12 23:10:04 servername authdaemond: SQL query: SELECT login, password, "", uid, gid, homedir, maildir, quota, "", concat('disableimap=',disableimap,',disablepop3=',disablepop3) FROM mail_user WHERE login = '[email protected]'
Apr 12 23:10:04 servername authdaemond: password matches successfully
Apr 12 23:10:04 servername authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n
Apr 12 23:10:04 servername authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n Right after the "Authenticated" line the output stops. There is no other message. And in no other log file I've checked I could find any other related message. The system was updated from Ubuntu 10.10 to 12.04. How could I get more information? Or does anybody have an idea what could go wrong here? | The best way to do this is using screen, which keeps the session open in a persistent way even if the connection dies (and if you want to start using it again you can do a screen -r and it will open it up again). Prefixing whatever command you want to run with screen (eg. ssh -t user@host screen command ) should do the job. If you want it to run in the background of the shell, you can also append an & to the whole thing. | {
"source": [
"https://serverfault.com/questions/379000",
"https://serverfault.com",
"https://serverfault.com/users/12338/"
]
} |
379,675 | Nginx is running on port 80, and I'm using it to reverse proxy URLs with path /foo to port 3200 this way: location /foo {
proxy_pass http://localhost:3200;
proxy_redirect off;
proxy_set_header Host $host;
} This works fine, but I have an application on port 3200 , for which I don't want the initial /foo to be sent to. That is - when I access http://localhost/foo/bar , I want only /bar to be the path as received by the app. So I tried adding this line to the location block above: rewrite ^(.*)foo(.*)$ http://localhost:3200/$2 permanent; This causes 302 redirect (change in URL), but I want 301. What should I do? | Any redirect to localhost doesn't make sense from a remote system (e.g. client's Web browser). So the rewrite flags permanent (301) or redirect (302) are not usable in your case. Please try following setup using a transparent rewrite rule: location /foo {
rewrite /foo/(.*) /$1 break;
proxy_pass http://localhost:3200;
proxy_redirect off;
proxy_set_header Host $host;
} Use curl -i to test your rewrites. A very subtle change to the rule can cause nginx to perform a redirect.
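For instance, a quick check with curl (the paths follow the question's /foo/bar example) makes an accidental redirect easy to spot:
curl -i http://localhost/foo/bar      # want: HTTP/1.1 200 and no Location: header
curl -i http://localhost:3200/bar     # what the backend serves when asked directly
If the first request comes back as a 301/302 with a Location: header, the rule is redirecting instead of rewriting internally. | {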
"source": [
"https://serverfault.com/questions/379675",
"https://serverfault.com",
"https://serverfault.com/users/117601/"
]
} |
379,714 | I recently upgraded from the previous LTS Ubuntu to Precise and now mysql refuses to start. It complains of the following when I attempt to start it: ╰$ sudo service mysql restart
stop: Unknown instance:
start: Job failed to start And this shows in "/var/log/mysql/error.log": 120415 23:01:09 [Note] Plugin 'InnoDB' is disabled.
120415 23:01:09 [Note] Plugin 'FEDERATED' is disabled.
120415 23:01:09 [ERROR] Unknown/unsupported storage engine: InnoDB
120415 23:01:09 [ERROR] Aborting
120415 23:01:09 [Note] /usr/sbin/mysqld: Shutdown complete I've checked permissions on all the mysql directories to make sure it had ownership, and I also renamed the previous ib_log files so that it could remake them. I'm just getting nowhere with this issue right now, after looking at Google results for 2 hours. | After checking the logs I found the following error: [ERROR] Unknown/unsupported storage engine: InnoDB I removed these files: rm /var/lib/mysql/ib_logfile0
rm /var/lib/mysql/ib_logfile1 (both at /var/lib/mysql ). This resolved my problem after restart.
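A slightly more cautious variant of the same fix (a sketch; the backup directory is arbitrary) moves the InnoDB log files aside instead of deleting them, then lets MySQL recreate them on start:
sudo service mysql stop
sudo mkdir -p /root/innodb-logfile-backup
sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /root/innodb-logfile-backup/
sudo service mysql start
If the restart still fails, the saved files can simply be moved back. | {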
"source": [
"https://serverfault.com/questions/379714",
"https://serverfault.com",
"https://serverfault.com/users/74479/"
]
} |
380,561 | Here is the output of my date command: [root@r1304 ~]# date
Wed Apr 18 15:43:28 GST 2012 I want to change the default system timezone to Asia/Dubai. I've followed a tutorial and did this: ln -sf /usr/share/zoneinfo/Asia/Dubai /etc/localtime But with no effect. It seems that this is done differently in CentOS 6. How do I change the timezone? | It looks like CentOS 6.2 doesn't have any hwclock line in its /etc/rc.sysinit , so changing /etc/sysconfig/clock will not work. Try tzselect or use ln -s /usr/share/zoneinfo/xxxx /etc/localtime
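Spelled out for Asia/Dubai (a sketch: copying the zone file rather than symlinking it tends to survive tzdata/glibc updates, and note the tee command overwrites /etc/sysconfig/clock):
sudo cp /usr/share/zoneinfo/Asia/Dubai /etc/localtime
echo 'ZONE="Asia/Dubai"' | sudo tee /etc/sysconfig/clock
Running date afterwards should report the +04 (GST) offset. | {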
"source": [
"https://serverfault.com/questions/380561",
"https://serverfault.com",
"https://serverfault.com/users/36052/"
]
} |
380,642 | I run quite a big image gallery and there are 5 visitors that create an enormous amount of traffic by downloading the whole site every day using webcopiers. Those visitors have static IPs as it seems. What I would like to achieve is that those 5 IPs get redirected to a certain page (which explains why their behavior is problematic) as soon as they visit the site. All other visitors should be able to browse the site normally. The server is running CentOS (5.8) and nginx (1.0.15) as webserver.
Is there any way to achieve this by an entry in nginx.conf that you are aware of? Thank you very much in advance for your hints and support! Kind regards
-Alex | The Geo module is made to match client addresses. You can use it to define a variable to test like so: geo $bad_user {
default 0;
1.2.3.4/32 1;
4.3.2.1/32 1;
}
server {
if ($bad_user) {
rewrite ^ http://www.example.com/noscrape.html;
}
} This is more efficient than running a regex against $remote_addr, and easier to maintain. | {
"source": [
"https://serverfault.com/questions/380642",
"https://serverfault.com",
"https://serverfault.com/users/115654/"
]
} |
380,856 | I know it is a funny situation, but I removed python with all associated programs from Ubuntu using sudo apt-get remove python .
Obviously I can install python back, but it will take me a lot of time to reinstall all the programs that were removed.
Maybe there is some solution? Thanks | There is not an easy way, but if you look at /var/log/apt/history.log you can see what was removed. Just reinstall each package that was removed.
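As a rough sketch of automating that (review the package list before installing; the sed expressions assume the usual "Remove: pkg:arch (version), ..." layout of history.log):
pkgs=$(grep '^Remove:' /var/log/apt/history.log | tail -n 1 \
       | sed -e 's/^Remove: //' -e 's/([^)]*)//g' -e 's/:[a-z0-9]*//g' -e 's/,/ /g')
echo $pkgs
sudo apt-get install $pkgs
This only covers the most recent removal transaction; older ones are further up in history.log and in the rotated history.log.*.gz files. | {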
"source": [
"https://serverfault.com/questions/380856",
"https://serverfault.com",
"https://serverfault.com/users/114187/"
]
} |
381,018 | I'm trying to install winswitch on CentOs 6. It requires nxagent . But in centos, the package name is nx . Is there a way to tell yum to skip checking the nxagent dependency (I installed nx already)? Specifying --skip-broken skips the whole thing. | The rpm command has the --nodeps option that you can use. A challenge is that rpm by itself is not aware of yum repositories. The following command will install or update the package, ignoring dependencies, but automatically looking up the download URL from your repositories with repoquery which is in package yum-utils . rpm -Uvh --nodeps $(repoquery --location winswitch) After that, a regular yum update will likely succeed without dependency errors. | {
"source": [
"https://serverfault.com/questions/381018",
"https://serverfault.com",
"https://serverfault.com/users/35316/"
]
} |
381,081 | I have logrotate running in an EC2 AWS machine rotating Apache logs. Once packed, Apache logs are saved into AWS S3 via s3fs. The problem is that I recently noticed that I didn't have logs rotated. In S3 I have old logs from day 48->60 but days 1->47 don't appear. My question is: Where does logrotate save its own log? It's possible that I have some kind of problem with s3fs, but I need to know before I do anything. I tried to find the logs somewhere but couldn't. Any idea? | logrotate does not log anything by default. Normally it should be in your cron somewhere, for instance: $ grep -r -- 'logrotate.conf' /etc/cron*
/etc/cron.daily/logrotate:/usr/sbin/logrotate /etc/logrotate.conf You can either run that manually to see what is wrong, or redirect the logrotate output to a file in the above cron job to see what happened the next day. Most likely the config is incorrect somewhere and caused the logrotate run to break.
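For example (the flags are standard logrotate options; the log path in the commented line is just a suggestion):
sudo /usr/sbin/logrotate -d -v /etc/logrotate.conf    # -d: debug/dry-run, nothing is rotated
# or record the real daily run by changing the command inside /etc/cron.daily/logrotate to:
#   /usr/sbin/logrotate -v /etc/logrotate.conf >> /var/log/logrotate.log 2>&1
A config error (often a stray entry under /etc/logrotate.d/) shows up immediately in the -d output. | {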
"source": [
"https://serverfault.com/questions/381081",
"https://serverfault.com",
"https://serverfault.com/users/102847/"
]
} |
381,393 | On a website I am building, I plan to log the IP addresses of submissions, just in case it's necessary. I don't mind proxies, but outright spoofing your IP address would defeat the purpose. To perform a complete GET action, (regardless of whether you receive or whether it went through or not) is a legitimate IP address required? Or a website be spammed with posts from random spoofed IP addresses? (Is POST any different?) | No. Well, yes. Or maybe. It depends on where you're getting your "IP address" data from, and whether you trust them. If you're taking the address from the IP packets themselves, then you can trust that whoever sent the packets has access to packets sent to that IP address. That may mean that it's a legitimate user of that IP address (for appropriately limited values of the word "legitimate", in this age of botnets, open proxies, and Tor), or that whoever sent the packets has access to an intermediate system and can see the packets you're sending as they go past. However, with the wide prevalence of reverse proxies, the IP packet can often misrepresent the source of the connection, and so various HTTP headers have been introduced to allow the "actual" origin IP address to be provided by the proxy. The problem here is that you have to trust whoever's sending the header to provide accurate information. Also, default (or misguided copy-pasta'd) configs can easily leave you open to spoofing of those headers. Hence, you have to identify whether any reverse proxies are legitimately involved in your requests, and ensure they (and your webserver) are properly configured and secured. | {
"source": [
"https://serverfault.com/questions/381393",
"https://serverfault.com",
"https://serverfault.com/users/118185/"
]
} |
381,730 | I'm having big troubles with a remote server that for some reason explorer.exe crashed and, although I didn't lose remote desktop connectivity, I can't do anything. Is there a way of restarting explorer without rebooting the server? I appreciate ANY suggestions!! | Explorer runs on a per-user basis. Can you log in under a different account that isn't already logged in? Edit: Also, if your remote desktop session is still active, CTRL + ALT + END should have the same effect as a CTRL + ALT + DEL on the remote system. That might get you the Task Manager up, in which case you can kill/restart explorer.exe as required. | {
"source": [
"https://serverfault.com/questions/381730",
"https://serverfault.com",
"https://serverfault.com/users/94583/"
]
} |
382,116 | I can access the FTP site without problems from the local machine, but it times out from the remote machine. If I turn the firewall off COMPLETELY, it works. Obviously, this isn't really a satisfactory solution. I've attempted to follow these steps , but to now avail. On my remote machine I am using Filezilla as the FTP client.
Below is the output it gives me as I attempt to access the site. As you can see, it manages to connect and authenticate, but the attempt to list the directory times out. Can somebody tell me where I should look next? Status: Connecting to 192.168.15.12:21...
Status: Connection established, waiting for welcome message...
Response: 220 Microsoft FTP Service
Command: USER CMSDEVELOPMENT\CMSdev
Response: 331 Password required for CMSDEVELOPMENT\CMSdev.
Command: PASS ******
Response: 230-Directory has 71,805,415,424 bytes of disk space available.
Response: 230 User logged in.
Command: OPTS UTF8 ON
Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON.
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Type set to I.
Command: PASV
Response: 227 Entering Passive Mode (192,168,15,12,192,160).
Command: LIST
Response: 150 Opening BINARY mode data connection.
Error: Connection timed out
Error: Failed to retrieve directory listing Looking at the firewall logs, I see these entries: 2012-04-23 14:44:54 DROP TCP 192.168.15.90 192.168.15.12 55743 49342 52 S 650301735 0 65535 - - - RECEIVE
2012-04-23 14:44:57 DROP TCP 192.168.15.90 192.168.15.12 55743 49342 52 S 650301735 0 65535 - - - RECEIVE
2012-04-23 14:45:03 DROP TCP 192.168.15.90 192.168.15.12 55743 49342 48 S 650301735 0 65535 - - - RECEIVE | I finally got it to work, but there are some things I've learnt: IIS will let you configure the ports that the FTP server will use for passive mode. But, for me, this did NOT take effect until I restarted the service named "Microsoft FTP Service". When I looked at the inbound firewall rules, I saw three preconfigured rules: FTP Server (FTP Traffic-In) FTP Server Passive (FTP Passive Traffic-in) FTP Server Secure (FTP SSL Traffic In) These rules looked like just what I needed. But for some reason, they didn't actually do anything. When I created my OWN rules specifying exactly the same things, it worked.
(Apparently, I'm not the first to encounter this problem, see this posting .) Later Edit : Reading the comments below, it appears I was mistaken about these rules not working. You just need to enable them and restart the Microsoft FTP Service | {
"source": [
"https://serverfault.com/questions/382116",
"https://serverfault.com",
"https://serverfault.com/users/18371/"
]
} |
382,633 | Normally with a virtual host an ssl is setup with the following directives: Listen 443
SSLCertificateFile /home/web/certs/domain1.public.crt
SSLCertificateKeyFile /home/web/certs/domain1.private.key
SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt From: For enabling SSL for a single domain on a server with muliple vhosts, will this configuration work? What is the difference between SSLCertificateFile and SSLCertificateChainFile ? The client has purchased a CA key from GoDaddy. It looks like GoDaddy only provides a SSLCertificateFile (.crt file), and a SSLCertificateKeyFile (.key file) and not at SSLCertificateChainFile . Will my ssl still work without a SSLCertificateChainFile path specified ? Also, is there a canonical path where these files should be placed? | Strictly speaking, you don't ever need the chain for SSL to function. What you always need is an SSLCertificateFile with a SSLCertificateKeyFile containing the correct key for that certificate. The trouble is, that if all you give Apache is the certificate, then all it has to give to connecting clients is the certificate - which doesn't tell the whole story about that SSL cert. It's saying, "I'm signed by someone, but I'm not going to tell you about them". This usually works fine, as most client systems have a large store of CA certificates (both root and intermediate) which it can check through for a matching signing relationship to establish trust. However, sometimes this doesn't work; most often the issue you'll run into is a client that doesn't hold the cert for an intermediate CA that's signed your certificate. That's where the chain comes in; it lets Apache show the client exactly what the trust relationship looks like, which can help a client fill in the blanks between your cert, a root they trust, and the intermediate that they don't know about. The chain can be included in your configuration in one of two ways: Embedded in the same file as you've set for your SSLCertificateFile , on new lines after the server certificate in order (the root should be at the bottom). If you set it up like this, you'll want SSLCertificateChainFile pointed to the exact same file as SSLCertificateFile . In a separate file configured in the SSLCertificateChainFile directive; the CA certificate that issued the server's certificate should be first in the file, followed by any others up the the root. Check the certificate file that you have now - I'm betting that it doesn't have the chain data included. Which usually works fine, but will eventually cause an issue with some browser or other. | {
"source": [
"https://serverfault.com/questions/382633",
"https://serverfault.com",
"https://serverfault.com/users/53629/"
]
} |
382,858 | In my server, the ssh port is not the standard 22. I have set a different one. If I setup fail2ban, will it be able to detect that port? How can I tell it to check that port rather than port 22? The output of iptables -L -v -n : Chain fail2ban-ssh (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * 119.235.2.158 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain fail2ban-ssh-ddos (0 references)
pkts bytes target prot opt in out source destination The output of service iptables status : iptables: unrecognized service Summery of fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd.conf : Summary
=======
Addresses found:
[1]
[2]
[3]
113.59.222.240 (Wed Mar 21 18:24:47 2012)
113.59.222.240 (Wed Mar 21 18:24:52 2012)
119.235.14.153 (Wed Mar 21 21:52:53 2012)
113.59.222.21 (Thu Mar 22 07:50:44 2012)
176.9.57.203 (Fri Mar 23 19:34:29 2012)
176.9.57.203 (Fri Mar 23 19:34:42 2012)
113.59.222.56 (Sat Mar 31 14:23:52 2012)
113.59.222.56 (Sat Mar 31 14:24:05 2012)
119.235.14.183 (Mon Apr 02 20:49:13 2012)
119.235.14.168 (Sat Apr 21 09:58:56 2012)
119.235.2.158 (Wed Apr 25 13:11:03 2012)
119.235.2.158 (Wed Apr 25 13:11:40 2012)
119.235.2.158 (Wed Apr 25 13:11:43 2012)
119.235.2.158 (Wed Apr 25 13:11:47 2012)
119.235.2.158 (Wed Apr 25 13:12:49 2012)
119.235.2.158 (Wed Apr 25 13:12:52 2012)
119.235.2.158 (Wed Apr 25 13:12:55 2012)
119.235.2.158 (Wed Apr 25 13:12:58 2012)
119.235.2.158 (Wed Apr 25 13:13:02 2012)
119.235.2.158 (Wed Apr 25 13:13:04 2012)
119.235.2.158 (Wed Apr 25 13:13:25 2012)
119.235.2.158 (Wed Apr 25 13:19:18 2012)
119.235.2.158 (Wed Apr 25 13:19:52 2012)
119.235.2.158 (Wed Apr 25 13:19:55 2012)
119.235.2.158 (Wed Apr 25 13:19:55 2012)
119.235.2.158 (Wed Apr 25 13:19:58 2012)
119.235.2.158 (Wed Apr 25 13:20:02 2012)
119.235.2.158 (Wed Apr 25 13:20:05 2012)
119.235.2.158 (Wed Apr 25 13:40:16 2012)
[4]
[5]
119.235.2.158 (Wed Apr 25 13:11:38 2012)
119.235.2.158 (Wed Apr 25 13:12:46 2012)
119.235.2.158 (Wed Apr 25 13:19:49 2012)
[6]
119.235.2.155 (Wed Mar 21 13:13:30 2012)
113.59.222.240 (Wed Mar 21 18:24:43 2012)
119.235.14.153 (Wed Mar 21 21:52:51 2012)
176.9.57.203 (Fri Mar 23 19:34:26 2012)
119.235.2.158 (Wed Apr 25 13:19:15 2012)
[7]
[8]
[9]
[10]
Date template hits:
1169837 hit(s): MONTH Day Hour:Minute:Second
0 hit(s): WEEKDAY MONTH Day Hour:Minute:Second Year
0 hit(s): WEEKDAY MONTH Day Hour:Minute:Second
0 hit(s): Year/Month/Day Hour:Minute:Second
0 hit(s): Day/Month/Year Hour:Minute:Second
0 hit(s): Day/Month/Year Hour:Minute:Second
0 hit(s): Day/MONTH/Year:Hour:Minute:Second
0 hit(s): Month/Day/Year:Hour:Minute:Second
0 hit(s): Year-Month-Day Hour:Minute:Second
0 hit(s): Day-MONTH-Year Hour:Minute:Second[.Millisecond]
0 hit(s): Day-Month-Year Hour:Minute:Second
0 hit(s): TAI64N
0 hit(s): Epoch
0 hit(s): ISO 8601
0 hit(s): Hour:Minute:Second
0 hit(s): <Month/Day/Year@Hour:Minute:Second>
Success, the total number of match is 37
However, look at the above section 'Running tests' which could contain important
information. The jail.conf : # Fail2Ban configuration file.
#
# This file was composed for Debian systems from the original one
# provided now under /usr/share/doc/fail2ban/examples/jail.conf
# for additional examples.
#
# To avoid merges during upgrades DO NOT MODIFY THIS FILE
# and rather provide your changes in /etc/fail2ban/jail.local
#
# Author: Yaroslav O. Halchenko <[email protected]>
#
# $Revision: 281 $
#
# The DEFAULT allows a global definition of the options. They can be override
# in each jail afterwards.
[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host
ignoreip = 127.0.0.1
bantime = 14400
maxretry = 3
# "backend" specifies the backend used to get files modification. Available
# options are "gamin", "polling" and "auto".
# yoh: For some reason Debian shipped python-gamin didn't work as expected
# This issue left ToDo, so polling is default backend for now
backend = polling
#
# Destination email address used solely for the interpolations in
# jail.{conf,local} configuration files.
destemail = root@localhost
#
# ACTIONS
#
# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overriden globally or per
# section within jail.local file
banaction = iptables-multiport
# email action. Since 0.8.1 upstream fail2ban uses sendmail
# MTA for the mailing. Change mta configuration parameter to mail
# if you want to revert to conventional 'mail'.
mta = sendmail
# Default protocol
protocol = tcp
#
# Action shortcuts. To be used to define action parameter
# The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s]
# ban & send an e-mail with whois report to the destemail.
action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s]
%(mta)s-whois[name=%(__name__)s, dest="%(destemail)s", protocol="%(protocol)s]
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s]
%(mta)s-whois-lines[name=%(__name__)s, dest="%(destemail)s", logpath=%(logpath)s]
# Choose default action. To change, just override value of 'action' with the
# interpolation to the chosen action shortcut (e.g. action_mw, action_mwl, etc) in jail.local
# globally (section [DEFAULT]) or per specific section
action = %(action_)s
#
# JAILS
#
# Next jails corresponds to the standard configuration in Fail2ban 0.6 which
# was shipped in Debian. Enable any defined here jail by including
#
# [SECTION_NAME]
# enabled = true
#
# in /etc/fail2ban/jail.local.
#
# Optionally you may override any other parameter (e.g. banaction,
# action, port, logpath, etc) in that section within jail.local
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 4
# Generic filter for pam. Has to be used with action which bans all ports
# such as iptables-allports, shorewall
[pam-generic]
enabled = false
# pam-generic filter can be customized to monitor specific subset of 'tty's
filter = pam-generic
# port actually must be irrelevant but lets leave it all for some possible uses
port = all
banaction = iptables-allports
port = anyport
logpath = /var/log/auth.log
maxretry = 6
[xinetd-fail]
enabled = false
filter = xinetd-fail
port = all
banaction = iptables-multiport-log
logpath = /var/log/daemon.log
maxretry = 2
[ssh-ddos]
enabled = true
port = ssh
filter = sshd-ddos
logpath = /var/log/auth.log
maxretry = 6
#
# HTTP servers
#
[apache]
enabled = false
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6
# default action is now multiport, so apache-multiport jail was left
# for compatibility with previous (<0.7.6-2) releases
[apache-multiport]
enabled = false
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6
[apache-noscript]
enabled = false
port = http,https
filter = apache-noscript
logpath = /var/log/apache*/*error.log
maxretry = 6
[apache-overflows]
enabled = false
port = http,https
filter = apache-overflows
logpath = /var/log/apache*/*error.log
maxretry = 2
[nginx-auth]
enabled = true
filter = nginx-auth
action = iptables-multiport[name=NoAuthFailures, port="http,https"]
logpath = /var/log/nginx*/*error*.log
bantime = 600 # 10 minutes
maxretry = 6
[nginx-login]
enabled = true
filter = nginx-login
action = iptables-multiport[name=NoLoginFailures, port="http,https"]
logpath = /var/log/nginx*/*access*.log
bantime = 600 # 10 minutes
maxretry = 6
[nginx-badbots]
enabled = true
filter = apache-badbots
action = iptables-multiport[name=BadBots, port="http,https"]
logpath = /var/log/nginx*/*access*.log
bantime = 86400 # 1 day
maxretry = 1
[nginx-noscript]
enabled = true
action = iptables-multiport[name=NoScript, port="http,https"]
filter = nginx-noscript
logpath = /var/log/nginx*/*access*.log
maxretry = 6
bantime = 86400 # 1 day
[nginx-proxy]
enabled = true
action = iptables-multiport[name=NoProxy, port="http,https"]
filter = nginx-proxy
logpath = /var/log/nginx*/*access*.log
maxretry = 0
bantime = 86400 # 1 day
#
# FTP servers
#
[vsftpd]
enabled = false
port = ftp,ftp-data,ftps,ftps-data
filter = vsftpd
logpath = /var/log/vsftpd.log
# or overwrite it in jails.local to be
# logpath = /var/log/auth.log
# if you want to rely on PAM failed login attempts
# vsftpd's failregex should match both of those formats
maxretry = 6
[proftpd]
enabled = false
port = ftp,ftp-data,ftps,ftps-data
filter = proftpd
logpath = /var/log/proftpd/proftpd.log
maxretry = 6
[wuftpd]
enabled = false
port = ftp,ftp-data,ftps,ftps-data
filter = wuftpd
logpath = /var/log/auth.log
maxretry = 6
#
# Mail servers
#
[postfix]
enabled = false
port = smtp,ssmtp
filter = postfix
logpath = /var/log/mail.log
[couriersmtp]
enabled = false
port = smtp,ssmtp
filter = couriersmtp
logpath = /var/log/mail.log
#
# Mail servers authenticators: might be used for smtp,ftp,imap servers, so
# all relevant ports get banned
#
[courierauth]
enabled = false
port = smtp,ssmtp,imap2,imap3,imaps,pop3,pop3s
filter = courierlogin
logpath = /var/log/mail.log
[sasl]
enabled = false
port = smtp,ssmtp,imap2,imap3,imaps,pop3,pop3s
filter = sasl
# You might consider monitoring /var/log/warn.log instead
# if you are running postfix. See http://bugs.debian.org/507990
logpath = /var/log/mail.log
# DNS Servers
# These jails block attacks against named (bind9). By default, logging is off
# with bind9 installation. You will need something like this:
#
# logging {
# channel security_file {
# file "/var/log/named/security.log" versions 3 size 30m;
# severity dynamic;
# print-time yes;
# };
# category security {
# security_file;
# };
# };
#
# in your named.conf to provide proper logging
# !!! WARNING !!!
# Since UDP is connectionless protocol, spoofing of IP and immitation
# of illegal actions is way too simple. Thus enabling of this filter
# might provide an easy way for implementing a DoS against a chosen
# victim. See
# http://nion.modprobe.de/blog/archives/690-fail2ban-+-dns-fail.html
# Please DO NOT USE this jail unless you know what you are doing.
#[named-refused-udp]
#
#enabled = false
#port = domain,953
#protocol = udp
#filter = named-refused
#logpath = /var/log/named/security.log
[named-refused-tcp]
enabled = false
port = domain,953
protocol = tcp
filter = named-refused
logpath = /var/log/named/security.log I just noticed an error in the fail2ban log : 2012-04-25 14:57:29,359 fail2ban.actions.action: ERROR iptables -N fail2ban-ssh-ddos | Fail2Ban uses the file /etc/fail2ban/jail.local and look for the [ssh] section, you can change the port there. [ssh]
enabled = true
port = ssh You can change the port value to any positive integer. If it's not working and you want to look further, take a look at /etc/fail2ban/jail.conf , there should be something like: logpath = /var/log/auth.log That is what fail2ban uses to detect false logins. If it is not working correctly, you can try a few things to pinpoint the problem.
Start by checking if it is installed: dpkg -l |grep fail Check if the service is running: /etc/init.d/fail2ban status Check if your SSH-jail is setup: sudo fail2ban-client status Check the log file: fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd.conf Check your date/time: date && tail -2 /var/log/auth.log (You should first get the date, followed by the last lines in auth.log . If you still can't pinpoint the error, add your configuration file to your post. | {
"source": [
"https://serverfault.com/questions/382858",
"https://serverfault.com",
"https://serverfault.com/users/80981/"
]
} |
383,087 | The company I work at goes through computers fairly regularly. When we get a new computer, someone has to manually go through and remove all the bloatware that comes with the computer. Right now, I am compiling a database of known bloatware and their silent uninstall commands, but many programs either don't have or require a silent uninstall script to be created. I'm wondering if there are any methods that I have missed that would silently reduce the windows installation to just the barebones OS and drivers. | It's called a format. Just use a PXE server with some windows images. When a new computer comes in you automatically install a new windows image on it. In my experience, that's the easiest way. | {
"source": [
"https://serverfault.com/questions/383087",
"https://serverfault.com",
"https://serverfault.com/users/118743/"
]
} |
383,177 | I am attempting to start SQL Server Browser through the SQL Server Configuration Manager. However, not only is the "state" of SQL Server Browser stopped , but the options to Start , Stop , Pause , Resume , and Restart are all disabled (both in the right-click context menu, and through the Properties dialog). (Also, in the Properties dialog, I have attempted all 3 options for "built-in account": Local System, Local Service, and Network Service. I have also attempted "This Account" with various options. In all cases, the functionality remains disabled.) I initially thought it might be a port issue. Apparently, SQL Server Browser uses Port 1434. However, using a program called CurrPorts, I find that Port 1434 is not being used by any program. Can anyone help? | The service itself is disabled by default. In SQL Server Configuration Manager, go to Properties -> Service tab -> Start Mode = Automatic. | {
"source": [
"https://serverfault.com/questions/383177",
"https://serverfault.com",
"https://serverfault.com/users/108055/"
]
} |
383,335 | Is it possible on Windows Server 2000/2003/2008 machines to see which user rebooted the server? I have found the shutdown event in the System event log, but it does not show which user initiated the reboot. | In the System event log, filter by event id 1074 , this will show by which process and on behalf of which user a reboot was initiated. This was tested on Windows Server 2008. | {
"source": [
"https://serverfault.com/questions/383335",
"https://serverfault.com",
"https://serverfault.com/users/62286/"
]
} |
383,526 | This is a Canonical Question about selecting the right Apache httpd MPM. I'm a little confused between the different MPMs offered by Apache - 'worker', 'event', 'prefork', etc. What are the major differences between them, and how can I decide which one will be best for a given deployment? | There are a number of MPM modules (Multi-Processing Modules), but by far the most widely used (at least on *nix platforms) are the three main ones: prefork , worker , and event . Essentially, they represent the evolution of the Apache web server, and the different ways that the server has been built to handle HTTP requests within the computing constraints of the time over its long (in software terms) history. prefork mpm_prefork is.. well.. it's compatible with everything. It spins off a number of child processes for serving requests, and the child processes only serve one request at a time. Because it's got the server process sitting there, ready for action, and not needing to deal with thread marshaling, it's actually faster than the more modern threaded MPMs when you're only dealing with a single request at a time - but concurrent requests suffer, since they're made to wait in line until a server process is free. Additionally, attempting to scale up in the count of prefork child processes, you'll easily suck down some serious RAM. It's probably not advisable to use prefork unless you need a module that's not thread safe. Use if: You need modules that break when threads are used, like mod_php . Even then, consider using FastCGI and php-fpm . Don't use if: Your modules won't break in threading. worker mpm_worker uses threading - which is a big help for concurrency. Worker spins off some child processes, which in turn spin off child threads; similar to prefork, some spare threads are kept ready if possible, to service incoming connections. This approach is much kinder on RAM, since the thread count doesn't have a direct bearing on memory use like the server count does in prefork. It also handles concurrency much more easily, since the connections just need to wait for a free thread (which is usually available) instead of a spare server in prefork. Use if: You're on Apache 2.2, or 2.4 and you're running primarily SSL. Don't use if: You really can't go wrong, unless you need prefork for compatibility. However, note that the treads are attached to connections and not requests - which means that a keep-alive connection always keeps a hold of a thread until it's closed (which can be a long time, depending on your configuration). Which is why we have.. event mpm_event is very similar to worker, structurally; it's just been moved from 'experimental' to 'stable' status in Apache 2.4. The big difference is that it uses a dedicated thread to deal with the kept-alive connections, and hands requests down to child threads only when a request has actually been made (allowing those threads to free back up immediately after the request is completed). This is great for concurrency of clients that aren't necessarily all active at a time, but make occasional requests, and when the clients might have a long keep-alive timeout. The exception here is with SSL connections; in that case, it behaves identically to worker (gluing a given connection to a given thread until the connection closes). Use if: You're on Apache 2.4 and like threads, but you don't like having threads waiting for idle connections. Everyone likes threads! Don't use if: You're not on Apache 2.4, or you need prefork for compatibility. 
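As a rough illustration of checking and switching (the a2dismod / a2enmod commands are Debian/Ubuntu-specific; on other distros the MPM is chosen via the package or LoadModule lines instead):
apachectl -V | grep -i mpm     # shows the built-in MPM for static builds
apachectl -M | grep -i mpm     # shows the loaded MPM module on Apache 2.4
sudo a2dismod mpm_prefork
sudo a2enmod mpm_event
sudo service apache2 restart
Remember that switching away from prefork only works if nothing you load (mod_php in particular) requires it.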
In today's world of slowloris , AJAX, and browsers that like to multiplex 6 TCP connections (with keep-alive, of course) to your server, concurrency is an important factor in making your server scale and scale well. Apache's history has tied it down in this regard, and while it's really still not up to par with the likes of nginx or lighttpd in terms of resource usage or scale, it's clear that the development team is working toward building a web server that's still relevant in today's high-request-concurrency world. | {
"source": [
"https://serverfault.com/questions/383526",
"https://serverfault.com",
"https://serverfault.com/users/112405/"
]
} |
383,666 | On the Linux CLI, is there a way to get the number of the week of the month? Maybe there is another way to get this with one simple (like date ) command? Let's say that day 1 to 7 is the first week, day 8 to 14 is the second week, and so on. | The date command can't do this internally, so you need some external arithmetic. echo $((($(date +%-d)-1)/7+1)) Edit: Added a minus sign between the % and the d | {
"source": [
"https://serverfault.com/questions/383666",
"https://serverfault.com",
"https://serverfault.com/users/71114/"
]
} |
384,050 | Is it (or will it be) possible to install IIS 8 on Windows Server 2008 (R2) or it's only meant for Windows Server "8"/2012? | Traditionally, it's always been one version of IIS to one operating system. Windows XP - IIS 5.1 Server 2003 - IIS 6 Server 2008 - IIS 7 Server 2008 R2 - IIS 7.5 I see no sign of this changing in the immediate future. | {
"source": [
"https://serverfault.com/questions/384050",
"https://serverfault.com",
"https://serverfault.com/users/55329/"
]
} |
384,132 | I do not wish to limit the rate of a specific service. My goals is to limit rate based solely on the incoming IP address. For example using a pseudo-rule: john.domain.local (192.168.1.100) can only download from our httpd/ftp servers at "10KB/s" (instead of 1MB/s) How could I rate limit using IPTables based on incoming IP addresses? | IPTables isn't made for this kind of work, where lots and lots of packets need to be analyzed to make these decisions. IPTables is partly the answer though! The real answer to this is the awesome and underused traffic control facilities in Linux. Note that mucking around with this without knowing what is going on may lead to you losing network connectivity to the machine! You have been warned! Assuming eth0 is the outgoing device you will need to create a class-based traffic control queue which will by default output most traffic through the 'fast' queue and put a specific list of people into the 'slow' queue. The beauty of this is you can create a situation whereby you allow lots of outbound traffic for the slow user unless an overriding class wants the bandwidth, but this example does not do this (will always provide 10kbps to the slow users). The queuing system will look something like this: Inbound traffic
+
|
|
v
+------------------+
| Class 1:1 |
|------------------|
| Root (all flows)|
| 100mbit |
+-----+-----+------+
| |
| |
| |
| |
| |
+----------+ | | +----------+
| 1:11 +-----+ +-----+ 1:12 |
|----------| |----------|
| Default | | Slow |
|100mb-80kb| | 80kb |
+----------+ +----------+ To do this, first you'll need to setup the queuing discipline in the kernel. The following will do this for you.. you must run this as one whole script #!/bin/bash
tc qdisc add dev eth0 parent root handle 1: hfsc default 11
tc class add dev eth0 parent 1: classid 1:1 hfsc sc rate 100mbit ul rate 100mbit
tc class add dev eth0 parent 1:1 classid 1:11 hfsc sc rate 99920kbit ul rate 100000kbit
tc class add dev eth0 parent 1:1 classid 1:12 hfsc sc rate 80kbit ul rate 80kbit
tc qdisc add dev eth0 parent 1:11 handle 11:1 pfifo
tc qdisc add dev eth0 parent 1:12 handle 12:1 pfifo The "default 11" is important as it tells the kernel what to do with traffic not classified. Once this is done, you can then setup an iptables rule to classify packets that match a certain criteria. If you plan on putting lots and lots of people into this slow rule an ipset rule is more appropriate (which should be available on rhel6 I believe). So, create an ipset database to do the matching against... ipset create slowips hash:ip,port Then create the iptables rule to do the match.. iptables -t mangle -I OUTPUT -m set --match-set slowips dst,src -j CLASSIFY --set-class 1:12 This instructs the kernel that if you match the destination IP with the source port from the set, classify it into the slow queue you setup with traffic control. Now, finally whenever you want to slow an IP down you can use the ipset command to add the ip to the set such as this: ipset add slowips 192.168.1.1,80
ipset add slowips 192.168.1.1,21
... You can test that it works using the command "tc -s class show dev eth0" and you will see stats in there indicating packets being redirected to the slow queue. Note the only real downside to this is making it survive reboots. I don't think there are any init scripts available to create the ipsets from dumps on reboot (and they also must be created before the iptables rules) and I'm certain there are no init scripts to set the traffic control rules up again on reboot. If you're not bothered, you can just recreate the whole thing by invoking a script from rc.local. | {
"source": [
"https://serverfault.com/questions/384132",
"https://serverfault.com",
"https://serverfault.com/users/119139/"
]
} |
384,237 | I am using Windows 7 Professional. When I am trying to start DefaultAppPool in IIS 7.0, I am getting error - Service WAS was not found on computer '.'. Is there any changes in setting need to be done? | Well, start off by checking if it is installed. Control Panel > Programs > Programs and Features > Windows Process Activation Service | {
"source": [
"https://serverfault.com/questions/384237",
"https://serverfault.com",
"https://serverfault.com/users/119183/"
]
} |
384,397 | We just took delivery of a new Avaya 2500 48-port switch , that has 24 PoE ports. The problem is that all the PoE ports are on the left-hand size of the switch, and our PoE device cables can only reach the right-hand side of the switch (we're upgrading from an old switch to a new one, and the old one had them on the right-hand side. This is the problem with doing neat cabling). Can I just mount the switch upside down? This would move the left-hand ports to the right-hand side and problem solved. My largest concern is that airflow or cooling might not work, but I can't see any visible breathing holes in the bottom or top of the switch which leads me to believe it will be OK, but better safe than sorry. | ˙sɯǝlqoɹd ʎuɐ pɐɥ ɹǝʌǝu ǝʌ,I puɐ uʍop ǝpısdn pǝʇunoɯ ǝɹɐ sǝɥɔʇıʍs ʞɹoʍʇǝu ʎɯ ɟo ll∀ (Seriously: you should have no problem mounting a switch upside-down - just make sure you don't create any ventilation issues) | {
"source": [
"https://serverfault.com/questions/384397",
"https://serverfault.com",
"https://serverfault.com/users/7709/"
]
} |
384,572 | I want to incorporate into a piece of software the ability to look up a manufacturer based on a mac address. By googling "mac address lookup" and similar, I have noticed several websites that make this correlation which suggests this data source is available somewhere. Where can I find this data source that correlates a mac address (input) with a manufacturer (output)? | The first half (24 bits) of your mac-address is called an OUI (Organizationally Unique Identifier) , and identifies the company. The list is available on ieee.org: http://standards.ieee.org/develop/regauth/oui/oui.txt They are formatted like this: 00-03-93 (hex) Apple Computer, Inc.
000393 (base 16) Apple Computer, Inc.
20650 Valley Green Dr.
Cupertino CA 95014
UNITED STATES The gaps between sequential hex numbers are probably privately registered OUIs. There is no open list for those, but I've never encountered a MAC address in such ranges.
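A small sketch of doing the lookup locally against a downloaded copy of that file (the MAC address and the oui.txt path are illustrative):
mac="00:03:93:12:34:56"
prefix=$(echo "$mac" | tr -d ':-' | tr 'a-f' 'A-F' | cut -c1-6)   # -> 000393
grep "^$prefix" oui.txt                                           # matches the (base 16) line
Because assignments change over time, refresh the local copy of oui.txt periodically. | {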
"source": [
"https://serverfault.com/questions/384572",
"https://serverfault.com",
"https://serverfault.com/users/97543/"
]
} |
384,686 | This is a canonical question about capacity planning Related: How do you do load testing and capacity planning for web sites? How do you do load testing and capacity planning for databases? I have a question regarding capacity planning. Can the Server Fault community please help with the following: What kind of server do I need to handle some number of users? How many users can a server with some specifications handle? Will some server configuration be fast enough for my use case ? I'm building a social networking site: what kind of hardware do I need? How much bandwidth do I need for some project ? How much bandwidth will some number of users use in some application ? | The Server Fault community generally can't help you with capacity planning - the best answer we can offer is "Benchmark your code on hardware similar to what you'll be using in production, identify any bottlenecks, then determine how much of a workload your current hardware can handle, and/or how much hardware horsepower you need to handle your target workload" . There are a number of factors at play in capacity planning which we can't adequately assess on a Question and Answer site: The requirements of your particular code/software External resources (databases, other software/sites/servers) Your workload (peak, average, queueing) The business value of performance (cost/benefit analysis) The performance expectations of your users Any service level agreements/contractual obligations you may have Doing a proper analysis on these factors, and others, is beyond the scope of a simple question-and-answer site: They require detailed knowledge about your environment and requirements which only your team (or an adequately-compensated consultant) can gather efficiently. Some Capacity Planning Axioms RAM is cheap If you expect your application to use a lot of RAM you should put in as much RAM as you can afford / fit. Disk is cheap If you expect to use a lot of disk you should buy big drives - lots of them. SAN/NAS storage is less cheap, and should also usually be spec'd large rather than small to avoid costly upgrades later. Workloads grow over time Assume your resource needs will increase. Bear in mind that the increase may not be symmetrical (CPU and RAM may rise faster than disk), and it may not be linear. Electricity is expensive Even though RAM and disks have decreased in price considerably, the cost of electricity has gone up steadily. All those extra disks and RAM, not to mention CPU power, will increase your electricity bill (or the bill you pay to your provider). Plan accordingly. | {
"source": [
"https://serverfault.com/questions/384686",
"https://serverfault.com",
"https://serverfault.com/users/32986/"
]
} |
385,226 | How can I tell the version of a package after doing a yum search? e.g. yum search rabbitmq returns rabbitmq-server.noarch : The RabbitMQ server I need to know the version of this server. | You can find the version number of a package in your repositories with the yum info command. # yum info rabbitmq-server
Available Packages
Name : rabbitmq-server
Arch : noarch
Version : 2.6.1
Release : 1.fc16
Size : 1.1 M
Repo : updates
Committer : Peter Lemenkov <[email protected]>
Committime : Tue Nov 8 13:00:00 2011
Buildtime : Tue Nov 8 10:31:03 2011
Summary : The RabbitMQ server
URL : http://www.rabbitmq.com/
License : MPLv1.1
Description : RabbitMQ is an implementation of AMQP, the emerging standard for high
: performance enterprise messaging. The RabbitMQ server is a robust and
: scalable implementation of an AMQP broker. To find the version numbers of installed packages, you can use rpm with the -q option. # rpm -q kernel
kernel-3.3.1-5.fc16.x86_64
kernel-3.3.2-1.fc16.x86_64
kernel-3.3.2-6.fc16.x86_64 | {
"source": [
"https://serverfault.com/questions/385226",
"https://serverfault.com",
"https://serverfault.com/users/97367/"
]
} |
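If a script needs just the version string (for instance to compare against a required minimum), a small sketch along these lines can build on the rpm -q command shown above; the package name is illustrative:

#!/usr/bin/env bash
# Print only the version-release of an installed package, suitable for scripting.
pkg="${1:-rabbitmq-server}"
if rpm -q "$pkg" >/dev/null 2>&1; then
    # --queryformat limits the output to the fields we care about.
    rpm -q --queryformat '%{VERSION}-%{RELEASE}\n' "$pkg"
else
    echo "$pkg is not installed; check the repositories with: yum info $pkg" >&2
    exit 1
fi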
385,851 | When trying to remove the user, it returns "user is currently logged in". I already killed the user's processes using pkill -KILL -u usernameHere and several other commands, but it does not help. How can I remove this user? Running CentOS 6. | su to the user with su - username and run kill -9 -1 as that user. Exit the shell and try userdel -r username again. Or you can check for processes owned by the user with lsof -u username and kill the relevant PIDs, or use pkill -u username / pkill -u uid. | {
"source": [
"https://serverfault.com/questions/385851",
"https://serverfault.com",
"https://serverfault.com/users/119217/"
]
} |
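A small sketch of that cleanup sequence for scripting, assuming root privileges and bash; the username is a placeholder:

#!/usr/bin/env bash
# Forcefully terminate a user's remaining processes, then remove the account and home directory.
set -u
user="someuser"   # placeholder
# Kill every process owned by the user; ignore the exit status if none are left.
pkill -KILL -u "$user" || true
# Give the kernel a moment to reap the processes before retrying the removal.
sleep 2
if pgrep -u "$user" >/dev/null; then
    echo "Processes still running for $user:" >&2
    pgrep -l -u "$user" >&2
    exit 1
fi
userdel -r "$user"

If userdel still reports the user as logged in, checking lsof -u as the answer suggests will usually show what is holding the session open.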
385,893 | I have 2 domains hosted with different hosts. I need to redirect Domain A to Domain B. Unfortunately I can't do a 301 redirect from Host A, but can only modify/add DNS entries (A-Records and CNAMEs) at Host A. Surely it is possible to redirect www.DomainA.com to www.DomainB.com using only A-records and CNAMEs? At present, the DNS entries are: DomainA.com. 3600 IN SOA ns1.HostA.net.
www 3600 IN CNAME www.DomainB.com.
DomainA.com. 3600 IN NS ns1.HostA.net.
DomainA.com. 3600 IN NS ns2.HostA.net.
DomainA.com. 3600 IN NS ns3.HostA.net. I want to redirect DomainA.com -> DomainB.com
*.DomainA.com -> *.DomainB.com I've tried the suggestion from this other post but it didn't work. How can I achieve this only with A-Records and CNAMEs please? Thank you for your advice. Prembo. | So you are not looking at redirection as such (that happens at the application level, i.e. on Apache/Nginx/wherever) but rather at DNS resolution. The host on which DomainA is hosted will, or should, never be hit, since based on your description you want the DNS requests to resolve to the IPs of DomainB. Unless I'm missing something in your request? As Shane pointed out, DNS is not capable of HTTP redirection - that's an application/webserver duty. You could make DomainA and DomainB resolve to the same IP in DNS and all would work. But if you're looking to do this in a per-URL/per-path way then this is not possible - DNS is not capable of that - it's a simple DNS->IP service; what happens with the actual URL is the webserver's task. After the comment below, what I'd do is point all DNS records for DomainA to the same IP(s) as DomainB - this way HTTP requests will hit Host B, and then it's just a simple matter of: creating a particular Apache Name Based Virtual Host, which will serve files from its own DocumentRoot, or creating a permanent redirect on Apache like the VirtualHost block below. This will rewrite anything coming to DomainB to DomainA, which can be hosted on the same server or somewhere else. I appreciate that the second option is probably an overhead and not necessary if you can/are allowed to create Name Based Virtual Hosts on Apache. <VirtualHost *:80>
ServerName DomainB
Redirect permanent / http://DomainA/
</VirtualHost> I'd go with option 1: point all DNS records of DomainA at the same IP(s) as DomainB points to and create the corresponding Name Based VirtualHosts on Apache. | {
"source": [
"https://serverfault.com/questions/385893",
"https://serverfault.com",
"https://serverfault.com/users/96386/"
]
} |
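A quick way to check the DNS side of that setup, assuming dig is available; the domain names are the placeholders from the question:

#!/usr/bin/env bash
# Confirm that both names resolve to the same address before relying on the
# name-based virtual hosts to handle the redirect.
# The grep filters out any CNAME target line that dig +short may print before the A record.
a_ip=$(dig +short A www.DomainA.com | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
b_ip=$(dig +short A www.DomainB.com | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
echo "www.DomainA.com -> ${a_ip:-no answer}"
echo "www.DomainB.com -> ${b_ip:-no answer}"
if [ -n "$a_ip" ] && [ "$a_ip" = "$b_ip" ]; then
    echo "Both names resolve to the same IP, so the web server will see both Host headers."
else
    echo "The names resolve differently; the DNS records still need to be aligned." >&2
fi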