source_id | question | response | metadata
---|---|---|---|
250,476 | I have a signup page on a subdomain like: https://signup.example.com It should only be accessible via HTTPS, but I'm worried people might somehow stumble upon it via HTTP and get a 404. My http/server block in nginx looks like this: http {
server {
listen 443;
server_name signup.example.com;
ssl on;
ssl_certificate /path/to/my/cert;
ssl_certificate_key /path/to/my/key;
ssl_session_timeout 30m;
location / {
root /path/to/my/rails/app/public;
index index.html;
passenger_enabled on;
}
}
} What can I add so that people who go to http://signup.example.com get redirected to https://signup.example.com ? (FYI I know there are Rails plugins that can force SSL but was hoping to avoid that) | According to nginx pitfalls , it's slightly better to omit the unnecessary capture, using $request_uri instead. In that case, append a question mark to prevent nginx from doubling any query args. server {
listen 80;
server_name signup.example.com;
rewrite ^ https://$server_name$request_uri? permanent;
} | {
"source": [
"https://serverfault.com/questions/250476",
"https://serverfault.com",
"https://serverfault.com/users/10597/"
]
} |
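For the redirect above, a minimal sketch of an equivalent port-80 block using nginx's return directive instead of rewrite (the host name is assumed to match the question's signup.example.com; this is not part of the original answer):
server {
listen 80;
server_name signup.example.com;
return 301 https://signup.example.com$request_uri;
}
return skips regex processing entirely, and $request_uri already includes the query string, so no trailing question mark is needed.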
250,481 | Are there any utility find commands that you can download and use in DOS which match the UNIX find command? | According to nginx pitfalls , it's slightly better to omit the unnecessary capture, using $request_uri instead. In that case, append a question mark to prevent nginx from doubling any query args. server {
listen 80;
server_name signup.mysite.com;
rewrite ^ https://$server_name$request_uri? permanent;
} | {
"source": [
"https://serverfault.com/questions/250481",
"https://serverfault.com",
"https://serverfault.com/users/3385/"
]
} |
250,490 | I'd like to know which hosts make a specific DNS query, and at what times. Is there any way to get logs this specific on Bind 9? For example, I might want to log all A queries for xyzzy.net . | According to nginx pitfalls , it's slightly better to omit the unnecessary capture, using $request_uri instead. In that case, append a question mark to prevent nginx from doubling any query args. server {
listen 80;
server_name signup.mysite.com;
rewrite ^ https://$server_name$request_uri? permanent;
} | {
"source": [
"https://serverfault.com/questions/250490",
"https://serverfault.com",
"https://serverfault.com/users/9060/"
]
} |
250,839 | How do you delete all partitions on a device from the command line on Linux (specifically Ubuntu)? I tried looking at fdisk, but it presents an interactive prompt. I'm looking for a single command, which I can give a device path (e.g. /dev/sda) and it'll delete the ext4, linux-swap, and whatever other partitions it finds. Essentially, this would be the same thing as if I were to open GParted, and manually select and delete all partitions. This seems fairly simple, but unfortunately, I haven't been able to find anything through Google. | Would this suffice? dd if=/dev/zero of=/dev/sda bs=512 count=1 conv=notrunc | {
"source": [
"https://serverfault.com/questions/250839",
"https://serverfault.com",
"https://serverfault.com/users/41252/"
]
} |
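A hedged follow-up to the dd suggestion above: writing a single 512-byte sector only clears the legacy MBR, so a GPT disk keeps its backup header at the end of the device. Assuming util-linux's wipefs (or gdisk's sgdisk) is available and /dev/sda really is the disk to erase, a fuller wipe of the partition metadata looks like this:
wipefs --all /dev/sda
# or, to destroy GPT and MBR structures specifically:
sgdisk --zap-all /dev/sda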
251,347 | I've used PuTTY for years, but alas, my saved session list has grown to the point that the simple alphabetical list is a bit cumbersome. What I'd really like to see is a nested/hierarchical style of saved sessions so that I can say create: ACME switch01 switch02 router ... Rand mailserver webserver ... Any suggestions? | If you're looking for something similar to a remote desktop connection manager but for SSH connections, you can use the PuTTY session manager . | {
"source": [
"https://serverfault.com/questions/251347",
"https://serverfault.com",
"https://serverfault.com/users/13008/"
]
} |
251,427 | Can someone please define what exactly is the "Stack". I know its an industry term but its very vague. I am referring to Infrastructure terminology not "Stack" in terms of memory allocation. | It refers to the technologies used that make up your service: your web application language/framework depends on (is stacked on) your web server, which talks to (stacks on) a specific database flavor, and these run on (stack with) specific operating systems. So you might have a stack like this: P PHP M MySQL A Apache L Linux to make up the LAMP stack, or like this: C C# S Sql Server I IIS W Windows to make up a WISC (windows) stack. Other common "stacks" are WIMP (Windows, IIS, MySql, PHP) and WAMP (Windows, Apache, MySQL, PHP). And those are just a few of the simple ones. It doesn't even begin to take into account Oracle, Ruby, Java, Python, and numerous other options that could sit at various points. You could have a MySql running on linux serving as the database for a web app running in Windows, or a web service tier using a completely different technology set from your application tier (which might even be a desktop app). The important thing is we often talk about whether your stack is windows-based or linux-based, and the reason it's important is because software developers tend to build products with a specific stack in mind, or have experience working with one stack (or family of stacks) but not another. As long as you match up to their stack, the product should work as expected. | {
"source": [
"https://serverfault.com/questions/251427",
"https://serverfault.com",
"https://serverfault.com/users/75720/"
]
} |
251,815 | I have generally seen that DHCP lease times are quite long (a day plus) on most defaults. I have a client that seems to have the following problem. They have a DHCP server in a router that is near-saturation (say in a normal work day 80-85% of the potential IPs are used). Occasionally they restart their router. When that happens it seems that the the router loses its table of assigned IPs, so it assigns IPs anew (of course). The problem is that quite often there is a client on the LAN which has the IP already and is going to hold it for a day (the current timeout length), causing an IP conflict and connectivity issues for those two machines. The obvious solution is to make a very short lease time, but since I'm only a hobbyist when it comes to networking, there may be more to DHCP that I don't understand. Is the above a reasonable evaluation of the situation (at least with lower-end equipment) and does a lower lease time (say a half-hour) make sense in this case? | You should consider replacing the DHCP server, as it is obviously broken. DHCP servers should keep lease information between restarts and preferably also probe addresses before releasing them into the pool to avoid address duplication. If that is not an option you can drop the lease length. As long as the DHCP server can handle the churn it should work, but short leases will cause a small increase the amount of broadcast traffic on your network. Short leases are primarily a problem when you have clients disconnecting and reconnecting a lot, for example in WiFi networks. Very short leases (less than 1 minute) can cause weird problems with some DHCP clients that have time-outs longer than the lease. | {
"source": [
"https://serverfault.com/questions/251815",
"https://serverfault.com",
"https://serverfault.com/users/8787/"
]
} |
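If the DHCP role can be moved off the router, or its firmware exposes lease settings, shortening the lease is only a couple of directives. A minimal sketch assuming an ISC dhcpd server; the actual router UI will differ:
# /etc/dhcp/dhcpd.conf
default-lease-time 1800; # 30 minutes
max-lease-time 3600; # clients may not request more than 1 hour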
252,137 | On Windows, you can set what should happen if/when a service fails. Is there a standard way of achieving the same thing on Linux (CentOS in particular)? A bigger part of my question is: how do you handle sockets that have been left open - for example in TIME_WAIT, FIN_WAIT1, etc states. At the moment if the service I am developing crashes, I have to wait for the sockets to clear or change the listen port before I can then manually restart it. Thanks for your help. | monit is a great way to monitor and restart services when they fail--and you'll probably end up using this for other essential services (such as Apache). There's a nice article on nixCraft detailing how to use this for services specifically, although monit itself has many more functions beyond this. As for the socket aspect, @galraen answered this spot on. | {
"source": [
"https://serverfault.com/questions/252137",
"https://serverfault.com",
"https://serverfault.com/users/74394/"
]
} |
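A minimal monit sketch for the kind of service check described above (the daemon name, pid file and port are hypothetical placeholders):
check process mydaemon with pidfile /var/run/mydaemon.pid
start program = "/etc/init.d/mydaemon start"
stop program = "/etc/init.d/mydaemon stop"
if failed port 8080 type tcp then restart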
252,150 | I have a Linux VPS (Virtuozzo) server and I need to set up port forwarding, but my hosting provider does not allow iptables-nat kernel modules, so iptables -t nat is not working. I'm looking for other ways to do it. I know I can forward ports using openssh, but I need to forward 20+ different ports, TCP and UDP, so this is not an option. Is there any software for Linux that can do port forwarding? | Use the tool called "socat"; it is a great tool for such things and it is already packaged in many Linux distributions. Read about it here: http://www.dest-unreach.org/socat/doc/README Port forwarding example with socat: socat TCP4-LISTEN:80,fork TCP4:www.yourdomain.org:8080 This redirects all TCP connections on port 80 to www.yourdomain.org port 8080 TCP. | {
"source": [
"https://serverfault.com/questions/252150",
"https://serverfault.com",
"https://serverfault.com/users/69629/"
]
} |
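Since the question also needs UDP forwarding, a hedged companion example (the port and target host are placeholders):
socat UDP4-LISTEN:5000,fork UDP4:target.example.org:5000
Each forwarded port needs its own socat process, so 20+ ports would typically be wrapped in a small init script or supervisor configuration.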
252,333 | is there a command to see what packages are available from a certain ppa repository? | Simple: grep -h -P -o "^Package: \K.*" /var/lib/apt/lists/ppa.launchpad.net_*_Packages | sort -u Or more flexible: grep-dctrl -sPackage . /var/lib/apt/lists/ppa.launchpad.net_*_Packages For fancier querying, use apt-cache policy and aptitude as described here : aptitude search '~O LP-PPA-gstreamer-developers' | {
"source": [
"https://serverfault.com/questions/252333",
"https://serverfault.com",
"https://serverfault.com/users/46254/"
]
} |
252,723 | I have a process (dbus-daemon) which has many open connections over UNIX sockets. One of these connections is fd #36: =$ ps uw -p 23284
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
depesz 23284 0.0 0.0 24680 1772 ? Ss 15:25 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
=$ ls -l /proc/23284/fd/36
lrwx------ 1 depesz depesz 64 2011-03-28 15:32 /proc/23284/fd/36 -> socket:[1013410]
=$ netstat -nxp | grep 1013410
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
=$ netstat -nxp | grep dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1013953 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1013825 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1013726 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1013471 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1012325 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1012302 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1012289 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1012151 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011957 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011937 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011900 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011775 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011771 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011769 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011766 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011663 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011635 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011627 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011540 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011480 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011349 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011312 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011284 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011250 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011231 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011155 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011061 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011049 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011035 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1011013 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1010961 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD
unix 3 [ ] STREAM CONNECTED 1010945 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD Based on the number of connections, I assume that dbus-daemon is actually the server. Which is OK. But how can I find which process is connected to it - using the connection that is the 36th file handle in dbus-launcher? Tried lsof and even greps on /proc/net/unix but I can't figure out a way to find the client process. | Quite recently I stumbled upon a similar problem. I was shocked to find out that there are cases when this might not be possible. I dug up a comment from the creator of lsof (Vic Abell) where he pointed out that this depends heavily on the unix socket implementation. Sometimes so-called "endpoint" information for a socket is available and sometimes not. Unfortunately it is impossible in Linux, as he points out. On Linux, for example, where lsof must use /proc/net/unix, all UNIX domain sockets have a bound path, but no endpoint information. Often there is no bound path. That often makes it impossible to determine the other endpoint, but it is a result of the Linux /proc file system implementation. If you look at /proc/net/unix you can see for yourself that (at least on my system) he is absolutely right. I'm still shocked, because I find such a feature essential while tracking server problems. | {
"source": [
"https://serverfault.com/questions/252723",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
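As a hedged aside that is not part of the original answer: on kernels with UNIX-socket diagnostics (roughly 3.3 and later), the ss tool from iproute2 lists UNIX sockets with their owning processes and peer inodes, which can make both endpoints identifiable:
ss -xp | grep 1013410
The Peer column holds the inode of the other end; searching the same output for that inode as a local address shows the client process.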
252,921 | How do I remove empty/blank (including spaces only) lines in a file in Unix/Linux using the command line? contents of file.txt Line:Text
1:<blank>
2:AAA
3:<blank>
4:BBB
5:<blank>
6:<space><space><space>CCC
7:<space><space>
8:DDD output desired 1:AAA
2:BBB
3:<space><space><space>CCC
4:DDD | This sed line should do the trick: sed -i '/^$/d' file.txt The -i means it will edit the file in-place. | {
"source": [
"https://serverfault.com/questions/252921",
"https://serverfault.com",
"https://serverfault.com/users/4451/"
]
} |
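Note that the sample input above also contains lines holding only spaces (line 7 of the question), which /^$/ will not match. A sketch that removes whitespace-only lines as well:
sed -i '/^[[:space:]]*$/d' file.txt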
253,074 | I'm using nginx reverse proxy cache with gzip enabled. However, I got some problems from Android applications HTTP-requests to my Rails JSON web service. It seems when I turn off reverse proxy cache, it works ok because the response header comes without gzip. Therefore, I think the problem is caused by gzip. What is the most appropriate level of gzip compression? gzip on;
gzip_http_version 1.0;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/css text/javascript application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss; | I tested this under nginx 1.3.9 with two files, and these were the results I got for the various levels: text/html - phpinfo(): 0 55.38 KiB (100.00% of original size)
1 11.22 KiB ( 20.26% of original size)
2 10.89 KiB ( 19.66% of original size)
3 10.60 KiB ( 19.14% of original size)
4 10.17 KiB ( 18.36% of original size)
5 9.79 KiB ( 17.68% of original size)
6 9.62 KiB ( 17.37% of original size)
7 9.50 KiB ( 17.15% of original size)
8 9.45 KiB ( 17.06% of original size)
9 9.44 KiB ( 17.05% of original size) application/x-javascript - jQuery 1.8.3 (Uncompressed): 0 261.46 KiB (100.00% of original size)
1 95.01 KiB ( 36.34% of original size)
2 90.60 KiB ( 34.65% of original size)
3 87.16 KiB ( 33.36% of original size)
4 81.89 KiB ( 31.32% of original size)
5 79.33 KiB ( 30.34% of original size)
6 78.04 KiB ( 29.85% of original size)
7 77.85 KiB ( 29.78% of original size)
8 77.74 KiB ( 29.73% of original size)
9 77.75 KiB ( 29.74% of original size) I'm not sure how representative this is but it should serve as an example. Also, I haven't taken the CPU usage into account but from these results the ideal compression level seems to be between 4 and 6 . Additionally, if you use the gzip_static module, you may want to pre-compress your files (in PHP): function gzip_static($path)
{
if ((extension_loaded('zlib') === true) && (is_file($path) === true))
{
$levels = array();
$content = file_get_contents($path);
foreach (range(1, 9) as $level)
{
$levels[$level] = strlen(gzencode($content, $level));
}
if ((count($levels = array_filter($levels)) > 0) && (min($levels) < strlen($content)))
{
if (file_put_contents($path . '.gz', gzencode($content, array_search(min($levels), $levels)), LOCK_EX) !== false)
{
return touch($path . '.gz', filemtime($path), fileatime($path));
}
}
}
return false;
} This allows you to get the best possible compression without sacrificing the CPU on every request. | {
"source": [
"https://serverfault.com/questions/253074",
"https://serverfault.com",
"https://serverfault.com/users/51552/"
]
} |
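To make nginx actually serve the pre-compressed .gz files produced by the PHP helper above, the gzip_static module has to be enabled; a minimal sketch, assuming nginx was built with --with-http_gzip_static_module:
# inside the http, server, or location block
gzip_static on;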
253,077 | I am a little new to PHP and EC2 so bear with me. I need to setup two instances on EC2 running Apache and PHP, that in turn communicate with a third instance running MySQL server. The two Web instances will sit behind a load balancer. 1) How do I synchronize the files (php, conf, etc) in the different web server instances? What is the standard practice for doing such synchronization in a Web server farm and can this be simplified by running on EC2? 2) Most Web application architectures contain a presentation layer, business logic layer and the storage layer. Is there such an application server for PHP where I can continue to use Apache as a front end web server? (for example, running EJBs on JBoss and using Apache as the front end web server). Does the Apache/PHP solution scale well enough? Cheers Brian | I tested this under nginx 1.3.9 with two files, and these were the results I got for the various levels: text/html - phpinfo(): 0 55.38 KiB (100.00% of original size)
1 11.22 KiB ( 20.26% of original size)
2 10.89 KiB ( 19.66% of original size)
3 10.60 KiB ( 19.14% of original size)
4 10.17 KiB ( 18.36% of original size)
5 9.79 KiB ( 17.68% of original size)
6 9.62 KiB ( 17.37% of original size)
7 9.50 KiB ( 17.15% of original size)
8 9.45 KiB ( 17.06% of original size)
9 9.44 KiB ( 17.05% of original size) application/x-javascript - jQuery 1.8.3 (Uncompressed): 0 261.46 KiB (100.00% of original size)
1 95.01 KiB ( 36.34% of original size)
2 90.60 KiB ( 34.65% of original size)
3 87.16 KiB ( 33.36% of original size)
4 81.89 KiB ( 31.32% of original size)
5 79.33 KiB ( 30.34% of original size)
6 78.04 KiB ( 29.85% of original size)
7 77.85 KiB ( 29.78% of original size)
8 77.74 KiB ( 29.73% of original size)
9 77.75 KiB ( 29.74% of original size) I'm not sure how representative this is but it should serve as an example. Also, I haven't taken the CPU usage into account but from these results the ideal compression level seems to be between 4 and 6 . Additionally, if you use the gzip_static module, you may want to pre-compress your files (in PHP): function gzip_static($path)
{
if ((extension_loaded('zlib') === true) && (is_file($path) === true))
{
$levels = array();
$content = file_get_contents($path);
foreach (range(1, 9) as $level)
{
$levels[$level] = strlen(gzencode($content, $level));
}
if ((count($levels = array_filter($levels)) > 0) && (min($levels) < strlen($content)))
{
if (file_put_contents($path . '.gz', gzencode($content, array_search(min($levels), $levels)), LOCK_EX) !== false)
{
return touch($path . '.gz', filemtime($path), fileatime($path));
}
}
}
return false;
} This allows you to get the best possible compression without sacrificing the CPU on every request. | {
"source": [
"https://serverfault.com/questions/253077",
"https://serverfault.com",
"https://serverfault.com/users/61968/"
]
} |
253,313 | When I try to ssh to another box, I get this strange error $ ssh hostname
Bad owner or permissions on ~/.ssh/config But I made sure that I own and have rw permissions on the file: ls -la ~/.ssh/
total 40K
drwx------ 2 robert robert 4.0K Mar 29 11:04 ./
drwx------ 7 robert robert 4.0K Mar 29 11:04 ../
-rw-r--r-- 1 robert robert 2.0K Mar 17 20:47 authorized_keys
-rw-rw-r-- 1 robert robert 31 Mar 29 11:04 config
-rw------- 1 robert robert 1.7K Aug 4 2010 id_rsa
-rw-r--r-- 1 robert robert 406 Aug 4 2010 id_rsa.pub
-rw-r--r-- 1 robert robert 6.1K Mar 29 11:03 known_hosts | I needed to have rw for user only permissions on config. This fixed it. chmod 600 ~/.ssh/config As others have noted below, it could be the file owner. (upvote them!) chown $USER ~/.ssh/config If your whole folder has invalid permissions, here's a table of possible permissions:
.ssh directory: 0700 (drwx------)
private keys (ex: id_rsa): 0600 (-rw-------)
config: 0600 (-rw-------)
public keys (*.pub, ex: id_rsa.pub): 0644 (-rw-r--r--)
authorized_keys: 0644 (-rw-r--r--)
known_hosts: 0644 (-rw-r--r--)
Sources: openssh check-perm.c, openssh readconf.c, openssh ssh_user_config, fix_authorized_keys_perms | {
"source": [
"https://serverfault.com/questions/253313",
"https://serverfault.com",
"https://serverfault.com/users/11256/"
]
} |
253,464 | I want to be able to log in via ssh with a password and not the key file. Yeah, I know it's totally insecure, but at this point in the config I was turning variables off and on left and right trying to get this to work. # $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.
Port 22
#Protocol 2,1
Protocol 2
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
# HostKey for protocol version 1
#HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 1h
#ServerKeyBits 768
# Logging
# obsoletes QuietMode and FascistLogging
#SyslogFacility AUTH
SyslogFacility AUTHPRIV
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#RSAAuthentication yes
#PubkeyAuthentication yes
#AuthorizedKeysFile .ssh/authorized_keys
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#RhostsRSAAuthentication no
# similar for protocol version 2
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# RhostsRSAAuthentication and HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
PermitEmptyPasswords yes
# Change to no to disable s/key passwords
ChallengeResponseAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
GSSAPIAuthentication no
#GSSAPIAuthentication yes
#GSSAPICleanupCredentials yes
#GSSAPICleanupCredentials yes
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication mechanism.
# Depending on your PAM configuration, this may bypass the setting of
# PasswordAuthentication, PermitEmptyPasswords, and
# "PermitRootLogin without-password". If you just want the PAM account and
# session checks to run without PAM authentication, then enable this but set
# ChallengeResponseAuthentication=no
#UsePAM no
UsePAM no
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL
#AllowTcpForwarding yes
#GatewayPorts no
#X11Forwarding no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PrintMotd yes
#PrintLastLog yes
#TCPKeepAlive yes
#UseLogin no
#UsePrivilegeSeparation yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#ShowPatchLevel no
#UseDNS yes
#PidFile /var/run/sshd.pid
#MaxStartups 10
#PermitTunnel no
#ChrootDirectory none
# no default banner path
#Banner /some/path
# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server | You need to change the config file for the ssh server and restart the server. alter /etc/ssh/sshd_config: PasswordAuthentication yes then restart the ssh server: /etc/init.d/sshd restart | {
"source": [
"https://serverfault.com/questions/253464",
"https://serverfault.com",
"https://serverfault.com/users/32265/"
]
} |
254,627 | I have a .cer certificate and I would like to convert it to the .pem format. If I remember correctly, I used to be able to convert them by exporting the .cer in Base64, then renaming the file to .pem . How do I convert a .cer certificate to .pem ? | Convert a DER file (.crt .cer .der) to PEM openssl x509 -inform der -in certificate.cer -out certificate.pem Source | {
"source": [
"https://serverfault.com/questions/254627",
"https://serverfault.com",
"https://serverfault.com/users/47271/"
]
} |
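If the .cer file is already Base64/PEM encoded (the export-and-rename case mentioned in the question), a hedged companion command simply rewrites it:
openssl x509 -inform pem -in certificate.cer -out certificate.pem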
254,813 | This is something that I've never really been able to answer as well as I like: What is the real advantage of using Kerberos authentication in IIS instead of NTLM? I've seen a lot of people really struggle to get it set up (myself included) and I haven't been able to come up with a good reason for using it. There must be some pretty significant advantages though, otherwise it wouldn't be worth all the trouble to set it up, right? | From a Windows perspective only: NTLM works with both external (non-domain) and internal clients works with both domain accounts and local user accounts on the IIS box using domain accounts, only the server requires direct connectivity to a domain controller (DC) using local accounts, you don't need connectivity anywhere :) you don't need to be logged on as the user in question to use a credential Aside : it's not that uncommon for a DC to be overwhelmed by a busy NTLM server (IIS, Exchange, TMG/ISA, etc) with the volume of NTLM requests (to mitigate: MaxConcurrentAPI , AuthPersistSingleRequest (false) , faster DCs.) ( Self-referential bonus .) requires client connectivity only to the IIS server (on the site port, nothing else. i.e. Everything happens over HTTP (or HTTPS).) can traverse any proxy supporting HTTP Keep-Alive s you may be able to use TLS/SSL to work around others requires multiple round-trips to authenticate, with small packets (log pattern is 401.2, 401.1, 200 with username) cannot be used in scenarios where double-hop authentication is required i.e. the user's credentials are to be forwarded to a service on another computer supports older clients (< Win2000) Is susceptible to LM Auth Level discrepancies (mismatched lmcompatibilitylevel ) is used as a fallback by the Negotiate package if Kerb fails. ( not "if access is denied with Kerb", Kerb must break for NTLM to be used - usually this looks like not getting a ticket. If the client gets a ticket and it's not perfect, that doesn't cause a fallback.) Kerberos works with currently domain-joined clients only requires client connectivity to an AD DC (tcp/udp 88) AND the server (tickets are retrieved by the client from the DC via the Kerb port, and then provided to the server using HTTP) might be able to traverse a proxy, but see DC point above: you still need to be on the same network as an active DC, as does the server . so in theory if you had a domain in which internet-connected clients chatted directly to an internet-connected DC, it's workable. But don't do that unless you already knew that. In reverse proxy scenarios (ISA/TMG), the protocol transition server needs to be on that network, i.e. not the client... but then the client isn't really the one doing the Kerberos bit (necessarily - think Forms auth to Kerb transition). ticket is long-lived (10h) meaning less DC communication during ticket lifetime - and to emphasise: this could save thousands to millions of requests per client over that lifetime - ( AuthPersistNonNTLM is still a thing; Kerberos PAC validation used to be a thing) requires a single round-trip to authenticate, but the authentication payload size is relatively large (commonly 6-16K) ( 401 , {(encoded) token size} 200 ) can be used with ("please always use Constrained") delegation to enable double-hop scenarios , i.e. Windows authentication of the connecting user to the next service actually, N-hop - it stacks like Lego! Add as many hops as needed... 
for example, to allow UserA to access IIS, and for IIS to impersonate that same Windows user account when it accesses a different SQL Server computer. This is "delegation of authentication". ( Constrained in this context means "but not anything else", eg Exchange or another SQL box) is currently the primary security package for Negotiate authentication meaning Windows domain members prefer it when they can get it requires registration of SPNs , which can be tricky. Rules that help . requires use of a name as the target, not an IP address reasons Kerb might fail: using an IP address instead of a name no SPN registered duplicate SPNs registered SPN registered against wrong account ( KRB_ERR_AP_MODIFIED ) no client DNS / DC connectivity client proxy setting / Local Intranet Zone not used for target site While we're at it: Basic can multi-hop. But does so by exposing your username and password directly to the target web app which can then do anything it wants with them. Anything . "Oh, did a Domain Admin just use my app? And did I just read their email? Then reset their password? Awww. Pity " needs transport layer security (i.e. TLS/SSL) for any form of security. and then, see previous issue works with any browser (but see first issue ) requires a single round-trip to authenticate ( 401 , 200 ) can be used in multi-hop scenarios because Windows can perform an interactive logon with basic credentials May need the LogonType to be configured to accomplish this (think the default changed to network cleartext between 2000 and 2003, but might be misremembering) but again , see first issue . Getting the impression that the first issue is really, really important? It is. To sum up: Kerb can be tricky to set up, but there are loads of guides ( my one ) out there that try to simplify the process, and the tools have improved vastly from 2003 to 2008 ( SetSPN can search for duplicates, which is the most common breaking issue; use SETSPN -S anytime you see guidance to use -A, and life will be happier). Constrained delegation is worth the cost of admission. | {
"source": [
"https://serverfault.com/questions/254813",
"https://serverfault.com",
"https://serverfault.com/users/67732/"
]
} |
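A hedged illustration of the SPN registration step called out in the answer above (the host and service-account names are hypothetical):
setspn -S HTTP/intranet.example.com EXAMPLE\svc-apppool
The -S switch checks for duplicate SPNs before adding one, which is exactly the breakage the answer warns about.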
255,120 | I would like to back up user files from one server to another with rsync, but I noticed that ownership of the user folders changes to root. How can I keep the user permissions with rsync (run by root)? | Use the -a flag, which includes, among other things, the options -o and -g , which preserve owners and groups. This requires that you run rsync as root. Also, see man rsync . | {
"source": [
"https://serverfault.com/questions/255120",
"https://serverfault.com",
"https://serverfault.com/users/80829/"
]
} |
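A minimal sketch of the suggested invocation, run as root on the sending side (paths and host name are placeholders):
rsync -a /home/ backupserver:/backups/home/
-a expands to -rlptgoD, so permissions (-p), owners (-o) and groups (-g) are preserved, as long as the receiving rsync also runs as root.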
255,135 | I'm wondering if there are advantages to checking if a server is UP by doing an "HTTP GET request" every second? Can any server handle it? | Can "any" server handle it? Probably. Should you do it? Probably not. Ask yourself a few questions: How fast will you be to respond to an outage? How many pageviews do you normally receive per second? How many consecutive errors are you willing to see before calling it "Down" and sending an alert? Do you have any SLA with internal or external customers that needs to be honored? Based on the questions listed above, what seems like a reasonable monitoring and response time? When I was first learning to program, I decided I wanted to make a stopwatch. When I finally got a working application, I noticed the CPU usage on my laptop was at 100% whenever I ran it. My execution loop didn't have a wait cycle. It just kept executing over the time function. On that day I learned a valuable lesson: there is no such thing as an infinitely accurate measurement. | {
"source": [
"https://serverfault.com/questions/255135",
"https://serverfault.com",
"https://serverfault.com/users/67127/"
]
} |
255,291 | I use %0 in batch file to get the containing directory of the batch file but the result is :- c:\folder1\folder2\batch.bat I want just directory, without batch file name, like this :- c:\folder1\folder2\ How can I do it? Maybe I should filter the path. If yes, how can I do it? | %~p0 Will return the path only. %~dp0 Will return the drive+path. More info on the subject can be found on Microsoft's site . Information about this syntax can also be found in the help for the for command by executing for /? on a Windows OS. | {
"source": [
"https://serverfault.com/questions/255291",
"https://serverfault.com",
"https://serverfault.com/users/52749/"
]
} |
255,521 | I was su'ed into a user to run a particular long running script. I wanted to use screen but I got the error message "Cannot open your terminal '/dev/pts/4' - please check." So I Googled around and came across a forum post that instructed to run $ script '/dev/null/' . I did so and then I could screen. Why does this work? What is su doing that screen cannot run as the su'ed user? Why does re-directing 'script' to /dev/null do that is prevented otherwise? Is it using script to write a log as the original user to somewhere? | Well, technically you're not redirecting anything here. Calling script /dev/null just makes script save the whole typescript into /dev/null which in practice means discarding the contents. See man script for detailed info and util-linux-ng package for implementation ( misc-utils/script.c ). This has nothing to do with screen actually. Why this works is invoking script has a side effect of creating a pseudo-terminal for you at /dev/pts/X . This way you don't have to do it yourself, and screen won't have permission issues - if you su from user A to user B , by directly invoking screen you try to grab possession of user A 's pseudo-terminal. This won't succeed unless you're root . That's why you see the error message. | {
"source": [
"https://serverfault.com/questions/255521",
"https://serverfault.com",
"https://serverfault.com/users/8168/"
]
} |
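Putting the explanation above into the order of commands actually typed (the user name is a placeholder):
su - otheruser
script /dev/null # allocates a new pseudo-terminal owned by otheruser; the typescript is discarded
screen # now starts without the /dev/pts permission error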
255,522 | Possible Duplicate: Can you help me with my software licensing issue? I'm evaluating how many user licenses I need for Windows Server 2008. There will only be two users who need physical / remote desktop to the server itself, but I plan to install Symantec Endpoint on this server. Will I need more licenses for Windows based on how many machines are using Endpoint? | Well, technically you're not redirecting anything here. Calling script /dev/null just makes script save the whole typescript into /dev/null which in practice means discarding the contents. See man script for detailed info and util-linux-ng package for implementation ( misc-utils/script.c ). This has nothing to do with screen actually. Why this works is invoking script has a side effect of creating a pseudo-terminal for you at /dev/pts/X . This way you don't have to do it yourself, and screen won't have permission issues - if you su from user A to user B , by directly invoking screen you try to grab possession of user A 's pseudo-terminal. This won't succeed unless you're root . That's why you see the error message. | {
"source": [
"https://serverfault.com/questions/255522",
"https://serverfault.com",
"https://serverfault.com/users/42183/"
]
} |
255,580 | I'm trying to enter a 4028 bit DKIM key into DNS and it seems that I'm exceeding both the UDP 512 byte limit and also the maximum record size for a TXT record. How does someone properly create a large key (with implied larger encoded size) and import it into DNS? | You need to split them in the text field. I believe that 2048 is the practical limit for key sizes. Split the text field into parts 255 characters or less. There is overhead for each split. There are two formats for long fields. TXT "part one" \
"part two" TXT ( "part one"
"part two" ) Both of which will combine as "part onepart two". More details from Zytrax. To generate my DKIM entry I insert my public key file and wrap it in quotation marks. My public key file contains the following: MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQD78Ki2d0zmOlmjYNDC7eLG3af12KrjmPDeYRr3
q9MGquKRkRFlY+Alq4vMxnp5pZ7lDaAXXwLYjN91YY7ARbCEpqapA9Asl854BCHMA7L+nvk9kgC0
ovLlGvg+hhqIPqwLNI97VSRedE60eS+CwcShamHTMOXalq2pOUw7anuenQIDAQAB After editing the key in my dns zone file appears as follows: dkim3._domainkey IN TXT ("v=DKIM1; t=s; p="
"MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQD78Ki2d0zmOlmjYNDC7eLG3af12KrjmPDeYRr3"
"q9MGquKRkRFlY+Alq4vMxnp5pZ7lDaAXXwLYjN91YY7ARbCEpqapA9Asl854BCHMA7L+nvk9kgC0"
"ovLlGvg+hhqIPqwLNI97VSRedE60eS+CwcShamHTMOXalq2pOUw7anuenQIDAQAB") DNS returns it as follow: bill:~$ host -t TXT dkim3._domainkey.systemajik.com
dkim3._domainkey.systemajik.com descriptive text "v=DKIM1\; t=s\; p=" "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQD78Ki2d0zmOlmjYNDC7eLG3af12KrjmPDeYRr3" "q9MGquKRkRFlY+Alq4vMxnp5pZ7lDaAXXwLYjN91YY7ARbCEpqapA9Asl854BCHMA7L+nvk9kgC0" "ovLlGvg+hhqIPqwLNI97VSRedE60eS+CwcShamHTMOXalq2pOUw7anuenQIDAQAB" DNS treats it as one long string with no extra spaces where the lines are joined. All " " sequences are ignored. | {
"source": [
"https://serverfault.com/questions/255580",
"https://serverfault.com",
"https://serverfault.com/users/51457/"
]
} |
256,216 | I'm running a server "myserver.net", which has the subdomains "a.myserver.net" and "b.myserver.net". When creating (self-signed) SSL certificates, I have to create one for every subdomain, containing the FQDN, even though those subdomains are just vhosts. OpenSSL permits only one "common name", which is the domain in question. Is there any possibility to create a certificate that is valid for all subdomains of a domain? | Yes, use *.myserver.net as the common name. This is called a wildcard cert and there are a large number of howtos to be found with this keyword. Here is one of them: https://web.archive.org/web/20140228063914/http://www.justinsamuel.com/2006/03/11/howto-create-a-self-signed-wildcard-ssl-certificate Update: if you want the cert to match the root domain as well (myserver.net), then you should use the Subject Alternative Name extension. When generating the cert using openssl, enter '*.myserver.net/CN=myserver.net' as the Common Name. Compatibility is good enough, unless you have an ancient browser. | {
"source": [
"https://serverfault.com/questions/256216",
"https://serverfault.com",
"https://serverfault.com/users/51301/"
]
} |
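A hedged sketch of generating such a self-signed certificate in one step; the -addext flag assumes OpenSSL 1.1.1 or newer, while older releases need the subjectAltName placed in a config file instead:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
-keyout myserver.key -out myserver.crt \
-subj "/CN=myserver.net" \
-addext "subjectAltName=DNS:myserver.net,DNS:*.myserver.net"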
256,746 | How common is it for companies to let many users share only one public IP address? I hope the answer is "not very common" since I'm developing software that depends on the IP number being pretty much unique. | Only one of the companies I've worked for since 1995, and that's quite a lot of companies, uses public IP addresses for desktop clients. So for me, the answer is: very common indeed. I would very strongly advise against deploying software that makes the assumption that IPv4 addresses are unique to end-users, at this stage of the v4 deployment. IPv6, that's a different kettle of ball games. | {
"source": [
"https://serverfault.com/questions/256746",
"https://serverfault.com",
"https://serverfault.com/users/16725/"
]
} |
257,148 | I found out that my CentOS 5 server on EC2 was rebooted, without my command to do so. I did not even log in on the day it was rebooted. When I look at 'last' in linux, it says: jeroen pts/0 128.97....... Thu Apr 7 15:02 - 16:28 (01:25)
reboot system boot 2.6.18-xenU-ec2- Wed Apr 6 15:48 (1+05:27)
jeroen pts/2 128.97....... Tue Apr 5 19:31 - 23:17 (03:45) So it looks like nobody logged on that day. Any suggestions to what could have happened? Does EC2 reboot instances under any circumstances? Or might it be hacked? | This happens occasionally when Amazon is looking to decommission the physical server your instance is running on. They might be killing the server to replace a failing hardware component or the server might have simply reached its end of life. Either way from what I understand, their process works something like this: The physical server gets marked as decommissioned so no new instances get launched on it. If they can (e.g. they're not dealing with a critical hardware failure), Amazon will wait for some period of time to see if the instances running on the server shutdown or reboot on their own (rebooting an EC2 instance usually results in it getting launched on a different physical server). After that period completes, Amazon will force the remaining instances to reboot moving them to other physical servers. As a general rule of thumb, due to this and other quirks in the environment, you should treat any individual server running on EC2 as Ephemeral. Any server may reboot or flat out disappear at any time. Accordingly automating deployment with something like chef or puppet , having solid monitoring, and designing your application to work around failure is critical. | {
"source": [
"https://serverfault.com/questions/257148",
"https://serverfault.com",
"https://serverfault.com/users/76098/"
]
} |
257,394 | Problem I have MySQL replication setup between 2 servers, master ( A ) and slave ( B ). I need to add a new slave to the mix ( C ). I want this slave to get it's updates directly from the master, I do not want chain replication from the slave. However, the master is "hot", I usually use Xtrabackup to create a full backup of the master, but this will lock it for a good 10 minutes, as the database is around 20GB in size. Possible Solution FLUSH TABLES WITH READ LOCK on slave B , use SHOW SLAVE STATUS on B , write down binlog and position. Then backup database with Xtrabackup, ship the backup to C and use it to create the slave, and set replication to point to A with the binlog position I just wrote down. Question Is there a better way that doesn't require me to lock B for so long? Or something that is more easily automated? | Hey I know a crazy method to create a slave without augmenting any operation of master (ServerA) or slave (ServerB) Step 1) Setup a New Server (ServerC) Step 2) On ServerC, Install MySQL (same version as ServerB) Step 3) On ServerC, service mysql stop Step 4) Copy /etc/my.cnf from ServerB to ServerC Step 5) On ServerC, change server_id to a value different from ServerA and ServerB Step 6) rsync /var/lib/mysql on ServerB to ServerC Step 7) When rsync is completed, run "STOP SLAVE;" on ServerB Step 8) rsync /var/lib/mysql on ServerB to ServerC Step 9) On ServerB, run "START SLAVE;" Step 10) On ServerC, service mysql start Step 11) On ServerC, run "START SLAVE;" (Do this if skip-slave-start is in /etc/my.cnf) Give it a Try !!! BTW I have the utmost confidence this will work because I just did this for client over the last 2 days. Client had 2.7TB of data on a slave. I rsyncd to another server while the slave was still active. rsync took like 11 hours. I then ran STOP SLAVE; on the first slave and ran rsync again. That took another hour. I then performed the above step and everything is done. | {
"source": [
"https://serverfault.com/questions/257394",
"https://serverfault.com",
"https://serverfault.com/users/18989/"
]
} |
257,513 | I'm trying to allow a remote server to access a MySQL instance that currently shares a Linux server with a web app. According to the documentation the only way this would be possible (unless I'm not understanding correctly) is if the bind-address directive is set to 0.0.0.0 , which results in MySQL allowing access from any IP that can produce a valid user. So, two questions: how detrimental would this be to security? is there a better approach to allowing both local and remote interaction with MySQL? | I think you are misunderstanding the bind-address setting a little. These are the local addresses that MySQL will listen for connections. The default is 0.0.0.0 which is all interfaces. This setting does not restrict which IPs can access the server, unless you specified 127.0.0.1 for localhost only. If you need to restrict certain users from specific IP addresses, utilize create/grant user like this CREATE USER 'bobdole'@'192.168.10.221'; | {
"source": [
"https://serverfault.com/questions/257513",
"https://serverfault.com",
"https://serverfault.com/users/45630/"
]
} |
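Expanding the answer's example into a grant scoped to one remote host (the user, IP address, database and password are placeholders):
CREATE USER 'appuser'@'203.0.113.10' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'203.0.113.10';
Combined with a firewall rule restricting port 3306 to that host, this limits the exposure of listening on 0.0.0.0.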
257,975 | I need it to determine if hitting ctrl + d would disconnect me from server or just close current screen . Is it somehow possible to check if I'm right now in screen session? | You can look at the $STY variable (a variable set by the screen command ). If it is not "" then you are in a screen session. I am in screen $ echo $STY
29624.pts-1.iain-10-04
$ I am not in screen $ echo $STY
$ | {
"source": [
"https://serverfault.com/questions/257975",
"https://serverfault.com",
"https://serverfault.com/users/46731/"
]
} |
257,979 | I am running Windows XP Professional SP3 on my Dell laptop. I use a VPN to connect to my work network to work remotely and control the computers and servers etc. I am connecting at home from an O2 Broadband router (Thompson) which I have access to the admin panel. I can sucessfully connect to the VPN and get an IP address on the work network. However, I cannot RDP or VNC any of the computers. I have made sure that the ports are pointing to my laptop using the routers application sharing options, I have turned the laptop and router firewall off aswell. Still no luck. Pointing the ports at a Win7 laptop allows the VPN to work as expected straight away. Just not on the Windows XP machine for some reason. I am connecting via WiFi on both laptops. Any ideas? | You can look at the $STY variable (a variable set by the screen command ). If it is not "" then you are in a screen session. I am in screen $ echo $STY
29624.pts-1.iain-10-04
$ I am not in screen $ echo $STY
$ | {
"source": [
"https://serverfault.com/questions/257979",
"https://serverfault.com",
"https://serverfault.com/users/41698/"
]
} |
258,064 | There is a server that is used from 4:30 am until ~ 22:00. Should it be turned off? I think that it is a server and that it won't have a problem staying on, but serious professors are telling me that it is dangerous and that the HD can fail within 2 years. The server owner believes that his old server, running since 1995 without backup and with a single hard disk (if the hard disk fails he is screwed), had no problem because he used to turn it off at night. What do you think about this? Now it has a RAID 1 array, an external hard disk backup, and several full hard disk backups on DVD and over the internet. | To liken it to a car analogy: a taxi can do over 500,000 kilometers before it needs an engine rebuild. The reason for this is that they are always running, 24/7, and after a car's engine is up to temperature, the amount of wear it receives while it is running is greatly reduced. A computer is kinda the same. The majority of the "wear" on parts happens when the server is booting up. Just attach an amp meter to your computer and turn it on. When it starts up, the power it draws climbs very high, and then it settles down once all the disks have spun up and the processor is initialised. Also, think about how much disk activity the server undergoes during boot up vs when it's working. Chances are the disk access from booting the OS is fairly solid activity, whereas when the OS is running, unless it's a very heavy database server (I'm guessing not), the disks will most likely stay fairly idle. If there's any time it's going to fail, chances are it will be on boot up. Turning your server on and off is a stupid idea. Not to mention that most servers can take upwards of 2-5 minutes just to get past the BIOS checks; it's a huge amount of wasted time too. 2018 Update: Given that most computers are now essentially entirely solid-state, this answer may no longer be as accurate as it once was. The taxi analogy doesn't really suit today's modern servers. That said, typically you still don't turn servers off. | {
"source": [
"https://serverfault.com/questions/258064",
"https://serverfault.com",
"https://serverfault.com/users/58706/"
]
} |
258,378 | I want to create a rule in nginx that does two things: Removes the "www." from the request URI Redirects to "https" if the request URI is "http" There are plenty of examples of how to do each of those things individually, but I can't figure out a solution that does both correctly (i.e. doesn't create a redirect loop and handles all cases properly). It needs to handle all of these cases: 1. http://www.example.com/path
2. https://www.example.com/path
3. http://example.com/path
4. https://example.com/path These should all end up at https://example.com/path (#4) without looping. Any ideas? | The best way to accomplish this is using three server blocks: one to redirect http to https, one to redirect the https www-name to no-www, and one to actually handle requests. The reason for using extra server blocks instead of ifs is that server selection is performed using a hash table, and is very fast. Using a server-level if means the if is run for every request, which is wasteful. Also, capturing the requested uri in the rewrite is wasteful, as nginx already has this information in the $uri and $request_uri variables (without and with query string, respectively). server {
server_name www.example.com example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
ssl_certificate /path/to/server.cert;
ssl_certificate_key /path/to/server.key;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
ssl_certificate /path/to/server.cert;
ssl_certificate_key /path/to/server.key;
server_name example.com;
<locations for processing requests>
} | {
"source": [
"https://serverfault.com/questions/258378",
"https://serverfault.com",
"https://serverfault.com/users/78018/"
]
} |
258,469 | Using postfix, I'd like all incoming mail, to any address (including those that don't map to local users) to be piped to a script. I've tried configuring mailbox_command in /etc/postfix/main.cf : mailbox_command = /path/to/myscript.py This works great if the user is a local user, but it fails for "unknown" users who don't have aliases. I tried setting luser_relay to a local user, but this pre-empts mailbox_command , and so the command doesn't get run. I tried setting local_recipient_maps= (empty string), but the message is still bounced (unknown user). Is there a magic invocation I can use to get all known and unknown users to go to the script as well? Full /etc/postfix/main.cf follows -- it's the default Ubuntu 10.04, with the exception of the mailbox_command line: # See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
readme_directory = no
# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.
myhostname = ... snip ...
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = sassafras, ... snip ...,localhost.localdomain, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
mailbox_command = /path/to/my/script.py | Ok, I just got this working -- though hairier than I thought it would be. I dropped the maildir_command part, and went with transport_maps . The key is to do 5 things: Set up a db file to handle aliases (and add a catch-all alias) Set up a db file to map the 'transport' for the domain in question to a special handler. Compile the db files into berkeley db format that postfix wants. Set up the handler in /etc/postfix/master.cf to pipe mail to the script. Set /etc/postfix/main.cf to use the transport db for transport_maps , and the alias db for virtual_alias-maps . (1) Create /etc/postfix/virtual_aliases to add a catch-all alias -- localuser needs to be an existing local user: @mydomain.tld [email protected] (2) Create /etc/postfix/transport to add a transport mapping. "mytransportname" can be whatever you want; it's used below in master.cf : mydomain.tld mytransportname: (3) Next, both transport and virtual_aliases need to be compiled into berkeley db files: $ sudo postmap /etc/postfix/virtual_aliases
$ sudo postmap /etc/postfix/transport (4) Add the transport to /etc/postfix/master.cf : mytransportname unix - n n - - pipe
flags=FR user=localuser argv=/path/to/my/script.py
${nexthop} ${user} (5) In /etc/postfix/main.cf : ...
transport_maps = hash:/etc/postfix/transport
virtual_alias_maps = hash:/etc/postfix/virtual_aliases And... good to go! Sheesh. | {
"source": [
"https://serverfault.com/questions/258469",
"https://serverfault.com",
"https://serverfault.com/users/67641/"
]
} |
258,474 | I'm using the strongswan documentation right here I've added to /etc/ipsec.secrets the following line: : RSA moonKey.pem "SomePwd" however i don't know how to create moonKey.pem . Any ideas? this is a follow up question to this one: strongSwan ipsec setup, couple of questions | Ok, I just got this working -- though hairier than I thought it would be. I dropped the maildir_command part, and went with transport_maps . The key is to do 5 things: Set up a db file to handle aliases (and add a catch-all alias) Set up a db file to map the 'transport' for the domain in question to a special handler. Compile the db files into berkeley db format that postfix wants. Set up the handler in /etc/postfix/master.cf to pipe mail to the script. Set /etc/postfix/main.cf to use the transport db for transport_maps , and the alias db for virtual_alias-maps . (1) Create /etc/postfix/virtual_aliases to add a catch-all alias -- localuser needs to be an existing local user: @mydomain.tld [email protected] (2) Create /etc/postfix/transport to add a transport mapping. "mytransportname" can be whatever you want; it's used below in master.cf : mydomain.tld mytransportname: (3) Next, both transport and virtual_aliases need to be compiled into berkeley db files: $ sudo postmap /etc/postfix/virtual_aliases
$ sudo postmap /etc/postfix/transport (4) Add the transport to /etc/postfix/master.cf : mytransportname unix - n n - - pipe
flags=FR user=localuser argv=/path/to/my/script.py
${nexthop} ${user} (5) In /etc/postfix/main.cf : ...
transport_maps = hash:/etc/postfix/transport
virtual_alias_maps = hash:/etc/postfix/virtual_aliases And... good to go! Sheesh. | {
"source": [
"https://serverfault.com/questions/258474",
"https://serverfault.com",
"https://serverfault.com/users/75759/"
]
} |
258,489 | What do I need to write in crontab to execute a script at 3pm every day? | You are looking for something like this (via crontab -e): 0 15 * * * your.command.goes.here 15 is the hour and 0 is the minute that the script is run. Day of month, month, and day of week get wildcards so that the script gets run daily. | {
"source": [
"https://serverfault.com/questions/258489",
"https://serverfault.com",
"https://serverfault.com/users/48728/"
]
} |
259,114 | /opt/eduserver/eduserver gives me options: Usage: /opt/eduserver/eduserver
{start|stop|startphp|startwww|startooo|stopphp|stopwww|stopooo|restartphp|restartwww|restartooo|status|restart|reload|force-reload} where memcache is a PHP module; there is a memcache.ini in /opt/eduserver/etc/php/conf.d . I want to clear the memcache from the command line. Can I do it somehow without 'touching' any other part of the web server? | Yes, you can clear the memcache. Try: telnet localhost 11211
flush_all
quit If the memcache does not run on localhost 11211, you will have to adjust the host and port accordingly. | {
"source": [
"https://serverfault.com/questions/259114",
"https://serverfault.com",
"https://serverfault.com/users/33510/"
]
} |
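If this needs to be scripted rather than typed into an interactive telnet session, the same flush_all command can be sent with netcat; a sketch assuming memcached listens on localhost:11211 as above (the timeout flag differs between netcat variants):

    printf 'flush_all\r\nquit\r\n' | nc -q 1 localhost 11211
    # some netcat builds use -w instead of -q:
    printf 'flush_all\r\nquit\r\n' | nc -w 1 localhost 11211

A reply of OK indicates the cache was flushed.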
259,226 | I need to automatically install a package with its config file already present on the server. I'm looking for something like: apt-get install --yes --force-yes --keep-current-confs mysql-server Probably a dumb question but I can't find such an option. | Found the answer on Raphael Hertzog's Blog : apt-get install -o Dpkg::Options::="--force-confold" --force-yes -y mysql-server It's dpkg's role to configure, therefore to choose which conf file to keep. You can also add this to the config of the system by creating a file in /etc/apt/apt.conf.d/ with this content: Dpkg::Options {
"--force-confdef";
"--force-confold";
} | {
"source": [
"https://serverfault.com/questions/259226",
"https://serverfault.com",
"https://serverfault.com/users/16728/"
]
} |
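For fully unattended installs it is common to combine those dpkg options with a non-interactive debconf frontend so no prompts appear at all; a sketch using the same package as the answer above:

    export DEBIAN_FRONTEND=noninteractive
    apt-get update
    apt-get install -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" mysql-server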
259,227 | How can I know how many users are using an IIS server, and who they are (IP, etc.)?
Is there a tool for showing these kinds of stats? Thanks! | Found the answer on Raphael Hertzog's Blog : apt-get install -o Dpkg::Options::="--force-confold" --force-yes -y mysql-server It's dpkg's role to configure, therefore to choose which conf file to keep. You can also add this to the config of the system by creating a file in /etc/apt/apt.conf.d/ with this content: Dpkg::Options {
"--force-confdef";
"--force-confold";
} | {
"source": [
"https://serverfault.com/questions/259227",
"https://serverfault.com",
"https://serverfault.com/users/78320/"
]
} |
259,302 | On Ubuntu, it looks like the best place for a private key used to sign a certificate (for use by nginx) is in /etc/ssl/private/ This answer adds that the certificate should go in /etc/ssl/certs/ but that seems like an unsafe place. Do .crt files need to be kept safe or are they considered public? | The .crt file is sent to everything that connects; it is public. ( chown root:root and chmod 644 ) To add to the private key location; make sure you secure it properly as well as having it in there. ( chown root:ssl-cert and chmod 640 ) | {
"source": [
"https://serverfault.com/questions/259302",
"https://serverfault.com",
"https://serverfault.com/users/9185/"
]
} |
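A short sketch of the ownership and modes mentioned in the answer above, assuming a Debian/Ubuntu layout where the ssl-cert group exists (file names are placeholders; if the group is absent, root:root with mode 600 on the key works too):

    chown root:root     /etc/ssl/certs/example.com.crt
    chmod 644           /etc/ssl/certs/example.com.crt
    chown root:ssl-cert /etc/ssl/private/example.com.key
    chmod 640           /etc/ssl/private/example.com.key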
259,339 | I have a problem in one of my shell scripts. Asked a few colleagues, but they all just shake their heads (after some scratching), so I've come here for an answer. According to my understanding the following shell script should print "Count is 5" as the last line. Except it doesn't. It prints "Count is 0". If the "while read" is replaced with any other kind of loop, it works just fine. Here's the script: echo "1">input.data
echo "2">>input.data
echo "3">>input.data
echo "4">>input.data
echo "5">>input.data
CNT=0
cat input.data | while read ;
do
let CNT++;
echo "Counting to $CNT"
done
echo "Count is $CNT" Why does this happen and how can I prevent it? I've tried this in Debian Lenny and Squeeze, same result (i.e. bash 3.2.39 and bash 4.1.5).
I fully admit to not being a shell script wizard, so any pointers would be appreciated. | See argument @ Bash FAQ entry #24: "I set variables in a loop. Why do they suddenly disappear after the loop terminates? Or, why can't I pipe data to read?" (most recently archived here ). Summary:
Keeping the variable changes made inside the loop is only supported from bash 4.2 and up (via the lastpipe shell option).
On earlier bash versions you need a different construct instead of piping into the loop, for example redirecting the file into the loop or using process substitution. | {
"source": [
"https://serverfault.com/questions/259339",
"https://serverfault.com",
"https://serverfault.com/users/17621/"
]
} |
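Two common workarounds for the script above, sketched for illustration: redirect the file into the loop so no subshell is created, or on bash 4.2 and later enable lastpipe in a script:

    # redirect instead of a pipe; CNT survives the loop
    CNT=0
    while read -r line; do
        let CNT++
    done < input.data
    echo "Count is $CNT"

    # bash >= 4.2, in a script (job control off)
    shopt -s lastpipe
    CNT=0
    cat input.data | while read -r line; do let CNT++; done
    echo "Count is $CNT"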
259,505 | For apache, there is the htpasswd utility, which can be used to generate encrypted passwords for .htaccess access restriction etc. In Ubuntu I can install it via the apache2-utils package, but in Scientific Linux (Red Hat) I find only the following package, when I do yum search htpasswd : perl-Apache-Htpasswd.noarch : Manage Unix crypt-style password file but this does not seem to be the package I'm looking for, since it does not include the htpasswd command, and also when I do apt-cache search htpasswd in Ubuntu, I get: libapache-htpasswd-perl - Manage Unix crypt-style password file
lighttpd - A fast webserver with minimal memory footprint
nanoweb - HTTP server written in PHP
apache2-utils - utility programs for webservers ... where the first one is quite obviously the one corresponding to the one I found for Red Hat above(?). So, is there any equivalent to the apache2-utils package, or any other package including the htpassd utility, for Red Hat/Scientific Linux? At least I can't find it ... | Try yum provides \*bin/htpasswd | {
"source": [
"https://serverfault.com/questions/259505",
"https://serverfault.com",
"https://serverfault.com/users/59880/"
]
} |
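On RHEL-type systems that query typically points at the httpd-tools package (on older releases htpasswd ships inside the httpd package itself); a sketch of installing it and creating a password file, with the path and user names as placeholders:

    yum install httpd-tools
    htpasswd -c /etc/httpd/.htpasswd someuser    # -c creates the file; omit it for additional users
    htpasswd /etc/httpd/.htpasswd anotheruser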
259,580 | I ran this command: python ./manage.py dumpdata partyapp.InvitationTemplate > partyapp_dump.json To dump data into the partyapp_dump.json file. But all the data just gets printed on the screen and an empty partyapp_dump.json file is created. Why could this happen? I tested ls > partyapp_dump.json and that worked perfectly. | With > you only redirect the standard output.
Try 2> instead to redirect the error output. Use &> to redirect both. | {
"source": [
"https://serverfault.com/questions/259580",
"https://serverfault.com",
"https://serverfault.com/users/20346/"
]
} |
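Applied to the command from the question, the two redirections look like this (file names kept from the question, the error log name is arbitrary):

    # keep data in the dump file and errors in a separate log
    python ./manage.py dumpdata partyapp.InvitationTemplate > partyapp_dump.json 2> dump_errors.log

    # or capture both streams in one file
    python ./manage.py dumpdata partyapp.InvitationTemplate > partyapp_dump.json 2>&1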
259,792 | We have a number of Supermicro machines with IPMI/BMC features. Some of these machines use an onboard BMC, while others use an add-on card . We are looking into using sideband due to it's reduced costs and cabling requirements. However, some sideband details don't quite make sense. Sideband requires one ethernet cable which is plugged into an ethernet port on the motherboard. This network port is then shared between the IPMI system and the operating system. From what I read in this Supermicro manual , "Use the same MAC address you are using for LAN1 for the SIMSO IPMI card". However, the IPMI must have a different IP address then the operating system. How is it possible to have two devices (the operating system and the IPMI) which can listen and transmit on this same physical network port? When a packet arrives at the interface, how does the system determine if this packet is intended for the Operating System or for the IPMI system? Are these packets handled by the CPU at all, using CPU interrupts? Can packets to the IPMI interface be viewed by the operating system? | I manage a lot of SuperMicro servers using the onboard IPMI. I have a love/hate relationship with the shared (aka sideband) ethernet. In general, the way these things work is that LAN1 appears to have 2 (different) MAC addresses - one is for the IPMI interface, the other your standard Broadcom NIC. Traffic to the IPMI interface (layer 2, based on the MAC address) is magically intercepted below the operating system level and never seen by whatever OS is running. You've already hit on the one good point for them: less cabling. Now let me cover some of the downsides: It's particularly difficult to partition the IPMI interface onto a separate subnet in a secure manner. Since the traffic all goes over the same cable, you (almost) always have to have the IPMI interface and the LAN1 interface on the same IP subnet. On the latest motherboards, the IPMI cards now support assigning a VLAN to the IPMI NIC, so you can get some semblance of separation - but the underlying OS could always sniff the traffic for that VLAN. Older BMC controllers don't allow changing the VLAN at all, although tools like ipmitool or ipmicfg will ostensibly let you change it, it just doesn't work. You're centralizing your failure points on the system. Doing configuration on a switch and manage to cut yourself off somehow? Congratulations, you've now cut off the primary network connection to your server AND the backup via IPMI. NIC hardware fail? Congratulations, same problem. Early SuperMicro IPMI BMCs were notorious for doing wonky things with the network interface. Whether to use the onboard vs. dedicated IPMI port was often determined at power-on (not restart), and would not toggle from there. If you had a power outage and your switch didn't provide power quickly enough, you could end up with the IPMI failing to work because it autodetected the wrong setting. I've personally had lots of weird, unexplainable connectivity issues getting the sideband IPMI working reliably. Sometimes I simply couldn't ping the interface IP for a few minutes. Sometimes I'd get a storm of packets on the assigned VLAN, but the traffic all appeared to be dropped. While this has nothing to do with sideband-vs.-dedicated, I'll also note that the tools for accessing host systems are very poorly written. Older IPMI cards don't support anything other than local authentication, making password rotation a total pain. 
If you're using the KVM-over-IP functionality, you're stuck using an improperly-signed, expired Java applet or a weird Java desktop application that only works on Windows and requires UAC elevation to run. I've found the keyboard entry to be spotty at best, sometimes getting "stuck keys" such that it's impossible to type a password to login without trying 10 times. I've eventually managed to get 40+ systems working with this arrangement. I've got mostly newer systems I could VLAN the IPMI interfaces onto a separate subnet, and I mostly use the serial console via ipmitool which works very well. For the next generation of servers, I'm looking at Intel's AMT technology with KVM support ; as this makes it into the server space, I can see replacing IPMI with this. | {
"source": [
"https://serverfault.com/questions/259792",
"https://serverfault.com",
"https://serverfault.com/users/36178/"
]
} |
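For reference, the BMC settings discussed above (shared-NIC IP, VLAN tag, serial-over-LAN) can usually be inspected and changed with ipmitool; a sketch assuming the BMC sits on channel 1 and the local OpenIPMI driver is loaded (the VLAN id, BMC address and user are placeholders):

    ipmitool lan print 1                       # current IP, MAC and VLAN of the BMC
    ipmitool lan set 1 vlan id 42              # tag IPMI traffic onto VLAN 42
    ipmitool -I lanplus -H <bmc-ip> -U <user> sol activate    # serial-over-LAN console from another host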
260,607 | I have used pg_dump on one machine and copied result file to another, where I tried to restore it. I believe schema is the same. However, I get: pg_restore: [archiver] input file does not appear to be a valid archive I have done following operations: pg_dump -a -f db.txt dbname and: pg_restore -a -d dbname db.txt What might be wrong? | You are dumping in plain SQL format which was designed to feed to psql . This is not recognized by pg_restore . cat db.txt | psql dbname Should do the trick | {
"source": [
"https://serverfault.com/questions/260607",
"https://serverfault.com",
"https://serverfault.com/users/34330/"
]
} |
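If a pg_restore-compatible archive is actually wanted, dumping in the custom format instead of plain SQL is the usual route; a sketch reusing the database name from the question:

    pg_dump -Fc -a -f db.dump dbname     # -Fc custom format, -a data only as in the question
    pg_restore -a -d dbname db.dump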
260,706 | I've got a CentOS 5.x box running on a VPS platform. My VPS host misinterpreted a support inquiry I had about connectivity and effectively flushed some iptables rules. This resulted in ssh listening on the standard port and acknowledging port connectivity tests. Annoying. The good news is that I require SSH Authorized keys. As far as I can tell, I don't think there was any successful breach. I'm still very concerned about what I'm seeing in /var/log/secure though: Apr 10 06:39:27 echo sshd[22297]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:27 echo sshd[22298]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:31 echo sshd[22324]: Invalid user edu1 from 222.237.78.139
Apr 10 06:39:31 echo sshd[22324]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:31 echo sshd[22330]: input_userauth_request: invalid user edu1
Apr 10 13:39:31 echo sshd[22330]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:35 echo sshd[22336]: Invalid user test1 from 222.237.78.139
Apr 10 06:39:35 echo sshd[22336]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:35 echo sshd[22338]: input_userauth_request: invalid user test1
Apr 10 13:39:35 echo sshd[22338]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:39 echo sshd[22377]: Invalid user test from 222.237.78.139
Apr 10 06:39:39 echo sshd[22377]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:39 echo sshd[22378]: input_userauth_request: invalid user test
Apr 10 13:39:39 echo sshd[22378]: Received disconnect from 222.237.78.139: 11: Bye Bye What exactly does "POSSIBLE BREAK-IN ATTEMPT" mean? That it was successful? Or that it didn't like the IP the request was coming from? | Unfortunately this in now a very common occurrence. It is an automated attack on SSH which is using 'common' usernames to try and break into your system. The message means exactly what it says, it does not mean that you have been hacked, just that someone tried. | {
"source": [
"https://serverfault.com/questions/260706",
"https://serverfault.com",
"https://serverfault.com/users/21875/"
]
} |
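Two things commonly done about this log noise, sketched here as examples rather than requirements: stop sshd from doing the reverse-DNS check that produces the message, and let a tool such as fail2ban ban repeat offenders automatically:

    # /etc/ssh/sshd_config
    UseDNS no          # drops the "reverse mapping checking ..." warnings

    # ban hosts after repeated failed logins
    yum install fail2ban      # or: apt-get install fail2ban

Remember to reload sshd after editing the config.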
260,958 | I can't seem to get a straight answer to this quesion. Wikipedia says "IPsec is an integral part of the base protocol suite in IPv6," but does that mean that ALL communications are always encrypted, or does it mean that encryption is optional, but devices must be able to understand it (should it be used)? If encryption is optional, is it the operating system that decides whether to use encryption, or is it the application? Do popular operating systems and software generally enable encryption? I would investigate this myself, but I lack IPv6 connectivity. Update: Ok, so it's optional. My follow-up question: typically, is it the application that defines whether to use encryption, or is it the operating system? A specific example: Imagine I have a recent version of Windows with native ipv6 support, and I search for something on ipv6.google.com using Mozilla Firefox. Would it be encrypted? | No. IPv6 has IPsec built-in as part of the protocol, and it's not a bolt-on as it is with IPv4. However, this doesn't mean it's enabled by default, it just means it's a (theoretically) lower overhead on the network stack. Generally, IPsec usage is determinated at the IP-level of the network stack, and therefore determined by the system policies itself. e.g. System A might have a policy that requires both AH and ESP to have any communication with the 4.0.0.0/8 subnet. Update: to be clear, the application doesn't care - it just knows it has to open a network connection somewhere and send/receive data down it. The system then has to figure out whether to negotiate IPsec for the given requested connection. IPsec is very much designed to be a low-level authentication/encryption mechanism and is purposefully built so that higher-level protocols and applications don't have to worry about it. That said, it's just another network-level security control, and shouldn't necessarily be used in isolation or relied upon to guarantee 'security' - if you're trying to solve and authentication problem, it's entirely possible that you'd want the application to enforce some sort of user-level authentication whilst leaving machine-level authentication down to IPsec. | {
"source": [
"https://serverfault.com/questions/260958",
"https://serverfault.com",
"https://serverfault.com/users/78775/"
]
} |
261,341 | Is the hostname case sensitive? Is ping MYHOST equal to ping myhost Does it depend on the DNS used? Are there differences between Win/Mac/Unix systems? | Names resolved from DNS are case insensitive. This is important to prevent confusion. If it was case sensitive then we would have eight variants of .com (.com, .Com, .cOm, .COm, .coM, .CoM, .cOM, and .COM). Country codes would have four. If name resolution is case-sensitive for Ping it is not being done by DNS. | {
"source": [
"https://serverfault.com/questions/261341",
"https://serverfault.com",
"https://serverfault.com/users/2165/"
]
} |
261,354 | I'm trying to check if my configuration management system is running on my servers. It is pretty easy to use it to distribute a Zabbix configuration that will test if the CMS is running. However, hosts that are not presently running the CMS will return ZBX_NOTSUPPORTED, and I'd like to detect these as well. How can I do that? | Names resolved from DNS are case insensitive. This is important to prevent confusion. If it was case sensitive then we would have eight variants of .com (.com, .Com, .cOm, .COm, .coM, .CoM, .cOM, and .COM). Country codes would have four. If name resolution is case-sensitive for Ping it is not being done by DNS. | {
"source": [
"https://serverfault.com/questions/261354",
"https://serverfault.com",
"https://serverfault.com/users/9060/"
]
} |
261,802 | What are the functional differences between the .profile , .bash_profile and .bashrc files? | .bash_profile and .bashrc are specific to bash , whereas .profile is read by many shells in the absence of their own shell-specific config files. ( .profile was used by the original Bourne shell.) .bash_profile or .profile is read by login shells, along with .bashrc ; subshells read only .bashrc . (Between job control and modern windowing systems, .bashrc by itself doesn't get used much. If you use screen or tmux , screens/windows usually run subshells instead of login shells.) The idea behind this was that one-time setup was done by .profile (or shell-specific version thereof), and per-shell stuff by .bashrc . For example, you generally only want to load environment variables once per session instead of getting them whacked any time you launch a subshell within a session, whereas you always want your aliases (which aren't propagated automatically like environment variables are). Other notable shell config files: /etc/bash_profile (fallback /etc/profile ) is read before the user's .profile for system-wide configuration, and likewise /etc/bashrc in subshells (no fallback for this one). Many systems including Ubuntu also use an /etc/profile.d directory containing shell scriptlets, which are . ( source )-ed from /etc/profile ; the fragments here are per-shell, with *.sh applying to all Bourne/POSIX compatible shells and other extensions applying to that particular shell. | {
"source": [
"https://serverfault.com/questions/261802",
"https://serverfault.com",
"https://serverfault.com/users/78786/"
]
} |
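A common pattern that keeps the two bash files from drifting apart is to put one-time environment setup in ~/.bash_profile and have it source ~/.bashrc for the per-shell bits; a minimal sketch:

    # ~/.bash_profile  (login shells)
    export PATH="$HOME/bin:$PATH"      # one-time environment setup

    # pull in aliases, prompt, etc. from the per-shell file
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi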
261,803 | We're trying to fulfill the PCI Compliance requirement for a Wireless Analyzer that detects the presence of rogue AP's on the internal LAN. Questions: Are there a class of devices that will accomplish this requirement? How does such a device determine the difference between an AP that's nearby (say from a local coffee shop) and one that's actually getting it's internet/network access from a hard line on the corporate LAN (USB AP tethered off a workstation, AP plugged into a wall jack in an office, etc.) Our AP is a DLink DWL-3200AP, which has a "wireless analyzer" built into it, but it does not seem to be able to do much more than a wifi card will do, since it simply detects every single AP that's broadcasting it's SSID nearby, regardless of whether or not that AP is connected to our LAN EDIT: We're in a windows environment... Any help would be much appreciated... | .bash_profile and .bashrc are specific to bash , whereas .profile is read by many shells in the absence of their own shell-specific config files. ( .profile was used by the original Bourne shell.) .bash_profile or .profile is read by login shells, along with .bashrc ; subshells read only .bashrc . (Between job control and modern windowing systems, .bashrc by itself doesn't get used much. If you use screen or tmux , screens/windows usually run subshells instead of login shells.) The idea behind this was that one-time setup was done by .profile (or shell-specific version thereof), and per-shell stuff by .bashrc . For example, you generally only want to load environment variables once per session instead of getting them whacked any time you launch a subshell within a session, whereas you always want your aliases (which aren't propagated automatically like environment variables are). Other notable shell config files: /etc/bash_profile (fallback /etc/profile ) is read before the user's .profile for system-wide configuration, and likewise /etc/bashrc in subshells (no fallback for this one). Many systems including Ubuntu also use an /etc/profile.d directory containing shell scriptlets, which are . ( source )-ed from /etc/profile ; the fragments here are per-shell, with *.sh applying to all Bourne/POSIX compatible shells and other extensions applying to that particular shell. | {
"source": [
"https://serverfault.com/questions/261803",
"https://serverfault.com",
"https://serverfault.com/users/36513/"
]
} |
262,474 | I need to check that an OpenVPN (UDP) server is up and accessible on a given host:port. I only have a plain Windows XP computer with no OpenVPN client (and no chance to install it) and no keys needed to connect to the server - just common WinXP command line tools, a browser and PuTTY are in my disposition. If I was testing something like an SMTP or POP3 servert I'd use telnet and see if it responds, but how to do this with OpenVPN (UDP)? | Here is a shell one-liner: echo -e "\x38\x01\x00\x00\x00\x00\x00\x00\x00" |
timeout 10 nc -u openvpnserver.com 1194 | cat -v if there is an openvpn on the other end the output will be @$M-^HM--LdM-t|M-^X^@^@^@^@^@@$M-^HM--LdM-t|M-^X^@^@^@^@^@@$M-^HM--LdM-t|M-^X... otherwise it will just be mute and timeout after 10 seconds or display something different. NOTE: this works only if tls-auth config option is not active, otherwise the server rejects messages with incorrect HMAC. | {
"source": [
"https://serverfault.com/questions/262474",
"https://serverfault.com",
"https://serverfault.com/users/42777/"
]
} |
262,541 | I have following situation: =$ LC_ALL=C df -hP | column -t
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-rootlv 19G 854M 17G 5% /
/dev/mapper/vg0-homelv 19G 343M 18G 2% /home
/dev/mapper/vg0-optlv 19G 192M 18G 2% /opt
/dev/mapper/vg0-varlv 19G 357M 18G 2% /var I'd like to know what physical disks are used by these volumes, and how much free disk space (unallocated) I have, so that I will know how much I can grow these. | This is relatively easy. Use lvdisplay to show logical volumes, vgdisplay to show volume groups (including free space available) and pvdisplay to show physical volumes. You should get all the data you need from those three commands, albeit with some work to figure out what all the various bits of data mean. | {
"source": [
"https://serverfault.com/questions/262541",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
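The terse forms of those commands give a quick overview, and growing a volume from the free space is a one-liner; a sketch using the volume group and LV names from the question (the +5G figure is arbitrary, and resize2fs assumes the ext3/ext4 filesystems that df suggests):

    pvs            # which physical disks/partitions back vg0
    vgs vg0        # VFree = unallocated space left in the group
    lvs vg0

    # example: give /var another 5G
    lvextend -L +5G /dev/vg0/varlv
    resize2fs /dev/mapper/vg0-varlv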
262,626 | I'm always running ssh with the -i parameter and it's a hassle to always type in the correct key for whatever host I'm connecting to. Is there a config file or something (on Mac) to define which private key to use when connecting to a particular host? | Yes, you want to create a ~/.ssh/config file. That lets you define a shortcut name for a host, the username you want to connect as, and which key to use. Here's part of mine, with hostnames obfuscated: Host tabs
HostName tabs.com
User me
IdentityFile ~/.ssh/new_rsa
Host scm.company.com
User cap
IdentityFile ~/.ssh/git_rsa
Host project-staging
HostName 50.56.101.167
User me
IdentityFile ~/.ssh/new_rsa With this I can say, ssh tabs and get connected to host tabs.com as user me , with key new_rsa , as though I'd used ssh [email protected] -i ~/.ssh/new_rsa . | {
"source": [
"https://serverfault.com/questions/262626",
"https://serverfault.com",
"https://serverfault.com/users/77287/"
]
} |
263,133 | I am about to begin a 4-year information security degree at Purdue. The degree does not call for any programming courses. So the only time I will be able to take one is the occasional elective. So most of my learning will be on my own. At the start of my senior year of high school I decided to completely switch to Linux. So far I have been learning some Linux and security stuff. However, I also believe it will be important for me to also learn a few programming languages. Basically I am planning on learning to program side-by-side with learning how to use Vim. So it most likely will be a slow process. In the end I think it will be worth it, though. As I said, I am going into the security, so I will mostly be creating security related applications, most of which will be networking related. I would also like to begin developing Android applications, but that will be later down the road. With that said I have a few ideas. I was thinking of starting with JavaScript, because it is cross-platform, and I have seen it suggested before. I have also been hearing a lot about Ruby or could go the natural Linux route with C. What direction should I go? | First and foremost: bash , along with the common command line utilities. Bash is the default user interface to the operating system, and a lot of programs on a Linux system will be wrapped in a shell script at some level. It can be quirky, has some idiosyncrasies, and often seems downright dumb, but it's something you will have to deal with, so get comfortable with it. The standard tools like grep , diff , head , tail , sort , uniq , and so on, will be very helpful not only with shell scripting, but with your productivity on the command-line. Learn at least some c . It's what the lowest levels of the system are written in, and it will give you a better understanding of the system as a whole. Pick any higher level language you like. Python , ruby , perl , java , whatever - as long as you enjoy it. This is where you really start to learn how to "program", and from here on out it will be easier to pick up more languages, and keep learning . | {
"source": [
"https://serverfault.com/questions/263133",
"https://serverfault.com",
"https://serverfault.com/users/79398/"
]
} |
263,260 | So, I just created the Amazon RDS account.
And I started a database instance. The "endpoint" is:
abcw3n-prod.cbmbuiv8aakk.us-east-1.rds.amazonaws.com Great! Now I try to connect to it from one of my other EC2 instances. mysql -uUSER -pPASS -habcw3n-prod.cbmbuiv8aakk.us-east-1.rds.amazonaws.com But nothing works and it just hangs. I tried to ping it, and nothing works either. Nothing happens. Do I need to change some settings? | By default RDS does not allow any connection that is not specified within the Security Group (SG). You can allow based on CIDR addressing or by Amazon account number which would allow any EC2 under that account to access it. | {
"source": [
"https://serverfault.com/questions/263260",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
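On the current VPC-based RDS setup this usually means opening the MySQL port in the security group attached to the RDS instance, either to the EC2 instances' security group or to a CIDR range; a sketch with the aws CLI where both group IDs and the CIDR are placeholders:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 3306 \
        --source-group sg-0fedcba9876543210
    # or, by address range:
    # aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 10.0.0.0/16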
263,530 | I want to observe the HTTPS protocol. How can I use a Wireshark filter to do that? | As 3molo says. If you're intercepting the traffic, then port 443 is the filter you need. If you have the site's private key, you can also decrypt that SSL traffic. (This needs an SSL-enabled version/build of Wireshark.) See http://wiki.wireshark.org/SSL | {
"source": [
"https://serverfault.com/questions/263530",
"https://serverfault.com",
"https://serverfault.com/users/13725/"
]
} |
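For completeness, capture filters and display filters are written differently; a few illustrative examples (older Wireshark releases use ssl as the display filter name, newer ones use tls):

    # capture filter (limits what gets recorded)
    tcp port 443

    # display filters (applied to an existing capture)
    tcp.port == 443
    ssl        # or: tls on newer releases

    # command-line equivalent with tshark
    tshark -i eth0 -f "tcp port 443"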
263,605 | My company has multiple domains and I log into my local machine with one set of credentials, but often when accessing certain network resources I need to use a different set of credentials. In Windows I would use RunAs where I have the option to run the entire process as under a different set of credentials or I could tell it to only impersonate the other user over the network ( runas /netonly ). Is there something like this in Linux? | $ sudo -u <username> <command> That will run the specified command as the user specified. It's not an exact drop-in for Windows' RunAs function, though, as that incorporates Kerberos authentication as well as for tasks that connect to remote hosts. | {
"source": [
"https://serverfault.com/questions/263605",
"https://serverfault.com",
"https://serverfault.com/users/79558/"
]
} |
263,694 | This is a Canonical Question about the Cost of Enterprise Storage. See also the following question: What's the best way to explain storage issues to developers and other users Regarding general questions like: Why do I have to pay 50 bucks a month per extra gigabyte of storage? Our file server is always running out of space, why doesn't our sysadmin just throw an extra 1TB drive in there? Why is SAN equipment so expensive? The Answers here will attempt to provide a better understanding of how enterprise-level storage works and what influences the price. If you can expand on the Question or provide insight as to the Answer, please post. | Server hard-drive capacities are miniscule compared to desktop hard-drive capacities. 450 and 600GB are not uncommon sizes to see in brand new servers, and you could buy many 4TB SATA desktop drives for the price of one 600GB SAS (server) hard drive. Your SATA hard-drive in your desktop PC at home is like a muscle car from Ford, or GM or Mercedes or any other manufacturer of cars for every-day people (large capacity V8 or V12, 5 or 6 litres). Because they need to be driven by people who don't have a racing license, or understand how an internal combusion engine works, they have very large tolerances. They have rev limiters, they're designed to run on any oil of a certain rating, they have service intervals say 10,000km apart, but if you miss a service interval by a few weeks it won't explode in your face. They don't catch fire when you drive long distances. The SAS drive in a server is more akin to a Formula 1 engine. They're really small (2.4 litres) but have immense power outputs because of their tiny tolerances. They rev higher, and often have no rev limiter (which means they suffer serious damage if driven incorrectly), and if you miss a service interval (which is every few hours ) they explode. You're basically comparing chalk and cheese. Numbers and a full breakdown are discussed in the Intel Whitepaper Enterprise-class versus Desktop-class Hard Drives Let's talk some hard numbers here. Let's say you request 1MB of additional data (a nice round number). How much data is that really ? Well, your 1MB of data is going to go into a RAID array. Let's say they're being safe and making that into RAID1. Your 1MB of data is mirrored, so it's actually 2MB of data. Let's say your data is inside a SAN. In case of a SAN node failure, your data is synchronized at a byte-level to a 2nd SAN node. So it's duplicated, and your 2MB of data is now 4MB. You expect your provider to keep on-site backups, so your data can be restored in the case of a non-disaster emergency? Any decent provider is going to provide you with at least 1 on-site backup, perhaps more. Let's say they take snapshots once a week for three weeks on-site. That's an extra 3MB of data, so you're now up to 7MB. If there is a critical disaster, your provider had better have a copy kept off-site somewhere. Even if it's a month old, it should exist. So now you're up to 8MB. If it's a really high-level provider, they may even have a disaster recovery site that's synchronized live. These disks will be RAIDed as well, so that's an extra 2MB, and thus you're up to 10MB of data. You're going to have to transfer that data eventually. What? Transfer it? Yes, data transfer costs money. 
It costs money when you download it, access it over the internet, it even costs money to back it up (someone has to take those tapes out of the office, and it could be that your 1MB of data means they have to purchase an extra set of tapes and transfer them somewhere). When your SATA home drive fails you get to call tech support and convince them your drive is dead. Then send your drive in to the manufacturer (on your own dime most times). Wait a week. Get a replacement drive back and have to reinstall it (it almost certainly isn't hot swappable or in a drive sled already). When that SAS drive fails you call the tech support. They almost never question your opinion that the drive needs immediate replacement and drop ship a new drive; usually the new drive is delivered later that same day, otherwise the next day is very common too. Commonly the manufacturer will send a representative out to actually install the drive if you don't know how (very handy if you plan on taking a vacation ever and need for things to keep working while you are away). Enterprise drives have tight tolerances, see #2 above, and tend to last about 10 times longer than Consumer grade drives (MTBF). Enterprise drives almost always support advanced error and failure detection, which a Google report found works about 40% of the time, but that's something anyone would prefer to a computer suddenly dying. When you have a single drive in your home computer, its statistical chances of failure are simply that of the drive. Drives used to be rated in MTBF (where SAS drives still enjoy ~50% higher ratings or more), now it's more common to see error rates. A typical SAS drive is 10 to 1,000 times less likely to have an unrecoverable error (with 100x the most common that I found recently). (error rates according to manufacturer documentation supplied by Seagate, Western Digital, and Hitachi; no bias intended; expressly disclaim indemnification). Error rates are particularly important not when you run across an unrecoverable error on a drive, but when another drive in the same array fails and you are not relying on all the drives in an array to be readable in order to recover the failed disk. SAS is a derivative of SCSI, which is a storage protocol. SATA is based on ATA, which is itself based on the ISA bus (that 8/16-bit bus in computers from the dinosaur age). The SCSI storage protocol has more extensive commands for optimizing the manner in which data is transferred from drives to controllers and back. This uptick in efficiency would make an otherwise equal SAS drive inherently faster, especially under extreme work loads, than a SATA drive; it also increases the cost. There are fewer SAS drives produced, economies of scale dictate that they will be more expensive all else being equal. SAS drives typically come in 10k or 15k rotational speeds; while SATA typically come in 5.4k or 7.2k. SAS drives, particularly the 2.5" size which are becoming increasingly popular, have faster seek times. The two combined dramatically increase the IOps a drive can perform, typically a SAS drive is ~3x faster. When multiple users are demanding disparate data, the IOps capacity of the drive/array becomes a critical performance indicator. The drives in a data center are typically powered up all the time. Studies have found that drive failure is influenced by the number of heating/cooling cycles it goes through (from running vs turned off). Keeping them running all the time typically increases the drive's life. 
The consequence of this is that the drives consume electricity. This electricity has to be supplied by something (in the case of a large DC the drives alone might take more power than a small neighborhood of houses). They also need to dissipate that heat somewhere, requiring cooling systems (which themselves take more power to operate). Infrastructure and staffing costs. Those drives are in high-end NAS or SAN units. Those units are expensive, even without the expensive drives in them. They require expensive staff to deploy and maintain them. The buildings that those NAS and SAN units are in are expensive to operate (see the point about cooling, above, but there's a lot more going on there.) The backup software is typically not free (nor are the licenses for things like mirroring), and the staff to deploy and maintain backups are usually pricey too. The cost of renting off-site tape delivery and storage is just one more of the many things that start to pile up when you need more storage. Keeping in mind that the capacity of their drives may well be 1/10th the size of a desktop drive, and five times the price, your 1MB of data is actually 10, and all the other differences, there's no way you can draw any meaningful conclusions between the price of your desktop storage and the price of enterprise level storage. | {
"source": [
"https://serverfault.com/questions/263694",
"https://serverfault.com",
"https://serverfault.com/users/7709/"
]
} |
263,847 | I'm looking to find out if a KB is installed via command line. | In addition to systeminfo there is also wmic qfe Example: wmic qfe get hotfixid | find "KB99999"
wmic qfe | find "KB99999" There is also update.exe Or from powershell, just adjust it for your needs: Get-WmiObject -query 'select * from win32_quickfixengineering' | foreach {$_.hotfixid} | {
"source": [
"https://serverfault.com/questions/263847",
"https://serverfault.com",
"https://serverfault.com/users/8199/"
]
} |
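On newer PowerShell versions the same WMI class is wrapped by the Get-HotFix cmdlet, which is a little shorter; the KB number below is just a placeholder:

    Get-HotFix -Id KB4012212
    Get-HotFix | Where-Object HotFixID -eq 'KB4012212'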
263,931 | From what I read and hear about datacenters, there are not too many server rooms which use water cooling, and none of the largerst datacenters use water cooling (correct me if I'm wrong). Also, it's relatively easy to buy an ordinary PC components using water cooling, while water cooled rack servers are nearly nonexistent. On the other hand, using water can possibly (IMO): Reduce the power consumption of large datacenters, especially if it is possible to create direct cooled facilities (i.e. the facility is located near a river or the sea). Reduce noise, making it less painful for humans to work in datacenters. Reduce space needed for the servers: On server level, I imagine that in both rack and blade servers, it's easier to pass the water cooling tubes than to waste space to allow the air to pass inside, On datacenter level, if it's still required to keep the alleys between servers for maintenance access to servers, the empty space under the floor and at the ceiling level used for the air can be removed. So why water cooling systems are not widespread, neither on datacenter level, nor on rack/blade servers level? Is it because: The water cooling is hardly redundant on server level? The direct cost of water cooled facility is too high compared to an ordinary datacenter? It is difficult to maintain such system (regularly cleaning the water cooling system which uses water from a river is of course much more complicated and expensive than just vacuum cleaning the fans)? | Water + Electricity = Disaster Water cooling allows for greater power density than air cooling; so figure out the cost savings of the extra density (likely none unless you're very space constrained). Then calculate the cost of the risk of a water disaster (say 1% * the cost of your facility). Then do a simple risk-reward comparison and see if it makes sense for your environment. | {
"source": [
"https://serverfault.com/questions/263931",
"https://serverfault.com",
"https://serverfault.com/users/39827/"
]
} |
264,595 | Currently I can only copy a single .tar file. But how can I copy directories recursively with scp ? | Yup, use -r : scp -rp sourcedirectory user@dest:/path -r means recursive -p preserves modification times, access times, and modes from the original file. Note: This creates the sourcedirectory inside /path thus the files will be in /path/sourcedirectory | {
"source": [
"https://serverfault.com/questions/264595",
"https://serverfault.com",
"https://serverfault.com/users/77159/"
]
} |
264,986 | In my FTP client I can see files' owner ID (99). How do I find out which user is the owner of these files? | Shorter getent version (as long as you don't need just the username) $ getent passwd 99
nobody:x:99:99:Nobody:/:/sbin/nologin Works on at least CentOS 5.6 - will take username or uid as key. | {
"source": [
"https://serverfault.com/questions/264986",
"https://serverfault.com",
"https://serverfault.com/users/67251/"
]
} |
265,155 | Can anyone explain in layman's terms what the difference between soft and hard limit is? Should I set my soft and hard limit to be the same? Or should soft be significantly lower? Does the system benefit either way? | The hard limit is the ceiling for the soft limit. The soft limit is what is actually enforced for a session or process. This allows the administrator (or user) to set the hard limit to the maximum usage they wish to allow. Other users and processes can then use the soft limit to self-limit their resource usage to even lower levels if they so desire. | {
"source": [
"https://serverfault.com/questions/265155",
"https://serverfault.com",
"https://serverfault.com/users/80026/"
]
} |
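A concrete illustration with the open-files limit: the soft value can be raised per-session up to the hard ceiling with ulimit, and persistent values live in /etc/security/limits.conf (the user name and numbers below are arbitrary examples):

    ulimit -Sn        # current soft limit for open files
    ulimit -Hn        # hard ceiling
    ulimit -n 4096    # raise the soft limit for this shell, up to the hard limit

    # /etc/security/limits.conf
    someuser  soft  nofile  4096
    someuser  hard  nofile  8192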
265,159 | I'm new in using cloud computing technologies. There is a popular cloud software called Eucalyptus. I found that the way to start a VM in Eucalyptus is very different from Virtualbox. In Virtualbox, we can create a virtual disk file, install any OS into the disk file, and then we can use the disk file as a virtual harddisk to start a VM. The whole process is straight forward and easy to understand. On the other hand, I found that the way to start a VM in Eucalyptus is quite complicated. First, we need to create raw disk file and install the guest OS. Then, we need to split the VM image into kernel image, ramdisk image, and disk image. We also need to perform the so called "bundling" process on those image files before we can use it. I don't understand why Eucalyptus make it so difficult to start a VM. Why can't it use the Virtualbox method which is much easier? May I know what is the purpose of splitting a VM into kernel image, ramdisk image, and disk image? If the VM is a Windows VM, then how are we going to split it? Why can't we use the raw disk file directly? What is the purpose of bundling an image? | The hard limit is the ceiling for the soft limit. The soft limit is what is actually enforced for a session or process. This allows the administrator (or user) to set the hard limit to the maximum usage they wish to allow. Other users and processes can then use the soft limit to self-limit their resource usage to even lower levels if they so desire. | {
"source": [
"https://serverfault.com/questions/265159",
"https://serverfault.com",
"https://serverfault.com/users/80030/"
]
} |
265,199 | I have a setup with Apache2 as a front-end server for multiple python apps served by gunicorn . My Apache2 setup using mod_proxy looks like this: <VirtualHost *:80>
ServerName example.com
UseCanonicalName On
ServerAdmin webmaster@localhost
LogLevel warn
CustomLog /var/log/apache2/example.com/access.log combined
ErrorLog /var/log/apache2/example.com/error.log
ServerSignature On
Alias /media/ /home/example/example.com/pysrc/project/media/
ProxyPass /media/ !
ProxyPass / http://127.0.0.1:4711/
ProxyPassReverse / http://127.0.0.1:4711/
ProxyPreserveHost On
ProxyErrorOverride Off
</VirtualHost> Generally, this setup works pretty well. I have one problem though: When I restart the gunicorn process (takes 2-5 seconds) and there is a request from Apache, that request will fail with a 503 error. So far so good. But Apache keeps returning 503 errors, even after the gunicorn process is back up. It resumes serving content from the proxied server only after a full restart of Apache. Is there a way around this behaviour? | Add retry=0 to your ProxyPass lines: ProxyPass / http://127.0.0.1:4711/ retry=0 From the mod_proxy documentation : Connection pool worker retry timeout in seconds. If the connection pool worker to the backend server is in the error state, Apache will not forward any requests to that server until the timeout expires. This enables to shut down the backend server for maintenance, and bring it back online later. A value of 0 means always retry workers in an error state with no timeout. | {
"source": [
"https://serverfault.com/questions/265199",
"https://serverfault.com",
"https://serverfault.com/users/16040/"
]
} |
265,410 | Possible Duplicate: updates in amazon-ec2 ubuntu 10.04 server When I log into an Ubuntu 10.04.2 LTS server, I see the message: 42 packages can be updated.
18 updates are security updates. But when I try to update this, nothing gets upgraded as would be expected: $ sudo apt-get update
....snip....
Reading package lists... Done
$ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
linux-generic-pae linux-headers-generic-pae linux-image-generic-pae
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded. Any ideas why either nothing was updated, or why the count of 3 (from apt-get) is different than 42? What software says 42 if not apt? (Other details: This is the server edition, no GUI; I haven't touched the apt configuration files; when I installed the software, I declined to allow automatic updates) | In order to install packages kept back you have to run: sudo apt-get update && sudo apt-get dist-upgrade Trying to run just sudo apt-get update && sudo apt-get upgrade wont install packages kept back because apt-get upgrade by default does not try to install new packages (such as new kernel versions); from the man page: under no circumstances are currently installed packages removed, or packages not already installed retrieved and installed. However apt-get dist-upgrade allows you to install new packages when needed (ie, a new kernel version); From the man page: dist-upgrade
dist-upgrade in addition to performing the function of upgrade, also intelligently handles changing dependencies with new
versions of packages; apt-get has a "smart" conflict resolution system, and it will attempt to upgrade the most important
packages at the expense of less important ones if necessary. So, dist-upgrade command may remove some packages. The
/etc/apt/sources.list file contains a list of locations from which to retrieve desired package files. See also
apt_preferences(5) for a mechanism for overriding the general settings for individual packages. | {
"source": [
"https://serverfault.com/questions/265410",
"https://serverfault.com",
"https://serverfault.com/users/73386/"
]
} |
265,675 | Is it possible and how can I zip a symlink from a linux shell? | You can store symlinks as symlinks (as opposed to a copy of the file/directory they point to) using the --symlinks parameter of the standard zip . Assuming foo is a directory containing symlinks: zip --symlinks -r foo.zip foo/ Rar equivalent: rar a -ol foo.rar foo/ tar stores them as is by default. tar czpvf foo.tgz foo/ Note that the symlink occupies almost no disk space by itself (just an inode). It's just a kind of pointer in the filesystem, as you probably know. | {
"source": [
"https://serverfault.com/questions/265675",
"https://serverfault.com",
"https://serverfault.com/users/26631/"
]
} |
266,263 | We have some databases with index fragmentation that is > 95%. As best I can tell the indexes have never been rebuilt much less reorganized. In years. (In fairness, these tables do seem to have auto-updated statistics enabled. Also in fairness, he is diligent about backups: full daily and trx logs hourly.) When I asked, the DBA said he was reluctant to rebuild or reorg the indexes. When I asked why, he couldn't really articulate it. Eventually he said he was concerned about potential data loss. For instance one of the databases is used by our Great Plains Dynamics accounting application, and he seemed very anxious about that. I am not a DBA but from what I've read, his anxiety seems ... difficult for me to understand. I am not sure what do to next. Suggestions how I should proceed? | Rebuilding a database index should not cause any data loss. It will however probably cause a substantial performance degradation as the indexes being rebuilt will normally not be available for use until the rebuild finishes. For that reason it should be done during off-hours when the affected systems are idle. Paranoia is a Good Thing in a DBA - If they're worried about data loss I would have them do a proper test of the backups (restore them to a separate system and make sure the data is all there), and if they're still concerned then performing a full backup before rebuilding the indexes would be a reasonable precaution to take. | {
"source": [
"https://serverfault.com/questions/266263",
"https://serverfault.com",
"https://serverfault.com/users/64837/"
]
} |
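For reference, REORGANIZE is an online operation that can be interrupted safely, while REBUILD is heavier and takes the index offline unless the Enterprise-only ONLINE option is used; a sketch run from sqlcmd during a quiet window, where the server, database and table names are placeholders:

    sqlcmd -S myserver -d SomeDB -Q "ALTER INDEX ALL ON dbo.SomeTable REORGANIZE;"
    sqlcmd -S myserver -d SomeDB -Q "ALTER INDEX ALL ON dbo.SomeTable REBUILD;"
    sqlcmd -S myserver -d SomeDB -Q "UPDATE STATISTICS dbo.SomeTable;"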
267,154 | When following the instructions to do rsync backups given here: http://troy.jdmz.net/rsync/index.html I get the error "protocol version mismatch -- is your shell clean?" I read somewhere that I needed to silence the prompt (PS1="") and motd (.hushlogin) displays to deal with this. I have done this, the prompt and login banner (MOTD) not longer appear, but the error still appears when I run: rsync -avvvz -e "ssh -i /home/thisuser/cron/thishost-rsync-key" remoteuser@remotehost:/remote/dir /this/dir/ Both ssh client and sshd server are using version 2 of the protocol. What could be the problem? [EDIT]
I have found http://www.eng.cam.ac.uk/help/jpmg/ssh/authorized_keys_howto.html which directs that it is sometimes necessary to "Force v2 by using the -2 flag to ssh or slogin ssh -2 -i ~/.ssh/my_private_key remotemachine" It is not clear this solved the problem as I think I put this change in AFTER the error changed but the fact is the error has evolved to something else. I'll update this when I learn more. And I will certainly try the suggestion to run this in an emacs shell - | One of your login scripts (.bashrc/.cshrc/etc.) is probably outputting data to the terminal (when it shouldn't be). This is causing ssh to error when it is connecting and getting ready to copy as it starts receiving extra data it doesn't expect. Remove output that is generated in the startup scripts. You can check if your terminal is interactive and only output text by using the following code in a bashrc. Something equivalent exists for other shells as well: if shopt -q login_shell; then
[any code that outputs text here]
fi or alternatively, like this, since the special parameter - contains i when the shell is interactive: if echo "$-" | grep i > /dev/null; then
[any code that outputs text here]
fi For more information see: rsync via ssh from linux to windows sbs 2003 protocol mismatch To diagnose this, make sure that the following is the output you get when you ssh in to the host: USER@HOSTNAME's password:
Last login: Mon Nov 7 22:54:30 2011 from YOURIP
[USER@HOSTNAME ~]$ If you get any newlines or other data you know that extra output is being sent. You could rename your .bashrc/.cshrc/.profile/etc. files to something else so that they won't output extra output. Of course there is still system files that could cause this. In that case, check with your sysadmin that the system files don't output data. | {
"source": [
"https://serverfault.com/questions/267154",
"https://serverfault.com",
"https://serverfault.com/users/63805/"
]
} |
267,157 | This might be a bit hard for me to explain - and it is a pretty individual situation. I got a native server at Hetzner (www.hetzner.de). The public IP is 88.[...].12. I got ESXi running on this server. I can access the esxi console by the public ip, but none of the virtual machines. That's why I bought a public subnet with 8 (6 usable) IPs (46.[...]) and an additional public ip (88.[...].26). This additional public ip belongs to the first virtual maschine - a firewall appliance - which is connected to the WAN. This need to be done this way - since it is the official way by hetzner. My 46. subnet is behind the firewall. I got a virtualmin server with dovecot imap/pop3 server. When sending a email, most provider (gmail) will accept those mails, but a lot will put it into spam (aol). My theory is: The MX line of my domain says of course the ip of the virtual machine (46.[...]), but in the raw email it says that email is sent by the ip of the firewall (88.[...].26), which doesnt sound trustworthy. A solution would be if the firewall could handle mail, but it simply cant. How can I prevent this problem? Thanks. | One of your login scripts (.bashrc/.cshrc/etc.) is probably outputting data to the terminal (when it shouldn't be). This is causing ssh to error when it is connecting and getting ready to copy as it starts receiving extra data it doesn't expect. Remove output that is generated in the startup scripts. You can check if your terminal is interactive and only output text by using the following code in a bashrc. Something equivalent exists for other shells as well: if shopt -q login_shell; then
[any code that outputs text here]
fi or alternatively, like this, since the special parameter - contains i when the shell is interactive: if echo "$-" | grep i > /dev/null; then
[any code that outputs text here]
fi For more information see: rsync via ssh from linux to windows sbs 2003 protocol mismatch To diagnose this, make sure that the following is the output you get when you ssh in to the host: USER@HOSTNAME's password:
Last login: Mon Nov 7 22:54:30 2011 from YOURIP
[USER@HOSTNAME ~]$ If you get any newlines or other data you know that extra output is being sent. You could rename your .bashrc/.cshrc/.profile/etc. files to something else so that they won't output extra output. Of course there is still system files that could cause this. In that case, check with your sysadmin that the system files don't output data. | {
"source": [
"https://serverfault.com/questions/267157",
"https://serverfault.com",
"https://serverfault.com/users/80658/"
]
} |
267,255 | I need to assemble a lot of images into one directory. Many of those images have the same file names. Is there some safe version of mv that will automatically rename files if the target filename already exists so that pic1.jpeg becomes something like pic1_2.jpeg ? I could write my own python script but there has to be something like this out there so I can do: find . -type f -name *.jpg -exec mvsafe '{}' /targetpath/ \; | mv already supports this out of the box (at least in Debian): mv --backup=t <source_file> <dest_file> As seen in mv(1) manpage: --backup[=CONTROL]
make a backup of each existing destination file
The backup suffix is `~', unless set with --suffix or SIM‐
PLE_BACKUP_SUFFIX. The version control method may be selected via the
--backup option or through the VERSION_CONTROL environment variable. To make --backup=t mean "make numbered backups", invoke as follows: env VERSION_CONTROL=numbered mv --backup=t <source_file> <dest_file> (dest_file can of course be a directory). Edit: in later versions (at least GNU coreutils 8.22 but prolly already much earlier) you can simply write mv --backup=numbered <source_file> <dest_file> | {
"source": [
"https://serverfault.com/questions/267255",
"https://serverfault.com",
"https://serverfault.com/users/65717/"
]
} |
267,260 | I've setup svnserve with SASL authentication and encryption for encrypting the traffic. Anonymous access should be allowed. My configuration file conf/svnserve.conf (with comments stripped) looks like this: [general]
anon-access = read
auth-access = write
realm = realm-of-repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256 The related sasl configuration file: pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /path/to/sasldb
mech_list: DIGEST-MD5 When supplying an username and password, everything works as expected: I can checkout and make changes. However, anonymous access (no username or password) fails with the next error message: svn: SASL(-1): generic failure: All-whitespace username. How do I enable anonymous SVN read access using svnserve and SASL ? I'm not looking for a solution with Apache. | mv already supports this out of the box (at least in Debian): mv --backup=t <source_file> <dest_file> As seen in mv(1) manpage: --backup[=CONTROL]
make a backup of each existing destination file
The backup suffix is `~', unless set with --suffix or SIM‐
PLE_BACKUP_SUFFIX. The version control method may be selected via the
--backup option or through the VERSION_CONTROL environment variable. To make --backup=t mean "make numbered backups", invoke as follows: env VERSION_CONTROL=numbered mv --backup=t <source_file> <dest_file> (dest_file can of course be a directory). Edit: in later versions (at least GNU coreutils 8.22 but prolly already much earlier) you can simply write mv --backup=numbered <source_file> <dest_file> | {
"source": [
"https://serverfault.com/questions/267260",
"https://serverfault.com",
"https://serverfault.com/users/51929/"
]
} |
267,941 | I've been running a few servers for a relatively long time, but I always only rented them, so I don't have much experience securing the actual server (as opposed to the application I'm running on it). Now I feel like opening up my small home SSH server to the internet. I will be the only one using it, my passwords are complicated enough, I've changed the default SSH port to something around 4000, the only accessible port is this SSH port through port-forwarding on my router/firewall, and it gets updated automatically every night (it runs Arch Linux, a rolling release distribution). What are, if any, other things I should do to keep it secure? Thanks! | Make sure root logins are disabled PermitRootLogin no . I would also consider disabling passwords altogether PasswordAuthentication no and use public keys PubkeyAuthentication yes . | {
"source": [
"https://serverfault.com/questions/267941",
"https://serverfault.com",
"https://serverfault.com/users/55340/"
]
} |
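As a rough sketch of applying the advice above, the relevant sshd_config directives and a safe reload might look like this; the config path, service name and grep pattern are assumptions that vary by distribution:
# check the currently effective values (requires root and a reasonably recent OpenSSH)
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication|pubkeyauthentication'
# then, in /etc/ssh/sshd_config, set:
#   PermitRootLogin no
#   PasswordAuthentication no
#   PubkeyAuthentication yes
# validate the file before reloading so a typo cannot lock you out
sudo sshd -t && sudo systemctl reload sshd
Keep an existing session open while testing key-only logins from a second terminal.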
268,099 | I would like to run applications I'm working on that binds to port numbers less than 1000 without requiring root access. I'm using Linux Mint and have root access to set it up. I would ideally like to be able to do it over SSH. Also happy hear if it isn't possible or I shouldn't be doing it, if that is the case. EDIT: Mostly I'm happy to use higher port numbers for development, but Flash is expecting a socket policy on port 843. Currently I have to run the app as root and therefore I can't run it from my Makefile which is a PITA. | Another way of getting your daemon to respond to requests from a lower port number is to use iptables or similar to redirect a lower numbered port to the higher numbered port that your daemon is listening on: sudo iptables -A PREROUTING -t nat -p tcp --dport 80 -j REDIRECT --to-port 8080 Substitute 80 with the port to expose, and 8080 with your application listener port. | {
"source": [
"https://serverfault.com/questions/268099",
"https://serverfault.com",
"https://serverfault.com/users/19720/"
]
} |
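A slightly expanded sketch of the iptables redirect above; it assumes the daemon listens on 8080 and that testing will also happen from the box itself:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
# PREROUTING does not see locally generated traffic, so add an OUTPUT rule for local tests
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
# review what is in the NAT table
sudo iptables -t nat -L -n --line-numbers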
268,288 | When running any sort of long-running command in the terminal, the program instantly dies and the terminal outputs the text Killed . Any pointers? Maybe there is a log file with data explaining why the commands are being killed? Update Here is a snippet from dmesg that hopefully should illuminate what's causing the issue. Another note that might be helpful is that this is an Amazon EC2 instance. May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184209] Call Trace:
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184218] [<c01e49ea>] dump_header+0x7a/0xb0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184221] [<c01e4a7c>] oom_kill_process+0x5c/0x160
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184224] [<c01e4fe9>] ? select_bad_process+0xa9/0xe0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184227] [<c01e5071>] __out_of_memory+0x51/0xb0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184229] [<c01e5128>] out_of_memory+0x58/0xd0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184232] [<c01e7f16>] __alloc_pages_slowpath+0x416/0x4b0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184235] [<c01e811f>] __alloc_pages_nodemask+0x16f/0x1c0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184238] [<c01ea2ca>] __do_page_cache_readahead+0xea/0x210
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184241] [<c01ea416>] ra_submit+0x26/0x30
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184244] [<c01e3aef>] filemap_fault+0x3cf/0x400
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184247] [<c02329ad>] ? core_sys_select+0x19d/0x240
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184252] [<c01fb65c>] __do_fault+0x4c/0x5e0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184254] [<c01e4161>] ? generic_file_aio_write+0xa1/0xc0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184257] [<c01fd60b>] handle_mm_fault+0x19b/0x510
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184262] [<c05f80d6>] do_page_fault+0x146/0x440
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184265] [<c0232c62>] ? sys_select+0x42/0xc0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184268] [<c05f7f90>] ? do_page_fault+0x0/0x440
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184270] [<c05f53c7>] error_code+0x73/0x78
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.184274] [<c05f007b>] ? setup_local_APIC+0xce/0x33e
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272161] [<c05f0000>] ? setup_local_APIC+0x53/0x33e
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272163] Mem-Info:
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272164] DMA per-cpu:
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272166] CPU 0: hi: 0, btch: 1 usd: 0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272168] Normal per-cpu:
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272169] CPU 0: hi: 186, btch: 31 usd: 50
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272171] HighMem per-cpu:
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272172] CPU 0: hi: 186, btch: 31 usd: 30
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272176] active_anon:204223 inactive_anon:204177 isolated_anon:0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272177] active_file:47 inactive_file:141 isolated_file:0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272178] unevictable:0 dirty:0 writeback:0 unstable:0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272179] free:10375 slab_reclaimable:1650 slab_unreclaimable:1856
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272180] mapped:2127 shmem:3918 pagetables:1812 bounce:0May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272186] DMA free:6744kB min:72kB low:88kB high:108kB active_anon:300kB inactive_anon:308kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15812kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272190] lowmem_reserve[]: 0 702 1670 1670May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272197] Normal free:34256kB min:3352kB low:4188kB high:5028kB active_anon:317736kB inactive_anon:317308kB active_file:144kB inactive_file:16kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:719320kB mlocked:0kB dirty:4kB writeback:0kB mapped:32kB shmem:0kB slab_reclaimable:6592kB slab_unreclaimable:7424kB kernel_stack:2592kB pagetables:7248kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:571 all_unreclaimable? yes
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272201] lowmem_reserve[]: 0 0 7747 7747May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272207] HighMem free:500kB min:512kB low:1668kB high:2824kB active_anon:498856kB inactive_anon:499092kB active_file:44kB inactive_file:548kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:991620kB mlocked:0kB dirty:0kB writeback:0kB mapped:8472kB shmem:15672kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:430 all_unreclaimable? yes
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272211] lowmem_reserve[]: 0 0 0 0May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272215] DMA: 10*4kB 22*8kB 38*16kB 33*32kB 16*64kB 10*128kB 4*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 6744kBMay 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272223] Normal: 476*4kB 1396*8kB 676*16kB 206*32kB 23*64kB 2*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 34256kBMay 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272231] HighMem: 1*4kB 2*8kB 28*16kB 1*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272238] 4108 total pagecache pages
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272240] 0 pages in swap cache
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272242] Swap cache stats: add 0, delete 0, find 0/0
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272243] Free swap = 0kB
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.272244] Total swap = 0kB
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.276842] 435199 pages RAM
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.276845] 249858 pages HighMem
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.276846] 8771 pages reserved
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.276847] 23955 pages shared
May 14 20:29:15 ip-10-112-33-63 kernel: [11144050.276849] 405696 pages non-shared | You should be able to find out what killed your process by looking at the output of the dmesg command; or at the logfiles /var/log/kern.log , /var/log/messages , or /var/log/syslog . There are a number of things that can cause a process to be summarily killed: If it exceeds the hard ulimit for various memory or cpu usage types that you can examine using ulimit -H -a If the system is low on virtual memory, processes can get killed by the kernel oom-killer to free up memory (In your case, it's probably not this) If the system has SELinux, and/or PaX/grsecurity installed, a process could get killed if it tries to do something that's not allowed by security policy, or if it tries to execute self-modified code. The logs or dmesg should tell you why the process was killed. | {
"source": [
"https://serverfault.com/questions/268288",
"https://serverfault.com",
"https://serverfault.com/users/27929/"
]
} |
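A few commands for confirming the oom-killer diagnosis above on a box like this one; the log file paths are assumptions that vary by distribution:
dmesg | grep -iE 'killed process|out of memory'
grep -i oom /var/log/kern.log /var/log/syslog 2>/dev/null
ulimit -H -a    # hard limits for the current shell
free -m         # the trace above shows "Total swap = 0kB", so there is no swap to fall back on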
268,368 | One of my common practices is to perform greps on all files of a certain type, e.g., find all the HTML files that have the word "rumpus" in them. To do it, I use find /path/to -name "*.html" | xargs grep -l "rumpus" Occasionally, find will return a file with a space in its name such as my new file.html . When xargs passed this to grep , however, I get these errors: grep: /path/to/bad/file/my: No such file or directory
grep: new: No such file or directory
grep: file.html: No such file or directory I can see what's going on here: either the pipe or the xargs is treating the spaces as delimiters between files. For the life of me, though, I can't figure out how to prevent this behavior. Can it be done with find + xargs ? Or do I have to use an entirely different command? | Use find ... -print0 | xargs -0 ... e.g. find /path/to -name "*.html" -print0 | xargs -0 grep -l "rumpus" from the find man page -print0
True; print the full file name on the standard output, followed
by a null character (instead of the newline character that
‘-print’ uses). This allows file names that contain newlines or
other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to
the ‘-0’ option of xargs. | {
"source": [
"https://serverfault.com/questions/268368",
"https://serverfault.com",
"https://serverfault.com/users/32243/"
]
} |
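A throwaway demo of the -print0 / -0 pairing described above, with made-up file names:
mkdir -p /tmp/print0-demo && cd /tmp/print0-demo
printf 'rumpus\n' > 'my new file.html'
printf 'nothing\n' > plain.html
find . -name '*.html' -print0 | xargs -0 grep -l rumpus
# without -print0/-0, "my new file.html" would be split into three bogus arguments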
268,369 | Few days ago I noticed something rather odd (at least for me). I ran rsync copying the same data and deleting it afterwards to NFS mount, called /nfs_mount/TEST . This /nfs_mount/TEST is hosted/exported from nfs_server-eth1 . The MTU on both network interfaces is 9000, the switch in between supports jumbo frames as well. If I do rsync -av dir /nfs_mount/TEST/ I get network transfer speed X MBps. If I do rsync -av dir nfs_server-eth1:/nfs_mount/TEST/ I get network transfer speed at least 2X MBps. My NFS mount options are nfs rw,nodev,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountvers=3,mountproto=tcp . Bottom line: both transfers go over the same network subnet, same wires, same interfaces, read the same data, write to the same directory, etc. Only difference one is via NFSv3, the other one over rsync. The client is Ubuntu 10.04, the server Ubuntu 9.10. How come rsync is that much faster? How to make NFS match that speed? Thanks Edit: please note I use rsync to write to NFS share or to SSH into the NFS server and write locally there. Both times I do rsync -av , starting with clear destination directory. Tomorrow I will try with plain copy. Edit2 (additional info): File size ranges from 1KB-15MB. The files are already compressed, I tried to compress them further with no success. I made tar.gz file from that dir . Here is the pattern: rsync -av dir /nfs_mount/TEST/ = slowest transfer; rsync -av dir nfs_server-eth1:/nfs_mount/TEST/ = fastest rsync with jumbo frames enabled; without jumbo frames is a bit slower, but still significantly faster than the one directly to NFS; rsync -av dir.tar.gz nfs_server-eth1:/nfs_mount/TEST/ = about the same as its non-tar.gz equivalent; Tests with cp and scp : cp -r dir /nfs_mount/TEST/ = slightly faster than rsync -av dir /nfs_mount/TEST/ but still significantly slower than rsync -av dir nfs_server-eth1:/nfs_mount/TEST/ . scp -r dir /nfs_mount/TEST/ = fastest overall, slightly overcomes rsync -av dir nfs_server-eth1:/nfs_mount/TEST/ ; scp -r dir.tar.gz /nfs_mount/TEST/ = about the same as its non-tar.gz equivalent; Conclusion, based on this results:
For this test there is not significant difference if using tar.gz large file or many small ones. Jumbo frames on or off also makes almost no difference. cp and scp are faster than their respective rsync -av equivalents. Writing directly to exported NFS share is significantly slower (at least 2 times) than writing to the same directory over SSH, regardless of the method used. Differences between cp and rsync are not relevant in this case. I decided to try cp and scp just to see if they show the same pattern and they do - 2X difference. As I use rsync or cp in both cases, I can't understand what prevents NFS to reach the transfer speed of the same commands over SSH. How come writing to NFS share is 2X slower than writing to the same place over SSH? Edit3 (NFS server /etc/exports options): rw,no_root_squash,no_subtree_check,sync . The client's /proc/mounts shows: nfs rw,nodev,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountvers=3,mountproto=tcp . Thank you all! | Maybe it's not slower transfer speed, but increased write latency. Try mounting the NFS share async instead of sync and see if that closes the speed gap. When you rsync over ssh, the remote rsync process writes asynchronously (quickly). But when writing to the synchronously mounted nfs share, the writes aren't confirmed immediately: the NFS server waits until they've hit disk (or more likely the controller cache) before sending confirmation to the NFS client that the write was successful. If 'async' fixes your problem, be aware that if something happens to the NFS server mid-write you very well might end up with inconsistent data on disk. As long as this NFS mount isn't the primary storage for this (or any other) data, you'll probably be fine. Of course you'd be in the same boat if you pulled the plug on the nfs server during/after rsync-over-ssh ran (e.g. rsync returns having 'finished', nfs server crashes, uncommitted data in the write cache is now lost leaving inconsistent data on disk). Although not an issue with your test (rsyncing new data), do be aware that rsync over ssh can make significant CPU and IO demands on remote server before a single byte is transfered while it calculating checksums and generating the list of files that need to be updated. | {
"source": [
"https://serverfault.com/questions/268369",
"https://serverfault.com",
"https://serverfault.com/users/69956/"
]
} |
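To try the async suggestion above without a reboot, a remount is usually enough; the server, export and mount point below are placeholders, and async trades write safety for latency:
sudo mount -o remount,async nfs_server-eth1:/nfs_mount /nfs_mount
grep /nfs_mount /proc/mounts    # confirm the active mount options
# the exports line in the question uses "sync"; switching it to "async" in /etc/exports
# on the server and running "exportfs -ra" is the other, riskier, knob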
268,479 | I never know when's the best time to use nohup and the & at the end of the command. | Nohup makes a program ignore the HUP signal, allowing it to keep running after the current terminal is closed or the user logs out. Nohup does not send the program to the background. & at the end of a command is related to shell job control, allowing the user to continue working in the current shell session. Usually nohup and & are combined to launch a program that keeps running after the user logs out, while still letting the user continue working in the current shell session. | {
"source": [
"https://serverfault.com/questions/268479",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
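A short sketch combining the two, with the script name as a placeholder:
nohup ./long_job.sh > long_job.log 2>&1 &
echo "started as PID $!"
disown    # optional: also drop the job from the shell's job table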
268,530 | Is there some magical shell piping that would let me easily grep through a bunch of .gz log files without needing to extract them somewhere? The .gz files are Apache logs, the result of log rotation. I'd like to quickly check how often certain URIs were accessed in the past. | The zgrep program is available for Linux (and perhaps some Unixes too). It will decompress the files and then grep through them. | {
"source": [
"https://serverfault.com/questions/268530",
"https://serverfault.com",
"https://serverfault.com/users/74975/"
]
} |
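For the original use case (counting hits for a URI across rotated Apache logs), a sketch with placeholder paths and URI:
zgrep -c '/some/uri' /var/log/apache2/access.log.*.gz       # per-file counts
zgrep -h '/some/uri' /var/log/apache2/access.log* | wc -l   # one total across current and rotated logs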
268,542 | My hosting company says IPTables is useless and doesn't provide any protection . Is this a lie? TL;DR I have two, co-located servers. Yesterday my DC company contacted me to tell me that because I'm using a software firewall my server is "Vulnerable to multiple, critical security threats" and my current solution offers "No protection from any form of attack". They say I need to get a dedicated Cisco firewall ($1000 installation then $200/month each ) to protect my servers. I was always under the impression that, while hardware firewalls are more secure, something like IPTables on RedHat offered enough protection for your average server. Both servers are just web-servers, there's nothing critically important on them but I've used IPTables to lock down SSH to just my static IP address and block everything except the basic ports (HTTP(S), FTP and a few other standard services). I'm not going to get the firewall, if ether of the servers were hacked it would be an inconvenience but all they run is a few WordPress and Joomla sites so I definitely don't think it's worth the money. | Hardware firewalls are running software too, the only real difference is that the device is purpose built and dedicated to the task. Software firewalls on servers can be just as secure as hardware firewalls when properly configured (note that hardware firewalls are generally 'easier' to get to that level, and software firewalls are 'easier' to screw up). If you're running outdated software, there's likely a known vulnerability. While your server might be susceptible to this attack vector, stating that it is unprotected is inflammatory, misleading, or a boldface lie (depends on what exactly they said and how they meant it). You should update the software and patch any known vulnerabilities regardless of the probability of exploitation. Stating that IPTables is ineffective is misleading at best . Though again, if the one rule is allow everything from all to all then yeah, it wouldn't be doing anything at all. Side Note : all my personal servers are FreeBSD powered and use only IPFW (built-in software firewall). I have never had a problem with this setup; I also follow the security announcements and have never seen any issues with this firewall software. At work we have security in layers; the edge firewall filters out all the obvious crap (hardware firewall); internal firewalls filter traffic down for the individual servers or location on the network (mix of mostly software and hardware firewalls). For complex networks of any kind, security in layers is most appropriate. For simple servers like yours there may be some benefit in having a separate hardware firewall, but fairly little. | {
"source": [
"https://serverfault.com/questions/268542",
"https://serverfault.com",
"https://serverfault.com/users/80776/"
]
} |
268,569 | I ran the ping <hostname> command in a console and now it outputs hundreds of rows (icmp_seq=526 ttl=64 time=0.026 ms); icmp_seq is at 500 or more now. How do I stop it? (Linux Debian)
Should i just close console? Never mind, it stopped on 532. Hahah. | Press Ctrl + C or Ctrl + | . | {
"source": [
"https://serverfault.com/questions/268569",
"https://serverfault.com",
"https://serverfault.com/users/47394/"
]
} |
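With iputils ping on Linux the run can also be bounded up front, so there is nothing to interrupt:
ping -c 4 hostname           # send exactly four echo requests, then print the summary
ping -c 100 -w 10 hostname   # or give up after 10 seconds, whichever comes first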
268,571 | Is it ok to have http and https requests point to the same directory such as var/www/ ? It seems like it would be alright since when you're say authenticating a user you can just make sure to call https instead of http. However I can see how a malicious user could use javascript to change the https url to a http url. If it's best to split them between two directories any recommendations on how to do this with a framework since you would have to duplicate a lot of code between the directories? | Press Ctrl + C or Ctrl + | . | {
"source": [
"https://serverfault.com/questions/268571",
"https://serverfault.com",
"https://serverfault.com/users/14743/"
]
} |
268,719 | I have been poking around Amazon EC2, and am a little confused on some of the terminology. Specifically with regard to AMI, snapshots and volumes, and an EBS Please correct me if I am wrong, or fill in any serious gaps in my following statements: An AMI (Amazon Machine Image) is a full 'disk' capture of an operating system and configuration. When you launch an instance, you launch it from an AMI An EBS (Elastic Block Storage) is a way to persist the state of any modifications you made once booting from a given AMI. In my mind, this is sort of like a diff on the final state of your instance vs the AMI. A snapshot is ... well, I'm not sure. I can only assume it is a snapshot of a specific instance, but it is not clear to me how this differs from the state stored in an EBS. How is a snapshot different from creating an EBS AMI from an existing instance? A volume is ... it would seem mounted disk space into which an AMI/EBS pair is loaded? I'm not sure on this one either. I can see (from the AWS Console) that you can create a volume from a snapshot, and that you can attach/detach volumes, but it isn't clear to me why or when you would do that. | An AMI, as you note, is a machine image. It's a total snapshot of a system stored as an image that can be launched as an instance. We'll get back to AMIs in a second. Lets look at EBS. Your other two items are sub-items of this. EBS is a virtual block device. You can think of it as a hard drive, although it's really a bunch of software magic to link into another kind of storage device but make it look like a hard drive to an instance. EBS is just the name for the whole service. Inside of EBS you have what are called volumes. These are the "unit" amazon is selling you. You create a volume and they allocate you X number of gigabytes and you use it like a hard drive that you can plug into any of your running computers (instances). Volumes can either be created blank or from a snapshot copy of previous volume, which brings us to the next topic. Snapshots are ... well ... snapshots of volumes: an exact capture of what a volume looked like at a particular moment in time, including all its data. You could have a volume, attach it to your instance, fill it up with stuff, then snapshot it, but keep using it. The volume contents would keep changing as you used it as a file system but the snapshot would be frozen in time. You could create a new volume using this snapshot as a base. The new volume would look exactly like your first disk did when you took the snapshot. You could start using the new volume in place of the old one to roll-back your data, or maybe attach the same data set to a second machine. You can keep taking snapshots of volumes at any point in time. It's like a freeze-frame instance backup that can then easy be made into a new live disk (volume) whenever you need it. So volumes can be based on new blank space or on a snapshot. Got that? Volumes can be attached and detached from any instances, but only connected to one instance at a time, just like the physical disk that they are a virtual abstraction of. Now back to AMIs. These are tricky because there are two types. One creates an ephemeral instances where the root files system looks like a drive to the computer but actually sits in memory somewhere and vaporizes the minute it stops being used. The other kind is called an EBS backed instance. 
This means that when your instance loads up, it loads its root file system onto a new EBS volume, basically layering the EC2 virtual machine technology on top of their EBS technology. A regular EBS volume is something that sits next to EC2 and can be attached, but an EBS backed instance also IS a volume itself. A regular AMI is just a big chunk of data that gets loaded up as a machine. An EBS backed AMI will get loaded up onto an EBS volume, so you can shut it down and it will start back up from where you left off just like a real disk would. Now put it all together. If an instance is EBS backed, you can also snapshot it. Basically this does exactly what a regular snapshot would ... a freeze frame of the root disk of your computer at a moment in time. In practice, it does two things differently. One is it shuts down your instance so that you get a copy of the disk as it would look to an OFF computer, not an ON one. This makes it easier to boot up :) So when you snapshot an instance, it shuts it down, takes the disk picture, then starts up again. Secondly, it saves that image as an AMI instead of as a regular disk snapshot. Basically it's a bootable snapshot of a volume. | {
"source": [
"https://serverfault.com/questions/268719",
"https://serverfault.com",
"https://serverfault.com/users/18496/"
]
} |
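The same snapshot-and-volume workflow can be scripted with today's AWS CLI, which postdates this answer; the IDs, zone and device name below are placeholders:
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "before upgrade"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf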
268,757 | How can I grep the ps output with the headers in place? These two processes make up an app running on my server.... root 17123 16727 0 16:25 pts/6 00:00:00 grep GMC
root 32017 1 83 May03 ? 6-22:01:17 /scripts/GMC/PNetT-5.1-SP1/PNetTNetServer.bin -tempdir /usr/local/GMC/PNetT-5.1-SP1/tmpData -D Does 6-22:01:17 mean that it's been running for 6 days? I'm trying to determine how long the process has been running... Is the 2nd column the process ID? So if I do kill 32017 will it kill the 2nd process?
UID PID PPID C STIME TTY TIME CMD
paremh1 12501 12466 0 18:31 pts/1 00:00:00 egrep disk|PID
root 14936 1 0 Apr26 ? 00:02:11 /usr/lib/udisks/udisks-daemon
root 14937 14936 0 Apr26 ? 00:00:03 udisks-daemon: not polling any devices ps -e selects all processes, and ps -f is full-format listing which shows the column headers. | {
"source": [
"https://serverfault.com/questions/268757",
"https://serverfault.com",
"https://serverfault.com/users/37222/"
]
} |
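To answer the follow-up questions directly: yes, the second column of ps -ef is the PID, and 6-22:01:17 reads as 6 days, 22 hours, 1 minute, 17 seconds of elapsed time. A sketch for checking and then stopping that PID:
ps -o pid,ppid,etime,cmd -p 32017   # etime shows elapsed time for just that PID
kill 32017                          # polite SIGTERM first
# kill -9 32017                     # only if the process ignores SIGTERM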
268,923 | Is there a way to determine the 'uptime' of a process in Windows. Disappointed to find that it is not one of the attributes available when using the Task Manager. | You can see this with Process Explorer . In the taskbar menu select View and check Show Process Tree and the Show Lower Pane options. Right click on any column and Select Columns , now click on the Process Performance tab and check the Start Time box. Community Update: As mentioned in the comments, in more recent versions of the tool (currently as of 2019), the information has been relocated into the image tab of the property sheets regarding each process-tree item (Just double-click the process name, no other steps are required). | {
"source": [
"https://serverfault.com/questions/268923",
"https://serverfault.com",
"https://serverfault.com/users/81231/"
]
} |
268,933 | Let's say you're hosting redis on a small server with little RAM. What happens if there's too much data, and all the RAM gets used up? Does redis die? Or does it continue operating? | You can see this with Process Explorer . In the taskbar menu select View and check Show Process Tree and the Show Lower Pane options. Right click on any column and Select Columns , now click on the Process Performance tab and check the Start Time box. Community Update: As mentioned in the comments, in more recent versions of the tool (currently as of 2019), the information has been relocated into the image tab of the property sheets regarding each process-tree item (Just double-click the process name, no other steps are required). | {
"source": [
"https://serverfault.com/questions/268933",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
268,935 | I am working in SQL Server 2008. The issue is that when I restore a database, its database users become orphaned and I cannot map them to the server login; to map a user again I first have to drop it from the database and then re-map it. For example: Database name: Clv
Database user: Clv
Server login: Clv (this login is mapped to the Clv user in the Clv database). I then took a full backup of the Clv database. Now I want to move the database to a new server, so I restore the backup on the new server.
On the new server I create a Clv login and try to map it to the Clv database's Clv user, but I get an error that a Clv user already exists, so I delete the database user and then map it again. That works, but I want to know whether there is any other way to handle this situation or whether this is the only way out.
Now when i restore the database the The Clv database User no longer mapped with the Clv Login. For to doing That i have to remove Clv database user first and the Map with Clv Login. Is then any other solution?? | You can see this with Process Explorer . In the taskbar menu select View and check Show Process Tree and the Show Lower Pane options. Right click on any column and Select Columns , now click on the Process Performance tab and check the Start Time box. Community Update: As mentioned in the comments, in more recent versions of the tool (currently as of 2019), the information has been relocated into the image tab of the property sheets regarding each process-tree item (Just double-click the process name, no other steps are required). | {
"source": [
"https://serverfault.com/questions/268935",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
268,938 | By default, when I access some computer's share (typing \\hostname in Windows Explorer), Windows passes the credentials of my current user. It prompts for credentials only when the current user's credentials are incorrect. Is there some way to force Windows not to pass the current user's credentials, but to prompt for them? I thought about making use of the net view command, but it doesn't take 'user' and 'password' parameters. | If you type the command net use \\SERVERNAME /u:DOMAIN\USER you will be prompted for the password of that user to be used when accessing that server. | {
"source": [
"https://serverfault.com/questions/268938",
"https://serverfault.com",
"https://serverfault.com/users/80072/"
]
} |
269,289 | I want to know how many people are connected to my server.
Since I'm doing comet applications, this is important. | There are about a zillion ways to do this, but: netstat | grep http | wc -l Keep in mind that HTTP is a stateless protocol. Each line can represent one client opening multiple sockets to grab different files (css, images, etc.) that will hang out for a while in a TIME_WAIT state. | {
"source": [
"https://serverfault.com/questions/269289",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
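Two variants of the count above that look only at established connections; the port is an assumption, adjust as needed:
netstat -tn 2>/dev/null | grep ':80 ' | grep -c ESTABLISHED
ss -tn state established '( sport = :80 )' | tail -n +2 | wc -l   # ss is the newer replacement for netstat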
269,420 | We are using Nginx to serve static files on a development platform. As it is a development platform, we'd like to disable caching so that each change is propagated to the server. The configuration of the VHost is quite simple: server {
server_name static.server.local;
root /var/www/static;
## Default location
location / {
access_log off;
expires 0;
add_header Cache-Control private;
}
} When we access an HTML file ( http://static.server.local/test.html ), we have no issue: the server returns a code 304 Not Modified as long as the file is not changed, and a 200 OK response with the modified file when the file is changed. However, it seems to behave differently with a Javascript or a CSS file. Once the file is changed, we get a 200 OK response as expected, but with the old text. Is there an internal cache mechanism in Nginx that could explain this behaviour? Or some configuration that we should add? As a side note, here is the header returned by Nginx when the file has been modified (it seems correct): Accept-Ranges:bytes
Cache-Control:max-age=0
private
Connection:keep-alive
Content-Length:309
Content-Type:text/css
Date:Fri, 13 May 2011 14:13:13 GMT
Expires:Fri, 13 May 2011 14:13:13 GMT
Last-Modified:Fri, 13 May 2011 14:13:05 GMT
Server:nginx/0.8.54 Edit After trying different settings with the expires directive and Cache-Control header, I made some further investigations. In fact, the server is installed on a VirtualBox guest Ubuntu, and data are read from a shared folder that is on the Mac OSX host. If the file is edited from an IDE (NetBeans) on the host, it seems that changes do not appear whereas if I edit it directly on the guest (using VIM), it is refreshed. The strange thing is it does not behave similarly with HTML files. Quite puzzling. Edit 2 (ANSWER) Indeed, the origin of the issue was more on the VirtualBox side. Or rather a conflict between VirtualBox and the "sendfile" option of the server. This link VirtualBox Hates Sendfile gave me the solution: switch the sendfile flag in the server configuration to off : sendfile off; Hope this could also help other person using VirtualBox for development. :) There are some additional information on the VirtualBox forum . | Since the answer is somehow hidden in the question - here is the solution for nginx in a VirtualBox environment as standalone answer. In your nginx config (usally /etc/nginx/nginx.conf) or vhost config file change the sendfile parameter to off : sendfile off; While sendfile is at the heart of Nginx's fame (blazing-fast low-level static file serving efficiency) it might be a bane for local development, e.g. Javascripts that change often and need to be reloaded. Nonetheless Nginx sendfile is smart and probably isn't most people's issue; check your browser's "disable cache" options as well! | {
"source": [
"https://serverfault.com/questions/269420",
"https://serverfault.com",
"https://serverfault.com/users/81372/"
]
} |
269,421 | I need a cron job to transfer a file across servers using scp and kerberos authentication. The system user for the job is in /etc/passwd on both machines and a valid keytab has been created (with -randkey) for the kerberos auth. The cron job script calls kinit, then scp, then kdestroy. However, the scp won't work unless I change the /sbin/nologin in /etc/passwd to a valid shell, say /bin/bash. Question #1: is this a security hole to specify a shell? Question #2: is this the "right" way to do this? Thanks in advance | Since the answer is somehow hidden in the question - here is the solution for nginx in a VirtualBox environment as standalone answer. In your nginx config (usally /etc/nginx/nginx.conf) or vhost config file change the sendfile parameter to off : sendfile off; While sendfile is at the heart of Nginx's fame (blazing-fast low-level static file serving efficiency) it might be a bane for local development, e.g. Javascripts that change often and need to be reloaded. Nonetheless Nginx sendfile is smart and probably isn't most people's issue; check your browser's "disable cache" options as well! | {
"source": [
"https://serverfault.com/questions/269421",
"https://serverfault.com",
"https://serverfault.com/users/81373/"
]
} |
269,838 | I am new to the world of setting up servers and am baffled by the term hostname and fully qualified domain name (FQDN). For example, if I want to set up a server that hosts files on the local network i.e. a file server, what would I use a hostname such as myfileserver or something else? What if I wanted to set up a web server, mail server, etc. that external users could access? | Your hostname is the name of your computer. Your fully qualified domain name is your hostname plus the domain your company uses often ending in .local . So if the name of your computer is bob , and your company's domain is contoso.local , your computer's fully qualified domain name (FQDN) is bob.contoso.local. : Hostname : bob Domain : contoso.com FQDN : bob.contoso.com. In the case of a domain like contoso.local I did not use an "external" internet domain name. This name doesn't have to be the only way that you address the server. If you make it available by its IP address you can use DNS or that IP address to allow external users to access it. The dot at the end of the FQDN is used to indicate the empty top-level domain. Some more information on DNS: http://www.howstuffworks.com/dns.htm http://en.wikipedia.org/wiki/.local https://en.wikipedia.org/wiki/Fully_qualified_domain_name Edit : Thanks for the comment on .local domains RobM | {
"source": [
"https://serverfault.com/questions/269838",
"https://serverfault.com",
"https://serverfault.com/users/81492/"
]
} |
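On a Linux host the three pieces can usually be checked directly; the output shown in the comments assumes the bob/contoso.com example from the answer and that /etc/hosts or DNS is set up:
hostname       # short host name, e.g. bob
hostname -f    # FQDN, e.g. bob.contoso.com
hostname -d    # domain part only, e.g. contoso.com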
269,921 | When I restart the network using: /etc/init.d/networking restart I get this warning: Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces So what's the best way to restart the network after making changes now? This problem also applies to Debian as the netbase package is inherited from debian. | It is just saying that the restart option is going away /etc/init.d/networking stop; /etc/init.d/networking start Note there is one line only! That is important when running network restart through the network. | {
"source": [
"https://serverfault.com/questions/269921",
"https://serverfault.com",
"https://serverfault.com/users/66603/"
]
} |
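A per-interface alternative, assuming classic ifupdown and that eth0 is the interface in question (do not run this over SSH on that same interface):
sudo ifdown eth0 && sudo ifup eth0
# keep it on one line for the same reason as the stop/start pair above:
# if the first half drops your connection, the second half still runs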
270,005 | In Amazon EC2, where I set up "security groups", it says: Source: 0.0.0.0/0
And then it gives an example of: 192.168.2.0/24 What is "/24"? I know what a port and an IP are. | It represents the CIDR netmask - after the slash you see the number of bits the netmask has set to 1. So the /24 in your example is equivalent to 255.255.255.0. This defines the subnet the IP is in - IPs in the same subnet will be identical after applying the netmask. Take AND to mean bitwise &. Then: 192.168.2.5 AND 255.255.255.0 = 192.168.2.0
192.168.2.100 AND 255.255.255.0 = 192.168.2.0 but, for example: 192.168.3.100 AND 255.255.255.0 = 192.168.3.0 != 192.168.2.0 The most common CIDR netmasks are probably /32 (255.255.255.255 - a single host); /24 (255.255.255.0); /16 (255.255.0.0); and /8 (255.0.0.0). I think it's easier to make sense of the numbers if you remember that 255.255.255.255 can be written as FF.FF.FF.FF - and F is of course the same as binary 1111. So you subtract as many 1's as the difference between 32 and the CIDR netmask to know how much of the IP address "belongs" to its subnet. If this is confusing you can probably skip it and keep to the previously mentioned common ones for the time being, it's just the way I prefer to think about this. Very simply, it is the number of most significant bits that would remain the same in the network. Alternatively, it is (32 less the specified number) of least significant bits that would change in the network. https://www.rfc-editor.org/rfc/rfc1878 | {
"source": [
"https://serverfault.com/questions/270005",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
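The bitwise AND from the answer can be reproduced in plain bash as a quick sanity check; this is only a sketch for masks written out in dotted form:
mask() { IFS=. read -r a b c d <<< "$1"; IFS=. read -r m n o p <<< "$2";
         echo "$((a & m)).$((b & n)).$((c & o)).$((d & p))"; }
mask 192.168.2.5   255.255.255.0   # -> 192.168.2.0
mask 192.168.3.100 255.255.255.0   # -> 192.168.3.0, a different /24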
270,316 | I'm looking for a way to kill all processes with a given name that have been running for more than X amount of time. I spawn many instances of this particular executable, and sometimes it goes into a bad state and runs forever, taking up a lot of CPU. I'm already using monit, but I don't know how to check for a process without a pid file. The rule would be something like this: kill all processes named xxxx that have a running time greater than 2 minutes How would you express this in monit? | In monit, you can use a matching string for processes that do not have a PID file. Using the example of a process named "myprocessname", check process myprocessname
matching "myprocessname"
start program = "/etc/init.d/myprocessname start"
stop program = "/usr/bin/killall myprocessname"
if cpu usage > 95% for 10 cycles then restart Maybe if you check to see if CPU load is at a certain level for 10 monitoring cycles (of 30-seconds each), then restart or kill, that could be an option. Or you could use monit's timestamp testing on a file related to the process. | {
"source": [
"https://serverfault.com/questions/270316",
"https://serverfault.com",
"https://serverfault.com/users/6162/"
]
} |
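Outside of monit, the original requirement (kill processes named xxxx running longer than 2 minutes) can be approximated from cron with ps and awk; xxxx is a placeholder and the etimes column assumes a reasonably recent procps:
ps -C xxxx -o pid=,etimes= | awk '$2 > 120 {print $1}' | xargs -r kill
# swap kill for kill -9 only if the process ignores SIGTERM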
270,339 | How do I set up Nginx conf file to force SSL on only one of the paths in my site and non-SSL on all the rest? For example, I want all of the URLs under /user to be https but all the rest of the URLs to be http. For the first part I have: rewrite ^/user(.*) https://$http_host$request_uri?; I don't want to use "if". I assume it would take advantage of order of operation but I don't want to end up in a loop. | In your nginx configuration, you should have two "server" areas. One for port 80, and one for port 443 (non-SSL and SSL). Simply add a location in your non-SSL website to redirect to your SSL page. server {
root /var/www/;
location / {
}
location /user {
rewrite ^ https://$host$request_uri? permanent;
}
} it will forward all traffic that ends up at /user to your https:// server. Then, in your 443 server, you do the opposite. server {
listen 443;
root /var/www/;
location / {
rewrite ^ http://$host$request_uri? permanent;
}
location /user {
}
} | {
"source": [
"https://serverfault.com/questions/270339",
"https://serverfault.com",
"https://serverfault.com/users/21557/"
]
} |
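Once both server blocks are in place, the redirects can be sanity-checked from a shell; example.com is a placeholder and -k skips certificate checks for a self-signed certificate:
curl -sI http://example.com/user/ | grep -iE '^(HTTP|Location)'   # expect a 301 pointing at https
curl -skI https://example.com/ | grep -iE '^(HTTP|Location)'      # expect a 301 pointing back at http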
270,568 | Here is my current code: Write-output “ENTER THE FOLLOWING DETAILS - When Creating Multiple New Accounts Go to EMC hit F5(refresh) and make sure previous new account is listed before proceeding to the next one”
$DName = Read-Host “User Diplay Name(New User)"
$RUser = Read-Host "Replicate User(Database Grab)"
$RData = ((Get-Mailbox -Identity $RUser).Database).DistinguishedName
$REmailInput = Read-Host “Requester's Name(Notification Email goes to this Person)"
$REmail = ((Get-Mailbox -Identity "$REmailInput").PrimarySmtpAddress).ToString()
Enable-Mailbox -Identity "$DName" -Database "$RData"
Set-CASMailbox -Identity "$DName" -ActiveSyncEnabled $false -ImapEnabled $false -PopEnabled $false
Send-MailMessage -From "John Doe <[email protected]>" -To $REmail -Subject "$DName's email account" -Body "$DName's email account has been setup.`n`n`nJohn Doe`nXYZ`nSystems Administrator`nOffice: 123.456.7890`[email protected]" -SmtpServer [email protected] This code works flawlessly about half the time, but the other half I get this error in return: ENTER THE FOLLOWING DETAILS - When Creating Multiple New Accounts Go to EMC hit
F5(refresh) and make sure previous new account is listed before proceeding to
the next one
User Diplay Name(New User): Jane Doe
Replicate User(Database Grab): Julie Doe
Requester's Name(Notification Email goes to this Person): Joanna Doe
Name Alias ServerName ProhibitSendQuo
ta
---- ----- ---------- ---------------
Jane Doe JDDAFA [email protected] unlimited
Set-CASMailbox : Jane Doe is not a mailbox user.
At C:\emailclientbasic.ps1:11 char:15
+ Set-CASMailbox <<<< -Identity "$DName" -ActiveSyncEnabled $false -ImapEnable
d $false -PopEnabled $false
+ CategoryInfo : NotSpecified: (0:Int32) [Set-CASMailbox], Manage
mentObjectNotFoundException
+ FullyQualifiedErrorId : 292DF1AC,Microsoft.Exchange.Management.Recipient
Tasks.SetCASMailbox So if anyone could help me throw in some kind of wait command after the mailbox is created and wait until the user's mailbox is created before the script disables ActiveSync, etc it would be really helpful. I believe that simply using the -wait switch does not work. | Use the Start-Sleep command: Start-Sleep -s 10 will pause the script for 10 seconds. | {
"source": [
"https://serverfault.com/questions/270568",
"https://serverfault.com",
"https://serverfault.com/users/59582/"
]
} |
270,715 | I want to create a rule that allows anyone on eth1 to access port 80. Can UFW do this or should I go back to using Shorewall? To clarify: this is a capabilities question, can ufw handle interfaces as a target? | I finally read the man page: By default, ufw will apply rules to all available interfaces. To
limit this, specify DIRECTION on INTERFACE, where DIRECTION is
one of in or out (interface aliases are not supported). For
example, to allow all new incoming http connections on eth0,
use:
ufw allow in on eth0 to any port 80 proto tcp To elaborate a little the answer is yes, ufw can use the interface as a target. My particular rule looked like this: ufw allow in on eth1 to [eth1 ip addr] port 80 proto tcp | {
"source": [
"https://serverfault.com/questions/270715",
"https://serverfault.com",
"https://serverfault.com/users/66603/"
]
} |
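For completeness, the rule from the answer plus a way to confirm how ufw recorded it; 192.0.2.10 stands in for the address bound to eth1:
sudo ufw allow in on eth1 to 192.0.2.10 port 80 proto tcp
sudo ufw status verbose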
270,745 | I'm struggling to come to grasp with why all FTP servers requires the use of a port range for passive mode data channels as opposed to only using one data port for all incoming data channel connections. FTP servers handle many simultaneously connected clients on port 21. Web servers handle many simultaneously connected clients on port 80. Etc.. Then why can't an FTP server use only one data channel port for all incoming passive data connections (and still be able to handle many simultaneously connected clients on that port, say port 1024)? Or can it? I am interested in knowing the technical details for why this is not possible or not recommended. | a clear and technical explanation with regards to the multiple concurrent FTP sessions issue when locking the data port to only one port is what I am most interested in knowing in depth. When can it work, when will it not work, why it may not be recommended, etc. This will be a wild guess, as I haven't tested it, you should try it for yourself and see if there are some other problems I might have missed. I suppose you could limit the passive port range to one single port . In fact you can see in this question that small port ranges are used in practice . Theoretically, to support multiple concurrent connections you only need the 4 values: local IP, local port, remote IP, remote port to be unique. This is how you discern between different connections. If you lock down the port on your server to one single value, then the only variable left is the port used by the client. This is not a problem, as long as the client has a large enough pool of free ephemeral ports to choose from. Unless it's doing some heavy NAT, you don't have to worry about this. Now, be warned this will be purely theoretical stuff : if you used multiple ports on your server, you could multiply the number of hypothetical concurrent connections by enabling number of ports in range connections per one port client-side. But it won't happen in practice, as I doubt there's any implementation of an FTP client that would support this (because it doesn't make much sense). Plus if the client has to share his ephemeral ports in this way and can't just open a new one, then he has much more severe problems to deal with. So, from this perspective you should be totally safe using a single port. Let's think why a single port may not be sufficient . First of all, I could come up with a situation where a really buggy FTP server implementation uses solely the local port number as a way to identify the client data transfer. Once again, I don't think any decent FTPd would do that. The real problem ( yes, you can disregard all above as a major digression ;-)) is that passive port range is in a non-privileged range . This means that your selected port number is not reserved per se , and in fact any user process (doesn't need root privileges) can grab it before your FTP server does. If you have an abundant pool of ports to select from, you just grab a random free one. If you're bound to use the only one and it's already used, you won't be able to handle the transfers properly. Sorry, if the answer seems a bit too speculative. To be honest, I tried hard to find a reason why you shouldn't use a single port and, apart from the last bit, I couldn't think of any hard evidence against it. Nevertheless, an interesting and challenging question you pose. | {
"source": [
"https://serverfault.com/questions/270745",
"https://serverfault.com",
"https://serverfault.com/users/81756/"
]
} |