578,061
I recently started a LAMP server (all the latest versions) w/ WordPress on it, and I'm trying to install a SSL certificate that I recently purchased. When I restart apachectl , error_log gives me this: [Tue Feb 25 01:07:14.744222 2014] [mpm_prefork:notice] [pid 1744] AH00169: caught SIGTERM, shutting down [Tue Feb 25 01:07:17.135704 2014] [suexec:notice] [pid 1765] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Feb 25 01:07:17.217424 2014] [auth_digest:notice] [pid 1766] AH01757: generating secret for digest authentication ... [Tue Feb 25 01:07:17.218686 2014] [lbmethod_heartbeat:notice] [pid 1766] AH02282: No slotmem from mod_heartmonitor PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/5.5/modules/mysql.so' - /usr/lib64/php/5.5/modules/mysql.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/5.5/modules/mysqli.so' - /usr/lib64/php/5.5/modules/mysqli.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Tue Feb 25 01:07:17.305292 2014] [mpm_prefork:notice] [pid 1766] AH00163: Apache/2.4.6 (Amazon) OpenSSL/1.0.1e-fips PHP/5.5.7 configured -- resuming normal operations [Tue Feb 25 01:07:17.305378 2014] [core:notice] [pid 1766] AH00094: Command line: '/usr/sbin/httpd' While ssl_error_log gives me this: [Tue Feb 25 00:57:15.802287 2014] [ssl:warn] [pid 1705] AH01909: RSA certificate configured for ec2-XX-XXX-XXX-XX.compute-1.amazonaws.com:443 does NOT include an ID which matches the server name [Tue Feb 25 00:57:15.899327 2014] [ssl:warn] [pid 1706] AH01909: RSA certificate configured for ec2-XX-XXX-XXX-XX.compute-1.amazonaws.com:443 does NOT include an ID which matches the server name I changed "ServerName" in ssl.conf to my server's name (dcturano.com) and restarted apachectl , yet this error occurs. Any ideas why? As an aside, I haven't set the CommonName of the server, could that be the issue?
openssl x509 -in server.crt -noout -subject
should return the CN of the certificate. That's the name you have to use in the ServerName directive and to connect to.
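For example, a quick way to compare what the certificate says with what Apache is configured to serve - a minimal sketch assuming the certificate lives at /etc/pki/tls/certs/server.crt and the vhost is in /etc/httpd/conf.d/ssl.conf (adjust both paths to your layout):

    # subject CN of the certificate
    openssl x509 -in /etc/pki/tls/certs/server.crt -noout -subject
    # any Subject Alternative Names (browsers match against these too)
    openssl x509 -in /etc/pki/tls/certs/server.crt -noout -text | grep -A1 'Subject Alternative Name'
    # what Apache thinks the server is called
    grep -i '^\s*ServerName' /etc/httpd/conf.d/ssl.conf

If the ServerName (dcturano.com) is not listed as the CN or as a SAN, the AH01909 warning is expected; the certificate would need to be reissued for the name actually being served.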
{ "source": [ "https://serverfault.com/questions/578061", "https://serverfault.com", "https://serverfault.com/users/210701/" ] }
578,148
Having many configuration files, I put up one file but I don't see its effects. Either my config is WRONG, OR the config files are NOT LOADED by Apache. Is there a command I can fire to see whether a specific config file got loaded by Apache or not? apachectl configtest does not print errors. Server restarts without error.
From the command line you can also run the following arguments with the Apache binary to get additional information:
-t -D DUMP_VHOSTS : show parsed vhost settings
-t -D DUMP_RUN_CFG : show parsed run settings
-t -D DUMP_MODULES : show all loaded modules
Hope this helps!
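As a concrete illustration, a sketch of how these are typically invoked (the binary may be called apachectl, apache2ctl or httpd depending on the distribution):

    apachectl -t -D DUMP_VHOSTS    # parsed virtual hosts, with the file and line that defined each one
    apachectl -t -D DUMP_RUN_CFG   # resolved run-time settings (user, group, PidFile, ...)
    apachectl -t -D DUMP_MODULES   # every module currently loaded
    # on Apache 2.4.8 or newer there is also:
    apachectl -t -D DUMP_INCLUDES  # every configuration file actually parsed, in include order

DUMP_VHOSTS is usually the quickest check: if your file defines a vhost, it shows up there together with its file name and line number.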
{ "source": [ "https://serverfault.com/questions/578148", "https://serverfault.com", "https://serverfault.com/users/154732/" ] }
578,544
Is there an easy way to deploy a folder full of .j2 template files to a linux box, using the same name as the template but without the .j2 extension, rather than using the template module for each file? Right now I have a long list of:
- name: create x template
  template: src=files/x.conf.j2 dest=/tmp/x.conf owner=root group=root mode=0755
  notify:
    - restart myService
You could use with_fileglob to get the list of files from your template directory and use filters to strip the j2 extension like this:
- name: create x template
  template:
    src: "{{ item }}"
    dest: /tmp/{{ item | basename | regex_replace('\.j2$', '') }}
  with_fileglob:
    - ../templates/*.j2
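To see what that src-to-dest mapping works out to before running the play, a throwaway shell sketch (assuming the templates live in ./templates/) mimics the same basename-plus-strip-extension logic:

    for f in templates/*.j2; do
        echo "would render $f -> /tmp/$(basename "$f" .j2)"
    done

So templates/x.conf.j2 ends up at /tmp/x.conf, templates/y.conf.j2 at /tmp/y.conf, and so on, all handled by the single with_fileglob task.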
{ "source": [ "https://serverfault.com/questions/578544", "https://serverfault.com", "https://serverfault.com/users/163488/" ] }
579,393
Assuming you wanted to create a subdomain that points to a private location (perhaps the location of a database, or the IP address of a computer you don't want people to attempt SSH-ing into), so you add a DNS record named something like this: private-AGhR9xJPF4.example.com Would this be "hidden" to everyone except those who know the exact URI to the subdomain? Or, is there some way to "list" all registered subdomains of a particular domain?
Is there some kind of "subdomain listing" query for DNS? There is no query for this specific purpose, but there are a few indirect methods. A non-incremental zone transfer ( AXFR ). Most server operators lock down zone transfers to specific IP addresses to prevent unaffiliated parties from snooping around. If DNSSEC is enabled, iterative NSEC requests can be used to walk the zone . NSEC3 was implemented to make zone walking more computationally intensive. There's also a trick that will let someone know if an arbitrary subdomain exists. example.com. IN A 198.51.100.1 www.sub.example.com. IN A 198.51.100.2 In the above example, www lies within sub . A query for sub.example.com IN A will not return an ANSWER section, but the result code will be NOERROR instead of NXDOMAIN, betraying the existence of records further down the tree. (just not what those records are named) Should secrecy of DNS records ever be relied upon? No. The only way to reliably hide data from a client is to ensure that it can never get the data to begin with. Assume that existence of your DNS records will be spread among whoever has access to them, either by word of mouth or by observing the packets. If you're trying to hide records from a routable DNS client, You're Doing It Wrong™ . Make sure the data is only exposed to the environments that need it. (i.e. use privately routed domains for private IPs) Even if you have such a division set up, assume that knowledge of the IP addresses will be spread around anyway. The focus on security should be on what happens when someone gets the IP address, because it's going to happen. I'm aware that the list of reasons for IP address secrecy being a pipe dream could be expanded on further. IP scanning, social engineering...the list is endless, and I'm mostly focusing on the DNS protocol aspects of this question. At the end of the day, it all falls under the same umbrella: someone is going to get your IP address .
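A few dig invocations illustrate the probing techniques described above - a sketch using the hypothetical names from the answer, with ns1.example.com standing in for an authoritative nameserver:

    # attempt a full zone transfer (well-run servers refuse this for unknown clients)
    dig @ns1.example.com example.com AXFR
    # probe an empty non-terminal: no ANSWER section, but check the status in the header
    dig @ns1.example.com sub.example.com A +noall +comments
    # compare with a name that genuinely does not exist
    dig @ns1.example.com nothing-here.example.com A +noall +comments

If the second query returns status NOERROR while the third returns NXDOMAIN, something exists below sub.example.com, even though the records themselves remain hidden.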
{ "source": [ "https://serverfault.com/questions/579393", "https://serverfault.com", "https://serverfault.com/users/97027/" ] }
580,595
I'm running an HAProxy load balancing server to balance load to multiple Apache servers. I need to reload HAProxy at any given time in order to change the load balancing algorithm. This all works fine, except for the fact that I have to reload the server without losing a single packet (at the moment a reload is giving me 99.76% success on average, with 1000 requests per second for 5 seconds). I have done many hours of research about this, and have found the following command for "gracefully reloading" the HAProxy server: haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) However, this has little or no effect versus the plain old service haproxy reload , it's still dropping 0.24% on average. Is there any way of reloading the HAProxy config file without a single dropped packet from any user?
According to https://github.com/aws/opsworks-cookbooks/pull/40 and consequently http://www.mail-archive.com/[email protected]/msg06885.html you can:
iptables -I INPUT -p tcp --dport $PORT --syn -j DROP
sleep 1
service haproxy restart
iptables -D INPUT -p tcp --dport $PORT --syn -j DROP
This has the effect of dropping the SYN before a restart, so that clients will resend this SYN until it reaches the new process.
{ "source": [ "https://serverfault.com/questions/580595", "https://serverfault.com", "https://serverfault.com/users/180252/" ] }
580,753
I'm attempting to run this simple provisioning script but I'm encountering errors when running vagrant up and then vagrant provision commands. I read that I needed to create a /etc/ansible/hosts file which I've done, populating it with: [vagrant] 192.168.222.111 My SSH config (some details removed): Host default HostName 127.0.0.1 User vagrant Port 2222 UserKnownHostsFile /dev/null StrictHostKeyChecking no PasswordAuthentication no IdentityFile /Users/ashleyconnor/.vagrant.d/insecure_private_key IdentitiesOnly yes LogLevel FATAL Host server HostName XXX.XXX.XXX.XXX User ash PreferredAuthentications publickey IdentityFile ~/.ssh/ash_ovh Host deployer HostName XXX.XXX.XXX.XXX User deployer PreferredAuthentications publickey IdentityFile ~/.ssh/deployer_ovh Host bitbucket.org PreferredAuthentications publickey IdentityFile ~/.ssh/bitbucket Host github.com PreferredAuthentications publickey IdentityFile ~/.ssh/github Host staging HostName 192.168.56.10 User deployer PreferredAuthentications publickey IdentityFile ~/.ssh/id_rsa The SSH output I'm receiving seems to churn through all my keys: <192.168.222.111> ESTABLISH CONNECTION FOR USER: vagrant <192.168.222.111> REMOTE_MODULE setup <192.168.222.111> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/Users/ashleyconnor/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'IdentityFile=/Users/ashleyconnor/.vagrant.d/insecure_private_key', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '192.168.222.111', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1394317116.44-226619545527061 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1394317116.44-226619545527061 && echo $HOME/.ansible/tmp/ansible-tmp-1394317116.44-226619545527061'"] fatal: [192.168.222.111] => SSH encountered an unknown error. The output was: OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /Users/ashleyconnor/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 53: Applying options for * debug1: auto-mux: Trying existing master debug1: Control socket "/Users/ashleyconnor/.ansible/cp/ansible-ssh-192.168.222.111-22-vagrant" does not exist debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.222.111 [192.168.222.111] port 22. debug2: fd 3 setting O_NONBLOCK debug1: fd 3 clearing O_NONBLOCK debug1: Connection established. 
debug3: timeout: 10000 ms remain after connect debug3: Incorrect RSA1 identifier debug3: Could not load "/Users/ashleyconnor/.vagrant.d/insecure_private_key" as a RSA1 public key debug1: identity file /Users/ashleyconnor/.vagrant.d/insecure_private_key type -1 debug1: identity file /Users/ashleyconnor/.vagrant.d/insecure_private_key-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5* debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "192.168.222.111" from file "/Users/ashleyconnor/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/ashleyconnor/.ssh/known_hosts:20 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],zlib,none debug2: kex_parse_kexinit: [email protected],zlib,none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email 
protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 [email protected] debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 [email protected] debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 119/256 debug2: bits set: 527/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 50:db:75:ba:11:2f:43:c9:ab:14:40:6d:7f:a1:ee:e3 debug3: load_hostkeys: loading entries for host "192.168.222.111" from file "/Users/ashleyconnor/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/ashleyconnor/.ssh/known_hosts:20 debug3: load_hostkeys: loaded 1 keys debug1: Host '192.168.222.111' is known and matches the RSA host key. debug1: Found key in /Users/ashleyconnor/.ssh/known_hosts:20 debug2: bits set: 511/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/ashleyconnor/.ssh/id_rsa (0x7fc212600540), debug2: key: /Users/ashleyconnor/.ssh/bitbucket (0x7fc212600730), debug2: key: /Users/ashleyconnor/.ssh/deployer (0x7fc212600a00), debug2: key: /Users/ashleyconnor/.ssh/github (0x7fc212600c80), debug2: key: /Users/ashleyconnor/.ssh/ash_ovh (0x7fc212601010), debug2: key: /Users/ashleyconnor/.ssh/deployer_ovh (0x7fc2126011e0), debug2: key: /Users/ashleyconnor/.vagrant.d/insecure_private_key (0x0), explicit debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-with-mic,gssapi-keyex,hostbased,publickey debug3: authmethod_lookup publickey debug3: remaining preferred: ,gssapi-keyex,hostbased,publickey debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/ashleyconnor/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Offering RSA public key: /Users/ashleyconnor/.ssh/bitbucket debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Offering RSA public key: /Users/ashleyconnor/.ssh/deployer debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Offering RSA public key: /Users/ashleyconnor/.ssh/github debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Offering RSA public key: /Users/ashleyconnor/.ssh/ash_ovh debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password 
debug1: Offering RSA public key: /Users/ashleyconnor/.ssh/deployer_ovh debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply Received disconnect from 192.168.222.111: 2: Too many authentication failures for vagrant The vagrant ssh command works fine.
According to an older* ssh_config(5) man page, ssh will always try all keys known by the agent in addition to any Identity Files:

IdentitiesOnly
Specifies that ssh(1) should only use the authentication identity files configured in the ssh_config files, even if ssh-agent(1) offers more identities. The argument to this keyword must be “yes” or “no”. This option is intended for situations where ssh-agent offers many different identities. The default is “no”.

IdentityFile
Specifies a file from which the user's DSA, ECDSA or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa and ~/.ssh/id_rsa for protocol version 2. Additionally, any identities represented by the authentication agent will be used for authentication. ssh(1) will try to load certificate information from the filename obtained by appending -cert.pub to the path of a specified IdentityFile.

To prevent this, one must specify IdentitiesOnly=yes in addition to the explicitly provided private key. For example, running the ssh command below:

$ ssh -i /home/henk/.vagrant.d/insecure_private_key [email protected] echo ok

produces:

Received disconnect from 192.168.222.111: 2: Too many authentication failures for vagrant

However, running the same ssh command and, in addition, specifying IdentitiesOnly=yes :

$ ssh -o IdentitiesOnly=yes -i /home/henk/.vagrant.d/insecure_private_key [email protected] echo ok

produces:

ok

* Note: The OpenBSD project hosts up to date docs for IdentitiesOnly and IdentityFile . These include extra text for new features that do not change the essence of this answer.
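If rewriting ~/.ssh/config is not desirable, the same option can be pushed through Ansible's SSH arguments - a sketch, assuming a version of Ansible that honours the ANSIBLE_SSH_ARGS environment variable (or the equivalent ssh_args setting under [ssh_connection] in ansible.cfg):

    ANSIBLE_SSH_ARGS='-o IdentitiesOnly=yes' ansible-playbook -i /etc/ansible/hosts playbook.yml

With that in place, ssh offers only the insecure_private_key that the play specifies, instead of exhausting the server's MaxAuthTries limit with every key the agent knows about.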
{ "source": [ "https://serverfault.com/questions/580753", "https://serverfault.com", "https://serverfault.com/users/40862/" ] }
580,816
The two main reasons I can think of for taking backups seem to be taken care of when I use both snapshots and RAID together with btrfs. (By RAID here, I mean RAID1 or 10.)
Accidental deletion of data: snapshots cover this case.
Failure of a drive and bit rot:
  Complete failure: RAID covers this case.
  Drive returning bad data: RAID + btrfs' error-correcting feature covers this case.
So as an on-site backup solution, this seems to work fine, and it doesn't even need a separate data storage device for it! However, I have heard that both RAID and snapshots aren't considered proper backups, so I'm wondering if I have missed anything. Aside from btrfs not being a mature technology yet, can you think of anything I've missed? Or is my thinking correct and this is a valid on-site backup solution?
No, it's not. What happens when your filesystem or RAID volume gets corrupted? Or your server gets set on fire? Or someone accidentally formats the wrong array? You lose all your data and the not-real-backups you thought you had. That's why real backups are on a completely different system than the data you're backing up - because backups protect against something happening to the system in question that would cause data loss. Keep your backups on the same system as you're backing up, and data loss on that system can impact your "backups" as well.
{ "source": [ "https://serverfault.com/questions/580816", "https://serverfault.com", "https://serverfault.com/users/212125/" ] }
580,881
I love the idea of accessing servers via keys, so that I don't have to type in my password every time I ssh into a box; I even lock my user's (not root ) password ( passwd -l username ) so it's impossible to log in without a key. But all of this breaks if I'm required to enter a password for sudo commands. So I'm tempted to set up passwordless sudo to make things in line with passwordless login. However, I keep having a gut feeling that it may backfire at me in some unexpected way; it just seems somehow insecure. Are there any caveats with such a setup? Would you recommend/not recommend doing this for a user account on a server?

Clarifications
I'm talking about the use of sudo in an interactive user session here, not for services or administrative scripts.
I'm talking about using a cloud server (so I have no physical local access to a machine and can only log in remotely).
I know that sudo has a timeout during which I don't have to re-enter my password. But my concern isn't really about wasting the extra time to physically type in a password. My idea though was to not have to deal with a password at all, because I assume that:
If I have to memorize it at all, it's very likely too short to be secure or reused.
If I generate a long and unique password for my remote account, I'll have to store it somewhere (a local password manager program or a cloud service) and fetch it every time I want to use sudo . I hoped I could avoid that.
So with this question I wanted to better understand the risks, caveats and tradeoffs of one possible configuration over the others.

Follow up 1
All answers say that passwordless sudo is insecure as it allows "easy" escalation of privileges if my personal user account gets compromised. I understand that. But on the other hand, if I use a password, we incur all the classic risks with passwords (too short or common string, repeated across different services, etc.). But I guess that if I disable password authentication in /etc/ssh/sshd_config so that you still have to have a key to log in, I can use a simpler password just for sudo that's easier to type in? Is that a valid strategy?

Follow up 2
If I also have a key to log in as root via ssh, and somebody gets access to my computer and steals my keys (they're still protected by the OS' keyring password though!), they might as well get direct access to the root account, bypassing the sudo path. What should be the policy for accessing the root account then?
I love the idea of accessing servers via keys, so that I don't have to type in my password every time I ssh into a box, I even lock my user's (not root) password (passwd -l username) so it's impossible to log in without a key...Would you recommend/not recommend doing this for a user account on a server? You're going about disabling password-based logins the wrong way. Instead of locking a user's account, set PasswordAuthentication no in your /etc/ssh/sshd_config . With that set, password authentication is disabled for ssh, but you can still use a password for sudo. The only time I recommend setting NOPASSWD in sudo is for service accounts, where processes need to be able to run commands via sudo programmatically. In those circumstances, make sure that you explicitly whitelist only the specific commands that account needs to run. For interactive accounts, you should always leave passwords enabled. Responses to your follow-up questions: But I guess that if I disable password authentication in /etc/ssh/sshd_config so that you still have to have a key to log in, I can use a simpler password just for sudo that's easier to type in? Is that a valid strategy? Yes, that's correct. I still recommend using relatively strong local account passwords, but not ridiculously-strong. ~8 characters, randomly generated is sufficient. If I also have a key to log in as root via ssh, if somebody gets access to my computer and steal my keys (they're still protected by OS' keyring password though!), they might as well get a direct access to the root account, bypassing the sudo path. Root access via ssh should be disabled. Period. Set PermitRootLogin no in your sshd_config . What should be the policy for accessing the root account then? You should always have a means of obtaining out-of-band access to the console of your server. Several VPS vendors do provide this, as do vendors of dedicated hardware. If your provider doesn't grant real console access (say, EC2 for instance), you can typically still restore access using a process like what I outline in this answer .
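To make that concrete, a sketch of the relevant pieces - the deploy and alice accounts and the service command are hypothetical placeholders:

    # /etc/ssh/sshd_config - key-only logins, no root over ssh
    PasswordAuthentication no
    PermitRootLogin no

    # /etc/sudoers.d/deploy - NOPASSWD only for a service account, and only for a whitelisted command
    deploy ALL=(root) NOPASSWD: /usr/sbin/service myapp restart

    # /etc/sudoers.d/admins - interactive admins keep their password prompt
    alice ALL=(ALL) ALL

Edit sudoers fragments with visudo -f /etc/sudoers.d/deploy so a syntax error can't lock you out, and reload sshd after changing its configuration.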
{ "source": [ "https://serverfault.com/questions/580881", "https://serverfault.com", "https://serverfault.com/users/164762/" ] }
580,972
I am trying to install an MSI on a Windows Server 2012 machine which is part of my lab domain. I am local and domain admin, but I seem to be prevented from installing this MSI. For clarification, when attempting to install the git extension for visual studio (located here ) logged in as a domain user that is part of the administrator group, I get the following error The machine reporting the error is a Windows Server 2012. I'm almost certain it must be some sort of group policy restriction? None will have been set, unless it's the default security level? For clarification, I'd like to know what is preventing this MSI being installed by a domain admin?
After spending time looking at group policy, as far as I could tell, there was nothing that was relevant. I then came across this post that suggested I try launching a command prompt as an administrator and running:
msiexec /a install.msi
This appeared to work, but ran very quickly - in fact it didn't. On a whim, I tried this inside the admin command prompt:
msiexec /i install.msi
which worked a treat.
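The distinction matters: /a performs an administrative install, which only extracts an image of the package, while /i actually installs and registers the product. A quick reference sketch (the log file name is purely illustrative):

    rem administrative install - extracts an image of the package, registers nothing
    msiexec /a install.msi
    rem normal per-machine install, run from an elevated prompt
    msiexec /i install.msi
    rem same, with a verbose log to troubleshoot policy or permission blocks
    msiexec /i install.msi /l*v install.log

The verbose log is usually the fastest way to see exactly which policy or ACL stopped an install.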
{ "source": [ "https://serverfault.com/questions/580972", "https://serverfault.com", "https://serverfault.com/users/123010/" ] }
581,145
I copied the nginx.conf sample onto my ubuntu 12.04 box (I don't know where to put the other conf files. I'm an nginx noob). When I try to start nginx I get the following error: abe-lens-laptop@abe:/etc$ sudo service nginx start Starting nginx: nginx: [emerg] getpwnam("www") failed in /etc/nginx/nginx.conf:1 nginx: configuration file /etc/nginx/nginx.conf test failed What does this error mean? How can I fix it? I found this post but my user is already set to www www (if you see in the linked file) How do I change the NGINX user?
The user you specified in your configuration, www , doesn't exist. Either create the user, or choose a user that does exist.
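Either route is a one-liner - a sketch for Ubuntu 12.04, where the packaged nginx normally runs as www-data:

    # option 1: create the user the config asks for
    sudo useradd --system --no-create-home --shell /usr/sbin/nologin www
    # option 2: point nginx at an existing account instead (first directive in nginx.conf):
    #   user www-data;
    # then verify the config and restart
    sudo nginx -t && sudo service nginx restart

getent passwd www is a quick way to confirm whether the account exists before deciding which route to take.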
{ "source": [ "https://serverfault.com/questions/581145", "https://serverfault.com", "https://serverfault.com/users/103183/" ] }
581,268
We're trying to distribute our S3 buckets via CloudFront but for some reason the only response is an AccessDenied XML document like the following: <Error> <Code>AccessDenied</Code> <Message>Access Denied</Message> <RequestId>89F25EB47DDA64D5</RequestId> <HostId>Z2xAduhEswbdBqTB/cgCggm/jVG24dPZjy1GScs9ak0w95rF4I0SnDnJrUKHHQC</HostId> </Error> Here are the settings we're using: And here's the policy for the bucket: { "Version": "2008-10-17", "Id": "PolicyForCloudFrontPrivateContent", "Statement": [ { "Sid": "1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity *********" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::x***-logos/*" } ] }
If you're accessing the root of your CloudFront distribution, you need to set a default root object: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html To specify a default root object using the CloudFront console: Sign in to the AWS Management Console and open the Amazon CloudFront console at https://console.aws.amazon.com/cloudfront/ . In the list of distributions in the top pane, select the distribution to update. In the Distribution Details pane , on the General tab, click Edit . In the Edit Distribution dialog box, in the Default Root Object field, enter the file name of the default root object. Enter only the object name, for example, index.html . Do not add a / before the object name. To save your changes, click Yes, Edit .
{ "source": [ "https://serverfault.com/questions/581268", "https://serverfault.com", "https://serverfault.com/users/212373/" ] }
581,465
I have just inherited 6 web servers from the previous server guy, who was fired. I am not a sysadmin; I am more of a DevOps person. Could anyone point me to some sort of standard checklist one would follow when inheriting existing servers? Things I need to know are:
What software is on the servers
What are the standard things I should do to check they are secure?
What is connecting to them and what are they connected to?
What else should I know?
Any advice is welcome. I was hoping there was a standard kind of checklist that one would follow as a start, but I could not find anything. All servers are Ubuntu (various versions).
To determine what software has been installed, you can review /var/log/dpkg.log . However, this may not be a complete record. There may be binaries and code that were compiled manually or copied directly to the system pre-compiled. You could compare a default install of the same Ubuntu version and type to the server(s) and look for what files are different, but that can be quite tedious. A file monitor solution would be ideal (Tripwire, inotifywatch, etc.) http://linuxcommando.blogspot.com/2008/08/how-to-show-apt-log-history.html

You need to check EVERYTHING on the server. Every user account in /etc/passwd , every application user account (such as users in Apache/PHP, database accounts, etc.) should be accounted for, and you should change all the passwords. You should check to see what services are launched on boot, what the default runlevel is and what starts with it and with other runlevels.

I would use a vulnerability scanner and a baseline configuration tool to audit the current state. The Center for Internet Security offers a free configuration assessment tool, but it may be limited. They have more advanced tools for member organizations ($). http://benchmarks.cisecurity.org/ OpenVAS is a FOSS scanner, not unlike Nessus, which may have similar capabilities. There are many, many more things to check, but this answer is already getting a bit long... (Code review for webapps and web pages is a good example.)

You can review the state of ports available for connections to the servers with a variety of flags for netstat . http://www.thegeekstuff.com/2010/03/netstat-command-examples/ To identify who has been connecting to the server you will have to resort to the sexiest of Internet Security activities: reviewing system logs. The info can be in any one of a number of logs depending on what applications and servers are on the system. You may also have some luck with external network logs, if they exist.

You have a lot of follow up to do. You indicated that the previous admin was fired; if you suspect malicious intent from that person (i.e. they may have left backdoors, booby traps, logic bombs, etc.) you're almost certain to be better off rebuilding the servers from clean media and reimplementing the webapps on them. If this previous admin had full access and control of those systems and was not subjected to diligent auditing and oversight, you should probably assume there are backdoors. This is based on a pessimistic assumption about the previous admin. Unfortunately that is the way the cookie crumbles for operational network security.

There is a lot more to consider, as I said...way more than can be covered here. These points should give you some things to start doing so you can report to management that you are making some progress; but to be brutally honest, if you are not a security professional and you have reason to suspect this person acted with malice, you are probably in over your head. It is an unpopular answer with management because it requires a lot of effort (which means more $), but the general security-minded answer is: when in doubt, wipe and rebuild from clean sources . That is how most important gov't systems deal with malware; if an alert comes up from AV, the system is segregated, wiped, and rebuilt. Hope you made a backup cuz that data is GONE . Good luck, and I hope this was helpful and not just depressing.
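As a starting point for the inventory and "what is connecting" questions, a short sketch of commands that work on a stock Ubuntu 12.04 box (all read-only, so safe to run before any rebuild decision):

    last -aF | head -n 30                             # recent logins and where they came from
    cut -d: -f1,3,7 /etc/passwd                       # every account, its UID and login shell
    sudo netstat -tulpn                               # listening ports and the processes behind them
    service --status-all 2>&1 | sort                  # init scripts and their current state
    grep ' install ' /var/log/dpkg.log | tail -n 40   # recently installed packages
    sudo crontab -l; ls /etc/cron.*                   # scheduled jobs worth reading line by line

None of this replaces the vulnerability scan or the rebuild decision above; it just gives you a quick written inventory to hand to management.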
{ "source": [ "https://serverfault.com/questions/581465", "https://serverfault.com", "https://serverfault.com/users/212476/" ] }
581,524
I'm familiar with what a BBWC (battery-backed write cache) is intended to do - and previously used them in my servers even with a good UPS. There are obviously failures it does not provide protection for. I'm curious to understand whether it actually offers any real benefit in practice. (NB I'm specifically looking for responses from people who have BBWC and had crashes/failures and whether the BBWC helped recovery or not)

Update
After the feedback here, I'm increasingly skeptical as to whether a BBWC adds any value.

To have any confidence about data integrity, the filesystem MUST know when data has been committed to non-volatile storage (not necessarily the disk - a point I'll come back to). It's worth noting that a lot of disks lie about when data has been committed to the disk ( http://brad.livejournal.com/2116715.html ). While it seems reasonable to assume that disabling the on-disk cache might make the disks more honest, there's still no guarantee that this is the case either.

Due to the typically large buffers in a BBWC, a barrier can require significantly more data to be committed to disk, therefore causing delays on writes: the general advice is to disable barriers when using a non-volatile write back cache (and to disable on-disk caching). However this would appear to undermine the integrity of the write operation - just because more data is maintained in non-volatile storage does not mean that it will be more consistent. Indeed, arguably without demarcation between logical transactions there seems to be less opportunity to ensure consistency than otherwise.

If the BBWC were to acknowledge barriers at the point the data enters its non-volatile storage (rather than being committed to disk) then it would appear to satisfy the data integrity requirement without a performance penalty - implying that barriers should still be enabled. However, since these devices generally exhibit behaviour consistent with flushing the data to the physical device (significantly slower with barriers), and given the widespread advice to disable barriers, they cannot therefore be behaving in this way. WHY NOT?

If the I/O in the OS is modelled as a series of streams then there is some scope to minimise the blocking effect of a write barrier when write caching is managed by the OS - since at this level only the logical transaction (a single stream) needs to be committed. On the other hand, a BBWC with no knowledge of which bits of data make up the transaction would have to commit its entire cache to disk. Whether the kernel/filesystems actually implement this in practice would require a lot more effort than I'm willing to invest at the moment.

A combination of disks telling fibs about what has been committed and sudden loss of power undoubtedly leads to corruption - and with a journalling or log-structured filesystem, which doesn't do a full fsck after an outage, it's unlikely that the corruption will be detected, let alone an attempt made to repair it.

In terms of the modes of failure, in my experience most sudden power outages occur because of loss of mains power (easily mitigated with a UPS and managed shutdown). People pulling the wrong cable out of a rack implies poor datacentre hygiene (labelling and cable management). There are some types of sudden power loss event which are not prevented by a UPS - failure in the PSU or VRM. A BBWC with barriers would provide data integrity in the event of a failure here, however how common are such events? Very rare, judging by the lack of responses here.

Certainly moving the fault tolerance higher in the stack is significantly more expensive than a BBWC - however implementing a server as a cluster has lots of other benefits for performance and availability. An alternative way to mitigate the impact of sudden power loss would be to implement a SAN - AoE makes this a practical proposition (I don't really see the point in iSCSI) but again there's a higher cost.
Sure. I've had battery-backed cache (BBWC) and later flash-backed write cache (FBWC) protect in-flight data following crashes and sudden power loss. On HP ProLiant servers, the typical message is: POST Error: 1792-Drive Array Reports Valid Data Found in Array Accelerator Which means, " Hey, there's data in the write cache that survived the reboot/power-loss!! I'm going to write that back to disk now!! " An interesting case was my post-mortem of a system that lost power during a tornado , the array sequence was: POST Error: 1793-Drive Array - Array Accelerator Battery Depleted - Data Loss POST Error: 1779-Drive Array Controller Detects Replacement Drives POST Error: 1792-Drive Array Reports Valid Data Found in Array Accelerator The 1793 POST error is unique. - While the system was in use, power was interrupted while data was in the Array Accelerator memory. However, due to the fact that this was a tornado, power was not restored within four days, so the array batteries were depleted and data within was lost. The server had two RAID controllers. The other controller had an FBWC unit, which lasts far longer than a battery. That drive recovered properly. Some data corruption resulted on the array backed by the empty battery. Despite plenty of battery runtime at the facility, four days without power and hazardous conditions made it impossible for anyone to shut the servers down safely.
{ "source": [ "https://serverfault.com/questions/581524", "https://serverfault.com", "https://serverfault.com/users/35483/" ] }
581,817
There are a few questions that I've found on ServerFault that hint around this topic, and while it may be somewhat opinion-based, I think it can fall into that "good subjective" category based on the below: Constructive subjective questions: * tend to have long, not short, answers * have a constructive, fair, and impartial tone * invite sharing experiences over opinions * insist that opinion be backed up with facts and references * are more than just mindless social fun So that out of the way. I'm helping out a fellow sysadmin that is replacing an older physical server running Windows 2003 and he's looking to not only replace the hardware but "upgrade" to 2012 R2 in the process. In our discussions about his replacement hardware, we discussed the possibility of him installing ESXi and then making the 2012 "server" a VM and migrating the old apps/files/roles from the 2003 server to the VM instead of to a non-VM install on the new hardware. He doesn't perceive any time in the next few years the need to move anything else to a VM or create additional VMs, so in the end this will either be new hardware running a normal install or new hardware running a single VM on ESXi. My own experience would lean towards a VM still, there isn't a truly compelling reason to do so other than possibilities that may arise to create additional VMs. But there is the additional overhead and management aspect of the hypervisor now, albeit I have experienced better management capabilities and reporting capabilities with a VM. So with the premise of hoping this can stay in the "good subjective" category to help others in the future, what experiences/facts/references/constructive answers do you have to help support either outcome (virtualizing or not a single "server")?
In the general case, the advantage of putting a standalone server on a hypervisor is future-proofing. It makes future expansion or upgrades much easier, much faster, and as a result, cheaper. The primary drawback is additional complexity and cost (not necessarily financially, but from a man-hours and time perspective). So, to come to a decision, I ask myself three questions (and usually prefer to put the server on a hypervisor, for what it's worth). How big is the added cost of the hypervisor? Financially, it's usually minimal or non-existent. Both VMware and Microsoft have licensing options that allow you to run a host and a single guest for free, and this is sufficient for most standalone servers, exceptions generally being servers that are especially resource-intensive. From a management and resource standpoint, determining cost can be a bit trickier. You basically double the cost of maintaining the system, because now you have two systems to monitor, manage and keep up-to-date with patches and updates (the guest OS and the host OS). For most uses, this is not a big deal, as it's not terribly taxing to maintain one server, though for some especially small or especially technically challenged organizations, this can be a real concern. You also add to the technical skills required. Now, instead of just needing someone who can download updates from Windows Update, you need someone who knows enough to manage and maintain the virtualization environment. Again, not usually a problem, but sometimes, it's more than an organization can handle. How big is the benefit from ease-of upgrade or expansion? This boils down to how likely future expansion is, because obviously, if they don't expand or upgrade their server assets, this benefit is zero. If this is the type of organization that's just going to stuff the server in a corner and forget about it for 10 years until it needs to be replaced anyway, there's no point. If they're likely to grow organizationally, or even just technically (by say adding new servers with different roles, instead of just having an all-in-one server), then this provides a fairly substantial benefit. What's the benefit now ? Virtualization bring benefits beyond future-proofing, and in some use-cases, they can be substantial. The most obvious one is the ability to create snapshots and trivial-to-restore backups before doing something on the system, so if it goes bad, you can revert in one click. The ability to experiment with other VMs (and play the "what if" game) is another one I've seen management get excited about. For my money, though, the biggest benefit is the added portability you get from running a production server on a hypervisor. If something goes really wrong and you get yourself into a disaster-recovery or restore-from-backups situation, it is almost infinitely easier to restore a disk image to a machine running the same hypervisor than trying to do a bare-metal restore.
{ "source": [ "https://serverfault.com/questions/581817", "https://serverfault.com", "https://serverfault.com/users/7861/" ] }
582,499
I'm running Ubuntu 12.04 on Oracle VirtualBox. A couple months ago, I installed PostgreSQL server version 9.1 on my machine. Just recently, I learned that PostgreSQL server 9.3 supports JSON data types, so I decided to upgrade. I upgraded to 9.3 by following the instructions here: https://wiki.postgresql.org/wiki/Apt wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - sudo apt-get update sudo apt-get upgrade sudo apt-get install postgresql-9.3 pgadmin3 This installed server version 9.3 on my machine alongside version 9.1. Running pg_lsclusters after a fresh boot gives: Ver Cluster Port Status Owner Data directory Log file 9.1 main 5433 online postgres /var/lib/postgresql/9.1/main /var/log/postgresql/postgresql-9.1-main.log 9.3 main 5432 online postgres /var/lib/postgresql/9.3/main /var/log/postgresql/postgresql-9.3-main.log I then did the following post-upgrade maintenance: I exported several tables from my 9.1 server with pg_dump and restored them to my 9.3 server. I then opened my config files for 9.1 and 9.3 at /etc/postgresql/$VERSION/main/postgresql.conf and swapped their port numbers so that my psql client connects to the new server by default. My question is this. Both 9.1 and 9.3 start on boot. I would like to prevent 9.1 from auto booting, as it takes up roughly 5% of my system memory. How can I do this? Resources consulted: The PostgreSQL doc page on starting a server points me to the standard init.d directory. My init.d directory does contain the script postgresql . It looks like this script can be configured to launch only one version, but the required change is not obvious to me. http://www.postgresql.org/docs/9.1/interactive/server-start.html The post below was very informative, but it shows how to remove a cluster, not how to disable one on startup. I would like to leave my older cluster installed, as I may want to retrieve further information from it. I think I have multiple postgresql servers installed, how do I identify and delete the 'extra' ones? I have considered writing a script to kill the server once the system has finished loading, but this seems inefficient. Is there a cleaner way to disable version 9.1 on boot?
For less of a hack, edit /etc/postgresql/9.1/main/start.conf and replace auto with manual or disabled . Note : Invoke systemctl daemon-reload after editing this file.
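Putting it together on 12.04 (which uses upstart/sysvinit rather than systemd, so the daemon-reload note only applies on newer releases) - a sketch:

    sudo pg_ctlcluster 9.1 main stop                                    # stop the old cluster now
    sudo sed -i 's/^auto$/manual/' /etc/postgresql/9.1/main/start.conf  # or edit the file by hand
    pg_lsclusters                                                       # 9.1 should now show "down", 9.3 "online"

A cluster marked manual can still be started on demand with sudo pg_ctlcluster 9.1 main start whenever you need to pull more data out of it.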
{ "source": [ "https://serverfault.com/questions/582499", "https://serverfault.com", "https://serverfault.com/users/212985/" ] }
582,627
I read in a lot of places that it's necessary to ground networking equipment. In a small SOHO environment where there are, say, 4-5 systems and each system is grounded, is it also necessary to ground the network equipment, that is, the network cable and the switch? If yes, how can it be achieved? Most of the low-cost switches I have seen, like, say, this one or this one, seem to have only a two-pin power adaptor. In such a case, how do I ground the switch?
Grounding for safety (vs grounding for signal integrity, which is a whole different issue, see below) is primarily a concern for equipment in a metal enclosure. If there is a wiring fault inside the device (frayed wire shorting against the metal inside), then the outer shell of the unit may be electrified. Guess what happens when you touch it? Ouch! By having a ground path, you give the electrical current somewhere else to go (instead of through your body when you touch the device).

You will notice that most devices in a metal enclosure will have a 3-prong power input (if the power plugs in directly without an AC/DC power brick), and also a chassis ground screw . If you are grounded through the power cord, the middle prong should be connected to the building's wiring and then to an earth grounding rod , assuming your building was properly wired using modern building codes. There are inexpensive testers that can check to make sure your outlet is wired correctly. If you use the chassis ground screw, you typically connect it to a grounding bus bar mounted on the wall, which is then connected to an earth ground. If you don't have a bus bar in your setup, you might be able to "cheat" by wiring it up to the grounding prong or center screw of an electrical outlet. (This is not ideal, and probably would not pass a code inspection, but it is better than nothing.) Your local safety codes may require one or the other be hooked up, or both. It may also depend on whether it is a temporary item (e.g. a 5-port switch on someone's desk), or a permanent installation (something bolted to a wall). Check your local codes.

Rackmount devices are typically grounded to the metal frame of the rack by being bolted to it. Then there is typically a grounding connection from the metal frame of the rack to a bus bar, which in turn grounds all of the devices mounted in the rack. This is in addition to the grounding provided via the power cords. Cable conduits, ladders, and rack doors should be grounded as well (any exposed metal). Page 3 of this PDF provides a useful illustration.

For a consumer-grade desktop switch in a plastic shell like the ones you mentioned: there is usually no option for attaching a ground because it is not required, since there is no exposed metal. The only things you should do are make sure your outlets are wired correctly (using the aforementioned tester), and use a surge suppressor (power strip or UPS).

Grounding for Signal Integrity: The other reason you might need to pay attention to grounding is if you have a signal integrity problem (corrupt data). Two big ways this can come into play:
In an electronic system, ground is the reference point for "zero volts". Ground is not the same everywhere you are, so two physically separate systems may disagree on what is a "1" or a "0". This can lead to all kinds of "interesting" communication problems. A common way you can run into this is if one of the computers connected to the switch is on a separate electrical power system (e.g. two buildings connected by an underground cable). In that case, it is recommended that you use fiber ethernet (not a consumer grade switch).
Electronic interference and "noise": power cables running next to data cables, EMI due to a large electric compressor next to your wiring closet. These kinds of problems can be mitigated with grounded conduits and other forms of shielding (or just use fiber).
Generally speaking, Ethernet is much more forgiving than, say, RS-232 when it comes to grounding issues because the signaling is differential and uses an isolation transformer. So, you usually do not need to worry about signal integrity grounding in a typical office environment. However, problems can still occur in "harsh" environments, like a factory floor. If you have a higher-end managed switch, it can give you statistics on Layer 1-2 communication errors, which will give you some idea if there are physical problems with your wiring that need to be addressed.
{ "source": [ "https://serverfault.com/questions/582627", "https://serverfault.com", "https://serverfault.com/users/213038/" ] }
582,630
There is a Windows service that gets reinstalled sometimes. I need a user to be able to start/stop/restart this service. This user is not an administrator and shouldn't be. If I use setacl.exe then it works, or I can even use sc sdset , but after the service gets reinstalled setacl needs to be called again, and the process that reinstalls the service has no rights to run setacl. Is there a way to grant a specific user the right to restart a service with a specific name, or even all services, that persists through a service reinstall? If I'm able to give a user some general permissions to "manage services" that would also be fine, but I'm unable to pinpoint the exact rights needed for this (if I add the user to the admin group, he can start/stop services, but can -obviously- do a lot more than that).
{ "source": [ "https://serverfault.com/questions/582630", "https://serverfault.com", "https://serverfault.com/users/109093/" ] }
582,696
Using PowerShell, how can I get the currently logged on domain user's full name (not only its username) without the need of the ActiveDirectory module?
$dom = $env:userdomain
$usr = $env:username
([adsi]"WinNT://$dom/$usr,user").fullname
Returns: John Doe
Some other (mostly) obscure properties are also available. A few useful ones: Homedrive UNC, Homedrive Letter, Description, Login script. Try:
[adsi]"WinNT://$dom/$usr,user" | select *
{ "source": [ "https://serverfault.com/questions/582696", "https://serverfault.com", "https://serverfault.com/users/31148/" ] }
583,111
In a cron expression, what is the difference between 0/1 , 1/1 and * ?
It depends on where the terms are located. 0/1 means starting at 0, every 1. 1/1 means starting at 1, every 1. * means all possible values. So for the minutes, hours, and day-of-week columns, 0/1 and * are equivalent, as these are 0-based. For the day-of-month and month columns, 1/1 and * are equivalent, as these are 1-based.
{ "source": [ "https://serverfault.com/questions/583111", "https://serverfault.com", "https://serverfault.com/users/213296/" ] }
583,332
I am migrating a server over to new hardware. Part of the system will be rebuilt. What files and directories need to be copied so that usernames, passwords, groups, file ownership and file permissions stay intact? Ubuntu 12.04 LTS.
Start with:
/etc/passwd - user account information, less the encrypted passwords
/etc/shadow - contains the encrypted passwords
/etc/group - user group information
/etc/gshadow - group encrypted passwords
Be sure that the permissions on the files are correct too.
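If it helps, a rough sketch of one way to copy these over while preserving ownership and modes (the hostname newhost is a placeholder; run as root so rsync can keep owner and group):
for f in passwd shadow group gshadow; do
    rsync -a /etc/$f newhost:/etc/$f      # -a preserves mode, owner and group
done
ssh newhost 'ls -l /etc/passwd /etc/shadow /etc/group /etc/gshadow'   # shadow/gshadow must not be world-readable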
{ "source": [ "https://serverfault.com/questions/583332", "https://serverfault.com", "https://serverfault.com/users/74975/" ] }
583,374
I try to configure an Nginx server as a reverse proxy so the https requests it receives from clients are forwarded to the upstream server via https as well. Here's the configuration that I use: http { # enable reverse proxy proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for; upstream streaming_example_com { server WEBSERVER_IP:443; } server { listen 443 default ssl; server_name streaming.example.com; access_log /tmp/nginx_reverse_access.log; error_log /tmp/nginx_reverse_error.log; root /usr/local/nginx/html; index index.html; ssl_session_cache shared:SSL:1m; ssl_session_timeout 10m; ssl_certificate /etc/nginx/ssl/example.com.crt; ssl_certificate_key /etc/nginx/ssl/example.com.key; ssl_verify_client off; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { proxy_pass https://streaming_example_com; } } } Anyway, when I try to access a file using reverse proxy this is the error I get in reverse proxy logs: 2014/03/20 12:09:07 [error] 4113079#0: *1 SSL_do_handshake() failed (SSL: error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message) while SSL handshaking to upstream, client: 192.168.1.2, server: streaming.example.com, request: "GET /publishers/0/645/_teaser.jpg HTTP/1.1", upstream: " https://MYSERVER.COM:443/publishers/0/645/_teaser.jpg ", host: "streaming.example.com" Any idea what I am doing wrong?
I found what the error was: I needed to add proxy_ssl_session_reuse off;
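For reference, a sketch of where the directive can sit, reusing the location block from the question's config (nothing else needs to change):
location / {
    proxy_ssl_session_reuse off;
    proxy_pass https://streaming_example_com;
}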
{ "source": [ "https://serverfault.com/questions/583374", "https://serverfault.com", "https://serverfault.com/users/126044/" ] }
583,517
I have a program that is launched on system startup using Task Scheduler on Windows Server 2012. The program must start even if the computer reboots automatially. Administrator is the account used to start the program, the option "Run whether user is logged on or not" is checked for the task. The problem with this is that when someone finally does log on as Administrator using Remote Desktop Connection the interface (program window) is hidden. As I understand there is no way to solve this using Task Scheduler. How can I solve this? It should be a fairly common problem but I can't find anything by searching the net. I'm pretty surprised that Microsoft allow such a limitation in their scheduler. Can I make a VBScript or something that runs on startup and launches the program which will then be visible when the user actually logs on? Other ideas? (I don't want to have to make a separate GUI-only program that connects to the original program by the way. I would also prefer it if I don't have to terminate the already-running program upon user logon and then launch it again.)
Figured out how to do it myself. It's somewhat of a workaround but that's what I expected to get. Alright, first step is to grab a program called AutoLogon.exe from Microsoft: http://technet.microsoft.com/sv-se/sysinternals/bb963905.aspx Stop! Don't cringe just yet. Read on... Run it, set it so that Administrator should log on automatically. Create a task in Task Scheduler. Set it to run only when user (Administrator) is logged on. Trigger is "at log on" and specify that it's only when Administrator logs on. Create a second task. Run only when user is logged on, trigger at admin log on. Action should be "start a program" and program is "C:\Windows\System32\rundll32.exe" with the argument field set to "user32.dll, LockWorkStation". What happens now if you restart the computer is that Administrator automatically logs on, the program you want to start is started and the work station becomes locked. If I log in via Remote Desktop Connection I can see the program window and use the GUI. I can lock/unlock the computer with no problem and disconnect/reconnect as I please. There's no issue if I go to the server and log in at the actual workstation either. Since Administrator is already signed in the task will not run again (it doesn't create some infinite log-in-lock-loop that you can't break out of). Simple as that. Granted there is a one second time period before the computer becomes locked after the auto login and I guess a pro hacker with physical access to the computer could do something sneaky during this time window but in my case I can overlook that security risk. As long as I don't let any pro hackers into my home and show them the computer the system should be relatively safe. Above all there isn't that much of value on the computer that needs super-vault protection so I'm quite happy with this solution.
{ "source": [ "https://serverfault.com/questions/583517", "https://serverfault.com", "https://serverfault.com/users/213353/" ] }
583,733
If I do dd if=/dev/zero of=/tank/test/zpool bs=1M count=100 how can I treat the file /tank/test/zpool as a vdev, so I can use it as a zpool? It is for zfs testing purposes only.
There is no need to create a loop device, you can simply use the file itself as a vdev: zpool create test /tank/test/zpool
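As an illustration (file names and sizes here are arbitrary), the same trick scales to testing redundant layouts with several sparse files:
truncate -s 1G /tank/test/disk1 /tank/test/disk2 /tank/test/disk3
zpool create testpool raidz /tank/test/disk1 /tank/test/disk2 /tank/test/disk3
zpool status testpool
zpool destroy testpool    # clean up when you are done experimenting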
{ "source": [ "https://serverfault.com/questions/583733", "https://serverfault.com", "https://serverfault.com/users/208796/" ] }
583,884
How can I configure apache so that it refuses connections coming directly to the IP address ( http://xxx.xxx.xxx.xxx ) instead of the vhost name http://example.com ? My VirtualHost configuration: ServerName example.com <VirtualHost *:80> ServerName example.com DocumentRoot /var/www/ <Directory /var/www/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost>
You cannot have it refuse connections, since the hostname (or IP) that the user is trying to use as their HTTP host is not known to the server until the client actually sends an HTTP request. The TCP listener is always bound to the IP address. Would an HTTP error response be acceptable instead?
<VirtualHost *:80>
    ServerName catchall
    <Location />
        Order allow,deny
        Deny from all
    </Location>
</VirtualHost>
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/
    <Directory /var/www/>
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>
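A quick way to sanity-check the result from a client (203.0.113.10 stands in for your server's real IP):
curl -I http://203.0.113.10/                            # hits the catchall vhost, expect 403
curl -I -H "Host: example.com" http://203.0.113.10/     # served by the real vhost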
{ "source": [ "https://serverfault.com/questions/583884", "https://serverfault.com", "https://serverfault.com/users/128954/" ] }
583,890
I want to run Tomcat 7 as a user on CentOS 6. I've created a user tomcat:tomcat and changed the ownership under /var/lib/apache-tomcat* etc... There are lots of docs online on how to do that but I don't think they are current. Most of them indicate that you do it as below. Problem is... this technique will bomb because the tomcat startup etc scripts can't write to the PID due to lower permissions on the file system. I don't want to start loosening write permissions on the file system. The goal is to increase security. What is the better way to do this? I'm surprised there is not a "canned" init script for tomcat. I know it's not complicated. But why do we have to keep reinventing the wheel? Thanks I've been using this one for years. I don't recall where I got it. I just added /bin/su tomcat . # Startup script for the Jakarta Tomcat Java Servlets and JSP server # # chkconfig: - 85 15 # description: Jakarta Tomcat Java Servlets and JSP server # processname: tomcat # pidfile: /var/run/tomcat.pid # config: # Source function library. . /etc/rc.d/init.d/functions # Source networking configuration. . /etc/sysconfig/network # Check that networking is up. [ ${NETWORKING} = "no" ] && exit 0 # Set Tomcat environment. export JAVA_HOME=/usr/lib/jvm/java/ #export CLASSPATH=.:/usr/local/j2sdk/lib/tools.jar:/usr/local/j2re/lib/rt.jar export CATALINA_HOME=/var/lib/tomcat #export CATALINA_OPTS="-server -Xms64m -Xmx512m -Dbuild.compiler.emacs=true" #export PATH=/usr/local/j2sdk/bin:/usr/local/j2re/bin:$PATH export CATALINA_PID=/var/run/tomcat.pid [ -f $CATALINA_HOME/bin/startup.sh ] || exit 0 [ -f $CATALINA_HOME/bin/shutdown.sh ] || exit 0 export PATH=$PATH:/usr/bin:/usr/local/bin # See how we were called. case "$1" in start) # Start daemon. echo -n "Starting Tomcat: " /bin/su tomcat $CATALINA_HOME/bin/startup.sh RETVAL=$? echo [ $RETVAL = 0 ] && touch /var/lock/subsys/tomcat ;; stop) # Stop daemons. echo -n "Shutting down Tomcat: " /bin/su tomcat $CATALINA_HOME/bin/shutdown.sh RETVAL=$? echo [ $RETVAL = 0 ] && rm -f /var/lock/subsys/tomcat ;; restart) $0 stop sleep 1 $0 start ;; condrestart) [ -e /var/lock/subsys/tomcat ] && $0 restart ;; status) status tomcat ;; *) echo "Usage: $0 {start|stop|restart|status}" exit 1 esac exit 0
{ "source": [ "https://serverfault.com/questions/583890", "https://serverfault.com", "https://serverfault.com/users/199962/" ] }
584,012
I have used Linux for quite a while, and I have always wondered how Windows handles program dependencies the way apt-get, aptitude, Pacman, yum and other package managers are able to. Sometimes, my package manager would tell me that this version of that library was needed for this package, or that there would be some conflict. How does Windows handle all of that stuff?
It doesn't. Unless we're talking about .NET, which asks you to install framework version X depending on what the program was compiled against. Everything else just throws an error. With luck, you get "missing dll xxxx.dll". However, most installers will have the required libraries included in order to run the software.
{ "source": [ "https://serverfault.com/questions/584012", "https://serverfault.com", "https://serverfault.com/users/125071/" ] }
584,252
I'm doing: aws iam upload-server-certificate --server-certificate-name MysiteCertificate --certificate-body Downloads/mysite/mysite.crt --private-key mysite.pem --certificate-chain Downloads/mysite/COMODOSSLCA.crt I'm getting an error though: A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to parse certificate. Please ensure the certificate is in PEM format. It is a valid pem file though =(
Add a file:// before the file names.
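Applied to the command from the question, that would look like this (paths unchanged, only the file:// scheme added):
aws iam upload-server-certificate --server-certificate-name MysiteCertificate --certificate-body file://Downloads/mysite/mysite.crt --private-key file://mysite.pem --certificate-chain file://Downloads/mysite/COMODOSSLCA.crt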
{ "source": [ "https://serverfault.com/questions/584252", "https://serverfault.com", "https://serverfault.com/users/94395/" ] }
584,510
To make setting up passwordless SSH easier on new machines and environments, is there any reason why the id_rsa.pub file (just the public half of the key pair) could not be published somewhere on the web? For example in a dotfiles GitHub repository. I'm aware that: the id_rsa file (the private half of the key pair) must be carefully guarded, and the key pair should be protected with a passphrase. But my searches haven't turned up any explicit advice that this is allowed or encouraged. Out of curiosity, would the same advice hold for a keypair without a passphrase?
RSA is specifically designed to allow you to share that public key, so yes, you can publish it. This is pretty similar to how x.509 (and SSL) with RSA certificates works. Before publishing the file, actually look at it; the only things that need to be in there are the keyword "ssh-rsa" and the base64-encoded key. You may want to keep it to that (I believe this is the default now). This is true whether or not the key has a passphrase. A passphrase encrypts the private key and does not affect the public key. Ensure, as always, that your key is sufficiently entropic and large. If it is generated by a broken PRNG it might be predictable. However, publishing this doesn't present much additional risk, since if the keyspace is that small an attacker could simply try with all the keys in the enumerated keyspace until they get the right one. I suggest using a 4096-bit key (specify -b 4096 ), so that it will be more difficult than usual (the default is 2048) for someone to invert your public key into a private one. That is the only significant risk in doing this, and it isn't a very big one since the algorithm is specifically designed to make it impractical.
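For example, a sketch of generating and inspecting such a key (paths are the OpenSSH defaults, the comment is arbitrary):
ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub    # safe to publish: key type, base64 blob and optional comment only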
{ "source": [ "https://serverfault.com/questions/584510", "https://serverfault.com", "https://serverfault.com/users/45143/" ] }
584,635
I'm trying to setup a simple Amazon AWS S3 based website, as explained here . I've setup the S3 bucket (simples3websitetest.com), gave it the (hopefully) right permissions: { "Version": "2012-10-17", "Statement": [ { "Sid": "AddPerm", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::simples3websitetest.com/*" ] } ] } I uploaded index.html, setup website access, and it is accessible via http://simples3websitetest.com.s3-website-us-west-2.amazonaws.com/index.html So far so good, now I want to setup Amazon Route53 access and this is where I got stuck. I've setup a hosted zone on a domain I own (resourcesbox.net), and clicked "create record set", and got to the "setup alias" step, but I get "No targets available" under S3 website endpoints when I try to set the alias target. What did I miss??
The A-record alias you create has to be the same as the name of the bucket, because virtual hosting of buckets in S3 requires that the Host: header sent by the browser match the bucket name. There's not really another practical way in which virtual hosting of buckets could be accomplished... the bucket has to be identified by some mechanism, and that mechanism is the http headers. In order to create an alias to a bucket inside the "example.com" domain, the bucket name is going to have to also be a hostname you can legally declare within that domain... the Route 53 A-Record "testbucket.example.com," for example, can only be aliased to a bucket called "testbucket.example.com" ... and no other bucket. In your question, you're breaking this constraint... but you can only create an alias to a bucket named "simples3websitetest.com" inside of (and at the apex of) the "simples3websitetest.com" domain. This is by design, and not exactly a limitation of Route 53 nor of S3. They're only preventing you from doing something that can't possibly work. Web servers are unaware of any aliasing or CNAMEs or anything else done in the DNS -- they only receive the original hostname that the browser believes it is trying to connect to, in the http headers sent by the browser ... and S3 uses this information to identify the name of the bucket to which the virtual hosted request applies. Amazon S3 requires that you give your bucket the same name as your domain. This is so that Amazon S3 can properly resolve the host headers sent by web browsers when a user requests content from your website. Therefore, we recommend that you create your buckets for your website in Amazon S3 before you pay to register your domain name. http://docs.aws.amazon.com/gettingstarted/latest/swh/getting-started-create-bucket.html#bucket-requirements Note, however, that this restriction only applies when you are not using CloudFront in front of your bucket. With CloudFront, there is more flexibility, because the Host: header can be rewritten (by CloudFront itself) before the request is passed through to S3. You configure the "origin host" in your CloudFront distribution as your-bucket.s3-website-xx-yyyy-n.amazonaws.com where xx-yyyy-n is the AWS region of S3 where your bucket was created. This endpoint is shown in the S3 console for each bucket.
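As a sketch of the naming constraint with the AWS CLI (domain and region are placeholders), the bucket has to carry the exact name you intend to alias:
aws s3 mb s3://www.example.com --region us-west-2
aws s3 website s3://www.example.com --index-document index.html
# The Route 53 alias record "www.example.com" can then target that bucket's website endpoint.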
{ "source": [ "https://serverfault.com/questions/584635", "https://serverfault.com", "https://serverfault.com/users/126289/" ] }
584,708
My understanding is that the SPF spec specifies that an email receiver shouldn't have to do more than 10 DNS lookups in order to gather all the allowed IPs for a sender. So if an SPF record has include:foo.com include:bar.com include:baz.com and those three domains each have SPF records which also have 3 include entries, now we are up to 3+3+3+3=12 DNS lookups. Is my understanding above correct? I only use 2 or 3 services for my domain and I am already way past this limit. Is this limit typically (or ever) enforced by major/minor email providers?
Both libspf2 (C) and Mail::SPF::Query (perl, used in sendmail-spf-milter) implement a limit of 10 DNS-causing mechanisms, but the latter does not (AFAICT) apply the MX or PTR limits. libspf2 limits each of mx and ptr to 10 also. Mail::SPF (perl) has a limit of 10 DNS-causing mechanisms, and a limit of 10 lookups per mechanism, per MX and per PTR. (The two perl packages are commonly, though not by default, used in MIMEDefang.) pyspf has limits of 10 on all of: "lookups", MX, PTR, CNAME; but it explicitly multiplies MAX_LOOKUPS by 4 during operation. Unless in "strict" mode, it also multiplies MAX_MX and MAX_PTR by 4. I can't comment on commercial/proprietary implementations, but the above (except pyspf) clearly implement an upper limit of 10 DNS-triggering mechanisms (more on that below), give or take, though in most cases it can be overridden at run-time. In your specific case you are correct, it is 12 includes and that exceeds the limit of 10. I would expect most SPF software to return "PermError"; however, failures will only affect the final "included" provider(s) because the count will be calculated as a running total: SPF mechanisms are evaluated left-to-right and checks will "early-out" on a pass, so it depends on where in the sequence the sending server appears. The way around this is to use mechanisms which do not trigger DNS lookups, e.g. ip4 and ip6, and then use mx if possible as that gets you up to 10 further names, each of which can have more than one IP. Since SPF results in arbitrary DNS requests with potentially exponential scaling, it could easily be exploited for DoS/amplification attacks. It has deliberately low limits to prevent this: it does not scale the way you want. 10 mechanisms (strictly mechanisms + the "redirect" modifier) causing DNS look-ups is not exactly the same thing as 10 DNS look-ups though. Even "DNS lookups" is open to interpretation: you don't know in advance how many discrete lookups are required, and you don't know how many discrete lookups your recursive resolver may need to perform (see below). RFC 4408 §10.1:
SPF implementations MUST limit the number of mechanisms and modifiers that do DNS lookups to at most 10 per SPF check, including any lookups caused by the use of the "include" mechanism or the "redirect" modifier. If this number is exceeded during a check, a PermError MUST be returned. The "include", "a", "mx", "ptr", and "exists" mechanisms as well as the "redirect" modifier do count against this limit. The "all", "ip4", and "ip6" mechanisms do not require DNS lookups and therefore do not count against this limit. [...] When evaluating the "mx" and "ptr" mechanisms, or the %{p} macro, there MUST be a limit of no more than 10 MX or PTR RRs looked up and checked.
So you may use up to 10 mechanisms/modifiers which trigger DNS lookups. (The wording here is poor: it seems to state only the upper bound of the limit; a conforming implementation could have a limit of 2.) §5.4 for the mx mechanism, and §5.5 for the ptr mechanism each have a limit of 10 lookups of that kind of name, and that applies to the processing of that mechanism only, e.g.:
To prevent Denial of Service (DoS) attacks, more than 10 MX names MUST NOT be looked up during the evaluation of an "mx" mechanism (see Section 10).
i.e. you may have 10 mx mechanisms, with up to 10 MX names, so each of those may cause 20 DNS operations (10 MX + 10 A DNS lookups each) for a total of 200. 
It's similar for ptr or %{p}: you can look up 10 ptr mechanisms, hence 10x10 PTRs; each PTR also requires an A lookup, again a total of 200. This is exactly what the 2009.10 test suite checks; see the "Processing Limits" tests. There is no clearly stated upper limit on the total number of client DNS lookup operations per SPF check; I calculate it as implicitly 210, give or take. There is also a suggestion to limit the volume of DNS data per SPF check, though no actual limit is suggested. You can get a rough estimate as SPF records are limited to 450 bytes (which is sadly shared with all other TXT records), but the total could exceed 100kiB if you're generous. Both those values are clearly open to potential abuse as an amplification attack, which is exactly what §10.1 says you need to avoid. Empirical evidence suggests a total of 10 lookup mechanisms is commonly implemented in records (check out the SPF for microsoft.com, who seem to have gone to some lengths to keep it to exactly 10). It's hard to collect evidence of too-many-lookups failure because the mandated error code is simply "PermError", which covers all manner of problems (DMARC reporting might help with that, though). The OpenSPF FAQ perpetuates the limit of a total of "10 DNS lookups", rather than the more precise "10 DNS-causing mechanisms or redirects". This FAQ is arguably wrong since it actually says "Since there is a limit of 10 DNS lookups per SPF record, specifying an IP address [...]", which is in disagreement with the RFC, which imposes the limits on an "SPF check" operation, does not limit DNS lookup operations in this way, and clearly states an SPF record is a single DNS text RR. The FAQ would imply that you restart the count when you process an "include" as that is a new SPF record. What a mess. DNS Lookups: What is a "DNS lookup" anyway? As a user, I would consider "ping www.microsoft.com" to involve a single DNS "lookup": there's one name that I expect to turn into one IP. Simple? Sadly not. As an administrator I know that www.microsoft.com might not be a simple A record with a single IP, it might be a CNAME that in turn needs another discrete lookup to obtain an A record, albeit one that my upstream resolver will probably perform rather than the resolver on my desktop. Today, for me, www.microsoft.com is a chain of 3 CNAMEs that finally end up as an A record on akamaiedge.net; that's (at least) 4 DNS query operations for someone. Finally, as a DNS administrator I know that answering (almost) any question involves many discrete DNS operations, individual questions and answer transactions (UDP datagrams): assuming an empty cache, a recursive resolver needs to start at the DNS root and work its way down: . → com → microsoft.com → www.microsoft.com asking for specific types of records (NS, A etc) as required, and dealing with CNAMEs. You can see this in action with dig +trace www.microsoft.com, though you probably won't get the exact same answer due to geolocation trickery (example here). (There's even a little bit more to this complexity since SPF piggybacks on TXT records, and obsolete limits of 512 bytes on DNS answers might mean retrying queries over TCP.) So what does SPF consider as a lookup? It's really closest to the administrator point of view: it needs to be aware of the specifics of each type of DNS query (but not to the point where it actually needs to count individual DNS datagrams or connections).
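If you want to eyeball a record against the limit yourself, a rough manual check (example.com and spf.mailvendor.com are placeholders) is to pull the TXT record, count the include/a/mx/ptr/exists mechanisms plus any redirect modifier, and repeat for every included domain:
dig +short TXT example.com | grep -i 'v=spf1'
dig +short TXT spf.mailvendor.com | grep -i 'v=spf1'    # repeat for each include: target found above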
{ "source": [ "https://serverfault.com/questions/584708", "https://serverfault.com", "https://serverfault.com/users/19317/" ] }
584,986
I created the user MY_USER. Set his home dir to /var/www/RESTRICTED_DIR, which is the path he should be restricted to. Then I edited sshd_config and set: Match user MY_USER ChrootDirectory /var/www/RESTRICTED_DIR Then I restarted ssh. Made MY_USER owner (and group owner) of RESTRICTED_DIR, and chmodded it to 755. I get Accepted password for MY_USER session opened for user MY_USER by (uid=0) fatal: bad ownership or modes for chroot directory component "/var/www/RESTRICTED_DIR" pam_unix(sshd:session): session closed for user MY_USER If I removed the 2 lines from sshd_config the user can login successfully. Of course it can access all the server though. What's the problem? I even tried to chown RESTRICTED_DIR to root (as I read somewhere that someone solved this same problem doing it). No luck..
From the man page:
ChrootDirectory: Specifies the pathname of a directory to chroot(2) to after authentication. All components of the pathname must be root-owned directories that are not writable by any other user or group. After the chroot, sshd(8) changes the working directory to the user's home directory.
My guess is one or more of the directories on the path do not meet those requirements (my suspicion is that www is owned or writable by your web user, not root). Go back and follow the directions, ensuring that the emphasised requirement above (root-owned, not writable by any other user or group) is met for every path component.
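A quick way to audit every component of the chroot path is shown below (the path comes from the question; namei is part of util-linux):
namei -l /var/www/RESTRICTED_DIR                    # prints owner, group and mode of each component
chown root:root /var/www /var/www/RESTRICTED_DIR    # every component must be root-owned
chmod 755 /var/www /var/www/RESTRICTED_DIR          # and not writable by group/other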
{ "source": [ "https://serverfault.com/questions/584986", "https://serverfault.com", "https://serverfault.com/users/123313/" ] }
585,089
I've discovered (via looking at mod_pagespeed cache entries) that a completely random domain I've never heard of before is resolving to my website. If I visit this domain, my website loads. The DNS for that domain is pointing to my server's IP. Right now in my vhost config I have *:80, which I'm guessing is where I'm going wrong. I immediately changed this to example.com:80 where example.com is my domain. Assuming this would mean the server would only respond to and fulfil requests for my domain name, rather than any request on port 80. My original vhost config; <VirtualHost *:80> DocumentRoot "/var/www/example.com" <Directory "/var/www/example.com"> Order allow,deny Allow from all Allowoverride all </Directory> </VirtualHost> My new tried config; Listen 80 ServerName example.com <VirtualHost example.com:80> DocumentRoot "/var/www/example.com" <Directory "/var/www/example.com"> Order allow,deny Allow from all Allowoverride all </Directory> </VirtualHost> When I tried to restart apache with the new config I got the following error: * Restarting web server apache2 [Fri Mar 28 08:55:47.821904 2014] [core:error] [pid 5555] (EAI 2)Name or service not known: AH00549: Failed to resolve server name for 152.155.254.241 (check DNS) -- or specify an explicit ServerName (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80 Note: The IP beginning 152 in the above error has been slightly edited, but the original wasn't my server's IP address anyway. Can anyone offer advice on this issue? Is the domain (actually there's a couple) that is resolving to my website innocently just the previous user of the dedicated server, whose DNS is just still pointing to it? How can I resolve the apache virtual host config issue, and any other advice is welcome. Thanks.
There's probably no harm in having those other domains pointing to your host, except of course that it increases the load on your server. If you want to block them, set up new virtual hosts for them:
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName example.com
    # example.com configuration
</VirtualHost>
<VirtualHost *:80>
    ServerName baddomain.com
    Deny from all
</VirtualHost>
Instead of Deny from all you could use Redirect permanent /error.html to show them a custom error message. You could repeat the second VirtualHost for each domain you want to block, or if there are a lot of them, put it first to make it the default VirtualHost, and make exceptions for your domain(s):
NameVirtualHost *:80
<VirtualHost *:80>
    # default VirtualHost
    Deny from all
</VirtualHost>
<VirtualHost *:80>
    ServerName example.com
    # example.com config
</VirtualHost>
As for your error messages, it seems that Apache couldn't resolve the hostname example.com when it started, or couldn't find your ServerName directive. Not sure why. The second error says that port 80 is already in use on your host. Did you finish shutting down all of the previous instances of Apache?
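After reloading you can check which vhost Apache treats as the default (the command may be spelled apache2ctl on Debian/Ubuntu):
apachectl -S    # the "default server" should be the catch-all/deny vhost, with example.com listed as a namevhost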
{ "source": [ "https://serverfault.com/questions/585089", "https://serverfault.com", "https://serverfault.com/users/208564/" ] }
585,528
How do I change gitlab's default port 80 to a custom port number? There are two approaches I've tried: Set the port in /etc/gitlab/gitlab.rb external_port "8888" Then run reconfigure: gitlab-ctl reconfigure Set port in /var/opt/gitlab/gitlab-rails/etc/gitlab.yml production: &base # # 1. GitLab app settings # ========================== ## GitLab settings gitlab: ## Web server settings (note: host is the FQDN, do not include http://) host: gitlab.blitting.com port: 8888 https: false Then restart gitlab gitlab-ctl stop gitlab-ctl start With both of these, gitlab continues to run on the default 80 port number.
Chad Carbert's answer still applies, but I just want to add extra detail for version 7.0.0. Open "/etc/gitlab/gitlab.rb" in your text editor, where currently I have external_url http://127.0.0.1/ or something similar. I needed to change external_url to the DNS name including the port number (e.g. 'http://gitlab.com.local:81/'), then reconfigure using the command "sudo gitlab-ctl reconfigure". GitLab is now working on port 81. Step by step:
sudo -e /etc/gitlab/gitlab.rb
Change external_url from yourdomain.com to yourdomain.com:9999 (9999 -> the port you want it to run on)
sudo gitlab-ctl reconfigure
{ "source": [ "https://serverfault.com/questions/585528", "https://serverfault.com", "https://serverfault.com/users/207451/" ] }
585,862
In the mtr man pages, it reads: "mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool". I use mtr a lot, and find that it's much faster than traceroute. Instinctively, mtr gives me the answer immediately, while traceroute lists each IP address about once per second. On my own computer, I used time mtr www.google.com and time traceroute www.google.com; the result was 21.9s vs 6.1s. The question is why? Since mtr = ping + traceroute, doesn't that mean it should be slower than, or at least the same as, traceroute? Can anyone give me a reasonable and detailed answer?
Parallelism is a major reason for variation in the speed of these tools. Another contributing factor is how long they wait for a reply before the hop is considered to not be responding. If reverse DNS is performed, you have to wait for that as well. The plain traceroute command gets much quicker, if you disable reverse DNS. Another important difference, which I did not see mentioned, is how the two tools render the output. Traceroute produces the output in order top down. Mtr renders output in a different way, where mtr can go back and update output on previous lines. This means mtr can display output as soon as it is available, because if later replies cause that output to not be accurate, mtr can go back and update it. Since traceroute cannot go back and update output, it has to wait until it has ultimately decided what it will display. For example if hop number 2 is not responding (which is a symptom I have seen on multiple ISPs), traceroute will display hop number 1 and then wait for a while before it displays hop number 2 and 3. Even though the reply from hop number 3 has arrived it is not being displayed because traceroute is still waiting for the reply from hop number 2. Mtr does not have that restriction and can display the reply from hop number 3 and still go back to display the reply from hop number 2, if it arrives later. Too much parallelism can cause the output to become inaccurate. In some scenarios there are limits to how many packets you can get replies for. Sending more packets in those cases will not speed up the process, it will however cause more lost packets, as you get the same number of replies with more packets being sent. One example of this is when a hop on the route does not reply to ARP requests. Usually the first packet will trigger an ARP request, and if more packets arrive before the ARP request times out, only the last of those packets will be buffered and get a reply. Another difference is in how many hops with no responses will be displayed before the tool stops displaying more hops. I have seen the traceroute command continue for as many hops as requested (30 by default), while the mtr command would stop as soon as it had passed five hops with no responses.
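To compare the two on a more even footing, disable reverse DNS on both and put mtr into one-shot report mode (the hostname is just an example):
traceroute -n www.google.com                        # -n skips reverse DNS lookups
mtr -n --report --report-cycles 1 www.google.com    # one probe round, no DNS, prints and exits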
{ "source": [ "https://serverfault.com/questions/585862", "https://serverfault.com", "https://serverfault.com/users/214731/" ] }
586,171
I recently started using a VPS from OVH: http://www.ovh.co.uk/vps/vps-classic.xml This is likely a problem very specific to this one provider. My goal is to install and run Docker on it; for this I need a kernel that supports modules. By default, OVH's VPS machines use a custom kernel that does not, and Docker crashes. I tried reinstalling the machine a few times with various versions of Debian (6, 7) and Ubuntu (12.04, 13.10) available for their VPSs; every time uname -r shows me:
uname -r
2.6.32-042stab084.14
... The /boot directory is empty, there is no grub nor lilo installed, and there are no linux-image packages installed, though they are available. Installing a Linux kernel from the repository, installing grub, updating grub (this is the widespread advice I found by googling) and rebooting the machine has little effect. Grub finds one system image, the freshly installed one, /boot gets populated, but the system still runs the kernel mentioned above. This and the above symptoms puzzle me greatly: how exactly does this machine boot in the first place? Net boot perhaps? How do I check it, and how do I change this behaviour? Following the netboot idea I checked Google again; this told me there is an option in OVH web manager version 3 to change net boot settings. I use manager v.6 to tinker with my VPS (they say the functionality is moved there), but I found no such option there, and previous manager versions don't even see my VPS. This is how far I went until now. I want to run a standard repository kernel on this VPS, and would also welcome any explanations of how this setup works and why it is so problematic, because right now I feel rather confused :)
You cannot run your own kernel on a VPS using OpenVZ. You would have to upgrade from OVH's VPS Classic service to their VPS Cloud service, which runs VMware and would allow you to run a customised kernel.
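If you want to confirm the virtualisation type from inside the guest, a couple of quick checks (the first may need root; the second only exists on systemd-based images):
cat /proc/user_beancounters    # present only inside OpenVZ/Virtuozzo containers
systemd-detect-virt            # prints e.g. openvz, kvm or vmware where available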
{ "source": [ "https://serverfault.com/questions/586171", "https://serverfault.com", "https://serverfault.com/users/145941/" ] }
586,178
I have to move some file shares from a machine with Win2008R2 Enterprise to another with Win2008 R2 Standard. The goal is to have the file shares only on the second, dedicated machine. Currently these shares are accessed by the users and some applications through UNCs similar to \\app.bizunit.example.com\share_name , where app.bizunit.example.com is an alias for the first machine. The tricky requirement is to keep the same UNCs to avoid reconfiguring the related applications. But at the same time we would like to keep the app.bizunit.example.com alias pointing to the first machine so it could still be used to reach its remaining services. I looked into DFS namespaces but unfortunately I didn't manage to accomplish the result I'm after. What I managed to accomplish with DFS-N ("Stand-Alone Namespace" more specifically) though is to have the \\app.bizunit.example.com\<namespace_name>\share_name , which is not optimal.
{ "source": [ "https://serverfault.com/questions/586178", "https://serverfault.com", "https://serverfault.com/users/2361/" ] }
586,204
I am setting up a pretty decent server running Windows Server 2008 R2 to be a remote (colocated) Hyper-V host. It will be hosting Linux and Windows VMs, initially for developers to use but eventually also to do some web hosting and other tasks. Currently I have two VMs, one Windows and one Ubuntu Linux, running pretty well, and I plan to clone them for future use. Right now I'm considering the best ways to configure developer and administrator access to the server once it is moved into the colocation facility, and I'm seeking advice on that. My thought is to set up a VPN for access to certain features of the VMs on the server, but I have a few different options for going about this: Connect the server to an existing hardware firewall (an old-ish Netscreen 5-GT) that can create a VPN and map external IPs to the VMs, which will have their own IPs exposed through the virtual interface. One problem with this choice is that I'm the only one trained on the Netscreen, and its interface is a bit baroque, so others may have difficulty maintaining it. The advantage is that I already know how to do it, and I know it will do what I need. Connect the server directly to the network and configure the Windows 2008 firewall to restrict access to the VMs and set up a VPN. I haven't done this before, so it will have a learning curve, but I'm willing to learn if this option is better long-term than the Netscreen. Another advantage is that I won't have to train anyone on the Netscreen interface. Still, I'm not certain about the capabilities of the Windows software firewall as far as creating VPNs, setting up rules for external access to certain ports on the IPs of the Hyper-V servers, etc. Will it be sufficient for my needs and easy enough to set up and maintain? Anything else? What are the limitations of my approaches? What are the best practices / what has worked well for you? Remember that I need to set up developer access as well as consumer access to some services. Is a VPN even the right choice?
{ "source": [ "https://serverfault.com/questions/586204", "https://serverfault.com", "https://serverfault.com/users/214907/" ] }
586,216
We have our organization's primary domain (with AD) example.com. In the past, previous admins have created several other zones - such as dmn.com, lab.example.com, dmn-geo.com etc - as well as subdomains and delegates all of which are for different engineering groups. Our DNS right now is a bit of a mess. And of course this causes problems when someone at a workstation in example.com needs to connect to a system in any of these other zones/subdomains, or vice versa (partially because zone transfer and delegation aren't properly configured for most of them). Our production DNS is integrated with Active Directory, but engineering systems should be isolated from AD. We're discussing ways of reorganizing DNS and consolidating all of these different entries. I see three different avenues we can take: Create a new zone i.e. 'dmn.eng'. This could either be managed by IT, using our DNS servers or engineering using their nameservers. Create a new delegate eng.example.com, consolidate engineering DNS into that subdomain, and let the engineers manage the nameserver for the delegate. Create a new subdomain eng.example.com with no delegation and manage DNS for the subdomain ourselves. I favor creating a delegate subdomain and letting the engineers have full control over their own DNS structure within that subdomain. The advantage is that if their DNS doesn't work, it's most likely not my fault ;). However, there is still some ambiguity about responsibility when something doesn't work and it will require coordination with engineering to set up, configure, and administer. If we do not delegate the subdomain, that means a lot more work for production IT handling non-production DNS (which we've essentially been doing a lot of already). The advantage is that we have full control over all DNS and when something doesn't work there is no doubt about whose responsibility it is to fix it. We can also add delegates, such as geo.eng.example.com, to give engineering more flexibility and control when they need it. I am really uncertain about the necessity or benefits of creating a new zone, dmn.eng. So what are the industry best practices and recommendations for situations like this? What solution would be the simplest to implement and provide seamless name resolution between engineering and production? What are some potential benefits or pitfalls to each solution that I may be missing? To add a little more information, we're a fairly large manufacturing company. These engineers work in R&D, Development, and QA. Labs frequently have their own subnets or whole networks, DHCP, etc. In regards to organization and technology, they are kind of their own little world. We want to maintain some level of network isolation for engineering labs and networks in order to protect our production environment (reference an earlier question regarding engineers who added engineering DHCP servers as authoritative AD DHCP servers - that should not be able to happen). However, a user at a workstation in a lab will need to access a resource in our production network, or a user at a workstation on our production network will need to connect to a lab system, and this happens with enough frequency to justify a sort-of unified DNS. The existing delegates already have DNS servers managed by engineering, but there is no communication between the engineers in the different labs where these servers are set up, so the most common problem is failed name resolution between subdomains. 
Since the engineers own those delegate servers, I can't correct the NS entries to get them talking to each other - hence the advantage of non-delegated DNS wholly owned by IT. But managing DNS for production and engineering is a headache, especially as engineering can make DNS changes on a daily basis. But as BigHomie mentioned in his answer, this probably means engineering will have to hire (or designate) a real DNS admin; and that person and I will have to become fairly well acquainted. I don't necessarily like the idea of creating a new zone with an arbitrary top-level domain name or suffix either, but we already have 5 other zones with arbitrary names, so consolidating into a single one is still an improvement. I do know that other companies exist that do have separate top-level zones for different groups in their organization, so I'm curious about when that is appropriate and what the advantages/disadvantages of that approach are. FYI, I've only been at this company for a few months and the previous AD/DNS admin left the company, so I have nothing to reference as to why any of the existing DNS structure exists.
{ "source": [ "https://serverfault.com/questions/586216", "https://serverfault.com", "https://serverfault.com/users/159902/" ] }
586,486
I would like to do some NAT in iptables, so that all the packets coming to 192.168.12.87 on port 80 will be forwarded to 192.168.12.77 on port 80. How do I do this with iptables? Or is there any other way to achieve the same?
These rules should work, assuming that iptables is running on server 192.168.12.87:
#!/bin/sh
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -F
iptables -t nat -F
iptables -X
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.12.77:80
iptables -t nat -A POSTROUTING -p tcp -d 192.168.12.77 --dport 80 -j SNAT --to-source 192.168.12.87
You have to DNAT incoming traffic on port 80, but you will also need to SNAT the traffic back. Alternative (and best approach IMHO): depending on what your web server is (Apache, NGinx) you should consider an HTTP proxy on your front-end server (192.168.12.87):
mod_proxy (Apache)
proxy_pass (NGinx)
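To verify the rules and counters afterwards (run on 192.168.12.87), something like this should do:
iptables -t nat -L -n -v          # shows the DNAT/SNAT rules and how many packets hit them
sysctl net.ipv4.ip_forward        # must be 1, and should be made persistent in /etc/sysctl.conf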
{ "source": [ "https://serverfault.com/questions/586486", "https://serverfault.com", "https://serverfault.com/users/122847/" ] }
586,506
I am now looking for a way to monitor server hardware such as fans/power supplies/etc. The problem is, we have a very dynamic environment - servers are automatically powered on/off, even several times a day, depending on load. I created templates for our Supermicro servers (we have just 3-4 types of them, so they are very specific) that contain a fan speed check (0 means the fan is dead). However, every time I turn off the server the fan speed is also 0. So I am now searching for how to get the power status (or any other indicator that the server is running) over IPMI, to send a Zabbix alert only if the server is running. IPMI is unfortunately the requirement, because we monitor some servers this way that we don't have access to. I'd like to avoid writing a script that will run something like: ipmitool power status. Zabbix has an amazing IPMI integration, so I'd like to use it as much as possible. ipmitool sensor returns:
root@virt1:~# ipmitool sensor
System Temp | 28.000 | degrees C | ok | -9.000 | -7.000 | -5.000 | 75.000 | 77.000 | 79.000
CPU Temp | 0x0 | discrete | 0x0000| na | na | na | na | na | na
FAN 1 | 8355.000 | RPM | ok | 400.000 | 585.000 | 770.000 | 29260.000 | 29815.000 | 30370.000
FAN 2 | 8355.000 | RPM | ok | 400.000 | 585.000 | 770.000 | 29260.000 | 29815.000 | 30370.000
FAN 3 | 8725.000 | RPM | ok | 400.000 | 585.000 | 770.000 | 29260.000 | 29815.000 | 30370.000
FAN 4 | na | RPM | na | na | na | na | na | na | na
CPU Vcore | 1.144 | Volts | ok | 0.640 | 0.664 | 0.688 | 1.344 | 1.408 | 1.472
+3.3VCC | 3.280 | Volts | ok | 2.816 | 2.880 | 2.944 | 3.584 | 3.648 | 3.712
+12 V | 12.031 | Volts | ok | 10.494 | 10.600 | 10.706 | 13.091 | 13.197 | 13.303
DIMM | 1.544 | Volts | ok | 1.152 | 1.216 | 1.280 | 1.760 | 1.776 | 1.792
+5 V | 5.216 | Volts | ok | 4.096 | 4.320 | 4.576 | 5.344 | 5.600 | 5.632
+5VSB | 5.056 | Volts | ok | 4.096 | 4.320 | 4.576 | 5.344 | 5.600 | 5.632
VBAT | 3.232 | Volts | ok | 2.816 | 2.880 | 2.944 | 3.584 | 3.648 | 3.712
+3.3VSB | 3.280 | Volts | ok | 2.816 | 2.880 | 2.944 | 3.584 | 3.648 | 3.712
AVCC | 3.280 | Volts | ok | 2.816 | 2.880 | 2.944 | 3.584 | 3.648 | 3.712
Chassis Intru | 0x0 | discrete | 0x0000| na | na | na | na | na | na
PS Status | 0x1 | discrete | 0x01ff| na | na | na | na | na | na
root@virt1:~#
{ "source": [ "https://serverfault.com/questions/586506", "https://serverfault.com", "https://serverfault.com/users/201130/" ] }
586,511
I want to run postfix + dovecot on my server. I'm doing it for first time. Postfix and Dovecot works on local - I can connect through telnet, and send/receive e-mails using commands. When I try to connect from outside, using mail client (in my case Thunderbird), I always see in client alert: Authentication failed . My /var/log/maillog file says: dovecot: pop3-login: Disconnected: Shutting down (auth failed, 6 attempts): user=<test.user>, method=PLAIN, rip=xxx.xxx.xxx, lip=xxx.xxx.xxx . Distribution is CentOS 6 EDIT ( postconf -n ): alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 home_mailbox = Maildir/ html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = mydomain.com myhostname = host.mydomain.com mynetworks = xxx.xxx.xxx/24 127.0.0.0/8 mynetworks_style = subnet myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop unknown_local_recipient_reject_code = 550 When I want to send an e-mail to outside host from Thunderbird, I get 5.7.1 <[email protected]>: Relay access denied . Error log for this: NOQUEUE: reject: RCPT from my_vps_domain[xxx.xxx.xxx.xxx]: 554 5.7.1 <[email protected]>: Relay access denied; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[127.0.0.1]>
{ "source": [ "https://serverfault.com/questions/586511", "https://serverfault.com", "https://serverfault.com/users/215075/" ] }
586,586
In Nginx we have been trying to redirect a URL as follows: http://example.com/some/path -> http://192.168.1.24 where the user still sees the original URL in their browser. Once the user is redirected, say they click on the link to /section/index.html , we would want this to make a request that leads to the redirect http://example.com/some/path/section/index.html -> http://192.168.1.24/section/index.html and again still preserve the original URL. Our attempts have involved various solutions using proxies and rewrite rules, and below shows the configuration that has brought us closest to a solution (note that this is the web server configuration for the example.com web server). However, there are still two problems with this: It does not perform the rewrite properly, in that the request URL received by the web server http://192.168.1.24 includes /some/path and therefore fails to serve the required page. When you hover on a link once a page has been served, /some/path is missing from the URL server { listen 80; server_name www.example.com; location /some/path/ { proxy_pass http://192.168.1.24; proxy_redirect http://www.example.com/some/path http://192.168.1.24; proxy_set_header Host $host; } location / { index index.html; root /var/www/example.com/htdocs; } } We are looking for a solution that only involves changing the web server configuration on example.com . We are able to change the config on 192.168.1.24 (also Nginx), however we want to try and avoid this because we will need to repeat this setup for hundreds of different servers whose access is proxied through example.com .
First, you shouldn't use the root directive inside the location block; it is a bad practice. In this case it doesn't matter, though. Try adding a second location block:
location ~ /some/path/(?<section>.+)/index.html {
    proxy_pass http://192.168.1.24/$section/index.html;
    proxy_set_header Host $host;
}
This captures the part after /some/path/ and before index.html into a $section variable, which is then used to set the proxy_pass destination. You can make the regex more specific if you require.
{ "source": [ "https://serverfault.com/questions/586586", "https://serverfault.com", "https://serverfault.com/users/178726/" ] }
586,714
Is there a way to scan for free IPs on the network? I use nmap -sP 192.168.1.0/24 but this actually shows hosts that are up.
Using Nmap like this is a fairly accurate way of doing what you asked, provided that some preconditions are true: You must run the scan as root (or Administrator on Windows) in order to send ARP requests, not TCP connections. Otherwise the scan may report an address as "down" when it is simply firewalled. You can only do this from a system on the same data link (layer 2) as the address range you are scanning. Otherwise, Nmap will need to use network-layer probes which can be blocked by a firewall. In order to get the "available" addresses, you need to get the list of addresses that Nmap reports as "down." You can do this with a simple awk command: sudo nmap -v -sn -n 192.168.1.0/24 -oG - | awk '/Status: Down/{print $2}' Summary of Nmap options used: When you use the -v option, Nmap will print the addresses it finds as "down" in addition to the ones that are "up". Instead of -sP , I've substituted the newer spelling -sn , which still accomplishes the same scan, but means "skip the port scan" instead of the misleading "Ping scan" (since the host discovery phase does not necessarily mean an ICMP Echo scan or Ping). The -n option skips reverse DNS lookups, which buys you a bit of time, since you aren't interested in names but just IP addresses. The -oG option tells Nmap to output grepable format, which is easier for awk to process. The argument " - " tells it to send this output to stdout. The awk command then searches for "Status: Down" and prints the second field, containing the IP address. Of course, if you have access to the switch's running configs or the DHCP server's leases, you could get this answer much more authoritatively without doing a scan that could set off security alarms.
{ "source": [ "https://serverfault.com/questions/586714", "https://serverfault.com", "https://serverfault.com/users/120102/" ] }
586,973
I'm planning to setup a set of 3x 2TB 7200rpm drives as a LUKS-encrypted Z-RAID pool in Linux (for a NAS solution). My understanding of the problem at hand is that the only way to achieve this is to luksFormat every physical device and then assemble a zpool out of unlocked LUKS containers. I have the following concerns with this: Wouldn't it significantly impede write performance? In this setup redundant data is encrypted several times because LUKS is not "aware" of Z-RAID. In LUKS-on-mdadm solution data is encrypted once and merely written to disks multiple times. My CPU supports Intel AES-NI. Will ZFS be aware of disk failures when operating on device-mapper LUKS containers as opposed to physical devices? How about deduplication and other ZFS features?
One of the servers that I administrate runs the type of configuration that you describe. It has six 1TB hard drives with a LUKS-encrypted RAIDZ pool on it. I also have two 3TB hard drives in a LUKS-encrypted ZFS mirror that are swapped out every week to be taken off-site. The server has been using this configuration for about three years, and I've never had a problem with it. If you have a need for ZFS with encryption on Linux then I recommend this setup. I'm using ZFS-Fuse, not ZFS on Linux. However, I believe that would have no bearing on the result other than ZFS on Linux will probably have better performance than the setup that I am using. In this setup redundant data is encrypted several times because LUKS is not "aware" of Z-RAID. In LUKS-on-mdadm solution data is encrypted once and merely written to disks multiple times. Keep in mind that LUKS isn't aware of RAID. It only knows that it's sitting on top of a block device. If you use mdadm to create a RAID device and then luksformat it, it is mdadm that is replicating the encrypted data to the underlying storage devices, not LUKS. Question 2.8 of the LUKS FAQ addresses whether encryption should be on top of RAID or the other way around . It provides the following diagram. Filesystem <- top | Encryption | RAID | Raw partitions | Raw disks <- bottom Because ZFS combines the RAID and filesystem functionality, your solution will need to look like the following. RAID-Z and ZFS Filesystem <-top | Encryption | Raw partitions (optional) | Raw disks <- bottom I've listed the raw partitions as optional as ZFS expects that it will use raw block storage rather than a partition. While you could create your zpool using partitions, it's not recommended because it'll add a useless level of management, and it will need to be taken into account when calculating what your offset will be for partition block alignment. Wouldn't it significantly impede write performance? [...] My CPU supports Intel AES-NI. There shouldn't be a performance problem as long as you choose an encryption method that's supported by your AES-NI driver. If you have cryptsetup 1.6.0 or newer you can run cryptsetup benchmark and see which algorithm will provide the best performance. This question on recommended options for LUKS may also be of value. Given that you have hardware encryption support, you are more likely to face performance issues due to partition misalignment. ZFS on Linux has added the ashift property to the zfs command to allow you to specify the sector size for your hard drives. According to the linked FAQ, ashift=12 would tell it that you are using drives with a 4K block size. The LUKS FAQ states that a LUKS partition has an alignment of 1 MB. Questions 6.12 and 6.13 discuss this in detail and also provide advice on how to make the LUKS partition header larger. However, I'm not sure it's possible to make it large enough to ensure that your ZFS filesystem will be created on a 4K boundary. I'd be interested in hearing how this works out for you if this is a problem you need to solve. Since you are using 2TB drives, you might not face this problem. Will ZFS be aware of disk failures when operating on device-mapper LUKS containers as opposed to physical devices? ZFS will be aware of disk failures insofar as it can read and write to them without problems. ZFS requires block storage and doesn't care or know about the specifics of that storage and where it comes from. It only keeps track of any read, write or checksum errors that it encounters. 
It's up to you to monitor the health of the underlying storage devices. The ZFS documentation has a section on troubleshooting which is worth reading. The section on replacing or repairing a damaged device describes what you might encounter during a failure scenario and how you might resolve it. You'd do the same thing here that you would for devices that don't have ZFS. Check the syslog for messages from your SCSI driver, HBA or HD controller, and/or SMART monitoring software and then act accordingly. How about deduplication and other ZFS features? All of the ZFS features will work the same regardless of whether the underlying block storage is encrypted or not. Summary ZFS on LUKS-encrypted devices works well. If you have hardware encryption, you won't see a performance hit as long as you use an encryption method that's supported by your hardware. Use cryptsetup benchmark to see what will work best on your hardware. Think of ZFS as RAID and filesystem combined into a single entity. See the ASCII diagram above for where it fits into the storage stack. You'll need to unlock each LUKS-encrypted block device that the ZFS filesystem uses. Monitor the health of the storage hardware the same way you do now. Be mindful of the filesystem's block alignment if you are using drives with 4K blocks. You may need to experiment with luksformat options or other settings to get the alignment you need for acceptable speed. February 2020 Update It's been six years since I wrote this answer. ZFS on Linux v0.8.0 supports native encryption, which you should consider if you don't have a specific need for LUKS.
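A minimal command sketch of the stack described above, assuming three example disks (sdb, sdc, sdd), a pool called tank, ZFS on Linux and 4K-sector drives - all of those names and values are placeholders, not taken from the question:

    # Benchmark first so you can pick a cipher your AES-NI hardware accelerates
    cryptsetup benchmark
    # Encrypt and open each raw disk (repeat for every member of the pool)
    cryptsetup luksFormat /dev/sdb && cryptsetup luksOpen /dev/sdb crypt_sdb
    cryptsetup luksFormat /dev/sdc && cryptsetup luksOpen /dev/sdc crypt_sdc
    cryptsetup luksFormat /dev/sdd && cryptsetup luksOpen /dev/sdd crypt_sdd
    # Build the RAID-Z pool on the unlocked mappers; ashift=12 assumes 4K sectors
    zpool create -o ashift=12 tank raidz \
        /dev/mapper/crypt_sdb /dev/mapper/crypt_sdc /dev/mapper/crypt_sdd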
{ "source": [ "https://serverfault.com/questions/586973", "https://serverfault.com", "https://serverfault.com/users/49838/" ] }
587,102
Please note: The answers and comments to this question contain content from another, similar question that received a lot of attention from outside media but turned out to be a hoax question in some kind of viral marketing scheme. As we don't allow Server Fault to be abused in such a way, the original question has been deleted and the answers merged with this question. Here's an entertaining tragedy. This morning I was doing a bit of maintenance on my production server, when I mistakenly executed the following command: sudo rm -rf --no-preserve-root /mnt/hetznerbackup / I didn't spot the last space before / and a few seconds later, when warnings were flooding my command line, I realised that I had just hit the self-destruct button. Here's a bit of what burned into my eyes: rm: cannot remove `/mnt/hetznerbackup': Is a directory rm: cannot remove `/sys/fs/ecryptfs/version': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/inode_readahead_blks': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/mb_max_to_scan': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/delayed_allocation_blocks': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/max_writeback_mb_bump': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/mb_stream_req': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/mb_min_to_scan': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/mb_stats': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/trigger_fs_error': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/session_write_kbytes': Operation not permitted rm: cannot remove `/sys/fs/ext4/md2/lifetime_write_kbytes': Operation not permitted # and so on.. I stopped the task and was relieved when I discovered that the production service was still running. Sadly, the server no longer accepts my public key or password for any user via SSH. How would you move forward from here? I'll swim an ocean of barbed wire to get that SSH access back. The server is running Ubuntu 12.04 and hosted at Hetzner.
Boot into the rescue system provided by Hetzner and check what damage you have done. Transfer out any files to a safe location and redeploy the server afterwards. I'm afraid that is the best solution in your case.
{ "source": [ "https://serverfault.com/questions/587102", "https://serverfault.com", "https://serverfault.com/users/215342/" ] }
587,239
I'm starting up a PostgreSQL 9.3 instance on an Ubuntu 12.04 server: ~# service postgresql start * The PostgreSQL server failed to start. Please check the log output. [fail] The start fails but leaves no log; this file is empty: tail /var/log/postgresql/postgresql-9.3-main.log and there are no other files in this directory: /var/log/postgresql/ What is the best way to troubleshoot this?
Try running it manually with debug enabled. This will cause it to run in the foreground and print any error messages to standard error, while also increasing the verbosity. I believe this will be the correct command line for PostgreSQL 9.3 on Ubuntu, but it might require some very slight tweaking (note: line is split for readability; you can recombine it to a single line (without the backslash) if you want): /usr/lib/postgresql/9.3/bin/postgres -d 3 -D /var/lib/postgresql/9.3/main \ -c config_file=/etc/postgresql/9.3/main/postgresql.conf The beginning is the location of the postgres binary, then we enable debug and set it to level 3 (you can adjust this up or down to increase or decrease verbosity). Next we specify the data directory and the config file to start with. These should be the defaults for Ubuntu Server 12.04, I think. Hopefully, that'll give you enough information to determine where the problem is.
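One caveat worth adding: PostgreSQL refuses to run as root, so if you are troubleshooting from a root shell, run the same command as the postgres user, for example:

    sudo -u postgres /usr/lib/postgresql/9.3/bin/postgres -d 3 \
        -D /var/lib/postgresql/9.3/main \
        -c config_file=/etc/postgresql/9.3/main/postgresql.conf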
{ "source": [ "https://serverfault.com/questions/587239", "https://serverfault.com", "https://serverfault.com/users/207420/" ] }
587,261
I've had to deal with renaming whole rooms of PCs before, due to them being swapped out, incorrectly named or generally moved around, but I've never settled on a decent workflow. I'll give an example of the most recent situation (Windows 7 PCs and a Windows Server 2012 server): approximately 30 PCs numbered from 1 to potentially anything, though mostly within that 1-30 range. As it was a complete mess, all the computer objects were removed from AD to start fresh. In many cases I can simply change the name of the PC, restart, and it will be fine. If that doesn't work, going through the Join a Domain or Workgroup wizard works; finally, if neither of those works, changing to a workgroup, restarting, re-joining the domain and restarting again does it. Occasionally I get trust relationship errors and have to redo that last step. As you can tell, it's messy, inefficient and potentially wrong in a few ways. I know I'm at risk of sounding vague and there's unlikely to be one single answer, but this is time consuming; I'm very interested in improving my methodology and hopefully it will help some people.
{ "source": [ "https://serverfault.com/questions/587261", "https://serverfault.com", "https://serverfault.com/users/205311/" ] }
587,324
I was looking at a reliable and portable way to check the OpenSSL version on GNU/Linux and other systems, so users can easily discover if they should upgrade their SSL because of the Heartbleed bug. I thought it would be easy, but I quickly ran into a problem on Ubuntu 12.04 LTS with the latest OpenSSL 1.0.1g: openssl version -a I was expecting to see a full version, but instead I got this: OpenSSL 1.0.1 14 Mar 2012 built on: Tue Jun 4 07:26:06 UTC 2013 platform: [...] To my unpleasant surprise, the version letter doesn't show. No f, no g there, just "1.0.1" and that's it. The listed dates do not assist in discovering a (non-)vulnerable version either. The difference between 1.0.1 (a-f) and 1.0.1g is crucial. Questions: What is a reliable way to check the version, preferably cross distro? Why isn't the version letter showing in the first place? I was unable to test this on anything else but Ubuntu 12.04 LTS. Others are reporting this behaviour as well. A few examples: https://twitter.com/orblivion/status/453323034955223040 https://twitter.com/axiomsofchoice/status/453309436816535554 Some (distro-specific) suggestions rolling in: Ubuntu and Debian: apt-cache policy openssl and apt-cache policy libssl1.0.0 . Compare the version numbers to the packages here: http://www.ubuntu.com/usn/usn-2165-1/ Fedora 20: yum info openssl (thanks @znmeb on twitter) and yum info openssl-libs Checking if a older version of OpenSSL is still resident: It's not completely reliable, but you can try lsof -n | grep ssl | grep DEL . See Heartbleed: how to reliably and portably check the OpenSSL version? on why this may not work for you. It turns out that updating the OpenSSL package on Ubuntu and Debian isn't always enough. You should also update the libssl1.0.0 package, and -then- check if openssl version -a indicates built on: Mon Apr 7 20:33:29 UTC 2014 .
Based on the date displayed by your version of OpenSSL, it seems you are seeing the full version displayed there. Open SSL 1.0.1 was released on March 14th, 2012 . 1.0.1a was released on April 19th of 2012. So, I'm going to go ahead and assert that openssl version -a is the proper, cross-distro way to display the full version of OpenSSL that's installed on the system. It seems to work for all the Linux distros I have access to, and is the method suggested in the help.ubuntu.com OpenSSL documentation, as well . Ubuntu LTS 12.04 shipped with vanilla OpenSSL v1.0.1, which is the version that looks like an abbreviated version, on account of not having a letter following it. Having said that, it appears that there is a major bug in Ubuntu (or how they package OpenSSL), in that openssl version -a continues to return the original 1.0.1 version from March 14, 2012, regardless of whether or not OpenSSL has been upgraded to any of the newer versions. And, as with most things when it rains, it pours. Ubuntu is not the only major distro in the habit of backporting updates into OpenSSL (or other packages), rater than relying on the upstream updates and version numbering that everyone recognizes. In the case of OpenSSL, where the letter version numbers represent only bug fix and security updates, this seems nearly incomprehensible, but I have been informed that this may be because of the FIPS-validated plugin major Linux distros ship packaged with OpenSSL. Because of requirements around revalidation that trigger due to any change, even changes that plug security holes, it is version-locked. For example, on Debian, the fixed version displays a version number of 1.0.1e-2+deb7u5 instead of the upstream version of 1.0.1g . As a result, at this time, there is no reliable, portable way to check SSL versions across Linux distributions , because they all use their own backported patches and updates with different version numbering schemes. You will have to look up the fixed version number for each different distribution of Linux you run, and check the installed OpenSSL version against that distribution's specific version numbering to determine if your servers are running a vulnerable version or not.
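Given that backporting hides the fix from the version string, one workaround (assuming the stock package layout on each distribution) is to search the package changelog for the CVE number instead:

    # Debian/Ubuntu: prints a changelog entry if the fix has been backported
    zgrep -i CVE-2014-0160 /usr/share/doc/libssl1.0.0/changelog.Debian.gz
    # RHEL/CentOS/Fedora equivalent
    rpm -q --changelog openssl | grep -i CVE-2014-0160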
{ "source": [ "https://serverfault.com/questions/587324", "https://serverfault.com", "https://serverfault.com/users/171337/" ] }
587,329
This is a Canonical Question about understanding and remediating the Heartbleed security issue. What exactly is CVE-2014-0160 AKA "Heartbleed"? What is the cause, what OSs and versions of OpenSSL are vulnerable, what are the symptoms, are there any methods to detect a successful exploit? How can I check to see if my system is affected? How can this vulnerability be mitigated? Should I be concerned that my keys or other private data have been compromised? What other side effects should I be concerned about?
First , before freaking out, be sure that you understand whether or not this vulnerability actually applies to you. If you have a server, but have never actually had any applications using TLS, then this is not a high-priority thing for you to fix. If, on the other hand, you have ever had TLS-enabled applications, well then you're in for a treat. Read on: What exactly is CVE-2014-0160 aka "Heartbleed"? It's a big fricking mess, that's what it is. In short, a remotely-exploitable vulnerability was discovered in OpenSSL versions 1.0.1 through 1.0.1f through which an attacker can read certain parts of system memory. Those parts being that which hold sensitive data such as private keys, preshared keys, passwords and high valued corporate data among other things. The bug was independently discovered by Neel Mehta of Google Security (March 21, 2014) and Finnish IT security testing firm Codenomicon (April 2, 2014). What is the cause? Well, errant code in OpenSSL. Here is the commit that introduced the vulnerability, and here is the commit that fixed the vulnerability. The bug showed up in December of 2011 and was patched today, April 7th, 2014. The bug can also be seen as a symptom of a larger problem. The two related problems are (1) what process are in place to ensure errant code is not introduced to a code base, and (2) why are the protocols and extensions so complex and hard to test. Item (1) is a governance and process issue with OpenSSL and many other projects. Many developers simply resist practices such as code reviews, analysis and scanning. Item (2) is being discussed on the IETF's TLS WG. See Heartbleed / protocol complexity . Was the errant code maliciously inserted? I won't speculate on whether this was truly a mistake or possibly a bit of code slipped in on behalf of a bad actor. However, the person who developed the code for OpenSSL states it was inadvertent. See Man who introduced serious 'Heartbleed' security flaw denies he inserted it deliberately . What OSs and versions of OpenSSL are vulnerable? As mentioned above, any operating system that is using, or application that is linked against OpenSSL 1.0.1 through 1.0.1f. What are the symptoms, are any methods to detect a successful exploit? This is the scary part. As far as we know, there is no known way to detect whether or not this vulnerability has been exploited. It is theoretically possible that IDS signatures will be released soon that can detect this exploit, but as of this writing, those are not available. There is evidence that Heartbleed was being actively exploited in the wild as early as November, 2013. See the EFF's Wild at Heart: Were Intelligence Agencies Using Heartbleed in November 2013? And Bloomberg reports the NSA had weaponized the exploit shortly after the vulnerability was introduced. See NSA Said to Exploit Heartbleed Bug for Intelligence for Years . However, the US Intelligence Community denies Bloomberg's claims. See IC ON THE RECORD . How can I check to see if my system is affected? If you are maintaining OpenSSL on your system, then you can simply issue openssl version : $ openssl version OpenSSL 1.0.1g 7 Apr 2014 If the distribution is maintaining OpenSSL, then you probably can't determine the version of OpenSSL due to back patching using openssl command or the package information (for example, apt-get , dpkg , yum or rpm ). The back patching process used by most (all?) 
distributions only uses the base version number (for example, "1.0.1e"); and does not include an effective security version (for example, "1.0.1g"). There's an open question on Super User to determine the effective security version for OpenSSL and other packages when packages are backpatched. Unfortunately, there are no useful answers (other than check the distro's website). See Determine Effective Security Version when faced with Backpatching ?. As a rule of thumb: if you have ever installed one of the affected versions, and have ever run programs or services that linked against OpenSSL for TLS support, then you are vulnerable. Where can I find a program to test for the vulnerability? Within hours of the Heartbleed announcement, several people on the internet had publicized publicly-accessible web applications that supposedly could be used to check a server for the presence of this vulnerability. As of this writing, I have not reviewed any, so I won't further publicize their applications. They can be found relatively easily with the help of your preferred search engine. How is this vulnerability mitigated? Upgrade to a non-vulnerable version and reset or re-secure vulnerable data. As noted on the Heartbleed site, appropriate response steps are broadly: Patch vulnerable systems. Regenerate new private keys. Submit new CSR to your CA. Obtain and install new signed certificate. Invalidate session keys and cookies Reset passwords and shared secrets Revoke old certificates. For a more detailed analysis and answer, see What should a website operator do about the Heartbleed OpenSSL exploit? on the Security Stack Exchange. Should I be concerned that my keys or other private data have been compromised? What other side effects should I be concerned about? Absolutely. Systems Administrators need to assume that their servers which used vulnerable OpenSSL versions are indeed compromised and respond accordingly. Shortly after the vulnerability was disclosed, Cloudfare offered a challenge to see if a server's private key could be recovered in practice. The challenge was independently won by Fedor Indutny and Ilkka Mattila. See The Heartbleed Challenge . Where can I find more information? Link dump, for those looking for more details: OpenSSL SECADV 2014047 CVE-2014-0160 Heartbleed Ubuntu announcement RHEL announcement Official OpenSSL announcement A rather detailed timeline of the disclosure events can be found at Heartbleed disclosure timeline: who knew what and when . If you are a programmer and are interested in various programming tricks like detecting a Heartbleed attack through OpenSSL's msg_cb callback, then see OpenSSL's Security Advisory 2014047 .
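For the "regenerate new private keys" and "submit new CSR" steps above, a typical OpenSSL invocation looks like the following; the file names, key size and subject fields are only examples:

    # Create a brand new key pair and a CSR to send to your CA
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout example.com.key -out example.com.csr \
        -subj "/C=US/ST=State/L=City/O=Example Inc/CN=www.example.com"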
{ "source": [ "https://serverfault.com/questions/587329", "https://serverfault.com", "https://serverfault.com/users/60820/" ] }
587,386
I have a rather large and slow (complex data, complex frontend) web application built in RoR and served by Puma with nginx as a reverse proxy. Looking at the nginx error log, I see quite a few entries like: 2014/04/08 09:46:08 [warn] 20058#0: *819237 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/8/47/0000038478 while reading upstream, client: 5.144.169.242, server: engagement-console.foo.it, request: "GET /elements/pending?customer_id=2&page=2 HTTP/1.0", upstream: "http://unix:///home/deployer/apps/conversationflow/shared/sockets/puma.sock:/elements/pending?customer_id=2&page=2", host: "ec.reputationmonitor.it", referrer: "http://ec.foo.it/elements/pending?customer_id=2&page=3" I am rather curious, as it's very unlikely that the page remains the same for different users and different user interactions, and I would not think that buffering the response on disk is necessary or useful. I know about proxy_max_temp_file_size and setting it to 0, but it seems a little awkward to me (my proxy tries to buffer but has no file to buffer to... how can that be faster?). My questions are: How can I remove the [warn] and avoid buffering responses? Is it better to turn off proxy_buffering or to set proxy_max_temp_file_size to 0? Why? If nginx buffers a response: when does it serve the buffered response, to whom, and why? Why does nginx turn proxy_buffering on by default and then [warn] you if it actually buffers a response? When does a response trigger that option? When it takes more than some number of seconds (how many?) to serve the response? Is this configurable? TIA, ngw.
How can I remove the [warn] and avoid buffering responses? Is it better to turn off proxy_buffering or set proxy_max_temp_file_size to 0? Why? You should set proxy_max_temp_file_size to 0 in order to remove it. The proxy_buffering directive isn't directly related to the warning. You can switch it off to stop any buffering at all but that isn't recommended in general (unless it's needed for Comet ). If nginx buffers a response when does it serve the buffered response, to whom and why? It serves the response immediately, but a client usually has a much slower connection and can't consume the response data as fast as it is produced by your application. Nginx tries to buffer the whole response in order to release your application ASAP. See also: http://aosabook.org/en/nginx.html Why nginx turns proxy_buffering on by default and then [warn]s you if it actually buffers a response? As I already mentioned, the proxy_buffering isn't directly related to the warning. It's generally needed for optimized proxy operations and turning it off degrades performance and throughput. Nginx only warns you when a response doesn't fit into configured memory buffers. You may ignore the warning if it's ok for you. When does a response triggers that option? When it takes > than some seconds (how many?) to serve the response? Is this configurable? It triggers when memory buffers are full. Please, look at the docs, the whole mechanism is explained: http://nginx.org/r/proxy_max_temp_file_size You may want to increase memory buffers.
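For question 1 specifically, a minimal sketch of the relevant directives; the sizes are illustrative, and the directives can live in http, server or location context:

    proxy_buffering on;            # keep in-memory buffering for throughput
    proxy_buffer_size 64k;         # buffer used for the first part of the response
    proxy_buffers 16 64k;          # number and size of in-memory buffers per connection
    proxy_max_temp_file_size 0;    # never spill a response to a temp file (removes the warning)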
{ "source": [ "https://serverfault.com/questions/587386", "https://serverfault.com", "https://serverfault.com/users/76122/" ] }
587,433
The OpenSSL 'heartbleed' vulnerability ( CVE-2014-0160 ) affects webservers serving HTTPS. Other services also use OpenSSL. Are these services also vulnerable to heartbleed-like data leakage? I'm thinking in particular of sshd secure SMTP, IMAP etc -- dovecot, exim & postfix VPN servers -- openvpn and friends all of which, on my systems at least, are linked to the OpenSSL libraries.
Any service that uses OpenSSL for its TLS implementation is potentially vulnerable; this is a weakness in the underlying cyrptography library, not in how it's presented via a web server or email server package. You should consider all linked services vulnerable to data leakage at least . As I'm sure you're aware, it's quite possible to chain attacks together. Even in the simplest attacks it's perfectly possible to, for example, use Heartbleed to compromise SSL, read webmail credentials, use webmail credentials to gain access to other systems with a quick "Dear helpdesk, can you give me a new password for $foo, love CEO" . There's more information and links in The Heartbleed Bug , and in another question maintained by a Server Fault regular, Heartbleed: What is it and what are options to mitigate it? .
{ "source": [ "https://serverfault.com/questions/587433", "https://serverfault.com", "https://serverfault.com/users/156599/" ] }
587,439
I have/had a VM running Ubuntu on Azure. After one resize of the instance, it stopped responding completely; I can't SSH in and the web server is not responding. What's more, the New Relic server monitor agent doesn't appear to be sending data, so it looks like the machine is not running at all or is completely disconnected from the internet (even though the Azure dashboard says everything's fine). One thing I tried is snapshotting it and creating a new instance from the snapshot, but it didn't change anything. Are there any other options to recover the machine?
{ "source": [ "https://serverfault.com/questions/587439", "https://serverfault.com", "https://serverfault.com/users/162906/" ] }
587,551
I have an Ubuntu 12.04 server. I have updated the OpenSSL package in order to fix the Heartbleed vulnerability, but I am still vulnerable, even though I have restarted the web server and even rebooted the whole server. To check my vulnerability I used: http://www.exploit-db.com/exploits/32745/ and http://filippo.io/Heartbleed dpkg gives: dpkg -l | grep openssl ii openssl 1.0.1-4ubuntu5.12 Secure Socket Layer (SSL) binary and related cryptographic tools (launchpad.net/ubuntu/+source/openssl/1.0.1-4ubuntu5.12)
Ensure that the libssl1.0.0 package has been updated as well (that package contains the actual library, the openssl package contains the tools) and that all services using the library have been restarted after the upgrade. You have to RESTART all services using openssl (service apache restart).
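On Ubuntu 12.04 the whole sequence is roughly the following; the lsof check at the end is the imperfect-but-useful test from the question for processes still holding the old library in memory:

    apt-get update
    apt-get install --only-upgrade openssl libssl1.0.0
    service apache2 restart            # repeat for every service linked against libssl
    lsof -n | grep ssl | grep DEL      # anything listed still maps the deleted library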
{ "source": [ "https://serverfault.com/questions/587551", "https://serverfault.com", "https://serverfault.com/users/215618/" ] }
587,567
Munin shows ram that is doing nothing most of the time. But swapping still happens. Quite regularly actually, bogging down the hdd. Or am I seeing the ram graph in the wrong way? How would I go about diagnosing this kind of problem?
{ "source": [ "https://serverfault.com/questions/587567", "https://serverfault.com", "https://serverfault.com/users/204855/" ] }
587,625
I have been looking for an answer to that question (the one in the title) and the best thing I've found was: In DNS Protocol design, UDP transport Block size (payload size) has been limited to 512-Bytes to optimize performance whilst generating minimal network traffic. my question is: how exactly does this enhance performance and are there any other reasons for this limitation when using UDP ?
The 512 byte payload guarantees that DNS packets can be reassembled if fragmented in transit. Also, generally speaking there's less chance of smaller packets being randomly dropped. The IPv4 standard specifies that every host must be able to reassemble packets of 576 bytes or less. With an IPv4 header (20 bytes, though it can be as high as 60 bytes w/ options) and an 8 byte UDP header, a DNS packet with a 512 byte payload will be smaller than 576 bytes. As @RyanRies says: DNS can use TCP for larger payloads and for zone transfers and DNSSEC. There's a lot more latency when TCP comes into play because, unlike UDP, there's a handshake between the client and server before any data begins to flow.
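You can observe both the 512-byte behaviour and the workarounds with dig; the resolver address and query name below are placeholders:

    # Plain DNS with no EDNS: a large answer comes back truncated (TC flag set)
    dig +noedns +ignore @8.8.8.8 example.com ANY
    # EDNS0 advertises a bigger UDP buffer, often avoiding truncation
    dig +bufsize=4096 @8.8.8.8 example.com ANY
    # Explicit TCP retry, which is what a resolver does after seeing TC
    dig +tcp @8.8.8.8 example.com ANY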
{ "source": [ "https://serverfault.com/questions/587625", "https://serverfault.com", "https://serverfault.com/users/199731/" ] }
587,727
I am starting with ansible and will use it, among others, to install packages on several Linux distros. I see in the docs that the yum and apt commands are separated - what would be the easiest way to unify them and use something like this: - name: install the latest version of Apache unified_install: name=httpd state=latest instead of - name: install the latest version of Apache on CentOS yum: name=httpd state=latest when: ansible_os_family == "RedHat" - name: install the latest version of Apache on Debian apt: pkg=httpd state=latest when: ansible_os_family == "Debian" I understand that the two package managers are different, but they still have a set of common basic usages. Other orchestators ( salt for instance ) have a single install command.
Update: As of Ansible 2.0, there is now a generic & abstracted package module Usage Examples: Now when the package name is the same across different OS families, it's as simple as: --- - name: Install foo package: name=foo state=latest When the package name differs across OS families, you can handle it with distribution or OS family specific vars files: --- # roles/apache/apache.yml: Tasks entry point for 'apache' role. Called by main.yml # Load a variable file based on the OS type, or a default if not found. - include_vars: "{{ item }}" with_first_found: - "../vars/{{ ansible_distribution }}-{{ ansible_distribution_major_version | int}}.yml" - "../vars/{{ ansible_distribution }}.yml" - "../vars/{{ ansible_os_family }}.yml" - "../vars/default.yml" when: apache_package_name is not defined or apache_service_name is not defined - name: Install Apache package: > name={{ apache_package_name }} state=latest - name: Enable apache service service: > name={{ apache_service_name }} state=started enabled=yes tags: packages Then, for each OS that you must handle differently... create a vars file: --- # roles/apache/vars/default.yml apache_package_name: apache2 apache_service_name: apache2 --- # roles/apache/vars/RedHat.yml apache_package_name: httpd apache_service_name: httpd --- # roles/apache/vars/SLES.yml apache_package_name: apache2 apache_service_name: apache2 --- # roles/apache/vars/Debian.yml apache_package_name: apache2 apache_service_name: apache2 --- # roles/apache/vars/Archlinux.yml apache_package_name: apache apache_service_name: httpd EDIT: Since Michael DeHaan (creator of Ansible) has chosen not to abstract out the package manager modules like Chef does, If you are still using an older version of Ansible (Ansible < 2.0) , unfortunately you'll need to handle doing this in all of your playbooks and roles. IMHO this pushes a lot of unnecessary repetitive work onto playbook & role authors... but it's the way it currently is. Note that I'm not saying we should try to abstract package managers away while still trying to support all of their specific options and commands, but just have an easy way to install a package that is package manager agnostic. I'm also not saying that we should all jump on the Smart Package Manager bandwagon, but that some sort of package installation abstraction layer in your configuration management tool is very useful to simplify cross-platform playbooks/cookbooks. The Smart project looks interesting, but it is quite ambitious to unify package management across distros and platforms without much adoption yet... it'll be interesting to see whether it is successful. The real issue is just that package names sometimes tend to be different across distros, so we still have to do case statements or when: statements to handle the differences. The way I've been dealing with it is to follow this tasks directory structure in a playbook or role: roles/foo └── tasks    ├── apt_package.yml    ├── foo.yml    ├── homebrew_package.yml    ├── main.yml    └── yum_package.yml And then have this in my main.yml : --- # foo: entry point for tasks # Generally only include other file(s) and add tags here. - include: foo.yml tags=foo This in foo.yml (for package 'foo'): --- # foo: Tasks entry point. 
Called by main.yml - include: apt_package.yml when: ansible_pkg_mgr == 'apt' - include: yum_package.yml when: ansible_pkg_mgr == 'yum' - include: homebrew_package.yml when: ansible_os_family == 'Darwin' - name: Enable foo service service: > name=foo state=started enabled=yes tags: packages when: ansible_os_family != 'Darwin' Then for the different package managers: Apt: --- # tasks file for installing foo on apt based distros - name: Install foo package via apt apt: > name=foo{% if foo_version is defined %}={{ foo_version }}{% endif %} state={% if foo_install_latest is defined and foo_version is not defined %}latest{% else %}present{% endif %} tags: packages Yum: --- # tasks file for installing foo on yum based distros - name: Install EPEL 6.8 repos (...because it's RedHat and foo is in EPEL for example purposes...) yum: > name={{ docker_yum_repo_url }} state=present tags: packages when: ansible_os_family == "RedHat" and ansible_distribution_major_version|int == 6 - name: Install foo package via yum yum: > name=foo{% if foo_version is defined %}-{{ foo_version }}{% endif %} state={% if foo_install_latest is defined and foo_version is not defined %}latest{% else %}present{% endif %} tags: packages - name: Install RedHat/yum-based distro specific stuff... yum: > name=some-other-custom-dependency-on-redhat state=latest when: ansible_os_family == "RedHat" tags: packages Homebrew: --- - name: Tap homebrew foobar/foo homebrew_tap: > name=foobar/foo state=present - homebrew: > name=foo state=latest Note that this is awfully repetitive and not D.R.Y. , and although some things might be different on the different platforms and will have to be handled, generally I think this is verbose and unwieldy when compared to Chef's: package 'foo' do version node['foo']['version'] end case node["platform"] when "debian", "ubuntu" # do debian/ubuntu things when "redhat", "centos", "fedora" # do redhat/centos/fedora things end And yes, there is the argument that some package names are different across distros. And although there is currently a lack of easily accessible data , I'd venture to guess that most popular package names are common across distros and could be installed via an abstracted package manager module. Special cases would need to be handled anyway, and would already require extra work making things less D.R.Y. If in doubt, check pkgs.org .
{ "source": [ "https://serverfault.com/questions/587727", "https://serverfault.com", "https://serverfault.com/users/78319/" ] }
588,031
I need to set the default home page for the entire domain via GPO. Where is the IE Home Page Policy located?
There are a few different ways to do it. Do you want to force everyone's home page, and disallow changes? Or do you just want to set a default home page that people can modify? If you want to force a home page: (Do what HopelessN00b said) Create a new GPO or edit the existing one. (I'm assuming you know how to do this already. Let me know if you don't.) In the Group Policy Management Editor, go to User Configuration -> Policies -> Administrative Templates -> Windows Components -> Internet Explorer. Find the policy Disable changing home page settings. Set it to Enabled, and specify the URL for your home page. Once it applies, the option in IE will be greyed out on the client PC. If you want to specify a default that people can change: My preferred method would be to use Group Policy Preferences to set the necessary Registry values. (Others may disagree.) In the Group Policy Management Editor, go to User Configuration -> Preferences -> Windows Settings -> Registry. Right click -> New -> Registry Item Action = Update Hive = HKEY_CURRENT_USER Key Path = Software\Microsoft\Internet Explorer\Main Value Name = Start Page Value Type = REG_SZ Value Data = your Home Page URL On the Common Tab, check Apply Once and Do not Re-Apply This will set the home page by default for everyone, but the user will be free to edit it afterwards. If you go this route, I also recommend that you set a value for Default_Page_URL as well, without checking Apply Once and Do not Re-Apply. This will give your users the ability to click the Use Default button in the IE settings and get back to the company home page. You probably also want to delete the Secondary Start Pages and Default_Secondary_Page_URL registry values as well. If you are unfamiliar with these registry values, it would probably be a good idea to open the Registry Editor and look at HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main to understand how they work. Other ways you can set the IE home page: In the GPO editor, go to User Configuration -> Preferences -> Windows Settings -> Internet Settings. This may not work for all IE versions out of the box . You may have to update your Administrative Templates. In the GPO editor, go to User Configuration -> Policies -> Windows Settings -> Internet Explorer Maintenance. However, you may not have this anymore with newer Administrative Templates . Use the Internet Explorer Administration Kit (IEAK) to create a customized IE build and deploy that. Use a logon script to apply the same registry settings listed above. One thing to note about GPOs: Any of the settings in the "Preferences" sections of your GPO will only apply if you have the Group Policy Preference Client Side Extensions and the corresponding dependencies installed. Windows 7-era PCs support this out of the box. However, Vista and XP machines need an update to provide this functionality. A fully updated machine should already have this installed, but if do a reinstall it will take a few patch-reboot cycles.
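If you take the logon-script route mentioned at the end, the same Start Page value can be written with a single command; the URL is a placeholder:

    reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v "Start Page" /t REG_SZ /d "http://intranet.example.com" /f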
{ "source": [ "https://serverfault.com/questions/588031", "https://serverfault.com", "https://serverfault.com/users/111937/" ] }
588,244
$ORIGIN example.com. ; not necessary, using this to self-document $TTL 3600 @ IN SOA ns1.example.com. admin.example.com. ( 1970010100 7200 1800 1209600 300) @ IN NS ns1.example.com. @ IN NS ns2.example.com. @ IN A 198.51.100.1 ns1 IN A 198.51.100.2 ns2 IN A 198.51.100.3 sub1 IN NS ns1.example.edu. sub2 IN NS ns1.sub2 ns1.sub2 IN A 203.0.113.1 ; inline glue record The role of a NS record beneath the apex of a domain is well-understood; they exist to delegate authority for a subdomain to another nameserver. Examples of this above would include the NS records for sub1 and sub2 . These allow the nameserver to hand out referrals for portions of the domain that it does not consider itself authoritative for. The purpose of the NS records at the apex of a domain, ns1 and ns2 in this case, seem to be less understood by the internet at large. My understanding (which may not be holistic) is as follows: They are not used by caching DNS servers in order to determine the authoritative servers for the domain. This is handled by nameserver glue , which is defined at the registrar level. The registrar never uses this information to generate the glue records. They are not used to delegate authority for the entire domain to other nameservers. Trying to do so with software such as ISC BIND will not result in the "expected" referral behavior at all, as the nameserver will continue to consider itself authoritative for the zone. They are not used by the nameserver to determine whether it should return authoritative responses ( AA flag set) or not; that behavior is defined by whether the software is told to be a master or a slave for the zone. Most nameserver software will quite happily serve apex NS records that disagree with the information contained by the upstream glue records, which will in turn cause well-known DNS validation websites to generate warnings for the domain. With this being the case, what are we left with? Why are we defining this information if it doesn't appear to be consumed by caching DNS servers on the internet at large?
Subordinate identification Apex level NS records are used by a master server to identify its subordinates. When data on an authoritative nameserver changes, it will advertise this via DNS NOTIFY messages ( RFC 1996 ) to all of its peers on that list. Those servers will in turn call back with a request for the SOA record (which contains the serial number), and make a decision on whether to pull down a more recent copy of that zone. It's possible to send these messages to servers not listed in the NS section, but this requires server specific configuration directives (such as ISC BIND's also-notify directive). The apex NS records comprise the basic list of servers to notify under a default configuration. It's worth noting that the secondary servers will also send NOTIFY messages to each other based on these NS records, usually resulting in logged refusals. This can be disabled by instructing servers to only send notifies for zones they are masters for (BIND: notify master; ), or to skip NS based notifies entirely in favor of notifies explicitly defined in the configuration. (BIND: notify explicit; ) Authoritative definition The question above contained a fallacy: They are not used by caching DNS servers in order to determine the authoritative servers for the domain. This is handled by nameserver glue, which is defined at the registrar level. The registrar never uses this information to generate the glue records. This is an easy conclusion to arrive at, but not accurate. The NS records and glue record data (such as that defined within your registrar account) are not authoritative. It stands to reason that they cannot be considered "more authoritative" than the data residing on the servers that authority is being delegated to. This is emphasized by the fact that referrals do not have the aa (Authoritative Answer) flag set. To illustrate: $ dig @a.gtld-servers.net +norecurse +nocmd example.com. NS ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14021 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 5 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;example.com. IN NS ;; AUTHORITY SECTION: example.com. 172800 IN NS a.iana-servers.net. example.com. 172800 IN NS b.iana-servers.net. ;; ADDITIONAL SECTION: a.iana-servers.net. 172800 IN A 199.43.135.53 a.iana-servers.net. 172800 IN AAAA 2001:500:8f::53 b.iana-servers.net. 172800 IN A 199.43.133.53 b.iana-servers.net. 172800 IN AAAA 2001:500:8d::53 Note the lack of aa in the flags for the above reply. The referral itself is not authoritative. On the other hand, the data on the server being referred to is authoritative. $ dig @a.iana-servers.net +norecurse +nocmd example.com. NS ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2349 ;; flags: qr aa; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;example.com. IN NS ;; ANSWER SECTION: example.com. 86400 IN NS a.iana-servers.net. example.com. 86400 IN NS b.iana-servers.net. That said, this relationship can get very confusing as it is not possible to learn about the authoritative versions of these NS records without the non-authoritative NS records defined on the parent side of the referral. What happens if they disagree? The short answer is "inconsistent behavior". 
The long answer is that nameservers will initially stub everything off of the referral (and glue) on an empty cache, but those NS , A , and AAAA records may eventually be replaced when they are refreshed. The refreshes occur as the TTLs on those temporary records expire, or when someone explicitly requests the answer for those records. A and AAAA records for out of zone data (i.e. the com nameservers defining glue for data outside of the com zone, like example.net ) will definitely end up being refreshed, as it is a well-understood concept that a nameserver should not be considered an authoritative source of such information. (RFC 2181) When values of the NS records differ between the parent and child sides of the referral (such as the nameservers entered into the registrar control panel differing from the NS records that live on those same servers), the behaviors experienced will be inconsistent, up to and including child NS records being ignored completely. This is because the behavior is not well defined by the standards, and the implementation varies between different recursive server implementations. In other words, consistent behavior across the internet can only be expected if the nameserver definitions for a domain are consistent between the parent and child sides of a referral . The long and short of it is that recursive DNS servers throughout the internet will bounce back between destinations if the records defined on the parent side of the referral do not agree with the authoritative versions of those records. Initially the data present in the referral will be preferred, only to be replaced by the authoritative definitions. Since caches are constantly being rebuilt from scratch across the internet, it is impossible for the internet to settle on a single version of reality with this configuration. If the authoritative records are doing something illegal per the standards, such as pointing NS records at aliases defined by a CNAME , this gets even more difficult to troubleshoot; the domain will alternate between working and broken for software that rejects the violation. (i.e. ISC BIND / named) RFC 2181 §5.4.1 provides a ranking table for the trustworthiness of this data, and makes it explicit that cache data associated with referrals and glue cannot be returned as the answer to an explicit request for the records they refer to. 5.4.1. Ranking data When considering whether to accept an RRSet in a reply, or retain an RRSet already in its cache instead, a server should consider the relative likely trustworthiness of the various data. An authoritative answer from a reply should replace cached data that had been obtained from additional information in an earlier reply. However additional information from a reply will be ignored if the cache contains data from an authoritative answer or a zone file. The accuracy of data available is assumed from its source. Trustworthiness shall be, in order from most to least: + Data from a primary zone file, other than glue data, + Data from a zone transfer, other than glue, + The authoritative data included in the answer section of an authoritative reply. 
+ Data from the authority section of an authoritative answer, + Glue from a primary zone, or glue from a zone transfer, + Data from the answer section of a non-authoritative answer, and non-authoritative data from the answer section of authoritative answers, + Additional information from an authoritative answer, Data from the authority section of a non-authoritative answer, Additional information from non-authoritative answers. <snip> Unauthenticated RRs received and cached from the least trustworthy of those groupings, that is data from the additional data section, and data from the authority section of a non-authoritative answer, should not be cached in such a way that they would ever be returned as answers to a received query. They may be returned as additional information where appropriate. Ignoring this would allow the trustworthiness of relatively untrustworthy data to be increased without cause or excuse.
{ "source": [ "https://serverfault.com/questions/588244", "https://serverfault.com", "https://serverfault.com/users/152073/" ] }
588,297
I have a VPS that I use to run a web server; it currently runs Ubuntu Server 12.04. For a few weeks I have been getting a lot of these errors in my SSH console: 2014 Apr 11 08:41:18 vps847 PAM service(sshd) ignoring max retries; 6 > 3 2014 Apr 11 08:41:21 vps847 PAM service(sshd) ignoring max retries; 6 > 3 2014 Apr 11 08:41:24 vps847 PAM service(sshd) ignoring max retries; 6 > 3 2014 Apr 11 08:41:25 vps847 PAM service(sshd) ignoring max retries; 6 > 3 2014 Apr 11 08:41:26 vps847 PAM service(sshd) ignoring max retries; 6 > 3 2014 Apr 11 08:41:29 vps847 PAM service(sshd) ignoring max retries; 6 > 3 2014 Apr 11 08:41:29 vps847 PAM service(sshd) ignoring max retries; 6 > 3 Could someone please tell me what these errors mean, or at least how to disable them? It is really annoying when I am working over SSH and these errors keep popping up all over my screen.
PAM is telling you that it is configured with "retry=3" and it will ignore any further auth requests from sshd within the same session. SSH however will continue trying until it exhausts MaxAuthTries setting (which defaults to 6). You should probably set both of these (SSH and PAM) to same value for maximum auth retries. Updated To change this behaviour: For sshd you edit /etc/ssh/sshd_config and set MaxAuthTries 3 . Also restart SSH server for the setting to take effect. For PAM , you have to look for configuration in /etc/pam.d directory (I think it's common-password file in Ubuntu), you have to change retry= value. Note: I would strongy suggest to also check Peter Hommel's answer regarding the reason of these requests as it's possible your SSH is being brute-forced.
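For reference, the sshd side of that change on Ubuntu looks roughly like this (the PAM retry value lives under /etc/pam.d/ as noted above):

    # /etc/ssh/sshd_config
    MaxAuthTries 3
    # then apply it
    service ssh restart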
{ "source": [ "https://serverfault.com/questions/588297", "https://serverfault.com", "https://serverfault.com/users/209195/" ] }
588,904
Am I missing something, or is there no way to add a route via CloudFormation to the default route table that comes provisioned with a VPC?
Nah, you can't; there's nothing to refer to anyway (e.g. no logical ID). Just create your own main table ;-). This is probably one of the reasons it can't be used: One way to protect your VPC is to leave the main route table in its original default state (with only the local route), and explicitly associate each new subnet you create with one of the custom route tables you've created. This ensures that you must explicitly control how each subnet's outbound traffic is routed.
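A trimmed CloudFormation sketch of "create your own table": a route table, a default route and a subnet association. The logical IDs MyVPC, InternetGateway and PublicSubnet are assumed to be resources defined elsewhere in your template:

    "PublicRouteTable": {
      "Type": "AWS::EC2::RouteTable",
      "Properties": { "VpcId": { "Ref": "MyVPC" } }
    },
    "PublicDefaultRoute": {
      "Type": "AWS::EC2::Route",
      "Properties": {
        "RouteTableId": { "Ref": "PublicRouteTable" },
        "DestinationCidrBlock": "0.0.0.0/0",
        "GatewayId": { "Ref": "InternetGateway" }
      }
    },
    "PublicSubnetRouteTableAssociation": {
      "Type": "AWS::EC2::SubnetRouteTableAssociation",
      "Properties": {
        "RouteTableId": { "Ref": "PublicRouteTable" },
        "SubnetId": { "Ref": "PublicSubnet" }
      }
    }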
{ "source": [ "https://serverfault.com/questions/588904", "https://serverfault.com", "https://serverfault.com/users/76847/" ] }
589,070
I have a Dell server running CentOS 6 using PERC H710 Raid Controller card with Raid 5 setup and I want to monitor the hard disk failure/working status behind the Raid Controller. Then I should be able to use a bash script to monitor the hard disk status and send alert emails if something went bad. The LSI MegaRAID SAS command tool (About LSI MegaRAID SAS Linux Tools) for CentOS/Red Hat/Linux does NOT support PERC H710 and smartctl does NOT support it either. Based on Dell website, CentOS IS not supported for this server ( NX3200 PowerVault ) and I couldn't download any linux program to monitor the hard disk. [root@server ~]# lspci | grep RAID 03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05) [root@server ~]# smartctl -a /dev/sda smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-431.el6.x86_64] (local build) Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net Vendor: DELL Product: PERC H710 Revision: 3.13 User Capacity: 299,439,751,168 bytes [299 GB] Logical block size: 512 bytes Logical Unit id: .... Serial number: .... Device type: disk Local Time is: Tue Apr 15 16:38:30 2014 SGT Device does not support SMART Error Counter logging not supported Device does not support Self Test logging Anyone knows how to monitor the hard disk status behind hardware raid on Dell PERC H710 with CentOS 6?
S.M.A.R.T. is not the final word in disk or storage monitoring!! It's a component, but modern RAID controllers use it along with other methods to determine drive and array health. I'm assuming this is a PERC controller in a Dell PowerEdge server. The normal Linux-friendly approach to health monitoring of Dell hardware is to install the Dell OMSA agents for Linux via Yum - http://linux.dell.com/wiki/index.php/Repository/OMSA#Yum_setup yum install srvadmin-all will install the full suite of agents. Once installed, you can use the omreport command to get information about your array. Examples: $ omreport storage vdisk $ omreport storage pdisk controller=0 $ omreport storage vdisk controller=0 vdisk=1
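A rough, cron-able sketch of the alerting the question asks for, built on omreport output. The grep patterns, controller number and mail address are assumptions, and omreport's exact wording can vary between OMSA versions, so verify the output on your own box first:

    #!/bin/bash
    # Mail the full report if any physical disk behind controller 0 is not Ok/Online/Ready
    REPORT=$(omreport storage pdisk controller=0)
    if echo "$REPORT" | grep -E '^(Status|State)' | grep -vqE 'Ok|Online|Ready'; then
        echo "$REPORT" | mail -s "RAID disk problem on $(hostname)" admin@example.com
    fi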
{ "source": [ "https://serverfault.com/questions/589070", "https://serverfault.com", "https://serverfault.com/users/113488/" ] }
589,150
We run a number of AWS services in the eu-west-1 region. Unfortunately it seems that a lot of our developers and other employees who need to create temporary resources forget about this aspect of AWS and don't select this region before launching EC2 instances, creating S3 buckets, etc. As a result they often end up in the us-east-1 region since that appears to be the default that AWS always uses. Is there any way through IAM (or some other way) to restrict user accounts to only launch/create things within a specific region?
Unfortunately you can't do this globally. However, for each AWS product that supports it, you typically can limit access to a certain region. For instance, for EC2, you can do the following: { "Statement":[{ "Effect":"allow", "Action":"RunInstances", "Resource":"*", "Condition":{ "StringEquals":{ "ec2:Region":"us-west-1" } } } ] } Of course, you'd need to issue a deny rule as well where appropriate. Here's the documentation for the above.
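To complement that with the deny rule mentioned at the end, a common pattern is to deny all EC2 actions whose region is anything other than the one you allow; treat this as a sketch and test it before rolling it out:

    {
      "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:*",
        "Resource": "*",
        "Condition": {
          "StringNotEquals": { "ec2:Region": "eu-west-1" }
        }
      }]
    }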
{ "source": [ "https://serverfault.com/questions/589150", "https://serverfault.com", "https://serverfault.com/users/173890/" ] }
589,154
I wish to schedule a bat file on my Win 2003 server without a password. The requirement is to run it everyday at a specific time. We use expiring passwords and do not have a non expiring account. I plan to schedule this using Scheduled Tasks as a SYSTEM user. It is working fine, for me. However, What is the impact of using a SYSTEM user? Is there something we should be cautious about? The code is certainly not malicious! However, i do not wish to impact other users/applications on my Win 2003 server. Any one has details of the impact?
{ "source": [ "https://serverfault.com/questions/589154", "https://serverfault.com", "https://serverfault.com/users/216545/" ] }
589,240
It looks like I need to lay out the full scenario here for everyone. Our users need to pick up whatever they need from our file server and sync it to a remote location, but they have limited permissions on the file server to move files around. So here is my task: create a tool that users can use to pick up data and sync it to the remote location. DFS and third-party tools are not options; it must be code we write ourselves, and everything must run in the background. Here is how I do it, and it is working now. I have made three components: (A) an HTA application with VBScript sitting on the user's PC, giving the user a file browser to pick up data; (B) a shared location that allows the HTA to write the data path to a txt file - any path in this text file will be turned into a softlink in a final location; (C) a final location on the file server that holds all the softlinks. Here is basically how it works: the user picks data from the file server using the HTA I made, which writes the full data path to the 000.txt file in the shared location. My endless-loop script monitors this shared location; if a 000.txt file is created by any user in this shared folder, it calls another script that reads all the data paths in 000.txt, uses mklink to make softlinks based on the paths the user provided, outputs the softlinks to the final location, and then deletes the 000.txt file. All softlinks in this final location are synced by robocopy on a nightly schedule. There are more functions required in my HTA application, but there is no need to go into them. Since no one here is talking about the coding, I deleted my endless-loop code. This loop script starts with Windows and runs as a service; I can start/stop it any time I want. It basically just monitors that shared folder: if any user creates a 000.txt file in there, it calls mklink.bat to make the softlinks, and 000.txt is deleted by mklink.bat when the softlinks are made. The reason I use an endless loop instead of Task Scheduler is that users need to see results in that final location right after they submit the data path. I thought the minimum interval of Task Scheduler was one minute (@MikeAWood said it can be 1 second. Thanks!), so I made this 2-second-interval endless loop to monitor that shared folder. My question was the following: is it a good idea to run an endless loop on a server, essentially forever, to monitor a folder? I monitored the resource usage on the server while this script is running and I don't see any significant consumption, so I guess it is harmless, right? If Task Scheduler can handle a 1-second interval, I guess my question is solved. Thanks to you all. Or, if you have a better way to do this, I'd welcome any opinion on the way I do it.
As a general alternative to this: put your script in Task Scheduler and trigger it every minute, two minutes, whatever. This is more reliable, as your process with survive reboots or script errors. Using Scheduled Tasks not only allows your process to survive reboots, as mentioned already, but can also make your task deployable to a large number of servers via Group Policy Preferences. Your current solution is an enemy to both scalability and reliability. As for the actual script you're talking about - it seems like you're re-inventing a Frankenstein's Monster of DFS-R and/or Robocopy. DFS-R is a scalable, mature file replication tool that is built into Windows Server. You should see if you can use it for this situation. Microsoft has put way more engineering brain-power into DFS-R than you could ever put into a script that does the same thing. Also, even if you can't use DFS-R for some reason, robocopy has a /mir switch, which will mirror directories. If you really can't use DFS-R for some reason, at least use something like this in a script.
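For the scheduled-task approach, a sketch of both pieces; every path, share, task name and interval below is a placeholder:

    rem Nightly mirror of the softlink folder to the remote share
    robocopy D:\SyncLinks \\remote\share /MIR /R:2 /W:5 /LOG:C:\Logs\sync.log
    rem Run the watcher script every minute via Task Scheduler instead of an endless loop
    schtasks /Create /TN "WatchDropFolder" /TR "C:\Scripts\mklink.bat" /SC MINUTE /MO 1 /RU SYSTEM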
{ "source": [ "https://serverfault.com/questions/589240", "https://serverfault.com", "https://serverfault.com/users/212415/" ] }
589,734
Here's the relevant part: vars_files: - vars/vars.default.yml - vars/vars.yml If the file vars/vars.yml does not exist, I get an error: ERROR: file could not read: /.../vars/vars.yml How can I load additional variables from this file only if it exists (with no errors)?
It's quite simple really. You can squash your different vars_files items into a single tuple and Ansible will automatically go through each one until it finds a file that exists and load it. E.g.: vars_files: - [ "vars/foo.yml", "vars/bar.yml", "vars/default.yml" ]
{ "source": [ "https://serverfault.com/questions/589734", "https://serverfault.com", "https://serverfault.com/users/50663/" ] }
589,877
Currently running PHP 5.4 on CentOS 6.5. I installed the webtatic php55w package then installed PEAR+PECL without issue along with redis and mongo through PECL. Shortly after, I realized 5.5 is not compatible with the framework I was working with so I yum erased php55w and installed php54w in it's place. Now the pecl command doesn't work at all. It just produces this really long string of errors every time I issue any pecl command (abbreviated...most repeated dozens of times): Warning: Invalid argument supplied for foreach() in Command.php on line 259 Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259 ...etc etc etc... Notice: Undefined index: honorsbaseinstall in Role.php on line 180 Notice: Undefined index: honorsbaseinstall in Role.php on line 180 ...etc etc etc... Notice: Undefined index: installable in Role.php on line 145 Notice: Undefined index: installable in Role.php on line 145 ...etc etc etc... Notice: Undefined index: phpfile in Role.php on line 212 Notice: Undefined index: phpfile in Role.php on line 212 ...etc etc etc... Notice: Undefined index: config_vars in Role.php on line 49 Notice: Undefined index: config_vars in Role.php on line 49 ...etc etc etc... Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259 Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259 ...etc etc etc... XML Extension not found How can I fix this?
I came across this error after updating my PHP installation to 5.5.14, on RedHat EL v6. I had installed PHP via the Yum package manager, and then needed to re-install some of the PHP extensions I was using. In searching for tips on how to solve this issue, I came across this question, and now that I have discovered a working solution I wanted to share my findings here. Other suggestions I had found online which included erasing and re-installing PECL/PEAR and even my PHP installation did not solve this issue. Finally after some further research and reviewing the source code for PECL/PEAR I found the real cause. Hopefully what follows will be of help to others: You may see this error when trying to run PECL if your PHP installation does not have XML enabled by default, but instead XML support is usually loaded into your PHP installation via a PHP extension module (this could occur if the ./configure --disable-xml flag was specified when building PHP from source, or if you installed PHP via various package managers where that build of PHP is configured to load XML via an extension module). Notice how the last line of the error output from PECL is XML Extension not found – the reason this error is appearing is because when PECL tries to use its XMLParser.php class it fails because it cannot access the XML extension (it checks for the XML module using extension_loaded('xml') around line 259 of the XMLParser.php source), and because the XML module is unavailable, it cannot parse its configuration/settings files and outputs all of the other errors seen above. The reason this issue occurs is due to the way that PECL operates. The PECL command itself is just a shell script, which first works out where PHP is installed on your system installation, and then calls PHP on the command line with a number of flags before providing the path to the main PECL PHP script file. The problem flag which the PECL shell script is using is the -n option, which tells PHP to ignore any php.ini files (and therefore PHP will not load any of the additional extensions your php.ini file specifies, including in this case XML). One can see the impact of the -n flag by running the following two commands: first try running php -m on the command line then compare the output to php -n -m You should not see the XML extension listed when you run the second command because the -n flag told PHP not to parse our php.ini file(s). If you run vi `which pecl` on the command line you should see the contents of the PECL command (as noted above, its just a shell script), and if you inspect the last line, you will see something like this: exec $PHP -C -n -q $INCARG -d date.timezone=UTC -d output_buffering=1 -d variables_order=EGPCS -d safe_mode=0 -d register_argc_argv="On" $INCDIR/peclcmd.php "$@" You should see the -n flag listed between the -C and -q flags. If you edit the PECL shell script, omitting the -n flag you should now be able to run PECL again without issues. Alternatively, you can recompile PHP from source making sure that the XML module is compiled into the PHP binary instead of being loaded from a PHP extension module at run-time. Obviously editing the PECL shell script to remove the -n flag will only fix the issue until PECL/PEAR gets re-installed, hopefully however the maintainers of PECL/PEAR can update their repo with this fix. Ensuring PHP is built with XML support compiled in, is however a long-term fix to the solution, but may not be ideal for everyone's circumstances. 
Just for completeness, if you run vi `which pear` you will see a very similar shell script to the one that PECL uses, however the -n flag is missing from the command which calls PHP and as such the PEAR command is not subject to these same issues.
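If you want to script the workaround above rather than edit the file by hand, something like this should work (back up the wrapper first, and note that the path and the exact exec line can differ between distributions):

```
# remove the -n flag from the pecl wrapper so php.ini (and the XML extension) gets loaded
cp "$(which pecl)" "$(which pecl).bak"
sed -i 's/-C -n -q/-C -q/' "$(which pecl)"

# verify: the exec line should now read "-C -q" instead of "-C -n -q"
tail -n 1 "$(which pecl)"
```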
{ "source": [ "https://serverfault.com/questions/589877", "https://serverfault.com", "https://serverfault.com/users/127034/" ] }
589,885
Is there any way to monitor the CPU utilization for each user at regular intervals? Currently, using the sar command, I am able to get the following output: 12:00:01 AM CPU %user %nice %system %iowait %steal %idle 12:10:01 AM all 4.72 0.00 1.56 0.49 0.00 93.23 12:20:01 AM all 4.70 0.00 1.58 0.41 0.00 93.31 12:30:01 AM all 1.89 0.00 0.93 0.12 0.00 97.06 What I need is a breakdown of the CPU used per user for the same time duration. I cannot install any new tools or commands.
{ "source": [ "https://serverfault.com/questions/589885", "https://serverfault.com", "https://serverfault.com/users/184686/" ] }
589,956
I am looking to buy a dedicated server for my web application. But I am concerned about the security of my application code and about who can access my server, even though it is a dedicated server. Since the hosting provider gives me a pre-installed OS, I am concerned about the hosting provider's access to my server even after I change the password. Is there any chance that the hosting provider can access my server in any case?
Yes, they will have access to your server. If virtual, they have access through the virtualization console or container root. If physical, IPMI and out-of-band management provide access. They may have access to your backups. They definitely have access to your disks...
{ "source": [ "https://serverfault.com/questions/589956", "https://serverfault.com", "https://serverfault.com/users/186937/" ] }
590,124
I need to setup an in memory storage system for around 10 GB of data, consisting of many 100 kb single files(images). There will be lots of reads and fairly periodic writes(adding new files, deleting some old ones). Now, I know that tmpfs behaves like a regular file system for which you can, for example, check free/used space with df , which is a nice feature to have. However I'm interested if ramfs would offer some advantages with regards to speed of IO operations. I know that I can not control the size of consumed memory when using ramfs and that my system can hang if it completely consumes the free RAM, but that will not be an issue in this scenario. To sum it up, I'm interested: - Performance wise, which is faster: ramfs or tmpfs (and possibly why)? - When does tmpfs use swap space? Does it move already saved data to swap(to free RAM for other programs currently running) or only new data if at that moment there is no free RAM left?
My recommendation: Measure and observe real-life activity under normal conditions. Those files are unlikely to ALL be needed and served from cache at all times. But there's a nice tool called vmtouch that can tell you what's in cache at a given moment. You can also use it to lock certain directories or files into cache. So see what things look like after some regular use. Using tmpfs or ramfs is not necessary for this situation. See: http://hoytech.com/vmtouch/ I think you'll be surprised to see that the most active files will probably be resident in cache already. As far as tmpfs versus ramfs, there's no appreciable performance difference. There are operational differences. A real-life use case is Oracle, where ramfs was used to allow Oracle to manage data in RAM without the risk of it being swapped. tmpfs data can be swapped-out under memory pressure. There are also differences in resizing and modifying settings on the fly.
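As a rough illustration of the vmtouch workflow (the directory below is just an example path):

```
# how much of the image tree is already resident in the page cache?
vmtouch -v /srv/images

# pre-load ("touch") the files into cache once
vmtouch -t /srv/images

# optionally lock them into RAM; -d runs vmtouch as a daemon holding the mapping
vmtouch -dl /srv/images
```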
{ "source": [ "https://serverfault.com/questions/590124", "https://serverfault.com", "https://serverfault.com/users/182108/" ] }
590,191
I'm having trouble setting the hostname on a running docker container. I'm also having trouble understanding how to specify hostname after the image is started. I started a container from an image I downloaded: sudo docker run -p 8080:80 -p 2222:22 oskarhane/docker-wordpress-nginx-ss But I forgot to specify hostname through -h ; how can I specify the hostname now that the container is running?
Editing /etc/hostname is one option, but for that you need shell access inside the container. Otherwise, you can spin up the container with the -h option. To set the host and domain names: $ docker run -h foo.bar.baz -i -t ubuntu bash root@foo:/# hostname foo root@foo:/# hostname -d bar.baz root@foo:/# hostname -f foo.bar.baz
{ "source": [ "https://serverfault.com/questions/590191", "https://serverfault.com", "https://serverfault.com/users/24068/" ] }
590,230
I have two Dell R515 servers running CentOS 6.5, with one of the broadcom NICs in each directly attached to the other. I use the direct link to push backups from the main server in the pair to the secondary every night using rsync over ssh. Monitoring the traffic, I see throughput of ~2MBps, which is much less than I'd expect from a gigabit port. I've set the MTU to 9000 on both sides, but that didn't seem to change anything. Is there a recommended set of settings and optimizations that would take me to the maximum available throughput? Moreover, since I am using rsync over ssh (or potentially just NFS) to copy millions of files (~6Tb of small files - a huge Zimbra mailstore), the optimizations I am looking for might need to be more specific for my particular use case. I am using ext4 on both sides, if that matters Thanks EDIT: I've used the following rsync options with pretty much similar results: rsync -rtvu --delete source_folder/ destination_folder/ rsync -avHK --delete --backup --backup-dir=$BACKUPDIR source_folder/ destination_folder/ Currently, I'm looking at the same level of bad performance when using cp to an NFS export, over the same direct cable link. EDIT2: after finishing the sync, I could run iperf and found performance was around 990Mbits/sec, the slowness was due to the actual dataset in use.
The file count and SSH encryption overhead are likely the biggest barriers. You're not going to see wire-speed on a transfer like this. Options to improve include: Using rsync+SSH with a less costly encryption algorithm (e.g. -e "ssh -c arcfour" ) Eliminating encryption entirely over the SSH transport with something like HPN-SSH . Block-based transfers. Snapshots, dd , ZFS snapshot send/receive , etc. If this is a one-time or infrequent transfer, using tar , netcat ( nc ), mbuffer or some combination. Check your CentOS tuned-adm settings . Removing the atime from your filesystem mounts. Examining other filesystem mount options. NIC send/receive buffers. Tuning your rsync command. Would -W , the whole-files option make sense here? Is compression enabled? Optimize your storage subsystem for the type of transfers (SSDs, spindle-count, RAID controller cache.)
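A couple of hedged examples of the options above (host names and paths are placeholders; arcfour was still available in the OpenSSH shipped with CentOS 6 but has been removed from newer releases, and netcat option syntax varies between implementations):

```
# rsync with a cheap cipher, no SSH compression, and whole-file transfers
rsync -aHW --inplace -e "ssh -c arcfour -o Compression=no" /data/ backup-host:/data/

# one-off bulk copy over the trusted direct link, skipping encryption entirely
# on the receiver:
nc -l 9000 | tar -xf - -C /data
# on the sender:
tar -cf - -C /data . | nc backup-host 9000
```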
{ "source": [ "https://serverfault.com/questions/590230", "https://serverfault.com", "https://serverfault.com/users/13543/" ] }
590,262
So we're looking at migrating our single dedicated server (set up like shared web hosting) to an architecture with multiple load-balanced front-end servers plus separate database servers (due to traffic growth and availability concerns). TLDR; I want some way of failing over to a local read-only mirror of the site files when the file server goes down. The scenario: About 200 vhosts with a few aliases each Each site is basically only a "wrapper" with ~30 local files, mostly just template files, the rest is just included from a centralised code base Sites are basically read only, except for maybe a cache directory (which can be separate on each host) No 3rd party access to site files (ie. not shared) Each site only gets a 2-10k hits/month Goals / requirements: Be resilient to any single server being taken offline for maintenance or due to an error Some centralised way to make low volumes of file updates regularly (manually, just normal site updates), preferably via FTP It would be acceptable for changes to be propagated to front-end servers in up to 5 seconds If a server went offline unexpectedly I'd be happy for up to 10 minutes of file changes to be lost At this stage we'd probably only be looking at 2 front end servers running full time, plus a file server Will probably be implemented on AWS More front end servers may be added and removed periodically I realise a typical approach would be to deploy via version control, which is what we already do in some instances but our staff (non-developers, who mainly manage banners, text updates, etc.) are pretty used to an "FTP" workflow, which I'd like to reform but perhaps not yet. Here are the solutions I've come up with: rsync deployment File server hosts the "master" copy of the site files which can be accessed via FTP and exposes these via an rsync server. I have some kind of "deployment" interface which triggers each front end server to rsync a site and "deploy it". Pros Pretty simple Allows for a "staging" version on the file server which might be useful Each front end server has their own copy of the files, no problem if the file server is offline Cons How reliable? Potential for confusion over what's been deployed and what hasn't NFS NFS file server with local cache, periodically rsync a local backup and then possibly fail over by switching the mount point of a bind to the local backup. Pros Possibly supports writing (not that necessary) NFS is in some ways simpler Unless there's an outage they should always all be in sync Cons I'm not sure how well local NFS caching works and whether it'll invalidate caches of modified objects automatically. Without local cache NFS is pretty slow? I'm pretty sure I'd need some kind of heart-beat to trigger the fail over and also to mount the master when it comes back online Gluster, etc. I'm not super familiar with this option, but I believe this is a common approach when you have a large number of servers. I looked at some of the documentation and it might be suitable, but I don't see many people recommending it on such a small scale. Pros Would allow read and writing Supports caching I think, should be faster than non-cached NFS? Automatic replication, recovery and fail over Cons I like the idea of having a single "master" volume which I can snapshot and backup, I don't think there's an option to say "this node must have a complete copy" in gluster? 
With such a small pool of servers it seems like you could easily accidentally terminate two servers which happen to have the only copy of some data. So, my questions: Is NFS really the only option for me? Are there other options I should be considering? Are any of these options the best fit for my needs? Edit: Thanks for your responses guys, I am starting to realise that I'm making things far too complicated considering the (relatively small) scale of my needs. I think the correct solution in my instance is that I need to introduce an explicit "deploy" event which triggers the local files to be updated, either from version control or some other source. Although files are updated regularly, when spread out across ~200 sites the reality is that it's unlikely most sites will be updated more than once a month, so having a mechanism to sync any arbitrary change instantly on any file at any time seems unnecessary.
{ "source": [ "https://serverfault.com/questions/590262", "https://serverfault.com", "https://serverfault.com/users/217100/" ] }
590,524
I'm reading the docs over at CentOS.org . In section 25.1.2. Partitions: Turning One Drive Into Many , there is the following statement: The partition table is divided into four sections or four primary partitions. A primary partition is a partition on a hard drive that can contain only one logical drive (or section). Each section can hold the information necessary to define a single partition, meaning that the partition table can define no more than four partitions. I don't understand why there can only ever be four partitions. Is this just the way it was designed in the beginning? Can there really only ever be 4 primary partitions?
Is this just the way it was designed in the beginning? Can there really only ever be 4 primary partitions? Yes, that's exactly it. The partition table at the front of an MBR disk (as opposed to a GPT style disk) has a very strict data-structure that dates from the 1980's when space was a precious, precious thing. The design decision way back then was to only allow four partitions, but allow one of them to be an 'extended' partition that was a pointer to another spot on the disk that could contain a lot more 'logical' partitions. (This is the same reason why MBR-formatted disks have trouble with 2TB+ disks. 512-byte sectors, and 32-bit fields containing sector counts for partition size = 2TB maximum disk size. A 4KB sector size punts the problem down the road a ways.) GPT is an updated method of handling partitioning that doesn't have these limitations.
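To put numbers on that 2TB limit, and to peek at the raw table on a Linux box (replace /dev/sda with your own disk; reading it needs root):

```
# 32-bit sector count x 512-byte sectors = the classic 2 TiB ceiling
echo $(( 2**32 * 512 ))     # 2199023255552 bytes

# the four 16-byte primary partition entries sit at offset 446 of the first sector
sudo dd if=/dev/sda bs=1 skip=446 count=64 2>/dev/null | hexdump -C
```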
{ "source": [ "https://serverfault.com/questions/590524", "https://serverfault.com", "https://serverfault.com/users/156189/" ] }
590,819
There are many tutorials on how to configure nginx to cooperate with uWGSI when I want to deploy Django application. But why do I need nginx in this kit? uWSGI itself can serve WSGI Python applications, it can serve static files, it can also do SSL. What can nginx do which uWSGI can not?
You don't. That's the simple answer, anyway -- you don't need it. uWSGI is itself a capable server. However, other servers like nginx have been around longer and are (probably, anyway) more secure, as well as having additional features not supported by uWSGI -- for example, improved handling of static resources (via any combination of Expires or E-Tag headers, gzip compression, pre-compressed gzip, etc.) that can significantly reduce server and network load; additionally, a server like nginx in front of your Django application can implement caching of your dynamic content as well, further helping to reduce server load, and even helping to facilitate the use of a CDN (which normally don't do well with dynamic content). You could even go further and have nginx on a completely separate server, reverse proxying requests for dynamic content to a load balanced cluster of application servers while handling the static content itself. For example, my blog (while it is WordPress, it does have nginx in front of it) is tuned to cache posts for 24 hours, and to cache index pages for 5 minutes; while I don't see enough traffic for that to really matter most of the time, it helps my tiny little VPS weather the occasional surge that might otherwise knock it down -- such as the big surge of traffic when one of my articles got picked up by a Twitterer with many thousands of followers, many of whom re-tweeted it to their thousands of followers. If I had been running a "bare" uWSGI server (and assuming it had been a Django site, rather than WordPress), it might have stood up to it just fine -- or it might have crashed and burned, costing me in missed visitors. Having nginx in front of it to handle that load can really help. All that being said, if you're just running a little site that won't see a lot of traffic, there's no real need for nginx or anything else -- just use uWSGI on its own if that's what you want to do. On the other hand, if you'll see a lot of traffic... well, you still might want uWSGI, but you should at least consider something in front of it to help with the load. Actually, you should really be load-testing different configurations with your finished site to determine what does work best for you under your expected load, and use whatever that ends up being.
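For what it's worth, a minimal sketch of what "nginx in front of uWSGI" usually looks like; the domain, paths, and port here are assumptions, not a recommended production config:

```
server {
    listen 80;
    server_name example.com;

    # static assets served directly by nginx, with far-future caching
    location /static/ {
        alias /srv/app/static/;
        expires 30d;
    }

    # everything else is passed to the uWSGI server
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8000;
    }
}
```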
{ "source": [ "https://serverfault.com/questions/590819", "https://serverfault.com", "https://serverfault.com/users/151351/" ] }
590,865
I am running an ASP.NET MVC webapp in IIS 8.0. My application needs to be warmed up before taking requests. We already have a process to warm up the application automatically when we deploy new code. However, we are seeing periodic App Pool Recycle events that are resulting in the app not being warmed up. Is there a best practice for detecting an app pool recycle event and executing a script or some code?
There are several things you can do: 1. Application Initialization You can use the Application Initialization module, which comes in-box with IIS 8.0. You can have something like this in your web.config: <applicationInitialization doAppInitAfterRestart="true" > <add initializationPage="/" /> </applicationInitialization> This will automatically send a request to the root of your app ( initializationPage="/" ) every time your app starts. You can also set the Start Mode for your application pool to Always Running, which means every time IIS restarts it will start your application pool immediately (right-click the application pool, then Advanced Settings), and enable Preload for the site itself (right-click the site, then Manage Site, then Advanced Settings). 2. Disable Idle Time-out Additionally, you can disable the idle timeout (by default IIS will shut down the app after 20 minutes of inactivity) by changing Idle Time-out for your application pool to 0 (infinite). 3. Disable periodic recycling Also turn off Regular Time Interval (minutes); by default IIS would recycle your app every 29 hours.
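If you prefer to script these settings instead of clicking through the UI, appcmd can set the same attributes; the pool and site names below are placeholders:

```
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /startMode:AlwaysRunning
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00
%windir%\system32\inetsrv\appcmd set app "Default Web Site/" /preloadEnabled:true
```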
{ "source": [ "https://serverfault.com/questions/590865", "https://serverfault.com", "https://serverfault.com/users/4863/" ] }
590,870
I have a certificate bundle .crt file. Doing openssl x509 -in bundle.crt -text -noout only shows the root certificate. How do I see all the other certificates?
http://comments.gmane.org/gmane.comp.encryption.openssl.user/43587 suggests this one-liner: openssl crl2pkcs7 -nocrl -certfile CHAINED.pem | openssl pkcs7 -print_certs -text -noout It indeed worked for me, but I don't understand the details so can't say if there are any caveats. updated june 22: for openssl 1.1.1 and higher: a single-command answer can be found here serverfault.com/a/1079893 ( openssl storeutl -noout -text -certs bundle.crt )
{ "source": [ "https://serverfault.com/questions/590870", "https://serverfault.com", "https://serverfault.com/users/12554/" ] }
591,405
We have two Windows Server 2008 SP2 (sadly not 2008 R2) Domain Controllers in a small 150 client domain that are exhibiting very "peaky" CPU usage. The Domain Controllers both exhibit the same behavior and are hosted on vSphere 5.5.0, 1331820. Every two or three seconds the CPU usage jumps up to 80-100% and then quickly drops, remains low for a second or two and then jumps up again. Looking at the historical performance data for the virtual machine indicates that this condition has been going on for at least a year but the frequency has increased since March. The offending process is SVChost.exe which is wrapping the DHCP Client (dhcpcsvc.dll), EventLog (wevtsvc.dll) and LMHOSTS (lmhsvc.dll) services. I'm certainly not a Windows internals expert but I could not seem to find anything especially amiss when viewing the process with Process Explorer other than it appears the EventLog is triggering a ton of RpcBindingUnbind calls. At this point I'm out of coffee and ideas. How should I continue to troubleshoot this issue?
TL;DR: EventLog file was full. Overwriting entries is expensive and/or not implemented very well in Windows Server 2008. At @pk. and @joeqwerty suggestion and after asking around, I decided that it seemed most likely that a forgotten monitoring implementation was scraping the event logs. I installed Microsoft's Network Monitor on one of the Domain Controllers and started filtering for MSRPC using the ProtocolName == MSRPC filter. There was lots of traffic but it was all between our remote site's RODC and unfortunately did not use same destination port as the listening EventLog process. Darn! There goes that theory. To simplify things and make it easier to run monitoring software I decided to unwrap the EventLog service from SVCHost. The following command and a reboot of the Domain Controller dedicates one SVCHost process to the EventLog service. This makes investigation a little easier since you do not have multiple services attached to that PID. SC config EventLog Type= own I then resorted to ProcMon and setup a filter to exclude everything that did not use that PID. I did not see tons of failed attempts by EventLog to open missing registry keys as indicated as a possible cause here (apparently crappy applications can register as a Event Sources in extremely poor ways). Predictably I saw lots of successful ReadFile entries of the Security Event Log (C:\Windows\System32\WinEvt\Logs\Security.evtx). Here's a look at the Stack on one of those events: You'll notice first the RPCBinding and then RPCBindingUnbind. There were a lot of these. Like thousands per second. Either the Security Log is really busy or something is not working right with the Security.evtx log. In EventViewer the Security Log was only logging a between 50-100 events per minute which seemed appropriate for a domain of this size. Darn! There goes theory number two that we had some application with very verbose event auditing turned on left in a forgotten corner still dutifully chugging away. There were a still a lot (~250,000) of events recorded even though the rate of events being logged was low. Log size perhaps? Security Logs - (Right Click) - Properties... and the maximum log size was set for 131,072 KB and log size was currently holding at 131,072 KB. The 'Overwrite events as needed' radio button was checked. I figured that constantly deleting and writing to the log file was probably hard work especially when it was so full so I opted to Clear the Log (I saved the old log just in case we need it for auditing later) and let the EventLog service create a new empty file. The result: CPU usage returned to a sane level around 5%.
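For anyone checking for the same condition, wevtutil will show the log's configured maximum and current size, and can clear it while keeping a backup copy (the backup path is just an example):

```
rem current size, record count, and configured maximum of the Security log
wevtutil gli Security
wevtutil gl Security

rem clear the log, saving the old events first
wevtutil cl Security /bu:D:\backup\Security-old.evtx
```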
{ "source": [ "https://serverfault.com/questions/591405", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
591,758
I need to send a message to the graylog2 server via echo to test whether the %{@type} for facility is correct, but when I do the echo, the GELF message does not arrive at my graylog2 server. If I restart graylog2, the messages about it starting do arrive at the graylog2 server. Example of the echo message: echo '{"version": "1.1","host":"example.org","short_message":"A short message that helps you identify what is going on","full_message":"Backtrace here\n\nmore stuff","level":1,"_user_id":9001,"_some_info":"foo","_some_env_var":"bar"}' | nc -w 1 my.graylog.server 12201 What am I doing wrong? The graylog --debug mode does not show anything; it does not even see the message come in. Edit: the Graylog2 input is set up for GELF TCP and shows active connections, and the connection count rises when I try to echo, but the message itself never reaches the server.
It seems that GELF TCP input needs a null character at the end of each Gelf message. So you should send: echo -e '{"version": "1.1","host":"example.org","short_message":"Short message","full_message":"Backtrace here\n\nmore stuff","level":1,"_user_id":9001,"_some_info":"foo","_some_env_var":"bar"}\0' | nc -w 1 my.graylog.server 12201 This answer was found in a discussion on Graylog's issues .
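printf is a slightly more portable way to append that trailing NUL than echo -e, if you run into a shell where the latter misbehaves:

```
printf '%s\0' '{"version":"1.1","host":"example.org","short_message":"test","level":1}' | nc -w 1 my.graylog.server 12201
```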
{ "source": [ "https://serverfault.com/questions/591758", "https://serverfault.com", "https://serverfault.com/users/159984/" ] }
592,061
I am getting bombarded with attempted hacks from China all with similar IPs. How would I block the IP range with something like 116.10.191.* etc. I am running Ubuntu Server 13.10. The current line I am using is: sudo /sbin/iptables -A INPUT -s 116.10.191.207 -j DROP This only lets me block each one at a time but the hackers are changing the IPs at every attempt.
To block 116.10.191.* addresses: $ sudo iptables -A INPUT -s 116.10.191.0/24 -j DROP To block 116.10.*.* addresses: $ sudo iptables -A INPUT -s 116.10.0.0/16 -j DROP To block 116.*.*.* addresses: $ sudo iptables -A INPUT -s 116.0.0.0/8 -j DROP But be careful what you block using this method. You don't want to prevent legitimate traffic from reaching the host. Edit: as pointed out, iptables evaluates rules in sequential order. Rules higher in the ruleset are applied before rules lower in the ruleset. So if there's a rule higher in your ruleset that allows said traffic, then appending ( iptables -A ) the DROP rule will not produce the intended blocking result. In this case, insert ( iptables -I ) the rule either as the first rule: sudo iptables -I ... or before the allow rule: run sudo iptables --line-numbers -vnL and say that shows rule number 3 allows ssh traffic while you want to block ssh for an IP range; -I takes an integer argument that is the position in your ruleset where you want the new rule inserted, e.g. iptables -I INPUT 2 ...
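If the list of ranges keeps growing, ipset (a separate package) keeps the ruleset down to a single iptables rule; a sketch, assuming the set name "blocklist":

```
sudo ipset create blocklist hash:net
sudo ipset add blocklist 116.10.191.0/24    # add further ranges the same way
sudo iptables -I INPUT -m set --match-set blocklist src -j DROP
```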
{ "source": [ "https://serverfault.com/questions/592061", "https://serverfault.com", "https://serverfault.com/users/178853/" ] }
592,071
This was originally posted on stackoverflow, but it was recommended that I post it here: https://stackoverflow.com/questions/23274818/apt-get-installation-of-nginx-leaves-files-owned-by-root/23346149 I'm coming to nginx after years of using apache httpd. I used apt-get to install the nginx server and notice that all the files are owned by root. debian@nginx-explore:~$ ls -la /usr/share/nginx/www/ total 16 drwxr-xr-x 2 root root 4096 Apr 23 21:09 . drwxr-xr-x 3 root root 4096 Apr 23 21:09 .. -rw-r--r-- 1 root root 383 Jul 7 2006 50x.html -rw-r--r-- 1 root root 151 Oct 4 2004 index.html The master process is also owned by root, although there are worker processes running as www-data debian@nginx-explore:~$ ps aux|grep nginx root 2724 0.0 0.1 62348 1324 ? Ss Apr23 0:00 nginx: master process /usr/sbin/nginx www-data 2725 0.0 0.1 62688 1624 ? S Apr23 0:03 nginx: worker process www-data 2726 0.0 0.1 62688 1624 ? S Apr23 0:03 nginx: worker process www-data 2727 0.0 0.1 62688 1624 ? S Apr23 0:03 nginx: worker process www-data 2728 0.0 0.2 62688 2132 ? S Apr23 0:00 nginx: worker process In Apache, it was always stressed to make sure files and processes were not owned by root as a security measure. Is there some reason this isn't a big deal when using nginx? Ultimately I need to know whether I should go tweak the ownership for nginx files and folders to keep my system secure.
{ "source": [ "https://serverfault.com/questions/592071", "https://serverfault.com", "https://serverfault.com/users/218060/" ] }
592,744
I'm running a software daemon that requires for certain actions to enter a passphrase to unlock some features which looks for example like that: $ darkcoind masternode start <mypassphrase> Now I got some security concerns on my headless debian server. Whenever I search my bash history for example with Ctrl+R I can see this super strong password. Now I imagine my server is compromized and some intruder has shell access and can simply Ctrl+R to find my passphrase in the history. Is there a way to enter the passphrase without it to be shown in bash history, ps , /proc or anywhere else? Update 1 : Passing no password to the daemon throws an error. This is no option. Update 2 : Don't tell me to delete the software or other helpful hints like hanging the developers. I know this is not a best-practice example but this software is based on bitcoin and all bitcoin based clients are some kind of json rpc server which listens to these commands and its a known security issue still being discussed ( a , b , c ). Update 3 : The daemon is already started and running with the command $ darkcoind -daemon Doing ps shows only the startup command. $ ps aux | grep darkcoin user 12337 0.0 0.0 10916 1084 pts/4 S+ 09:19 0:00 grep darkcoin user 21626 0.6 0.3 1849716 130292 ? SLl May02 6:48 darkcoind -daemon So passing the commands with the passphrase does not show up in ps or /proc at all. $ darkcoind masternode start <mypassphrase> $ ps aux | grep darkcoin user 12929 0.0 0.0 10916 1088 pts/4 S+ 09:23 0:00 grep darkcoin user 21626 0.6 0.3 1849716 130292 ? SLl May02 6:49 darkcoind -daemon This leaves the question where does the history show up? Only in .bash_history ?
Really, this should be fixed in the application itself. And such applications should be open source, so that fixing the issue in the app itself should be an option. A security related application which makes this kind of mistake might make other mistakes as well, so I wouldn't trust it. Simple interposer But you were asking for a different way, so here is one: #define _GNU_SOURCE #include <dlfcn.h> int __libc_start_main( int (*main) (int, char * *, char * *), int argc, char * * ubp_av, void (*init) (void), void (*fini) (void), void (*rtld_fini) (void), void (* stack_end) ) { int (*next)( int (*main) (int, char * *, char * *), int argc, char * * ubp_av, void (*init) (void), void (*fini) (void), void (*rtld_fini) (void), void (* stack_end) ) = dlsym(RTLD_NEXT, "__libc_start_main"); ubp_av[argc - 1] = "secret password"; return next(main, argc, ubp_av, init, fini, rtld_fini, stack_end); } Compile this with gcc -O2 -fPIC -shared -o injectpassword.so injectpassword.c -ldl then run your process with LD_PRELOAD=$PWD/injectpassword.so darkcoind masternode start fakepasshrase The interposer library will run this code before the main function from your application gets executed. It will replace the last command line argument by the actual password in the call to main. The command line as printed in /proc/*/cmdline (and therefore seen by tools such as ps ) will still contain the fake argument, though. Obviously you'd have to make the source code and the library you compile from it readable only to yourself, so best operate in a chmod 0700 directory. And since the password isn't part of the command invocation, your bash history is safe as well. More advanced interposer If you want to do anything more elaborate, you should keep in mind that __libc_start_main gets executed before the runtime library has been properly initialized. So I'd suggest avoiding any function calls unless they are absolutely essential. If you want to be able to call functions to your heart's content, make sure you do so just before main itself gets invoked, after all initialization is done. For the following example I have to thank Grubermensch who pointed out how to hide a password passed as command line argument which brought getpass to my attention. #define _GNU_SOURCE #include <dlfcn.h> #include <unistd.h> static int (*real_main) (int, char * *, char * *); static int my_main(int argc, char * * argv, char * * env) { char *pass = getpass(argv[argc - 1]); if (pass == NULL) return 1; argv[argc - 1] = pass; return real_main(argc, argv, env); } int __libc_start_main( int (*main) (int, char * *, char * *), int argc, char * * ubp_av, void (*init) (void), void (*fini) (void), void (*rtld_fini) (void), void (* stack_end) ) { int (*next)( int (*main) (int, char * *, char * *), int argc, char * * ubp_av, void (*init) (void), void (*fini) (void), void (*rtld_fini) (void), void (* stack_end) ) = dlsym(RTLD_NEXT, "__libc_start_main"); real_main = main; return next(my_main, argc, ubp_av, init, fini, rtld_fini, stack_end); } This prompts for the password, so you no longer have to keep the interposer library a secret. The placeholder argument is reused as password prompt, so invoke this like LD_PRELOAD=$PWD/injectpassword.so darkcoind masternode start "Password: " Another alternative would read the password from a file descriptor (like e.g. gpg --passphrase-fd does), or from x11-ssh-askpass , or whatever.
{ "source": [ "https://serverfault.com/questions/592744", "https://serverfault.com", "https://serverfault.com/users/116529/" ] }
592,793
Our production mysql server just crashed and won't come back up. It's giving a segfault error. I tried a reboot, and just don't know what else to try. Here is the stacktrace: 140502 14:13:05 [Note] Plugin 'FEDERATED' is disabled. InnoDB: Log scan progressed past the checkpoint lsn 108 1057948207 140502 14:13:06 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Doing recovery: scanned up to log sequence number 108 1058059648 InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 15 row operations to undo InnoDB: Trx id counter is 0 562485504 140502 14:13:06 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completed InnoDB: Starting in background the rollback of uncommitted transactions 140502 14:13:06 InnoDB: Rolling back trx with id 0 562485192, 15 rows to undo 140502 14:13:06 InnoDB: Started; log sequence number 108 1058059648 140502 14:13:06 InnoDB: Assertion failure in thread 1873206128 in file ../../../storage/innobase/fsp/fsp0fsp.c line 1593 InnoDB: Failing assertion: frag_n_used > 0 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 140502 14:13:06 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=0 max_threads=151 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 345919 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = (nil) thread_stack 0x30000 140502 14:13:06 [Note] Event Scheduler: Loaded 0 events 140502 14:13:06 [Note] /usr/sbin/mysqld: ready for connections. 
Version: '5.1.41-3ubuntu12.10' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) /usr/sbin/mysqld(my_print_stacktrace+0x2d) [0xb7579cbd] /usr/sbin/mysqld(handle_segfault+0x494) [0xb7245854] [0xb6fc0400] /lib/tls/i686/cmov/libc.so.6(abort+0x182) [0xb6cc5a82] /usr/sbin/mysqld(+0x4867e9) [0xb74647e9] /usr/sbin/mysqld(btr_page_free_low+0x122) [0xb74f1622] /usr/sbin/mysqld(btr_compress+0x684) [0xb74f4ca4] /usr/sbin/mysqld(btr_cur_compress_if_useful+0xe7) [0xb74284e7] /usr/sbin/mysqld(btr_cur_pessimistic_delete+0x332) [0xb7429e72] /usr/sbin/mysqld(btr_node_ptr_delete+0x82) [0xb74f4012] /usr/sbin/mysqld(btr_discard_page+0x175) [0xb74f41e5] /usr/sbin/mysqld(btr_cur_pessimistic_delete+0x3e8) [0xb7429f28] /usr/sbin/mysqld(+0x526197) [0xb7504197] /usr/sbin/mysqld(row_undo_ins+0x1b1) [0xb7504771] /usr/sbin/mysqld(row_undo_step+0x25f) [0xb74c210f] /usr/sbin/mysqld(que_run_threads+0x58a) [0xb74a31da] /usr/sbin/mysqld(trx_rollback_or_clean_all_without_sess+0x3e3) [0xb74ded43] /lib/tls/i686/cmov/libpthread.so.0(+0x596e) [0xb6f9f96e] /lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xb6d65a4e] The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. Any recommendations?
Ouch. InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. Check the suggested webpage: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html . Basically, try to start the MySQL server in a recovery mode and make a backup of your crashed tables . Edit your /etc/my.cnf and add: innodb_force_recovery = 1 ...to see if you can get into your database and get your data / find the corrupted table. Usually, when this happens it's a re-build (at least of a corrupted table or two). From http://chepri.com/mysql-innodb-corruption-and-recovery/ : Stop mysqld ( service mysql stop ). Backup /var/lib/mysql/ib* Add the following line into /etc/my.cnf : innodb_force_recovery = 1 (they suggest 4, but its best to start with 1 and increment if it won't start) Restart mysqld ( service mysql start ). Dump all tables: mysqldump -A > dump.sql Drop all databases which need recovery. Stop mysqld ( service mysql stop ). Remove /var/lib/mysql/ib* Comment out innodb_force_recovery in /etc/my.cnf Restart mysqld . Look at mysql error log. By default it should be /var/lib/mysql/server/hostname.com.err to see how it creates new ib* files. Restore databases from the dump: mysql < dump.sql
{ "source": [ "https://serverfault.com/questions/592793", "https://serverfault.com", "https://serverfault.com/users/218426/" ] }
593,071
Computers mainly need three voltages to work : +12V , +5V and +3,3V , all of them are DC. Why can't we just have a few (for redundancy) big power supply providing these three voltages to the entire datacenter, and servers directly using it ? That would be more efficient since converting power always has losses, it's more efficient to do it one single time than do it each time in each server's PSU. Also it'll be better for UPSes since they can use 12V batteries to directly power the entire 12V grid of the datacenter instead of transforming the 12V DC into 120/240 AC which is quite inefficient.
What'cha talking 'bout Willis? You can get 48V PSUs for most servers today. Running 12V DC over medium/long distance suffers from Voltage Drop , whereas 120V AC doesn't have this problem¹. Big losses there. Run high voltage AC to the rack, convert it there. The problem with 12V over long distance is you need higher amperage to transmit the same amount of power and higher amperage is less efficient and requires larger conductors. The Open Compute Open Rack design uses 12V rails inside a rack to distribute power to components. Also large UPSes don't turn 12V DC into 120V AC - they typically use 10 or 20 batteries hooked in series (and then parallel banks of those) to provide 120V or 240V DC and then invert that into AC. So yes, we're there already for custom installations but there's a fair bit of an overhead to get going and commodity hardware generally doesn't support that. Non sequitor: measuring is difficult . 1: I lie, it does, but less than DC.
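A back-of-envelope calculation shows why the voltage matters; the 1 kW load and 0.01 ohm of cable resistance below are assumed values purely for illustration:

```
awk 'BEGIN {
  p = 1000; r = 0.01                 # 1 kW load, 0.01 ohm of cabling (assumed)
  i12 = p / 12; i240 = p / 240       # I = P / V
  printf "12 V : %.1f A, cable loss %.1f W\n", i12,  i12  * i12  * r
  printf "240 V: %.2f A, cable loss %.2f W\n", i240, i240 * i240 * r
}'
```

Roughly 83 A versus about 4 A for the same kilowatt, and the resistive loss scales with the square of that current, which is why high voltage is run to the rack and converted close to the load.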
{ "source": [ "https://serverfault.com/questions/593071", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
593,244
I've got an nginx config stanza that looks like: server { listen *:80; server_name domain1.com domain2.com domain3.com domain4.com .... domainN.com; rewrite ^(.*) http://my_canonical_domain.com permanent; } with lots of different domains. Is there some way to break this up over multiple lines? I don't see anything in the nginx config docs which address this.
There is no need to. This works perfectly: server_name domain1 domain2 domain3 ... domainN; Also you could use multiple server_name directives.
{ "source": [ "https://serverfault.com/questions/593244", "https://serverfault.com", "https://serverfault.com/users/57522/" ] }
593,399
Every once in a while I will do something like ssh user@host sudo thing and I am reminded that ssh doesn't allocate a pseudo-tty by default. Why doesn't it? What benefits would I be losing if I aliased ssh to ssh -t ?
The primary difference is the concept of interactivity . It's similar to running commands locally inside of a script, vs. typing them out yourself. It's different in that a remote command must choose a default, and non-interactive is safest. (and usually most honest) STDIN If a PTY is allocated, applications can detect this and know that it's safe to prompt the user for additional input without breaking things. There are many programs that will skip the step of prompting the user for input if there is no terminal present, and that's a good thing. It would cause scripts to hang unnecessarily otherwise. Your input will be sent to the remote server for the duration of the command. This includes control sequences. While a Ctrl-c break would normally cause a loop on the ssh command to break immediately, your control sequences will instead be sent to the remote server. This results in a need to "hammer" the keystroke to ensure that it arrives when control leaves the ssh command, but before the next ssh command begins. I would caution against using ssh -t in unattended scripts, such as crons. A non-interactive shell asking a remote command to behave interactively for input is asking for all kinds of trouble. You can also test for the presence of a terminal in your own shell scripts. To test STDIN with newer versions of bash: # fd 0 is STDIN [ -t 0 ]; echo $? STDOUT When aliasing ssh to ssh -t , you can expect to get an extra carriage return in your line ends. It may not be visible to you, but it's there; it will show up as ^M when piped to cat -e . You must then expend the additional effort of ensuring that this control code does not get assigned to your variables, particularly if you're going to insert that output into a database. There is also the risk that programs will assume they can render output that is not friendly for file redirection. Normally if you were to redirect STDOUT to a file, the program would recognize that your STDOUT is not a terminal and omit any color codes. If the STDOUT redirection is from the output of the ssh client and the there is a PTY associated with the remote end of the client, the remote programs cannot make such a distinction and you will end up with terminal garbage in your output file. Redirecting output to a file on the remote end of the connection should still work as expected. Here is the same bash test as earlier, but for STDOUT: # fd 1 is STDOUT [ -t 1 ]; echo $? While it's possible to work around these issues, you're inevitably going to forget to design scripts around them. All of us do at some point. Your team members may also not realize/remember that this alias is in place, which will in turn create problems for you when they write scripts that use your alias. Aliasing ssh to ssh -t is very much a case where you'll be violating the design principle of least surprise ; people will be encountering problems they do not expect and may not understand what is causing them.
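A quick way to see both effects described above ("host" is a placeholder):

```
# without a PTY the remote side reports "not a tty"; with -t it gets a real one
ssh host tty
ssh -t host tty

# the extra carriage return -t introduces shows up as ^M with cat -e,
# and can be stripped before the output is stored anywhere
ssh -t host hostname | cat -e
ssh -t host hostname | tr -d '\r'
```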
{ "source": [ "https://serverfault.com/questions/593399", "https://serverfault.com", "https://serverfault.com/users/2706/" ] }
593,742
I have an enterprise-grade workstation with an Adaptec 6405E RAID controller in it. According to Adaptec, this RAID controller does not, and will never be able to, natively support 4K disk sectors. If you stick a 4K disk in it, then if it has 512-byte sector emulation mode (512e), it will use 512e. If it doesn't have 512e, the disk just won't work at all. Connected to my 6405E are four HGST SATA disks that all have 4K native sectors, but they do support 512e. The disks are in RAID10 and the array is working "reasonably well" (performance is fine to good, but not amazing). Without wiping the data on the disks and re-initializing the array, assuming I plug in an Adaptec 71605E , which does support native 4K sectors, will the controller use 4K sectors when interfacing with my disks? Or is this decision of using 512e or 4Kn baked into the structure of the on-disk format such that I'd have to wipe the disks to do that? This question is just about whether I have to backup my data and re-initialize the array, or whether the controller can (automatically, or with manual intervention) be asked to "switch over" to Advanced Format 4Kn addressing. I already know for certain that if I did wipe and reinitialize the array, I could definitely set it up from scratch to use 4Kn on all the drives, using this new RAID controller. Note that I am already quite familiar with the arcconf command line utility, and have previously used it to upgrade this array from RAID0 to RAID10 (yes, I know, I should've never been using RAID0 to begin with, but I got lucky, okay?). If there is some feature of the arcconf utility to "switch over" from 512e to 4Kn on the Adaptec 7-series controllers, I'd love to know about it, so I can use that to avoid having to reformat and temporarily offload the data to a backup location. In the worst case, I have off-site backups of critical data already, but the system has so much software loaded on it that it would be cheaper (in terms of time spent) for me to do a block-layer copy of the entire array onto another disk -- probably a cheap 4TB disk connected to the mobo via AHCI -- then copy it back over once the logical array is reinitialized. Compared to the prospect of reinstalling everything (a metric ton of proprietary Windows programs with activation and such), that'd actually be cheaper and faster.
Your disks are either 512e (512 sectors on the SAS/SATA interface) or 4k native (4k sectors on the SAS/SATA interface), and unfortunately there is no way to change that via software or jumpers etc. You select the transfer mode when you buy the disks. Buy a 4k-native disk if you have an adapter that supports 4k native on the interface. Update: and, once again, the disks never "fall back from 4kn to 512e" etc. The disks are either 512e - meaning they will always send data in 512-byte sectors over the SAS/SATA interface - or 4kn - meaning they will always send data in 4k sectors over the SAS/SATA interface - and it depends only on the disk, not on RAID adapter capabilities. The difference between 512n and 512e is that on the physical media sectors are sized as 512 for 512n, and 4k for 512e (the disk chip translates each 4k sector on the platters into 8 x 512 sectors on the interface); on the interface a 512e disk will always transmit only sectors of 512 bytes, no matter what adapter it is connected to. The part numbers differ for 512e and 4kn disks, for example:
ST6000NM0014 - 6TB SAS drive with 4k sectors on SAS interface (called 4kn drive);
ST6000NM0034 - 6TB SAS drive with 512 byte sectors on SAS interface (called 512e drive)
Both of these have 4k sectors on the disk media, so care must be taken about sector write alignment in the 512e case. And you can still buy 512n disks, for example:
ST4000NM0023 - 4TB SAS drive with 512 byte sectors on interface and 512 sectors on media, so no need to care about sector alignment for this drive.
The RAID adapters fall into 3 categories:
a) the oldest ones that do not know about 4k sectors - they work with 512n and 512e disks, however issues may arise with write performance if writes are not aligned on 8-sector boundaries on 512e drives,
b) the not so old ones that know about 4k internal sectors and about 512e emulation, but only work with 512 sectors on the interface - fewer problems with alignment as the controller takes care of that,
c) the very new ones that are able to work with 4k sectors on the interface. Only these will work with new 4kn disks that pass a native 4kb sector as a 4kb sector onto the SAS/SATA interface.
Also, only Windows 8, 8.1 or later OS support 4kn drives (for server, 2012 or later version). The majority of old utilities that directly work with disks will NOT work properly with 4k sectors as they assume sectors are always 512 bytes instead of checking. So, to avoid any confusion with alignment and get the maximum performance, use new 4kn drives, new 4kn-enabled adapters, and a new OS.
I think this statement below is not correct: "certain 4kb-native disks may choose to support 512-byte emulation. If they support 512-byte emulation, they can switch between this mode and 4kn depending on what the disk controller supports; they'll prefer 4kn, but fall back to 512e if they have to". Sector size is fixed in the factory. I am not aware of any drive that is able to automatically change sector size on the interface depending on RAID adapter capabilities. What I see in Seagate order systems are completely separate part numbers depending on sector size on the interface. It is impossible to change the sector size after the disk is ordered (it could be possible by some hacking, changing disk firmware, etc., but not officially supported). So if your drive is 512e it will always send only 512 byte sectors on the interface, and never 4k sectors. If your drive is 4kn, it will always send only 4k sectors on the interface and never 512 sectors. You decide only when ordering, as these are different part numbers.
The possible drive formats are (the number indicates sector size on the interface):
512n - 512 on disk, 512 on interface (simple)
512e - 4k on disk, 512 on interface (performance complications possible on old systems)
4kn - 4k on disk, 4k on interface (simple, best interface performance, does not work on old systems)
n or e indicates whether the specified sector size on the interface is the native disk sector size (n) or an emulated size (e).
And the answer is: your disks are 512e disks (as they work with an adapter that does not support 4kn disks), they are not 4kn disks. Your 512e disks will never use 4k sectors on the interface with any RAID adapter.
BTW, only the very new 6TB drives from Seagate are available in 4kn format, and the new 6TB and 8TB drives from HGST can also be ordered as either 512e or 4kn. All drives up to 4TB before that were only available in 512e or 512n; I was not able to purchase any 4kn drive for testing before this September.
My personal recommendation is to use LSI adapters. Most compatible, with the best error reporting of anything I tested, and the best performance. With the latest firmware release, they fully support 4kn disks. I am also using many Smart Array adapters from HP, as they come with HP ProLiant servers, but there is still no information on if and when SmartArray adapters will support 4kn disks. Only host bus adapters are mentioned in the release notes - a very recent firmware update enables support for 4kn disks. So, 4kn disks are still very new. Hope I helped to make it clear.
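As a side note, you can confirm from Linux whether a given drive presents itself as 512n, 512e or 4kn, since the kernel exposes both sector sizes. A small sketch, assuming a reasonably recent util-linux and that the drive is /dev/sda (adjust to your device):

lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda
# PHY-SEC/LOG-SEC of 512/512 = 512n, 4096/512 = 512e, 4096/4096 = 4kn
cat /sys/block/sda/queue/physical_block_size
cat /sys/block/sda/queue/logical_block_size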
{ "source": [ "https://serverfault.com/questions/593742", "https://serverfault.com", "https://serverfault.com/users/127863/" ] }
593,937
I am trying to provision a few special case laptops. I would like to create a local guest account. That's fine, but when I try to create it I am prompted that my guest password does not meet the complexity requirements. I tried editing the local security policy to change the complexity but this is greyed out. Is it possible to override domain policy with local? Yes, I know I can choose a longer password but that is not the point. I want to know how to override domain policy in case I need to in the future.
There are always ways to hack around central policies if you have local admin access - at a minimum you can make your changes locally to the registry and hack the security settings so they can't be updated by the group policy agent - but it isn't the best way to go. I'll admit to doing it 10 years ago.. but really.. don't. There are unanticipated results in a lot of cases. See this technet article. The order for policy application is effectively:
Local
Site
Domain
OU
Later policies will overwrite earlier ones. Your best bet is to make a computer group and use that group to either exclude your custom computers from the password complexity policy or assemble a new policy that'll override these defaults, filtered to only apply to this group.
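If you want to verify which policies are actually winning on one of those laptops, a quick illustrative check from an elevated command prompt (nothing here is specific to this environment):

gpresult /r
rem shows the Resultant Set of Policy, including which GPOs were applied and which were filtered out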
{ "source": [ "https://serverfault.com/questions/593937", "https://serverfault.com", "https://serverfault.com/users/219077/" ] }
594,281
I want to inspect a docker image created by someone else with both an entrypoint and cmd specified, for example:
ENTRYPOINT ["/usr/sbin/apache2ctl"]
CMD ["-D", "FOREGROUND"]
I currently do:
docker run --interactive --tty --entrypoint=/bin/bash $IMAGE --login
Is there a way to override CMD to be empty (so I don't have to use "--login")?
You could just enter via
docker run -it --entrypoint=/bin/bash $IMAGE -i
(you'll launch a new container from the image and get a bash shell in interactive mode), then run the entrypoint command in that container. You can then inspect the running container in the state it should be running.
EDIT: Since Docker 1.3 you can use exec to run a process in a running container. Start your container as you'd normally do, and then enter it by issuing:
docker exec -it $CONTAINER_ID /bin/bash
Assuming bash is installed, you will be given shell access to the running container.
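If you just want to see which ENTRYPOINT and CMD an image was built with before overriding them, docker inspect can print them; an illustrative one-liner ($IMAGE is whatever image you are examining):

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' $IMAGE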
{ "source": [ "https://serverfault.com/questions/594281", "https://serverfault.com", "https://serverfault.com/users/3642/" ] }
594,319
I deleted a critical symbolic link - libc.so.6. I have the file it should point at, but basic commands such as ln or wget no longer work because the link is missing. However, echo and other Bash builtins work. I am looking for a way to recreate this symbolic link.
You can use ldconfig; it recreates the symlink:
# rm /lib/libc.so.6
rm: remove symbolic link `/lib/libc.so.6'? y
# ls -l /lib/libc*
ls: error while loading shared libraries: libc.so.6: cannot open shared object file:
# ldconfig
# ls -l /lib/libc*
[skip]
lrwxrwxrwx. 1 root root 12 May 11 07:59 /lib/libc.so.6 -> libc-2.12.so
Just tested it, as you see.
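If ldconfig itself were unavailable for some reason, a couple of other escape hatches are sometimes present. This is a hedged sketch - the exact library file name (libc-2.12.so here, taken from the output above) and the paths depend on your distro and architecture:

# glibc ships a statically linked ln on many distros, precisely for this situation
/sbin/sln /lib/libc-2.12.so /lib/libc.so.6
# or pre-load the real library file so the loader can resolve libc.so.6 by its SONAME
LD_PRELOAD=/lib/libc-2.12.so ln -s /lib/libc-2.12.so /lib/libc.so.6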
{ "source": [ "https://serverfault.com/questions/594319", "https://serverfault.com", "https://serverfault.com/users/151262/" ] }
594,368
When querying Sparkfun's CDN url using OpenSSL with the following command:
openssl s_client -showcerts -connect dlnmh9ip6v2uc.cloudfront.net:443
The common name returned in the certificate is *.sparkfun.com, which fails to verify, but if you load the host in Chrome, the common name shown is *.cloudfront.net. What is going on here?
This is causing a problem because the environment I am in proxies SSL via Squid SSL_Bump, which generates a certificate signed by my locally trusted CA for the domain. This works for all domains but the above, as the CN does not match when the new cert is generated using OpenSSL.
EDIT - I have verified the same occurs with OpenSSL on a server in a remote data centre that has a direct connection to the internet with no proxies or filtering involved.
EDIT - The issue is due to SNI, as accepted, but to fill out the information as to why it causes a problem with Squid and SSL_Bump:
This project will not support forwarding of SSL Server Name Indication (SNI) information to the origin server and will make such support a little more difficult. However, SNI forwarding has its own serious challenges (beyond the scope of this document) that far outweigh the added forwarding difficulties.
Taken from: http://wiki.squid-cache.org/Features/BumpSslServerFirst
CloudFront uses SNI, a way of being able to use multiple certificates on a single IP. All modern browsers support this, as does openssl's s_client command, but s_client doesn't magically do this. You have to tell it to use it:
openssl s_client -servername dlnmh9ip6v2uc.cloudfront.net -connect dlnmh9ip6v2uc.cloudfront.net:443 -showcerts
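To see the difference quickly, you can pipe the certificate the server presents straight into openssl x509; an illustrative one-liner against the same host:

openssl s_client -servername dlnmh9ip6v2uc.cloudfront.net -connect dlnmh9ip6v2uc.cloudfront.net:443 </dev/null 2>/dev/null | openssl x509 -noout -subject

Run it with and without -servername and compare the subject that comes back.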
{ "source": [ "https://serverfault.com/questions/594368", "https://serverfault.com", "https://serverfault.com/users/87487/" ] }
594,746
I'm very new to network administration, so please bear in mind that I'm not that experienced yet. I have an Ubuntu root server with Plesk panel. Yesterday my friends and I noticed that the quality of speech on our TS3 got very bad. I sent some pings to the server and there was very high packet loss. After that I googled a bit and found out that there is an auth.log. I downloaded it and scrolled around a bit, then I found this:
May 13 10:01:27 rs204941 sshd[9351]: input_userauth_request: invalid user student [preauth]
May 13 10:01:27 rs204941 sshd[9351]: pam_unix(sshd:auth): check pass; user unknown
May 13 10:01:27 rs204941 sshd[9351]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=112.220.198.102
May 13 10:01:29 rs204941 sshd[9351]: Failed password for invalid user student from 112.220.198.102 port 39806 ssh2
May 13 10:01:29 rs204941 sshd[9351]: Received disconnect from 112.220.198.102: 11: Bye Bye [preauth]
May 13 10:01:31 rs204941 sshd[9353]: Invalid user student from 112.220.198.102
It seems like someone tried to log in over SSH many times. I scrolled around a bit more and saw that this someone tried many different usernames: student, tech, psi, news,... Hundreds of these logins were displayed in the file. I looked up the traffic statistics on the website of my datacenter. It was only at 17MB per hour. I have a 100Mbit backbone, so the data transfer itself does not seem to be the problem. At the moment I cannot get access to the server in any way. My question is: how can I get access again, how can I suppress this attack and prevent future attacks?
How to gain access?
It's not clear why you can't access your account. If your machine is under attack or high load, you should talk to your provider about restricting access (IP restrictions) or taking the server offline (disconnecting it from the Internet). You might also require out-of-band access, which your provider may be able to help with. If somebody has compromised your server you may need to restore from backups or use a recovery image.
How to prevent attacks on your server, in particular SSH - best way to prevent brute force logons?
Don't let them get to your machine in the first place! There are plenty of ways to stop brute force attempts before they get to your host, or even at the SSH level. Having said that, protecting your Operating System with something like fail2ban is a great idea. http://en.wikipedia.org/wiki/Fail2ban
Fail2ban is similar to DenyHosts ... but unlike DenyHosts, which focuses on SSH, fail2ban can be configured to monitor any service that writes login attempts to a log file, and instead of using /etc/hosts.deny only to block IP addresses/hosts, fail2ban can use Netfilter/iptables and TCP Wrappers /etc/hosts.deny.
There are a number of important security techniques you should consider to help prevent brute force logins:
SSH:
- Don't allow root to login
- Don't allow ssh passwords (use private key authentication)
- Don't listen on every interface
- Create a network interface for SSH (e.g. eth1), which is different to the interface you serve requests from (e.g. eth0)
- Don't use common usernames
- Use an allow list, and only allow users that require SSH access
- If you require Internet access... restrict access to a finite set of IPs. One static IP is ideal, however locking it down to x.x.0.0/16 is better than 0.0.0.0/0
- If possible find a way to connect without Internet access, that way you can deny all internet traffic for SSH (e.g. with AWS you can get a direct connection that bypasses the Internet, it's called Direct Connect)
- Use software like fail2ban to catch any brute force attacks
- Make sure the OS is always up to date, in particular security and ssh packages
Application:
- Make sure your application is always up to date, in particular security packages
- Lock down your application 'admin' pages. Much of the advice above applies to the admin area of your application too.
- Password-protect your admin area; something like htpasswd for a web console will protect any underlying application vulnerabilities and create an extra barrier to entry
- Lock down file permissions. 'Upload folders' are notorious for being entry points of all sorts of nasty stuff.
- Consider putting your application behind a private network, and only exposing your front-end load balancer and a jumpbox (this is a typical setup in AWS using VPCs)
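To make the SSH points above concrete, here is a minimal sshd_config sketch (illustrative values only - the user names and the listen address are made up, and the fail2ban install command varies by distro):

# /etc/ssh/sshd_config (fragment)
PermitRootLogin no
PasswordAuthentication no
ListenAddress 10.0.0.5
AllowUsers alice bob

# then reload sshd and add fail2ban, e.g. on Debian/Ubuntu:
# service ssh reload && apt-get install fail2ban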
{ "source": [ "https://serverfault.com/questions/594746", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
594,772
So I was following this guide on how to install Snort, Barnyard 2 and the like. I've set up Snort so it would run automatically, by editing the rc.local file:
ifconfig eth1 up
/usr/local/snort/bin/snort -D -u snort -g snort \
    -c /usr/local/snort/etc/snort.conf -i eth1
/usr/local/bin/barnyard2 -c /usr/local/snort/etc/barnyard2.conf \
    -d /var/log/snort \
    -f snort.u2 \
    -w /var/log/snort/barnyard2.waldo \
    -D
And I then restarted the computer. Snort was able to run and detect the attack, but the log files (including barnyard2.waldo) remained blank, even though a new log entry was created for each attack. I'm not sure what went wrong here, since it's supposed to log any attacks and store them in the log directory, right? Then, I tried changing the parameters to:
/usr/local/snort/bin/snort -D -b -u snort -g snort \
    -c /usr/local/snort/etc/snort.conf -i eth1
And when I checked the log directory, there are two log files, one in u2 and another in tcpdump format, but they're both blank and approximately 0 bytes. So I thought I'd run it from the console to see if it would work from there, using this command:
/usr/local/snort/bin/snort -A full -u snort -g snort \
    -c /usr/local/snort/etc/snort.conf -i eth1
and I then checked the log file to see if it would log the attack, and it still doesn't.
{ "source": [ "https://serverfault.com/questions/594772", "https://serverfault.com", "https://serverfault.com/users/219540/" ] }
594,835
I have come across articles advising the following:
iptables -A INPUT -p tcp 1000:2000 -j ACCEPT
And others stating that the above will not work and that iptables only supports multiple port declarations with the --multiport option. Is there a correct way to open many ports with iptables?
This is the correct way:
iptables -A INPUT -p tcp --match multiport --dports 1024:3000 -j ACCEPT
As an example. Source here.
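For completeness, the multiport match also takes a comma-separated list and can mix single ports with ranges; an illustrative rule:

iptables -A INPUT -p tcp -m multiport --dports 22,80,443,1024:3000 -j ACCEPT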
{ "source": [ "https://serverfault.com/questions/594835", "https://serverfault.com", "https://serverfault.com/users/216237/" ] }
595,216
While studying for the CCENT exam, my reference materials have made an alarming number of references to class A/B/C networks. Thankfully they just treat Class A/B/C as shorthand for /8, /16, and /24 CIDR subnets, and don't make any mention of an implicit subnet from the first nibble. Still, it throws me off to have "Class B" pop up in a question or explanation and have to remind myself at every step that there is an implied /16 mask in there. Is this a convention that is still widely used despite being obsoleted over two decades ago? Am I going to just have to get used to this from my senior admins? And, perhaps most importantly, does Cisco expect its certified technicians/associates/experts to accept and use classful network terminology? (Ignore the last question if it violates Cisco's exam confidentiality policy.) Update: After switching to a more authoritative reference/study guide, it became clear that Cisco expects knowledge of actual classful networks, insofar as the official study guide dedicates several chapters to them. This makes the question less about the A/B/C terminology and more about if/why admins are expected to know about classful networks.
You should know three things about class-based routing:
1. Class-based routing was a simpler system that was abandoned (in 1993) long before most people ever heard about the Internet. In all likelihood, nobody you will ever know has used it. And if any of your network equipment is that old, you should seriously consider alternate employment.
2. The system used the first few bits of the address to determine its class, and (indirectly) its netmask. Note that the netmask was implied by the class; it did not determine the class. Saying you have a "Class C at 172.16.1.0" will earn you a swift kick from anyone with even a vague understanding of class-based routing.
3. People currently say Class A, B, and C to mean /8, /16, and /24 netmasks, respectively. As should be obvious from the above, they do so incorrectly. They typically think it makes them appear knowledgeable and wise to the history of the Internet (oh, the irony).
Some hold-overs of the original system still exist. "Class D" (prefix 224 to 239) is still multicast, and "Class E" (prefix 240 to 255) is still "Reserved" or "Experimental". Plus, some (older) systems assume a default netmask based on the original class designation; so /8 for prefix 0 through 127, for example. This is often more annoying than helpful, but that's where it came from.
{ "source": [ "https://serverfault.com/questions/595216", "https://serverfault.com", "https://serverfault.com/users/191125/" ] }
595,323
Follow-Up: It looks like the rapid series of disconnects coinciding with a few months of running each server is probably coincidental and just served to reveal the actual problem. The reason it failed to reconnect is almost certainly due to the AliveInterval values (kasperd's answer). Using the ExitOnForwardFailure option should allow the timeout to occur properly before reconnecting, which should solve the problem in most cases. MadHatter's suggestion (the kill script) is probably the best way to make sure that the tunnel can reconnect even if everything else fails. I have a server (A) behind a firewall that initiates a reverse tunnel on several ports to a small DigitalOcean VPS (B) so I can connect to A via B's IP address. The tunnel has been working consistently for about 3 months, but has suddenly failed four times in the last 24 hours. The same thing happened a while back on another VPS provider - months of perfect operation, then suddenly multiple rapid failures. I have a script on machine A that automatically executes the tunnel command ( ssh -R *:X:localhost:X address_of_B for each port X) but when it executes, it says Warning: remote port forwarding failed for listen port X . Going into the sshd /var/log/secure on the server shows these errors: bind: Address already in use error: bind: Address already in use error: channel_setup_fwd_listener: cannot listen to port: X Solving requires rebooting the VPS. Until then, all attempts to reconnect give the "remote port forwarding failed" message and will not work. It's now to the point where the tunnel only lasts about 4 hours before stopping. Nothing has changed on the VPS, and it is a single-use, single user machine that only serves as the reverse tunnel endpoint. It's running OpenSSH_5.3p1 on CentOS 6.5. It seems that sshd is not closing the ports on its end when the connection is lost. I'm at a loss to explain why, or why it would suddenly happen now after months of nearly perfect operation. To clarify, I first need to figure out why sshd refuses to listen on the ports after the tunnel fails, which seems to be caused by sshd leaving the ports open and never closing them. That seems to be the main problem. I'm just not sure what would cause it to behave this way after months of behaving as I expect (i.e. closing the ports right away and allowing the script to reconnect).
I agree with MadHatter that it is likely to be port forwardings from defunct ssh connections. Even if your current problem turns out to be something else, you can expect to run into such defunct ssh connections sooner or later. There are three ways such defunct connections can happen:
1. One of the two endpoints got rebooted while the other end of the connection was completely idle.
2. One of the two endpoints closed the connection, but at the time the connection was closed, there was a temporary outage on the connection. The outage lasted for a few minutes after the connection was closed, and thus the other end never learned about the closed connection.
3. The connection is still completely functional at both endpoints of the ssh connection, but somebody has put a stateful device somewhere between them, which timed out the connection due to idleness. This stateful device would be either a NAT or a firewall; the firewall you already mentioned is a prime suspect.
Figuring out which of the above three is happening is not highly important, because there is a method which will address all three: the use of keepalive messages. You should look into the ClientAliveInterval keyword for sshd_config and the ServerAliveInterval keyword for ssh_config or ~/.ssh/config.
Running the ssh command in a loop can work fine. It is a good idea to insert a sleep in the loop as well, such that you don't end up flooding the server when the connection fails for some reason. If the client reconnects before the connection has terminated on the server, you can end up in a situation where the new ssh connection is live but has no port forwardings. In order to avoid that, you need to use the ExitOnForwardFailure keyword on the client side.
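A hedged sketch of what the client side might look like with those options combined (the host, user and port numbers are placeholders, and tools such as autossh wrap roughly this logic for you):

while true; do
    ssh -N -T \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
        -o ExitOnForwardFailure=yes \
        -R '*:2222:localhost:22' user@address_of_B
    sleep 10
done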
{ "source": [ "https://serverfault.com/questions/595323", "https://serverfault.com", "https://serverfault.com/users/108656/" ] }
595,366
During the setup of my virtual server instances I require some applications to be built using make . Are there any security risks associated with having make installed? Or should I clean it up before the instance is deployed? I also have the gcc compiler on the server which I use to build applications before deployment.
Some people will argue that the presence of development tools on a production machine will make life easier for an attacker. This, however, is such a tiny roadbump to an attacker that any other argument you can find for or against installing the development tools will weigh more. If an attacker was able to penetrate the system so far that they could invoke whatever tools are present on the server, then you already have a serious security breach. Without development tools there are many other ways to write binary data to a file and then run a chmod on that file. An attacker wanting to use a custom-built executable on the system at this point could just as well build it on their own machine and transfer it to the server.
There are other much more relevant things to look out for. If an installed piece of software contains a security bug, there are a few ways it could be exposed to an attacker:
- The package could contain a suid or sgid executable.
- The package could be starting services on the system.
- The package could install scripts that are invoked automatically under certain circumstances (this includes cron jobs, but scripts could be invoked by other events, for example when the state of a network interface changes or when a user logs in).
- The package could install device inodes.
I would not expect development tools to match any of the above, and as such they are not a high-risk package.
If you have workflows in which you would make use of the development tools, then you first have to decide whether those are reasonable workflows, and if they are, you should install the development tools. If you find that you don't really need those tools on the server, you should refrain from installing them for multiple reasons:
- Saves disk space, both on the server and on backups.
- Less installed software makes it easier to track what your dependencies are.
- If you don't need the package, there is no point in taking the additional security risk from having it installed, even if that security risk is tiny.
If you decide that for security reasons you won't allow unprivileged users to put their own executables on the server, then what you should avoid is not the development tools but rather directories writable to those users on file systems mounted with execute permissions. There may still be a use for development tools even under those circumstances, but it is not very likely.
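If you want to audit a box against the exposure points listed above, the suid/sgid check at least is easy to script; a small illustrative sketch:

# list setuid/setgid files without crossing filesystem boundaries
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls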
{ "source": [ "https://serverfault.com/questions/595366", "https://serverfault.com", "https://serverfault.com/users/157990/" ] }
595,471
Having access to a VPS, I need to know which type of virtualization it is running, from the terminal. How can I determine the virtualization platform that my VM is running on? (OpenVZ, Xen, KVM, etc.?)
hostnamectl is your friend (requires systemd). A few examples:
Laptop without any virtualization
$ hostnamectl status
Static hostname: earth.gangs.net
Icon name: computer-laptop
Chassis: laptop
Machine ID: 18a0752e1ccbeef09da51ad17fab1f1b
Boot ID: beefdc99969e4a4a8525ff842b383c62
Operating System: Ubuntu 16.04.2 LTS
Kernel: Linux 4.4.0-66-generic
Architecture: x86-64
Xen
$ hostnamectl status
Static hostname: pluto.gangs.net
Icon name: computer-vm
Chassis: vm
Machine ID: beef39aebbf8ba220ed0438b54497609
Boot ID: beefc71e97ed48dbb436a470fe1920e1
Virtualization: xen
Operating System: Ubuntu 16.04.2 LTS
Kernel: Linux 3.13.0-37-generic
Architecture: x86-64
OpenVZ
$ hostnamectl status
Static hostname: mars.gangs.net
Icon name: computer-container
Chassis: container
Machine ID: 55296cb0566a4aaca10b8e3a4b28beef
Boot ID: 1bb259b0eb064d9eb8a22d112211beef
Virtualization: openvz
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 2.6.32-042stab120.16
Architecture: x86-64
KVM
$ hostnamectl status
Static hostname: mercury.gangs.net
Icon name: computer-vm
Chassis: vm
Machine ID: beeffefc50ae499881b024c25895ec86
Boot ID: beef9c7662a240b3b3b04cef3d1518f0
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-514.10.2.el7.x86_64
Architecture: x86-64
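A couple of other tools can answer the same question: systemd-detect-virt (also part of systemd) gives just the short answer, and virt-what works without systemd but usually needs to be installed separately and run as root:

systemd-detect-virt    # prints e.g. kvm, xen, openvz, or "none"
virt-what              # prints the detected hypervisor facts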
{ "source": [ "https://serverfault.com/questions/595471", "https://serverfault.com", "https://serverfault.com/users/206910/" ] }
596,994
I had an issue with a container: even though it builds perfectly, it does not start properly. The cause is a workaround I've added to the Dockerfile (for having a self-configured /etc/hosts routing):
RUN mkdir -p -- /lib-override /etc-override && cp /lib/libnss_files.so.2 /lib-override
ADD hosts.template /etc-override/hosts
RUN perl -pi -e 's:/etc/hosts:/etc-override/hosts:g' /lib-override/libnss_files.so.2
ENV LD_LIBRARY_PATH /lib-override
Obviously there's some error in there, but I wonder how I can get more info on what docker is doing while running. For example, this works:
$ docker run image ls
usr bin ...
But this doesn't:
$ docker run image ls -l
$
There is nothing in the logs and I can't call an interactive shell either. I can use strace to see what's happening, but I was hoping there's a better way. Is there any way I can set docker to be more verbose?
EDIT: Thanks to Andrew D. I now know what's wrong with the code above (I left it so his answer can be understood). Now the issue is still how I might debug something like this or get some insight into why ls -l failed while ls did not.
EDIT: The -D=true might give more output, though not in my case...
The docker events command may help, and the docker logs command can fetch logs even after the image failed to start. First start docker events in the background to see what's going on:
docker events&
Then run your failing docker run ... command. You should see something like the following on screen:
2015-12-22T15:13:05.503402713+02:00 xxxxxxxacd8ca86df9eac5fd5466884c0b42a06293ccff0b5101b5987f5da07d: (from xxx/xxx:latest) die
Then you can get the startup hex id from the previous message or from the output of the run command, and use it with the logs command:
docker logs <copy the instance id from docker events messages on screen>
You should now see some output from the failed image startup. As @alexkb suggested in a comment: docker events& can be troublesome if your container is being constantly restarted by something like the AWS ECS service. In this scenario it may be easier to get the container hex id out of the logs in /var/log/ecs/ecs-agent.log.<DATE>, then use docker logs <hex id>.
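If you missed the event stream, you can usually still dig the dead container out afterwards; an illustrative sequence (the container id is whatever docker ps reports):

docker ps -a --filter status=exited    # find the container that died
docker logs <container id>
docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' <container id>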
{ "source": [ "https://serverfault.com/questions/596994", "https://serverfault.com", "https://serverfault.com/users/115640/" ] }
597,115
On our servers we have a habit of dropping caches at midnight:
sync; echo 3 > /proc/sys/vm/drop_caches
When I run the code it seems to free up lots of RAM, but do I really need to do that? Isn't free RAM a waste?
You are 100% correct. It is not a good practice to free up RAM. This is likely an example of cargo cult system administration.
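If you want to convince yourself (or whoever added the cron job) that the cache is harmless, you can simply watch it being reclaimed on demand instead of dropping it; for example:

free -m      # "buff/cache" (or buffers/cached on older procps) is reclaimable memory
vmstat 1     # watch the cache shrink automatically when applications actually need memory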
{ "source": [ "https://serverfault.com/questions/597115", "https://serverfault.com", "https://serverfault.com/users/220313/" ] }
597,586
Scenario:
1. While my DC is running, I log into an arbitrary machine.
2. I stop the DC.
3. I log off the arbitrary machine. Let's bounce it for good measure, too.
4. When the machine comes back up, I can still log in with my domain credentials even though the DC is down.
Why and how? Is there some sort of local credential cache in play on the "arbitrary" machine? Was my password somehow hashed and stored for the future in CASE the DC blows up or is down? Would the same process work if I attempted to log in to a box that I had never logged into before while the DC is down?
By default, Windows will cache the last 10-25 users to log into a machine (depending on OS version). This behavior is configurable via GPO and is commonly turned off completely in instances where security is critical. If you tried to log into a workstation or member server that you had never logged into while all of your DCs are unreachable, you would get an error stating "There are currently no logon servers available to service the logon request."
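The relevant setting is the "Interactive logon: Number of previous logons to cache" security option; if you want to check what a particular machine is using, the value is also visible in the registry (illustrative query, run from an elevated prompt):

reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount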
{ "source": [ "https://serverfault.com/questions/597586", "https://serverfault.com", "https://serverfault.com/users/106575/" ] }