597,593
Here is the problem: We have an NTFS file server with several layers of sub-folders. FolderA\ - User AAA should not have access. FolderA\FolderAA\ - User AAA should not have access. FolderA\FolderAA\FolderAAA - User AAA should have access. How do we do this? Thanks
By default, Windows will cache the last 10-25 users to log into a machine (depending on OS version). This behavior is configurable via GPO and is commonly turned off completely in instances where security is critical. If you tried to log into a workstation or member server that you had never logged into while all of your DCs were unreachable, you would get an error stating "There are currently no logon servers available to service the logon request."
{ "source": [ "https://serverfault.com/questions/597593", "https://serverfault.com", "https://serverfault.com/users/220642/" ] }
597,765
It was easy for me to connect to my remote MySQL server on AWS using Sequel Pro; however, I'm struggling with doing the same thing with MongoDB. I tried setting up an SSH tunnel via the command line like so: ssh -fN -l root -i path/to/id_rsa -L 9999:host.com:27017 host.com I also tried it with replacing the host with an IP address. The idea is to forward all MongoDB connections on port 9999 to the one on the host on port 27017. However, when I run the command: mongo --host localhost --port 9999 the connection fails, and I get this instead: MongoDB shell version: 2.6.0 connecting to: localhost:9999/test channel 2: open failed: connect failed: Connection timed out channel 3: open failed: connect failed: Connection timed out 2014-05-22T14:42:01.372+0300 DBClientCursor::init call() failed 2014-05-22T14:42:01.374+0300 Error: DBClientBase::findN: transport error: localhost:9999 ns: admin.$cmd query: { whatsmyuri: 1 } at src/mongo/shell/mongo.js:148 exception: connect failed If I run sudo netstat -plnt I get the following (which seems in order): Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 4242/node tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1342/httpd2-prefork tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2552/sshd tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 2505/master tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 11719/mongod tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 16561/redis-server Any idea what I'm doing wrong? Update: this is what the final functional command looks like (credit goes to kenster): ssh -fN -i ~/path/to/id_rsa -L 6666:localhost:27017 [email protected] where the -fN flags make the command run in the background
The "channel 2" and "channel 3" lines are from ssh. The sshd instance on the remote server is trying to connect to host.com port 27017 in order to service a tunnel connection, and it's getting a "connection timed out" error. In other words, sshd on the remote server can't reach the target of the tunnel. Since the remote host is also the host which you're supposedly tunneling to, it's hard to say what the specific problem is. It could be that "host.com" resolves to more than one IP address. You're making an SSH connection to one server in the cluster, and then a different server in the cluster is being chosen as the tunnel target. You could try changing the tunnel target to "localhost" instead of "host.com": ssh -fN -l root -i path/to/id_rsa -L 9999:localhost:27017 host.com Update: "-L 9999:localhost:27017" means that the ssh client on the local server listens for connections on port 9999. When it gets a connection, it tunnels the connection to the sshd instance on the remote server. The remote sshd instance connects from there to localhost:27017. So "localhost" here is from the perspective of the remote server. With the netstat output, it's a little clearer why it wasn't working before. The "127.0.0.1:27017" part means that MongoDB is specifically bound to the localhost (127.0.0.1) interface on the remote host. You can't contact that instance of MongoDB directly by trying to connect to the host's regular IP address--you can only contact that instance of MongoDB through the localhost address. And of course, since it's localhost, you can only contact it from a client running on the same host. So, the way you're doing it now--tunnel a connection to the server through ssh and then connect to localhost from there--is the way to do it.
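Putting the pieces together, a minimal sketch of the corrected tunnel plus a quick test might look like this (the user, key path and hostname are the placeholders from the question, and ExitOnForwardFailure is an optional extra that makes ssh abort if the forward cannot be set up):

```
# open the tunnel, targeting localhost as seen from the remote side
ssh -fN -o ExitOnForwardFailure=yes -l root -i path/to/id_rsa \
    -L 9999:localhost:27017 host.com

# then connect through the local end of the tunnel
mongo --host localhost --port 9999
```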
{ "source": [ "https://serverfault.com/questions/597765", "https://serverfault.com", "https://serverfault.com/users/203388/" ] }
598,085
I have a reverse proxy on nginx which proxies quite a few sites. I have recently enabled HTTP Strict Transport Security for all SSL-enabled websites. I now have one site that doesn't want to have this enabled. I thought I would just do a simple check if my upstream already sent me a Strict-Transport-Security -header, and if not, just add one. That way, my upstream could send an STS header containing max-age=0 to avoid having HSTS enabled by the proxy. I thought I'd just change my configuration as follows: location / { proxy_pass http://webservers; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto "https"; if ($upstream_http_strict_transport_security = "") { add_header Strict-Transport-Security "max-age=15552000"; } } But, probably because if is evil , this doesn't work. I have tried a bunch of different things to make sure the variable actually exists (which is the case), but nothing seems to help. How could I make this work?
This doesn't work because the if is evaluated before the request is passed to the backend, so the $upstream_http_ variables don't have any values yet. add_header with an empty value is ignored, so you can use a map to conditionally add the header like so: map $upstream_http_strict_transport_security $sts { '' max-age=15552000; } server { location / { add_header Strict-Transport-Security $sts; } }
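A slightly fuller sketch of how that map fits into the proxy configuration; the upstream name comes from the question, the rest of the server block is assumed for illustration:

```
# map blocks live in the http {} context
map $upstream_http_strict_transport_security $sts {
    ''      'max-age=15552000';   # upstream sent no STS header: supply the default
    default '';                   # upstream sent one: add_header with an empty value is ignored
}

server {
    listen 443 ssl;

    location / {
        proxy_pass http://webservers;
        proxy_set_header Host $host;
        add_header Strict-Transport-Security $sts;
    }
}
```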
{ "source": [ "https://serverfault.com/questions/598085", "https://serverfault.com", "https://serverfault.com/users/220972/" ] }
598,202
I run several docker containers with hostnames: web1.local web2.local web3.local Routing to these is done based on hostname by nginx. I have a proxy in front of this setup (on a different machine connected to the internet) where I define the upstream as: upstream main { server web1.local:80; server web2.local:80; server web3.local:80; } And the actual virtual host description: server { listen 80; server_name example.com; location / { proxy_pass http://main; } } Now, because the containers receive the hostname "main" instead of "web1.local", they do not respond properly to the request. Question: how can I tell nginx to pass the name of the upstream server instead of the name of the upstream group of servers in the Host: header when proxying the request?
Actually you can do that via proxy_set_header. For more details look here: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header or see an example use-case here: https://stackoverflow.com/questions/12847771/configure-nginx-with-proxy-pass I have included the dynamic approach into your above posted configuration: server { listen 80; server_name example.com; location / { proxy_pass http://main; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; } } Here is an example with a static host name: server { listen 80; server_name example.com; location / { proxy_pass http://main; proxy_set_header Host www.example.com; proxy_set_header X-Forwarded-For $remote_addr; } }
{ "source": [ "https://serverfault.com/questions/598202", "https://serverfault.com", "https://serverfault.com/users/221050/" ] }
598,278
We need to prevent the domain Administrator account from accessing a server directly via RDP. Our policy is to log on as a regular user and then use the Run As Admin functionality. How can we set this up? The server in question is running Windows Server 2012 R2 with Remote Desktop Session Host and a Session-Based RD Collection. The allowed user groups do not contain the domain Administrator user, but somehow he is still able to log on. Thank you.
This seems to be what you are looking for: http://support.microsoft.com/kb/2258492 To deny a user or a group logon via RDP, explicitly set the "Deny logon through Remote Desktop Services" privilege. To do this, access a group policy editor (either local to the server or from an OU) and set this privilege: Start | Run | Gpedit.msc if editing the local policy, or choose the appropriate policy and edit it. Computer Configuration | Windows Settings | Security Settings | Local Policies | User Rights Assignment. Find and double-click "Deny logon through Remote Desktop Services". Add the user and/or the group that you would like to deny access. Click OK. Either run the command gpupdate /force /target:computer at the command prompt or wait for the next policy refresh for this setting to take effect.
{ "source": [ "https://serverfault.com/questions/598278", "https://serverfault.com", "https://serverfault.com/users/196105/" ] }
598,554
Right now I use a PowerShell script to see the currently logged in users. But I don't see if their session is idle, active or inactive; I can see when the session was started, and that's it. Is there an easy way to see how many users are currently logged in to the server I am logged on to, and to see their status? It should not be remotely executed. I would like to avoid third-party tools if possible.
Use the query user command. Reference: Query User Command (http://technet.microsoft.com/en-us/library/bb490801.aspx)
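Run locally on the server in question, it prints one row per session, including the STATE and IDLE TIME columns the question asks about; quser is the same tool under a shorter name on most recent Windows builds (an assumption, check that it exists on yours):

```
query user
quser
```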
{ "source": [ "https://serverfault.com/questions/598554", "https://serverfault.com", "https://serverfault.com/users/185013/" ] }
598,662
I have the following tree # upper letters = directory # lower letters = files A |-- B |-- C |-- D |-- e <= file |-- F |-- G I need to copy this tree to another destination, recursively ignoring all the empty directories. So the destination ends up looking like: C |-- e How would you do this with unix, rsync, etc?
Of course minutes later I discover an easy method. rsync has a --prune-empty-dirs option.
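For the tree in the question, a minimal sketch (the destination path is a placeholder):

```
# -a preserves the tree; -m / --prune-empty-dirs skips directories that would end up empty
rsync -a --prune-empty-dirs A/ /path/to/destination/
```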
{ "source": [ "https://serverfault.com/questions/598662", "https://serverfault.com", "https://serverfault.com/users/4179/" ] }
599,103
I'm deploying a 3rd-party application in compliance with the 12 factor advisory , and one of the points tell that application logs should be printed to stdout/stderr: then clustering software can collect it. However, the application can only write to files or syslog. How do I print these logs instead?
An amazing recipe is given in the nginx Dockerfile : # forward request and error logs to docker log collector RUN ln -sf /dev/stdout /var/log/nginx/access.log \ && ln -sf /dev/stderr /var/log/nginx/error.log Simply, the app can continue writing to it as a file, but as a result the lines will go to stdout & stderr !
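The same trick works for any application that insists on writing to a file; a hedged Dockerfile sketch with a made-up log path:

```
# /var/log/myapp/app.log is a hypothetical path - substitute whatever your app writes to
RUN mkdir -p /var/log/myapp \
 && ln -sf /dev/stdout /var/log/myapp/app.log \
 && ln -sf /dev/stderr /var/log/myapp/error.log
```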
{ "source": [ "https://serverfault.com/questions/599103", "https://serverfault.com", "https://serverfault.com/users/12097/" ] }
599,219
Recently I needed to purchase a wildcard SSL certificate (because I need to secure a number of subdomains), and when I first searched for where to buy one I was overwhelmed with the number of choices, marketing claims, and price range. I created a list to help me see past the marketing gimmicks that the greater majority of the Certificate Authorities (CAs) and resellers plaster all over their sites. In the end my personal conclusion is that pretty much the only things that matter are the price and the pleasantness of the CA's website. Question: Besides price and a nice website, is there anything worthy of my consideration in deciding where to purchase a wildcard SSL certificate?
I believe that with respect to deciding where to purchase a wildcard SSL certificate, the only factors that matter are the first year's cost of an SSL certificate, and the pleasantness of the seller's website (i.e. user experience) for the purchase and setup of the certificate. I am aware of the following: Claims about warranties (e.g. $10K, $1.25M) are marketing gimmicks - these warranties protect the users of a given website against the possibility that the CA issues a certificate to a fraudster (e.g. phishing site) and the user loses money as a result (but, ask yourself: is someone spending/losing $10K or more on your fraudulent site? oh wait, you are not a fraudster? no point.) It is necessary to generate a 2048-bit CSR ( certificate signing request ) private key to activate your SSL certificate. According to modern security standards using CSR codes with private key size less than 2048 bits is not allowed. Learn more here and here . Claims of 99+% , 99.3% , or 99.9% browser/device compatibility. Claims of fast issuance and easy install . It is nice to have a money-back satisfaction guarantee (15 and 30 days are common). The following list of wildcard SSL certificate base prices (not sales) and issuing authorities and resellers was updated on May 30th, 2018: price | / year | Certificate Authority (CA) or Reseller ($USD) | -------+--------------------------------------- $0 | DNSimple / Let's Encrypt * $49 | SSL2BUY / AlphaSSL (GlobalSign) * $68 | CheapSSLSecurity / PositiveSSL (Comodo) * $69 | CheapSSLShop / PositiveSSL (Comodo) * $94 | Namecheap / PositiveSSL (Comodo) * (Can$122) $95 | sslpoint / AlphaSSL (GlobalSign) * $100 | DNSimple / EssentialSSL (Comodo) * | $150 | AlphaSSL (GlobalSign) * $208 | Gandi $250 | RapidSSL $450 | Comodo | $500 | GeoTrust $600 | Thawte $600 | DigiCert $609 | Entrust $650 | Network Solutions $850 | GlobalSign | $2,000 | Symantec * Note that DNSimple, sslpoint, Namecheap, CheapSSLShop, CheapSSLSecurity, and SSL2BUY, are resellers, not Certificate Authorities. Namecheap offers a choice of Comodo/PostiveSSL and Comodo/EssentialSSL (though there is no technical difference between the two, just branding/marketing - I asked both Namecheap and Comodo about this - whereas EssentialSSL costs a few dollars more (USD$100 vs $94)). DNSimple resells Comodo's EssentialSSL, which, again, is technically identical to Comodo's PositiveSSL. Note that SSL2BUY, CheapSSLShop, CheapSSLSecurity, Namecheap, and DNSimple provide not only the cheapest wildcard SSL certs, but they also have the least marketing gimmicks of all the sites I reviewed; and DNSimple seems to have no gimmicky stuff whatsoever. Here are links to the cheapest 1-year certificates (as I can't link to them in the table above): SSL2BUY CheapSSLShop CheapSSLSecurity sslpoint Namecheap DNSimple As of March 2018 Let’s Encrypt supports wildcard certificates . DNSimple supports Let's Encrypt certificates.
{ "source": [ "https://serverfault.com/questions/599219", "https://serverfault.com", "https://serverfault.com/users/145737/" ] }
599,249
There isn't much room on server chassis and I'm wondering where a label with the servers name should go. Is there any other information in addition to the name that should go on the label? Does it make sense to label each hard drive in a server or is that not necessary? There certainly is overkill. When I worked at Big Blue, labeling was a huge source of bureaucracy; even a projector needed to be labelled and have its whereabouts routinely reported.
HP ProLiant servers, Supermicro servers and surely any non-Dell systems don't have a convenient LCD on the front. If I do label, the location depends on the server model/type... But this is really a common-sense, do-what-works-for-you question. For instance, on the 1U rack mount systems pictured below, I'd likely add a label on the CD/DVD drive. For the systems here, I may use the CD/DVD slot/blank or place labels on the rack mount ears. For situations where the CD slot doesn't exist, or there isn't enough vertical height on the server, I end up placing labels on the hard disk drive slots.
{ "source": [ "https://serverfault.com/questions/599249", "https://serverfault.com", "https://serverfault.com/users/218569/" ] }
599,357
I am looking for where the default Amazon AMI linux image sets up the privileges for the default ec2-user account. After logging in with this account I can use sudo successfully. Checking via the sudoers file, which I open by running visudo (with no other options) I see a few default settings and permissions for root ALL ALL So ... Where is the permissions for ec2-user assigned? I have not yet tried to add a new permission but ultimately I want to resign ec2-user for systems management tasks and use a non-full root user for administering the applications (stop and start mysql, httpd, edit apache's vhost files, and upload / edit web content under the web root)
It's in /etc/sudoers.d/cloud-init . I, too, delete it from my production systems as soon as I can. It is included by virtue of the line #includedir /etc/sudoers.d in the /etc/sudoers file. Note that, as it says, that leading # isn't treated as a comment sign. On some of my servers, it's also in /etc/sudoers.d/90-cloud-init-users ; it may be safest to userdel the ec2-user user.
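If, as the question suggests, you still want a restricted account for day-to-day administration after locking down ec2-user, a sketch of a sudoers drop-in could look like the following; the webadmin account name and the exact command paths are assumptions, and you should create the file with visudo -f so a syntax error cannot lock you out:

```
# /etc/sudoers.d/webadmin   (create with: visudo -f /etc/sudoers.d/webadmin)
Cmnd_Alias WEBCMDS = /sbin/service httpd restart, /sbin/service httpd reload, \
                     /sbin/service mysqld restart
webadmin ALL=(root) WEBCMDS
```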
{ "source": [ "https://serverfault.com/questions/599357", "https://serverfault.com", "https://serverfault.com/users/162451/" ] }
599,362
We ship a virtual machine to the clients which runs a web service. Clients need to install this virtual machine on one of their hosts using an hypervisor and just need to modify the network configuration to access the web service. We do not want to give full root access to clients for modifying just the network configuration. We thought of adding a new user with sudo privileges on /etc folder but this may other consequences. What is the best of handling it? P.S.: There is no USB access on the machines where the virtual machines are installed.
It's in /etc/sudoers.d/cloud-init . I, too, delete it from my production systems as soon as I can. It is included by virtue of the line #includedir /etc/sudoers.d in the /etc/sudoers file. Note that, as it says, that leading # isn't treated as a comment sign. On some of my servers, it's also in /etc/sudoers.d/90-cloud-init-users ; it may be safest to userdel the ec2-user user.
{ "source": [ "https://serverfault.com/questions/599362", "https://serverfault.com", "https://serverfault.com/users/43669/" ] }
599,421
Currently, the formula for the max_connections parameter in a MySQL RDS t1.micro server model is {DBInstanceClassMemory/12582880}, which works out to 32. Since my server does not allow any more connections after 32, what is the maximum safe value for max_connections I can use for a micro instance?
About 2 years ago, I was tasked with evaluating Amazon RDS for MySQL. I wrote some posts in the DBA StackExchange about my findings and observations: Jul 25, 2012 : Scaling Percona datacenters: setup and replication Aug 02, 2012 : Local database vs Amazon RDS Sep 21, 2012 : MySQL 5.5 Runs Out of Memory, Drops All Connections When Creating Many Databases In short, there are three options you cannot alter max_connections (per Server Model) innodb_buffer_pool_size (per Server Model) innodb_log_file_size (all Server Models, 128M ) Here is the Chart I made telling you those per-Server Model limits MODEL max_connections innodb_buffer_pool_size --------- --------------- ----------------------- t1.micro 34 326107136 ( 311M) m1-small 125 1179648000 ( 1125M, 1.097G) m1-large 623 5882511360 ( 5610M, 5.479G) m1-xlarge 1263 11922309120 (11370M, 11.103G) m2-xlarge 1441 13605273600 (12975M, 12.671G) m2-2xlarge 2900 27367833600 (26100M, 25.488G) m2-4xlarge 5816 54892953600 (52350M, 51.123G) As for your actual question, t1.micro has 34 as a max_connections setting. If you cannot surpass 32, that is quite understandable. Amazon AWS must be able to connect to and monitor things for the RDS Instance as a SUPER user. Not being able to go beyond 32 is reasonable for a t1.micro instance. In light of this, you will have no choice but to trust the management scheme administered by Amazon for apportioning max_connections and other options among all MySQL Instances in the AWS Cloud.
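To confirm what your particular instance actually ended up with, you can check from any MySQL client connected to the RDS endpoint:

```
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';
```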
{ "source": [ "https://serverfault.com/questions/599421", "https://serverfault.com", "https://serverfault.com/users/212488/" ] }
599,560
Let's say I have a key for Github, along with other keys. I've added lots of keys to my ssh agent ( ssh-add -L returns lots of lines) at my home computer A. In my .ssh/config I have set up which key to use with which host, so e.g. ssh -T -vvv [email protected] 2>&1 | grep Offering gives debug1: Offering RSA public key: /Users/doxna/.ssh/id_rsa.github Only one key is offered, as expected. But then ssh-ing to some host B with ForwardAgent yes and repeating the same command, I get debug1: Offering RSA public key: /Users/doxna/.ssh/id_rsa.linode2 debug1: Offering RSA public key: /Users/doxna/.ssh/id_rsa.helium debug1: Offering RSA public key: /Users/doxna/.ssh/id_rsa.github meaning it tries all my keys. This is problematic since only a limited number of keys can be tried before servers return Too many authentication failures . So I tried editing .ssh/config on host B to include Host github.com IdentityFile /Users/doxna/.ssh/id_rsa.github IdentitiesOnly yes but then I get no key offerings, but rather debug2: key: /Users/doxna/.ssh/id_rsa.github ((nil)) which I guess means that the key was not found(?) And after all, the key is located at my home computer A, not host B, so the question is how to refer to it at host B? Hope I managed to explain the question.
You got the right idea. The only part you are missing is that the file pointed to by IdentityFile must exist. It does not need to contain a private key, having just the public key available is sufficient. On host B you can extract the public key from the agent by typing ssh-add -L | grep /Users/doxna/.ssh/id_rsa.github > ~/.ssh/id_rsa.github.pub and then point to that file from ~/.ssh/config
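Putting it together on host B, using the file names from the question:

```
# one-off: extract the public half from the forwarded agent
ssh-add -L | grep id_rsa.github > ~/.ssh/id_rsa.github.pub

# ~/.ssh/config on host B
Host github.com
    IdentityFile ~/.ssh/id_rsa.github.pub
    IdentitiesOnly yes
```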
{ "source": [ "https://serverfault.com/questions/599560", "https://serverfault.com", "https://serverfault.com/users/221948/" ] }
601,409
A ZooKeeper Quorum consisting of three ZooKeeper servers has been created. The zoo.cfg located on all three ZooKeeper servers looks as follows: maxClientCnxns=50 # The number of milliseconds of each tick tickTime=2000 # The number of ticks that the initial # synchronization phase can take initLimit=10 # The number of ticks that can pass between # sending a request and getting an acknowledgement syncLimit=5 # the directory where the snapshot is stored. dataDir=/var/lib/zookeeper # the port at which the clients will connect clientPort=2181 server.1=<ip-address-1>:2888:3888 server.2=<ip-address-2>:2888:3888 server.3=<ip-address-3>:2888:3888 Analysis It is clear that one of the three ZooKeeper servers will become the Leader and the others Followers . If the Leader ZooKeeper server has been shutdown, the Leader election will start again. The aim is to check if another ZooKeeper server will become the Leader if the Leader server has been shut down.
It is possible to check whether a ZooKeeper server is a leader or follower using the nc command that is included in the netcat package: echo stat | nc localhost 2181 | grep Mode echo srvr | nc localhost 2181 | grep Mode #(From 3.3.0 onwards) If the ZooKeeper server is a leader then the command will return: Mode: leader and otherwise: Mode: follower
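To check the whole ensemble in one go, a small shell sketch (replace the placeholder hostnames with the three servers from zoo.cfg):

```
for h in zk1.example.com zk2.example.com zk3.example.com; do
    printf '%s: ' "$h"
    echo srvr | nc "$h" 2181 | grep Mode
done
```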
{ "source": [ "https://serverfault.com/questions/601409", "https://serverfault.com", "https://serverfault.com/users/215599/" ] }
601,450
I wanted to update the DHCP lease of an Amazon EC2 instance, so I executed the following command: user@host:~$ sudo dhclient Following that, the system's DHCP lease is updated successfully updated. However, the command prints the following to the console: RTNETLINK answers: File exists What on earth does that mean? Is it a cause for concern? For what it's worth, dhclient returned without any errors: user@host:~$ echo $? 0
Basically what happens is that dhclient adds a route to the routing table, but it tries this while the route is already in the table. Check ip route for a route which was added by the DHCP server. To release the current lease, run dhclient -r (and then run dhclient again to obtain a new one). If that's not enough, you can remove all leases by removing the lease file and getting a new lease: sudo rm /var/lib/dhcp/dhclient.leases; sudo dhclient eth0 Depending on your exact setup this might be an issue with having to type your password twice, so watch out for that.
{ "source": [ "https://serverfault.com/questions/601450", "https://serverfault.com", "https://serverfault.com/users/160138/" ] }
601,475
I'm new with Apache Cassandra. I am trying to install a little sample cluster using two CentOS server. I followed the documentation (Tarball installation) and the nodes are up. However, when I go to OpsCenter, the nodes cannot see each other's agent (there is always "1 of 2 agents connected"..I tried to fix, but nothing change). I tried both to disable and enable SSL, I tried to set the incoming_interface in opscenter.conf, I tried almost everything the network suggested to me, but the problem persisted. Is there someone that could help me, please?
Basically what happens is that dhclient adds a route to the routing table. It tries this while the route is already in the table. Check ip route for a route which was added by the dhcp server. For having the lease renewed do dhclient -r if thats not enough you can remove all leases by removing the file and getting a new lease sudo rm /var/lib/dhcp/dhclient.leases; sudo dhclient eth0 Depending on your exact setup this might be an issue with having to type your password twice, so watch out for that.
{ "source": [ "https://serverfault.com/questions/601475", "https://serverfault.com", "https://serverfault.com/users/223566/" ] }
601,548
I moved my Master/slave database architecture to Amazon RDS and everything works fine. But I have a slave outside of the RDS service which should keep in sync with the new Master server; to do so I have to point my DB domain name master-db.myawsserver.com at the Master's (RDS) private IP address. The AWS console doesn't provide this information, and I am connected directly to the MySQL database.
RDS instances can change their IPs unexpectedly, so they should not be used nor are they provided in the console or API (although you can technically dig for them). The DNS endpoint provided in the AWS console will resolve to the internal IPs from within Amazon's network.
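You can see this for yourself from an instance inside the same region; the endpoint below is a made-up example:

```
# from inside AWS this returns the current internal IP of the RDS instance
dig +short mydb.abcdefgh1234.us-east-1.rds.amazonaws.com
```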
{ "source": [ "https://serverfault.com/questions/601548", "https://serverfault.com", "https://serverfault.com/users/212463/" ] }
603,982
When I import my OpenSSH public key into AWS EC2's keyring the fingerprint that AWS shows doesn't match what I see from: ssh-keygen -l -f my_key It is a different length and has different bytes. Why? I'm sure I uploaded the correct key.
AWS EC2 shows the SSH2 fingerprint, not the OpenSSH fingerprint everyone expects. It doesn't say this in the UI. It also shows two completely different kinds of fingerprints depending on whether the key was generated on AWS and downloaded, or whether you uploaded your own public key. Fingerprints generated with ssh-keygen -l -f id_rsa will not match what EC2 shows. You can either use the AWS API tools to generate a fingerprint with the ec2-fingerprint-key command, or use OpenSSL to do it. Note that if you originally generated a key on AWS, but then uploaded it again (say, to another region) then you'll get a different fingerprint because it'll take the SSH2 RSA fingerprint, rather than the sha1 it shows for keys you generated on AWS. Fun, hey? In the above, test-generated was generated using AWS EC2. test-generated-reuploaded is the public key from the private key AWS generated, extracted with ssh-keygen -y and uploaded again. The third key, test-uploaded , is a locally generated key ... but the local ssh-keygen -l fingerprint is b2:2c:86:d6:1e:58:c0:b0:15:97:ab:9b:93:e7:4e:ea . $ ssh-keygen -l -f theprivatekey 2048 b2:2c:86:d6:1e:58:c0:b0:15:97:ab:9b:93:e7:4e:ea $ openssl pkey -in theprivatekey -pubout -outform DER | openssl md5 -c Enter pass phrase for id_landp: (stdin)= 91:bc:58:1f:ea:5d:51:2d:83:d3:6b:d7:6d:63:06:d2 Keys uploaded to AWS When you upload a key to AWS, you upload the public key only, and AWS shows the MD5 hash of the public key. You can use OpenSSL, as demonstrated by Daniel on the AWS forums , to generate the fingerprint in the form used by AWS to show fingerprints for uploaded public keys (SSH2 MD5), like: 7a:58:3a:a3:df:ba:a3:09:be:b5:b4:0b:f5:5b:09:a0 If you have the private key, you can generate the fingerprint by extracting the public part from the private key and hashing it using: openssl pkey -in id_rsa -pubout -outform DER | openssl md5 -c If you only have the public key, and it is in OpenSSH format, you need to first convert it to PEM and then DER and then hash, using: ssh-keygen -f id_rsa.pub -e -m PKCS8 | openssl pkey -pubin -outform DER | openssl md5 -c Keys generated on AWS When you generate a keypair on AWS, AWS shows the SHA1 hash of the private key, which is longer, like: ea:47:42:52:2c:25:43:76:65:f4:67:76:b9:70:b4:64:12:00:e4:5a In this case you need to use the following command, also shown by Daniel on the AWS forums, to generate a sha1 hash based on the private key: openssl pkcs8 -in aws_private.pem -nocrypt -topk8 -outform DER | openssl sha1 -c on the downloaded AWS-generated private key/certificate file. It'll work on keys you converted to OpenSSH format too. This does, however, require that you have the private key, since the hash is of the private key. You cannot generate the hash locally if all you have is the public key. References See: AWS developers forum discussion https://stackoverflow.com/q/19251562/398670 Bug report on AWS forums - please chime in
{ "source": [ "https://serverfault.com/questions/603982", "https://serverfault.com", "https://serverfault.com/users/102814/" ] }
603,984
I created a new Windows instance on AWS EC2, using a keypair I created by uploading my public key from my local machine. The instance launched fine, but it won't decrypt the password. It reports: I'm certain I uploaded the correct key. I've verified that the fingerprints match with the weird fingerprint format AWS uses . But it just won't decrypt. I've tried uploading the key file, and pasting it into the form. I eventually figured out that it isn't stripping the trailing newline, and deleted the blank line in the key. That just gets me to a new error when I click "Decrypt Password", though:
AWS EC2's key management does not cope with SSH private keys that have passwords set (are encrypted). It doesn't detect this, and simply fails with an uninformative error. If your private key is stored encrypted on disk (like it should be, IMO) you must decrypt it to paste it into AWS's console. Rather than doing that, consider decrypting the password locally, so you don't have to send your private key to AWS. Get the encrypted password data (base64 encoded) from the server log after startup, or using get-password-data or the corresponding API requests. You can then base64 decode and decrypt the result: base64 -d /tmp/file | openssl rsautl -decrypt -inkey /path/to/aws/private/key.pem (OpenSSH private keys are accepted by openssl rsautl ). The issue with failing to handle password protected keys with a useful error also affects the ec2-get-password command . See also: EC2 Windows - Get Administrator Password decrypt password with OpenSSL bug report on AWS forums - please chime in.
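As a sketch of that local decryption with today's aws CLI (the instance ID and key path are placeholders):

```
aws ec2 get-password-data --instance-id i-0123456789abcdef0 \
    --query PasswordData --output text \
  | base64 -d \
  | openssl rsautl -decrypt -inkey ~/.ssh/aws-key.pem
```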
{ "source": [ "https://serverfault.com/questions/603984", "https://serverfault.com", "https://serverfault.com/users/102814/" ] }
603,987
Before I plunge into the depths of how to synchronize UID's/GID's across my different Linux machines, I would like to know what is actually the benefit? I know that this keeps file synchronization relatively easy (as ownership is "naturally" retained). However this can also be achieved otherwise depending on the transmission service. Is there anything else that would benefit from consistent UIDs/GIDs?
technical debt For the reasons below, it is much simpler to address this problem early on to avoid the accumulation of technical debt . Even if you find yourself already in this situation, it's probably better to deal with it in the near future than let it continue building. networked filesystems This question seems to be focused on the narrow scope of transferring files between machines with local filesystems, which allows for machine specific ownership states. Networked filesystem considerations are easily the biggest case for trying to keep your UID/GID mappings in sync, because you can usually throw that "achieved otherwise" you mentioned out the window the moment they enter the picture. Sure, you might not have networked filesystems shared between these hosts right now ...but what about the future? Can you honestly say that there will never be a use case for a networked filesystem being introduced between your current hosts, or hosts that are created in the future? It's not very forward thinking to assume otherwise. Assume that /home is a networked filesystem shared between host1 and host2 in the following examples. Disagreeing permissions : /home/user1 is owned by a different user on each system. This prevents a user from being able to consistently access or modify their home directory across systems. chown wars : It's very common for a user to submit a ticket requesting that their home directory permissions be fixed on a specific system. Fixing this problem on host2 breaks the permissions on host1 . It can sometimes take several of these tickets to be worked before someone steps back and realizes that a tug of war is in play. The only solution is to fix the disagreeing ID mappings. Which leads to... UID/GID rebalancing hell : The complexity of correcting IDs later increases exponentially by the number of remappings involved to correct a single user across multiple machines. ( user1 has the ID of user2 , but user2 has the ID of user17 ...and that's just the first system in the cluster) The longer you wait to fix the problem, the more complex these chains can become, often requiring the downtime of applications on multiple servers in order to get things properly in synch. Security problems : user2 on host2 has the same UID as user1 on host1 , allowing them to write to /home/user1 on host2 without the knowledge of user1 . These changes are then evaluated on host1 with the permissions of user1 . What could possibly go wrong? (if user1 is an app user, someone in dev will discover it's writable and will make changes. this is a time proven fact.) There are other scenarios, and these are just examples of the most common ones. names aren't always an option Any scripts or config files written against numeric IDs become inherently unportable within your environment. Generally not a problem because most people don't hardcode these unless they're absolutely required to...but sometimes the tool you're working with doesn't give you a choice in the matter. In these scenarios, you're forced to maintain n different versions of the script or configuration file. Example: pam_succeed_if allows you to use fields of user , uid , and gid ...a "group" option is conspicuously absent. If you were put in a position where multiple systems were expected to implement some form of group-based access restriction, you'd have n different variations of the PAM configs. (or at least a single GID that you have to avoid collisions on) centralized management natxo's answer has this covered pretty well.
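A quick way to spot disagreements before they turn into the scenarios above is simply to compare the IDs across hosts; a throwaway sketch with placeholder hostnames and a placeholder account:

```
for h in host1 host2 host3; do
    printf '%-10s ' "$h"
    ssh "$h" id user1
done
```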
{ "source": [ "https://serverfault.com/questions/603987", "https://serverfault.com", "https://serverfault.com/users/119504/" ] }
604,397
I'm looking for advice setting up time servers for a very non-typical network. I support many closed networks that have occasional access to the internet. A network would get access most days for a few hours, but would frequently go 1-3 weeks blacked-out. The computers/servers on this network are mostly *nix-based, but not all the same flavor. The entire network is mobile, so when it connects, it will have very different hops/latency to internet time servers. The servers on the closed network are powered-off frequently (at least daily). Right now, my gut tells me to use NTP (because I hate re-learning all the stuff that someone else already got working pretty well). But I have several issues, and am looking for someone with experience in this type of strange situation. I currently have no solution in place, I'm simply letting the internal clocks drift. This results in errors of ~600s in a majority of networks. I have seen mismatch worse than 10,000s. Is there something "better" than NTP in this situation? I know NTP likes to have very frequent, consistent access to servers that give nearly identical answers. I won't have that. How many internal NTP servers should I configure, so that during periods of internet blackout, I have internal time that is consistent within the closed network? There is no human access. No matter how large the mismatch, the server(s) must attempt to correct itself. Discrete steps are very bad. No matter how large the mismatch, the correction must be "slewed", not "stepped". I understand that this could take many hours to correct.
In the old days, setting up a stratum-1 NTP server was very difficult, because stratum-0 sources were very expensive, extremely delicate, and usually radioactive. Nowadays we have the GPS, which incidentally functions as an extremely accurate radio clock. You can buy a dedicated stratum-1 server containing a GPS receiver as its stratum-0 timesource for quite reasonable prices, or with a little ingenuity you can attach a decent consumer-grade GPS unit to a random server, and set up NTP accordingly to give you your own stratum-1 server. The first of those is better for improved availability. Do one of those things, and you'll have a single stratum-1 server on your network to which everyone can sync. One is enough; everyone will be sync'ed to it while it's up, and if it goes down, all the clients will probably have got a good idea of their drift rates, so they shouldn't drift too far before it comes back, at which time they'll gently resync to it. I can't see any reason not to have good time even with intermittent internet access.
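On the other machines in the closed network the configuration can stay very plain; a hedged ntp.conf sketch, where the server address is a placeholder and the tinker lines are what make ntpd always slew and never give up, matching the requirements in the question:

```
# /etc/ntp.conf on every other machine in the closed network
server 10.0.0.5 iburst      # the internal GPS-backed stratum-1 server
tinker panic 0              # never exit, no matter how large the offset
tinker step 0               # never step the clock - always slew
```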
{ "source": [ "https://serverfault.com/questions/604397", "https://serverfault.com", "https://serverfault.com/users/225808/" ] }
604,510
How is this possible and how do I deal with it? I'm making backup script that is dependent on Unix date and have discovered an interesting bug: [root@web000c zfs_test]# date +%y-%m-%d --date='2 months ago' 14-04-01 [root@web000c zfs_test]# date +%y-%m-%d --date='3 months ago' 14-02-28 [root@web000c zfs_test]# date Sun Jun 1 00:08:50 CEST 2014
You're seeing this behavior because of summer time (daylight saving time). Because you are currently in summer time, where your clock is one hour ahead, when you ask for three months ago at just after midnight on the first of June, the time ends up being one hour "earlier" because it was not summer time three months ago. The GNU date documentation suggests to work around this by using 12:00 noon and the 15th of the month as starting points, when asking for relative days or months, respectively. For example: date +%y-%m-%d --date="$(date +%Y-%m-15) -3 month"
{ "source": [ "https://serverfault.com/questions/604510", "https://serverfault.com", "https://serverfault.com/users/195529/" ] }
604,541
I'm using debian/Ubuntu, and get confused about versions of packages. When using dpkg -l command, I get: ii vim 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor ii vim-common 2:7.3.429-2ubuntu2.1 Vi IMproved - Common files ii vim-runtime 2:7.3.429-2ubuntu2.1 Vi IMproved - Runtime files ii vim-tiny 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor - compact version ii virt-what 1.11-1 detect if we are running in a virtual machine ii w3m 0.5.3-5ubuntu1 WWW browsable pager with excellent tables/frames support ii watershed 6 reduce superfluous executions of idempotent command ii wget 1.13.4-2ubuntu1 retrieves files from the web ii whiptail 0.52.11-2ubuntu10 Displays user-friendly dialog boxes from shell scripts ii whoopsie 0.1.33 Ubuntu crash database submission daemon ii wimlib9 1.5.0-1~webupd8~precise Library to extract, create, modify, and mount WIM files ii wimtools 1.5.0-1~webupd8~precise Tools to extract, create, modify, and mount WIM files ii wireless-tools 30~pre9-5ubuntu2 Tools for manipulating Linux Wireless Extensions ii wpasupplicant 0.7.3-6ubuntu2.1 client support for WPA and WPA2 (IEEE 802.11i) ii x11-common 1:7.6+12ubuntu2 X Window System (X.Org) infrastructure ii x11-utils 7.6+4ubuntu0.1 X11 utilities ii xauth 1:1.0.6-1 X authentication utility ii xbitmaps 1.1.1-1 Base X bitmaps ii xclip 0.12-1 command line interface to X selections ii xfonts-encodings 1:1.0.4-1ubuntu1 Encodings for X.Org fonts ii xfonts-utils 1:7.6+1 X Window System font utility programs ii xkb-data 2.5-1ubuntu1.3 X Keyboard Extension (XKB) configuration data ii xml-core 0.13 XML infrastructure and XML catalog file support rc xpdf 3.02-21build1 Portable Document Format (PDF) reader ii xterm 271-1ubuntu2.1 X terminal emulator ii xz-lzma 5.1.1alpha+20110809-3 XZ-format compression utilities - compatibility commands ii xz-utils 5.1.1alpha+20110809-3 XZ-format compression utilities ii zabbix-agent 1:1.8.11-1 network monitoring solution - agent ii zlib1g 1:1.2.3.4.dfsg-3ubuntu4 compression library - runtime ii zlib1g-dev 1:1.2.3.4.dfsg-3ubuntu4 compression library - development ii zsh 4.3.17-1ubuntu1 shell with lots of features The third column is version , but it is all "messed up" in a way I can't understand. I mean, different packages use totally different naming specifications. Here are the major questions: Why do some version numbers have ubuntu in them, and some not? What does all the special punctuation -~+ mean? What are alpha , build , and dfsg ? Can I just use them casually? vim and other packages have 2: . What does that mean? How does "version comparison" work, when version formats can be so different? Can anyone please explain this to me? Or where can I find an official document? Thanks in advance.
The Debian Policy Manual has this to say about the version field, which answers some parts of your question: Format The format is: [epoch:]upstream_version[-debian_revision] The three components here are: epoch This is a single (generally small) unsigned integer. It may be omitted, in which case zero is assumed. If it is omitted then the upstream_version may not contain any colons. It is provided to allow mistakes in the version numbers of older versions of a package, and also a package's previous version numbering schemes, to be left behind. upstream_version This is the main part of the version number. It is usually the version number of the original ("upstream") package from which the .deb file has been made, if this is applicable. Usually this will be in the same format as that specified by the upstream author(s); however, it may need to be reformatted to fit into the package management system's format and comparison scheme. The comparison behavior of the package management system with respect to the upstream_version is described below. The upstream_version portion of the version number is mandatory. The upstream_version may contain only alphanumerics[36] and the characters "." (full stop), "+" (plus), "-" (hyphen), ":" (colon), "~" (tilde) and should start with a digit. If there is no debian_revision then hyphens are not allowed; if there is no epoch then colons are not allowed. debian_revision This part of the version number specifies the version of the Debian package based on the upstream version. It may contain only alphanumerics and the characters "." (full stop), "+" (plus), "~" (tilde) and is compared in the same way as the upstream_version is. It is optional; if it isn't present then the upstream_version may not contain a hyphen. This format represents the case where a piece of software was written specifically to be a Debian package, where the Debian package source must always be identical to the pristine source and therefore no revision indication is required. It is conventional to restart the debian_revision at 1 each time the upstream_version is increased. The package management system will break the version number apart at the last hyphen in the string (if there is one) to determine the upstream_version and debian_revision . The absence of a debian_revision is equivalent to a debian_revision of 0. Comparison When comparing two version numbers, first the epoch of each are compared, then the upstream_version if epoch is equal, and then debian_revision if upstream_version is also equal. epoch is compared numerically. The upstream_version and debian_revision parts are compared by the package management system using the following algorithm: The strings are compared from left to right. First the initial part of each string consisting entirely of non-digit characters is determined. These two parts (one of which may be empty) are compared lexically. If a difference is found it is returned. The lexical comparison is a comparison of ASCII values modified so that all the letters sort earlier than all the non-letters and so that a tilde sorts before anything, even the end of a part. For example, the following parts are in sorted order from earliest to latest: ~~ , ~~a , ~ , the empty part, a . Then the initial part of the remainder of each string which consists entirely of digit characters is determined. The numerical values of these two parts are compared, and any difference found is returned as the result of the comparison. 
For these purposes an empty string (which can only occur at the end of one or both version strings being compared) counts as zero. These two steps (comparing and removing initial non-digit strings and initial digit strings) are repeated until a difference is found or both strings are exhausted. Note that the purpose of epochs is to allow us to leave behind mistakes in version numbering, and to cope with situations where the version numbering scheme changes. It is not intended to cope with version numbers containing strings of letters which the package management system cannot interpret (such as ALPHA or pre- ), or with silly orderings. ubuntu will indicate that the package has been built specifically for Ubuntu. The alpha and build strings don't seem to have any particular meaning, but dfsg refers to a package that has been modified for compliance with the Debian Free Software Guidelines .
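You can also let dpkg apply these comparison rules for you, which is a handy way to check your reading of epochs and tildes:

```
# the epoch wins over everything that follows it
dpkg --compare-versions 2:7.3.429-2ubuntu2.1 gt 7.4-1 && echo "epoch 2 sorts later"

# a tilde sorts before the empty string, so a ~backport revision is older than the plain one
dpkg --compare-versions 1.5.0-1~webupd8~precise lt 1.5.0-1 && echo "tilde sorts earlier"
```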
{ "source": [ "https://serverfault.com/questions/604541", "https://serverfault.com", "https://serverfault.com/users/214731/" ] }
604,980
I know that typically electronics draw only as much current as they need. Is this the case with UPSs? I have a 120 volt, 30 amp circuit, and am replacing the UPS plugged into that circuit with a lower wattage UPS. This UPS has a 15 amp plug. Is it safe to use an adapter to do this? Also, the current plug is NEMA L5-30, whereas the newer UPS is 5-15. If this is safe, are there adapters that are L5-30P to 5-15R?
You should be safe. It is always OK to put a lower load on a higher rated receptacle. (At the proper voltage that is. Don't mix 230V and 115V). Just think of it this way: If it wasn't OK nobody could plug a phone charger (about 2 Amp max) in a standard wall-outlet (10 Amps or more). And for the record: I am a qualified electrician. Even though it's been over 20 years since I worked in that field, the laws of electrical physics haven't changed. I still recommend you have a electrician sort out the converter plug or cable. The cheap stuff you can buy online is often of shoddy quality. If you buy it yourself get it from a reputable source. An UPS for, presumably, important equipment is not the place to skimp on quality.
{ "source": [ "https://serverfault.com/questions/604980", "https://serverfault.com", "https://serverfault.com/users/38307/" ] }
605,715
I've setup postfix so that email clients use port 465 (smtps) for outbound mail. I'm not really understanding the difference between smtps (port 465) and submission (port 587) What's the 'best practice' when configuring postfix for clients to securely send mail? Just use smtps? Or use both submission and smtps?
edit: This answer is based on RFC-6409 and is no longer correct, see the newer RFC-8314 Port 465 was used for SMTP connections secured by SSL. However, using that port for SMTP has been deprecated with the availability of STARTTLS: "Revoking the smtps TCP port" These days you should no longer use Port 465 for SMTPS. Instead, use Port 25 for receiving mails for your domain from other servers, or port 587 to receive e-mails from clients, which need to send mails through your server to other domains and thus other servers. As an additional note, port 587 however is dedicated to mail submission - and mail submission is designed to alter the message and/or provide authentication: offering and requiring authentication for clients which try to submit mails providing security mechanisms to prevent submission of unsolicited bulk mail (spam) or infected mails (viruses, etc.) modify the mail to the needs of an organisation (rewriting the from part, etc.) Submission to port 587 is supposed to support STARTTLS, and thus can be encrypted. See also RFC#6409 .
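If you follow the submission-port route described above, the relevant postfix piece is a stanza in master.cf; this is only a sketch, your distribution's stock file usually ships a commented-out version of it and the exact -o overrides depend on your TLS and SASL setup:

```
# /etc/postfix/master.cf
submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
```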
{ "source": [ "https://serverfault.com/questions/605715", "https://serverfault.com", "https://serverfault.com/users/226681/" ] }
605,848
I have a server that hosts a website at home and when I'm at other places accessing my website it is very slow. My Upload speed is 3mbps and I get over 1k users a day. It has got to be incredibly slow at some times of the day. I want to send my server to a professional data center. Because this service has outgrown being hosted at my home, what kind of service should I be looking for to get my server in a data center with more available bandwidth?
Yes, you can do that. It is called colocation. Essentially you provide the server and the colocation provider supplies everything else: power, cooling, security, and in some cases they provide bandwidth in some cases you can provide it yourself. They will base the cost on how much physical space your server takes up, how much power it uses how much heat it generates and if you need bandwidth, how much bandwidth you will be using. Typically if they provide bandwidth they will charge you a fixed amount for a certain number of IP addresses and a set CIR (committed information rate). You can either pay for a fixed amount of bandwidth or you can pay based on usage. If you pay for fixed bandwidth then they will give you that amount and you will never be able to use more. It is quite common for you to buy burstable bandwidth though. In this scenario they will provide you with a port that can go up to 100 Mbps for example and bill you based on your average utilization. Typically this is done using the 95th percentile billing model (Google can explain it better than I can). So they may charge you $50/mo per megabit of 95th percentile bandwidth so if you average 10 Mbps then you would pay $500 for that month. Having said that, if you only have 1 server then it would probably be drastically simpler and easier to either rent a dedicated server or use a virtual server (VPS). Companies such as Rackspace and Amazon Web Services provide virtual servers. In that model you would pay for the virtual server based on how much CPU, RAM and disk you need. You also pay for bandwidth but in this case you pay based upon how much data you transfer, not your average utilization. For example, AWS charges about 10-12 cents per gigabyte of data your server sends to the Internet. There are other advantages to using a virtual server. You no longer have to worry about hardware failures since your server is a virtual machine if the host it is running on has a hardware problem it can easily be moved to another host. Additionally it is easy to upgrade the virtual machine to have more or less CPU/RAM/Disk based upon your usage.
{ "source": [ "https://serverfault.com/questions/605848", "https://serverfault.com", "https://serverfault.com/users/226760/" ] }
605,850
EDIT : I've tried something different based on searching around. This is now my /etc/network/interfaces : # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # This line makes sure the interface will be brought up during boot auto eth0 allow-hotplug eth0 # The primary network interface iface eth0 inet static address 85.17.141.27 netmask 255.255.255.0 gateway 85.17.141.254 # dns-* options are implemented by the resolvconf package, if installed dns-nameservers 85.17.150.123 85.17.96.69 85.17.150.123 62.212.64.122 dns-search localdomain # up commands up ip addr add 85.17.141.33/24 dev eth0 up ip -6 addr add 2001:1af8:4100:a00e:4::1/64 dev eth0 up ip -6 ro add default via 2001:1af8:4100:a00e::1 dev eth0 Then ip addr show eth0 outputs: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 link/ether d4:ae:52:c5:d2:1b brd ff:ff:ff:ff:ff:ff inet 85.17.141.27/24 brd 85.17.141.255 scope global eth0 inet 85.17.141.33/24 scope global secondary eth0 inet6 2001:1af8:4100:a00e:d6ae:52ff:fec5:d21b/64 scope global dynamic valid_lft 2591870sec preferred_lft 604670sec inet6 2001:1af8:4100:a00e:4::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::d6ae:52ff:fec5:d21b/64 scope link valid_lft forever preferred_lft forever further ip -6 ro outputs: 2001:1af8:4100:a00e::/64 dev eth0 proto kernel metric 256 fe80::/64 dev eth0 proto kernel metric 256 default via 2001:1af8:4100:a00e::1 dev eth0 metric 1024 default via fe80::2d0:ff:fe9e:1800 dev eth0 proto kernel metric 1024 expires 1627sec default via fe80::2d0:2ff:fe33:3c00 dev eth0 proto kernel metric 1024 expires 1627sec Eventually the two default proto kernel routes disappear from the output. My IPv6 connection still dropped at some point over night though. Again, simply running sudo service networking stop && sudo service networking start got everything working again. Those two fe80 routes reappeared as well, not surprising. Anyone any ideas? aside : at no point has any IPv4 connectivity had issues.
Yes, you can do that. It is called colocation. Essentially you provide the server and the colocation provider supplies everything else: power, cooling, security, and in some cases they provide bandwidth in some cases you can provide it yourself. They will base the cost on how much physical space your server takes up, how much power it uses how much heat it generates and if you need bandwidth, how much bandwidth you will be using. Typically if they provide bandwidth they will charge you a fixed amount for a certain number of IP addresses and a set CIR (committed information rate). You can either pay for a fixed amount of bandwidth or you can pay based on usage. If you pay for fixed bandwidth then they will give you that amount and you will never be able to use more. It is quite common for you to buy burstable bandwidth though. In this scenario they will provide you with a port that can go up to 100 Mbps for example and bill you based on your average utilization. Typically this is done using the 95th percentile billing model (Google can explain it better than I can). So they may charge you $50/mo per megabit of 95th percentile bandwidth so if you average 10 Mbps then you would pay $500 for that month. Having said that, if you only have 1 server then it would probably be drastically simpler and easier to either rent a dedicated server or use a virtual server (VPS). Companies such as Rackspace and Amazon Web Services provide virtual servers. In that model you would pay for the virtual server based on how much CPU, RAM and disk you need. You also pay for bandwidth but in this case you pay based upon how much data you transfer, not your average utilization. For example, AWS charges about 10-12 cents per gigabyte of data your server sends to the Internet. There are other advantages to using a virtual server. You no longer have to worry about hardware failures since your server is a virtual machine if the host it is running on has a hardware problem it can easily be moved to another host. Additionally it is easy to upgrade the virtual machine to have more or less CPU/RAM/Disk based upon your usage.
{ "source": [ "https://serverfault.com/questions/605850", "https://serverfault.com", "https://serverfault.com/users/52567/" ] }
605,931
I am using Apache 2.2.15 on CentOS to provide SSL for a TomCat application. ProxyPass / http://127.0.0.1:8090/ connectiontimeout=300 timeout=300 ProxyPassReverse / http://127.0.0.1:8090 This works fine and everything is great; however, I want to add the following line: Redirect permanent /broken/page.html https://www.servername.com/correct/page.html before the above to handle an error in the TomCat application itself. However, it does not appear to work the way I expect (i.e, it appears to do nothing and change nothing). Is it possible to use Redirect this way? I don't have the ability to edit the application, unfortunately.
Yes! Above the ProxyPass / , add: ProxyPass /broken/page.html ! That'll force the proxypass to not act on the page that you're trying to redirect.
{ "source": [ "https://serverfault.com/questions/605931", "https://serverfault.com", "https://serverfault.com/users/218127/" ] }
606,185
When I use the default settings: vm.overcommit_memory = 0 vm.overcommit_ratio = 50 I can read these values from /proc/meminfo file: CommitLimit: 2609604 kB Committed_AS: 1579976 kB But when I change vm.overcommit_memory from 0 to 2 , I'm unable to start the same set of applications that I could start before the change, especially amarok. I had to change vm.overcommit_ratio to 300 , so the limit could be increased. Now when I start amarok, /proc/meminfo shows the following: CommitLimit: 5171884 kB Committed_AS: 3929668 kB This machine has only 1GiB of RAM, but amarok works without problems when vm.overcommit_memory is set to 0. But in the case of setting it to 2 , amarok needs to allocate over 2GiB of memory. Is it a normal behavior? If so, could anyone explain why, for instance, firefox (which consumes 4-6x more memory than amarok) works in the same way before and after the change?
You can find the documentation in man 5 proc ( or at kernel.org ): /proc/sys/vm/overcommit_memory This file contains the kernel virtual memory accounting mode. Values are: 0: heuristic overcommit (this is the default) 1: always overcommit, never check 2: always check, never overcommit In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the default check is very weak, leading to the risk of getting a process "OOM-killed". In mode 2 (available since Linux 2.6), the total virtual address space that can be allocated (CommitLimit in /proc/mem‐ info) is calculated as CommitLimit = (total_RAM - total_huge_TLB) * overcommit_ratio / 100 + total_swap The simple answer is that setting overcommit to 1, will set the stage so that when a program calls something like malloc() to allocate a chunk of memory ( man 3 malloc ), it will always succeed regardless if the system knows it will not have all the memory that is being asked for. The underlying concept to understand is the idea of virtual memory . Programs see a virtual address space that may, or may not, be mapped to actual physical memory. By disabling overcommit checking, you tell the OS to just assume that there is always enough physical memory to backup the virtual space. Example To highlight why this can sometimes matter, take a look at the Redis guidances on why vm.overcommit_memory should be set to 1 for it.
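To see that formula in action, a minimal sketch that switches a box to strict accounting and reads back the computed limit (the ratio of 80 is just an example value — pick one that matches your RAM/swap mix, and put the two settings in /etc/sysctl.conf to persist them across reboots):

    sudo sysctl -w vm.overcommit_memory=2    # always check, never overcommit
    sudo sysctl -w vm.overcommit_ratio=80    # CommitLimit = (RAM - huge pages) * 80% + swap
    grep -E 'CommitLimit|Committed_AS' /proc/meminfo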
{ "source": [ "https://serverfault.com/questions/606185", "https://serverfault.com", "https://serverfault.com/users/206512/" ] }
606,255
I often hear this term used "We have a T1"... used on SF and other sites. I googled it and it seems like an ancient technology possibly related to frame relay but I'm not sure. Maybe things have changed and the term means different things now. What speed is a T1, do users get all 24 channels at 1.5Mbit? How does it relate to Frame Relay? something we used in my company over 15 years ago before ADSL became competitive. T1 is not something offered in my part of the world, that's why I'm asking really.
It's still a thing, yes. In fact, some places, like where I work, even still use some fractional T1's (which would be a T1 with a bandwidth cap on it). In terms of data, a T1 is a [specific type of] 1.5 Mbit connection. Nothing more, nothing less, at least as it relates to modern networking. Since your question relates to "modern" networking, I should point out that if you see T1s today, you will most often see a "bundle" of T1's, which are multiple T1 lines aggregated together to increase capacity, and you get 1.5 Mbits of bandwidth for every T1 in the bundle. To your question about end users, in terms of data, you can hook your T1 up to a switch (as we do at our locations with T1's), and theoretically have as many endpoints as you want sharing the connection... but they all have to share the 1.5 Mbits of bandwidth (per T1 in the bundle). In terms of voice, if you use a T1 (or bundle of T1's), you get the same data rate, but more importantly, the ability to digitize 24 channels of voice communications simultaneously... so a T1 for voice (which is the same technology as for data), means that you have the ability to have 24 simultaneous land-line phone calls in and/or out of the PBX it's connected to. As to why they're still used... well, faxes are still used, and they're even older, and technically speaking, easily replaced by far superior technologies. Infrastructure has a lot of inertia, especially given the high cost of replacing it with something better. And that's to say nothing of other sources of inertia, like the fact that my bosses still actually believe that T1's are more reliable than fiber or whatever else, or prior business relationships only add to the weight behind sticking with the status quo. The fact that you can "bundle" multiple T1's together allows you to get... tolerable... data rates out of just T1's, and if you've got an ISP that is offering deep discounts on their T1 lines to squeeze some extra money out of their old infrastructure, then you can even run into situations where you can make a compelling business case for going with T1's over a newer technology. In our specific case, we also have remote sites that are in rural areas, where the best available connections are the T1 lines that were run many years ago, so there's just no other options for a few of our sites.
{ "source": [ "https://serverfault.com/questions/606255", "https://serverfault.com", "https://serverfault.com/users/14631/" ] }
606,520
I'm having trouble figuring out how to remove systemd units that no longer have files. They still seem to linger in the system somehow. The old broken units I am trying to remove: core@ip-172-16-32-83 ~ $ systemctl list-units --all firehose-router* UNIT LOAD ACTIVE SUB DESCRIPTION ● [email protected] not-found failed failed [email protected] ● [email protected] not-found failed failed [email protected] LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 2 loaded units listed. To show all installed unit files use 'systemctl list-unit-files'. The files do not exist, yet a reload still has these units lingering: core@ip-172-16-32-83 ~ $ systemctl list-unit-files [email protected] core@ip-172-16-32-83 ~ $ sudo systemctl daemon-reload core@ip-172-16-32-83 ~ $ systemctl list-units --all firehose-router* UNIT LOAD ACTIVE SUB DESCRIPTION ● [email protected] not-found failed failed [email protected] ● [email protected] not-found failed failed [email protected] LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 2 loaded units listed. To show all installed unit files use 'systemctl list-unit-files'. There are no files related to them that I can find: core@ip-172-16-32-83 ~ $ sudo find /var/run/systemd -name "*firehose-router*" core@ip-172-16-32-83 ~ $ find /etc/systemd/ -name "*firehose-router*" core@ip-172-16-32-83 ~ $ find /usr/lib/systemd/ -name "*firehose-router*" core@ip-172-16-32-83 ~ $ So how do I get rid of these?
The command you're after is systemctl reset-failed
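A minimal sketch of the whole cleanup, assuming the unit files really are gone from disk (running reset-failed with no arguments clears every failed unit; you can also pass the specific names shown by list-units):

    sudo systemctl daemon-reload                      # re-read unit files from disk
    sudo systemctl reset-failed                       # drop the lingering "failed" state
    systemctl list-units --all 'firehose-router*'     # should now list nothing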
{ "source": [ "https://serverfault.com/questions/606520", "https://serverfault.com", "https://serverfault.com/users/44901/" ] }
606,824
It is a very sensitive topic that requires a solution from our end. I have few servers that I rent to few people. I have all legal permissions and rights to scan over the servers. I want to prevent people from storing child pornography, animal cruelty or other videos of similar nature. The first priority is to be able to prevent child pornography since it is the most sensitive issue. I tried searching online for solutions but couldn't find many people even discussing about this issue, I believe mainly because it is considered Taboo topic of discussion. One of my thoughts was to search the servers for signatures of known files. Is there such a database anywhere? I know big companies like GoDaddy have such prevention system but as a small company owner what can I do?
There are various Government and Industry programs that will provide Hashes of "Known Bad" material (e.g. CP) to hosting providers. You can then hash the files on your servers and compare. Below are a few that I know of: HashKeeper (discontinued, run by the US DoJ): http://www.nsrl.nist.gov/RDS/rds_2.44/Hashkeeper-RDS244-split.zip ; DCMEC HVSI (run by the Missing Kids non-profit): http://www.missingkids.com/Exploitation/Industry ; NIST NSRL (hashes for software files, mostly for avoiding software piracy): http://www.nsrl.nist.gov/Downloads.htm Other Notes: It's going to be called "CP" everywhere to avoid using the actual term. It's that taboo. The laws in the US are very reasonable when it comes to holding service providers accountable for what their clients put on their servers. Make minimal efforts to prevent abuse, communicate the abuse policy to users, and have procedures to deal with policy violations. CP is about as "sensitive" as it gets. Be sure your response includes contacting authorities immediately upon any known CP violations - do not tamper with the data or server, contact authorities first. Authorities will advise you as to what steps you should follow from there.
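Mechanically, checking against such a hash set can be as simple as the sketch below. The paths, the choice of md5sum and the one-hash-per-line format of known-bad-hashes.txt are all assumptions — each program distributes its set in its own format, and some (e.g. PhotoDNA) use perceptual hashes that plain file hashing cannot reproduce:

    # Hash everything under the customers' storage and flag anything on the known-bad list.
    find /srv/customer-data -type f -print0 | xargs -0 md5sum > /tmp/current-hashes.txt
    grep -F -f known-bad-hashes.txt /tmp/current-hashes.txt > /tmp/matches.txt
    [ -s /tmp/matches.txt ] && echo "Matches found - contact the authorities before touching anything else."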
{ "source": [ "https://serverfault.com/questions/606824", "https://serverfault.com", "https://serverfault.com/users/227272/" ] }
607,398
I just setup and sysprepped a nice new VM, now I need to convert it to a wim real quick, to upload to my sccm server. For some reason, I can't change the VM properties to boot from a legacy nic for pxe, which is how I usually capture my images using sccm. VMM just changes the settings right back, even though it says successful. Anyway, the first page of google was terrible for this, w/ the exception of a 3rd party .ps1 script on MS's website, but I'm using 2012r2, I should be able to do this natively, right?
Absolutely, let's post a prim and proper answer for Google. This is a simple two-command PowerShell execution, using the DISM module. The DISM module can be copied to earlier versions of Windows, provided you have the appropriate version of the Windows Management Framework. First, mount the VHD using Mount-WindowsImage -ImagePath C:\VHDs\BigHomies.vhdx -Path C:\VHDMount -Index 1 Then, capture it into a WIM with New-WindowsImage -CapturePath C:\VHDMount -Name Win7Image -ImagePath C:\CapturedWIMs\Win7.wim -Description "Yet another Windows 7 Image" -Verify and let it do its thing. When you are done you can unmount the VHD and discard any changes using: Dismount-WindowsImage -Path C:\VHDMount -Discard
{ "source": [ "https://serverfault.com/questions/607398", "https://serverfault.com", "https://serverfault.com/users/154913/" ] }
607,443
Can Mac OS X be run inside Docker? If so, any suggestion as to how? And would it be running headless, or there would be a possibility to connect to the GUI remotely?
Docker provides methods for managing OS-level containers and is built on top of Linux's native features for OS-level containerization. All containers running on a system share the same kernel; Mac OS X does not use the Linux kernel, but rather a mach kernel, so it cannot be run inside a Docker container at this time. You can run Docker on your Mac using a virtual machine, but containers running on that instance would need to run Linux. Now that Docker uses libcontainer rather than LXC as its basis, it is possible that porting of libcontainer in the future could one day allow for running Windows and Mac OS Docker containers on those systems respectively, but it would depend on appropriate OS features being available to allow for containerization.
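At the time of writing, the usual way to do that on a Mac was boot2docker, which runs a small Linux VM in VirtualBox and points the docker client at it — roughly (exact commands depend on the boot2docker version you have installed):

    boot2docker init                        # create the Linux VM
    boot2docker up                          # start it; it prints the DOCKER_HOST to export
    docker run -it ubuntu:14.04 /bin/bash   # a Linux container, driven from the Mac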
{ "source": [ "https://serverfault.com/questions/607443", "https://serverfault.com", "https://serverfault.com/users/99166/" ] }
607,689
When I'm transferring large quantities of data using rsync, it would be helpful if I could have the average speed up until now at a glance, rather than a bunch of different speeds for each file.
Yes. Starting with rsync version 3.1.0 the --info=progress2 argument will give you progress on the entire transfer, including speed of the entire transfer. You can see a little bit of detail on the rsync man page .
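For example (the paths and host are placeholders):

    rsync -a --info=progress2 /data/ user@backuphost:/backup/data/

The single progress line then reports totals for the whole transfer — bytes, percentage and rate — rather than per-file figures.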
{ "source": [ "https://serverfault.com/questions/607689", "https://serverfault.com", "https://serverfault.com/users/77798/" ] }
607,747
How can I choose the right CPU for a site that runs on two servers, a web server (Apache worker MPM) and a database server (MySQL)? The website is written in PHP/MySQL, with no PHP caching (as required by the owner), and it has heavy traffic (average concurrent users ~3000 and average transactions per second ~7000). I have two options, for example: 2x octo-core E5-2650 2.0 GHz w/HT (32 threads) or a single Intel Xeon E3-1270V3 3.5 GHz. I have looked up the specifications of both of them and I see that the first one exceeds the second one in everything except the clock speed. What should I be looking at? Note: I asked this question a couple of days ago and deleted it because one of the dedicated server providers refused to share the full CPU information; I am re-posting this now that I have the complete CPU specifications.
{ "source": [ "https://serverfault.com/questions/607747", "https://serverfault.com", "https://serverfault.com/users/105956/" ] }
608,881
Windows Server 2003 is a very good operating system from Microsoft, and we're relying on it on a daily basis. I have heard that I should replace it by something "newer" and more "modern". Why should I do this? What are the implications if I don't upgrade?
While Windows Server 2003 was a very good Operating System for quite some time, it will reach its End of Extended Support life on July 14th, 2015. While mainstream support gives you free security updates, service packs, non-security related hotfixes and a wealth of other stuff, the extended support phase reduces this to security update support and no new features/service-packs. The end of extended support then basically marks the end of the product lifecycle, where there are no new security updates published by Microsoft for free. Depending on the product, there is the possibility to extend this time period by some time, but it's very expensive. (Refer to the Product Lifecycle FAQ @ Microsoft ) What does that mean for you? If a security issue is found in Windows Server 2003 after July 14th, 2015, Microsoft will not issue a patch to fix the issue. Your server will be vulnerable - forever - from that point on forward. We have seen and learned with Windows XP that even after months of awareness campaigns, that there are still a lot of Windows XP installations out there, even after months of its end of extended support. These systems are and remain vulnerable to current and future threats. Refer to Qualys Blog It is therefore strongly recommended and definitely best practice to upgrade those systems before July 14th, 2015. Start now.
{ "source": [ "https://serverfault.com/questions/608881", "https://serverfault.com", "https://serverfault.com/users/121802/" ] }
608,895
Can I put something like this in my .zone file? @ IN CNAME srvr-01.foo.bar. Or is that invalid? If it's invalid, how can I redirect visitors from mydomain.com to the server srvr-01.foo.bar ? (note that I'm not given the server IP, just the domain, which makes me think it could change randomly) EDIT: Sorry, my bad. I replaced NS with CNAME , which is what I actually wanted to write.
{ "source": [ "https://serverfault.com/questions/608895", "https://serverfault.com", "https://serverfault.com/users/94421/" ] }
609,767
I have a simple script which outputs a bunch of logs to screen and I piped the STDOUT to a file to store the logs. Since this script is long running, I needed to rotate the log files so they are chucked into smaller more manageable ones. The problem I faced was that once the logrotate moves the current log file into a new one, the newly created log file is not populated with the logs anymore. It seems that the once the original log file is removed, its file handler is lost and redirection won't work anymore. I also found this post which had the same problem as me and claims that it can be fixed by using >> instead of > to redirect the output. I tested his solution but it didn't work for me. Does anyone have any idea how to keep the redirection work?
You should use the copytruncate directive in your logrotate config for this log file. copytruncate Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place
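A minimal sketch of a drop-in file using it, e.g. /etc/logrotate.d/myscript — the log path and the rotation schedule are assumptions, adjust them to wherever your script's stdout is being redirected:

    /var/log/myscript.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }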
{ "source": [ "https://serverfault.com/questions/609767", "https://serverfault.com", "https://serverfault.com/users/184733/" ] }
610,130
On Debian Wheezy, ulimit -a gives: open files (-n) 1024 I add this to /etc/security/limits.conf * hard nofile 64000 then reboot. And ulimit -a still gives a maximum number of open files of 1024. Anyone could throw some light on it?
Option one: You did not raise the soft limit as well. Possible solution: in /etc/security/limits.conf add * soft nofile 2048 and test with ulimit -n 2048 Option two: You are logged in as a user and in some "config" file (profile, bashrc, something like this) the soft limit is set to a lower value. Possible solution: e.g. grep for ulimit in your etc folder and/or home folder. Warning: depending on the number of files/directories you have in there, you might want to consider only specific directories/files. PS: there are a lot of similar questions here you might want to read up on, especially Hard vs Soft Limit. Read here for another possible solution which goes into more detail: Too Many Open Files
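Coming back to the values in the original question, a sketch of /etc/security/limits.conf with both limits raised, plus a quick check (log out and back in first; also make sure pam_limits.so is referenced from the relevant /etc/pam.d/ files, e.g. common-session on Debian, or the limits file is ignored for that login path):

    # /etc/security/limits.conf
    *   soft   nofile   64000
    *   hard   nofile   64000

    # after a fresh login:
    ulimit -Sn   # soft limit
    ulimit -Hn   # hard limit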
{ "source": [ "https://serverfault.com/questions/610130", "https://serverfault.com", "https://serverfault.com/users/161309/" ] }
610,322
I'm using Ansible 1.6.6 to provision my machine. There is a template task in my playbook that creates destination file from Jinja2 template: tasks: - template: src=somefile.j2 dest=/etc/somefile.conf I do not want to replace somefile.conf if it already exists. Is it possible with Ansible? If so, how?
You can check for file existence using stat, and then use template only if file does not exist. tasks: - stat: path=/etc/somefile.conf register: st - template: src=somefile.j2 dest=/etc/somefile.conf when: not st.stat.exists
{ "source": [ "https://serverfault.com/questions/610322", "https://serverfault.com", "https://serverfault.com/users/227777/" ] }
610,441
I have backed up a Linux web server using rsync with Cygwin. I now have a perfect copy of the server on my Windows laptop. If I delete or modify a file on my laptop and run rsync again with Cygwin, will it delete/update the same file on the server? I'm under the impression that if I delete/modify on the server and run rsync on my laptop it will delete/modify the local file on my laptop, but does this work in reverse?
Rsync does a one way sync, however it's up to you to decide which way the sync goes. Rsync command syntax is the following: rsync [OPTION...] SRC... [DEST] Note that you specify sync from source to destination. Source and destination can be any local or remote path. For example if you want to copy files from your server to your laptop you do: rsync [OPTION...] <server-path> <laptop-path> To sync in the opposite direction you do: rsync [OPTION...] <laptop-path> <server-path> So to answer your question: it depends on how you execute rsync. If you want files to be deleted on the destination you need to use --delete option. But be careful with it, because if you make a mistake when specifying your source then you will end up removing everything on your destination. It's safer to test your sync without --delete option first and once you are happy with how it works you can add --delete option. As suggested by masegaloeh in comments below, -n or --dry-run option may also be used to test rsync command behavior.
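A cautious sketch for the laptop-to-server direction described in the question — the paths are placeholders (a Cygwin-style /cygdrive path on the laptop side), and the point of the first command is to preview before letting --delete loose:

    # preview what would change on the server
    rsync -avn --delete /cygdrive/c/serverbackup/ user@server:/var/www/
    # run it for real once the output looks right
    rsync -av --delete /cygdrive/c/serverbackup/ user@server:/var/www/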
{ "source": [ "https://serverfault.com/questions/610441", "https://serverfault.com", "https://serverfault.com/users/223388/" ] }
611,050
Not a technical question, but a valid one nonetheless. Scenario: HP ProLiant DL380 Gen 8 with 2 x 8-core Xeon E5-2667 CPUs and 256GB RAM running ESXi 5.5. Eight VMs for a given vendor's system. Four VMs for test, four VMs for production. The four servers in each environment perform different functions, e.g.: web server, main app server, OLAP DB server and SQL DB server. CPU shares configured to stop the test environment from impacting production. All storage on SAN. We've had some queries regarding performance, and the vendor insists that we need to give the production system more memory and vCPUs. However, we can clearly see from vCenter that the existing allocations aren't being touched, e.g.: a monthly view of CPU utilization on the main application server hovers around 8%, with the odd spike up to 30%. The spikes tend to coincide with the backup software kicking in. Similar story on RAM - the highest utilization figure across the servers is ~35%. So, we've been doing some digging, using Process Monitor (Microsoft SysInternals) and Wireshark, and our recommendation to the vendor is that they do some TNS tuning in the first instance. However, this is besides the point. My question is: how do we get them to acknowledge that the VMware statistics that we've sent them are evidence enough that more RAM/vCPU won't help? --- UPDATE 12/07/2014 --- Interesting week. Our IT management have said that we should make the change to the VM allocations, and we're now waiting for some downtime from the business users. Strangely, the business users are the ones saying that certain aspects of the app are running slowly (compared to what, I don't know), but they're going to "let us know" when we can take the system down (grumble, grumble!). As an aside, the "slow" aspect of the system is apparently not the HTTP(S) element, i.e.: the "thin app" used by most of the users. It sounds like it's the "fat client" installs, used by the main finance bods, that is apparently "slow". This means that we're now considering the client and the client-server interaction in our investigations. As the initial purpose of the question was to seek assistance as to whether to go down the "poke it" route, or just make the change, and we're now making the change, I'll close it using longneck 's answer. Thank you all for your input; as usual, serverfault has been more than just a forum - it's kind of like a psychologist's couch as well :-)
I suggest that you make the adjustments they have requested. Then benchmark the performance to show them that it made no difference. You could even go so far to benchmark it with LESS memory and vCPU to make your point. Also, "We're paying you to support the software with actual solutions, not guesswork."
{ "source": [ "https://serverfault.com/questions/611050", "https://serverfault.com", "https://serverfault.com/users/56640/" ] }
611,082
When deploying applications onto servers, there is typically a separation between what the application bundles with itself and what it expects from the platform (operating system and installed packages) to provide. One point of this is that the platform can be updated independently of the application. This is useful for example when security updates need to be applied urgently to packages provided by the platform without rebuilding the entire application. Traditionally security updates have been applied simply by executing a package manager command to install updated versions of packages on the operating system (for example "yum update" on RHEL). But with the advent of container technology such as Docker where container images essentially bundle both the application and the platform, what is the canonical way of keeping a system with containers up to date? Both the host and containers have their own, independent, sets of packages that need updating and updating on the host will not update any packages inside the containers. With the release of RHEL 7 where Docker containers are especially featured, it would be interesting to hear what Redhat's recommended way to handle security updates of containers is. Thoughts on a few of the options: Letting the package manager update packages on the host will not update packages inside the containers. Having to regenerate all container images to apply updates seems to break the separation between the application and the platform (updating the platform requires access to the application build process which generates the Docker images). Running manual commands inside each of the running containers seems cumbersome and changes are at risk of being overwritten the next time containers are updated from the application release artifacts. So none of these approaches seems satisfactory.
A Docker image bundles application and "platform", that's correct. But usually the image is composed of a base image and the actual application. So the canonical way to handle security updates is to update the base image, then rebuild your application image.
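As a sketch (the image names are placeholders): pull the patched base, rebuild, and redeploy:

    docker pull ubuntu:14.04         # fetch the base image containing the security fixes
    docker build -t myapp:latest .   # rebuild the application image on top of it
    # then replace running containers with ones started from the new image

Note that rebuilding the image does not touch containers that are already running, and the host's kernel and Docker daemon still get updated through the host's own package manager.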
{ "source": [ "https://serverfault.com/questions/611082", "https://serverfault.com", "https://serverfault.com/users/37524/" ] }
611,120
I'm trying to set up logstash forwarder, but I have issues with making a proper secure channel. Trying to configure this with two ubuntu (server 14.04) machines running in virtualbox. They are 100% clean (not touched hosts file or installed any other packages other than the required java, ngix, elastisearch, etc, for logstash) I do not believe this is a logstash issue, but improper handling of certificates or something not set correct either on the logstash ubuntu or forwarder machine. I generated the keys: sudo openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt My input conf on logstash server: input { lumberjack { port => 5000 type => "logs" ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt" ssl_key => "/etc/pki/tls/private/logstash-forwarder.key" } } Keys were copied to forwarder host , which has the following config. { "network": { "servers": [ "192.168.2.107:5000" ], "timeout": 15, "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt" "ssl key": "/etc/pki/tls/certs/logstash-forwarder.key" }, "files": [ { "paths": [ "/var/log/syslog", "/var/log/auth.log" ], "fields": { "type": "syslog" } } ] } With logstash server running, I 'sudo service logstash-forwarder start' on the forwarder machine, giving me the following repeated error: Jul 9 05:06:21 ubuntu logstash-forwarder[1374]: 2014/07/09 05:06:21.589762 Connecting to [192.168.2.107]:5000 (192.168.2.107) Jul 9 05:06:21 ubuntu logstash-forwarder[1374]: 2014/07/09 05:06:21.595105 Failed to tls handshake with 192.168.2.107 x509: cannot validate certificate for 192.168.2.107 because it doesn't contain any IP SANs Jul 9 05:06:22 ubuntu logstash-forwarder[1374]: 2014/07/09 05:06:22.595971 Connecting to [192.168.2.107]:5000 (192.168.2.107) Jul 9 05:06:22 ubuntu logstash-forwarder[1374]: 2014/07/09 05:06:22.602024 Failed to tls handshake with 192.168.2.107 x509: cannot validate certificate for 192.168.2.107 because it doesn't contain any IP SANs As I mentioned earlier, I do not believe this is a logstash issue, but certificate/machine config issue. Problem is, I can't seem to solve it. Hopefully some clever minds here can help me out? Thanks
... Failed to tls handshake with 192.168.2.107 x509: cannot validate certificate for 192.168.2.107 because it doesn't contain any IP SANs SSL needs identification of the peer, otherwise your connection might be against a man-in-the-middle which decrypts + sniffs/modifies the data and then forwards them encrypted again to the real target. Identification is done with x509 certificates which need to be validated against a trusted CA and which need to identify the target you want to connect to. Usually the target is given as a hostname and this is checked against the subject and subject alternative names of the certificate. In this case your target is an IP. To validate the certificate successfully, the IP must be given in the certificate inside the subject alternative names section, not as a DNS entry (e.g. hostname) but as an IP. So what you need to do is: Edit your /etc/ssl/openssl.cnf on the logstash host - add subjectAltName = IP:192.168.2.107 in the [v3_ca] section. Recreate the certificate Copy the cert and key to both hosts PS Consider adding -days 365 or more to the certificate creation command line, as the default certificate validity is just 30 days and you probably do not want to recreate it every month.
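Concretely, once subjectAltName = IP:192.168.2.107 is in place under [v3_ca], recreating the pair is the same command from the question with a longer validity added, and you can confirm the IP made it in before copying the files around:

    sudo openssl req -x509 -batch -nodes -newkey rsa:2048 -days 365 \
        -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
    openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'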
{ "source": [ "https://serverfault.com/questions/611120", "https://serverfault.com", "https://serverfault.com/users/87420/" ] }
611,239
I'm running php5-fpm under Nginx on Ubuntu 14.04. I want to increase the max upload size. I have edited my /etc/php5/fpm/php.ini to have the following lines defined as below: upload_max_filesize = 20M post_max_size = 25M and I restarted php5-fpm and nginx but phpinfo() is still showing the limits to be 8M and 2M for post and upload respectively. Is there anything I have missed here?
In nginx, set client_max_body_size. In PHP, set post_max_size and upload_max_filesize. Then restart or reload PHP-FPM. Source: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
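Putting the three settings together with the sizes from the question — note that nginx's limit must be at least as large as PHP's post_max_size, or nginx will reject the upload before PHP ever sees it:

    # /etc/nginx/nginx.conf (http, server or location block)
    client_max_body_size 25m;

    # /etc/php5/fpm/php.ini
    upload_max_filesize = 20M
    post_max_size = 25M

    # then, on Ubuntu 14.04:
    sudo service php5-fpm restart
    sudo service nginx reload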
{ "source": [ "https://serverfault.com/questions/611239", "https://serverfault.com", "https://serverfault.com/users/145263/" ] }
611,270
So, I'd like to know, is it possible to do the following with mdadm: I start with a RAID0 configuration on 2 disks: sda and sdb . I would like to add one more disk, sdc , to the array and move all data from sdb to it, then disconnect sdb . Right now I see only one option - I stop the array, copy sdb to sdc with dd or any other block-copy tool, and start the array back up. Am I missing something? Is it possible to do this with mdadm?
First of all: to those, who still believes in "RAID0 has no hot spare". It could have a manual spare, done by human, who understand RAID levels and mdadm. mdadm is software RAID, so it could do a lot of interesting things. Credits to Zoredache for the idea! So, the situation: you have RAID0 array of two disks you would like to replace one of them without array downtime If the downtime is acceptable, you always can just make a block copy of disk with dd and reassemble the array, mdadm will do OK. Solution: use RAID4 as intermediate solution RAID0 -> RAID4 -> RAID0 So, if you don't remember RAID4, it is simple. It has a parity block, but unlike RAID5 it is not distributed across the array, but resides on ONE disk. That's the point, this is important and this is the reason RAID5 will not work. What you'll need: two more disks of the same size, as the disk you would like to replace. Environment: Ubuntu 14.04 Thrusty Thar mdadm - v3.2.5 - 18th May 2012 /dev/sdb - start with it, will replace it /dev/sdc - start with it /dev/sdd - will be used temporary /dev/sde - will be used instead of sdb The ultimate RAID0 hot-spare mdadm guide ;) sudo mdadm -C /dev/md0 -l 0 -n 2 /dev/sd[bc] md0 : active raid0 sdc[1] sdb[0] 2096128 blocks super 1.2 512k chunks We've created raid0 array, it looks sweet. sudo md5sum /dev/md0 b422ba644a3c83cdf28adfa94cb658f3 /dev/md0 This is our check point - if even one bit will differ in resulting /dev/md0 - we've failed. sudo mdadm /dev/md0 --grow --level=4 md0 : active raid4 sdc[1] sdb[0] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_] So, we've grown our array to be RAID4. We haven't added the parity disk yet, so let's do it. The grow will be instant - there is nothing to recompute or recalculate. sudo mdadm /dev/md0 -a /dev/sdd md0 : active raid4 sdd[3] sdc[1] sdb[0] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_] [===>.................] recovery = 19.7% (207784/1048064) finish=0.2min speed=51946K/sec We've added sdd as parity disk. This is important to remember - the order of disks in the first row is not syncronized with the picture in second row! [UU_] sdd is displayed first, but in fact it is last one, and holds not the data, but the parity. sudo mdadm /dev/md0 -f /dev/sdb md0 : active raid4 sdd[3] sdc[1] sdb[0](F) 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [_UU] We've made our disk sdb faulty, to remove it in the next steps. sudo mdadm --detail /dev/md0 State : clean, degraded Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 32 1 active sync /dev/sdc 3 8 48 2 active sync /dev/sdd 0 8 16 - faulty spare /dev/sdb Details show us the removal of the first disk and here we can see the true order of the disks in the array. It's important to track the disk with parity, we should not leave it in the array when going back to RAID0. sudo mdadm /dev/md0 -r /dev/sdb md0 : active raid4 sdd[3] sdc[1] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [_UU] sdb is completely removed, could be taken away. sudo mdadm /dev/md0 -a /dev/sde md0 : active raid4 sde[4] sdd[3] sdc[1] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [_UU] [==>..................] recovery = 14.8% (156648/1048064) finish=0.2min speed=52216K/sec We have added the replacement for our sdb disk. And here we go: now the data of sdb is being recovered using parity. Sweeeeet. md0 : active raid4 sde[4] sdd[3] sdc[1] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/3] [UUU] Done. 
Right now we are completely safe - all data from sdb are recovered, and now we have to remove sdd (remember, it holds parity). sudo mdadm /dev/md0 -f /dev/sdd md0 : active raid4 sde[4] sdd[3](F) sdc[1] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_] Made sdd faulty. sudo mdadm /dev/md0 -r /dev/sdd md0 : active raid4 sde[4] sdc[1] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_] Removed sdd from our array. We are ready to become RAID0 again. sudo mdadm /dev/md0 --grow --level=0 --backup-file=backup md0 : active raid4 sde[4] sdc[1] 2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_] [=>...................] reshape = 7.0% (73728/1048064) finish=1.5min speed=10532K/sec Aaaaaaand bang! md0 : active raid0 sde[4] sdc[1] 2096128 blocks super 1.2 512k chunks Done. Let's look at md5 checksum. sudo md5sum /dev/md0 b422ba644a3c83cdf28adfa94cb658f3 /dev/md0 Any more questions? So RAID0 could have a hot spare. It's called "user" ;)
{ "source": [ "https://serverfault.com/questions/611270", "https://serverfault.com", "https://serverfault.com/users/183173/" ] }
611,272
HAProxy gives you the option to set the mode to TCP or HTTP. It also allows you to set the port. So why allow me to choose between HTTP and TCP, if it's letting me choose the port too? Surely if I wanted HTTP I could just choose TCP and port 80? Why only TCP and HTTP? It seems to imply that HTTP is not TCP. Why not have TCP, HTTP, SNMP, FTP, etc.? Why just HTTP and TCP? Why have either of those options if HTTP is TCP? I find it very confusing, and it's really difficult to find any information about load balancing non-http(s) services.
By using the HTTP method in the HAProxy config, you have access to several HTTP-specific options. For example, you can choose different backends based on the URL in the HTTP request. When specifying TCP mode, HAProxy does not evaluate the HTTP headers in the packet. So, you can definitely just use TCP for HTTP traffic, but you wouldn't have the additional HTTP options. As a side note, unless you're using the SSL features, you have to use TCP for HTTPS traffic because the packets are encrypted and HAProxy can't view the HTTP headers.
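To make the difference concrete, here is a rough http-mode sketch (the backend and path names are made up) of something tcp mode simply cannot do, because it never parses the request line:

    frontend web
        bind *:80
        mode http
        acl is_images path_beg /images
        use_backend image_servers if is_images
        default_backend app_servers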
{ "source": [ "https://serverfault.com/questions/611272", "https://serverfault.com", "https://serverfault.com/users/34743/" ] }
611,289
I am in charge of maintaining around 25 PCs with various versions of Windows (Vista, 7, 8). I was thinking something along the following lines: Every 4–6 months: Take an image of the system partition so that installed programs with their various license requirements can be restored easily if a hard disk fails. (I am thinking of Clonezilla for this.) Physically clean the machine, get rid of dust on the fans etc. Every 2 months: Do a software check on things like backups still running ok, anti-virus up to date, Windows updating itself, firewall set up correctly. Every day: Automatic backups of things like emails and documents. What kind of schedule do you recommend? What kind of software tools to use? Ideally I would like to automate as much of this as possible. What other things should I be doing? UPDATE: Thanks for all the great answers so far, the advice to not do backups/images of individual machines doesn't really work in my case. The licensing costs would be prohibitive, since there are at least 5 different software configurations for the different roles in the company -- finance, sales, management, production (3 types here alone) and having licenses for everyone for everything wouldn't make sense. Also we have to keep some old versions of software for compatibility with some of our customers - the installation disks (with the license keys) have been lost or buried before my time.
Here are some "pages" out of my personal "operations manual": All user data is saved on servers, period. It might be replicated to client computers via functionality like "Offline Files" (for laptop computers, particularly) and Outlook's "Cached Exchange Mode", but there is no primary storage of data on client computers, ever. No recurring backups of client computers are performed. No data is stored there. Users are instructed (ideally by corporate policy documents) to save data in approved areas and that anything saved outside those areas is not backed-up. Software should be automatically installed via an automated mechanism wherever economically feasible. (The "break even" has been, for purposes of my Customers, a program being installed on five (5) or more computers. If it goes on fewer I'll probably just do the installation manually.) The Active Directory security group membership and location (OU) are sufficient to determine a machine's software load. I have taken an image of a client computer being used in a very business-critical role now and again, but in general the majority of client computers I work with are built-up from their factory load and automated software installs. Where I've seen it done I've felt that maintaining a "library" of disk images of client computers has been cumbersome and error-prone. Since Windows 7 added software RAID-1 to the "Professional" OS I have made use of that, increasingly, for client computers that are in more "mission-critical" roles. Windows software RAID is much more forgiving and workable than "motherboard RAID" (which is nothing but trouble). Antivirus software should use a "management console" that can provide centralized, automated alerting for fault or anomaly conditions. This often means buying "enterprise"-oriented antivirus software. Computer environment settings (firewall, security options, etc) are pushed out via Group Policy. (Anything that can be done with Group Policy is done that way.) No maintenance of the hardware (fans, etc) is done except when the environment is harsh, and even then only in a reactive manner. Hardware has gotten pretty solid in the last 10-15 years. Updates are installed via WSUS. Compliance is tracked in WSUS and, if the environment warrants (for PCI compliance, for example) with whatever auditing tool is financially appropriate. (SCCM is nice, for example, but not always appropriate from a cost perspective.) Edit (now that I have a couple more minutes to write): My definition of "user data" includes user profiles. I use Folder Redirection to get the big folders out of the profile. I generally redirect AppData, which seems to be heavily discouraged throughout the industry (because dimwit software developers make assumptions about the AppData folder being local that may not be true... >cough< Apple >cough< iTunes >cough< ). Users never have "Administrator" rights on client computers for their day-to-day user accounts. Dealing with small privately-held businesses, as I do, can often require some finesse in explaining to the owner why their user account doesn't have Administrator rights. (With the advent of scary-as-heck malware, though, making this argument has become a lot easier. Score one for malware, I guess...) I do create secondary local Administrator accounts for users who are technically competent and who have a legitimate need on a case-by-case basis (after consulting with my contact and weighing the pros and cons). Making this one change drastically decreases "software maintenance". 
If you do nothing else, do this. Some of the goals of this methodology are: Allow for a user to "hot desk" if they have major failure (smoke rolling out of the computer, etc). All their software might not be available (because of licensing limitations that limit installed seats, etc), but they should have basic functionality. (I support a reasonable number of client computers throughout my Customer-base. I need a simple PC failure to be a non-emergency event or I can't scale to any significant number of Customers.) Reduces most troubleshooting for user issues simply to determining if the problem is user profile-specific or machine-specific. User profile-specific issues are resolved either by restoring the profile from a known-good backup or, in drastic situations, starting with a clean profile. Machine-specific problems are resolved by bringing out a spare machine or wiping / re-imaging the failed computer. There is no data loss impact when the eventual failure of client computer hard disk drives occurs. Computer replacement (and keeping the Customer sticking to a computer lifecycle plan) is easy.
{ "source": [ "https://serverfault.com/questions/611289", "https://serverfault.com", "https://serverfault.com/users/41674/" ] }
611,884
Imagine the server setup of a shared webhosting company where multiple (~100) customers have shell access to a single server. A lot of web "software" recommends chmod 0777 on its files. I'm nervous about our customers unwisely following these tutorials, opening up their files to our other customers. (I'm certainly not using chmod 0777 needlessly myself!) Is there a method to make sure that customers can only access their own files and prevent them from accessing world-readable files from other users? I looked into AppArmor , but that is very tightly coupled to a process, which seems to fail in that environment.
Put a restricted and immutable directory between the outside world and the protected files, e.g. / ├─ bin ├─ home │ └─ joe <===== restricted and immutable │ └─ joe <== regular home directory or /home/joe/restricted/public_html . Restricted means that only the user and perhaps the web server can read it (e.g. modes 0700 / 0750 or some ACLs ). Immutability can be done with chattr +i or by changing the ownership to something like root:joe . An easy way to create that hierarchy on Ubuntu would be to edit /etc/adduser.conf and set GROUPHOMES to yes .
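A sketch of that layout for one customer, assuming a user joe with a matching per-user group (as adduser creates by default):

    sudo mkdir -p /home/joe/joe
    sudo chown root:joe /home/joe       # outer dir owned by root, so joe cannot loosen it
    sudo chmod 750 /home/joe            # only root and group joe may traverse it
    sudo chown joe:joe /home/joe/joe    # inner dir is joe's real home
    # optionally: sudo chattr +i /home/joe

Even if joe later runs chmod 0777 on everything inside his home, other customers still cannot reach it, because they cannot traverse the 750 outer directory.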
{ "source": [ "https://serverfault.com/questions/611884", "https://serverfault.com", "https://serverfault.com/users/209465/" ] }
612,903
I have applied several internet explorer settings via group policy. Especially a long list of URLs in the "site to zone assignment" setting. However it seems that one URL still falls into the "internet zone" even when assigned to the "trusted zone". In earlier versions of internet explorer one could easily determine from the status bar into which zone an URL falls. How can this be done via IE11? Am I overlooking something obvious?
In the menu bar, if you go to File->Properties. The properties dialog shows the zone for that page.
{ "source": [ "https://serverfault.com/questions/612903", "https://serverfault.com", "https://serverfault.com/users/190346/" ] }
613,179
I'm trying to add mount --bind /proc/ /chroot/mysql/proc to /etc/fstab . How can I do this?
The mount command accepts --bind or -o bind . In the /etc/fstab file, you can use the following line: /source /destination none defaults,bind 0 0
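For the exact case in the question that would be something like:

    # /etc/fstab
    /proc    /chroot/mysql/proc    none    defaults,bind    0 0

    # create the mount point and test the entry
    sudo mkdir -p /chroot/mysql/proc
    sudo mount /chroot/mysql/proc    # or simply: sudo mount -a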
{ "source": [ "https://serverfault.com/questions/613179", "https://serverfault.com", "https://serverfault.com/users/28230/" ] }
613,180
Whenever I provision vagrant all the data is nil. When I ssh in and specify /etc/puppet/hiera.yaml as the config option i can get the values. How can I get vagrant to use the right hiera.config file? # Enable the Puppet provisioner config.vm.provision :puppet do |puppet| puppet.manifests_path = "puppet/" puppet.manifest_file = "default.pp" puppet.module_path = "puppet/modules" puppet.hiera_config_path = "puppet/hiera.yaml" puppet.options = "--verbose --debug" end If you want to see all the code its on my bitbucket. https://bitbucket.org/yamiko/izanagi/src
{ "source": [ "https://serverfault.com/questions/613180", "https://serverfault.com", "https://serverfault.com/users/227759/" ] }
613,182
I am looking to increase storage of two RDS instances (just the storage space allocated, not the instance type or other parameters). The documentation at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.ModifyingExisting suggests: You can change from standard storage to Provisioned IOPS storage, or from Provisioned IOPS to standard storage, as well as increase storage, with little to no downtime. I would definitely schedule a maintenance window before performing the change. But the documentation seems a little vague in this area. For someone who might have done this before, what is "little to no downtime"? Can I expect 5 seconds or is it more like 5 minutes? Update July, 2019: I've updated the link to the correct and updated AWS documentation (which was broken). The newer documentation has a blurb that helps answer the original question as well: In most cases, scaling storage doesn't require any outage and doesn't degrade performance of the server. After you modify the storage size for a DB instance, the status of the DB instance is Storage-optimization. The DB instance is fully operational after a storage modification. However, you can't make further storage modifications either for six hours or while the DB instance status is storage-optimization, whichever is longer. However, a special case is if you have a SQL Server DB instance and haven't modified the storage configuration since November 2017. In this case, you might experience a short outage of a few minutes when you modify your DB instance to increase the allocated storage. After the outage, the DB instance is online but in the Storage-optimization state. Performance might be degraded during storage optimization.
First, note that you may be looking at the incorrect operation -- you describe that you want to change storage size , but have quoted documentation describing storage type . This is an important distinction: RDS advises that you won't experience an outage for changing storage size, but that you will experience an outage for changing storage type. Expect degraded performance for changing storage size, the duration and impact of which will depend on several factors: Your RDS instance type Configuration Will this occur during maintenance? Will these changes occur first on your Multi-AZ slave, and then failover? Current database size Candidate database size AWS capacity to handle this request at your requested time of day, at your requested availability zone, in your requested region Engine type (for Amazon Aurora users , storage additions are managed by RDS as-needed in 10 GB increments, so this discussion is moot) With this in mind, you would be better served by testing this yourself, in your environment, and on your terms. Try experimenting with the following: Restoring a new RDS instance from a snapshot of your existing instance, and performing this operation on the new clone. With this clone: Increase the size at different times of day, when you would expect a different load on AWS. Increase to different sizes. Try it with multi-AZ. See if your real downtime changes as compared to not enabling multi-AZ. Try it during a maintenance window, and compare it with applying the change immediately. This will cost a bit more (it doesn't have to... you could do most of that in 1-3 instance-hours), but you will get a much cleaner answer than peddling for our experiences in a myriad of different RDS environments. If you're still looking for a "ballpark" answer, I would advise to plan for at least performance degradation in the scope of minutes, not seconds -- again dependent very much on your environment and configuration. For reference, I most recently applied this exact operation to add 10GB to a 40GB db.m1.small type instance on a Saturday afternoon (in EST). The instance remained in a "modifying" state for approximately 17 minutes. Note that the modifying state does not describe real downtime, but rather the duration that the operation is being applied . You won't be able to apply additional changes to the actual instance (although you can still access the DB itself) and this is also the duration that you can expect any performance degradation to occur. Note : If you're only planning on changing the storage size an outage is unexpected, but note that it can occur if this change is made in conjunction with other operations like changing the instance identifier/class, or storage type.
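If you script the experiment, the storage change itself is a single API call — the instance identifier and target size below are placeholders, and dropping --apply-immediately defers the change to the next maintenance window:

    aws rds modify-db-instance \
        --db-instance-identifier mydb-clone \
        --allocated-storage 50 \
        --apply-immediately
    # watch the status flip to "modifying" and back to "available"
    aws rds describe-db-instances --db-instance-identifier mydb-clone \
        --query 'DBInstances[0].DBInstanceStatus'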
{ "source": [ "https://serverfault.com/questions/613182", "https://serverfault.com", "https://serverfault.com/users/44901/" ] }
613,256
Trying to uninstall zarafa mail server. I use yum list installed to view the already installed packages. After which I use yum erase zarafa* It picks up all the packages but returns: Error in PREUN scriptlet in rpm package zarafa-dagent Error in PREUN scriptlet in rpm package zarafa-gateway Error in PREUN scriptlet in rpm package zarafa-monitor Error in PREUN scriptlet in rpm package zarafa-server Error in PREUN scriptlet in rpm package zarafa-spooler Error in PREUN scriptlet in rpm package zarafa-ical zarafa-ical-7.1.9-1.el6.i686 was supposed to be removed but is not! Verifying : zarafa-ical-7.1.9-1.el6.i686 1/6 zarafa-spooler-7.1.9-1.el6.i686 was supposed to be removed but is not! Verifying : zarafa-spooler-7.1.9-1.el6.i686 2/6 zarafa-server-7.1.9-1.el6.i686 was supposed to be removed but is not! Verifying : zarafa-server-7.1.9-1.el6.i686 3/6 zarafa-monitor-7.1.9-1.el6.i686 was supposed to be removed but is not! Verifying : zarafa-monitor-7.1.9-1.el6.i686 4/6 zarafa-gateway-7.1.9-1.el6.i686 was supposed to be removed but is not! Verifying : zarafa-gateway-7.1.9-1.el6.i686 5/6 zarafa-dagent-7.1.9-1.el6.i686 was supposed to be removed but is not! Verifying : zarafa-dagent-7.1.9-1.el6.i686 6/6 Failed: zarafa-dagent.i686 0:7.1.9-1.el6 zarafa-gateway.i686 0:7.1.9-1.el6 zarafa-ical.i686 0:7.1.9-1.el6 zarafa-monitor.i686 0:7.1.9-1.el6 zarafa-server.i686 0:7.1.9-1.el6 zarafa-spooler.i686 0:7.1.9-1.el6
It seems like somehow yum cached data and the rpm database got out of sync with each other I guess. Try running the next commands: su -c 'yum clean all && rpm --rebuilddb' su -c 'package-cleanup --problems' Then run: su -c 'yum erase zarafa*' Edit #1: Try running the next command: # su -c 'yum --setopt=tsflags=noscripts remove zarafa*' If that doesn't work, try this: # su -c 'rpm -e --noscripts zarafa*'
{ "source": [ "https://serverfault.com/questions/613256", "https://serverfault.com", "https://serverfault.com/users/200235/" ] }
613,257
I’m sorry if this should be on SuperUser instead of ServerFault. Please ask me to migrate the question instead of flaming. I’ve had 2 windows desktops go down on the network in the space of one month, One windows 7 and the other Windows 8 in a network of 6 machines with PDC and another DC in Azure with a few other machines on a virtual Azure network. The machines are 2 year old Asus I7 4 core 8 processors with 32 gig memory and SSD main disk. The machines are being run in a development shop so everybody got everything installed. The 2 machines that went down are running local sql servers (and one mysql and postgress also). The first one went down and we blamed the ssd disk for the crash. But some aspects of the crash made a few warning lights go off in my head but being swamped (developer and trying to bang some sense into the network) did nothing. Ok Then my machine having quite full main system disk (SSD), decided to run the disk cleanup utility to clean up system files. I noticed that I had 192 gig in system files, thought nothing of it and ran it. Few hours later I started getting strange vibes from the machine and started the task manager… file not found error! Went straight into system32 and lo and behold, no files but those locked by the file system where left. Tried to download virus scanners but it could not install because the UAW exe was gone. Managed to get a malware scanner down (did not need an install) which did not give me any good reason for the situation. I went to another windows 7 machine and managed to copy all the system32 files to my file system. And my intention was to do a save reboot and copy the files manually to system32 and hopefully get it running (Got a deadline staring at me), but of course that did not work, the boot sector was gone. The shadow copy folders where gone and the restore points where gone too. So I had to clean install it. The disk is not reporting any errors. I scanned the network and found a hidden service on the PDC (rootkit). But I know of no virus that does this kind of damage. So finally the question is. Can a disk crash on a SSD disk behave like this? And if not what kind of virus can do this kind of damage. Edit I know the network is compromised and needs to be reinstalled. But the question is are the clients going down because of a virus or can this be a SSD disk crash or a windows update failure (Which is the company owner's answer to it all, and he only wants to remove the rootkit and then continue.)
{ "source": [ "https://serverfault.com/questions/613257", "https://serverfault.com", "https://serverfault.com/users/229763/" ] }
613,528
With the base ubuntu:12.04 image, ifconfig is not available in the container, though the ip command is. Why is this, and how can I get ifconfig in the container?
You can install ifconfig with apt-get install net-tools . (Specifically, by adding RUN apt-get install -y net-tools to your Dockerfile.) Based on my test, ifconfig is included in ubuntu:14.04.
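As a minimal sketch (the image name used here is arbitrary), a Dockerfile that bakes the tools in might look like this:
FROM ubuntu:12.04
# net-tools provides ifconfig, netstat, route, etc.
RUN apt-get update && apt-get install -y net-tools
A quick way to verify would be something like docker build -t nettools-test . followed by docker run --rm nettools-test ifconfig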
{ "source": [ "https://serverfault.com/questions/613528", "https://serverfault.com", "https://serverfault.com/users/54651/" ] }
613,829
This is a Canonical Question about CNAMEs at the apices (or roots) of zones It's relatively common knowledge that CNAME records at the apex of a domain are a taboo practice. Example: example.com. IN CNAME ithurts.example.net. In a best case scenario nameserver software might refuse to load the configuration, and in the worst case it might accept this configuration and invalidate the configuration for example.com. Recently I had a webhosting company pass instructions to a business unit that we needed to CNAME the apex of our domain to a new record. Knowing that this would be a suicide config when fed to BIND, I advised them that we would not be able to comply and that this was bunk advice in general. The webhosting company took the stance that it is not outright forbidden by standard defining RFCs and that their software supports it. If we could not CNAME the apex, their advice was to have no apex record at all and they would not provide a redirecting webserver. ...What? Most of us know that RFC1912 insists that A CNAME record is not allowed to coexist with any other data. , but let's be honest with ourselves here, that RFC is only Informational. The closest I know to verbiage that forbids the practice is from RFC1034 : If a CNAME RR is present at a node, no other data should be present; this ensures that the data for a canonical name and its aliases cannot be different. Unfortunately I've been in the industry long enough to know that "should not" is not the same as "must not", and that's enough rope for most software designers to hang themselves with. Knowing that anything short of a concise link to a slam dunk would be a waste of my time, I ended up letting the company get away with a scolding for recommending configurations that could break commonly used software without proper disclosure. This brings us to the Q&A. For once I'd like us to get really technical about the insanity of apex CNAMEs, and not skirt around the issue like we usually do when someone posts on the subject. RFC1912 is off limits, as are any other Informational RFC applicable here that I didn't think of. Let's shut this baby down.
CNAME records were originally created to allow multiple names that provide the same resource to be aliased to a single "canonical name" for the resource. With the advent of name based virtual hosting, it has instead become commonplace to use them as a generic form of IP address aliasing. Unfortunately, most people who come from a web hosting background expect CNAME records to indicate equivalence in the DNS , which has never been the intent. The apex contains record types which are clearly not used in the identification of a canonical host resource ( NS , SOA ), which cannot be aliased without breaking the standard at a fundamental level. (particularly in regards to zone cuts ) Unfortunately, the original DNS standard was written before the standards governing bodies realized that explicit verbiage was necessary to define consistent behavior ( RFC 2119 ). It was necessary to create RFC 2181 to clarify several corner cases due to vague wording, and the updated verbiage makes it clearer that a CNAME cannot be used to achieve apex aliasing without breaking the standard. 6.1. Zone authority The authoritative servers for a zone are enumerated in the NS records for the origin of the zone, which, along with a Start of Authority (SOA) record are the mandatory records in every zone. Such a server is authoritative for all resource records in a zone that are not in another zone. The NS records that indicate a zone cut are the property of the child zone created, as are any other records for the origin of that child zone, or any sub-domains of it. A server for a zone should not return authoritative answers for queries related to names in another zone, which includes the NS, and perhaps A, records at a zone cut, unless it also happens to be a server for the other zone. This establishes that SOA and NS records are mandatory, but it says nothing about A or other types appearing here. It may seem superfluous that I quote this then, but it will become more relevant in a moment. RFC 1034 was somewhat vague about the problems that can arise when a CNAME exists alongside other record types. RFC 2181 removes the ambiguity and explicitly states the record types that are allowed to exist alongside them: 10.1. CNAME resource records The DNS CNAME ("canonical name") record exists to provide the canonical name associated with an alias name. There may be only one such canonical name for any one alias. That name should generally be a name that exists elsewhere in the DNS, though there are some rare applications for aliases with the accompanying canonical name undefined in the DNS. An alias name (label of a CNAME record) may, if DNSSEC is in use, have SIG, NXT, and KEY RRs, but may have no other data. That is, for any label in the DNS (any domain name) exactly one of the following is true: one CNAME record exists, optionally accompanied by SIG, NXT, and KEY RRs, one or more records exist, none being CNAME records, the name exists, but has no associated RRs of any type, the name does not exist at all. "alias name" in this context is referring to the left hand side of the CNAME record. The bulleted list makes it explicitly clear that a SOA , NS , and A records cannot be seen at a node where a CNAME also appears. When we combine this with section 6.1, it is impossible for a CNAME to exist at the apex as it would have to live alongside mandatory SOA and NS records. (This seems to do the job, but if someone has a shorter path to proof please give a crack at it.) 
Update: It seems that the more recent confusion is coming from Cloudflare's recent decision to allow an illegal CNAME record to be defined at the apex of domains, for which they will synthesize A records. "RFC compliant" as described by the linked article refers to the fact that the records synthesized by Cloudflare will play nicely with DNS. This does not change the fact that it is a completely custom behavior. In my opinion this is a disservice to the larger DNS community: it is not in fact a CNAME record, and it misleads people into believing that other software is deficient for not allowing it. (as my question demonstrates)
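If you want to see a mainstream implementation enforce this, here is a hypothetical zone file (the names are placeholders) that tries to put a CNAME at the apex next to the mandatory SOA and NS records:
$TTL 3600
example.com.   IN SOA   ns1.example.com. hostmaster.example.com. ( 1 7200 900 1209600 86400 )
example.com.   IN NS    ns1.example.com.
example.com.   IN CNAME ithurts.example.net.
Running named-checkzone example.com example.com.zone against it should make BIND refuse the zone, typically with a "CNAME and other data" complaint, which is exactly the rule from RFC 2181 section 10.1 being applied.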
{ "source": [ "https://serverfault.com/questions/613829", "https://serverfault.com", "https://serverfault.com/users/152073/" ] }
613,927
When attempting to run a PHP file on Windows server 2012 and IIS, I keep getting a 500 error. I cannot find any detailed logs or anything. However, when going to PHP Manager for IIS and click check config, I get the following error: Detailed Error Information: Module FastCgiModule Notification ExecuteRequestHandler Handler PHP55_via_FastCGI Error Code 0xc0000135 Requested URL http://domain.com:80/brkld3ip.php Physical Path drive:\sites\domain.com\brkld3ip.php Logon Method Anonymous Logon User Anonymous I installed PHP using Microsoft Web Platform Installer 5.0 on a fresh install of Windows Server. I am new to IIS coming from Linux. So I am not "learned" enough in IIS to know what's going on. I have tried updating C++ redistributable 2012 update 4 as a couple websites suggest. Anybody have any other ideas? EDIT: Another thing I checked was memory limit. One site suggested my memory limit needed to be upped. No change. EDIT: Question: Does Windows have to be rebooted for PHP changes to take effect?
There's a fairly good chance you're missing the correct VC++ runtime for the version of PHP you're running. If you're running PHP 5.5.x you need to ensure the VC++11 runtime is installed: http://www.microsoft.com/en-us/download/details.aspx?id=30679 Make sure you download and install the x86 version ( vcredist_x86.exe ), PHP on Windows isn't 64 bit yet. If you're running PHP 5.4.x then you need to install the VC++9 runtime: http://www.microsoft.com/en-us/download/details.aspx?id=5582
{ "source": [ "https://serverfault.com/questions/613927", "https://serverfault.com", "https://serverfault.com/users/148395/" ] }
614,051
Is the default Ctrl-Alt-Delete shutdown -r functionality on Linux systems a dangerous feature? Years ago, when I deployed physical systems with attached keyboards and monitors, I'd sometimes modify the /etc/inittab on Red Hat systems to disable the reboot trap. This usually happened after a local IT person or Windows admin accidentally used the magic key combination on the wrong terminal/keyboard/window and rebooted their server. # Trap CTRL-ALT-DELETE ca::ctrlaltdel:/sbin/shutdown -t3 -r now I haven't done this since the RHEL4 days, but newer systems seem to have a /etc/init/control-alt-delete.conf file for this. In the years since, most of my systems have been deployed headless or are running as virtual machines. This has reduced the frequency of unintended reboots... however, I've had a recent set of ctrl-alt-delete oopses from: 1). an IP KVM plugged into the wrong server by datacenter staff. 2). a Windows admin using the key combination in a VMware console, thinking it was needed for logon. 3). me using the ctrl-alt-delete macro in an HP ILO console to reboot a live CD... but it was actually the ILO for a very busy production server . Does it make sense to disable Ctrl-Alt-Delete reboot in Linux by default? Is this a common concern, or generally ignored? Are there any downsides to doing so? How do you handle this in your environment? Edit: In fact, I just encountered this server , a virtual machine running for 1,115 days, root password unknown, and VMware tools were not installed ( so Ctrl-Alt-Delete would be the only graceful shutdown option ).
This can be useful for very, very seldom touched machines. Years after installation, if no-one can remember a login for the host, Ctrl-Alt-Delete will do proper shutdown and then let you use GRUB (or even LiLo!) to supply rw init=/bin/bash to the kernel and thus give you the chance to reset the root password . The above is also a way that Ctrl-Alt-Delete is dangerous even if physical access to the power/reset switches and power cables is prevented. A boot loader password (and BIOS password plus disabling of USB/CD-ROM boot and the boot menu key) can prevent this but makes legitimate emergency recovery more difficult.
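A rough sketch of that recovery path (the exact boot loader keystrokes and device names vary by distro and loader version):
# at the boot loader, edit the kernel line and append:
#   rw init=/bin/bash
# in the resulting root shell:
mount -o remount,rw /    # only needed if / still came up read-only
passwd root
sync
exec /sbin/init          # or simply power-cycle the box
This is only meant as an illustration of why physical console access (including Ctrl-Alt-Delete) is equivalent to root unless the boot loader and firmware are locked down.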
{ "source": [ "https://serverfault.com/questions/614051", "https://serverfault.com", "https://serverfault.com/users/13325/" ] }
614,351
I am attempting to get ProxyPass to work on my OpenSUSE 13.1 install. I have tried: a2enmod proxy a2enmod proxy_http a2enmod proxy_connect systemctl restart apache2 systemctl reload apache2 (All combinations of statements to no avail). I keep getting the same error over and over: SERVER:/etc/apache2 # apache2ctl start -f /etc/apache2/httpd-proxy.conf AH00526: Syntax error on line 4 of /etc/apache2/httpd-proxy.conf: Invalid command 'ProxyPass', perhaps misspelled or defined by a module not included in the server configuration httpd-proxy.conf looks like: <VirtualHost *:80> DocumentRoot /srv/www/subsite ServerName www.site.com/subsite ProxyPass /subsite/ http://localhost:81 ProxyPassReverse /subsite/ http://localhost:81 </Virtualhost> Does anyone know how to get this ProxyPass statement working?
It looks like proxy_http_module isn't getting loaded. Make sure you have the following inside your httpd.conf : LoadModule proxy_http_module modules/mod_proxy_http.so
{ "source": [ "https://serverfault.com/questions/614351", "https://serverfault.com", "https://serverfault.com/users/233863/" ] }
614,523
We have a Dell PowerEdge T410 server running CentOS, with a RAID-5 array containing 5 Seagate Barracuda 3 TB SATA disks. Yesterday the system crashed (I don't know how exactly and I don't have any logs). Upon booting up into the RAID controller BIOS, I saw that out of the 5 disks, disk 1 was labeled as "missing," and disk 3 was labeled as "degraded." I forced disk 3 back up, and replaced disk 1 with a new hard drive (of the same size). The BIOS detected this and began rebuilding disk 1 - however it got stuck at 1%. The spinning progress indicator did not budge all night; totally frozen. What are my options here? Is there any way to attempt rebuilding, besides using some professional data recovery service? How could two hard drives fail simultaneously like that? Seems overly coincidental. Is it possible that disk 1 failed, and as a result disk 3 "went out of sync?" If so, is there any utility I can use to get it back "in sync?"
Since you have already accepted a bad answer, I am really sorry for my heretic opinion (which has saved such arrays multiple times already). Your second failed disk probably has only a minor problem, maybe a single block failure. That is why the poor sync tool of your RAID-5 firmware crashed on it. You could easily make a sector-level copy with a low-level disk cloning tool (for example, gddrescue is probably very useful), and use this copy as your new disk 3. In that case, your array survives with only minor data corruption. I am sorry, it is probably too late, because the essence of the orthodox answer in this case is: "multiple failures in a RAID 5, here is the apocalypse!" If you want a very good, redundant RAID, use software RAID in Linux. For example, its RAID superblock data layout is public and documented... I am really sorry for this, another heretic opinion.
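A hedged sketch of such a clone with GNU ddrescue (/dev/sdX is the ailing original, /dev/sdY the replacement of at least the same size; do this with the array offline and keep the map file on a third disk):
ddrescue -f -n  /dev/sdX /dev/sdY /root/disk3.map   # first pass, skip the slow/bad areas
ddrescue -f -r3 /dev/sdX /dev/sdY /root/disk3.map   # then retry the bad sectors a few times
Whatever ddrescue cannot recover is simply left unfilled on the copy, which is the "minor data corruption" mentioned above.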
{ "source": [ "https://serverfault.com/questions/614523", "https://serverfault.com", "https://serverfault.com/users/82222/" ] }
614,590
A linux server of mine is trying to establish a LDAPS connection to a global catalog server and the connection is getting dropped (presumably by the GC side). For the purpose of discussion, let's say that 1.1.1.1 is the Linux server and 1.2.3.4 is the global catalog server. If I try to use telnet from the Linux box, I see: [root@foobox ~]# telnet gcfoo.exampleAD.local 3269 Trying 1.2.3.4... Connected to gcfoo.examplead.local. Escape character is '^]'. Connection closed by foreign host. There's no delay between the 4th and 5th lines. It just immediately drops the connection. I thought that telnet results might be a little misleading (since it's not actually appropriate for any type of secure communication) so I collected a packet capture of the actual connection attempt from the appliance (using the actual program requiring LDAPS). Here's what I see (again, IPs and source ports have been renamed to protect the innocent): No. Time Source Destination Protocol Length Info 1 0.000000 1.1.1.1 1.2.3.4 TCP 66 27246 > msft-gc-ssl [SYN] Seq=0 Win=5840 Len=0 MSS=1460 SAC_PERM=1 WS=128 2 0.000162 1.2.3.4 1.1.1.1 TCP 62 msft-gc-ssl > 27246 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 SACK_PERM=1 3 0.000209 1.1.1.1 1.2.3.4 TCP 54 27246 > msft-gc-ssl [ACK] Seq=1 Ack=1 Win=5840 Len=0 4 0.003462 1.1.1.1 1.2.3.4 TCP 248 27246 > msft-gc-ssl [PSH, ACK] Seq=1 Ack=1 Win=5840 Len=194 5 0.007264 1.2.3.4 1.1.1.1 TCP 60 msft-gc-ssl > 27246 [RST] Seq=1 Win=64046 Len=0 I'm a bit rusty with TCP/IP so please forgive my ignorance... I see the three-way handshake taking place in packets 1-3. That makes sense. What's going on in packet #4 though? What does [PSH, ACK] mean? This seems like a redundant acknowledgement that's unnecessary. Is actual data being sent in this 4th packet? Or is this some weird continuation of the handshake?
PSH is a Push flag: http://ask.wireshark.org/questions/20423/pshack-wireshark-capture The Push flag tells the receiver's network stack to "push" the data straight to the receiving socket, and not to wait for any more packets before doing so. The Push flag usually means that data has been sent whilst overriding an in-built TCP efficiency delay, such as Nagle's Algorithm or Delayed Acknowledgements . These delays make TCP networking more efficient at the cost of some latency (usually around a few tens of milliseconds). A latency-sensitive application does not want to wait around for TCP's efficiency delays so the application will usually disable them, causing data to be sent as quick as possible with a Push flag set. On Linux, this is done with the setsockopt() flags TCP_QUICKACK and TCP_NODELAY . See man 7 socket for more information.
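If you want to see the flag on the wire yourself, a quick tcpdump filter along these lines (the interface name and the host/port are the placeholders from the question) matches only segments with PSH set:
tcpdump -nn -i eth0 'host 1.2.3.4 and port 3269 and tcp[tcpflags] & tcp-push != 0'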
{ "source": [ "https://serverfault.com/questions/614590", "https://serverfault.com", "https://serverfault.com/users/21875/" ] }
614,890
Trying to run a simple AWS CLI backup script. It loops through lines in an include file, backs those paths up to S3, and dumps output to a log file. When I run this command directly, it runs without any error. When I run it through CRON I get an "Unable to locate credentials" error in my output log. The shell script: AWS_CONFIG_FILE="~/.aws/config" while read p; do /usr/local/bin/aws s3 cp $p s3://PATH/TO/BUCKET --recursive >> /PATH/TO/LOG 2>&1 done </PATH/TO/INCLUDE/include.txt I only added the line to the config file after I started seeing the error, thinking this might fix it (even though I'm pretty sure that's where AWS looks by default). Shell script is running as root. I can see the AWS config file at the specified location. And it all looks good to me (like I said, it runs fine outside of CRON).
If it works when you run it directly but not from cron there is probably something different in the environment. You can save your environment interactively by doing set | sort > env.interactive And do the same thing in your script set | sort > /tmp/env.cron And then diff /tmp/env.cron env.interactive and see what matters. Things like PATH are the most likely culprits.
{ "source": [ "https://serverfault.com/questions/614890", "https://serverfault.com", "https://serverfault.com/users/58008/" ] }
615,550
I keep getting the error "The configuration file now needs a secret passphrase" after installation of phpMyAdmin. I have set the passphrase and also followed the instructions presented at https://serverfault.com/questions/291490/phpmyadmin-not-allowing-users-to-log-on but it doesn't seem to be working. I am using an AMI and checked the owner and permissions as well. Please kindly help.
This might help, https://wiki.archlinux.org/index.php/PhpMyAdmin#Add_blowfish_secret_passphrase If you see the following error message at the bottom of the page when you first log in to /phpmyadmin (using a previously setup MySQL username and password) : ERROR: The configuration file now needs a secret passphrase (blowfish_secret) You need to add a blowfish password to the phpMyAdmin's config file. Edit /etc/webapps/phpmyadmin/config.inc.php and insert a random blowfish "password" in the line $cfg['blowfish_secret'] = ; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ It should now look something like this: $cfg['blowfish_secret'] = 'qtdRoGmbc9{8IZr323xYcSN]0s)r$9b_JUnb{~Xz'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ This all assumes you've already properly created the config file, cp config.sample.inc.php config.inc.php
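If you just need a value to paste in, one quick way (assuming openssl is available) is to generate a random string and drop it between the quotes:
openssl rand -base64 32
# $cfg['blowfish_secret'] = '<paste the output here>';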
{ "source": [ "https://serverfault.com/questions/615550", "https://serverfault.com", "https://serverfault.com/users/161164/" ] }
616,407
I'm trying to add a second TXT record to a domain, but I get the following error: Tried to create resource record set type='TXT but it already exists Can I add two records at the same domain?
You would enter all the TXT values at the same time... even the one that already exists. Example CLI: route53 --zone example.com -c --type TXT --name example.com --values "text1","text2","text3" Example WebUI: "txt=ABC123" "txt=CDE456" See here as well: https://superuser.com/questions/573305/unable-to-create-txt-record-using-amazon-route-53
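For reference, the same idea with the current aws CLI looks roughly like this (zone ID, name and values are placeholders); the important part is that every value sits in the one TXT ResourceRecordSet:
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "TXT",
      "TTL": 300,
      "ResourceRecords": [
        { "Value": "\"txt=ABC123\"" },
        { "Value": "\"txt=CDE456\"" }
      ]
    }
  }]
}'
Note that each TXT value has to be wrapped in its own escaped double quotes.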
{ "source": [ "https://serverfault.com/questions/616407", "https://serverfault.com", "https://serverfault.com/users/187214/" ] }
616,430
I've installed redis on a new CentOS 7 box but can't start it using systemctl. It was installed like this: rpm -i http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm yum install redis Attempting to start it like this seemed to silently fail (there was no output): systemctl start redis-server # also tried redis-server.service Here's what happens when trying to connect: redis-cli Could not connect to Redis at 127.0.0.1:6379: Connection refused not connected> But starting it manually works: [root@redis ~]# redis-server /etc/redis.conf [root@redis ~]# redis-cli 127.0.0.1:6379> Anyone know what's going wrong, or how to debug this? UPDATE: Output of /var/log/redis/redis.log is below. Btw it's a 512mb RAM VPS. [1972] 29 Jul 18:52:16.258 # You requested maxclients of 10000 requiring at least 10032 max file descriptors. [1972] 29 Jul 18:52:16.258 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted. [1972] 29 Jul 18:52:16.258 # Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'. _._ _.-``__ ''-._ _.-`` `. `_. ''-._ Redis 2.8.13 (00000000/0) 64 bit .-`` .-```. ```\/ _.,_ ''-._ ( ' , .-` | `, ) Running in stand alone mode |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379 | `-._ `._ / _.-' | PID: 1972 `-._ `-._ `-./ _.-' _.-' |`-._`-._ `-.__.-' _.-'_.-'| | `-._`-._ _.-'_.-' | http://redis.io `-._ `-._`-.__.-'_.-' _.-' |`-._`-._ `-.__.-' _.-'_.-'| | `-._`-._ _.-'_.-' | `-._ `-._`-.__.-'_.-' _.-' `-._ `-.__.-' _.-' `-._ _.-' `-.__.-' [1972] 29 Jul 18:52:16.259 # Server started, Redis version 2.8.13 [1972] 29 Jul 18:52:16.259 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. [1972] 29 Jul 18:52:16.260 * DB loaded from disk: 0.001 seconds [1972] 29 Jul 18:52:16.260 * The server is now ready to accept connections on port 6379 [1972] 29 Jul 18:52:16.265 # User requested shutdown... [1972] 29 Jul 18:52:16.265 * Saving the final RDB snapshot before exiting. [1972] 29 Jul 18:52:16.267 * DB saved on disk [1972] 29 Jul 18:52:16.267 * Removing the pid file. [1972] 29 Jul 18:52:16.267 # Redis is now ready to exit, bye bye... And status: [root@redis ~]# systemctl status redis-server redis-server.service - Redis persistent key-value database Loaded: loaded (/usr/lib/systemd/system/redis-server.service; disabled) Active: inactive (dead) Jul 29 18:52:16 redis systemd[1]: Starting Redis persistent key-value database... Jul 29 18:52:16 redis systemd[1]: Started Redis persistent key-value database.
Finally, fixed it. Systemd requires redis to run non-daemonised, so the config needed to change: # /etc/redis.conf daemonize yes # << comment this out
{ "source": [ "https://serverfault.com/questions/616430", "https://serverfault.com", "https://serverfault.com/users/98012/" ] }
616,435
In CentOS 6 I could type setup from the command line and I would be presented with a set of tools, one of them being Firewall configuration . I can still do this in CentOS 7, except the list no longer includes Firewall configuration as an option. Does anyone know where I can find it now and why it has been moved? This is where I used to go to allow incoming traffic via HTTP and HTTPS . If there's a better way, I'd gladly take the advice. Thanks.
Since the release of RedHat/CentOS 7, the previous firewall system has been replaced with firewalld . At the time of writing there is no curses-like console interface similar to system-config-firewall. If you don't mind using a GUI you could use firewall-config instead. If you need something for the console you will have to use firewall-cmd instead. For more information and full documentation about firewalld : 4.5. Using Firewalls (or now (subscription required) How to configure firewalld in RHEL ? ) I hope this might help you!
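For the concrete case in the question (allowing incoming HTTP and HTTPS), the console commands would be along these lines, assuming the interface sits in the default public zone:
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload
firewall-cmd --zone=public --list-services   # verify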
{ "source": [ "https://serverfault.com/questions/616435", "https://serverfault.com", "https://serverfault.com/users/235156/" ] }
616,485
I have a Dell 1U Server with Intel(R) Xeon(R) CPU L5420 @ 2.50GHz, 8 cores running Ubuntu Server Kernel Version 3.13.0-32-generic on x86_64. It has dual 1000baseT networking cards. I have it set up to forward packets from eth0 to eth1. I have noticed that in my kern.log file it keeps hanging then resting. This is happening often. This happens every few second then maybe it will be ok for a few minutes then back to every few seconds. Here is the log file dump: [118943.768245] e1000e 0000:00:19.0 eth0: Detected Hardware Unit Hang: [118943.768245] TDH <45> [118943.768245] TDT <50> [118943.768245] next_to_use <50> [118943.768245] next_to_clean <43> [118943.768245] buffer_info[next_to_clean]: [118943.768245] time_stamp <101c48d04> [118943.768245] next_to_watch <45> [118943.768245] jiffies <101c4970f> [118943.768245] next_to_watch.status <0> [118943.768245] MAC Status <80283> [118943.768245] PHY Status <792d> [118943.768245] PHY 1000BASE-T Status <7800> [118943.768245] PHY Extended Status <3000> [118943.768245] PCI Status <10> [118944.780015] e1000e 0000:00:19.0 eth0: Reset adapter unexpectedly Here is the info from ethtool: Settings: Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: off (auto) Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: yes Driver info: ethtool -i eth0 driver: e1000e version: 2.3.2-k firmware-version: 1.4-0 bus-info: 0000:00:19.0 supports-statistics: yes supports-test: yes supports-eeprom-access: yes supports-register-dump: yes supports-priv-flags: no What could be causing this? Is this just a bug in the software or a actual hardware issue? I have seen many other having similar issues but no real solution and this also leads me to believe that its a software issue? Maybe someone can shed some light on this for me?
OK, so after posting this question last night I continued to do some research, and the only real solution I came across seems to have taken care of the problem: disabling TSO, GSO and GRO using ethtool: ethtool -K eth0 gso off gro off tso off According to a post found here: http://ehc.ac/p/e1000/bugs/378/ From what I understand this can cause a reduction in performance. I also noticed another suggested solution was to disable Active-State Power Management with pcie_aspm=off according to this post on Server Fault: Linux e1000e (Intel networking driver) problems galore, where do I start? I haven’t tried this solution yet. I will try it and see if that makes a difference and post back my findings. EDIT: OK, so I have tried turning off Active-State Power Management with pcie_aspm=off and this didn't have any effect; I continued to notice errors in my log file. It may still work for some, as some of the Intel NICs have issues on certain kernels with falling asleep when power management is enabled.
{ "source": [ "https://serverfault.com/questions/616485", "https://serverfault.com", "https://serverfault.com/users/132175/" ] }
616,698
I've written various pieces of code that connect to LDAP servers and run queries, but it's always been voodoo to me. One thing I don't really understand is the concept of a bind DN. Here's an example using the ldapsearch command-line tool available from openldap. (Ignore the lack of authentication.) ldapsearch -h 1.2.3.4 -D dc=example,dc=com [query] What is the purpose and function of the -D dc=example,dc=com part of this? Why do we need to bind to a particular location in the directory hierarchy? Is it to establish which part of the directory my queries should apply to? E.g. if the root node of the directory is dc=com , and it has two children ( dc=foo and dc=bar ), maybe I want my queries to be against the dc=foo,dc=com subtree and not the dc=bar,dc=com subtree?
A bind DN is an object that you bind to inside LDAP to give you permissions to do whatever you're trying to do. Some (many?) LDAP instances don't allow anonymous binds, or don't allow certain operations to be conducted with anonymous binds, so you must specify a bindDN to obtain an identity to perform that operation. In a similar non-technical way - and yes this is a stretch - a bank will allow you to walk in and look at their interest rates without giving them any sort of ID, but in order to open an account or withdraw money, you have to have an identity they know about - that identity is the bindDN.
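As a hedged illustration (the DN, host and base here are invented), binding as a specific identity instead of anonymously looks like:
ldapsearch -H ldap://1.2.3.4 \
  -D "cn=readonly,ou=service,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(uid=jsmith)"
-W prompts for the password of that bind DN, and the search then runs with whatever rights the directory grants that identity.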
{ "source": [ "https://serverfault.com/questions/616698", "https://serverfault.com", "https://serverfault.com/users/10376/" ] }
617,081
I'm trying to modify /etc/ssh/sshd_config on my dedicated debian7 server with both AllowUsers and AllowGroups . However I can't seem get both to work together. The Setup There's a user called testuser . That user is in a group called ssh-users : $ groups testuser testuser : testuser ssh-users testuser is trying to connect via ssh testuser@<server_ip> and entering their password. My sshd_config can be found here: http://pastebin.com/iZvVDFKL - I think basically the only changes I made from default was: to set PermitRootLogin no and add two users with AllowUsers (actual usernames differ on my server) service ssh restart is run each time after modifying sshd_config . The Problem testuser can connect when set with AllowUsers : AllowUsers user1 user2 testuser testuser can NOT connect when setting AllowGroups for its group: AllowUsers user1 user2 AllowGroups ssh-users which results in Permission denied, please try again. when testuser enters their password in the ssh password prompt. The Question Does AllowUsers override AllowGroups ? What's the best way to fix this without manually adding the username to AllowUsers ? Ideally I'd like to be able to just add users to the ssh-users group in the future without having to touch sshd_config again.
Yes, AllowUsers takes precedent over AllowGroups . If specified, only the users that match the pattern specified in AllowUsers may connect to the SSHD instance. According to sshd_config manpage : The allow/deny directives are processed in the following order: DenyUsers , AllowUsers , DenyGroups , and finally AllowGroups . So, the solution to your problem is probably to use one or the other, possibly the group access directives if groups are your preferred way to manage users.
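So a sketch of a group-only policy for the setup in the question would be to drop the AllowUsers line entirely and keep just the group directive:
# /etc/ssh/sshd_config
PermitRootLogin no
AllowGroups ssh-users
# then add people to the group and restart sshd:
usermod -a -G ssh-users user1
usermod -a -G ssh-users user2
service ssh restart
From then on, membership of ssh-users alone decides who may log in, which is the "just add users to the group" workflow you were after.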
{ "source": [ "https://serverfault.com/questions/617081", "https://serverfault.com", "https://serverfault.com/users/229971/" ] }
617,248
This functionality is required for properly directing a root domain to Heroku: https://devcenter.heroku.com/articles/custom-domains#cname-functionality-at-the-apex Some registrars, like DNSimple, support it. Is it supported by the new Google Domains?
No. The full list of records supported by Google Domains can be found at: https://support.google.com/domains/answer/3290350 There is no 'ALIAS' or 'ANAME' or any other similar pseudo-CNAME supported. Please note that the type of record mentioned by the Heroku documentation is not an actual CNAME, but rather an A record that is auto-updated to match some arbitrary external A record. Amazon Route 53, as well as several other DNS providers offer this, and call it various things - some call it ALIAS or ANAME etc - but it is not an actual RR type. Google domains does support a thing called "synthetic records", however AFAIK it would not help you with Heroku. https://support.google.com/domains/answer/6069273
{ "source": [ "https://serverfault.com/questions/617248", "https://serverfault.com", "https://serverfault.com/users/95000/" ] }
617,398
What I mean by the question is: is there a way to dump the ordered list (like pstree does for processes) to see how systemd executed the supplied set of units, i.e. the tree after the dependencies were resolved and jobs were queued for execution? I know that you can do it by analysing systemd state data, but is there a quick way to see such a tree? It would help a lot in failure investigation (e.g. if you see that the boot process was stuck on some unit, you would be able to pinpoint the approximate location for your deeper investigation).
systemd-analyze is your friend. For example, systemd-analyze critical-chain outputs the blocking tree of daemons. Mine, for example: graphical.target @20.211s └─multi-user.target @20.211s └─nginx.service @19.348s +862ms └─network.target @19.347s └─NetworkManager.service @10.315s +9.031s └─basic.target @10.312s └─timers.target @10.311s └─systemd-tmpfiles-clean.timer @10.311s └─sysinit.target @10.295s └─systemd-update-utmp.service @10.167s +127ms └─systemd-tmpfiles-setup.service @10.124s +41ms └─local-fs.target @10.097s └─home-entd-Downloads.mount @10.093s +2ms └─home.mount @9.326s +672ms └─[email protected] @8.472s +696ms └─dev-sda6.device @8.471s NetworkManager in this example is basically holding up the entire bootup. If you want a more detailed view you can render the entire execution chain into an SVG file: systemd-analyze plot > something.svg outputs the entire chain (120+ units) as progress bars in a high-resolution SVG file, which shows which units were blocked and other problems. Finally you have systemd-analyze dot , which outputs a dot file describing the entire hierarchy: systemd-analyze dot | dot -Tpng -o stuff.png With the dot tool you can output it as PS and SVG files too. All of the above tools are built into the systemd-analyze tool, which comes by default with systemd, in Arch Linux at least. I think there are some 3rd-party projects dealing with it too.
{ "source": [ "https://serverfault.com/questions/617398", "https://serverfault.com", "https://serverfault.com/users/209544/" ] }
617,548
I'm using Ansible to provision my development server. I want it to always start some services for me. I have handlers for this purpose but what is the best way to trigger handler execution without condition, e.g. make it always work? Something like this: tasks: - name: Trigger handler run_handler: name=nginx-restart
If you absolutely need to trigger a handler every time then here are two options: 1) run a noop shell command which will always report as changed - name: trigger nginx-restart command: /bin/true notify: nginx-restart 2) use debug along with changed_when: to trigger a handler - debug: msg="trigger nginx-restart" notify: nginx-restart changed_when: true Also of note for Option 1 and Check Mode: You may want to use check_mode: no if using Ansible version 2.2 or higher or always_run: yes if using earlier versions than that so that the task does not get skipped over in check mode. From my manual testing it looks like the handlers remain in check mode, but please be careful as your case may differ.
{ "source": [ "https://serverfault.com/questions/617548", "https://serverfault.com", "https://serverfault.com/users/227777/" ] }
617,610
On the server node, it is possible to access an exported folder. However, after reboots (both server and client), the folder is no longer accessible from the clients. On server # ls /data Folder1 Forlder2 and the /etc/exports file contains /data 192.168.1.0/24(rw,no_subtree_check,async,no_root_squash) On client # ls /data ls: cannot access /data: Stale NFS file handle I have to say that there was no problem with the shared folder from the client side; however, after reboots (server and client), I see this message. Any way to fix that?
The order of reboots is important. Rebooting the server after the clients can result in this situation. The stale NFS handle indicates that the client has a file open, but the server no longer recognizes the file handle. In some cases, NFS will cleanup its data structures after a timeout. In other cases, you will need to clean the NFS data structures yourself and restart NFS afterwards. Where these structures are located are somewhat O/S dependent. Try restarting NFS first on the server and then on the clients. This may clear the file handles. Rebooting NFS servers with files opened from other servers is not recommended. This is especially problematic if the open file has been deleted on the server. The server may keep the file open until it is rebooted, but the reboot will remove the in-memory file handle on the server side. Then the client will no longer be able to open the file. Determining which mounts have been used from the server is difficult and unreliable. The showmount -a option may show some active mounts, but may not report all of them. Locked files are easier to identify, but require the locking to be enabled and relies on the client software to lock the files. You can use lsof on the clients to identify the processes which have files open on the mounts. I use the hard and intr mount options on my NFS mounts. The hard option causes IO to be retried indefinitely. The intr option allows processes to be killed if they are waiting on NFS IO to complete.
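On a client that is already showing the stale handle, a rough recovery sequence (the export path is the one from the question, "server" is a placeholder hostname) is:
lsof /data                      # may hang on a stale mount, so be ready to skip it
umount -f /data                 # force the unmount; fall back to umount -l /data if it refuses
mount -t nfs server:/data /data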
{ "source": [ "https://serverfault.com/questions/617610", "https://serverfault.com", "https://serverfault.com/users/158757/" ] }
617,616
I want to use ssl with nginx. I create the necessary certificates: [root@arch ssl]# pwd /etc/nginx/ssl [root@arch ssl]# ls -l total 12 -rwx------ 1 root root 1346 Aug 3 14:36 server.crt -rwx------ 1 root root 1115 Aug 3 14:36 server.csr -rwx------ 1 root root 1743 Aug 3 14:35 server.key But nginx fails to load these files. It says it can't find them: systemctl -l status nginx nginx.service - A high performance web server and a reverse proxy server Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled) Active: failed (Result: exit-code) since Sun 2014-08-03 14:50:04 EDT; 21min ago Process: 21391 ExecStart=/usr/bin/nginx -g pid /run/nginx.pid; error_log stderr; (code=exited, status=1/FAILURE) Main PID: 16458 (code=exited, status=0/SUCCESS) Aug 03 14:50:04 arch nginx[21391]: 2014/08/03 14:50:04 [emerg] 21391#0: BIO_new_file("/etc/gninx/ssl/server.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/gninx/ssl/server.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file) Aug 03 14:50:04 arch systemd[1]: nginx.service: control process exited, code=exited status=1 Aug 03 14:50:04 arch systemd[1]: Failed to start A high performance web server and a reverse proxy server. Aug 03 14:50:04 arch systemd[1]: Unit nginx.service entered failed state. This is the config that I have: server { server_name localhost; listen 443; ssi on; ssl on; ssl_certificate /etc/gninx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; client_max_body_size 4G; location = / { ... } } Can anyone tell me please what I;m missing? Thanks in advance for your kind help and time. Jenia.
{ "source": [ "https://serverfault.com/questions/617616", "https://serverfault.com", "https://serverfault.com/users/235902/" ] }
617,648
I am trying to test a project that needs compressed storage with use of the ext4 file system since the application I use relies on ext4 features. Are there any production/stable solutions out there for transparent compression on ext4? What I have tried: Ext4 over ZFS volume with compression enabled. This actually had an adverse affect. I tried creating a ZFS volume with lz4 compression enabled and making an ext4 filesystem on /dev/zvol/... but the zfs volume showed double the actual usage and the compression did not seem to have any effect. # du -hs /mnt/test **1.1T** /mnt/test # zfs list NAME USED AVAIL REFER MOUNTPOINT pool 15.2T 2.70G 290K /pool pool/test 15.2T 13.1T **2.14T** - ZFS Creation Commands zpool create pool raidz2 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde2 /dev/sdf1 /dev/sdg1 /dev/sdh2 /dev/sdi1 zfs set recordsize=128k pool zfs create -p -V15100GB pool/test zfs set compression=lz4 pool/test mkfs.ext4 -m1 -O 64bit,has_journal,extents,huge_file,flex_bg,uninit_bg,dir_nlink /dev/zvol/pool/test Fusecompress: Seemed to work but not 100% stable. Looking for alternatives. LessFS: Is it possible to use Lessfs in conjunction with ext4? I have not yet tried but would be interested in user insight. One major problem: not true transparency An issue I saw with fusecompress was quotas. For example, if I enabled compression on the filesystem, I would want my system to benefit from the compression, not necessarily the end user. If I enabled a quota of 1GB for a user, with a compression ratio of 1.5, they would be able to upload 1.5GB of data, rather than 1GB of data and the system benefiting from the compression. This also appeared to show on df -h. Is there a solution to have compression transparent to quotas?
I use ZFS on Linux as a volume manager and a means to provide additional protections and functionality to traditional filesystems. This includes bringing block-level snapshots, replication, deduplication, compression and advanced caching to the XFS or ext4 filesystems. See: https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ for another explanation. In my most common use case, I leverage the ZFS zvol feature to create a sparse volume on an existing zpool. That zvol's properties can be set just like a normal ZFS filesystem's. At this juncture, you can set properties like compression type, volume size, caching method, etc. Creating this zvol presents a block device to Linux that can be formatted with the filesystem of your choice. Use fdisk or parted to create your partition and mkfs the finished volume. Mount this and you essentially have a filesystem backed by a zvol and with all of its properties. Here's my workflow... Create a zpool comprised of four disks: You'll want the ashift=12 directive for the type of disks you're using. The zpool name is "vol0" in this case. zpool create -o ashift=12 -f vol0 mirror scsi-AccOW140403AS1322043 scsi-AccOW140403AS1322042 mirror scsi-AccOW140403AS1322013 scsi-AccOW140403AS1322044 Set initial zpool settings: I set autoexpand=on at the zpool level in case I ever replace the disks with larger drives or expand the pool in a ZFS mirrors setup. I typically don't use ZFS raidz1/2/3 because of poor performance and the inability to expand the zpool. zpool set autoexpand=on vol0 Set initial zfs filesystem properties: Please use the lz4 compression algorithm for new ZFS installations. It's okay to leave it on all the time. zfs set compression=lz4 vol0 zfs set atime=off vol0 Create ZFS zvol: For ZFS on Linux, it's very important that you use a large block size. -o volblocksize=128k is absolutely essential here. The -s option creates a sparse zvol and doesn't consume pool space until it's needed. You can overcommit here, if you know your data well. In this case, I have about 444GB of usable disk space in the pool, but I'm presenting an 800GB volume to XFS. zfs create -o volblocksize=128K -s -V 800G vol0/pprovol Partition zvol device: ( should be /dev/zd0 for the first zvol; /dev/zd16, /dev/zd32, etc. for subsequent zvols ) fdisk /dev/zd0 # (create new aligned partition with the "c" and "u" parameters) Create and mount the filesystem: mkfs.xfs or ext4 on the newly created partition, /dev/zd0p1. mkfs.xfs -f -l size=256m,version=2 -s size=4096 /dev/zd0p1 Grab the UUID with blkid and modify /etc/fstab . UUID=455cae52-89e0-4fb3-a896-8f597a1ea402 /ppro xfs noatime,logbufs=8,logbsize=256k 1 2 Mount the new filesystem. mount /ppro/ Results... [root@Testa ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sde2 20G 8.9G 9.9G 48% / tmpfs 32G 0 32G 0% /dev/shm /dev/sde1 485M 63M 397M 14% /boot /dev/sde7 2.0G 68M 1.9G 4% /tmp /dev/sde3 12G 2.6G 8.7G 24% /usr /dev/sde6 6.0G 907M 4.8G 16% /var /dev/zd0p1 800G 398G 403G 50% /ppro <-- Compressed ZFS-backed XFS filesystem. vol0 110G 256K 110G 1% /vol0 ZFS filesystem listing. [root@Testa ~]# zfs list NAME USED AVAIL REFER MOUNTPOINT vol0 328G 109G 272K /vol0 vol0/pprovol 326G 109G 186G - <-- The actual zvol providing the backing for XFS. vol1 183G 817G 136K /vol1 vol1/images 183G 817G 183G /images ZFS zpool list. 
[root@Testa ~]# zpool list -v NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT vol0 444G 328G 116G 73% 1.00x ONLINE - mirror 222G 164G 58.1G - scsi-AccOW140403AS1322043 - - - - scsi-AccOW140403AS1322042 - - - - mirror 222G 164G 58.1G - scsi-AccOW140403AS1322013 - - - - scsi-AccOW140403AS1322044 - - - - ZFS zvol properties ( take note of referenced , compressratio and volsize ). [root@Testa ~]# zfs get all vol0/pprovol NAME PROPERTY VALUE SOURCE vol0/pprovol type volume - vol0/pprovol creation Sun May 11 15:27 2014 - vol0/pprovol used 326G - vol0/pprovol available 109G - vol0/pprovol referenced 186G - vol0/pprovol compressratio 2.99x - vol0/pprovol reservation none default vol0/pprovol volsize 800G local vol0/pprovol volblocksize 128K - vol0/pprovol checksum on default vol0/pprovol compression lz4 inherited from vol0 vol0/pprovol readonly off default vol0/pprovol copies 1 default vol0/pprovol refreservation none default vol0/pprovol primarycache all default vol0/pprovol secondarycache all default vol0/pprovol usedbysnapshots 140G - vol0/pprovol usedbydataset 186G - vol0/pprovol usedbychildren 0 - vol0/pprovol usedbyrefreservation 0 - vol0/pprovol logbias latency default vol0/pprovol dedup off default vol0/pprovol mlslabel none default vol0/pprovol sync standard default vol0/pprovol refcompressratio 3.32x - vol0/pprovol written 210M - vol0/pprovol snapdev hidden default
{ "source": [ "https://serverfault.com/questions/617648", "https://serverfault.com", "https://serverfault.com/users/235918/" ] }
617,823
During CentOS 7 system boot nginx start fails with the following error: 2014/08/04 17:27:34 [emerg] 790#0: bind() to a.b.c.d:443 failed (99: Cannot assign requested address) I suspect this is happening due to the network interfaces not being up yet before attempting to bind to that IP address for serving a vhost over SSL. My guess is I need to specify the network.service as a requirement for the nginx.service, but I can't find the network service in /etc/systemd/ at all. How can I configure the service order or dependencies in systemd?
You need, at minimum, After=network.target in the [Unit] section of your unit file, to ensure that the network is up before starting nginx. I have no idea why your unit file doesn't have it. Here is a complete example from my handy Fedora system, as shipped by Fedora: [Unit] Description=The nginx HTTP and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/run/nginx.pid ExecStartPre=/usr/sbin/nginx -t ExecStart=/usr/sbin/nginx ExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s QUIT $MAINPID PrivateTmp=true [Install] WantedBy=multi-user.target
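Rather than editing the packaged unit, the usual approach is a drop-in override; and because binding to a specific address really needs the address to be configured, not just network.target reached, waiting for network-online.target is arguably the safer sketch:
mkdir -p /etc/systemd/system/nginx.service.d
cat > /etc/systemd/system/nginx.service.d/override.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
systemctl restart nginx
If the address is assigned late (e.g. a floating IP), setting the sysctl net.ipv4.ip_nonlocal_bind=1 is another common workaround so nginx can bind before the address exists.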
{ "source": [ "https://serverfault.com/questions/617823", "https://serverfault.com", "https://serverfault.com/users/99559/" ] }
618,102
This is a proposed Canonical Question about Server Memory. I have to buy a Dell R420 server and there are various combinations (1600 and 1333 MHz RDIMMS and UDIMMS) and Performance Optimized vs. Advanced ECC with and without sparing. I noticed that there are only 4gb DIMMS with UDIMM, so I will have utimately to go to 16GB RDIMMS. What are these options and what do I need to know about them?
RAM for servers comes with a few common metrics to specify it's capacity and ability to work in a particular configuration. To help confuse this there are different names for what is essentially the same thing, and the "standard" name changes depending on which type of RAM you're using. Capacity (1GB, 4GB, 32GB, etc) This is easy enough; everyone should already be familiar with the concept that RAM comes in different capacities. The particular type of RAM determines what the maximum size of a single stick can be, but that's irrelevant because actual implementations limit the amount of RAM a system can support (ie, check the documentation for your system to see what capacity it supports). RAM's capacity can be organized in different configurations. Usually there's just one standard configuration for RAM of a certain size. If you're buying ultra-cheap RAM off the Internet be warned that it may be non-standard (especially if they mention the organization) and not supported by your server. Speed (1600MHz, etc) For the purposes of this Answer, you want the speed of the RAM to match the maximum speed of the system. RAM that is one or sometimes two "speeds" faster will work as well, though at the lesser speed. Similarly RAM that is one or two "speeds" slower will work, also at the lesser speed. Integrity Protection (ECC or Non-ECC) ECC is the most common form of integrity protection (ie, making sure cosmic rays didn't flip any bits and none of the memory locations are going bad). In most systems the RAM must either be ECC or non-ECC, whatever the system requires. Occasionally this is called 72-bit memory (a misnomer leftover from 64 memory data channels getting 8 bits of ECC along side the data bus). When RAM has ECC, that protection information can be checked at a variety of times. The most basic protection reads and checks the ECC data only when the RAM at that memory location is read. More advanced options allow the system to check ECC regularly. Most frequently I've seen this called "memory scrubbing"; it works much like disk array scrubbing; and like disk array scrubbing you should have it enabled unless there's a good reason to disable it. ECC is one of the steps reducing the impact of Row Hammer bug . Bus Electrical Capacity (Unbuffered or Registered) We're not electrical engineers, so all you really need to know is that Buffered or Registered RAM allows more RAM in a system than without. Like ECC this is something that must be supported by the system. Unlike ECC many new servers support both Unbuffered/Unregistered and Buffered/Registered RAM. Older servers tended to support only one or the other. Registers are a type of buffer, but the terms are used interchangeably when applied to RAM. I have never see a system that can mix Unbuffered and Registered at the same time. When you see UDIMM, the "U" is for "Unbuffered". The "R" in RDIMM is "Registered". Ranks Registered RAM has well defined electrical "usage" characteristics metered in "ranks". Each RAM channel (or bus) in a system can support so many ranks at each speed it supports. Typically systems are rated at two speeds (ie, the channel runs at X speed normally with up to A ranks; but Y speed if over that; and only up to B ranks are possible). There is RAM available with the same capacity and speed, but taking up different numbers of ranks. Typically the more capacity the more ranks a module takes up. Low voltage modules take up less ranks (per the module's specifications). 
Foot Notes There are a variety of configuration options unrelated to what physical RAM you need to buy for your server. These include mirroring the RAM (just like RAID1, but for RAM), sparing (literally spare RAM that if one goes bad the spare replaces it), timing and related optimizations. Modern servers typically have the memory controller(s) integrated into the CPU instead of a separate North Bridge chip. This means systems that support multiple CPUs must have the CPU socket populated that corresponds to a memory slot in order to use that slot. Similarly some CPUs required there to be memory populated in their slots for the system to work. See the system's documentation for details. Modern servers typically have more than one memory channel. These channels operate mostly independently, which will allow greater memory bandwidth in memory-intensive usage scenarios. Generally you should plan on distributing memory across all channels on all populated CPUs as evenly as is realistic to ensure the best performance.
{ "source": [ "https://serverfault.com/questions/618102", "https://serverfault.com", "https://serverfault.com/users/137365/" ] }
618,700
I often read that using multiple PTR records in a DNS configuration is not recommended. However, the reasons are often vague, or not so obvious, naming: "it can cause problems", "can trigger bugs in programs expecting a single answer": it's the software's problem then, isn't it?! "can make DNS answer packet too large": isn't this fixed with EDNS ? Are these good reasons? Do you know of any other (good) reasons? All this kinda looks like a "legacy fear"...
The PTR record for a reverse name (eg 7.2.0.192.in-addr.arpa ) is expected to identify the canonical name that is associated with that IP address. Both the gateway pointers at network nodes and the normal host pointers at full address nodes use the PTR RR to point back to the primary domain names of the corresponding hosts. From: https://www.rfc-editor.org/rfc/rfc1035#section-3.5 This expectation is reflected in software that does reverse lookups; often such software specifically expects a single name back and it expects to be able to use that name as a canonical name for that host. If there are multiple names returned it's common to just take one at random because they have absolutely no way of knowing which one you would have preferred for this particular occasion. As the general expectation is that there is one canonical name associated with an IP address and that name is what the PTR should point to, adding multiple names generally has no upside (nothing expects any random A / AAAA record to have a matching PTR ) but it has a potential downside as it can cause strange results as you have no control over which of your PTR records will be used if you have added more than one. In essence, if you have multiple PTR records you do not actually make your host appear more legitimate but rather the opposite, you run the risk of failing some validation or otherwise breaking something. As a perhaps somewhat extreme metaphor, handing over five passports all with your photo but with different names at the airport is probably not going to be received as well as if you just hand over one.
{ "source": [ "https://serverfault.com/questions/618700", "https://serverfault.com", "https://serverfault.com/users/158888/" ] }
618,735
I am using rsync in a bash script to keep files in sync between a few servers and a NAS. One issue I have run into is trying to generate a list of the files that have changed during the rsync. The idea is that when I run rsync, I can output the files that have changed into a text file - more hoping for an array in memory - then before the script exits I can run a chown on only the changed files. Has anyone found a way to perform such a task? # specify the source directory source_directory=/Users/jason/Desktop/source # specify the destination directory # DO NOT ADD THE SAME DIRECTORY NAME AS RSYNC WILL CREATE IT FOR YOU destination_directory=/Users/jason/Desktop/destination # run the rsync command rsync -avz $source_directory $destination_directory # grab the changed items and save to an array or temp file? # loop through and chown each changed file for changed_item in "${changed_items[@]}" do # chown the file owner and notify the user chown -R user:usergroup; echo '!! changed the user and group for:' $changed_item done
You can use rsync's --itemize-changes ( -i ) option to generate a parsable output that looks like this: ~ $ rsync src/ dest/ -ai .d..t.... ./ >f+++++++ newfile >f..t.... oldfile ~ $ echo 'new stuff' > src/newfile ~ $ !rsync rsync src/ dest/ -ai >f.st.... newfile The > character in the first position indicates a file was updated, the remaining characters indicate why, for example here s and t indicate that the file size and timestamp changed. A quick and dirty way to get the file list might be: rsync -ai src/ dest/ | egrep '^>' Obviously more advanced parsing could produce cleaner output :-) I came across this great link while trying to find out when --itemize-changes was introduced, very useful: http://andreafrancia.it/2010/03/understanding-the-output-of-rsync-itemize-changes.html (archived link)
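Tying this back to the original script, a rough sketch (not the definitive answer: it assumes trailing slashes on both paths, GNU awk, and filenames without embedded newlines, and user:usergroup is a placeholder) could look like this:
rsync -aiz "$source_directory/" "$destination_directory/" | awk '/^>f/ { sub(/^[^ ]+ +/, ""); print }' | while IFS= read -r changed_item; do
    chown user:usergroup "$destination_directory/$changed_item"
    echo "!! changed the user and group for: $changed_item"
done
The awk filter keeps only updated files (lines starting with >f) and strips the change flags, leaving the path relative to the destination directory.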
{ "source": [ "https://serverfault.com/questions/618735", "https://serverfault.com", "https://serverfault.com/users/228517/" ] }
618,857
I need to know how to list the IDs of all route tables. For example, I can run: ip rule add fwmark 2 table 104 ip route add dev eth0 default via 192.168.3.7 table 104 A call to ip rule list shows: 0: from all lookup local 32765: from all fwmark 0x2 lookup 104 32766: from all lookup main 32767: from all lookup default And a call to ip route show table 104 shows: default via 192.168.3.7 dev eth0 If I then call ip rule del table 104 , a subsequent call to ip rule list shows: 0: from all lookup local 32766: from all lookup main 32767: from all lookup default However, a call to ip route show table 104 still shows: default via 192.168.3.7 dev eth0 I know that I can flush the table using ip route flush table 104 . I'd like to be able to flush all tables that are not local , main , and default . Thus I want to be able to list the existing tables. I've seen people use cat /etc/iproute2/rt_tables , but that only produces: # # reserved values # 255 local 254 main 253 default 0 unspec # # local # #1 inr.ruhep What can I do to get all the table names that currently exist? Thanks in advance!
There exists a way to list all routing entries of all tables. ip route show table all Using some shell piping magic, you can extract all table names and IDs like this: ip route show table all | grep "table" | sed 's/.*\(table.*\)/\1/g' | awk '{print $2}' | sort | uniq or ip route show table all | grep -Po 'table \K[^\s]+' | sort -u If you only care about the numeric table names, add some grep filtering: ip route show table all | grep "table" | sed 's/.*\(table.*\)/\1/g' | awk '{print $2}' | sort | uniq | grep -e "[0-9]" or ip route show table all | grep -Po 'table \K[^\s]+' | sort -u | grep -e "[0-9]"
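Building on that, a rough sketch for the "flush everything except local, main and default" part of the question might look like this; because the reserved tables show up by name rather than by number, restricting the loop to numeric table IDs skips them automatically:
for table in $(ip route show table all | grep -Po 'table \K[^\s]+' | sort -u | grep -E '^[0-9]+$'); do
    ip route flush table "$table"
done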
{ "source": [ "https://serverfault.com/questions/618857", "https://serverfault.com", "https://serverfault.com/users/236660/" ] }
618,994
I've set the following environment so that no question/dialog is asked during apt-get install: ENV DEBIAN_FRONTEND noninteractive # export DEBIAN_FRONTEND="noninteractive" Which is equivalent to: export DEBIAN_FRONTEND="noninteractive" Yet, when building an image from a Dockerfile, at the end of one specific Debian/Ubuntu package install (using apt-get install), package configuration debconf says: debconf: unable to initialize frontend: Noninteractive # export DEBIAN_FRONTEND="noninteractive" debconf: (Bareword "Debconf::FrontEnd::Noninteractive" not allowed while "strict subs" in use at (eval 35) line 3, <> line 1.) debconf: falling back to frontend: Noninteractive Subroutine BEGIN redefined at (eval 36) line 2, <> line 1.
Setting DEBIAN_FRONTEND to noninteractive via ENV should be actively discouraged. The reason is that the environment variable persists after the build, e.g. when you run docker exec -it ... bash . The setting would not make sense there. There are two other possible ways: Set it via ARG , as this is only available during the build: ARG DEBIAN_FRONTEND=noninteractive RUN apt-get -qq install {your-package} Set it on-the-fly when required: RUN apt-get update && \ DEBIAN_FRONTEND=noninteractive apt-get -qq install {your-package}
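If you want to confirm that the ARG approach really does not leak into the final image, a quick check along these lines should do (the image name is a placeholder):
docker build -t myimage .
docker run --rm myimage env | grep DEBIAN_FRONTEND || echo "not set in the image"
With ARG you should see "not set in the image"; with ENV the variable shows up in every container started from that image.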
{ "source": [ "https://serverfault.com/questions/618994", "https://serverfault.com", "https://serverfault.com/users/236742/" ] }
619,542
I am running the following command every 5 minutes in my crontab to keep Phusion Passenger alive. */5 * * * * wget mysite.com > /dev/null 2>&1 When I run this it performs a wget on the site URL and routes STDOUT/STDERR to /dev/null. When I run this from a command line it works fine and doesn't produce an index.html file in my home directory. When it runs from cron, it creates a new index.html file every five minutes, leaving me with a ton of index files which I don't want. Is my syntax incorrect for running the cron job? From a command line it works without a problem but from cron it generates an index.html file in my home directory. I'm sure I'm making a simple mistake, would appreciate it if anyone could help out.
You could do it like this: */5 * * * * wget -O /dev/null -o /dev/null example.com Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all.
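If you prefer curl, an equivalent cron line that never writes an output file would be (the hostname is a placeholder):
*/5 * * * * curl -s -o /dev/null http://example.com > /dev/null 2>&1
Either way the request still hits the application, which is all that is needed to keep Passenger warm.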
{ "source": [ "https://serverfault.com/questions/619542", "https://serverfault.com", "https://serverfault.com/users/174425/" ] }
619,554
I'm looking for some advice on setting-up multi-host I/O access on our SAN: I have a blade enclosure (PowerEdge1000e) containing an Equallogic PS-M4110 storage blade with a single RAID6 volume currently formatted as ext4. This is connected via iSCSI to one of other blades (all running ubuntu server 14.04) and mounted there as a standard drive. Now I am trying to connect another of the blades in the enclosure to the SAN in a way that allows multi-host I/O. Preferably trying to avoid the obvious solution of NFS because some of the slightly questionably coded tools we use have a habit of crashing and burning when doing high I/O to NFS. This is particularly problematic as these tools take weeks to run and don't have many opportunities to checkpoint (have you guessed this is an academic environment yet?). However, everything plays nicely with the current iSCSI set-up. So I was leaning towards a cluster-aware or distributed file-system + iSCSI as the best option but I'm worried of split-brain issues etc as we only have 1 node. 1) Is any of the above remotely sane? 2) Do you have any recommendations of which fs to use (FOSS and linux compatible preferable)?
You could do it like this: */5 * * * * wget -O /dev/null -o /dev/null example.com Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all.
{ "source": [ "https://serverfault.com/questions/619554", "https://serverfault.com", "https://serverfault.com/users/237077/" ] }
619,699
I am having problems serving static assets to Firefox using AWS Cloudfront. Chrome works perfect, but Firefox is returning a CORS error. If I execute curl , I get: HTTP/1.1 200 OK Content-Type: application/x-font-opentype Content-Length: 39420 Connection: keep-alive Date: Mon, 11 Aug 2014 21:53:50 GMT Cache-Control: public, max-age=31557600 Expires: Sun, 09 Aug 2015 01:28:02 GMT Last-Modified: Fri, 08 Aug 2014 19:28:05 GMT ETag: "9df744bdf9372cf4cff87bb3e2d68fc8" Accept-Ranges: bytes Server: AmazonS3 Age: 2743 X-Cache: Hit from cloudfront Via: 1.1 c445b20dfbf3128d810e975e5d84e2cd.cloudfront.net (CloudFront) X-Amz-Cf-Id: ... Which I think needs the header: Access-Control-Allow-Origin: * Can anyone help me? Why is it a problem on Firefox and not Chrome? How can I solve it?
First, you need to make sure that you whitelist the Origin header: If you want CloudFront to respect cross-origin resource sharing settings, configure CloudFront to forward the Origin header to your origin. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-custom-cors Also see: http://aws.amazon.com/blogs/aws/enhanced-cloudfront-customization/ By the way, there are several similar questions on serverfault/stackoverflow and a lot of answers.
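Assuming CORS is also enabled on the S3 origin and the Origin header is whitelisted in the CloudFront behavior, you can verify from the shell (the hostnames and path are placeholders):
curl -sI -H "Origin: https://www.example.com" https://dxxxxxxxx.cloudfront.net/fonts/myfont.otf | grep -i access-control-allow-origin
If the object was already cached without the header, you may need to invalidate it (or wait for the cache to expire) before the header shows up.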
{ "source": [ "https://serverfault.com/questions/619699", "https://serverfault.com", "https://serverfault.com/users/118108/" ] }
620,522
I have found out that McAfee SiteAdvisor has reported my website as "may be having security issues" . I care little about whatever McAfee thinks of my website (I can secure it myself and if not, McAfee definitely is not the company I'd be asking for help, thank you very much). What bothers me, though, is that they have, apparently, crawled my website without my permission. To clarify: There's almost no content on my website yet, just some placeholder and some files for my personal usage. There are no ToS. My questions are: Does McAfee have the right to download content from / crawl my website? Can I forbid them from doing so? I have a feeling there should be some kind of "My castle, my rules" principle, however I basically know nothing about all the legal stuff. Update: I probably should have mentioned my server provider sends me emails about SiteAdvisor's findings on a regular basis - that's how I found out about their 'rating' and that's why I'm annoyed.
Yes, they have the right to do so - you've created a public website, what makes you think they don't? You too, of course, have the right to stop them. You can ask them not to crawl your website with robots.txt or actively prevent them from accessing it with something like fail2ban . Alternatively, don't worry about it and continue on with your life. It's not hurting anything and is definitely on the benign side of Internet probing.
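For the robots.txt route, a minimal example that blocks one crawler would look like the following; the user-agent string McAfee's crawler sends is an assumption here, so check your access logs for the real value before relying on it:
User-agent: SiteAdvisor
Disallow: /
Keep in mind robots.txt is only honoured by well-behaved crawlers; anything else needs to be blocked at the firewall or web server level.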
{ "source": [ "https://serverfault.com/questions/620522", "https://serverfault.com", "https://serverfault.com/users/237703/" ] }
620,595
I always used the command: shutdown -r now However, sometimes that causes MySQL issues. What's the most graceful way to restart CentOS? I've seen: reboot and halt How can I gently reboot the machine?
Systems using systemd (CentOS >=7) have the reboot , shutdown and halt commands symlinked to systemctl , which handles the reboot. The systemctl program detects that it was invoked through the symlink and runs the corresponding systemctl command. For the difference between the commands see the manpage for systemctl ( man systemctl ), as it is quite nicely documented. For CentOS 6, there is no better way to restart your server than the commands stated in the original question: shutdown is the most common way to stop your system. Adding the argument -r and a specific time (or ' now ') will reboot your system instead of halting it after the shutdown sequence. reboot is a wrapper around shutdown which does some hard disk maintenance (syncing and/or putting disks in standby mode; not really relevant here). New versions of reboot (>2.74) will initiate shutdown if not in runlevel 0 or 6. Most init scripts call halt to write a record to utmp . Modern distributions have all of this covered regardless of the command you use. Basically, they all initiate the shutdown runlevel of your SysV (CentOS <7) or systemd (CentOS >=7) scripts (I will call them init scripts for ease of reading). Shutting down using init scripts stops, step by step, all the services registered for shutdown. Individual init scripts can have a timeout, like the MySQL init script in CentOS: when the stop argument is given and the daemon has not shut down within a fair amount of time, the script gives up and exits with a failure. The shutdown process continues as if nothing were wrong, only taking a bit longer and probably printing a warning. At the end, when all init scripts have been executed, the inevitable happens: all processes still running get a SIGTERM signal and, after a few seconds (2 or 5), a SIGKILL . This cleans up the rest before an ACPI call is made to really reboot or shut down your system. One exception is using the reboot command with the -f option, which skips executing init scripts and reboots the system directly. You will be better off fixing the root cause of your worries: MySQL not shutting down properly. Often this is due to the massive amount of work that needs to be done before the daemon can exit safely. I once had a MySQL instance with +300.000 tables that took over an hour to exit.
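If MySQL is the service that misbehaves, one way to take it out of the equation is to stop it explicitly, and let it take as long as it needs, before triggering the reboot; the service name (mysqld, mariadb or mysql) depends on your install:
systemctl stop mariadb && systemctl reboot      # CentOS 7 and later
service mysqld stop && shutdown -r now          # CentOS 6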
{ "source": [ "https://serverfault.com/questions/620595", "https://serverfault.com", "https://serverfault.com/users/211505/" ] }
622,432
I have migrated software to a very slow server. Some software services refuse to start up because of the system timeout. How do I increase the timeout from the default 30 sec.(?) to several minutes? Thank you in advance!
You can modify the timeout value in the registry . 1. Click Start, click Run, type regedit, and then click OK. 2. Locate and then click the following registry subkey: - HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control 3. In the right pane, locate the ServicesPipeTimeout entry. **Note**: If the ServicesPipeTimeout entry does not exist, you must create it. To do this, follow these steps: - On the Edit menu, point to New, and then click DWORD Value. - Type ServicesPipeTimeout, and then press ENTER. 4. Right-click ServicesPipeTimeout, and then click Modify. 5. Click Decimal, type 60000, and then click OK. - This value represents the time in milliseconds before a service times out. 6. Restart the computer.
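The same change can be scripted from an elevated command prompt, which is handy if you have to apply it to several servers (60000 ms is just an example value):
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f
A reboot is still required for the new timeout to take effect.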
{ "source": [ "https://serverfault.com/questions/622432", "https://serverfault.com", "https://serverfault.com/users/149691/" ] }
622,796
I've a problem w/ postfix problem: # tail -f /var/log/mail.err Aug 20 17:57:50 myserver postfix/smtpd[8243]: error: unsupported dictionary type: mysql Aug 20 17:57:50 myserver postfix/smtpd[8243]: error: unsupported dictionary type: mysql Aug 20 17:58:05 myserver postfix/smtpd[8244]: error: unsupported dictionary type: mysql Aug 20 17:58:05 myserver postfix/smtpd[8244]: error: unsupported dictionary type: mysql Aug 20 18:00:38 myserver postfix/smtpd[8277]: error: unsupported dictionary type: mysql Aug 20 18:00:38 myserver postfix/smtpd[8277]: error: unsupported dictionary type: mysql Aug 20 18:03:32 myserver postfix/smtpd[8320]: error: unsupported dictionary type: mysql Aug 20 18:03:32 myserver postfix/smtpd[8320]: error: unsupported dictionary type: mysql Aug 20 18:03:33 myserver postfix/trivial-rewrite[8322]: error: unsupported dictionary type: mysql Aug 20 18:03:33 myserver postfix/trivial-rewrite[8322]: error: unsupported dictionary type: mysql idea?
[SOLVED] This fixed the issue for me in Ubuntu 14.04: sudo apt-get install postfix-mysql
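You can confirm that the mysql map type is now compiled in, and then reload Postfix, with:
postconf -m | grep mysql
service postfix reload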
{ "source": [ "https://serverfault.com/questions/622796", "https://serverfault.com", "https://serverfault.com/users/238404/" ] }
623,211
Following a complete re-installation we got a problem with the configuration: the sender address was wrong and some recipients (mail servers) rejected them. So there is a bunch of mails stuck in the Postfix queue. Ideally, a change of the sender address directly in the queued mails, and then flushing the queue would be optimal. I tried this answer that addresses this very problem. But messages don't seem to be easily modifiable in the version I have (2.11.0). For instance there is no /var/spool/mqueue dir, but, instead, /var/spool/postfix/... active bounce corrupt defer deferred dev etc flush hold incoming lib maildrop pid private public saved trace usr and the dir of interest is deferred . I tried to modify a few files there changing the wrong domain with the correct one (and was careful to ensure only those were changed). But then, those mails were moved to corrupt , meaning that a simple text change doesn't seem to work (done with vi ). Any other cleaner way to change the sender in queued mails?
I tried this answer that addresses this very problem. But messages don't seem to be easily modifiable in the version I have (2.11.0). For instance there is no /var/spool/mqueue dir, but, instead, /var/spool/postfix/... I want to clarify two things. First, that answer applied to sendmail, NOT postfix. Second, direct manipulation of raw queue files was never supported at all. So, you have several options here. 1. smtp_generic_maps parameter This answer is inspired by this excellent answer . It will rewrite the old address to the new address automatically. You can define a file that maps the old address to the new address. /etc/postfix/main.cf: smtp_generic_maps = hash:/etc/postfix/generic /etc/postfix/generic: [email protected] [email protected] Don't forget to run postmap /etc/postfix/generic and postfix reload . Upside: You don't need to requeue the messages. Downside: Postfix will rewrite any sender and recipient address that matches [email protected] . 2. sender_canonical_maps To overcome the downside of the first option, you can use sender_canonical_maps . This solution is based on a suggestion from the Postfix author . As with the first option, you define a file that maps the old address to the new address. /etc/postfix/main.cf: sender_canonical_maps = hash:/etc/postfix/sender_canonical /etc/postfix/sender_canonical: [email protected] [email protected] Run postmap /etc/postfix/sender_canonical then run postfix reload . Due to the flow of the postfix queue, you must re-queue the affected messages with the command postsuper -r queueid . Upside: Postfix does not rewrite the recipient address. Downside: You must requeue all affected messages. But you can requeue all deferred mail with the single command postsuper -r ALL deferred 3. Direct manipulation of the postfix queue This is the old manual way to modify the queue for advanced processing. This answer came from the postfix-users mailing lists. In short: Extract the queued message # postsuper -h queueid # postcat -qbh queueid > tempfile.eml # vi tempfile.eml Resubmit the message and delete the old queue entry # sendmail -f $sender $recipient < tempfile.eml # postsuper -d queueid For documentation of the above commands, refer to this page Note: The original solution from the postfix-users mailing lists used postcat -q queueid > tempfile to extract the queued message. That command extracts the header, body and meta-information of the queue entry. As Azendale pointed out below, sendmail will refuse to send this malformed email because of the meta-information. Using the -bh parameters in addition to the -q parameter makes postcat limit the output to header and body only, not including meta-information. A side benefit is that the tempfile is in the format most email clients recognize as .eml, allowing you to view the resulting (edited) message.
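For option 2, if you only want to requeue the deferred messages that involve the broken sender rather than everything, a pipeline similar to the example in the postsuper man page can be used; the address is a placeholder and the regex matches anywhere in the queue listing entry, so adjust it to your case:
mailq | tail -n +2 | grep -v '^ *(' | awk 'BEGIN { RS = "" } /wrong@old-domain\.example/ { print $1 }' | tr -d '*!' | postsuper -r -
postsuper -r - reads the queue IDs from standard input, so only the matching messages are requeued.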
{ "source": [ "https://serverfault.com/questions/623211", "https://serverfault.com", "https://serverfault.com/users/51913/" ] }
623,634
Ansible tags can be used to run only a subset of tasks/roles. This means that by default all tasks are executed and we can only prevent some tasks from executing. Can we limit a task to be executed only when the "foo" tag is specified? Can we use the current tags in the when section of a task?
Ansible 2.5 comes with special tags never and always . Tag never can be used exactly for this purpose. E.g: tasks: - debug: msg='{{ showmevar}}' tags: [ 'never', 'debug' ] In this example, the task will only run when the debug (or never ) tag is explicitly requested. [Reference on ansible docs]
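Invocation then looks like this (the playbook name is a placeholder); only the first run executes the tagged task:
ansible-playbook site.yml --tags debug     # task runs
ansible-playbook site.yml                  # task is skipped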
{ "source": [ "https://serverfault.com/questions/623634", "https://serverfault.com", "https://serverfault.com/users/105928/" ] }
624,387
I would like to run a XAMPP server and a Node.js server on port 80. If the server gets an HTTP request, XAMPP should handle it; if the server gets a WebSocket request, Node.js should. How is this possible? If a port is already in use, then I can't start the other server program.
You would need to use a reverse proxy to do this, e.g. Apache 2.4 with mod_proxy_wstunnel . Use it as a frontend and then tunnel the connections to the appropriate backend.
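A rough sketch of what that could look like, assuming Node.js is moved to an internal port such as 3000 and the /ws/ path is just an example: enable mod_proxy and mod_proxy_wstunnel (in XAMPP, by uncommenting the corresponding LoadModule lines in httpd.conf), then add to the port-80 virtual host:
ProxyPass        "/ws/"  "ws://127.0.0.1:3000/"
ProxyPassReverse "/ws/"  "ws://127.0.0.1:3000/"
Apache keeps serving normal HTTP requests itself and only tunnels WebSocket traffic on that path to Node.js.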
{ "source": [ "https://serverfault.com/questions/624387", "https://serverfault.com", "https://serverfault.com/users/239619/" ] }
624,848
I'm having an error when trying to redirect https://example.com to https://www.example.com . When I go to https://example.com , it doesn't redirect and returns the page/200 status. I don't want this, I want it to redirect to https://www.example.com . When I go to http://example.com , it redirects to https://www.example.com Can somebody tell me where I am going wrong? This is my default and default-ssl configuration files: default.conf server { listen 80; server_name example.com; return 301 https://www.example.com$request_uri; } default-ssl.conf upstream app_server_ssl { server unix:/tmp/unicorn.sock fail_timeout=0; } server { server_name example.com; return 301 https://www.example.com$request_uri } server { server_name www.example.com; listen 443; root /home/app/myproject/current/public; index index.html index.htm; error_log /srv/www/example.com/logs/error.log info; access_log /srv/www/example.com/logs/access.log combined; ssl on; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_certificate /srv/www/example.com/keys/ssl.crt; ssl_certificate_key /srv/www/example.com/keys/www.example.com.key; ssl_ciphers AES128-SHA:RC4-MD5:ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:!ADH:!AECDH:!MD5:AES128-SHA; ssl_prefer_server_ciphers on; client_max_body_size 20M; try_files $uri/index.html $uri.html $uri @app; # CVE-2013-2028 http://mailman.nginx.org/pipermail/nginx-announce/2013/000112.html if ($http_transfer_encoding ~* chunked) { return 444; } location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server_ssl; } error_page 500 502 503 504 /500.html; location = /500.html { root /home/app/example/current/public; } }
You are missing the listen directive in default-ssl.conf . Add listen 443; to this server block: server { server_name example.com; return 301 https://www.example.com$request_uri; } By default, if you omit this directive, nginx assumes that you want to listen on port 80. Here is the documentation of this default behavior. Edit: Thanks to the comment from @TeroKilkanen. Here is the complete config for your default-ssl.conf : server { listen 443 ssl; server_name example.com; ssl_certificate /srv/www/example.com/keys/ssl.crt; ssl_certificate_key /srv/www/example.com/keys/www.example.com.key; return 301 https://www.example.com$request_uri; } Sidenote : You can replace the ssl on; directive with listen 443 ssl; as recommended by the nginx documentation .
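After changing the config, it is worth validating and checking the redirect (replace the domain with your own):
nginx -t && nginx -s reload
curl -I https://example.com     # should now return 301 with Location: https://www.example.com/...
Note that the certificate served for the bare domain must also cover example.com, otherwise clients get a certificate warning before the redirect can happen.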
{ "source": [ "https://serverfault.com/questions/624848", "https://serverfault.com", "https://serverfault.com/users/130312/" ] }
625,008
Is there a method to find a domain's DKIM and DMARC records using dig or nslookup ? I have attempted to do the following: dig somedomain.org any returns many records, but not the known DKIM and DMARC text records. nslookup -type=txt somedomain.org returns all the text records known except the DKIM and DMARC records.
To query the TXT record for DMARC, you can use: dig TXT _dmarc.example.org To query for a particular record for DKIM, you would need to know the selector prefix. You will find it in the s value in an email's DKIM-Signature. For example: DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.org; s=google; t=1615461277; […] You would then query it as TXT: dig TXT google._domainkey.example.org
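Adding +short keeps the output to just the record contents, which is handy in scripts (the selector and domain are examples):
dig +short TXT _dmarc.example.org
dig +short TXT google._domainkey.example.org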
{ "source": [ "https://serverfault.com/questions/625008", "https://serverfault.com", "https://serverfault.com/users/240025/" ] }
625,166
I'm going to have a number of EC2 instances in an Elastic Beanstalk autoscaling group in a default subnet in a VPC. The app on these EC2 instances needs to connect to a third party service who uses an IP address whitelist to allow access. So I need one or more static IP addresses that I can give to this service provider so they can be added to the whitelist. My understanding is that the only way to get a static IP is to get an Elastic IP address. And I can only associate the Elastic IP with one EC2 instance at a time - I can't associate it with my whole subnet or internet gateway (is this correct?). So do I need an Elastic IP for each EC2 instance, so each instance can be separately whitelisted? How would that work if the autoscaling adds another instance? Should I have one EC2 instance with an Elastic IP, and route all the outgoing traffic via that instance? If so, does that instance need to be solely for this purpose or can it be one of the instances that's running my app?
You need a NAT. This configuration is commonly used to support private subnets in VPC, there's quite a detailed guide here . Once your VPC is configured to use the NAT instance all the outbound traffic will be attributed to the EIP of the NAT instance. If so, does that instance need to be solely for this purpose or can it be one of the instances that's running my app? Technically you probably could, but it's not a good idea: It's good security to have roles isolated. You want your application servers to have similar or identical load profiles. If one instance has an extra 10% load because of the NAT then you'll have to scale up prematurely when you hit the limits of that instance. This will get worse as the NAT gets busier as more instances get added to your cluster. You want your application servers to be identical and ephemeral so you can tear them down and/or replace them whenever there's an issue or you need to scale. Having one application server which is different to the rest would be a major headache. You might be able to get away with it if your instances are containerised but it's still probably not a great idea. Also keep in mind that your NAT instance could be a single point of failure, so you may want to think about redundancy.
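If you would rather not manage a NAT instance yourself, AWS now also offers a managed NAT gateway; a rough CLI sketch, with all IDs as placeholders, looks like:
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0
The third party then whitelists the Elastic IP attached to the NAT gateway, and that keeps working as the Auto Scaling group grows or shrinks.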
{ "source": [ "https://serverfault.com/questions/625166", "https://serverfault.com", "https://serverfault.com/users/80084/" ] }
625,641
I have a system that I can only log in to under my username (myuser), but I need to run commands as another user (scriptuser). So far, I have come up with the following to run the commands I need: ssh -tq myuser@hostname "sudo -u scriptuser bash -c \"ls -al\"" However, when I try to run a more complex command, such as [[ -d "/tmp/Some directory" ]] && rm -rf "/tmp/Some directory" I quickly get into trouble with quoting. I'm not sure how I could pass this example complex command to bash -c , when \" already delimits the boundaries of the command I'm passing (and so I don't know how to quote /tmp/Some directory, which includes spaces). Is there a general solution allowing me to pass any command no matter how complex/crazy the quoting is, or is this some sort of limitation I have reached? Are there other possible and perhaps more readable solutions?
A trick I use sometimes is to use base64 to encode the commands, and pipe it to bash on the other side: MYCOMMAND=$(base64 -w0 script.sh) ssh user@remotehost "echo $MYCOMMAND | base64 -d | sudo bash" This will encode the script, with any commas, backslashes, quotes and variables, inside a safe string, and send it to the other server. ( -w0 is required to disable line wrapping, which happens at column 76 by default). On the other side, base64 -d will decode the script and feed it to bash to be executed. I never got any problem with it, no matter how complex the script was. It solves the problem with escaping, because you don't need to escape anything. It does not create a file on the remote host, and you can run vastly complicated scripts with ease.
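Combined with the sudo -u requirement from the question, the same trick might look like this (host and user names taken from the question, script.sh holds the complex command):
MYCOMMAND=$(base64 -w0 script.sh)
ssh -i path/to/id_rsa myuser@hostname "echo $MYCOMMAND | base64 -d | sudo -u scriptuser bash"
All the tricky quoting lives in script.sh on your side, so nothing needs escaping on the ssh command line.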
{ "source": [ "https://serverfault.com/questions/625641", "https://serverfault.com", "https://serverfault.com/users/72785/" ] }
626,521
Problem: iptables resets to default settings after server reboot. I'm trying to set rule like this: iptables -I INPUT -p tcp --dport 3000 -j ACCEPT after that I do: service iptables save and it writes back something like this iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] and after this I just ran (this was done once): chkconfig iptables on (I have read that this has to be done in order to restore settings after reboot) After that I reboot and run this command: systemctl list-unit-files | grep iptables and I see that iptables.service is enabled, however, the rule (to open port 3000) does not work anymore. How do I persist these settings?
CentOS 7 is using FirewallD now! Use the --permanent flag to save settings. Example: firewall-cmd --zone=public --add-port=3000/tcp --permanent Then reload rules: firewall-cmd --reload
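To confirm the rule is active and will survive a reboot (zone name assumed to be public, as above):
firewall-cmd --zone=public --query-port=3000/tcp   # prints "yes" once the rule is active
firewall-cmd --zone=public --list-ports            # should include 3000/tcp after the reload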
{ "source": [ "https://serverfault.com/questions/626521", "https://serverfault.com", "https://serverfault.com/users/241046/" ] }
626,558
is there any difference between running an intensive task over sudo with the following commands?: nice sudo [intensive command here] sudo nice [intensive command here] BTW this is for Linux 3.x.
There's a difference, a crucial one. If you want to decrease the process' priority, the order does not matter. On the other hand, if you want to increase it, you must put sudo before nice . Since you are running the command as a normal user (otherwise you would not bother with sudo at all), you can only decrease the priority of your command. But if you use sudo first, you can increase it if you want.
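You can see this for yourself; here sleep 60 stands in for the intensive command:
nice -n -5 sudo sleep 60    # nice runs as your user: the boost is typically refused with a permission error and sleep runs at niceness 0
sudo nice -n -5 sleep 60    # nice runs as root: sleep really gets niceness -5
ps -C sleep -o pid,ni,comm  # from another terminal, confirm the resulting nice value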
{ "source": [ "https://serverfault.com/questions/626558", "https://serverfault.com", "https://serverfault.com/users/241079/" ] }