source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
311,883 | I have a server running multiple websites under IIS7.5. I want to view the log files for one website in particular. In C:\inetpub\logs\LogFiles I see a number of folders, W3SVC1 through 6. How do I find out which website corresponds to which folder? In IIS6.0 it used to tell you, but I can't find this anywhere in IIS7.5. | The numbers on the folders correspond to the Site ID of each specific site in IIS. If you go into Internet Manager you can see the Site ID's by clicking on the Sites node in the navigation pane. So, if a site has ID 1 its log folder name is W3SVC1, ID2 = W3SVC2, etc. You could also review %WinDir%\System32\Inetsrv\Config\applicationHost.Config , which contains information about all of the sites. It is XML format. You'll want to look for <site> nodes in the XML which contain an id attribute. This is the aforementioned Site ID of that specific site and will align with the number in the log folder. <site name="Default Web Site" id="1"> | {
"source": [
"https://serverfault.com/questions/311883",
"https://serverfault.com",
"https://serverfault.com/users/77742/"
]
} |
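A quick way to see the site-ID-to-name mapping from the command line, assuming the standard appcmd tool that ships with IIS 7.x, is:
%windir%\system32\inetsrv\appcmd list site
Each line of output shows the site name together with its id (for example, SITE "Default Web Site" (id:1,...)), which maps directly to the W3SVC<id> log folder.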
312,111 | My iOS app is currently accessing domain A via http POST but I would like to forward all requests to domain B. If I use the usual rewrite ^/(.*)$ http://mydomain/$1 permanent; the POST data seems to get lost. How can I pass HTTP POST data to a different domain using NginX? | Try using the reverse proxy support instead. An example location section would be: location / {
proxy_pass http://localhost:8080;
proxy_redirect http://localhost:8080/ /;
proxy_read_timeout 60s;
# May not need or want to set Host. Should default to the above hostname.
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
} This example will pass through all requests to this server block to a second server running on localhost:8080 . This preserves POST 's and should also preserve other request types too if it ever becomes an issue. The issue is that external redirects will never resend POST data. This is written into the HTTP spec (check the 3xx section). Any client that does do this is violating the spec. If the 301/302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. I'm fairly sure that most browsers implement this by simply forcing the redirected request to be a GET request. Theoretically, the spec does allow for a browser that would ask the user whether to redirect the POST data, but I'm unaware of any that currently do. | {
"source": [
"https://serverfault.com/questions/312111",
"https://serverfault.com",
"https://serverfault.com/users/94896/"
]
} |
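A minimal sketch of the same reverse-proxy approach pointed at a second domain rather than localhost; domain-b.example.com is a placeholder, not a value from the question:
location / {
proxy_pass http://domain-b.example.com;
proxy_set_header Host domain-b.example.com;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
As with the localhost example, the request (including the POST body) is replayed against the upstream instead of being redirected, so nothing is lost.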
312,177 | What is the right way to enable correct charset headers in NGINX? I'm analyzing my website with Google Page Speed. It says that I should specify the charset of HTML files in HTTP-headers. What is the right way to do this? I already tried to set charset utf-8; in the server {} declaration of my NGINX configuration file, but it hasn't got any effect. My server responds with the following header: Connection: keep-alive
Date: Fri, 16 Sep 2011 12:43:24 GMT
Last-Modified: Fri, 02 Sep 2011 15:13:17 GMT
Server: nginx/0.7.67 Thank you. | Adding charset utf-8; is pretty much everything you need to do. Are you sure that you didn't forget to reload nginx after you changed the configuration file? Besides, at the moment of writing, curl -I https://vorb.de/ returns the following result: HTTP/1.1 200 OK
Server: nginx/0.7.67
Date: Fri, 16 Sep 2011 13:20:03 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 1705
Last-Modified: Fri, 02 Sep 2011 15:13:17 GMT
Connection: keep-alive
Vary: Accept-Encoding
Accept-Ranges: bytes So everything looks ok now. | {
"source": [
"https://serverfault.com/questions/312177",
"https://serverfault.com",
"https://serverfault.com/users/83157/"
]
} |
312,453 | If I don't set an error log inside a virtual host, it will default to the default error/access log. Is there a way to turn this off for one virtual host? | Within your <VirtualHost> block for the vhost in question, you can configure the logs to be sent to /dev/null: <VirtualHost *:80>
ServerName nologserver.tld
ErrorLog /dev/null
CustomLog /dev/null common
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/312453",
"https://serverfault.com",
"https://serverfault.com/users/86582/"
]
} |
312,472 | When updating with yum I receive the following message: yum update
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
* atomic: www7.atomicorp.com
* base: mirror.de.leaseweb.net
* extras: mirror.de.leaseweb.net
* updates: mirror.de.leaseweb.net
118 packages excluded due to repository priority protections
Setting up Update Process
No Packages marked for Update What does that mean? How do I install these packages? | Some packages are held by more than one repository. The priorities plugin chooses packages from the highest-priority repository, excluding duplicate entries from other repos. | {
"source": [
"https://serverfault.com/questions/312472",
"https://serverfault.com",
"https://serverfault.com/users/78129/"
]
} |
312,494 | I would like to add a condition in an adduser script to update nginx.conf for it to load ~/www as http://ipaddress/~user whenever I create a new user. And when a user is named www.domainname it will host that domain name in the ~/www folder. Is there a script that already does this? | You don't need to add anything to nginx upon user creation. Simply use something like this in your server block: location ~ ^/~(.+?)(/.*)?$ {
alias /home/$1/www$2;
autoindex on;
} Check your distribution's /etc/skel : if you mkdir /etc/skel/www , all user directories created by adduser (or your distribution's adduser script) will have this directory by default. | {
"source": [
"https://serverfault.com/questions/312494",
"https://serverfault.com",
"https://serverfault.com/users/75248/"
]
} |
312,930 | In my application I am pinging a server and waiting for a response. I am using this to determine whether the server is available and responsive or not. Is this a reliable way of determining availability? I assume a firewall could be filtering icmp traffic... Are there any other drawbacks? Is there a more reliable method? | The best way to tell if any given remote service is alive is to ask it to service a request in the way it's meant to - in fact it's the only way to truly know something is working correctly. As an example I always get my load-balancers to get an actual 'head' response back from our web servers, you could do the same for a small select on a DB box if you wanted to, or whatever your actual server serves. As a tip you can create an 'online.txt' (or whatever name you want to give it) on your web servers, have your LBs try to get that file and if it fails then it removes the server from the VIP, this is a nice way of manually taking individual servers out of your VIPs simply by renaming a single file. Ping only tests for the ability to respond to pings, so that's base OS, parts of the IP stack and the physical links - but that's all, everything else could be down and you'd not know. I know this is mentioned below, but it bears repeating again and again. ICMP Echo Requests (aka "Pings") (aka ICMP Type 8) are built onto the IP stack spec, yes, but are not required to be either implemented or used. As a matter of fact, there are a large number of internet providers who refuse to forward those and silently drop those requests, as they are a form of network attack (called a pingflood). As mentioned above, this is handled by the OS (specifically at the network stack level) and so it is up to the OS configuration to respond to those or not. If this is turned off (a security precaution?), you can't do anything about receiving ping replies from the other end. This is why it's not reliable. | {
"source": [
"https://serverfault.com/questions/312930",
"https://serverfault.com",
"https://serverfault.com/users/87011/"
]
} |
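A minimal sketch of the kind of application-level check described above, assuming a hypothetical online.txt health file on the web server and curl on the monitoring host:
# prints only the HTTP status code; anything other than 200 means take the server out of rotation
curl -s -o /dev/null -w '%{http_code}\n' http://webserver.example.com/online.txt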
313,126 | Our building is located approx. 100 meters from the explosive charges. They happen several times per day, and really shake the entire building a lot. This is going to go on for many days and the blasts are supposed to get stronger. Our server rooms are nothing fancy; one of them has all the racks on hard concrete while the other one has a raised floor (the one which allows the cables to go beneath it). Does anyone have any tips, countermeasures or best practices for us? Currently we are thinking of the following countermeasures: Daily report of the server rooms status lights (HD lights, power supplies and so on). Nightly check disk scan on the most important servers Order in extra supply of spare harddrives Edit: Many good answers here! However one needs to be accepted. The highest voted answer at the time of this edit will get their answer accepted. | If I were you, I'd call the company's insurance company, and have them place out an accelerometer. This way - the insurance company will know that you're not the one shaking the disks, and the insurance company will know for sure when every blast has gone off, in case your SAN dies at the exact same moment. (We did this.) | {
"source": [
"https://serverfault.com/questions/313126",
"https://serverfault.com",
"https://serverfault.com/users/62845/"
]
} |
313,465 | I'm in the process of configuring a cloud server to run the following stack: Ruby, Passenger, Apache; under Ubuntu 10.04 (Lucid Lynx). In the process of wanting to make the server easier to manage I setup RSA keys on root , and www-data so that I can ssh into the server. The thing I didn't like was that www-data 's .ssh directory sat in /var/www which is the default directory setup for apache. My worry is that if apache isn't configured properly then the .ssh directory can be exposed. I came across the solution to move the ~/.ssh/authorized_keys file into a central location by changing AuthorizedKeysFile in /etc/ssh/sshd_config . This comes with 2 pros: A single location for keys, and not having to worry about a bad apache configuration. The single con that I can think of is that now every user is available for login on the server (clearly a double edged sword of the central key file.) Is there anything that I've missed in this configuration? Have I over exposed myself, or is this a better solution than individual authorized_keys files? I'm green when it comes to server management, but am totally ready to be called bad names for doing bad things. :D | All the keys files can be centralized in the same directory and not mixed in the same file. Simply set up the sshd_config file like this: AuthorizedKeysFile /etc/ssh/authorized_keys/%u On your server: www-data keys will be in /etc/ssh/authorized_keys/www-data root keys will be in /etc/ssh/authorized_keys/root Regarding the access rights, these settings are accepted by sshd: /etc/ssh/authorized_keys is owned by root:root and has mode 755. Key files are owned by root:root and have mode 644. Other modes may work but I haven't tested them. | {
"source": [
"https://serverfault.com/questions/313465",
"https://serverfault.com",
"https://serverfault.com/users/755/"
]
} |
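A sketch of the setup steps implied by this answer, using the paths and permissions given above (run as root, then reload sshd so the AuthorizedKeysFile change takes effect):
mkdir -p /etc/ssh/authorized_keys
chown root:root /etc/ssh/authorized_keys
chmod 755 /etc/ssh/authorized_keys
cp /var/www/.ssh/authorized_keys /etc/ssh/authorized_keys/www-data
chown root:root /etc/ssh/authorized_keys/www-data
chmod 644 /etc/ssh/authorized_keys/www-data
service ssh reload # or /etc/init.d/ssh reload, depending on the distribution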
313,649 | We have a linux server that has been in heavy use for 3 years. We're running a number of virtualized servers on it, some that have not been well behaved, and for a significant time the server's io capacity was exceeded leading to bad iowait. It's got 4 500gb Barracuda sata drives connected to a 3com raid controller. 1 Drive has the OS, and the other 3 are setup raid-5. Now we have a debate as to the condition of the drives and whether they are actively failing. Here's a portion of the output for 1 of the 4 disks. They all have relatively similar statistics: SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 118 099 006 Pre-fail Always - 169074425
3 Spin_Up_Time 0x0003 095 092 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 26
5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 077 060 030 Pre-fail Always - 200009354607
9 Power_On_Hours 0x0032 069 069 000 Old_age Always - 27856
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 1
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 26
184 Unknown_Attribute 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 1
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 071 060 045 Old_age Always - 29 (Lifetime Min/Max 26/37)
194 Temperature_Celsius 0x0022 029 040 000 Old_age Always - 29 (0 21 0 0)
195 Hardware_ECC_Recovered 0x001a 046 033 000 Old_age Always - 169074425
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
SMART Error Log Version: 1
No Errors Logged My interpretation of this is that we have not had any bad sectors or other indications that any of the drives are actively failing. However, the high Raw_Read_Error_Rate and Seek_Error_Rate is being pointed to as indications that the drives are dying. | For Seagate disks (and possibly some old ones from WD too) the Seek_Error_Rate and Raw_Read_Error_Rate are 48 bit numbers, where the most significant 16 bits are an error count, and the low 32 bits are a number of operations. % python
>>> 200009354607 & 0xFFFFFFFF
2440858991
>>> (200009354607 & 0xFFFF00000000) >> 32
46 So your disk has performed 2440858991 seeks, of which 46 failed. My experience with Seagate drives is that they tend to fail when the number of errors goes over 1000. YMMV. | {
"source": [
"https://serverfault.com/questions/313649",
"https://serverfault.com",
"https://serverfault.com/users/93259/"
]
} |
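The same split can be done with plain shell arithmetic instead of Python; the raw value below is the Seek_Error_Rate from the question:
raw=200009354607
echo "seek errors: $(( raw >> 32 ))" # 46
echo "total seeks: $(( raw & 0xFFFFFFFF ))" # 2440858991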
314,311 | I am wondering if rebooting a server on a schedule would be a good idea for performance. Let's say we want to reboot the server at 02:00 AM every 2 nights. The server here is Windows Server 2008 R2 . Mainly, SQL Server and IIS 7.5 (nearly 15 apps running) are running under this server. The server has 4GB memory. | While I would agree that there is nothing wrong with rebooting the box, per se, based on your comment that the SQL Server Agent is stopping I would advise some additional root cause analysis. Services don't typically just stop, and the SQL Server Agent service hasn't acted that way in my experience, typically. I think you'd do well, aside from rebooting, to examine the event logs and run a long-term performance counter log that you can analyze with Performance Analysis of Logs (PAL) to see if it "sees" anything wrong. You should try, if nothing else, to correlate the events associated with the SQL Agent stopping with other factors. | {
"source": [
"https://serverfault.com/questions/314311",
"https://serverfault.com",
"https://serverfault.com/users/77342/"
]
} |
314,574 | The wikipedia description of the HTTP header X-Forwarded-For is: X-Forwarded-For: client1, proxy1, proxy2, ... The nginx documentation for the directive real_ip_header reads, in part: This directive sets the name of the header used for transferring the replacement IP address. In case of X-Forwarded-For, this module uses the last ip in the X-Forwarded-For header for replacement. [Emphasis mine] These two descriptions seem at odds with one another. In our scenario, the X-Forwarded-For header is exactly as described -- the client's "real" IP address is the left-most entry. Likewise, the behavior of nginx is to use the right -most value -- which, obviously, is just one of our proxy servers. My understanding of X-Real-IP is that it is supposed to be used to determine the actual client IP address -- not the proxy. Am I missing something, or is this a bug in nginx? And, beyond that, does anyone have any suggestions for how to make the X-Real-IP header display the left -most value, as indicated by the definition of X-Forwarded-For ? | I believe the key to solving X-Forwarded-For woes when multiple IPs are chained is the recently introduced configuration option, real_ip_recursive (added in nginx 1.2.1 and 1.3.0). From the nginx realip docs : If recursive search is enabled, an original client address that matches one of the trusted addresses is replaced by the last non-trusted address sent in the request header field. nginx was grabbing the last IP address in the chain by default because that was the only one that was assumed to be trusted. But with the new real_ip_recursive enabled and with multiple set_real_ip_from options, you can define multiple trusted proxies and it will fetch the last non-trusted IP. For example, with this config: set_real_ip_from 127.0.0.1;
set_real_ip_from 192.168.2.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on; And an X-Forwarded-For header resulting in: X-Forwarded-For: 123.123.123.123, 192.168.2.1, 127.0.0.1 nginx will now pick out 123.123.123.123 as the client's IP address. As for why nginx doesn't just pick the left-most IP address and requires you to explicitly define trusted proxies, it's to prevent easy IP spoofing. Let's say a client's real IP address is 123.123.123.123 . Let's also say the client is up to no good, and they're trying to spoof their IP address to be 11.11.11.11 . They send a request to the server with this header already in place: X-Forwarded-For: 11.11.11.11 Since reverse proxies simply add IPs to this X-Forwarded-For chain, let's say it ends up looking like this when nginx gets to it: X-Forwarded-For: 11.11.11.11, 123.123.123.123, 192.168.2.1, 127.0.0.1 If you simply grabbed the left-most address, that would allow the client to easily spoof their IP address. But with the above example nginx config, nginx will only trust the last two addresses as proxies. This means nginx will correctly pick 123.123.123.123 as the IP address, despite that spoofed IP actually being the left-most. | {
"source": [
"https://serverfault.com/questions/314574",
"https://serverfault.com",
"https://serverfault.com/users/54287/"
]
} |
314,850 | Remote session from client name a exceeded the maximum allowed failed
logon attempts. The session was forcibly terminated. One of the servers are being hit by a dictionary attack. I have all the standard security in place (renamed Administrator, etc.) but want to know is there a way to limit or ban the attack. Edit : The server is remote only. I need RDP to access it. | Block RDP at the firewall. I don't know why so many people allow this. If you need to RDP to your server, setup a VPN. | {
"source": [
"https://serverfault.com/questions/314850",
"https://serverfault.com",
"https://serverfault.com/users/433/"
]
} |
314,858 | In light of a growing number of security issues, such as the newly announced Browser Exploit Against SSL/TLS (BEAST), I was curious how we could go about enabling TLS 1.1 and 1.2 with OpenSSL and Apache to ensure that we will not be vulnerable to such threat vectors. | TLS1.2 is now available for apache,
to add TLSs1.2 you just need to add in your https virtual host configuration: SSLProtocol -all +TLSv1.2 -all is removing other ssl protocol (SSL 1,2,3 TLS1) +TLSv1.2 is adding TLS 1.2 for more browser compatibility you can use SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2 by the way you can increase the Cipher suite too using: SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GC$ You can test your https website security with an online scanner like: https://www.ssllabs.com/ssltest/index.html | {
"source": [
"https://serverfault.com/questions/314858",
"https://serverfault.com",
"https://serverfault.com/users/60455/"
]
} |
314,874 | We want to support web browsers utilizing TLS 1.1 and 1.2, which has been apparently implemented by Microsoft, but is turned off by default. So I went searching on Google and discovered some pages everyone seems to be following: http://support.microsoft.com/kb/245030 https://www.derekseaman.com/2010/06/enable-tls-12-aes-256-and-sha-256-in.html However! It doesn't appear to be working for me. I have set both DWORD vaules for DisabledByDefault and Enabled for TLS 1.1 and 1.2. I can confirm my client is attempting to communicate with TLS 1.2, but the server only responds with 1.0. I've restarted IIS, but it didn't change the situation. Microsoft points out: "WARNING: The DisabledByDefault value in the registry keys under the Protocols key does not take precedence over the grbitEnabledProtocols value that is defined in the SCHANNEL_CRED structure that contains the data for an Schannel credential." Well, that's very vague to me. I can't find anywhere where SCHANNEL_CRED is defined or set, all I can determine that it's a structure defined in a Microsoft library. That's my only guess for why this isn't work, yet I can't find enough information on it to determine if it is the true problem. | Reboot. Changes to Schannel settings do not take effect until the system is rebooted. | {
"source": [
"https://serverfault.com/questions/314874",
"https://serverfault.com",
"https://serverfault.com/users/92051/"
]
} |
315,181 | On a virtualized server running Ubuntu 10.04, df reports the following: # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 7.4G 7.0G 0 100% /
none 498M 160K 498M 1% /dev
none 500M 0 500M 0% /dev/shm
none 500M 92K 500M 1% /var/run
none 500M 0 500M 0% /var/lock
none 500M 0 500M 0% /lib/init/rw
/dev/sda3 917G 305G 566G 36% /home This is puzzling me for two reasons: 1.) df says that /dev/sda1, mounted at /, has a 7.4 gigabyte capacity, of which only 7.0 gigabytes are in use, yet it reports / being 100 percent full; and 2.) I can create files on / so it clearly does have space left. Possibly relevant is that the directory /www is a symbolic link to /home/www, which is on a different partition (/dev/sda3, mounted at /home). Can anyone offer suggestions on what might be going on here? The server appears to be working without issue, but I want to make sure there's not a problem with the partition table, file systems or something else which might result in implosion (or explosion) later. | It's possible that a process has opened a large file which has since been deleted. You'll have to kill that process to free up the space. You may be able to identify the process by using lsof. On Linux deleted yet open files are known to lsof and marked as (deleted) in lsof's output. You can check this with sudo lsof +L1 | {
"source": [
"https://serverfault.com/questions/315181",
"https://serverfault.com",
"https://serverfault.com/users/95782/"
]
} |
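If you want to act on the lsof output without restarting the offending process, one common (if blunt) technique, not mentioned in the answer above, is to truncate the deleted-but-still-open file through /proc; the PID 1234 and file descriptor 5 below are placeholders taken from lsof's PID and FD columns:
sudo lsof +L1 # note the PID and FD of the (deleted) file
sudo sh -c ': > /proc/1234/fd/5' # truncate it in place, freeing the space immediately
The space is released right away, but the process keeps writing to a now-empty file, so only do this when that is acceptable.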
315,590 | I must remove 200 000 files (all of them) from a folder, and I don't want to delete the folder itself. using rm, I get an "Argument list too long" error.
I've tried to do something with xargs, but I'm not a Shell Guy, so it doesn't work: find -name * | xargs rm -f | $ find /path/to/folder -type f -delete | {
"source": [
"https://serverfault.com/questions/315590",
"https://serverfault.com",
"https://serverfault.com/users/76824/"
]
} |
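If you prefer the xargs route the asker attempted, the missing pieces are an explicit path, a -type test, and null-delimited names so odd filenames don't break the pipeline; a sketch:
find /path/to/folder -maxdepth 1 -type f -print0 | xargs -0 rm -f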
315,593 | I created a windows 7 virtual machine on Centos 6 running the following: virt-install --name=W7VIRT64 --ram=768 --disk path=/var/lib/libvirt/images/guest1-win7-32,size=8 --vnc --network network=default --os-type=windows --os-variant=win7 --cdrom=/root/win7.iso I was able to successfully install the guest OS and get it to boot. How do I go about increasing the disk image size to 20G using non-gui tools? | $ find /path/to/folder -type f -delete | {
"source": [
"https://serverfault.com/questions/315593",
"https://serverfault.com",
"https://serverfault.com/users/9006/"
]
} |
315,817 | We are currently designing our new database servers, and have come up with a trade off I'm not entirely sure of how to answer. These are our options: 48GB 1333MHz, or 96GB 1066MHz. My thinking is that RAM should be plentiful for a Database Server (we have plenty and plenty of data, and some very large queries) rather than as quick as it could be. Apparently we can't get 16GB chips at 1333MHz, hence the choices above. So, should we get lots of slower RAM, or less faster RAM? Extra Info: Number of DIMM Slots Available: 6 Servers: Dell Blades
CPU: 6 core (only single socket due to Oracle licensing). | You will want to go with the large and slow RAM. The difference in RAM performance is negligible compared to the difference between RAM performance and disk performance. | {
"source": [
"https://serverfault.com/questions/315817",
"https://serverfault.com",
"https://serverfault.com/users/64156/"
]
} |
316,516 | I have seen advice saying you should use different port numbers for private applications (e.g. intranet, private database, anything that no outsider will use). I am not entirely convinced that can improve security because Port scanners exist If an application is vulnerable, it remains so regardless of its port number. Did I miss something or have I answered my own question? | It doesn't provide any serious defense against a targetted attack. If your server is being targetted then, as you say, they will port scan you and find out where your doors are. However, moving SSH off the default port of 22 will deter some of the non-targetted and amateur script kiddie type attacks. These are relatively unsophisticated users who are using scripts to port scan large blocks of IP addresses at a time specifically to see if port 22 is open and when they find one, they will launch some sort of attack on it (brute force, dictionary attack, etc). If your machine is in that block of IPs being scanned and it is not running SSH on port 22 then it will not respond and therefore will not show up in the list of machines for this script kiddie to attack. Ergo, there is some low-level security provided but only for this type of opportunistic attack. By way of example, if you have the time - log dive on your server (assuming SSH is on port 22) and pull out all the unique failed SSH attempts that you can. Then move SSH off that port, wait some time, and go log diving again. You will undoubtedly find less attacks. I used to run Fail2Ban on a public webserver and it was really, really obvious when I moved SSH off port 22. It cut the opportunistic attacks by orders of magnitude. | {
"source": [
"https://serverfault.com/questions/316516",
"https://serverfault.com",
"https://serverfault.com/users/96212/"
]
} |
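For reference, moving SSH off the default port as discussed above is a one-line change (2222 is an arbitrary example port):
# in /etc/ssh/sshd_config
Port 2222
# then restart the daemon, e.g.
/etc/init.d/ssh restart # the service is named sshd on Red Hat-style systems
Keep your existing session open until you have confirmed you can log in on the new port.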
316,560 | I see kswapd using 100% CPU... how can I tell on which process's behalf kswapd is being used so much? | kswapd is managing swap space in response to memory demands greater than physically available for all processes. It is process agnostic; it is only interested in what pages are accessed and when (it is more complex than this of course but to keep things simple we may as well view it this way). So the real question is "what processes have the greatest burden on memory that are causing kswapd to need to page all the time". That is most easily answered using 'top' and switching to memory usage sort mode. | {
"source": [
"https://serverfault.com/questions/316560",
"https://serverfault.com",
"https://serverfault.com/users/96215/"
]
} |
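Besides switching top into memory-sort mode (press Shift+M), a quick non-interactive way to see which processes hold the most resident memory, which is usually what is driving kswapd, is:
ps aux --sort=-rss | head -n 10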
316,637 | We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking issue I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that in the event of a serious failure at their premises would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines... Which, of course, takes time. Ideas? UPDATE I had a discussion with the client today and they clarified on the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response. | Here is Wikipedia 's handy chart of the pursuit of nines: Interestingly, only 3 of the top 20 websites were able to achieve the mythical 5 nines or 99.999% uptime in 2007. They were Yahoo, AOL, and Comcast. In the first 4 months of 2008, some of the most popular social networks , didn't even come close to that. From the chart, it should be evident how ridiculous the pursuit of 100% uptime is... | {
"source": [
"https://serverfault.com/questions/316637",
"https://serverfault.com",
"https://serverfault.com/users/3550/"
]
} |
316,703 | In order to develop a web application based on PostgreSQL, I need to install libpq on my CentOS server. I can install it with "apt-get install libpq-dev" on Ubuntu, but I cannot install it on CentOS with "yum install libpq". Can anyone tell me how to install it? Thanks! | The package is called postgresql-libs on Red Hat and derived distributions. | {
"source": [
"https://serverfault.com/questions/316703",
"https://serverfault.com",
"https://serverfault.com/users/55582/"
]
} |
316,814 | I want to serve invoices for download. Currently I'm using a simple numbering scheme (invoice-01.pdf, invoice-02.pdf, and so on). I know that I could use hashes instead to obscure the data. Is it also possible to use PHP and serve the invoices by not directly having the user point to them? | There is even an example of this on php.net <?php
// We'll be outputting a PDF
header('Content-type: application/pdf');
// It will be called downloaded.pdf
header('Content-Disposition: attachment; filename="downloaded.pdf"');
// The PDF source is in original.pdf
readfile('original.pdf');
?> Or expand that a bit with <?php
if ( can_this_file_be_downloaded() ) {
header('Content-type: application/pdf');
header('Content-Disposition: attachment; filename="invoice.pdf"');
readfile("{$_GET['filename']}.pdf");
} else {
die("None shall pass");
}
?> | {
"source": [
"https://serverfault.com/questions/316814",
"https://serverfault.com",
"https://serverfault.com/users/81054/"
]
} |
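One caution about the second snippet above: passing $_GET['filename'] straight to readfile() allows path traversal. A more defensive sketch, where the invoices/ directory and naming scheme are assumptions rather than anything from the question:
<?php
$name = basename($_GET['filename']); // strips any directory components such as ../
$path = __DIR__ . '/invoices/' . $name . '.pdf';
if ( can_this_file_be_downloaded() && is_file($path) ) {
header('Content-type: application/pdf');
header('Content-Disposition: attachment; filename="invoice.pdf"');
readfile($path);
} else {
die("None shall pass");
}
?>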
316,907 | I've been setting up SSL for my domain today, and have struck another issue - I was hoping someone could shed some light on.. I keep receiving the following error messages: [error] Init: Unable to read server certificate from file /etc/apache2/domain.com.ssl/domain.com.crt/domain.com.crt
[error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
[error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error I'm running Apache 2.2.16 and Ubuntu 10.10. My .crt file has the Begin and End tags, and has been copied exactly from the confirmation email I received, very frustrating! Cheers! Edit >>
When trying to verify the .crt It doesn't seem to work: >> openssl x509 -noout -text -in domain.com.crt
unable to load certificate
16851:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: TRUSTED CERTIFICATE Also >> >> openssl x509 -text -inform PEM -in domain.com.crt
unable to load certificate
21321:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: TRUSTED CERTIFICATE >> openssl x509 -text -inform DER -in domain.com.crt
unable to load certificate
21325:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1316:
21325:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:380:Type=X509 Edit>>
(Cheers for the help by the way) >> grep '^-----' domain.com.crt
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE----- Just emailed the company providing the Certificate, they responded> I have checked the CSR file that you have provided and I can assure
that this was correctly generated. The error that you are currently
encountering is caused because you are using a wrong command line for
installing the CSR. You will need to modify this domain.com.crt from
your command line with the according name of your domain. currently the crt is set up to mysite.com.crt - I've used domain.com.crt as an example | Is it possible that the lines are ^M-terminated? This is a potential issue when moving files from Windows to UNIX systems. One easy way to check is to use vi in "show me the binary" mode, with vi -b /etc/apache2/domain.ssl/domain.ssl.crt/domain.com.crt . If each line ends with a control-M, like this -----BEGIN CERTIFICATE-----^M
MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM^M
MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg^M
THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x^M you've got a file in Windows line-terminated format, and apache doesn't love those. Your options include moving the file over again, taking more care; or using the dos2unix command to strip those out; you can also remove them inside vi, if you're careful. Edit : thanks to @dave_thompson_085, who points out that this answer no longer applies in 2019. That is, Apache/OpenSSL are now tolerant of ^M-terminated lines, so they don't cause problems. That said, other formatting errors, several different examples of which appear in the comments, can still cause problems; check carefully for these if the certificate has been moved across systems. | {
"source": [
"https://serverfault.com/questions/316907",
"https://serverfault.com",
"https://serverfault.com/users/81985/"
]
} |
317,191 | I've just completely uninstalled nginx 1.0.6 from my server (Ubuntu 11.04) using apt-get remove nginx
rm -rf /etc/nginx/
rm -rf /usr/sbin/nginx
rm /usr/share/man/man1/nginx.1.gz
apt-get remove nginx* Now I want to install it again, however when starting nginx, I get errors such as: Restarting nginx: nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory) Then I placed my own conf file, then I get a new error: Restarting nginx: nginx: [emerg] open() "/etc/nginx/mime.types" failed (2: No such file or directory) in /etc/nginx/nginx.conf:12 Now it seems that apt-get install nginx doesn't install it completely, I cleared the apt-get cache, doesn't seem to help. How can I get a full installation of nginx using apt-get? | Run apt-get remove --purge nginx nginx-full nginx-common first, and then apt-get install nginx and see if it works. | {
"source": [
"https://serverfault.com/questions/317191",
"https://serverfault.com",
"https://serverfault.com/users/54403/"
]
} |
317,281 | A large company is doing a review of our software before they will use the web software built by our start-up company. We are using Linux to host, which is properly secured and hardened. The regulation of the security reviewer is that all computers and servers must have anti-virus program. Obviously, telling them that Linux can't be infected by a virus wont work. Is there a 3rd party security article or resource which could help us convince them to drop the requirement, or will we need to install ClamAV and make it burn some CPU once a day? | Yes, it's certainly a reasonable request. The day you deny that your infrastructure is vulnerable to virus threats is the day you've lost a great deal of credibility. You need to weigh the ramifications (annoyance factor, possible performance issues, maintenance overhead) of running AV with the value of this contract. If one company is listing AV as a requirement, it's likely that others will do the same in the future. If you're already running it, you'll be well-positioned to win their business. | {
"source": [
"https://serverfault.com/questions/317281",
"https://serverfault.com",
"https://serverfault.com/users/77625/"
]
} |
317,282 | I'm trying to set up a local LDAP server with local entry customizations that lays on top of an existing LDAP directory that we provide (which is read-only). From what I'm seeing, it looks like slapo-translucent is exactly what I need. However, I'm seeing a ton of conflicting information on the web on how to configure OpenLDAP for overlays, since there's some new LDIF configuration format, and the old slapd.conf configuration is deprecated/gone. I already have slapd installed and running on my Ubuntu instance. Does anyone have any good pointers on where to go to set up overlays? Thanks!
Cody | Yes, it's certainly a reasonable request. The day you deny that your infrastructure is vulnerable to virus threats is the day you've lost a great deal of credibility. You need to weigh the ramifications (annoyance factor, possible performance issues, maintenance overhead) of running AV with the value of this contract. If one company is listing AV as a requirement, it's likely that others will do the same in the future. If you're already running it, you'll be well-positioned to win their business. | {
"source": [
"https://serverfault.com/questions/317282",
"https://serverfault.com",
"https://serverfault.com/users/49566/"
]
} |
317,393 | I'm experiencing 502 Gateway errors when accessing a PHP file in a directory ( http://example.com/dev/index.php ). The logs simply says this: 2011/09/30 23:47:54 [error] 31160#0: *35 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /dev/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com" I've never experienced this before. What is the solution for this type of 502 Gateway error? This is the nginx.conf : user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
} | It sounds like you haven't started and configured the backend for Nginx. Start php-fpm and add the following to nginx.conf , in the http context: server {
listen 127.0.0.1;
server_name localhost;
error_log /var/log/nginx/localhost.error_log info;
root /var/www/localhost/htdocs;
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
fastcgi_intercept_errors on;
error_page 404 /error/404.php;
}
} | {
"source": [
"https://serverfault.com/questions/317393",
"https://serverfault.com",
"https://serverfault.com/users/54403/"
]
} |
317,595 | I'm doing a hardware refresh on a my Colo, I just need to copy my UFW rules from my old server to my new server. I dont seem to be able to get them copy all the active rules from my old server to my new one. How do I copy my active UFW rules between servers? | I found the rules in /etc/ufw/user.rules and for ipv6 you can find the rules in /etc/ufw/user6.rules . If you copy those files between the servers, disable and then re-enable ufw. | {
"source": [
"https://serverfault.com/questions/317595",
"https://serverfault.com",
"https://serverfault.com/users/67918/"
]
} |
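A sketch of the copy itself, assuming root SSH access to the new server (the hostname newserver is a placeholder):
scp /etc/ufw/user.rules /etc/ufw/user6.rules root@newserver:/etc/ufw/
ssh root@newserver 'ufw disable && ufw --force enable' # --force skips the interactive confirmation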
317,603 | I want all mail to go to two SMTP servers, and each server does everything it would do if there was only one SMTP server in the first place. In other words, people get mail in two places and they are out of sync. The scenario is we have Google Apps AND a custom app that uses sendgrid.com to parse incoming email.
The problem being solved is that the app is not 100% reliable, and as such we need all incoming email to go to both places. I have no idea what I should be looking up, as the above goes against all common sense and standard IT practices.... | I found the rules in /etc/ufw/user.rules and for ipv6 you can find the rules in /etc/ufw/user6.rules . If you copy those files between the servers, disable and then re-enable ufw. | {
"source": [
"https://serverfault.com/questions/317603",
"https://serverfault.com",
"https://serverfault.com/users/96618/"
]
} |
317,876 | Currently I have a number or domains that are set up as Email Spam traps. So if I get mails on that domains I can be certain that it is ~100% Spam. I'm using this information to temporarily defer message delivery from spamming IPs on my real Email domains. I can also use the Spam mails to improve Bayesian filtering and identifying brand new viruses before they hit my real inboxes. This procedure is only effective when I get many Spams on the Spam traps. So the question is how can I generate more Email traffic on the Spam trap domains? I'm not going to register Spam traps at dubious newsletter senders as this would increase the false negative rate. And it would also need too much manual work to register hundreds of addresses. Trying to publish the Spam trap addresses on Websites also failed. I have millions of addresses published and they got harvested but not used for spamming. It takes weeks and months until you get a noticeable amount of Spam on these addresses. I'm not going to publish these Spam traps on forums and guestbooks as this would mean fighting Spam by spamming the web. What I'm now looking for are ways how I can "accidentally" reveal hundreds and thousands of Email addresses so that Spammers pick them up and use them in their campaigns. But if someone can give me advice which other methods are good to attract Spammers I will appreciate this. Anwering Miles' suggestions: Mark's only points out how to set up good sites for harvesting and what to do with the fetched Spam. But as I said I already have these pages which are not harvested enough Phil's experiment is too old. His approach was appropriate until 2004 and in a way until 2006. But then Spammers changed their methods drastically. Using external services as Craigslist or guestbooks counts as spamming in my opinion and so is not a valid option. This is poisoning of half-legitime newsletters and increases the false negative rate. I already have two servers that are pretending to be open proxies. But as they are not a real open proxy I can see that spammers do testing attempts. These test mails are not returned to them and so they see that it is only a fake open relay. So they avoid these servers for their tasks. Twitter gets only be crawled for tweets with special keywords. These accounts are then followed and used for twitter spamming. But not for email spamming. | You could setup a fake company web sites and "accidentally" publish a dump file called "users.sql" with names and email addresses (something like "staff.csv" might actually be more effective). Once it gets it indexed by Google you'd expect some spammer to pick it up. If you're feeling a bit bolder you could dig into the underbelly of the email marketing underground yourself and offer to sell a database dump you stole from a server you compromised.... (since patched of course). Just make sure you route through tor or a public vpn provider when doing this! Or do a Lulzsec-style release on pastebin, not sure how you'd "promote" it so it got picked up by scripts though, probably using keywords like hacked database, email address etc would help. | {
"source": [
"https://serverfault.com/questions/317876",
"https://serverfault.com",
"https://serverfault.com/users/75291/"
]
} |
317,877 | I would like to get the number from rating as output from this # nc localhost 9571
language:
language:en_ZA.UTF-8
language:en_ZW.UTF-8
session-with-name:Ubuntu Classic (No effects):gnome-session --session=2d-gnome
session-with-name:Ubuntu (Safe Mode):gnome-session -f --session=2d-gnome
session-with-name:Ubuntu Classic:gnome-session --session=classic-gnome
xsession:/etc/X11/Xsession
rating:94 I can do it like this # nc localhost 9571 | grep rating | cut -d: -f2
94 but could awk be used instead for a simpler solution? | $ nc localhost 9571 | awk -F: '/rating/ { print $2 }' | {
"source": [
"https://serverfault.com/questions/317877",
"https://serverfault.com",
"https://serverfault.com/users/34187/"
]
} |
318,091 | I am migrating my server from the USA to the UK from one data center to another. My host said I should be able to achieve 11 megabytes per second. The operating system is Windows Server 2008 at both ends. My average file size is around 100 MB and the data is split across five 2 TB drives. What would be the recommended way to transfer these files? FTP SMB Rsync / Robocopy Other? I'm not too bothered about security as these are public files anyway, but I just want a solution that can push the full 11 MB/s transfer rate to minimize the total transfer time. | Ship hard drives across the ocean instead. At 11 Mbps with full utilization, you're looking at just shy of 90 days to transfer 10 TB. 11 Mbps = 1.375 MBps = 116.015 GB/day . 10240 GB / 116.015 GB/day = ~88.3 days . | {
"source": [
"https://serverfault.com/questions/318091",
"https://serverfault.com",
"https://serverfault.com/users/35537/"
]
} |
318,101 | I'm running CentOS 5.6 on my server with Apache. I need to create a "staging" site so that client can view the development in progress. I understand that this is done via Apache virtual host. Where is this file located on the server? Since we have 1 IP address on the test server, and we don't want to change DNS for the domain name just yet, how do I setup the virtual host to resolve the http://my.ip.addy to our test server's /var/www/html/public directory? Thanks much for your guidance. | Ship hard drives across the ocean instead. At 11 Mbps with full utilization, you're looking at just shy of 90 days to transfer 10 TB. 11 Mbps = 1.375 MBps = 116.015 GB/day . 10240 GB / 116.015 GB/day = ~88.3 days . | {
"source": [
"https://serverfault.com/questions/318101",
"https://serverfault.com",
"https://serverfault.com/users/47242/"
]
} |
318,474 | Possible Duplicate: Connect through SSH and type in password automatically, without using a public key I have a bash script that makes dump of DB then copies file from one server to another but it always asks for password before connection. scp file.tar.gz [email protected]:/backup Is there a way to pass password directly into script ? | Use the tool sshpass sshpass -p 'password' scp file.tar.gz [email protected]:/backup | {
"source": [
"https://serverfault.com/questions/318474",
"https://serverfault.com",
"https://serverfault.com/users/83047/"
]
} |
318,476 | I have a Windows 2008 Server with Hyper-V installed, and a couple of VMs. All was working properly until I tried upgrading BIOS. All of a sudden I get permission denied when attempting to start any VM, saying 'general access denied error' on the vhd file. I have attempted the fix at: http://support.microsoft.com/kb/2249906 and http://techblog.mirabito.net.au/?p=275 - both with the same result. Also SYSTEM, and NETWORK SERVICE accounts have full access. How can I fix this issue? *Edit: Even creating a new virtual machine + disk is giving this error, but I can inspect an existing disk, and even expand it without problems. Also cdrom device is giving the same error. If I remove all harddrives and cdrom devices from settings, I may power on the VM without any 'General access denied error'. *Edit2: I attempted to completely remove Hyper-V and reinnstall it again - same result. | Use the tool sshpass sshpass -p 'password' scp file.tar.gz [email protected]:/backup | {
"source": [
"https://serverfault.com/questions/318476",
"https://serverfault.com",
"https://serverfault.com/users/74300/"
]
} |
318,909 | How can I passively monitor the packet loss on TCP connections to/from my machine? Basically, I'd like a tool that sits in the background and watches TCP ack/nak/re-transmits to generate a report on which peer IP addresses "seem" to be experiencing heavy loss. Most questions like this that I find of SF suggest using tools like iperf. But, I need to monitor connections to/from a real application on my machine. Is this data just sitting there in the Linux TCP stack? | For a general sense of the scale of your problem netstat -s will track your total number of retransmissions. # netstat -s | grep retransmitted
368644 segments retransmitted You can aso grep for segments to get a more detailed view: # netstat -s | grep segments
149840 segments received
150373 segments sent out
161 segments retransmitted
13 bad segments received For a deeper dive, you'll probably want to fire up Wireshark. In Wireshark set your filter to tcp.analysis.retransmission to see retransmissions by flow. That's the best option I can come up with. Other dead ends explored: netfilter/conntrack tools don't seem to keep retransmits stracing netstat -s showed that it is just printing /proc/net/netstat column 9 in /proc/net/tcp looked promising, but it unfortunately appears to be unused. | {
"source": [
"https://serverfault.com/questions/318909",
"https://serverfault.com",
"https://serverfault.com/users/61563/"
]
} |
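On newer systems, iproute2's ss can also show per-connection retransmit counters, which gets closer to the per-peer view the question asks for (assuming a reasonably recent iproute2):
ss -ti # per-connection TCP details; look for the retrans counters next to each peer address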
318,960 | I need to make some small modifications to incoming traffic from a known tcp host:port before the process handling the connection gets the stream. For example, let 192.168.1.88 be a remote host which runs a web server. I need that, when a process on my local host receives data from 192.168.1.88:80 (e.g. the browser), the data is first changed, replacing text-A with text-B , like this: 127.0.0.1:... connects to 192.168.1.88:80 127.0.0.1:... sends to 192.168.1.88:80: GET / 192.168.1.88:80 sends to 127.0.0.1:...: HTTP/1.0 200 OK
Content-Type: text/plain
Some text-A, some other text That data is somewhat intercepted by the system and passed to a program whose output is: HTTP/1.0 200 OK
Content-Type: text/plain
Some text-B, some other text the system gives the so changed data to the process handling 127.0.0.1:..., like if it comes from 192.168.1.88:80. Assuming I have a stream-based way to make this changes (using sed for instance), what is the easiest way to pre-process the incoming tcp stream? I guess this would involve iptables , but I'm not very good at it. Note that the application should feel to deal with the original host, so setting up a proxy is not likely a solution. | Use netsed and iptables proxying. iptables -t nat -D PREROUTING -s yourhost -d desthost -p tcp --dport 80 -j REDIRECT --to 10101 Then run: netsed tcp 10101 desthost 80 s/text-A/text-B NetSED is a small and handy utility designed to alter, in real time, the contents of packets forwarded through your network. It is really useful for network packet alteration, forging, or manipulation. NetSED supports: black-box protocol auditing - whenever there are two or more proprietary boxes communicating using some undocumented protocol. By enforcing changes in ongoing transmissions, you will be able to test if the examined application can be claimed secure. fuzz generating experiments, integrity tests - whenever you do stability tests of an application to see how it cares for data integrity; other common use-cases: deceptive transfers, content filtering, protocol conversion - whatever best fits your task at hand. | {
"source": [
"https://serverfault.com/questions/318960",
"https://serverfault.com",
"https://serverfault.com/users/93647/"
]
} |
319,134 | I've found a description of hard links and junctions in Windows, however I'd like to know ,from the Windows UI or command prompt, how I can view the hard links of a particular file or folder? | The fsutil utility included in Windows XP and higher. Example: fsutil.exe hardlink list C:\Windows\System32\notepad.exe Sample results (from Windows 7): \Windows\System32\notepad.exe
\Windows\notepad.exe
\Windows\winsxs\amd64_microsoft-windows-notepadwin_31bf3856ad364e35_6.1.7600.16385_none_9ebebe8614be1470\notepad.exe
\Windows\winsxs\amd64_microsoft-windows-notepad_31bf3856ad364e35_6.1.7600.16385_none_cb0f7f2289b0c21a\notepad.exe | {
"source": [
"https://serverfault.com/questions/319134",
"https://serverfault.com",
"https://serverfault.com/users/21462/"
]
} |
319,135 | I create a backup on remote server1 and transfer it to remote server2.
I want to do this from my local computer.
Many FTP programs, like FileZilla, transfer files from local to remote or from remote to local.
I want to transfer files from one remote server to another remote server, initiated from my local machine. Do you know of any solution or program? | The fsutil utility included in Windows XP and higher. Example: fsutil.exe hardlink list C:\Windows\System32\notepad.exe Sample results (from Windows 7): \Windows\System32\notepad.exe
\Windows\notepad.exe
\Windows\winsxs\amd64_microsoft-windows-notepadwin_31bf3856ad364e35_6.1.7600.16385_none_9ebebe8614be1470\notepad.exe
\Windows\winsxs\amd64_microsoft-windows-notepad_31bf3856ad364e35_6.1.7600.16385_none_cb0f7f2289b0c21a\notepad.exe | {
"source": [
"https://serverfault.com/questions/319135",
"https://serverfault.com",
"https://serverfault.com/users/97101/"
]
} |
319,362 | Is there a way to tell the bash find command to output what it is doing (verbose mode)? For example, for the command:
... | You could concoct something with -printf , but the easiest is just to tack on -print on the end. This will show what was successfully deleted. | {
"source": [
"https://serverfault.com/questions/319362",
"https://serverfault.com",
"https://serverfault.com/users/93816/"
]
} |
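Applied to the command in the question, that looks like the following; because -print comes after -exec, a path is only echoed when the rm succeeded:
find /media/1Tb/videos -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \; -print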
319,725 | How do I restart a single website in IIS7+ using commandline only? Same functionality as the circled menu item in the image - but from the commandline. Iisreset does not have any options to deal with individual sites, and I found some ancient references to Iisweb.vbs, which seems to be outdated. | What you are looking for is the appcmd command. Take a look at its TechNet manual . To list your sites out : %windir%\system32\inetsrv\appcmd list site To restart your site, stop it and then start it : appcmd start site /site.name:string or appcmd stop site /site.name:string | {
"source": [
"https://serverfault.com/questions/319725",
"https://serverfault.com",
"https://serverfault.com/users/1194/"
]
} |
319,737 | I want to be able to restart services from a PHP script running under the www-user account. What's the preferred way to perform these actions? I reckon I could create a file with queued commands, read by cron, but the solution itches. What I'm thinking of is a tiny service, running under root, allowing predefined "methods" so arbitrary root actions cannot be executed. Any tool out there for this? | You could reinvent the wheel, but honestly, I use passwordless sudo for this. For example, my monitoring system needs to be able to run a command to check the hardware RAID. This requires root privilege, but I don't want to run the whole monitoring system as root, so instead I have in sudoers a line that says nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/check_md_raid and then run the command sudo /usr/lib/nagios/plugins/check_md_raid as the monitoring user, when I need to check the RAID. You could have a sudoers line that said www-user ALL=(root) NOPASSWD: /etc/rc.d/init.d/myservice then have php execute sudo /etc/rc.d/init.d/myservice restart . | {
"source": [
"https://serverfault.com/questions/319737",
"https://serverfault.com",
"https://serverfault.com/users/65297/"
]
} |
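On the PHP side, the call the answer refers to could look like this minimal sketch (the init script path matches the sudoers example above; the web user must have that NOPASSWD entry):
<?php
$output = shell_exec('sudo /etc/rc.d/init.d/myservice restart 2>&1');
echo $output;
?>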
319,738 | I'm trying to redirect a url http://domain.com/?p=106 to http://domain.com/?p=110 My .htaccess file looks like this: RewriteEngine On
RewriteCond %{QUERY_STRING} ^p=106
RewriteRule / http://domain.com/\?p=110 [L,R=301] But I can't seem to get it to work. | You could reinvent the wheel, but honestly, I use passwordless sudo for this. For example, my monitoring system needs to be able to run a command to check the hardware RAID. This requires root privilege, but I don't want to run the whole monitoring system as root, so instead I have in sudoers a line that says nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/check_md_raid and then run the command sudo /usr/lib/nagios/plugins/check_md_raid as the monitoring user, when I need to check the RAID. You could have a sudoers line that said www-user ALL=(root) NOPASSWD: /etc/rc.d/init.d/myservice then have php execute sudo /etc/rc.d/init.d/myservice restart . | {
"source": [
"https://serverfault.com/questions/319738",
"https://serverfault.com",
"https://serverfault.com/users/58008/"
]
} |
319,745 | At Random times my Microsoft AD Server gets totally frozen (but i can see that the cursor moves freely and num lock status is also fine but on the screen i dont see anything other than the background). I dont see any error in the event viewer other than ntfrs error/warning in the File Replication. Not sure why this is happening. The only option left is to turn it down manually using the power button and then restart. Kindly let me know if any additional information required. Kindly help. | You could reinvent the wheel, but honestly, I use passwordless sudo for this. For example, my monitoring system needs to be able to run a command to check the hardware RAID. This requires root privilege, but I don't want to run the whole monitoring system as root, so instead I have in sudoers a line that says nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/check_md_raid and then run the command sudo /usr/lib/nagios/plugins/check_md_raid as the monitoring user, when I need to check the RAID. You could have a sudoers line that said www-user ALL=(root) NOPASSWD: /etc/rc.d/init.d/myservice then have php execute sudo /etc/rc.d/init.d/myservice restart . | {
"source": [
"https://serverfault.com/questions/319745",
"https://serverfault.com",
"https://serverfault.com/users/97276/"
]
} |
320,028 | my puppet.conf on the master [master]
certname = myname.mydomain.com
ca_server = myname.mydomain.com
certdnsnames = puppet;puppet.local;myname.dyndns.org;hivemind.local; From my understanding, with the certdnsnames defined the following should work: puppet agent --server myname.dyndns.org --test but I get the following error: err: Could not retrieve catalog from remote server: hostname was not match with the server certificate How do I avoid this error? How do I correctly define certdnsnames? I have found different documentation about this, but no simple example. If I use "," for separation I cannot sign at all.
I have also seen a syntax like certdnsnames = puppet:puppet.intra.myserver.fr,puppet.myserver.fr:puppet,puppet:puppet,puppet.intra.myserver.fr,puppet.myserver.fr http://projects.puppetlabs.com/issues/5776 but to me it's not clear when to add a "puppet:" and when not. | For the benefit of anyone else who stumbles upon this answer: Due to CVE-2011-3872, Puppet no longer supports the certdnsnames option. From the documentation: The certdnsnames setting is no longer functional, after CVE-2011-3872.
We ignore the value completely. For your own certificate request you
can set dns_alt_names in the configuration and it will apply locally.
There is no configuration option to set DNS alt names, or any other
subjectAltName value, for another nodes certificate. Alternately you
can use the --dns_alt_names command line option to set the labels
added while generating your own CSR. You can generate an SSL certificate for your server using subjectAlternativeName like this: $ puppet cert generate <puppet master's certname> --dns_alt_names=<comma-separated list of DNS names> | {
"source": [
"https://serverfault.com/questions/320028",
"https://serverfault.com",
"https://serverfault.com/users/71452/"
]
} |
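A hedged sketch of how the dns_alt_names approach above usually looks in practice; the hostnames are the ones from the question and purely placeholders, and an already-signed certificate typically has to be cleaned and re-issued before new names take effect:
# puppet.conf on the node whose certificate needs the extra names
[main]
    dns_alt_names = puppet,puppet.local,myname.dyndns.org
# or, on the CA, generate the certificate with the names directly:
puppet cert generate myname.mydomain.com --dns_alt_names=puppet,puppet.local,myname.dyndns.org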
320,614 | How can I forward requests coming in on port 80 to another port on the same linux machine? I used to do this by changing nat.conf , but this machine that I'm using doesn't have NAT. What's the alternative? | You can accomplish the redirection with iptables: iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080 | {
"source": [
"https://serverfault.com/questions/320614",
"https://serverfault.com",
"https://serverfault.com/users/83281/"
]
} |
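To confirm the rules above are active, and to persist them on RHEL/CentOS-style systems (other distributions persist iptables rules differently), something along these lines should work:
iptables -t nat -L PREROUTING -n --line-numbers   # the REDIRECT 80 -> 8080 rule should be listed
iptables -L INPUT -n --line-numbers               # ports 80 and 8080 accepted
service iptables save                             # RHEL/CentOS only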
320,716 | I'm trying to find a reliable way of finding which process on my machine is changing a configuration file ( /etc/hosts to be specific). I know I can use lsof /etc/hosts to find out what processes currently have the file open, but this doesn't help because the process is obviously opening the file, writing to it, and then closing it again. I also looked at lsof 's repeat option (-r), but it seems to only go as fast as once a second, which probably won't ever capture the write in progress. I know of a couple tools for monitoring changes to the filesystem, but in this case I want to know which process is responsible, which means catching it in the act. | You can use auditing to find this. If not already available, install and enable auditing for your distro. set an audit watch on /etc/hosts /sbin/auditctl -w /etc/hosts -p war -k hosts-file
-w watch /etc/hosts
-p war watch for write, attribute change or read events (matching the flags used in the command above)
-k hosts-file is a search key. Wait till the hosts file changes and then use ausearch to see what is logged /sbin/ausearch -f /etc/hosts | more You'll get masses of output e.g.
> msg=audit(1318408447.180:870): item=0 name="/etc/hosts" inode=2211062
> dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00
> obj=system_u:object_r:etc_t:s0 type=CWD msg=audit(1318408447.180:870):
> cwd="/home/iain" type=SYSCALL msg=audit(1318408447.180:870):
> arch=c000003e syscall=2 success=yes exit=0 a0=7fff73641c4f a1=941
> a2=1b6 a3=3e7075310c items=1 **ppid=7259** **pid=7294** auid=1001 uid=0 gid=0
> euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=123
> comm="touch" **exe="/bin/touch"** subj=user_u:system_r:unconfined_t:s0
> key="hosts-file" In this case I used the touch command to change the file's timestamp; its pid was 7294 and its ppid was 7259 (my shell). | {
"source": [
"https://serverfault.com/questions/320716",
"https://serverfault.com",
"https://serverfault.com/users/138387/"
]
} |
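Two follow-ups to the audit recipe above that are easy to forget: a rule added with auditctl does not survive a reboot unless it is also written to the audit rules file, and the watch can be removed once the culprit is found. The rules file path below is the usual RHEL/CentOS location and may differ on other distributions:
/sbin/auditctl -l                                                      # list active rules, confirm the watch is present
echo '-w /etc/hosts -p war -k hosts-file' >> /etc/audit/audit.rules    # make the watch persistent across reboots
/sbin/auditctl -W /etc/hosts -p war -k hosts-file                      # delete the watch when done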
320,890 | I'm configuring a failover server tasked to accept any incoming request and reply with a blank 200 response. The idea is to minimize the reply time and to ensure we don't send any 40x or 50x. I tried using return 200; for the desired locations within Nginx, but my monitoring systems (Pingdom) didn't like the response and considered the server not responding. Is there a better way to do this, of course with minimal overhead on the server? | HTTP status code 204 No Content is meant to say "I've completed the request, but there is no body to return": 10.2.5 204 No Content The server has fulfilled the request but does not need to return an
entity-body, and might want to return updated metainformation. The
response MAY include new or updated metainformation in the form of
entity-headers, which if present SHOULD be associated with the
requested variant. If the client is a user agent, it SHOULD NOT change its document view
from that which caused the request to be sent. This response is
primarily intended to allow input for actions to take place without
causing a change to the user agent's active document view, although
any new or updated metainformation SHOULD be applied to the document
currently in the user agent's active view. The 204 response MUST NOT include a message-body, and thus is always
terminated by the first empty line after the header fields. | {
"source": [
"https://serverfault.com/questions/320890",
"https://serverfault.com",
"https://serverfault.com/users/63770/"
]
} |
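Applied to the nginx setup in the question, the 204 approach is a one-line change from return 200; — a minimal sketch, assuming a catch-all server block:
server {
    listen 80;
    location / {
        return 204;
    }
}
If the monitoring check insists on a 200 with a body, reasonably recent nginx versions can also return a tiny body instead, e.g. return 200 'OK'; inside the location.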
320,896 | This is probably an easy question, but I've been at it for days with no luck. I need soap enabled to run an application on linux CentOS/Apache. I have PHP installed and have loaded soap.so in /usr/lib/php/modules/. I also have edited php.ini to point to the correct extensions directory. After restarting apache though, the recognized modules simply do not change in phpinfo();. I am baffled. Any help is greatly appreciated. | HTTP status code 204 No Content is meant to say "I've completed the request, but there is no body to return": 10.2.5 204 No Content The server has fulfilled the request but does not need to return an
entity-body, and might want to return updated metainformation. The
response MAY include new or updated metainformation in the form of
entity-headers, which if present SHOULD be associated with the
requested variant. If the client is a user agent, it SHOULD NOT change its document view
from that which caused the request to be sent. This response is
primarily intended to allow input for actions to take place without
causing a change to the user agent's active document view, although
any new or updated metainformation SHOULD be applied to the document
currently in the user agent's active view. The 204 response MUST NOT include a message-body, and thus is always
terminated by the first empty line after the header fields. | {
"source": [
"https://serverfault.com/questions/320896",
"https://serverfault.com",
"https://serverfault.com/users/97631/"
]
} |
321,132 | I am trying to make a tunnel between a server and laptop with PuTTY. The problem is, since the laptop has no public IP address, I have to make a reverse connection. ASCII Artwork: SERVER(PORT:6000) ----------> LAPTOP(PORT:7000) However, since the laptop has no public IP address I have to: SERVER(PORT:6000) <---------- LAPTOP(PORT:7000) But all the data will still be transferred from the server to the laptop. | In PuTTY go to Settings -> Connection -> SSH -> Tunnels. You can add port forwards there. For a reverse forward, enter source port and destination, but choose 'Remote' instead of 'Local'. In your case, put 6000 into source port, localhost:7000 in the Destination, and choose Remote. | {
"source": [
"https://serverfault.com/questions/321132",
"https://serverfault.com",
"https://serverfault.com/users/97730/"
]
} |
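For reference, the OpenSSH command-line equivalent of that 'Remote' tunnel in PuTTY (run from the laptop; user and server are placeholders) is:
ssh -N -R 6000:localhost:7000 user@server
This makes port 6000 on the server forward back to port 7000 on the laptop, which is the direction the question asks for.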
321,153 | I'm part of a Windows Server 2008 domain. There's a login script (supposedly configured to execute on login via GPO) that sets permissions for network shares. However, instead of being executed, it's opening in my text editor. I presume that there's been a registry change that affects the default shell ("double click") action for .vbs, and that Windows Server 2008, or possibly this particular GPO, assumes that the default shell action is "execute". Is there a way to fix this locally? Is there a way to fix the GPO to explicitly pass the script to the Windows script host instead? | In PuTTY go to Settings -> Connection -> SSH -> Tunnels. You can add port forwards there. For reverse forward, enter source port, and destination, but choose 'Remote' instead of 'Local'. In your case, put 6000 in to source port, localhost:7000 in the Destination, and choose Remote. | {
"source": [
"https://serverfault.com/questions/321153",
"https://serverfault.com",
"https://serverfault.com/users/65320/"
]
} |
321,155 | I need a virtualization solution with the following properties: guest OSes can receive multicast traffic from the host machine. some services running on the guest OS (eg: port 80) can be port forwarded, so it's visible on the host and other machines. I tried vmware player, it doesn't support multicast at all. I managed to set up port forwarding with Virtualbox, but multicast doesn't work seem to work. | In PuTTY go to Settings -> Connection -> SSH -> Tunnels. You can add port forwards there. For reverse forward, enter source port, and destination, but choose 'Remote' instead of 'Local'. In your case, put 6000 in to source port, localhost:7000 in the Destination, and choose Remote. | {
"source": [
"https://serverfault.com/questions/321155",
"https://serverfault.com",
"https://serverfault.com/users/73642/"
]
} |
321,167 | Trying to ssh into a computer I control, I'm getting the familiar message: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
[...].
Please contact your system administrator.
Add correct host key in /home/sward/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/sward/.ssh/known_hosts:86
RSA host key for [...] has changed and you have requested strict checking.
Host key verification failed. I did indeed change the key. And I read a few dozen postings saying that the way to resolve this problem is by deleting the old key from the known_hosts file. But what I would like is to have ssh accept both the old key and the new key. The language in the error message (" Add correct host key ") suggests that there should be some way to add the correct host key without removing the old one. I have not been able to figure out how to add the new host key without removing the old one. Is this possible, or is the error message just extremely misleading? | get the rsa key of your server, where server_ip is your server's IP address, such as 192.168.2.1 : $ ssh-keyscan -t rsa server_ip Sample response: # server_ip SSH-2.0-OpenSSH_4.3
server_ip ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwH5EXZG... and on the client, copy the entire response line server_ip ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwH5EXZG... , and add this key to the bottom of your ~/.ssh/known_hosts file: server_ip ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqx9m529...(the offending key, and/or the very bottom of the `known_hosts` file)
server_ip ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwH5EXZG... (line you're adding, copied and pasted from above) | {
"source": [
"https://serverfault.com/questions/321167",
"https://serverfault.com",
"https://serverfault.com/users/97740/"
]
} |
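The two steps in the answer above can be collapsed into a single line if you trust the network path at the moment you run it; ssh-keyscan prints its comment line to stderr, so only the key ends up in the file:
ssh-keyscan -t rsa server_ip 2>/dev/null >> ~/.ssh/known_hosts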
321,301 | As the title says, how do I view the contents of a SELinux policy package? The resulting files end with .pp. I'm running on centos 6, but I guess it's the same way on "all" distros. For example cp /usr/share/selinux/targeted/cobbler.pp.bz2 ~
bunzip2 cobbler.pp.bz2
MAGIC_SELINUX_CMD cobbler.pp | A SELinux policy module is built with the following steps: generate a set of policy rules: audit2allow compile: checkmodule build: semodule_package http://wiki.centos.org/HowTos/SELinux Assuming that I have a postgreylocal.te file with the content below: module postgreylocal 1.0;
require {
type postfix_smtpd_t;
type postfix_spool_t;
type initrc_t;
class sock_file write;
class unix_stream_socket connectto;
}
#============= postfix_smtpd_t ==============
allow postfix_smtpd_t initrc_t:unix_stream_socket connectto;
allow postfix_smtpd_t postfix_spool_t:sock_file write; postgreylocal.pp policy module will be created with: # checkmodule -M -m -o postgreylocal.mod postgreylocal.te
# semodule_package -m postgreylocal.mod -o postgreylocal.pp To unpack this policy module, you need a tool which is called semodule_unpackage to extract the .mod file and then use dismod to disassemble the binary module to textual representation. On my Gentoo, the following packages need to be installed: [I] sys-apps/policycoreutils
Available versions: [M]2.0.82 [M](~)2.0.82-r1 [M](~)2.0.85 [M](~)2.1.0 {M}(~)2.1.0-r1
Installed versions: 2.1.0-r1(05:12:27 PM 10/14/2011)
Homepage: http://userspace.selinuxproject.org
Description: SELinux core utilities
[I] sys-apps/checkpolicy
Available versions: [M]2.0.21 [M](~)2.0.23 {M}(~)2.1.0 {debug}
Installed versions: 2.1.0(01:27:53 PM 10/14/2011)(-debug)
Homepage: http://userspace.selinuxproject.org
Description: SELinux policy compiler
[I] sys-libs/libsepol
Available versions: [M]2.0.41!t [M](~)2.0.42!t {M}(~)2.1.0!t
Installed versions: 2.1.0!t(01:25:43 PM 10/14/2011)
Homepage: http://userspace.selinuxproject.org
Description: SELinux binary policy representation library Firstly, extract the module from .pp file: # semodule_unpackage postgreylocal.pp postgreylocal.mod and secondly, disassemble with dismod : # cd checkpolicy-2.1.0/test/
# ls
dismod.c dispol.c Makefile
# make
cc -g -Wall -O2 -pipe -I/usr/include -c -o dispol.o dispol.c
dispol.c: In function ‘main’:
dispol.c:438:8: warning: ignoring return value of ‘fgets’, declared with attribute warn_unused_result
dispol.c:465:9: warning: ignoring return value of ‘fgets’, declared with attribute warn_unused_result
dispol.c:476:9: warning: ignoring return value of ‘fgets’, declared with attribute warn_unused_result
dispol.c:500:9: warning: ignoring return value of ‘fgets’, declared with attribute warn_unused_result
cc dispol.o -lfl -lsepol -lselinux /usr/lib/libsepol.a -L/usr/lib -o dispol
cc -g -Wall -O2 -pipe -I/usr/include -c -o dismod.o dismod.c
dismod.c: In function ‘main’:
dismod.c:913:8: warning: ignoring return value of ‘fgets’, declared with attribute warn_unused_result
dismod.c:982:9: warning: ignoring return value of ‘fgets’, declared with attribute warn_unused_result
dismod.c: In function ‘link_module’:
dismod.c:787:7: warning: ignoring return value of ‘fgets’, declared with attribute warn_unused_result
cc dismod.o -lfl -lsepol -lselinux /usr/lib/libsepol.a -L/usr/lib -o dismod
# ls
dismod dismod.c dismod.o dispol dispol.c dispol.o Makefile ./dismod postgreylocal.pp
Reading policy...
libsepol.policydb_index_others: security: 0 users, 1 roles, 3 types, 0 bools
libsepol.policydb_index_others: security: 0 sens, 0 cats
libsepol.policydb_index_others: security: 2 classes, 0 rules, 0 cond rules
libsepol.policydb_index_others: security: 0 users, 1 roles, 3 types, 0 bools
libsepol.policydb_index_others: security: 0 sens, 0 cats
libsepol.policydb_index_others: security: 2 classes, 0 rules, 0 cond rules
Binary policy module file loaded.
Module name: postgreylocal
Module version: 1.0
Select a command:
1) display unconditional AVTAB
2) display conditional AVTAB
3) display users
4) display bools
5) display roles
6) display types, attributes, and aliases
7) display role transitions
8) display role allows
9) Display policycon
0) Display initial SIDs
a) Display avrule requirements
b) Display avrule declarations
c) Display policy capabilities
l) Link in a module
u) Display the unknown handling setting
F) Display filename_trans rules
f) set output file
m) display menu
q) quit Command ('m' for menu): 1
unconditional avtab:
--- begin avrule block ---
decl 1:
allow [postfix_smtpd_t] [initrc_t] : [unix_stream_socket] { connectto };
allow [postfix_smtpd_t] [postfix_spool_t] : [sock_file] { write };
Command ('m' for menu): a
avrule block requirements:
--- begin avrule block ---
decl 1:
commons: <empty>
classes: sock_file{ write } unix_stream_socket{ connectto }
roles : <empty>
types : postfix_smtpd_t postfix_spool_t initrc_t
users : <empty>
bools : <empty>
levels : <empty>
cats : <empty>
Command ('m' for menu): | {
"source": [
"https://serverfault.com/questions/321301",
"https://serverfault.com",
"https://serverfault.com/users/64308/"
]
} |
321,321 | I need to upgrade cURL to the latest version on Centos 2.6.18-164.15.1.el5.centos.plusxen #1 SMP Wed Mar 17 20:32:20 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux I'm unable to find any suitable packages to do so via yum or rpm . Is there a standard way to do this upgrade without installing from source? | This is an old question, but it is still one the first results in google search, so I'd like post the solution that solved my problem. Create a new file /etc/yum.repos.d/city-fan.repo Paste the following contents: [CityFan]
name=City Fan Repo
baseurl=http://www.city-fan.org/ftp/contrib/yum-repo/rhel$releasever/$basearch/
enabled=1
gpgcheck=0 Type this into the terminal: yum clean all
yum install curl And it's done! Observe that for other RHEL/CentOS versions, all you have to do is specify the appropriate CityFan URL. | {
"source": [
"https://serverfault.com/questions/321321",
"https://serverfault.com",
"https://serverfault.com/users/44793/"
]
} |
321,386 | Since upgrading to Mac OS X Lion (from Snow Leopard), I have noticed that resolving to a virtual host is very slow (between about 3 seconds). I have found a number of tips (e.g., not using the .local TLD) that might resolve this, but they do not apply to my setup. My setup is quite simple:
- Apache 2 (shipped with Lion)
- enabled PHP
- added a few virtual hosts
- installed Mail and SMTP Pear packages Apache's hosts file looks like this: 127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
127.0.0.1 tbi.dev
127.0.0.1 www.tbi.dev
127.0.0.1 test1.tbi.dev
127.0.0.1 test2.tbi.dev
127.0.0.1 psa.dev
127.0.0.1 snd.dev And Apache's virtual hosts file looks like this: <VirtualHost *:80>
DocumentRoot "/Users/Bart/Sites/tbi"
ServerName tbi.dev
</VirtualHost>
<VirtualHost *:80>
DocumentRoot "/Users/Bart/Sites/tbi"
ServerName tbi.dev
ServerAlias *.tbi.dev www.tbi.dev
</VirtualHost>
<VirtualHost *:80>
DocumentRoot "/Users/Bart/Sites/psa"
ServerName psa.dev
</VirtualHost>
<VirtualHost *:80>
DocumentRoot "/Users/Bart/Sites/sandbox"
ServerName snd.dev
</VirtualHost> The setup is basically identical to my setup on Snow Leopard, but Apache's performance for resolving virtual hosts is significantly different. I run Mac OS X Lion 10.7.2, but the issue was already present when running 10.7.1. This might seem like a small issue, but when you're accessing a virtual hosts a few hundreds of times a day then this adds up to a significant waste of time as you can imagine. | Long DNS timeouts are almost always a sign of IPv6 issues. Do you need IPv6 connectivity to apache ? If not, I suggest changing <VirtualHost *:80> into <VirtualHost 0.0.0.0:80> Or disable IPv6 connectivity altogether. | {
"source": [
"https://serverfault.com/questions/321386",
"https://serverfault.com",
"https://serverfault.com/users/97808/"
]
} |
321,388 | I have the company's wordpress site under IIS 7.5 (Windows 2008R2 SP1) but I still have file access problems: Under .NET Websites it's usual the IIS_IUSRS the user used by IIS to perform tasks on behalf of the website user, but is it when using PHP as well? As you can see I have set the correct permissions to the wp-content folder and all of child folders (I did verify this). What am I missing ? | Long DNS timeouts are almost always a sign of IPv6 issues. Do you need IPv6 connectivity to apache ? If not, I suggest changing <VirtualHost *:80> into <VirtualHost 0.0.0.0:80> Or disable IPv6 connectivity altogether. | {
"source": [
"https://serverfault.com/questions/321388",
"https://serverfault.com",
"https://serverfault.com/users/711/"
]
} |
321,520 | So, I was performing an Ubuntu Server upgrade from 11.04 to 11.10. I forgot about it in the background, and my SSH client timed out and disconnected (putty on Windows, go figure). The last thing on my terminal was a question about keeping an old config, etc. When I logged back in to the server, aptitude files were locked by another process, so I assume this upgrade process is sitting there waiting for my input. How can I interact with this process and continue the upgrade? If possible. Thanks | The process actually runs in a screen or byobu session as the root user.
Reconnect to the server with PuTTY on port 22 or the failsafe port 1022. sudo su - or su - into your root account, resume the session with byobu or screen -r, and pick up where you left off. Oh, and yes, I found out the hard way ;-)
"source": [
"https://serverfault.com/questions/321520",
"https://serverfault.com",
"https://serverfault.com/users/4158/"
]
} |
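The reattach sequence from the answer above, spelled out; user/server are placeholders and session names vary, so let screen -ls show what is actually there:
ssh -p 1022 user@server   # the release upgrade starts a fallback sshd on 1022; plain port 22 also works if unaffected
sudo su -
screen -ls                # list detached sessions left by the upgrade
screen -r                 # reattach (or run byobu if that is what the upgrade used)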
321,534 | I have no clue on how this happens. The distro is Scientific Linux 6.1 and everything is set up to perform authentication via public key. Yet, when sshd is running as a daemon (service sshd start), it doesn't accept public keys. (To obtain this piece of log, I've changed the sshd script to add the -ddd option) debug1: trying public key file /root/.ssh/authorized_keys
debug1: restore_uid: 0/0
debug1: temporarily_use_uid: 0/0 (e=0/0)
debug1: trying public key file /root/.ssh/authorized_keys2
debug1: restore_uid: 0/0
Failed publickey for root from xxx.xxx.xxx.xxx port xxxxx ssh2
debug3: mm_answer_keyallowed: key 0x7f266e1a8840 is not allowed
debug3: mm_request_send entering: type 22
debug3: mm_request_receive entering
debug2: userauth_pubkey: authenticated 0 pkalg ssh-rsa
debug3: Wrote 64 bytes for a total of 1853
debug1: userauth-request for user root service ssh-connection method publickey
debug1: attempt 2 failures 1 If sshd is run in debug mode /usr/sbin/sshd -ddd , authentication works like a charm: debug1: trying public key file /root/.ssh/authorized_keys
debug1: fd 4 clearing O_NONBLOCK
debug1: matching key found: file /root/.ssh/authorized_keys, line 1
Found matching RSA key: d7:3a:08:39:f7:28:dc:ea:f3:71:7c:23:92:02:02:d8
debug1: restore_uid: 0/0
debug3: mm_answer_keyallowed: key 0x7f85527ef230 is allowed
debug3: mm_request_send entering: type 22
debug3: mm_request_receive entering
debug3: Wrote 320 bytes for a total of 2109
debug2: userauth_pubkey: authenticated 0 pkalg ssh-rsa
Postponed publickey for root from xxx.xxx.xxx.xxx port xxxxx ssh2
debug1: userauth-request for user root service ssh-connection method publickey
debug1: attempt 2 failures 0 Any ideas?? Has anyone seen anything like this? Notes: File permissions have been double checked: # ll -d .ssh
drwx------. 2 root root 4096 Oct 14 10:05 .ssh
# ll .ssh
total 16
-rw-------. 1 root root 786 Oct 14 09:35 authorized_keys
-rw-------. 1 root root 1675 Oct 13 08:24 id_rsa
-rw-r--r--. 1 root root 393 Oct 13 08:24 id_rsa.pub
-rw-r--r--. 1 root root 448 Oct 13 12:51 known_hosts I was asked if sshd can access root's files in "daemon mode". The closest answer I get to this question is: # netstat -ntap | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 19847/sshd
# ps -ef | grep 19847
root 19847 1 0 09:58 ? 00:00:00 /usr/sbin/sshd If sshd is running as root, I don't know how it's not possible to access its own files. Could SELinux be the cause? | Yes, SELinux is likely the cause. The .ssh dir is probably mislabeled. Look at /var/log/audit/audit.log . It should be labeled ssh_home_t . Check with ls -laZ . Run restorecon -r -vv /root/.ssh if need be. | {
"source": [
"https://serverfault.com/questions/321534",
"https://serverfault.com",
"https://serverfault.com/users/97859/"
]
} |
321,805 | I'm a pretty big noob when it comes to setting up MySQL for performance. And honestly I'm not worried about the fine tuning to squeeze every last bit of performance out of MySQL, but I do know that the most important thing to do that provides some of the best results is setting up caches/buffers correctly. I've tried to keep things simple by using only InnoDB as a storage engine. And I do have a dedicated server for MySQL. It has 8gb of RAM, how should I be allocating that to maximize performance? I'd like to be able to fit my entire database into memory for the best performance. The database is about 5gb. Is this possible? How much memory should I allocate to the query cache? How much to the InnoDB buffer pool? How much for the rest of the computer (i.e. non MySQL related processes)? Etc. Since I'm not using MyISAM I don't really need to put a lot of memory in the key cache correct? | This is hard without knowing much about the database itself. There are a few tools you should be aware of; mysqltuner.pl Github MySQL Tuning Primer Github About storing the entire database in memory; Any queries that are doing changes on the database will remain open until the write is performed on the disk. The only thing that can avoid the disk to be a bottleneck, is a disk-controller with a write cache. I would start with the following changes from the defaults: key_buffer_size = 128M
thread_stack = 128K
thread_cache_size = 8
table_cache = 8192
max_heap_table_size = 256M
query_cache_limit = 4M
query_cache_size = 512M
innodb_buffer_pool_size = 4G
# This is crucial to avoid checkpointing all the time:
innodb_log_file_size = 512M
# If you have control on who consumes the DB, and you don't use hostnames when you've set up permissions - this can help as well.
skip_name_resolve Then I'd see how things go, and try different things based on (among other things) the output of the tools mentioned above. I would also make sure to graph trends with a monitoring tool, such as Munin or Cacti , to see what kind of workload I'm actually dealing with. Personally, I have great experience with the MySQL-plugins provided with Munin. | {
"source": [
"https://serverfault.com/questions/321805",
"https://serverfault.com",
"https://serverfault.com/users/97941/"
]
} |
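A rough sketch of running the two advisory tools mentioned at the top of the answer against a local instance; download them from the projects linked above, and expect both to ask for MySQL credentials if none are configured in ~/.my.cnf:
perl mysqltuner.pl
sh tuning-primer.sh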
321,913 | I remember using a command line tool to flash a NIC's link light to identify it. I can't remember for the life of me what it was. | Yeah it was ethtool I was looking for but specifically this will flash the link light for two minutes: ethtool -p eth0 120 | {
"source": [
"https://serverfault.com/questions/321913",
"https://serverfault.com",
"https://serverfault.com/users/96084/"
]
} |
322,500 | Is it possible to use rsync to copy files in one direction only? For example, suppose we have: left/a.txt right/a.txt where the files are initially identical. If one then modifies right/a.txt , then: rsync -avv left/ right/ will copy right/a.txt onto left/a.txt . Is it possible to restrict rsync to only copying from left/ to right/ (i.e. prevent it from copying from right/ to left/ )? | You misunderstand rsync. This command: rsync -avv left/ right/ will not sync anything in right to left. It will, as @atbg says, only sync left to right. Rsync is not a bi-directional syncer. It syncs the dest with the source. Man page for reference: http://linux.die.net/man/1/rsync | {
"source": [
"https://serverfault.com/questions/322500",
"https://serverfault.com",
"https://serverfault.com/users/98196/"
]
} |
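Building on the answer above: left-to-right is exactly what that command already does, and a dry run is a handy way to preview which files on the right would be overwritten before committing:
rsync -avvn left/ right/   # -n / --dry-run: show what would be transferred, change nothing
rsync -avv left/ right/    # then run it for real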
322,518 | With rpm -qV openssh-server I will get a list of files that have changed compared to default. ~$ rpm -qV openssh-server
S.?....T. c /etc/ssh/sshd_config
~$ Can dpkg on Ubuntu do the same? | I don't think so; in Ubuntu md5 checksums are only stored for certain files. For any given package the list of files that have checksums can be found in /var/lib/dpkg/info/<package>.md5sums, e.g. /var/lib/dpkg/info/openssh-server.md5sums
42da5b1c2de18ec8ef4f20079a601f28 usr/sbin/sshd
8c5592e0d522fa0f8f55f3c104479ef5 usr/share/lintian/overrides/openssh-server
cfcb67f58bcd1edcaa5a770863e49304 usr/share/man/man5/sshd_config.5.gz
71a51cbb514da3044b277e05a3ceaf0b usr/share/man/man8/sshd.8.gz
222d4da61fcb3c65b4e6e83944752f20 usr/share/man/man8/sftp-server.8.gz You can use the debsums command (sudo apt-get install debsums) to check the files that have md5 signatures debsums openssh-server
/usr/lib/openssh/sftp-server OK
/usr/sbin/sshd OK
/usr/share/lintian/overrides/openssh-server OK
/usr/share/man/man5/sshd_config.5.gz OK
/usr/share/man/man8/sshd.8.gz OK
/usr/share/man/man8/sftp-server.8.gz OK | {
"source": [
"https://serverfault.com/questions/322518",
"https://serverfault.com",
"https://serverfault.com/users/34187/"
]
} |
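Two debsums switches that pair nicely with the rpm -qV comparison in the question (present in current debsums; older releases may differ, see the man page):
debsums -c openssh-server   # report only files whose checksum differs
debsums -ce                 # changed configuration files only, across all installed packages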
322,870 | I've just tried logging into a Fedora (release 13 Goddard) server using SSH (PuTTY, Windows). For some reason the Enter after typing my username didn't go through and I typed in my password and hit Enter again. I only realized my mistake when the server greeted me with a happy myusername MYPASSWORD @server.example.com's password: I broke off the connection at this point and changed my password on that machine (through a separate SSH connection). ... now my question is: Is such a failed login stored in plain text in any logfile? In other words, have I just forced my (now-outdated) password in front of the eyes of the remote admin the next time he scans his logs? Update Thanks for all the comments about the implied question "what to do to prevent this in the future". For quick, one-off connections I'll use this PuTTY feature now: to replace the where-was-it-again "auto-login username" option I'll also start using ssh keys more often, as explained in the PuTTY docs . | In short: yes. # ssh 192.168.1.1 -l "myuser mypassword"
^C
# egrep "mypassword" /var/log/auth.log
Oct 19 14:33:58 host sshd[19787]: Invalid user myuser mypassword from 192.168.111.78
Oct 19 14:33:58 host sshd[19787]: Failed none for invalid user myuser mypassword from 192.168.111.78 port 53030 ssh2 | {
"source": [
"https://serverfault.com/questions/322870",
"https://serverfault.com",
"https://serverfault.com/users/75214/"
]
} |
322,949 | I have a quick question regarding SPF records: Do they need to be present for all subdomains? Lets say that I have a TXT record with SPF info for domain.com Let's also say that I have a seperate email domain for subdomain.domain.com Will the SPF policy/info for domain.com also apply to the subdomain? Or do I need to add a separate TXT record for that too? | You need to have separate SPF records for each subdomain you wish to send mail from. The following was originally posted on openspf.org, which used to be a great resource for this kind of thing. Latest link http://www.open-spf.org/FAQ/The_demon_question/ The Demon Question: What about subdomains? If I get mail from
pielovers.demon.co.uk, and there's no SPF data for pielovers, should I
go back one level and test SPF for demon.co.uk? No. Each subdomain at
Demon is a different customer, and each customer might have their own
policy. It wouldn't make sense for Demon's policy to apply to all its
customers by default; if Demon wants to do that, it can set up SPF
records for each subdomain. So the advice to SPF publishers is this: you should add an SPF record
for each subdomain or hostname that has an A or MX record. Sites with wildcard A or MX records should also have a wildcard SPF
record, of the form: * IN TXT "v=spf1 -all" This makes sense - a subdomain may very well be in a different geographical location and have a very different SPF definition. The 'include:' directive for SPF may be used to provide all subdomains with the same entries. For example, on the SPF record for subdomain mailfrom.example.com enter 'include:example.com'. In this fashion whenever you update the definition for example.com your subdomains will automatically pick up the updated values. | {
"source": [
"https://serverfault.com/questions/322949",
"https://serverfault.com",
"https://serverfault.com/users/21875/"
]
} |
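Translated into zone-file records, the advice above might look like the sketch below; example.com and the mailfrom subdomain are placeholders, and the include: keeps the subdomain policy in sync with the parent:
example.com.             IN TXT "v=spf1 mx a -all"
mailfrom.example.com.    IN TXT "v=spf1 include:example.com -all"
*.example.com.           IN TXT "v=spf1 -all"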
322,997 | A few of us at my company have root access on production servers. We are looking for a good way to make it exceedingly clear when we have ssh'd in. A few ideas we have had are: Bright red prompt Answer a riddle before getting a shell Type a random word before getting a shell What are some techniques you guys use to differentiate production systems? | The red prompt is a good idea, which I also use. Another trick is to put a large ASCII-art warning in the /etc/motd file. Having something like this greet you when you log in should get your attention: _______ _ _ _____ _____ _____ _____
|__ __| | | |_ _|/ ____| |_ _|/ ____| /\
| | | |__| | | | | (___ | | | (___ / \
| | | __ | | | \___ \ | | \___ \ / /\ \
| | | | | |_| |_ ____) | _| |_ ____) | / ____ \
|_| |_| |_|_____|_____/ |_____|_____/ /_/ \_\
_____ _____ ____ _____ _ _ _____ _______ _____ ____ _ _
| __ \| __ \ / __ \| __ \| | | |/ ____|__ __|_ _/ __ \| \ | |
| |__) | |__) | | | | | | | | | | | | | | || | | | \| |
| ___/| _ /| | | | | | | | | | | | | | || | | | . ` |
| | | | \ \| |__| | |__| | |__| | |____ | | _| || |__| | |\ |
|_| |_| \_\\____/|_____/ \____/ \_____| |_| |_____\____/|_| \_|
__ __ _____ _ _ _____ _ _ ______
| \/ | /\ / ____| | | |_ _| \ | | ____|
| \ / | / \ | | | |__| | | | | \| | |__
| |\/| | / /\ \| | | __ | | | | . ` | __|
| | | |/ ____ \ |____| | | |_| |_| |\ | |____
|_| |_/_/ \_\_____|_| |_|_____|_| \_|______| You could generate such a warning on this website or you could use the figlet command. Like Nicholas Smith suggested in the comments, you could spice things up with some dragons or other animals using the cowsay command. Instead of using the /etc/motd file, you could also call cowsay or figlet in the .profile file. | {
"source": [
"https://serverfault.com/questions/322997",
"https://serverfault.com",
"https://serverfault.com/users/13659/"
]
} |
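A minimal sketch of the red-prompt idea for bash, suitable for root's .bashrc or a file under /etc/profile.d/ on the production box (the colour codes are plain ANSI, bold white on a red background; adjust to taste):
# /etc/profile.d/prod-prompt.sh (assumed path)
export PS1='\[\e[1;37;41m\][PROD]\u@\h:\w\$\[\e[0m\] '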
323,233 | I have this problem with NRPE; all the stuff I've found so far on the net seems to point me at things I've already tried. # /usr/local/nagios/plugins/check_nrpe -H nrpeclient gives NRPE v2.12 as expected. Running the command by hand (as defined in nrpe.cfg on "nrpeclient") gives the expected response nrpe.cfg: command[check_openmanage]=/usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge
"Expected response" But if I try to run the command from the Nagios server I get the following: # /usr/local/nagios/plugins/check_nrpe -H comxps -c check_openmanage
NRPE: Unable to read output Can anyone think of anywhere else I might have made a mistake with this? I've done the same thing on multiple other servers with no problem. The only difference I can think of with this is that this box is RHEL 5 based, whereas the others are RHEL 4 based. Those two bits above that I've tested are what most people seem to suggest when they have had this problem. I should mention that I get a weird error in the logs when I restart nrpe:
nrpe[14534]: Continuing with errors...
nrpe[14535]: Starting up daemon
nrpe[14535]: Warning: Daemon is configured to accept command arguments from clients!
nrpe[14535]: Listening for connections on port 5666
nrpe[14535]: Allowing connections from: bodbck,combck,nam-bck Even though it's plainly reading that /usr/local/nagios/etc/nrpe.cfg file to get the stuff it's talking about further down. | You have a rights problem. Change the command to:
"source": [
"https://serverfault.com/questions/323233",
"https://serverfault.com",
"https://serverfault.com/users/87359/"
]
} |
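The sudoers changes from the answer gathered into one visudo-editable snippet; the plugin path is the one from the question, and the per-user requiretty override only matters on distributions that enable requiretty by default:
Defaults:nagios !requiretty
nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/additional/check_openmanage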
323,497 | During the last couple of days I have been using a lot of F-words while browsing the Internet for good documentation about how to set up an LDAP server. So far I have found none, but plenty that are less than good, but better than bad. So I had to do it the usual Linux way: read, test, scream, read, test and scream. My goals for the LDAP server are: Install LDAP on a CentOS 6 minimum installation, both for server and clients. Install in the way that the developers of OpenLDAP intended. Install LDAP securely with LDAPS, iptables, SELinux etc. enabled. Use SSSD on the clients for the "authentication" connections to the LDAP server. This is the kind of question that I usually answer myself, but I would appreciate suggestions about how to do the installation even better. | Here are a couple of shell scripts that will install and configure OpenLDAP on a server, and install and configure SSSD for user authentication against the LDAP server. One that installs the LDAP server with groups, users etc. #!/bin/sh
###########################################################
# Install LDAP-server
###########################################################
# Enable SELinux for higher security.
setenforce 1
setsebool -P domain_kernel_load_modules 1
# Communication with the LDAP-server needs to be done with domain name, and not
# the ip. This ensures the dns-name is configured.
cat >> /etc/hosts << EOF
10.100.110.7 ldap.syco.net
EOF
# Install all required packages.
yum -y install openldap-servers openldap-clients
# Create backend database.
cp /usr/share/doc/openldap-servers-2.4.19/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown -R ldap:ldap /var/lib/ldap
# Set password for cn=admin,cn=config (it's secret)
cat >> /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{0\}config.ldif << EOF
olcRootPW: {SSHA}OjXYLr1oZ/LrHHTmjnPWYi1GjbgcYxSb
EOF
# Autostart slapd after reboot.
chkconfig slapd on
# Start ldap server
service slapd start
# Wait for slapd to start.
sleep 1
###########################################################
# General configuration of the server.
###########################################################
# Create folder to store log files in
mkdir /var/log/slapd
chmod 755 /var/log/slapd/
chown ldap:ldap /var/log/slapd/
# Redirect all log files through rsyslog.
sed -i "/local4.*/d" /etc/rsyslog.conf
cat >> /etc/rsyslog.conf << EOF
local4.* /var/log/slapd/slapd.log
EOF
service rsyslog restart
# Do the configurations.
ldapadd -H ldap://ldap.syco.net -x -D "cn=admin,cn=config" -w secret << EOF
# Setup logfile (not working now, probably needing debug level settings.)
dn: cn=config
changetype:modify
replace: olcLogLevel
olcLogLevel: config stats shell
-
replace: olcIdleTimeout
olcIdleTimeout: 30
# Set access for the monitor db.
dn: olcDatabase={2}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="cn=Manager,dc=syco,dc=net" read by * none
# Set password for cn=admin,cn=config
dn: olcDatabase={0}config,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}OjXYLr1oZ/LrHHTmjnPWYi1GjbgcYxSb
# Change LDAP-domain, password and access rights.
dn: olcDatabase={1}bdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=syco,dc=net
-
replace: olcRootDN
olcRootDN: cn=Manager,dc=syco,dc=net
-
replace: olcRootPW
olcRootPW: {SSHA}OjXYLr1oZ/LrHHTmjnPWYi1GjbgcYxSb
-
replace: olcAccess
olcAccess: {0}to attrs=employeeType by dn="cn=sssd,dc=syco,dc=net" read by self read by * none
olcAccess: {1}to attrs=userPassword,shadowLastChange by self write by anonymous auth by * none
olcAccess: {2}to dn.base="" by * none
olcAccess: {3}to * by dn="cn=admin,cn=config" write by dn="cn=sssd,dc=syco,dc=net" read by self write by * none
EOF
##########################################################
# Configure sudo in ldap
#
# Users that should have sudo rights are configured
# in the ldap-db. The ldap sudo schema is not configured
# by default, and is created here.
#
# http://eatingsecurity.blogspot.com/2008/10/openldap-continued.html
# http://www.sudo.ws/sudo/man/1.8.2/sudoers.ldap.man.html
##########################################################
# Copy the sudo Schema into the LDAP schema repository
/bin/cp -f /usr/share/doc/sudo-1.7.2p2/schema.OpenLDAP /etc/openldap/schema/sudo.schema
restorecon /etc/openldap/schema/sudo.schema
# Create a conversion file for schema
mkdir ~/sudoWork
echo "include /etc/openldap/schema/sudo.schema" > ~/sudoWork/sudoSchema.conf
# Convert the "Schema" to "LDIF".
slapcat -f ~/sudoWork/sudoSchema.conf -F /tmp/ -n0 -s "cn={0}sudo,cn=schema,cn=config" > ~/sudoWork/sudo.ldif
# Remove invalid data.
sed -i "s/{0}sudo/sudo/g" ~/sudoWork/sudo.ldif
# Remove last 8 (invalid) lines.
head -n-8 ~/sudoWork/sudo.ldif > ~/sudoWork/sudo2.ldif
# Load the schema into the LDAP server
ldapadd -H ldap:/// -x -D "cn=admin,cn=config" -w secret -f ~/sudoWork/sudo2.ldif
# Add index to sudoers db
ldapadd -H ldap:/// -x -D "cn=admin,cn=config" -w secret << EOF
dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: sudoUser eq
EOF
###########################################################
# Create modules area
#
###########################################################
ldapadd -H ldap:/// -x -D "cn=admin,cn=config" -w secret << EOF
dn: cn=module{0},cn=config
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib64/openldap/
EOF
###########################################################
# Add auditlog overlay.
#
# http://www.manpagez.com/man/5/slapo-auditlog/
###########################################################
ldapadd -H ldap:/// -x -D "cn=admin,cn=config" -w secret << EOF
dn: cn=module{0},cn=config
changetype:modify
add: olcModuleLoad
olcModuleLoad: auditlog.la
dn: olcOverlay=auditlog,olcDatabase={1}bdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcAuditLogConfig
olcOverlay: auditlog
olcAuditlogFile: /var/log/slapd/auditlog.log
EOF
###########################################################
# Add accesslog overlay.
#
# http://www.manpagez.com/man/5/slapo-accesslog/
#
# TODO: Didn't get it working.
#
###########################################################
# ldapadd -H ldap:/// -x -D "cn=admin,cn=config" -w secret << EOF
# dn: cn=module,cn=config
# objectClass: olcModuleList
# cn: module
# olcModulePath: /usr/lib64/openldap/
# olcModuleLoad: access.la
#
#
# dn: olcOverlay=accesslog,olcDatabase={1}bdb,cn=config
# changetype: add
# olcOverlay: accesslog
# objectClass: olcOverlayConfig
# objectClass: olcAccessLogConfig
# logdb: cn=auditlog
# logops: writes reads
# # read log every 5 days and purge entries
# # when older than 30 days
# logpurge 180+00:00 5+00:00
# # optional - saves the previous contents of
# # person objectclass before performing a write operation
# logold: (objectclass=person)
# EOF
###########################################################
# Add pwdpolicy overlay
#
# http://www.zytrax.com/books/ldap/ch6/ppolicy.html
# http://www.openldap.org/software/man.cgi?query=slapo-ppolicy&sektion=5&apropos=0&manpath=OpenLDAP+2.3-Release
# http://www.symas.com/blog/?page_id=66
###########################################################
ldapadd -H ldap:/// -x -D "cn=admin,cn=config" -w secret << EOF
dn: cn=module{0},cn=config
changetype:modify
add: olcModuleLoad
olcModuleLoad: ppolicy.la
dn: olcOverlay=ppolicy,olcDatabase={1}bdb,cn=config
olcOverlay: ppolicy
objectClass: olcOverlayConfig
objectClass: olcPPolicyConfig
olcPPolicyHashCleartext: TRUE
olcPPolicyUseLockout: FALSE
olcPPolicyDefault: cn=default,ou=pwpolicies,dc=syco,dc=net
EOF
##########################################################
# Add users, groups, sudoers. Ie. the dc=syco,dc=net database.
##########################################################
ldapadd -H ldap:/// -x -D "cn=Manager,dc=syco,dc=net" -w secret -f /opt/syco/doc/ldap/manager.ldif
###########################################################
# Create certificates
###########################################################
# Create CA
echo "00" > /etc/openldap/cacerts/ca.srl
openssl req -new -x509 -sha512 -nodes -days 3650 -newkey rsa:4096\
-out /etc/openldap/cacerts/ca.crt \
-keyout /etc/openldap/cacerts/ca.key \
-subj '/O=syco/OU=System Console Project/CN=systemconsole.github.com'
# Creating server cert
openssl req -new -sha512 -nodes -days 1095 -newkey rsa:4096 \
-keyout /etc/openldap/cacerts/slapd.key \
-out /etc/openldap/cacerts/slapd.csr \
-subj '/O=syco/OU=System Console Project/CN=ldap.syco.net'
openssl x509 -req -sha512 -days 1095 \
-in /etc/openldap/cacerts/slapd.csr \
-out /etc/openldap/cacerts/slapd.crt \
-CA /etc/openldap/cacerts/ca.crt \
-CAkey /etc/openldap/cacerts/ca.key
#
# Customer create a CSR (Certificate Signing Request) file for client cert
#
openssl req -new -sha512 -nodes -days 1095 -newkey rsa:4096 \
-keyout /etc/openldap/cacerts/client.key \
-out /etc/openldap/cacerts/client.csr \
-subj '/O=syco/OU=System Console Project/CN=client.syco.net'
#
# Create a signed client crt.
#
cat > /etc/openldap/cacerts/sign.conf << EOF
[ v3_req ]
basicConstraints = critical,CA:FALSE
keyUsage = critical,digitalSignature
subjectKeyIdentifier = hash
EOF
openssl x509 -req -days 1095 \
-sha512 \
-extensions v3_req \
-extfile /etc/openldap/cacerts/sign.conf \
-CA /etc/openldap/cacerts/ca.crt \
-CAkey /etc/openldap/cacerts/ca.key \
-in /etc/openldap/cacerts/client.csr \
-out /etc/openldap/cacerts/client.crt
# One file with both crt and key. Easier to manage the cert on client side.
cat /etc/openldap/cacerts/client.crt /etc/openldap/cacerts/client.key > \
/etc/openldap/cacerts/client.pem
# Create hash and set permissions of cert
/usr/sbin/cacertdir_rehash /etc/openldap/cacerts
chown -Rf root:ldap /etc/openldap/cacerts
chmod -Rf 750 /etc/openldap/cacerts
restorecon -R /etc/openldap/cacerts
# View cert info
# openssl x509 -text -in /etc/openldap/cacerts/ca.crt
# openssl x509 -text -in /etc/openldap/cacerts/slapd.crt
# openssl x509 -text -in /etc/openldap/cacerts/client.pem
# openssl req -noout -text -in /etc/openldap/cacerts/client.csr
###########################################################
# Configure ssl
#
# Configure slapd to only be accessible over ssl,
# with client certificate.
#
# http://www.openldap.org/pub/ksoper/OpenLDAP_TLS.html#4.0
# http://www.openldap.org/faq/data/cache/185.html
###########################################################
ldapadd -H ldap:/// -x -D "cn=admin,cn=config" -w secret << EOF
dn: cn=config
changetype:modify
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/cacerts/slapd.key
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/cacerts/slapd.crt
-
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/cacerts/ca.crt
-
replace: olcTLSCipherSuite
olcTLSCipherSuite: HIGH:MEDIUM:-SSLv2
-
replace: olcTLSVerifyClient
olcTLSVerifyClient: demand
EOF
# Enable LDAPS and dispable LDAP
sed -i 's/[#]*SLAPD_LDAPS=.*/SLAPD_LDAPS=yes/g' /etc/sysconfig/ldap
sed -i 's/[#]*SLAPD_LDAP=.*/SLAPD_LDAP=no/g' /etc/sysconfig/ldap
service slapd restart
# Configure the client cert to be used by ldapsearch for user root.
sed -i '/^TLS_CERT.*\|^TLS_KEY.*/d' /root/ldaprc
cat >> /root/ldaprc << EOF
TLS_CERT /etc/openldap/cacerts/client.pem
TLS_KEY /etc/openldap/cacerts/client.pem
EOF
###########################################################
# Require higher security from clients.
###########################################################
ldapadd -H ldaps://ldap.syco.net -x -D "cn=admin,cn=config" -w secret << EOF
dn: cn=config
changetype:modify
replace: olcLocalSSF
olcLocalSSF: 128
-
replace: olcSaslSecProps
olcSaslSecProps: noanonymous,noplain
dn: cn=config
changetype:modify
replace: olcSecurity
olcSecurity: ssf=128
olcSecurity: simple_bind=128
olcSecurity: tls=128
EOF
###########################################################
# Open firewall
#
# Let clients connect to the server through the firewall.
# This is done after everything else is done, so we are sure
# that the server is secure before letting somebody in.
# TODO: Add destination ip
###########################################################
iptables -I INPUT -m state --state NEW -p tcp -s 10.100.110.7/24 --dport 636 -j ACCEPT And one that installs sssd on the client, and connects to the LDAP-server. #!/bin/sh
###########################################################
# Install LDAP-client
#
# This part should be executed on both LDAP-Server and
# on all clients that should authenticate against the
# LDAP-server
#
# This script is based on information from at least the following links.
# http://www.server-world.info/en/note?os=CentOS_6&p=ldap&f=2
# http://docs.fedoraproject.org/en-US/Fedora/15/html/Deployment_Guide/chap-SSSD_User_Guide-Introduction.html
#
###########################################################
###########################################################
# Uninstall sssd
#
# Note: Only needed if sssd has been setup before.
# might need --skip-broken when installing sssd.
###########################################################
#yum -y remove openldap-clients sssd
#rm -rf /var/lib/sss/
###########################################################
# Install relevant packages
###########################################################
# Install packages
yum -y install openldap-clients
# Pick one package from the Continuous Release
# Version 1.5.1 of sssd.
yum -y install sssd --skip-broken
yum -y install centos-release-cr
yum -y update sssd
yum -y remove centos-release-cr
###########################################################
# Get certificate from ldap server
#
# This does not need to be done on the server.
###########################################################
if [ ! -f /etc/openldap/cacerts/client.pem ];
then
scp [email protected]:/etc/openldap/cacerts/client.pem /etc/openldap/cacerts/client.pem
fi
if [ ! -f /etc/openldap/cacerts/ca.crt ];
then
scp [email protected]:/etc/openldap/cacerts/ca.crt /etc/openldap/cacerts/ca.crt
fi
/usr/sbin/cacertdir_rehash /etc/openldap/cacerts
chown -Rf root:ldap /etc/openldap/cacerts
chmod -Rf 750 /etc/openldap/cacerts
restorecon -R /etc/openldap/cacerts
###########################################################
# Configure client authenticate against ldap.
###########################################################
# Setup iptables before configuring sssd, so it can connect to the server.
iptables -I OUTPUT -m state --state NEW -p tcp -d 10.100.110.7 --dport 636 -j ACCEPT
# Communication with the LDAP-server needs to be done with domain name, and not
# the ip. This ensures the dns-name is configured.
sed -i '/^10.100.110.7.*/d' /etc/hosts
cat >> /etc/hosts << EOF
10.100.110.7 ldap.syco.net
EOF
# Configure all relevant /etc files for sssd, ldap etc.
authconfig \
--enablesssd --enablesssdauth --enablecachecreds \
--enableldap --enableldaptls --enableldapauth \
--ldapserver=ldaps://ldap.syco.net --ldapbasedn=dc=syco,dc=net \
--disablenis --disablekrb5 \
--enableshadow --enablemkhomedir --enablelocauthorize \
--passalgo=sha512 \
--updateall
# Configure the client cert to be used by ldapsearch for user root.
sed -i '/^TLS_CERT.*\|^TLS_KEY.*/d' /root/ldaprc
cat >> /root/ldaprc << EOF
TLS_CERT /etc/openldap/cacerts/client.pem
TLS_KEY /etc/openldap/cacerts/client.pem
EOF
###########################################################
# Configure sssd
###########################################################
# If the authentication provider is offline, specifies for how long to allow
# cached log-ins (in days). This value is measured from the last successful
# online log-in. If not specified, defaults to 0 (no limit).
sed -i '/\[pam\]/a offline_credentials_expiration=5' /etc/sssd/sssd.conf
cat >> /etc/sssd/sssd.conf << EOF
# Enumeration means that the entire set of available users and groups on the
# remote source is cached on the local machine. When enumeration is disabled,
# users and groups are only cached as they are requested.
enumerate=true
# Configure client certificate auth.
ldap_tls_cert = /etc/openldap/cacerts/client.pem
ldap_tls_key = /etc/openldap/cacerts/client.pem
ldap_tls_reqcert = demand
# Only users with this employeeType are allowed to login to this computer.
access_provider = ldap
ldap_access_filter = (employeeType=Sysop)
# Login to ldap with a specified user.
ldap_default_bind_dn = cn=sssd,dc=syco,dc=net
ldap_default_authtok_type = password
ldap_default_authtok = secret
EOF
# Restart sssd
service sssd restart
# Start sssd after reboot.
chkconfig sssd on
###########################################################
# Configure the client to use sudo
###########################################################
sed -i '/^sudoers.*/d' /etc/nsswitch.conf
cat >> /etc/nsswitch.conf << EOF
sudoers: ldap files
EOF
sed -i '/^sudoers_base.*\|^binddn.*\|^bindpw.*\|^ssl on.*\|^tls_cert.*\|^tls_key.*\|sudoers_debug.*/d' /etc/ldap.conf
cat >> /etc/ldap.conf << EOF
# Configure sudo ldap.
uri ldaps://ldap.syco.net
base dc=syco,dc=net
sudoers_base ou=SUDOers,dc=syco,dc=net
binddn cn=sssd,dc=syco,dc=net
bindpw secret
ssl on
tls_cacertdir /etc/openldap/cacerts
tls_cert /etc/openldap/cacerts/client.pem
tls_key /etc/openldap/cacerts/client.pem
#sudoers_debug 5
EOF Provided are also an LDIF files that needs to be placed in the same folder as the above scripts. # Filename: manager.ldif
###########################################################
# NEW DATABASE
###########################################################
dn: dc=syco,dc=net
objectClass: top
objectclass: dcObject
objectclass: organization
o: System Console Project
dc: syco
description: Tree root
# Used by sssd to ask general queries.
dn: cn=sssd,dc=syco,dc=net
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: sssd
description: Account for sssd.
userPassword: {SSHA}OjXYLr1oZ/LrHHTmjnPWYi1GjbgcYxSb
###########################################################
# Add pwdpolicy overlay
# Need to be done before adding new users.
###########################################################
dn: ou=pwpolicies,dc=syco,dc=net
objectClass: organizationalUnit
objectClass: top
ou: policies
dn: cn=default,ou=pwpolicies,dc=syco,dc=net
cn: default
#objectClass: pwdPolicyChecker
objectClass: pwdPolicy
objectClass: person
objectClass: top
pwdAllowUserChange: TRUE
pwdAttribute: 2.5.4.35
#pwdCheckModule: crackcheck.so
#pwdCheckQuality: 2
pwdExpireWarning: 604800
pwdFailureCountInterval: 30
pwdGraceAuthNLimit: 0
pwdInHistory: 10
pwdLockout: TRUE
pwdLockoutDuration: 3600
pwdMaxAge: 7776000
pwdMaxFailure: 5
pwdMinAge: 3600
pwdMinLength: 12
pwdMustChange: FALSE
pwdSafeModify: FALSE
sn: dummy value
EOF
###########################################################
# GROUPS
###########################################################
dn: ou=group,dc=syco,dc=net
objectClass: top
objectclass: organizationalunit
ou: group
dn: cn=sycousers,ou=group,dc=syco,dc=net
cn: sycousers
objectClass: posixGroup
gidNumber: 2000
memberUid: user1
memberUid: user2
memberUid: user3
dn: cn=sysop,ou=group,dc=syco,dc=net
cn: sysop
objectClass: posixGroup
gidNumber: 2001
memberUid: user1
memberUid: user2
dn: cn=management,ou=group,dc=syco,dc=net
cn: management
objectClass: posixGroup
gidNumber: 2002
memberUid: user1
###########################################################
# USERS
###########################################################
dn: ou=people,dc=syco,dc=net
objectClass: top
objectclass: organizationalunit
ou: people
dn: uid=user1,ou=people,dc=syco,dc=net
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: user1
employeeType: Sysop
givenName: User1
surname: Syco
displayName: Syco User1
commonName: Syco User1
gecos: Syco User1
initials: SU
title: System Administrator (fratsecret)
userPassword: {CRYPT}frzelFSD.VhkI
loginShell: /bin/bash
uidNumber: 2001
gidNumber: 2000
homeDirectory: /home/user1
shadowExpire: -1
shadowFlag: 0
shadowWarning: 7
shadowMin: 8
shadowMax: 999999
shadowLastChange: 10877
mail: [email protected]
postalCode: 666666
mobile: +46 (0)73 xx xx xx xx
homePhone: +46 (0)8 xx xx xx xx
postalAddress:
dn: uid=user2,ou=people,dc=syco,dc=net
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: user2
employeeType: Sysop
givenName: User2
surname: Syco
displayName: Syco User2
commonName: Syco User2
gecos: Syco User2
initials: SU
title: System Administrator
userPassword: {CRYPT}frzelFSD.VhkI
loginShell: /bin/bash
uidNumber: 2002
gidNumber: 2000
homeDirectory: /home/user2
shadowExpire: -1
shadowFlag: 0
shadowWarning: 7
shadowMin: 8
shadowMax: 999999
shadowLastChange: 10877
mail: [email protected]
postalCode: 666666
mobile: +46 (0)73 xx xx xx xx
homePhone: +46 (0)8 xx xx xx xx
postalAddress:
dn: uid=user3,ou=people,dc=syco,dc=net
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: user3
employeeType: Developer
givenName: User3
surname: Syco
displayName: Syco User3
commonName: Syco User3
gecos: Syco User3
initials: SU
title: System Administrator
userPassword: {CRYPT}frzelFSD.VhkI
loginShell: /bin/bash
uidNumber: 2003
gidNumber: 2000
homeDirectory: /home/user3
shadowExpire: -1
shadowFlag: 0
shadowWarning: 7
shadowMin: 8
shadowMax: 999999
shadowLastChange: 10877
mail: [email protected]
postalCode: 666666
mobile: +46 (0)73 xx xx xx xx
homePhone: +46 (0)8 xx xx xx xx
postalAddress:
###########################################################
# SUDOERS
###########################################################
dn: ou=SUDOers,dc=syco,dc=net
objectClass: top
objectClass: organizationalUnit
ou: SUDOers
dn: cn=defaults,ou=SUDOers,dc=syco,dc=net
objectClass: top
objectClass: sudoRole
cn: defaults
description: Default sudoOptions go here
sudoOption: requiretty
sudoOption: always_set_home
sudoOption: env_reset
sudoOption: env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
sudoOption: env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
sudoOption: env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
sudoOption: env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
sudoOption: env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
sudoOption: secure_path=/sbin:/bin:/usr/sbin:/usr/bin
dn: cn=root,ou=SUDOers,dc=syco,dc=net
objectClass: top
objectClass: sudoRole
cn: root
sudoUser: root
sudoHost: ALL
sudoRunAsUser: ALL
sudoCommand: ALL
# Allow all sysops to execute anything
dn: cn=%sysop,ou=SUDOers,dc=syco,dc=net
objectClass: top
objectClass: sudoRole
cn: %sysop
sudoUser: %sysop
sudoHost: ALL
sudoRunAsUser: ALL
sudoCommand: ALL You will need to understand and edit the scripts before they are executed on your server. Among other things, you will need to customize the parts related to "syco.net", users, groups and passwords for your installation. | {
"source": [
"https://serverfault.com/questions/323497",
"https://serverfault.com",
"https://serverfault.com/users/64308/"
]
} |
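A quick way to sanity-check a client installed with the scripts above is to query the directory through sssd, test the TLS client certificate, and list the resolved sudo rules. This is only a sketch: the host name ldap.syco.net, the certificate paths and the user/group names are the example values used above. # Confirm that sssd resolves users and groups from the directory.
getent passwd user1
getent group sysop
id user1
# Verify the TLS session and the client certificate against the server.
openssl s_client -connect ldap.syco.net:636 \
    -cert /etc/openldap/cacerts/client.pem \
    -key /etc/openldap/cacerts/client.pem \
    -CApath /etc/openldap/cacerts < /dev/null
# List the sudo rules picked up from ou=SUDOers (run as an LDAP user).
sudo -l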
323,558 | My business is... troublesome. What I do is legal in every country on earth, but some people don't like it, and make it so tough on my poor ISPs that I am forced to go looking for new providers more often than I would like. The only option I know of for somebody in my position is called "bulletproof hosting" (think wikileaks) and it isn't cheap. The only thing that makes bulletproof hosting expensive, or anything different than typical hosting, is their legal stance when responding to "abuse" complaints via email. Your typical host will get fed-up after 5 or 10 no matter what the reason, and a bulletproof host will take the time to look through the legality of the matter and make a decision based on that. As far as I know, these abuse emails are directly tied to the ip address on which their server sits, because they "own" that ip address and have the ability to lease it out to me on my expensive bulletproof server. If I could answer them personally, I would save another company the trouble and hopefully save myself some money along the way. How can I become the bulletproof host? Just rent out a room in my local DC and ask where to get the IPs from? P.S. No ... just because I know this will be the first question everyone asks - I am not some spammer or rule 34 pornographer, what I do is legal in every country. I said "think wikileaks"! EDIT: Thank you once again for all of the amazing responses. Don't know why I've been lighting fires here recently. Thanks to everyone who saw through the smoke and provided me with meaningful answers. | You need to apply and be granted your own IP allocation by your local registry like RIPE or APNIC. They require annual fees, and you need to justify your requirement (yours is legit). They will assign you an Autonomous System number and a range of IP addresses. You must then find people to peer with (in a datacenter usually), preferably more than one. You then publish your BGP routes using your AS provided with your IPs via your new peer links. You also should allocate a DNS server to provide PTR records for your allocation. None of it is cheap, it requires expensive subscriptions, expensive peer links, expensive hardware (routers to do the BGP peering) and a fair amount of networking knowledge. What I suggest is that you hire a contractor to set up the initial network and peering and firewall, etc to take the pressure off you having to learn and maintain it all, so you can concentrate on... Whatever it is that you host. | {
"source": [
"https://serverfault.com/questions/323558",
"https://serverfault.com",
"https://serverfault.com/users/33108/"
]
} |
323,706 | What should I do about this user? The user is: Downloading pornography Attempting unauthorized access Running hacking software Sending unsolicited email Installing software / tampering with the system etc This is intended as a generic answer for employee behavioral problems, a la Can you help me with my software licensing question? I could see where acceptable use issues are a touch out of scope for SF, however it is one of those things most sysadmins will run into. I don't want to keep rewriting similar answers. | When it comes down to it most of us are just systems administrators. We might be the ones to spot bad behavior and even sometimes called upon to help resolve situations. It is not our job to police or enforce employee behavior. That being said having strong tools at your company’s disposal to address behavior issues as they come up is critical. Once a breach of policy occurs it is a HR question on how to deal with it. Provide them your documentation and let them do their thing. Wait to provide them whatever technical support is needed. If you are in the situation that your company does not have an AUP or it needs revision this summary reflects a lot of research. It should provide you some guidance in getting started. A good AUP should cover the following subjects. One user per ID / Password - if someone uses your account you are liable. One location for each password - don't use your work password outside. Handling of personally identifiable / confidential data Handling of media (CD, USB stick, etc) What information can be transferred and to whom Session locking - your screen locks so your account can't be misused. Monitoring for email, file system utilization, web access Personal use of business systems Legal violations (copyright, hacking attempts, etc) Attempts to bypass internal security controls How violations are responded to - up to and including termination and legal action EDIT - as DKNUCKLES points out it is necessary to follow the standard chain of command for these issues. Just because I was supposed to take them straight to the head of HR doesn't mean that is what your organization does. | {
"source": [
"https://serverfault.com/questions/323706",
"https://serverfault.com",
"https://serverfault.com/users/90351/"
]
} |
323,717 | A client uses Windows 2000 Professional on a machine for about six years. It is configured to startup without password prompt. Today we needed to transfer files to another machine, so I connected the cable and configured the network. After a reboot the machine suddenly prompts for an administrator password while it was configured to login automatically. We do not know the administrator password, so now we can not login anymore. Does anyone have a thought about how this could happen and how we get access back to the system. The client does not have an install cd either. I though it might be possible to boot an Ubuntu live cd and transfer the files to another machine, but I am not sure if I can access the disks form Ubuntu. | When it comes down to it most of us are just systems administrators. We might be the ones to spot bad behavior and even sometimes called upon to help resolve situations. It is not our job to police or enforce employee behavior. That being said having strong tools at your company’s disposal to address behavior issues as they come up is critical. Once a breach of policy occurs it is a HR question on how to deal with it. Provide them your documentation and let them do their thing. Wait to provide them whatever technical support is needed. If you are in the situation that your company does not have an AUP or it needs revision this summary reflects a lot of research. It should provide you some guidance in getting started. A good AUP should cover the following subjects. One user per ID / Password - if someone uses your account you are liable. One location for each password - don't use your work password outside. Handling of personally identifiable / confidential data Handling of media (CD, USB stick, etc) What information can be transferred and to whom Session locking - your screen locks so your account can't be misused. Monitoring for email, file system utilization, web access Personal use of business systems Legal violations (copyright, hacking attempts, etc) Attempts to bypass internal security controls How violations are responded to - up to and including termination and legal action EDIT - as DKNUCKLES points out it is necessary to follow the standard chain of command for these issues. Just because I was supposed to take them straight to the head of HR doesn't mean that is what your organization does. | {
"source": [
"https://serverfault.com/questions/323717",
"https://serverfault.com",
"https://serverfault.com/users/98636/"
]
} |
323,958 | I'm trying to create an ssh key for another user. I'm logged in as root. Can I just edit the files generated by ssh-keygen and change root to the user I want? | You could do that with ssh-keygen , however, remember that the private key is meant to be private to the user so you should be very careful to keep it safe- as safe as the user's password. Or even safer, as the user is not likely to be required to change it upon first login. ssh-keygen -f anything creates two files in the current directory. anything.pub is the public key, which you could append to the user's ~/.ssh/authorized_keys on any destination server. The other file, just called anything is the private key and therefore should be stored safely for the user. The default location would be ~username/.ssh/id_rsa (here named id_rsa , which is default for rsa keys). Remember that the .ssh directory cannot be readable or writeable by anyone but the user, and the user's home directory cannot be writeable by anyone but the user. Likewise, permissions must be tight on the private key, as well: Read/write for only the user, and the .ssh directory and private keyfile must be owned by the user. Technically you could store the key anywhere. With ssh -i path/to/privatekey you could specify that location, while connecting. Again, proper ownership and permissions are critical and ssh will not work if you don't have them right. | {
"source": [
"https://serverfault.com/questions/323958",
"https://serverfault.com",
"https://serverfault.com/users/97916/"
]
} |
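A minimal sketch of that workflow, run as root for a hypothetical user alice (key type, comment and paths are only examples; adjust the group name if it does not match the user name). # Generate the pair somewhere temporary, without a passphrase.
username=alice
ssh-keygen -t rsa -b 4096 -N '' -C "$username" -f /tmp/${username}_key
# Install both files into the user's ~/.ssh with the required ownership and permissions.
install -d -m 700 -o "$username" -g "$username" /home/$username/.ssh
install -m 600 -o "$username" -g "$username" /tmp/${username}_key /home/$username/.ssh/id_rsa
install -m 644 -o "$username" -g "$username" /tmp/${username}_key.pub /home/$username/.ssh/id_rsa.pub
# Append the public key to authorized_keys on whichever host the user will log in to,
# then remove the temporary copies.
rm -f /tmp/${username}_key /tmp/${username}_key.pub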
324,281 | I setup an Ubuntu guest on a CentOS KVM host with initially 6GB of disk space. How do I go about increasing the Ubuntu guest's disk space from the command line? EDIT #1: I'm using a disk image file (qemu). | stop the VM run qemu-img resize vmdisk.img +10G to increase image size by 10Gb start the VM, resize the partitions and LVM structure within it normally | {
"source": [
"https://serverfault.com/questions/324281",
"https://serverfault.com",
"https://serverfault.com/users/2518/"
]
} |
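As a sketch of the whole sequence, assuming a qcow2 image managed by libvirt and an ext4 root on LVM inside the guest; the image path, device and volume group names are placeholders that must match your layout. # On the host: shut the guest down, then grow the image file.
virsh shutdown ubuntu-guest
qemu-img resize /var/lib/libvirt/images/ubuntu-guest.qcow2 +10G
virsh start ubuntu-guest
# Inside the guest: grow the partition, PV, LV and filesystem into the new space.
growpart /dev/vda 2               # or recreate the partition with fdisk/parted
pvresize /dev/vda2
lvextend -l +100%FREE /dev/VolGroup/lv_root
resize2fs /dev/VolGroup/lv_root   # ext4 can be grown while mounted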
324,549 | Suppose I have a rack with several servers and other stuff. One of servers overheats severely and either starts smoking or catches fire while there's a serviceman nearby. If anything similar happens in an apartment and there's a fire extinguisher nearby using the latter promptly often lets extinguish the fire very fast but in case of server rack improper extinguishing may lead to unneeded extra damage to surrounding equipment. To clarify, I'm talking about a really small fire that one can try to extinguish without risking their life - like grab a nearby extinguisher, discharge it and get the fire extinguished in say ten fifteen seconds. What's the strategy to extinguish a small local fire in a server rack? What type of extinguisher is to be used? How to minimize damage to surrounding equipment? | This is a strange question, but I'm gonna attempt to answer it anyway. All electrical fires must be extinguished carefully. Especially if they're still live. Fire departments will all recommend the use of a CO2 extinguisher . In a datacentre environment, however, I'd (But I don't recommend you) do one of two things. Hit the EPO (Emergency Power Off) and the fire alarm button. Find a CO2 or Halon* extinguisher and attempt to extinguish the fire If that doesn't work. Goto 1. Unconditional Jump: Run out of Fire Exit. I'd be more concerned about the protection of the rest of the suite than the expense or damage caused by following their procedure for fires in the building. You're operating as a business, so you've got insurance, both for your hardware, and Public Liability. It'll all be covered by that if you've got a decent insurer. It'd be better all around to hit the fire alarm and let the professionals deal with it. Electrical fires produce evil acrid choking smoke, often containing horrible toxins, and I sure as hell wouldn't wanna be trying to extinguish it without breathing apparatus. *Halon is a bit of an odd one. It's a fantastically efficient extinguishing agent, but has a nasty side-effect of destroying the Ozone layer. I've only seen Halon extinguishers in datacentres, and even then, there's a lot of paperwork if you discharge one.
It works by depleting the oxygen in the applied area, and as a result, it'll suffocate the fire, and you at the same time if you linger too long. Again, this kind of lends itself to calling the fire service. Or activating the building's fire suppression system.
Don't worry about the expense, that's what insurance is for. Also.. has this just happened? I can't resist this. (Disclaimer: I'm an ex fire-cadet, so I've got a pretty good knowledge of fire and its extinguishing. ) | {
"source": [
"https://serverfault.com/questions/324549",
"https://serverfault.com",
"https://serverfault.com/users/101/"
]
} |
324,608 | I don't get any information in my log file for openldap on my Centos 6 server. This is how i configured it. SELinux is disabled at the moment. First created a folder where I'd like to store the log files. mkdir /var/log/slapd
chmod 755 /var/log/slapd/
chown ldap:ldap /var/log/slapd/ Then did the configuration. ldapsearch -D "cn=admin,cn=config" -w secret -b cn=config cn=config
dn: cn=config
changetype:modify
replace: olcLogFile
olcLogFile: /var/log/slapd/slapd.log
-
replace: olcLogLevel
olcLogLevel: conns filter config acl stats shell
EOF Just to be safe I restarted the service: service openldap restart It does create the file, but doesn't write anything into the file. Of course I did some searches and updates to the LDAP-server, so it gets connections and stuff to log. $ ls -alh
total 12K
drwxr-xr-x. 2 ldap ldap 4.0K Oct 25 14:27 .
drwxr-xr-x. 6 root root 4.0K Oct 25 14:10 ..
-rw-r--r--. 1 ldap ldap 0 Oct 25 14:33 slapd.log My LDAP-setup can be found here (now slightly modified on my own server) How do I configure LDAP on Centos 6 for user authentication in the most secure and correct way? | I haven't tried olcLogFile but by default, OpenLDAP logs all information to rsyslog's local4 facility. Add the following line to /etc/rsyslog.conf or /etc/rsyslog.d/ldap.conf: local4.* /var/log/ldap.log Restart the rsyslog service and check out this log. | {
"source": [
"https://serverfault.com/questions/324608",
"https://serverfault.com",
"https://serverfault.com/users/64308/"
]
} |
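A short sketch of that setup on CentOS 6 (the admin bind DN and password follow the question; the "stats" log level is just a sensible example). # Send OpenLDAP's local4 facility to its own file.
cat > /etc/rsyslog.d/ldap.conf << 'EOF'
local4.*    /var/log/ldap.log
EOF
service rsyslog restart
# Make sure slapd emits something worth logging.
ldapmodify -x -D "cn=admin,cn=config" -w secret << 'EOF'
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
EOF
tail -f /var/log/ldap.log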
324,883 | My question is about Virtual Machines and delivering their content over the servers connection to the internet. I have an Ec2 windows instance, and its network connection appears to be 100mbps If I was to be delivering content from that EC2 instance, is THAT my potential bottleneck? How does s3 differ, I am guess their is no real potential outbound bottleneck with s3? Note : I know s3 and their CDN would be better for static content, however I need to explore this situation for now. Our HTML pages need to access a server side page via AJAX, and because there is no bombproof work around for this at the moment our content and our server needs to be on exactly the same domain, so it rules out using S3. Bandwidth needed : I am not sure, we could have up to 100 users downloading videos at any time, probably no more. Videos can be up to 5mb each, but they would view up to 20. | I can't speak for Windows instances, but I will presume that their base characteristics are fairly similar to Linux instances. Your estimate for bandwidth usage is 100 simultaneous video downloads (I am not sure if you mean downloading the file or streaming the video - I will assume the latter). If we take a stream rate of 512kbps, you need about 51Mbit/s or 6.5MB/s. EC2 instances differ in their I/O performance (which includes bandwidth). There are 3 levels of I/O performance: low, moderate, and high. Keep in mind, though, that disk I/O (i.e. from EBS volumes) also is bandwidth dependent. You can only really consider bandwidth within the EC2 network (as it will be completely variable over the Internet). Some typical numbers to quantify 'low', 'medium', and 'high' (different sources quote different numbers for theoretical values, so they might not be completely accurate). High: Theoretical: 1Gbps = 125MB/s;
Realistic ( source ): 750Mbps = 95MB/s Moderate: Theoretical: 250Mbps;
Realistic ( source, p57 ): 80Mbps = 10MB/s Low: Theoretical: 100Mbps;
Realistic (from my own tests): 10-15Mbps = 1-2MB/s (There is actually a 'very high' level as well (10Gbps theoretical) but that applies only to cluster compute instances only). A further point of mention is the degree of variation. On smaller instances, there is more variability in performance as the physical components are shared between more virtual machines. Regardless, you can expect around +/-20% variation in your performance (sources: 1 , 2 , 3 ). In your case (as per the assumptions/calculations at the top), you may need peak bandwidth of 13MB/s (double 6.5MBps, since disk I/O is also network limited). If you are transferring lower bandwidth content, you should be able to use an instance with 'moderate' I/O performance (see the instance types page ), if your calculations result in a higher bandwidth requirement, you will need an instance with 'high' I/O performance. Simply streaming the data should not be CPU or memory bound, but sustaining 100 simultaneous connections will probably require at least a medium sized instance - and if bandwidth is a concern, based on the above, a large instance would be a safer bet). I would recommend benchmarking the servers you launch to see if they meet your (calculated) needs. Launch two instances (of the same type) and run iperf on each using the instances' private IP addresses - you will need to open port 5001 in your security group if you run it with the default settings). Additionally, most tests outside of the EC2 network show results of between 80-130Mbps (large instances) - although such numbers are not necessarily meaningful. A CDN would be better suited to your needs, if your setup permits it. S3 appear to have a limit around 50MB/s for bandwidth (at least from a single instance) as per this article , but that is higher than what you should require (S3 does not support streaming). Cloudfront would be better suited to your task (as it is designed as a CDN) and supports 1000Mbps=125MB/s by default ( source ) with higher bandwidth available on request and can stream content as well) | {
"source": [
"https://serverfault.com/questions/324883",
"https://serverfault.com",
"https://serverfault.com/users/54583/"
]
} |
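To run the iperf benchmark mentioned above, start a server on one instance and point a client at its private IP (10.0.0.12 is a placeholder; TCP 5001 must be open in the security group). # On instance A:
iperf -s
# On instance B:
iperf -c 10.0.0.12 -t 30 -P 4    # 30-second test, 4 parallel streams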
324,886 | having an issue trying to segment my network. Currently, our LAN resides on the 10.1.x.x segment, the first x varying based on qeuipment type, and the second varying on the number of the particular piece of equipment of that type. What we would like to do is set up a wireless access point on the network, with an IP of 192.168.1.1, and a dhcp range of 2-100. When trying to enter the default gateway and DNS settings on the access point, it tells me this can not be done as they are on a different network sefment. We have previously managed to do this with a netgear router, but can't for the life of us remember how we did it. The access point we are using is a TP-LINK TL-WA801ND. I've had a search around different places for an answer, but so far have been fairly uncsucessful. Can any one point me in the right direction? | I can't speak for Windows instances, but I will presume that their base characteristics are fairly similar to Linux instances. Your estimate for bandwidth usage is 100 simultaneous video downloads (I am not sure if you mean downloading the file or streaming the video - I will assume the latter). If we take a stream rate of 512kbps, you need about 51Mbit/s or 6.5MB/s. EC2 instances differ in their I/O performance (which includes bandwidth). There are 3 levels of I/O performance: low, moderate, and high. Keep in mind, though, that disk I/O (i.e. from EBS volumes) also is bandwidth dependent. You can only really consider bandwidth within the EC2 network (as it will be completely variable over the Internet). Some typical numbers to quantify 'low', 'medium', and 'high' (different sources quote different numbers for theoretical values, so they might not be completely accurate). High: Theoretical: 1Gbps = 125MB/s;
Realistic ( source ): 750Mbps = 95MB/s Moderate: Theoretical: 250Mbps;
Realistic ( source, p57 ): 80Mbps = 10MB/s Low: Theoretical: 100Mbps;
Realistic (from my own tests): 10-15Mbps = 1-2MB/s (There is actually a 'very high' level as well (10Gbps theoretical) but that applies only to cluster compute instances only). A further point of mention is the degree of variation. On smaller instances, there is more variability in performance as the physical components are shared between more virtual machines. Regardless, you can expect around +/-20% variation in your performance (sources: 1 , 2 , 3 ). In your case (as per the assumptions/calculations at the top), you may need peak bandwidth of 13MB/s (double 6.5MBps, since disk I/O is also network limited). If you are transferring lower bandwidth content, you should be able to use an instance with 'moderate' I/O performance (see the instance types page ), if your calculations result in a higher bandwidth requirement, you will need an instance with 'high' I/O performance. Simply streaming the data should not be CPU or memory bound, but sustaining 100 simultaneous connections will probably require at least a medium sized instance - and if bandwidth is a concern, based on the above, a large instance would be a safer bet). I would recommend benchmarking the servers you launch to see if they meet your (calculated) needs. Launch two instances (of the same type) and run iperf on each using the instances' private IP addresses - you will need to open port 5001 in your security group if you run it with the default settings). Additionally, most tests outside of the EC2 network show results of between 80-130Mbps (large instances) - although such numbers are not necessarily meaningful. A CDN would be better suited to your needs, if your setup permits it. S3 appear to have a limit around 50MB/s for bandwidth (at least from a single instance) as per this article , but that is higher than what you should require (S3 does not support streaming). Cloudfront would be better suited to your task (as it is designed as a CDN) and supports 1000Mbps=125MB/s by default ( source ) with higher bandwidth available on request and can stream content as well) | {
"source": [
"https://serverfault.com/questions/324886",
"https://serverfault.com",
"https://serverfault.com/users/95433/"
]
} |
325,200 | Possible Duplicate: Switch to IPv6 and get rid of NAT? Are you kidding? I'm thinking about the way that in IPv4 most of the time you have a single point to configure a firewall on, mainly your router, but if everybody has a Globally Accessible IP Address, doesn't that mean that each computer user is basically responsible for managing their own firewall? (I mean I'll admit the same is true when using a public wifi access point, but still...) | IPv6 gets rid of NAT, which has certainly been a large part of avoiding accidental exposure of services to the internet from internal hosts.. so in that way, yes, it's a change to how most everyone is doing things. However, it doesn't at all mean that you won't still have a central firewall at the network edge - the change is simply that it'll be acting as a pure firewall instead of a firewall/NAT device. It'll just be up to the people managing those firewalls to make sure to avoid accidentally exposure of services; fire up the deny rules! Getting rid of NAT is a big change to network security practices, and there will certainly be times before too long that we hear about some accidental information exposure breaches due to misconfigured firewalls and IPv6. But NAT has always been a hack, and getting the firewalls out of the business of tracking all of those connections and fake connections for stateless protocols and port translations will be a good thing in the long run - less complexity sounds good to me! | {
"source": [
"https://serverfault.com/questions/325200",
"https://serverfault.com",
"https://serverfault.com/users/1980/"
]
} |
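A minimal sketch of those deny rules for a Linux edge firewall forwarding IPv6 traffic; the addresses and the single allowed service are placeholders, and a production ruleset needs more care (ICMPv6 in particular must not be blocked wholesale). ip6tables -P FORWARD DROP
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -p icmpv6 -j ACCEPT                             # path MTU discovery and errors
ip6tables -A FORWARD -d 2001:db8::10 -p tcp --dport 443 -j ACCEPT    # the one published service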
325,467 | Using OpenSSL from the command line in Linux, is there some way to examine a key (either public or private) to determine the key size? | openssl rsa -in private.key -text -noout The top line of the output will display the key size. For example: Private-Key: (2048 bit) To view the key size from a certificate: $ openssl x509 -in public.pem -text -noout | grep "RSA Public Key"
RSA Public Key: (2048 bit) | {
"source": [
"https://serverfault.com/questions/325467",
"https://serverfault.com",
"https://serverfault.com/users/95719/"
]
} |
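A standalone public key file can be checked the same way (file names are examples; the exact wording of the first output line varies between OpenSSL versions). openssl rsa -pubin -in public.pem -text -noout | head -n 1
# For an EC key:
openssl ec -in private_ec.key -text -noout | head -n 2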
325,879 | The developers on my team want a shared development machine to use instead of running the software on their own computers. Their rationale seems to be that we are only targeting Fedora/CentOS/Red Hat for release and they use Macs. I tried to explain to them that for what we are doing, they will all need root on the server and one of them could quite easily do something like sudo rm -rf / (even if by accident), thus hosing everyone's work that's not checked into source control. I told them to download CentOS and use VirtualBox to run the code. So I guess the question here is who's in the right? From my perspective the issues of sharing a dev server outweigh the minor if any inconveniences of running CentOS on their machines. | To expound on my comment above, there should be absolutely no need for your devs to need root access in your dev environment, shared or otherwise. With a combination of well-thought-out file permissions supplemented by a handful of sudo rules, they should be able to do whatever is it they need to do. Regarding a shared dev environment versus each developer having their own environment: I'm with your developers here. With each developer managing their own dev environment, you're ending up with umpteen completely different configs, software revisions, file permission structures, daemon versions, kernel versions, etc. That is a nightmare for bug squashing. They recognize that they need a stable, well-managed development environment. They're absolutely right, so give it to them! | {
"source": [
"https://serverfault.com/questions/325879",
"https://serverfault.com",
"https://serverfault.com/users/65831/"
]
} |
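As a sketch of what "a handful of sudo rules" can look like, granting a developers group only the commands they actually need on the shared box (group, commands and paths are examples; CentOS 6 sudo already includes /etc/sudoers.d). cat > /etc/sudoers.d/developers << 'EOF'
# Members of "developers" may restart the app services, nothing else.
%developers ALL=(ALL) NOPASSWD: /sbin/service httpd restart, /sbin/service myapp restart
EOF
chmod 440 /etc/sudoers.d/developers
visudo -c    # syntax-check before relying on it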
325,880 | I administer multiple Windows servers. One issue I have is that the time gets out of sync, and this becomes noticeable over a week or so. For example, my app server system time could be different from my database server time. What can I do to automatically synchronize the timing across multiple servers? | To expound on my comment above, there should be absolutely no need for your devs to need root access in your dev environment, shared or otherwise. With a combination of well-thought-out file permissions supplemented by a handful of sudo rules, they should be able to do whatever is it they need to do. Regarding a shared dev environment versus each developer having their own environment: I'm with your developers here. With each developer managing their own dev environment, you're ending up with umpteen completely different configs, software revisions, file permission structures, daemon versions, kernel versions, etc. That is a nightmare for bug squashing. They recognize that they need a stable, well-managed development environment. They're absolutely right, so give it to them! | {
"source": [
"https://serverfault.com/questions/325880",
"https://serverfault.com",
"https://serverfault.com/users/99316/"
]
} |
325,955 | I've been trying to configure email to forward to Gmail, using Postfix to relay email to smtp.gmail.com. However, I'm failing to get it to authenticate with smtp.gmail.com, which is a rather vital prerequisite to getting anything working… The mail logs show only: Oct 29 15:50:14 gsnedders-1 postfix/master[6596]: daemon started -- version 2.7.1, configuration /etc/postfix
Oct 29 15:50:19 gsnedders-1 postfix/pickup[6598]: EBA1F78750: uid=1000 from=<gsnedders>
Oct 29 15:50:19 gsnedders-1 postfix/cleanup[6603]: EBA1F78750: message-id=<[email protected]>
Oct 29 15:50:19 gsnedders-1 postfix/qmgr[6599]: EBA1F78750: from=<[email protected]>, size=324, nrcpt=1 (queue active)
Oct 29 15:50:19 gsnedders-1 postfix/cleanup[6603]: F2D557874F: message-id=<[email protected]>
Oct 29 15:50:19 gsnedders-1 postfix/local[6605]: EBA1F78750: to=<[email protected]>, orig_to=<me>, relay=local, delay=0.04, delays=0.03/0.02/0/0, dsn=2.0.0, status=sent (forwarded as F2D557874F)
Oct 29 15:50:19 gsnedders-1 postfix/qmgr[6599]: F2D557874F: from=<[email protected]>, size=454, nrcpt=1 (queue active)
Oct 29 15:50:19 gsnedders-1 postfix/qmgr[6599]: EBA1F78750: removed
Oct 29 15:50:20 gsnedders-1 postfix/smtp[6606]: warning: SASL authentication failure: No worthy mechs found
Oct 29 15:50:20 gsnedders-1 postfix/smtp[6606]: F2D557874F: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.157.108]: no mechanism available And the postfix config is: relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd
smtp_sasl_security_options = noanonymous
smtp_tls_eccert_file =
smtp_tls_eckey_file =
smtp_tls_security_level = may
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtpd_tls_received_header = yes
tls_random_source = dev:/dev/urandom
smtpd_tls_security_level = may | Ah-ha! Installing the libsasl2-modules package solved the problem. | {
"source": [
"https://serverfault.com/questions/325955",
"https://serverfault.com",
"https://serverfault.com/users/99343/"
]
} |
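Putting the fix together with the configuration from the question, roughly as follows (Debian/Ubuntu package name; the Gmail address and app password are placeholders). apt-get install libsasl2-modules
# Credentials for the relay, matching smtp_sasl_password_maps in main.cf.
cat > /etc/postfix/sasl/passwd << 'EOF'
[smtp.gmail.com]:587 [email protected]:app-password
EOF
postmap /etc/postfix/sasl/passwd
chmod 600 /etc/postfix/sasl/passwd /etc/postfix/sasl/passwd.db
service postfix restart
tail -f /var/log/mail.log    # look for "status=sent"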
325,997 | What exactly is an SMTP relay, and what exactly is an SMTP smarthost? Can someone give me a brief description of each, including how they relate to one another? | In general, both are mail relays, and a mail relay is just a server that passes mail to another mail server, via SMTP, rather than a server that offers mailbox service to end users via POP3/IMAP/HTTP. A smarthost is a mail relay that is specialized to deal with outbound email. If you have a private LAN, you may want to control the flow of outbound email, and prevent "any old server" from being able to deliver email to the internet, or perhaps your internal systems only resolve internal DNS and can't resolve hosts or domain MX records for systems "out there on the interwebs". In a case like this, you might designate a single host as the Smarthost. All the other machines would in turn blindly send any outbound email to the Smarthost. The smarthost would have the ability to resolve hosts and domain MX records on the internet, and would be permitted by firewall/acl/iptables/whatever to communicate to other hosts on port 25, or port 587, to deliver outbound email. The other common use of a mail relay is with inbound email. If you run a large organization, with thousands or hundreds of thousands of users, writing email to block storage can consume a tremendous amount of time and resources. If you only had 1 server to do this, it would quickly become bogged down. If you have multiple servers, serving a subset of users each, you would have to change each user's email domain to be distinct for that user. Those workarounds become fairly inconvenient rather quickly. The solution for this is a single MX record for your domain, which might resolve (by load balancing, or DNS round-robining) to multiple mail-relay servers. These mail-relays would be configured to accept email for any users in the domain, while filtering SPAM, then it would consult it's own policies/maps to determine to which mailbox server the email needs to be forwarded to to reach the end-user's mailbox. userA => server1, userB => server2, etc. This allows the servers that do the heavy lifting of receiving email from the internet for all the users to rapidly forward them off, while the mailbox servers having a lower individual volume, are able to incur the resource penalties of writing messages to disk, without becoming a bottleneck. | {
"source": [
"https://serverfault.com/questions/325997",
"https://serverfault.com",
"https://serverfault.com/users/79496/"
]
} |
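To illustrate the smarthost side of this with Postfix (host names and networks are examples). # On each internal machine: hand all outbound mail to the designated smarthost.
postconf -e 'relayhost = [smarthost.example.internal]:25'
service postfix reload
# On the smarthost itself: only relay for the internal networks.
postconf -e 'mynetworks = 127.0.0.0/8 10.0.0.0/8'
service postfix reload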
326,044 | With the telnet program one can connect to any TCP port on any host, but is there a way to listen on some port on current host? i.e. 1) on host a:
telnet listen 12345 2) on host b:
telnet host_a 12345 I don't want any service behind the listening side, just connection and whatever typed to be transferred as is both ways. I know I can already do the 2), but is there any way to achieve the 1)? I'm interested in both Windows and Linux solutions. | The usual tool for this is something called netcat . It's available in most Linux distros, and may even be installed by default in some (the command is nc ). There are even ports for Windows, but nearly every antivirus package on the planet considers it deeply suspicious due to its use in malware which makes it hard to download and use. | {
"source": [
"https://serverfault.com/questions/326044",
"https://serverfault.com",
"https://serverfault.com/users/99368/"
]
} |
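Mapping the example from the question onto nc; note that the listen flag differs between the OpenBSD and the traditional variants. # 1) on host a: listen on TCP 12345
nc -l 12345        # OpenBSD netcat
nc -l -p 12345     # traditional/GNU netcat
# 2) on host b: connect; whatever is typed is relayed as-is in both directions
nc host_a 12345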
326,313 | I have a Linux machine and I have a script called load_info and the script is located in /var I would like to be able to run the load_info script from another directory without having to prefix the command with "./" For example if I was in /root I want to be able to type load_info (without ./) and have this execute the /var/load_info script. So how do I make it so that if I cd /root , I can type load_info and have it run the /var/load_info script. | Your shell will search the paths listed in the $PATH environment variable for the typed command if it's not fully qualified (e. g. vi instead of /usr/bin/vi ). You can easily add another path to your $PATH variable by appending a line in your ~/.profile or ~/.bashrc : export PATH="$PATH:/path/to/your/scripts" As kind of best practice you should also save your script under /usr/local/bin or /usr/local/sbin , see hier(7) . | {
"source": [
"https://serverfault.com/questions/326313",
"https://serverfault.com",
"https://serverfault.com/users/90487/"
]
} |
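After editing the file, reload it in the current shell and confirm the lookup; alternatively a symlink into a directory already on $PATH does the same job (paths as in the question). source ~/.bashrc
command -v load_info        # should print the resolved path
# Symlink alternative:
chmod +x /var/load_info
ln -s /var/load_info /usr/local/bin/load_info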
327,416 | When sending large email to a new CentOS6 server running Postfix as the MTA, the following message is returned: tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 552 552 5.3.4 Error: message file too big (state 18) I found the following suggestion , but am unclear as to where it needs to be added in the main.cf file: This was caused by Postfix and it's limit on not only messages but mailbox sizes. I had to add this setting in /etc/postfix/main.cf : message_size_limit = 31457280 How can the maximum mail size (including attachments) be increased in Postfix? | Add it anywhere in main.cf, it's not relevant :) But it's good to keep directives grouped in some logical manner, it is easier for maintance According to official postfix documentation: message_size_limit (default: 10240000) The maximal size in bytes of a message, including envelope information. Note: be careful when making changes. Excessively small values will result in the loss of non-delivery notifications, when a bounce message size exceeds the local or remote MTA's message size limit. Additionally, the default mailbox size of 50M may prevent mail from being delivered, especially after increasing the permitted message size. To increase maximum per user mailbox size, add mailbox_size_limit = <size in bytes> to main.cf. Additionally, as Ian Sparkes commented, if you are using a virtual mailbox configuration, you might need to set virtual_mailbox_limit = <size_in_bytes> . | {
"source": [
"https://serverfault.com/questions/327416",
"https://serverfault.com",
"https://serverfault.com/users/2321/"
]
} |
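A sketch of applying and verifying the limits with postconf (the figures are only the examples from the answer). postconf -e 'message_size_limit = 31457280'      # ~30 MB
postconf -e 'mailbox_size_limit = 104857600'     # keep this at least as large
service postfix reload
# Confirm the running values.
postconf message_size_limit mailbox_size_limit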
327,738 | We have a couple of tower servers in a small server room. The carpet is wet as a result of the cooler and no-one else really seems concerned about this but I'm not too happy. I'm only a lowly developer, but I seem to be more concerned than the hardware guys! Is this dangerous? What's the worst that could happen? My instinct says water + (electric * allOfOurData) = dangerous. | Carpet is a big 'NO! NO!!' for a room hosting equipment of high value, because of the fire risk. Water is too, for obvious reasons. You should call maintenance immediately and have them repair the drainage system. The water could really cause problems; inform your superiors right away and draw their attention to the matter. | {
"source": [
"https://serverfault.com/questions/327738",
"https://serverfault.com",
"https://serverfault.com/users/98326/"
]
} |
328,101 | I have binary files that should be text (they're exported logs), but I can't open it with less (it looks ugly - it looks like a binary file). I found that I could open it with vi and I can cat it (you'll see the actual logs), but what I'd really like to do is grep through them (without having to open up each one with vi and then perform a search). Is there a way for me to do that? | You can use grep anyway to search through the file - it does not really care if the input file is really text or not. From 'man grep': -a, --text
Process a binary file as if it were text; this is equivalent to the --binary-files=text option.
--binary-files=TYPE
If the first few bytes of a file indicate that the file contains binary data, assume that the file is
of type TYPE. By default, TYPE is binary, and grep normally outputs either a one-line message saying
that a binary file matches, or no message if there is no match. If TYPE is without-match, grep assumes
that a binary file does not match; this is equivalent to the -I option. If TYPE is text, grep
processes a binary file as if it were text; this is equivalent to the -a option. Warning: grep
--binary-files=text might output binary garbage, which can have nasty side effects if the output is a
terminal and if the terminal driver interprets some of it as commands. Please mark the words of caution at the end of the second paragraph. You might want to redirect the results from grep into a new file and examine this with vi / less. | {
"source": [
"https://serverfault.com/questions/328101",
"https://serverfault.com",
"https://serverfault.com/users/100083/"
]
} |
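In practice that usually looks like the following; strings(1) is an alternative when only the printable parts are interesting (file name and pattern are placeholders). # Treat the file as text; -a suppresses the "Binary file ... matches" behaviour.
grep -a "ERROR" exported.log | less
# Or extract printable strings first, then search those.
strings exported.log | grep "ERROR"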
328,307 | What would be the best way to connect two freestanding farm buildings onto the same network? The total cable length would be less than 1000 feet. Cat6 is listed as having a max length of about 330', which is too short. What other options are out there? There is no line of sight due to a slight hill between the two buildings, so a Wi-Fi boost would probably run into problems too. Edit: Realized I didn't say how much bandwidth I would need. The internet connection is 10/3, and internal transfers won't be huge. Anything >= 10 Mbps would be plenty. | In a word: fiber. Your solution could be as simple as two media converters and 1000 feet of multi-mode fiber with matching connectors, at a cost of under $500 total for actual networking components. You would need to plan the run carefully to prevent the fiber from being damaged during or after installation. Compared to copper, fiber is easier to break and requires very expensive tools to fix. Ordinarily, one would consult a professional installer. | {
"source": [
"https://serverfault.com/questions/328307",
"https://serverfault.com",
"https://serverfault.com/users/97385/"
]
} |
328,363 | I am trying to scp a file from a server to my local machine, but it is giving me this error: protocol error: unexpected <newline> This is my syntax: scp user@server:/path/to/file . It did not work on this server, but then I tried the same command on my other server, so I can only assume that it is something wrong with my server and not the syntax of the scp command. Any ideas? | One of your login scripts (.bashrc/.cshrc/etc.) is outputting data to the terminal when it shouldn't be. This is causing scp to error when it is connecting and getting ready to copy as it starts receiving extra data it doesn't expect. Remove output that is generated here. You can check if your terminal is interactive and only output text by using the following code in a bashrc. Something equivalent exists for other shells as well: if shopt -q login_shell; then
[any code that outputs text here]
fi | {
"source": [
"https://serverfault.com/questions/328363",
"https://serverfault.com",
"https://serverfault.com/users/98295/"
]
} |
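A variant of the same guard keyed on interactivity rather than login shells is useful when the output comes from .bashrc, which some distributions source even for scp/sftp sessions. This is a sketch; place it above whatever prints output. # At the top of ~/.bashrc:
case $- in
    *i*) ;;        # interactive shell: carry on
    *)   return ;; # non-interactive (scp, sftp, rsync): print nothing
esac
echo "Welcome back, $(whoami)"    # anything chatty goes below the guard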
328,380 | I'm trying to redirect the following URL Http://example.com/category/something To redirect to Http://something.example.com/category/something Where "something" could be anything. I've already set up nginx and Dns for wild card and confirmed that that works. | One of your login scripts (.bashrc/.cshrc/etc.) is outputting data to the terminal when it shouldn't be. This is causing scp to error when it is connecting and getting ready to copy as it starts receiving extra data it doesn't expect. Remove output that is generated here. You can check if your terminal is interactive and only output text by using the following code in a bashrc. Something equivalent exists for other shells as well: if shopt -q login_shell; then
[any code that outputs text here]
fi | {
"source": [
"https://serverfault.com/questions/328380",
"https://serverfault.com",
"https://serverfault.com/users/32999/"
]
} |
328,395 | I found this on the internet, while putting up a FTP server in FreeBSD. Putting nologin into /etc/shells potentially creates a back door by
which those accounts can be used with FTP. (see: http://osdir.com/ml/freebsd-questions/2005-12/msg02392.html ) Can anybody explain why this is? And why taking a copy of the nologin and putting that one in the /etc/shells resolves this problem? | /etc/shells contains a list of binaries that the system considers (unrestricted) shells. That means that any user that has configured one of those binaries as their shell is assumed to have full access to the system (meaning they can execute any command, provided they have the appropriate permission). The most direct result is that they can use chsh to change their configured shell. If a user has a shell configured that isn't in this list, then the system assumes that he's somehow restricted. In the case of chsh it means that the user cannot change that value. Other programs might query that list and apply similar restrictions. So by putting nologin in /etc/shells you effectively say "any user that has nologin as its shell is considered a full, unrestricted user". That's almost certainly the exact opposite of what nologin was meant to say . | {
"source": [
"https://serverfault.com/questions/328395",
"https://serverfault.com",
"https://serverfault.com/users/100193/"
]
} |
328,969 | Possible Duplicate: How long does it take for an A record to propagate? I recently changed nameservers and it has been 24 hours since.
Some of my visitors are complaining they are still viewing the old site while some are already seeing the new site. Is there any way to speed-up the DNS propagation without updating the hosts file of each of my visitors? Are there any best practices when it comes to changing nameservers to minimize this problem? | DNS records doesn't propagate in the sense that they aren't "pushed" from your server to other resolvers. What actually happens is that when other DNS servers look up your domain, they cache the record for X seconds so that they don't have to do another lookup for subsequent requests. X seconds should be determined by the TTL value on the record when it was retrieved from your name server. If you've already changed the address there's nothing you can do but sit and wait. If you had planned this in advance, you could have lowered the TTL value. Some larger DNS resolvers cache longer than the TTL, which is a violation of the relevant RFCs (but they don't care). If you can track this issue down to a few name servers, you can email the operators and ask them to invalidate the cache for your zone so that they'll stop using the cached (old) record. Honestly, though, unless this goes on for an extended period of time, it's probably just as well that you sit tight and wait and plan a better migration for next time, since the damage is already done. | {
"source": [
"https://serverfault.com/questions/328969",
"https://serverfault.com",
"https://serverfault.com/users/75899/"
]
} |
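You can watch this yourself by querying a resolver directly; the second column of the answer is the number of seconds the record will remain cached there (domain and resolver addresses are placeholders). # Ask a public resolver; repeat the query and the TTL counts down.
dig +noall +answer www.example.com @8.8.8.8
# Ask your own authoritative server to see the TTL new lookups will pick up.
dig +noall +answer www.example.com @ns1.example.com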
329,287 | we are using kvm/qemu with qcow2-images for our virtual machines. qcow2 has this nice feature where the image file only allocates the actually needed space by the virtual-machine. but how do i shrink back the image file, if the virtual machine's allocated space gets smaller? example: 1.) i create a new image with qcow2 format, size 100GB 2.) i use this image to install ubuntu. installation needs about 10 gb, the image-file grows up to about 10GB. nothing unexpected so far. 3.) i fill up the image with about 40 GB of additional data. the image-file grows up to 50GB. i am ok with that :-) 4.) this is where it gets strange: i delete all of the 40GB data on the image, but the image-size still eats up 50GB. question: how do i free up that 40GB of data and shrink the image to the only needed 10 GB? thanks in advance,
berni | virt-sparsify can do all this with less hassle on your part: http://libguestfs.org/virt-sparsify.1.html | {
"source": [
"https://serverfault.com/questions/329287",
"https://serverfault.com",
"https://serverfault.com/users/36256/"
]
} |
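A sketch of both routes (file names are examples). The zero-fill step runs inside the running guest; everything on the host should only be done after the guest is shut down. # Option 1: virt-sparsify from libguestfs-tools.
virt-sparsify vmdisk.img vmdisk-sparse.img
# Option 2, manual: inside the guest, zero the free space first...
dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile; sync
# ...then on the host re-copy the image so the zeroed clusters are dropped.
qemu-img convert -O qcow2 vmdisk.img vmdisk-compact.img
mv vmdisk-compact.img vmdisk.img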
329,585 | I've launched my first instance, and am using it as a web server. I see that it has a public DNS (a public URL), e.g.: ec2-123-45-6-789.compute-1.amazonaws.com I can successfully go to this server in my browser, hit it via cURL, etc. I want to use this web server for a back-end service in an app I'm building, so I placed this URL in my app's config, and it works great. But when I manually stop and re-start my instance, I see that the public DNS changes! I've read that this happens when you explicitly stop and re-start, but doesn't happen if you just "reboot". I don't plan on explicitly stopping and re-starting this server ever, but my question is: will this public DNS ever change on its own for any reason? E.g. if the machine abnormally crashes, or whatever. In other words, is it safe to ship an app that's wired to this URL? | The public DNS name always matches the public IP address. The public IP address stays the same for an instance until it is terminated or stopped. A reboot does not change the public IP address. If an EC2 instance is in a VPC, then it will retain the same public IP address across a stop and start. If an EC2 instance that is not in a VPC is stopped and then started started again, it will probably receive a different public IP address. Instances can fail. When you start a new instance to replace a failed or terminated instance, it will probably receive a different public IP address. Because instances can fail, and because you may want to change the size of an instance (with a stop/start) it is not recommended to "ship an app that's wired to [the public IP address]" (or DNS name). Once your instance is stopped/terminated/failed another user could get that IP address assigned to their instance and all your traffic would go to them. It is recommended to use Elastic IP Addresses to associate public services with your instance. You get to keep the Elastic IP address and you can assign it to any instance you want over time, even if it's the same instance after a stop/start. Each Elastic IP Address comes with a public DNS name, but you would probably be better off mapping your own hostname to the Elastic IP address so that the name makes more sense to humans. Here's a guide to Elastic IP Addresses: http://aws.amazon.com/articles/1346 Here's an article I wrote that talks about the differences between rebooting and stop/start of an instance: Rebooting vs. Stop/Start of Amazon EC2 Instance http://alestic.com/2011/09/ec2-reboot-stop-start Here's an article I wrote that provides a reason you may want to stop/start an instance even though you don't think you will today: Moving an EC2 Instance to a Larger Size http://alestic.com/2011/02/ec2-change-type | {
"source": [
"https://serverfault.com/questions/329585",
"https://serverfault.com",
"https://serverfault.com/users/100538/"
]
} |
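With the current AWS CLI the Elastic IP workflow looks roughly like this (instance and allocation IDs are placeholders; EC2-Classic vs. VPC details differ slightly). aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc1234
# Point your own DNS record at the Elastic IP instead of the
# ec2-*.compute-1.amazonaws.com name, which changes with the underlying address.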
329,592 | I looked at the nginx documentation and it still confuses me utterly. How does try_files work? Here is what the documentation says: From NginxHttpCoreModule try_files syntax: try_files path1 [path2] uri default: none context: server, location availability: 0.7.27 Checks for the existence of files in order, and returns the first file
that is found. A trailing slash indicates a directory - $uri /. In the
event that no file is found, an internal redirect to the last
parameter is invoked. The last parameter is the fallback URI and must exist, or else an internal error will be raised. Unlike rewrite, $args are not automatically preserved if the fallback is not
a named location. If you need args preserved, you must do so
explicitly: I don't understand how it checks the paths and what if I don't want an internal error but have it resume the rest of the path in an effort to find another file? If I want to try a cached file at /path/app/cache/url/index.html and if it fails to try /path/app/index.php how would I write that? If I wrote: try_files /path/app/cache/ $uri
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
fastcgi_param SCRIPT_FILENAME $document_root/index.php; I have index index.php index.html index.htm; . When I visit /urlname , will it try checking /path/app/cache/urlname/index.php then /path/app/cache/urlname/index.html ? If we ignore everything after try_files is it possible for try_files to check the cache folder? I have been trying and have failed. | try_files tries the literal path you specify in relation to the defined root directive and sets the internal file pointer. If you use for instance try_files /app/cache/ $uri @fallback; with index index.php index.html; then it will test the paths in this order: $document_root/app/cache/index.php $document_root/app/cache/index.html $document_root$uri before finally internally redirecting to the @fallback named location. You can also use a file or a status code ( =404 ) as your last parameter but if using a file it must exist . You should note that try_files itself will not issue an internal redirect for anything but the last parameter. Meaning you cannot do the following: try_files $uri /cache.php @fallback; as that will cause nginx to set the internal file pointer to $document_root/cache.php and serve it, but since no internal redirect takes place the locations aren't re-evaluated and as such it will be served as plain text. (The reason it works with PHP files as the index is that the index directive will issue an internal redirect) | {
"source": [
"https://serverfault.com/questions/329592",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
329,795 | I have a number of users who are connecting to MySQL over a VPN, so we have grants along the lines of grant select on foo.* to user@ipaddress1 and so on. This week, the IP used on the VPN changed to address2 , so user@ipaddress1 grants no longer work. What's the best way to handle updating the user and grant information in MySQL to reflect this change? Note that the grants are a serious mess, because some users are excluded from particular columns in particular tables, so we've had to do grants around the excluded objects. | Apparently, the right way to do this is: RENAME USER user@ipaddress1 TO user@ipaddress2; For more details see the RENAME USER Statement section This takes care of all the grants. | {
"source": [
"https://serverfault.com/questions/329795",
"https://serverfault.com",
"https://serverfault.com/users/90299/"
]
} |
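It is worth confirming the result afterwards; the user and hosts below are placeholders. RENAME USER carries all the grants across, so nothing needs re-granting. mysql -u root -p -e "RENAME USER 'appuser'@'ipaddress1' TO 'appuser'@'ipaddress2';"
mysql -u root -p -e "SHOW GRANTS FOR 'appuser'@'ipaddress2';"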
329,845 | I run a particular program on linux which sometimes crashes. If you open it quickly after that, it listens on socket 49201 instead of 49200 as it did the first time. netstat reveals that 49200 is in a TIME_WAIT state. Is there a program you can run to immediately force that socket move out of the TIME_WAIT state? | /etc/init.d/networking restart Let me elaborate. Transmission Control Protocol (TCP) is designed to be a bidirectional, ordered, and reliable data transmission protocol between two end points (programs). In this context, the term reliable means that it will retransmit the packets if it gets lost in the middle. TCP guarantees the reliability by sending back Acknowledgment (ACK) packets back for a single or a range of packets received from the peer. This goes same for the control signals such as termination request/response. RFC 793 defines the TIME-WAIT state to be as follows: TIME-WAIT - represents waiting for
enough time to pass to be sure
the remote TCP received the acknowledgment of its connection
termination request. See the following TCP state diagram: TCP is a bidirectional communication protocol, so when the connection is established, there is not a difference between the client and the server. Also, either one can call quits, and both peers needs to agree on closing to fully close an established TCP connection. Let's call the first one to call the quits as the active closer, and the other peer the passive closer. When the active closer sends FIN, the state goes to FIN-WAIT-1. Then it receives an ACK for the sent FIN and the state goes to FIN-WAIT-2. Once it receives FIN also from the passive closer, the active closer sends the ACK to the FIN and the state goes to TIME-WAIT. In case the passive closer did not received the ACK to the second FIN, it will retransmit the FIN packet. RFC 793 sets the TIME-OUT to be twice the Maximum Segment Lifetime, or 2MSL. Since MSL, the maximum time a packet can wander around Internet, is set to 2 minutes, 2MSL is 4 minutes.
Since there is no ACK to an ACK, the active closer can't do anything but to wait 4 minutes if it adheres to the TCP/IP protocol correctly, just in case the passive sender has not received the ACK to its FIN (theoretically). In reality, missing packets are probably rare, and very rare if it's all happening within the LAN or within a single machine. To answer the question verbatim, How to forcibly close a socket in TIME_WAIT?, I will still stick to my original answer: /etc/init.d/networking restart Practically speaking, I would program it so it ignores TIME-WAIT state using SO_REUSEADDR option as WMR mentioned. What exactly does SO_REUSEADDR do? This socket option tells the kernel
that even if this port is busy (in the TIME_WAIT state), go ahead and
reuse it anyway. If it is busy, but
with another state, you will still get
an address already in use error. It
is useful if your server has been shut
down, and then restarted right away
while sockets are still active on its
port. You should be aware that if
any unexpected data comes in, it may
confuse your server, but while this
is possible, it is not likely. | {
"source": [
"https://serverfault.com/questions/329845",
"https://serverfault.com",
"https://serverfault.com/users/73953/"
]
} |
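For day-to-day debugging it is usually enough to inspect the socket and fix the program: setting SO_REUSEADDR before bind() lets the listener come back immediately. A rough sketch of the inspection side (port number from the question). # List sockets on port 49200 sitting in TIME-WAIT.
ss -tan state time-wait '( sport = :49200 or dport = :49200 )'
# tcp_tw_reuse only affects outgoing connections; it is not a fix for listeners.
sysctl net.ipv4.tcp_tw_reuse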