270,756
I want to tail -f my logs. However, I want to filter out everything that has the words: "ELB", "Pingdom", "Health"
I don't know about using awk instead of grep, but this works for me: tail -f file.log | grep -Ev '(ELB|Pingdom|Health)' EDIT: As dmourati and Caleb pointed out, you could also use egrep instead of grep -E for convenience. On some systems this will be a link to the same binary, on others a copy of it supplied by the grep package. Either way it lives as an alternative to the -E switch. However, according to the GNU grep man page: […] two variant programs egrep and fgrep are available. egrep is the same as grep -E. fgrep is the same as grep -F. Direct invocation as either egrep or fgrep is deprecated, but is provided to allow historical applications that rely on them to run unmodified. Since they are synonymous commands, it comes down to preference unless you don't have egrep at all. However, for forward compatibility it is recommended to use the grep -E syntax since the other method is officially deprecated.
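If you do want the awk variant the question hints at, a roughly equivalent sketch (the pattern simply matches any of the three words anywhere in a line, which is an assumption about how strict the filtering needs to be) would be: tail -f file.log | awk '!/ELB|Pingdom|Health/'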
{ "source": [ "https://serverfault.com/questions/270756", "https://serverfault.com", "https://serverfault.com/users/81082/" ] }
270,814
Is there anyway to extract a tar.gz file faster than tar -zxvf filenamehere ? We have large files, and trying to optimize the operation.
pigz is a parallel version of gzip. Although it only uses a single thread for decompression, it starts 3 additional threads for reading, writing, and check calculation. Your results may vary but we have seen significant improvement in decompression of some of our datasets. Once you install pigz, the tar file can be extracted with: pigz -dc target.tar.gz | tar xf -
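If your tar is GNU tar, you can also let it spawn pigz itself instead of building the pipe by hand; this is just a convenience sketch, assuming pigz is on the PATH: tar -I pigz -xf target.tar.gz (or the unambiguous long form, tar --use-compress-program=pigz -xf target.tar.gz).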
{ "source": [ "https://serverfault.com/questions/270814", "https://serverfault.com", "https://serverfault.com/users/65061/" ] }
271,380
I can check its value by cat /proc/sys/net/core/somaxconn , is it OK if I change it simply by echo 1024 > /proc/sys/net/core/somaxconn ?
Yes. Alternatively, you can use: sysctl -w net.core.somaxconn=1024 Add net.core.somaxconn=1024 to /etc/sysctl.conf for it to become permanent (be reapplied after booting).
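A quick sketch of the full sequence (paths are the stock ones; adjust if your distribution splits its settings into /etc/sysctl.d):
sysctl -w net.core.somaxconn=1024
echo 'net.core.somaxconn=1024' >> /etc/sysctl.conf
sysctl -p
The first command changes the running kernel, the second persists the setting, and sysctl -p re-reads the file so you can confirm the value took effect.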
{ "source": [ "https://serverfault.com/questions/271380", "https://serverfault.com", "https://serverfault.com/users/81970/" ] }
271,385
I've been tasked with setting up a 3-tier SharePoint farm. Two load balanced webservers Two applications servers A SQL server Everything is all set up and working with load balancing etc. etc. My question is what do I do with the application servers? Do I load balance the two application servers? Do I cluster them? Do I run certain services on each server? Do I have the same services running on each server and SharePoint automatically chooses a server? I'm not quite sure why we have two application servers. Currently I just have the same services running on each app server. Any help/tips/explanations would be much appreciated. Thanks, Jamie
{ "source": [ "https://serverfault.com/questions/271385", "https://serverfault.com", "https://serverfault.com/users/81971/" ] }
271,810
I can stop nginx server using nginx -s stop or nginx -s quit . What is the difference?
Quit is a graceful shutdown: nginx finishes serving the open connections before shutting down. Stop is a quick shutdown: nginx terminates immediately, even if it is in the middle of serving a connection. http://wiki.nginx.org/CommandLine
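For reference, these map onto signals sent to the nginx master process, so the same thing can be done with kill if the nginx binary is not handy (the pid file path below is an assumption; check the pid directive in your nginx.conf): kill -QUIT $(cat /var/run/nginx.pid) for the graceful quit, kill -TERM $(cat /var/run/nginx.pid) for the fast stop.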
{ "source": [ "https://serverfault.com/questions/271810", "https://serverfault.com", "https://serverfault.com/users/50001/" ] }
271,824
I'm sure this is very noobish, so forgive me. I'm trying to run a node.js server on port 8080 of my ubuntu 10.04. Here's the result of iptables -L on the server: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination And here's the result of nmap -p 8080 (edited the ip address because everything is or should be wide open) nmap 173.203.xxx.xxx -p 8080 -A Starting Nmap 5.00 ( http://nmap.org ) at 2011-05-19 22:52 PDT Interesting ports on xxx (173.203.xxx.xxx): PORT STATE SERVICE VERSION 8080/tcp closed http-proxy Why on earth is 8080 seen as closed? Adding this did not help: iptables -A OUTPUT -p tcp --dport 8080 -j ACCEPT iptables -A INPUT -p tcp --dport 8080 -j ACCEPT I'm really confused. I'll add this, in case it helps, but I don't know netstat -pan | grep 80 tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 16785/node tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 16471/apache2 I can access my regular website running off of port 80 on apache, but the node.js server is inaccessible from outside. Accessing it from the server itself works fine. So my question is: how would I go about debugging this? Could there be something else than iptables blocking some ports? How do I find out what it is? Any help much appreciated!
Thanks for adding the last netstat output, it really helped. You can't access node.js from outside because it is listening on localhost IP i.e 127.0.0.1. You need to configure node.js to listen on 0.0.0.0 so it will be able to accept connections on all the IPs of your machine. var http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World\n'); }).listen(8080, "0.0.0.0"); console.log('Server running at http://0.0.0.0:8080/');
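After changing the listen address and restarting node, a quick sanity check in the same style as the netstat output above is: netstat -ltnp | grep 8080 — the local address should now show 0.0.0.0:8080 rather than 127.0.0.1:8080 before you re-test with nmap from outside.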
{ "source": [ "https://serverfault.com/questions/271824", "https://serverfault.com", "https://serverfault.com/users/82069/" ] }
272,202
I have some very trivial basic questions about networking, but I find varying information on that, so I just wanted to settle this. As far as I understand, a Network Switch handles traffic "intelligently" in the way that it only propagates packets to the port where it knows that the receiver is located (in contrast to a Hub, which brute-force sends all data to all ports). Correct? So a Switch needs to remember all addresses of Hosts connected to it. If the host is not found, the packet is sent to the default route (commonly an uplink to a wider network). Correct? Now my major question is: Does a Switch remember IP-Addresses or MAC-Addresses to calculate its decisions?
Well, this depends on what kind of switch you are using. The very basic types operate at the link layer and are not aware of IP addresses. They use MAC addresses for their operation. These switches are often unmanaged. However, there are also more intelligent switches, which offer functionality at the IP layer, such as access control lists, and these are aware of IP addresses. In general, these switches are managed, i.e. they have either a web interface or a console interface (or both) to allow the user to configure the various options. However, the additional functionality works on top of the basic switch functions. Switches "learn" the MAC addresses of devices connected to their ports by listening to the traffic, and use them to decide where to send incoming datagrams. Switches in general do not perform routing. This is usually done by routers, and the datagrams sent by the router use the link layer address (MAC address in ethernet networks) to send the packet to the next hop.
{ "source": [ "https://serverfault.com/questions/272202", "https://serverfault.com", "https://serverfault.com/users/53998/" ] }
272,299
Is it possible to map 2 different MAC addresses to the same IP address? For my backup, I need to connect back from the server to the portable, and I would like to have the same IP both for the wireless and the wired interface. The openwrt web interface doesn't accept multiple dhcp entries with the same IP address, but perhaps there is a workaround? Clarification added on 23 may : I should have made it clear that only one of the network interfaces of the portable is connected to the network at any given time (hence switches shouldn't get confused). Initially I had 2 distinct IP addresses assigned to the interfaces, with the same DNS name, but this didn't work very well (timeouts when I got the wrong IP). Yet I want to use the same name for both, as it is hard-coded in my backup script. Sorry for the confusion.
(random semi-opinionated comment: it's rare to see this highish count of unconstructive and plain inaccurate answers and comments to a question) In contrast to others here, I claim that your request is actually quite elementary and has been supported in dnsmasq since version 2.46 , IIRC. This was the sole reason I switched from dd-wrt . After about a year of running OpenWRT, I now know there are actually plenty more reasons to switch, but that's beside the point. I'm running Backfire 10.04-rc4 : May 23 17:45:16 gateway dnsmasq[1925]: started, version 2.55 cachesize 150 My configuration: $ cat /etc/config/dhcp config 'dnsmasq' option 'domainneeded' '1' option 'boguspriv' '1' option 'localise_queries' '1' option 'rebind_protection' '1' option 'rebind_localhost' '1' option 'expandhosts' '1' option 'authoritative' '1' option 'readethers' '1' option 'leasefile' '/tmp/dhcp.leases' option 'resolvfile' '/tmp/resolv.conf.auto' option 'enable_tftp' '1' option 'domain' 'domain.net' option 'local' '/domain.net/' config 'dhcp' 'lan' option 'interface' 'lan' option 'start' '100' option 'limit' '150' option 'leasetime' 'infinite' config 'dhcp' 'wan' option 'interface' 'wan' option 'ignore' '1' option 'dynamicdhcp' '0' config 'dhcp' option 'interface' 'dmz' option 'start' '100' option 'limit' '150' option 'leasetime' '12h' config 'host' option 'name' 'travelmate' option 'mac' '00:11:22:33:44:55 aa:bb:cc:dd:ee:ff' option 'ip' '192.168.1.111' config 'host' option 'name' 'mobilitymac' option 'mac' '99:88:77:66:55:44 ff:ee:dd:cc:bb:aa' option 'ip' '192.168.1.104' Enjoy the seamless transition this setup provides, all existing session stay alive if you don't take too long with the switch.
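For anyone not using the UCI wrapper, the equivalent entry in a plain dnsmasq.conf should be a single dhcp-host line listing both MACs (the addresses here reuse the illustrative ones from the config above): dhcp-host=00:11:22:33:44:55,aa:bb:cc:dd:ee:ff,travelmate,192.168.1.111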
{ "source": [ "https://serverfault.com/questions/272299", "https://serverfault.com", "https://serverfault.com/users/82233/" ] }
272,686
Our office is getting a serious makeover and we are looking into updating our network infrastructure. My idea was to just update our current cat5e cables to cat6, but my boss (not an IT guy) doesn't want to use cables anymore; he wants to go fully wireless. My first concern was of course the speed; we have an art department who needs to transfer large graphic files from and to the server. My boss's main reason to go wireless is that he is afraid that in a couple of years a new cable standard will come out and he will have to redo all the wires. So now I'm looking for arguments to convince him to still go for wires. Speed, security, continuity, ... The top reason for him would of course be the cost. Any advice? It's a network for about 25 people.
Your boss is, imho, barking mad. For a start, you still need to wire the building to provide a backbone for the wireless access points to connect to, and secondly wired connections are both more reliable and much faster than wireless. As Julien suggests, these days you should probably look to do both anyway, and as xciter says, if you install modern cabling standards then these should have plenty of life in them - and more to the point, if your boss's theories about wired standards going out of date are true, then how does he propose to connect the wireless access points to the backbone if the standards change? And what happens when wireless standards go "out of date"?
{ "source": [ "https://serverfault.com/questions/272686", "https://serverfault.com", "https://serverfault.com/users/63821/" ] }
272,754
In Putty, there are three tunneling options: Can someone explain what is the difference between them?
From the PuTTY documentation, specifically the 4.23 The Tunnels Panel section: Set one of the ‘Local’ or ‘Remote’ radio buttons, depending on whether you want to forward a local port to a remote destination (‘Local’) or forward a remote port to a local destination (‘Remote’). Alternatively, select ‘Dynamic’ if you want PuTTY to provide a local SOCKS 4/4A/5 proxy on a local port (note that this proxy only supports TCP connections; the SSH protocol does not support forwarding UDP). Local -- Forward local port to remote host. Remote -- Forward remote port to local host. Dynamic -- Act as a SOCKS proxy. This requires special support from the software that connects to it; however, the destination address is obtained dynamically at runtime rather than being fixed in advance.
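If it helps to map these onto the OpenSSH command line (the ports and hosts below are purely illustrative): ssh -L 8080:intranet-host:80 user@server corresponds to 'Local', ssh -R 9090:localhost:3000 user@server corresponds to 'Remote', and ssh -D 1080 user@server corresponds to 'Dynamic' (a SOCKS proxy on local port 1080).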
{ "source": [ "https://serverfault.com/questions/272754", "https://serverfault.com", "https://serverfault.com/users/79496/" ] }
272,846
What are the features and potential benefits of Logical Volume Manager beyond what is detailed on its Wikipedia page?
Taken directly from my blog entry: http://www.standalone-sysadmin.com/blog/2008/09/introduction-to-lvm-in-linux/ First off, let's discuss life without LVM. Back in the bad old days, you had a hard drive. This hard drive could have partitions. You could install file systems on these partitions, and then use those filesystems. Uphill both ways. It looked a lot like this: You've got the actual drive, in this case sda. On that drive are two partitions, sda1 and sda2. There is also some unused free space. Each of the partitions has a filesystem on it, which is mounted. The actual filesystem type is arbitrary. You could call it ext3, reiserfs, or what have you. The important thing to note is that there is a direct one-to-one correlation between disk partitions and possible file systems. Let's add some logical volume management that recreates the exact same structure: Now, you see the same partitions, however there is a layer above the partitions called a "Volume Group", literally a group of volumes, in this case disk partitions. It might be acceptable to think of this as a sort of virtual disk that you can partition up. Since we're matching our previous configuration exactly, you don't get to see the strengths of the system yet. You might notice that above the volume group, we have created logical volumes, which might be thought of as virtual partitions, and it is upon these that we build our file systems. Let's see what happens when we add more than one physical volume: Here we have three physical disks, sda, sdb, and sdc. Each of the first two disks has one partition taking up the entire space. The last, sdc, has one partition taking up half of the disk, with half remaining unpartitioned free space. We can see the volume group above that which includes all of the currently available volumes. Here lies one of the biggest selling points. You can build a logical partition as big as the sum of your disks. In many ways, this is similar to how RAID level 0 works, except there's no striping at all. Data is written for the most part linearly. If you need redundancy or the performance increases that RAID provides, make sure to put your logical volumes on top of the RAID arrays. RAID slices work exactly like physical disks here. Now, we have this volume group which takes up 2 and 1/2 disks. It has been carved into two logical volumes, the first of which is larger than any one of the disks. The logical volumes don't care how big the actual physical disks are, since all they see is that they're carved out of myVolumeGroup01. This layer of abstraction is important, as we shall see. What happens if we decide that we need the unused space, because we've added more users? Normally we'd be in for some grief if we used the one-to-one mapping, but with logical volumes, here's what we can do: Here we've taken the previously free space on /dev/sdc and created /dev/sdc2. Then we added that to the list of volumes that comprise myVolumeGroup01. Once that was done, we were free to expand either of the logical volumes as necessary. Since we added users, we grew myLogicalVolume2. At that point, as long as the filesystem /home supported it, we were free to grow it to fill the extra space. All because we abstracted our storage from the physical disks that it lives on. Alright, that covers the basic why of Logical Volume Management.
Since I'm sure you're itching to learn more about how to prepare and build your own systems, here are some excellent resources to get you started: http://www.pma.caltech.edu/~laurence/Linux/lvm.html http://www.freeos.com/articles/3921/ http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
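As a rough command-level sketch of that last growth step (the device, volume group, logical volume names and the size reuse the illustrative ones above; the filesystem is assumed to be ext3/ext4 — other filesystems have their own grow tools):
pvcreate /dev/sdc2
vgextend myVolumeGroup01 /dev/sdc2
lvextend -L +50G /dev/myVolumeGroup01/myLogicalVolume2
resize2fs /dev/myVolumeGroup01/myLogicalVolume2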
{ "source": [ "https://serverfault.com/questions/272846", "https://serverfault.com", "https://serverfault.com/users/79496/" ] }
273,201
I am monitoring the memory object in Windows 2k8, and am tracking the Page Faults/sec counter. Is there any threshold to determining what is an excessive amount of page faults? Or should I be more concerned with a sustained, high, amount of page faults? Is there a better way to look at page faults?
This is a good question because getting a read on memory issues for performance monitoring is difficult. First off, when looking at Page Faults/sec keep in mind that this includes soft faults, hard faults and file cache faults. For the most part, you can ignore soft faults (i.e. paging between memory locations) and cache faults (reading files into memory) as they have limited performance impact in most situations. The real counter for memory shortages will be hard faults which can be found under Memory: Page Reads/sec. Hard faults mean process execution is interrupted so memory can be read from disk (usually it means hitting the page file). I would consider any sustained number of hard faults to be indicative of a memory shortage. As you go further down the rabbit hole, you can also compare disk queue lengths to hard faults to see if the disk reads are further affecting disk performance. To get a picture here, look at Physical Disk: Avg. Disk Queue Length. If this number is greater than the number of spindles in your array, you have a problem. However, if this number only spikes during hard page faults, you have a problem with memory capacity and not disk performance.
{ "source": [ "https://serverfault.com/questions/273201", "https://serverfault.com", "https://serverfault.com/users/42125/" ] }
273,221
We are running SBS 2008 with Exchange 2007. One user suddenly can't receive emails from one external sender any more, and the sender never gets a return email. That user has to use Hotmail or Gmail etc. to send to that sender. I am thinking it must be blocked somewhere but could not find it. Any direction you can point me to? Thanks. Update 1: With the Exchange Troubleshooting Assistant, I tracked down all mails sent from that sender; they have been marked as spam in the subject: ***SPAM*** I guess the spam is then deleted by the server quietly?? I will need to check the spam settings and come back later. Thanks. Update 2: I found in Anti-Spam -> Content Filter: if the message contains " SPAM ", it will be blocked. Option 1: Is it good practice to have this kind of setting here? Because no one is notified about this deletion. On the other side, if I remove this policy, I may get hundreds of SPAM messages in our Inbox. Option 2: How do I define a whitelist, so mails from a certain sender will not be marked as SPAM? Update 3: OK - I am totally cool now. I am using this GUI tool to manage the whitelist for anti-spam: http://gsexdev.blogspot.com/2009/02/content-filtering-system-whitelist-gui.html Thanks guys.
{ "source": [ "https://serverfault.com/questions/273221", "https://serverfault.com", "https://serverfault.com/users/67953/" ] }
273,225
We've installed PHP on a Windows Server 2008 R2 box using Web Platform Installer (WPI) 3.0.x. However, I'd like to uninstall PHP (5.3 in particular, leaving 5.2 as-is). Unfortunately, an uninstall option doesn't exist in Programs and Features, and in the past I've only upgraded PHP installs, and not had to do an uninstall. (Based on the lack of answers I've found online, it seems this is the case generally as well.) I realize that I can leave the extra install there, but for the sake of a having a clean server, and making it rather obvious what version of PHP is being used, I'd like to remove the installation. I suppose I could also remove the install directory - C:\Program Files (x86)\PHP\v5.3 - but that doesn't feel right. PHP Manager is also installed (also via WPI), but I see no way to remove an installation, only add.
This link has instructions on how to manually remove a version of PHP from IIS on Windows 7. I would think the instructions for Windows Server 2008 would be similar. It seems to be instructing you to edit the applicationHost.config file and delete the folder. http://forums.iis.net/t/1178803.aspx From the link: Open %userprofile%\documents\iisexpress\config\applicationhost.config file and: Find following entry (or similar entry) in applicationhost.config file and comment it or delete it. <application fullPath="C:\Program Files\iis express\PHP\v5.2\php-cgi.exe" monitorChangesTo="php.ini" activityTimeout="600" requestTimeout="600" instanceMaxRequests="10000"> <environmentVariables> <environmentVariable name="PHP_FCGI_MAX_REQUESTS" value="10000" /> <environmentVariable name="PHPRC" value="C:\Program Files\iis express\PHP\v5.2" /> </environmentVariables> </application> Find following entry in hanlders section and comment this as well or delete. <add name="PHP52_via_FastCGI" path="*.php" verb="GET,HEAD,POST" modules="FastCgiModule" scriptProcessor="C:\Program Files (x86)\iis express\PHP\v5.2\php-cgi.exe" resourceType="Either" /> By default Web Platform Installer installs PHP to %programfiles%\iis express\php. so open %programfiles%\iis express\php\ folder and delete the php version folder that you no longer need (don't forget to remove relavant entries from applicationhost.config as mentioned in step 1 and 2 above)
{ "source": [ "https://serverfault.com/questions/273225", "https://serverfault.com", "https://serverfault.com/users/19870/" ] }
273,238
Is there a way to run a command (e.g. ps aux|grep someprocess ) n times? Something like: run -n 10 'ps aux|grep someprocess' I want to use it interactively. Update: The reason I am asking this is that I work on a lot of machines and I don't want to import all my adapted scripts etc. into every box to get the same functionality across every machine.
I don't think a command or shell builtin for this exists, as it's a trivial subset of what the Bourne shell for loop is designed for and implementing a command like this yourself is therefore quite simple. Per JimB's suggestion, use the Bash builtin for generating sequences: for i in {1..10}; do command; done For very old versions of bash, you can use the seq command: for i in `seq 10`; do command; done This iterates ten times executing command each time - it can be a pipe or a series of commands separated by ; or &&. You can use the $i variable to know which iteration you're in. If you consider this one-liner a script and so for some unspecified (but perhaps valid) reason undesirable, you can implement it as a command, perhaps something like this in your .bashrc (untested): # function run run() { number=$1; shift; for i in `seq $number`; do "$@"; done; } Usage: run 10 command Example: run 5 echo 'Hello World!'
{ "source": [ "https://serverfault.com/questions/273238", "https://serverfault.com", "https://serverfault.com/users/40559/" ] }
273,697
We currently have our web server in a DMZ. The web server cannot see anything within the internal network, but the internal network can see the web server. How safe would it be to punch a hole in the firewall between the DMZ and the internal network to only one web server in the intranet? We are working on something that will be interfacing with several of our back-office applications (which are all on one server) and it would be so much easier to do this project if we could communicate directly with the IBM i server holding this data (via web services). From my understanding (and I don't know brands), we have one firewall for the DMZ with a different external IP from our primary IP with another firewall. Another firewall is between the web server and the intranet. So something like: Web Server <==== Firewall ===== Intranet | | | | Firewall Firewall | | | | Internet IP1 Internet IP2
There's nothing wrong with creating access mechanisms for hosts in the DMZ to access hosts in the protected network when this is necessary to accomplish your intended result. It's, perhaps, not preferable to do so, but sometimes it's the only way to get the job done. The key things to consider are: Limit the access to the most specific firewall rule you can. If possible, name the specific hosts involved in the rule along with the specific protocols (TCP and/or UDP ports) that will be used. Basically, open only as small a hole as you need. Be sure that you're logging access from the DMZ host to the host on the protected network and, if possible, analyze those logs in an automated fashion for anomalies. You want to know when something out of the ordinary happens. Recognize that you're exposing an internal host, even if it's in an indirect manner, to the Internet. Stay on top of patches and updates for the software you're exposing and the host's operating system software itself. Consider mutual authentication between the DMZ host and the internal host, if that's feasible with your application architecture. It would be nice to know that the requests coming to the internal host are actually coming from the DMZ host. Whether you can do this or not is going to be highly dependent on your application architecture. Also, bear in mind that someone who "owns" the DMZ host will be able to make requests to the internal host even if authentication is occurring (since they'll, effectively, be the DMZ host). If there's concern about DoS attacks consider using rate-limiting to prevent the DMZ host from exhausting the resources of the internal host. You may want to consider using a layer 7 "firewall" approach, where the requests from the DMZ host are passed first to a special-purpose internal host that can "sanitize" the requests, sanity-check them, and then pass them on to the "real" back-end host. Since you're talking about interfacing to your back-office applications on your IBM iSeries I'm guessing that you have limited capability for performing sanity-checks against incoming requests on the iSeries itself. If you approach this in a methodical fashion and keep some common sense about it there's no reason you can't do what you're describing while keeping risk minimized at the same time. Frankly, that you've got a DMZ that doesn't have unfettered access to the protected network puts you leaps and bounds beyond a lot of networks I've seen. For some people, it seems, DMZ just means "another interface on the firewall, possibly with some different RFC 1918 addresses, and basically unfettered access to the Internet and the protected network". Try and keep your DMZ as locked-down as you can while still accomplishing business goals and you'll do well.
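As a concrete sketch of the "most specific rule you can" advice, on a Linux-based firewall the allow rule might look something like this (interface names, addresses and the port are placeholders for your DMZ web server, the internal IBM i host and the web-service port; your firewall product will have its own syntax): iptables -A FORWARD -i dmz0 -o lan0 -s 192.0.2.10 -d 10.0.0.20 -p tcp --dport 443 -j ACCEPT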
{ "source": [ "https://serverfault.com/questions/273697", "https://serverfault.com", "https://serverfault.com/users/602/" ] }
273,707
Is there a tool, that allows an individual to temporarily switch between different etc/hosts configurations, without setting up a dns? Example: I would like to quickly change the etc/hosts config file, so that project1.com forwards to a local IP like 192.168.10.50 without changing information in our DNS server. Why? We are developing several bigger cms projects. The cms is developed inside a virtual machine. Sometimes we have to make bigger changes to a cms system that is already in production. A developer needs to access the productive version of the site and some minutes later he wants to redirect all requests to the local virtual machine. A tool, that can easily swap between different etc/hosts configuration files would be ideal. If possible don't want the users to manually edit the etc/hosts file.
{ "source": [ "https://serverfault.com/questions/273707", "https://serverfault.com", "https://serverfault.com/users/82613/" ] }
273,708
We have a database called 'foo' on the first sql server instance called 'SQL01' that is backed up nightly via a snapshot with Snap Manager for SQL Server and then flex cloned and restored to a second server instance called 'SQL02'. In the database foo there is a sql user called 'someuser' that has datareader and stored procedure privileges. After the restore operation I cannot access the foo database on SQL02 with the someuser user. I can see that the permissions appear to be set correctly for the user, but can't access the database. The error is "The server principal "someuser" is not able to access the database "foo" under the current security context." If I remove the user from the database and then add them again it works fine. Any ideas?
{ "source": [ "https://serverfault.com/questions/273708", "https://serverfault.com", "https://serverfault.com/users/8454/" ] }
273,847
When I use ssh -X on my Mac (running OS X 10.6.7) to connect to my Ubuntu (11.04) box, I get the following warning: Warning: untrusted X11 forwarding setup failed: xauth key data not generated Warning: No xauth data; using fake authentication data for X11 forwarding. Is there something I can do to make this warning go away? If not, can I safely ignore it? X11 forwarding seems to work fine, though I do see this message: Xlib: extension "RANDR" missing on display "localhost:10.0". Is that related to the warning? (I'm guessing not. If it's not, I'll file a new question about that.)
Try ssh -Y Any reason you don't want to use the -Y flag instead of the -X flag? Quite simply, the difference between -X and -Y is that -Y enables trusted X11 forwarding.
{ "source": [ "https://serverfault.com/questions/273847", "https://serverfault.com", "https://serverfault.com/users/20205/" ] }
273,853
I am running a small LAMP-based web server on which PHP-based pages seem to take a minimum of 5 seconds to render. I believe that the problem is some issue with my PHP configuration in particular because: requests for static pages take 0.5s or less to satisfy requests for saved, static versions of PHP pages are served as fast as other static pages requests for PHP-based pages take >5s to satisfy whether or not they make requests to the DB the delay persists whether accessing the server by hostname or by IP, so it's not a DNS issue looking at vmstat during the request shows 0 for swap-in and swap-out, so it's not pagefile thrashing looking at top during the request shows that no apache process gets above 2-3% of CPU time, so it's not limited by CPU performance looking at the apache logs during requests shows a GET request for the PHP-based page, a 5-second delay, and then a request for the first CSS file that the PHP page loads. On that basis, I feel pretty sure that the problem is with the PHP configuration, but PHP configuration is foreign to me. What are my options for improving PHP performance here? What are some common problems I should check for? The main use case for this server is running a site based on the Joomla CMS, but this problem appears to be independent of Joomla, because the performance problems are happening with all PHP pages. A clarification in response to Zoredache's question: On the server, running time curl http://127.0.0.1/foo/ is fast - tenths of a second. Elsewhere on the local network, running time curl http://10.1.0.1/foo/ is slow - 5 seconds at least.
{ "source": [ "https://serverfault.com/questions/273853", "https://serverfault.com", "https://serverfault.com/users/30902/" ] }
273,949
How do you use the cp command without changing the target file's permissions? For example: cp /tmp/file /home/file I don't want to change chown and chgrp on /home/file .
If you'd only opened the manual for cp... The following will not overwrite the file permissions and ownership + groupship: cp --no-preserve=mode,ownership /tmp/file /home/file Note that root privileges are needed if you want to preserve the ownership and groupship. An excerpt from the manual: --preserve[=ATTR_LIST] preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all
{ "source": [ "https://serverfault.com/questions/273949", "https://serverfault.com", "https://serverfault.com/users/79476/" ] }
273,967
The OpenSSH ssh and scp commands provide an -i command line option to specify the path to the RSA/DSA key to be used for authentication. Looking at the sftp man pages, I was not able to find a way to specify the RSA/DSA key. I am looking for a way to initiate an sftp session that will use a specified RSA/DSA key, and not the ~/.ssh/id_{dsa,rsa} keys. I tried the OpenSSH sftp client on Linux...but it should have the same options on other platforms.
One potential option is to use sftp -oIdentityFile=/path/to/private/keyfile . Need more info to say whether that will work for you. Seems to work under Mac/Linux.
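Newer OpenSSH releases also accept the familiar -i flag directly, so sftp -i /path/to/private/keyfile user@host should work as well; if your sftp predates that, the -oIdentityFile= form above is the portable fallback.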
{ "source": [ "https://serverfault.com/questions/273967", "https://serverfault.com", "https://serverfault.com/users/74808/" ] }
273,968
I'm looking for a solution to sync files automatically across a number of windows operating systems, including Windows Server 2003, 2008 and Windows 7. One of the use cases would be keeping Nagios config files in sync across a number of (identical) machines. I would only need to make the change on one master machine and then push it out to the clients. Clients would only need read access to the share which removes the need for locking or versioning. I have had a look at DFS, WebDAV, NFS, etc. One option might be puppet or cfengine (correct me if I'm wrong), but I'd prefer to use something custom built for the task rather than a whole configuration management suite. I would also prefer something that looks like just another folder to Windows (rather than a network drive), and we don't want to use a third party service like Dropbox. Finally, I would like to be able to push it from one location rather than writing a script to pull the files down, etc. Is there anything out there which matches this description?
{ "source": [ "https://serverfault.com/questions/273968", "https://serverfault.com", "https://serverfault.com/users/70231/" ] }
274,089
A little background: We have several Windows servers (2003, 2008) for our department. We're a division of IT so we manage our own servers. Of the four of us here I'm the only one with a slight amount of IT knowledge. (Note the "slight amount".) My boss says the servers need to be restarted at least weekly. I disagree. Our IT Department says that because she restarts them constantly that's the reason why our hard drives fail and power supplies go out on them. (That's happened to a few of our servers a couple times over the last four years, and very recently.) So the question is: How often does everyone restart their Windows servers? Is there an industry standard or recommendation? Is our IT department correct in saying that because we re-start that's why we're having hardware issues? (I need a reason if I'm going to change her mind!)
My boss says the servers need to be restarted at least weekly -- I strongly disagree. Microsoft has made great strides since the good-ole [NT, anyone?] days with regard to stability and uptime. It's a shame the consensus within IT support has not changed along with this. How often does everyone restart their Windows servers? Only when required -- either because of an OS/software update, a critical software failure which cannot be recovered via other methods, hardware upgrade/replacement or other activity that cannot happen without a restart. 1 Is there an industry standard or recommendation? I have never seen a standard recommendation, per se, but I could not agree with any recommendation [except from MS themselves] which would indicate a required reboot at a specific time interval "just because". Is our IT department correct in saying that because we re-start that's why we're having hardware issues? Restarting [and, more so, power cycling] is the most stressful period of hardware activity for a computer. You have most everything spinning up to 100% -- disk and fans... ...as well as significant fluctuations in component temperatures. Modern hardware is incredibly resilient, but that shouldn't be a reason for just bouncing servers, on a whim, a few times a week. 1 Aside: I loathe when techs "just" reboot a Windows server in the case of a failed service, or the like. I understand the need to get the service running again, but a reboot should be the last step in troubleshooting a server. Identifying, and fixing [!], the root cause of failure should almost never result in "Meh, just reboot it...."
{ "source": [ "https://serverfault.com/questions/274089", "https://serverfault.com", "https://serverfault.com/users/82728/" ] }
274,181
Of course, I realize the need to go to IPv6 out on the open Internet since we are running out of addresses, but I really don't understand why there is any need to use it on an internal network. I have done zero with IPv6, so I also wonder: Won't modern firewalls do NAT between internal IPv4 addresses, and external IPv6 addresses? I was just wondering since I have seen so many people struggling with IPv6 questions here, and wonder why bother?
There is no NAT for IPv6 (as you think of NAT anyway). NAT was an $EXPLETIVE temporary solution to IPv4 running out of addresses (a problem which didn't actually exist, and was solved before NAT was ever necessary, but hindsight is 20/20). It adds nothing but complexity and would do little except cause headaches in IPv6 (we have so many IPv6 addresses that we unabashedly waste them). NAT66 does exist, and is meant to reduce the number of IPv6 addresses used by each host (it's normal for IPv6 hosts to have multiple addresses; IPv6 is somewhat different than IPv4 in many ways, and this is one). The Internet was supposed to be end-to-end routable; that is part of the reason IPv4 was invented and why it gained acceptance. That is not to say that all addresses on the Internet were supposed to be reachable. NAT breaks both. Firewalls add layers of security by breaking reachability, but normally at the expense of routability. You will want IPv6 in your networks as there is no way to specify an IPv6 endpoint with an IPv4 address. The other way around does work, which enables IPv6-only networks using DNS64 and NAT64 to access the IPv4 Internet still. It's actually possible today to ditch IPv4 altogether, though it's a bit of hassle setting it up. It would be possible to proxy from IPv4 internal addresses to IPv6 servers. Adding and configuring a proxy server adds configuration, hardware, and maintenance costs to the network; usually much more than simply enabling IPv6. NAT causes its own problems too. The router has to be capable of coordinating every connection running through it, keeping track of endpoints, ports, timeouts, and more. All that traffic is usually funneled through that single point. Though it's possible to build redundant NAT routers, the technology is massively complex and generally expensive. Redundant simple routers are easy and cheap (comparatively). Also, to re-establish some of the routability, forwarding and translating rules have to be established on the NAT system. This still breaks protocols which embed IP addresses, such as SIP. UPNP, STUN, and other protocols were invented to help with this problem too - more complexity, more maintenance, more that could go wrong.
{ "source": [ "https://serverfault.com/questions/274181", "https://serverfault.com", "https://serverfault.com/users/82140/" ] }
274,310
On a machine without yum, I have version 3.2 of a package installed and I have downloaded version 2.4 manually, how do I install the older version?
rpm -Uvh --oldpackage [filename] --oldpackage allows you to install older versions, -U means "upgrade", but in this case it will just replace the other version. If you use -i instead of -U you will end up with both versions installed.
{ "source": [ "https://serverfault.com/questions/274310", "https://serverfault.com", "https://serverfault.com/users/1598/" ] }
274,475
The following command works from prompt but not from crontab. grep abc /var/log/messages | grep "`date '+%B %d'`" | mail -s"abc log of `hostname`" s.o+`hostname`@gmail.com I need to add it to daily cron.
You have to escape the % signs. They have a special meaning in crontabs: man (5) crontab: Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.
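A sketch of the corrected crontab entry (the 0 6 * * * schedule is just an assumed example of a daily 06:00 run; the rest is the original command with the % signs escaped): 0 6 * * * grep abc /var/log/messages | grep "`date '+\%B \%d'`" | mail -s"abc log of `hostname`" s.o+`hostname`@gmail.com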
{ "source": [ "https://serverfault.com/questions/274475", "https://serverfault.com", "https://serverfault.com/users/16842/" ] }
274,625
I'm trying to implement a simple centralized syslog server using stock rsyslogd (4.2.0-2ubuntu8.1) on Ubuntu 10.04 LTS. At this point I have all my client nodes sending logs to the central server, but the clients are sending log messages which contain their short hostname instead of their FQDN. Per the Ubuntu rsyslogd manpage: If the remote host is located in the same domain as the host, rsyslogd is running on, only the simple hostname will be logged instead of the whole fqdn. This is problematic for me, as I am reusing short names between environments, e.g. core1.example.com and core1.stg.example.com both log their messages as core1. Both client and server have the same /etc/default/rsyslog: RSYSLOGD_OPTIONS="-c4" and the same /etc/rsyslogd.conf file: $ModLoad imuxsock $ModLoad imklog $PreserveFQDN on $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat $FileOwner root $FileGroup adm $FileCreateMode 0640 $IncludeConfig /etc/rsyslog.d/*.conf Clients have this /etc/rsyslog.d/remote.conf file, telling them to send to the remote server: *.* @@syslog.example.com and the server uses this /etc/rsyslog.d/server.conf file: $ModLoad imtcp $InputTCPServerRun 514 $DirGroup root $DirCreateMode 0755 $FileGroup root $template PerHostAuth,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/auth.log" $template PerHostCron,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/cron.log" $template PerHostSyslog,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/syslog" $template PerHostDaemon,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/daemon.log" $template PerHostKern,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/kern.log" $template PerHostLpr,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/lpr.log" $template PerHostUser,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/user.log" $template PerHostMail,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/mail.log" $template PerHostMailInfo,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/mail.info" $template PerHostMailWarn,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/mail.warn" $template PerHostMailErr,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/mail.err" $template PerHostNewsCrit,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/news.crit" $template PerHostNewsErr,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/news.err" $template PerHostNewsNotice,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/news.notice" $template PerHostDebug,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/debug" $template PerHostMessages,"/srv/rsyslog/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%/messages" auth,authpriv.* ?PerHostAuth *.*;auth,authpriv.none -?PerHostSyslog cron.* ?PerHostCron daemon.* -?PerHostDaemon kern.* -?PerHostKern lpr.* -?PerHostLpr mail.* -?PerHostMail user.* -?PerHostUser mail.info -?PerHostMailInfo mail.warn ?PerHostMailWarn mail.err ?PerHostMailErr news.crit ?PerHostNewsCrit news.err ?PerHostNewsErr news.notice -?PerHostNewsNotice *.=debug;\ auth,authpriv.none;\ news.none;mail.none -?PerHostDebug *.=info;*.=notice;*.=warn;\ auth,authpriv.none;\ cron,daemon.none;\ mail,news.none -?PerHostMessages As both client and server share a configuration which specifies "$PreserveFQDN on", I expect to see FQDN hostnames in syslog messages, but the setting seems to have had no effect. Most other settings I've found for rsyslog are aimed at stripping domains from FQDNs instead of retaining them. I think the root of the problem is that my clients do not send the FQDN in the first place, but I don't see how to force that behavior. 
Can anyone comment on what I might be missing? I imagine I'm not the only person who needs FQDNs to be included in log messages.
I ran into this issue as well. Here is how I was able to fix it. On the clients modify the /etc/hosts file so the desired hostname comes before localhost. 127.0.0.1 hostnameforlogs localhost On the clients and server modify /etc/rsyslog.conf to include this statement: $PreserveFQDN on On the server I used the %HOSTNAME% variable for the templates in rsyslog.conf:
{ "source": [ "https://serverfault.com/questions/274625", "https://serverfault.com", "https://serverfault.com/users/33902/" ] }
274,738
I created a "system" user in Ubuntu 11.04 ( adduser --system ) for running certain cron jobs, but sometimes I want to test things out by manually running commands as that user. What's the easiest way to do this? su doesn't work, because the user has /bin/false as its shell (which is fine for cron). I've been manually changing the shell to /bin/bash to do my testing and then changing it back again, but I wonder is there an easier way?
I use su - targetuser -s /bin/bash from a root shell. For direct command execution use -c : su - targetuser -s /bin/bash -c "/bin/echo hello world"
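If sudo is available, an equivalent approach that also works for shell-less system accounts is sudo -u targetuser /bin/bash, or for a single command sudo -u targetuser /bin/echo hello world; since sudo runs the command you name rather than the account's login shell, the /bin/false setting doesn't get in the way.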
{ "source": [ "https://serverfault.com/questions/274738", "https://serverfault.com", "https://serverfault.com/users/2519/" ] }
274,857
I'm currently developing a project using node, and as I'm approaching the launch, I'm struggling to find resources on how to setup node for use on a commercial, production server. Most resources I have seen have consisted of contrived, simple examples without taking into account scalability and fault tolerance. So, my question is, can anybody offer advice or point me to resources for setting up a node installation that: Is fault tolerant. If an instance crashes, it needs to be logged and restarted Creating a pool of node instances that can be load balanced Provide useful insights into resource usage Production node security practices Anything else that would be helpful in a production web environment that I am surely missing
Check out this link: http://cuppster.com/2011/05/12/diy-node-js-server-on-amazon-ec2 For load balancing and static content delivery I would use nginx.
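As a minimal sketch of what that nginx front end could look like inside the http block of nginx.conf (the ports, instance count and static path are assumptions; point them at however many node processes you actually run):
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
server {
    listen 80;
    location /static/ {
        root /var/www/myapp;
    }
    location / {
        proxy_pass http://node_app;
    }
}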
{ "source": [ "https://serverfault.com/questions/274857", "https://serverfault.com", "https://serverfault.com/users/82933/" ] }
275,206
I have a SCSI disk in a server (hardware Raid 1), 32G, ext3 filesytem. df tells me that the disk is 100% full. If I delete 1G this is correctly shown. However, if I run a du -h -x / then du tells me that only 12G are used (I use -x because of some Samba mounts). So my question is not about subtle differences between the du and df commands but about how I can find out what causes this huge difference? I rebooted the machine for a fsck that went w/out errors. Should I run badblocks ? lsof shows me no open deleted files, lost+found is empty and there is no obvious warn/err/fail statement in the messages file. Feel free to ask for further details of the setup.
Check for files located under mount points. Frequently if you mount a directory (say a sambafs) onto a filesystem that already had a file or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies while in single user mode dump files into directories that I couldn't see except in single user mode (due to other directory systems being mounted on top of them).
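One way to check this without unmounting anything is to bind-mount the root filesystem somewhere else and measure that copy, since the bind mount shows what is on the disk underneath the other mounts (the mount point here is an arbitrary choice): mkdir /mnt/rootfs; mount --bind / /mnt/rootfs; du -shx /mnt/rootfs; umount /mnt/rootfs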
{ "source": [ "https://serverfault.com/questions/275206", "https://serverfault.com", "https://serverfault.com/users/73415/" ] }
275,215
In the following output, why does vpn1 route/ping to 10.100.0.1 instead of to 10.100.0.112? 10.100.0.1 is network gateway with no nat. 10.100.0.112 is dual home host with nat enabled. root@vpn1:~# ip ro 8.8.8.8 via 10.100.0.112 dev eth0 src 10.100.0.5 10.100.0.0/24 dev eth0 proto kernel scope link src 10.100.0.5 default via 10.100.0.1 dev eth0 metric 100 root@vpn1:~# traceroute 8.8.8.8 traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets 1 10.100.0.1 (10.100.0.1) 0.287 ms 0.257 ms 0.317 ms 2 * * * 3 * * * 4 * * * 5 * * * 6 * * * 7 * * * 8 * * * 9 * * * 10 * * * 11 * * * 12 * *^C root@vpn1:~# ping 10.100.0.112 PING 10.100.0.112 (10.100.0.112) 56(84) bytes of data. 64 bytes from 10.100.0.112: icmp_req=1 ttl=127 time=0.321 ms ^C --- 10.100.0.112 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms</br> On the other side, 10.100.0.112 has a following configuration boban@boban-desktop:~$ ip ro 10.100.0.114 dev ppp0 proto kernel scope link src 10.100.0.112 x.y.z.q/28 dev eth0 proto kernel scope link src x.y.z.56 metric 1 **10.100.0.0/16 dev ppp0 scope link** default via x.y.z.62 dev eth0 proto static So, network 10.100.0.0/16 is on ppp0 interface, vpn (maybe this is of bigger importance than I think). boban@boban-desktop:~$ sudo iptables -t nat -S -P PREROUTING ACCEPT -P INPUT ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -A POSTROUTING -o eth0 -j MASQUERADE boban@boban-desktop:~$ sudo iptables -S -P INPUT ACCEPT -P FORWARD ACCEPT -P OUTPUT ACCEPT
{ "source": [ "https://serverfault.com/questions/275215", "https://serverfault.com", "https://serverfault.com/users/28955/" ] }
275,493
What's the best way of comparing two directory structures and deleting extraneous files and directories in the target location? I have a small web photo gallery app that I'm developing. Users add & remove images using FTP. The web gallery software I've written creates new thumbnails on the fly, but it doesn't deal with deletions. What I would like to do is schedule a command/bash script to take care of this at predefined intervals. Original images are stored in /home/gallery/images/ and are organised in albums, using subdirectories. The thumbnails are cached in /home/gallery/thumbs/, using the same directory structure and filenames as the images directory. I've tried using the following to achieve this: rsync -r --delete --ignore-existing /home/gallery/images /home/gallery/thumbs which would work fine if all the thumbnails had already been cached, but there is no guarantee that this is the case; when it isn't, the thumbs directory has original full-size images copied to it. How can I best achieve what I'm trying to do?
You need --existing too: rsync -r --delete --existing --ignore-existing /home/gallery/images /home/gallery/thumbs From the manpage: --existing, --ignore-non-existing This tells rsync to skip creating files (including directories) that do not exist yet on the destination. If this option is combined with the --ignore-existing option, no files will be updated (which can be useful if all you want to do is delete extraneous files).
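Before relying on this against live data, a dry run with the same paths shows exactly which extraneous thumbnails would be removed, without touching anything:

    rsync -rvn --delete --existing --ignore-existing /home/gallery/images /home/gallery/thumbs

The -n (--dry-run) and -v flags only report the planned deletions; drop them once the output looks right.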
{ "source": [ "https://serverfault.com/questions/275493", "https://serverfault.com", "https://serverfault.com/users/11604/" ] }
275,669
What is the easiest way to set up max login attempts in a LAMP environment (sshd installed via yum)? Is there a package or simple firewall rule?
I don't like to use any third party tools. Hence I used a combination of ssh configuration and firewall settings. With the following solution an attacker is allowed to produce exactly 3 fault logins in 2 minutes, or he will be blocked for 120 seconds. 1) Add the following line to /etc/ssh/sshd_config MaxAuthTries 1 This will allow only 1 login attempt per connection. Restart the ssh server. 2) Add the following firewall rules Create a new chain iptables -N SSHATTACK iptables -A SSHATTACK -j LOG --log-prefix "Possible SSH attack! " --log-level 7 iptables -A SSHATTACK -j DROP Block each IP address for 120 seconds which establishes more than three connections within 120 seconds. In case of the fourth connection attempt, the request gets delegated to the SSHATTACK chain, which is responsible for logging the possible ssh attack and finally drops the request. iptables -A INPUT -i eth0 -p tcp -m state --dport 22 --state NEW -m recent --set iptables -A INPUT -i eth0 -p tcp -m state --dport 22 --state NEW -m recent --update --seconds 120 --hitcount 4 -j SSHATTACK 3) See log entries of possible ssh attacks in /var/log/syslog Dec 27 18:01:58 ubuntu kernel: [ 510.007570] Possible SSH attack! IN=eth0 OUT= MAC=01:2c:18:47:43:2d:10:c0:31:4d:11:ac:f8:01 SRC=192.168.203.129 DST=192.168.203.128 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=30948 DF PROTO=TCP SPT=53272 DPT=1785 WINDOW=14600 RES=0x00 SYN URGP=0
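These rules live only in the running kernel and are lost on reboot, so they need to be saved and restored at boot. A minimal sketch (the file path is just an example; distributions differ, and Debian/Ubuntu users may prefer the iptables-persistent package instead):

    iptables-save > /etc/iptables.rules       # dump the current ruleset to a file
    iptables-restore < /etc/iptables.rules    # reload it at boot, e.g. from rc.local or an if-pre-up hook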
{ "source": [ "https://serverfault.com/questions/275669", "https://serverfault.com", "https://serverfault.com/users/33944/" ] }
275,736
I'm setting up my first website on Amazon EC2, and I'm trying to decide which distro to use. I've used Redhat and CentOS in the past, but I have no bias towards any system, I just want to use whatever is best (I also have had partially-managed servers in the past, so I haven't done too much server administration until recently). The website is just a web app written in PHP and MongoDB. I like the idea of having a lightweight OS that is described for Amazon Linux, but I worry that it could suffer in compatibility/updates compared to Ubuntu or other options that have teams focused exclusively on a server OS. Any advice?
I was in a similar situation; fully managed dedicated server, LAMP, CentOS. Then we decided to move to EC2. Also, I had very little systems or linux administration experience. I have almost zero experience with Ubuntu, so I really cannot speak to which is the so-called better OS. I tried a bunch of pre-built AMI's with minimal OS installs from Rightscale, Alestic, Scalr and Amazon. I ended up building all my own AMI's on top of Amazon Linux, first using version 2010.11.01, now I've migrated all my custom AMI's to Amazon Linux version 2011.03.01. The decision to go with an Amazon Linux AMI vs the other AMI providers was not an easy one. I played around with and tested different setups for close to a month before I made my final decision. In the end, since I wanted to use CentOS, it basically boiled down to one thing. I figured who better to know what hardware related dependencies needed to be included in the OS than the people who designed, built and maintain EC2. Nothing against Rightscale, Scalr or Alestic. Six months later, even though I hit a few bumps in the road, Amazon's Linux has been quite stable. Though, I did decide to compile some of the software we use from the source (ie. php 5.3, MySQL 5.5, etc) because I ran into trouble with the pre-built packages Amazon maintained in their package repository.
{ "source": [ "https://serverfault.com/questions/275736", "https://serverfault.com", "https://serverfault.com/users/8964/" ] }
275,982
I'm making a website, and I need a sub-domain. I need to add the new part to my website, but I don't know which type of DNS record to add in the DNS console to point to this new site. Is it A or CNAME ?
It depends on whether you want to delegate hosting the subdomain off to a different DNS server (or to the same server, but in a different zone file). You delegate a zone when you want some other entity to control it, such as a different IT department or organization. If you do, then you need NS records. If not, A or CNAME records will suffice. Let's say you have the domain example.com. You have an A record for www.example.com and you want to create the subdomain info.example.com with www.info.example.com as a host in it. Delegation In this situation, let's further say you have two DNS servers that will be hosting that subdomain. (They could be the same servers that are currently hosting example.com.) In this case, you will create two NS entries in the example.com zone file: info IN NS 192.168.2.2 info IN NS 192.168.2.3 On those two servers, you will create the info.example.com zone and populate it as you would any other domain. www IN A 192.168.2.6 No delegation Here, just add an A record in the example.com zone file, using a dot to indicate that you want to create the www.info host in the example.com domain: www.info IN A 192.168.2.6 Using CNAME The decision of whether to use a CNAME is independent of the delegation choice. I generally like to use a CNAME for the "generic" names which point to specific machine names. For example, I might name my machines using an organizational naming convention such as cartoon characters (daffy, elmer, mickey, etc.) or something bureaucratic (sc01p6-serv) and point the generic names to them. If the IP address of the machine ever changes, I need look in only one place to modify it. www IN CNAME sc01p6-serv mail IN CNAME sc01p6-serv sc01p6-serv IN A 192.168.2.6
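Either way, a quick way to confirm the records resolve as intended, once they are in place, is dig (the hostnames are the ones from the example above):

    dig +short NS info.example.com      # should list the delegated name servers, if you delegated
    dig +short A www.info.example.com   # should return 192.168.2.6 in either setup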
{ "source": [ "https://serverfault.com/questions/275982", "https://serverfault.com", "https://serverfault.com/users/374157/" ] }
276,091
I have an external drive hooked up to my Mac, and I'm trying to determine things like, e.g., is this HFS or FAT, is it 32-bit or 64-bit, etc. It seems like there should be some trivial command that gives me this info, but I can't seem to find one. Ideas?
diskutil(8) with the info predicate will give you information about a disk or partition.
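For example (disk identifiers vary per machine, so list them first):

    diskutil list              # shows all disks, partitions and their identifiers
    diskutil info /dev/disk2   # prints filesystem personality (HFS+, FAT32, ...), partition scheme, size, etc.

diskutil info also accepts a mount point, so diskutil info /Volumes/MyDrive (or just / for the boot volume) works as well; /dev/disk2 above is only a placeholder for whatever identifier the external drive shows in the list.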
{ "source": [ "https://serverfault.com/questions/276091", "https://serverfault.com", "https://serverfault.com/users/39209/" ] }
276,098
I am working with a Powershell script that adds scheduled tasks to systems in our domain. When I run this script, it will prompt me for my password. I sometimes fat finger the password and the process starts, which locks out my account. Is there a way to verify my credentials to make sure that what I typed in will validate with the Domain? I'd like to find a way to query the Domain controller. I've done some Google searches and I should be able to do a WMI query and trap for an error. I would like to avoid that style of validation if possible. Any ideas? Thanks in advance.
I have this in my library: $cred = Get-Credential #Read credentials $username = $cred.username $password = $cred.GetNetworkCredential().password # Get current domain using logged-on user's credentials $CurrentDomain = "LDAP://" + ([ADSI]"").distinguishedName $domain = New-Object System.DirectoryServices.DirectoryEntry($CurrentDomain,$UserName,$Password) if ($domain.name -eq $null) { write-host "Authentication failed - please verify your username and password." exit #terminate the script. } else { write-host "Successfully authenticated with domain $domain.name" }
{ "source": [ "https://serverfault.com/questions/276098", "https://serverfault.com", "https://serverfault.com/users/20204/" ] }
276,541
Resetting the RDS master user's password is simple enough, but how do you find your master user's username?
The master user name can be recovered with the rds-describe-db-instances command. If you are using AWS CLI v2, the command becomes: aws rds describe-db-instances --region ap-south-1
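If you only want the user name itself, the CLI's --query filter can pull out just that field (the region is the one from the example above; adjust it to yours):

    aws rds describe-db-instances --region ap-south-1 --query 'DBInstances[*].[DBInstanceIdentifier,MasterUsername]' --output table

This lists each instance identifier next to its master user name.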
{ "source": [ "https://serverfault.com/questions/276541", "https://serverfault.com", "https://serverfault.com/users/72107/" ] }
278,319
I have a simple nginx reverse proxy: server { server_name external.domain.com; location / { proxy_pass http://backend.int/; } } The problem is that Set-Cookie response headers contain ;Domain=backend.int , because the backend does not know it is being reverse proxied. How can I make nginx rewrite the content of the Set-Cookie response headers, replacing ;Domain=backend.int with ;Domain=external.domain.com ? Passing the Host header unchanged is not an option in this case. Apache httpd has had this feature for a while, see ProxyPassReverseCookieDomain , but I cannot seem to find a way to do the same in nginx.
Starting in 1.1.15, proxy_cookie_domain option was added to address this issue. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain This is an example with many domains. Your service might be accessible as service.foo.com and service.bar.com and you need to put foo.com or bar.com in Set-Cookie header accordingly while your backend service always puts backend_host.com in the cookie header: server { server_name ~^service\.(?<domain>.+)$; location / { proxy_pass http://backend.int/; proxy_cookie_domain backend_host.com $domain; } }
{ "source": [ "https://serverfault.com/questions/278319", "https://serverfault.com", "https://serverfault.com/users/113189/" ] }
278,351
After many hours getting nginx to serve single files such as robots.txt (hint: clear your browser cache each time), I wound up with two different ways, one using the alias directive, and one using the root directive, like so: location /robots.txt { alias /home/www/static/robots.txt; } location /robots.txt { root /home/www/static/; } Is there any functional difference between the two? Or security issues? Any conflicts with other directives? (Both seemed fine with another /static location). Or any reason to pick one over the other? Note - I didn't use both at the same time :) Rather I tried each, one at a time, and both worked. I'm not asking how they both interact together in the same file, but which one would be better to use.
Well, these two directives are slightly functionally different, because you do not use an exact match in the latter case. So, /robots.txt1111 will match your second location too. location =/robots.txt { root /home/www/static/; } is an exact functional equivalent of your first directive.
{ "source": [ "https://serverfault.com/questions/278351", "https://serverfault.com", "https://serverfault.com/users/54374/" ] }
278,454
Here the output of free -m : total used free shared buffers cached Mem: 7188 6894 294 0 249 5945 -/+ buffers/cache: 698 6489 Swap: 0 0 0 I can see almost 6GB (5945MB) memory out of 7GB is used in caching the files. I know how to flush the caches. My question is: Is possible see which files(or inodes) are being cached?
Well, there is an easy way to take a look at the kernel's page cache if you happen to have ftools - "fincore" gives you some summary information on what files' pages are the content of the cache. You will need to supply a list of file names to check for their presence in the page cache. This is because the information stored in the kernel's page cache tables only will contain data block references and not filenames. fincore would resolve a given file's data blocks through inode data and search for respective entries in the page cache tables. There is no efficient search mechanism for doing the reverse - getting a file name belonging to a data block would require reading all inodes and indirect blocks on the file system. If you need to know about every single file's blocks stored in the page cache, you would need to supply a list of all files on your file system(s) to fincore . But that again is likely to spoil the measurement as a large amount of data would be read traversing the directories and getting all inodes and indirect blocks - putting them into the page cache and evicting the very page cache data you were trying to examine.
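Invocation is essentially "fincore <files>": you feed it candidate files and it reports how many of their pages are resident in the page cache. For example, to check likely suspects such as a database's data files (the paths are only illustrative, and option names differ slightly between the linux-ftools and util-linux versions of fincore):

    fincore /var/lib/mysql/ibdata1 /var/log/syslog
    # or sweep a whole directory tree:
    find /var/lib/mysql -type f | xargs fincore

Summing the reported resident pages over the files you care about gives a rough picture of what the cached 6GB consists of.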
{ "source": [ "https://serverfault.com/questions/278454", "https://serverfault.com", "https://serverfault.com/users/82373/" ] }
278,532
What exactly is the difference between a VPS (Virtual Private Server), a Cloud Server, and a Dedicated Server? I'm having trouble finding a concise explanation that isn't littered with advertising.
VPS and Cloud are the same damn thing. A dedicated server is a physical box sitting in a rack somewhere that is not shared with anyone else and that you can do whatever you want with.
{ "source": [ "https://serverfault.com/questions/278532", "https://serverfault.com", "https://serverfault.com/users/73570/" ] }
278,743
I have a home and work computer, the home computer has a static IP address. If I ssh from my work computer to my home computer, the ssh connection works but X11 applications are not displayed. In my /etc/ssh/sshd_config at home: X11Forwarding yes X11DisplayOffset 10 X11UseLocalhost yes At work I have tried the following commands: xhost + home HOME_IP ssh -X home ssh -X HOME_IP ssh -Y home ssh -Y HOME_IP My /etc/ssh/ssh_config at work: Host * ForwardX11 yes ForwardX11Trusted yes My ~/.ssh/config at work: Host home HostName HOME_IP User azat PreferredAuthentications password ForwardX11 yes My ~/.Xauthority at work: -rw------- 1 azat azat 269 Jun 7 11:25 .Xauthority My ~/.Xauthority at home: -rw------- 1 azat azat 246 Jun 7 19:03 .Xauthority But it doesn't work After I make an ssh connection to home: $ echo $DISPLAY localhost:10.0 $ kate X11 connection rejected because of wrong authentication. X11 connection rejected because of wrong authentication. X11 connection rejected because of wrong authentication. X11 connection rejected because of wrong authentication. X11 connection rejected because of wrong authentication. X11 connection rejected because of wrong authentication. X11 connection rejected because of wrong authentication. X11 connection rejected because of wrong authentication. kate: cannot connect to X server localhost:10.0 I use iptables at home, but I've allowed port 22. According to what I've read that's all I need. UPD. With -vvv ... debug2: callback start debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null debug1: Requesting X11 forwarding with authentication spoofing. debug2: channel 1: request x11-req confirm 1 debug2: client_session2_setup: id 1 debug2: fd 3 setting TCP_NODELAY debug2: channel 1: request pty-req confirm 1 ... When try to launch kate : debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384 debug1: client_request_x11: request from 127.0.0.1 55486 debug2: fd 8 setting O_NONBLOCK debug3: fd 8 is O_NONBLOCK debug1: channel 2: new [x11] debug1: confirm x11 debug2: X11 connection uses different authentication protocol. X11 connection rejected because of wrong authentication. debug2: X11 rejected 2 i0/o0 debug2: channel 2: read failed debug2: channel 2: close_read debug2: channel 2: input open -> drain debug2: channel 2: ibuf empty debug2: channel 2: send eof debug2: channel 2: input drain -> closed debug2: channel 2: write failed debug2: channel 2: close_write debug2: channel 2: output open -> closed debug2: X11 closed 2 i3/o3 debug2: channel 2: send close debug2: channel 2: rcvd close debug2: channel 2: is dead debug2: channel 2: garbage collecting debug1: channel 2: free: x11, nchannels 3 debug3: channel 2: status: The following connections are open: #1 client-session (t4 r0 i0/0 o0/0 fd 5/6 cc -1) #2 x11 (t7 r2 i3/0 o3/0 fd 8/8 cc -1) # The same as above repeate about 7 times kate: cannot connect to X server localhost:10.0 UPD2 Please provide your Linux distribution & version number. Are you using a default GNOME or KDE environment for X or something else you customized yourself? azat:~$ kded4 -version Qt: 4.7.4 KDE Development Platform: 4.6.5 (4.6.5) KDE Daemon: $Id$ Are you invoking ssh directly on a command line from a terminal window? What terminal are you using? xterm, gnome-terminal, or? How did you start the terminal running in the X environment? From a menu? Hotkey? or ? From terminal emulator `yakuake` Manualy press `Ctrl + N` and write commands Can you run xeyes from the same terminal window where the ssh -X fails? 
`xeyes` is not installed, but `kate` or another KDE app is running. Are you invoking the ssh command as the same user that you're logged into the X session as? From the same user. UPD3 I also downloaded the ssh sources and, using debug2(), logged why it reports that the version is different. It sees some cookies; one of them is empty, another is MIT-MAGIC-COOKIE-1
The reason ssh X forwarding wasn't working was because I have a /etc/ssh/sshrc config file. The end of the sshd(8) man page states: If ~/.ssh/rc exists, runs it; else if /etc/ssh/sshrc exists, runs it; otherwise runs xauth So I add the following commands to /etc/ssh/sshrc (also from the sshd man page) on the server side: if read proto cookie && [ -n "$DISPLAY" ]; then if [ `echo $DISPLAY | cut -c1-10` = 'localhost:' ]; then # X11UseLocalhost=yes echo add unix:`echo $DISPLAY | cut -c11-` $proto $cookie else # X11UseLocalhost=no echo add $DISPLAY $proto $cookie fi | xauth -q - fi And it works!
{ "source": [ "https://serverfault.com/questions/278743", "https://serverfault.com", "https://serverfault.com/users/41731/" ] }
279,068
I have an executable which needs to link with libtest.so dynamically,so I put them in the same directory,then : cd path_to_dir ./binary But got this: error while loading shared libraries: libtest.so: cannot open shared object file: No such file or directory How can it be unable to find libtest.so which is already in the same directory as the executable itself?
The loader never checks the current directory for shared objects unless it is explicitly directed to via $LD_LIBRARY_PATH . See the ld.so(8) man page for more details.
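So either point the loader at the directory explicitly, or bake a relative rpath into the binary at link time. A quick sketch (binary and library names as in the question; the main.o object file in the link line is assumed):

    cd path_to_dir
    LD_LIBRARY_PATH=. ./binary    # one-off: add the current directory to the loader's search path

    # or, when building, embed an rpath relative to the executable's own location:
    gcc main.o -L. -ltest -Wl,-rpath,'$ORIGIN' -o binary

With '$ORIGIN' the loader searches the directory containing the executable, so the binary and libtest.so can be moved around together.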
{ "source": [ "https://serverfault.com/questions/279068", "https://serverfault.com", "https://serverfault.com/users/84132/" ] }
279,164
I need to keep watch on how much bandwidth some connections are taking in a server, and I know I have seen a top-like tool for that before. However, I can't remember the name of the tool, and I'm not having much luck searching for it. So, is there a top-like tool for that? I'm running Debian.
iftop or pktstat -nT (for short-term monitoring) is what you need to do this (under *nix). For long-term monitoring, ntop is useful. Finding pktstat is a little tricky for those who aren't running a Debian / Ubuntu box, but this is a decent pktstat source-code archive. Use tcpview if you want the same kind of stats under Windows.
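For example, to watch per-connection bandwidth on a specific interface without DNS lookups and with port numbers shown (the interface name is just an example):

    iftop -n -P -i eth0

-n skips reverse DNS resolution, -P displays port numbers, and -i selects the interface to monitor.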
{ "source": [ "https://serverfault.com/questions/279164", "https://serverfault.com", "https://serverfault.com/users/9060/" ] }
279,168
I currently have a database setup where my MSSQL 2008 R2 server runs on my local network, but the program runs on a notebook that is normally mobile and away from the network. VPN works, but I'm looking for a solution that that runs a local MSSQl 2008 R2 server on the local machine that syncs with the main server when the mobile system reconnects to the network, either through VPN or coming back on the network. The notebook is running Windows 7 Professional 64 bit, the server Server 2008 Enterprise 64 bit.
{ "source": [ "https://serverfault.com/questions/279168", "https://serverfault.com", "https://serverfault.com/users/78333/" ] }
279,361
iptables doesn't seem to recognize --dport with -p all . iptables -A INPUT -p all --dport www -j ACCEPT yields: iptables v1.4.4: unknown option `--dport' Try `iptables -h' or 'iptables --help' for more information. --destination-port doesn't work either: iptables v1.4.4: unknown option `--destination-port' Adding two separate rules for -p tcp and -p udp works fine, so why doesn't it work for -p all ? In case it matters, this is on an Ubuntu 10.04 LTS Server with iptables package version 1.4.4-2ubuntu2
--dport is not a flag for general iptables rules. It's a flag for one of its extended packet matching modules. These are loaded when you use -p protocol or -m. Unless you specify -m <protocol> or -p <protocol> with a specific protocol, you can't use --dport. You'll see this within the iptables(8) or iptables-extensions(8) manual page: tcp These extensions can be used if `--protocol tcp' is specified. It provides the following options: ... [!] --destination-port,--dport port[:port] Destination port or port range specification. The flag --dport is a convenient alias for this option. ... Not all protocols have a --dport flag because not all protocols support the notion of ports
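So the usual pattern is one rule per protocol, or the multiport match when several ports are involved (port 80 here is just an example):

    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p udp --dport 80 -j ACCEPT
    # several TCP ports in a single rule:
    iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT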
{ "source": [ "https://serverfault.com/questions/279361", "https://serverfault.com", "https://serverfault.com/users/84226/" ] }
279,366
Does anyone know why my /var/run/mysqld/mysqld.sock socket file would not be on my computer when I install (or reinstall) MySQL 5.1? Right now, when I try to start up a server with mysqld, I get errors like Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect , but creating a blank file with that name (as suggested on the ubuntu forums) was unsuccessful. I had both mysql and postgres serving fine until I upgraded to natty a little while ago; I have spent hours walking through both databases trying to figure out what is going on. I can give up on postgres, but I cannot work without a working copy of mysql. The weirdest part: I use Kubuntu, and my understanding is that KDE uses mysql to store user permissions, etc. I am not experiencing any weird permissions issues; can I take this to mean that (somehow?) MySQL is actually working? Maybe these socket files live in a different place in natty? Would it be easier to just reinstall the os fresh? At this point, I am open to any suggestions that will stop wasting my time.
A socket file doesn't actually contain data; it transports it. It is a special, unusual type of file created with special system calls/commands. It is not an ordinary file. It is like a pipe the server and the clients can use to connect and exchange requests and data. Also, it is only used locally. Its significance is merely as an agreed rendezvous location in the filesystem. Creating a plain old file and putting it in that location may actually interfere with the server creating it... and thereby prevent local clients from connecting to the server. My recommendation is to remove any file you put in that location. The special socket file is created by the server.
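A real MySQL socket shows up with an "s" type in a long listing, e.g. "srwxrwxrwx 1 mysql mysql 0 ... /var/run/mysqld/mysqld.sock", and it only exists while mysqld is running. So the recovery sequence is roughly (socket path as in the question; the service command may differ on other distributions):

    sudo rm /var/run/mysqld/mysqld.sock              # remove the plain file created by hand, if any
    sudo service mysql restart                       # let mysqld recreate the socket itself
    mysql --protocol=TCP -h 127.0.0.1 -u root -p     # or bypass the socket entirely over TCP

If the socket still does not appear, the server is not actually starting; check its error log (commonly under /var/log/mysql/) for the real cause.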
{ "source": [ "https://serverfault.com/questions/279366", "https://serverfault.com", "https://serverfault.com/users/84237/" ] }
279,482
I have never had the privilege of working in an environment that required complicated routing or if it did require it, it was handled upstream of me. I've always used very simple static routing configurations and never needed to do any multipath routing -- hence my general confusion regarding this subject. I would like to understand multicasting and anycasting better. What is the difference between unicast, anycast, broadcast and multicast traffic? What situations are they generally used in and why (e.g., what applications use which method)? How do you calculate how much broadcast traffic is too much for a given network segment or broadcast domain? What are the security implications of allowing broadcast and multicast traffic?
Simply put: ------------------------------------------------------------ | TYPE | ASSOCIATIONS | SCOPE | EXAMPLE | ------------------------------------------------------------ | Unicast | 1 to 1 | Whole network | HTTP | ------------------------------------------------------------ | Broadcast | 1 to Many | Subnet | ARP | ------------------------------------------------------------ | Multicast | One/Many to Many | Defined horizon | SLP | ------------------------------------------------------------ | Anycast | Many to Few | Whole network | 6to4 | ------------------------------------------------------------ Unicast is used when two network nodes need to talk to each other. This is pretty straight forward, so I'm not going to spend much time on it. TCP by definition is a Unicast protocol, except when there is Anycast involved (more on that below). When you need to have more than two nodes see the traffic, you have options. If all of the nodes are on the same subnet, then broadcast becomes a viable solution. All nodes on the subnet will see all traffic. There is no TCP-like connection state maintained. Broadcast is a layer 2 feature in the Ethernet protocol, and also a layer 3 feature in IPv4. Multicast is like a broadcast that can cross subnets, but unlike broadcast does not touch all nodes. Nodes have to subscribe to a multicast group to receive information. Multicast protocols are usually UDP protocols, since by definition no connection-state can be maintained. Nodes transmitting data to a multicast group do not know what nodes are receiving. By default, Internet routers do not pass Multicast traffic. For internal use, though, it is perfectly allowed; thus, "Defined horizon" in the above chart. Multicast is a layer 3 feature of IPv4 & IPv6. To use anycast you advertise the same network in multiple spots of the Internet, and rely on shortest-path calculations to funnel clients to your multiple locations. As far the network nodes themselves are concerned, they're using a unicast connection to talk to your anycasted nodes. For more on Anycast, try: What is "anycast" and how is it helpful? . Anycast is also a layer 3 feature, but is a function of how route-coalescing happens. Examples Some examples of how the non-Unicast methods are used in the real Internet. Broadcast ARP is a broadcast protocol, and is used by TCP/IP stacks to determine how to send traffic to other nodes on the network. If the destination is on the same subnet, ARP is used to figure out the MAC address that goes to the stated IP address. This is a Level 2 (Ethernet) broadcast, to the reserved FF:FF:FF:FF:FF:FF MAC address. Also, Microsoft's machine browsing protocol is famously broadcast based. Work-arounds like WINS were created to allow cross-subnet browsing. This involves a Level 3 (IP) broadcast, which is an IP packet with the Destination address listed as the broadcast address of the subnet (in 192.168.101.0/24, the broadcast address would be 192.168.101.255). The NTP protocol allows a broadcast method for announcing time sources. Multicast Inside a corporate network, Multicast can deliver live video to multiple nodes without having to have massive bandwidth on the part of the server delivering the video feed. This way you can have a video server feeding a 720p stream on only a 100Mb connection, and yet still serve that feed to 3000 clients. When Novell moved away from IPX and to IP, they had to pick a service-advertising protocol to replace the SAP protocol in IPX. 
In IPX, the Service Advertising Protocol, did a network-wide announcement every time it announced a service was available. As TCP/IP lacked such a global announcement protocol, Novell chose to use a Multicast based protocol instead: the Service Location Protocol. New servers announce their services on the SLP multicast group. Clients looking for specific types of services announce their need to the multicast group and listen for unicasted replies. HP printers announce their presence on a multicast group by default. With the right tools, it makes it real easy to learn what printers are available on your network. The NTP protocol also allows a multicast method (IP 224.0.1.1) for announcing time sources to areas beyond just the one subnet. Anycast Anycast is a bit special since Unicast layers on top of it. Anycast is announcing the same network in different parts of the network, in order to decrease the network hops needed to get to that network. The 6to4 IPv6 transition protocol uses Anycast. 6to4 gateways announce their presence on a specific IP, 192.88.99.1. Clients looking to use a 6to4 gateway send traffic to 192.88.99.1 and trust the network to deliver the connection request to a 6to4 router. NTP services for especially popular NTP hosts may very well be anycasted, but I don't have proof of this. There is nothing in the protocol to prevent it. Other services use Anycast to improve data locality to end users. Google does Anycast with its search pages in some places (and geo-IP in others). The Root DNS servers use Anycast for similar reasons. ServerFault itself just might go there, they do have datacenters in New York and Oregon, but hasn't gone there yet. Network concerns Excessive broadcast traffic can rob all nodes in that subnet of bandwidth. This is less of a concern these days with full-duplex GigE ports, but back in the half-duplex 10Mb days a broadcast storm could bring a network to a halt real fast. Those half-duplex networks with one big collision domain across all nodes were especially vulnerable to broadcast storms, which is why networking books, especially older ones, say to keep an eye on broadcast traffic. Switched/Full-Duplex networks are a lot harder to bring to a halt with a broadcast storm, but it can still happen. Broadcast is required for correct functioning of IP networks. Multicast has the same possibility for abuse. If one node on the multicast group starts sending huge amounts of traffic to that group, all subscribed nodes will see all of that traffic. As with broadcast, excessive Mcast traffic can increase the possibilities of collisions on such connections where that is a problem. Multicast is an optional feature with IPv4, but required for IPv6. The IPv4 broadcast is replaced by multicast in IPv6 (See also: Why can't IPv6 send broadcasts? ). It is frequently turned off on IPv4 networks. Not coincidentally, enabling multicast is one of the many reasons network-engineers are leery of moving to IPv6 before they have to do it. Calculating how much traffic is too much traffic depends on a few things Half vs Full Duplex: Half-duplex networks have much lower tolerances for bcast/mcast traffic. Speed of network ports: The faster your network, the less of an issue this becomes. In the 10Mb ethernet days 5-10% of traffic on a port could be bcast traffic, if not more, but on GigE less than 1% (probably way less) is more likely. Number of nodes on the network: The more nodes you have, the more unavoidable broadcast traffic you'll incur (ARP). 
If you have broadcast specific protocols in use, Windows browsing or other things like cluster heartbeats, where problems start will change. Network technology: Wired Ethernet is fast enough that so long as you have modern gear driving it, bcast/mcast isn't likely to cause you problems. Wireless, on the other hand, can suffer from excessive broadcast traffic as it is a shared medium amongst all nodes and therefore in a single collision domain. In the end, Bcast and Mcast traffic rob ports of bandwidth off the top. When you start to worry is highly dependent on your individual network and tolerance for variable performance. In general, network-node counts haven't scaled as fast as network speeds so the overall broadcast percentage-as-traffic number has been dropping over time. Some networks disallow Multicast for specific reasons, and others have never taken the time to set it up. There are some multicast protocols that can reveal interesting information (SLP is one such) to anyone listening for the right things. Personally , I don't mind minor multicast traffic as the biggest annoyance I've seen with it is polluted network captures when I'm doing some network analysis; and for that there are filters.
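To get a feel for how much of this traffic is actually on your segment, a packet capture filtered to just broadcast and multicast frames is a quick check (the interface name is an example):

    tcpdump -n -i eth0 'broadcast or multicast'

Counting that output over a minute or two and comparing it to total traffic gives a rough broadcast/multicast percentage for the port.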
{ "source": [ "https://serverfault.com/questions/279482", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
279,493
I have an Apache http server running on my computer, listening to port 80. I can access my "hello world" page by visiting http://localhost/ in my browser. I've also set up a static IP address, so that I can access the same page by visiting http://192.168.0.35/ I recently changed apartments/roommates, so I've also switched routers. Before I moved, I was using a Netgear router (I can't remember the exact model), but I was able to access my page via the external IP address, just by forwarding port 80 to my computer. Now I'm on a SMC8014W-G, so I followed this guide to set up the forwarding. I also used their port checker program to verify whether the ports (80, 443-just-in-case) were being forwarded. According to that program, they are being properly forwarded (and I checked a few random other ports, to make sure it wasn't giving me false positives). As far as I can tell, I've done everything I need to do to make it so I can access my site from my external IP address, but it's just not working! Have I missed some crucial step? What could be wrong?
{ "source": [ "https://serverfault.com/questions/279493", "https://serverfault.com", "https://serverfault.com/users/84357/" ] }
279,495
At a customer site we have really slow insertion into the database (three times slower than normal). I cannot see the reason why it is happening. I checked the network and it seems to be OK. Any suggestions as to what could be wrong with the DB here, and how I can check it? We are using SQL Server 2005. Thanks.
{ "source": [ "https://serverfault.com/questions/279495", "https://serverfault.com", "https://serverfault.com/users/81964/" ] }
279,571
I have recently started using LVM on some servers for hard drives larger than 1 TB. They're useful, expandable and quite easy to install. However, I could not find any data about the dangers and caveats of LVM. What are the downsides of using LVM?
Summary Risks of using LVM: Vulnerable to write caching issues with SSD or VM hypervisor Harder to recover data due to more complex on-disk structures Harder to resize filesystems correctly Snapshots are hard to use, slow and buggy Requires some skill to configure correctly given these issues The first two LVM issues combine: if write caching isn't working correctly and you have a power loss (e.g. PSU or UPS fails), you may well have to recover from backup, meaning significant downtime. A key reason for using LVM is higher uptime (when adding disks, resizing filesystems, etc), but it's important to get the write caching setup correct to avoid LVM actually reducing uptime. -- Updated Dec 2019: minor update on btrfs and ZFS as alternatives to LVM snapshots Mitigating the risks LVM can still work well if you: Get your write caching setup right in hypervisor, kernel, and SSDs Avoid LVM snapshots Use recent LVM versions to resize filesystems Have good backups Details I've researched this quite a bit in the past having experienced some data loss associated with LVM. The main LVM risks and issues I'm aware of are: Vulnerable to hard disk write caching due to VM hypervisors, disk caching or old Linux kernels , and makes it harder to recover data due to more complex on-disk structures - see below for details. I have seen complete LVM setups on several disks get corrupted without any chance of recovery, and LVM plus hard disk write caching is a dangerous combination. Write caching and write re-ordering by the hard drive is important to good performance, but can fail to flush blocks to disk correctly due to VM hypervisors, hard drive write caching, old Linux kernels, etc. Write barriers mean the kernel guarantees that it will complete certain disk writes before the "barrier" disk write, to ensure that filesystems and RAID can recover in the event of a sudden power loss or crash. Such barriers can use a FUA (Force Unit Access) operation to immediately write certain blocks to the disk, which is more efficient than a full cache flush. Barriers can be combined with efficient tagged / native command queuing (issuing multiple disk I/O requests at once) to enable the hard drive to perform intelligent write re-ordering without increasing risk of data loss. VM hypervisors can have similar issues: running LVM in a Linux guest on top of a VM hypervisor such as VMware, Xen , KVM, Hyper-V or VirtualBox can create similar problems to a kernel without write barriers, due to write caching and write re-ordering. Check your hypervisor documentation carefully for a "flush to disk" or write-through cache option (present in KVM , VMware , Xen , VirtualBox and others) - and test it with your setup. Some hypervisors such as VirtualBox have a default setting that ignores any disk flushes from the guest. Enterprise servers with LVM should always use a battery backed RAID controller and disable the hard disk write caching (the controller has battery backed write cache which is fast and safe) - see this comment by the author of this XFS FAQ entry . It may also be safe to turn off write barriers in the kernel, but testing is recommended. If you don't have a battery-backed RAID controller, disabling hard drive write caching will slow writes significantly but make LVM safe. You should also use the equivalent of ext3's data=ordered option (or data=journal for extra safety), plus barrier=1 to ensure that kernel caching doesn't affect integrity. (Or use ext4 which enables barriers by default .) 
This is the simplest option and provides good data integrity at cost of performance. (Linux changed the default ext3 option to the more dangerous data=writeback a while back, so don't rely on the default settings for the FS.) To disable hard drive write caching : add hdparm -q -W0 /dev/sdX for all drives in /etc/rc.local (for SATA) or use sdparm for SCSI/SAS. However, according to this entry in the XFS FAQ (which is very good on this topic), a SATA drive may forget this setting after a drive error recovery - so you should use SCSI/SAS, or if you must use SATA then put the hdparm command in a cron job running every minute or so. To keep SSD / hard drive write caching enabled for better performance: this is a complex area - see section below. If you are using Advanced Format drives i.e. 4 KB physical sectors, see below - disabling write caching may have other issues. UPS is critical for both enterprise and SOHO but not enough to make LVM safe: anything that causes a hard crash or a power loss (e.g. UPS failure, PSU failure, or laptop battery exhaustion) may lose data in hard drive caches. Very old Linux kernels (2.6.x from 2009) : There is incomplete write barrier support in very old kernel versions, 2.6.32 and earlier ( 2.6.31 has some support , while 2.6.33 works for all types of device target) - RHEL 6 uses 2.6.32 with many patches. If these old 2.6 kernels are unpatched for these issues, a large amount of FS metadata (including journals) could be lost by a hard crash that leaves data in the hard drives' write buffers (say 32 MB per drive for common SATA drives). Losing 32MB of the most recently written FS metadata and journal data, which the kernel thinks is already on disk, usually means a lot of FS corruption and hence data loss. Summary: you must take care in the filesystem, RAID, VM hypervisor, and hard drive/SSD setup used with LVM. You must have very good backups if you are using LVM, and be sure to specifically back up the LVM metadata, physical partition setup, MBR and volume boot sectors. It's also advisable to use SCSI/SAS drives as these are less likely to lie about how they do write caching - more care is required to use SATA drives. Keeping write caching enabled for performance (and coping with lying drives) A more complex but performant option is to keep SSD / hard drive write caching enabled and rely on kernel write barriers working with LVM on kernel 2.6.33+ (double-check by looking for "barrier" messages in the logs). You should also ensure that the RAID setup, VM hypervisor setup and filesystem uses write barriers (i.e. requires the drive to flush pending writes before and after key metadata/journal writes). XFS does use barriers by default, but ext3 does not , so with ext3 you should use barrier=1 in the mount options, and still use data=ordered or data=journal as above. Unfortunately, some hard drives and SSDs lie about whether they have really flushed their cache to the disk (particularly older drives, but including some SATA drives and some enterprise SSDs ) - more details here . There is a great summary from an XFS developer . There's a simple test tool for lying drives (Perl script), or see this background with another tool testing for write re-ordering as a result of the drive cache. This answer covered similar testing of SATA drives that uncovered a write barrier issue in software RAID - these tests actually exercise the whole storage stack. 
More recent SATA drives supporting Native Command Queuing (NCQ) may be less likely to lie - or perhaps they perform well without write caching due to NCQ, and very few drives cannot disable write caching. SCSI/SAS drives are generally OK as they don't need write caching to perform well (through SCSI Tagged Command Queuing , similar to SATA's NCQ). If your hard drives or SSDs do lie about flushing their cache to disk, you really can't rely on write barriers and must disable write caching. This is a problem for all filesystems, databases, volume managers, and software RAID , not just LVM. SSDs are problematic because the use of write cache is critical to the lifetime of the SSD. It's best to use an SSD that has a supercapacitor (to enable cache flushing on power failure, and hence enable cache to be write-back not write-through). Most enterprise SSDs should be OK on write cache control, and some include supercapacitors. Some cheaper SSDs have issues that can't be fixed with write-cache configuration - the PostgreSQL project's mailing list and Reliable Writes wiki page are good sources of information. Consumer SSDs can have major write caching problems that will cause data loss, and don't include supercapacitors so are vulnerable to power failures causing corruption. Advanced Format drive setup - write caching, alignment, RAID, GPT With newer Advanced Format drives that use 4 KiB physical sectors, it may be important to keep drive write caching enabled, since most such drives currently emulate 512 byte logical sectors ( "512 emulation" ), and some even claim to have 512-byte physical sectors while really using 4 KiB. Turning off the write cache of an Advanced Format drive may cause a very large performance impact if the application/kernel is doing 512 byte writes, as such drives rely on the cache to accumulate 8 x 512-byte writes before doing a single 4 KiB physical write. Testing is recommended to confirm any impact if you disable the cache. Aligning the LVs on a 4 KiB boundary is important for performance but should happen automatically as long as the underlying partitions for the PVs are aligned, since LVM Physical Extents (PEs) are 4 MiB by default. RAID must be considered here - this LVM and software RAID setup page suggests putting the RAID superblock at the end of the volume and (if necessary) using an option on pvcreate to align the PVs. This LVM email list thread points to the work done in kernels during 2011 and the issue of partial block writes when mixing disks with 512 byte and 4 KiB sectors in a single LV. GPT partitioning with Advanced Format needs care, especially for boot+root disks, to ensure the first LVM partition (PV) starts on a 4 KiB boundary. Harder to recover data due to more complex on-disk structures : Any recovery of LVM data required after a hard crash or power loss (due to incorrect write caching) is a manual process at best, because there are apparently no suitable tools. LVM is good at backing up its metadata under /etc/lvm , which can help restore the basic structure of LVs, VGs and PVs, but will not help with lost filesystem metadata. Hence a full restore from backup is likely to be required. This involves a lot more downtime than a quick journal-based fsck when not using LVM, and data written since the last backup will be lost. TestDisk , ext3grep , ext3undel and other tools can recover partitions and files from non-LVM disks but they don't directly support LVM data recovery. 
TestDisk can discover that a lost physical partition contains an LVM PV, but none of these tools understand LVM logical volumes. File carving tools such as PhotoRec and many others would work as they bypass the filesystem to re-assemble files from data blocks, but this is a last-resort, low-level approach for valuable data, and works less well with fragmented files. Manual LVM recovery is possible in some cases, but is complex and time consuming - see this example and this , this , and this for how to recover. Harder to resize filesystems correctly - easy filesystem resizing is often given as a benefit of LVM, but you need to run half a dozen shell commands to resize an LVM based FS - this can be done with the whole server still up, and in some cases with the FS mounted, but I would never risk the latter without up to date backups and using commands pre-tested on an equivalent server (e.g. disaster recovery clone of production server). Update: More recent versions of lvextend support the -r ( --resizefs ) option - if this is available, it's a safer and quicker way to resize the LV and the filesystem, particularly if you are shrinking the FS, and you can mostly skip this section. Most guides to resizing LVM-based FSs don't take account of the fact that the FS must be somewhat smaller than the size of the LV: detailed explanation here . When shrinking a filesystem, you will need to specify the new size to the FS resize tool, e.g. resize2fs for ext3, and to lvextend or lvreduce . Without great care, the sizes may be slightly different due to the difference between 1 GB (10^9) and 1 GiB (2^30), or the way the various tools round sizes up or down. If you don't do the calculations exactly right (or use some extra steps beyond the most obvious ones), you may end up with an FS that is too large for the LV. Everything will seem fine for months or years, until you completely fill the FS, at which point you will get serious corruption - and unless you are aware of this issue it's hard to find out why, as you may also have real disk errors by then that cloud the situation. (It's possible this issue only affects reducing the size of filesystems - however, it's clear that resizing filesystems in either direction does increase the risk of data loss, possibly due to user error.) It seems that the LV size should be larger than the FS size by 2 x the LVM physical extent (PE) size - but check the link above for details as the source for this is not authoritative. Often allowing 8 MiB is enough, but it may be better to allow more, e.g. 100 MiB or 1 GiB, just to be safe. To check the PE size, and your logical volume+FS sizes, using 4 KiB = 4096 byte blocks: Shows PE size in KiB: vgdisplay --units k myVGname | grep "PE Size" Size of all LVs: lvs --units 4096b Size of (ext3) FS, assumes 4 KiB FS blocksize: tune2fs -l /dev/myVGname/myLVname | grep 'Block count' By contrast, a non-LVM setup makes resizing the FS very reliable and easy - run Gparted and resize the FSs required, then it will do everything for you. On servers, you can use parted from the shell. It's often best to use the Gparted Live CD or Parted Magic , as these have a recent and often more bug-free Gparted & kernel than the distro version - I once lost a whole FS due to the distro's Gparted not updating partitions properly in the running kernel. If using the distro's Gparted, be sure to reboot right after changing partitions so the kernel's view is correct. 
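To make the safer lvextend -r route above concrete, here is a minimal sketch for growing a volume - the VG/LV names and the size are just assumptions: 
lvextend --resizefs -L +10G /dev/myVGname/myLVname   # grows the LV by 10 GiB and resizes the ext3/ext4 FS in the same step 
Growing in this direction is far less risky than shrinking, since the filesystem can never end up larger than the LV this way.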
Snapshots are hard to use, slow and buggy - if snapshot runs out of pre-allocated space it is automatically dropped . Each snapshot of a given LV is a delta against that LV (not against previous snapshots) which can require a lot of space when snapshotting filesystems with significant write activity (every snapshot is larger than the previous one). It is safe to create a snapshot LV that's the same size as the original LV, as the snapshot will then never run out of free space. Snapshots can also be very slow (meaning 3 to 6 times slower than without LVM for these MySQL tests ) - see this answer covering various snapshot problems . The slowness is partly because snapshots require many synchronous writes . Snapshots have had some significant bugs, e.g. in some cases they can make boot very slow, or cause boot to fail completely (because the kernel can time out waiting for the root FS when it's an LVM snapshot [fixed in Debian initramfs-tools update, Mar 2015]). However, many snapshot race condition bugs were apparently fixed by 2015. LVM without snapshots generally seems quite well debugged, perhaps because snapshots aren't used as much as the core features. Snapshot alternatives - filesystems and VM hypervisors VM/cloud snapshots: If you are using a VM hypervisor or an IaaS cloud provider (e.g. VMware, VirtualBox or Amazon EC2/EBS), their snapshots are often a much better alternative to LVM snapshots. You can quite easily take a snapshot for backup purposes (but consider freezing the FS before you do). Filesystem snapshots: filesystem level snapshots with ZFS or btrfs are easy to use and generally better than LVM, if you are on bare metal (but ZFS seems a lot more mature, just more hassle to install): ZFS: there is now a kernel ZFS implementation , which has been in use for some years, and ZFS seems to be gaining adoption. Ubuntu now has ZFS as an 'out of the box' option, including experimental ZFS on root in 19.10 . btrfs: still not ready for production use ( even on openSUSE which ships it by default and has team dedicated to btrfs), whereas RHEL has stopped supporting it). btrfs now has an fsck tool (FAQ) , but the FAQ recommends you to consult a developer if you need to fsck a broken filesystem. Snapshots for online backups and fsck Snapshots can be used to provide a consistent source for backups, as long as you are careful with space allocated (ideally the snapshot is the same size as the LV being backed up). The excellent rsnapshot (since 1.3.1) even manages the LVM snapshot creation/deletion for you - see this HOWTO on rsnapshot using LVM . However, note the general issues with snapshots and that a snapshot should not be considered a backup in itself. You can also use LVM snapshots to do an online fsck: snapshot the LV and fsck the snapshot, while still using the main non-snapshot FS - described here - however, it's not entirely straightforward so it's best to use e2croncheck as described by Ted Ts'o , maintainer of ext3. You should "freeze" the filesystem temporarily while taking the snapshot - some filesystems such as ext3 and XFS will do this automatically when LVM creates the snapshot. Conclusions Despite all this, I do still use LVM on some systems, but for a desktop setup I prefer raw partitions. The main benefit I can see from LVM is the flexibility of moving and resizing FSs when you must have high uptime on a server - if you don't need that, gparted is easier and has less risk of data loss. 
LVM requires great care on write caching setup due to VM hypervisors, hard drive / SSD write caching, and so on - but the same applies to using Linux as a DB server. The lack of support from most tools ( gparted including the critical size calculations, and testdisk etc) makes it harder to use than it should be. If using LVM, take great care with snapshots: use VM/cloud snapshots if possible, or investigate ZFS/btrfs to avoid LVM completely - you may find ZFS or btrfs is sufficiently mature compared to LVM with snapshots. Bottom line: If you don't know about the issues listed above and how to address them, it's best not to use LVM.
{ "source": [ "https://serverfault.com/questions/279571", "https://serverfault.com", "https://serverfault.com/users/10904/" ] }
279,780
Is a self-signed SSL certificate a false sense of security? If you are being eavesdropped on, the user will simply accept the certificate like he/she always does.
Interesting question; it depends on the use, in my opinion. You're still protected in the sense that the session is encrypted, but you have no way of telling whether the correct SSL cert is being presented to you unless you distribute your CA root cert to users/clients. For internal testing/dev projects this works great: you generate a root CA certificate that you distribute to your users (this can be done via Group Policy in Windows and via the openssl command line in Linux/BSD) and then use that root cert to sign your CSRs. Users will not see a warning or anything, and you know the certificate is signed by your internal CA. For external sites where you cannot assure this, I'd still say a self-signed cert is better than no SSL at all if you are sending passwords or other sensitive information over the connection. However, on the plus side there are plenty of very cheap "commercial" certificate issuers, GoDaddy being one of them. You can get a cert for about 40 euros per year. GoDaddy even offers free certs to open-source project websites.
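For the internal-CA approach above, a rough openssl sketch - the file names, subjects and lifetimes are assumptions, not a hardened CA setup: 
openssl genrsa -out ca.key 4096 
openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=Internal Test CA"   # the root cert you distribute via Group Policy etc. 
openssl genrsa -out server.key 2048 
openssl req -new -key server.key -out server.csr -subj "/CN=dev.example.com" 
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -days 825 -out server.crt   # sign the CSR with the internal CA 
Clients that trust ca.crt will then accept server.crt without warnings.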
{ "source": [ "https://serverfault.com/questions/279780", "https://serverfault.com", "https://serverfault.com/users/58791/" ] }
279,803
I have a backup mail server in case of a failure on the main one. In that failure case, mails arrive on the backup server and stay there until the main one is back. If I wait some time, delivery happens automatically once the main server is back, but it can take a while. So how do I force a send retry of all the mails? For example: postqueue -p gives me a list of mails. I then tried postqueue -f (from the man page: "Flush the queue: attempt to deliver all queued mail."). It certainly flushed the queue, but the mails were not delivered...
According to postqueue(1) you can simply run postqueue -f to flush your mail queue. If the mails aren't delivered after flushing the queue but are being requeued instead, you might want to check your mail logs for errors. Taking a peek at postsuper(1) might also be helpful. Maybe the messages are on hold and need to be released first.
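If the backup MX put messages on hold, a sketch of the release-then-flush sequence (this assumes everything that is held should actually go out): 
postqueue -p        # queue IDs followed by '!' are on hold 
postsuper -H ALL    # release all held messages back into the queue 
postqueue -f        # then ask the queue manager to attempt delivery of everything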
{ "source": [ "https://serverfault.com/questions/279803", "https://serverfault.com", "https://serverfault.com/users/84368/" ] }
280,099
Im trying to install an rpm file on CentOS 5 and Im not sure how to resolve this issues it brings up: $ rpm --install epel-release-6-5.noarch.rpm warning: epel-release-6-5.noarch.rpm: Header V3 RSA/SHA256 signature: NOKEY, key ID 0608b895 error: Failed dependencies: rpmlib(FileDigests) <= 4.6.0-1 is needed by epel-release-6-5.noarch rpmlib(PayloadIsXz) <= 5.2-1 is needed by epel-release-6-5.noarch What do the lines rpmlib(FileDigests) <= 4.6.0-1 mean? is rpmlib out of date or FileDigests out of date? Whats with the syntax of something followed by parentheses? Ive tried to use yum so that it can resolve dependencies automatically but it is unable: $ sudo yum --nogpgcheck install epel-release-6-5.noarch.rpm ... Running rpm_check_debug ERROR with rpm_check_debug vs depsolve: rpmlib(FileDigests) is needed by epel-release-6-5.noarch rpmlib(PayloadIsXz) is needed by epel-release-6-5.noarch Complete! (1, [u'Please report this error in https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%205&component=yum']) On this page https://bugzilla.redhat.com/show_bug.cgi?id=665073 , they say my rpm is out of date but then say I should request an rpm file that works with my version of rpm (which is 4.4.2.3) but I don't want to do that. How do I make my system compatible with this rpm file? Bonus points if you tell me how I can fix the public key error.
Whats with the syntax of something followed by parentheses? From http://jfearn.fedorapeople.org/en-US/RPM/0.1/html/RPM_Guide/ch-advanced-packaging.html : Scripting languages such as Perl and Tcl allow for add-on modules. Your package may require some of these add-on modules. RPM uses a special syntax with parenthesis to indicate script module dependencies. For example: Requires: perl(Carp) >= 3.2 This indicates a requirement for the Carp add-on module for Perl, greater than or equal to version 3.2. In this case, it is referring to particular features of the rpm library. error: Failed dependencies: rpmlib(FileDigests) <= 4.6.0-1 is needed by epel-release-6-5.noarch rpmlib(PayloadIsXz) <= 5.2-1 is needed by epel-release-6-5.noarch This suggests you're trying to install the epel-release rpm on a system for which it was not designed. In fact, in your question, you state you're installing this on CentOS 5, while the package you're attempting to install is designed for CentOS 6 (or RHEL 6). For CentOS 5, you want epel-release-5-4.noarch.rpm . You might want to read the EPEL documentation before you proceed, which would have answered this question as well as others you might have. Ive tried to use yum so that it can resolve dependencies automatically but it is unable: Right, because those features aren't available on CentOS 5. From the perspective of yum you've asked it for magic unicorns. It can't find any. Bonus points if you tell me how I can fix the public key error. Install the EPEL signing key. If you read the EPEL documentation -- it's amazing what you'll find there -- you'll get a link to https://fedoraproject.org/keys , which includes instructions on installing the public keys used by the Fedora project.
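For the bonus question, a hedged sketch - the exact key file name depends on the epel-release version, so treat the path as an assumption and check /etc/pki/rpm-gpg/ after the install: 
rpm -Uvh epel-release-5-4.noarch.rpm             # the EL5 package, not the EL6 one 
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL*  # import the key it ships so the NOKEY warning goes away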
{ "source": [ "https://serverfault.com/questions/280099", "https://serverfault.com", "https://serverfault.com/users/83407/" ] }
280,237
I've got another interesting one. I'm about to backup and reinstall the HR Administrator's PC. I suspect that the fastest way to do this is to use the Windows 7 Transfer tool, and create a backup of the entire Users and Settings profiles on the NAS. I don't see a problem with this. She claims that nobody else should be able to see the information on her computer. Fair enough. I think that the systems administrator (me), should be in a significant enough level of trust to be able to make a backup, no questions asked, and delete the backup once the task is complete. Her view is, that nobody (not even the other directors) should be able to view the HR documentation on her PC. We already have a semi-backup (files, not user-state) on box.net, which does allow granular access to various users. Questions: 1) Which one of us is nuts, her or me? 2) Do you trust your sysadmins to take backups of company policy / HR files? 3) Does anyone have a LART?
My opinion on this may not be popular here but I think she's right, HR is a very specific role in most businesses, requiring one very key skill - absolute discretion. IT people have to have a very wide range of skills and while discretion is important it's not the 'be all and end all' that it is with HR. Typically recruitment of IT people is less thorough in this area too. Perhaps there's a technical solution to this, how about getting your HR people to backup their own stuff to encrypted external disks that they own/manage/store? Ultimately you have to protect yourself, if there's no way you could get at HR data then you're in the clear, if your management see that you've tried your best and provided as secure and private a means to functionally get your job done without exposing yourself to accusations of data prying then they'll be happy - even if the process is clunky and slow. Basically don't be afraid to cover your own arse in this area - most people will understand and the HR people will appreciate that you're respecting their role and authority. Plus of course you should never piss off HR anyway, these ninny's help decide your fate for some crazy reason :)
{ "source": [ "https://serverfault.com/questions/280237", "https://serverfault.com", "https://serverfault.com/users/16732/" ] }
280,482
Is it possible to use psexec to execute a command on a remote machine without having admin privileges on the remote machine? I tried running psexec \\<machine> -u <username> -p <password> , where <username> and <password> are non-admin credentials, but I get an "access denied" error I can remote desktop into the remote machine with the same credentials without any problems. My local machine is running Windows 7 Enterprise 64-bit, and the remote machine is running Windows Server 2008 64-bit. I do have admin privileges on the local machine. EDIT : To all the people who are downvoting this question: I am not trying to circumvent any sort of security measure. I can already run the process on the remote machine by remote desktop-ing into the remote machine and running it. I'm simply looking for a command-line way to do something I can already do through a GUI.
As found at: https://stackoverflow.com/questions/534426/psexec-help-needed You need to have admin rights on the target because part of what psexec does is start up a Windows service on the target, and you need admin rights to be able to do that. psexec copies a psexecsvc file to the admin share and then, using remote management, starts up a service from that file. It opens named pipes and uses those for further communication. When it's finished it tidies up after itself. Although I can't find official documentation that says the same thing.
{ "source": [ "https://serverfault.com/questions/280482", "https://serverfault.com", "https://serverfault.com/users/84571/" ] }
281,230
I understand that, to get failover on an HAProxy load balancing setup, you need two machines running HAproxy (and route it to several webserver instances). But in this case, say abcd.com, how do we split/route this traffic to 2 IP addresses instead of one? DNS usually resolves domain names to a single IP. How do we do this in using free/cheap tools/services?
If you have so much load that you need to load balance across two haproxy instances then DNS round robin isn't a bad idea (I would be surprised if you have this load though). DNS round robin won't provide good failover though. At Stack Overflow we use heartbeat to provide a single virtual IP, this IP is active on only one haproxy host at a time (if it goes down, the other takes over this IP). You could use heartbeat to have an IP on each machine and then DNS round robin between the two. If one were to fail, then the other would have both of those IPs. HAProxy is using about 1-5% CPU on our physical server to balance our traffic which has a single Intel(R) Xeon(R) CPU E5504 @ 2.00GHz . So HAProxy can generally handle a lot of traffic easily.
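A minimal sketch of the round-robin half, using documentation-range addresses as placeholders for the two heartbeat-managed VIPs: 
; zone file fragment - each address is a VIP owned by one HAProxy box; 
; if that box dies its peer takes the address over, so both records stay valid 
www    IN  A   203.0.113.10 
www    IN  A   203.0.113.20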
{ "source": [ "https://serverfault.com/questions/281230", "https://serverfault.com", "https://serverfault.com/users/77780/" ] }
281,420
During website operation, in mysql process list, I see a couple of processes with the "command" column marked as "SLEEP". Should I worry? How to stop this?
Even the most powerful among us need to sleep sometimes. Without sleep one becomes anxious, and insomnia can lead to all kinds of serious symptoms. More seriously: the sleep state means that the MySQL process is done with its query, but the client side has not yet disconnected. Many web applications don't clean up their connections afterwards, which leads to sleeping MySQL processes. Don't worry if there are only a handful of them; MySQL will clean those up after a configurable timeout period (wait_timeout). If your web application uses persistent connections and connection pooling, then it's perfectly normal to have even lots of sleeping processes: in that case your application just opens, for example, 100 SQL connections and keeps them open. That reduces connection opening/closing overhead. Unless your application is a very busy one, it's normal that not nearly every SQL process has something to do, so they sleep.
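If the sleepers do pile up, the timeout can be tightened - a sketch with purely illustrative values, to be tuned against however your app pools connections: 
mysql -e "SET GLOBAL wait_timeout = 300; SET GLOBAL interactive_timeout = 300;"   # affects new connections only 
# to persist across restarts, set wait_timeout = 300 under [mysqld] in my.cnf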
{ "source": [ "https://serverfault.com/questions/281420", "https://serverfault.com", "https://serverfault.com/users/81146/" ] }
281,581
I am investigating the Slowloris vulnerability and I think I understand how and why this sort of attack works. What I don't understand is why Lighttpd and NginX are not affected (according to the same article as linked above). What do they do so differently?
Apache has a concept of 'Maximum Clients': that is the number of simultaneous connections it can handle. I.e. if an Apache server has a 'max clients' limit of 100, and each request takes 1 second to complete, it can handle a maximum of 100 requests per second. An application like Slowloris will flood a server with connections; in our example, if Slowloris sends 200 connections per second and Apache can only handle 100 connections per second, the connection queue keeps getting bigger and uses up all the memory on the machine, bringing it to a halt. This is similar to the way Anonymous' LOIC works. NGINX and Lighttpd (among others) don't have a maximum number of connections; they use worker threads instead, so, theoretically, there's no limit to the number of connections they can handle. If you monitor your Apache connections, you'll see that the majority of the active connections are 'Sending' or 'Receiving' data from the client. NGINX/Lighttpd just ignore these requests and let them run on in the background without using up system resources, and only have to process connections with something going on (parsing responses, reading data from backend servers, etc.). I actually answered a similar question this afternoon, so the information in there might also be interesting to you: Reducing Apache request queuing
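For reference, the cap being exhausted is just an ordinary MPM setting - a sketch with illustrative values only (prefork shown; worker/event MPMs use different directives): 
<IfModule mpm_prefork_module> 
    # MaxClients is the slot limit that Slowloris ties up with half-open requests 
    StartServers          5 
    MinSpareServers       5 
    MaxSpareServers      10 
    MaxClients          150 
    MaxRequestsPerChild   0 
</IfModule>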
{ "source": [ "https://serverfault.com/questions/281581", "https://serverfault.com", "https://serverfault.com/users/53998/" ] }
281,796
I know very little about DNS configuration. Could anyone please explain to me, in plain english, what the following DNS configuration achieve? It's the default configuration for my hosting provider. NAME/TYPE/VALUE/PRIORITY A X.X.X.X 0 * A X.X.X.X 0 smtp A Y.Y.Y.Y 0 MX smtp 10 NS foo1.bar.com. 0 NS foo2.bar.com. 0 example.com. TXT v=spf1 a mx +all 0 A couple of crucial points: Why the DOTs at the end of the domains? Why the MX record has some priority set and why 10? What's the difference between the first two records?
Dots at the end of names mean 'this is a fully qualified entry'; without the dot, the DNS server appends the domain for which these entries are listed to the name. So, you would get foo1.bar.com.example.com. The trailing full stops are therefore critical to prevent errors. All MX records have a priority. MX is a mail exchanger record, and you can have multiple MX entries per domain. The entry/entries tell mail servers where to send mail for your domain. The priority allows the sending mail server to try them in the right order (lowest first). The first record says "if you look up this domain, you get this IP address", i.e. example.com gives x.x.x.x. The second is a wildcard, which says: if you look up any sub-domain of this domain and there is no specific match, then you get this IP address, i.e. bob.example.com and fred.example.com will resolve, and they will resolve to that X.X.X.X. TXT entries allow for informational records, of which yours is an SPF description. SPF is something else entirely and handles e-mail validation; more info here - http://en.wikipedia.org/wiki/Sender_Policy_Framework . The two NS entries are name server records, and tell other DNS servers/resolvers which name servers to use for your example.com domain.
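A quick way to sanity-check each record type from a shell (example.com being the placeholder from the question): 
dig +short example.com A          # the bare-domain address 
dig +short anything.example.com A # should hit the * wildcard and return the same address 
dig +short smtp.example.com A     # the separate smtp host 
dig +short example.com MX         # should come back as "10 smtp.example.com." 
dig +short example.com TXT        # the SPF string 
dig +short example.com NS         # foo1.bar.com. / foo2.bar.com.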
{ "source": [ "https://serverfault.com/questions/281796", "https://serverfault.com", "https://serverfault.com/users/28360/" ] }
282,093
I have a custom HTTP module for an ASP.NET MVC3 website that I'm loading from web.config: <system.web> <httpModules> <add name="MyModule" type="MySolution.Web.MyHttpModule, MySolution.Web" /> </httpModules> </system.web> The module is loaded correctly when I run the site from within the VS web server (the break point in my Init method is hit) but when I host it in IIS it seems to be ignored (the breakpoint is missed and the module's functionality is absent from the site). I have tried it on two separate IIS boxes with a similar result. What am I doing wrong? Is there a setting I have to flick in enable IIS to load modules from a site's web.config?
I figured this out shortly after I asked the question - IIS7 uses a different schema for the web.config. The correct place to load a module is now: <system.webServer> <modules> <add name="MyModule" type="MySolution.Web.MyHttpModule, MySolution.Web" /> </modules> </system.webServer>
{ "source": [ "https://serverfault.com/questions/282093", "https://serverfault.com", "https://serverfault.com/users/45175/" ] }
282,555
We host VPSes for customers. Each customer VPS is given an LVM LV on a standard spindle hard disk. If a customer leaves, we zero out this LV, ensuring that their data does not leak over to another customer. We are thinking of going with SSDs for our hosting business. Given that SSDs have "wear levelling" technology, does that make zeroing pointless? Does this make the SSD idea unfeasible, given we can't allow customer data to leak over to another customer?
Assuming that what you are seeking to prevent is the next customer reading the disk to see the old customer's data, then writing all zeros would actually still work. Writing zeros to sector 'n' means that when sector 'n' is read, it will return all zeros. Now the fact is, the underlying actual data may still be on the flash chips, but since you can't do a normal read to get to it, it's not a problem for your situation. It IS a problem if someone can physically get hold of the disk and take it apart (because then they could directly read the flash chips), but if the only access they have is the SATA bus, then a write of all zeros to the whole disk will do just fine.
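A sketch of the wipe itself - the VG/LV names are assumptions: 
dd if=/dev/zero of=/dev/vg_customers/cust1234 bs=1M oflag=direct   # overwrite the departing customer's LV before reusing the space 
Any later read of that LV through the block layer returns zeros, regardless of what the SSD's wear levelling did with the old cells.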
{ "source": [ "https://serverfault.com/questions/282555", "https://serverfault.com", "https://serverfault.com/users/46466/" ] }
283,125
I am looking to setup a TXT spf record that has 2 included domains... individually: v=spf1 include:_spf.google.com ~all and v=spf1 include:otherdomain.com ~all What is the proper way of combining them into a single item?
v=spf1 include:_spf.google.com include:otherdomain.com ~all There's no restriction against including multiple names in a single entry; Hotmail, for instance , takes this to extremes. Note that multiple includes, or nested ones, need to stay under the limit of 10 total DNS lookups for the whole SPF check.
{ "source": [ "https://serverfault.com/questions/283125", "https://serverfault.com", "https://serverfault.com/users/68702/" ] }
283,129
I've seen this with so many consoles (on Linux, Mac, ...), and with lots of different machines in many different networks. I can never pinpoint the exact reason, why this happens: All you have to do is log in to a machine via SSH. If the connection breaks for some reason (for simplicity, let's say the network cable was pulled), then sometimes the console just hangs forever - at other times, it just exits fine to the parent shell. It's so annoying when this happens (e.g. you lose the command history.) Is there maybe a secret keyboard shortcut which can force an exit (Ctrl-C or Ctrl-D don't work)? And what's the reason for this random "bug" across all the implementations anyway?
There is a "secret" keyboard shortcut to force an exit :~) From the frozen session, hit these keys in order: Enter ~ . The tilde (only after a newline) is recognized as an escape sequence by the ssh client, and the period tells the client to terminate it's business without further ado. The long-hang behavior on communication issues is not a bug, the SSH session is hanging out hoping the other side will come back. If the network breaks, sometimes even days later you can get an SSH session back. Of course you can specifically tell it to give up and die with the sequence above. There are also various things you can do such as setting keep-alive timeouts in your client so that if it doesn't have an active link for a certain amount of time it shuts off on it's own, but the default behavior is to stay as connected as possible! Edit: Another useful application of this interrupt key is to get the attention of the local ssh client and background it to get back to your local shell for a minute —say to get something from your history— then forground it to keep working remotely. Enter ~ Ctrl + Z to send the ssh client to the background job queue of your local shell, then fg as normal to get it back. Edit: When dealing with nested SSH sessions, you can add multiple tilde characters to only break out of one of the SSH sessions in the chain, but retain the others. For example, if you're nested in 3 levels, (i.e. you ssh from local->Machine1->Machine2->Machine3), Enter ~ . will get you back to your local session, Enter ~ ~ . will leave you in Machine1, and Enter ~ ~ ~ . will leave you in Machine2. This works for other escape sequences as well, such as moving the ssh session to background temporarily. The above works for any level of nesting, by just adding more tilde's. Finally, you can use Enter ~ ? to print a help menu of available escape commands. TL;DR - the supported escape commands are Supported escape sequences: ~. - terminate connection (and any multiplexed sessions) ~B - send a BREAK to the remote system ~C - open a command line ~R - request rekey ~V/v - decrease/increase verbosity (LogLevel) ~^Z - suspend ssh ~# - list forwarded connections ~& - background ssh (when waiting for connections to terminate) ~? - this message ~~ - send the escape character by typing it twice (Note that escapes are only recognized immediately after newline.)
{ "source": [ "https://serverfault.com/questions/283129", "https://serverfault.com", "https://serverfault.com/users/37454/" ] }
283,237
I'm having some trouble creating a urlacl reservation in Windows Server 2008; probably this a rookie mistake. The command line I'm using is: netsh http add urlacl url=http://+:99898/ user=ben The error that I see is: Url reservation add failed, Error: 87 The parameter is incorrect. There is a local user account named 'ben' that has admin privileges. I've made sure to put a trailing slash after the port number in the URL. Google and MSDN documentation are letting me down now - does anyone have any clue what I'm doing incorrectly?
I had the same error; in my case, the mistake I was making was omitting the trailing slash from the URL: C:\>netsh http add urlacl url=http://+:8085 user=DOMAIN\myname Url reservation add failed, Error: 87 The parameter is incorrect. C:\>netsh http add urlacl url=http://+:8085/ user=DOMAIN\myname URL reservation successfully added
{ "source": [ "https://serverfault.com/questions/283237", "https://serverfault.com", "https://serverfault.com/users/85406/" ] }
283,294
read /dev/urandom 3 The above is not working..How can I read random bytes from /dev/urandom in bash?
random="$(dd if=/dev/urandom bs=3 count=1)" if specifies the input file, bs the block size (3 bytes), and count the number of blocks (1 * 3 = 3 total bytes)
{ "source": [ "https://serverfault.com/questions/283294", "https://serverfault.com", "https://serverfault.com/users/85298/" ] }
283,405
This is so wierd. Logged in to a Linux (RHEL) box as a user 'g', doing an ls -lah shows drwxrwxrwx 6 g g 4.0K Jun 23 13:27 . drwxrw-r-x 6 root root 4.0K Jun 23 13:15 .. -rwxrw---- 1 g g 678 Jun 23 13:26 .bash_history -rwxrw---- 1 g g 33 Jun 23 13:15 .bash_logout -rwxrw---- 1 g g 176 Jun 23 13:15 .bash_profile -rwxrw---- 1 g g 124 Jun 23 13:15 .bashrc drw-r----- 2 g g 4.0K Jun 23 13:25 .ssh So the user 'g' in group 'g' /should/ be able to read and write to the .ssh directory but if I do ls -lah .ssh/ I get ls: .ssh/: Permission denied . I also get Permission denied if I try and cat any files in the directory If I go in as root and change the permissions to 700 , 744 , 766 or anything as long as the 'user' permission is 7 it works and I can CD and LS the directory and files within. id g returns uid=504(g) gid=506(g) groups=506(g) Edit: I've copied these permissions exactly to another identical box and there is no issue. I can cd into a directory without execute permissions.
The directory will require the execute bit set in order for you to enter it. I don't know what you tested, but you cannot enter a directory without the execute bit, or read files in it: $ mkdir foo $ echo "baz" > foo/bar $ chmod 660 foo $ cd foo bash: cd: foo: Permission denied $ cat foo/bar cat: foo/bar: Permission denied That is, unless your process has the CAP_DAC_OVERRIDE POSIX capability set (like root has), which allows you to enter directories without the executable bit set, iirc. Basically, you should try to keep you .ssh directory at 700, and everything in it at 600, just to be safe. The ssh man page gives per file instructions on the required ownerships and permission modes for files in ~/.ssh.
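Concretely, the advice above as commands (run as the user in question): 
chmod 700 ~/.ssh 
chmod 600 ~/.ssh/*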
{ "source": [ "https://serverfault.com/questions/283405", "https://serverfault.com", "https://serverfault.com/users/80776/" ] }
283,467
I have an application in one of my application pools that has a virtual path of /Site/login.aspx . I want to remove it but it no longer exists on my computer and it's causing me issues setting up AppFabric. I understand that you can remove these phantom applications by recreating the application in IIS and then hitting Remove. That will get rid of the application from the pool but in this case I can't recreate the application due to the /login.aspx in the virtual path Any ideas how I remove this erroneous entry?
Since I had the same issue; application pools with applications that did not exist anymore, I did some research and finally managed to solve the issue. Here are some steps: Locate and edit your IIS 7 configuration file "applicationHost.config" with a text editor. It should be stored in " C:\windows\system32\inetsrv\config " Since the folder is somehow "protected", I usually edit like the following: Open Windows Explorer Navigate to "C:\windows\system32\inetsrv\config" Copy the file "applicationHost.config" Paste it to a folder where you can edit it, e.g. your Desktop Open it with your editor of choise and edit it Copy it back with Windows Explorer to "C:\windows\system32\inetsrv\config" Make a backup of your "applicationHost.config" file! Search with a text editor in your "applicationHost.config" for your non-existing applications. They should be located somewhere inside an <application ...> XML node. Delete the <application ...> node(s) of all your phantom applications. Save the file and copy it back to "C:\windows\system32\inetsrv\config" Refresh the IIS management console. Your application pools should now be without the phantom applications you previously deleted. Actually remove the now empty application pool. That worked for me, if it does not work for you, please post a comment here. A good help was this posting on the IIS forum . Please be also aware that when editing the "applicationHost.config" file directly in its original location, you need to use a 64-bit editor (e.g. Notepad++ 64-bit), because otherwise it would get stored in "C:\Windows\SysWOW64\inetsrv\Config" instead of the correct location .
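For orientation, the node you are hunting for inside <sites> looks roughly like this - the site name, pool and physical path are invented for illustration, only the /Site/login.aspx path comes from the question: 
<site name="Default Web Site" id="1"> 
    <!-- delete this whole <application> element for the phantom app --> 
    <application path="/Site/login.aspx" applicationPool="MyAppPool"> 
        <virtualDirectory path="/" physicalPath="D:\inetpub\old\Site" /> 
    </application> 
</site>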
{ "source": [ "https://serverfault.com/questions/283467", "https://serverfault.com", "https://serverfault.com/users/47290/" ] }
283,470
I am looking at automated deployment solutions for my team and have been playing with Chef for the past few days. I've been able to get a simple web app up an running from a base Red Hat VM using chef-solo. Our end goal is to use Chef (or another system) to automatically deploy application topologies to the cloud as we run builds. Our process would basically run like this: Our web app code, dependencies, and chef cookbooks are stored in SCM A build is executed and creates a single package for images to acquire and test against The build engine then deploys new cloud images that run a chef client to get packages installed. The images acquire the cookbooks from SCM or the Chef server and install everything to get up and running What are the benefits and/or use cases for getting a Chef Server running? Are there any major benefits to have a Chef Server hold and acquire the cookbooks from SCM vs. using chef-solo and having a script that will pull the cookbooks from SCM?
I am going to orient this answer as if the question was "what are the advantages of chef-solo" because that's the best way I know to cover the differences between the approaches. My summary recommendation is in line with others: use a chef-server if you need to manage a dynamic, virtualized environment where you will be adding and removing nodes often. A chef server is also a good CMDB , if you need one. Use chef-solo if you have a less dynamic environment where the nodes won't change too often but the roles and recipes will. Size and complexity of your environment is more or less irrelevant. Both approaches scale very well. If you deploy chef-solo, use a cronjob with rsync, 'git pull', or some other idempotent file transfer mechanism to maintain a full copy of the chef repository on each node. The cronjob should be easily configurable to (a) not run at all and (b) run, but without syncing the local repository. Add a nodes/ directory in your chef repository with a json file for each node. Your cronjob can be as sophisticated as you wish in terms of identifying the right nodefile (though I would recommend simply $(hostname -s).json. You also may want to create an opscode account and configure a client with hosted chef, if for no other reason than to be able to use knife to download community cookbooks and create skeletons. There are several advantages to this approach, besides the obvious "not having to administer a server". Your source control will be the final arbiter of all configuration changes, the repository will include all nodes and runlists, and each server being fully independent facilitates some convenient testing scenarios. Chef-server introduces a hole where you use the "knife upload" to update a cookbook, and you must patch this hole yourself (such as with a post-commit hook) or risk site changes being overwritten silently by someone who "knife upload"s an obsolete recipe from the outdated local repository on his laptop. This is less likely to happen with chef-solo, as all changes will be synced to servers directly from the master repository. The issue here is discipline and number of collaborators. If you're a solo developer or a very small team, uploading cookbooks via the API is not very risky. In a larger team it can be if you don't put good controls in place. Additionally, with chef-solo you can store all your nodes' roles, custom attributes and runlists as node.json files in your main chef repository. With chef-server, roles and runlists are modified on the fly using the API. With chef-solo, you can track this information in revision control. This is where the conflict between static and dynamic environments can be clearly seen. If your list of nodes (no matter how long it might be) doesn't change often, having this data in revision control is very useful. On the other hand, if you're frequently spawning new nodes and destroying old ones (never to see their hostname or fqdn again) keeping it all in revision control is just an unnecessary hassle, and having an API to make changes is very convenient. Chef-server has a whole features geared towards managing dynamic cloud environments as well, like the name option on "knife bootstrap" which lets you replace fqdn as the default way to identify a node. But in a static environment those features are of limited value, especially compared to having the roles and runlists in revision control with everything else. Finally, recipe test environments can be set up on the fly for almost no extra work. 
You can disable the cronjobs running on a server and make changes directly to its local repository. You can test the changes by running chef-solo and you will see exactly how the server will configure itself in production. Once everything is tested, you can check-in the changes and re-enable the local cronjobs. When writing recipes, though, you wouldn't be able to use the "Search" API, meaning that if you want to write dynamic recipes (eg loadbalancers) you will have to hack around this limitation, gathering the data from the json files in your nodes/ directory, which is likely to be less convenient and will lack some of the data available in the full CMDB. Once again, more dynamic environments will favor the database-driven approach, less dynamic environments will be fine with json files on local disk. In a server environment where a chef run must make API calls to a central database, you will be dependent on managing all testing environments within that database. The last can also be used in emergencies. If you are troubleshooting a critical issue on production servers and solve it with a configuration change, you can make the change immediately on the server's repository then push it upstream to the master. Those are the primary advantages of chef-solo. There are some others, like not having to administer a server or pay for hosted chef, but those are relatively minor concerns. To sum up: If you are dynamic and highly virtualized, chef-server provides a number of great features (covered elsewhere) and most of the chef-solo advantages will be less noticeable. However there are some definite, often unmentioned advantages to chef-solo especially in more traditional environments. Note that being deployed on the cloud doesn't necessarily mean you have a dynamic environment. If you can't, for example, add more nodes to your system without releasing a new version of your software, you probably aren't dynamic. Finally, from a high-level perspective a CMDB can be useful for any number of things only tangentially related to system administration and configuration such as accounting and information-sharing between teams. Using chef-server might be worth it for that feature alone.
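A minimal sketch of the sync-then-converge cronjob described above (repo path, schedule and node-file naming are assumptions): 
# /etc/cron.d/chef-solo - pull the repo, then converge against this host's node file 
*/30 * * * * root cd /srv/chef-repo && git pull --ff-only -q && chef-solo -c solo.rb -j nodes/$(hostname -s).json 
Disabling the sync for testing is then just a matter of commenting out the git pull part (or the whole line).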
{ "source": [ "https://serverfault.com/questions/283470", "https://serverfault.com", "https://serverfault.com/users/85490/" ] }
283,471
I'm coming from a vmware environment, wanting to play with Xen. I have a server with 2 x 500G SATA drives (no hardware RAID available, have to use software-based RAID1). My partitions are all RAID1 except for swap. I left a little over 400G for my VMs and I would like to use LVM for the disk images. For domU's swap, should I allocate that from the 400G or should that be coming from dom0's partition? I asked because I've seen numerous config options that shows either or.
{ "source": [ "https://serverfault.com/questions/283471", "https://serverfault.com", "https://serverfault.com/users/25170/" ] }
283,722
When I log in via ssh with -v I see that ssh is authenticating the following way: debug1: Authentications that can continue: publickey,gssapi-with-mic,password,hostbased I would like to change the order... any idea how? My bigger problem is that users with locked accounts can still log in via public keys. I have found that I could add the user to a group "ssh-locked" and deny that group from sshing, but I am still wondering if there is a way to tell sshd: please check the password before keys...
The ssh server decides which authentication options it allows, the ssh client can be configured to decide in which order to try them. The ssh client uses the PreferredAuthentications option in the ssh config file to determine this. From man ssh_config ( see it online here ): PreferredAuthentications Specifies the order in which the client should try protocol 2 authentication methods. This allows a client to prefer one method (e.g. keyboard-interactive) over another method (e.g. password). The default is: gssapi-with-mic,hostbased,publickey, keyboard-interactive,password I don't believe it's possible, without playing with the source, to tell the OpenSSH server to prefer a certain order - if you think about it, it doesn't quite make sense anyway.
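On the client side the order is just a config knob - a sketch of an ~/.ssh/config entry (the host pattern is an assumption): 
Host *.example.com 
    # try keys first, fall back to a password prompt only if they fail 
    PreferredAuthentications publickey,password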
{ "source": [ "https://serverfault.com/questions/283722", "https://serverfault.com", "https://serverfault.com/users/67407/" ] }
283,909
I'm going to hire an IT guy to help manage my office's computers and network. We're a small shop, so he'll be the only one doing IT. Of course, I'll interview carefully, check references, and run a background check. But you never know how things will work out. How do I limit my company's exposure if the guy I hire turns out to be evil? How do I avoid making him the single most powerful person in the organization?
You do it the same way you protect the company from head of Sales running off with your client list, or the head of Accounting embezzling funds, or the Stock manager from running off with half the inventory, largely: Trust, but verify. At the very least, I would require that all passwords for all Administrator accounts on systems and services under IT be kept in a password safe (either digitally like KeePass, or a literal piece of paper kept in a safe). Periodically you will need to verify that these accounts are still active and have appropriate access rights. Most experienced IT people call this the "if I'm hit by a bus" scenario, and it's part of the general idea of eliminating points of failure. At the one business I worked at where I was the sole IT Admin, we maintained a relationship with an external IT consultant who handed this, primarily because the company had been burned in the past (by incompetence more than malice). They had remote access passwords and could, when asked, reset the essential administrator passwords. They did not have direct access to any company data, however. They could only reset passwords. Of course, since they could reset enterprise admin passwords, they could take control of the systems. Again, it became "Trust but Verify". They made sure they could access the systems. I made sure they didn't change anything without us knowing about it. And remember: the easiest way to make sure a person doesn't burn your company is to make sure they're happy. Make sure your pay is at least at the median value. I've heard of too many situations where IT personnel have damaged a company out of spite. Treat your employees right and they'll do the same.
{ "source": [ "https://serverfault.com/questions/283909", "https://serverfault.com", "https://serverfault.com/users/56888/" ] }
284,238
Using the Dropbox GUI, it's possible to control specifically which folders get synced. Can this somehow be done from the command line too? Background: I'm trying out the solutions for installing Dropbox on a Linux server given here, and it seems to work fine: http://ubuntuservergui.com/ubuntu-server-guide/install-dropbox-ubuntu-server
The official Dropbox CLI has an exclude option . On Linux, Dropbox has a client ( dropbox ) and a daemon ( dropboxd ). The client has the exclude command, which you can use to exclude directories. E.g. to exclude node_modules from Dropbox you can enter dropbox exclude add ./node_modules . Running dropbox help exclude will print the help information: dropbox exclude [list] dropbox exclude add [DIRECTORY] [DIRECTORY] ... dropbox exclude remove [DIRECTORY] [DIRECTORY] ... "list" prints a list of directories currently excluded from syncing. "add" adds one or more directories to the exclusion list, then resynchronizes Dropbox. "remove" removes one or more directories from the exclusion list, then resynchronizes Dropbox. With no arguments, executes "list". Any specified path must be within Dropbox.
{ "source": [ "https://serverfault.com/questions/284238", "https://serverfault.com", "https://serverfault.com/users/51598/" ] }
284,428
How can I copy a file's user/owner permissions to its group permissions? For example, if the permissions are 755 I want them to become 775. Clarification: 755 -> 775 123 -> 113 abc -> aac Bonus if I can do this recursively for all files in a directory. (That is, for every file the owner permissions are copied to the group permissions. Each file may have different permissions.)
you can use g=u to make the group perms the same as the user perms ls -l file -rw------- 1 user users 0 Jun 27 13:47 file chmod g=u file ls -l file -rw-rw---- 1 user users 0 Jun 27 13:47 file and recursively chmod -R g=u * or for some filespec find /patch/to/change/perms -name '*.txt' -exec chmod g=u {} + the chmod manpage is your friend.
{ "source": [ "https://serverfault.com/questions/284428", "https://serverfault.com", "https://serverfault.com/users/31032/" ] }
285,256
I'm using Fail2Ban on a server and I'm wondering how to unban an IP properly. I know I can work with IPTables directly: iptables -D fail2ban-ssh <number> But is there not a way to do it with the fail2ban-client ? In the manuals it states something like: fail2ban-client get ssh actionunban <IP> . But that doesn't work. Also, I don't want to /etc/init.d/fail2ban restart as that would lose all the bans in the list.
With Fail2Ban before v0.8.8: fail2ban-client get YOURJAILNAMEHERE actionunban IPADDRESSHERE With Fail2Ban v0.8.8 and later: fail2ban-client set YOURJAILNAMEHERE unbanip IPADDRESSHERE The hard part is finding the right jail: Use iptables -L -n to find the rule name... ...then use fail2ban-client status | grep "Jail list" | sed -E 's/^[^:]+:[ \t]+//' | sed 's/,//g' to get the actual jail names. The rule name and jail name may not be the same but it should be clear which one is related to which.
{ "source": [ "https://serverfault.com/questions/285256", "https://serverfault.com", "https://serverfault.com/users/72090/" ] }
285,619
If one was to run the following command cat * | grep DATABASE the shell would spit out all the lines in * files that contained the word DATABASE in them. Is there any way to also spit out what file each line is a part of? I tried to use the -H option for grep, which according to man says print the filename for each match , but in my shell it just says (standard input):$DATABASE_FUNCTION = dothis();
Don't use cat for that. Instead use grep DATABASE * or grep -n DATABASE * (if you want to know the line numbers as well as the filenames) directly. See useless use of cat . To clarify a bit more: cat * actually concatenates all files as it feeds them to grep through the pipe, so grep has no way of knowing which content belongs to which file, and indeed can't even really know if it's scanning files or you're just typing mighty fast. It's all one big standard input stream once you use a pipe. Lastly, -H is redundant almost for sure as grep prints the filename by default when it's got more than one file to search. It could be of some use in case you want to parse the output, though, as there's some possibility the * glob will expand to a single file and grep would in that case omit the filename.
{ "source": [ "https://serverfault.com/questions/285619", "https://serverfault.com", "https://serverfault.com/users/21307/" ] }
285,800
On Linux (Debian Squeeze) I would like to disable SSH login using password to some users (selected group or all users except root). But I do not want to disable login using certificate for them. edit: thanks a lot for detailed answer! For some reason this does not work on my server: Match User !root PasswordAuthentication no ...but can be easily replaced by PasswordAuthentication no Match User root PasswordAuthentication yes
Try Match in sshd_config : Match User user1,user2,user3,user4 PasswordAuthentication no Or by group: Match Group users PasswordAuthentication no Or, as mentioned in the comment, by negation: Match User !root PasswordAuthentication no Note that match is effective "until either another Match line or the end of the file." (the indentation isn't significant)
{ "source": [ "https://serverfault.com/questions/285800", "https://serverfault.com", "https://serverfault.com/users/50205/" ] }
285,843
I was wondering if there was a proper way to clear logs in general? I'm new to Ubuntu and I'm trying to set up Postfix. The log in question is /var/log/mail.log . I was wondering if there was a correct way to clear it, rather than me going in it and deleting all the lines and saving it. I find that sometimes errors don't get written to it immediately after I clear the log and save it. Side note: I'm having trouble setting up Postfix and am trying to make it easier for me to read the logs hoping it can help me out, instead of having to scroll all the way down.
You can use: > /var/log/mail.log That will truncate the log without you having to edit the file. It's also a reliable way of getting the space back. In general it's a bad thing to use rm on the log and then recreate the filename: if another process has the file open, you don't get the space back until that process closes its handle on it, and you can damage its permissions in ways that are not immediately obvious but cause more problems later on. Yasar has a nice answer using truncate . Also, if you are watching the contents of the log you might like to use the tail command: tail -f /var/log/mail.log Ctrl-C will break off the tailing.
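The truncate alternative referenced above is just as simple; a minimal sketch, assuming GNU coreutils (which Ubuntu ships by default):
truncate -s 0 /var/log/mail.log   # shrink the file to zero bytes in place
Like the redirection, this keeps the same inode and permissions, so a daemon that already has the log open (and writes in append mode, as syslog does) simply keeps logging into the now-empty file.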
{ "source": [ "https://serverfault.com/questions/285843", "https://serverfault.com", "https://serverfault.com/users/67432/" ] }
285,997
We have shadow copy enabled on our Windows SBS 2008 server. Attempting to restore a file from shadow copy gave the following error- The source file name(s) are larger than is supported by the file system. Try moving to a location which has a shorter path name, or try renaming to shorter name(s) before attempting this operation. The filename has 67 characters, and its shadow copy path is 170 characters. These seem to be under the NTFS limits (260?). We tried- Copying to the shortest path possible (C:) Copying to the shortest path possible on both a client computer and the server itself Is it possible to rename files in a shadow copy, before doing the copy? Any idea why the error is appearing despite the filename size appearing to be within limits? Steps taken On local computer, go to shared folder on SBS server (via mapped drive), e.g. J:\Projects\Foo\Bar Right click on folder and select Properties Click on the Previous Versions tab. Select a shadow copy and click Open In newly opened window, select folder/file and press Ctrl-C to copy. Open a new Windows Explorer, and paste folder/file onto local drive. Edit- (Un)fortunately, I am now unable to reproduce this error. The particular files causing the problem have since been deleted, and I am unable to recreate the error with other, similar files.
I had the exact same problem in Server 2008 R2 and this is how I solved it: Right click on the folder you're trying to restore from shadow copy and choose Previous Versions . Choose a date and click on Open . Right click on any file or folder within the previous folder and choose Properties . On the General tab copy what's shown in 'location', e.g.: \\localhost\D$\@GMT-2011.09.20-06.00.04\_Data Open cmd.exe and type in: subst X: \\localhost\D$\@GMT-2011.09.20-06.00.04\_Data Open PowerShell and use robocopy to copy the content of X: e.g.: robocopy X: D:\Folder\ /E /COPYALL Check that all files have been copied. When finished type subst X: /D in the cmd (Command Prompt) window
{ "source": [ "https://serverfault.com/questions/285997", "https://serverfault.com", "https://serverfault.com/users/11130/" ] }
286,102
I've generated a self certified SSL cert for testing a new web site. The time has come for the site to go live and I now want to purchase a cert from GeoTrust. Can I use the same CSR that I generated for the self cert, or do I need to create a new one? Rich
As long as you're using the same key, domain (aka common-name), contact details and validity period you should be able to use the same CSR. Though to be honest generating a CSR is a pretty simple job, so if you need to amend the contact details (which a lot of SSL providers are strict on) it's not a big deal.
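For reference, regenerating a CSR with OpenSSL is quick; a rough sketch with placeholder file names and key size (adjust to whatever your CA requires):
# reuse the existing private key and just answer the subject prompts again
openssl req -new -key example.com.key -out example.com.csr
# or create a brand new key and CSR in one step
openssl req -new -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr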
{ "source": [ "https://serverfault.com/questions/286102", "https://serverfault.com", "https://serverfault.com/users/3692/" ] }
286,104
Similar to my previous question Cannot delete unresponsive machine on VMWare Infrastructure Web Access on Windows 2003 host and now YET another virtual machine has stopped at 95% (while starting up) for no just reason. Unfortunately in this case there is no recent backup for this particular one [Yes I know :( ] I think I foolishly botched things further cos I tried to create a new virtual machine using the same virtual hard drive (in a bid to rescue the data from the unresponsive machine) and the browser hung (IE 8). When I restarted the browser, I stupidly removed the virtual machine new copy (I guess that deleted the .vmdk file? I didn't know it would). After my folly I looked in the vmware virtual machines folder and saw only the .000001.vmdk file. Can I recover the original vmdk from this file? I have tried using WinMount to mount the Winbox.000001.vmdk. It tells me "Illegal VMDK descriptor". I have also tried the VMWare DiskMount Utility (vmware-mount) but I get the following when I reply the prompt for a Yes: C:\Program Files\VMware\VMware DiskMount Utility>vmware-mount.exe W: "X:\Winbo x.000001.vmdk" This disk is being used by a virtual machine that has an active snapshot. If you proceed, any changes you make are applied to the current version of the disk, and will be discarded if you revert to the snapshot. Do you wish to proceed (Y/N)? y Unable to mount the virtual disk. The disk may be in use by a virtual machine or mounted under another drive letter. If not, verify that the disk is a virtual disk file, and that the disk file has not been corrupted. The disk is "in use" because it has hung at 95%? I critically need to get some files out of this machine one way or the other. Please help. Suggestions? ETA: I am currently trying the evaluation version of MediaHeal VMDK Repair tool but besides their own testimony on their site, I can't see any references to this software anywhere. Do you know any alternatives to repair, fix or recover data from a corrupted VMDK file?
As long as you're using the same key, domain (aka common-name), contact details and validity period you should be able to use the same CSR. Though to be honest generating a CSR is a pretty simple job, so if you need to amend the contact details (which a lot of SSL providers are strict on) it's not a big deal.
{ "source": [ "https://serverfault.com/questions/286104", "https://serverfault.com", "https://serverfault.com/users/85312/" ] }
286,204
I tend to prefix oft-used files and folders with the "accent grave" character (non-shift tilde, the back-tick, or plain-old accent, whathaveyou..), as it is easy to get at, and lets me sort things alphabetically, while letting me choose to show a few items on the top. It works nicely, except when I go to access these files via the CLI or SSH/SCP. If I try to run a command, calling the file unescaped ↝ it kicks me into an interactive session.. for example ↯ # scp -r dns.local:/`Downloads/CrazyRussianCars/ ~/ ↩ > Yet if I try the logical solution ↯ # scp -r dns.local:/\`Downloads/CrazyRussianCars/ ~/ ↩ bash: -c: line 0: unexpected EOF while looking for matching ``' bash: -c: line 1: syntax error: unexpected end of file I know the "new" rule is to use a syntax like export NOW=$(date) vs export NOW= `date` (in fact, I had a bear of a time even writing the latter in SE MD syntax...) but this is unrelated to the ENV or any script... Note: This is a Mac OS X environment, but that said, the GUI has never had a problem dealing with this character on a day-to-day basis, and usually, if there's going to be a syntax problem in the Terminal, Apple does a pretty good job of disabling the behavior in the GUI... Not sure if this is a bug, or if the technique for dealing with such paths is simply obscure.. but so far, I have been unable to find a way "to escape it" ?
You can use 3 backslashes as mentioned by Jed Daniels or you can wrap it in single quotes (') and use a single backslash. Example of both are below. $ touch dir/'`rik' $ ls -l dir total 1865376 -rw-r--r-- 1 rik staff 0 Jul 1 09:51 `rik $ scp localhost:dir/\\\`rik ./ `rik 100% 0 0.0KB/s 00:00 $ scp localhost:dir/'\`rik' ./ `rik 100% 0 0.0KB/s 00:00 $
{ "source": [ "https://serverfault.com/questions/286204", "https://serverfault.com", "https://serverfault.com/users/60486/" ] }
286,421
Is it possible to offset a cron script set to run every 5 minutes? I have two scripts: script 1 collects some data from one database and inserts it into another, and script 2 pulls out this data and a lot of other data and creates some pretty reports from it. Both scripts need to run every 5 minutes. I want to offset script 2 by one minute so that it can create a report from the new data. E.G. I want script one to run at :00, :05, :10, :15 [...] and script two to run at :01, :06, :11, :16 [...] every hour. The scripts are not dependent on each other, and script 2 must run regardless of whether script one was successful or not. But it would be useful if the reports could have the latest data. Is this possible with cron? PS: I have thought about using both commands in a shell script so they run immediately after each other, but this wouldn't work: sometimes script 1 can get hung up waiting for external APIs etc. so it might take up to 15 mins to run, but script 2 must run every 5 minutes, so doing it this way would stop/delay the execution of script 2. If I could set this in Cron it would mean script 2 would run regardless of what script 1 was doing
The minute entry field for crontab accepts an "increments of" operator that is kind of confusing because it looks like it should be a mathematical "divide by" operator but isn't. You will most often see it used something like the following. Note that this does not find numbers that are divisible by five but rather takes every fifth item from a set: */5 * * * * command This tells cron to match every fifth item ( /5 ) from the set of minutes 0-59 ( * ) but you can change the set like this: 1-59/5 * * * * command This would take every fifth item from the set 1-59, running your command at minutes 6, 11, 16, etc. If you need more fine grained offsets than one minute, you can hack it using the sleep command as part of your crontab like this: */5 * * * * sleep 15 && command This would run your job every five minutes, but the command would not actually start until 15 seconds after the minute. For short running jobs where being a few seconds after something else makes all the difference but you don't want to be a full minute late, this is a simple enough hack.
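Applied to the two scripts in the question, the crontab could look something like this (the script paths are placeholders):
0-59/5 * * * * /usr/local/bin/collect-data.sh    # runs at :00, :05, :10, ...
1-59/5 * * * * /usr/local/bin/build-reports.sh   # runs at :01, :06, :11, ...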
{ "source": [ "https://serverfault.com/questions/286421", "https://serverfault.com", "https://serverfault.com/users/80776/" ] }
286,654
ls -l /etc/passwd gives $ ls -l /etc/passwd -rw-r--r-- 1 root root 1862 2011-06-15 21:59 /etc/passwd So an ordinary user can read the file. Is this a security hole?
Actual password hashes are stored in /etc/shadow , which is not readable by regular users. /etc/passwd holds other information about user ids and shells that must be readable by all users for the system to function.
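You can see the split for yourself; the exact output varies a little by distribution (the shadow group and mode shown here are just typical values):
ls -l /etc/passwd /etc/shadow
# -rw-r--r-- 1 root root   ... /etc/passwd   (world-readable, no password hashes)
# -rw-r----- 1 root shadow ... /etc/shadow   (hashes, readable by root/shadow only)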
{ "source": [ "https://serverfault.com/questions/286654", "https://serverfault.com", "https://serverfault.com/users/78915/" ] }
286,665
I recently stumbled upon a firewall issue with my EC2 instance. The TCP port was made available to everyone via the EC2 Security Group, however there was still instance-side filtering using iptables. I figured if anything Security Groups are just a fancy API for IPTables. It turns out they run completely independently of each other, from what I can tell. Is there any reason to use both? One firewall should be plenty and adding another layer of complexity seems to be a headache just waiting to happen. In the meantime, I'm contemplating either opening up all ports in my Security Group and then doing all filtering via iptables, or the inverse: disabling iptables and using Security Group filtering. Any feedback on whether or not my logic here is flawed? Am I missing something critical?
The security groups add no load to your server - they are processed externally, and block traffic to and from your server, independent of your server. This provides an excellent first line of defense that is much more resilient than the one residing on your server. However, security groups are not state-sensitive, you cannot have them respond automatically to an attack for instance. IPTables are well suited to more dynamic rules - either adapting to certain scenarios, or providing finer grained conditional control. Ideally you should use both to complement each other - block all the ports possible with your security group, and use IPTables to police the remaining ports and protect against attacks.
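As one hedged illustration of the kind of dynamic rule a security group cannot express, the iptables recent module can throttle brute-force attempts; adjust the port, list name and thresholds to your environment:
# record every new SSH connection per source address, and drop sources that
# open too many new connections within 60 seconds
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name ssh --set
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name ssh --update --seconds 60 --hitcount 5 -j DROP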
{ "source": [ "https://serverfault.com/questions/286665", "https://serverfault.com", "https://serverfault.com/users/19432/" ] }
287,102
I've started using Nginx as a reverse proxy for a set of servers that provide some sort of service. The service can be rather slow at times (its running on Java and the JVM sometimes gets stuck in "full garbage collection" that may take several seconds), so I've set the proxy_connect_timeout to 2 seconds, which will give Nginx enough time to figure out that the service is stuck on GC and will not respond in time, and it should pass the request to a different server. I've also set proxy_read_timeout to prevent the reverse proxy from getting stuck if the service itself takes too much time to compute the response - again, it should move the request to another server that should be free enough to return a timely response. I've run some benchmarks and I can see clearly that the proxy_connect_timeout works properly as some requests return exactly on the time specified for the connection timeout, as the service is stuck and doesn't accept incoming connections (the service is using Jetty as an embedded servlet container). The proxy_read_timeout also works, as I can see requests that return after the timeout specified there. The problem is that I would have expected to see some requests that timeout after proxy_read_timeout + proxy_connect_timeout , or almost that length of time, if the service is stuck and won't accept connections when Nginx tries to access it, but before Nginx can timeout - it gets released and starts processing, but is too slow and Nginx would abort because of the read timeout. I believe that the service has such cases, but after running several benchmarks, totaling several millions of requests - I failed to see a single request that returns in anything above proxy_read_timeout (which is the larger timeout). I would appreciate any comment on this issue, though I think that could be due to a bug in Nginx (I have yet to look at the code, so this is just an assumption) that the timeout counter doesn't get reset after the connection is successful, if Nginx didn't read anything from the upstream server.
I was actually unable to reproduce this on: 2011/08/20 20:08:43 [notice] 8925#0: nginx/0.8.53 2011/08/20 20:08:43 [notice] 8925#0: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-48) 2011/08/20 20:08:43 [notice] 8925#0: OS: Linux 2.6.39.1-x86_64-linode19 I set this up in my nginx.conf: proxy_connect_timeout 10; proxy_send_timeout 15; proxy_read_timeout 20; I then setup two test servers. One that would just timeout on the SYN, and one that would accept connections but never respond: upstream dev_edge { server 127.0.0.1:2280 max_fails=0 fail_timeout=0s; # SYN timeout server 10.4.1.1:22 max_fails=0 fail_timeout=0s; # accept but never responds } Then I sent in one test connection: [m4@ben conf]$ telnet localhost 2480 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. GET / HTTP/1.1 Host: localhost HTTP/1.1 504 Gateway Time-out Server: nginx Date: Sun, 21 Aug 2011 03:12:03 GMT Content-Type: text/html Content-Length: 176 Connection: keep-alive Then watched error_log which showed this: 2011/08/20 20:11:43 [error] 8927#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: ben.dev.b0.lt, request: "GET / HTTP/1.1", upstream: "http://10.4.1.1:22/", host: "localhost" then: 2011/08/20 20:12:03 [error] 8927#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: ben.dev.b0.lt, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:2280/", host: "localhost" And then the access.log which has the expected 30s timeout (10+20): 504:32.931:10.003, 20.008:.:176 1 127.0.0.1 localrhost - [20/Aug/2011:20:12:03 -0700] "GET / HTTP/1.1" "-" "-" "-" dev_edge 10.4.1.1:22, 127.0.0.1:2280 - Here is the log format I'm using which includes the individual upstream timeouts: log_format edge '$status:$request_time:$upstream_response_time:$pipe:$body_bytes_sent $connection $remote_addr $host $remote_user [$time_local] "$request" "$http_referer" "$http_user_agent" "$http_x_forwarded_for" $edge $upstream_addr $upstream_cache_status';
{ "source": [ "https://serverfault.com/questions/287102", "https://serverfault.com", "https://serverfault.com/users/6438/" ] }
287,688
What I want to accomplish is: Having a config file as a template, with variables like $version $path (for example an apache config) Having a shell script that "fills in" the variables of the template and writes the generated file to disk. Is this possible with a shell script? I would be very thankful if you can name some commands/tools I can accomplish this with, or some good links.
This is very possible. A very simple way to implement this would be for the template file to actually be the script and use shell variables such as #! /bin/bash version="1.2.3" path="/foo/bar/baz" cat > /tmp/destfile <<-EOF here is some config for version $version which should also reference this path $path EOF You could even make this configurable on the command line by specifying version=$1 and path=$2 , so you can run it like bash script /foo/bar/baz 1.2.3 . The - before EOF causes whitespace before the lines be ignored, use plain <<EOF if you do not want that behavior. Another way to do this would be to use the search and replace functionality of sed #! /bin/bash version="1.2.3" path="/foo/bar/baz" sed -e "s/VERSION/$version/g" -e "s/PATH/$path/" /path/to/templatefile > /tmp/destfile which would replace each instance of the strings VERSION and PATH. If there are other reasons those strings would be in the template file you might make your search and replace be VERSION or %VERSION% or something less likely to be triggered accidentally.
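If the gettext package happens to be installed, envsubst is a third option worth knowing; a small sketch, assuming the template contains literal $version and $path placeholders:
# only the listed variables are substituted; everything else is left untouched
export version="1.2.3" path="/foo/bar/baz"
envsubst '$version $path' < /path/to/templatefile > /tmp/destfile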
{ "source": [ "https://serverfault.com/questions/287688", "https://serverfault.com", "https://serverfault.com/users/86726/" ] }
288,137
Here is what I want to do. I looked around but didn't find any straight answer. I have a Linux box running websites using Ubuntu/MySQL/Apache. I have my own static IP as well, i.e. not using web hosting. I would like to be able to stream the video feed from a webcam on a laptop (presumably running Windows) to my Linux server, and have users of one of my websites be able to see that video live as it's being streamed. Obviously the laptop would need to authenticate with the server somehow, but there should be no restrictions on who could view the live video on the website. Thanks.
I currently develop online streaming from 3 miniDV cameras connected via FireWire, which is quite similar to your needs. Quick hint: vlc + flowplayer/jw player. First of all, there are two video formats that you can use in online streaming: FLV and h264. FLV is easier to transcode; h264 has a better size/quality ratio but transcoding is much more CPU consuming. Both can be displayed by flash players in a web page. Second of all, streaming infrastructure. Since your bandwidth from the laptop is limited (a couple Mbps tops) you need to get the stream to your server and restream it to clients from there. So the stream will flow 1 time to the server and then N times from the server to the clients. You haven't described your internet connection for your laptop, so the scenario is divided into two sections: Laptop is connected with a public IP address OR you can NAT a port to the laptop . This scenario is much easier, since you can connect from the server to the laptop nice and easy. The big disadvantage is that you're bound to one location (one IP address). Laptop is not connected with a public IP address . This is a little bit tricky, but will work from any network which allows you to SSH to your server and has sufficient upload (1 Mbps should do it). Regardless of the scenario used, the infrastructure will look like this: CAMERA - (usb) - LAPTOP - (network, limited upload) - SERVER - (network) - Client 0 - Client 1 - Client 2 - Client N Streaming from laptop Capture video from webcam . I've never captured a stream from a locally attached webcam, but there are many examples of how to do it via V4L, e.g.: Webcam Setup . The only part you should be interested in is: laptop$ vlc v4l:// :v4l-vdev="/dev/video0" :v4l-adev="/dev/audio2" Which is the first part of the VLC command to connect to the webcam. For more details follow the mentioned HOWTO. Especially look at the "video group" part and the correct device paths for /dev/video and /dev/audio. Those can be different on your laptop. Transcode video to FLV . I personally use FLV, since it is less CPU demanding. The transcode string I use is this: --sout '#transcode{vcodec=FLV1,vb=512,acodec=mpga,ab=64,samplerate=44100}' This will transcode the video stream to FLV format with MPGA audio (MP3 is not available in my Ubuntu). The samplerate is mandatory somehow; it won't work without it. But you can choose a smaller one, like 22050. This will transcode the video stream 'as is', so the scale is 1:1. You can append width and height parameters, or even a scale parameter. Look into the VLC documentation. Stream it from laptop . Now you have to make a local stream to which the server will connect: :std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=0.0.0.0:8081/stream.flv} This will bind the VLC stream to 0.0.0.0:8081/stream.flv. The whole command will look like this: laptop$ vlc v4l:// :v4l-vdev="/dev/video0" :v4l-adev="/dev/audio2" --sout '#transcode{vcodec=FLV1,vb=512,acodec=mpga,ab=64,samplerate=44100}:std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=0.0.0.0:8081/stream.flv}' Restreaming on server Capture the stream on the server and restream it . Again, we use VLC to capture and stream. Usage is based on the infrastructure scenario from earlier in this post. As I showed, VLC on the laptop streams video on some port. This port has to be accessible from the server. If you have a public IP address on the laptop, or a NATed port, you can test it with telnet: server$ telnet public_ip_address 8081 Anything except "connection timeout" will reveal that you can connect to your laptop's stream.
If you don't have a public IP address, or you can't NAT a port, you have to do it the other way around. You can SSH from the laptop to the server and remote forward your laptop port to the server. The correct SSH command would be: laptop$ ssh your_user@server_ip_address -R 8081:127.0.0.1:8081 This magic command will 'bind' your laptop port 8081 to server port 8081. That means when you connect to 8081 on the server, you will silently connect to your laptop port 8081 via the SSH tunnel. Cool, huh?:) So all we have to do is a simple VLC connect and stream: server$ vlc http://localhost:8081/stream.flv --sout '#std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=0.0.0.0:8082/stream.flv}' Or in the case of a public IP address or NATed port: server$ vlc http://public_ip_address:8081/stream.flv --sout '#std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=0.0.0.0:8082/stream.flv}' As in the laptop part, your VLC on the server is bound to port 8082. Why 8082 and not 8081? 8081 is already taken by the SSH remote forward. Why don't we use the transcode part as in the first example? The video is already in the right format, so all we have to do is just stream it as-is. Testing . In both examples, you can test functionality by viewing the streams via VLC. You can test your local stream: laptop$ vlc http://localhost:8081/stream.flv And you can test your server's stream: laptop$ vlc http://server_ip_address:8082/stream.flv In both cases, you should see your webcam input. Display stream on web Displaying the stream on the web, which will work in most cases, is done via a flash player. I tried two products, which are free for non-commercial usage: JW Player and Flowplayer . I stayed with Flowplayer, but I don't remember the reason, maybe because of plugins (which I don't use:) ) or because of better documentation. How to display an FLV stream from VLC in a web page is covered here: Stream VLC to Website with asf and Flash Troubleshooting Be aware of many problems that WILL arise. First thing, as in everything: read. VLC is a very chatty program, so it will tell you where the problem is. It could be a problem with permissions to access the video/audio device, missing codecs, misspelled --sout parameters,... Learn to use iftop to see if the data really flows through the network, etc.
{ "source": [ "https://serverfault.com/questions/288137", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
288,631
After using robocopy to copy files to a new drive I realized that all the file and directory creation times had been reset to the time of copying. Are there any switches to make robocopy keep the original file times?
Take a look at the options for the /COPY:[copyflags] and /DCOPY switches. As per the ROBOCOPY /? usage info: /COPY:copyflag[s] :: what to COPY for files (default is /COPY:DAT). (copyflags : D=Data, A=Attributes, T=Timestamps). (S=Security=NTFS ACLs, O=Owner info, U=aUditing info). /DCOPY:T :: COPY Directory Timestamps. For example: ROBOCOPY c:\src d:\dest /MIR /COPY:DT /DCOPY:T Will copy all files and folders and preserve the date and time stamps. ROBOCOPY c:\src d:\dest /MIR /COPY:DAT /DCOPY:T Will copy all files and folders and preserve the date & time stamps and file attributes. There is also another (and I believe deprecated?) switch /TIMFIX which does much the same as /COPY:DT but it doesn't fix the time stamps on folders. These were tested with ROBOCOPY 5.1.10.1027 on Windows 7 x64 Ultimate. Be aware that the /MIR switch mirrors the directory that you are copying from; that is, /MIR will also delete files in the destination folder not found in the source folder. The /MIR switch is the equivalent of /E and the /PURGE switches used together (but with a minor exception ).
{ "source": [ "https://serverfault.com/questions/288631", "https://serverfault.com", "https://serverfault.com/users/32056/" ] }
288,648
I want to make a DVD with some useful packages (for example php-common). The only problem is that if I try to install on a computer that's not connected to the internet, I can't validate the public key. The scenario is like this: I download the RPMs, I copy them to DVD. I install CentOS 5.5 on my laptop (it has no internet connection). I try to install one using yum (or rpm -i , or whatever). I get the following error: public key for "package" is not installed. How can I bypass that?
From yum -h : --nogpgcheck disable gpg signature checking
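Two hedged examples of how that might look; the mount point and file locations on the DVD are assumptions, so adjust them to where your RPMs and the CentOS key actually sit:
# one-off install with signature checking disabled
yum --nogpgcheck localinstall /media/cdrom/CentOS/php-common-*.rpm
# or import the GPG key shipped on the install media once, then install normally
rpm --import /media/cdrom/RPM-GPG-KEY-CentOS-5
yum localinstall /media/cdrom/CentOS/php-common-*.rpm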
{ "source": [ "https://serverfault.com/questions/288648", "https://serverfault.com", "https://serverfault.com/users/70088/" ] }
288,665
For some reason I decided to create small initial partitions for my virtual machine server (30 GB system partition for Windows Server 2008, 100 GB data partition, both on the same virtual disk). Now, it seems Windows Update is filling up my system disk, and I need to expand it. "Good luck I have a virtual machine" I thought, and went to expand the virtual disk. Having expanded the disk, I went to the virtual server operating system Windows Server 2008 R2, and tried to expand the C-partition. It turns out I can't, as the data partition directly follows the C-partition on the same virtual disk. Is there any way I can solve this? Anyone have any idea? Host: VMware ESXi 4.1 Guest: Windows Server 2008 R2 x64 Standard
From yum -h : --nogpgcheck disable gpg signature checking
{ "source": [ "https://serverfault.com/questions/288665", "https://serverfault.com", "https://serverfault.com/users/57809/" ] }
289,267
It sounds obvious that a faster connection lowers latency... But I wonder: I am working remotely on a host on the other side of the world - light can only travel so fast (1 foot in a nanosecond) and we both have broadband connections in excess of 1,000kbps upload and 10,000kbps download: Will a higher bandwidth connection lower the time it takes to ping?? Since it is very little data, how would a faster connection help? Currently ping takes 450ms; is there any way I can improve it?
First, bandwidth is not the same as latency. A faster connection won't necessarily reduce your latency. 450ms does seem a little slow, but not that far off if you are going 1/2 way across the world. As a frame of reference, a high speed, low latency link will take ~70-80ms to cross the US. You might be able to eke out a bit less latency by changing your provider, assuming they have a more optimal peering path, but I can't promise anything.
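To see where the 450ms is actually being spent (and whether a different provider or peering path could realistically help), a per-hop view is useful; a minimal sketch with a placeholder hostname:
mtr --report --report-cycles 100 remote.example.com   # per-hop latency and loss over 100 probes
traceroute remote.example.com                         # quick one-shot view of the path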
{ "source": [ "https://serverfault.com/questions/289267", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
290,088
I am having a difficult time grasping what the correct way to read the size of files is, since each command gives you varying results. I also came across a post at http://forums.devshed.com/linux-help-33/du-and-ls-generating-inconsistent-file-sizes-42169.html which states the following: du gives you the size of the file as it resides on the file system. ( IE it will always give you a result that is divisible by 1024 ). ls will give you the actual size of the file. What you are looking at is the difference between the actual size of the file and the amount of space on disk it takes. ( also called file system efficiency ). What is the difference between "as it resides on the file system" and the "actual size" of the file?
This is called slack space : Each layer of abstraction on top of individual bits and bytes results in wasted space when a datafile is smaller than the smallest data unit the file system is capable of tracking. This wasted space within a sector, cluster, or block is commonly referred to as slack space, and it cannot normally be used for storage of additional data. For individual 256-byte sectors, the maximum wasted space is 255 bytes. For 64 kilobyte clusters, the maximum wasted space is 65,535 bytes. So, if your filesystem allocates space in units of 64 KB, and you store a 3 KB file, then: the file's actual size is 3 KB. the file's resident size is 64 KB, as the remaining 61 KB in that unit can't be allocated to another file and is thus lost. Note : Some filesystems support block suballocation , which helps to mitigate this issue by assigning multiple small files (or the tail ends of large files) into the same block.
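You can put the two numbers side by side for any file; a quick sketch using GNU coreutils:
ls -l somefile    # the "actual" (apparent) size in bytes
du -h somefile    # the size as allocated on disk, rounded up to whole blocks
stat -c 'apparent: %s bytes, allocated: %b blocks of %B bytes' somefile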
{ "source": [ "https://serverfault.com/questions/290088", "https://serverfault.com", "https://serverfault.com/users/81492/" ] }
290,315
wget to a single specific url from one of my servers keeps getting timeouts. All other urls from this box work fine. This url works OK from any other boxes I have. Here's the output: wget -T 10 http://www.fcc-fac.ca --2011-07-14 14:44:29-- http://www.fcc-fac.ca/ Resolving www.fcc-fac.ca... 65.87.238.35, 207.195.108.140 Connecting to www.fcc-fac.ca|65.87.238.35|:80... failed: Connection timed out. Connecting to www.fcc-fac.ca|207.195.108.140|:80... failed: Connection timed out Can you tell me what might be wrong and how can I troubleshoot it? I'm using Ubuntu 11.04 (GNU/Linux 2.6.38-8-server x86_64) Thank you very much in advance and forgive my noobish ignorance :) ping, telnet, nc www.fcc-fac.ca 80 - all hang. However, some other urls that are easily wget'able though only some of their hosts are pingable. traceroute doesn't tell me much: 7 rx0nr-access-communications.wp.bigpipeinc.com (66.244.208.10) 148.834 ms 149.018 ms 148.940 ms 8 sw-1-research.accesscomm.ca (24.72.3.9) 158.901 ms 159.805 ms 160.162 ms 9 65.87.238.126 (65.87.238.126) 150.069 ms 148.861 ms 148.846 ms 10 * * * ... 30 * * * Thanks a lot for answers!
I think that the problem is that wget doesn't handle IPv6 addresses well and the DNS server is sending an IPv6 address for that site. Sorry if I misunderstood your question. Check these tests: hmontoliu@ulises:~$ wget -T10 http://www.fcc-fac.ca --2011-07-14 16:44:34-- http://www.fcc-fac.ca/ Resolving www.fcc-fac.ca... failed: Connection timed out. wget: unable to resolve host address `www.fcc-fac.ca' If I force IPv6, because I believe that your problem is related to it, it fails: hmontoliu@ulises:~$ wget -6 http://www.fcc-fac.ca --2011-07-14 16:40:44-- http://www.fcc-fac.ca/ Resolving www.fcc-fac.ca... failed: No address associated with hostname. wget: unable to resolve host address `www.fcc-fac.ca' However, if I force it to use IPv4, it downloads the index page just fine: hmontoliu@ulises:~$ wget -4 http://www.fcc-fac.ca --2011-07-14 16:40:56-- http://www.fcc-fac.ca/ Resolving www.fcc-fac.ca... 65.87.238.35, 207.195.108.140 Connecting to www.fcc-fac.ca|65.87.238.35|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 6554 (6,4K) [text/html] Saving to: `index.html'
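If IPv6 resolution really is the culprit, you can make IPv4 the default instead of typing -4 every time; two hedged options (the wgetrc key name is from memory, so double-check it against your wget manual):
alias wget='wget --inet4-only'        # e.g. in ~/.bashrc
echo 'inet4_only = on' >> ~/.wgetrc   # wgetrc equivalent of --inet4-only / -4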
{ "source": [ "https://serverfault.com/questions/290315", "https://serverfault.com", "https://serverfault.com/users/87661/" ] }
290,889
I am working on a change in a Java EE application that would authenticate based on the user's IP address using ServletRequest.getRemoteAddr . We store IP address ranges (FROM_IP and TO_IP) in a database and the system would authenticate only if a user's IP address falls in a range. Now, testers have pointed out that digit 0 (zero) should not be allowed in FROM_IP and TO_IP values (in any place). Note that this is an Internet facing application, and so we will get only public IP addresses. Are testers right in suggesting that validation? Why can't we have zero in the range value such as in 167.23.0.1 - 167.23.255.255?
No, they are completely incorrect. In fact, this is a valid IP address: 192.168.24.0 As is 167.23.0.1 . Separation of the IP address into dotted segments is a purely human convenience for display. It's a lot easier to remember 192.168.1.42 than 3232235818 . What matters to computers is the separation (netmask). It's not valid to have an host address with the host section of the address set entirely to 0 or 1. So, 192.168.24.0 as long as the netmask is such that some bits get set in the host part. See the following calculations: michael@challenger:~$ ipcalc 192.168.24.0/16 Address: 192.168.24.0 11000000.10101000. 00011000.00000000 Netmask: 255.255.0.0 = 16 11111111.11111111. 00000000.00000000 Wildcard: 0.0.255.255 00000000.00000000. 11111111.11111111 => Network: 192.168.0.0/16 11000000.10101000. 00000000.00000000 HostMin: 192.168.0.1 11000000.10101000. 00000000.00000001 HostMax: 192.168.255.254 11000000.10101000. 11111111.11111110 Broadcast: 192.168.255.255 11000000.10101000. 11111111.11111111 Hosts/Net: 65534 Class C, Private Internet In this case, the address part (right side) has 2 bits set. This is a valid host address in the 192.168.0.0/16 subnet. michael@challenger:~$ ipcalc 192.168.24.255/16 Address: 192.168.24.255 11000000.10101000. 00011000.11111111 Netmask: 255.255.0.0 = 16 11111111.11111111. 00000000.00000000 Wildcard: 0.0.255.255 00000000.00000000. 11111111.11111111 => Network: 192.168.0.0/16 11000000.10101000. 00000000.00000000 HostMin: 192.168.0.1 11000000.10101000. 00000000.00000001 HostMax: 192.168.255.254 11000000.10101000. 11111111.11111110 Broadcast: 192.168.255.255 11000000.10101000. 11111111.11111111 Hosts/Net: 65534 Class C, Private Internet In this case, the address part has 10 bits set and 6 bits unset. This is another valid host address in the same subnet. michael@challenger:~$ ipcalc 192.168.24.0/24 Address: 192.168.24.0 11000000.10101000.00011000. 00000000 Netmask: 255.255.255.0 = 24 11111111.11111111.11111111. 00000000 Wildcard: 0.0.0.255 00000000.00000000.00000000. 11111111 => Network: 192.168.24.0/24 11000000.10101000.00011000. 00000000 HostMin: 192.168.24.1 11000000.10101000.00011000. 00000001 HostMax: 192.168.24.254 11000000.10101000.00011000. 11111110 Broadcast: 192.168.24.255 11000000.10101000.00011000. 11111111 Hosts/Net: 254 Class C, Private Internet In this case, the address part has zero bits set. This is not a valid host address in the 192.168.24.0/24 network.
{ "source": [ "https://serverfault.com/questions/290889", "https://serverfault.com", "https://serverfault.com/users/87832/" ] }
291,546
The new CentOS 6 comes with Upstart, replacing init. I am trying to convert an /etc/inittab file to the new upstart format. This particular server only has 15 or so inittab entries, however, other servers have >30. We are mainly wanting the 'respawn' part of inittab and upstart. However, I have been reading all the upstart documentation I can find, (which is pretty much ALL based on Ubuntu, and apparently on an older version of upstart) and not getting anywhere. I can create a config file (lets call it /etc/init/test.conf). The file contains this (note, anonymized) start on runlevel [345] stop on starting shutdown respawn #Comment about what it does exec su -c "/usr/bin/ssh -2CNL 11111:127.0.0.1:11111 10.10.1.1" username If I issue a initctl reload-configuration the job is recognized. I can start it by calling initctl start test and the job will start. However, this won't work on a reboot, only manually. I have tried modifying the start command to the following, all with no luck start on started start on (local-filesystems and net-device-up IFACE!=lo) start on net-device-up IFACE=eth0 and about a dozen other ways I could see mentioned in different examples. none seem to start the script. (test.conf, like all the other files in this folder, are owned by root, and 644) Am I missing something glaringly obvious?
I found a very, very, very helpful upstart script for people who are having problems in the future. Put this into /etc/init/ :
# /etc/init/debug.conf
start on ( starting JOB!=debug \
    or started JOB!=debug \
    or stopping JOB!=debug \
    or stopped JOB!=debug )
script
    exec 1>>/tmp/log.file
    echo -n "$UPSTART_JOB/$UPSTART_INSTANCE ($0):$$:`date`:"
    echo "Job $JOB/$INSTANCE $UPSTART_EVENTS. Environment was:"
    env
    echo
end script
This script basically logs all jobs that start or stop. I have found that CentOS 6 does not 'emit' anything about runlevels (nor some of the other common events I had tried). Looking at the log file that the debug job creates in /tmp/log.file was very helpful. By changing the start of my script from: start on runlevel [345] to start on started sshd all my jobs appear to start up correctly. This was a pain in the rear, since every example I found used the former syntax.
{ "source": [ "https://serverfault.com/questions/291546", "https://serverfault.com", "https://serverfault.com/users/63503/" ] }
291,763
I have a production system where several different people are allowed to log in to a single account - the account is for the application and not for the person as we don't have personal accounts on production servers. For auditing purposes I want to be able to tell who logged in at what time, and as we use SSH keys to log in it seems logical to track that (as there is no other identifier to track). When SSH authenticates a user, it logs the user name to the system's security log, but it does not log which of the authorized public keys was used in the log in. Is it possible to get OpenSSH to also report which public key was used, or maybe just the comment associated with that key? The operating system being used is CentOS 5.6, but I'd like to also hear if its possible on other operating systems.
If you raise the LogLevel to VERBOSE in your configuration file ( /etc/sshd/sshd_config or similar) it will log the fingerprint of the public key used to authenticate the user. LogLevel VERBOSE Then you get messages like this: Jul 19 11:23:13 centos sshd[13431]: Connection from 192.168.1.104 port 63529 Jul 19 11:23:13 centos sshd[13431]: Found matching RSA key: 54:a2:0a:cf:85:ef:89:96:3c:a8:93:c7:a1:30:c2:8b Jul 19 11:23:13 centos sshd[13432]: Postponed publickey for user from 192.168.1.104 port 63529 ssh2 Jul 19 11:23:13 centos sshd[13431]: Found matching RSA key: 54:a2:0a:cf:85:ef:89:96:3c:a8:93:c7:a1:30:c2:8b Jul 19 11:23:13 centos sshd[13431]: Accepted publickey for user from 192.168.1.104 port 63529 ssh2 You can use: ssh-keygen -lf /path/to/public_key_file to get the fingerprint of a particular public key.
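To map a logged fingerprint back to a person, you can fingerprint every key in the shared account's authorized_keys and read the comments; a rough sketch where the account name and paths are assumptions (it also assumes one key per line, with no comment lines in the file):
mkdir -p /tmp/keycheck
awk 'NF { print > ("/tmp/keycheck/key." NR) }' /home/appuser/.ssh/authorized_keys
for k in /tmp/keycheck/key.*; do ssh-keygen -lf "$k"; done   # prints bits, fingerprint and the key comment
rm -rf /tmp/keycheck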
{ "source": [ "https://serverfault.com/questions/291763", "https://serverfault.com", "https://serverfault.com/users/6438/" ] }
292,014
As we all know "unix" can have anything in a file except '/' and '\0', sysadmins however tend to have a much smaller preference, mainly due to nothing liking spaces as input ... and a bunch of things having a special meaning for ':' and '@' among others. Recently I'd seen yet another case where a timestamp was used in a filename, and after playing with different formats a bit to make it "better" I figured I'd try to find a "best practice", not seeing one I figured I'd just ask here and see what people thought. Possible "common" solutions (p=prefix and s=suffix): syslog/logrotate/DNS like format: p-%Y%m%d-suffix = prefix-20110719-s p-%Y%m%d%H%M-suffix = prefix-201107191732-s p-%Y%m%d%H%M%S-suffix = prefix-20110719173216-s pros: It's "common", so "good enough" might be better than "best". No weird characters. Easy to distinguish the "date/time blob" from everything else. cons: The date only version isn't easy to read, and including the time makes my eyes bleed and seconds as well is just "lol". Assumes TZ. ISO-8601- format p-%Y-%m-%d-s = p-2011-07-19-s p-%Y-%m-%dT%H:%M%z-s = p-2011-07-19T17:32-0400-s p-%Y-%m-%dT%H:%M:%S%z-s = p-2011-07-19T17:32:16-0400-s p-%Y-%m-%dT%H:%M:%S%z-s = p-2011-07-19T23:32:16+0200-s pros: No spaces. Takes TZ into account. Is "not bad" to read by humans (date only is v. good). Can be generated by $(date --iso={hours,minutes,seconds}) cons: scp/tar/etc. won't like those ':' characters. Takes a bit for "normal" people to see WTF that 'T' is for, and what the thing at the end is :). Lots of '-' characters. rfc-3339 format p-%Y-%m-%d-s = p-2011-07-19-s p-%Y-%m-%d %H:%M%:z-s = p-2011-07-19 17:32-04:00-s p-%Y-%m-%d %H:%M:%S%:z-s = p-2011-07-19 17:32:16-04:00-s p-%Y-%m-%d %H:%M:%S%:z-s = p-2011-07-19 23:32:16+02:00-s pros: Takes TZ into account. Can easily be read by "all humans". Can distinguish date/time from prefix/suffix. Some of the above can be generated with $(date --iso={hours,seconds}) cons: Has spaces in the time versions (which means all code will hate it). scp/tar/etc. won't like those ':' characters. I love hyphens: p-%Y-%m-%d-s = p-2011-07-19-s p-%Y-%m-%d-%H-%M-s = p-2011-07-19-17-32-s p-%Y-%m-%d-%H-%M-%S-s = p-2011-07-19-23-32-16-s pros: basically a slightly nicer syslog/etc. variant. cons: Lots of '-' characters. Assumes TZ. I love hyphens, with extensions: p.%Y-%m-%d.s = p.2011-07-19.s p.%Y-%m-%d.%H-%M.s = p.2011-07-19.17-32.s p.%Y-%m-%d.%H-%M-%S.s = p.2011-07-19.23-32-16.s pros: basically a slightly nicer "I love hyphens" variant. No weird characters. Can distinguish date/time from prefix/suffix. cons: Using '.' here is somewhat non-traditional. Assumes TZ. ...so anyone want to give a preference and a reason, or more than one (Eg. don't care about TZ if it's 95+% to stay machine local, but care a lot if it isn't). Or, obviously, something not in the above list.
ISO 8601 format should be adhered to as much as possible, since it is the closest thing there is to a standard. The 'T' is not enough of a stumbling block to really warrant getting rid of it. The ':'s are potentially killers, so those should be avoided. For the reasons mentioned in others' answers, UTC (or 'Z' time) should be used. ISO 8601 includes a format using UTC ('Z' time), which should be used. ISO 8601 includes a format that does not use the ':' character, which should be used. So...sample 'best' date-time formats: 20120317T1748Z 100% in accordance with ISO 8601 alphanumeric characters only (very sysadmin-friendly) not the quickest to read, but certainly readable by the layperson 2012-03-17T1748Z date portion is in accordance with ISO 8601 time portion is in accordance with ISO 8601 transition between date and time is in accordance with ISO 8601 mixes the ISO 8601 'extended' format (date with hyphens, time with colons) with the ISO 8601 'basic' format (date without hyphens, time without colons), which is likely not quite right adds '-' character (vs 1.) a bit easier for the layperson to read (vs 1.) 2012-03-17--1748Z date portion is in accordance with ISO 8601 time portion is in accordance with ISO 8601 transition between date and time is not in accordance with ISO 8601 mixes the ISO 8601 'extended' format with the ISO 8601 'basic' format a bit easier for the layperson to read (vs 1. and 2.) no new characters (vs 2.) I am partial to 1. since it is fully IAW the standard, but the others are close. Note:: Add seconds as necessary, of course. ...and yes, with or without seconds (or even minutes) is all IAW ISO 8601. :)
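For what it's worth, all three samples can be generated directly by GNU date; the T, Z and hyphens are literal characters in the format string:
date -u +%Y%m%dT%H%MZ      # 20120317T1748Z
date -u +%Y-%m-%dT%H%MZ    # 2012-03-17T1748Z
date -u +%Y-%m-%d--%H%MZ   # 2012-03-17--1748Z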
{ "source": [ "https://serverfault.com/questions/292014", "https://serverfault.com", "https://serverfault.com/users/13963/" ] }