source_id | question | response | metadata |
---|---|---|---|
157,496 | I do know that 127.0.0.1 ~ 127.255.255.254 are the loopback IP addresses for most modern operating systems and we could use these IP addresses to refer to our own computer. But isn't 127.0.0.1 enough?!?! Why a wide range? Why from 127.0.0.1 to 127.255.255.254? | The 127/8 network can be used for a number of things. 1) Simulating a large number of different computers in a fast network (simply bring up more interfaces and bind services to them) without using virtual machines. This might be helpful if you wanted to have a number of different web servers running locally on port 80 for some reason. 2) Permitting more locally running services than the 64k TCP ports would otherwise permit (though it seems unlikely that you would rationally hit that limit) 3) Playing games with people who aren't familiar with this factoid; "Hey, you're a loser hacker, I bet you can't even hack me. Go ahead and try; I'm at 127.45.209.66" Probably other things too. | {
"source": [
"https://serverfault.com/questions/157496",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
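A quick way to see point 1 in practice: on a typical Linux host the whole 127/8 block answers locally with no extra configuration, so two listeners can share a port on different loopback addresses. The sketch below is only an illustration and assumes python3 and curl are installed; the addresses and port are arbitrary.

```bash
# Two independent HTTP listeners on the same port, different loopback addresses.
python3 -m http.server 8080 --bind 127.0.0.2 &
python3 -m http.server 8080 --bind 127.0.0.3 &
curl -s -o /dev/null -w '127.0.0.2 -> %{http_code}\n' http://127.0.0.2:8080/
curl -s -o /dev/null -w '127.0.0.3 -> %{http_code}\n' http://127.0.0.3:8080/
kill %1 %2    # stop the test servers
```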
157,520 | I have been investigating the differences between Apache and Nginx recently and am confused about which I should choose. I have done some searching but there is no definitive comparison between the two and I was wondering if someone here could give their views on the differences between the two. My current knowledge allows me to understand that mod_php is faster and more secure than fastcgi however Apache is much worse when it comes to simultaneous connections and memory consumption. My site is using a lot of long polling but has a non AJAX web base (i.e. Apache with long polling over the top). My original solution to Apaches memory problems were to send the long polling through node.js and then get node.js to access Apache every 2 secs in which case Apache would not have an open connection but instead node.js would. I have come to the realisation this might not be good enough and am looking at different solutions. I am still interested as to whether my original idea would have worked. So which is better for the modern web? Apache or Nginx? Update: All the suggestions given were good and valid. I have gone with the original second idea which is to use a full Nginx server. I am satisfied that being a dedicated server I could not suffer security issues from fastcgi and since my long polling scripts need to be written in PHP I require a server that can deal with high load simultaneous connections and Apache just cannot do that no matter how much I change the structure it will still be memory hungry. I have marked Martin F's answer since he gave such a clear and complete answer to my questions points that I feel he deserves the mark, however, all three answers were good and valid and will most definately look into using reverse proxy for another site I own since I just found something very very very kool that Nginx can do in proxying. Thanks, | You seem to have a few misconceptions which I feel needs to be addressed. First of all, mod_php is only marginally faster, all my tests have shown that the difference is so minuscule that it's not worth factoring in. I also doubt that the security aspect is relevant to you as you seem to be looking at a dedicated server and mod_php really only has an advantage in a shared environment - in fact, in a dedicated environment php-fpm will have the advantage as PHP and your web server now runs as different processes, and that's not even factoring in the awesome logging options in php-fpm such as slow log. If the world was black and white I'd say go with a pure nginx setup and compile php with php-fpm. More realistically if you already have Apache working then make nginx a reverse proxy to apache and you might save a few hours of setup time and the difference in performance will be tiny. But lets assume the world is black and white for a second, because this makes for far more awesome setups. You do nginx + php-fpm for your web server. To solve the uploads you use the upload module and upload progress module for nginx. This means that your web server accepts the upload and passes the file path onto PHP when it's done, so that the file doesn't need to be streamed between nginx and PHP via fastcgi protocol, sweet. (I have this in a live setup and it's working great, btw!) For user downloading you use nginxs x-send-file-like feature called x-accel-redirect, essentially you do your authentication in PHP and set a header which nginx picks up on and starts transfer that file. PHP ends execution and your web server is handling the transfer, sweet! 
(Again, I have this in a live setup and it's working great) For distributing files across servers or other long running operations we realize that PHP isn't really best suited for this, so we install gearman, which is a job server that can distribute jobs between workers on different servers, these workers can be written in any language. Therefore you can create a distribute worker and spawn 5 of them using a total of 200 KB of memory instead of the 100 MB PHP would use. Sweet. (I also have this running live, so it's all actually possible) In case you haven't picked up on it yet, I think many of your problems aren't related to your web server at all, you just think that way because Apache forces it to be related to your web server due to it's structure, often there are far better tools for the job than PHP and PHP is a language which knows this and provides excellent options to off-loading work without ever leaving PHP. I'd highly recommend nginx, but I also think you should look at other options for your other problems, if you have a scaling or performance problem then feel free to write me. I don't know if you can send messages through here but otherwise write me at [email protected] as I don't stalk server fault for anything not tagged with nginx. :) | {
"source": [
"https://serverfault.com/questions/157520",
"https://serverfault.com",
"https://serverfault.com/users/47551/"
]
} |
157,526 | tail -f path The above will output modifications to the file instantly, but I want to apply a filter to the output, only show when there is a keyword xxx in it. How to approach this? | With Unix you can pipe the output of one program into another. So to filter tail, you can use grep: tail -f path | grep your-search-filter | {
"source": [
"https://serverfault.com/questions/157526",
"https://serverfault.com",
"https://serverfault.com/users/42869/"
]
} |
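One refinement to the answer above, shown as a sketch (the log path is just an example): when grep writes into a pipe it block-buffers its output, so --line-buffered keeps matches appearing immediately, and -i makes the keyword match case-insensitive.

```bash
# Follow the file and print only lines containing the keyword, flushing each match right away.
tail -f /var/log/syslog | grep --line-buffered -i 'xxx'
```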
157,705 | I would like to run mongod in the background as an always present sort of thing. What would be the best way to do this? Kind of like the way I can run MySQL on startup and it's just always there running in the background. Maybe it's just some bash scripts, but it would be nice to hear if there is a better way. If it is just bash - what would that look like? Thanks. | The MongoDB daemon (mongod) has a command-line option to run the server in the background... --fork This command-line option requires that you also specify a file to log messages to (since it can not use the current console). An example of this command looks like: mongod --fork --logpath /var/log/mongod.log You could put this into an /etc/init.d/mongod bash script file. And then to have the service run at startup, create the standard symbolic links (S## & K##) inside of /etc/rc#.d/. Here is a tutorial that explains this process in more detail. Scroll down to the section titled "Init Script Activation". This also has the added benefit of being able to execute commands like... service mongod status
service mongod start
service mongod stop | {
"source": [
"https://serverfault.com/questions/157705",
"https://serverfault.com",
"https://serverfault.com/users/15059/"
]
} |
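A minimal sketch of what such an /etc/init.d/mongod wrapper could look like, assuming mongod is on the PATH and using the log path from the answer; a real init script would also manage PID files, users and a status action.

```bash
#!/bin/sh
# Minimal init-style wrapper around "mongod --fork" (paths and options are assumptions).
case "$1" in
  start) mongod --fork --logpath /var/log/mongod.log ;;
  stop)  pkill -x mongod ;;
  *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac
```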
157,710 | I am creating a Windows Server 2008 R2 VM and was wondering if anyone had an estimate on the base size of the OS when installed? I need to decide if I am thin/thick provisioning it and need an estimate for the SAN space! Cheers,
Conor | The MongoDB daemon (mongod) has a command-line option to run the server in the background... --fork This command-line option requires that you also specify a file to log messages to (since it can not use the current console). An example of this command looks like: mongod --fork --logpath /var/log/mongod.log You could put this into an /etc/init.d/mongod bash script file. And then to have the service run at startup, create the standard symbolic links (S## & K##) inside of /etc/rc#.d/. Here is a tutorial that explains this process in more detail. Scroll down to the section titled "Init Script Activation". This also has the added benefit of being able to execute commands like... service mongod status
service mongod start
service mongod stop | {
"source": [
"https://serverfault.com/questions/157710",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
157,745 | Every now and then I read that a severe fire has happened in some datacenter, lots of equipment has been damaged and customers have gone offline. Now I wonder what is there to support and spread fire? I mean walls in a server room usually have little or no finish. Racks are made of metal. Almost all units have metal cases. Cables have (or at least should have) insulation of materials that don't spread fire. What is spreading fire in a server room or datacenter? | Here's a bit of information not generally published and at times even denied - the insulation used in most electronic components will burn and burn rather well once a suitable temperature has been reached. This includes the material circuit boards are made of, as well as the lacquer used to coat most components. Some types of insulation, once lit, will add to the problem by producing gases that themselves are somewhat flammable. Of course they also produce noxious gases that could incapacitate or kill anyone unfortunate enough to breath in too much of it. The insulation on normal (non fire resistant) cables burns extremely well. Even worse, prior to, as well as after, ignition the plastic flows very freely, which helps greatly in the spread of the fire. During a fire some components, such as capacitors, will explode, which further helps to spread the fire by throwing already burning bits around. Once the temperature is nice and high we start to factor in things like paint and other decorative coatings that can be hard to light but burn well once lit. Even powder coating will burn. Far from adding protection from an already lit fire the metal case will add to the fire by providing additional fuel (the paint or other coating) and will help to sustain the fire by retaining heat and feeding it back to the fire. | {
"source": [
"https://serverfault.com/questions/157745",
"https://serverfault.com",
"https://serverfault.com/users/101/"
]
} |
158,392 | On my local host alpha I have a directory foo that is mapped via sshfs to host bravo as follows: $ sshfs charlie@bravo:/home/charlie ~/foo However, on host bravo there is another user, delta, that I want to sudo /bin/su as, so that I can do work in bravo:/home/delta . delta may not be logged into via ssh; for reasons which I cannot change, you can only sudo over to delta once you're on the machine. Normally I'd ssh into bravo , then sudo to delta, but I'm wondering if there's any way that I can do that when I've got charlie's home dir mounted via ssh. | This will vary depending on the OS of the server you are connecting to. For centOS 5 you would add to the sshfs mount options: -o sftp_server="/usr/bin/sudo /usr/libexec/openssh/sftp-server" For Ubuntu 9.10 (I think, might be 9.04, but it's probably the same for both) or Debian you would add: -o sftp_server="/usr/bin/sudo /usr/lib/openssh/sftp-server" . To find an the correct path for other systems running openSSH run sudo grep Subsystem /etc/ssh/sshd_config and look for the location of the sftp-server binary. You might need to setup sudo with NOPASS:{path to sftp-server} or prevalidate with ssh user@host sudo -v so that sudo has a updated timestamp for notty . In my case, my two commands were: ssh login_user@host sudo -v
sshfs login_user@host:remote_path local_path -o sftp_server="/usr/bin/sudo -u as_user /usr/lib/ssh/sftp-server" | {
"source": [
"https://serverfault.com/questions/158392",
"https://serverfault.com",
"https://serverfault.com/users/10376/"
]
} |
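For the NOPASSWD route the answer hints at (the sudoers keyword is NOPASSWD), a hedged sketch could look like the following; the user names and the Debian/Ubuntu sftp-server path are assumptions to adapt.

```bash
# On bravo, added via visudo: let charlie run sftp-server as delta without a password.
#   charlie ALL=(delta) NOPASSWD: /usr/lib/openssh/sftp-server
# On alpha, mount delta's home directory through that sudo'd sftp-server:
sshfs charlie@bravo:/home/delta ~/foo \
  -o sftp_server="/usr/bin/sudo -u delta /usr/lib/openssh/sftp-server"
```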
158,610 | I can run IIS on a normal installation, what is so different about windows server? | Several key areas: Amount of memory supported Number of processors supported Number of network connections may be greater than 10 Some apps do an OS version check, and they won't install unless it's the proper server version (as previously mentioned) By default, the server OS is configured to give priority to background apps/services and the client OS is configured to give priority to foreground apps. This can be configured, however. There are some services (Microsoft created) that are only permitted to run on the server OS (DNS, DHCP, Active Directory, PKI, etc.) as previously mentioned. | {
"source": [
"https://serverfault.com/questions/158610",
"https://serverfault.com",
"https://serverfault.com/users/11090/"
]
} |
158,772 | I have to setup a firewall on a Linux server (all my previous experience is with Windows). My rules are meant to be pretty simple - forbid all, allow some ports with all, allow some ports for specific IP subnets, while the network is small but complex (each host has IPs in at least 2 192.168... nets, everyone can interconnect many different ways). I think using iptables wrappers can overcomplicate the system logically introducing many unnecessary entities and it would be better to keep it simple and use iptables directly. Can you recommend a good quick intro on how to write iptables rules? | Links to official and recommended documentation exist on the Netfilter Web site. This is not a new subject, resources are limitless . Most basic commands are fairly intuitive and can easily be reference to the manpage . netfilter, which is the kernel level technology that enables the packet filtering, is quite advanced. There are additional tables that can mangle packets, translate packets, and otherwise affect routing. The iptables utility is the userland tool for interacting with netfilter. If you wish to learn about advanced functionality, I suggest you reference the aforementioned documentation. For an introduction to the basic functionality, please read further. To list all existing rules: iptables -L -n -n prevents iptables from resolving ips, which produces faster output. The default table is the filter table, which is what is used to apply basic firewall rules to the three chains. The three default chains in the filter table are INPUT , OUTPUT , and FORWARD . The chains are largely self explanatory. The INPUT chain affects packets coming in, the OUTPUT chain affects locally generated packets, and finally FORWARD for any packets that route through the system. Among the targets you can specify, you can DROP packets, meaning simply ignore and not respond. You can REJECT packets, where an icmp response would be sent to the source of the denial. Finally, you can ACCEPT them, which allows the packets to continue routing. Often with an external facing firewall the default choice will be DROP as opposed to REJECT , as it reduces the visible footprint of your network on the Internet. For example, an IP that otherwise limits services to a specific host would have less visibility with DROP . Note, -A means append to the end of the chain. If you wish to insert to the top, you can use -I . All rules are processed from the top down. -D for deletion. To DROP an incoming packet coming from the 192.168.235.235 : iptables -A INPUT -s 192.168.235.235 -j DROP This jumps to the DROP target for all protocols coming from that IP. To accept: iptables -A INPUT -s 192.168.235.235 -j ACCEPT To prevent access to that IP from your local server or network: iptables -A OUTPUT -d 192.168.235.235 -j DROP You can specify the -p protocol, the -s source of the packet, the -d destination for the packet, the destination port --dport , the source port --sport , and many other flags that will affect how the packets are treated by the rule. If your default INPUT policy were DROP and you wanted to allow everyone at the 192.168.123.0/24 subnet to access SSH on your server, here is an example: iptables -A INPUT -s 192.168.123.0/24 -p tcp --dport 22 -j ACCEPT That's right, you can use CIDR notation too! Generally speaking, the best default policy is DROP for all chains. Every chain has a default policy, which is specified by the -P flag. 
Even if you have your policy set to default DROP , it is still advised to have the final entry in a chain to be a DROP as well. For example, to change the policy to DROP for the INPUT, FORWARD, and OUTPUT chains: iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP Be careful , if you specify the default policy of DROP for INPUT on a remote system without first allowing yourself SSH access, you could prevent yourself from accessing the system. If on a remote system, you could specify a temporary crontab to flush all rules every 5 minutes as a failsafe. To delete all rules and allow all traffic: iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -X
iptables -F Note, -X removes all created chains. -F flushes all rules. There are native tools to save and restore the rules. Particularly, iptables-save and iptables-restore . Most modern Linux distributions have save and restore functions within an iptables init file provided with the system. There are other firewall best practices, such as dropping malformed packets and other type of undesirable traffic. This is one advantage of using a front end utility such as Shorewall , as it will implement many of these polices by default. Nevertheless, I agree with your approach and prefer to maintain my own rules directly as well, and these same best practices can be implemented without a front end. | {
"source": [
"https://serverfault.com/questions/158772",
"https://serverfault.com",
"https://serverfault.com/users/42777/"
]
} |
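Pulling the answer's pieces together, a minimal default-deny ruleset might look like the sketch below; the interface, subnet and open ports are placeholders, and it is safest to test from the console rather than over SSH.

```bash
#!/bin/sh
# Minimal default-deny firewall sketch: flush, set policies, then allow a few services.
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 192.168.123.0/24 -j ACCEPT   # SSH from the trusted LAN
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                       # public HTTP
iptables-save > /etc/iptables.rules                                 # persist; restore at boot
```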
159,273 | So yesterday I found out that my server was rooted via the h00lyshit exploit. So far I have deleted all the files that might be associated with the exploit. I also deleted all the SSH keys in ~/.ssh/authorized_keys . I changed the root password to a 25-random-character password and changed the MySQL passwords as well. Also, I think the attacker was from Italy, and since I need access only from my country I blocked every IP range except my own country; will this help? Do you guys have any good advice on what I should do? I plan to disable root login via SSH (I should have done it much sooner, I know :( ). And is there a way to check if he can access my server again? Also, no damage was done, luckily. Oh, and I'm running Debian Lenny with a 2.6.26 kernel if somebody is interested. PS: yay, my first question :D | You should restore the server from a known good backup. There's no real way to know that no other back doors were installed, is there? | {
"source": [
"https://serverfault.com/questions/159273",
"https://serverfault.com",
"https://serverfault.com/users/48049/"
]
} |
159,313 | It looks like nginx 0.8.35 may support chunked transfer encoding : Changes with nginx 0.8.35 01 Apr 2010 *) Change: now the charset filter runs before the SSI filter.
*) Feature: the "chunked_transfer_encoding" directive. This is great, because I'm trying to get push git changes through an nginx reverse proxy to a git-http-backend process. Git HTTP takes advantage of chunked transfer encoding for client-side efficiency reasons . However, I can't get it to work. I'm using nginx 0.8.44 on Debian Lenny with the following configure invocation: ./configure \
--sbin-path=/usr/sbin \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--user=www-data \
--group=www-data \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--with-http_ssl_module \
--with-http_gzip_static_module \
--with-http_realip_module And the following conf file: server {
server_name example.com;
location / {
proxy_pass http://192.168.0.10;
include /etc/nginx/proxy.conf;
chunked_transfer_encoding on;
}
} And my proxy.conf looks like this: proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 100M;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k; (Originally I posted this question to Stack Overflow but was advised it's more appropriate to Server Fault) | This is an old question, I know, but it came up in a search for the problem (which I've spent the afternoon trying to solve). Martin F's comment gave me enough of a clue to get it working! The trick is to set proxy_buffering off; in your location block. Assuming that your upstream server is sending back chunked responses, this will cause nginx to send the individual chunks back to the client - even gzipping them on the fly if you have gzip output compression turned on. Note that turning off buffering may have other disadvantages, so don't go blindly turning off buffering without understanding why. | {
"source": [
"https://serverfault.com/questions/159313",
"https://serverfault.com",
"https://serverfault.com/users/14451/"
]
} |
159,334 | On Ubuntu: touch: cannot touch `/var/run/test.pid': Permission denied I am starting start-stop-daemon and like to write the PID file in /var/run
start-stop-daemon is run as my-program-user /var/run setting is drwxr-xr-x 9 root root I like to avoid putting my-program-user in the root group. | By default, you can only write to /var/run as a user with an effective user ID of 0 (ie as root). This is for good reasons, so whatever you do, don't go and change the permissions of /var/run... Instead, as root, create a directory under /var/run: # mkdir /var/run/mydaemon Then change its ownership to the user/group under which you wish to run your process: # chown myuser:myuser /var/run/mydaemon Now specify to use /var/run/mydaemon rather than /var/run. You can always test this by running a test as the user in question. | {
"source": [
"https://serverfault.com/questions/159334",
"https://serverfault.com",
"https://serverfault.com/users/48047/"
]
} |
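Putting the answer together with start-stop-daemon: on many systems /var/run is cleared at every boot, so the directory has to be recreated before the daemon starts. A sketch, with the user, group, daemon path and PID file name as placeholders:

```bash
# Recreate the runtime directory owned by the service user, then start unprivileged.
install -d -o my-program-user -g my-program-user -m 0755 /var/run/mydaemon
start-stop-daemon --start --chuid my-program-user \
  --pidfile /var/run/mydaemon/test.pid \
  --exec /usr/local/bin/my-program
```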
159,339 | I have installed PostgreSQL 8.3 on an Ubuntu machine and I want to configure a password for it; however, when I run this command: sudo su postgres -c psql template1 I get the following error: "psql: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?" Can anybody tell me how I can check if the db server is running locally (only accepting connections on localhost)? How I can start the psql server? I'm a Linux newbie, btw | You can know the status of your Postgres server via the command sudo /etc/init.d/postgresql-8.3 status To start it you can issue the command sudo /etc/init.d/postgresql-8.3 start and to stop you can issue the command sudo /etc/init.d/postgresql-8.3 stop | {
"source": [
"https://serverfault.com/questions/159339",
"https://serverfault.com",
"https://serverfault.com/users/23852/"
]
} |
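A few extra checks that can confirm the server really is up and listening locally; the socket path comes from the error message in the question and the init script name from the answer.

```bash
sudo /etc/init.d/postgresql-8.3 status       # service status
ls -l /var/run/postgresql/.s.PGSQL.5432      # the Unix socket psql is looking for
sudo netstat -lnp | grep 5432                # anything listening on the PostgreSQL port?
```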
159,422 | I'm running OS X 10.6 Server, and I want to eject my external drive so I can do some disk maintenance such as defraging it. However when I try to eject the drive it fails saying the disk is in use. I can force eject it but that could cause corruption... How can I tell which application is using the drive and holding it open? | Try sudo lsof | grep /Volumes/External , where "External" would be the name of your external drive. Are you hosting any services' data off of that drive? | {
"source": [
"https://serverfault.com/questions/159422",
"https://serverfault.com",
"https://serverfault.com/users/40549/"
]
} |
159,424 | For the first time I've installed original Debian (I used to use only Ubuntu server and client editions and Arch client before) server, it was surprise for me that it has no sudo, and no ssh server installed by default, and allows root login over ssh after ssh is installed (so obviously setting up and securing SSH daemon is #1 task to be done on new installation). Any more such surprises there? | Try sudo lsof | grep /Volumes/External , where "External" would be the name of your external drive. Are you hosting any services' data off of that drive? | {
"source": [
"https://serverfault.com/questions/159424",
"https://serverfault.com",
"https://serverfault.com/users/42777/"
]
} |
159,612 | How can I find a Windows server's last reboot time, apart from 'net statistics server/workstation'? | Start → Run → cmd.exe : systeminfo | find "System Boot Time" Or for older OS versions (see comment): systeminfo | find "System Up Time" | {
"source": [
"https://serverfault.com/questions/159612",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
160,396 | I was wondering if there is a way of listing all the smb servers on a local network (like looking at a network neighborhood in windows) via the command line in fedora. | This command is a very little known secret of Samba. It returns IP adresses of all Samba servers in one's own broadcast domain: nmblookup __SAMBA__ This one returns a list of all NetBIOS names and their aliases of all Samba servers in the neighbourhood (it does a 'node status query' ): nmblookup -S __SAMBA__ This one returns a list of all IP adresses of SMB servers (that is, Linux+Unix/Samba or Windows) in the neighbourhood: nmblookup '*' Finally, all NetBIOS names and their aliases of all SMB servers (Linux+Unix/Samba or Windows): nmblookup -S '*' The command given in the other answer nmblookup -S WORKGROUP does NOT return all Samba or all SMB servers from the neighbourhood. Instead, it returns all servers' NetBIOS names who happen to be members of a workgroup named 'WORKGROUP' . The results are independent of the servers' OS (wether that is Windows, or wether that is Linux/Samba) -- and it is a well known fact that sometimes lots of Windows member server are part of a Samba-controlled domain or workgroup. [Yes, it happens that Samba's default workgroup name is 'WORKGROUP'... but so what??]. -- But the question was 'How do I get to know all SMB (Samba?!?) servers in my network neighbourhood?' | {
"source": [
"https://serverfault.com/questions/160396",
"https://serverfault.com",
"https://serverfault.com/users/48348/"
]
} |
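Once a server's address is known, smbclient (also part of the Samba suite) can list the shares it exposes; the IP below is only an example, and -N skips the password prompt for an anonymous listing.

```bash
smbclient -L //192.168.1.10 -N
```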
160,581 | How can passwordless sudo access be setup on either RHEL (Fedora, CentOS, etc) or Ubuntu distributions? (If it's the same across distros, that's even better!) Setting: personal and/or lab/training equipment with no concern for unauthorized access (ie, the devices are on non-public networks, and any/all users are fully trusted, and the contents of the devices are "plain-vanilla"). | EDIT thanks to medina's comment: According to the man page, you should be able to write ALL ALL = (ALL) NOPASSWD: ALL to allow all users to run all commands without a password. For reference, I'm leaving my previous answer: If you add a line of the form %wheel ALL = (ALL) NOPASSWD: ALL to /etc/sudoers (using the visudo command, of course), it will let everyone in the group wheel run any commands without providing a password. So I think the best solution is to put all your users in some group and put a line like that in sudoers - obviously you should replace wheel with the actual group you use. Alternatively, you can define a user alias, User_Alias EVERYONE = user1, user2, user3, ... and use that: EVERYONE ALL = (ALL) NOPASSWD: ALL although you would have to update /etc/sudoers every time you add or remove a user. | {
"source": [
"https://serverfault.com/questions/160581",
"https://serverfault.com",
"https://serverfault.com/users/2321/"
]
} |
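On sudo builds that read /etc/sudoers.d (the #includedir default on current Ubuntu and RHEL), a slightly safer variant is to put the rule in a drop-in file and let visudo check it first, since a syntax error in sudoers can lock everyone out. A sketch, with the group name as an assumption:

```bash
echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /tmp/90-nopasswd
visudo -cf /tmp/90-nopasswd && sudo install -m 0440 /tmp/90-nopasswd /etc/sudoers.d/90-nopasswd
```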
160,612 | I have 60 instances of Console Kit daemon on Ubuntu 9.04 server installation. Is it safe to kill those or stop these processes. They seem to be taking about 20% of RAM each(see on htop). | ConsoleKit manages console logins in graphical mode (i.e. with gdm or equivalent); if your server doesn't have those, you don't need it, but then it won't be started anyway. Also, you may be interested in this question . But you don't really have 60 instances taking 20% of RAM each. The ConsoleKit daemon is multithreaded, and htop shows a separate line for each thread. It's really one process and there's a single copy of that memory; you can confirm that with ps wwu -C console-kit-daemon . Additionally, the memory usage shown by htop includes code memory, some of which is likely to be shared with other processes using the same dynamic libraries. | {
"source": [
"https://serverfault.com/questions/160612",
"https://serverfault.com",
"https://serverfault.com/users/26150/"
]
} |
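To confirm it really is one multi-threaded process rather than 60 copies, comparing the process count with the thread (LWP) count is a quick check; a sketch:

```bash
ps -C console-kit-daemon -o pid= | wc -l      # number of processes
ps -C console-kit-daemon -L -o lwp= | wc -l   # number of threads
```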
160,716 | If you have many sub-domain names like xxx.example.com , xyz.example.com etc, you can solve these through server-side scripting or other means by using a wildcard A record for *.example.com in your DNS. How can I determine whether a wildcard domain is configured for any given domain name? Using http://network-tools.com gives a lot of information, but doesn't reveal this. If I need to use commandline tools: I use Windows. One such example that uses a wildcard domain DNS, I think, is blogspot.com. | You can literally query "*.example.com" and find out if there is a wildcard, but it is impossible to tell the difference between these two zones: xyz.example.com. IN A 1.2.3.4
*.example.com. IN A 1.2.3.4 and *.example.com. IN A 1.2.3.4 i.e., you can't find out whether you're being answered by a wildcard for a given query, only that a wildcard exists. I haven't found any Web-accessible looking glasses that support it yet, as they seem to think it's invalid input, but raw dig (or even nslookup on Windows) works like a charm: c:\Some\User> nslookup
> *.my-test-domain.com
Server: Wireless_Broadband_Router.home
Address: 192.168.1.1
Non-authoritative answer:
Name: *.my-test-domain-is-not-a-real-domain.com
Address: 1.2.3.4 Or with dig: # dig +short '*.not-a-real-domain.com'
1.2.3.4 | {
"source": [
"https://serverfault.com/questions/160716",
"https://serverfault.com",
"https://serverfault.com/users/24901/"
]
} |
160,768 | Using instructions from this site, but varying them just a little, I created a CA using -newca, copied cacert.pem to my computer and imported it as a trusted issuer in IE. I then did -newreq and -sign (note: I do /full/path/CA.sh -cmd and not sh CA.sh -cmd ) and moved the cert and key to Apache. I visited the site in IE and from .NET code and it appears trusted, great (unless I write www. in front, which is expected). But every time I restart Apache I need to type in my password for the site(s?). How can I make it so I DO NOT need to type in the password? | You want to remove the passphrase from a key file. Run this: openssl rsa -in key.pem -out newkey.pem Be aware that this means that anyone with physical access to the server can copy (and thereby abuse) the key. | {
"source": [
"https://serverfault.com/questions/160768",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
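Since the decrypted key is then protected only by file permissions, it is worth tightening them and reloading Apache in one step; a sketch assuming a root-owned key and an apachectl that supports configtest/graceful:

```bash
openssl rsa -in key.pem -out newkey.pem
chown root:root newkey.pem && chmod 600 newkey.pem
apachectl configtest && apachectl graceful
```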
161,401 | Here is my iptables, how can I make it so that I can allow a range of ip's on ETH1 (10.51.x.x) # Generated by iptables-save v1.4.4 on Thu Jul 8 13:00:14 2010
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:fail2ban-ssh - [0:0]
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A INPUT -i lo -j ACCEPT
-A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 143 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 110 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -j ACCEPT
-A fail2ban-ssh -j RETURN
COMMIT | If you only want to allow a certain range of IP addresses inside of 10.50.0.0 (such as from 10.50.10.20 through 10.50.10.80) you can use the following command: iptables -A INPUT -i eth1 -m iprange --src-range 10.50.10.20-10.50.10.80 -j ACCEPT If you want to allow the entire range you can use this instead: iptables -A INPUT -i eth1 -s 10.50.0.0/16 -j ACCEPT See iptables man page and this question here on ServerFault: Whitelist allowed IPs (in/out) using iptables | {
"source": [
"https://serverfault.com/questions/161401",
"https://serverfault.com",
"https://serverfault.com/users/6342/"
]
} |
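One caveat for this particular ruleset: the INPUT chain already ends in a REJECT rule, so a rule appended with -A would sit below it and never match. A sketch that inserts it near the top instead, using the 10.51.0.0/16 range the question actually asks about:

```bash
# Position 2 keeps the fail2ban-ssh hook as the first rule.
iptables -I INPUT 2 -i eth1 -m iprange --src-range 10.51.0.0-10.51.255.255 -j ACCEPT
# or, for the whole subnet:
iptables -I INPUT 2 -i eth1 -s 10.51.0.0/16 -j ACCEPT
```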
161,678 | Apparently, since the last time I had to do a lot of playing with partitions and images, Symantec acquired, neglected, and killed off PartitionMagic. (Yeah, it's been a while.) What, then, are we meant to use instead, that gives us that general behavior of actually doing what it's goddamned supposed to that PartitionMagic so nobly displayed? | There are quite a few alternatives, both commercial and Open Source. One of the more popular OS programs is GParted . | {
"source": [
"https://serverfault.com/questions/161678",
"https://serverfault.com",
"https://serverfault.com/users/1736/"
]
} |
161,682 | I have a VM instance of Ubuntu Server 10.04, running django with mod_wsgi. It was working fine, and then after doing an /etc/init.d/apache2 reload and /etc/init.d/apache2 restart, I get nothing but Internal 500 errors. I checked the logs, and when I hit the server, nothing it output. However when the server is restarted, I get the following, which as far as I can tell is not related to my problem. EDIT: Just realized I have a custom logging file, not in the usual place. Here is what the real log file has been catching all this time: [Mon Jul 19 05:40:10 2010] [info] [client 192.168.1.152] mod_wsgi (pid=1693, process='', application='192.168.1.153|'): Loading WSGI script '/srv/www/mysite.com/application/django.wsgi'.
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] mod_wsgi (pid=1693): Exception occurred processing WSGI script '/srv/www/mysite.com/application/django.wsgi'.
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] Traceback (most recent call last):
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 241, in __call__
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] response = self.get_response(request)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 142, in get_response
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return self.handle_uncaught_exception(request, resolver, exc_info)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 166, in handle_uncaught_exception
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return debug.technical_500_response(request, *exc_info)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/views/debug.py", line 58, in technical_500_response
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] html = reporter.get_traceback_html()
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/views/debug.py", line 137, in get_traceback_html
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return t.render(c)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/template/__init__.py", line 173, in render
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return self._render(context)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/template/__init__.py", line 167, in _render
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return self.nodelist.render(context)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/template/__init__.py", line 796, in render
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] bits.append(self.render_node(node, context))
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/template/debug.py", line 72, in render_node
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] result = node.render(context)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/template/debug.py", line 89, in render
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] output = self.filter_expression.resolve(context)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/template/__init__.py", line 579, in resolve
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] new_obj = func(obj, *arg_vals)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/template/defaultfilters.py", line 693, in date
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return format(value, arg)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/dateformat.py", line 281, in format
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return df.format(format_string)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/dateformat.py", line 30, in format
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] pieces.append(force_unicode(getattr(self, piece)()))
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/dateformat.py", line 187, in r
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return self.format('D, j M Y H:i:s O')
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/dateformat.py", line 30, in format
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] pieces.append(force_unicode(getattr(self, piece)()))
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/encoding.py", line 66, in force_unicode
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] s = unicode(s)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py", line 206, in __unicode_cast
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return self.__func(*self.__args, **self.__kw)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/__init__.py", line 55, in ugettext
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return real_ugettext(message)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py", line 55, in _curried
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/__init__.py", line 36, in delayed_loader
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return getattr(trans, real_name)(*args, **kwargs)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 276, in ugettext
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] return do_translate(message, 'ugettext')
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 266, in do_translate
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] _default = translation(settings.LANGUAGE_CODE)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 176, in translation
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] default_translation = _fetch(settings.LANGUAGE_CODE)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 159, in _fetch
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] app = import_module(appname)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py", line 35, in import_module
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] __import__(name)
[Mon Jul 19 07:40:11 2010] [error] [client 192.168.1.152] TemplateSyntaxError: Caught
ImportError while rendering: No module named mysite.website
[Mon Jul 19 07:40:11 2010] [debug] mod_deflate.c(615): [client 192.168.1.152] Zlib: Compressed 620 to 383 : URL /admin | There are quite a few alternatives, both commercial and Open Source. One of the more popular OS programs is GParted . | {
"source": [
"https://serverfault.com/questions/161682",
"https://serverfault.com",
"https://serverfault.com/users/44447/"
]
} |
162,280 | What are the differences (and maybe pros and cons) of KMS and MAK license activation for Windows 7? | Assuming that you are in an Active Directory domain environment with at least 25 computers on-site (or on your reliable, mission-critical WAN or VPN) running any combination of Server 2008, Server 2008 R2, Windows Vista, and Windows 7, you will want to use KMS. Detailed pros/cons: MAK: Pro: When you activate a product with a MAK key, it is activated permanently. It does not need to reactivate on any pre-set interval. Con: You must enter a product key each time you set up a new workstation or server. Operating systems will not activate automatically. Con: Each MAK key has a limited number of activations. (You still may be able to activate more computers than licensed. You are responsible for maintaining records of your licensing compliance.) Con: Hardware changes may invalidate your activation. If your MAK key's activation count is depleted, reactivating will require a phone call to Microsoft. KMS: Pro: Your KMS server can activate unlimited computers without further approval from Microsoft. When you activate a Windows product with a KMS key, that computer becomes your KMS server. Normally, the KMS server that activates your Win7 workstations will be running Windows Server 2008 R2, activated with a Server 2008 R2 KMS key. This is because a Server 2008 R2 KMS can activate any Windows 6.x product, including Vista, Win7, and Server 2008. Although the KMS key itself will activate only a few times, this is not a limitation because you need only one KMS server. (You are still responsible for maintaining records of your licensing compliance; unlimited activations does not mean unlimited licenses.) Pro: You do not need to enter a product key each time you add a new workstation or server. They will activate against your KMS server automatically upon joining your domain. (This works with Vista, Server 2008, Win7, and Server 2008 R2. The KMS must have a proper SRV record in your enterprise's DNS.) Pro: Hardware changes that invalidate a computer's activation will resolve automatically after a reboot, with no phone call to Microsoft, because the computer will reactivate with the KMS server automatically. Con: The KMS server must receive activation requests from at least 25 products (any combination of Win7/Vista, Server 2008, and Server 2008 R2) before it will grant activation for Windows 7. Therefore, if you do not have at least 25 computers and/or VMs running Windows 6.x operating systems, you cannot use KMS in your enterprise. Con: KMS activations expire after 180 days. All KMS clients must have network access to the KMS server at least once every 180 days in order to reactivate. Transitioning from MAK to KMS: When you are transitioning to KMS, you may need to convert existing computers from MAK activation to KMS activation in order to reach the minimum count of 25. It is possible to convert computers from MAK to KMS activation remotely using slmgr.vbs. Microsoft publishes lists of product keys that will configure Windows Vista/7/Server2008/R2 as KMS clients instead of using MAK activation. The KMS client configuration keys for Windows 7 and Windows Server 2008 R2 are published in Microsoft's "Volume Activation Deployment Guide" at http://technet.microsoft.com/en-us/library/ff793406.aspx . 
The KMS client configuration keys for Windows Vista and Windows Server 2008 (not R2) are published by Microsoft in an older version of the same guide: http://technet.microsoft.com/en-us/library/cc303280.aspx#_KMS_Client_Setup . The following commands can be executed by the domain administrator at a Windows command prompt to change a Windows 7 Professional computer named EXAMPLE-PC from MAK activation to KMS activation, activating against KMS-HOSTNAME. (Note: The product key included after the /ipk switch is a special key that tells Windows 7 to contact a KMS server. For other versions of Windows, please refer to the TechNet articles linked above for the correct keys) : slmgr.vbs EXAMPLE-PC /ipk FJ82H-XT6CR-J8D7P-XQJJ2-GPDD4 slmgr.vbs EXAMPLE-PC /skms KMS-HOSTNAME slmgr.vbs EXAMPLE-PC /ato Such changes are to be made at your own risk, and only after careful planning. Changing a computer's product key will invalidate a computer's existing MAK activation. If you are not able to get the KMS working correctly, e.g. because you do not have the minimum 25 computers or the KMS is not configured correctly, this could create major problems. As always, RTFM (and test) before you leap! | {
"source": [
"https://serverfault.com/questions/162280",
"https://serverfault.com",
"https://serverfault.com/users/1349/"
]
} |
162,362 | I'm a chroot novice trying to make a simple chroot jail but am banging my head against the same problem time and time again... Any help would be massively appreciated I've created a directory /usr/chroot that I want to use as a jail and created subdirectories under it and copied the dependencies of /bin/bash into it: [root@WIG001-001 ~]# cd /usr/chroot/
[root@WIG001-001 chroot]# ls
[root@WIG001-001 chroot]# mkdir bin etc lib var home
[root@WIG001-001 chroot]# ldd /bin/bash
linux-vdso.so.1 => (0x00007fff99dba000)
libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00000037a2000000)
libdl.so.2 => /lib64/libdl.so.2 (0x000000379fc00000)
libc.so.6 => /lib64/libc.so.6 (0x000000379f800000)
/lib64/ld-linux-x86-64.so.2 (0x000000379f400000)
[root@WIG001-001 chroot]# cp /lib64/libtinfo.so.5 /usr/chroot/lib/
[root@WIG001-001 chroot]# cp /lib64/libdl.so.2 /udr/csr/chroot/lib/
[root@WIG001-001 chroot]# cp /lib64/libc.so.6 /usr/chroot/lib/
[root@WIG001-001 chroot]# cp /lib64/ld-linux-x86-64.so.2 /usr/chroot/lib/
[root@WIG001-001 chroot]# cp /bin/bash bin
[root@WIG001-001 chroot]# pwd
/usr/chroot
[root@WIG001-001 chroot]# /usr/sbin/chroot .
/usr/sbin/chroot: cannot run command `/bin/bash': No such file or directory
it looks like the /bin/bash created under /usr/chroot is fine as the below works:
[root@WIG001-001 chroot]# su - nobody -s /usr/chroot/bin/bash
-bash-4.0$ Can anyone give me any idea where to go from here? | The error message is misleading : /bin/bash: No such file or directory can mean either that /bin/bash doesn't exist, or that the dynamic loader used by /bin/bash doesn't exist. (You'll also get this message for a script if the interpreter on the #! line doesn't exist.) /bin/bash is looking for /lib64/ld-linux-x86-64.so.2 but you provided /lib/ld-linux-x86-64.so.2 . Make /usr/chroot/lib64 a symbolic to lib or vice versa. | {
"source": [
"https://serverfault.com/questions/162362",
"https://serverfault.com",
"https://serverfault.com/users/48846/"
]
} |
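Concretely, for the jail built in the question the fix looks like the sketch below: give the loader the /lib64 path it was linked against, then retry the chroot.

```bash
cd /usr/chroot
ln -s lib lib64                          # /lib64/ld-linux-x86-64.so.2 now resolves inside the jail
/usr/sbin/chroot /usr/chroot /bin/bash
```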
162,388 | The first noncomment line in a legacy crontab file begins with five asterisks: * * * * * ([a_command]) >/dev/null 2>&1 The authors are gone, so I do not know their intent . What does all-wildcards mean to (Solaris 8) cron? The betting here is either run once, run continuously, or run never, which is unfortunately broad. If you are wondering about the comment line preceding this, it is "Do not delete." Note: This cron file is working . This question is not a duplicate of a question about broken cron files or cron files which require troubleshooting. | Every minute of every day of every week of every month, that command runs. man 5 crontab has the documentation of this. If you just type man crontab , you get the documentation for the crontab command . What you want is section 5 of the manual pages which covers system configuration files including the /etc/crontab file. For future reference, the sections are described in man man : 1 Executable programs or shell commands
2 System calls (functions provided by the kernel)
3 Library calls (functions within program libraries)
4 Special files (usually found in /dev)
5 File formats and conventions eg /etc/passwd
6 Games
7 Miscellaneous (including macro packages and conven‐
tions), e.g. man(7), groff(7)
8 System administration commands (usually only for root)
9 Kernel routines [Non standard] | {
"source": [
"https://serverfault.com/questions/162388",
"https://serverfault.com",
"https://serverfault.com/users/1858/"
]
} |
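For reference, the five fields are minute, hour, day of month, month and day of week; keeping the original entry next to a narrower variant makes the difference obvious. The narrower schedule is only an illustration, and the */5 step syntax is a Vixie-cron extension that Solaris 8's cron does not accept.

```
# Field order: minute hour day-of-month month day-of-week command
# The existing entry runs every minute:
* * * * * ([a_command]) >/dev/null 2>&1
# A narrower illustration, every 5 minutes (Vixie cron only):
*/5 * * * * ([a_command]) >/dev/null 2>&1
```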
162,429 | How do I set the Access-Control-Allow-Origin header so I can use web-fonts from my subdomain on my main domain? Notes: You'll find examples of this and other headers for most HTTP servers in the HTML5BP Server Configs projects https://github.com/h5bp/server-configs | Nginx has to be compiled with http://wiki.nginx.org/NginxHttpHeadersModule (default on Ubuntu and some other Linux distros). Then you can do this location ~* \.(eot|ttf|woff|woff2)$ {
add_header Access-Control-Allow-Origin *;
} | {
"source": [
"https://serverfault.com/questions/162429",
"https://serverfault.com",
"https://serverfault.com/users/11435/"
]
} |
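After reloading nginx, the header can be verified from the command line; the URL is an example and -I asks for the response headers only.

```bash
curl -sI http://static.example.com/fonts/myfont.woff | grep -i '^access-control-allow-origin'
```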
163,160 | I am wondering if when an email is forward does it lose its original headers? | Forwarding, in a mail client (MUA), usually means sending a new mail, with the original mail included in the body or as an attachment. Depending on the client, the headers of the original mail may be included verbatim (e.g., mutt) or only in a highly abridged fashion (e.g., outlook). Some MUAs offer a remail command (it could be called something else such as resend or bounce, or be an option to the forward command). This resends the mail with exactly the same headers as when it was delivered. Delivery of a mail can change some headers (e.g., virus/spam scanners recording the result of their analysis), so the second recipient may not see exactly the same headers as the first. Forwarding, in a mail server (MTA), means that the mail is resent to another system for delivery there. MTAs normally add a Received: header at the very beginning when they receive a mail, so each forwarding server adds its own mark. Forwarding normally doesn't affect existing headers, unless the MTA has been specifically directed to rewrite some headers. There are a few headers that are more likely than others to be rewritten by an MTA on forward. For example, some MTAs will rewrite X-Envelope-… headers to match the envelope they see instead of the envelope the previous MTA (that added the X-Envelope-… header) saw. More and more MTAs are configured to add some headers for spam and spoofing prevention, and they may throw away any existing header with the same name; there is a lot of variety in those. | {
"source": [
"https://serverfault.com/questions/163160",
"https://serverfault.com",
"https://serverfault.com/users/49075/"
]
} |
163,244 | Recently I've set up a new Ubuntu Server 10.04 and noticed my UDP server is no longer able
to see any multicast data sent to the interface, even after joining the multicast group. I've got the exact same set up on two other Ubuntu 8.04.4 LTS machines and there is no problem receiving data after joining the same multicast group. The ethernet card is a Broadcom netXtreme II BCM5709 and the driver used is: b $ ethtool -i eth1
driver: bnx2
version: 2.0.2
firmware-version: 5.0.11 NCSI 2.0.5
bus-info: 0000:01:00.1 I'm using smcroute to manage my multicast registrations. b$ smcroute -d
b$ smcroute -j eth1 233.37.54.71 After joining the group ip maddr shows the newly added registration. b$ ip maddr
1: lo
inet 224.0.0.1
inet6 ff02::1
2: eth0
link 33:33:ff:40:c6:ad
link 01:00:5e:00:00:01
link 33:33:00:00:00:01
inet 224.0.0.1
inet6 ff02::1:ff40:c6ad
inet6 ff02::1
3: eth1
link 01:00:5e:25:36:47
link 01:00:5e:25:36:3e
link 01:00:5e:25:36:3d
link 33:33:ff:40:c6:af
link 01:00:5e:00:00:01
link 33:33:00:00:00:01
inet 233.37.54.71 <------- McastGroup.
inet 224.0.0.1
inet6 ff02::1:ff40:c6af
inet6 ff02::1 So far so good, I can see that I'm receiving data for this multicast group. b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes
09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268
09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
... I can also confirm that the interface is receiving mcast packets. b $ ethtool -S eth1 | grep mcast_pack
rx_mcast_packets: 103998
tx_mcast_packets: 33 Now here's the problem. When I try to capture the traffic using a simple Ruby UDP server I receive zero data! Here's a simple server that reads data sent on port 15572 and prints
the first two characters. This works on the two 8.04.4 Ubuntu Servers, but not the 10.04 server. require 'socket'
s = UDPSocket.new
s.bind("", 15572)
5.times do
text, sender = s.recvfrom(2)
puts text
end If I send a UDP packet crafted in ruby to localhost, the server receives it and prints out the first two characters. So I know that the server above is working correctly. irb(main):001:0> require 'socket'
=> true
irb(main):002:0> s = UDPSocket.new
=> #<UDPSocket:0x7f3ccd6615f0>
irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572) When I check the protocol statistics I see that InMcastPkts is not increasing, while
the other 8.04 servers on the same network received a few thousand packets in 10 seconds. b $ netstat -sgu ; sleep 10 ; netstat -sgu
IcmpMsg:
InType3: 11
OutType3: 11
Udp:
446 packets received
4 packets to unknown port received.
0 packet receive errors
461 packets sent
UdpLite:
IpExt:
InMcastPkts: 4654 <--------- Same as below
OutMcastPkts: 3426
InBcastPkts: 9854
InOctets: -1691733021
OutOctets: 51187936
InMcastOctets: 145207
OutMcastOctets: 109680
InBcastOctets: 1246341
IcmpMsg:
InType3: 11
OutType3: 11
Udp:
446 packets received
4 packets to unknown port received.
0 packet receive errors
461 packets sent
UdpLite:
IpExt:
InMcastPkts: 4656 <-------------- Same as above
OutMcastPkts: 3427
InBcastPkts: 9854
InOctets: -1690886265
OutOctets: 51188788
InMcastOctets: 145267
OutMcastOctets: 109712
InBcastOctets: 1246341 If I try forcing the interface into promisc mode nothing changes. At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking? b $ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server
CONFIG_IP_MULTICAST=y Any thoughts on where to go from here? | In our instance, our problem was solved by sysctl parameters, one different from Maciej. Please note that I do not speak for the OP (buecking), I came on this post due to the problem being related by the basic detail (no multicast traffic in userland). We have an application that reads data sent to four multicast addresses, and a unique port per multicast address, from an appliance that is (usually) connected directly to an interface on the receiving server. We were attempting to deploy this software on a customer site when it mysteriously failed with no known reason. Attempts at debugging this software resulted in inspecting every system call, ultimately they all told us the same thing: Our software asks for data, and the OS never provides any. The multicast packet counter incremented, tcpdump showed the traffic reaching the box/specific interface, yet we couldn't do anything with it. SELinux was disabled, iptables was running but had no rules in any of the tables. Stumped, we were. In randomly poking around, we started thinking about the kernel parameters that sysctl handles, but none of the documented features was either particularly relevant, or if they had to do with multicast traffic, they were enabled. Oh, and ifconfig did list "MULTICAST" in the feature line (up, broadcast, running, multicast). Out of curiosity we looked at /etc/sysctl.conf . 'lo and behold, this customer's base image had a couple of extra lines added to it at the bottom. In our case, the customer had set net.ipv4.all.rp_filter = 1 . rp_filter is the Route Path filter, which (as I understand it) rejects all traffic that could not have possibly reached this box. Network subnet hopping, the thought being that the source IP is being spoofed. Well, this server was on a 192.168.1/24 subnet and the appliance's source IP address for the multicast traffic was somewhere in the 10.* network. Thus, the filter was preventing the server from doing anything meaningful with the traffic. A couple of tweaks approved by the customer; net.ipv4.eth0.rp_filter = 1 and net.ipv4.eth1.rp_filter = 0 and we were running happily. | {
"source": [
"https://serverfault.com/questions/163244",
"https://serverfault.com",
"https://serverfault.com/users/8339/"
]
} |
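A minimal shell sketch of the reverse-path-filter check and fix described in the answer above; the interface name eth0 is an assumption, and the keys are spelled with the full net.ipv4.conf prefix that sysctl uses on Linux.
# Show the current reverse path filter settings (1 = strict, 0 = off)
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter
# Relax the filter on the interface that receives the multicast traffic
sysctl -w net.ipv4.conf.eth0.rp_filter=0
# Persist the change across reboots
echo "net.ipv4.conf.eth0.rp_filter = 0" >> /etc/sysctl.conf
sysctl -p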
163,371 | I'm passing a variable to a script on the command line. What is the character limit of a command? eg: $ MyScript reallyreallyreally...reallyreallyreallylongoption Thanks. | The shell/OS imposed limit is usually one or two hundred thousand characters. getconf ARG_MAX will give you the maximum input limit for a command. On the Debian system I currently have a terminal open on this returns 131072 which is 128*1024 . The limit is reduced by your environment variables and if my memory serves me correctly these are passed in the same structure by the shell, though that will only take off a few hundred characters in most cases. To find an approximation of this value run env | wc -c - this suggests 325 characters at the current time on this login on this machine. Scripts are likely to permit this full length, but it is not unlikely that other utilities will impose their own limits either intentionally or through design issues. There may also be artificial limits to how long an individual argument on a long command line can be, and/or how long a path to a file can be. | {
"source": [
"https://serverfault.com/questions/163371",
"https://serverfault.com",
"https://serverfault.com/users/49137/"
]
} |
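A small follow-on sketch combining the checks mentioned in the answer above to estimate how much room is left for arguments on a particular system (the exact numbers will differ per machine):
# Upper bound for the combined size of arguments plus environment
getconf ARG_MAX
# Approximate space already taken by the environment
env | wc -c
# Rough estimate of the bytes remaining for command-line arguments
echo $(( $(getconf ARG_MAX) - $(env | wc -c) ))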
163,487 | I know that certain processors are Big Endian and others are Little Endian. But is there a command, bash script, python script or series of commands that can be used at the command line to determine if a system is Big Endian or Little Endian? Something like: if <some code> then
echo Big Endian
else
echo Little Endian
fi Or is it more simple to just determine what processor the system is using and go with that to determine its Endianess? | On a Big Endian-System (Solaris on SPARC) $ echo -n I | od -to2 | head -n1 | cut -f2 -d" " | cut -c6 0 On a little endian system (Linux on x86) $ echo -n I | od -to2 | head -n1 | cut -f2 -d" " | cut -c6 1 The solution above is clever and works great for Linux *86 and Solaris Sparc. I needed a shell-only (no Perl) solution that also worked on AIX/Power and HPUX/Itanium. Unfortunately the last two don't play nice: AIX reports "6" and HPUX gives an empty line. Using your solution, I was able to craft something that worked on all these Unix systems: $ echo I | tr -d [:space:] | od -to2 | head -n1 | awk '{print $2}' | cut -c6 Regarding the Python solution someone posted, it does not work in Jython because the JVM treats everything as Big. If anyone can get it to work in Jython, please post! Also, I found this, which explains the endianness of various platforms. Some hardware can operate in either mode depending on what the O/S selects: http://labs.hoffmanlabs.com/node/544 If you're going to use awk this line can be simplified to: echo -n I | od -to2 | awk '{ print substr($2,6,1); exit}' For small Linux boxes that don't have 'od' (say OpenWrt) then try 'hexdump': echo -n I | hexdump -o | awk '{ print substr($2,6,1); exit}' | {
"source": [
"https://serverfault.com/questions/163487",
"https://serverfault.com",
"https://serverfault.com/users/21307/"
]
} |
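Where an interpreter happens to be installed, the byte order can also be read directly instead of parsing od output; a one-liner assuming Python is available:
# Prints "little" or "big" depending on the host CPU
python -c 'import sys; print(sys.byteorder)'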
163,511 | What is the mandatory information a HTTP Request Header must contain ? | GET / HTTP/1.0 is a legal HTTP request. If there's no Host header field, you may not get the results you were hoping for if the destination server is a virtual host that doesn't have its own IP address to distinguish itself from other virtual hosts. HTTP 1.1 requires the Host field. | {
"source": [
"https://serverfault.com/questions/163511",
"https://serverfault.com",
"https://serverfault.com/users/49173/"
]
} |
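A short sketch of the HTTP/1.1 case from the answer above, sending a hand-built request with netcat; example.com is a placeholder, and the final blank line (\r\n\r\n) is what terminates the header block:
# Host: is mandatory in HTTP/1.1; without it many virtual hosts cannot route the request
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80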
163,514 | I want to make a single application avail. via Terminal Services RemoteApp on a Server running Windows SBS 2008. Do I need to purchase any additional licenses or does SBS 2008 come with some? | GET / HTTP/1.0 is a legal HTTP request. If there's no Host header field, you may not get the results you were hoping for if the destination server is a virtual host that doesn't have its own IP address to distinguish itself from other virtual hosts. HTTP 1.1 requires the Host field. | {
"source": [
"https://serverfault.com/questions/163514",
"https://serverfault.com",
"https://serverfault.com/users/21248/"
]
} |
163,542 | I'm working on a network with ~10 kubuntu desktops (and numerous servers and IP phones) and am trying to get dnsmasq to specify another dns server as a failover. I tried using server=192.168.0.90 but that just added the single dhcp/dns server to /etc/resolv.conf on my test machine (dynamic IP and freshly rebooted with no lease). | Answered my own question, thanks to rfc2132 dhcp-option=6,192.168.0.90,192.168.0.98 However, RFC2132 specifies option 5 as a list of name servers and option 6 as a list of domain name servers, and I'm not sure what the difference is. Either way, option 6 put them correctly as nameserver 192.168.0.90
nameserver 192.168.0.98 in /etc/resolv.conf | {
"source": [
"https://serverfault.com/questions/163542",
"https://serverfault.com",
"https://serverfault.com/users/28104/"
]
} |
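The same setting from the answer above, written as a commented /etc/dnsmasq.conf fragment (the addresses are the ones used in the question):
# DHCP option 6: DNS servers handed to clients, tried in the order listed
dhcp-option=6,192.168.0.90,192.168.0.98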
163,543 | I'm installing Ubuntu Server in a machine that has certain RAID controller not supported by the default kernel. A patch for the kernel has to be downloaded and compiled as a module for this to work. As this is going to be the booting volume, the module has to be already loaded on install boot for Ubuntu to detect my RAID volume. I've been thinking that maybe burning a custom install CD or maybe by network install and preseeding a different kernel than the stock one would be OK, but I'm not really sure of the safest/easyest way of doing it. It's:
Ubuntu 10.04
HighPoint RocketRAID 2310
3 SATA drives in a RAID5 | Answered my own question, thanks to rfc2132 dhcp-option=6,192.168.0.90,192.168.0.98 However, RFC2132 specifies option 5 as a list of name servers and option 6 as a list of domain name servers, and I'm not sure what the difference is. Either way, option 6 put them correctly as nameserver 192.168.0.90
nameserver 192.168.0.98 in /etc/resolv.conf | {
"source": [
"https://serverfault.com/questions/163543",
"https://serverfault.com",
"https://serverfault.com/users/2061/"
]
} |
163,563 | I am looking for a way to diagnose issues, such as swap death, where a balooning memory process fills up swap and kills the whole machine (such as apache). I'm already using cacti and I can set up nagios (though would rather not) or munin but as far as I can tell they can't record individual program usage - just overall status. I know I can roll a script that >> to some file every 30s but I'd like to see if an existing mature solution already exists. Again, ideally it would: record processes' memory usage every N seconds record processes' CPU usage every N seconds support charts and history support averages - like mysqld has used 43% CPU in the last day and averaged 400MB memory be free and open source Process names are not and should not be known in advance - the idea is to just let it monitor and then have a look at the top offenders. My system is Linux (OpenSUSE). | It you want just the top offenders, consider running top with a relatively long interval (60 seconds plus) in batch mode. You may need more than one top running to capture the top offenders on multiple resources. I have configured systems to run top for a few cycles when a resource was being over used. Consider running sar in batch mode to capture resource utilization. I realize this is server based, but it useful to determine times when problems are occurring. Run munin and enable notifications. This may give you a chance to get in and watch the server going down. You may be able to correct the problem before it goes down. For memory leaks, a steady increase in swap usage indicates a problem. I once watched a server slowly die over a period of days. The problem service was a program monitoring other processes for memory leaks. The system admin kept insisting the increasing swap usage was not a problem, right up until the server stopped responding. You may find that cfengine 's anomaly detection can be used to trigger a script to capture the system state when things go wrong. You may want a lot of information besides just the processes using the most resources. For a sudden influx of usage you may want a list of network connections (by address not name). Memory usage is also useful. | {
"source": [
"https://serverfault.com/questions/163563",
"https://serverfault.com",
"https://serverfault.com/users/2491/"
]
} |
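A hedged sketch of the batch-mode capture suggested in the answer above; the interval, iteration count, and log paths are arbitrary choices, and sar assumes the sysstat package is installed:
# Sample all processes every 60 seconds, 60 times (one hour), appending to a log
top -b -d 60 -n 60 >> /var/log/top-capture.log
# Record system-wide utilization in the background for later inspection with sar -f
sar -o /var/log/sar-capture 60 60 &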
164,305 | In bash, I can do EDITOR=vim crontab -e . Can I get similar effect in Fish shell? | begin; set -lx EDITOR vim; crontab -e; end | {
"source": [
"https://serverfault.com/questions/164305",
"https://serverfault.com",
"https://serverfault.com/users/49417/"
]
} |
164,350 | We are moving from a 1 webserver setup to a two webserver setup and I need to start sharing PHP sessions between the two load balanced machines. We already have memcached installed ( and started ) and so I was pleasantly surprized that I could accomplish sharing sessions between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path ): I replaced: session.save_handler = files with: session.save_handler = memcache Then on the master webserver I set the session.save_path to point to localhost: session.save_path="tcp://localhost:11211" and on the slave webserver I set the session.save_path to point to the master: session.save_path="tcp://192.168.0.1:11211" Job done, I tested it and it works. But... Obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes - I'm a little concerned by this but I am a bit more worried about the network traffic between the two webservers (especially as we scale up) because whenever someone is load balanced to the slave webserver their sessions will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example: Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211" Slave: session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211" Would this successfully share sessions across the servers AND help performance? i.e save network traffic 50% of the time. Or is this technique only for failovers (e.g. when one memcache daemon is unreachable)? Note : I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peak inside each memcache daemon in a pool, return a session if it finds one and only create a new session if it doesn't find one in all the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume : no sticky-sessions, round-robin load balancing, LAMP servers. | Disclaimer: You'd be mad to listen to me without doing a tonne of testing AND getting a 2nd opinion from someone qualified - I'm new to this game . The efficiency improvement idea proposed in this question won't work. The main mistake that I made was to think that the order that the memcached stores are defined in the pool dictates some kind of priority. This is not the case . When you define a pool of memached daemons (e.g. using session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211" ) you can't know which store will be used. Data is distributed evenly, meaning that a item might be stored in the first, or it could be the last (or it could be both if the memcache client is configured to replicate - note it is the client that handles replication, the memcached server does not do it itself). Either way will mean that using localhost as the first in the pool won't improve performance - there is a 50% chance of hitting either store. Having done a little bit of testing and research I have concluded that you CAN share sessions across servers using memcache BUT you probably don't want to - it doesn't seem to be popular because it doesn't scale as well as using a shared database at it is not as robust. I'd appreciate feedback on this so I can learn more... 
Ignore the following unless you have a PHP app: Tip 1: If you want to share sessions across 2 servers using memcache: Ensure you answered Yes to " Enable memcache session handler support? " when you installed the PHP memcache client and add the following in your /etc/php.d/memcache.ini file: session.save_handler = memcache On webserver 1 (IP: 192.168.0.1): session.save_path="tcp://192.168.0.1:11211" On webserver 2 (IP: 192.168.0.2): session.save_path="tcp://192.168.0.1:11211" Tip 2: If you want to share sessions across 2 servers using memcache AND have failover support: Add the following to your /etc/php.d/memcache.ini file: memcache.hash_strategy = consistent
memcache.allow_failover = 1 On webserver 1 (IP: 192.168.0.1): session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211" On webserver 2 (IP: 192.168.0.2): session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211" Notes: This highlights another mistake I made in the original question - I wasn't using an identical session.save_path on all servers. In this case "failover" means that should one memcache daemon fail, the PHP memcache client will start using the other one. i.e. anyone who had their session in the store that failed will be logged out. It is not transparent failover. Tip 3: If you want to share sessions using memcache AND have transparent failover support: Same as tip 2 except you need to add the following to your /etc/php.d/memcache.ini file: memcache.session_redundancy=2 Notes: This makes the PHP memcache client write the sessions to 2 servers. You get redundancy (like RAID-1) so that writes are sent to n mirrors, and failed get's are retried on the mirrors. This will mean that users do not loose their session in the case of one memcache daemon failure. Mirrored writes are done in parallel (using non-blocking-IO) so speed performance shouldn't go down much as the number of mirrors increases. However, network traffic will increase if your memcache mirrors are distributed on different machines. For example, there is no longer a 50% chance of using localhost and avoiding network access. Apparently, the delay in write replication can cause old data to be retrieved instead of a cache miss. The question is whether this matters to your application? How often do you write session data? memcache.session_redundancy is for session redundancy but there is also a memcache.redundancy ini option that can be used by your PHP application code if you want it to have a different level of redundancy. You need a recent version (still in beta at this time) of the PHP memcache client - Version 3.0.3 from pecl worked for me. | {
"source": [
"https://serverfault.com/questions/164350",
"https://serverfault.com",
"https://serverfault.com/users/21415/"
]
} |
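For convenience, the directives from Tip 3 collected into one commented fragment as they could appear in /etc/php.d/memcache.ini on both web servers (addresses as in the answer above):
; store PHP sessions in memcached instead of files
session.save_handler = memcache
; identical pool definition on every web server
session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
; consistent hashing plus failover so a dead daemon does not break lookups
memcache.hash_strategy = consistent
memcache.allow_failover = 1
; write each session to two daemons for transparent failover
memcache.session_redundancy = 2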
164,361 | I am running Fedora Core. I have a user /home/john/public_html/... when a php script creates a file, the permissions get set to apache.apache so I cant exit the file through my FTP client without fist logging in a Root and manually changing the permissions to john. What is the best way to solve this? | Disclaimer: You'd be mad to listen to me without doing a tonne of testing AND getting a 2nd opinion from someone qualified - I'm new to this game . The efficiency improvement idea proposed in this question won't work. The main mistake that I made was to think that the order that the memcached stores are defined in the pool dictates some kind of priority. This is not the case . When you define a pool of memached daemons (e.g. using session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211" ) you can't know which store will be used. Data is distributed evenly, meaning that a item might be stored in the first, or it could be the last (or it could be both if the memcache client is configured to replicate - note it is the client that handles replication, the memcached server does not do it itself). Either way will mean that using localhost as the first in the pool won't improve performance - there is a 50% chance of hitting either store. Having done a little bit of testing and research I have concluded that you CAN share sessions across servers using memcache BUT you probably don't want to - it doesn't seem to be popular because it doesn't scale as well as using a shared database at it is not as robust. I'd appreciate feedback on this so I can learn more... Ignore the following unless you have a PHP app: Tip 1: If you want to share sessions across 2 servers using memcache: Ensure you answered Yes to " Enable memcache session handler support? " when you installed the PHP memcache client and add the following in your /etc/php.d/memcache.ini file: session.save_handler = memcache On webserver 1 (IP: 192.168.0.1): session.save_path="tcp://192.168.0.1:11211" On webserver 2 (IP: 192.168.0.2): session.save_path="tcp://192.168.0.1:11211" Tip 2: If you want to share sessions across 2 servers using memcache AND have failover support: Add the following to your /etc/php.d/memcache.ini file: memcache.hash_strategy = consistent
memcache.allow_failover = 1 On webserver 1 (IP: 192.168.0.1): session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211" On webserver 2 (IP: 192.168.0.2): session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211" Notes: This highlights another mistake I made in the original question - I wasn't using an identical session.save_path on all servers. In this case "failover" means that should one memcache daemon fail, the PHP memcache client will start using the other one. i.e. anyone who had their session in the store that failed will be logged out. It is not transparent failover. Tip 3: If you want to share sessions using memcache AND have transparent failover support: Same as tip 2 except you need to add the following to your /etc/php.d/memcache.ini file: memcache.session_redundancy=2 Notes: This makes the PHP memcache client write the sessions to 2 servers. You get redundancy (like RAID-1) so that writes are sent to n mirrors, and failed get's are retried on the mirrors. This will mean that users do not loose their session in the case of one memcache daemon failure. Mirrored writes are done in parallel (using non-blocking-IO) so speed performance shouldn't go down much as the number of mirrors increases. However, network traffic will increase if your memcache mirrors are distributed on different machines. For example, there is no longer a 50% chance of using localhost and avoiding network access. Apparently, the delay in write replication can cause old data to be retrieved instead of a cache miss. The question is whether this matters to your application? How often do you write session data? memcache.session_redundancy is for session redundancy but there is also a memcache.redundancy ini option that can be used by your PHP application code if you want it to have a different level of redundancy. You need a recent version (still in beta at this time) of the PHP memcache client - Version 3.0.3 from pecl worked for me. | {
"source": [
"https://serverfault.com/questions/164361",
"https://serverfault.com",
"https://serverfault.com/users/34532/"
]
} |
165,070 | How to zero fill a virtual disk's free space on Windows for better compression? I would like a simple open source (or at least free) tool for that. It should probably write a file that is as large as possible, filled with zeros, and erase it afterwards. Only one pass is needed (this is not for security reasons but for compression; we are backing up virtual machines). It should run from inside Windows and not from a boot disk. On Linux I do it like this (as a user): cd
mkdir wipe
sudo sfill -f -l -l -z ./wipe/ Edit 1: I decided to use sdelete from the accepted answer. I had a look at the sdelete's help: C:\WINDOWS\system32>sdelete /?
SDelete - Secure Delete v1.51
Copyright (C) 1999-2005 Mark Russinovich
Sysinternals - www.sysinternals.com
usage: sdelete [-p passes] [-s] [-q] <file or directory>
sdelete [-p passes] [-z|-c] [drive letter]
-c Zero free space (good for virtual disk optimization)
-p passes Specifies number of overwrite passes (default is 1)
-q Don't print errors (Quiet)
-s Recurse subdirectories
-z Clean free space This is an old version. I used the -c switch from the 2nd invocation and this was quite fast (syntax only valid for older versions before V1.6): c:\>sdelete -c c: (OUTDATED!) I have the impression this does what I want. The sdelete tool is easy to use and easy to get. Edit 2: As scottbb pointed out in his answer below, there was a September 2011 change to the tool (version 1.6) The -c and -z options have changed meanings. The correct usage from 1.6 onwards is c:\>sdelete -z c: Edit 3: There is a 2.0 version of sdelete and sdelete64. They appear to be buggy when zeroing. It will appear to be stuck at 100% for extremely long times. Some people have reported 10 - 40 times longer. The older version 1.61 does not have this issue. See https://social.technet.microsoft.com/Forums/en-US/2ffb2539-34ba-4378-aa8a-941d243f117e/sdelete-hangs-at-100?forum=miscutils Edit 4: Now there's the issue of dynamically allocated virtual disc space . If you have a 100GB disk that is not full and uses only 30GB on the host, zero filling should not increase dramatically the size of the disc because that contradicts the purpose of dynamic allocation . There is an answer for Oracle VM VirtualBox https://superuser.com/q/907196/44402 - but on other stacks like VMWare, Xen, XenServer, etc., this needs to be answered separately. | On windows use the sysinternals tool sdelete to zero out all the empty space. The command you want would look like this sdelete -z c: . Usage: sdelete [-p passes] [-s] [-q] ...
sdelete [-p passes] [-z|-c] [drive letter] ...
-a Remove Read-Only attribute.
-c Clean free space.
-p passes Specifies number of overwrite passes (default is 1).
-q Don't print errors (Quiet).
-s or -r Recurse subdirectories.
-z Zero free space (good for virtual disk optimization). For Linux I suggest you use zerofree . | {
"source": [
"https://serverfault.com/questions/165070",
"https://serverfault.com",
"https://serverfault.com/users/45819/"
]
} |
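For completeness, a hedged sketch of the brute-force equivalent on a Linux guest that has neither sfill nor zerofree: write a file of zeros until the disk fills, flush, then delete it (run as root, and only when briefly filling the disk is acceptable):
# Fill the free space with zeros; the eventual "No space left on device" error is expected
dd if=/dev/zero of=/zero.fill bs=1M || true
sync
rm -f /zero.fill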
165,080 | I have a MS SQL 2008 database backup maintenance plan that does a Full backup and then two differential backups throughout the day. If a new database has been created the differential chokes because there is no full to base it off of (at least, this is my guess). I see that I can manually create a database backup by going to the database and choosing it as a task, but this doesn't auto name it like the plan does and I don't want to mess up the flow of this. Manually kicking of the Full plan again seems like it is a lot of overhead for a couple of small databases. What is the typical way to handle this situation? | On windows use the sysinternals tool sdelete to zero out all the empty space. The command you want would look like this sdelete -z c: . Usage: sdelete [-p passes] [-s] [-q] ...
sdelete [-p passes] [-z|-c] [drive letter] ...
-a Remove Read-Only attribute.
-c Clean free space.
-p passes Specifies number of overwrite passes (default is 1).
-q Don't print errors (Quiet).
-s or -r Recurse subdirectories.
-z Zero free space (good for virtual disk optimization). For Linux I suggest you use zerofree . | {
"source": [
"https://serverfault.com/questions/165080",
"https://serverfault.com",
"https://serverfault.com/users/2561/"
]
} |
166,383 | I have a Debian Lenny server, and I would like the www-data user to have /usr/local/zend/bin in its PATH, so it can execute a script in cron as www-data . How do I add /usr/local/zend/bin to PATH, so www-data can execuate files in /usr/local/zend/bin ? | The first place where PATH is set is /etc/login.defs . There's a setting for root and a setting for everyone else. Another place where you can define environment variables is /etc/environment . These settings will apply to everyone (you can't write arbitrary shell code there). A third place where you can define environment variables is /etc/profile . There you can write arbitrary shell code. If you want a user-specific setting, there is the corresponding per-user file ~www-data/.profile . But this will only apply to console interactive logins; in particular it won't apply to cron jobs unless they explicitly source /etc/profile . If you only need that PATH setting in a user crontab, you can write it at the beginning of the crontab. Note that you need the full list ( PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/zend/bin ), you can't use a variable substitution ( PATH=$PATH:/usr/local/zend/bin won't work there). | {
"source": [
"https://serverfault.com/questions/166383",
"https://serverfault.com",
"https://serverfault.com/users/34187/"
]
} |
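A short sketch of the crontab approach from the answer above; the script name and schedule are placeholders, and the PATH line spells out the full list as the answer requires:
# edit with: crontab -u www-data -e
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/zend/bin
# run the Zend-based script nightly at 02:30 (script path is a placeholder)
30 2 * * * /usr/local/zend/bin/example-script.sh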
166,874 | I'm trying to get curl, using a script, to download a file and save it to a certain directory. I got it to download but I don't know how to get it to a certain directory from a script. It usually just saves it to the current working directory. | curl http://google.com > /path/to/dir/index.html
"source": [
"https://serverfault.com/questions/166874",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
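Equivalent forms using curl's own output options rather than a shell redirect (the paths and URLs are placeholders):
# -o writes the response body to the given file instead of stdout
curl -o /path/to/dir/index.html http://google.com
# -O keeps the remote file name, so change to the target directory first
cd /path/to/dir && curl -O http://example.com/file.tar.gz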
167,371 | I have a .net web application which needs to obtain the groups a user is a member of in Active Directory. Todo this I am using the memberOf attribute on the users records. I need to know the permissions required to read this attribute on all users records. Currently I am getting inconsistent results when trying to read this attribute. For example I have a user group of 30 users in the same OU path. Using my own credentials to query AD - I can read the memberOf attribute for some users but not others. I know all the users have a memberOf attribute set as I have checked when logged on with a domain admin account. | On your domain object, you need to assign the querying user the "Read MemberOf" right to User objects. Open AD U&C browse to your domain object Right click and go to properties: (source: sysadmin1138.net ) Security tab, click Advanced Click Add Enter the user name to add Click the Properties tab In 'Apply Onto' change the type to User Click the "Read MemberOf" checkbox: (source: sysadmin1138.net ) OK out of there That should set it up so that the specified account can read the group memberships of all User accounts in the domain. | {
"source": [
"https://serverfault.com/questions/167371",
"https://serverfault.com",
"https://serverfault.com/users/47401/"
]
} |
167,416 | I'm trying to get ssh to automatically change to a particular directory when I log in. I tried to get that behaviour working using the following directives in ~/.ssh/config : Host example.net
LocalCommand "cd web" but whenever I log in, I see the following: /bin/bash: cd web: No such file or directory although though there is definitely a web folder in my home directory. Even using an absolute path gives the same message. To be clear, if I type cd web after logging in I get to the right folder. What am I missing here? EDIT: Different combinations of quotes/absolute paths give different error messages: LocalCommand "cd web"
/bin/bash: cd web: No such file or directory
LocalCommand cd web
/bin/bash: line 0: cd: web: No such file or directory
LocalCommand cd /home/gareth/web
/bin/bash: line 0: cd: /home/gareth/web: Input/output error This makes me think that the quotes shouldn't be there, and that there's another error happening. | cd is a shell builtin. LocalCommand is executed as: /bin/sh -c <localcommand> What you're looking to do can't really be accomplished via SSH; you need to modify the shell in some way, e.g. via bashrc/bash_profile . <Editing almost a decade later...> LocalCommand isn't what you want, anyway. That's run on your machine. You want RemoteCommand . Something like this worked for me: Host example.net
RemoteCommand cd / && exec bash --login
RequestTTY yes | {
"source": [
"https://serverfault.com/questions/167416",
"https://serverfault.com",
"https://serverfault.com/users/45754/"
]
} |
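A related one-off form that needs no config file change: request a TTY and run the directory change plus shell directly on the ssh command line (the path matches the one in the question):
# Log in, change to the web directory, and hand over to an interactive shell
ssh -t example.net 'cd /home/gareth/web && exec bash --login'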
167,425 | I'm trying to think of a way to enable us to use our SBS 2008 VM's Fax Server within Hyper-V. I had immediately considered a simple USB Fax Modem but of course, for some reason, Hyper-V doesnt support USB devices. I've read about USB over Ethernet devices like AnywhereUSB and wondered if anyone could recommend either a product of this type, or an alternative solution to allow us to receive faxes within the SBS VM. | cd is a shell builtin. LocalCommand is executed as: /bin/sh -c <localcommand> What you're looking to do can't really be accomplished via SSH; you need to modify the shell in some way, e.g. via bashrc/bash_profile . <Editing almost a decade later...> LocalCommand isn't what you want, anyway. That's run on your machine. You want RemoteCommand . Something like this worked for me: Host example.net
RemoteCommand cd / && exec bash --login
RequestTTY yes | {
"source": [
"https://serverfault.com/questions/167425",
"https://serverfault.com",
"https://serverfault.com/users/47659/"
]
} |
167,448 | As the title states, I have a Linux box. As far as I can tell I can use hosts.allow / hosts.deny or iptables to secure it. What's the difference? Is there another mechanism that can be used? | IPTables works at the kernel level. In general this means it has no knowledge of applications or processes. It can only filter based on what it gets from the various packet headers for the most part. The hosts.allow/hosts.deny mechanism, however, operates on the application/process level. You can create rules for various processes or daemons running on the system. So for example IPTables can filter on port 22. SSH can be configured to use this port and generally is, but it can also be configured to be on a different port. IPTables does not know which port it is on, it only knows about the port in the TCP header. The hosts.allow files however can be configured for certain daemons such as the openssh daemon. If you have to choose, I would generally opt for at a minimum IPTables. I view the hosts.allow as a nice bonus. Even though the daemon level seems easier, IPTables will block the packet before it really even gets very far. With security, the sooner you can block something the better. However, I am sure there are situations that change this choice. | {
"source": [
"https://serverfault.com/questions/167448",
"https://serverfault.com",
"https://serverfault.com/users/47012/"
]
} |
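Two minimal examples of the same intent (only the local LAN may reach SSH) expressed at the two levels discussed in the answer above; the subnet is an assumption:
# Packet level: iptables rules evaluated in the kernel
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
# Application level: TCP wrappers consulted by the sshd daemon
# /etc/hosts.allow
sshd: 192.168.1.
# /etc/hosts.deny
sshd: ALL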
167,851 | The IIS worker processes are taking lot of memory on our servers. I want to limit the memory each application can use. I am confused whether I should set a limit on Virtual Memory Limit, or Private Memory Limit. Each application in our IIS is on its own application pool. If I set private memory limit to 500MB and virtual memory limit to 3GB. When does the application pool recycle? Does it recycles after reaching 500MB or after reaching 3GB. | IIS will respect both of those limits. If you set a 500MB private byte limit, as soon as a worker process attempts to commit 501MB, IIS will spin up a new worker process and kill the old one. If you set a 3GB virtual memory limit, as soon as a worker process attempts to reserve 3.001GB, IIS will spin up a new worker process and kill the old one. If you are on a 64bit platform, you should be aware that ASP.NET applications aggressively reserve virtual memory. As an example, I have an app on a farm that uses only 88MB of private bytes, but its sitting at 5.4GB Virtual Size right now. I believe the virtual memory reservation is a function of physical RAM on the server. It's also important to understand that on a 64bit platform, reserving large portions of virtual memory has zero performance impact. Basically, if you are having memory consumption issues on an IIS server, the setting you want to limit is Private Memory/Bytes, this is what corresponds to actual memory usage. | {
"source": [
"https://serverfault.com/questions/167851",
"https://serverfault.com",
"https://serverfault.com/users/31071/"
]
} |
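A hedged sketch of setting those recycling thresholds from the command line with appcmd on IIS 7 or later; the pool name is a placeholder, the values are in kilobytes, and the attribute names are given from recollection rather than verified against a particular IIS version:
rem Private bytes limit (KB) - this is the value that tracks actual memory usage
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:512000
rem Virtual memory limit (KB)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.memory:3145728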
167,864 | EDIT: QUESTION HAS BEEN ANSWERED SEE THE BOTTOM OF MY POST FOR MY FINAL SCRIPT... Man, I'm a powershell noob... I have this link: http://gallery.technet.microsoft.com/ScriptCenter/en-us/da3fee00-e79d-482b-91f2-7c729c38068f I'd like to use that to take a list of servers, run it against that list, and then get a report of each server with disk space similar to: SERVER1 C: Total=120GB Free Space=60GB
D: Total=400GB Free Space=200GB etc. The problem is... I don't know what to do with the script on that link to get it to work. Copy/paste it into notepad and save as a .ps1?? Doesn't seem to be that way in the description. Will it work if I run it from my Win7 box with PS, or does every server have to have Powershell installed for a remote PS script to work? Is there a way to setup the script to email me? That way I can set it as a weekly task or similar. Thank you! ======== Final script function Get-FreeDisk
{
[CmdletBinding()]
param(
[Parameter(Position=0, ValueFromPipeline=$true)]
[string[]]$Computername="localhost",
[int]
[ValidateRange(0,100)]
$MinPercFree = 100,
[Management.Automation.PSCredential]$Credential = ([Management.Automation.PSCredential]::Empty)
)
begin{
[String[]]$computers = @()
}
process{
$computers += $Computername
}
end{
Get-WmiObject -computername $computers -Credential $Credential `
-Query "select __SERVER, Caption,Label,Capacity,FreeSpace from Win32_Volume where DriveType != 5 and Capacity > 0" | `
Add-Member -Name Free -MemberType ScriptProperty -PassThru -Value {($this.FreeSpace/10000) / ($this.Capacity/1000000)} | `
Where { $_.Free -lt $MinPercFree } | `
sort __SERVER,Caption | `
Format-Table @{Label="Computer"; Expression={$_.__SERVER}}, Caption, Label,`
@{Label="Size/MB"; FormatString="{0,7:N0}"; Expression={$_.Capacity / 1mb}},`
@{Label="FreeSpace/MB"; FormatString="{0,7:N0}"; Expression={$_.Freespace / 1mb}}, `
@{Label="Free"; FormatString="{0,3:N0}%"; Expression={$_.Free}} -AutoSize
}
}
Get-Content .\servers.txt | Get-FreeDisk | Format-Table -AutoSize | Out-File diskusage.txt
Send-MailMessage -To [email protected] -Subject "Server Disk Report" -From [email protected] -SmtpServer mail.domain.com -Attachments "diskusage.txt" | IIS will respect both of those limits. If you set a 500MB private byte limit, as soon as a worker process attempts to commit 501MB, IIS will spin up a new worker process and kill the old one. If you set a 3GB virtual memory limit, as soon as a worker process attempts to reserve 3.001GB, IIS will spin up a new worker process and kill the old one. If you are on a 64bit platform, you should be aware that ASP.NET applications aggressively reserve virtual memory. As an example, I have an app on a farm that uses only 88MB of private bytes, but its sitting at 5.4GB Virtual Size right now. I believe the virtual memory reservation is a function of physical RAM on the server. It's also important to understand that on a 64bit platform, reserving large portions of virtual memory has zero performance impact. Basically, if you are having memory consumption issues on an IIS server, the setting you want to limit is Private Memory/Bytes, this is what corresponds to actual memory usage. | {
"source": [
"https://serverfault.com/questions/167864",
"https://serverfault.com",
"https://serverfault.com/users/7861/"
]
} |
168,247 | Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table... CREATE TABLE `ref` (
`id` INTEGER(13) AUTO_INCREMENT DEFAULT NOT NULL,
`rel_id` INTEGER(13) NOT NULL,
`p1` INTEGER(13) NOT NULL,
`p2` INTEGER(13) DEFAULT NULL,
`p3` INTEGER(13) DEFAULT NULL,
`s` INTEGER(13) NOT NULL,
`p4` INTEGER(13) DEFAULT NULL,
`p5` INTEGER(13) DEFAULT NULL,
`p6` INTEGER(13) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY (`s`),
KEY (`rel_id`),
KEY (`p3`),
KEY (`p4`)
); Here's the queries... SELECT id, s FROM ref WHERE red_id="$rel_id" AND p3="$p3" AND p4="$p4"
SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"
INSERT INTO rel (rel_id, p1, p2, p3, s, p4, p5, p6)
VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6") Here's some notes... The SELECT's will be done much more frequently than the INSERT. However, occasionally I want to add a few hundred records at a time. Load-wise, there will be nothing for hours then maybe a few thousand queries all at once. Don't think I can normalize any more (need the p values in a combination) The database as a whole is very relational. This will be the largest table by far (next largest is about 900k) UPDATE (08/11/2010) Interestingly, I've been given a second option... Instead of 192 trillion I could store 2.6*10^16 (15 zeros, meaning 26 Quadrillion)... But in this second option I would only need to store one bigint(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value. Occasionally adding records, never deleting them. So that makes me think there must be a better solution then mysql for simply storing numbers... Given this second option, should I take it or stick with the first... [edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds [/edit] | pQd's estimate of 7PB seems reasonable, and that's a lot of data for a RDBMS. I'm not sure I've ever heard of someone doing 7PB with any shared disk system, let alone MySQL.
Querying this volume of data with any shared disk system is going to be unusably slow. The fastest SAN hardware maxes out at 20GB/sec even when tuned for large streaming queries. If you can afford SAN hardware of this spec you can affort to use something better suited to the job than MySQL. In fact, I'm struggling to conceive of a scenario where you could have a budget for a disk subsystem of this spec but not for a better DBMS platform. Even using 600GB disks (the largest 15K 'enterprise' drive currently on the market) you're up for something like 12,000 physical disk drives to store 7PB. SATA disks would be cheaper (and with 2TB disks you would need around 1/3 of the number), but quite a bit slower. A SAN of this spec from a major vendor like EMC or Hitachi would run to many millions of dollars. Last time I worked with SAN equipment from a major vendor, the transfer cost of space on an IBM DS8000 was over £10k/TB, not including any capital allowance for the controllers. You really need a shared nothing system like Teradata or Netezza for this much data. Sharding a MySQL database might work but I'd recommend a purpose built VLDB platform. A shared nothing system also lets you use much cheaper direct-attach disk on the nodes - take a look at Sun's X4550 (thumper) platform for one possibility. You also need to think of your performance requirements. What's an acceptable run time for a query? How often will you query your dataset? Can the majority of the queries be resolved using an index (i.e. are they going to look at a small fraction - say: less than 1% - of the data), or do they need to do a full table scan? How quickly is data going to be loaded into the database? Do your queries need up-to-date data or could you live with a periodically refreshed reporting table? In short, the strongest argument against MySQL is that you would be doing backflips to get decent query performance over 7PB of data, if it is possible at all. This volume of data really puts you into shared-nothing territory to make something that will query it reasonably quickly, and you will probably need a platform that was designed for shared-nothing operation from the outset. The disks alone are going to dwarf the cost of any reasonable DBMS platform. Note: If you do split your operational and reporting databases you don't necessarily have to use the same DBMS platform for both. Getting fast inserts and sub-second reports from the same 7PB table is going to be a technical challenge at the least. Given from your comments that you can live with some latency in reporting, you might consider separate capture and reporting systems, and you may not need to keep all 7PB of data in your operational capture system. Consider an operational platform such as Oracle (MySQL may do this with InnoDB) for data capture (again, the cost of the disks alone will dwarf the cost of the DBMS unless you have a lot of users) and a VLDB platform like Teradata, Sybase IQ, RedBrick, Netezza (note: proprietary hardware) or Greenplum for reporting | {
"source": [
"https://serverfault.com/questions/168247",
"https://serverfault.com",
"https://serverfault.com/users/50568/"
]
} |
168,260 | I have configured Sendmail to work with a smarthost by adding the following line to the configuration define('SMART_HOST', 'smtp.ISP.TLD')dnl and after that I issued make -C /etc/mail The mail log shows Aug 8 17:51:23 mailserver sendmail[10677]: o78FpM8q010677: from=XXXXXXXX, size=64, class=0, nrcpts=1, msgid=<[email protected]>, relay=root@localhost
Aug 8 17:51:23 mailserver sm-mta[10678]: o78FpNmK010678: from=<[email protected]>, size=360, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=mainframe.domain.tld [127.0.0.1]
Aug 8 17:51:23 mailserver sendmail[10677]: o78FpM8q010677: [email protected], ctladdr=XXXXXXXX (1000/1000), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30064, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o78FpNmK010678 Message accepted for delivery) So it seems it never passes on to the smarthost.
Oh yes, and I'm using Debian 5.0 | pQd's estimate of 7PB seems reasonable, and that's a lot of data for a RDBMS. I'm not sure I've ever heard of someone doing 7PB with any shared disk system, let alone MySQL.
Querying this volume of data with any shared disk system is going to be unusably slow. The fastest SAN hardware maxes out at 20GB/sec even when tuned for large streaming queries. If you can afford SAN hardware of this spec you can affort to use something better suited to the job than MySQL. In fact, I'm struggling to conceive of a scenario where you could have a budget for a disk subsystem of this spec but not for a better DBMS platform. Even using 600GB disks (the largest 15K 'enterprise' drive currently on the market) you're up for something like 12,000 physical disk drives to store 7PB. SATA disks would be cheaper (and with 2TB disks you would need around 1/3 of the number), but quite a bit slower. A SAN of this spec from a major vendor like EMC or Hitachi would run to many millions of dollars. Last time I worked with SAN equipment from a major vendor, the transfer cost of space on an IBM DS8000 was over £10k/TB, not including any capital allowance for the controllers. You really need a shared nothing system like Teradata or Netezza for this much data. Sharding a MySQL database might work but I'd recommend a purpose built VLDB platform. A shared nothing system also lets you use much cheaper direct-attach disk on the nodes - take a look at Sun's X4550 (thumper) platform for one possibility. You also need to think of your performance requirements. What's an acceptable run time for a query? How often will you query your dataset? Can the majority of the queries be resolved using an index (i.e. are they going to look at a small fraction - say: less than 1% - of the data), or do they need to do a full table scan? How quickly is data going to be loaded into the database? Do your queries need up-to-date data or could you live with a periodically refreshed reporting table? In short, the strongest argument against MySQL is that you would be doing backflips to get decent query performance over 7PB of data, if it is possible at all. This volume of data really puts you into shared-nothing territory to make something that will query it reasonably quickly, and you will probably need a platform that was designed for shared-nothing operation from the outset. The disks alone are going to dwarf the cost of any reasonable DBMS platform. Note: If you do split your operational and reporting databases you don't necessarily have to use the same DBMS platform for both. Getting fast inserts and sub-second reports from the same 7PB table is going to be a technical challenge at the least. Given from your comments that you can live with some latency in reporting, you might consider separate capture and reporting systems, and you may not need to keep all 7PB of data in your operational capture system. Consider an operational platform such as Oracle (MySQL may do this with InnoDB) for data capture (again, the cost of the disks alone will dwarf the cost of the DBMS unless you have a lot of users) and a VLDB platform like Teradata, Sybase IQ, RedBrick, Netezza (note: proprietary hardware) or Greenplum for reporting | {
"source": [
"https://serverfault.com/questions/168260",
"https://serverfault.com",
"https://serverfault.com/users/49515/"
]
} |
168,262 | I am trying to fix a lot of errors in our old website regarding a thing with product pages having multiple URLs associated with a single product. I am hoping that I can use regular expressions in with a regular redirect 301 line but so far I cannot seem to get it to work. Here is what I am trying: redirect 301 /products/(.*?)/(.*?)/5702/(.*?).html http://mycompany.com/footwear/wolverine-boots-waterproof-durashocks-work-boots-2582-33390.html Does anyone have any ideas as to what I am doing wrong? | pQd's estimate of 7PB seems reasonable, and that's a lot of data for a RDBMS. I'm not sure I've ever heard of someone doing 7PB with any shared disk system, let alone MySQL.
Querying this volume of data with any shared disk system is going to be unusably slow. The fastest SAN hardware maxes out at 20GB/sec even when tuned for large streaming queries. If you can afford SAN hardware of this spec you can affort to use something better suited to the job than MySQL. In fact, I'm struggling to conceive of a scenario where you could have a budget for a disk subsystem of this spec but not for a better DBMS platform. Even using 600GB disks (the largest 15K 'enterprise' drive currently on the market) you're up for something like 12,000 physical disk drives to store 7PB. SATA disks would be cheaper (and with 2TB disks you would need around 1/3 of the number), but quite a bit slower. A SAN of this spec from a major vendor like EMC or Hitachi would run to many millions of dollars. Last time I worked with SAN equipment from a major vendor, the transfer cost of space on an IBM DS8000 was over £10k/TB, not including any capital allowance for the controllers. You really need a shared nothing system like Teradata or Netezza for this much data. Sharding a MySQL database might work but I'd recommend a purpose built VLDB platform. A shared nothing system also lets you use much cheaper direct-attach disk on the nodes - take a look at Sun's X4550 (thumper) platform for one possibility. You also need to think of your performance requirements. What's an acceptable run time for a query? How often will you query your dataset? Can the majority of the queries be resolved using an index (i.e. are they going to look at a small fraction - say: less than 1% - of the data), or do they need to do a full table scan? How quickly is data going to be loaded into the database? Do your queries need up-to-date data or could you live with a periodically refreshed reporting table? In short, the strongest argument against MySQL is that you would be doing backflips to get decent query performance over 7PB of data, if it is possible at all. This volume of data really puts you into shared-nothing territory to make something that will query it reasonably quickly, and you will probably need a platform that was designed for shared-nothing operation from the outset. The disks alone are going to dwarf the cost of any reasonable DBMS platform. Note: If you do split your operational and reporting databases you don't necessarily have to use the same DBMS platform for both. Getting fast inserts and sub-second reports from the same 7PB table is going to be a technical challenge at the least. Given from your comments that you can live with some latency in reporting, you might consider separate capture and reporting systems, and you may not need to keep all 7PB of data in your operational capture system. Consider an operational platform such as Oracle (MySQL may do this with InnoDB) for data capture (again, the cost of the disks alone will dwarf the cost of the DBMS unless you have a lot of users) and a VLDB platform like Teradata, Sybase IQ, RedBrick, Netezza (note: proprietary hardware) or Greenplum for reporting | {
"source": [
"https://serverfault.com/questions/168262",
"https://serverfault.com",
"https://serverfault.com/users/50572/"
]
} |
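For the rewrite attempted in the question, a hedged sketch using mod_alias's RedirectMatch, which, unlike the plain Redirect directive, treats its first argument as a regular expression; the target URL is the one given in the question and the pattern is only an illustration:
RedirectMatch 301 ^/products/[^/]+/[^/]+/5702/.*\.html$ http://mycompany.com/footwear/wolverine-boots-waterproof-durashocks-work-boots-2582-33390.html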
168,752 | https://stackoverflow.com/questions/510170/the-difference-between-the-local-system-account-and-the-network-service-accou tells: Local System : Completely trusted
account, moreso than the administrator
account. There is nothing on a single
box that this account can not do and it has the right to access the network as the machine (this requires Active
Directory and granting the machine
account permissions to something)" http://msdn.microsoft.com/en-us/library/aa274606(SQL.80).aspx (Preparing to install SQL Server 2000(64 bit) - Creating Windows Service Accounts) tells: "The local system account does not
require a password, does not have
network access rights, and restricts
your SQL Server installation from
interacting with other servers. " http://msdn.microsoft.com/en-us/library/ms684190(v=VS.85).aspx (LocalSystem Account, Build date: 8/5/2010) tells: "The LocalSystem account is a
predefined local account used by the
service control manager. This account is not recognized by the security
subsystem , so you cannot specify its
name in a call to the
LookupAccountName function. It has
extensive privileges on the local
computer, and acts as the computer on
the network. Its token includes the NT
AUTHORITY\SYSTEM and
BUILTIN\Administrators SIDs ; these
accounts have access to most system
objects. The name of the account in
all locales is .\LocalSystem . The
name, LocalSystem or
ComputerName\LocalSystem can also be
used. This account does not have a
password. If you specify the LocalSystem account in a call to the
CreateService function, any password
information you provide is ignored" http://technet.microsoft.com/en-us/library/ms143504.aspx (Setting Up Windows Service Accounts) tells: Local System is a very high-privileged
built-in account. It has extensive
privileges on the local system and
acts as the computer on the network. The actual name of the account is "NT
AUTHORITY\SYSTEM". Well-known security identifiers in Windows operating systems
( http://support.microsoft.com/kb/243330 )
does not have any SYSTEM at all (but only " LOCAL SYSTEM ") My Windows XP Pro SP3 (with MS SQL Server setup, developing machine in workgroup ) does have SYSTEM but not LocalSystem or " Local System ". QUESTIONS: Can somebody clear out this mess? It is possible to burn hours after hours, day after day reading MS docs just to collect more and more contradictions and misunderstandings... 1)
Has LocalSystem rights to access the network or not?
What is the mechanism? 2)
Are the SYSTEM and the LocalSystem (and the "Local System") synonyms? Why they have been introduced? What are the differences between SYSTEM and Local System ---------- Update1: Hi, sysamin1138! Your answers add even more confusion if to compare them to observed reality,
for ex., to the fact that Fresh installed or workgroup Windows XP Pro SP3 has only SYSTEM (but not LocalSystem). Sysadmin138 wrote: "Different security principles for similar problems, which allow a bit of granularity in your security design. One is local only, the other has domain visibility." Does this phrase mean that LocalSystem is added upon joining computer to domain? Should it be understood that SYSTEM is for "local"/internal and workgroup access (computer identification) and LocalSystem for identification of computer in domain? ---------- Update2: same workgroup Windows XP Pro SP3 if not specified otherwise Hi, Sysadmin1138 ,
In your Edit "It's just that in that case SYSTEM
and NT Authority/SYSTEM are equivalent
in ability", how are they (NT Authority/SYSTEM and SYSTEM) related to LocalSystem? Did not you err one of them with LocalSystem? Greg Askew, "Note that if you configure a service
to logon as .\LocalSystem, it will
still appear as logged on as NT
AUTHORITY\SYSTEM in Process Explorer
or System in Task Manager" This is a little be closer. I cannot choose LocalSystem in either NTFS/share premissions, RunAs list.
But in services.msc the service "SQL Server (MS SQL SERVER)" --> double-click or rc --> Properties ---> tab "Logo on as:" has radiobuttom "Local System account". This service then appears in Windows Task Manager as SYSTEM Greg Askew and sysadmin1138 , "NT AUTHORITY" or any "xxx\" does not appear anywhere. All account names are single-labeled. Note it is Windows XP workgroup computer. Though I run an instance of ADAM (Active Directory Application Mode). I guess "NT AUTHORITY" is from that famous "security subsystem" which is absent in workgroup(?) Would "NT Authority" appear if I join computer to a domain? NTFS/share permission list has 2 columns: "Name(RDN)" colum having single-label account names "In Folder" column having either MyCompName (eg, for Administrator, Administrators, ASPNET, SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER, etc.) or blank (e.g., for ANONYMOUS LOGON, Authenticated Users, CREaTOR GROUP, CREAtOR OWNER, NETWORKING SERVICES,SYSTEM , etc.). The former ones have also synonyms for coding as "MyCompName\xxxx" or ".\xxx" (i.e. SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER = = MyCompName\SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER = .\SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER) Can you synchronize your answers in context of http://blogs.msdn.com/aaron_margosis/archive/2009/11/05/machine-sids-and-domain-sids.aspx (Machine SIDs and Domain SIDs)? ---------- Update3: same workgroup Windows XP Pro SP3 if not specified otherwise Hi, Sysadmin1138 , And how to see edit-history? and dereference SID? Breakthrough! cacls shows "NT Authority\SYSTEM"... Though for services it is all vice versa: all services show under "Log On" tab the radiobutton "Local System account" which results in SYSTEM in WIndowsTaskManager and the "This account" radiobutton --> btn "Browse..." that doesn't show the SYSTEM account in the list Sorry for your time, but I couldn't get yet to any LocalSystem in Windows XP! LocalSystem does not show up anywhere in XP! but the problem that all MS docs dwell only on LocalSystem... BTW, http://support.microsoft.com/kb/120929 ("How the System account is used in Windows") tells that SYSTEM is for internal to computer logging of services, and surprise-surprise "APPLIES TO" all Windows from NT Workstation 3.1 to Windows Server 2003 except Windows XP (?!). Is Windows XP some anomaly in Windows line? ---------- Update4: same workgroup Windows XP Pro SP3 if not specified otherwise I couldn't detect any LocalSystem (only "local system" mentioned in text to radiobutton of services LogOn)in Windows XP though all MS docs usually dwell on LocalSystem only but not SYSTEM. I marked this question as answered having understood for me that Windows XP is anomaly/exception in Windows OS-es having some GUI usability bug and I should guess how a scenario would have appeared in other Windows (with the help of answer(s) here) If it is not correct, please be free to prove/share another point of view Update5: same workgroup Windows XP Pro SP3 if not specified otherwise Venceremos! I found "Local System" in Windows XP! It is shown in "Log On As" column in services.msc! | [wiped large answer, summarizing for clarity. See edit-history for sordid tale.] There is a single well-known SID for the local system. It is S-1-5-18, as you found from that KB article. This SID returns multiple names when asked to be dereferenced. The 'cacls' command-line command (XP) shows this as " NT Authority\SYSTEM ". 
The 'icacls' command-line command (Vista/Win7) also shows this as " NT Authority\SYSTEM ". The GUI tools in Windows Explorer show this as " SYSTEM ". When you're configuring a Service to run, this is shown as " Local System ". Three names, one SID. In Workgroups, the SID only has a meaning on the local workstation. When accessing another workstation, the SID is not transferred just the name. The 'Local System' can not access any other systems. In Domains, the Relative ID is what allows the Machine Account access to resources not local to that one machine. This is the ID stored in Active Directory, and is used as a security principle by all domain-connected machines. This ID is not S-1-5-18. It is in the form of S-1-5-21[domainSID]-[random]. Configuring a service as "Local Service" tells the service to log on locally to the workstation as S-1-5-18. It will not have any Domain credentials of any kind. Configuring a service as "Network Service" or "NT Authority\NetworkService" tells the service to log on to the domain as that machine's domain account, and will have access to Domain resources. The Windows XP Service Configurator does not have the ability to select "Network Service" as a login type. The SQL Setup program might. "Network Service" can do everything "Local System" can, as well as access Domain resources. "Network Service" has no meaning in a Workgroup context. In short: NT Authority\System = Local System = SYSTEM = S-1-5-18 If you need your service to access resources not located on that machine, you need to either: Configure it as a Service using a dedicated login user Configure it as a Service using "Network Service" and belong to a domain | {
"source": [
"https://serverfault.com/questions/168752",
"https://serverfault.com",
"https://serverfault.com/users/47886/"
]
} |
168,755 | Do most popular proxy servers cache the response data based on the Uri ? Also, lets exclude HTTP Headers like Cache-Control, etc... and assume they have been set to public, max-age: xxxx s-maxage: yyyyy etc... So .. assuming the proxy server says 'this resource needs to be cached' .. what is the 'key'? the Uri? so if i have www.somedomain.com www.somedomain.com/foo www.somedomain.com/foo? www.somedomain.com/foo?a=1 www.somedomain.com/foo?a=1&b=2 they are all separate cached items? | [wiped large answer, summarizing for clarity. See edit-history for sordid tale.] There is a single well-known SID for the local system. It is S-1-5-18, as you found from that KB article. This SID returns multiple names when asked to be dereferenced. The 'cacls' command-line command (XP) shows this as " NT Authority\SYSTEM ". The 'icacls' command-line command (Vista/Win7) also shows this as " NT Authority\SYSTEM ". The GUI tools in Windows Explorer show this as " SYSTEM ". When you're configuring a Service to run, this is shown as " Local System ". Three names, one SID. In Workgroups, the SID only has a meaning on the local workstation. When accessing another workstation, the SID is not transferred just the name. The 'Local System' can not access any other systems. In Domains, the Relative ID is what allows the Machine Account access to resources not local to that one machine. This is the ID stored in Active Directory, and is used as a security principle by all domain-connected machines. This ID is not S-1-5-18. It is in the form of S-1-5-21[domainSID]-[random]. Configuring a service as "Local Service" tells the service to log on locally to the workstation as S-1-5-18. It will not have any Domain credentials of any kind. Configuring a service as "Network Service" or "NT Authority\NetworkService" tells the service to log on to the domain as that machine's domain account, and will have access to Domain resources. The Windows XP Service Configurator does not have the ability to select "Network Service" as a login type. The SQL Setup program might. "Network Service" can do everything "Local System" can, as well as access Domain resources. "Network Service" has no meaning in a Workgroup context. In short: NT Authority\System = Local System = SYSTEM = S-1-5-18 If you need your service to access resources not located on that machine, you need to either: Configure it as a Service using a dedicated login user Configure it as a Service using "Network Service" and belong to a domain | {
"source": [
"https://serverfault.com/questions/168755",
"https://serverfault.com",
"https://serverfault.com/users/58/"
]
} |
169,181 | I can't figure out how to set up stats for HAProxy. This is my configuration: global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
daemon
#debug
user haproxy
group haproxy
maxconn 4096
stats socket /tmp/haproxy
defaults
log global
mode tcp
option tcplog
option dontlognull
option redispatch
option clitcpka
option srvtcpka
option tcpka
retries 3
maxconn 2000
contimeout 10000
clitimeout 50000
srvtimeout 50000
stats enable
stats hide-version
stats scope .
stats realm Haproxy\ Statistics
stats uri /haproxy?stats
stats auth xxxxx:xxxxx
option contstats
listen rtmp :1935
mode tcp
balance roundrobin
server s1 xxx.xxx.xxx.xxx:1935 check
server s2 xxx.xxx.xxx.xxx:1935 check As far as I understand the manual there should be a stats page available via http://mysite/haproxy?stats . What am I missing? EDIT: I can access the stats with socat but not with a web browser. | That is not correct. There would be a stats page on http://yoursite.com:1935/haproxy?stats To overcome this add another listener- listen stats
bind :1936
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth Username:Password Then go to http://yoursite.com:1936/ and login. Make sure your firewall isn't blocking this request on port 1936 (you can change this to whatever you like) As per user suggestions, for newer versions of HAProxy I've moved the bind into the listen stats block | {
"source": [
"https://serverfault.com/questions/169181",
"https://serverfault.com",
"https://serverfault.com/users/49876/"
]
} |
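As a quick sanity check of the stats listener shown in the answer above, HAProxy serves the same data in CSV form when you append ;csv to the stats URI, which is handy for scripts. A minimal sketch, assuming the listener from the example (port 1936, stats uri /, and the placeholder Username:Password credentials):
# Fetch the HTML stats page just to confirm the listener answers
curl -fsS -u Username:Password http://127.0.0.1:1936/ >/dev/null && echo "stats page reachable"
# CSV export: one line per frontend/backend/server; field positions can vary by HAProxy version
curl -fsS -u Username:Password 'http://127.0.0.1:1936/;csv' | cut -d, -f1,2,18 | head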
169,279 | It seems like a good idea to use Google's public DNS 8.8.8.8 and 8.8.4.4 because it's really fast -- much faster than my own ISP's DNS! -- and probably more reliable, too. That seems like a ridiculously quick win for me, and much easier to remember. Assuming we're not all "tin foil hat" about Google, why shouldn't everybody use Google DNS? How can I determine which DNS server would be the fastest, most reliable, or what would generally be considered the best? Note: I've seen this question, but I don't want a comparison to OpenDNS. This is about everyday use by everyday people in their homes. Update: I seem to have put my hand in a wasps' nest of privacy concerns. I appreciate the issue, but I was expecting a more technology-oriented discussion... | There is a useful tool that tests the different DNS nameservers available (your ISP, your current configuration, DynDNS, Google Public DNS, and others). From my point of view Google DNS is pretty fast, but depending on the load Google DNS is under, my ISP's DNS is sometimes faster. NameBench (Linux/Windows/Mac OS X) Output: (NameBench results screenshot; original image hosted on googlecode.com) | {
"source": [
"https://serverfault.com/questions/169279",
"https://serverfault.com",
"https://serverfault.com/users/19644/"
]
} |
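If you only want a rough command-line comparison without installing NameBench, dig can report each resolver's query time directly. A small sketch; the resolver list and test hostname are arbitrary examples:
# Compare lookup latency of a few resolvers for one hostname
NAME=www.example.com
for ns in 8.8.8.8 8.8.4.4 208.67.222.222; do
    # +tries=1 +time=2 keeps a dead resolver from hanging the loop
    t=$(dig +tries=1 +time=2 @"$ns" "$NAME" | awk '/Query time/ {print $4}')
    printf '%-16s %s ms\n' "$ns" "${t:-timeout}"
done
Run it a few times, since a resolver's cache state can skew a single measurement.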
169,676 | I'm having a problem with a Linux system and I have found sysstat and sar to report huge peaks of disk I/O, average service time as well as average wait time. How could I determine which process is causing these peaks the next time it happen? Is it possible to do with sar ? Can I find this info from the already recorded sar files? Output of sar -d , system stall happened around 12.58-13.01pm. 12:40:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
12:40:01 dev8-0 11.57 0.11 710.08 61.36 0.01 0.97 0.37 0.43
12:45:01 dev8-0 13.36 0.00 972.93 72.82 0.01 1.00 0.32 0.43
12:50:01 dev8-0 13.55 0.03 616.56 45.49 0.01 0.70 0.35 0.47
12:55:01 dev8-0 13.99 0.08 917.00 65.55 0.01 0.86 0.37 0.52
13:01:02 dev8-0 6.28 0.00 400.53 63.81 0.89 141.87 141.12 88.59
13:05:01 dev8-0 22.75 0.03 932.13 40.97 0.01 0.65 0.27 0.62
13:10:01 dev8-0 13.11 0.00 634.55 48.42 0.01 0.71 0.38 0.50 I also have this follow-up question to another thread I started yesterday: Sudden peaks in load and disk block wait | If you are lucky enough to catch the next peak utilization period, you can study per-process I/O stats interactively, using iotop . | {
"source": [
"https://serverfault.com/questions/169676",
"https://serverfault.com",
"https://serverfault.com/users/35441/"
]
} |
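If the peaks are hard to catch interactively, iotop can also run unattended in batch mode, and pidstat (from the same sysstat package that provides sar) can log per-process I/O continuously. A rough sketch, assuming root and an arbitrary log path:
# Option 1: iotop in batch mode with timestamps, only listing processes that actually do I/O
sudo iotop -b -o -t -d 5 >> /var/log/io-watch.log &
# Option 2: pidstat, sampling per-process disk I/O every 5 seconds
pidstat -d 5 >> /var/log/io-watch.log &
The log can then be matched against the 12:58-13:01 style windows in the sar output to name the offending PID.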
170,079 | How can I forward ports on a server running libvirt/KVM to specified ports on VM's, when using NAT? For example, the host has a public IP of 1.2.3.4. I want to forward port 80 to 10.0.0.1 and port 22 to 10.0.0.2. I assume I need to add iptables rules, but I'm not sure where is appropriate and what exactly should be specified. Output of iptables -L Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootps
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere 10.0.0.0/24 state RELATED,ESTABLISHED
ACCEPT all -- 10.0.0.0/24 anywhere
ACCEPT all -- anywhere anywhere
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT)
target prot opt source destination Output of ifconfig eth0 Link encap:Ethernet HWaddr 00:1b:fc:46:73:b9
inet addr:192.168.1.14 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::21b:fcff:fe46:73b9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:201 errors:0 dropped:0 overruns:0 frame:0
TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:31161 (31.1 KB) TX bytes:12090 (12.0 KB)
Interrupt:17
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
virbr1 Link encap:Ethernet HWaddr ca:70:d1:77:b2:48
inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::c870:d1ff:fe77:b248/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:468 (468.0 B) I'm using Ubuntu 10.04. | The latest stable release for libvirt for Ubuntu is version 0.7.5, which doesn't have some newer features (i.e. script hooks and network filters) which make automatic network configuration easier. That said, here's how to enable port forwarding for libvirt 0.7.5 on Ubuntu 10.04 Lucid Lynx. These iptables rules should do the trick: iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 22 -j DNAT --to-destination 10.0.0.2:22
iptables -I FORWARD -m state -d 10.0.0.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT The default KVM NAT config provides a rule similar to the 3rd I gave above, but it omits the NEW state, which is essential for accepting incoming connections. If you write a startup script to add these rules and you're not careful, libvirt 0.7.5 overrides them by inserting its own. So, in order to make sure these rules are applied properly on startup, you need to make sure libvirt has initialized before you insert your rules. Add the following lines to /etc/rc.local, before the line exit 0 : (
# Make sure the libvirt has started and has initialized its network.
while [ `ps -e | grep -c libvirtd` -lt 1 ]; do
sleep 1
done
sleep 10
# Set up custom iptables rules.
iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 22 -j DNAT --to-destination 10.0.0.2:22
iptables -I FORWARD -m state -d 10.0.0.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT
) & The sleep 10 above is a hack to make sure the libvirt daemon has had a chance to initialize its iptables rules before we add our own. I can't wait until they release libvirt version 0.8.3 for Ubuntu. | {
"source": [
"https://serverfault.com/questions/170079",
"https://serverfault.com",
"https://serverfault.com/users/51072/"
]
} |
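To confirm after a reboot that the rules from the answer above were actually applied (and not clobbered by libvirt), you can list the relevant chains on the host and probe the forwarded ports from outside. A sketch using the addresses from the question; the ssh user name is a placeholder:
# On the KVM host: the DNAT and FORWARD entries should be listed
sudo iptables -t nat -L PREROUTING -n -v --line-numbers
sudo iptables -L FORWARD -n -v --line-numbers
# From a machine outside the host: the forwarded ports should answer
curl -m 5 -sI http://1.2.3.4/ | head -n 1                 # served by 10.0.0.1:80
ssh -o ConnectTimeout=5 someuser@1.2.3.4 hostname         # lands on 10.0.0.2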
170,192 | I've got nginx 0.7x + PHP-FPM running under PHP 5.2.10 on one RHEL5 server, but trying to duplicate that setup under the bundled-in PHP-FPM in PHP 5.3.3 on a second server, I'm having some trouble with permission errors every time there's a GET. FPM is started, and confirmed that fastcgi is listening on 9000, but each time I do a GET, I see this error in the nginx log: 2010/08/12 23:38:53 [crit] 5019#0: *5 stat() "/home/noisepages/www/" failed (13: Permission denied), client: 24.215.173.141, server: dev.noisepages.com, request: "GET / HTTP/1.1", host: "dev.noisepages.com" Barebones nginx.conf.default works, at least. Here's my nginx.conf server {
listen 80;
server_name dev.noisepages.com;
root /home/noisepages/www;
index index.html index.htm index.php;
access_log logs/dev.access.log;
error_log logs/dev.error.log;
location / {
if (-f $request_filename) {
expires 30d;
break;
}
# this sends all non-existing file or directory requests to index.php
rewrite ^.*/files/(.*) /wp-includes/ms-files.php?file=$1;
if (!-e $request_filename) {
rewrite ^.+?(/wp-.*) $1 last;
rewrite ^.+?(/.*\.php)$ $1 last;
rewrite ^ /index.php last;
}
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/dev/shm/php-fastcgi.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /home/dev/www/$fastcgi_script_name;
}
} (The extra rewrite directives are for the use of WordPress multisite aka WordPress MU) I've also verified that user www-data is declared not only in nginx.conf but also in php-fpm.conf for user and group values. Maybe I'm not understanding what causes the error 13 message? Oddly enough, I'd tried to set up dev.noisepages.com on the first server in parallel to a couple of other virtual hosts -- each of which was working fine - and got the same error. | You need to ensure you have +x on all of the directories in the path leading to the site's root - so /home , /home/noisepages and /home/noisepages/www | {
"source": [
"https://serverfault.com/questions/170192",
"https://serverfault.com",
"https://serverfault.com/users/51068/"
]
} |
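A quick way to see exactly which directory in the path is missing the search bit is namei, and the fix only needs +x on the directories, not on the files. A sketch using the path from the question and assuming the worker runs as www-data:
# Show owner and mode for every component of the path
namei -om /home/noisepages/www
# Grant search (execute) permission on each directory leading to the web root
chmod o+x /home /home/noisepages /home/noisepages/www
# Verify as the nginx/php-fpm user; a permission error here means something is still closed
sudo -u www-data stat /home/noisepages/www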
170,193 | Let's say I see an HTTP response with its header. How do I know if it is a response to a HEAD request? RFC 2616 states that if 200 OK is the status of the response, it should contain a message body only if it's not a response to a HEAD request. So I need to know if it is a response to a HEAD. Do I have to keep a state and remember whether it is a response to a HEAD or is it possible to know that only from the response fields? Thanks. | You need to ensure you have +x on all of the directories in the path leading to the site's root - so /home , /home/noisepages and /home/noisepages/www | {
"source": [
"https://serverfault.com/questions/170193",
"https://serverfault.com",
"https://serverfault.com/users/45074/"
]
} |
170,194 | I've just read this question which essentially says that when I set up DNS for example.com, the root record can't be a CNAME, it has to be an A record. My question is, why? I'm sure the clever people who designed DNS didn't make arbitary restrictions for no reason, but I don't see what we gain by requring root domains to be A records. I would love to just point my example.com domain to someserver.somewebhost.example and forget about it, but I can't. Please enlighten me, billpg. | Firstly, the underlying reason is not that you must use an A record, but that you cannot use a CNAME record because those cannot coexist with other normal resource record types. The reason for that restriction is in §3.6.2 of RFC 1034: If a CNAME RR is present at a node, no
other data should be present; this ensures that the data for a canonical name and its aliases cannot be different. This rule also insures that a cached CNAME can be used without checking with an authoritative server for other RR types. As the root of a (delegated) domain must have SOA and NS records, the rule above kicks in, preventing use of CNAMEs too. | {
"source": [
"https://serverfault.com/questions/170194",
"https://serverfault.com",
"https://serverfault.com/users/3667/"
]
} |
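The coexistence problem is easy to see with dig: the zone apex already has to answer for SOA and NS, which is exactly what a CNAME at the same name would forbid. A small illustration, with example.com standing in for your domain:
# The apex of a delegated zone always carries SOA and NS records...
dig +noall +answer example.com SOA
dig +noall +answer example.com NS
# ...so pointing it at an address means an A (or AAAA) record:
dig +noall +answer example.com A
# A subdomain can be a CNAME, because nothing else has to live at that name:
dig +noall +answer www.example.com CNAME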
170,346 | I'd like to set up command completion on zsh to display host names after I type ssh [TAB] taking the names out of my .ssh/config file (and preferably from known_hosts and /etc/hosts and anywhere else that makes sense) and presenting one single list. It does some of this currently, but it doesn't use .ssh/config at all it requires a username first, even though using .ssh/config makes typing usernames unnecessary it presents multiple lists (probably one from known_hosts and another from /etc/hosts, but I haven't verified that) So I want to be include known usernames as well as known hostnames in the (preferably single) list after typing ssh [TAB] (I'm coming here before Google because 1) it'll result in the answer getting stored here, and 2) it's probably more efficient. If no one else answers, I'll hunt down the answer.) | Here's the relevant part from my .zshrc . It hasn't changed since 2002, so I might write it differently today, but it still works to complete host names from ~/.ssh/config and ~/.ssh/known_hosts (if HashKnownHosts is off — it didn't exist in those days). h=()
if [[ -r ~/.ssh/config ]]; then
h=($h ${${${(@M)${(f)"$(cat ~/.ssh/config)"}:#Host *}#Host }:#*[*?]*})
fi
if [[ -r ~/.ssh/known_hosts ]]; then
h=($h ${${${(f)"$(cat ~/.ssh/known_hosts{,2} || true)"}%%\ *}%%,*}) 2>/dev/null
fi
if [[ $#h -gt 0 ]]; then
zstyle ':completion:*:ssh:*' hosts $h
zstyle ':completion:*:slogin:*' hosts $h
fi | {
"source": [
"https://serverfault.com/questions/170346",
"https://serverfault.com",
"https://serverfault.com/users/40159/"
]
} |
170,391 | First of all, I must say I've never had a problems with making cables by myself, last years stopped using cable testers just because they always say the cable I've made is ok. But I see there's a lot of the factory-made cables on the market, they became cheap and in addition you may choose any color for the better management. Some people say the factory made cable is always better than you may do. As there could be couple of the different situations, let's split a question: buy or make a cables for the networking inside a rack? buy or make for the networking outside i.e. to users' computers, to other racks etc.? | Personally I would always buy cables and always do buy cables in large volumes. The reason for this is that unless you are making cables for home you are making cables for a business. It takes time to make cables and you are paying people/you are being paid by the organisation you are working for. Unless you are making them in very large quantities the amount you save from making cables yourself does not cover time cost of the time taken to make the cables. Also as you said you can buy multiple colours and most lengths, failing that there are companies out there who will make you any length of cable you want cost effectively. Hope this helps. | {
"source": [
"https://serverfault.com/questions/170391",
"https://serverfault.com",
"https://serverfault.com/users/6080/"
]
} |
170,578 | I connect an iSCSI target, create a Physical Volume and Volume Group on it. Then I create an LV, and mkfs.ext3 /dev/vg00/vm and all that works great. Then I disconnect the target iscsiadm -m node -T iqn.2004-04.com.qnap:ts-509:iscsi.linux01.ba4731 -p 192.168.0.4 -u login to another Linux server, and connect the target there iscsiadm -m node -T iqn.2004-04.com.qnap:ts-509:iscsi.linux01.ba4731 -p 192.168.0.4 -l and I get: linux01:~ # lvdisplay
--- Logical volume ---
LV Name /dev/vg00/vm
VG Name vg00
LV UUID NBNRGV-FkSR-ZNZ9-9AVk-chLQ-j5nc-RazeBw
LV Write Access read/write
LV Status NOT available
LV Size 17.00 GB
Current LE 4352
Segments 1
Allocation inherit
Read ahead sectors 0 I can see that /dev/vg00/vm doesn't exist, as I would have expected. What am I doing wrong? | You need to activate a volume group after you attached it. To activate all the inactive volumes on the system you would use a command like vgchange -a y . | {
"source": [
"https://serverfault.com/questions/170578",
"https://serverfault.com",
"https://serverfault.com/users/34187/"
]
} |
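Putting the whole sequence together on the second server, a minimal sketch (the volume group name is the one from the question, the mount point is arbitrary):
# After the iSCSI login, let LVM rescan the new disk and activate the VG
sudo pvscan
sudo vgscan
sudo vgchange -ay vg00      # activate all logical volumes in vg00
# The device node should now exist and be mountable
sudo lvs vg00
ls -l /dev/vg00/vm
sudo mount /dev/vg00/vm /mnt
Remember to umount and deactivate (vgchange -an vg00) before logging the target out again, so the other host can take over cleanly.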
170,583 | In linux, how do I do something like echo 'hello world' > log.txt but instead of overwriting the contents of log.txt, it appends to the end of of log.txt? | echo 'hello world' >> log.txt | {
"source": [
"https://serverfault.com/questions/170583",
"https://serverfault.com",
"https://serverfault.com/users/14896/"
]
} |
170,682 | I've got a Github repo I want to access from two different Linux machines. For the first machine, I followed Github's instructions for generating SSH keys, and added the resulting public key to Github. This client works fine. For the second client, I copied the /home/{user}/.ssh/id_rsa file from the first client. I thought this might be all I had to do, but when I try to connect I get 'Permission denied (publickey).' What am I missing? | The same SSH key should be able to be used from multiple clients. I have different SSH keys for different networks and they're actually stored on an encrypted USB drive that I use from several different computers without a problem. SSH is very picky about file permissions so I would first check all the permissions from /home/{user} all the way down to the id_rsa file itself. SSH does not really care for group or world write permissions so make sure you chmod go-w your home directory and the ~/.ssh directory for starters. I'd also make sure they're owned by your user chown ${USER}:${USER} . For the SSH key itself I chmod 600 them... If you want I've have additional info on how I manage my SSH keys in my answer to another SSH question. | {
"source": [
"https://serverfault.com/questions/170682",
"https://serverfault.com",
"https://serverfault.com/users/41823/"
]
} |
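A concrete checklist for the second client, following the permission advice above; the key path is the one from the question, and git@github.com is simply GitHub's SSH endpoint used here as a test target:
# Tighten ownership and permissions the way ssh expects them
chmod go-w ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chown -R "$USER": ~/.ssh
# Test verbosely to see which key is offered and why it is (or isn't) accepted
ssh -vT git@github.com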
171,095 | Using the pipes ( | ) feature in Linux I can forward chain the standard input to one or several output streams. I can use tee to split the output to separate sub processes. Is there a command to join two input streams? How would I go about this? How does diff work? | Personally, my favorite (requires bash and other things that are standard on most Linux distributions) The details can depend a lot on what the two things output and how you want to merge them ... Contents of command1 and command2 after each other in the output: cat <(command1) <(command2) > outputfile Or if both commands output alternate versions of the same data that you want to see side-by side (I've used this with snmpwalk; numbers on one side and MIB names on the other): paste <(command1) <(command2) > outputfile Or if you want to compare the output of two similar commands (say a find on two different directories) diff <(command1) <(command2) > outputfile Or if they're ordered outputs of some sort, merge them: sort -m <(command1) <(command2) > outputfile Or run both commands at once (could scramble things a bit, though): cat <(command1 & command2) > outputfile The <() operator sets up a named pipe (or /dev/fd) for each command, piping the output of that command into the named pipe (or /dev/fd filehandle reference) and passes the name on the commandline. There's an equivalent with >(). You could do: command0 | tee >(command1) >(command2) >(command3) | command4 to simultaneously send the output of one command to 4 other commands, for instance. | {
"source": [
"https://serverfault.com/questions/171095",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
171,400 | If cost were not an issue, would there be any benefit in deploying a software load balancer for web traffic compared to a hardware one? | The distinction between "hardware" and "software" load balancers is no longer meaningful. A so-called "hardware" load balancer is a PC class CPU, network interfaces with packet processing capabilities, and some software to bind it all together. A "software" load balancer realized on a good server with modern NICs is ... the same. What you get with high-end commercial offerings like F5 or Citrix Netscaler is: A rich and deep feature set. Their solution is mature and can quickly handle all common needs and some uncommon ones as well. Excellent statistics. Management types love statistics, and network techs realize that stats can be useful in troubleshooting too. A single vendor to choke when something isn't working, i.e. support contract directly with the solution vendor. Lower salary costs. The appliance mostly just works, and managing one doesn't take that many hours. With (open source) software load balancers is you don't get the opposite, what you get depends on the software you choose and how you go about it. That said, typically you'll see: Longer time to set up the initial solution. Especially if you need more than just load balancing, fx caching + content rewriting + HA, then setting up open source software takes more manhours. You build it, you own it. If your company sets up open source software load balancers with inhouse techs, then you're 100% responsible for the solution yourself. Documentation, upgrade path, disaster recovery etc will all need to be considered and perhaps be implemented by you . The differentiation isn't really on "hardware" versus "software". It is on "buy a proven technology stack as an appliance" versus "build it yourself". There are of course many variables to consider when making the final decision (costs, inhouse skill sets, tolerance for downtime, future growth etc). | {
"source": [
"https://serverfault.com/questions/171400",
"https://serverfault.com",
"https://serverfault.com/users/50438/"
]
} |
171,551 | I just can't wrap my head around it. For example: I want the left-most router to be able to ping my computers on the left and vice-versa. Where would I set up ip route and to what address. I feel like I'm just guessing and don't really understand the concept. Picture is from Cisco Packettracer. | I made a diagram that may be helpful: With regard to static routing, consider the above diagram. We have three separate networks: 192.168.1.0, 192.168.2.0, and 192.168.3.0. At first, network hosts (routers, computers, etc.) can only communicate with other hosts that are on their own network. For instance, the computer named James has a single interface on network 192.168.1.0, so that's the only network that it can 'see'. Initially, it will only be able to communicate with Router A. Router A has network interfaces on the 192.168.1.0 and 192.168.2.0 networks, so those are the two networks that it can 'see'. These are the only networks Router A 'knows' about, so it can only communicate with hosts on the 192.168.1.0 and 192.168.2.0 networks. So Router A doesn't even 'know' that the 192.168.3.0 network exists. Similarly, Router B can 'see' networks 192.168.2.0 and 192.168.3.0. When you enter a route into the table, you're telling a host that there's a new network it can get to, and you're giving it the address of a gateway that it can use to get to the new network. So to be able to contact Jesus (or any other host on the 192.168.3.0 network) from Router A, you'd enter the command: ip route 192.168.3.0 255.255.255.0 192.168.2.2
^ ^ ^
network mask gateway This works because Router B can 'see' both Router A and Jesus. Thanks to this routing table entry when Router A wants to reach the 192.168.3.0 network, it knows it can get there via Router B at 192.168.2.2, so it sends the packet to Router B. Router B can see the 192.168.3.0 network directly, so it forwards the packet along to Jesus at 192.168.3.11. So, now we know how to direct router A to the 192.168.3.0 network. But what if we want James to also be able to reach the 192.168.3.0 network? Well, Router A already knows how to get there, and James can already 'see' Router A, since they're both on network 192.168.1.0. So we can just tell James to use Router A as its gateway to the 192.168.3.0 network. If James were a router instead of a computer, we'd use the command: ip route 192.168.3.0 255.255.255.0 192.168.1.1
^ ^ ^
network mask gateway James would then be able to contact Jesus (or any host on the 192.168.3.0) network by forwarding the packet to 192.168.1.1 (Router A), which would then forward the packet to 192.168.2.2 (Router B) which would then forward the packet to its destination (Jesus in this case) via its directly connected interface. Now, for Jesus to be able to respond to James, Jesus would need to have Router B set up as its gateway to the 192.168.1.0 network, and Router B would have to have Router A set up as its gateway to the 192.168.1.0 network. Then, any host on the 192.168.1.0 network would have a path to the 192.168.3.0 network and vice versa. Hope that helps. | {
"source": [
"https://serverfault.com/questions/171551",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
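The same idea on a Linux host uses iproute2 instead of IOS syntax. For instance, James's machine from the diagram could be taught the route to the 192.168.3.0 network like this (addresses are the ones used in the answer above):
# On James (192.168.1.x): reach 192.168.3.0/24 via Router A
sudo ip route add 192.168.3.0/24 via 192.168.1.1
# Inspect the table and see which path a given destination would take
ip route show
ip route get 192.168.3.11
In practice a workstation usually just gets a default route (via 192.168.1.1) instead of per-network entries, but the per-network form mirrors the Cisco examples above.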
171,678 | I have a small web server that serves requests on port 5010 rather than 80. I would like to use nginx as a front end proxy to receive requests on port 80 and then let those requests be handle by port 5010. I have installed nginx successfully and it runs smoothly on Ubuntu Karmic. But, my attempts to reconfigure the default nginx.conf have not been successful. I tried including in the server directive the listen argument for port 5010. I have also tried proxy_pass directive. Any suggestions on changes that need to be made or directives that need to be set in order to just have port forwarding. | I'm assuming that nginx is not the server listening on port 5010 as well as 80, correct? Something else is listening on 5010 and you wish to have nginx proxy to that server? If that's the case, here's a nice sample config I've used in the past with success: server {
listen 80;
server_name <YOUR_HOSTNAME>;
location / {
proxy_pass http://127.0.0.1:5010/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
} I believe that should accomplish what you're seeking. Good luck! | {
"source": [
"https://serverfault.com/questions/171678",
"https://serverfault.com",
"https://serverfault.com/users/51512/"
]
} |
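After dropping that server block in place, a quick end-to-end check makes sure port 80 really is being answered by the backend on 5010. A sketch; YOUR_HOSTNAME is the placeholder from the config above:
# Syntax-check and reload nginx
sudo nginx -t && sudo nginx -s reload
# Backend directly, then through the proxy; both should return the same status line
curl -sI http://127.0.0.1:5010/ | head -n 1
curl -sI -H 'Host: YOUR_HOSTNAME' http://127.0.0.1:80/ | head -n 1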
171,744 | Is there a simple ping-like command to test whether a DHCP service is running on a network? ...on Linux | Based on this answer , assuming you have installed nmap ( sudo apt install nmap ): sudo nmap --script broadcast-dhcp-discover Add the -e $interface option if you have more than one network interface. (For example: nmap --script broadcast-dhcp-discover -e eth0 ) Sample output: Starting Nmap 7.01 ( https://nmap.org ) at 2017-09-27 17:40 CEST
Pre-scan script results:
| broadcast-dhcp-discover:
| Response 1 of 1:
| IP Offered: 192.168.81.94
| DHCP Message Type: DHCPOFFER
| Server Identifier: 192.168.81.2
| IP Address Lease Time: 5m00s
| Subnet Mask: 255.255.255.0
| Router: 192.168.81.2
| Domain Name Server: 192.168.81.2
| Domain Name: example.lan
| NTP Servers: 192.168.81.10, 192.168.81.2
| NetBIOS Name Server: 192.168.81.10
|_ NetBIOS Node Type: 8
WARNING: No targets were specified, so 0 hosts scanned.
Nmap done: 0 IP addresses (0 hosts up) scanned in 0.66 seconds Or this output on a different network, with nmap v. 6 : Starting Nmap 6.00 ( http://nmap.org ) at 2017-09-27 17:42 CEST
Pre-scan script results:
| broadcast-dhcp-discover:
| IP Offered: 192.168.4.101
| DHCP Message Type: DHCPOFFER
| Server Identifier: 192.168.4.1
| IP Address Lease Time: 7 days, 0:00:00
| Subnet Mask: 255.255.255.0
| Time Offset: 7200
| Router: 192.168.4.1
| Domain Name Server: 208.91.112.53, 208.91.112.52
| Renewal Time Value: 3 days, 12:00:00
|_ Rebinding Time Value: 6 days, 3:00:00
WARNING: No targets were specified, so 0 hosts scanned.
Nmap done: 0 IP addresses (0 hosts up) scanned in 1.15 seconds | {
"source": [
"https://serverfault.com/questions/171744",
"https://serverfault.com",
"https://serverfault.com/users/4935/"
]
} |
171,833 | Is there a simple command to find out the current number of messages in the linux mail queue? mailq dumps out a verbose list, but it's not convenient for a quick overview. I'm using Ubuntu and postfix. | If you just want to know the number of messages sitting in the deferred queue, then the following should get you a quick answer: find /var/spool/postfix/deferred -type f | wc -l There are three other queues. See http://www.porcupine.org/postfix/queueing.html for details. | {
"source": [
"https://serverfault.com/questions/171833",
"https://serverfault.com",
"https://serverfault.com/users/51568/"
]
} |
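If you want all of the queues rather than just deferred, a short loop over the spool directories does it, and postqueue can summarise the whole queue for you. Sketch, assuming the default /var/spool/postfix layout:
# Count messages per Postfix queue
for q in incoming active deferred hold; do
    printf '%-10s %s\n' "$q" "$(find /var/spool/postfix/$q -type f | wc -l)"
done
# Or let Postfix summarise it (the last line shows the grand total)
postqueue -p | tail -n 1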
171,893 | We all know it happens. A bitter old IT guy leaves a backdoor into the system and network in order to have fun with the new guys and show the company how bad things are without him. I've never personally experienced this. The most I've experienced is somebody who broke and stole stuff right before leaving. I'm sure this happens, though. So, when taking over a network that can't quite be trusted, what steps should be taken to ensure everything is safe and secure? | It's really, really, really hard. It requires a very complete audit. If you're very sure the old person left something behind that'll go boom, or require their re-hire because they're the only one who can put a fire out, then it's time to assume you've been rooted by a hostile party. Treat it like a group of hackers came in and stole stuff, and you have to clean up after their mess. Because that's what it is. Audit every account on every system to ensure it is associated with a specific entity. Accounts that seem associated to systems but no one can account for are to be mistrusted. Accounts that aren't associated with anything need to be purged (this needs to be done anyway, but it is especially important in this case) Change any and all passwords they might conceivably have come into contact with. This can be a real problem for utility accounts as those passwords tend to get hard-coded into things. If they were a helpdesk type responding to end-user calls, assume they have the password of anyone they assisted. If they had Enterprise Admin or Domain Admin to Active Directory, assume they grabbed a copy of the password hashes before they left. These can be cracked so fast now that a company-wide password change will need to be forced within days. If they had root access to any *nix boxes assume they walked off with the password hashes. Review all public-key SSH key usage to ensure their keys are purged, and audit if any private keys were exposed while you're at it. If they had access to any telecom gear, change any router/switch/gateway/PBX passwords. This can be a really royal pain as this can involve significant outages. Fully audit your perimeter security arrangements. Ensure all firewall holes trace to known authorized devices and ports. Ensure all remote access methods (VPN, SSH, BlackBerry, ActiveSync, Citrix, SMTP, IMAP, WebMail, whatever) have no extra authentication tacked on, and fully vet them for unauthorized access methods. Ensure remote WAN links trace to fully employed people, and verify it. Especially wireless connections. You don't want them walking off with a company paid cell-modem or smart-phone. Contact all such users to ensure they have the right device. Fully audit internal privileged-access arrangements. These are things like SSH/VNC/RDP/DRAC/iLO/IMPI access to servers that general users don't have, or any access to sensitive systems like payroll. Work with all external vendors and service providers to ensure contacts are correct. Ensure they are eliminated from all contact and service lists. This should be done anyway after any departure, but is extra-important now. Validate all contacts are legitimate and have correct contact information, this is to find ghosts that can be impersonated. Start hunting for logic bombs. Check all automation (task schedulers, cron jobs, UPS call-out lists, or anything that runs on a schedule or is event-triggered) for signs of evil. By "All" I mean all. Check every single crontab. 
Check every single automated action in your monitoring system, including the probes themselves. Check every single Windows Task Scheduler; even workstations. Unless you work for the government in a highly sensitive area you won't be able to afford "all", do as much as you can. Validate key system binaries on every server to ensure they are what they should be. This is tricky, especially on Windows, and nearly impossible to do retroactively on one-off systems. Start hunting for rootkits. By definition they're hard to find, but there are scanners for this. The decision to kick off an audit of this incredible scope needs to be made at a very high level. The decision to treat this as a potential criminal case will be made by your Legal team. If they elect to do some preliminary investigation first, go for it. Start looking. If you find any evidence, stop immediately . Notify your legal team as soon as you find something likely. The decision to treat it as a criminal case will be made at that time. Further action by untrained hands (you) can spoil evidence and you don't want that, not unless you want the perp to walk free. If outside security experts are retained, you are their local expert. Work with them, to their direction. They understand the legal requirements for evidence, you do not. There will be a lot of negotiation between the security experts, your management, and legal counsel. That's expected, work with them. But, really, how far do you have to go? This is where risk management comes into play. Simplistically, this is the method of balancing expected risk against loss. Sysadmins do this when we decide which off-site location we want to put backups; bank safety deposit box vs an out-of-region datacenter. Figuring out how much of this list needs following is an exercise in risk-management. In this case the assessment will start with a few things: The expected skill level of the departed The access of the departed The expectation that evil was done The potential damage of any evil Regulatory requirements for reporting perpetrated evil vs preemptively found evil. Generally you have to report the former, but not the later. The decision of how far down the above rabbit-hole to dive will depend on the answers to these questions. For routine admin departures where expectation of evil is very slight, the full circus is not required; changing admin-level passwords and re-keying any external-facing SSH hosts is probably sufficient. Again, corporate risk-management security posture determines this. For admins who were terminated for cause, or evil cropped up after their otherwise normal departure, the circus becomes more needed. The worst-case scenario is a paranoid BOFH-type who has been notified that their position will be made redundant in 2 weeks, as that gives them plenty of time to get ready; in circumstances like these Kyle's idea of a generous severance package can mitigate all kind of problems. Even paranoids can forgive a lot of sins after a check containing 4 months pay arrives. That check will probably cost less than the cost of the security consultants needed to ferret out their evil. But ultimately, it comes down to the cost of determining if evil was done versus the potential cost of any evil actually being done. | {
"source": [
"https://serverfault.com/questions/171893",
"https://serverfault.com",
"https://serverfault.com/users/33118/"
]
} |
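For the "check every single crontab" item in the list above, a short loop saves a lot of clicking; it only covers per-user crontabs, so the /etc/cron.* directories and at(1) jobs still deserve a separate look. A sketch, run as root:
# Dump every user crontab on the box for review
for user in $(cut -d: -f1 /etc/passwd); do
    jobs=$(crontab -l -u "$user" 2>/dev/null)
    if [ -n "$jobs" ]; then
        echo "===== crontab for $user ====="
        echo "$jobs"
    fi
done
# System-wide schedules and one-shot jobs that also need a read-through
ls -l /etc/crontab /etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.weekly /etc/cron.monthly
atq 2>/dev/null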
171,985 | I just updated my robots.txt file on a new site; Google Webmaster Tools reports it read my robots.txt 10 minutes before my last update. Is there any way I can encourage Google to re-read my robots.txt as soon as possible? UPDATE: Under Site Configuration | Crawler Access | Test robots.txt: Home Page Access shows: Googlebot is blocked from http://my.example.com/ FYI: The robots.txt that Google last read looks like this: User-agent: *
Allow: /<a page>
Allow: /<a folder>
Disallow: / Have I shot myself in the foot, or will it eventually read: http:///robots.txt (as it did the last time it read it)? Any ideas on what I need to do? | In case anyone else runs into this problem there is a way to force google-bot to re-download the robots.txt file. Go to Health -> Fetch as Google [1] and have it fetch /robots.txt That will re-download the file and google will also re-parse the file. [1] in the previous Google UI it was 'Diagnostics -> Fetch as GoogleBot'. | {
"source": [
"https://serverfault.com/questions/171985",
"https://serverfault.com",
"https://serverfault.com/users/721/"
]
} |
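Before asking Google to re-fetch anything, it is worth confirming what the file actually serves right now, for example with curl and a Googlebot-style User-Agent string (the URL below is the question's placeholder domain):
# Headers first (status code, caching), then the body a crawler would see
curl -sI http://my.example.com/robots.txt
curl -s -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' http://my.example.com/robots.txt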
171,992 | I am using a (dv) server from Media Temple, and I have wordpress installed on the root of my domain. I have a directory called support that holds a standard xhtml webpage, but if the user enters the domain name as follows: www.domain.com/directory the wordpress page not found shows up, however, if the user enters: www.domain.com/directory/ the xhtml page shows up. Is it possible to force the server to direct anyone who types /directory to /directory/? Sorry if this isn't making much sense! Thanks, Danny | In case anyone else runs into this problem there is a way to force google-bot to re-download the robots.txt file. Go to Health -> Fetch as Google [1] and have it fetch /robots.txt That will re-download the file and google will also re-parse the file. [1] in the previous Google UI it was 'Diagnostics -> Fetch as GoogleBot'. | {
"source": [
"https://serverfault.com/questions/171992",
"https://serverfault.com",
"https://serverfault.com/users/41698/"
]
} |
172,284 | In Linux if you type cat *, you will get something like this: line1 from file1 line2 from file1 line1 from file2 line1 from file3 line2 from file3 line3 from file3 What I would like is to display a separator among files. Something like this: line1 from file1 line2 from file1 XXXXXXXXXXXX line1 from file2 XXXXXXXXXXXX line1 from file3 line2 from file3 line3 from file3 Is that easily possible with a one-liner easy to type by heart? | If you're not too fussy about the appearance of the separator: tail -n +1 * | {
"source": [
"https://serverfault.com/questions/172284",
"https://serverfault.com",
"https://serverfault.com/users/48214/"
]
} |
172,326 | Outlook and a number of other email clients now feature autodiscovery of mail server settings and it bugs me that I don't have this set up for our domains, but I'm not sure how to do it and a quick google hasn't turned up anything. I presume it's done with some kind of SRV record in DNS - is this correct and if so what's the correct format? | I am sorry I might be late to the party here. If you are still looking for a solution, I spent a weekend figuring out how to provide Auto Configuration (autodiscover, as Outlook 2010 calls it) for most popular email clients, including iOS. I wrote it all down in a blog post here: http://moens.ch/2012/05/31/providing-email-client-autoconfiguration-information/ (also available via archive.org ) Outlook 2010 actually does a combination of DNS lookup and XML config. It first does an SRV lookup for _autodiscover._tcp.<yourdomain> and then does an XML POST request to your autodiscover URL and expects an XML response. My post contains samples of the XML response and a link to the full autodiscover XML response spec on MS TechNet. In short: you can provide full autodiscover functionality to your users even without an Exchange server. | {
"source": [
"https://serverfault.com/questions/172326",
"https://serverfault.com",
"https://serverfault.com/users/402/"
]
} |
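The DNS side of that Outlook lookup is easy to verify from a shell: the SRV record just points clients at the host that serves the autodiscover XML. A sketch; example.com and mail.example.com are placeholders:
# What Outlook 2010+ looks up before POSTing to the autodiscover URL
dig +short SRV _autodiscover._tcp.example.com
# A typical answer has the form: 0 0 443 mail.example.com.
# The endpoint the client will then POST its request XML to
curl -skI https://mail.example.com/autodiscover/autodiscover.xml | head -n 1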
172,337 | If I run this command in Ubuntu sudo cat /proc/sys/kernel/random/entropy_avail it returns a number that indicates how much "entropy" is available to the kernel, but that's about all I know. What unit is this entropy measured in? What is it used for? I've been told it's "bad" if that number is "low". How low is "low" and what "bad" things will happen if it is? What's a good range for it to be at? How is it determined? | Your system gathers some "real" random numbers by keeping an eye on different events: network activity, a hardware random number generator (if available; for example, VIA processors usually have a "real" random number generator), and so on. It feeds those into the kernel entropy pool, which is used by /dev/random. Applications which need extreme security tend to use /dev/random as their entropy source, or in other words, the randomness source. If /dev/random runs out of available entropy, it's unable to serve out more randomness and the application waiting for the randomness stalls until more random stuff is available. The example I've seen during my career is that the Cyrus IMAP daemon wanted to use /dev/random for the randomness and its POP sessions wanted to generate the random strings in APOP connections from /dev/random. In a busy environment there were more login attempts than there was traffic feeding /dev/random -> everything stalled. In that case I installed rng-tools and activated the rngd it had -- that shoveled semi-random numbers from /dev/urandom to /dev/random in case /dev/random ran out of "real" entropy. | {
"source": [
"https://serverfault.com/questions/172337",
"https://serverfault.com",
"https://serverfault.com/users/36088/"
]
} |
172,484 | I'm ssh ing into a server and I'm starting a Python script that'll take approx. 24 hours to complete. What if my internet connection dies in the middle? Will that stop the command? Is there any way to run my long-running command in a way that local disconnects won't affect it and I could continue to see its output after I log in to ssh again? | The best way is to use screen (on the server) to start a session to run the command in and then disconnect the screen so it will keep running, and you can do other things, or just disconnect from the server. The other option is to use nohup in combination with & so you would have nohup <command> & | {
"source": [
"https://serverfault.com/questions/172484",
"https://serverfault.com",
"https://serverfault.com/users/20346/"
]
} |
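A concrete sketch of both approaches for the 24-hour Python job from the question; the script and log names are placeholders:
# screen approach: run inside a named session you can detach from and come back to
screen -S longjob
python long_running.py
# detach with Ctrl-a d, log out, and later reattach with:
screen -r longjob
# nohup approach: survive the logout and keep the output in a file
nohup python long_running.py > long_running.log 2>&1 &
tail -f long_running.log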
173,158 | What are the differences between OpenSwan and StrongSwan? All I found is this comparison between the outdated FreeSwan and testing version of OpenSwan - i.e. current stable of OpenSwan is 2.6 (3.0 in comparison) and current stable for StrongSwan is 4.4 (4.1.7 in comparison) which seems grossly unfair (there is no point in comparing Windows 98 with Ubuntu 10.10 or Mac OS X 10.7 with Slackware 8.0). After reading some websites, StrongSwan seems to be better maintained while OpenSwan seems to be more popular. | Libreswan is the project the Openswan developers created after the company they had originally founded to develop Openswan sued them over the trademark. So Libreswan is what we will discuss here. The most obvious differences are: StrongSwan has much more comprehensive and developed documentation than Libreswan . StrongSwan has support for EAP authentication methods, which make it easier to integrate into heterogeneous environments (such as authenticating to Active Directory). These are less well developed or even missing from Libreswan. StrongSwan can be clustered and load balanced . Libreswan does not seem to have any support to do either. Libreswan supports more hardware crypto accelerators than StrongSwan, but requires kernel patches to do so. Distro support: StrongSwan is the recommended default in Ubuntu since 14.04 . RHEL 7 ships Libreswan, though StrongSwan is available in EPEL. IPSec-tools was a port of the KAME IPSec userland from BSD to Linux. It appears to be no longer maintained. | {
"source": [
"https://serverfault.com/questions/173158",
"https://serverfault.com",
"https://serverfault.com/users/14085/"
]
} |
173,187 | I have seen what the text representation of an HTTP request is, but what does a DNS request look like? Where in the data is the location of the URL you are trying to locate? Also, how is the response formatted? | This is a raw dump from Wireshark of a DNS query. The DNS part starts with 24 1a: 0000 00 00 00 00 00 00 00 00 00 00 00 00 08 00 45 00 ........ ......E.
0010 00 3c 51 e3 40 00 40 11 ea cb 7f 00 00 01 7f 00 .<Q.@.@. ........
0020 00 01 ec ed 00 35 00 28 fe 3b 24 1a 01 00 00 01 .....5.( .;$.....
0030 00 00 00 00 00 00 03 77 77 77 06 67 6f 6f 67 6c .......w ww.googl
0040 65 03 63 6f 6d 00 00 01 00 01 e.com... .. And here is the breakdown: Domain Name System (query)
[Response In: 1852]
Transaction ID: 0x241a
Flags: 0x0100 (Standard query)
0... .... .... .... = Response: Message is a query
.000 0... .... .... = Opcode: Standard query (0)
.... ..0. .... .... = Truncated: Message is not truncated
.... ...1 .... .... = Recursion desired: Do query recursively
.... .... .0.. .... = Z: reserved (0)
.... .... ...0 .... = Non-authenticated data OK: Non-authenticated data is unacceptable
Questions: 1
Answer RRs: 0
Authority RRs: 0
Additional RRs: 0
Queries
www.google.com: type A, class IN
Name: www.google.com
Type: A (Host address)
Class: IN (0x0001) And the response, again starting at 24 1a: 0000 00 00 00 00 00 00 00 00 00 00 00 00 08 00 45 00 ........ ......E.
0010 00 7a 00 00 40 00 40 11 3c 71 7f 00 00 01 7f 00 .z..@.@. <q......
0020 00 01 00 35 ec ed 00 66 fe 79 24 1a 81 80 00 01 ...5...f .y$.....
0030 00 03 00 00 00 00 03 77 77 77 06 67 6f 6f 67 6c .......w ww.googl
0040 65 03 63 6f 6d 00 00 01 00 01 c0 0c 00 05 00 01 e.com... ........
0050 00 05 28 39 00 12 03 77 77 77 01 6c 06 67 6f 6f ..(9...w ww.l.goo
0060 67 6c 65 03 63 6f 6d 00 c0 2c 00 01 00 01 00 00 gle.com. .,......
0070 00 e3 00 04 42 f9 59 63 c0 2c 00 01 00 01 00 00 ....B.Yc .,......
0080 00 e3 00 04 42 f9 59 68 ....B.Yh Breakdown: Domain Name System (response)
[Request In: 1851]
[Time: 0.000125000 seconds]
Transaction ID: 0x241a
Flags: 0x8180 (Standard query response, No error)
1... .... .... .... = Response: Message is a response
.000 0... .... .... = Opcode: Standard query (0)
.... .0.. .... .... = Authoritative: Server is not an authority for domain
.... ..0. .... .... = Truncated: Message is not truncated
.... ...1 .... .... = Recursion desired: Do query recursively
.... .... 1... .... = Recursion available: Server can do recursive queries
.... .... .0.. .... = Z: reserved (0)
.... .... ..0. .... = Answer authenticated: Answer/authority portion was not authenticated by the server
.... .... .... 0000 = Reply code: No error (0)
Questions: 1
Answer RRs: 3
Authority RRs: 0
Additional RRs: 0
Queries
www.google.com: type A, class IN
Name: www.google.com
Type: A (Host address)
Class: IN (0x0001)
Answers
www.google.com: type CNAME, class IN, cname www.l.google.com
Name: www.google.com
Type: CNAME (Canonical name for an alias)
Class: IN (0x0001)
Time to live: 3 days, 21 hours, 52 minutes, 57 seconds
Data length: 18
Primary name: www.l.google.com
www.l.google.com: type A, class IN, addr 66.249.89.99
Name: www.l.google.com
Type: A (Host address)
Class: IN (0x0001)
Time to live: 3 minutes, 47 seconds
Data length: 4
Addr: 66.249.89.99
www.l.google.com: type A, class IN, addr 66.249.89.104
Name: www.l.google.com
Type: A (Host address)
Class: IN (0x0001)
Time to live: 3 minutes, 47 seconds
Data length: 4
Addr: 66.249.89.104 Edit: Note that if your real question is "how do I write a DNS server?", then there are two appropriate answers: Don't do it, use an existing one, e.g., bind or dnsmasq Read the spec Edit(2): The request was sent using host on a linux box: host www.google.com If you are on Windows, you can use nslookup nslookup www.google.com | {
"source": [
"https://serverfault.com/questions/173187",
"https://serverfault.com",
"https://serverfault.com/users/33970/"
]
} |
173,286 | Do you consider Arch Linux suitable for a server environment? Its rolling release model and simplicity seem to be good things, because once you have installed it you never need to reinstall, unlike the release model of other distros. But doesn't that constant upgrading cause stability problems? Although it is bleeding edge, Arch Linux uses the most recent STABLE version of software. | Probably the biggest issue with Arch as a server operating system is that it's not clear where and when applications may break after an upgrade. More often than not, you have to keep up with what's going on in the wiki and on the forums before doing any sort of upgrade; with Debian and CentOS, you can be reasonably well assured that any upgrades won't break any applications, since more often than not, the upgrades done on the STABLE branch will be security/bug fixes. | {
"source": [
"https://serverfault.com/questions/173286",
"https://serverfault.com",
"https://serverfault.com/users/50498/"
]
} |
173,391 | I am looking for a solution to load balancing and failover strategy, mainly for big web applications. We have many services to be balanced, such as web, MySQL, and many other HTTP or TCP based services. But I am not sure what their pros and cons are, and which I should choose. | The most important thing that differentiates the two solutions (LVS, HAproxy) is that one works at layer 4 (LVS) and the other at layer 7 (HAproxy). Note that the layer references are from the OSI networking model. If you understand this, you'll be able to use each one in the right place. For example: if you need to balance based solely on the number of connections, a layer 4 load balancer should suffice; on the other hand, if you want to load-balance based on HTTP response time, you'll need a higher-layer kind of LB. The drawback of using a higher-level LB is the resources needed (for the same amount of, let's say, traffic). The plusses are obvious - think "packet level inspection", "protocol routing", etc - things far more complicated than simple "packet routing". The last point I want to make is that HAproxy is userspace (think "far easier to customize/tweak", but slower (performance)), while LVS is in kernel space (think "fast as hell", but rigid as the kernel). Also, don't forget that "upgrading LVS might mean a kernel change - ergo, reboot"... In conclusion, use the right tool for the right job. | {
"source": [
"https://serverfault.com/questions/173391",
"https://serverfault.com",
"https://serverfault.com/users/14243/"
]
} |
173,435 | I'm running ESXi 4.1 on a Dell T110 Server I connect to ESXi using vSphere vSphere is running inside a Windows 7 VM The Windows 7 VM is running in VMware Fusion on my Mac OS X system When I'm in vSphere and I've selected a VM and I click the console tab on some systems the VM console won't release me when I press the control + command keys. pfSense (FreeBSD) and Ubuntu Server behave like this. I can't exit their console screen. I have to shut down these VM's to be released from their VM console access. Windows, Ubuntu Desktop, etc. all behave like I'd expect; When I press the control + command keys I'm released from the VM console and I'm able to navigate in vSphere. Does anyone know what might be causing this or a way around this? Thanks in advance. | The key combination Control + Alt solves this problem | {
"source": [
"https://serverfault.com/questions/173435",
"https://serverfault.com",
"https://serverfault.com/users/37321/"
]
} |
173,442 | I might be interested in getting a VPS hosting plan for some small personal sites and .NET projects. Was thinking of Softsys Bronze Plan, as my current shared host plan is with them too. The stuff I want to host has grown beyond the capabilities of a Shared hosting plan, and I also want more control over the IIS/ASP.NET configuration, that's why I'm considering VPS. The main config details would be: Hyper-V 30 GB of diskspace 1 GB of RAM More info here: http://www.softsyshosting.com/Windows-VPS-HyperV.aspx Does anyone have experience with this plan (or something similar from another host), and maybe could answer these couple of questions: Bronze has a total diskspace of 30GB. Is the OS part of this quota or not ? If so, how much does a base configuration with Windows 2008 take up in diskspace ? Would you advise Windows 2008 R2 or Normal. Or would you advise to use Windows 2003 with this config. I'm planning on running a SQL Server Express install too. Would 1 GB of RAM be enough for both the Windows 2008 (R2) and SQL Express. The database load will not be that very high (a couple of 1000 records returned each day). The DB will most likely be far away from the 4GB limit, that's why I'd go for a SQL Express instead of paying extra licensing costs for a SQL Web install. But I'm more concerned about performance. Would you recommend Softsys as a VPS host ? I've been with them for one year for my Shared hosting plan, and have no complaints so far. Also, as I have no VPS experience, what are the pitfalls I need to be aware of, in terms of performance mainly, but maybe in other areas too ? Mathieu | The key combination Control + Alt solves this problem | {
"source": [
"https://serverfault.com/questions/173442",
"https://serverfault.com",
"https://serverfault.com/users/48726/"
]
} |
173,742 | Question says it all. We are designing a system where security is very important. One of the ideas someone had was to force users to change passwords every 3 months. My take on this is that while its more secure because the password changes often it also forces our users to remember ever changing passwords and makes it more possible that they will just write it down somewhere to help remember. In the same idea is it really good to force users to use a super hard to guess password. Force them to use ?%&% and uppercase lowercase letters. I know its quite the hassle to invent such a password and then remembering it. Then again we do not want anyone using 12345. So. Is there any whitepapers about this subject? Good practice? I am talking about a website created with PHP. MySQL in a lamp environment if that changes anything. | I think I might be in the minority on this (based on my limited experience dealing with IT departments at school and work), but I think mandatory, time-based password change policies are worthless at best, and harmful at worst. People tend to be very bad at choosing good passwords and keeping them secret. Password expiration policies are designed to mitigate this by limiting the amount of time any one password can be cracked/social engineered/stolen; however, they fail to achieve this in practice, primarily because they force users to relearn their password on a continuous basis. By making it harder for user to commit their passwords to memory, you end up causing many of them to choose weaker passwords, and/or write their passwords down someplace where prying eyes can find them. Furthermore, when forced to change their password on a regular basis, many users will choose passwords that follow a very recognizable pattern, such as [base string][digit] . Let's say a user wants to use their cat's name Fluffy as their password. They might start out with a password of fluffy , then change it to fluffy1 , fluffy2 , fluffy3 and so on. In this case, the policy doesn't really help security; even if the user chooses a more secure base string than fluffy , and even if they keep their password safely memorized, the single suffix character that changes every few months does very little to mitigate cracking or social engineering attacks. See also: Password Expiration Considered Harmful , a short article (not written by me) which I think gives a good introduction to these problems. | {
"source": [
"https://serverfault.com/questions/173742",
"https://serverfault.com",
"https://serverfault.com/users/50206/"
]
} |
173,978 | mysqladmin -uroot create foo returns an exit status of 1 if foo exists, and 0 otherwise, but of course it will also create the database if it doesn't already exist. Is there some easy way to simply check whether a database exists? | I realize this was answered a long time ago, but it seems much cleaner to me to do this: mysql -u root -e 'use mydbname' If the database exists, this will produce no output and exit with returncode == 0. If the database does not exist, this will produce an error message on stderr and exit with returncode == 1. So you'd do something like this: if ! mysql -u root -e 'use mydbname'; then
...do stuff to create database...
fi This operates nicely with shell scripts, doesn't require any processing of the output, and doesn't rely on having local filesystem access. | {
"source": [
"https://serverfault.com/questions/173978",
"https://serverfault.com",
"https://serverfault.com/users/14573/"
]
} |
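Editor's note on the answer above: a minimal sketch of what the "do stuff to create database" branch could look like, assuming the root account and the database name mydbname are placeholders you would substitute. It only uses commands already mentioned in the thread (mysql and mysqladmin).
#!/bin/bash
# The exit status of `mysql -e 'use db'` tells us whether the database exists.
DB=mydbname
if ! mysql -u root -e "use ${DB}" 2>/dev/null; then
    # Database is missing: create it; any schema or bootstrap work could follow here.
    mysqladmin -u root create "${DB}"
fi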
173,999 | Is it possible to dump the current memory allocated for a process (by PID) to a file? Or read it somehow? | I'm not sure how you dump all the memory to a file without doing this repeatedly (if anyone knows an automated way to get gdb to do this please let me know), but the following works for any one batch of memory assuming you know the pid: $ cat /proc/[pid]/maps This will be in the format (example): 00400000-00421000 r-xp 00000000 08:01 592398 /usr/libexec/dovecot/pop3-login
00621000-00622000 rw-p 00021000 08:01 592398 /usr/libexec/dovecot/pop3-login
00622000-0066a000 rw-p 00622000 00:00 0 [heap]
3e73200000-3e7321c000 r-xp 00000000 08:01 229378 /lib64/ld-2.5.so
3e7341b000-3e7341c000 r--p 0001b000 08:01 229378 /lib64/ld-2.5.so Pick one batch of memory (so for example 00621000-00622000) then use gdb as root to attach to the process and dump that memory: $ gdb --pid [pid]
(gdb) dump memory /root/output 0x00621000 0x00622000 Then analyse /root/output with the strings command, unless you want raw binary garbage all over your PuTTY screen. | {
"source": [
"https://serverfault.com/questions/173999",
"https://serverfault.com",
"https://serverfault.com/users/20077/"
]
} |
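Editor's note on the gdb answer above: the author asked for an automated way to dump every region, so here is a hedged sketch that loops over /proc/[pid]/maps and runs gdb in batch mode once per writable mapping. The PID argument and the /tmp output directory are illustrative; run it as root or as the process owner, and expect it to be slow because gdb attaches and detaches for each region.
#!/bin/bash
# Usage: ./dumpmem.sh <pid>
pid=$1
outdir=/tmp/memdump-$pid
mkdir -p "$outdir"
# Take the address range (first field) of every writable mapping.
grep ' rw' "/proc/$pid/maps" | awk '{print $1}' | while IFS=- read -r start end; do
    # dump memory <file> <start> <end> writes that address range to a file.
    gdb --batch --pid "$pid" \
        -ex "dump memory $outdir/$start-$end.bin 0x$start 0x$end" >/dev/null 2>&1
done
echo "Dumped regions are in $outdir; analyse them with strings as suggested above."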
174,054 | I want to inspect TXT records for my domain, such as SPF records. I tried the following command with nslookup but it didn't list the TXT records: nslookup -type=TXT example.com What is the correct command, or is there a better tool to use on Windows 7? | First start nslookup without parameters, then type set type=txt , then type the domain name. nslookup <enter>
set type=txt <enter>
villagevines.com Example C:\Users\wilfried>nslookup
Default Server: mydnsserver
Address: 192.168.1.1
> set type=txt
> villagevines.com
Server: mydnsserver
Address: 192.168.1.1
*** No text (TXT) records available for villagevines.com
> | {
"source": [
"https://serverfault.com/questions/174054",
"https://serverfault.com",
"https://serverfault.com/users/20215/"
]
} |
174,064 | In the past with NetWare we could use Remote Manager to see what user had a file locked, and clear the connection. How is this accomplished for an NSS volume hosted on OES2 Linux? Thanks,
Tom | First start nslookup without parameters, then type set type=txt , then type the domain name. nslookup <enter>
set type=txt <enter>
villagevines.com Example C:\Users\wilfried>nslookup
Default Server: mydnsserver
Address: 192.168.1.1
> set type=txt
> villagevines.com
Server: mydnsserver
Address: 192.168.1.1
*** No text (TXT) records available for villagevines.com
> | {
"source": [
"https://serverfault.com/questions/174064",
"https://serverfault.com",
"https://serverfault.com/users/50458/"
]
} |
174,181 | I modified /etc/fstab . I verified the new devices and I can mount them with the mount command. How may I validate the modifications made to /etc/fstab ? | You can simply run: mount -a From the man page: -a
Mount all filesystems (of the given types) mentioned in fstab. This command will mount all (not-yet-mounted) filesystems mentioned in fstab and is used by the system startup scripts during boot. | {
"source": [
"https://serverfault.com/questions/174181",
"https://serverfault.com",
"https://serverfault.com/users/6343/"
]
} |
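Editor's note: on newer systems than the one in this question (util-linux 2.24 or later), there is also a dedicated checker that validates fstab without mounting anything; treat this as a hedged addition, since it is not part of the original answer.
findmnt --verify            # parse /etc/fstab and report syntax or device problems
findmnt --verify --verbose  # same, with per-entry detail and warnings
mount -a                    # then actually mount everything listed that is not yet mounted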
174,678 | I have a Debian box with some jobs scheduled using at . I know I can list the jobs with their times using atq , but is there any way to print out their contents, apart from peeking into /var/spool/cron/atjobs ? | at -c jobnumber will list a single job. If you want to see all of them, you might create a script like #!/bin/bash
JOBS=$(atq | awk '{ print $1; }')
for each in $JOBS; do echo "JOB $each"; at -c $each; done Probably there's a shorter way to do that, I just popped that out of my head :) | {
"source": [
"https://serverfault.com/questions/174678",
"https://serverfault.com",
"https://serverfault.com/users/6503/"
]
} |
174,737 | I'm running Ubuntu and I have a deb file installed. I've made deb packages before, so I know there is a debian changelog (debchange). Is there any way to see the debian changelog for any package that I have installed? Assume I don't have access to the deb source file for this package, and I don't have the deb file available. I am able to install extra packages if needed. | Alternatively, if the deb is also in the repository and you want to see the changelog of older versions, you can use apt-get changelog package to read the full changelog. For example, for openssl: apt-get changelog libssl1.0.0 | {
"source": [
"https://serverfault.com/questions/174737",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
174,909 | How can block files be mounted on osx? I tried hdiutil attach filename however this is terminating with hdiutil: attach failed - not recognized hdiutil only seems to work for iso/dmg images. On ubuntu the block file can easily be mounted with mount -o loop filename mountpoint Background: I used vdfuse ( http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0 ) to mount virtual disk files (e.g. vhd). vdfuse itself works fine and the partitions contained in the virtual disc appear as block files on the mount point. VHD File -> /my/mountpoint/Partition1, Partition2, ... On ubuntu the block files can be mounted via mount -o loop /my/mountpoint/Partition1 however the -o loop option does not exist on osx. | hdiutil was the right way to go, but a crucial ingredient is the parameters. The command hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount filename perfectly mounts the block file as a block file system. Afterwards you can mount any fs like Chris S pointed out. | {
"source": [
"https://serverfault.com/questions/174909",
"https://serverfault.com",
"https://serverfault.com/users/52273/"
]
} |
174,913 | Sorry for the newbie question, but I'm having issues with tasksel. I tried to install lamp-server, but php5 isn't installed. user@host:/var/www$ aptitude show apache2 | grep -i installed
State: installed
Automatically installed: no
user@host:/var/www$ aptitude show mysql-server | grep -i installed
State: installed
Automatically installed: no
user@host:/var/www$ aptitude show php5 | grep -i installed
State: not installed
This package is a metapackage that, when installed, guarantees that you have at
least one of the three server-side versions of the PHP5 interpreter installed.
user@host:/var/www$ In the above, I'm in /var/www because I created an info.php file to see if there was possibly some other version of php running I wasn't aware of, but there's not. During installation, I had to fill out items for mysql (root password), and if I try to connect to the webserver, I get apache's "It works!" page, but there's no sign of php anywhere, and I can't figure out why. I've never used tasksel before, so I could be misunderstanding something, but could someone let me know what I did wrong? I'm using Ubuntu 10.04 Server. | hdiutil was the right way to go, but a crucial ingredient is the parameters. The command hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount filename perfectly mounts the block file as a block file system. Afterwards you can mount any fs like Chris S pointed out. | {
"source": [
"https://serverfault.com/questions/174913",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
175,052 | I would like to know if there's a way to tell which local branch is tracking which remote branch in Git. I'm using one remote server, which is named "origin". | Using the example of my copy of Puppet checked out from the upstream Git repository on Github.com... $ git remote show origin
* remote origin
Fetch URL: git://github.com/reductivelabs/puppet.git
Push URL: git://github.com/reductivelabs/puppet.git
HEAD branch: master
Remote branches:
0.24.x tracked
0.25.x tracked
2.6.x tracked
master tracked
next tracked
primordial-ooze tracked
reins-on-a-horse tracked
testing tracked
testing-17-march tracked
testing-18-march tracked
testing-2-april tracked
testing-2-april-midday tracked
testing-20-march tracked
testing-21-march tracked
testing-24-march tracked
testing-26-march tracked
testing-29-march tracked
testing-31-march tracked
testing-5-april tracked
testing-9-april tracked
testing4268 tracked
Local branch configured for 'git pull':
master merges with remote master
Local ref configured for 'git push':
master pushes to master (up to date) Then if I were to execute the following: $ git checkout -b local_2.6 -t origin/2.6.x
Branch local_2.6 set up to track remote branch 2.6.x from origin.
Switched to a new branch 'local_2.6' And finally, when I re-run the git remote show origin command again, I will then see the following down near the bottom: Local branches configured for 'git pull':
local_2.6 merges with remote 2.6.x
master merges with remote master | {
"source": [
"https://serverfault.com/questions/175052",
"https://serverfault.com",
"https://serverfault.com/users/28583/"
]
} |
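Editor's note: git remote show origin queries the remote over the network. In any reasonably recent Git the same tracking information is available locally; a hedged sketch of the alternatives:
git branch -vv    # each local branch with its upstream (tracking) branch shown in brackets
git for-each-ref --format='%(refname:short) -> %(upstream:short)' refs/heads   # script-friendly form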
175,504 | I am about to move a server from one Ubuntu box to another. I'm not cloning the old box to the new; I'm creating a new system and will move data as needed. I want to install all the software that I have on the old box on the new one. Is there a simple way to find the history of all the "sudo apt-get install" commands I have given over time? That is, dpkg -l shows me all the packages that have been installed, but not which top-level package installed them. If there is a way for dpkg to give me the installing package, I can find the unique ones there; otherwise, I want something else to say "you installed these 24 packages". | The apt history is in /var/log/apt/history.log as said in a comment above. That said, this will not list packages that were installed manually, using dpkg or GUIs such as gdebi . To see all the packages that went through dpkg , you can look at /var/log/dpkg.log . | {
"source": [
"https://serverfault.com/questions/175504",
"https://serverfault.com",
"https://serverfault.com/users/11131/"
]
} |
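Editor's note: building on the logs mentioned above, a hedged pair of one-liners for pulling just the install events out of them, plus a newer apt feature; the log paths are the Debian/Ubuntu defaults.
grep " install " /var/log/dpkg.log                      # installs since the last log rotation
zgrep " install " /var/log/dpkg.log.*.gz 2>/dev/null    # installs recorded in older, rotated logs
apt-mark showmanual                                     # newer apt: packages marked as manually installed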
175,664 | On Windows XP/Server 2003, when I telnet to some remote host on a specified port, sometimes pressing ctrl+] doesn't quit after the connection is established. Is there any command that can quit instead of just closing the command line window? Thanks. EDIT:
But sometimes even after typing ctrl + ] , the telnet command line doesn't show up and I'm still stuck at a blank screen. | ctrl+] is an escape sequence that puts telnet into command mode; it doesn't terminate the session. If you type close after hitting ctrl+] , that will "close" the telnet session. | {
"source": [
"https://serverfault.com/questions/175664",
"https://serverfault.com",
"https://serverfault.com/users/32166/"
]
} |
175,803 | Some clients in the subnet have cached the IP with the old MAC address. I want them to update to the new value by sending an ARP broadcast; is that possible in Linux? | Yes, it's called "Unsolicited ARP" or "Gratuitous ARP". Check the manpage for arping for more details, but the syntax looks something like this: arping -U 192.168.1.101 If you're spoofing an address, you may need to run this first: echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind Finally, because of its spoofing ability, sending Unsolicited ARP packets is sometimes considered a "hostile" activity, and may be ignored, or might lead to being blocked by some third-party firewalls. | {
"source": [
"https://serverfault.com/questions/175803",
"https://serverfault.com",
"https://serverfault.com/users/52746/"
]
} |
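Editor's note: a hedged, concrete version of the arping invocation above, using the iputils arping (Thomas Habets' arping uses different flags). The interface eth0, the count, and the address are placeholders for your own setup.
ip addr add 192.168.1.101/24 dev eth0    # only needed if the address is not already bound locally
arping -c 3 -U -I eth0 192.168.1.101     # send three gratuitous ARP broadcasts for that address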