Dataset fields: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
17,624
The Windows backup utility comes with an option to back up the system state. What is this for? Does it have real utility? Can I recover that Windows installation on another machine?
The system state contains a number of items: the System Registry, COM+ Database, Certificate Services, Active Directory, SysVol, and the IIS Metabase. Some of these items are only included if the corresponding service is installed (AD, IIS, Certificates). (Details are online. TechNet: Server 2003/2003R2. MSDN: Server 2003/2003R2. TechNet forums: Server 2008. MSDN: Server 2008 and upwards.) If you need to restore a server, you will need this state to recover the registry, your AD domain, or your IIS sites. You can restore system state to the same server, or to another server with identical hardware. Microsoft does not support restoring system state to different hardware (see this article), although it is possible on some occasions and with some parts of the system state, for example the IIS metabase. In that case it is really a matter of try it and see, but it is not a recommended solution.
{ "source": [ "https://serverfault.com/questions/17624", "https://serverfault.com", "https://serverfault.com/users/5920/" ] }
17,710
If I have a Windows server (typically 2000, 2003 or 2008), is there a simple way to list all local directories shared on that server? I can find the shares themselves easily enough, but I would love a quick way to find the local directories they represent on disk. Thanks!
You can go into Computer Management (right-click My Computer, select Manage), expand the Shared Folders node and see a list of all shares, connected sessions and open files. For W2K8, you do this in Server Manager instead: Roles -> File Services -> Share and Storage Management; the Shares tab in the center of the window. For listing shares of remote servers, note that NET VIEW svr_name will only show user shares, not admin or hidden shares. Adding the /all switch at the end will show those as well (on W2K8).

C:\>net view sx1
Shared resources at sx1

Share name  Type  Used as  Comment
--------------------------------------------
SHARE_CIFS  Disk
The command completed successfully.

C:\>net view sx1 /all
Shared resources at sx1

Share name  Type  Used as  Comment
--------------------------------------------
ADMIN$      Disk           Remote Admin
SHARE_CIFS  Disk
C$          Disk           Default share
IPC$        IPC            Remote IPC
The command completed successfully.
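If you just want a quick text dump of share names and the local folders behind them, a couple of built-in commands can do it from a prompt on the server itself; this is a minimal sketch, not part of the original answer, and the exact output columns vary by Windows version:

net share
wmic share get name,path,description

net share run locally lists every share (including hidden ones) together with the folder it maps to, and the wmic query pulls the same information from WMI, which can also be pointed at a remote machine with the /node: switch.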
{ "source": [ "https://serverfault.com/questions/17710", "https://serverfault.com", "https://serverfault.com/users/7410/" ] }
17,718
How do I change swap partition in Linux? If I currently use /dev/hda3 for swap, and I rather would like to use /dev/hda4, which steps should I go through?
Do it as root:

swapoff /dev/hda3
mkswap /dev/hda4
swapon /dev/hda4

and edit the swap entry in /etc/fstab.
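For completeness, here is roughly what the corresponding /etc/fstab change might look like; the device names are taken from the example above and are assumptions, so adapt them to your system (many distributions use UUID= or LABEL= instead of raw device names):

# old entry, remove or comment out
# /dev/hda3   none   swap   sw   0   0
# new swap partition
/dev/hda4     none   swap   sw   0   0

After editing, swapon -s (or cat /proc/swaps) confirms which swap devices are active.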
{ "source": [ "https://serverfault.com/questions/17718", "https://serverfault.com", "https://serverfault.com/users/7494/" ] }
17,814
I'd like to allow certain users to su to another user account without having to know that account's password, but not allow access to any other user account (i.e. root). For instance, I'd like to allow Tom the DBA to su to the oracle user, but not to the tomcat user or root. I imagine this could be done with the /etc/sudoers file - is it possible? If so, how?
Yes, this is possible. In /etc/sudoers the item immediately following the equals is the user that the command will be allowed to execute as.

tom ALL=(oracle) /bin/chown tom *

The user (tom) can type:

sudo -u oracle /bin/chown tom /home/oracle/oraclefile
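If the goal is to let Tom get a full shell as oracle rather than run one specific command, a sudoers entry along these lines is a common approach; this is a sketch using the user and account names from the question, not the original answer's exact rule:

# /etc/sudoers (always edit with visudo)
tom    ALL=(oracle)    NOPASSWD: ALL

Tom could then run sudo -u oracle -i to get an interactive login shell as oracle, while still having no way to become root or tomcat.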
{ "source": [ "https://serverfault.com/questions/17814", "https://serverfault.com", "https://serverfault.com/users/2304/" ] }
17,870
Every night I get hundreds, sometimes thousands, of failed SSH logins on my RedHat 4 server. Because of firewall restrictions at remote sites, I need to run on the standard port. Is there anything I should be doing to block this? I notice that many come from the same IP address. Shouldn't it be stopping those after a while?
You can use iptables to rate-limit new incoming connections to the SSH port. I'd have to see your entire iptables configuration in order to give you a turnkey solution, but you're basically talking about adding rules like:

iptables -A INPUT -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 5 --name SSH --rsource -j DROP
iptables -A INPUT -p tcp --dport 22 -m recent --set --name SSH --rsource -j ACCEPT

These rules assume that you're accepting ESTABLISHED connections earlier in the table (so that only new connections will hit these rules). New SSH connections will hit these rules and be marked. Within 60 seconds, 5 attempts from a single IP address will result in new incoming connections from that IP being dropped. This has worked well for me. Edit: I prefer this method to "fail2ban" because no additional software has to be installed and it happens entirely in kernel mode. It doesn't handle parsing log files like "fail2ban" will, but if your problem is only with SSH I wouldn't use something user-mode that requires software installation and is more complex.
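As a sketch of the rule the answer assumes is already in place (accepting established traffic before the rate-limit rules are reached), it would be something like this, inserted ahead of the two rules above; the exact form depends on your existing ruleset:

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

On newer kernels the equivalent is -m conntrack --ctstate ESTABLISHED,RELATED. Without it, every packet of an ongoing SSH session would keep hitting the recent match and could eventually trip the hitcount.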
{ "source": [ "https://serverfault.com/questions/17870", "https://serverfault.com", "https://serverfault.com/users/7580/" ] }
17,931
I have heard that pssh and clusterssh are two popular ones, but I thought I would open it to discussion here and see what the community's experiences with these tools were? What are the gotchas? Any decent hacks or use cases?
I have used pssh and it's easy and works quite well. It's really great for quick queries. If you find yourself managing servers I'd suggest something more robust and in a slightly different realm (configuration management) such as Puppet or CFEngine.
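A minimal pssh invocation looks roughly like this; the binary may be installed as pssh or parallel-ssh depending on the distribution, and the host file path is an assumption:

# hosts.txt contains one hostname per line
parallel-ssh -h hosts.txt -l root -i 'uptime'

The -i flag prints each host's output inline, which is handy for the quick ad-hoc queries mentioned above.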
{ "source": [ "https://serverfault.com/questions/17931", "https://serverfault.com", "https://serverfault.com/users/7246/" ] }
17,932
Steve recommends running the following command before you start Emacs:

stty erase ^\?

When I run it I get:

stty: illegal option -- Backups
usage: stty [-a|-e|-g] [-f file] [options]

(Steve's blog post.) He notes that the ^\? dorks the Delete key by making it send a ^H, but enables the C-h sequence in Emacs, and that C-Delete does a normal backward-delete-char, so just remember that when you're backspacing in a terminal, hold the control key down. Can anyone explain what Steve's command does?
{ "source": [ "https://serverfault.com/questions/17932", "https://serverfault.com", "https://serverfault.com/users/1944/" ] }
17,990
What I want is to configure a computer at home running Windows and use it as a TCP proxy that accepts connections on port 80 and routes the packets to port 23 on another server on the Internet.
You can use the built-in netsh portproxy. In your case:

netsh interface portproxy add v4tov4 listenport=80 connectaddress=ip-of-server-on-internet connectport=23 listenaddress=ip-of-windows-machine protocol=tcp

You'll need Administrator privileges. No need to install additional software! You are required to install IPv6 on your operating system before using this feature. On Vista and later this is a non-issue as IPv6 comes installed by default, but on XP/2003 you have to open up your network interface property panel and add the Microsoft TCP/IP version 6 protocol first.
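For day-to-day management of that rule, these companion netsh commands are useful; shown as a sketch with the same placeholder addresses as above:

netsh interface portproxy show all
netsh interface portproxy delete v4tov4 listenport=80 listenaddress=ip-of-windows-machine

The first lists every configured proxy entry, the second removes the one created above.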
{ "source": [ "https://serverfault.com/questions/17990", "https://serverfault.com", "https://serverfault.com/users/3262/" ] }
18,000
Let us say we own the zone mywebservice.com. I would like each of my customers to get their own subdomain, such as customer.mywebservice.com. customer.mywebservice.com needs to be a CNAME to a given server offsite. Since that site manages its own equipment and can change addresses at any point in time, the CNAME is a requirement. People also need to be able to send email to [email protected], which would require a simple MX record. However, and this is where I'd like some guidance: According to RFC 1034 : If a CNAME RR is present at a node, no other data should be present; this ensures that the data for a canonical name and its aliases cannot be different. I have also verified that my DNS server will refuse to serve up anything but a CNAME for hosts that use them. So, it seems that I may have a losing situation. If I want to use the MX record, I need to use an A instead of a CNAME. Can anyone think of any workarounds? Thanks!
Unfortunately, what you're running into is a limitation of the DNS specification. Having an MX record for the same hostname as is defined as a CNAME record will fail in most DNS server implementations. Some older DNS servers will allow this, but they have been mostly phased out in favor of newer, more secure implementations. Instead of using CNAME records, you will need to use 'A' records with the IP addresses of the customer sites directly instead of aliasing the names.
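In zone-file terms, the workaround described above looks roughly like this; the IP address and mail host are placeholders, not values from the question:

; customer.mywebservice.com zone data (sketch)
customer    IN  A     203.0.113.10          ; IP of the offsite server
customer    IN  MX 10 mail.mywebservice.com.

The trade-off is that if the offsite provider changes its address, the A record has to be updated manually (or by some automated check), which is exactly what the CNAME would have avoided.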
{ "source": [ "https://serverfault.com/questions/18000", "https://serverfault.com", "https://serverfault.com/users/1246/" ] }
18,113
When I edit my bind dns records, I need to add a trailing period for it to work. What is the point of this? How come when I use everydns.net, they do not require me to add a trailing period? Is this an implementation quirk?
DNS itself has a root zone. This zone is literally called ".". BIND requires that you fully qualify a DNS name (this includes the . of the root zone). Other UIs simplify this by assuming the root zone for you. Within BIND, you may define an ORIGIN that will be automatically appended if you do not specify an FQDN (Fully Qualified Domain Name, including the trailing .). Alnitak has an excellent example of the syntax and various uses of this.
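A small zone-file sketch of the difference (example.com and the addresses are placeholders):

$ORIGIN example.com.
www              IN A 192.0.2.10       ; relative name, becomes www.example.com.
ftp.example.com. IN A 192.0.2.11       ; fully qualified, the trailing dot stops ORIGIN being appended
bad  IN CNAME host.example.com         ; missing dot: expands to host.example.com.example.com.

The last line shows the classic mistake the trailing period exists to prevent.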
{ "source": [ "https://serverfault.com/questions/18113", "https://serverfault.com", "https://serverfault.com/users/1322/" ] }
18,125
I need to transfer a huge number of MP3s between two servers (Ubuntu). By huge I mean about a million files which are on average 300K each. I tried scp but it would have taken about a week (at about 500 KB/s). If I transfer a single file by HTTP, I get 9-10 MB/s, but I don't know how to transfer all of them that way. Is there a way to transfer all of them quickly?
I would recommend tar. When the file trees are already similar, rsync performs very well. However, since rsync will do multiple analysis passes on each file, and then copy the changes, it is much slower than tar for the initial copy. This command will likely do what you want. It will copy the files between the machines, as well as preserve both permissions and user/group ownerships.

tar -c /path/to/dir | ssh remote_server 'tar -xvf - -C /absolute/path/to/remotedir'

As per Mackintosh's comment below, this is the command you would use for rsync:

rsync -avW -e ssh /path/to/dir/ remote_server:/path/to/remotedir
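If you want to squeeze a bit more out of the link, a compressed variant of the same pipeline is a common tweak; whether it helps depends on how compressible the data is (MP3s barely are) and on CPU speed, so treat this as an optional sketch rather than part of the answer above:

tar -cz /path/to/dir | ssh remote_server 'tar -xz -C /absolute/path/to/remotedir'

For already-compressed media like MP3, skipping -z is usually the better trade.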
{ "source": [ "https://serverfault.com/questions/18125", "https://serverfault.com", "https://serverfault.com/users/1757/" ] }
18,309
At work, I'm the only IT guy (do it all, and do it now type guy) for the last 10 years. If I ever got hit by a bus they would be totally screwed. I've mentioned it several times to management/president type people, yet they ignore me. Too bad for them. What can I do to alleviate their pain? (Or should I even care?) (Yes, this should be a community wiki, but, I don't see the checkbox... maybe I don't have enough rep.)
Document the heck out of everything. There was a thread on Slashdot recently about starting documentation, which inspired me to write down my thoughts about documentation. My key points were:

Principle #1: It Is Never Done
Documentation is an on-going effort that will always lag behind what is in production. Changes are made ad-hoc, things moved around or discontinued or put into service at random. Documentation will never catch up. You have to sell the people paying the bills on the value of spending time (and therefore, money) on keeping the running documentation up to date. Frequently those conversations go like this: "remember when I had to spend $TIME figuring out how $THING was broken? Well when I was finished, there was this tech note detailing $THING, so that the next guy to come along won't have to figure it all out." You have to do it, even though you will never finish.

Principle #2: The Only Thing Worse Than No Documentation Is Wrong Documentation
This is more of a truism than a principle. Documentation can lull you into the false sense that something is in a known state and that if something goes wrong you can therefore have a running start at fixing it. It is important to acknowledge this problem.

Principle #3: You Are Writing Documentation For Your Successor
Odds are 95% of anything you do document you will never have to refer to again. Documentation is a collection of wisdom for the future, not for you. So you have to assume that your audience knows little or nothing about the specifics of how things are the way they are. And there will be a successor. I don't know about you, but I don't plan to be in these specific environments for the rest of my life. Opportunities come and go, and when they come, sometimes you go. But life goes on behind you, and the smoother you can make life for your successor the better. Otherwise you might have a collection of former customers who quietly say unflattering things about you. I like to say that it is the same 50 guys working everywhere in IT in Ottawa because you keep running into them everywhere. Helping your successor might open doors for you in the future. Now to a certain extent there is always a degree of "blame the previous guy" when trouble comes up. That is part of the business. I've done it myself. But on several occasions when I had blasted the previous guy as some kind of moron, I have learned otherwise that he really had his act together and knew more about what was going on than I did at the time.

Principle #4: "Why" is often more important than "How"
When looking at a system most of us start thinking things like: why the hell is this like this? There are almost always very specific reasons for the configuration choices made. In these circumstances, the "Why" dictates the "How", and you have to make sure that the reader understands the specific problems being solved when examining the smoking remains of your solution.

Principle #5: It has to be easy or you won't do it
This means you have to be very aware of your tools as well as those who are going to use your tools. Keeping things up-to-date has to be easy. If you have to make any kind of effort, then you will find excuses to avoid doing it when it is best done, which is immediately after a change. If your tools are not easy for others to use, then they won't use it. This can be especially crippling in a team environment, since the larger the team gets the more likely you will encounter a team member who does not like your choice of tools.
Personally, I like a wiki for documents. However the problem is that a wiki does not force a structure on you, so the structure must be imposed from outside. This always leads to conflict somewhere as somebody else has a better/different idea. In some places I've used Word and Visio documents "published" to PDF, with the "latest" PDF being considered authoritative. This is good in that you then have a collection that you can hand to your employer/successor. The PDFs, if properly dated, can provide a historical record of what happened, although one which is not easy to navigate through. It is bad in that I don't like Word or Visio and have been forced to get a basic understanding of these tools in order to effectively communicate the ideas. My current employer is toying with the idea of Word documents in a Sharepoint portal. We'll just have to see how far we get there
{ "source": [ "https://serverfault.com/questions/18309", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
18,339
If you had to explain Active Directory to someone how would you explain it?
I'm glossing over quite a bit here, of course, but it's a decent semi-technical summary that would be suitable for communicating to others who are not familiar with Active Directory itself, but generally familiar with computers and the issues associated with authentication and authorization. Active Directory is, at its heart, a database management system. This database can be replicated amongst an arbitrary number of server computers (called Domain Controllers) in a multi-master manner (meaning that changes can be made to each independent copy, and eventually they'll be replicated to all the other copies). The Active Directory database in an enterprise can be broken up into units of replication called "Domains". The system of replication between server computers can be configured in a very flexible manner to permit replication even in the face of failures of connectivity between domain controller computers, and to replicate efficiently between locations that might be connected with low-bandwidth WAN connectivity. Windows uses the Active Directory as a repository for configuration information. Chief amongst these uses is the storage of user logon credentials (usernames / password hashes) such that computers can be configured to refer to this database to provide a centralized single sign-on capability for large numbers of machines (called "members" of the "Domain"). Permissions to access resources hosted by servers that are members of an Active Directory domain can be controlled through explicit naming of user accounts from the Active Directory domain in permissions called Access Control Lists (ACLs), or by creating logical groupings of user accounts into Security Groups. The information about the names and membership of these security groups are stored in the Active Directory. The ability to modify records stored in the Active Directory database is controlled through security permissions that, themselves, refer to the Active Directory database. In this way, enterprises can provide "Delegation of Control" functionality to allow certain authorized users (or members of security groups) to perform administrative functions on the Active Directory of a limited and defined scope. This would allow, for example, a helpdesk employee to change the password of another user, but not to place his own account into security groups that might grant him permission to access sensitive resources. Versions of the Windows operating system also can perform installations of software, make modifications to the user's environment (desktop, Start menu, behaviour of application programs, etc) by using the Group Policy. The back-end storage of the data that drives this Group Policy system is stored in Active Directory, and thus is given replication and security functionality. Finally, other software applications, both from Microsoft and from third-parties, store additional configuration information in the Active Directory database. Microsoft Exchange Server, for example, makes heavy use of the Active Directory. Applications use Active Directory to gain the benefits of replication, security, and delegation of control described above. Whew! Not too bad, I don't think, for a stream of consciousness! Super short answer: AD is a database to store user logon and group information, and configuration information that drives group policy and other application software.
{ "source": [ "https://serverfault.com/questions/18339", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
18,748
I have an internal network with a DNS server running BIND, connected to the internet through a single gateway. My domain "example.com" is managed by an external DNS provider. Some of the entries in that domain, say "host1.example.com" and "host2.example.com", as well as the top-level entry "example.com", point to the public IP address of the gateway. I would like hosts located on the internal network to resolve "host1.example.com", "host2.example.com" and "example.com" to internal IP addresses instead of that of the gateway. Other hosts like "otherhost.example.com" should still be resolved by the external DNS provider. I have succeeded in doing that for the host1 and host2 entries, by defining two single-entry zones in BIND for "host1.example.com" and "host2.example.com". However, if I add a zone for "example.com", all queries for that domain are resolved by my local DNS server, and e.g. querying "otherhost.example.com" results in an error. Is it possible to configure BIND to override only some entries of a domain, and to resolve the rest recursively?
The best method is via the response policy zone in Bind 9.8.1 or newer. It allows you to override single records in arbitrary zones (and there's no need to create a whole subdomain for that, only the single record you want to change), it allows you to override CNAMEs, etc. Other solutions such as Unbound cannot override CNAMEs. https://www.redpill-linpro.com/sysadvent/2015/12/08/dns-rpz.html

EDIT: Let's do this properly then. I will document what I've done based on the tutorial linked above. My OS is Raspbian 4.4 for Raspberry Pi, but the technique should work without any changes on Debian and Ubuntu, or with minimal changes on other platforms.

Go to where your Bind config files are kept on your system - here it's in /etc/bind. Create in there a file called db.rpz with the following contents:

$TTL 60
@            IN    SOA  localhost. root.localhost.  (
                          2015112501   ; serial
                          1h           ; refresh
                          30m          ; retry
                          1w           ; expiry
                          30m)         ; minimum
             IN    NS    localhost.
localhost    A     127.0.0.1
www.some-website.com    A        127.0.0.1
www.other-website.com   CNAME    fake-hostname.com.

What does it do?
- it overrides the IP address for www.some-website.com with the fake address 127.0.0.1, effectively sending all traffic for that site to the loopback address
- it sends traffic for www.other-website.com to another site called fake-hostname.com
Anything that could go in a Bind zone file you can use here.

To activate these changes there are a few more steps. Edit named.conf.local and add this section:

zone "rpz" {
  type master;
  file "/etc/bind/db.rpz";
};

The tutorial linked above tells you to add more stuff to zone "rpz" { } but that's not necessary in simple setups - what I've shown here is the minimum to make it work on your local resolver.

Edit named.conf.options and somewhere in the options { } section add the response-policy option:

options {
  // bunch
  // of
  // stuff
  // please
  // ignore
  response-policy { zone "rpz"; };
}

Now restart Bind:

service bind9 restart

That's it. The nameserver should begin overriding those records now. If you need to make changes, just edit db.rpz, then restart Bind again.

Bonus: if you want to log DNS queries to syslog, so you can keep an eye on the proceedings, edit named.conf.local and make sure there's a logging section that includes these statements:

logging {
  // stuff
  // already
  // there
  channel my_syslog {
    syslog daemon;
    severity info;
  };
  category queries { my_syslog; };
};

Restart Bind again and that's it. Test it on the machine running Bind:

dig @127.0.0.1 www.other-website.com. any

If you run dig on a different machine just use @the-ip-address-of-Bind-server instead of @127.0.0.1

I've used this technique with great success to override the CNAME for a website I was working on, sending it to a new AWS load balancer that I was just testing. A Raspberry Pi was used to run Bind, and the RPi was also configured to function as a WiFi router - so by connecting devices to the SSID running on the RPi I would get the DNS overrides I needed for testing.
{ "source": [ "https://serverfault.com/questions/18748", "https://serverfault.com", "https://serverfault.com/users/6195/" ] }
18,761
I want to change which port sshd uses on a Mac server. For example, let's say from port 22 to port 32. Editing /etc/sshd_config does not seem to work. Does anyone know how to change it? I'd prefer a method that's compatible with all OSX versions (or as many as possible, at least).
Every previous answer is working (as Google suggests too), but they are dirty and inelegant. The right way to change the listening port for a launchd-handled service on Mac OS X is to make the changes in the dedicated keys available in ssh.plist. So the solution is as simple as using the port number instead of the service name. An excerpt from my edited /System/Library/LaunchDaemons/ssh.plist:

<key>Sockets</key>
<dict>
    <key>Listeners</key>
    <dict>
        <key>SockServiceName</key>
        <string>22022</string>
        <key>SockFamily</key>
        <string>IPv4</string>
        <key>Bonjour</key>
        <array>
            <string>22022</string>
        </array>
    </dict>
</dict>

Note: To be able to edit this file on El Capitan, Sierra and probably future versions as well, you need to disable SIP (System Integrity Protection). See How do I disable System Integrity Protection (SIP). For Catalina, even after disabling SIP, the volumes are unwritable. Use sudo mount -uw / in order to enable writing to /System. Do the change, then restore SIP and reboot. The above edit will also force sshd to listen only over IPv4. After making any changes to ssh.plist, the file must be reloaded as follows:

sudo launchctl unload /System/Library/LaunchDaemons/ssh.plist
sudo launchctl load /System/Library/LaunchDaemons/ssh.plist

Note that using launchctl stop ... and launchctl start ... will NOT reload this file. The man page with more information can be found by typing man launchd.plist or using this link.
{ "source": [ "https://serverfault.com/questions/18761", "https://serverfault.com", "https://serverfault.com/users/10421/" ] }
18,872
Is there a one-liner that will zip/unzip files (*.zip) in PowerShell?
This is how you can do it purely from PowerShell without any external tools. This unzips a file called test.zip into the current working directory:

$shell_app = new-object -com shell.application
$filename = "test.zip"
$zip_file = $shell_app.namespace((Get-Location).Path + "\$filename")
$destination = $shell_app.namespace((Get-Location).Path)
$destination.Copyhere($zip_file.items())
{ "source": [ "https://serverfault.com/questions/18872", "https://serverfault.com", "https://serverfault.com/users/4113/" ] }
19,323
I'm planning to deploy some kiosk computers and would like to leave them with a small pendrive as boot disk, keeping the rest at an easy to back up server, ala LTSP . Right now I'm pondering two options. An NFSed /home/, or a local copy of ~/ copied on login, rsynced on logout. My fears are that working with files might get too slow, or my network might get clogged .
I use NFS for my home directories in our production environment. There are a couple of tricks.

Don't NFS mount to /home - that way you can have a local user that allows you in in the event that the NFS server goes down. We mount to /mnt/nfs/home

Use soft mounts and a very short timeout - this will prevent processes from blocking forever.

Use the automounter. This will keep resource usage down and also means that you don't need to worry about restarting services when the NFS server comes up if it goes down for some reason.

auto.master:
+auto.master
/mnt/nfs /etc/auto.home --timeout=300

auto.home:
home -rw,soft,timeo=5,intr home.bzzprod.lan:/home

Use a single sign-on system so you don't run into permission related issues. I have an OpenLDAP server.
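To sanity-check the setup from a client, something like the following works; the server name is taken from the example maps above and the rest is a sketch:

# list what the server exports
showmount -e home.bzzprod.lan
# trigger the automounter and confirm the mount options
ls /mnt/nfs/home
mount | grep /mnt/nfs/home

Seeing soft,timeo=5,intr in the mount output confirms the auto.home map is the one being used.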
{ "source": [ "https://serverfault.com/questions/19323", "https://serverfault.com", "https://serverfault.com/users/7949/" ] }
19,360
I have heard many times that HTTPS should be used for transferring private data, since HTTP is vulnerable to eavesdroppers. But in practical terms, just who is capable of eavesdropping on a given surfer's HTTP traffic? Their ISP? Other people on the same LAN? Anyone who knows their IP address?
Easy - just follow the cable from your PC to the server. This is maybe specific to Austria, but it probably looks similar all over the world. Let's assume we've got a DSL user:

PC -> Ethernet -> Modem
Anybody with access to the local infrastructure can sniff the traffic.

Modem -> 2-wire copper -> DSLAM
Anybody with access to the copper infrastructure and equipment which is able to decode the data can eavesdrop. Most of this wiring is relatively unprotected and easy to access if you know where to look, but to actually decode the data you'd probably need some very specific equipment.

DSLAM -> ISP infrastructure -> ISP core routers
Most DSLAMs are connected via fibre to some sort of fibre ring/MAN to routers of the ISP. There have been stories in Germany where supposedly three-letter agencies from the U.S. of A. eavesdropped on traffic of a Metropolitan Area Network. There are off-the-shelf devices which can do this, you just need the right budget, intent and knowledge of the local infrastructure.

ISP core routers -> BGP -> Target AS
Given that the destination server is not in the same Autonomous System as the user, the traffic has to be sent over the "Internet". If you're going over the Internet, to use a quote from Snatch, "all bets are off". There are so many nooks and crannies where a malicious operator could attach themselves that you're best off assuming that all your traffic is going to be read. The DHS (or maybe some other agency) actively eavesdropped on backbone infrastructure in the USA at this level.

Target AS border router -> ISP infrastructure -> Housing center
See above.

Housing center router -> Switches -> Server
This is how quite a few sites have already been attacked. Ethernet offers no protection for hosts which are in the same (V)LAN/broadcast domain, so any host can try ARP spoofing/poisoning to impersonate another server. This means that all traffic for a given server can be tunneled through a machine in the same (V)LAN.
{ "source": [ "https://serverfault.com/questions/19360", "https://serverfault.com", "https://serverfault.com/users/6129/" ] }
19,367
I need to write a script that will build my server from a fresh Ubuntu server install. Among things like Apache and PHP it needs to install MySQL. The only problem here is that when I install MySQL with apt-get, at some point the installation will bring up a dialog that allows me to type my root password. I.e., human interaction is required. How can I bypass this screen during installation and avoid human interaction while still using apt-get to install MySQL?
You need to preseed the debconf database. debconf needs to be installed first before you try this. The version of MySQL and Ubuntu could change the lines:

echo mysql-server mysql-server/root_password select PASSWORD | debconf-set-selections
echo mysql-server mysql-server/root_password_again select PASSWORD | debconf-set-selections

For example you may need this instead:

echo mysql-server-5.0 mysql-server/root_password password PASSWORD | debconf-set-selections
echo mysql-server-5.0 mysql-server/root_password_again password PASSWORD | debconf-set-selections
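Once the selections are seeded, the install itself can be forced fully non-interactive as well; a minimal sketch (the package name may differ by release):

export DEBIAN_FRONTEND=noninteractive
apt-get -y install mysql-server

Setting DEBIAN_FRONTEND=noninteractive stops debconf from ever raising a dialog, and -y answers the usual apt prompts.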
{ "source": [ "https://serverfault.com/questions/19367", "https://serverfault.com", "https://serverfault.com/users/1205/" ] }
19,561
We're running PHP 5.2.5 on an IIS 7 server and we're having problems making PHP errors visible... At the moment whenever we have a PHP error the server sends back a 500 error with the message "The page cannot be displayed because an internal server error has occurred." This might be a good setting for production websites but it's rather annoying on a development server... ;-) I have tried configuring php.ini to display errors to the screen as well as log them to a specific folder, but it seems that the server catches all errors before and prevents any handling by PHP... Does someone know what we have to do to make IIS display PHP errors on screen? Any links, tips or tutorials on the subject would be appreciated!
Just to double check, do you have logging set to error_reporting = E_ALL , and display_errors = On in your php.ini ? Usually this is enough to display these errors in IIS 7. Next, take a look at your IIS settings, as it may be set to only show error messages locally. In the IIS 7 configuration editor this is under system.webServer->httpErrors. You will need to change errorMode to Detailed from DetailedLocalOnly. Obviously this now means anyone browsing your site will be able to see the error. Alternatively, if you want to keep them local you can use Remote Desktop to log in to the server and run the app from there, if you can.
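If you prefer to script the IIS change instead of clicking through the configuration editor, appcmd can set it; this is a sketch from memory, so double-check the section name against your IIS version:

%windir%\system32\inetsrv\appcmd set config /section:httpErrors /errorMode:Detailed

php.ini itself just needs the two directives mentioned above (error_reporting = E_ALL and display_errors = On), followed by an IIS/FastCGI restart so the new settings are picked up.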
{ "source": [ "https://serverfault.com/questions/19561", "https://serverfault.com", "https://serverfault.com/users/2458/" ] }
19,563
I am ssh'ing into a server (SLES 10 sp2) that does not have access to the internet. I need to run updates and install new software on this server, preferably using Yast. So my idea was: Create a proxy using ssh to a box that has access to the outside. Setup Yast to use this proxy. The ssh command I run on the isolated server looks as follows: ssh -D 9999 username@ip-of-box-with-internet-access In Yast I go to Network Service > Proxy and enter the following as the HTTP Proxy URL: http://localhost:9999 When I go to Test proxy settings it fails. I suspect that Yast does not know it is a SOCKS5 proxy. Could anyone tell me how I can setup Yast to use a proxy created with ssh? Any help would be appreciated!
{ "source": [ "https://serverfault.com/questions/19563", "https://serverfault.com", "https://serverfault.com/users/8052/" ] }
19,611
Where do I go to disable the password complexity policy for the domain? I've logged onto the domain controller (Windows Server 2008) and found the option in local policies which is of course locked from any changes. However I can't find the same sort of policies in the group policy manager. Which nodes do I have to expand out to find it?
You're looking to change the password complexity setting you found in the "Default Domain Policy", not the local group policy. Then do a "gpupdate" and you'll see the change take effect. Open Group Policy Management Console (Start / Run / GPMC.MSC), open the Domain, and right-click and Edit the "Default Domain Policy". Then dig into the "Computer Configuration", "Windows Settings", "Security Settings", "Account Policies", and modify the password complexity requirements setting. Editing the "Default Domain Policy" is definitely a quick-and-dirty thing to do. The better thing to do, once you get a better handle on group policy management, would be to return the default back to default settings and make a new GPO overriding the default with the settings you want. To get you by fast, though, editing the default isn't going to hurt you.
{ "source": [ "https://serverfault.com/questions/19611", "https://serverfault.com", "https://serverfault.com/users/8079/" ] }
19,634
Is there a way to connect to an ssh session that was disconnected? We are having problems with our network connection to a remote site that we are working on separately; however, in the mean time we experience a large number of disconnects due to lost packets while connected to servers at the remote location. Many times the session stays active for a while, and sometimes it happens to be in the middle of some action (file editing, running some process, etc...) that I need to get back to rather than restart if possible.
UPDATE: For an actual answer see zero_r's answer below. This isn't an answer, but a workaround. Use screen. When you first log in, run screen. You get another shell; run commands in that. If you're disconnected, the screen process keeps the terminal alive so that your shell and the processes it is running don't fall over. When you reconnect, run 'screen -r' to resume. There's a bunch more to configuring and using screen, but the above should work around your problem.
{ "source": [ "https://serverfault.com/questions/19634", "https://serverfault.com", "https://serverfault.com/users/2273/" ] }
19,935
There's something about Windows memory management and its relationship to Task Manager that I don't understand and I'm hoping someone can enlighten me. If I'm running a virtual machine (doesn't matter if it's Virtual PC 2007, Virtual Server 2005, or VirtualBox since they act the same way) and bring up Task Manager, I can see on the Processes tab some entries for the VM but the memory values are fairly small (around 30 MB). Obviously it's not including the memory actually being consumed by the VM itself. None of the various memory-related columns you can make visible appear to work differently. The memory usage on the Performance tab appears to be correct for total memory usage including the VM. So my question is why doesn't the VM's memory usage (which will be hundreds of MB) show up on the Processes tab?
VirtualPC, HyperV and probably similar products use something called driver locked memory, which is not visible in Process Explorer, Task Manager, etc. RAMMap will show you driver locked memory used by a process.
{ "source": [ "https://serverfault.com/questions/19935", "https://serverfault.com", "https://serverfault.com/users/4578/" ] }
20,106
I'm in the process of reviewing every SQL statement that an application makes against the database, for performance reasons. Is there an easy way to log all statements that are executed by the PostgreSQL database server? Thanks.
The config option you're looking for is log_statement = 'all' (if you just want the statements), or log_min_duration_statement = <some number of milliseconds> if you're just after "slow" queries (for some value of "slow"). See http://www.postgresql.org/docs/current/static/runtime-config-logging.html for more details on logging configuration.
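A postgresql.conf fragment tying it together might look like this; the 200 ms threshold is an arbitrary example, not a recommendation:

# log every statement
log_statement = 'all'
# ...or only statements slower than 200 ms
log_min_duration_statement = 200

Changing these only requires a configuration reload (pg_ctl reload, or SELECT pg_reload_conf() on newer versions), not a full server restart.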
{ "source": [ "https://serverfault.com/questions/20106", "https://serverfault.com", "https://serverfault.com/users/8242/" ] }
20,383
We were a little surprised to see this on our Cacti graphs for June 4 web traffic: We ran Log Parser on our IIS logs and it turns out this was a perfect storm of Yahoo and Google bots indexing us.. in that 3 hour period, we saw 287k hits from 3 different Google IPs, plus 104k from Yahoo. Ouch? While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E , and we're thinking about putting that in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use , ideally in real time? Perhaps some bit of hardware or open-source software we can put in front of our web servers? We are mostly a Windows shop but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?
If your PIX is running version 7.2 or greater of the OS, or can be upgraded to it, then you can implement QoS policies at the firewall level. In particular this allows you to shape traffic and should allow you to limit the bandwidth used by bots. Cisco have a good guide to this here.
{ "source": [ "https://serverfault.com/questions/20383", "https://serverfault.com", "https://serverfault.com/users/1/" ] }
20,652
Do you keep the counters ON during heavy production loads? Which performance counters do you find useful for ASP.NET/IIS 6.0 websites?
I've never had problems running performance counters on my servers. Microsoft suggests watching the following counters for IIS:

Memory\Pages/sec
Memory\Available Bytes
Memory\Committed Bytes
Memory\Pool Nonpaged Bytes
Processor\% Processor Time
Processor\Interrupts/sec
System\Processor Queue Length
LogicalDisk\% Disk Time
PhysicalDisk\% Disk Time
LogicalDisk\Avg. Disk Queue Length
PhysicalDisk\Avg. Disk Queue Length
LogicalDisk\Avg. Disk Bytes/Transfer
PhysicalDisk\Avg. Disk Bytes/Transfer
System\Context Switches/sec
Web Service\Bytes Total/sec
Web Service\Total Method Requests/sec
Web Service\Current Connections
Web Service Cache\File Cache Hits %
Web Service Cache\Kernel: URI Cache Misses
Web Service Cache\Kernel: URI Cache Hits %

Specifically for ASP.NET I would watch:

ASP.NET\Application Restarts
ASP.NET\Requests Queued
ASP.NET\Worker Process Restarts
ASP.NET Applications\Errors Total
ASP.NET Applications\Requests/Sec
ASP.NET Applications\Pipeline Instance Count
.NET CLR Exceptions\# of Exceps Thrown
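If you want to capture a few of these from the command line rather than through the Perfmon UI, typeperf (built into Windows) can do it; the counter paths below are just examples of the syntax:

typeperf "\Processor(_Total)\% Processor Time" "\ASP.NET\Requests Queued" -si 5 -sc 60

That samples the two counters every 5 seconds, 60 times, and writes CSV to the console (add -o file.csv to save it instead).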
{ "source": [ "https://serverfault.com/questions/20652", "https://serverfault.com", "https://serverfault.com/users/6230/" ] }
20,702
I would like to be able to create new users in Mac OS X 10.5 remotely after ssh'ing into the machine. How do I do this?
Use the dscl command. This example would create the user "luser", like so:

dscl . -create /Users/luser
dscl . -create /Users/luser UserShell /bin/bash
dscl . -create /Users/luser RealName "Lucius Q. User"
dscl . -create /Users/luser UniqueID "1010"
dscl . -create /Users/luser PrimaryGroupID 80
dscl . -create /Users/luser NFSHomeDirectory /Users/luser

You can then use passwd to change the user's password, or use:

dscl . -passwd /Users/luser password

You'll have to create /Users/luser for the user's home directory and change ownership so the user can access it, and be sure that the UniqueID is in fact unique. This line will add the user to the administrator's group:

dscl . -append /Groups/admin GroupMembership luser
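For the home-directory creation mentioned above, something along these lines works; treat it as a sketch, since the group name and tool behaviour can vary between OS X releases:

mkdir /Users/luser
chown -R luser:staff /Users/luser
# or let the system populate it from the user template
createhomedir -c -u luser

createhomedir copies the standard home-folder skeleton into place, which a bare mkdir does not.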
{ "source": [ "https://serverfault.com/questions/20702", "https://serverfault.com", "https://serverfault.com/users/8401/" ] }
20,909
There are many SQL Server options that can be enabled for databases, and one of the most misunderstood ones is auto-shrink. Is it safe? If not, why not?
(I originally asked as a regular question but then found out the correct method - thanks BrentO) No, never. I've come across this several times now on ServerFault and want to reach a nice wide audience with some good advice. If people frown on this way of doing things, downvote and I'll remove this gladly. Auto-shrink is a very common database setting to have enabled. It seems like a good idea - remove the extra space from the database. There are lots of 'involuntary DBAs' out there (think TFS, SharePoint, BizTalk, or just regular old SQL Server) who may not know that auto-shrink is positively evil. While at Microsoft I used to own the SQL Server Storage Engine and tried to remove the auto-shrink feature, but it had to stay for backwards compatibility. Why is auto-shrink so bad? The database is likely to just grow again, so why shrink it? Shrink-grow-shrink-grow causes file-system level fragmentation and takes lots of resources. You can't control when it kicks-in (even though it's regular-ish) It uses lots of resources. Moving pages around in the database takes CPU, lots of IO, and generates lots of transaction log. Here's the real kicker: data file shrink (whether auto- or not) causes massive index fragmentation, which leads to poor performance. I did a blog post a while back that has an example SQL script that shows the problems it causes and explains in a bit more detail. See Auto-shrink – turn it OFF! (no advertising or junk like that on my blog). Don't get this confused with shrinking the log file, which is useful and necessary on occasion. So do yourselves a favor - look in your database settings and turn off auto-shrink. You should also not have shrink in your maintenance plans, for exactly the same reason. Spread the word to your colleagues. Edit: I should add this, reminded by the second answer - there's common misconception that interrupting a shrink operation can cause corruption. No it won't. I used to own the shrink code in SQL Server - it rolls back the current page move that it's doing if interrupted. Hope this helps!
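To check and turn the setting off across a server, the T-SQL is short; this is a generic sketch, not taken from the answer above:

-- see which databases still have auto-shrink enabled
SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;
-- turn it off for one of them
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;

sys.databases and the AUTO_SHRINK option exist from SQL Server 2005 onwards; on SQL Server 2000 you would use sp_dboption instead.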
{ "source": [ "https://serverfault.com/questions/20909", "https://serverfault.com", "https://serverfault.com/users/1992/" ] }
21,105
I don't get why there are two different programs in a minimal install to install software. Don't they do the same thing? Is there a big difference? I have read everywhere to use aptitude over apt-get but I still don't know the difference
aptitude is a wrapper for dpkg just like apt-get/apt-cache, but it is a one-stop-shop tool for searching/installing/removing/querying. A few examples that apt might not supply:

$ aptitude why libc6
i   w64codecs Depends libc6 (>= 2.3.2)

$ aptitude why-not libc6
Unable to find a reason to remove libc6.

$ aptitude show libc6
Package: libc6
State: installed
Automatically installed: no
Version: 2.9-4ubuntu6
Priority: required
Section: libs
Maintainer: Ubuntu Core developers <[email protected]>
Uncompressed Size: 12.1M
Depends: libgcc1, findutils (>= 4.4.0-2ubuntu2)
Suggests: locales, glibc-doc
Conflicts: libterm-readline-gnu-perl (< 1.15-2), tzdata (< 2007k-1), tzdata-etch, nscd (< 2.9)
Replaces: belocs-locales-bin
Provides: glibc-2.9-1
Description: GNU C Library: Shared libraries
 Contains the standard libraries that are used by nearly all programs on the system. This package includes shared versions of the standard C library and the standard math library, as well as many others.
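aptitude's search-pattern language is another thing apt-get/apt-cache don't offer in the same form; a couple of illustrative queries (the "ssl" term is just an example):

# installed packages whose name matches "ssl"
aptitude search '~i ssl'
# packages removed but with config files still present
aptitude search '~c'

The ~i, ~c and related terms are documented under "search patterns" in the aptitude reference manual.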
{ "source": [ "https://serverfault.com/questions/21105", "https://serverfault.com", "https://serverfault.com/users/1131/" ] }
21,106
I'm currently using a Joyent Accelerator to host my webapps, and it's working fine, however I need to reduce costs so I'm downgrading my current plan, which imposes new memory limits (256M RSS, 512M swap). I wasn't too far over them yesterday, but after restarting Apache several times today, I'm now at 411M RSS, 721M swap (prstat -Z -s cpu). Searching Server Fault only gives me lots of ways and specific tools to monitor the server, but no advice on how to reduce or optimize its memory usage. I've also seen this question, but I don't think it's good for this particular (or may I say generic?) situation. The server is running Solaris on a shared CPU, and I'm using an Apache + MySQL + PHP stack. I'm interested in knowing the steps one can take to troubleshoot this and solve the issues. However, I'm also running out of time to lower my memory footprint and downgrade the plan before the current one ends, so anything that can work magic and save the day is welcome as well :)
Thanks everyone for your answers! Following your suggestions I've been able to reduce my memory usage to 195M SWAP and 108M RSS, without touching my code (I'll definitely optimize it soon, but this was supposed to be a solution to get me out of trouble fast). Here's the list of things I did:

Got rid of the wildcard used in VirtualHost entries. Instead of *:80 and *:443, I used the real IP of my server.

Changed Apache's prefork MPM. These are the values I ended up using:

StartServers        1
MinSpareServers     1
MaxSpareServers     5
ServerLimit         16
MaxClients          16
MaxRequestsPerChild 0
ListenBacklog       100

These are by no means magical numbers. I've spent some time trying different values and combinations, and then testing them against the real usage of my server, and everyone should do the same in their environment. For the record, my server receives close to 2M pvs/month, serving both dynamic pages and assets at a regular rate - no digg effect. The intention, again, was to reduce the memory footprint, not to improve performance or HA. Reference:
http://httpd.apache.org/docs/2.0/misc/perf-tuning.html
http://httpd.apache.org/docs/2.2/mod/mpm_common.html

Tuned down Apache's KeepAlive. By setting KeepAliveTimeout to a lower value (2 in my case) I can expect fewer server processes just waiting on connections with idle clients that may not request any more content. Reference: http://httpd.apache.org/docs/2.0/mod/core.html#keepalivetimeout

Removed MySQL's unused module. I added skip-innodb to MySQL's my.cnf. Massive memory consumption reduction.

There are also some remarkably good suggestions that I couldn't personally do:

Remove PHP modules you do not need. The PHP on my server has most mods already compiled; I'll probably try my own minimal PHP on another VPS.

Switch to nginx with php-fastcgi. That's another good suggestion that I'll be trying soon, but right now I can't risk the downtime.
{ "source": [ "https://serverfault.com/questions/21106", "https://serverfault.com", "https://serverfault.com/users/3912/" ] }
21,143
I've noticed that we have in Active Directory more users than the company has actual employees. Is there a simple way to check multiple Active Directory accounts and see if there are any accounts that have not been used for a while? This should help me determine whether some accounts should be disabled or deleted.
O'Reilly's Active Directory Cookbook gives an explanation in chapter 6:

6.28.1 Problem
You want to determine which users have not logged on recently.

6.28.2 Solution
6.28.2.1 Using a graphical user interface
Open the Active Directory Users and Computers snap-in. In the left pane, right-click on the domain and select Find. Beside Find, select Common Queries. Select the number of days beside Days since last logon. Click the Find Now button.

6.28.2.2 Using a command-line interface
dsquery user -inactive <NumWeeks>

To get more information, see recipe 6.28
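Once you have reviewed the list, dsquery output can be piped straight into dsmod to disable the stale accounts; this is a sketch only, so run the query on its own first and check what it returns:

dsquery user -inactive 8 -limit 0
dsquery user -inactive 8 -limit 0 | dsmod user -disabled yes

-inactive takes the number of weeks, and -limit 0 removes the default cap of 100 results.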
{ "source": [ "https://serverfault.com/questions/21143", "https://serverfault.com", "https://serverfault.com/users/4694/" ] }
21,157
What are the differences between using dev tap and dev tun for OpenVPN? I know the different modes cannot inter-operate. What are the technical differences, other than just layer 2 vs layer 3 operation? Are there different performance characteristics, or different levels of overhead? Which mode is better? What functionality is exclusively available in each mode?
If it's OK to create the VPN at layer 3 (one more hop between subnets), go for tun. If you need to bridge two Ethernet segments in two different locations, then use tap. In such a setup you can have computers in the same IP subnet (e.g. 10.0.0.0/24) on both ends of the VPN, and they'll be able to 'talk' to each other directly without any changes to their routing tables. The VPN will act like an Ethernet switch. This might sound cool and is useful in some cases, but I would advise not going for it unless you really need it. If you choose such a layer 2 bridging setup, there will be a bit of 'garbage' (that is, broadcast packets) going across your VPN. Using tap you'll also have slightly more overhead: besides IP headers, 38 bytes or more of Ethernet headers are going to be sent through the tunnel (depending on the type of your traffic, this may introduce more fragmentation).
{ "source": [ "https://serverfault.com/questions/21157", "https://serverfault.com", "https://serverfault.com/users/1131/" ] }
21,197
Property management at my organization has informed me that our building will be losing power for 4 hours tomorrow. I need to be prepared for this event (we're a small organization, I'm young, therefore I am IT). What sorts of things do I need to be aware of? I am planning on going in and shutting down all machines and printers. Will this cover me? We have a managed switch. Does it need to be shut down? Do I need to disconnect plugs in case of a surge? It seems like I'll be covered all around if I just unplug everything. Thanks for any insight though.
Before the outage:
Power everything off - workstations, servers, printers, switches, the works.
Turn off your UPS' so they don't panic when power is lost.

After outage, in this order:
Turn on UPS
Turn on networking (router, switches etc)
Turn on servers
Turn on workstations
Turn on everything else

Have a test plan ready so you can test important functionality is working:
Internet connectivity
Email, printing etc

If possible, have a laptop with a separate network connection handy (ie: you can get to the internet without your work router working). That way you have a way to ask for help here if something goes wrong with the networking when it comes back up. :) You should be fine though - the fact that you took the time to ask here shows you already have the requisite "clue" required for IT support!
{ "source": [ "https://serverfault.com/questions/21197", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
21,218
I'd like to know the pros and cons, reasons for and against, the idea of sysadmins maintaining user account lists with passwords, and additionally not allowing those users to change their passwords. I understand that systems like Windows seem to encourage the idea that users should maintain their own password security and be allowed to change their password at will. I can appreciate the need for privacy and users having alibis to protect themselves in the event that a colleague's word disagrees with the system's logs. But at the same time I can also see how some people might justify having users' passwords on file in the event that access is required to some of the materials that users may want to keep private. I'd really love to be educated on this idea.
A sysadmin should be able to access any files a user has, unless they're encrypted, in which case the user's Windows password won't help. Having the system know the passwords means that you can never tell whether a user did something or a sysadmin did, which could cause a lot of problems if you ever get into a dispute. The passwords would have to be stored somewhere, which means there's the potential for them to be lost. Finally, users will find it harder to remember a password they didn't create. The pros are that there's no need to reset passwords, but you'll have to remind users of them. It also makes it easier to log in to users' accounts, but outside of testing or diagnosing a problem this isn't needed, and you can get the passwords on a case-by-case basis then. There really isn't any reason to do this; it creates a lot of problems for no real gain.
{ "source": [ "https://serverfault.com/questions/21218", "https://serverfault.com", "https://serverfault.com/users/8543/" ] }
21,230
Firefox adoption in the home/personal user base seems to be growing fine, but adoption in the enterprise is not going anywhere quickly. My view on this is because SysAdmins are not promoting it within the organisations because Internet Explorer has features which make it more acceptable to an enterprise, such as Managing settings via GPO Integration into the rest of the update stack Support of common business applications So what would you add to Firefox to get it more promotion by SysAdmins in the enterprise?
If it came in MSI format for easy installation to Windows workstations, and could be managed by GPO and Apple Open Directory, then it would be perfect. It would also need to work well with things like SharePoint, but I suspect that's an issue for the people designing sites in SharePoint rather than Mozilla. I know there's currently a fork of Firefox that is designed to work with GPOs, but I'm talking about having it work with the "standard" product out of the box, and being able to control and lock down any and all preferences. As neobyte says, patch management is also an issue. Firefox's current method doesn't scale for business imho. EDIT: Extension management - this needs to be controllable by the enterprise too; there needs to be a way to roll out and "lock" into place a standard set of extensions, regardless of whether or not you want users to be able to add their own, and possibly to nominate a trusted location of your own where you publish "approved" extensions, that kind of thing. NTLM auth - looks like there's a hack to add this to the browser anyway if you look around the web, but this needs to be obviously better exposed.
{ "source": [ "https://serverfault.com/questions/21230", "https://serverfault.com", "https://serverfault.com/users/103/" ] }
21,255
How do you do a headless install of Linux? No monitor, no keyboard. The machine has a floppy and a CD drive. Can I configure a live CD to run sshd with a preconfigured password or something similar and manage it via that? Instructions for Fedora 8+ would be ace, but anything else is also welcome. Cheers.
For Redhat/CentOS/Fedora, you are looking for kickstart . For Ubuntu and Debian you want to look at preseeding . Both work in much the same way by feeding the installer a file that answers all the questions that the installer would normally ask you for. They also allow you to run scripts after the installation has completed, so you can customize the install.
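As a concrete illustration of how a kickstart file gets used on Fedora/RHEL-family installers, the boot prompt (or PXE append line) simply points at it; the URL here is a placeholder:

linux ks=http://192.168.1.5/ks.cfg ip=dhcp

For Debian/Ubuntu preseeding the equivalent is roughly preseed/url=http://192.168.1.5/preseed.cfg on the installer's kernel command line.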
{ "source": [ "https://serverfault.com/questions/21255", "https://serverfault.com", "https://serverfault.com/users/8571/" ] }
21,374
Obviously seeing as how many of us here are system administrator type people, we have a lot of passwords strung out across numerous systems and accounts. Some of them are low priority, others could cause serious harm to a company if discovered (don't you just love power?). Simple, easy to remember passwords just aren't acceptable. The only option is complex, hard-to-remember (and type) passwords. So, what do you use to keep track of your passwords? Do you use a program to encrypt them for you (requiring yet another password in turn), or do you do something less complicated such as a piece of paper kept on your person, or is it somewhere in between those options?
KeePass is great.
{ "source": [ "https://serverfault.com/questions/21374", "https://serverfault.com", "https://serverfault.com/users/8297/" ] }
21,475
I tried to upgrade Ubuntu from Hardy to Intrepid last night, and seem to have killed it. I can boot into "recovery mode" and the root shell, but it freezes when it tries to start the Gnome environment etc. In this recovery mode it doesn't seem to be on the network (i.e. ifconfig shows the lo entry but not the eth0 one), and I can't ping or ssh to it. How can I start networking from this prompt? cheers phil
Do you normally get your IP address from a DHCP server? If so, bring the interface up and ask for a lease (as root): ifconfig eth0 up followed by dhclient eth0. To set an IP address manually (for example 192.168.0.1), type: ifconfig eth0 192.168.0.1 netmask 255.255.255.0 up and then route add default gw GATEWAY-IP eth0. If you have a problem with gdm during the boot, switch to the real console: use the Ctrl-Alt-F1 shortcut keys to switch to the first console, and Ctrl-Alt-F7 to switch back to the desktop (gdm).
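One extra note: if you set the address statically as above, name resolution may still be broken until /etc/resolv.conf contains a nameserver line. Something like the following helps (the address is just an example - use your router's or ISP's DNS server):

    echo "nameserver 192.168.0.1" > /etc/resolv.conf

Pinging the box from another machine by IP address is a quick way to confirm the interface itself is up, independent of DNS.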
{ "source": [ "https://serverfault.com/questions/21475", "https://serverfault.com", "https://serverfault.com/users/7355/" ] }
21,580
I often use SCP to copy files around - particularly web-related files. The problem is that whenever I do this, I can't get my command to copy hidden files (e.g., .htaccess). I typically invoke this: scp -rp src/ user@server:dest/ This doesn't copy hidden files. I don't want to have to invoke this a second time (by doing something like scp -rp src/.* ... - and that has strange . and .. implications anyway). I didn't see anything in the scp man page about an "include hidden files" option. How can I accomplish this?
That should absolutely match hidden files. The / at the end of the source says "every file under this directory". Nevertheless, testing and research bear you out. This is stupid behavior. The " answer " is to append a dot to the end of the source: scp -rp src/. user@server:dest/ The real answer is to use rsync.
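For completeness, the rsync equivalent would look something like this (rsync includes dotfiles by default, and -a implies -rp plus a few other useful preservation flags):

    rsync -av src/ user@server:dest/

Note that the trailing slash matters to rsync too: with it you copy the contents of src into dest, without it you copy the src directory itself into dest.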
{ "source": [ "https://serverfault.com/questions/21580", "https://serverfault.com", "https://serverfault.com/users/1516/" ] }
21,806
I want to be able to launch screen sessions on remote servers from a single ssh command on my desktop. However, screen seems to need a terminal, which is not available when running a command through ssh. So the obvious ssh [email protected] screen "tail -f /var/log/messages" (as an example) does not work, and gives Must be connected to a terminal. I want ssh to launch the command under a screen so I can log in later and attach as I would to a screen session I would have launched manually.
Try using the -t option to ssh: ssh -t [email protected] screen "tail -f /var/log/messages" From man ssh: -t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g., when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
{ "source": [ "https://serverfault.com/questions/21806", "https://serverfault.com", "https://serverfault.com/users/162/" ] }
21,820
The problem doesn't occur very often, but it certainly exists and I'm not sure where to start. I have grepped for the mongrel PIDs in /var/log/ and the only messages that contained them are these: Jun 7 07:46:24 staging kernel: 4gb seg fixup, process mongrel_rails (pid 29498), cs:ip 73:00937a5c It has something to do with the Xen-specific version of libc, but it's not critical, and the processes are still running while these messages accumulate in kern.log. I'm actually looking not only for a specific solution (which probably couldn't be provided from the above description) but for any advice on how to set up monitoring or investigate such cases.
{ "source": [ "https://serverfault.com/questions/21820", "https://serverfault.com", "https://serverfault.com/users/47936/" ] }
22,140
I need a cell phone that will help me keep an eye on my servers and services when I am away from my computer/desk/workplace. Which smartphone would you recommend for sysadmins? An SSH client is a must. I haven't used an iPhone, but I guess having a keyboard would be better. Currently I'm looking at these alternatives: the iPhone would be the "default" smartphone. The Nokia E71 has got good recommendations, including from Joel Spolsky. The Android platform looks good, but I'm not sure the few models (HTC G1 / HTC Magic / HTC Dream) are mature enough. I'm not sure about BlackBerry. WinCE / Windows Mobile phones? Any Nokia phone better than an E71? Which choice did you make? What would you recommend?
I like my iPhone. There are some nice sysadmin apps. But besides the obvious email/calendar/contacts clients with Exchange integration, I primarily use it for note taking (EverNote), which is also very important in our job. The web browsing experience is great if you have to research something, and last but not least it helps my personal education: reading RSS feeds, listening to podcasts etc. This question here on ServerFault.com contains lists of nice tools for the iPhone: What are some “must have” iPhone/iTouch apps for IT people? This is the "Tools Page" on my iPhone (screenshot): http://img53.yfrog.com/img53/9003/fj8.jpg
{ "source": [ "https://serverfault.com/questions/22140", "https://serverfault.com", "https://serverfault.com/users/8766/" ] }
22,182
Is there a way to view the members of an Active Directory group if you aren't a domain admin and can't log into to a domain controller?
Absolutely. From a computer that's a member of the domain, open a command-prompt and run a: NET GROUP "group name" /DOMAIN Unless your administrators have changed the stock permissions on the group object you will be able to view the membership that way. You can use AD Users and Computers even if you're not an administrator, but this, at least, can be done w/o installing anything.
{ "source": [ "https://serverfault.com/questions/22182", "https://serverfault.com", "https://serverfault.com/users/2966/" ] }
22,324
I have installed a new Debian Lenny server that will be a LAMP and Subversion server. Should I enable automatic updates? If I enable them, I am sure that I have the latest security patches. It also should not break my system, since Debian stable only provides security patches. If I install them manually, I may be exposed to a high security risk for days or weeks. Please keep in mind that I am not a full-time system administrator, so I do not have the time to follow security bulletins. What do you usually do with your servers? What is your advice?
(Warnings regarding automatic upgrades have already been voiced by previous posters.) Given the track record of the Debian Security team in the last few years, I consider the risks of broken upgrades far less than the benefit of having automatic updates on seldom-visited systems. Debian Lenny comes with unattended-upgrades , which originated from Ubuntu and is considered to be the defacto solution for unattended upgrades for Debian starting from Lenny/5.0. To get it up and running on a Debian system you need to install the unattended-upgrades package. Then add these lines to /etc/apt/apt.conf : APT::Periodic::Update-Package-Lists "1"; APT::Periodic::Unattended-Upgrade "1"; (Note: In Debian Squeeze/6.0 there is no /etc/apt/apt.conf . The preferred method is to use the following command, which will create the above lines in /etc/apt/apt.conf.d/20auto-upgrades :) sudo dpkg-reconfigure -plow unattended-upgrades A cron job is then run nightly and checks if there are security updates which need to be installed. Actions by unattended-upgrades can be monitored in /var/log/unattended-upgrades/ . Be wary, that for kernel security fixes to become active, you need to reboot the server manually. This can also be done automatically in course of a planned (e.g. monthly) maintenance window.
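If you want to sanity-check what the nightly job would do before trusting it, unattended-upgrades can be run by hand in dry-run mode (nothing is actually installed):

    sudo unattended-upgrade --dry-run --debug

It can also mail you a report of what it did; that is controlled by the Unattended-Upgrade::Mail option in /etc/apt/apt.conf.d/50unattended-upgrades (option names vary slightly between versions, so check the shipped file).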
{ "source": [ "https://serverfault.com/questions/22324", "https://serverfault.com", "https://serverfault.com/users/37220/" ] }
22,414
Say you're running a server and you don't want to upgrade to Testing (Squeeze) from Stable (Lenny) to just install a required package or two. What's the best way of installing only certain packages from Testing?
Many people seem to be afraid of mixing stable with testing, but frankly, testing is fairly stable in its own right, and with proper preferences and solution checking, you can avoid the "stability drift" that puts your core packages on the unstable path. "Testing is fairly stable??" , you ask. Yes. In order for a package to migrate from unstable to testing, it has to have zero open bugs for 10 consecutive days. Chances are that, especially for the more popular packages, somebody is going to submit a bug report for an unstable version if something is wrong. Even if you don't want to mix the environments, it's still nice to have the option there in case you run into something that requires a newer version than what is in stable. Here's what I recommend for setting this up: First, create the following files in /etc/apt/preferences.d : stable.pref : # 500 <= P < 990: causes a version to be installed unless there is a # version available belonging to the target release or the installed # version is more recent Package: * Pin: release a=stable Pin-Priority: 900 testing.pref : # 100 <= P < 500: causes a version to be installed unless there is a # version available belonging to some other distribution or the installed # version is more recent Package: * Pin: release a=testing Pin-Priority: 400 unstable.pref : # 0 < P < 100: causes a version to be installed only if there is no # installed version of the package Package: * Pin: release a=unstable Pin-Priority: 50 experimental.pref : # 0 < P < 100: causes a version to be installed only if there is no # installed version of the package Package: * Pin: release a=experimental Pin-Priority: 1 (Don't be afraid of the unstable/experimental stuff here. The priorities are low enough that it's never going to automatically install any of that stuff. Even the testing branch will behave, as it's only going to install the packages you want to be in testing.) Now, creating a matching set for /etc/apt/sources.list.d : stable.list : Copy from your original /etc/apt/sources.list . Rename the old file to something like sources.list.orig . testing.list : Same as stable.list , except with testing . unstable.list : Same as stable.list , except with unstable , and remove the security lists. experimental.list : Same as unstable.list , except with experimental . You can also add a oldstable in sources.lists.d and preferences.d (use a priority of 1), though this moniker will tend to expire and disappear before the next stable cycle. In cases like that, you can use http://archive.debian.org/debian/ and "hardcode" the Debian version (etch, lenny, etc.). To install the testing version of a package, simply use aptitude install lib-foobar-package/testing , or just jump into aptitude's GUI and select the version inside of the package details (hit enter on the package you're looking at). If you get complaints of package conflicts, look at the solutions first. In most cases, the first one is going to be "don't install this version". Learn to use the per-package accept/reject resolver choices. For example, if you're installing foobar-package/testing, and the first solution is "don't install foobar-package/testing", then mark that choice as rejected, and the other solutions will never veer to that path again. In cases like these, you'll probably have to install a few other testing packages. 
If it's getting too hairy (like it's trying to upgrade libc or the kernel or some other huge core system), then you can either reject those upgrade paths or just back out of the initial upgrade altogether. Remember that it's only going to upgrade stuff to testing/unstable if you allow it to. EDIT: Fixed some priority pins, and updated the list.
{ "source": [ "https://serverfault.com/questions/22414", "https://serverfault.com", "https://serverfault.com/users/1576/" ] }
22,419
I have installed and configured a DNS server (a local instance of Dnsmasq) which resolves to localhost as I want - all OK. When I go offline it stops working, because OS X empties the content of resolv.conf and ignores any attempt to change that file. Any idea how to configure DNS so it works even when offline? Similar issue (unresolved): http://blog.steamshift.com/geek/leopard-lookupd-and-local-web-development-sites The main motivation is to ease development of a RoR application which uses subdomains as account keys, and you cannot use 127.0.0.1 *.yourapp.local in /etc/hosts. Some guy registered the domain smackaho.st and set up DNS so that *.smackaho.st points at 127.0.0.1, but still, you cannot use it when you are working offline. EDIT: tried the scutil command, but it seems you cannot change DNS while offline. NOTE: when you have all interfaces down, you cannot set DNS servers in the Preferences panel.
SEE UPDATE BELOW! I also enjoy using Dnsmasq on my local machine, and I had this problem too. Here is the solution: From man 5 resolver : The configuration for a particular client may be read from a file having the format described in this man page. These are at present located by the system in the /etc/resolv.conf file and in the files found in the /etc/resolver directory. /etc/resolver/ is not present by default; you must create it yourself. Also from the man page: domain Domain name associated with this resolver configuration. This option is normally not required by the Mac OS X DNS search system when the resolver configuration is read from a file in the /etc/resolver directory. In that case the file name is used as the domain name. So if you wanted all dns queries for the top level domain of dev to be routed to the local nameserver, you would: # mkdir /etc/resolver # echo 'nameserver 127.0.0.1' > /etc/resolver/dev configd does not alter files in /etc/resolver/ , so this setting will persist through network changes and reboots. UPDATE 17 July 2012 Unfortunately, as of OS X Lion, the top resolver (as shown by scutil --dns ) disappears when no interfaces are active: # scutil --dns # Online DNS configuration resolver #1 nameserver[0] : 127.0.0.1 ... resolver #8 domain : dev nameserver[0] : 127.0.0.1 # scutil --dns # Offline DNS configuration resolver #1 ... resolver #8 domain : dev nameserver[0] : 127.0.0.1 Notice that resolver #1 is empty, but that the /etc/resolver derived nameserver entry remains. It turns out that since you can specify the resolver domain directly in the /etc/resolver/ file, specifying the special Internet root domain . causes the creation of a global resolver entry that looks like: resolver #8 nameserver[0] : 127.0.0.1 Now all DNS queries are routed to localhost, even when offline. Of course, you will still have to resolve your chosen domains as 127.0.0.1 using something like dnsmasq's --address option: # dnsmasq --address=/dev/127.0.0.1 In summary: Set all your network interface dns servers to 127.0.0.1: networksetup -setdnsservers Ethernet 127.0.0.1 networksetup -setdnsservers Wi-Fi 127.0.0.1 ... Create a file /etc/resolver/whatever: nameserver 127.0.0.1 domain . Set up a local DNS server and be happy. cf. http://opensource.apple.com/source/configd/configd-395.11/dnsinfo/dnsinfo_flatfile.c
{ "source": [ "https://serverfault.com/questions/22419", "https://serverfault.com", "https://serverfault.com/users/3698/" ] }
22,558
I often open a file in vim, make some changes and when it's time to save the file is read-only.. (owned by another user). I'm looking for tips on how I could re-open the file as root and keep my changes without first saving it to a temporary file for copy or re-edit as root.
From this stackoverflow answer , by skinp :w !sudo tee % I often forget to sudo before editing a file I don't have write permissions on. When I come to save that file and get a permission error, I just issue that vim command in order to save the file without the need to save it to a temp file and then copy it back again.
{ "source": [ "https://serverfault.com/questions/22558", "https://serverfault.com", "https://serverfault.com/users/6793/" ] }
22,577
I need to do a somewhat strange operation. First, I run Apache2 on Debian (which runs as the user www-data). I have simple text files with a .txt or .ini extension - the exact extension doesn't matter. These files are located in subfolders with a structure like this: www.example.com/folder1/car/foobar.txt www.example.com/folder1/cycle/foobar.txt www.example.com/folder1/fish/foobar.txt www.example.com/folder1/fruit/foobar.txt The file name is always the same, and so is the hierarchy; only the middle folder name changes: /folder-name-static/folder-name-dynamic/file-name-static.txt What I need to do is (I think) relatively simple: the files must be readable by programs on the server (Python, PHP for example), but if I try to retrieve the file contents through a browser (typing the URL www.example.com/folder1/car/foobar.txt, via cURL, etc.) I must get a forbidden error, or similar, and not the contents of the file. It would also be nice if those files were 'hidden' or at least not downloadable via FTP (at least with the FTP root and user credentials I use). How can I do this? I found this online, to be put in a .htaccess file: <Files File.txt> Order allow,deny Deny from all </Files> It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), and not in subfolders. Moreover, the second-level folders (www.example.com/folder1/fruit/foobar.txt) will be created dynamically, so I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, which applies to all files with a given name located at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and only the dynamic folder name changes? EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will also contain other files - images and other content used by my users - so I'm simply trying to avoid a duplicated folder structure like: /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here] //and then, for the 'secret' files: /folder1/data/car/foobar.txt /folder1/data/cycle/foobar.txt /folder1/data/fish/foobar.txt
You could use Files / FilesMatch and a regular expression: <Files ~ "\.txt$"> Order allow,deny Deny from all </Files> This is how .htpasswd is protected. or redirect any access of .txt to a 404: RedirectMatch 404 \.txt$
{ "source": [ "https://serverfault.com/questions/22577", "https://serverfault.com", "https://serverfault.com/users/8331/" ] }
22,626
Vista allows files with an empty "first name" (for example, ".svn"). However, when I try to remove the name part of an existing file, leaving only the suffix, in Explorer or using cmd's 'rename', I fail. How can I easily rename files so that they consist of the suffix only? (I use Vista, if that matters).
You can also do a file name that starts with a period, and has no extension. Try naming it ".whatever." ( note the trailing period ). This works in both Explorer and from the command line.
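For the command-line version, that looks like this (foobar.txt being whatever file you want to rename):

    ren foobar.txt .whatever.

Windows strips the trailing period when it stores the name, so you end up with a file called .whatever.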
{ "source": [ "https://serverfault.com/questions/22626", "https://serverfault.com", "https://serverfault.com/users/88/" ] }
22,644
My company is working with another company, and as part of the contract they are requesting a copy of my company's written IT Security Policy. I don't have a written IT security policy, and I'm not exactly sure what I want to give them. We're a Microsoft shop. We have update schedules, limited-access accounts to manage servers, firewalls, SSL certificates, and we run the Microsoft Baseline Security Analyzer from time to time. We configure services and user accounts in a way we feel is mostly safe and secure (it's tough when you don't have full control over what software you run), but I can't go into every detail; each service and server is different. I'm getting more information about what they want, but I feel as if they're on a fishing expedition. My questions are: Is it standard practice to ask for this information? (I'm not against it honestly, but it's never happened before.) And if this is standard, is there a standard format and expected level of detail I should present?
They don't need a copy of your entire internal IT policy but I think they may be after something similar to this - someone definitely needs to get you enough information about the contract to determine how much detail you need to provide, and about what. Tho I agree with Joseph - if they need the information for legal/compliance reasons, there needs to be legal input. Background Information 1) Are any of your employees located outside of the US? 2) Does your company have formalized and documented information security policies in place? 3) Is the handling and classification of information and data covered by your information security policies? 4) Are there any outstanding regulatory issues that you are currently addressing in the state(s) you operate in? If yes, please explain. General Security 1) Do you have an information security awareness training program for employees and contractors? 2) Which of the following methods for authenticating and authorizing access to your systems and applications do you currently use: Performed by operating system Performed by commercial product Single sign-on Client-side digital certificates Other two-factor authentication Home grown No authentication mechanism in place 3) Who authorizes access for employees, contractors, temps, vendors, and business partners? 4) Do you allow your employees (including contractors, temps, vendors, etc.) to have remote access to your networks? 5) Do you have an information security incident response plan? If no, how are information security incidents handled? 6) Do you have a policy that addresses the handling of internal or confidential information in e-mail messages to outside your company? 7) Do you review your information security policies and standards at least annually? 8) What methods and physical controls are in place to prevent unauthorized access to your company's secure areas? Network servers in locked rooms Physical access to servers limited by security identification (access cards, biometrics, etc.) Video monitoring Sign-in logs and procedures Security badges or ID cards visible at all times in secure areas Security guards None Other, Please provide additional details 9) Please describe your password policy for all environments? I.e.. Length, strength and aging 10) Do you have a disaster recovery (DR) plan? If yes, how often do you test it? 11) Do you have a Business Continuity (BC) plan? If yes, how often do you test it? 12) Will you provide us a copy of your tests results (BC and DR) if requested? Architecture and system review 1) Will [The Company]’s data and/or applications be stored and/or processed on a dedicated or shared server? 2) If on a shared server, how will [The Company]’s data be segmented from other companies’ data? 3) What type(s) of company-to-company connectivity will be provided? Internet Private/Leased line (e.g., T1) Dial-up VPN (Virtual Private Network) Terminal Service None Other, Please provide additional details 4) Will this network connectivity be encrypted? If yes, what method(s) of encryption will be used? 5) Is there any client-side code (including ActiveX or Java code) required in order to utilize the solution? If yes, please describe. 6) Do you have a firewall(s) to control external network access to your web server(s). If no, where is this server(s) located? 7) Does your network include a DMZ for Internet access to applications? If no, where are these applications located? 8) Does your organization take steps to ensure against Denial-of-Service outages? 
Please describe these steps 9) Do you perform any of the following information security reviews/tests Internal system/network scans Internally managed self assessments and/or due diligence reviews Internal code reviews/peer reviews External 3rd party penetration tests/studies Other, Please provide details How frequently are these tests performed? 10) Which of the following information security practices are being actively used within your organization Access control lists Digital certificates - Server Side Digital certificates - Client Side Digital signatures Network based intrusion detection/prevention Host Based intrusion detection/prevention Scheduled updates to intrusion detection/prevention signature files Intrusion monitoring 24x7 Continuous virus scanning Scheduled updates to virus signature files Penetration studies and/or tests None 11) Do you have standards for hardening or securing your operating systems? 12) Do you have a schedule for applying updates and hot fixes to your operating systems? If no, please tell us how you determine what and when to apply patches and critical updates 13) To provide protection from a power or network failure, do you maintain fully redundant systems for your key transactional systems? Web Server (if applicable) 1) What is the URL that will be used to access the application/data? 2) What operating system(s) is the web server (s)? (Please provide OS name, version and service pack or patch level.) 3) What is the web server software? Application Server (if applicable) 1) What operating system(s) is the application server (s)? (Please provide OS name, version and service pack or patch level.) 2) What is the application server software? 3) Are you using role based access control? If yes, how are the access levels assigned to roles? 4) How do you ensure that appropriate authorization and segregation of duties are in place? 5) Does your application employ multi-level user access / security? If yes, please provide details. 6) Are activities in your application monitored by a third party system or service? If yes please provide us with the company and service name and what information is being monitored Database Server (if applicable) 1) What operating system(s) is the database server (s)? (Please provide OS name, version and service pack or patch level.) 2) Which databases server software is being utilized? 3) Is the DB replicated? 4) Is the DB server part of a cluster? 5) What is done (if anything) to isolate [The Company]’s data from other companies? 6) Will [The Company]’s data, when stored on disk, be encrypted? If yes, please describe encryption method 7) How is source data captured? 8) How are data integrity errors handled? Auditing and Logging 1) Do you log customer access on: The web server? The application server? The database server? 2) Are the logs reviewed? If yes, please explain the process and how often are they reviewed? 3) Do you provide systems and resources to maintain and monitor audit logs and transaction logs? If yes, what logs do you retain and how long do you store them? 4) Will you allow [The Company] to review your system logs as they pertain to our company? Privacy 1) What are the processes and procedures used to declassify/delete/discard [The Company]’s data when no longer needed? 2) Have you at any time erroneously or accidentally disclosed customer information? If yes, what corrective measures have you implemented since? 3) Do contractors (non-employees) have access to sensitive or confidential information? 
If yes, have they signed a non-disclosure agreement? 4) Do you have vendors that are authorized to access and maintain your networks, systems, or applications? If yes, are these vendors under written contracts providing for confidentiality, background checks, and insurance/indemnification against loss? 5) How is your data classified and secured? Operations 1) What is the frequency and level of your back-ups? 2) What is the onsite retention period of back-ups? 3) What format are your backups stored in? 4) Do you store backups at an off-site location? If yes, what is the retention period? 5) Do you encrypt your data backups? 6) How do you ensure that only valid production programs are executed?
{ "source": [ "https://serverfault.com/questions/22644", "https://serverfault.com", "https://serverfault.com/users/966/" ] }
22,712
I want to start scheduling remote mysqldump crons, and I'd prefer to use a special account for that purpose. I want to grant that user the minimum permissions needed for a full dump, but I'm not sure of the best way to go about that. Is it as simple as grant SELECT on *.* to '$username'@'backuphost' identified by 'password'; or am I missing a better way?
I believe the user just needs select permissions on the tables to be backed up. Edit: This guy says to assign the "lock tables" permission too, which makes sense.
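Putting that together, the GRANT for a dedicated dump user would look something like this (user, host and password are placeholders):

    GRANT SELECT, LOCK TABLES ON *.* TO 'backup'@'backuphost' IDENTIFIED BY 'secret';
    FLUSH PRIVILEGES;

and the matching cron job could run something like:

    mysqldump -h dbserver -u backup -psecret --all-databases > /backup/all-databases.sql

If the databases are InnoDB, adding --single-transaction gives a consistent dump without holding table locks, and if you also dump views you may need the SHOW VIEW privilege as well.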
{ "source": [ "https://serverfault.com/questions/22712", "https://serverfault.com", "https://serverfault.com/users/4392/" ] }
22,743
Reverse DNS seems to be strongly tied to class boundaries; now that CIDR is the standard, what methods exist to delegate authority for a subnet? If multiple methods exist, which one is best? Do you need to handle delegation differently depending on the DNS server (BIND, djbdns, Microsoft DNS, other)? Let's say I have control of a network that is a Class B, 168.192.in-addr.arpa. Please provide examples for: How to delegate authority for a /22? How to delegate authority for a /25?
Delegating a /22 is easy, it's delegation of the 4 /24s. A /14 is delegation of the 4 /16s, etc. RFC2317 covers the special cases with a netmask longer than /24. Basically there's no super-clean way to do delegation of in-addr.arpa zones on anything but octet boundaries, but you can work around this. Let's say I want to delegate 172.16.23.16/29, which would be the IP addresses 172.16.23.16 -> 172.16.23.23. As the owner of the 23.16.172.in-addr.arpa zone, I might would put this in my 23.16.172.rev zone file to delegate this range to my customer: 16-29 IN NS ns1.customer.com 16-29 IN NS ns2.customer.com 16 IN CNAME 16.16-29.23.16.172.in-addr.arpa. 17 IN CNAME 17.16-29.23.16.172.in-addr.arpa. 18 IN CNAME 18.16-29.23.16.172.in-addr.arpa. 19 IN CNAME 19.16-29.23.16.172.in-addr.arpa. 20 IN CNAME 20.16-29.23.16.172.in-addr.arpa. 21 IN CNAME 21.16-29.23.16.172.in-addr.arpa. 22 IN CNAME 22.16-29.23.16.172.in-addr.arpa. 23 IN CNAME 23.16-29.23.16.172.in-addr.arpa. So, you can see that I'm defining a new zone (16-29.23.16.172.in-addr.arpa.) and delegating it to my customer's name servers. Then I'm creating CNAMEs from the IPs to be delegated to the corresponding number under the newly delegated zone. As the customer to whom these have been delegated, I would do something like the following in named.conf: zone "16-29.23.16.172.in-addr.arpa" { type master; file "masters/16-29.23.16.172.rev"; }; And then in the .rev file, I would just make PTRs like any normal in-addr.arpa zone: 17 IN PTR office.customer.com. 18 IN PTR www.customer.com. (etc) That's sort of the clean way to do it and it makes savvy customer happy because they have an in-addr.arpa zone to put the PTRs in, etc. A shorter way to do it for customer who want to control reverse DNS but don't want to set up a whole zone is to just CNAME individual record to similar names in their main zone. In this case, we, as the delegators, would have something like this in our 23.16.172.rev file: 16 IN CNAME 16.customer.com. 17 IN CNAME 17.customer.com. 18 IN CNAME 18.customer.com. 19 IN CNAME 19.customer.com. 20 IN CNAME 20.customer.com. 21 IN CNAME 21.customer.com. 22 IN CNAME 22.customer.com. 23 IN CNAME 23.customer.com. So it's similar in concept to the other idea, but instead of creating a new zone and delegating it to the customer, you're CNAMEing the records to names in the customer's already-existing main zone. The customer would have something like this in their customer.com zone file: office IN A 172.16.23.17 17 IN PTR office.customer.com. www IN A 172.16.23.18 18 IN PTR www.customer.com. (etc) It just depends on the type of customer. Like I said, it just depends on the customer type. A savvy customer will prefer to set up their own in-addr.arpa zone and will think it very odd to have PTRs in a domain-name zone. A non-savvy customer will want it to "just work" without having to do a ton of extra configuration. There are likely other methods, just detailing the two I'm familiar with. I was just thinking about my statement about how /22 and /14 are easy and thinking about why that's true but anything between 25 and 32 is hard. I haven't tested this, but I wander if you could delegate the entire /32 to the customer like this: 16 IN NS ns1.customer.com. 17 IN NS ns1.customer.com. 
(etc) Then, on the customer side, you catch the entire /32: zone "16.23.16.172.in-addr.arpa" { type master; file "masters/16.23.16.172.rev"; }; zone "17.23.16.172.in-addr.arpa" { type master; file "masters/17.23.16.172.rev"; }; (etc) And then in the individual file you would have something like this: @ IN PTR office.customer.com. The obvious downside is that one file per /32 is kind of gross. But I bet it would work. All the stuff I mentioned is pure DNS; if any DNS server doesn't let you do it, that's because it's restricting the full functionality of DNS. My examples are obviously using BIND, but we've done the customer side of this using Windows DNS and BIND. I don't see a reason it wouldn't work with any server.
{ "source": [ "https://serverfault.com/questions/22743", "https://serverfault.com", "https://serverfault.com/users/984/" ] }
22,833
I am writing a shell script which makes calls to psql using 2 forms... one is by command (-c), the other is by file (-f). e.g. psql -c "create table foo (bar integer)" psql -f foobar.sql One difference between these forms is that a call by command (-c) returns a non-zero exit code if an error is encountered, while a call by file (-f) always seems to return zero. I'm wondering if there is a workaround for this behaviour? (i.e. return non-zero if an error occurs while executing a file). Thanks.
I found out how to resolve this. I need to enable ON_ERROR_STOP at the top of the file. Example: \set ON_ERROR_STOP true
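You can also set it from the command line instead of editing the SQL file, which keeps the script itself unchanged:

    psql -v ON_ERROR_STOP=1 -f foobar.sql

With ON_ERROR_STOP set, psql stops at the first failing statement and exits with a non-zero status (3), so your shell script can test $? as usual.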
{ "source": [ "https://serverfault.com/questions/22833", "https://serverfault.com", "https://serverfault.com/users/8242/" ] }
22,866
Is there any way (short of getting an active directory browser) to view my OU while logged in to the domain?
gpresult /r | find "OU" will do it.
{ "source": [ "https://serverfault.com/questions/22866", "https://serverfault.com", "https://serverfault.com/users/1047/" ] }
22,990
Is there a way to get wireshark to capture packets sent from/to localhost? When I monitor traffic going from my computer to another, or from another computer to my computer, then it works. But from localhost to localhost does not register anything.
There's a WIKI Entry about exactly this issue on the wireshark homepage. They also mention specifics about the loopback interface regarding Windows - you could be running just into that. You can't capture on the local loopback address 127.0.0.1 with a Windows packet capture driver like WinPcap.
{ "source": [ "https://serverfault.com/questions/22990", "https://serverfault.com", "https://serverfault.com/users/481/" ] }
23,157
I'm using nginx to server my static content, is there a way that I can set the expires headers for every file that meets a specific rule? For example can I set the expires header for all files that have an extension of '.css'?
I prefer to do a more complete cache header, in addition to some more file extensions. The '?' prefix is a 'non-capturing' mark, nginx won't create a $1. It helps to reduce unnecessary load. location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { expires 30d; add_header Pragma public; add_header Cache-Control "public"; }
{ "source": [ "https://serverfault.com/questions/23157", "https://serverfault.com", "https://serverfault.com/users/51157/" ] }
23,385
Okay, this is creeping me out - I see about 1500-2500 of these: root@wherever:# netstat Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 localhost:60930 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60934 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60941 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60947 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60962 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60969 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60998 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60802 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60823 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60876 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60886 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60898 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60897 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60905 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60918 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60921 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60673 localhost:sunrpc TIME_WAIT tcp 0 0 localhost:60680 localhost:sunrpc TIME_WAIT [etc...] root@wherever:# netstat | grep 'TIME_WAIT' |wc -l 1942 That number is changing rapidly. I do have a pretty tight iptables config so I have no idea what can cause this. any ideas? Thanks, Tamas Edit: Output of 'netstat -anp': Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:60968 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60972 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60976 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60981 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60980 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60983 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60999 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60809 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60834 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60872 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60896 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60919 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60710 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60745 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60765 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60772 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60558 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60564 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60600 127.0.0.1:111 TIME_WAIT - tcp 0 0 127.0.0.1:60624 127.0.0.1:111 TIME_WAIT -
EDIT: tcp_fin_timeout DOES NOT control TIME_WAIT duration, it is hardcoded at 60s As mentioned by others, having some connections in TIME_WAIT is a normal part of the TCP connection. You can see the interval by examining /proc/sys/net/ipv4/tcp_fin_timeout : [root@host ~]# cat /proc/sys/net/ipv4/tcp_fin_timeout 60 And change it by modifying that value: [root@dev admin]# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout Or permanently by adding it to /etc/sysctl.conf net.ipv4.tcp_fin_timeout=30 Also, if you don't use the RPC service or NFS, you can just turn it off: /etc/init.d/nfsd stop And turn it off completely chkconfig nfsd off
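To see what is actually registered with the portmapper on port 111 (and hence what keeps opening all those localhost connections), these are handy:

    rpcinfo -p localhost
    lsof -i :111

rpcinfo lists the RPC services registered with portmap (nfs, mountd, status, etc.), and lsof shows which processes have the port open right now.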
{ "source": [ "https://serverfault.com/questions/23385", "https://serverfault.com", "https://serverfault.com/users/1353/" ] }
23,429
We are developing a web system and considering using OpenID. Do you think it is any better than the usual way of logging users in? If we use OpenID, users will be redirected to their chosen OpenID provider's site, which adds extra steps. Then they have to log in there and get redirected back to our site. Would users be comfortable with this? Note: it's more of a social networking site, but nothing bulky.
I love OpenID, and it's absolutely better than the "traditional" per-site credentials metaphor. I don't want more credentials to manage, and I don't want to trust J. Random site to store credentials I provide securely. I think that users will become more comfortable with it as it becomes more commonplace. Hopefully it becomes more commonplace.
{ "source": [ "https://serverfault.com/questions/23429", "https://serverfault.com", "https://serverfault.com/users/4858/" ] }
23,433
This is an old question that I've seen from time to time. My understanding of it is rather limited (having read about the differences a long time ago, but the factoid(s) involved never really stuck). As I understand it: Buffers are used by programs with active I/O operations, i.e. data waiting to be written to disk. Cache is the result of completed I/O operations, i.e. buffers that have been flushed or data read from disk to satisfy a request. Can I get a clear explanation for posterity?
The "cached" total will also include some other memory allocations, such as any tmpfs filesytems. To see this in effect try: mkdir t mount -t tmpfs none t dd if=/dev/zero of=t/zero.file bs=10240 count=10240 sync; echo 3 > /proc/sys/vm/drop_caches; free -m umount t sync; echo 3 > /proc/sys/vm/drop_caches; free -m and you will see the "cache" value drop by the 100Mb that you copied to the ram-based filesystem (assuming there was enough free RAM, you might find some of it ended up in swap if the machine is already over-committed in terms of memory use). The "sync; echo 3 > /proc/sys/vm/drop_caches" before each call to free should write anything pending in all write buffers (the sync) and clear all cached/buffered disk blocks from memory so free will only be reading other allocations in the "cached" value. The RAM used by virtual machines (such as those running under VMWare) may also be counted in free's "cached" value, as will RAM used by currently open memory-mapped files (this will vary depending on the hypervisor/version you are using and possibly between kernel versions too). So it isn't as simple as "buffers counts pending file/network writes and cached counts recently read/written blocks held in RAM to save future physical reads", though for most purposes this simpler description will do.
{ "source": [ "https://serverfault.com/questions/23433", "https://serverfault.com", "https://serverfault.com/users/1561/" ] }
23,449
Over the last year we have tried to deploy antivirus software on production Linux servers. In most cases, after a few weeks under month-end loads, applications start running slowly or do not work as they should. I have always questioned the reason for having antivirus on Linux, but it just seems to be a must-have item on the auditors' list. It is my understanding that the amount of Linux malware is small in comparison to Windows, which brings me to my question: why are Linux servers required to have antivirus in terms of SOX? We have tried 2 different antivirus products and both deployments were rolled back on critical servers. Should we just put a compensating control in place and forget about antivirus on Linux altogether?
The main reason to have anti-virus running on linux servers is usually not to protect the server itself - but to protect the end users who use the services / files on the server. Think of the server as a potential virus carrier . In order to protect the server itself you should be looking at proper firewalling and server hardening procedures, and packages like aide / tripwire and chkrootkit / rkhunter to detect compromises if they happen. We use clamav on our fileservers, mailservers, and webservers. On the fileservers (by far the largest) we configured it to scan the modified files hourly, and do a full scan over the weekend on a monthly basis. Otherwise the default configuration has not caused a noticeable performance impact.
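As an illustration only (the path, schedule and log location are made up - adapt them to your layout), a weekly scan of the user-facing shares on a file server could be as simple as a cron entry like:

    0 3 * * 0 /usr/bin/clamscan -ri /srv/shares --log=/var/log/clamav/weekly-scan.log

clamscan -r recurses into directories and -i reports only infected files; for large volumes, clamdscan against a running clamd daemon is considerably faster, and freshclam should run on its own schedule to keep the signatures current.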
{ "source": [ "https://serverfault.com/questions/23449", "https://serverfault.com", "https://serverfault.com/users/8882/" ] }
23,621
I'm running Windows 7 on a dual core, x64 AMD with 8 GB RAM. Do I even need a page file? Will removing it help or hurt performance? Would it make a difference if this is a server or a desktop? Does Windows 7 vs. Windows 2008 make a difference with a page file?
TL;DR version: Let Windows handle your memory/pagefile settings. The people at MS have spent a lot more hours thinking about these issues than most of us sysadmins. Many people seem to assume that Windows pushes data into the pagefile on demand. EG: something wants a lot of memory, and there is not enough RAM to fill the need, so Windows begins madly writing data from RAM to disk at this last minute, so that it can free up RAM for the new demands. This is incorrect. There's more going on under the hood. Generally speaking, Windows maintains a backing store , meaning that it wants to see everything that's in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what's in pagefile is also in RAM; the data was preemptively placed in pagefile to speed up new memory allocation demands. Describing the specific mechanisms involved would take many pages (see chapter 7 of Windows Internals , and note that a new edition will soon be available), but there are a few nice things to note. First, much of what's in RAM is intrinsically already on the disk - program code fetched from an executable file or a DLL for example. So this doesn't need to be written to the pagefile; Windows can simply keep track of where the bits were originally fetched from. Second, Windows keeps track of which data in RAM is most frequently used, and so clears from RAM that data which has gone longest without being accessed. Removing pagefile entirely can cause more disk thrashing. Imagine a simple scenario where some app launches and demands 80% of existing RAM. This would force current executable code out of RAM - possibly even OS code. Now every time those other apps - or the OS itself (!!) need access to that data, the OS must page them in from backing store on disk, leading to much thrashing. Because without pagefile to serve as backing store for transient data, the only things that can be paged are executables and DLLs which had inherent backing stores to start with. There are of course many resource/utilization scenarios. It is not impossible that you have one of the scenarios under which there would be no adverse effects from removing pagefile, but these are the minority. In most cases, removing or reducing pagefile will lead to reduced performance under peak-resource-utilization scenarios. Some references: Windows Internals book(s) ( 4th edition and 5th edition ) Pushing the Limits of Windows: Physical Memory Pushing the Limits of Windows: Virtual Memory Inside the Windows Vista Kernel: Part 1 Inside the Windows Vista Kernel: Part 2 Inside the Windows Vista Kernel: Part 3 Understanding Virtual Memory RAM, Virtual Memory, Pagefile and all that stuff (here's a longer version ) The Out-of-Memory Syndrome, or: Why Do I Still Need a Pagefile? dmo noted a recent Eric Lippert post which helps in the understanding of virtual memory (though is less related to the question). I'm putting it here because I suspect some people won't scroll down to other answers - but if you find it valuable, you owe dmo a vote, so use the link to get there!
{ "source": [ "https://serverfault.com/questions/23621", "https://serverfault.com", "https://serverfault.com/users/4600/" ] }
23,724
How can I locate unused IP addresses on my network? The DHCP server keeps assigning the same address and I need a different IP address to test my application with. The software would need to run on Windows.
Probably the best way is to use NMAP ( http://nmap.org/ ) in ARP Ping scan mode. The usage will be something like nmap -sP -PR 192.168.0.* (or whatever your network is). The advantage of this approach is that it uses the Address Resolution Protocol to detect if IP addresses are assigned to machines. Any machine that wants to be found on a network needs to answer the ARP, so this approach works where ping scans, broadcast pings and port scans don't (due to firewalls, OS policy, etc.).
{ "source": [ "https://serverfault.com/questions/23724", "https://serverfault.com", "https://serverfault.com/users/9107/" ] }
23,744
This is a Canonical Question about whether to outsource DNS resolution for one's own domains. I currently have my ISP providing DNS for my domain, but they impose limitations on adding records. Therefore, I am thinking about running my own DNS. Do you prefer to host your own DNS, or is it better to have your ISP do this? Are there alternatives I can look into?
I wouldn't run my own DNS server - in my case, the hosting company that hosts my website provides free DNS service. There are also alternatives, companies that do nothing but DNS hosting ( DNS Made Easy comes to mind, but there are many others) which are the kind of thing you should probably look into. The reason I wouldn't do it myself is that DNS is supposed to be fairly reliable, and unless you have a geographically distributed network of servers of your own, you'd be putting all your eggs in one basket, so to speak. Also, there are plenty of dedicated DNS servers out there, enough that you wouldn't need to start up a new one.
{ "source": [ "https://serverfault.com/questions/23744", "https://serverfault.com", "https://serverfault.com/users/2541/" ] }
23,823
What process is necessary to configure a Windows environment to allow me to use DNS CNAME to reference servers? I want to do this so that I can name my servers something like SRV001, but still have \\file point to that server, so when SRV002 replaces it I don't have to update any of the links people have, just update the DNS CNAME and everyone will get pointed to the new server.
To facilitate failover schemes, a common technique is to use DNS CNAME records (DNS Aliases) for different machine roles. Then instead of changing the Windows computername of the actual machine name, one can switch a DNS record to point to a new host. This can work on Microsoft Windows machines, but to make it work with file sharing the following configuration steps need to be taken. Outline The Problem The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) Providing browse capabilities for multiple NetBIOS names (OptionalNames) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) References 1. The Problem On Windows machines, file sharing can work via the computer name, with or without full qualification, or by the IP Address. By default, however, filesharing will not work with arbitrary DNS aliases. To enable filesharing and other Windows services to work with DNS aliases, you must make registry changes as detailed below and reboot the machine. 2. The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) This change alone will allow other machines on the network to connect to the machine using any arbitrary hostname. (However this change will not allow a machine to connect to itself via a hostname, see BackConnectionHostNames below). Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value DisableStrictNameChecking of type DWORD set to 1. Edit the registry key (on 2008 R2) HKLM\SYSTEM\CurrentControlSet\Control\Print and add a value DnsOnWire of type DWORD set to 1 Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) This change is necessary for a DNS alias to work with filesharing from a machine to find itself. This creates the Local Security Authority host names that can be referenced in an NTLM authentication request. To do this, follow these steps for all the nodes on the client computer: To the registry subkey HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 , add new Multi-String Value BackConnectionHostNames In the Value data box, type the CNAME or the DNS alias, that is used for the local shares on the computer, and then click OK. Note: Type each host name on a separate line. Providing browse capabilities for multiple NetBIOS names (OptionalNames) Allows ability to see the network alias in the network browse list. Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value OptionalNames of type Multi-String Add in a newline delimited list of names that should be registered under the NetBIOS browse entries Names should match NetBIOS conventions (i.e. not FQDN, just hostname) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) NOTE: Should not need to do this for basic functions to work, documented here for completeness. We had one situation in which the DNS alias was not working because there was an old SPN record interfering, so if other steps aren't working check if there are any stray SPN records. You must register the Kerberos service principal names (SPNs), the host name, and the fully-qualified domain name (FQDN) for all the new DNS alias (CNAME) records. 
If you do not do this, a Kerberos ticket request for a DNS alias (CNAME) record may fail and return the error code KDC_ERR_S_SPRINCIPAL_UNKNOWN . To view the Kerberos SPNs for the new DNS alias records, use the Setspn command-line tool ( setspn.exe ). The Setspn tool is included in Windows Server 2003 Support Tools. You can install Windows Server 2003 Support Tools from the Support\Tools folder of the Windows Server 2003 startup disk. How to use the tool to list all records for a computername: setspn -L computername To register the SPN for the DNS alias (CNAME) records, use the Setspn tool with the following syntax: setspn -A host/your_ALIAS_name computername setspn -A host/your_ALIAS_name.company.com computername 3. References All the Microsoft references work via: http://support.microsoft.com/kb/ Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name Covers the basics of making file sharing work properly with DNS alias records from other computers to the server computer. KB281308 Error message when you try to access a server locally by using its FQDN or its CNAME alias after you install Windows Server 2003 Service Pack 1: "Access denied" or "No network provider accepted the given network path" Covers how to make the DNS alias work with file sharing from the file server itself. KB926642 How to consolidate print servers by using DNS alias (CNAME) records in Windows Server 2003 and in Windows 2000 Server Covers more complex scenarios in which records in Active Directory may need to be updated for certain services to work properly and for browsing for such services to work properly, how to register the Kerberos service principal names (SPNs). KB870911 Distributed File System update to support consolidation roots in Windows Server 2003 Covers even more complex scenarios with DFS (discusses OptionalNames). KB829885
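If you need to script the registry part of this across several servers, the equivalent reg.exe commands would look roughly like this (fileserver.company.com is a placeholder alias - and as with any registry change, test before rolling out):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableStrictNameChecking /t REG_DWORD /d 1
    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d fileserver.company.com

BackConnectionHostNames is a multi-string value, so with reg.exe you separate multiple aliases with \0 (the default REG_MULTI_SZ separator), and a reboot is still needed for the changes to take effect.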
{ "source": [ "https://serverfault.com/questions/23823", "https://serverfault.com", "https://serverfault.com/users/9131/" ] }
23,965
How viable as a backup strategy would periodic LVM snapshots of Xen domUs be? Pros, cons, any gotchas? To me it seems like the perfect solution for a fast, brainless restore. Any investigation could take place on the broken logical volume with the domU successfully running without interruption. EDIT: Here's where I'm at now when doing full system backups: 1) LVM snapshot of the domU disk; 2) a new logical volume whose size equals the snapshot size; 3) dd if=/dev/snapshot of=/dev/new_lv; 4) disposing of the snapshot with lvremove; 5) optional verification with kpartx/mount/ls. Now I need to automate this.
LVM snapshots are meant to capture the filesystem in a frozen state. They are not meant to be a backup in and of themselves. They are, however, useful for obtaining backup images that are consistent because the frozen image cannot and will not change during the backup process. So while you won't use them directly to make long-term backups, they will be of great value in any backup process that you decide to use. There are a few steps to implement a snapshot. The first is that a new logical volume has to be allocated. The purpose of this volume is to provide an area where deltas (changes) to the filesystem are recorded. This allows the original volume to continue on without disrupting any existing read/write access. The downside to this is that the snapshot area is of a finite size, which means on a system with busy writes, it can fill up rather quickly. For volumes that have significant write activity, you will want to increase the size of your snapshot to allow enough space for all changes to be recorded. If your snapshot overflows (fills up) both the snapshot will halt and be marked as unusable. Should this happen, you will want to release your snapshot so you can get the original volume back online. Once the release is complete, you'll be able to remount the volume as read/write and make the filesystem on it available. The second thing that happens is that LVM now "swaps" the true purposes of the volumes in question. You would think that the newly allocated snapshot would be the place to look for any changes to the filesystem, after all, it's where all the writes are going to, right? No, it's the other way around. Filesystems are mounted to LVM volume names , so swapping out the name from underneath the rest of the system would be a no-no (because the snapshot uses a different name). So the solution here is simple: When you access the original volume name, it will continue to refer to the live (read/write) version of the volume you did the snapshot of. The snapshot volume you create will refer to the frozen (read-only) version of the volume you intend to back up. A little confusing at first, but it will make sense. All of this happens in less than 2 seconds. The rest of the system doesn't even notice. Unless, of course, you don't release the snapshot before it overflows... At some point you will want to release your snapshot to reclaim the space it occupies. Once the release is complete, the snapshot volume is released back into the volume, and the original remains. I do not recommend pursuing this as a long-term backup strategy. You are still hosting data on the same physical drive that can fail, and recovery of your filesystem from a drive that has failed is no backup at all. So, in a nutshell: Snapshots are good for assisting backups Snapshots are not, in and of themselves, a form of backup Snapshots do not last forever A full snapshot is not a good thing Snapshots need to be released at some point LVM is your friend, if you use it wisely.
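Applied to the domU workflow from the question, the whole cycle might look something like this (volume group and names are placeholders, and the snapshot must be big enough to absorb all writes made while the copy runs):

    lvcreate -s -L 2G -n domU1-snap /dev/vg0/domU1
    dd if=/dev/vg0/domU1-snap bs=1M | gzip > /backup/domU1-$(date +%F).img.gz
    lvremove -f /dev/vg0/domU1-snap

Quiescing or syncing the domU just before the snapshot (for example with xm sysrq domU1 s) helps you get a cleaner filesystem inside the image, and the resulting file is only a real backup once it has been copied off to other storage.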
{ "source": [ "https://serverfault.com/questions/23965", "https://serverfault.com", "https://serverfault.com/users/8851/" ] }
24,003
A friend of mine asked me today (trying to calm down an agitated customer of his) how you could find out in SQL Server 2005 which database uses how much memory (in the server's RAM, that is) at any given time. Is that possible at all? If so - how? Can you do this with built-in SQL Server tools, or do you need extra third-party options? His customer was all flustered because his dedicated SQL Server machine suddenly uses all but 200KB of its 4 GB of RAM. I don't think this is a problem, really - but since this guy claims it happened more or less overnight, he wants to know what caused this increase in memory usage... Marc
It was most likely caused by a query wanting to read more pages into the buffer pool, and the buffer pool grabbing more memory to accommodate that. This is how SQL Server is supposed to work. If the box experiences memory pressure, it will ask SQL Server to give up some memory, which it will do. The customer shouldn't be concerned. You can use the DMV sys.dm_os_buffer_descriptors to see how much of the buffer pool memory is being used by which database. This snippet will tell you how many clean and dirty (modified since last checkpoint or read from disk) pages from each database are in the buffer pool. You can modify further. SELECT (CASE WHEN ([is_modified] = 1) THEN 'Dirty' ELSE 'Clean' END) AS 'Page State', (CASE WHEN ([database_id] = 32767) THEN 'Resource Database' ELSE DB_NAME (database_id) END) AS 'Database Name', COUNT (*) AS 'Page Count' FROM sys.dm_os_buffer_descriptors GROUP BY [database_id], [is_modified] ORDER BY [database_id], [is_modified]; GO I explain this a little more in this blog post Inside the Storage Engine: What's in the buffer pool? You could also check out KB 907877 (How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005), which will give you an idea of the breakdown of the rest of SQL Server's memory usage (but not per-database). Hope this helps!
{ "source": [ "https://serverfault.com/questions/24003", "https://serverfault.com", "https://serverfault.com/users/1167/" ] }
24,121
We have a Postfix hub and I'm trying to better understand the information in the mail.log file. I use tools like qshape, pflogsumm.pl and amavis-logwatch to summarize the log files, but I still have questions about some of the elements of the raw log file. My first question is in regard to the delay entry that appears from Postfix when an email is finally delivered. I am guessing that these values are in seconds, but what does this information mean, exactly? delay=2.4, delays=0.18/0.01/1.4/0.81 Did the email take a total of 2.4 seconds to process? What is the breakdown of timings in the delays section?
Postfix feature # 20051103 added the following (from the 2.3.13 release notes): Better insight into the nature of performance bottle necks, with detailed logging of delays in various stages of message delivery. Postfix logs additional delay information as "delays=a/b/c/d" where a=time before queue manager, including message transmission; b=time in queue manager; c=connection setup time including DNS, HELO and TLS; d=message transmission time. As I would suspect, the longest delay in your setup is being caused by connection setup, DNS, and the initial SMTP conversation. Seems normal to me.
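If you want to pull these numbers out of the log yourself rather than relying on pflogsumm, a quick shell pass works — the log path below is an assumption (Debian-style /var/log/mail.log; on Red Hat-style systems it is usually /var/log/maillog):
# show the per-stage breakdown for delivered messages
grep 'status=sent' /var/log/mail.log | grep -o 'delays=[^,]*'
# rough average of the total delay, in seconds
grep -o 'delay=[0-9.]*' /var/log/mail.log | awk -F= '{sum+=$2; n++} END {if (n) print sum/n}'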
{ "source": [ "https://serverfault.com/questions/24121", "https://serverfault.com", "https://serverfault.com/users/5736/" ] }
24,400
I have some folder, say C:\foo, that I want to mount as drive M:\. In Linux I would do this with a bind mount.
You can use the subst command in Windows. subst m: c:\foo To make a persistent redirection, you can edit the registry. Add a string (REG_SZ) value to: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices Set the name of the value to the drive letter (e.g. M: ), then the data to: \??\C:\foo\foosub This method will work across logins and reboots. I tested this on Windows 2008, so it should also work on Vista, XP, 2003 and 2000.
{ "source": [ "https://serverfault.com/questions/24400", "https://serverfault.com", "https://serverfault.com/users/1682/" ] }
24,425
I'm doing some test-runs of long-running data migration scripts, over SSH. Let's say I start running a script around 4 PM; now, 6 PM rolls around, and I'm cursing myself for not doing this all in screen . Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to/ nohup a process I've already started, then why? Something to do with how parent/child proceses interact? (I won't accept a "no" answer that doesn't at least address the question of 'why' -- sorry ;) )
If you're using Bash, you can run disown -h <jobspec>. From the Bash manual: disown [-ar] [-h] [jobspec ...] — Without options, each jobspec is removed from the table of active jobs. If the -h option is given, the job is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. If jobspec is not present, and neither the -a nor -r option is supplied, the current job is used. If no jobspec is supplied, the -a option means to remove or mark all jobs; the -r option without a jobspec argument restricts operation to running jobs.
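As a sketch of how that plays out with a script that's already running in the foreground (job numbers will differ on your system): press Ctrl-Z to suspend it, then:
bg            # resume it in the background
jobs          # note the job number, e.g. [1]
disown -h %1  # mark it so the shell's SIGHUP won't reach it
After that you can log out, though any output the script still writes to the terminal may cause it trouble once the terminal is gone, so this works best for jobs that write to files.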
{ "source": [ "https://serverfault.com/questions/24425", "https://serverfault.com", "https://serverfault.com/users/9253/" ] }
24,515
I know there is a simple command for this, but how do I tell my Ubuntu server instance to request a new IP address from the DHCP server on eth0?
dhclient eth0 man page: dhclient
{ "source": [ "https://serverfault.com/questions/24515", "https://serverfault.com", "https://serverfault.com/users/9219/" ] }
24,523
I've been using Linux for a couple of years now but I still haven't figured out what the origin or meaning of some the directory names are on Unix and Unix like systems. E.g. what does etc stand for or var ? Where does the opt name come from? And while we're on the topic anyway. Can someone give a clear explanation of what directory is best used for what. I sometimes get confused where certain software is installed or what the most appropriate directory is to install software into.
For more data on the layout of Linux file-systems, look at the Filesystem Hierarchy Standard (now at version 2.3, with the beta 3.0 version deployed on most recent distros). It does explain some of where the names came from: /bin - Binaries. /boot - Files required for booting. /dev - Device files. /etc - Et cetera. The name is inherited from the earliest Unixes, which is when it became the spot to put config-files. /home - Where home directories are kept. /lib - Where code libraries are kept. /media - A more modern directory, but where removable media gets mounted. /mnt - Where temporary file-systems are mounted. /opt - Where optional add-on software is installed. This is discrete from /usr/local/ for reasons I'll get to later. /run - Where runtime variable data is kept. /sbin - Where super-binaries are stored. These usually only work with root. /srv - Stands for "serve". This directory is intended for static files that are served out. /srv/http would be for static websites, /srv/ftp for an FTP server. /tmp - Where temporary files may be stored. /usr - Another directory inherited from the Unixes of old, it stands for "UNIX System Resources". It does not stand for "user" (see the Debian Wiki). This directory should be sharable between hosts, and can be NFS mounted to multiple hosts safely. It can be mounted read-only safely. /var - Another directory inherited from the Unixes of old, it stands for "variable". This is where system data that varies may be stored. Such things as spool and cache directories may be located here. If a program needs to write to the local file-system and isn't serving that data to someone directly, it'll go here. /opt vs /usr/local: The rule of thumb I've seen is best described as: Use /usr/local for things that would normally go into /usr, or are overriding things that are already in /usr. Use /opt for things that install all in one directory, or are otherwise special.
{ "source": [ "https://serverfault.com/questions/24523", "https://serverfault.com", "https://serverfault.com/users/1205/" ] }
24,622
Any unix: I have the following cmd line which works fine. rsync -avr -e ssh /home/dir [email protected]:/home/ But I need to set it up now to rsync to a remote server that only has an FTP server on it. How do I go about that? I looked at the rsync help but quickly got lost (I don't do this stuff very often).
rsync isn't going to work for you for the reasons others have mentioned. However, lftp and ncftp both have "mirror" modes that will probably meet your needs. I use this to push stuff from my local directory to a ftp or sftp web host: lftp -c "set ftp:list-options -a; open ftp://user:[email protected]; lcd ./web; cd /web/public_html; mirror --reverse --delete --use-cache --verbose --allow-chown --allow-suid --no-umask --parallel=2 --exclude-glob .svn"
{ "source": [ "https://serverfault.com/questions/24622", "https://serverfault.com", "https://serverfault.com/users/7696/" ] }
24,762
Is there a command I can use to easily find the path to an executable? I'm looking for identify on my local machine - something like pwd, but for a command, e.g.: identify => /usr/local/bin/identify
which will search your path for the arguments you supply, it's found on just about any BSD or SysV UNIX moriarty:~ dave$ which bash true false /bin/bash /usr/bin/true /usr/bin/false
{ "source": [ "https://serverfault.com/questions/24762", "https://serverfault.com", "https://serverfault.com/users/75/" ] }
24,803
I wanted to share some knowledge I picked up when I ran into trouble using libpcap and snort to sniff a high-capacity (1 Gb/s full duplex; 2 Gb/s max aggregate) network link. The applications would sniff all traffic successfully, but would crash when the captured file size hit 2 GB. If you're having issues with creating 2 GB files, even though you have a filesystem that supports it, and/or you know the kernel supports it, this is for you.
Large file support ( >2GB ) for Linux needs to be addressed in three separate locations to ensure you do not run into the 2 GB max file size limit: Large file support enabled in the kernel A filesystem that supports large file sizes (many Linux-based filesystems do (ext3, reiserfs >= 3.6, etc)) Large file support within the libraries or applications utilized Kernel support for large files has been around since 2.4.0-test7; if you use a custom kernel, ensure you've included the large file options. Most Linux filesystems support large files, but you may have issues using a network file system. Lastly, libraries (i.e. libpcap) and applications need to be compiled with the gcc options -D _LARGEFILE64_SOURCE -D _FILE_OFFSET_BITS=64. Check to ensure the packages you are using either come precompiled with these options or roll your own. See here for more details.
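As a quick sketch of that last point (assuming a reasonably modern glibc toolchain), getconf will tell you which flags your platform wants, and you can feed them straight to gcc — the file and program names here are placeholders:
getconf LFS_CFLAGS                          # typically prints -D_FILE_OFFSET_BITS=64 on 32-bit Linux
gcc $(getconf LFS_CFLAGS) -o sniffapp sniffapp.c -lpcap
The point is just that the same flags have to be used both for the libraries (libpcap) and for the applications linked against them; mixing a large-file-aware application with a library built without the flags is where the 2 GB crashes come from.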
{ "source": [ "https://serverfault.com/questions/24803", "https://serverfault.com", "https://serverfault.com/users/3454/" ] }
24,821
I have a Windows Service that makes use of a SQL Server database. I don't have control over the installation of the service, but would like to add a dependency on the service to ensure that it starts after SQL server has started. (SQL server is running on the same machine as the service in question) Is there a tool to add a dependency or possibly editing the registry directly?
This can also be done via an elevated command prompt using the sc command. The syntax is: sc config [service name] depend= <Dependencies(separated by / (forward slash))> Note : There is a space after the equals sign, and there is not one before it. Warning : depend= parameter will overwrite existing dependencies list, not append. So for example, if ServiceA already depends on ServiceB and ServiceC, if you run depend= ServiceD , ServiceA will now depend only on ServiceD. (Thanks Matt !) Examples Dependency on one other service: sc config ServiceA depend= ServiceB Above means that ServiceA will not start until ServiceB has started. If you stop ServiceB, ServiceA will stop automatically. Dependency on multiple other services: sc config ServiceA depend= ServiceB/ServiceC/ServiceD/"Service Name With Spaces" Above means that ServiceA will not start until ServiceB, ServiceC, and ServiceD have all started. If you stop any of ServiceB, ServiceC, or ServiceD, ServiceA will stop automatically. To remove all dependencies: sc config ServiceA depend= / To list current dependencies: sc qc ServiceA
{ "source": [ "https://serverfault.com/questions/24821", "https://serverfault.com", "https://serverfault.com/users/611/" ] }
24,885
I have a couple IIS/6.0 servers that security is asking me to remove a couple of response headers that are sent to client browsers on requests. They are concerned about divulging platform information through response headers. I have removed all the HTTP-HEADERS out of the IIS configuration for the website (X-Powered-By or some such header). (I personally do know that this information can be easily found out, even if it is hidden, but it isn't my call.) Headers I want to remove: Server - Microsoft-IIS/6.0 X-AspNet-Version - 2.0.50727 I also know that ASP.NET MVC also emits its own header too, if you know how to remove it also, that would be helpful. X-AspNetMvc-Version - 1.0
Your security department wants you to do this to make the server type harder to identify. This may lessen the barrage of automated hacking tools and make it more difficult for people to break into the server. Within IIS, open the web site properties, then go to the HTTP Headers tab. Most of the X- headers can be found and removed here. This can be done for individual sites, or for the entire server (modify the properties for the Web Sites object in the tree). For the Server header, on IIS6 you can use Microsoft's URLScan tool to remove that. Port 80 Software also makes a product called ServerMask that will take care of that, and a lot more, for you. For IIS7 (and higher), you can use the URL Rewrite Module to rewrite the server header or blank its value. In web.config (at a site or the server as a whole), add this content after the URL Rewrite Module has been installed: <rewrite> <outboundRules rewriteBeforeCache="true"> <rule name="Remove Server header"> <match serverVariable="RESPONSE_Server" pattern=".+" /> <action type="Rewrite" value="" /> </rule> </outboundRules> </rewrite> You can put a custom value into the rewrite action if you'd like. This sample is sourced from this article, which also has other great information. For the MVC header, in Global.asax: MvcHandler.DisableMvcResponseHeader = true; Edited 11-12-2019 to update the IIS7 info since the TechNet blog link was no longer valid.
{ "source": [ "https://serverfault.com/questions/24885", "https://serverfault.com", "https://serverfault.com/users/1522/" ] }
24,943
I started a new service and we need to send emails to our customers (new account confirms, etc). My server is known as prod01.bidrodeo.com and resolves to 97.107.134.38. For reverse DNS, 97.107.134.38 resolves to prod01.bidrodeo.com. However, all our email addresses are in the form of [email protected]. Should I make the reverse DNS point to bidrodeo.com instead? My emails are getting delayed or rejected by certain systems and I am not sure whether my reverse DNS is set up correctly.
What you've got is "forward confirmed reverse DNS" -- that is, the named returned by reverse-look-up, when run thru a forward look-up, returns the same IP as the original IP used in the reverse look-up (see http://en.wikipedia.org/wiki/Forward_Confirmed_reverse_DNS for the more verbose description). That's a good first step. The rejection messages are your best source of information about why your emails are being rejected. It looks like prod01.bidrodeo.com isn't listed as an MX for the domain bidrodeo.com , and that's going to cause problems with some anti-spam techniques. I would consider configuring the proper TXT record for SPF (see http://old.openspf.org/dns.html ) for this server computer and MXs for your domain. That's going to help with some email reception issues. If you have examples of some of the rejections and have questions about them link them to the question.
{ "source": [ "https://serverfault.com/questions/24943", "https://serverfault.com", "https://serverfault.com/users/9346/" ] }
25,081
I have a batch script that looks like: sc stop myservice sc start myservice it errors out because sc doesn't wait till the service is stopped. How do I restart a service with a script?
The poster wants to ensure the service is stopped before trying to restart it. You can use a loop on the output of "sc query" doing something like this: :stop sc stop myservice rem cause a ~10 second sleep before checking the service state ping 127.0.0.1 -n 10 -w 1000 > nul sc query myservice | find /I "STATE" | find "STOPPED" if errorlevel 1 goto :stop goto :start :start net start | find /i "My Service">nul && goto :start sc start myservice
{ "source": [ "https://serverfault.com/questions/25081", "https://serverfault.com", "https://serverfault.com/users/1215/" ] }
25,199
I want to copy all of the files and folders from one host to another. The files on the old host sit at /var/www/html and I only have FTP access to that server, and I can't TAR all the files. A regular connection to the old host through FTP brings me to the /home/admin folder. I tried running the following command from my new server: wget -r ftp://username:[email protected] But all I get is a made-up index.html file. What's the right syntax for using wget recursively over FTP?
Try -m for --mirror wget -m ftp://username:[email protected]
{ "source": [ "https://serverfault.com/questions/25199", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
25,319
What is the state of the art in fire suppression for server rooms? What are the top priorities in choosing a good system?
Check local fire-codes. Really. In 2003 I was involved in setting up a new datacenter for work. My job was more about moving the gear than wrangling with the architects and contractors who were building it. Imagine my surprise when I found sprinkler heads in the datacenter during my first walk-through. I got about a quarter of the way to indignant outrage before my boss short-circuited me with logic. It seems that local fire code actually covers data-centers, and it mandates sprinklers. I was assured, assured I tell you, that they wouldn't go off unless the FM200 system failed to snuff the fire. I was dubious, but the fire inspectors really did mean that. Anyway, there are a series of things you need in a fire suppression system. An Emergency Power Off function: If there is a fire, the EPO will drop power to the room hard. Yes, that'll cause data damage, but so does fire. If the fire is electrical in nature, this may stop it. Also, if all the gear is de-powered, a water dump does less damage. A sealed room: You want it sealed for correct HVAC anyway. You don't want to rely on building HVAC unless the building was designed with that room in mind in the first place. Also, this allows you to use... A gas-based suppression system: FM200 is a popular choice for this. Unlike the halon systems of old, it isn't as environmentally evil. A water-based backup suppression system: If the FM-200 fails, you need to get the fire out. After the EPO has fired, and the FM-200 dumps, if there is still a fire then you need old-fashioned water. Water detection sensors in/on the floor: If you have any water pipes overhead, you need water sensors in the floor. This is more of an asset-protection thing, but if you DO have sprinklers you need water sensors to detect leaks. Also good for detecting leaks in your HVAC chillers. Call-out capabilities: If the fire system trips, you want to notify Facilities people as well as data-center staff and management. Obviously, this system should NOT rely upon assets in the data-center that's on fire. This can be hard. On water... Consider a UPS battery explosion. All those lead-acid batteries, some of which are leaking hydrogen, and kaboom. Your nice sealed room? Not so sealed anymore. When the FM-200 dumps, it does very little because the room isn't sealed. So that fire that started? It's now going to eat into the neighboring office-space. You want sprinklers in there, it's a life-safety issue. There may be more, but that's off the top of my head. The EPO can be a destructive option, so I don't know how widespread they are. But they make all kinds of sense in a room where a water dump is possible. If you have to retrofit a pre-existing room, some of the above may not be possible. As a fire-inspector once told me, to extinguish a fire you need one of three things: remove the fuel, remove the oxidizer, or cool the reaction below the combustion point. The system I laid out above does all three. The EPO removes fuel. The FM-200 partially removes oxygen, but mostly cools the reaction below the combustion point. The water dump smothers the fire due to lack of oxygen, and also cools it. For a high-value asset like a data-center, you want at least two of these. Because of this, I'd say that your top priority is to see if you can get a gas-based extinguishing system in place, as it does far less damage than water does (even with an EPO on your power-distribution-unit or main breaker panel).
A truly good system, no matter what the actual suppression technology, has a flexible notification system that allows more than just the facilities supervisor to be notified of the fire-suppression system's activating. As for hand-held extinguishers, use Class C. But be careful. Dry chemical style extinguishers blow a powder everywhere. And that powder is somewhat corrosive. In the typical high-airflow data-center, a fired extinguisher's residue can get everywhere. If the powder gets inside server intakes, it can cause higher equipment failure rates for the next several years. We've had demonstrations of extinguishing fires at our workplace, and have seen how messy it can get. When you buy your extinguishers for in-center usage, use the gas-style Class C extinguishers.
{ "source": [ "https://serverfault.com/questions/25319", "https://serverfault.com", "https://serverfault.com/users/1293/" ] }
25,406
I ran nmap on my server and found a strange port open. I'm trying to figure out if there is a way to map that port to a specific process but have no idea if there is such a tool. Any suggestions?
As well as Netstat, mentioned in other posts, the lsof command should be able to do this just fine. Just use this: lsof -i :<port number> and all of the processes should come up. I use it on OS X quite frequently. Debian Administration article for lsof
{ "source": [ "https://serverfault.com/questions/25406", "https://serverfault.com", "https://serverfault.com/users/9447/" ] }
25,423
I am trying to set up a server with multiple web applications which will all be served through Apache VirtualHosts (Apache running on the same server). My main constraint is that each web application must use SSL encryption. After googling for a while and looking at other questions on stackoverflow, I wrote the following configuration for the VirtualHost: <VirtualHost 1.2.3.4:443> ServerName host.example.org <Proxy *> Order deny,allow Allow from all </Proxy> SSLProxyEngine On ProxyRequests Off ProxyPreserveHost On ProxyPass / https://localhost:8443/ ProxyPassReverse / https://localhost:8443/ </VirtualHost> Even though https://host.example.org:8443 is accessible, https://host.example.org is not, which defeats the purpose of my virtual host configuration. Firefox complains that, even though it successfully connected to the server, the connection was interrupted. I also get the following warning in apache's error.log: proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be host.example.org for uri On the web application (a Tomcat server) the access log shows a strange access request: "?O^A^C / HTTP/1.1" 302 Following is the correct access request I get when I connect directly to https://host.example.org:8443: "GET / HTTP/1.1" 302 Finally I should also mention that the virtual host works perfectly fine when I do not use SSL. How can I make this work?
At last I found a way to make it work. First I tried Dave Cheney's suggestion, so I installed another certificate for the Apache server and redirected to Tomcat's non-SSL port (so the proxy was redirecting to http://localhost:8080/). Unfortunately it did not fully work: in the web browser, the https was transformed to http immediately upon connection. So I reverted to using https://localhost:8443/ and the final touch to make it work was to add SSLProxyEngine again. Here is the resulting VirtualHost configuration: <VirtualHost 1.2.3.4:443> ServerName host.domain.org <Proxy *> Order deny,allow Allow from all </Proxy> SSLEngine on SSLProxyEngine On SSLCertificateFile /etc/apache2/ssl/certificate.crt SSLCertificateKeyFile /etc/apache2/ssl/certificate.key ProxyRequests Off ProxyPreserveHost On ProxyPass / https://localhost:8443/ ProxyPassReverse / https://localhost:8443/ </VirtualHost>
{ "source": [ "https://serverfault.com/questions/25423", "https://serverfault.com", "https://serverfault.com/users/9452/" ] }
25,545
I'm a programmer, and I have worked for a few clients whose networks block outgoing connections on port 22. Considering that programmers often need to use port 22 for ssh, this seems like a counterproductive procedure. At best, it forces the programmers to bill the company for 3G Internet. At worst, it means they can't do their jobs effectively. Given the difficulties this creates, could an experienced sysadmin please explain the desired benefit to what seems like a lose-lose action?
I don't see that anyone has spelled out the specific risk with SSH port forwarding in detail. If you are inside a firewall and have outbound SSH access to a machine on the public internet, you can SSH to that public system and in the process create a tunnel so that people on the public internet can ssh to a system inside your network, completely bypassing the firewall. If fred is your desktop and barney is an important server at your company and wilma is public, running (on fred): ssh -R*:9000:barney:22 wilma and logging in will let an attacker ssh to port 9000 on wilma and talk to barney's SSH daemon. Your firewall never sees it as an incoming connection because the data is being passed through a connection that was originally established in the outgoing direction. It's annoying, but a completely legitimate network security policy.
{ "source": [ "https://serverfault.com/questions/25545", "https://serverfault.com", "https://serverfault.com/users/7754/" ] }
25,653
What is better for performance? A partition closer to the inside of the disk will have slower access times, and we must wait for the drive to switch between the OS and swap partitions. On the other hand, a swap partition bypasses the filesystem entirely, allowing writes to go to the disk directly, which can be faster than a file. What is the performance trade-off? How much difference does having a fixed-size swapfile make? Is it the case that it will take longer to switch to the swap partition, but performance will be better while it is on the swap partition than if it had been a swap file?
On hard disks, throughput and seeking is often faster towards the beginning of the disk, because that data is stored closer to the outer area of the disk, which has more sectors per cylinder. Thus, creating the swap at the beginning of the disk might improve performance. For a 2.6 Linux kernel, there is no performance difference between a swap partition and an unfragmented swap file. When a swap partition/file is enabled by swapon, the 2.6 kernel finds which disk blocks the swapfile is stored on , so that when it comes time to swap, it doesn't have to deal with the filesystem at all. Thus, if the swapfile isn't fragmented, it's exactly as if there were a swap partition at its same location. Or put another way, you'd get identical performance if you used a swap partition raw, or formatted it with a filesystem and then created a swapfile that filled all space, since either way on that disk there is a contiguous region used for swapping, which the kernel uses directly. So if one creates the swapfile when the filesystem is fresh (thus ensuring it's not fragmented and at the beginning of the volume), performance should be identical to having a swap partition just before the volume. Further, if one created the swapfile say in the middle of the volume, with files on either side, one might get better performance, since there's less seeking to swap. On Linux, if the swapfile is created unfragmented, and never expanded, it cannot become fragmented, at least with normal filesystems like ext3/4. It will always use the same disk blocks, which are contiguous. I conclude that about the only benefit of a dedicated swap partition is guaranteed unfragmentation when you need to expand it; if your swap will never be expanded, a file created on a fresh filesystem doesn't require an extra partition.
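For example, a minimal sketch of creating such an unfragmented swapfile on a Linux box (the size and path are placeholders; run as root):
# write the file in one pass so it comes out contiguous on a reasonably fresh filesystem
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# optional: see how many extents (fragments) the file actually occupies
filefrag /swapfile
If filefrag reports only a handful of extents, you are effectively in the same position as having a dedicated swap partition at that spot on the disk.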
{ "source": [ "https://serverfault.com/questions/25653", "https://serverfault.com", "https://serverfault.com/users/8614/" ] }
25,779
How is it possible to pipe out wget's downloaded file? If not, what alternatives should I use?
wget -O - -o /dev/null http://google.com
{ "source": [ "https://serverfault.com/questions/25779", "https://serverfault.com", "https://serverfault.com/users/3320/" ] }
25,907
Can anyone tell me what some of the implications of having two different subnets on the same switch would be if VLANs are not being used?
A host will send ARP requests for address(es) in subnet(s) local to its interface(s). Typically this would be the subnet (or subnets, if multiple addresses are assigned to interfaces) in which the interfaces' IP address (or addresses) are located. You can add routing table entries to make other subnets appear local to the host's interface(s) as well. Two hosts, each configured with a single IP address and each in a different subnet, will not make ARP requests for the other's IP address. Assuming the hosts have a gateway specified (either a default gateway or a specific gateway to the other subnet), they will make ARP requests for the applicable gateway and send traffic for the other subnet to that gateway for routing. Configuring two hosts in this manner will provide a logical isolation. Because the hosts share a broadcast domain, however, no real isolation (as there would be if you were using VLANs) is achieved. It would be easy to ARP and MAC spoof hosts in either subnet from the attached hosts. If you're doing this in a lab scenario it's a fine configuration. If you truly need isolation, though (as in a production deployment), you should use VLANs or separate physical switches.
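As an illustration of the "routing table entries" point on a Linux host (the addresses and interface name are examples only):
# treat 192.168.2.0/24 as directly reachable on eth0 — the host will then ARP for those addresses
ip route add 192.168.2.0/24 dev eth0
# or simply give the interface a second address in the other subnet
ip addr add 192.168.2.10/24 dev eth0
Either way you are only changing what the host considers on-link; it does nothing to separate the two subnets on the shared switch.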
{ "source": [ "https://serverfault.com/questions/25907", "https://serverfault.com", "https://serverfault.com/users/2561/" ] }
25,985
I would like to open up a discussion on your experience with either using cable management arms or not. It seems like a nice idea to ensure that you have enough cable slack to be able to pull a running server out of a rack without worrying about accidentally unplugging a cable, but how many times is this really done? It seems like I'm still taking down a machine for maintenance if I need to get inside so I'm not sure of the benefit. It also seems to me that the cable management arms restrict the air flow coming out of the server and the rack as a whole. I'd like some thoughts on what others are doing either with or without the cable management arms.
Coming from a webhosting env. We dealt with hundreds of servers some of which were always moving based on contract changes. I don't care for them and prefer velcro instead. IMO, if you're going to pull a server from a rack to do something inside the case it should be off. Hot swappable drives are all accessible from the front. It was one more thing I didn't need stuffed into the back of the rack. It added to install time, and removal time. It made it harder to replace a bad cable in a hurry. It blocked access to the label on the cables near the jack. It made it hard to move a server and cables if say I wanted to move it higher up and shorten them. It added to any heat problems we might have had.
{ "source": [ "https://serverfault.com/questions/25985", "https://serverfault.com", "https://serverfault.com/users/2359/" ] }
26,303
I've noticed that the sudoers file and cron config files act in a special way compared to other config files on Linux. They need to be edited with a special wrapper rather than any text editor. Why is this?
You use visudo mostly to prevent from breaking your system. Visudo runs checks on your changes to make sure you didn't mess anything up. If you did mess something up, you could completely wreck your ability to fix it or do anything requiring privileges without rebooting into a rescue mode. The man page describes this . visudo edits the sudoers file in a safe fashion, analogous to vipw(8). visudo locks the sudoers file against multiple simultaneous edits, provides basic sanity checks, and checks for parse errors. If the sudoers file is currently being edited you will receive a message to try again later.
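A few ways you might invoke it (the editor override only takes effect if the sudoers policy permits env_editor or lists that editor):
visudo                 # edit /etc/sudoers with locking and a parse check on save
EDITOR=nano visudo     # use a different editor for this one invocation
visudo -c              # just syntax-check the current sudoers file without editing
For cron, the analogous wrapper is crontab -e, which similarly refuses to install a crontab that fails its sanity check.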
{ "source": [ "https://serverfault.com/questions/26303", "https://serverfault.com", "https://serverfault.com/users/1655/" ] }
26,405
I've recently started using a new desktop PC with Ubuntu Linux installed. However, the terminal beeps annoyingly. E.g. if I'm at the start of the line and I press Backspace, it'll beep to tell me that there are no characters to delete. Or if I am trying to tab-complete and there are no completions for it, then it'll beep. How do I turn this off?
As the pc speaker is annoying altogether (at least, I think it is), I just go modprobe -r pcspkr and add it to /etc/modprobe.d/blacklist.conf like this: blacklist pcspkr No more beeps. Ever. Does not work for bells through /dev/snd/*, obviously
{ "source": [ "https://serverfault.com/questions/26405", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
26,462
I'm migrating my current half-size rack to a full-size rack and want to take the opportunity to reorganize and sort our spaghetti-hell of ethernet cables. What system do you use for organising your cables? Do you use any tracking software? Do you physically label the cables? What are you identifying when you label each end? Mac address? Port number? Asset number? What do you use to label them? I was looking at a hand held labeler, but the wrap around laser printer sheets might work. The Brady ID PAL seems good, but it's pricey. Ideas?
Here's what I do Label each cable I have a brother P-Touch labeler that I use. Each cable gets a label on both ends. This is because if I unplug something from a switch, I want to know where to plug it back into, and vice versa on the server end. There are two methods that you can use to label your cables with a generic labeler. You can run the label along the cable, so that it can be read easily, or you can wrap it around the cable so that it meets itself and looks like a tag. The former is easier to read, the latter is either harder to read or uses twice as much label since you type the word twice to make sure it's read. Long labels on mine get the "along the cable" treatment, and shorter ones get the tag. You can also buy a specific cable labeler which provides plastic sleeves. I've never used it, so I can't offer any advice. Color code your cables I run each machine with bonded network cards. This means that I'm using both NICs in each server, and they go to different switches. I have a red switch and a blue switch. All of the eth0's go to red switch using red cables (and the cables are run to the right , and all eth1's go to the blue switch using blue cables (and the cables are run to the left ). My network uplink cables are an off color, like yellow, so that they stand out. In addition, my racks have redundant power. I've got a vertical PDU on each side. The power cables plugged into the right side all have a ring of electrical tape matching the color of the side, again, red for right, blue for left. This makes sure that I don't overload the circuit accidentally if things go to hell in a hurry. Buy your cables This may ruffle some feathers. Some people say you should cut cables exactly to length so that there is no excess. I say "I'm not perfect, and some of my crimp jobs may not last as long as molded ends", and I don't want to find out at 3 in the morning some day in the future. So I buy in bulk. When I'm first planning a rack build, I determine where, in relation to the switches, my equipment will be. Then I buy cables in groups based on that distance. When the time comes for cable management, I work with bundles of cable, grouping them by physical proximity (which also groups them by length, since I planned this out beforehand). I use velcro zip ties to bind the cables together, and also to make larger groups out of smaller bundles. Don't use plastic zip ties on anything that you could see yourself replacing. Even if they re-open, the plastic will eventually wear down and not latch any more. Keep power cables as far from ethernet cables as possible Power cables, especially clumps of power cables, cause ElectroMagnetic Interference (EMI aka radio frequency interference (or RFI)) on any surrounding cables, including CAT-* cables (unless they're shielded, but if you're using STP cables in your rack, you're probably doing it wrong). Run your power cables away from the CAT5/6. And if you must bring them close, try to do it at right angles. Edit I forgot! I also did a HOWTO on this a long time ago: http://www.standalone-sysadmin.com/blog/2008/07/howto-server-cable-management/
{ "source": [ "https://serverfault.com/questions/26462", "https://serverfault.com", "https://serverfault.com/users/9663/" ] }
26,509
When I have dircolors defined life is full of... color. When I pipe ls through less to scroll around I lose the colors. Any suggestions?
Most likely your ls is aliased to ls --color=auto , which tells ls to only use colors when its output is a tty. If you do ls --color (which is morally equivalent to ls --color=always ), that will force it to turn on colors. You could also change your alias to do that, but I wouldn't really call that a good idea. Better to make a different alias with --color . less needs -R too, which causes it to output the raw control characters.
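Putting those two pieces together, the pipeline you probably want looks like this (the alias name is just a suggestion):
ls --color=always -l | less -R
alias lsl='ls --color=always -l | less -R'
--color=always forces ls to emit the escape codes even though its output isn't a tty, and -R tells less to pass those codes through raw instead of showing them as ESC sequences.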
{ "source": [ "https://serverfault.com/questions/26509", "https://serverfault.com", "https://serverfault.com/users/8199/" ] }
26,564
On the Windows platform, what native options do I have to check whether a port (3306, for example) on my local machine (as in localhost) is being blocked?
Since you are on a Windows machine, these things can be done: Execute the following command and look for a ":3306" listener (you did not mention UDP/TCP). This will confirm there is something running on the port. netstat -a -n After this, if you are expecting incoming connections on this port and feel that the firewall may be blocking them, you can turn on Windows Firewall logging and check the logs for dropped connections: go to Windows Firewall, Advanced settings; click the Settings button next to "Local Area Connection"; select "Log dropped packets"; note the log file location (if not present, define one); click OK. Now, when the connection attempt is made (assuming you know when this is done), look at the log file for a drop on port 3306. If this is seen, you will want to add an exception for this port. There is one more command to check the firewall state (updated for Windows 7 users -- as noted by Nick below -- use netsh advfirewall firewall): netsh firewall show state This will list the blocked ports as well as active listening ports with application associations. This command will dump the Windows Firewall configuration detail: netsh firewall show config If you have an active block (incoming connections are being dropped by the firewall) after you start logging, you should see that in the log. If you are running an application/service that is listening on 3306, the firewall config should show it to be Enabled. If this is not seen, you have probably missed adding an exception with the firewall to allow this app/service. Finally, port 3306 is typically used for MySQL. So, I presume you are running a MySQL server on this Windows machine. You should therefore see a listener for 3306 accepting incoming connections. If you do not see that, you need to work with your application (MySQL) to get that started first.
{ "source": [ "https://serverfault.com/questions/26564", "https://serverfault.com", "https://serverfault.com/users/9676/" ] }
26,954
I have a Git repository on a staging server which multiple developers need to be able to pull to. git-init seems to have a flag very close to what I'm looking for: --shared, except I'd like multiple people to pull to that repository, as well. The git-clone's --shared flag does something entirely different. What's the easiest way to change an existing repository's permissions?
Permissions are a pest. Basically, you need to make sure that all of those developers can write to everything in the git repo. Skip down to The New-Wave Solution for the superior method of granting a group of developers write capability. The Standard Solution If you put all the developers in a specially-created group, you can, in principle, just do: chgrp -R <whatever group> gitrepo chmod -R g+swX gitrepo Then change the umask for the users to 002 , so that new files get created with group-writable permissions. The problems with this are legion; if you’re on a distro that assumes a umask of 022 (such as having a common users group that includes everyone by default), this can open up security problems elsewhere. And sooner or later, something is going to screw up your carefully crafted permissions scheme, putting the repo out of action until you get root access and fix it up (i.e., re-running the above commands). The New-Wave Solution A superior solution—though less well understood, and which requires a bit more OS/tool support—is to use POSIX extended attributes. I’ve only come to this area fairly recently, so my knowledge here isn’t as hot as it could be. But basically, an extended ACL is the ability to set permissions on more than just the 3 default slots (user/group/other). So once again, create your group, then run: setfacl -R -m g:<whatever group>:rwX gitrepo find gitrepo -type d | xargs setfacl -R -m d:g:<whatever group>:rwX This sets up the extended ACL for the group so that the group members can read/write/access whatever files are already there (the first line); then, also tell all existing directories that new files should have this same ACL applied (the second line). Hope that gets you on your way.
{ "source": [ "https://serverfault.com/questions/26954", "https://serverfault.com", "https://serverfault.com/users/8511/" ] }
27,044
I'd like to be able to connect to an existing X display, so I can access my work environment and everything I left open from home. I vaguely remember something about using x11vnc in the past. But the package does not exist for Fedora 11, so I am thinking there is some built-in method now. NOTE: I connect to my work machine through a VPN, so password protection is all I need security-wise.
Use x11vnc. It will attach to a running session and let you share the desktop. If you run it as root to connect to an xdm session, you will need to do some research into Xauth, as it can be a bit fiddly to set up. Edit to add: Karl Runge no longer appears to be maintaining the original x11vnc; however, development is continuing on GitHub. Or you could do as suggested below by @ivan-talalaev and use x0vncserver. Another advantage of this server is that it supports a lot of the advanced VNC features used by UltraVNC, including large bitmap caching and file transfer.
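A starting point for invoking it might look like the following — the display number, .Xauthority path, and password file are all assumptions you will need to adapt to your session:
x11vnc -storepasswd                      # writes a VNC password to ~/.vnc/passwd
x11vnc -display :0 -auth /home/you/.Xauthority -rfbauth ~/.vnc/passwd -forever
Since you are already coming in over a VPN, a simple password may be enough; otherwise add -localhost and tunnel the VNC port over SSH.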
{ "source": [ "https://serverfault.com/questions/27044", "https://serverfault.com", "https://serverfault.com/users/3258/" ] }
27,134
I have servers hosted at a hosting provider and they also host the DNS records for my domain names. Now I want to add subdomains that are resolved by my own DNS service. So, for example: the hosting provider's name server knows the IP address for econemon.com; one of my servers knows the IP address for ftp.econemon.com; and unknown or undefined subdomains should be routed to the same IP as the parent domain. On failure of my DNS service, it would be great if the requests all went to the IP address that is associated with econemon.com, but I'm not sure how that should work. Now, I've read through the Wikipedia articles on DNS to dust off my knowledge, but the part that leaves me confused is: how does a client know which server to ask for the IP address for ftp.econemon.com? Does it get that information from the hoster? If so, do I have to register the subdomain there (and what would I need my name server for then)?
If you want to delegate authority for a section of your domain you are going to need to add another level to the hierarchy. When a recursive DNS server asks for the address for ftp.econemon.com it is going to go through a number of steps. First it is going to ask one of the root servers which will reply with the name servers for the .com domain (this step will likely be cached and only done infrequently). It will then ask the .com servers and they will respond with the name servers for the econemon.com domain. Finally it will ask these servers for the address record for ftp.econemon.com. In theory you could simply add ftp.econemon.com as an NS entry in the parent zone e.g: services NS ns1.econemon.com. ns1 A 192.0.2.1 And then create ftp.econemon.com as a zone in your name server. But if you do it this way you will have to create a new zone per server. What you probably want to do is ask your provider to add a delegated subdomain. e.g.: services NS ns1.services.econemon.com. services NS ns2.services.econemon.com. ns1.services A 192.0.2.1 ns2.services A 192.0.2.2 You can then add services.econemon.com as a zone on your name servers and simply add new entries as you need them in this single zone. If you really need the short names too it shouldn't be too much trouble to get CNAME records added such that ftp.econemon.com has a canonical name of ftp.services.econemon.com which leaves you able to change the IP address whenever you want to and allows users to use a short name. ftp.econemon.com. CNAME ftp.services.econemon.com.
{ "source": [ "https://serverfault.com/questions/27134", "https://serverfault.com", "https://serverfault.com/users/200/" ] }
27,248
What is a process handle and what can we know about a running process through the "handle count" property in a task explorer?
A process handle is an integer value that identifies a process to Windows. The Win32 API calls them a HANDLE; handles to windows are called HWND and handles to modules HMODULE. Threads inside processes have a thread handle, and files and other resources (such as registry keys) have handles also. The handle count you see in Task Manager is "the number of object handles in the process's object table". In effect, this is the sum of all handles that this process has open. If you do not release your handle to a resource, other people may not be able to access it - this is why you sometimes cannot delete a file because Windows claims it is in use (check out this article on handle leaks and Process Explorer). Also, there is a per-process limit on various handles. Here is an example. In general, if you are opening handles and not closing them, it is analogous to leaking memory. You should figure out what is going on and fix it. There is a good CodeProject article on handle leaks.
{ "source": [ "https://serverfault.com/questions/27248", "https://serverfault.com", "https://serverfault.com/users/4113/" ] }
27,332
Every so often I run into a file that I need to take ownership of. I normally use cacls for changing NTFS permissions, but it doesn't seem to do ownership. Under *nix I would run something like chown me:me <file>. Is there a Windows equivalent to chown?
subinacl is a Windows sysadmin's power tool for doing just about everything related to ownership and ACLs. You can change the ownership to anyone other than just you (you can't do this with the GUI). subinacl /file test.txt /setowner=domain\foo This lets you set the owner to any user you like, without having to be an administrator (as I believe takeown.exe requires).
{ "source": [ "https://serverfault.com/questions/27332", "https://serverfault.com", "https://serverfault.com/users/9249/" ] }
27,337
I'm trying to run ImageMagick from batch via exec() or passthru() I've already changed security settings for cmd.exe and ImageMagick folder. These are my current settings. C:\ImageMagick-6.5.3-Q8 BUILTIN\Administrators:(OI)(CI)F COMPUTERNAME\IUSR_myusername:(OI)(CI)R NT AUTHORITY\SYSTEM:(OI)(CI)R BUILTIN\Users:(OI)(CI)R C:\WINDOWS\system32\cmd.exe COMPUTERNAME\TelnetClients:R COMPUTERNAME\psaadm:R COMPUTERNAME\psacln:R COMPUTERNAME\psaserv:R NT AUTHORITY\INTERACTIVE:R NT AUTHORITY\SERVICE:R NT AUTHORITY\SYSTEM:F BUILTIN\Administrators:F COMPUTERNAME\IUSR_myusername:R After doing this this is the actual script that I'm trying to run: error_reporting(E_ALL); define("ABSOLUTE_PATH", "C:\\Inetpub\\vhosts\\myusername.com\\httpdocs\\online"); define("IMAGE_MAGICK_CONVERT", "C:\ImageMagick-6.5.3-Q8\convert.exe"); echo(IMAGE_MAGICK_CONVERT . " " . ABSOLUTE_PATH . "\\convert\\myfile1.jpg " . ABSOLUTE_PATH . "\\convert\\myfile1.pdf"); echo exec("cmd /c " . IMAGE_MAGICK_CONVERT . " " . ABSOLUTE_PATH . "\\convert\\myfile1.jpg " . ABSOLUTE_PATH . "\\convert\\myfile1.pdf 2>&1"); passthru(IMAGE_MAGICK_CONVERT . " " . ABSOLUTE_PATH . "\\convert\\myfile1.jpg " . ABSOLUTE_PATH . "\\convert\\myfile1.pdf 2>&1"); So I'm still receiving a Access is denied. Please help out...
subinacl is a Windows sysadmin's power tool for doing everything to do with ownership and ACLs. You can change the ownership to anyone other than just you ( you can't do this with the GUI ). subinacl /file test.txt /setowner=domain\foo This lets you set the permission to any user you like, without having to be an administrator (as I believe takeown.exe requires).
{ "source": [ "https://serverfault.com/questions/27337", "https://serverfault.com", "https://serverfault.com/users/162116/" ] }
27,502
We are starting to do some project and application roadmapping, and I am thinking about OpenOffice (and StarOffice) as a replacement for OfficeXP and Office 2000, which is on the bulk of our PCs. Roughly 120 users and PCs; OE Windows XP Pro on virtually all desktops; Office 2000 and Office XP, properly licensed (knock on wood); no Software Assurance; Windows Server 2003 and Active Directory; MS Exchange 2003 - not sure yet about Exchange 2008; Outlook 2003 on top of the lower Office installs; "newish" but aging PC inventory - very little change in the last 12 months; Windows SharePoint Server for the intranet - its use is growing. How much should I consider the Open Source alternatives? What sort of things should I be concerned about? What hidden issues and second-order consequences should I be aware of? I am looking forward to hearing pros and cons, and any other comments.
Every year or two, I install OpenOffice and the problem is the same - documents don't format/translate quite right to/from their MS Office counterparts. It doesn't seem to be overly wacko-paranoid to observe that Microsoft is good at stamping out competition. All they need to do is tweak things just a bit in each service pack & patch to make sure things don't translate quite right, and they continue to lock me in, because I don't have the resources to handle the additional support requests. I think this is surmountable if: your users are extremely flexible most documents leave your office in a different format (say, PDF) you don't do a lot of document sharing outside the organization Otherwise, I'd say the business disruption is more costly than the licenses (unfortunately).
{ "source": [ "https://serverfault.com/questions/27502", "https://serverfault.com", "https://serverfault.com/users/846/" ] }