Columns: source_id (int64, range 1 to 4.64M); question (string, length 0 to 28.4k); response (string, length 0 to 28.8k); metadata (dict)
27,590
What command can I run on different distros to tell whether the system is 32-bit or 64-bit?
"uname -m" is the command you're looking for. You can run both 32bit and 64bit on modern intel and AMD processors, so "uname -p" is not going to help you (in addition it mostly doesn't work these days, this here core2 thinks the response to "uname -p" is "unknown"). Looking for existence of /usr/lib64 (as has been suggested) is not going to help you either, since some hardware and system related packages will install both 32bit and 64bit libraries to be on the safe side. On my (debian) system the fakeroot package does just that. As for the output of "uname -m", if it's i386 or i686 it's 32bit, if it's x86_64 (or alpha, or ia64 or some other 64bit architecture I've never seen :) it's 64bit. (as a fun aside, my 64bit FreeBSD server returns "amd64", which might be a bit strange for an intel quadcore but totally understandable if you know the history of the x86 64bit architecture)
{ "source": [ "https://serverfault.com/questions/27590", "https://serverfault.com", "https://serverfault.com/users/2098/" ] }
27,708
I have seen some sites offering 'Malware University', training classes on getting rid of malware. Do you think that updating your malware removal skills (or arsenal) is necessary from time to time? How do you become more effective at dealing with this growing, very complicated, threat?
You don't "clean malware". You level the machines and start over. Anything less is a disservice to your Customer and asking for trouble. As far as dealing with the "threat", you don't allow users to run with Administrator-level accounts (on Windows), and you don't install untrusted software (inasmuch as is possible). It seems fairly simple to me. My Customers and I do not have a problem with malicious software.
{ "source": [ "https://serverfault.com/questions/27708", "https://serverfault.com", "https://serverfault.com/users/1821/" ] }
27,726
It seems like there's a lot of disagreement in mindsets when it comes to installing rackmount servers. There have been threads discussing cable arms and other rackmount accessories, but I'm curious: Do you leave an empty rack unit between your servers when you install them? Why or why not? Do you have any empirical evidence to support your ideas? Is anyone aware of a study which proves conclusively whether one is better or not?
If your servers use front to back flow-through cooling, as most rack mounted servers do, leaving gaps can actually hurt cooling. You don't want the cold air to have any way to get to the hot aisle except through the server itself. If you need to leave gaps (for power concerns, floor weight issues, etc) you should use blanking panels so air can't pass between the servers.
{ "source": [ "https://serverfault.com/questions/27726", "https://serverfault.com", "https://serverfault.com/users/4392/" ] }
27,757
Can anyone recommend a fail2ban-like tool for a Windows OS? I've got a couple of Windows Media servers that get hammered with brute force authentication attempts. I would like to plug these authentication failures into some kind of blocking tool.
I know of no tool that will do this "out of the box". I wrote a script to do something like this with failed OpenSSH logons on Windows, but I can't share it with you because it "belongs" to the Customer for whom I wrote it. Having said that, it was a simple VBScript program that had an event log sink to watch for new failed logons and, if enough happened in a time window, add an IP route (using the "route" command) to route traffic to the offending IP address to a "MS Loopback Adapter" on the system. For other types of logs, it would be a fairly trivial matter to write. Since I didn't have IPtables on Windows, the loopback adapter seemed like the next best thing. (You can't do a "route x.x.x.x mask 255.255.255.255 127.0.0.1" on Windows-- you need an adapter to route the traffic to, because the 127.0.0.1 loopback isn't a "real" interface on Windows.) (If you want something like this written, contact me out-of-band and we can discuss the specifics of such an arrangement.) Edit: I decided to write something to do this and I've released it under a Free license.
{ "source": [ "https://serverfault.com/questions/27757", "https://serverfault.com", "https://serverfault.com/users/9382/" ] }
27,829
I have been enjoying reading ServerFault for a while and I have come across quite a few topics on Hadoop. I have had a little trouble finding out what it does from a global point of view. So my question is quite simple: What is Hadoop? What does it do? What is it used for? Why does it kick ass? Edit: If anyone happens to have demonstrations/explanations of use cases in which Hadoop was used, that would be fantastic.
Straight from the horse's mouth: Hadoop is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce, where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the distributed file system are designed so that node failures are automatically handled by the framework. Map/Reduce is a programming paradigm that was made popular by Google, wherein a task is divided into small portions and distributed to a large number of nodes for processing (map), and the results are then summarized into the final answer (reduce). Google and Yahoo use this for their search engine technology, among other things. Hadoop is a generic framework for implementing this kind of processing scheme. As for why it kicks ass, mostly because it provides neat features such as fault tolerance and lets you bring together pretty much any kind of hardware to do the processing. It also scales extremely well, provided your problem fits the paradigm. You can read all about it on the website. As for some examples, Paul gave a few, but here are a few more that are not so web-centric: Rendering a 3D film. The "map" step distributes the geometry for every frame to a different node, the nodes render it, and the rendered frames are recombined in the "reduce" step. Computing the energy in a system in a molecular model. Each frame of a system trajectory is distributed to a node in the "map" step. The nodes compute the energy for each frame, and then the results are summarized in the "reduce" step. Essentially the model works very well for a problem that can be broken down into similar discrete computations that are completely independent, and can be recombined to produce a final result.
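As a toy illustration of the map/reduce idea using nothing but shell tools (a word count over a placeholder file called corpus.txt; Hadoop itself distributes these stages across many nodes rather than one pipeline):
# "map": split the input into one word per line; "shuffle": sort groups identical
# words together; "reduce": uniq -c summarizes each group into a count
tr -s '[:space:]' '\n' < corpus.txt | sort | uniq -c | sort -rn | head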
{ "source": [ "https://serverfault.com/questions/27829", "https://serverfault.com", "https://serverfault.com/users/3431/" ] }
27,887
Is there a way to sort ps output by process start time, so the newest are either at the top or bottom? On Linux? On SysV5? On Mac?
This should work on Linux and SysV5:
ps -ef --sort=start_time
{ "source": [ "https://serverfault.com/questions/27887", "https://serverfault.com", "https://serverfault.com/users/3023/" ] }
28,041
I'd like an Apache Web Server I have installed at home to listen on port 80 and port 8080. I've added Listen 8080 to httpd.conf and restarted the Apache services but the server doesn't seem to be listening on 8080. Punching in http://localhost:8080 times out and doesn't display my index.html but http://localhost will display my index.html. How do I make it listen to 80 and 8080?
A standard Debian install of apache will have the following fragment of configuration:
Listen 80
<IfModule mod_ssl.c>
    # SSL name based virtual hosts are not yet supported, therefore no
    # NameVirtualHost statement here
    Listen 443
</IfModule>
This is telling apache to listen on port 80 and to listen on port 443 if mod_ssl is configured. In your case you'd want:
Listen 80
Listen 8080
You need to make sure you run a restart, not a reload operation on apache for it to pay any attention to new Listen directives. The safest thing to do is to stop apache, make sure it is dead, and start it again. If this configuration does not work, check the log files for any error messages. You could use "netstat -lep --tcp" to see if there is anything listening on port 8080. Finally, if everything else fails, try running apache under strace to see if it's trying to bind to that port and failing, but not logging the problem.
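A quick way to confirm the change after a full restart (a sketch; it assumes the apachectl wrapper and curl are available — on Debian the wrapper is typically called apache2ctl):
apachectl configtest && apachectl restart
curl -I http://localhost:8080/   # should return response headers rather than timing out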
{ "source": [ "https://serverfault.com/questions/28041", "https://serverfault.com", "https://serverfault.com/users/51755/" ] }
28,399
Can VMWare ESX or ESXi be installed and used inside a virtual machine? It can be installed inside VMWare Workstation or Server, but then it doesn't work; the main symptoms are: It runs REALLY slowly. It lets you create VMs, but when powering them up it gives an error stating "You may not power on a virtual machine in a virtual machine".
VMWare ESX or ESXi CAN run inside a virtual machine, provided certain prerequisites are satisfied. This kind of setup is of course completely useless (and totally unsupported) in a production environment, but can be very useful for two purposes:
- Testing or studying ESX or ESXi if you don't have a physical server available.
- Testing or studying the whole Virtual Infrastructure if you don't have at least two servers and a SAN.
Prerequisites:
- You need some physical resources. In order to run ESX or ESXi in a VM, the VM needs at least 1.5 GB of memory, two VCPUs and enough disk space for the server itself and for the VMs you will run inside it.
- You absolutely need a physical CPU with native virtualization support (Intel VT or AMD-V).
- You need to run VMWare Workstation 6.5, VMWare Server 2 or VMWare Fusion 5 on the physical host. Previous versions can't successfully run ESX or ESXi in a VM.
- A 64-bit OS on the physical host is useful but not required.
Setup:
- Enable native virtualization support for your CPU in the motherboard BIOS (it's often not enabled by default).
- Install your preferred virtualization software. I've tested everything successfully using VMWare Workstation 6.5.2 on a Windows XP x64 host, but it should work with VMWare Server 2.0 and/or Linux hosts, too.
- Create a custom VM using these settings:
    Hardware compatibility level: latest
    Guest operating system: other 64-bit
    Virtual CPUs: at least 2
    Memory: at least 1.5 GB
    Networking: Host-only or NAT
    SCSI adapter: LSI Logic
    Virtual disk type: SCSI
    Virtual disks: as you wish; I suggest using at least two virtual disks, a 10-GB one for installing the system and another one on which to create a datastore. The space should be pre-allocated.
    Remove floppy, sound card, USB controller, etc. Leave only networking and storage.
    CPU execution mode: Intel VT-x or AMD-V (very important).
- Manually edit the VMX file of the virtual machine you created, setting the following parameters:
    guestOS = "vmkernel"
    monitor_control.vt32 = "TRUE"
    monitor_control.restrict_backdoor = "TRUE"
- Start the VM and install ESX or ESXi from the installation ISO image.
- Configure the networking to allow the ESX or ESXi virtual server to talk with the host.
Usage:
- Use your web browser to connect to your virtual server's IP address and download the VI Client.
- Install the VI Client on the host and connect to the virtual ESX/ESXi server.
- Create a VM as you wish and power it up. If everything is done correctly, the VM will start. If it complains about not being able to power on a VM inside a VM, then there is an error with the monitor_control.restrict_backdoor parameter (or you're using an old version of VMWare Workstation/Server).
- Enjoy :-)
{ "source": [ "https://serverfault.com/questions/28399", "https://serverfault.com", "https://serverfault.com/users/6352/" ] }
28,520
I'm writing a monitoring service that uses WMI to get information from remote machines. Having local admin rights on all these machines is not possible for political reasons. Is this possible? What permissions/rights does my user require for this?
The following works on Windows 2003 R2 SP2 and Windows Server 2012 R2:
1. Add the user(s) in question to the Performance Monitor Users group.
2. Under Services and Applications, bring up the properties dialog of WMI Control (or run wmimgmt.msc). In the Security tab, highlight Root/CIMV2, click Security; add Performance Monitor Users and enable the options Enable Account and Remote Enable.
3. Run dcomcnfg. At Component Services > Computers > My Computer, in the COM Security tab of the Properties dialog click "Edit Limits" for both Access Permissions and Launch and Activation Permissions. Add Performance Monitor Users and allow Remote Access, Remote Launch, and Remote Activation.
4. Select Windows Management Instrumentation under Component Services > Computers > My Computer > DCOM Config and give Remote Launch and Remote Activation privileges to the Performance Monitor Users group.
Notes:
- As an alternative to steps 3 and 4, one can assign the user to the Distributed COM Users group (tested on Windows Server 2012 R2).
- If the user needs access to all the namespaces, you can set the permissions from step 2 at the Root level, and recurse them to the sub-namespaces via the Advanced window in Security.
{ "source": [ "https://serverfault.com/questions/28520", "https://serverfault.com", "https://serverfault.com/users/1780/" ] }
28,521
I am considering Tux as the web server for a new CPAN mirror I'm building. I've got it running and it's very fast but there is one big catch: how am I supposed to rotate the log file? The log file is configurable, and I am using the default value of /var/log/tux. One option would be copy-and-truncate; e.g.:
cp /var/log/tux /var/log/tux.1
cat /dev/null > /var/log/tux
The logrotate application can do that for me but since the log file is binary I am concerned that this might lead to corruption at some point. If it only corrupts one entry I can live with it - my fear is that the whole log file could be lost. Anyone with experience care to make a suggestion? Thanks
{ "source": [ "https://serverfault.com/questions/28521", "https://serverfault.com", "https://serverfault.com/users/6037/" ] }
28,915
I've recently graduated and have got a job at a fast-growing dedicated/VPS hosting company as a junior sysadmin. I'd like to know any tips or advice you more senior sysadmins have, e.g. what mistakes did you make when you were younger, certification, how to stay organised. Thanks!
My best piece of advice is to remember that ignorance is not a sin. You don't know everything; nobody does. Read the documentation, ask for help. It is far better to spend some time and possibly a few shreds of credibility with your peers to learn before you screw up, than to leap in and really mess something up. Everybody screws up sometime. Just don't be the one who screws up because they didn't RTFM or ask around first.
{ "source": [ "https://serverfault.com/questions/28915", "https://serverfault.com", "https://serverfault.com/users/10138/" ] }
28,957
When setting up iptables you can name the port ssh which will use port 22. Is there a list of all the named ports? Specifically I need ssh, http, https and mysql.
On your installation, the list will be based on the file /etc/services
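For the specific services mentioned in the question, a quick lookup might be (assuming a standard /etc/services layout of service name followed by port/protocol):
grep -E '^(ssh|http|https|mysql)[[:space:]]' /etc/services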
{ "source": [ "https://serverfault.com/questions/28957", "https://serverfault.com", "https://serverfault.com/users/10146/" ] }
29,193
I tried to look for benchmark on the performances of various filesystems with MySQL InnoDB but couldn't find any. My database workload is the typical web-based OLTP, about 90% read, 10% write. Random IO. Among popular filesystems such as ext3, ext4, xfs, jfs, Reiserfs, Reiser4, etc. which one do you think is the best for MySQL?
How much do you value the data? Seriously, each filesystem has its own tradeoffs. Before I go much further, I am a big fan of XFS and Reiser both, although I often run Ext3. So there isn't a real filesystem bias at work here, just letting you know... If the filesystem is little more than a container for you, then go with whatever provides you with the best access times. If the data is of any significant value, you will want to avoid XFS. Why? Because if it can't recover a portion of a file that is journaled it will zero out the blocks and make the data unrecoverable. This issue is fixed in Linux kernel 2.6.22. ReiserFS is a great filesystem, provided that it never crashes hard. The journal recovery works fine, but if for some reason you lose your partition info, or the core blocks of the filesystem are blown away, you may have a quandary if there are multiple ReiserFS partitions on a disk - because the recovery mechanism basically scans the entire disk, sector by sector, looking for what it "thinks" is the start of the filesystem. If you have three partitions with ReiserFS but only one is blown, you can imagine the chaos this will cause as the recovery process stitches together a Frankenstein mess from the other two systems... Ext3 is "slow", in a "I have 32,000 files and it takes time to find them all running ls" kinda way. If you're going to have thousands of small temporary tables everywhere, you will have a wee bit of grief. Newer versions now include an index option that dramatically cuts down the directory traversal but it can still be painful. I've never used JFS. I can only comment that every review of it I've ever read has been something along the lines of "solid, but not the fastest kid on the block". It may merit investigation. Enough of the Cons, let's look at the Pros:
XFS:
- screams with enormous files, fast recovery time
- very fast directory search
- primitives for freezing and unfreezing the filesystem for dumping
ReiserFS:
- highly optimal small-file access
- packs several small files into the same blocks, conserving filesystem space
- fast recovery, rivals XFS recovery times
Ext3:
- tried and true, based on well-tested Ext2 code
- lots of tools around to work with it
- can be re-mounted as Ext2 in a pinch for recovery
- can be both shrunk and expanded (other filesystems can only be expanded)
- newest versions can be expanded "live" (if you're that daring)
So you see, each has its own quirks. The question is, which is the least quirky for you?
{ "source": [ "https://serverfault.com/questions/29193", "https://serverfault.com", "https://serverfault.com/users/10217/" ] }
29,262
I run an Ubuntu desktop with a bunch of virtual servers in Virtual Box to test stuff out, etc. In the past I have also been connecting to other kinds of remote VPS Linux boxes. Currently my .ssh/known_hosts file has a whole bunch of keys in it, most of which are not being used any more. I want to clean up my .ssh/known_hosts file, but how do I know which key belongs to what host? I.e. how do I know which keys I can safely remove and which ones I should leave alone?
To find out which entry is for a known hostname in known_hosts:
# ssh-keygen -H -F <hostname or IP address>
To delete a single entry from known_hosts:
# ssh-keygen -R <hostname or IP address>
{ "source": [ "https://serverfault.com/questions/29262", "https://serverfault.com", "https://serverfault.com/users/1205/" ] }
29,288
Sometimes I hear that you shouldn't plug (UPS brand X / any UPS) into (power strip brand X / any power strip) because of some interaction leading to poorly conditioned power, reduced battery life, massive explosions spattering the room with battery acid, and so on. Sometimes I hear that it's the power strip that you shouldn't plug into the UPS. What I haven't gotten is a clear idea of how reliable these recommendations are or how generally/specifically they apply. Can anyone speak precisely and non-urban-legendfully on these UPS and power strip interactions, if there are in fact ones worth thinking about?
Having had some 'discussions' with the inspector that comes around our offices once a year to make sure we're not being bad, I have a better idea as to what code says about this. Paraphrased from said inspector: Thou shalt not plug a power-strip into another power-strip Nor any multi-outlet device into another multi-outlet device, for it is a fire-hazard, and therefore bad. Thy UPS counts as a multi-outlet device Therefore thou shalt not plug thy UPS into thy power strip, nor plug thy power-strip into thy UPS, for it is a fire-hazard, and therefore bad. A multi-outlet device shall only be permitted to be attached to another multi-outlet device if it is hard-wired into the first multi-outlet device Which renders it a single multi-outlet device. The inspector wasn't kind enough to elucidate what, exactly, constitutes the 'fire-hazard'. We get dinged on the power-strip in power-strip commandment every other year or so. This necessitated the purchase of a bunch of long-tail power-strips (power strips on a 15' cord), and a few long extension cords with 3 outlets on the ends of them. Edit: Regarding rackmount UPS's and PDU's. I believe they're OK so long as the PDU plugs into a locking outlet of some kind, such as an L5-20 or L5-30.
{ "source": [ "https://serverfault.com/questions/29288", "https://serverfault.com", "https://serverfault.com/users/1736/" ] }
29,513
In an XP Pro workstation, is there a way to start the native Windows VPN client and open/close a connection from the command line so it can be scripted in a batch file?
Yes, if the VPN connection is called "My VPN" then:
rasdial "My VPN"
will dial the connection. Helpfully it sets errorlevel to the RAS error code if it fails to connect, so your script can detect a connection failure. If you need to supply a username and password instead of using the saved credentials use:
rasdial "My VPN" username password
To disconnect a connection use:
rasdial "My VPN" /disconnect
JR
{ "source": [ "https://serverfault.com/questions/29513", "https://serverfault.com", "https://serverfault.com/users/2302/" ] }
29,521
I'd like to use basic HTTP authentication to keep people out of our dev site instance since it is unfortunately exposed to the wild internet. However, in IIS7, the only authentication modes listed are Forms, Anonymous and Impersonation. Where did the "Basic Authentication" module go, and how can I get it back?
You might have to install the basic authentication module for IIS.
For Vista it is: Control Panel -> Programs -> Turn Windows features on or off
For Server 2008: Server Manager -> Roles -> Web Server -> Add Role Services
Then in the treeview it is: Internet Information Services -> World Wide Web Services -> Security -> Basic Authentication
Click the checkbox and install. Then you should be able to see the basic authentication option.
{ "source": [ "https://serverfault.com/questions/29521", "https://serverfault.com", "https://serverfault.com/users/6321/" ] }
29,529
I am trying to get Openfire to install on an Ubuntu virtual machine, however upon completing the web based installer, I am unable to login to the admin panel. So far I have:
- downloaded the Debian installer
- installed using stock options
- added a database and built the structure using the supplied SQL file
- completed the web based installer
I am now trying to login using username admin and my password, however I constantly get a wrong username/password error. There is a record generated in the MySQL database showing the admin user with an encrypted password, and changing to an unencoded password doesn't work. What is the problem here?
I had the same issue; it seems to be a little-known and undocumented bug. Try rebooting the server after you do the install. Worked for me.
{ "source": [ "https://serverfault.com/questions/29529", "https://serverfault.com", "https://serverfault.com/users/55368/" ] }
29,534
Recently I have been looking into open source honeynet technology, mainly Honeyd and Potemkin. Was wondering if anyone had experience working with them or similar technology and how you would start deploying these decoy servers. (Novice System Admin here).
{ "source": [ "https://serverfault.com/questions/29534", "https://serverfault.com", "https://serverfault.com/users/10279/" ] }
29,536
I've set up a live chat using ejabberd. It's working pretty well but I'd like to be able to round-robin chat sessions to different operators depending on who is already in a chat and who is free to talk. To implement this I need some way to update a user's presence based on whether they are currently in a private chat. I'm currently using mod_shared_roster to advertise presence but it only reports whether a user is available. This really needs to be done server side because I will need to rely on different IM clients depending on the operators' system.
{ "source": [ "https://serverfault.com/questions/29536", "https://serverfault.com", "https://serverfault.com/users/9159/" ] }
29,633
The IT Manager may be leaving, and it's possible that the parting of ways may not be completely civil. I wouldn't really expect any malice but just in case, what do I check, change or lock down? Examples: Admin passwords Wireless passwords VPN access rules Router / Firewall settings
Obviously the physical security needs to be addressed, but after that... Assuming you don't have a documented procedure for when employees leave (environment generic as you don't mention which platforms you run):
- Start with perimeter security. Change all passwords on any perimeter equipment like routers, firewalls, VPNs, etc. Then lock out any accounts the IT manager had, as well as review all of the remaining accounts for any that are no longer used, and any that don't belong (in case he added a secondary).
- Email - remove his account or at least disable logins to it depending on your company policy.
- Then go through your host security. All machines and directory services should have his account disabled and/or removed. (Removed is preferred, but you might need to audit them in case he has anything running that is valid under them first). Again, also review for any accounts that are no longer used, as well as any that don't belong. Disable/remove those as well. If you use ssh keys you should change them on admin/root accounts.
- Shared accounts, if you have any, should all have their passwords changed. You should also look at removing shared accounts or disabling interactive login on them as a general practice.
- Application accounts... don't forget to change passwords, or disable/remove accounts from all applications he had access to as well, starting with admin access accounts.
- Logging... make sure you have good logging in place for account usage and monitor it closely to look for any suspicious activity.
- Backups... make sure your backups are current, and secure (preferably offsite). Make sure you've done the same as above with your backup systems as far as accounts.
- Documents... try as much as you can to identify, request from him if possible, and copy somewhere secure, all of his documentation.
- If you have any services outsourced (email, spam filtering, hosting of any type, etc.), make sure to do all of the above that are appropriate with those services as well.
- As you do all of this, document it, so that you have a procedure in place for future terminations.
- Also, if you use any colocation services, make sure to have his name removed from the access list and ticket submission list. It'd be wise to do the same for any other vendors where he was the primary person handling things, so that he can't cancel or mess with services you get from those vendors, and also so that vendors know who to contact for renewals, problems, etc., which can save you some headaches when something the IT manager didn't document happens.
I'm sure there's more I missed, but that's off the top of my head.
{ "source": [ "https://serverfault.com/questions/29633", "https://serverfault.com", "https://serverfault.com/users/2277/" ] }
29,665
What do you do first thing in the morning? My work day currently starts at 6AM:
- Exercise (before work, yes I am one of THOSE people.)
- Check Email
- Drink Coffee
- Check Servers, Services, and Zenoss
- Create a Morning Systems Report
- Drink Coffee
- Write down things that need to be taken care of
- Read RSS Feeds
What's your Morning Ritual?
while [ 1 ]
do
    answer(BlackberryDailyAlarm)
    if [ $kidAwake -eq 1 ]
    then
        eatBreakfastWithKid()
        wakeUpWife()
        shower()
        makeLunch()
    else
        shower()
        eatBreakfast()
        makeLunch()
    fi
    walkToWork(4blocks)       # See also: man exercise
    login ernied
    while evolution
    do
        case $message in
            VeryImportant )  convertToTask() ;;
            Important )      markAsUnread() ;;
            *****SPAM***** ) fixSpamFilter() ;;
            Customer )       replyOrFixNow() ;;
        esac
    done
    opera munin.mywork.com
    if [ $notEnoughSleep -eq 1 ]
    then
        getCoffee()
    else
        avoidCoffee()         # See also: man internals
    fi
    if [ $dayOfWeek -eq 1 ]   # Monday system maintenance - see wiki
    then
        for i in $DebianServers
        do
            apt-get update
            apt-get upgrade
        done
        checkBackups()        # make sure they're actually getting done/are valid
        checkSpamUpdates()
        checkVirusUpdates()
        doDomainRenwals()
    fi
    doUnreadMail()
    doWork($evolutionTaskList)
    while [ $eatLunch -eq 1 ]
    do
        readServerFault()
    done
    doWork($evolutionTaskList)
    goHome(haveFunWithkid(), eat(), sleep())
done
{ "source": [ "https://serverfault.com/questions/29665", "https://serverfault.com", "https://serverfault.com/users/9332/" ] }
29,731
For a new system administrator it is important to learn from those that have experience. What is the thing you know now you wish you knew when you first started? Alternately, what is the piece of advice you would give to a new system administrator?
- SLOW DOWN... we're in a hurry. If you do your job right NOBODY will notice a thing.
- It's never going to be 9-5 or a 40 hour work week.
- You can never have too many backups. Test them!
- That "impossible" scenario you didn't bother planning for WILL HAPPEN.
- Hackers (actually script kiddies) hack things because they can, not because they targeted YOU specifically. It's not personal, you were just there. So don't ask "Why me?"
- Document everything! Even if it's just for your own sanity. A private Wiki goes a looooong way. If you can't bother to do that then at least keep a "never ending" text document nicely formatted on your computer... then back that up too! Just because you know something "inside and out" today doesn't mean you'll remember what the hell you did 6 months from now.
- If something goes wrong in the evening and you think you might be in for a long night... WRITE YOUR PLAN DOWN. You'd be surprised how "mush brained" you're going to be at 3 a.m. and suddenly you're going to say, "Now, did I actually do X or was I just thinking I needed to do X next? Oh crap!" (This WILL happen to you, especially if your recovery process takes a few hours.)
- The weakest aspect of any computer/network is almost always going to be the HUMAN ELEMENT. It doesn't matter how secure you make the computer/program/network. Some moron is always going to try and use "bob" and "bob" as his username and password then write it on a post-it note and stick it to his monitor... which happens to be facing an outside window... which just happens to be outside of a bus stop. (I wish I was making this up) ;-)
- Relax! I can almost GUARANTEE you that no matter what bad thing(s) happen to you someone, somewhere is having a WORSE day than you. Be happy you're not THAT guy and stay GROUNDED. If you can stay CALM when everyone around you is freaking out... you'll find that after a short time THEY calm down too. Don't participate in mass hysteria. ;-)
- BONUS TIP: Wives/Girlfriends are NOT stupid. I once pulled up a mail server log with "tail -f maillog.log" for my significant other to watch a dictionary attack on a mail server. I explained to her that this level of attack is "normal" and is almost constant. I then explained that when my phone goes off at 3 a.m. it's because we're facing something about 10x's BIGGER. You'd be surprised how sympathetic/understanding they can be when they can actually SEE the crap we have to deal with daily. SHARE THE EXPERIENCE!
{ "source": [ "https://serverfault.com/questions/29731", "https://serverfault.com", "https://serverfault.com/users/4517/" ] }
29,777
Will adding a second SPF record mess up my DNS, or will it be like adding an extra nameserver? (i.e. it only helps, not hurts)
From RFC 4408, section 3.1.2 (Multiple DNS Records):
"A domain name MUST NOT have multiple records that would cause an authorization check to select more than one record. See Section 4.5 for the selection rules."
I'm not entirely sure what you want to achieve by adding a second record, but if it is something like adding extra hosts/networks as valid/invalid senders, you can probably do everything you want to with just the one record - just add whatever you wanted to the end of the line.
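For example, instead of publishing two records, the extra senders can be folded into the existing one; a single combined record might look like this (the domain, network and include target below are placeholders, not a recommendation):
example.com. IN TXT "v=spf1 mx ip4:192.0.2.0/24 include:_spf.example.net ~all"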
{ "source": [ "https://serverfault.com/questions/29777", "https://serverfault.com", "https://serverfault.com/users/1325/" ] }
29,788
I am trying to add a Linux service to the autostart at boot time through chkconfig --add <servicename> and I get a message saying "service <servicename> does not support chkconfig". I am using Red Hat Enterprise 4. The script I am trying to add to the autostart at boot time is the following:
#!/bin/sh
soffice_start() {
    if [ -x /opt/openoffice.org2.4/program/soffice ]; then
        echo "Starting Open Office as a Service"
        #echo " soffice -headless -accept=socket,port=8100;urp;StarOffice.ServiceManager -nofirststartwizard"
        /opt/openoffice.org2.4/program/soffice -headless -accept="socket,host=0.0.0.0,port=8100;urp;StarOffice.ServiceManager" -nofirststartwizard &
    else
        echo "Error: Could not find the soffice program. Cannot Start SOffice."
    fi
}

soffice_stop() {
    if [ -x /usr/bin/killall ]; then
        echo "Stopping Openoffice"
        /usr/bin/killall soffice 2> /dev/null
    else
        echo "Error: Could not find killall. Cannot Stop soffice."
    fi
}

case "$1" in
    'start')
        soffice_start
        ;;
    'stop')
        soffice_stop
        sleep 2
        ;;
    'restart')
        soffice_stop
        sleep 5
        soffice_start
        ;;
    *)
        if [ -x /usr/bin/basename ]; then
            echo "usage: `/usr/bin/basename $0` start|stop|restart"
        else
            echo "usage: $0 start|stop|restart"
        fi
esac
The script must have 2 lines:
# chkconfig: <levels> <start> <stop>
# description: <some description>
For example:
# chkconfig: 345 99 01
# description: some startup script
345 - levels to configure
99 - startup order
01 - stop order
After you add the above headers you can run chkconfig --add <service>.
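Applied to the script in the question, the top of the file might look like this (the run levels and ordering numbers here are just an illustration; pick values that fit your setup):
#!/bin/sh
# chkconfig: 345 99 01
# description: Starts OpenOffice.org in headless mode as a service
After saving that, chkconfig --add <servicename> should no longer complain.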
{ "source": [ "https://serverfault.com/questions/29788", "https://serverfault.com", "https://serverfault.com/users/4310/" ] }
29,887
How do I determine the block size of an ext3 partition on Linux?
# tune2fs -l /dev/sda1 | grep -i 'block size'
Block size:               1024
Replace /dev/sda1 with the partition you want to check.
{ "source": [ "https://serverfault.com/questions/29887", "https://serverfault.com", "https://serverfault.com/users/1545/" ] }
29,889
It's well-known that you should never fsck a mounted partition. I can understand how this could easily lead to corruption if the filesystem is written to by fsck (e.g., the -a option is used), but why can't read-only checks be run on mounted disks?
From: http://linux.die.net/man/8/fsck.ext3 "Note that in general it is not safe to run e2fsck on mounted filesystems. The only exception is if the -n option is specified, and -c , -l , or -L options are not specified. However, even if it is safe to do so, the results printed by e2fsck are not valid if the filesystem is mounted. If e2fsck asks whether or not you should check a filesystem which is mounted, the only correct answer is ''no''. Only experts who really know what they are doing should consider answering this question in any other way. "
{ "source": [ "https://serverfault.com/questions/29889", "https://serverfault.com", "https://serverfault.com/users/1545/" ] }
29,948
What's the difference between a user's home path and their profile path in Windows Server 2003?
The profile path is the location of the user's user profile. The "Home" path may be the same, but it could be set to another location (via the user account properties). The home path is a bit of a vestigial thing. It dates back to Windows NT, prior to the 'My Documents' directory. I believe the original intent was to provide a "Home Directory" similar to Unix environments, but the user profile ended up (with the advent of "My Documents") being the default storage location for files (which led to the whole "redirect folders out of the user profile" functionality that came on after W2K). "Folder Redirection" can use the legacy home path setting as the destination for redirecting the "My Documents" path. This can be handy if you have groups of users who need their "My Documents" path redirected to various server computers, as you can set a different home path on a user-for-user basis. (You can do the same thing w/ multiple group policy objects, or with a single folder redirection policy based on group membership, too.)
{ "source": [ "https://serverfault.com/questions/29948", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
30,002
As developers we sometimes need querying LDAP. Do you know useful tools for this task? edit: I don't mean in code, I mean utility/tool (command-line or gui, mostly gui) for just to look/confirm data, or if possible to alter...
Apache Directory Studio
It's not exactly lightweight, but it is an excellent tool for doing ad hoc inspection and modifications to an LDAP database.
{ "source": [ "https://serverfault.com/questions/30002", "https://serverfault.com", "https://serverfault.com/users/240/" ] }
30,011
I have a registered domain name (thisexample.net), which I forward to a dynamic URL at DynDns (bounce.dnsalias.net) as my internet access comes over cable and doesn't provide a static IP address. My router (openwrt) forwards port 80 to an apache server on the LAN. This works for machines seeking the URL thisexample.net from outside the LAN, and for machines inside the LAN going to the server's LAN address (eg, 192.168.1.xxx). However, LAN machines going to the URL thisexample.net (or www.thisexample.net) bring up the router's admin page, as if they had been addressed 192.168.1.1. I want to experiment with the subdomains, such as beta.thisexample.net. As I understand it, one way to set them up is to use apache's VirtualHost directive with the address name -- but the LAN boxes won't be able to reach such subdomain pages as addressing the domain doesn't get them to the server in the first place. Why aren't LAN boxes able to use the URL address? How can I configure things so they can? Is this a poor approach to experimenting with subdomains in the first place?
{ "source": [ "https://serverfault.com/questions/30011", "https://serverfault.com", "https://serverfault.com/users/9835/" ] }
30,311
I'm testing a new web server setup which is having a couple of issues. Essentially, we have a web server, where the code uses the remote IP for some interesting things, and also some apache directories secured down to some certain IP's (our office etc). However, we've just chucked this behind ha_proxy so we can look at adding some more app servers, but now the remote IP is always coming through as the proxy ip, not the real remote user. This means we can't get to some locations, and our app is behaving a little oddly where user IP is important. Our config is as follows:
global
    maxconn 4096
    pidfile /var/run/haproxy.pid
    daemon

defaults
    mode http
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen farm xxx.xxx.xxx.xxx:80
    mode http
    cookie GALAXY insert
    balance roundrobin
    option httpclose
    option forwardfor
    stats enable
    stats auth username:userpass
    server app1 xxx.xxx.xxx.xxx:80 maxconn 1 check
Quoted from the HAProxy doc at haproxy.1wt.eu:
"If the application needs to log the original client's IP, use the "forwardfor" option which will add an "X-Forwarded-For" header with the original client's IP address. You must also use "httpclose" to ensure that you will rewrite every request and not only the first one of each session:
option httpclose
option forwardfor"
It is stated that the application must read the X-Forwarded-For HTTP header to know the client IP address. Seems like the only way to go in your case.
Updated for HAProxy 1.4: HAProxy 1.4 introduced a new mode with "option http-server-close". It still closes the connection to the server but maintains keep-alive towards the client if possible and used. On most setups, you probably want to use that as it helps with latency on the single high-latency part of your connection (between HAProxy and the client).
option http-server-close
option forwardfor
{ "source": [ "https://serverfault.com/questions/30311", "https://serverfault.com", "https://serverfault.com/users/9967/" ] }
30,605
I'm considering setting up replication of our mysql db to be able to have local slaves in each of our branch offices, while having the master in the main office, to improve application performance (significantly) at our branch offices. The db itself isn't that large (<1gb) but I'm wondering, considering 200-300 record updates/min tops:
- how fast is replication? (assuming, first, a 5mb generic dsl connection, faster if necessary - trying to keep costs as low as possible but the money is there for more)
- Are whole tables replicated in batches?
- Is the replication done, on demand, as each record in a table is updated (from the docs, I think I'm seeing that it's configurable)?
Notes:
- I'm thinking 1 master, 2 slaves (2 branch offices for now) setup as in the docs here, except that it's an app, not a web client.
- Any update done on the master needs to replicate to the other slaves in <10 mins.
- All of this assumes that I can get our ORM (DevExpress XPO) happy with the concept of reading from the slave and writing to the master.
MySQL replication happens as close to real-time as possible, as limited by disk and network I/O. The slaves open a socket to the master, which is kept open. When a transaction occurs on the master, it gets recorded in the binlog, and is simply replayed on the slave(s). If the socket between master and slave is interrupted, the binlog is replayed for the slave upon the next successful connection. Multi-master replication does the same thing, but in both directions. Some basic calculations will assist you in making a better determination of your bandwidth needs. Average transaction size * number of slaves * updates/minute = bandwidth needed Hope this helps.
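To put rough numbers on that formula with the figures from the question (assuming, say, 1 KB per replicated transaction, which you'd want to verify against your own binlog): 1 KB x 2 slaves x 300 updates/min = 600 KB/min, or roughly 10 KB/s (~80 kbit/s). That is comfortably within a 5 Mb DSL line, so the 10-minute replication window would mostly depend on latency and slave apply speed rather than bandwidth.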
{ "source": [ "https://serverfault.com/questions/30605", "https://serverfault.com", "https://serverfault.com/users/5380/" ] }
30,705
I heard recently that Nginx has added caching to its reverse proxy feature. I looked around but couldn't find much info about it. I want to set up Nginx as a caching reverse proxy in front of Apache/Django: to have Nginx proxy requests for some (but not all) dynamic pages to Apache, then cache the generated pages and serve subsequent requests for those pages from cache. Ideally I'd want to invalidate cache in 2 ways: Set an expiration date on the cached item To explicitly invalidate the cached item. E.g. if my Django backend has updated certain data, I'd want to tell Nginx to invalidate the cache of the affected pages Is it possible to set Nginx to do that? How?
I don't think that there is a way to explicitly invalidate cached items, but here is an example of how to do the rest. Update: As mentioned by Piotr in another answer, there is a cache purge module that you can use. You can also force a refresh of a cached item using nginx's proxy_cache_bypass - see Cherian's answer for more information. In this configuration, items that aren't cached will be retrieved from example.net and stored. The cached versions will be served up to future clients until they are no longer valid (60 minutes). Your Cache-Control and Expires HTTP headers will be honored, so if you want to explicitly set an expiration date, you can do that by setting the correct headers in whatever you are proxying to. There are lots of parameters that you can tune - see the nginx Proxy module documentation for more information about all of this including details on the meaning of the different settings/parameters: http://nginx.org/r/proxy_cache_path
http {
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        location / {
            proxy_pass http://example.net;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
{ "source": [ "https://serverfault.com/questions/30705", "https://serverfault.com", "https://serverfault.com/users/10217/" ] }
30,737
Frequently I know the name of the command line program that I need but I don't know the name of the package that provides the program. How do I find the name of the package that contains the program that I need? On RPM based systems they have the whatprovides option rpm -q --whatprovides /usr/X11R6/bin/xclock which will find the correct package. Is there anything similar for Debian based systems?
If the package is installed, you want dpkg -S /path/to/file . If the package isn't installed, then use the apt-file utility ( apt-file update; apt-file search /path/to/file ).
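For instance, to mirror the rpm example from the question (this assumes apt-file is installed and its cache is current; the exact package name in the output may vary by release):
apt-file update
apt-file search bin/xclock     # typically reports something like: x11-apps: /usr/bin/xclock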
{ "source": [ "https://serverfault.com/questions/30737", "https://serverfault.com", "https://serverfault.com/users/702/" ] }
30,738
What is the command that can be used to get the IP address and the names of the computers that are located in the same network? I am running Windows
nmap -sn 192.168.1.0/24
Put your network number in it. It'll do a ping-sweep of your network and report the reverse DNS's of the up machines. Won't find down machines.
C:> for /L %N in (1,1,254) do @nslookup 192.168.0.%N >> names.txt
That'll do a reverse lookup of every IP in your subnet.
{ "source": [ "https://serverfault.com/questions/30738", "https://serverfault.com", "https://serverfault.com/users/1605/" ] }
30,796
In the line of this question on StackOverflow and the completely different crowd we have here, I wonder: what are your reasons to disable SELinux (assuming most people still do)? Would you like to keep it enabled? What anomalies have you experienced by leaving SELinux on? Apart from Oracle, what other vendors give trouble supporting systems with SELinux enabled? Bonus question: Has anyone managed to get Oracle running on RHEL5 with SELinux in enforcing targeted mode? I mean, strict would be awesome, but I don't think that is even remotely possible yet, so let's stay with targeted first ;-)
RedHat turns SELinux on by default because it's safer. Nearly every vendor that uses Redhat-derived products turns SELinux off because they don't want to have to put in the time (and therefore money) to figure out why the thing doesn't work. The Redhat/Fedora people have put in a massive amount of time and effort making SELinux more of a viable option in the Enterprise, but not a lot of other organizations really care about your security. (They care about their security and the security reputation of their product, which is a totally different thing.) If you can make it work, then go for it. If you can't, then don't expect a lot of assistance from the vendors out there. You can probably get help from the Redhat/Fedora guys, from the selinux mailing lists and #selinux channel on freenode. But from companies like Oracle -- well, SELinux doesn't really factor into their business plan.
{ "source": [ "https://serverfault.com/questions/30796", "https://serverfault.com", "https://serverfault.com/users/3950/" ] }
30,994
Let's say I have 20 users logged on my Linux box. How can I know how much memory each of them is using?
You could try using smem (see ELC2009: Visualizing memory usage with smem for more information). In particular, sudo smem -u should give you the information you want.
{ "source": [ "https://serverfault.com/questions/30994", "https://serverfault.com", "https://serverfault.com/users/5735/" ] }
31,077
How do I find all the folders that are shared on Windows Server 2008?
You can open a command shell and type:
net share
{ "source": [ "https://serverfault.com/questions/31077", "https://serverfault.com", "https://serverfault.com/users/10564/" ] }
31,170
What command can you use to find the Gateway IP Address (ie. home router address) for eth0 in Linux? I need to get the IP address from a command line app to use in a shell script.
To print out only the default gateway IP:
route -n | grep 'UG[ \t]' | awk '{print $2}'
To print out route information on all interfaces:
route -n
or
netstat -rn
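On newer systems where the net-tools route command may not be installed, a rough equivalent using iproute2 (a sketch - if there are multiple default routes it will print one line per route):
ip route show default | awk '{print $3}'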
{ "source": [ "https://serverfault.com/questions/31170", "https://serverfault.com", "https://serverfault.com/users/3312/" ] }
31,285
Does Windows 7 support running Hyper-V Manager?
It already does. You can get the RSAT tools from here. They include the Hyper-V manager for Windows 7. After installing the above download, go to Windows Features in Control Panel and choose the 'Role Administration Tools'. Select Hyper-V Tools from there. Edit: Link updated to current version.
{ "source": [ "https://serverfault.com/questions/31285", "https://serverfault.com", "https://serverfault.com/users/1190/" ] }
31,470
Here in the UK we are UTC+1. I set the time using 'date'. However it keeps resetting back to standard UTC, I'm guessing via a NTP time server. I've tried setting the timezone with tzselect but it does not change the time, it remains at UTC instead of local time. Therefore TZ='Europe/London' will be used. Local time is now: Thu Jun 25 10:57:48 BST 2009. Universal Time is now: Thu Jun 25 09:57:48 UTC 2009. The above output is correct but the time does not actually get changed. I either need to disable auto updating of time or ideally setting the timezone correctly.
You can also do:
dpkg-reconfigure tzdata
It will then allow you to choose your timezone.
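If you want to script it rather than answer the interactive prompts, something along these lines usually works on Debian/Ubuntu-style systems (run as root; treat it as a sketch and verify against your release):
echo "Europe/London" > /etc/timezone
dpkg-reconfigure -f noninteractive tzdata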
{ "source": [ "https://serverfault.com/questions/31470", "https://serverfault.com", "https://serverfault.com/users/4803/" ] }
31,495
I'd like to start a collection of good, free cheat sheet resources for system administrators. Please add your favorite ones. From the Wikipedia "cheat sheet" article : In more general usage, a "cheat sheet" is any short (one or two page) reference to terms, commands, or symbols where the user is expected to understand the use of such terms etc but not necessarily to have memorized all of them.
I add my own favorite: Cheat Sheets on PacketLife.com has some very nice ones on network technology topics. Cheat sheets are in PDF format. You are welcome to use and redistribute them as you please, so long as they remain intact and unmodified. Currently there are six categories:
- Protocols: BGP, EIGRP, First Hop Redundancy, IEEE 802.11 Wireless, IEEE 802.1X, IPsec, IPv4 Multicast, IPv6, IS-IS, OSPF, Spanning Tree
- Applications: tcpdump, Wireshark Display Filters
- Reference: Common Ports, IP Access Lists, Subnetting
- Syntax: Markdown, MediaWiki
- Technologies: MPLS, Quality of Service, VLANs
- Miscellaneous: Cisco IOS Versions, Physical Terminations
Examples: Common Ports and IPv6 (links to PDF files)
{ "source": [ "https://serverfault.com/questions/31495", "https://serverfault.com", "https://serverfault.com/users/45/" ] }
31,554
Is there a way to find out the progress of a DBCC SHRINKFILE statement? Here is how I was running it:
dbcc shrinkfile('main_data', 250000)
I am running the above statement on both SQL Server 2005 and 2008. [UPDATE] Here is the query I ran to check the progress and the text that's being run:
select T.text, R.Status, R.Command, DatabaseName = db_name(R.database_id), R.cpu_time, R.total_elapsed_time, R.percent_complete
from sys.dm_exec_requests R
cross apply sys.dm_exec_sql_text(R.sql_handle) T
Have you checked percent_complete in sys.dm_exec_requests?
{ "source": [ "https://serverfault.com/questions/31554", "https://serverfault.com", "https://serverfault.com/users/1224/" ] }
31,575
There is something I don't get, one of my web apps has a small form that allows you to enter you name and email address to "subscribe" to a user list for a site I maintain. The site is very low traffic, and only useful to a very small number of people that live in a very small town..it would be of no interest to anyone else. Yet, every day, sometimes many times per day, someone (or a bot) is entering fictitious names and probably bogus email addresses into the form. This form is not even active on my site anymore, it just happens to still exist as an orphaned page on my IIS directory (which tells me that someone is searching for these types of forms via Google, because there is no path to this form if you come in thru the default page. This is not a big hassle for me, I can solve the problem with captcha, but what I don't understand is for what purpose would someone setup a bot to repeatedly fill in forms? I figure there must be a reason, but for the life of me don't know why? What am I missing?
These are bots trying to send you spam, or worse, trying to exploit your contact form to send spam to others. For example, there are several well-known exploits for the PHP mail() command commonly used by contact forms that can cause the TO address you put in your code to be overwritten by POSTed data, if you aren't careful how you handle the data coming in from your form. Some ways to prevent this:
- Use a captcha. For a low traffic site, even a static captcha (an image that just has the same text in it every time) will work very well.
- Check the HTTP referrer to make sure the POST is coming from your contact form. Many bots will spoof this though, so it isn't terribly useful.
- Use hidden form fields to try to trick the bots. For example, create a field called phone_number on your form, and hide it with CSS in your stylesheet (display: none). A bot will normally fill in that field (they usually fill in all fields to avoid possible required-field validation errors) but a user would not, since it's hidden. So on POST you check for a value in that field and SILENTLY fail to send the message if there is a value in it. I find that this method alone is highly effective.
{ "source": [ "https://serverfault.com/questions/31575", "https://serverfault.com", "https://serverfault.com/users/10314/" ] }
31,608
For example, in Bash, I can do this: emacs foo.txt & Is there any equivalent in Windows? I can't seem to figure out a way to do this with the windows version of emacs.
The command to launch programs from the command-line in Windows is "start":
Starts a separate window to run a specified program or command.
START ["title"] [/D path] [/I] [/MIN] [/MAX] [/SEPARATE | /SHARED]
      [/LOW | /NORMAL | /HIGH | /REALTIME | /ABOVENORMAL | /BELOWNORMAL]
      [/AFFINITY <hex affinity>] [/WAIT] [/B] [command/program] [parameters]
    "title"      Title to display in window title bar.
    path         Starting directory
    B            Start application without creating a new window. The application has ^C handling ignored. Unless the application enables ^C processing, ^Break is the only way to interrupt the application
    I            The new environment will be the original environment passed to the cmd.exe and not the current environment.
    MIN          Start window minimized
    MAX          Start window maximized
    SEPARATE     Start 16-bit Windows program in separate memory space
    SHARED       Start 16-bit Windows program in shared memory space
    LOW          Start application in the IDLE priority class
    NORMAL       Start application in the NORMAL priority class
    HIGH         Start application in the HIGH priority class
    REALTIME     Start application in the REALTIME priority class
    ABOVENORMAL  Start application in the ABOVENORMAL priority class
    BELOWNORMAL  Start application in the BELOWNORMAL priority class
    AFFINITY     The new application will have the specified processor affinity mask, expressed as a hexadecimal number.
    WAIT         Start application and wait for it to terminate
You may want to use the MIN option to start a program minimized
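So for the emacs example from the question, something like start emacs foo.txt returns you to the prompt immediately, much like the trailing & in bash. Two caveats worth noting: if the program path itself needs quotes, give start an empty title first (e.g. start "" "C:\path\to\emacs.exe" foo.txt), since the first quoted argument is otherwise taken as the window title; and if your Windows Emacs build ships a runemacs.exe launcher, starting that instead avoids keeping a console window around (that launcher name is an assumption about your particular Emacs distribution).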
{ "source": [ "https://serverfault.com/questions/31608", "https://serverfault.com", "https://serverfault.com/users/405/" ] }
31,812
I've read the story about the manager taking out a disk from a RAID 5 array, and then a second one, but I’d just like to try out for myself what happens when I simply disconnect a disk from a live system. It’s an HP ProLiant DL585 G7 series server , so it must be hot-swappable. But before I just go for it I thought it might be better getting some input from more experienced folks before doing anything really, really silly.
It depends on your controller. If it supports hot-swap, then yes. If not, then you might blow the controller and kill the whole array. If you do take a drive out of the array (either while running or powered off) you will have a full rebuild to do once you put it back in which will take a while and degrade performance while it happens. Testing your RAID setup like this is not a bad idea. Just make sure your backups are correct and up-to-date first just in case something goes wrong and the array doesn't survive the test.
{ "source": [ "https://serverfault.com/questions/31812", "https://serverfault.com", "https://serverfault.com/users/7453/" ] }
32,313
When I use screen inside a putty session, I can't seem to use the scrollback buffer of putty to look at whatever just scrolled off the screen. Instead, I just see what was happening in the putty session just prior to my running screen. What am I missing here? I like being able to scroll back, and I don't want to use the screen functionality to look at the past buffer; the scroll wheel on my mouse doesn't have hooks into screen and I don't expect it ever would. Thanks!
You might also check out the Screen FAQ which allows a sort of hybrid behavior: Summary: add the line to your .screenrc file: termcapinfo xterm ti@:te@ Reference ( Putty FAQ ) PuTTY's terminal emulator has always had the policy that when the ‘alternate screen’ is in use, nothing is added to the scrollback. This is because the usual sorts of programs which use the alternate screen are things like text editors, which tend to scroll back and forth in the same document a lot; so (a) they would fill up the scrollback with a large amount of unhelpfully disordered text, and (b) they contain their own method for the user to scroll back to the bit they were interested in. We have generally found this policy to do the Right Thing in almost all situations. Unfortunately, screen is one exception: it uses the alternate screen, but it's still usually helpful to have PuTTY's scrollback continue working. The simplest solution is to go to the Features control panel and tick ‘Disable switching to alternate terminal screen’. (See section 4.6.4 for more details.) Alternatively, you can tell screen itself not to use the alternate screen: the screen FAQ suggests adding the line ‘termcapinfo xterm ti@:te@’ to your .screenrc file.
{ "source": [ "https://serverfault.com/questions/32313", "https://serverfault.com", "https://serverfault.com/users/10027/" ] }
32,317
I'm looking for a way to view the properties (title, author, company, etc.) of Office documents from the command line, hopefully both the standard and custom ones. Does such a utility exist? (My intention is to write a script which lists the files in a directory hierarchy and displays information about them, including the properties. All the other info I'm interested in is easy enough, I just can't see a way to get the properties.)
{ "source": [ "https://serverfault.com/questions/32317", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
32,438
I have a service foo which currently starts at runlevel 3 and above. How can I stop it from doing so, without using update-rc.d foo stop 3 . , which (if I understand it correctly) would turn off the service at each runlevel change. (ie., if I was at runlevel 1 and enabled foo , then when I changed to runlevel 3 it would be disabled, no?) Running Debian GNU/Linux Lenny.
The "stop" term does not prevent the daemon from starting but rather shuts it down while entering the specified runlevel. If you just want to remove a service/daemon from a single runlevel, update-rc.d as pointed out bei freiheit or simply remove the symlink from /etc/rcX.d/ , where X is your runlevel. If you don't want the service to start automatically, update-rc.d -f foo remove will do the trick.
{ "source": [ "https://serverfault.com/questions/32438", "https://serverfault.com", "https://serverfault.com/users/880/" ] }
32,481
I have a Windows 2003 Standard x64 Server with SP2. After deleting a large number of folders from a folder, the OS is reporting "Access Denied" on any attempt to read or manipulate the folder. When examining the folder's properties, the Security tab is missing, only General and Customize are listed. We've tried a few things already. Rename folder, access denied. Delete folder, access denied. Take ownership of parent folder, and propagate permissions to children, access denied. Subinacl, access denied. Takeown (cmdline), access denied. We are running chkdsk in read-only mode, and this has not completed yet. If possible we would like to solve this problem without rebooting or running a full chkdsk with the server offline. Does anyone know a solution to this problem?
When I've seen this it was because a process was holding the folder open but the folder was in the process of being deleted. Use a tool like Process Explorer to see if anything has an open handle on the folder. I would guess that once you release it or reboot, that folder will disappear.
{ "source": [ "https://serverfault.com/questions/32481", "https://serverfault.com", "https://serverfault.com/users/2734/" ] }
32,633
The environment is in a domain, the server is Windows Server 2003, and the workstations have Vista and XP installed. I need a way to check remotely who is currently logged on a workstation, preferably from a simple command line and without Sysinternals or third-party programs. Thanks
This is the original source. They suggest using WMIC (the Windows Management Instrumentation Command-line tool), which is available on Windows: WMIC /NODE: xxx.xxx.xxx.xxx COMPUTERSYSTEM GET USERNAME will return the username currently logged into xxx.xxx.xxx.xxx, or WMIC /NODE: "workstation_name" COMPUTERSYSTEM GET USERNAME will return the username currently logged into "workstation_name". UPDATE: This should work on Windows 10 too - if you are an admin on the remote machine.
{ "source": [ "https://serverfault.com/questions/32633", "https://serverfault.com", "https://serverfault.com/users/7146/" ] }
32,692
Taking over a Debian Etch web server with MySQL running. I usually start, stop and restart msyql using: /etc/init.d/mysql restart For some reason on this set up I get the following: :~# /etc/init.d/mysql stop Stopping MySQL database server: mysqld failed! The mysql process is running fine: :~# ps aux | grep mysql root 2045 0.0 0.1 2676 1332 ? S Jun25 0:00 /bin/sh /usr/bin/mysqld_safe mysql 2082 0.6 10.7 752544 111188 ? Sl Jun25 18:49 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock root 2083 0.0 0.0 1568 504 ? S Jun25 0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld root 11063 0.0 0.0 2856 716 pts/0 S+ 17:29 0:00 grep mysql I'm sure there are some really easy way to do it but I want to understand what is going on as well. Why is the typical way not working for me? EDIT UPDATE as an update: JBRLSVR001:/var/log/mysql# mysqladmin shutdown JBRLSVR001:/var/log/mysql# dpkg --list mysql\* Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad) ||/ Name Version Description +++-============================================-============================================-======================================================================================================== un mysql-client <none> (no description available) un mysql-client-4.1 <none> (no description available) ii mysql-client-5.0 5.0.32-7etch8 mysql database client binaries ii mysql-common 5.0.32-7etch8 mysql database common files (e.g. /etc/mysql /my.cnf) un mysql-common-4.1 <none> (no description available) ii mysql-server 5.0.32-7etch8 mysql database server (meta package depending on the latest version) un mysql-server-4.1 <none> (no description available) ii mysql-server-5.0 5.0.32-7etch8 mysql database server binaries mysqladmin shutdown does work but i'm still curious why the /etc/init.d/mysql commands aren't working.
mysqladmin shutdown should work to shutdown the server. I see two likely possibilities: MySQL has a problem and is refusing to shut down for some reason. The previous admin did something strange. Either modified the init.d script or didn't bother using the Debian packages at all to install MySQL. What does dpkg --list mysql\* say? What does /var/log/mysql.err say? Or the other mysql logs? EDIT: So mysqladmin shutdown worked? According to that, the mysql-server package is installed (mysql-server-5.0; the mysql-server package is probably just a stub). So they may have installed over it? Running debsums mysql-server-5.0 might tell you more. dpkg --listfiles mysql-server-5.0 could help, too... What's actually in /etc/init.d/mysql? I haven't checked that specific version of the package, but it should try to use mysqladmin shutdown ... Maybe you're lucky and they only broke that...
{ "source": [ "https://serverfault.com/questions/32692", "https://serverfault.com", "https://serverfault.com/users/10255/" ] }
32,709
I have a newly built machine with a fresh Gentoo Linux install and a software RAID 5 array from another machine (4 IDE disks connected to off-board PCI controllers). I've successfully moved the controllers to the new machine; the drives are detected by the kernel; and I've used mdadm --examine and verified that the single RAID partition is detected, clean, and even in the "right" order (hde1 == drive 0, hdg1 == drive 1, etc). What I don't have access to is the original configuration files from the older machine. How should I proceed to reactivate this array without losing the data?
You really kinda need the original mdadm.conf file. But, as you don't have it, you'll have to recreate it. First, before doing anything, read up on mdadm via its manual page . Why chance losing your data to a situation or command that you didn't have a grasp on? That being said, this advice is at your own risk. You can easily lose all your data with the wrong commands. Before you run anything, double-check the ramifications of the command . I cannot be held responsible for data loss or other issues related to any actions you take - so double check everything . You can try this: mdadm --assemble --scan --verbose /dev/md{number} /dev/{disk1} /dev/{disk2} /dev/{disk3} /dev/{disk4} This should give you some info to start working with, along with the ID. It will also create a new array device /dev/md{number}, from there you should be able to find any mounts. Do not use the --auto option, the man page verbiage implies that under certain circumstances this may cause an overwrite of your array settings on the drives. This is probably not the case, and the page probably needs to be re-written for clarity, but why chance it? If the array assembles correctly and everything is "normal", be sure to get your mdadm.conf written and stored in /etc , so you'll have it at boot time. Include the new ID from the array in the file to help it along.
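Once the array assembles cleanly, one way to regenerate the missing config (a sketch, adjust paths to taste) is to append the scan output to the file and then review it by hand: mdadm --detail --scan >> /etc/mdadm.conf. That typically yields a single line of the form ARRAY /dev/md0 level=raid5 num-devices=4 UUID=... which is enough for the array to be found at boot; /dev/md0 is just an example, use whatever node you assembled.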
{ "source": [ "https://serverfault.com/questions/32709", "https://serverfault.com", "https://serverfault.com/users/3454/" ] }
32,787
I guess these are the kinds of things I think about on the weekend... When I was growing up (not that long ago) my parents always taught us to wait 30 seconds after shutting down the computer before turning it back on again. Fast forward to today in professional IT, and I know a good number of people that still do the same. Where did the "30 second" rule come from? Has anyone out there actually caused damage to a machine by powering it off and on within a few seconds?
You want all capacitors to discharge. A poorly designed/constructed device could be damaged. But the more likely issue is that since you're power-cycling it to reset an unexpected/unhandled failure, a capacitor not being discharged could leave the system/circuit/device not fully reset. On computers I tell people to wait for all fans to stop spinning. It's a fair compromise. This 30-second advice is much more relevant to a non-computer (simpler, bigger capacitors) device. We know that the complex parts of a computer will be reset upon power cycling them, regardless of any random capacitors. I've certainly power-cycled things quickly, had it not work, then waited a significant time with it off, and had it work. No evidence of whether the wait actually mattered, of course.
{ "source": [ "https://serverfault.com/questions/32787", "https://serverfault.com", "https://serverfault.com/users/792/" ] }
32,840
I'm about to deploy ~25 servers running Debian. The machines will have different roles - web servers, Java appservers, proxies, MySQL boxes. The environment will probably not grow much in the future - maybe 2-5 more servers in the next 2 years. I'll probably use FAI for system installation, but I'm unsure whether it's also worth adding cfengine or puppet for centralized configuration management at such a small scale. Does configuration management make sense for an environment this size?
I would recommend using a mixture of Debian pre-seeding, where you give the installer a text file that answers all the questions it would ask, and Puppet. The reason for using preseeding, rather than FAI, is that you don't have to set up an image first and deal with keeping it up to date. You will end up with an install very similar to what you would have if you did them all by hand. When you come to install a new release, you will have to update a config file with the changes, rather than having to rebuild a new image. A configuration management tool is particularly useful where you have several servers performing the same role and you want them to be identical, e.g. a webserver cluster. However, it can also be useful for configuring the base install of all servers. You're going to want to install particular packages on all your servers, like ntpd and an MTA. You're going to want to change a config file on all your servers. An additional benefit is that you can keep your manifests in something like subversion and keep a record of what changed on a server and who did it and why. Configuration management can also be a life saver in the case of a server failure and you need to rebuild it quickly. Install the OS (using FAI or preseeding), install puppet and away it goes, built back exactly as it was before. Obviously you'll need to keep backups of data. Configuration management requires dedication to make sure you only make changes using it and will have an upfront cost setting things up, but once you have a working setup you won't regret it. Puppet is the more modern of the two tools you've mentioned. I really recommend it to anyone. The configuration is a declarative language and it is easy to build up higher-level constructs. There is also a very large community around it and there are always people willing to help on the mailing list or the IRC channel.
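To make the "base install" idea concrete, a minimal Puppet manifest could look like the sketch below (the package names and module path are assumptions, not a recommendation):
package { ['ntp', 'postfix']: ensure => installed }
service { 'ntp': ensure => running, enable => true, require => Package['ntp'] }
file { '/etc/ntp.conf': source => 'puppet:///modules/ntp/ntp.conf', notify => Service['ntp'] }
Applied to every node, that alone gives you identical NTP and MTA configuration across the fleet, version-controlled in one place.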
{ "source": [ "https://serverfault.com/questions/32840", "https://serverfault.com", "https://serverfault.com/users/2413/" ] }
33,027
On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents like ~/Documents/foo/html to be http://localhost/foo . The first thing I thought of doing is using Alias as follows: Alias /foo /Users/someone/Documents/foo/html <Directory "/Users/someone/Documents/foo/html"> Options Indexes FollowSymLinks MultiViews Order allow,deny Allow from all </Directory> This got me 403 Forbidden. In the error_log I got: [error] [client ::1] (13)Permission denied: access to /foo denied The subfolder in question has chmod 755 access. I've tried specifying likes like http://localhost/foo/test.php , but that didn't work either. Next, I tried the symlink route. Went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html . The document root has Options Indexes FollowSymLinks MultiViews This still got me 403 Forbidden: Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo What else do I need to set this up? Solution : $ chmod 755 ~/Documents In general, the folder to be shared and all of its ancestor folder needs to be viewable by the www service user.
I'll bet that some directory above the one you want to access doesn't have permissions to allow Apache to traverse it. Become the user that Apache is running as ( sudo -i -u apache or whatever), then try to change into the directory of interest and ls it. If you can't (as expected), then try getting into the directories above it, one by one, until one lets you in. The subdirectory of that is that one that needs to have o+x set. Lather, rinse, repeat as required.
{ "source": [ "https://serverfault.com/questions/33027", "https://serverfault.com", "https://serverfault.com/users/1154/" ] }
33,028
I'm trying to install BizTalk Server 2009 on one of our servers. I am reading through the installation guide and I can't believe the Microsoft Office Excel 2007 and Microsoft Visual Studio 2008 are listed in software requirements. I'm new to BizTalk server and I am just wondering do I really need to install Excel & Visual Studio on a server before I can install BizTalk?
{ "source": [ "https://serverfault.com/questions/33028", "https://serverfault.com", "https://serverfault.com/users/9446/" ] }
33,283
I have computer with Ubuntu behind router that I can't configure. However I want to have ssh access to that computer. I think it is possible with ssh tunneling, but I don't know how to do it. I have another server to which I would like to setup tunneling. How to do it? Or maybe you have some other idea how to solve this problem? I tried: ssh -N user@my_server -L 22/localhost/8090 but it says: bind: Address already in use channel_setup_fwd_listener: cannot listen to port: 22 Could not request local forwarding.
You are asking it to listen on your local port 22 and forward connections to a remote system's port 8090. You can't do that, because your local port 22 is already taken by your local SSH server. I think what you are looking for is remote forwarding. Replacing -L 22:localhost:8090 with -R 8090:localhost:22 will tell the remote host to listen on port 8090 and forward requests to your SSH server. If you are leaving the connection running so you can get in later from a remote site, then you are going to want to make sure the connection doesn't time out due to inactivity by adding the relevant options (-o TCPKeepAlive=yes or -o ServerAliveInterval=30). So you'll end up with something like: ssh -N user@my_server -R 8090:localhost:22 -o ServerAliveInterval=30 Also, if one of the network hops between you and the server is down at any point, the connection will drop despite any KeepAlive options you specify, so you might want to add this command to inittab, or look into the daemontools package or your distro's equivalent, so that it always starts on boot and is restarted when it exits for some reason other than system shutdown (or you could run it from a shell script that loops infinitely, but init or daemontools are cleaner solutions).
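Once the tunnel is up, you reach the machine behind the router from my_server itself with something like ssh -p 8090 user@localhost (8090 being the remote listening port chosen above). If you want that forwarded port reachable from machines other than my_server, the server's sshd_config additionally needs GatewayPorts enabled; that is an extra step, not implied by the command above.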
{ "source": [ "https://serverfault.com/questions/33283", "https://serverfault.com", "https://serverfault.com/users/358/" ] }
33,308
I'm trying to use the Linux find command to find all directories and sub-directories that do not have .svn (Subversion hidden folders) in their path. I can only get it to exclude the actual .svn directories themselves, but not any of the sub-directories. Here is what I'm doing right now: find . -type d \! -iname '*.svn*' I've also tried: find . -type d \! iname '.svn' \! iname '.svn/*' Just an FYI, I'm trying to use the find pattern so I can apply some subversion properties to all directories in my repository excluding the subversion hidden folders and their sub-directories (by applying the exec command to the directories returned from the find command).. TIA
find . -type d -not \( -name .svn -prune \)
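To apply the subversion properties mentioned in the question in the same pass, bolt an -exec onto it; the property name and value here are purely illustrative: find . -type d -not \( -name .svn -prune \) -exec svn propset svn:ignore '*.tmp' {} \;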
{ "source": [ "https://serverfault.com/questions/33308", "https://serverfault.com", "https://serverfault.com/users/8358/" ] }
33,423
My work tends to involves using SSH to connect to various machines, and then using vim to edit files on those machines. The problem is that I have to constantly copy my .vimrc file around. It's very annoying to open vim and not have any settings. Is it possible to carry my vim settings around with me from machine to machine without manually copying it everywhere?
Instead of bringing .vimrc to each server you need to work on, why not edit the remote files from your local vim: In vim/gvim, run: :e scp://[email protected]//path/to/document or start vim like this: vim scp://[email protected]//path/to/document This opens the file seemingly in place (it actually copies the file locally), and when you save, it sends the edited file back to the server for you. It asks for an ssh password, but this can be streamlined via ssh keys. As others have mentioned, the only drawback of this method is that you don't get path/file completion as you would when working directly on the machine. For more info, check out the following tutorial.
{ "source": [ "https://serverfault.com/questions/33423", "https://serverfault.com", "https://serverfault.com/users/5248/" ] }
33,461
We have an OpenBSD router at each of our locations, currently running on generic "homebrew" PC hardware in a 4U server case. Due to reliability concerns and space considerations we're looking at upgrading them to some proper server-grade hardware with support etc. These boxes serve as the routers, gateways, and firewalls at each site. At this point we're quite familiar with OpenBSD and Pf, so hesitant at moving away from the system to something else such as dedicated Cisco hardware. I'm currently thinking of moving the systems to some HP DL-series 1U machines (model yet to be determined). I'm curious to hear if other people use a setup like this in their business, or have migrated to or away from one.
We run exclusively OpenBSD routers/firewalls to serve FogBugz On Demand. Unless you're operating in a transit role and need the extremely high pps throughput that purpose-built hardware and integrated software can provide, OpenBSD on solid hardware will be a more manageable, scalable, and economical solution. Comparing OpenBSD to IOS or JUNOS (in my experience): Advantages The pf firewall is unmatched in terms of flexibility, manageable configuration, and integration into other services (works seamlessly with spamd, ftp-proxy, etc). The configuration examples do not do it justice. You get all the tools of a *nix on your gateway: syslog, grep, netcat, tcpdump, systat, top, cron, etc. You can add tools as necessary: iperf and iftop I've found very useful tcpdump. Enough said. Intuitive configuration for Unix veterans Seamless integration with existing configuration management (cfengine, puppet, scripts, whatever). Next gen features are free and require no add-on modules. Adding performance is cheap No support contracts Disadvantages IOS/JUNOS make it simpler to dump/load an entire configuration. Absent any configuration management tools, they will be easier to deploy once your config is written. Some interfaces simply aren't available for or stable on OpenBSD (e.g., I know of no well-supported ATM DS3 cards). High-end dedicated Cisco/Juniper-type devices will handle higher pps than server hardware No support contracts So long as you're not talking about backbone routers in an ISP-like environment or edge routers interfacing with specialized network connections, OpenBSD should be just fine. Hardware The most important thing to your router performance is your NICs. A fast CPU will quickly get overwhelmed under moderate load if you have shitty NICs that interrupt for every single packet they receive. Look for gigabit NICs that support interrupt mitigation/coalescing at least. I've had good luck with Broadcom (bge, bnx) and Intel (em) drivers. CPU speed is more important than in dedicated hardware, but not something to fret about. Any modern server-class CPU will handle a ton of traffic before showing any strain. Grab yourself a decent CPU (multiple cores don't help much just yet, so look at raw GHz) good ECC RAM, a reliable hard drive, and a solid chassis. Then double everything and run two nodes as an active/passive CARP cluster. Since 4.5's pfsync upgrade you can run active/active, but I haven't tested this. My routers are running side-by-side with our load-balancers in 1U twin-node configurations. Each node has: Supermicro SYS-1025TC-TB chassis (built-in Intel Gigabit NICs) Xeon Harpertown Quad Core 2GHz CPU (my load balancers use the multiple cores) 4GB Kingston ECC Registered RAM Dual-port Intel Gigabit add-in NIC They've been rock-solid since deployment. Everything about this is overkill for our traffic load, but I've tested throughput upwards of 800Mbps (NIC-limited, the CPU was mostly idle). We make heavy use of VLANs, so these routers have to handle a lot of internal traffic too. Power efficiency is fantastic since each 1U chassis has a single 700W PSU powering two nodes. We've distributed the routers and balancers through multiple chassis so we can lose an entire chassis and have pretty much seamless failover (thank you pfsync and CARP). Operating Systems Some others have mentioned using Linux or FreeBSD instead of OpenBSD. 
Most of my servers are FreeBSD, but I prefer OpenBSD routers for a few reasons: A tighter focus on security and stability than Linux and FreeBSD The best documentation of any Open Source OS Their innovation is centered around this type of implementation (see pfsync, ftp-proxy, carp, vlan management, ipsec, sasync, ifstated, pflogd, etc - all of which are included in base) FreeBSD is multiple releases behind on their port of pf pf is more elegant and manageable than iptables, ipchains, ipfw, or ipf Leaner setup/install process That said, if you're intimately familiar with Linux or FreeBSD and don't have the time to invest, it's probably a better idea to go with one of them.
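For a flavour of the pf side, here is a deliberately minimal ruleset sketch; the interface names and the services allowed in are assumptions, and a real border config would add NAT, anti-spoofing and the CARP interfaces:
ext_if = "bge0"
int_if = "bnx0"
set skip on lo
block in log all
pass out quick on $ext_if keep state
pass in on $ext_if proto tcp to ($ext_if) port 22 keep state
pass in on $int_if from $int_if:network keep state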
{ "source": [ "https://serverfault.com/questions/33461", "https://serverfault.com", "https://serverfault.com/users/5922/" ] }
33,504
We have a role account at work that has a pretty big crontab. Its MAILTO is pointed at a shared address, so that a number of us get notified if something fails. I'd like to add an entry to this crontab, but I only want myself to be notified if something goes wrong. Is there a way to change MAILTO for this one entry, or otherwise accomplish my goal?
You can always just do:
MAILTO=you
* * * ... your cron job
MAILTO=normal.destination
Each MAILTO line applies to the crontab entries that follow it, until the next MAILTO resets it.
{ "source": [ "https://serverfault.com/questions/33504", "https://serverfault.com", "https://serverfault.com/users/1545/" ] }
33,603
Is there any way to create a virtual machine that you can use in VirtualBox from a physical installation that you have? For instance, if I have Windows XP installed on a physical computer and want to have a virtual version of that machine on a different computer. This would save a ton of time by not having to reinstall and reconfigure the whole OS. I would think there would be issues with Microsoft's licensing. But even if it's not possible with Windows would it be possible to take a physical Linux machine and create a VirtualBox version of that? Does any other desktop virtualization software provide this feature?
Windows is a bit different, see How to migrate existing Windows installations to VirtualBox for a guide. From memory you can use VMware's converter and VirtualBox will read VMDK files. For Linux, if you want the easy solution, boot a live CD, dd if=/dev/sda1 of=/path/to/images/sda1.img bs=1024 Do that for every partition mounted in /etc/fstab of your machine, and then setup those images in VirtualBox.
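If you image the whole disk rather than one partition, VirtualBox can convert the raw file into a native disk image directly; roughly (the paths are placeholders, and very old VirtualBox releases name the subcommand differently): dd if=/dev/sda of=/path/to/images/sda.img bs=1M followed by VBoxManage convertfromraw /path/to/images/sda.img sda.vdi --format VDI, which gives you a VDI you can attach to a new VM.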
{ "source": [ "https://serverfault.com/questions/33603", "https://serverfault.com", "https://serverfault.com/users/11120/" ] }
33,744
I have found that there are a lot of questions out there that could be answered by contacting the tech support of the company that makes the product you are having issues with. I am very guilty of not calling tech support and rather asking on a forum or Q&A site. I know my reasons, but I'd like to see what others think. Why don't you call tech support?
{ "source": [ "https://serverfault.com/questions/33744", "https://serverfault.com", "https://serverfault.com/users/1565/" ] }
33,776
Is there some way to record Task Manager's info about CPU and memory usage to examine later? Or an equivalent tool?
Windows Performance Monitor (perfmon) should do the job for you; you can configure it to log to a file, so just enable the counters you need and it'll log as much as you want.
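If you prefer to script it, logman drives the same data-collector machinery from the command line; a sketch with example counter paths and output location: logman create counter BaselineLog -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 15 -f csv -o C:\PerfLogs\baseline then logman start BaselineLog and, later, logman stop BaselineLog. The resulting CSV can be opened in Excel or loaded back into Performance Monitor.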
{ "source": [ "https://serverfault.com/questions/33776", "https://serverfault.com", "https://serverfault.com/users/11090/" ] }
33,777
I am using GoDaddy shared Windows hosting, and I am building a new website that will have full-text search on its SQL Server db. My question is, can full-text search run on shared hosting? (I am asking because full-text search will create and edit files.)
{ "source": [ "https://serverfault.com/questions/33777", "https://serverfault.com", "https://serverfault.com/users/978/" ] }
33,977
Our IT services firm is proposing a network reconfiguration to use the IP range 10.10.150.1 – 10.10.150.254 internally as they state the current IP scheme using manufacturer defaults of 192.168.1.x is "making it too easy to exploit". Is this true? How does knowing / not knowing the internal IP scheme make a network more exploitable? All internal systems are behind a SonicWall NAT and firewall router.
This will add at best a very thin layer of "security by obscurity", as 192.168.x.y is a way more commonly used network address for private networks, but in order to use the internal addresses, bad guys have to already be inside your network, and only the most stupid attack tools will be fooled by the "non standard" address scheme. It costs nearly nothing to implement this, and it offers nearly nothing in return.
{ "source": [ "https://serverfault.com/questions/33977", "https://serverfault.com", "https://serverfault.com/users/565/" ] }
34,073
Possible Duplicate: vim re-edit as root I could have sworn I saw this question asked. But after looking though every search result for "vi" I'm stumped/lazy. I've opened a file, made an edit and now I realize it's read only and I've opened it as non-root me.
I think you want something like this: :w !sudo tee "%" I first saw it on commandlinefu . The quotes are only necessary if the file path contains spaces.
{ "source": [ "https://serverfault.com/questions/34073", "https://serverfault.com", "https://serverfault.com/users/8199/" ] }
34,465
I believe every system administrator is used to open source by now. From Apache to Firefox or Linux, everyone uses it at least a little bit. However, most open source developers are not good in marketing, so I know that there are hundreds of very good tools out there that very few people know. To fill this gap, share your favorite open source tool that you use in your day-to-day work. *I will post mine in the comments.
UnxUtils: This is a port of various GNU shell utilities based on msvcrt.dll so it understands native Windows paths - i.e. you don't need to map to a /cygdrive path. This is a key advantage over Cygwin if you have to interact with native Windows commands or homebrew CL utilities. Strings: A very good way to scrounge through files for items of text. Many, many uses. Flex: Really designed for writing lexical analysers, with a little bodge artistry and a C compiler it can be used as an uber-grep. I don't use it all that often but it can come in surprisingly handy in that role. Fetchmail and Procmail: Core of my email system for well over a decade, since I had dial-up internet connectivity. If it ain't broke ... rdesktop: an open source RDP (terminal services) client that works surprisingly well. PythonWin: particularly as packaged in ActiveState Python. Python on Windows works a lot better than you might think. When used with COM Makepy it's really good for scripting COM APIs. Wget: an exceedingly useful FTP/HTTP downloading tool. Leafnode: if you still read any of the newsgroups that still have decent active traffic this is quite a good way to do it. Again, a bit of legacy from my dialup days but it still gets used on occasion. Abiword and Gnumeric: full-featured word processing and spreadsheet software that's far leaner and meaner than OpenOffice. Xfig: Visio-type diagramming tool with an odd user interface. Once you get used to the paradigm it's much easier on my poor old mouse hand than a modern direct manipulation interface. Worth a mention for the ergonomics. Tcl/Tk: Overshadowed by Perl and Python, Tcl is very easy to embed C code into - it was designed specifically for embedding. Surprisingly useful nonetheless, and the Tk toolkit is very easy to whip up a GUI with. Modern versions support theming so your applications no longer have to look like Motif. Ghostscript: One of the great unsung heroes of the open-source world. A free Postscript interpreter with a whole ecosystem of derived items - PS and PDF viewers, PDF creation tools, printer RIPs and all sorts of Postscript conversion tools. Perhaps most widely used outside open-source circles (if not actively credited) in its role in the back-end of PDFCreator. That's just a sampling of the obscure stuff without mentioning Vim, LaTeX, Firefox, python, gcc, gtk & qt and the Berkeley TCP stack - to name but a few.
{ "source": [ "https://serverfault.com/questions/34465", "https://serverfault.com", "https://serverfault.com/users/4703/" ] }
34,692
In Windows Server 2003, in the "Attributes" column of windows explorer, some files have "A" or "C" or "AC" or others. What do these mean?
Prior to windows 8/10 the attributes were: R = READONLY H = HIDDEN S = SYSTEM A = ARCHIVE C = COMPRESSED N = NOT INDEXED L = Reparse Points O = OFFLINE P = Sparse File I = Not content indexed T = TEMPORARY E = ENCRYPTED You should pay special attention to the offline attribute because it may affect the behavior of your backup software. Files with the O attribute may be skipped entirely because the software may assume they are stored elsewhere. Consider these answers on SO and SF for additional information: https://superuser.com/questions/1214542/what-do-new-windows-8-10-attributes-mean-no-scrub-file-x-integrity-v-pinn/1215034 https://superuser.com/questions/44812/windows-explorers-file-attribute-column-values
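These flags can also be read and changed from a command prompt with attrib: attrib myfile.txt prints them, and attrib -A +R myfile.txt clears the Archive bit and sets Read-only. Note that attrib only exposes the basic R/A/S/H set; compression and encryption are managed with the separate compact and cipher commands.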
{ "source": [ "https://serverfault.com/questions/34692", "https://serverfault.com", "https://serverfault.com/users/8111/" ] }
34,741
Does anyone know if Postgres has a way to display query results "prettily", like how MySQL does when ending a query with \G on the command line? For instance, "select * from sometable\G" as opposed to "select * from sometable;" Many thanks!
I'm not familiar enough with MySQL to know what the \G option does, but based on the documentation it looks like the psql \x option might do what you want. It's a toggle, though, so you do it before you submit the query. \x select * from sometable;
{ "source": [ "https://serverfault.com/questions/34741", "https://serverfault.com", "https://serverfault.com/users/11056/" ] }
34,750
Possible Duplicate: Can I nohup/screen an already-started process? On Unix (specifically, Linux), I've started a job in a regular ssh->bash session. I'd like to leave work soon, but I now realize that the job is going to take several hours. If I had just started this job in screen, I could detach and go home. But I didn't. Is there any way to disconnect the job from its ssh session, so I can shut down my computer, (dropping the TCP connection and killing the ssh session), and yet have the program keep running? I don't care about its output -- in fact, I redirected stdout to a file. I just want it to run to completion.
You can press ctrl-z to interrupt the process and then run bg to make it run in the background. You can show a numbered list all processes backgrounded in this manner with jobs . Then you can run disown %1 (replace 1 with the process number output by jobs ) to detach the process from the terminal. In spite of the name, the process will still be owned by you after running disown , it will just be detached from the terminal you started it in. This answer has more information
{ "source": [ "https://serverfault.com/questions/34750", "https://serverfault.com", "https://serverfault.com/users/1545/" ] }
34,940
If I edit Proxy Settings through the Control Panel, the settings are stored in HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyEnable and ...\ProxyServer . These settings are of course not used when running as a service under LOCAL SYSTEM . So I tried setting ProxyEnable and ProxyServer under HKEY_USERS\S-1-5-18\... (as well as HKEY_USERS\.DEFAULT\... and all the other users on the system), but that does not work. How do I set the proxy settings for the LOCAL SYSTEM user?
It is actually the value in Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\DefaultConnectionSettings that is used. Since that is not easily modified, you can modify the proxy settings for a user, export the registry key, modify the path in the exported file to HKEY_USERS\S-1-5-18 and reimport it.
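A sketch of that round trip using the built-in reg tool (the file name is arbitrary): reg export "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections" proxy.reg, then edit proxy.reg and change the HKEY_CURRENT_USER\... path to HKEY_USERS\S-1-5-18\..., and finally load it back with reg import proxy.reg.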
{ "source": [ "https://serverfault.com/questions/34940", "https://serverfault.com", "https://serverfault.com/users/11402/" ] }
35,076
Does anyone have a tool or script that will recursively correct the file permissions on a directory? On an Ubuntu Linux machine, a bunch of files were copied to a USB disk with full 777 permissions (user, group, other - read, write, execute) in error. I want to put them back in the user's directory corrected. Directories should be 775 and all other files can be 664. All the files are images, documents or MP3s, so none of them need to be executable. If the directory bit is set then it needs execution, otherwise it just needs user and group, read and write. I figured it was worth checking if such a utility exists before hacking together a shell script :)
This should do the trick: find /home/user -type d -print0 | xargs -0 chmod 0775 find /home/user -type f -print0 | xargs -0 chmod 0664
{ "source": [ "https://serverfault.com/questions/35076", "https://serverfault.com", "https://serverfault.com/users/3581/" ] }
35,218
What is a simple way in Windows to test if traffic gets through to a specific port on a remote machine?
I found a hidden gem the other day from Microsoft that is designed for testing ports: Portqry.exe "Portqry.exe is a command-line utility that you can use to help troubleshoot TCP/IP connectivity issues. Portqry.exe runs on Windows 2000-based computers, on Windows XP-based computers, and on Windows Server 2003-based computers. The utility reports the port status of TCP and UDP ports on a computer that you select. "
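Typical usage looks like portqry -n servername -p tcp -e 3389 to test a single TCP port, or portqry -n servername -p both -o 80,443,554 to walk an ordered list of ports over both TCP and UDP; the host name and port numbers here are just examples.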
{ "source": [ "https://serverfault.com/questions/35218", "https://serverfault.com", "https://serverfault.com/users/11384/" ] }
35,336
It seems that rsync is the de-facto standard for efficient file backup and sync in Unix/Linux. Does anyone have any thoughts on why it wouldn't have caught on in the Windows world? Why hasn't it become a universal 'protocol' for file sync?
I would say mostly because people in the Windows world are unaware of it. Rsync is a command-line utility that is consistent with the Unix philosophy of having lots of small tools preinstalled. The Windows philosophy is based around GUI applications that are all downloaded and installed separately. There is not a smooth integration spot where rsync would be obvious or make much sense, and running commands on a Windows system is tedious at best. Also, rsync really shines when it's part of a larger application (say for consolidating and parsing logs), or as an automated archival system (implemented easily with a cronjob). Windows simply doesn't have the other tools in its ecosystem to make using rsync actually viable. Finally, I would say that rsync is just too freaking complicated. Anyone I know who uses it regularly has a pre-set group of flags (mine is -avuz) that generally does what they want, but the man pages / documentation lists dozens of command-line switches, some of them amalgamations of other switches. For example (from the man page): -a, --archive : archive mode; equals -rlptgoD (no -H,-A,-X) It is a quick way of saying you want recursion and want to preserve almost everything (with -H being a notable omission). The only exception to the above equivalence is when --files-from is specified, in which case -r is not implied. Windows users generally expect, well, windows, and menus, and to have a single app be an all-in-one solution, not just an independent piece of a tool chain.
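For reference, the kind of preset invocation mentioned above looks like rsync -avuz /home/user/docs/ user@backuphost:/backups/docs/ (archive mode, verbose, skip files that are newer on the receiver, compress in transit); the host and paths are of course placeholders.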
{ "source": [ "https://serverfault.com/questions/35336", "https://serverfault.com", "https://serverfault.com/users/11731/" ] }
35,453
I ssh to a remote host but terminal performance is poor. Characters I am typing are not shown immediately, but with some delay. Sometimes two characters are shown at once after the delay.
High latency is another cause of poor ssh performance. I highly recommend using mtr as a better replacement for traceroute. It should be able to give you some idea of where your network problems might occur.
{ "source": [ "https://serverfault.com/questions/35453", "https://serverfault.com", "https://serverfault.com/users/8975/" ] }
35,656
I hear that Linux-based systems are better for security. Apparently they don't have viruses and do not need antivirus software. Even my university claims this - they refuse to have Windows on their servers, which is a real shame because we wanted to use the .NET framework to create some websites. The only reason I can see Linux being safer is because it's open-source, so bugs theoretically would get caught and fixed sooner. I know a bit about how operating systems work, but haven't really delved into how Linux and Windows implement their OS. Can someone explain the difference that makes Linux-based systems more secure?
I don't think an operating system is "secure". A particular configuration of an operating system has a particular degree of resistance to attacks. I'm probably going to get flamed for being a "Microsoft apologist" here, but this thread is very slanted toward generalizations about "Windows" that aren't true. Windows 1.0 - 3.11, 95, 98, and ME are based on DOS. This lineage of operating systems didn't have any security in the formal sense (protected address spaces, kernel / user mode separation, etc). Fortunately, when we're talking about "Windows" today we're not talking about these operating systems. The Windows NT family of operating systems (Windows NT 3.5, 3.51, 4.0, 2000, XP, 2003, Vista, 2008, and 7) has had a very reasonable security system "designed in" since the initial release in 1992. The OS was designed with the TCSEC "Orange Book" in mind and, while not perfect, I do think it is reasonably well designed and implemented. Windows NT was "multi-user" from the beginning (though the functionality of multiple users receiving a graphical user interface simultaneously from the same server didn't happen until Citrix WinFrame in the Windows NT 3.51 era). There is a kernel / user mode separation, with address space protection relying on the underlying hardware functions of the MMU and CPU. (I'd say that it's very "Unix-y", but actually it's very "VMS-y".) The filesystem permission model in NTFS is quite "rich" and, though it has some warts relative to "inheritance" (or the lack thereof-- see How to workaround the NTFS Move/Copy design flaw? ), it hasn't been until the last 10 years or so that Unix-style operating systems have implemented similar functionality. (Novell NetWare beat Microsoft to the punch on this one, though I think MULTICS had both of them beat... >smile<) The service control manager, including the permission system to control access to start/stop/pause service programs is very well designed, and is much more robust in design than the various "init.d" script "architectures" (more like "gentleman's agreements") in many Linux distros. The executive object manager (see http://en.wikipedia.org/wiki/Object_Manager_(Windows) ), which is loosely analogous to the /proc filesystem and the /dev filesystem combined, has an ACL model that is similar to the filesystem and much, much richer than any permission model that I'm aware of for /proc or /dev on any Linux distro. While we could debate the merits and disadvantages of the registry, the permission model for keys in the registry is far more granular than the model of setting permissions on files in the /etc directory. (I particularly like Rob Short's comments re: the registry in his "Behind the Code" interview: http://channel9.msdn.com/shows/Behind+The+Code/Rob-Short-Operating-System-Evolution Rob was one of the main people behind the Windows registry initially, and I think it's safe to say that he's not necessarily happy w/ how things turned out.) Linux itself is just a kernel, whereas Windows is more analogous to a Linux distribution. You're comparing apples and oranges to compare them like that. I would agree that Windows is more difficult to "strip down" than some Linux-based systems. Some Linux distributions, on the other hand, ship with a lot of "crap" turned on, too.
With the advent of the various "embedded" flavors of Windows it is possible (albeit not to the general public) to build "distributions" of Windows that differ in their behaviour from the Microsoft defaults (excluding various services, changing default permissions, etc). The various versions of Windows have had their share of poorly-chosen defaults, bugs that allowed unauthorized users to gain privilege, denial of service attacks, etc. Unix kernels (and plenty of Unix-based applications running by default as root) have had the same problems. Microsoft has done an amazing job, since Windows 2000, of making it easier to compartmentalize applications, run programs with least-privilege, and remove unneeded features of the OS. In short, I guess what I'm saying is that the specific configuration of a given operating system for your needs, with respect to security, matters more than what type of operating system you are using. Windows and Linux distributions have very similar capabilities with respect to security features. You can apply solid security techniques (least-privilege, limited installation of optional components, cryptographically secure authentication mechanisms, etc) in either OS. Whether you actually do or not -- that's what matters.
{ "source": [ "https://serverfault.com/questions/35656", "https://serverfault.com", "https://serverfault.com/users/5651/" ] }
35,662
I have been using Ubuntu, mainly as a production system, for a very long time; Fedora Core and Mandrake before that. I am a developer, mostly working on the networking core - L3/L4. I wish to graduate to being a power user. I thought about shifting to Arch Linux, but then it would take a lot of time configuring the system. All you Linux power users, what suggestions do you have for someone who wishes to learn Linux internals, more from an operations point of view than development?
I'll offer a slightly different suggestion. I see many people, once they get comfortable with a particular distribution, fall into a cycle of perpetual changeover. They install a new shiny distro, but they can't get their webcam to work. So they switch. Now the webcam works, but something else doesn't work, and they switch again. (Then they get a job and are restricted to RHEL...). You might get the impression that there's some sort of expert level-up progression of Ubuntu -> Arch -> Gentoo ( -> FreeBSD?), but that's not strictly necessary, and plenty of people get into a trap of learning how to merely use distributions instead of build or change them. Rather than run around in circles, it pays to really get to know how a distribution that already does most of what you need works . You know Ubuntu well. What I will advocate is to dig into the Ubuntu community and documentation to find answers to the following questions: What are the core components of a minimal install? How do you configure installed packages? Where can you find the corresponding source to an installed binary package? What is the complete path source code takes to arrive on your computer as a binary? How do developers build binary packages? How can you rebuild a package from source? What steps should be taken to upgrade package source versions? What are the best practices for building and installing programs? Where is the documentation for building and configuring programs stored? Who reviews and approves changes to the distribution itself? Finding the answers to these questions will be valuable no matter what distribution you decide to investigate. You may even already know the answer to some of these. In the case of Ubuntu, many of answers will be similar to Debian. For example, best practices in packaging are codified in the Debian Policy Manual .
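On Ubuntu/Debian, for instance, the source-related questions above map onto a handful of commands (the package names are only examples, and apt-get source needs deb-src lines in sources.list): dpkg -S /usr/sbin/sshd tells you which package owns an installed file, apt-get source openssh-server fetches the matching source package, and sudo apt-get build-dep openssh-server followed by dpkg-buildpackage -b -uc from inside the unpacked source directory rebuilds the binary package yourself.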
{ "source": [ "https://serverfault.com/questions/35662", "https://serverfault.com", "https://serverfault.com/users/11527/" ] }
35,842
I've worked as a sysadmin for some years and what I keep coming back to is that users like Microsoft Outlook and want to use its Exchange features. I have tried my fair share of commercial alternatives but usually there is either a fundamental feature missing or there are stability issues. In short I am looking for a Microsoft Exchange Alternative with the following features: Authentication through SQL or LDAP Has a solid, comfortable web interface for the users when they are off-site Supports replication and load balancing (if one fails, the second one should be already running) Outlook client support (or a really good alternative client) Resource booking (meeting rooms, projectors, company jet, etc) Calendar (shared/private) and Email (if that wasn't obvious) (Optional) A cross-platform client for us *nix users. (Optional) Corporate support contracts available (Optional) An open-source software is a plus Please keep your answers as detailed as possible to determine that you've successfully deployed the software and it fulfills the needs. If I wanted a list of claimed alternatives , I would simply Google it. I've personally tried Binary Server, Novell Groupwise, homegrown Postfix/Cyrus stuff and in the end the 'real thing' because those users just love Exchange. Please help me find a good alternative.
Zimbra is an excellent opensource, linux based alternative to Exchange. It combines Apache Tomcat, Postfix, MySQL, OpenLDAP and Lucene in a single, well defined package. It offers: LDAP Authentication Calendaring, resource booking and free/busy info Ability to connect using outlook, and its own Zimbra client Excellent web interface Allows multi server setup and replication It is available for free, with the option of paying for a supported version. I have used Zimbra for a number of organisations who did not, or could not pay for Exchange, and they have all been very happy with it.
{ "source": [ "https://serverfault.com/questions/35842", "https://serverfault.com", "https://serverfault.com/users/1876/" ] }
36,038
Sometimes, when resizing or otherwise mucking about with partitions on a disk, cfdisk will say: Wrote partition table, but re-read table failed. Reboot to update table. (This also happens with other partitioning tools, so I'm thinking this is a Linux issue rather than a cfdisk issue.) Why is this, and why does it only happens sometimes , and what can I do to avoid it? Note: Please assume that none of the partitions I am actually editing are opened, mounted or otherwise in use. Update: cfdisk uses ioctl(fd, BLKRRPART, NULL) to tell Linux to reread the partition table. Two of the other tools recommended so far ( hdparm -z DEVICE , sfdisk -R DEVICE ) does exactly the same thing. The partprobe DEVICE command, on the other hand, seems to use a new ioctl called BLKPG, which might be better; I don't know. (It also falls back on BLKRRPART if BLKPG fails.) BLKPG seems to be a "this partition has changed; here is the new size" operation, and it looked like partprobe called it individually on all the partitions on the device passed, so it should work if the individual partitions are unused. However, I have not had the opportunity to try it.
IMHO the most reliable/best answer is partprobe /dev/sdX
{ "source": [ "https://serverfault.com/questions/36038", "https://serverfault.com", "https://serverfault.com/users/11492/" ] }
36,147
Which changes that you have implemented did have the biggest impact on saving time in you daily sysadmin workload? What are your tricks to work more efficient and get more things done or work less for the same results? I'm thinking about automation, changes in workflow/processes, new tools, stop doing some things altogether, outsourcing, better delegation, changing software/hardware, reducing bureaucracy, etc.
Monitoring + alerting - which is a great safety net. Just as developers write unit tests to make sure things don't get messed up when they update code, I rely on monitoring as an additional safety net just in case I screw something up [that is, disconnect a server, deny production traffic on the firewall, etc.]. It gives peace of mind - if things break I will know before customers call.
{ "source": [ "https://serverfault.com/questions/36147", "https://serverfault.com", "https://serverfault.com/users/4559/" ] }
36,260
Is there any way to show all the locks that are active in a MySQL database?
See Marko's link for InnoDB tables and the caveats. For MyISAM, there isn't a dead easy "this is the offending query" solution. You should always start with a processlist. But be sure to include the full keyword so that the printed queries aren't truncated: SHOW FULL PROCESSLIST; This will show you a list of all current processes, their SQL query and state. Now usually if a single query is causing many others to lock then it should be easy to identify. The affected queries will have a status of Locked and the offending query will be sitting out by itself, possibly waiting for something intensive, like a temporary table. If it's not obvious then you will have to use your powers of SQL deduction to determine which piece of offending SQL may be the cause of your woes.
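If the tables involved are InnoDB rather than MyISAM, SHOW ENGINE INNODB STATUS\G (MySQL 5.0 and later; older servers use SHOW INNODB STATUS) prints a TRANSACTIONS section showing which transactions are waiting for locks and which ones hold them, which is usually the quickest way to find the blocker there.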
{ "source": [ "https://serverfault.com/questions/36260", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
36,291
I've made several attempts to establish an SSH connection for user root@host using the PuTTY terminal. While doing so I specified wrong credentials several times; after that I specified them correctly, and then, after the credentials were accepted, the SSH session breaks with "Server unexpectedly closed network connection". This error is reported by the PuTTY terminal. When trying to ssh root@localhost from the local console it works fine. It also works fine when I ssh otheruser@host from another host. So network connectivity issues are not to blame. The only error I can think of is "Too many Authentication Failures for user root", although PuTTY reported a different error. The question is: how do I recover from this error condition and let PuTTY log in again? Restarting sshd does not seem to help.
"Too many Authentication Failures for user root" means that Your SSH server's MaxAuthTries limit was exceeded . It happens so that Your client is trying to authenticate with all possible keys stored in /home/USER/.ssh/ . This situation can be solved by these ways: ssh -i /path/to/id_rsa root@host Specify Host/IdentityFile pair in /home/USER/.ssh/config A single host in the config file should look something like this: Host example.com IdentityFile /home/USER/.ssh/id_rsa You can also set the user so you don't need to enter it on the command line and shorten long FQDN's too, see this example: host short IdentityFile /home/USER/.ssh/id_rsa User someuser HostName really-long-domain.example.com You then connect to the really-long-domain.example.com server with: ssh short Note: if you choose to use only the second option, and try to use ssh example.com you will still get errors (if that;s what brought you here), the short version will not give the errors, you can also use both options so you can ssh [email protected] without the errors. Increase MaxAuthTries value on the SSH server in /etc/ssh/sshd_config (not recommended).
{ "source": [ "https://serverfault.com/questions/36291", "https://serverfault.com", "https://serverfault.com/users/11722/" ] }
36,359
Some would argue that BSD/Unix has always been more reliable and stable than Linux (not me, of course, don't hurt me!). Why does Linux always seem to beat BSD? Is it the romance of the Linux story? I don't intend to offend anyone, please don't take offense. Also, please be thoughtful and polite in your response.
The historical situation back in the early part of the 1990s had a lot to do with it. At the time BSD unix was 'struggling to be free' and was viewed as the way forward in many circles. Linux did not get a working TCP stack for a couple of years after it came out and the internet was still somewhat rarefied. UC Berkeley and AT&T were engaged in a lawsuit about the ownership of the BSD code, so the future of the 'free' BSD code base was in question. Ultimately UC Berkeley won the suit by being able to show large chunks of BSD code in the SVR4 code base. AT&T was suitably embarrassed by this and backed down. The UCB people replaced the last of the infringing code with their own work and could release an AT&T-free code base. About this time Bill and Lynne Jolitz took the BSD code base and ported it to the 386, creating 386BSD and documenting it in a famous series of articles in Dr. Dobb's Journal. The lawsuit went on for long enough to paralyse the potential BSD community, which could not invest significantly in the code base until the legal uncertainty had cleared. A 'stable' version of Linux finally came out with a working TCP stack. Linux was available under the GPL, which reduced the incentive to fork it. This and Linus Torvalds' effective benevolent dictatorship worked to keep the kernel development unified. Several competing forks of BSD grew out of the BSD code base, fragmenting the community. The relative cohesion of the early Linux kernel development meant that Linux moved forward relatively quickly and ultimately gained the mind share. The entire BSD world stood still while the lawsuit was resolved. Even with the lawsuit resolved, it still lacked the structural cohesiveness of the Linux kernel development process and split into several forks. Thus, while BSD was (certainly at that point) more mature and arguably technically superior, Linux got the mindshare - which is pretty much the be-all and end-all of success in any large software market.
{ "source": [ "https://serverfault.com/questions/36359", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
36,419
I have written a script that I am using to push and deploy a new service to several machines under my control, and in order to execute the process I am using ssh to remotely start the process. Unfortunately, whenever I use SSH to start the process, the SSH command never seems to return, causing the script to stall. The command is specified as: ssh $user@$host "/root/command &" Whenever I run simple commands, such as ps or who, the SSH command returns immediately; however, when I try to start my process it does not return. I have tried tricks like wrapping my process in a simple bash script that starts the process and then exits, but this also hangs the SSH command (even if the bash script echoes a success message and exits normally). Does anyone have any insight into what is causing this behaviour, and how I can get the SSH command to return as soon as the process has been started?
SSH connects stdin, stdout and stderr of the remote shell to your local terminal, so you can interact with the command that's running on the remote side. As a side effect, it will keep running until these connections have been closed, which happens only when the remote command and all its children (!) have terminated (because the children, which is what "&" starts, inherit std* from their parent process and keep it open). So you need to use something like ssh user@host "/script/to/run < /dev/null > /tmp/mylogfile 2>&1 &" The <, > and 2>&1 redirect stdin/stdout/stderr away from your terminal. The "&" then makes your script go to the background. In production you would of course redirect stdin/err to a suitable logfile. See http://osdir.com/ml/network.openssh.general/2006-05/msg00017.html Edit: Just found out that the < /dev/null above is not necessary (but redirecting stdout/err is ). No idea why...
{ "source": [ "https://serverfault.com/questions/36419", "https://serverfault.com", "https://serverfault.com/users/7897/" ] }
36,421
I've got passwordless SSH set up; however, it prints the MotD when it logs in. Is there any way to stop that happening from the client side? I've tried ssh -q but that doesn't work. I don't want to use ~/.hushlogin nor do I want to change the server setup. The only thing that can work is to quiet all output, with >/dev/null 2>&1 . However, I don't want to ignore errors in case there actually is a problem. Even doing >/dev/null doesn't work, since ssh seems to print the motd to stderr. Update & reasoning I'm running a backup in a cron job. I don't want to get a cron email unless an error has occurred. However, if the motd is printed I'll get an email every time. I want to keep the motd being printed because that has legal implications. The motd says "unauthorized access prohibited". You need to have this sort of statement in there to legally prevent people from accessing it (like a no trespassing sign). Hence I don't want to blanket-disable it all the time.
I'm not sure why you have an aversion to doing this correctly - either on the server a la PrintMotd no PrintLastLog no and #/etc/pam.d/ssh # Print the message of the day upon successful login. # session optional pam_motd.so Or adding ~/.hushlogin for each user. Hint, for ~/.hushlogin, add it to /etc/skel so new user home directories are created with the file. Update: Without more information about your backup cron job, my only other suggestion is to redirect the output of the command to a file (or let cron capture it in email) and the output of the ssh session to /dev/null. Something like: 0 0 * * * ssh backuphost "backup_script_that_writes_to_a_log" >/dev/null Or 0 0 * * * ssh backuphost "backup_command 2>&1" >/dev/null I'd have to play around with the commands a bit, but that should get you started.
{ "source": [ "https://serverfault.com/questions/36421", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
36,467
I'd like to write a shell script (currently using bash) to automatically back up the content of several MySQL schemas on a remote server. The remote server is locked down to only allow SSH access so I have to create an SSH tunnel before running mysqldump against the various schemas. I can create a tunnel without any issue, however I'd like to be able to automatically close it after the database dump has completed. Currently my script is doing this: /usr/bin/ssh -T -f -L 4444:127.0.0.1:3306 -l remoteuser 208.77.188.166 sleep 600 /usr/bin/mysqldump --compress -h 127.0.0.1 -P 4444 -u user -ppassword db1 | gzip > /root/backups/snapshot/db1.sql.gz /usr/bin/mysqldump --compress -h 127.0.0.1 -P 4444 -u user -ppassword db2 | gzip > /root/backups/snapshot/db2.sql.gz /usr/bin/mysqldump --compress -h 127.0.0.1 -P 4444 -u user -ppassword db3 | gzip > /root/backups/snapshot/db3.sql.gz Where the connection is kept open for 600 seconds, obviously however if one of the first dumps takes longer than that then the connection is closed before the other dumps complete. I'd like to retain separate files for each schema backup (so will avoid the --databases of mysqldump for now). Any suggestions?
You don't need to bother with all that tunneling :-). Just let mysqldump stream its data using the SSH connection: ssh usr@host mysqldump -u dbuser -ppasswd my-database-name >dumpfile
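A sketch adapted to the original per-schema, compressed backups (the user, password, host and schema names below are simply carried over from the question as placeholders):
for db in db1 db2 db3; do
  ssh remoteuser@208.77.188.166 "mysqldump --compress -u user -ppassword $db" | gzip > /root/backups/snapshot/$db.sql.gz
done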
{ "source": [ "https://serverfault.com/questions/36467", "https://serverfault.com", "https://serverfault.com/users/1917/" ] }
36,586
Well-known tools like iftop and iptraf display network I/O per interface and per connection. Is there a way to see network I/O statistics per process?
nethogs looks like it will do what you want. EDIT: I needed to install ncurses-devel, libpcap and libpcap-devel to build.
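A typical invocation, assuming it was installed or built as above and that eth0 is the interface of interest:
sudo nethogs eth0
It needs root because it captures packets via libpcap, and it displays per-process sent/received rates for that interface.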
{ "source": [ "https://serverfault.com/questions/36586", "https://serverfault.com", "https://serverfault.com/users/10474/" ] }
36,660
I'm setting up up a new MySQL server and I'd like to give it the same set of usernames, allowed hosts, and passwords as an existing server (which is going away). Would it work to just do a dump of the users table and then load it on the new server? Is there a better way than that?
oldserver$ mysqldump mysql > mysql.sql newserver$ mysql mysql < mysql.sql newserver$ mysql -e 'flush privileges;' Should do it; remember to add -u $USER and -p$PASSWORD as required
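If the two servers run different MySQL versions, copying the raw grant tables can be risky; a hedged alternative sketch is to replay the grants as SQL statements instead (assumes client access via ~/.my.cnf or the usual -u/-p options):
mysql -N -B -e "SELECT CONCAT(QUOTE(user), '@', QUOTE(host)) FROM mysql.user" | \
while read u; do
  mysql -N -B -e "SHOW GRANTS FOR $u" | sed 's/$/;/'
done > grants.sql
# then on the new server: mysql < grants.sql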
{ "source": [ "https://serverfault.com/questions/36660", "https://serverfault.com", "https://serverfault.com/users/11478/" ] }
36,800
For those of you who are still not tired of "XYZ for sysadmins" questions: this subject has been bothering me for a long time. I have had a number of problems selecting a notebook for the job. For example: Notebook vs netbook is a real dilemma too. The first has a more comfortable keyboard and a bigger screen and is almost always faster. The second is smaller and easier to carry, but doesn't have a CD/DVD-RW; 14"/15"/17" screen. I don't see a winner here. 2-3 more xterms/windows on one screen at the cost of 1-2 more kg; The number of expansion ports is essential, I think. The new MacBook Air is good, but c'mon: no LAN and two USB ports vs LAN and one USB. Also, as a sysadmin I really need a COM port to set up network equipment, but where can you find a notebook with a COM port in 2012? That problem was solved by buying a USB-COM cable, but I still have problems with drivers in some OSes; So here are my questions: What do you think is a good notebook for a sysadmin? Maybe you can list specific models; What hardware extensions (i.e. USB devices) do you think should be added to the "sysadmin must-have" list?
I've gone pretty much 'USB everything' so all the extras stay in the bag until needed. USB serial port, USB ethernet adapter ( I hear you about sometimes needing two NICs! ), USB CD/DVD-RW. The weight still adds up on the shoulder, but the base machine then requires nothing more than a USB port (or two or three unless you also add a USB hub to the mix). Also handy: USB card reader, USB headphones/mic, and USB->miniUSB cable (for cellphone charging), oh, and USB->IDE/SATA adapter for quick access to random drives from dead machines. Another advantage to USB is that you still have all the extras if/when you upgrade your notebook to another model! Screensize is up to personal choice - I've gotten fast with my virtual desktop switcher and lived for a long time on an 800x600 13" screen, so my current 15" 1680x1050 seems absolutely luxurious. Of course, more is always better, right? :) Whatever you do, max out the RAM in your laptop to whatever your OS can use - you won't regret it when you're trying to read docs in firefox on one desktop and run wireshark in another and have a half dozen screen terms open...
{ "source": [ "https://serverfault.com/questions/36800", "https://serverfault.com", "https://serverfault.com/users/6244/" ] }
36,807
I have a peer-to-peer network set up with a Linux web server as a test environment, and a NetGear WNR854T is set up as a wireless gateway and router. However, from time to time I am unable to resolve the name of the Linux server. This happens regularly from one of our Vista computers connecting through the wireless gateway, although it almost never occurs with one of the wired Vista computers. I have not seen the problem occur when looking up the names of Windows boxes, although it may still be occurring with them as well. Also, we have a Mac, and it usually can connect if .local is appended to the name, but not always. The router is set up with a pretty basic configuration - i.e. acting as a DHCP server, forwarding DNS requests to our internet provider's DNS servers. The Linux server has an address reservation on the router. What other settings can I look into in order to resolve this? I'd rather not modify local hosts files if possible.
{ "source": [ "https://serverfault.com/questions/36807", "https://serverfault.com", "https://serverfault.com/users/10841/" ] }
37,088
I have an Amazon EC2 instance running and I would like to add another security group to that instance and then remove the current security group from that instance. Is this possible?
Update 2015-02-27: This is now possible, see the answer below . Old reply: Amazon's FAQ says it's not possible to define a security group anywhere but at launch time.
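For reference, on VPC-based instances the group membership can now be changed in place; a hedged sketch using the AWS CLI, with placeholder IDs (note the call replaces the instance's whole list of groups rather than appending to it):
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --groups sg-11111111 sg-22222222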
{ "source": [ "https://serverfault.com/questions/37088", "https://serverfault.com", "https://serverfault.com/users/4310/" ] }
37,092
Guest VM: Ubuntu 9.04 Host: Vista Physical RAM: 4GB Guest RAM: 1.3GB Processor: Core Duo Disk for guest: 18GB I am new to virtual machines. Although VirtualBox has been very user-friendly, I have a few doubts (perhaps about the concepts of virtualization). 1) I was thinking of creating the virtual hard drive (VHD) for the guest (Ubuntu) on my external HDD - is this alright? 2) Given the physical RAM (3GB accessible from 4GB - Vista 32-bit), can I work on both the guest and host together (switching as often as every 2-5 minutes)? Will that be a problem? Thanks in advance. EDIT (updated app usage): I plan to use Eclipse/NetBeans with Jetty running on Ubuntu, and office apps - Outlook, IMs, spreadsheets (everything else) - on Vista.
{ "source": [ "https://serverfault.com/questions/37092", "https://serverfault.com", "https://serverfault.com/users/10694/" ] }
37,441
The proc(5) manpage describes iowait as "time waiting for IO to complete". This was mostly explained in an earlier question. My question is: while waiting in blocking IO, does this include waiting on blocking network IO, or only local IO?
It means waiting for "file I/O", that is to say, any read/write call on a file which is in a mounted filesystem, but it also probably counts time waiting to swap in or demand-load pages into memory, e.g. libraries not in memory yet, or pages of mmap()'d files which aren't in RAM. It does NOT count time spent waiting for IPC objects such as sockets, pipes, ttys, select(), poll(), sleep(), pause() etc. Basically it's time that a thread spends waiting for synchronous disc I/O - during this time it is theoretically able to run but can't because some data it needs isn't there yet. Such processes usually show up in the "D" state and contribute to the load average of a box. Confusingly, I think this probably includes file I/O on network filesystems.
{ "source": [ "https://serverfault.com/questions/37441", "https://serverfault.com", "https://serverfault.com/users/4757/" ] }
37,622
I'm using Oracle for development on my local machine. The password for a bootstrap account that I always use to rebuild my database has expired. How do I turn off password expiration for this user (and all other users) permanently? I'm using Oracle 11g, but I don't know if the password expiration behavior is new in 11g.
alter profile default limit password_life_time unlimited;
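Note that changing the profile does not un-expire an account that has already expired; a hedged sqlplus sketch, with the user name and password as placeholders:
sqlplus / as sysdba <<'EOF'
ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
ALTER USER bootstrap_user IDENTIFIED BY new_or_same_password;
SELECT username, account_status, expiry_date FROM dba_users WHERE username = 'BOOTSTRAP_USER';
EOF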
{ "source": [ "https://serverfault.com/questions/37622", "https://serverfault.com", "https://serverfault.com/users/2536/" ] }
37,629
I want to copy a file from my machine A to server C, but only have access to server C through server B. Instead of first transferring the file to server B, logging in, and then transferring it to server C, is it possible to transfer the file directly with scp or similar programs? (Emacs tramp-mode has this feature for editing files remotely.)
Assuming OpenSSH, add to your SSH configuration in .ssh/config Host distant ProxyCommand ssh near nc distant 22 This will cause SSH to be able to connect "directly" to the machine named distant by proxying through the machine named near. It can then use applications like scp and sftp to the distant machine. For this to work you need 'nc' aka netcat installed on the machine named near. But a lot of modern systems will have it already. towo's tar solution is more effective for one-shot problems, assuming you've memorised tar's syntax and rules of operation.
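With that stanza in place, file transfers behave as if the distant host were directly reachable, e.g. scp somefile.tar.gz distant:/tmp/ or sftp distant. (On newer OpenSSH releases the same effect is available without netcat via ProxyJump near in the config, or ssh -J near distant on the command line, but the nc form above also works on older versions.)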
{ "source": [ "https://serverfault.com/questions/37629", "https://serverfault.com", "https://serverfault.com/users/3685/" ] }
37,829
I'm trying to write a bash script (in Ubuntu) that will backup a directory using tar. How can I do a check in the script so that it can only be run as root (or with sudo)? For instance, if a user runs the script, it should say that this script must be run with sudo privileges, and then quit. If the script is executed as root, it will continue past the check. I know there has to be an easy solution, I just haven't been able to find it by googling.
To pull the effective uid use this command: id -u If the result is ‘0’ then the script is either running as root, or using sudo. You can run the check by doing something like: if [[ $(/usr/bin/id -u) -ne 0 ]]; then echo "Not running as root" exit fi
{ "source": [ "https://serverfault.com/questions/37829", "https://serverfault.com", "https://serverfault.com/users/8250/" ] }
37,929
Is there a command-line option to auto-accept an SSL certificate permanently using the SVN command line, in a way that avoids the prompt?
It depends somewhat on your version of SVN. Recent (1.6+) ones have the usual --non-interactive (which you want to use to avoid prompts) and also a --trust-server-cert that may do what you want.
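A hedged example of how the two options combine in practice, assuming SVN 1.6+ and a hypothetical repository URL:
svn checkout --non-interactive --trust-server-cert https://svn.example.com/repo/trunk
Note that --trust-server-cert must be paired with --non-interactive, and it only silences the prompt for certificates whose sole problem is an unknown/untrusted CA (not hostname mismatches or expired certificates).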
{ "source": [ "https://serverfault.com/questions/37929", "https://serverfault.com", "https://serverfault.com/users/7234/" ] }
38,114
I've been picking up Linux (Fedora 10, then 11) over the past few months (and enjoying it immensely-- it's like discovering computers all over again, so many things to learn). I've added my user to the last line of the /etc/sudoers file as shown below, so that I don't get asked for my password when I execute the sudo command: MyUserName ALL=(ALL) NOPASSWD:ALL Now every time I execute a command using sudo , it pauses a noticeable amount of time before actually performing the task (~10 seconds). Why might this be and how might I fix this? I'm running Sudo version 1.7.1 on Fedora 11 x86 64.
I asked this question over on SO and it got moved here. That said I no longer have the ability to edit the question as if I owned it, or even accept the correct answer, but this turned out to be the true reason why and how to solve it: Found here User "rohandhruva" on there gives the right answer: This happens if you change the hostname during the install process. To solve the problem, edit the file /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <ADD_YOURS_HERE> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 <ADD_YOURS_HERE>
{ "source": [ "https://serverfault.com/questions/38114", "https://serverfault.com", "https://serverfault.com/users/11046/" ] }