Dataset columns: source_id (int64, range 1 to 4.64M); question (string, lengths 0 to 28.4k); response (string, lengths 0 to 28.8k); metadata (dict)
2,429
How do you set up ssh to authenticate a user using keys instead of a username / password?
Each user should generate their keypair on their local machine using ssh-keygen -t rsa (the rsa can be replaced with dsa or rsa1 too, though those options are not recommended). Then they need to append the contents of their public key ( id_rsa.pub ) to ~/.ssh/authorized_keys on the server being logged into.
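A minimal sketch of that workflow from the user's shell (the account and hostname are placeholders; ssh-copy-id is available on most modern systems):

    ssh-keygen -t rsa                       # run on the local machine; accept the default path
    ssh-copy-id user@server.example.com     # appends id_rsa.pub to ~/.ssh/authorized_keys on the server
    # or, without ssh-copy-id:
    cat ~/.ssh/id_rsa.pub | ssh user@server.example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'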
{ "source": [ "https://serverfault.com/questions/2429", "https://serverfault.com", "https://serverfault.com/users/635/" ] }
2,591
Under what conditions does one start to consider subnetting a network? I'm looking for a few general rules of thumb, or triggers based on measurable metrics that make subnetting something that should be considered.
Interesting question. Historically, prior to the advent of fully switched networks, the main reason to break a network into subnets was to limit the number of nodes in a single collision domain. With too many nodes, network performance would peak and eventually collapse under heavy load due to excessive collisions. The exact number of nodes that could be deployed depended on many factors, but generally speaking you could not regularly load the collision domain much beyond 50% of the total available bandwidth and still have the network be stable. Fifty nodes on a network was a lot of nodes in those days; with heavy users, you might have topped out at 20 or 30 nodes before needing to start subnetting things.

With fully switched, full-duplex subnets, collisions are no longer a concern, and assuming typical desktop users you can typically deploy hundreds of nodes in a single subnet without any issues at all. Heavy broadcast traffic, as other answers have alluded to, might be a concern depending on what protocols/applications you are running on the network. However, understand that subnetting a network does not necessarily help with broadcast traffic. Many protocols use broadcasting for a reason: all the nodes on the network actually need to see such traffic to implement the desired application-level features. Simply subnetting the network buys you nothing if the broadcast packet also has to be forwarded over to the other subnet and broadcast out again; in fact, that adds extra traffic (and latency) to both subnets if you think it through.

Generally speaking, today the main reasons for subnetting networks have much more to do with organizational, administrative and security boundary considerations than anything else. The original question asks for measurable metrics that trigger subnetting considerations. I am not sure there are any in terms of specific numbers. This is going to depend dramatically on the applications involved, and I don't think there are any trigger points that would apply generally.

Rules of thumb for planning out subnets:

- Consider subnets for different organizational departments/divisions, especially as they get to be non-trivial (50+ nodes!?) in size.
- Consider subnets for groups of nodes/users using a common application set that is distinct from other users or node types (developers, VoIP devices, the manufacturing floor).
- Consider subnets for groups of users that have differing security requirements (securing the accounting department, securing WiFi).
- Consider subnets from a virus outbreak, security breach and damage containment perspective: how many nodes get exposed/breached, and what is an acceptable exposure level for your organization? This consideration assumes restrictive routing (firewall) rules between subnets.

With all that said, adding subnets adds some level of administrative overhead and potentially causes problems, such as running out of node addresses in one subnet while having too many left in another pool. The routing and firewall setups, the placement of common servers in the network and the like all get more involved. Each subnet should have a reason for existing that outweighs the overhead of maintaining the more sophisticated logical topology.
{ "source": [ "https://serverfault.com/questions/2591", "https://serverfault.com", "https://serverfault.com/users/706/" ] }
2,616
How can I tell which MySQL users have access to a database and what privileges they have? I seem to be able to get this information from phpMyAdmin when I click "Privileges":

    Users having access to "mydatabase"
    User     Host       Type               Privileges              Grant
    myuser1  %          database-specific  ALL PRIVILEGES          Yes
    root     localhost  global             ALL PRIVILEGES          Yes
    myuser2  %          database-specific  SELECT, INSERT, UPDATE  No

. . . but I'd like to know how to perform this query from the command line. (phpMyAdmin often shows me the SQL syntax of the command it is executing, but I don't see it in this case.) Please note that I'm not asking what grants a particular user has (i.e. "SHOW GRANTS FOR myuser1") but rather, given the name of a database, how do I determine which MySQL users have access to that database and what privileges they have? Basically, how can I get the chart above from the command line?
You can query the mysql.db grant table directly; append \G to the statement to display the results vertically (one record per block) instead of in table form:

    SELECT * FROM mysql.db WHERE Db = '<database name in lowercase>'\G
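A rough sketch of the same lookup straight from the shell (the user, host and database name are placeholders; database-specific grants live in mysql.db, while global grants like root's live in mysql.user):

    mysql -u root -p -e "SELECT User, Host, Select_priv, Insert_priv, Update_priv FROM mysql.db WHERE Db = 'mydatabase';"
    mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE Select_priv = 'Y';"   # users with global privileges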
{ "source": [ "https://serverfault.com/questions/2616", "https://serverfault.com", "https://serverfault.com/users/1156/" ] }
2,678
I created a Windows XP disk image. It is 5 GB, but I would like to know if there is any simple way to increase the size to 20 GB.
As of VirtualBox 4.0.0, the VBoxManage command line tool offers a simple resize option: VBoxManage modifyhd /path/to/vdi --resize <mbytes> After the virtual disk container resize, boot into the VM and resize the partitions to make use of the extra space. See also: VirtualBox manual, Chapter 8. VBoxManage: modifyhd
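For example, growing a 5 GB image to 20 GB means passing 20480 MB; the path below is only a placeholder for wherever your .vdi actually lives:

    VBoxManage modifyhd "$HOME/VirtualBox VMs/WinXP/WinXP.vdi" --resize 20480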
{ "source": [ "https://serverfault.com/questions/2678", "https://serverfault.com", "https://serverfault.com/users/1298/" ] }
2,699
I am using Windows XP Pro, and I need to know if something is registered on a port. If so, how can I tell what is on the port? EDIT: What I mean by registered is that I am trying to test a .NET remoting application, and I need to see if the application is running or registered on a given port.
netstat -a -b will show all listening ports and the executable name (rather than just the PID). If you prefer a graphical version, Microsoft's TCPView will show you the same information, updating in real-time.
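To narrow the output down to a single port, you can pipe it through findstr and then look the owning process up by PID; the port and PID below are only examples:

    netstat -a -n -o | findstr :8080
    tasklist /FI "PID eq 1234"          # 1234 = the PID reported in the last column of netstat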
{ "source": [ "https://serverfault.com/questions/2699", "https://serverfault.com", "https://serverfault.com/users/823/" ] }
2,783
What are the tell-tale signs that a Linux server has been hacked? Are there any tools that can generate and email an audit report on a scheduled basis?
- Keep a pristine copy of critical system files (such as ls, ps, netstat, md5sum) somewhere, with an md5sum of them, and compare them to the live versions regularly. Rootkits will invariably modify these files. Use these copies if you suspect the originals have been compromised.
- aide or tripwire will tell you of any files that have been modified - assuming their databases have not been tampered with.
- Configure syslog to send your logfiles to a remote log server where they can't be tampered with by an intruder. Watch these remote logfiles for suspicious activity.
- Read your logs regularly - use logwatch or logcheck to synthesize the critical information.
- Know your servers. Know what kinds of activities and logs are normal.
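A minimal sketch of the baseline-checksum idea (the paths are examples; store the baseline on read-only or offline media and run the comparison with a binary you trust):

    md5sum /bin/ls /bin/ps /bin/netstat /usr/bin/md5sum > /mnt/usb/baseline.md5   # taken while the system is known-good
    md5sum -c /mnt/usb/baseline.md5                                               # run later to flag any binary that changed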
{ "source": [ "https://serverfault.com/questions/2783", "https://serverfault.com", "https://serverfault.com/users/1532/" ] }
2,786
This is something that's always bothered me, so I'll ask the Server Fault community. I love Process Explorer for keeping track of more than just the high-level tasks you get in the Task Manager . But I constantly want to know which of those dozen services hosted in a single process under svchost is making my processor spike. So... is there any non-intrusive way to find this information out?
Yes, there is an (almost) non-intrusive and easy way: split each service to run in its own SVCHOST.EXE process, and the service consuming the CPU cycles will be easily visible in Process Explorer (the space after "=" is required):

    SC Config Servicename Type= own

Do this in a command line window or put it into a BAT script. Administrative privileges are required, and a restart of the computer is required before it takes effect. The original state can be restored by:

    SC Config Servicename Type= share

Example: to make Windows Management Instrumentation run in a separate SVCHOST.EXE:

    SC Config winmgmt Type= own

This technique has no ill effects, except perhaps increasing memory consumption slightly. And apart from observing CPU usage for each service, it also makes it easy to observe page faults delta, disk I/O read rate and disk I/O write rate for each service. For Process Explorer, menu View/Select Columns: tab Process Memory/Page Fault Delta, tab Process Performance/IO Delta Write Bytes, tab Process Performance/IO Delta Read Bytes, respectively.

On most systems there is only one SVCHOST.EXE process that has a lot of services. I have used this sequence (it can be pasted directly into a command line window):

    rem 1. "Automatic Updates"
    SC Config wuauserv Type= own
    rem 2. "COM+ Event System"
    SC Config EventSystem Type= own
    rem 3. "Computer Browser"
    SC Config Browser Type= own
    rem 4. "Cryptographic Services"
    SC Config CryptSvc Type= own
    rem 5. "Distributed Link Tracking"
    SC Config TrkWks Type= own
    rem 6. "Help and Support"
    SC Config helpsvc Type= own
    rem 7. "Logical Disk Manager"
    SC Config dmserver Type= own
    rem 8. "Network Connections"
    SC Config Netman Type= own
    rem 9. "Network Location Awareness"
    SC Config NLA Type= own
    rem 10. "Remote Access Connection Manager"
    SC Config RasMan Type= own
    rem 11. "Secondary Logon"
    SC Config seclogon Type= own
    rem 12. "Server"
    SC Config lanmanserver Type= own
    rem 13. "Shell Hardware Detection"
    SC Config ShellHWDetection Type= own
    rem 14. "System Event Notification"
    SC Config SENS Type= own
    rem 15. "System Restore Service"
    SC Config srservice Type= own
    rem 16. "Task Scheduler"
    SC Config Schedule Type= own
    rem 17. "Telephony"
    SC Config TapiSrv Type= own
    rem 18. "Terminal Services"
    SC Config TermService Type= own
    rem 19. "Themes"
    SC Config Themes Type= own
    rem 20. "Windows Audio"
    SC Config AudioSrv Type= own
    rem 21. "Windows Firewall/Internet Connection Sharing (ICS)"
    SC Config SharedAccess Type= own
    rem 22. "Windows Management Instrumentation"
    SC Config winmgmt Type= own
    rem 23. "Wireless Configuration"
    SC Config WZCSVC Type= own
    rem 24. "Workstation"
    SC Config lanmanworkstation Type= own
    rem End.
{ "source": [ "https://serverfault.com/questions/2786", "https://serverfault.com", "https://serverfault.com/users/1551/" ] }
2,817
Following in the spirit of Hidden Features of PowerShell and various others on Stack Overflow, what Linux commands or combinations of commands do you find essential in your work? Also See: Hidden Features of MySQL Hidden Features of PowerShell Hidden features of Oracle Database Hidden Features of Windows 2008 Hidden Features of Solaris/OpenSolaris Hidden Features of SQL Server Hidden Features of IIS (6.0 / 7.0)
To get the ball going, I find screen to be essential: When screen is called, it creates a single window with a shell in it (or the specified command) and then gets out of your way so that you can use the program as you normally would. Then, at any time, you can create new (full-screen) windows with other programs in them (including more shells), kill the current window, view a list of the active windows, turn output logging on and off, copy text between windows, view the scrollback history, switch between windows, etc. All windows run their programs completely independent of each other. Programs continue to run when their window is currently not visible and even when the whole screen session is detached from the users terminal.
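In practice, day-to-day use boils down to a handful of commands (the session name here is arbitrary):

    screen -S build     # start a named session and work inside it
    # press Ctrl-a d to detach; jobs keep running
    screen -ls          # list sessions
    screen -r build     # reattach later, even from a new ssh login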
{ "source": [ "https://serverfault.com/questions/2817", "https://serverfault.com", "https://serverfault.com/users/1106/" ] }
2,888
When someone mentions RAID in a conversation about backups, invariably someone declares that "RAID is not a backup." Sure, for striping, that's true. But what's the difference between redundancy and a backup?
RAID guards against one kind of hardware failure. There are lots of failure modes that it doesn't guard against:

- File corruption
- Human error (deleting files by mistake)
- Catastrophic damage (someone dumps water onto the server)
- Viruses and other malware
- Software bugs that wipe out data
- Hardware problems that wipe out data or cause hardware damage (controller malfunctions, firmware bugs, voltage spikes, ...)

and more.
{ "source": [ "https://serverfault.com/questions/2888", "https://serverfault.com", "https://serverfault.com/users/919/" ] }
2,912
When logging into Windows, the logon screen says that CTRL-ALT-DEL somehow makes Windows more secure. I have never been able to figure out how having to press some specific key combination before logging in makes the system more secure. I have never encountered a VMS, UNIX or related system that makes you press any key to log in - except older terminal-based UNIXes where you press ENTER to get a login prompt. How does having to press CTRL-ALT-DEL before logging in make Windows more secure?
The Windows (NT) kernel is designed to reserve the notification of this key combination to a single process: Winlogon. So, as long as the Windows installation itself is working as it should - no third party application can respond to this key combination (if it could, it could present a fake logon window and keylog your password ;)
{ "source": [ "https://serverfault.com/questions/2912", "https://serverfault.com", "https://serverfault.com/users/808/" ] }
2,944
When you want to capture browser traffic or general windows HTTP traffic what tool do you use?
Fiddler, hands down! http://www.fiddler2.com/fiddler2/
{ "source": [ "https://serverfault.com/questions/2944", "https://serverfault.com", "https://serverfault.com/users/1155/" ] }
2,952
Installing Windows from a thumb drive is vastly superior to burning a copy to a DVD which will fill some landfill somewhere with toxic stuff. Not to mention it's about 50x faster to install Windows from a USB Thumb Drive. How do you get the bits onto the thumb drive so that you can boot from it and do a clean install?
Update: Microsoft has created the Windows 7 USB/DVD Download Tool to make this very easy. I used this guide as a set of directions - http://kurtsh.spaces.live.com/blog/cns!DA410C7F7E038D!1665.entry

1. Get a USB thumbdrive between 4-32GB. If the drive is larger than 32GB, Windows cannot format it as FAT32, so an alternate utility must be used. Windows can still read FAT32 partitions larger than 32GB, though some devices cannot.

2. Run cmd.exe as administrator and enter the following commands, each followed by Enter:

    diskpart
    list disk
    select disk #       (where # is your USB drive as determined from "list disk")
    clean               (this step will delete all data on your flash drive!)
    create partition primary
    active
    format fs=fat32 quick
    assign
    list volume
    exit
    bootsect.exe /nt60 F: /mbr      (where F: is the drive letter of your USB drive as reported by "list volume")

3. Copy the Windows files from the ISO or other source using robocopy:

    robocopy.exe E:\ F:\ /MIR

where E:\ is the source and F:\ is the destination. Drag-and-drop or copy/paste can also be used, if you know what you're doing.

4. Configure your PC to boot from the USB drive. In some machines the USB thumbdrive will appear to the BIOS as any other hard drive. You need to muck with the boot sequence to place the thumbdrive higher in the boot order than the local hard drive. Note that after you do this you might want to reset the boot order in order to ensure that BitLocker doesn't detect boot changes based on the fact that the thumbdrive is missing if it was there when you encrypted your drive.
{ "source": [ "https://serverfault.com/questions/2952", "https://serverfault.com", "https://serverfault.com/users/1155/" ] }
3,066
I want to delete MyNewService, but when I type in sc delete MyNewService I simply can't delete it because there is no such service, due to the error "The specified service does not exist as an installed service". Any ideas how to solve this problem? Edit: as far as the Services panel is concerned, MyNewService is there all the time. I restarted the PC a few times and it's still there.
View the properties of the service and you'll see a " Service Name " and " Display Name ". The display name is the one you see in services.msc, you need to use the service name with the net command however. Sometimes they're very different for example " Extensible Authentication Protocol Service " is the display name and " EapHost " is the service name.
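A rough way to track the real service name down from the command line (the display name below is made up; note the required space after "="):

    sc query state= all | findstr /i "MyNew"
    sc GetKeyName "My New Service"
    sc delete MyNewService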
{ "source": [ "https://serverfault.com/questions/3066", "https://serverfault.com", "https://serverfault.com/users/1605/" ] }
3,103
The top command on OS X is pretty crappy.. The one included with most Linux distros allows you to change the sort-by column using < and > , there is a coloured mode (by pressing the z key), and a bunch of other useful options. Is there a replacement command line tool? Ideally I would like htop for OS X, but because it relies on the /proc/ filesystem ( see this thread ) it has not been ported (and probably will never be) The obvious answer is "Activity Monitor", but I'm looking for a command line tool!
top on MacOS X does support sorting, at least: O<skey> Set secondary sort key to <skey> (see o<key>). o<key> Set primary sort key to <key>: [+-]{command|cpu|pid |prt|reg|rprvt|rshrd|rsize|th|time|uid|username|vprvt |vsize}.
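For example, using the keys listed above, a CPU-sorted view with a secondary sort on time looks like this:

    top -o cpu -O time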
{ "source": [ "https://serverfault.com/questions/3103", "https://serverfault.com", "https://serverfault.com/users/1070/" ] }
3,132
I'm running Ubuntu, and want to find out the UUID of a particular filesystem (not partition). I know I can use e2label /dev/sda1 to find out the filesystem label, but there doesn't seem to be a similar way to find the UUID .
Another command that might be available and also works quite well for this is 'blkid'. It's part of the e2fsprogs package. Examples of its usage:

Look up data on /dev/sda1:

    topher@crucible:~$ sudo blkid /dev/sda1
    /dev/sda1: UUID="727cac18-044b-4504-87f1-a5aefa774bda" TYPE="ext3"

Show UUID data for all partitions:

    topher@crucible:~$ sudo blkid
    /dev/sda1: UUID="727cac18-044b-4504-87f1-a5aefa774bda" TYPE="ext3"
    /dev/sdb: UUID="467c4aa9-963d-4467-8cd0-d58caaacaff4" TYPE="ext3"

Show UUID data for all partitions in an easier to read format (note: in newer releases, blkid -L has a different meaning, and blkid -o list should be used instead):

    topher@crucible:~$ sudo blkid -L
    device     fs_type  label  mount point  UUID
    -------------------------------------------------------------------------------
    /dev/sda1  ext3            /            727cac18-044b-4504-87f1-a5aefa774bda
    /dev/sdc   ext3            /home        467c4aa9-963d-4467-8cd0-d58caaacaff4

Show just the UUID for /dev/sda1 and nothing else:

    topher@crucible:~$ sudo blkid -s UUID -o value /dev/sda1
    727cac18-044b-4504-87f1-a5aefa774bda
{ "source": [ "https://serverfault.com/questions/3132", "https://serverfault.com", "https://serverfault.com/users/768/" ] }
3,270
What kinds of questions would you ask and what scenarios would you describe, what kinds of answers would you look for? I don't ask for specific questions. I would like to know which interview strategy is good for selecting candidates who are qualified for the job.
I ask questions in 3 categories:

Technical Knowledge - I want to make sure the candidate knows what he/she is supposed to know. For instance, tell me the difference between RAID 0, RAID 1, RAID 5, RAID 1+0, and RAID 0+1. If an AD Directory Services administrator, tell me the forest and domain level FSMO roles and what they each do. In addition, this is where I ask what technology they are interested in. Do they build robots on the side? Good! Do they program said robots? Really? So I've got someone who can do a bit of coding and knows the pains of troubleshooting. Outstanding! Things like that.

Personality - I ask questions about how they would handle different scenarios. Situations like, "The PM realizes there has been an error made in the schedule. You know the error is the PM's fault. That error is going to cause you to work two weekends back to back. How do you handle it?" Basically questions that reveal how the candidate thinks and whether or not they know what to do to be part of a team. This won't weed out the folks who know the right answers and don't do them, but it will weed out the folks who have no idea how to play nicely with others. I also ask questions about community involvement.

Previous Experience - I usually ask the candidate to give me a situation or project in the past that went well where they were a major part. I want to know what challenges they faced and how they handled them. I also ask for a situation where things didn't go well. What were the lessons that candidate learned? What could the candidate have done, thinking in hindsight, to possibly turn around the situation (and if the candidate couldn't, does the candidate recognize that)?
{ "source": [ "https://serverfault.com/questions/3270", "https://serverfault.com", "https://serverfault.com/users/45/" ] }
3,331
Sometimes your scripts need to behave differently on different Linux distributions. How can I determine which version of Linux a script is running on?
Don't try and make assumptions based on the distro as to what you can and cannot do, for that way lies madness (see also "User Agent detection"). Instead, detect whether what it is that you want to do is supported, and how it's done by whatever command or file location you want to use. For example, if you wanted to install a package, you can detect whether you're on a Debian-like system or a RedHat-like system by checking for the existence of dpkg or rpm (check for dpkg first, because Debian machines can have the rpm command on them...). Make your decision as to what to do based on that, not just on whether it's a Debian or RedHat system. That way you'll automatically support any derivative distros that you didn't explicitly program in. Oh, and if your package requires specific dependencies, then test for those too and let the user know what they're missing. Another example is fiddling with network interfaces. Work out what to do based on whether there's an /etc/network/interfaces file or an /etc/sysconfig/network-scripts directory, and go from there. Yes, it's more work, but unless you want to remake all the mistakes that web developers have made over the past decade or more, you'll do it the smart way right from the start.
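A minimal sketch of that capability-detection approach (command names and paths are the ones mentioned above; the helper names are made up):

    #!/bin/sh
    # Pick a package-install helper based on the tooling present, not the distro name.
    if command -v dpkg >/dev/null 2>&1; then
        pkg_install() { apt-get install -y "$@"; }
    elif command -v rpm >/dev/null 2>&1; then
        pkg_install() { yum install -y "$@"; }
    else
        echo "No supported package manager found" >&2
        exit 1
    fi

    # Same idea for network configuration layout.
    if [ -f /etc/network/interfaces ]; then
        echo "Debian-style network configuration"
    elif [ -d /etc/sysconfig/network-scripts ]; then
        echo "RedHat-style network configuration"
    fi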
{ "source": [ "https://serverfault.com/questions/3331", "https://serverfault.com", "https://serverfault.com/users/919/" ] }
3,478
After administering Unix or Unix-like servers, what tools (command-line preferably) do you feel you cannot live without?
GNU screen - essential when you're managing large numbers of systems and don't want to have a dozen terminal windows open.
{ "source": [ "https://serverfault.com/questions/3478", "https://serverfault.com", "https://serverfault.com/users/1695/" ] }
3,740
Basically like some of my own that I've posted below. I'm looking for added functionality to the programme 'screen'. At the very least have a look at the last line for a fantastic 'menu bar' at the bottom of a screen session.

    ## gyaresu's .screenrc 2008-03-25
    # http://delicious.com/search?p=screenrc

    # Don't display the copyright page
    startup_message off

    # tab-completion flash in heading bar
    vbell off

    # keep scrollback n lines
    defscrollback 1000

    # Doesn't fix scrollback problem on xterm because if you scroll back
    # all you see is the other terminals history.
    # termcapinfo xterm|xterms|xs|rxvt ti@:te@

    # These will let you use
    bind -c selectHighs 0 select 10 #these three commands are
    bind -c selectHighs 1 select 11 #added to the command-class
    bind -c selectHighs 2 select 12 #selectHighs
    bind -c selectHighs 3 select 13
    bind -c selectHighs 4 select 14
    bind -c selectHighs 5 select 15

    bind - command -c selectHighs #bind the hyphen to
    #command-class selectHighs

    screen -t rtorrent 0 rtorrent
    #screen -t tunes 1 ncmpc --host=192.168.1.4 --port=6600 #was for connecting to MPD music server.
    screen -t stuff 1
    screen -t irssi 2 irssi
    screen -t dancing 4
    screen -t python 5 python
    screen -t giantfriend 6 these_are_ssh_to_server_scripts.sh
    screen -t computerrescue 7 these_are_ssh_to_server_scripts.sh
    screen -t BMon 8 bmon -p eth0
    screen -t htop 9 htop
    screen -t hellanzb 10 hellanzb
    screen -t watching 3
    #screen -t interactive.fiction 8
    #screen -t hellahella 8 paster serve --daemon /home/gyaresu/downloads/hellahella/hella.ini

    shelltitle "$ |bash"

    # THIS IS THE PRETTY BIT
    #change the hardstatus settings to give an window list at the bottom of the
    ##screen, with the time and date and with the current window highlighted
    hardstatus alwayslastline
    #hardstatus string '%{= mK}%-Lw%{= KW}%50>%n%f* %t%{= mK}%+Lw%< %{= kG}%-=%D %d %M %Y %c:%s%{-}'
    hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %d/%m %{W}%c %{g}]'
For those wanting a less cryptic way of getting a nice screen set up, I can heartily recommend byobu (formerly called screen-profiles). It gives you a nice default set of stuff at the bottom of the screen - the bottom line contains various handy status information, and the second from bottom line contains a list of your screen windows. All this can be configured in a nice easy ncurses menu by pressing F9.

The function keys are mapped to common operations:

- F2 - create a new window
- F3 - go to the previous window
- F4 - go to the next window
- F5 - reload profile
- F6 - detach from the session
- F7 - enter scrollback mode
- F8 - view all keybindings
- F9 - configure screen-profiles
- F12 - lock this terminal

See this article for a tutorial and screenshots. Byobu is in the Ubuntu repositories from Karmic (9.10) onwards; in Jaunty it was called screen-profiles. Before that it can be installed from this PPA or from this download page. It's widely packaged for other up-to-date distros as well. It does depend on Python, but once you have byobu set up as you like it, you can have it generate a tarball containing all you need to recreate your screen on another computer using byobu-export.
{ "source": [ "https://serverfault.com/questions/3740", "https://serverfault.com", "https://serverfault.com/users/1576/" ] }
3,743
Is there anything that you can't live without and will make my life SO much easier? Here are some that I use ('diskspace' & 'folders' are particularly handy).

    # some more ls aliases
    alias ll='ls -alh'
    alias la='ls -A'
    alias l='ls -CFlh'
    alias woo='fortune'
    alias lsd="ls -alF | grep /$"

    # This is GOLD for finding out what is taking so much space on your drives!
    alias diskspace="du -S | sort -n -r |more"

    # Command line mplayer movie watching for the win.
    alias mp="mplayer -fs"

    # Show me the size (sorted) of only the folders in this directory
    alias folders="find . -maxdepth 1 -type d -print | xargs du -sk | sort -rn"

    # This will keep you sane when you're about to smash the keyboard again.
    alias frak="fortune"

    # This is where you put your hand rolled scripts (remember to chmod them)
    PATH="$HOME/bin:$PATH"
I have a little script that extracts archives; I found it somewhere on the net (the only change below is quoting "$1" so filenames with spaces work):

    # usage: extract <archive>
    extract () {
        if [ -f "$1" ] ; then
            case "$1" in
                *.tar.bz2)  tar xvjf "$1"    ;;
                *.tar.gz)   tar xvzf "$1"    ;;
                *.bz2)      bunzip2 "$1"     ;;
                *.rar)      unrar x "$1"     ;;
                *.gz)       gunzip "$1"      ;;
                *.tar)      tar xvf "$1"     ;;
                *.tbz2)     tar xvjf "$1"    ;;
                *.tgz)      tar xvzf "$1"    ;;
                *.zip)      unzip "$1"       ;;
                *.Z)        uncompress "$1"  ;;
                *.7z)       7z x "$1"        ;;
                *)          echo "don't know how to extract '$1'..." ;;
            esac
        else
            echo "'$1' is not a valid file!"
        fi
    }
{ "source": [ "https://serverfault.com/questions/3743", "https://serverfault.com", "https://serverfault.com/users/1576/" ] }
3,780
The aim for this Wiki is to promote using a command to open up commonly used applications without having to go through many mouse clicks - thus saving time on monitoring and troubleshooting Windows machines. Answer entries need to specify Application name Commands Screenshot (Optional) Shortcut to commands && - Command Chaining %SYSTEMROOT%\System32\rcimlby.exe -LaunchRA - Remote Assistance (Windows XP) appwiz.cpl - Programs and Features (Formerly Known as "Add or Remove Programs") appwiz.cpl @,2 - Turn Windows Features On and Off (Add/Remove Windows Components pane) arp - Displays and modifies the IP-to-Physical address translation tables used by address resolution protocol (ARP) at - Schedule tasks either locally or remotely without using Scheduled Tasks bootsect.exe - Updates the master boot code for hard disk partitions to switch between BOOTMGR and NTLDR cacls - Change Access Control List (ACL) permissions on a directory, its subcontents, or files calc - Calculator chkdsk - Check/Fix the disk surface for physical errors or bad sectors cipher - Displays or alters the encryption of directories [files] on NTFS partitions cleanmgr.exe - Disk Cleanup clip - Redirects output of command line tools to the Windows clipboard cls - clear the command line screen cmd /k - Run command with command extensions enabled color - Sets the default console foreground and background colors in console command.com - Default Operating System Shell compmgmt.msc - Computer Management control.exe /name Microsoft.NetworkAndSharingCenter - Network and Sharing Center control keyboard - Keyboard Properties control mouse(or main.cpl) - Mouse Properties control sysdm.cpl,@0,3 - Advanced Tab of the System Properties dialog control userpasswords2 - Opens the classic User Accounts dialog desk.cpl - opens the display properties devmgmt.msc - Device Manager diskmgmt.msc - Disk Management diskpart - Disk management from the command line dsa.msc - Opens active directory users and computers dsquery - Finds any objects in the directory according to criteria dxdiag - DirectX Diagnostic Tool eventvwr - Windows Event Log (Event Viewer) explorer . - Open explorer with the current folder selected. explorer /e , . - Open explorer, with folder tree, with current folder selected. F7 - View command history find - Searches for a text string in a file or files findstr - Find a string in a file firewall.cpl - Opens the Windows Firewall settings fsmgmt.msc - Shared Folders fsutil - Perform tasks related to FAT and NTFS file systems ftp - Transfers files to and from a computer running an FTP server service getmac - Shows the mac address(es) of your network adapter(s) gpedit.msc - Group Policy Editor gpresult - Displays the Resultant Set of Policy (RSoP) information for a target user and computer httpcfg.exe - HTTP Configuration Utility iisreset - To restart IIS InetMgr.exe - Internet Information Services (IIS) Manager 7 InetMgr6.exe - Internet Information Services (IIS) Manager 6 intl.cpl - Regional and Language Options ipconfig - Internet protocol configuration lusrmgr.msc - Local Users and Groups Administrator msconfig - System Configuration notepad - Notepad? 
;) mmsys.cpl - Sound/Recording/Playback properties mode - Configure system devices more - Displays one screen of output at a time mrt - Microsoft Windows Malicious Software Removal Tool mstsc.exe - Remote Desktop Connection nbstat - displays protocol statistics and current TCP/IP connections using NBT ncpa.cpl - Network Connections netsh - Display or modify the network configuration of a computer that is currently running netstat - Network Statistics net statistics - Check computer up time net stop - Stops a running service. net use - Connects a computer to or disconnects a computer from a shared resource, displays information about computer connections, or mounts a local share with different privileges (documentation) odbcad32.exe - ODBC Data Source Administrator pathping - A traceroute that collects detailed packet loss stats perfmon - Opens Reliability and Performance Monitor ping - Determine whether a remote computer is accessible over the network powercfg.cpl - Power management control panel applet qfecheck - Shows installed Hotfixes applied to the server/workstation. quser - Display information about user sessions on a terminal server qwinsta - See disconnected remote desktop sessions reg.exe - Console Registry Tool for Windows regedit - Registry Editor rasdial - Connects to a VPN or a dialup network robocopy - Backup/Restore/Copy large amounts of files reliably rsop.msc - Resultant Set of Policy (shows the combined effect of all group policies active on the current system/login) runas - Run specific tools and programs with different permissions than the user's current logon provides sc - Manage anything you want to do with services. schtasks - Enables an administrator to create, delete, query, change, run and end scheduled tasks on a local or remote system. secpol.msc - Local Security Settings services.msc - Services control panel set - Displays, sets, or removes cmd.exe environment variables. set DIRCMD - Preset dir parameter in cmd.exe start - Starts a separate window to run a specified program or command start. - opens the current directory in the Windows Explorer. shutdown.exe - Shutdown or Reboot a local/remote machine subst.exe - Associates a path with a drive letter, including local drives systeminfo -Displays a comprehensive information about the system taskkill - terminate tasks by process id (PID) or image name tasklist.exe - List Processes on local or a remote machine taskmgr.exe - Task Manager telephon.cpl - Telephone and Modem properties timedate.cpl - Date and Time title - Change the title of the CMD window you have open tracert - Trace route whoami /all - Display Current User/Group/Privilege Information wmic - Windows Management Instrumentation Command-line winver.exe - Find Windows Version wscui.cpl - Windows Security Center wuauclt.exe - Windows Update AutoUpdate Client
A little known one is getmac It shows the MAC address(es) of your network adapter(s).
{ "source": [ "https://serverfault.com/questions/3780", "https://serverfault.com", "https://serverfault.com/users/1224/" ] }
3,844
Our old tape drives have failed and we not using tapes for backup anymore. We still have a stack of DLT tapes with backups which may contain sensitive information like credit card numbers, social security numbers, etc. How do I responsibly dispose of these backup tapes? If I had a working drive I would be tempted to dd from /dev/urandom to the tape device, but the drives have failed. Would this be a good method if the drive was still working? What do you recommend I do with these tapes given that I have no working drive for them?
You could read the Guidelines for Media Sanitization (PDF) of the National Institute of Standards and Technology. For reel and cassette format magnetic tapes it recommends:

- Clear magnetic tapes by either re-recording (overwriting) or degaussing. Clearing a magnetic tape by re-recording (overwriting) may be impractical for most applications, since the process occupies the tape transport for excessive time periods. Clearing by overwriting should be performed on a system similar to the one that originally recorded the data; for example, overwrite previously recorded classified or sensitive VHS format video signals on a comparable VHS format recorder. All portions of the magnetic tape should be overwritten one time with known non-sensitive signals.
- Degauss using an NSA/CSS-approved degausser. Purging by degaussing: purge the magnetic tape in any degausser that can purge the signal enough to prohibit playback of the previous known signal; this is accomplished more easily by using an NSA/CSS-approved degausser for the magnetic tape.
- Incinerate by burning the tapes in a licensed incinerator.
- Shred. Preparatory steps, such as removing the tape from the reel or cassette prior to destruction, are unnecessary. However, segregation of components (tape and reels or cassettes) may be necessary to comply with the requirements of a destruction facility or for recycling measures.
{ "source": [ "https://serverfault.com/questions/3844", "https://serverfault.com", "https://serverfault.com/users/984/" ] }
3,854
I have an old hard disk (Maxtor 250 GB) from about 3 years ago that started giving errors and now sits in a drawer in my desk. It has some confidential data on it, but it's unlikely that it can be read because the disk started to go bad. However, before I dispose of it I want to make sure that the data can't be recovered by destroying the disk. What is the best way to destroy the disk such that the data can't be read? (I live in Arizona and was thinking of leaving it in the yard when we have those 125 F days...?) What is the best way to dispose of the disk after it's destroyed? (I believe that it's environmentally unsound to chuck it in the trash.)
If you are looking for standard procedures and reliable methods, you could read the Guidelines for Media Sanitization (PDF) of the National Institute of Standards and Technology. For any given medium, there are three basic methods:

- Clear
- Purge
- Physical destruction

For hard drives they recommend:

- Clear: overwrite media by using agency-approved and validated overwriting technologies/methods/tools.
- Purge: purge using Secure Erase (the Secure Erase software can be downloaded from the University of California, San Diego (UCSD) CMRR site); purge hard disk drives by either purging the hard disk drive in an NSA/CSS-approved automatic degausser or by disassembling the hard disk drive and purging the enclosed platters with an NSA/CSS-approved degaussing wand; or purge media by using agency-approved and validated purge technologies/tools.
- Physical destruction: disintegrate, shred, pulverize, or incinerate (burn the hard disk drives in a licensed incinerator).

Recommendations for flash media (SSDs) are similar, except that degaussing solid state drives is not a viable way to purge them, as the data is not stored on magnetic platters.
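On Linux, hdparm can issue the same ATA Secure Erase command that the CMRR tool uses; a rough sketch only (the device name and password are placeholders, the drive must not be in a "frozen" state, and pointing this at the wrong disk wipes it):

    hdparm --user-master u --security-set-pass Eins /dev/sdX
    hdparm --user-master u --security-erase Eins /dev/sdX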
{ "source": [ "https://serverfault.com/questions/3854", "https://serverfault.com", "https://serverfault.com/users/1671/" ] }
3,887
I never use Internet Explorer on my Windows Servers. Is there any reason I should upgrade Internet Explorer (version 6 or 7 to 7 or 8)?
Keep in mind that while you may not directly use IE, the IE rendering engine or other components may be used by other applications you run. I think it should be kept up to date.
{ "source": [ "https://serverfault.com/questions/3887", "https://serverfault.com", "https://serverfault.com/users/45/" ] }
4,028
Which performs better under heavy CPU and memory usage on the virtual servers, Xen or VirtualBox?
Xen will generally perform much better than VirtualBox because VirtualBox runs the guest OS in a way that the guest OS does not know it is running in a virtual environment. Or to put it another way, the guest OS is not modified to run virtually. Because of this, VirtualBox has to 'trap' kernel type instructions, run some custom code and then return control to the guest. It can use the hardware virtualisation support provided by Intel and AMD, but even then the overhead adds up. Xen meanwhile makes sure the guest OS is recompiled to fit in with the Xen model. So control flows smoothly from the guest OS to the hypervisor, without the overhead of having to pretend the guest OS has direct access to the hardware. For an overview of quite a few virtualisation technologies, including data from performance tests, read this report . It only talks about Linux, but it covers Linux-Vserver, Xen, OpenVZ, KVM, VirtualBox and QEMU. Linux-Vserver and Xen were generally the best performers, but read the report to see the different workloads. Having said all the above, there may be some areas where VirtualBox outperforms Xen. If your guest OS has a graphical windowing layer, then VirtualBox has good support for that, particularly if you install some special VirtualBox components in the guest OS. And finally you should be aware that Xen will only run a modified guest OS. It cannot run an unmodified guest OS.
{ "source": [ "https://serverfault.com/questions/4028", "https://serverfault.com", "https://serverfault.com/users/1736/" ] }
4,176
As a programmer, we tend to take sysadmins for granted. The few times I've been without a good sysadmin have really made me appreciate what you guys do. When we're venturing into an environment without a sysadmin, what words of wisdom can you offer us?
I'd start with:

- Always have a backup system of some kind. Even better if it has a history.
- Consider single points of failure and how to deal with them should they fail.
- Depending on the number of computers involved, looking into some way to create and deploy a standard image across computers will make everyone's life easier - no "it works on mine" because they have such and such a program not normally installed.
- Document everything, if only because you will forget how you set something up.
- Keep abreast of security updates.
{ "source": [ "https://serverfault.com/questions/4176", "https://serverfault.com", "https://serverfault.com/users/1489/" ] }
4,188
What tool or technique do you use to prevent brute force attacks against your ssh port? I noticed in my security logs that I have millions of attempts to log in as various users through ssh. This is on a FreeBSD box, but I imagine it would be applicable anywhere.
Here's a good post on that subject by Rainer Wichmann. It explains the pros and cons of these methods:

- Strong passwords
- RSA authentication
- Using 'iptables' to block the attack
- Using the sshd log to block attacks
- Using tcp_wrappers to block attacks
- Port knocking
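As a concrete starting point, key-only authentication alone defeats password guessing; a minimal sshd_config sketch (the allowed accounts are placeholders, and sshd must be restarted after editing):

    PermitRootLogin no
    PasswordAuthentication no
    MaxAuthTries 3
    AllowUsers alice bob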
{ "source": [ "https://serverfault.com/questions/4188", "https://serverfault.com", "https://serverfault.com/users/1873/" ] }
4,251
Does a 500 watt power supply always pull 500 watts? Or does it depend on the load being placed on the computer? It's a n00b hardware question. I'm trying to figure out how much it costs to run my computer without buying a meter that actually measures power usage.
No. A 500 watt power supply can DELIVER up to 500 watts, but it will only ever use as much as the components in your PC need (which of course depends on load and activity, whether energy saving mechanisms like AMD's Cool'n'Quiet or Intel's SpeedStep are enabled, etc.). That would only hold exactly at a 100% efficiency rating, which is impossible; the usual efficiency rating lies around 80%, but it can vary greatly between low quality and proper power supplies. So with 80% efficiency, your power supply will use as much power as your components need plus about 20% extra.

Another caveat: optimal efficiency is only reached at a "proper" load. If you have a 500 watt power supply but a super-low-consumption PC that only needs 80 watts, you're not going to reach 80% efficiency and could easily pull ~120 watts from the wall (only around 65-70% efficiency). Due to the ~80% efficiency, you also cannot use a full 500 watts out of a 500 watt power supply. Those numbers are all estimates, as PSUs vary greatly, but a rule of thumb is that you should get a PSU with at least 80% efficiency and one that is not too big (but not too small either) for your PC.
{ "source": [ "https://serverfault.com/questions/4251", "https://serverfault.com", "https://serverfault.com/users/526/" ] }
4,427
OS: Ubuntu 8.04 LTS Server Edition. We just rolled back a kernel update using the following command:

    sudo apt-get remove linux-image-2.6.24-24-server

The uninstallation was successful, but it printed the following message before apt-get exited:

    The link /vmlinuz is a damaged link
    Removing symbolic link vmlinuz
     you may need to re-run your boot loader[grub]
    The link /initrd.img is a damaged link
    Removing symbolic link initrd.img
     you may need to re-run your boot loader[grub]

Should we be worried about this message? Do we need to re-run GRUB? How do we go about doing this if we have to re-run GRUB? Thanks in advance.
Those messages are nothing to worry about. The symlinks being complained about are only needed if you're using lilo as your bootloader, because it uses those symlinks to find your "current" kernel. Grub, being more flexible, has its own way of doing things and doesn't need the symlinks.
{ "source": [ "https://serverfault.com/questions/4427", "https://serverfault.com", "https://serverfault.com/users/1288/" ] }
4,458
We have an SMTP only mail server behind a firewall which will have a public A record of mail. . The only way to access this mail server is from another server behind the same firewall. We do not run our own private DNS server. Is it a good idea to use the private IP address as an A record in a public DNS server - or is it best to keep these server records in each servers local hosts file?
Some people will say no public DNS records should ever disclose private IP addresses....with the thinking being that you are giving potential attackers a leg up on some information that might be required to exploit private systems. Personally, I think that obfuscation is a poor form of security, especially when we are talking about IP addresses because in general they are easy to guess anyway, so I don't see this as a realistic security compromise. The bigger consideration here is making sure your public users don't pickup this DNS record as part of the normal public services of your hosted application. ie: External DNS lookups somehow start resolving to an address they can't get to. Aside from that, I see no fundamental reason why putting private address A records into the public space is a problem....especially when you have no alternate DNS server to host them on. If you do decide to put this record into the public DNS space, you might consider creating a separate zone on the same server to hold all the "private" records. This will make it clearer that they are intended to be private....however for just one A record, I probably wouldn't bother.
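If you do go the separate-zone route, the "private" zone can be as small as the sketch below (zone name and address are hypothetical, and the SOA/NS records every zone needs are omitted for brevity):

    ; db.internal.example.com
    $ORIGIN internal.example.com.
    mail    IN  A   10.0.0.25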
{ "source": [ "https://serverfault.com/questions/4458", "https://serverfault.com", "https://serverfault.com/users/2/" ] }
4,542
What backup solutions would you recommend when using SQL Server 2008 Express? I'm pretty new to SQL Server, but as I'm coming from a MySQL background I thought of setting up replication on another computer and just taking xcopy backups of that server. But unfortunately replication is not available in the Express edition. The site is heavily accessed, so there can be no delays or downtime. I'm also thinking of doing a backup twice a day or something. What would you recommend? I have multiple computers I can use, but I don't know if that helps me, since I'm using the Express version.
SQL Server 2008 Express supports database backups. It's missing SQL Agent, which lets you schedule backups, and the maintenance plan wizard for creating backup tasks. You can back up databases in two different ways:

- Use Microsoft SQL Server Management Studio Express, which has the Backup option on the right-click menu for each database under "Tasks".
- Use T-SQL to write your backup script manually. Read the MSDN documentation for the T-SQL BACKUP command. The syntax is something like:

    BACKUP DATABASE MyDatabase TO DISK='C:\MyDatabase.bak';

If you want to schedule your backup jobs, you have to write a T-SQL script and then use the Windows Task Scheduler to call sqlcmd to run the script on whatever schedule you're interested in:

    sqlcmd -S server_name\sqlexpress -i C:\SqlJobs\backup.sql -o C:\Logs\output.txt
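For instance, a daily 2 AM task could be registered like this (the task name, instance name and paths are only examples):

    schtasks /create /sc daily /st 02:00 /tn "SQLExpressBackup" /tr "sqlcmd -S .\SQLEXPRESS -i C:\SqlJobs\backup.sql -o C:\Logs\output.txt"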
{ "source": [ "https://serverfault.com/questions/4542", "https://serverfault.com", "https://serverfault.com/users/1951/" ] }
4,639
What is the Windows command prompt command to copy files? I need to move a file from location A to location B. Also, if the folder for location B doesn't exist, I want to have it created. I need this to be a command line so I can automate it. The version of Windows is XP.
The command xcopy is what you are looking for. Example:

    xcopy source destination /E /C /H /R /K /O /Y

The command above will copy source to destination, files and directories (including empty ones), will not stop on error, will copy hidden and system files, will overwrite read-only files, will preserve attributes and ownership/ACL information, and will suppress the prompt to overwrite existing destination files.

    /E    Copies directories and subdirectories, including empty ones. Same as /S /E. May be used to modify /T.
    /C    Continues copying even if errors occur.
    /H    Copies hidden and system files also.
    /R    Overwrites read-only files.
    /K    Copies attributes. Normal xcopy will reset read-only attributes.
    /O    Copies file ownership and ACL information.
    /Y    Suppresses prompting to confirm you want to overwrite an existing destination file.

For more info, type xcopy /? at your command line.
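For the "create the destination folder if it doesn't exist" part of the question, adding /I tells xcopy to assume a missing destination is a directory and create it. A sketch with made-up paths:

    xcopy "C:\projects" "D:\archive\projects" /E /C /H /R /K /O /Y /I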
{ "source": [ "https://serverfault.com/questions/4639", "https://serverfault.com", "https://serverfault.com/users/823/" ] }
4,689
After 18 years of hosts files on Windows, I was surprised to see this in Windows 7 build 7100: # localhost name resolution is handled within DNS itself. # 127.0.0.1 localhost # ::1 localhost Does anyone know why this change was introduced? I'm sure there has to be some kind reasoning. And, perhaps more relevantly, are there any other important DNS-related changes in Windows 7? It scares me a little bit to think that something as fundamental as localhost name resolution has changed... makes me think there are other subtle but important changes to the DNS stack in Win7.
I checked with a developer on the Windows team, and the actual answer is much more innocuous than the other answers to this post :) At some point in the future, as the world transitions from IPV4 to IPV6, IPV4 will be eventually be disabled/uninstalled by companies that want to simplfy network management in their environments. With Windows Vista, when IPv4 was uninstalled and IPv6 was enabled, a DNS query for an A (IPv4) address resulted in the IPv4 loopback (which came from the hosts file). This of course caused problems when IPv4 was not installed. The fix was to move the always present IPv4 and IPv6 loopback entries from the host into the DNS resolver, where they could be independently disabled. -Sean
{ "source": [ "https://serverfault.com/questions/4689", "https://serverfault.com", "https://serverfault.com/users/475/" ] }
4,788
A lot of time and columns are spent discussing securing a server from outside attacks. This is perfectly valid because it's easier for an attacker to use the Internet to break your server than it is for them to gain physical access. However, some IT professionals gloss over the importance of physical server security. Many, if not most, of the most egregious breaches of security are performed from inside the organization. How do you protect your servers from users with on-site access who have no need to access the server or server room itself? Is it just next to the IT manager's desk in a cubicle, or locked behind several doors with electronic card and biometric access? Once someone has physical access to the servers, what protections are in place that prevent, or at least log, access to sensitive data they have no reasonable need to see? Of course this will vary from organization to organization, and business need to business need, but even print servers have access to sensitive data (contracts and employee information) being printed, so there's more to this than might appear at first glance.
All our production servers are stored on the other side of the world in a solid data center. Man traps, biometric scanners, the whole box and dice. For the machines that are in our office, they live in the server room, accessible only via swipe card. Only the sysadmins have swipe cards that can access that area. In short, if someone physically has their hands on your kit, then your data is theirs. If this is a sufficient concern then pgp'ing anything of value and decrypting it on the fly is a heavy handed but necessary requirement. edit: you could extend this to questions of physical security of your backup media. What good is solid physical security if your offsites are not as or more secure?
{ "source": [ "https://serverfault.com/questions/4788", "https://serverfault.com", "https://serverfault.com/users/706/" ] }
4,906
There's been a number of questions regarding disk cloning tools and dd has been suggested at least once. I've already considered using dd myself, mainly because ease of use, and that it's readily available on pretty much all bootable Linux distributions. What is the best way to use dd for cloning a disk? I did a quick Google search, and the first result was an apparent failed attempt . Is there anything I need to do after using dd , i.e. is there anything that CAN'T be read using dd ?
dd is most certainly the best cloning tool; it will create a 100% replica simply by using the following command. I've never once had any problems with it.

    dd if=/dev/sda of=/dev/sdb bs=32M

Be aware that while it clones every byte, you should not use this on a drive or partition that is in use. Applications like databases especially can't cope with this very well, and you might end up with corrupted data.
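A slightly more defensive variant for aging disks, assuming a reasonably recent GNU dd (status=progress needs coreutils 8.24 or newer):

    dd if=/dev/sda of=/dev/sdb bs=32M conv=noerror,sync status=progress    # keep going past read errors, pad bad blocks, show progress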
{ "source": [ "https://serverfault.com/questions/4906", "https://serverfault.com", "https://serverfault.com/users/1390/" ] }
4,993
I have been a zsh user for quite some time (before that tcsh and before that csh). I am quite happy with it, but was wondering if there are any compelling features of bash that do not exist in zsh. And conversely, are there zsh features which do not exist in bash. My current feel is that bash is better: If you are familiar with it already and don't want to learn new syntax. It is going to exist on most all *nix machines by default, whereas zsh may be an extra install. Not trying to start a religious battle here, which is why I'm just looking for features which exist in only one of the shells.
zsh is for vulcans. ;-)

Seriously: bash 4.0 has some features previously only found in zsh, like ** globbing:

    % ls /usr/src/**/Makefile

is equivalent to:

    % find /usr/src -name "Makefile"

but obviously more powerful.

In my experience bash's programmable completion performs a bit better than zsh's, at least for some cases (completing Debian packages for aptitude, for example).

bash has Alt + . to insert !$ ; zsh has expansion of all variables, so you can use e.g.

    % rm !$<Tab>

for this. zsh can also expand a command in backticks, so

    % cat `echo blubb | sed 's/u/a/'`<Tab>

yields

    % cat blabb

I find it very useful to expand rm * , as you can see what would be removed and can maybe remove one or two files from the command to prevent them from being deleted.

Also nice: using the output from commands for other commands that do not read from stdin but expect a filename:

    % diff <(sort foo) <(sort bar)

From what I read, bash-completion also supports completing remote filenames over ssh if you use ssh-agent, which used to be a good reason to switch to zsh.

Aliases in zsh can be defined to work on the whole line instead of just at the beginning:

    % alias -g ...="../.."
    % cd ...
{ "source": [ "https://serverfault.com/questions/4993", "https://serverfault.com", "https://serverfault.com/users/910/" ] }
5,031
How do I find out what hard drives are attached to a Linux box? I'm hoping for a single command that can give me a nice list of all ATA/SCSI/etc drives. I've catted /proc/partitions in the past to do this, but I wonder if that still works if there's a drive with no partitions on it.
sudo lshw -class disk gives you everything but the mount point *-cdrom description: CD-R/CD-RW writer product: 52MAXX 3252AJ vendor: Memorex physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/scd0 logical name: /dev/sr0 version: QWS3 capabilities: removable audio cd-r cd-rw configuration: ansiversion=5 status=nodisc *-disk:0 description: SCSI Disk product: ZIP 100 vendor: IOMEGA physical id: 0.1.0 bus info: scsi@0:0.1.0 logical name: /dev/sda version: 12.A capabilities: removable configuration: ansiversion=5 *-medium physical id: 0 logical name: /dev/sda *-disk:1 description: ATA Disk product: WDC WD800AB-00CB vendor: Western Digital physical id: 1 bus info: scsi@1:0.0.0 logical name: /dev/sdb version: 04.0 serial: WD-WCAA52477019 size: 74GiB (80GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=90909090 sudo lshw -class disk -html
{ "source": [ "https://serverfault.com/questions/5031", "https://serverfault.com", "https://serverfault.com/users/331/" ] }
5,049
It is a common situation: an administrator sets up a system for automatic backups and then forgets about it. Only after a system fails does the administrator notice that the backup system broke some time earlier, or that the backups are unrestorable because of some fault, and there is no current backup to restore from... So what are the best practices to avoid such situations?
Run fire drills ... every couple of months it is a good idea to say XYZ system is down ... then actually go through the motions of bringing it back online to a new VM etc etc. It keeps things honest and helps you catch mistakes.
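Between drills it also helps to have a cron job sanity-check the backups themselves. A minimal sketch, assuming nightly gzipped tarballs land in /backup and a local mail command exists (the paths, file naming and alert address are all illustrative):
#!/bin/sh
LATEST=$(ls -t /backup/nightly-*.tar.gz 2>/dev/null | head -n 1)
if [ -z "$LATEST" ] || [ -n "$(find "$LATEST" -mmin +1440)" ]; then
    echo "no fresh backup found" | mail -s "BACKUP CHECK FAILED" admin@example.com
    exit 1
fi
# read the whole archive without extracting it; a corrupt file makes tar exit non-zero
tar -tzf "$LATEST" > /dev/null || echo "$LATEST unreadable" | mail -s "BACKUP CHECK FAILED" admin@example.com
This only proves the archive exists and is readable, not that it restores cleanly, so it complements the fire drills rather than replacing them.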
{ "source": [ "https://serverfault.com/questions/5049", "https://serverfault.com", "https://serverfault.com/users/1815/" ] }
5,066
What is the most damage (of whatever kind) that you have ever caused with a single mistaken/mistyped/misguided command line? I deleted a production system database by mistake a while back, for example, but I was lucky (i.e. backed-up) and there was no permanent data loss, lost money, property damage etc. Most importantly (for votes), what do you do to make sure it will not ever happen again?
In SQL server, on a production system: update customer set password = '' <enter> The most recent backup was like a week old. To mitigate this, I now usually write a select statement first to make sure I've got the where clause correct, then go back and edit it to insert the set clause and change the statement to update .
{ "source": [ "https://serverfault.com/questions/5066", "https://serverfault.com", "https://serverfault.com/users/1427/" ] }
5,071
I have been thinking about getting started with monitoring software for a while now, but never seem to get started with it well. I have heard Nagios is a pretty decent open-source solution for this, but have never been able to properly get started with it. Does anyone have any tips with some good approaches to getting started on server monitoring? I am thinking of things like number of network connections, load average, maybe bandwidth used by the server, etc. The basics involved, largely (which may include basics that I do not know about).
{ "source": [ "https://serverfault.com/questions/5071", "https://serverfault.com", "https://serverfault.com/users/1734/" ] }
5,111
What are some of the better tools/utilities for testing real bandwidth across a link? In my case I am testing the real throughput across a wifi bridge.
I find iperf to be one of the more useful utilities to test point-to-point bandwidth. It has many options to test over tcp/udp, and with udp it can tell you how much jitter there was. Ports of iperf are available for almost every OS. I also like testing with NDT , but it isn't quite as easy to work with as iperf since NDT basically has to be set up as a server somewhere, and the client must have java installed.
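For anyone who hasn't run it before, the basic workflow looks like this (the hostname is a placeholder and the options are from the classic iperf2 client):
iperf -s                                        # on one end of the bridge (acts as the server)
iperf -c bridge-far-end.example.com -t 30       # on the other end: 30-second TCP throughput test
iperf -u -c bridge-far-end.example.com -b 20M   # UDP test at 20 Mbit/s, reports loss and jitter
The UDP run is the interesting one for a wifi bridge, since loss and jitter show up there long before TCP throughput collapses.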
{ "source": [ "https://serverfault.com/questions/5111", "https://serverfault.com", "https://serverfault.com/users/1585/" ] }
5,120
I have heard/read different performance stories on the various raid flavors. I am curious what the agreed upon best answer is.
One worthwhile location to check out is StorageReview.com's Comparison of RAID Levels. But focused on the answer:
LEVEL | CAPACITY  | STORAGE | FAILURE | RDM READ | RDM WRITE | SEQ READ | SEQ WRITE |
 0    | S * N     | 100%    | 0       | ****     | ****      | ****     | ****      |
 1    | S         | 50%     | 1       | ***      | ***       | **       | ***       |
 5    | S * (N-1) | (N-1)/N | 1       | ****     | **        | ***      | ***       |
 6    | S * (N-2) | (N-2)/N | 2       | ****     | *         | ***      | **        |
 0+1  | S * (N/2) | 50%     | 1       | ****     | ***       | ****     | ***       |
Legend:
Capacity: usable capacity, where S is the size of a single drive and N is the number of drives
Storage: amount of space on all drives actually useable
Failure: number of drives that can fail without losing the array
RDM/SEQ READ/WRITE: relative random and sequential read/write performance (more stars is faster)
{ "source": [ "https://serverfault.com/questions/5120", "https://serverfault.com", "https://serverfault.com/users/1585/" ] }
5,132
In the scheme of things, it's pretty easy to run a web-server. Install, for example, Apache, PHP, and MySQL, and you're on your way. But the job, obviously, doesn't end there. Good administrators do dozens of tasks past keeping up-to-date a few programs. What should a web administrator do to become a good administrator? What steps should they take to learn these skills, and what should they do to employ these skills? (Examples include monitoring network traffic, creating and executing a backup scheme, managing an encryption certificate, and more.)
a desire to have things right. the ability to cobble kick-ass solutions together when you realize "right" is unachievable. the willingness to "own" a problem and see issues through to conclusion. the willingness to call out and own problems even if no one's reported it. the ability and desire to replace yourself with a small shell script once you've mastered a problem (to free yourself up to find and address the next problem). the drive to self-evaluate and to always make things better, even if your users are already happy. And, above all, the ability to take a deep breath, exhale, and deal with the latest fire that just got dropped in your lap.
{ "source": [ "https://serverfault.com/questions/5132", "https://serverfault.com", "https://serverfault.com/users/1634/" ] }
5,221
Cisco VPN client (IPsec) does not support 64bit Windows. Worse, Cisco does not even plan to release a 64-bit version, instead they say that "For x64 (64-bit) Windows support, you must utilize Cisco's next-generation Cisco AnyConnect VPN Client." Cisco VPN Client Introduction Cisco VPN Client FAQ But SSL VPN licences cost extra. For example, most new ASA firewalls come with plenty of IPSec VPN licences but only a few SSL VPN licences. What alternatives do you have for 64-bit Windows? So far, I know two: 32-bit Cisco VPN Client on a virtual machine NCP Secure Entry Client on 64-bit Windows Any other suggestions or experiences?
Hmm, nobody mentioned Shrew Soft VPN Client yet? It's a free (as in beer) and cross-platform VPN client that is compatible with 64-bit Windows. Although it is free, support from the author has been great. Currently it doesn't support hybrid xauth+certificate mode but the feature will come soon. Lancom also provides a 64-bit VPN client for Windows, but IMO they just resell/rebrand NCP's client. You can also try TheGreenBow VPN Client , which is a bit cheaper (56 EUR) than NCP/Lancom's client.
{ "source": [ "https://serverfault.com/questions/5221", "https://serverfault.com", "https://serverfault.com/users/1387/" ] }
5,267
I've used Ubuntu on and off since Warty Warthog. I was thinking about installing Jaunty soon; but I noticed that over the weekend NetBSD 5.0 , Dragonfly BSD 2.2.1 , OpenBSD 4.5 , and FreeBSD 7.2 have all been released, so I got curious: What is good about the BSDs? Why should or shouldn't I install one of them instead of Ubuntu? What are their main selling points? Performance? Stability? Hardware compatibility? Ease-of-use? Security? Do they run well on older hardware? What is it? Edit: This is from the point of view of a (primarily Java) desktop developer, but I'd be interested to know what are the pros and cons for others also. Are they targeted more for servers? For corporate users? Or what?
Advantages of BSDs The *BSD family of systems has (IMHO) a few key advantages over Linux, particularly for a server O/S. Simplicity and Control: None of the *BSD distributions have the imperative to add features that the Linux distributors exhibit. Thus, the default install of most BSD derived systems is relatively simple. Stability: Partially driven by the simplicity, BSDs tend to be amongst the most stable O/S platforms around. FreeBSD (which is one of the older of the 'modern' BSDs) powers many well known .coms such as Yahoo and (at one point) hotmail. In fact, at one point Microsoft suffered quite a lot of embarassment over their inability to migrate Hotmail off FreeBSD to Windows. Security: OpenBSD in particular has a very strong track record of security and much of their work rubs off on the *BSD community in general. Portability: NetBSD in particular has ports to dozens of platforms and is notable for being very easy to port. Some weaknesses Less support for large SMP configurations than Linux. This will become more of an issue as boxes with large numbers of cores becore widespread. However, most of the network service applications that are really BSD's home turf are not all that CPU hungry (1). SMP performance on BSD kernels has improved substantially over the past decade. Improving SMP performance was one of the main goals of Dragonfly BSD and the FreeBSD SMPNg project has substantially improved SMP performance on that platform, outperforming Linux on 8-core platforms. This means that one can expect to get good performance on mainstream 2 and 4 socket servers. Some debate and early work on providing NUMA support on FreeBSD exists as the system does not currently support APIs for memory allocation, affinity management or other facilities for explicit NUMA support. A good primer on NUMA support can be found here . Smaller range of hardware support than Linux: In practice, this really only means that you need to check components on a hardware compatibility list. For a server this is a non-issue in most cases but installing on a random desktop PC this is a bit thornier. You still have to do a component-by-component check if you want a machine to install BSD on, which is less likely to be the case with Linux. Less emphasis on the desktop: Desktop distributions of Linux (such as Ubuntu) tend to have richer desktop support for multimedia, emulation and bundled applications. While many such applications do have ports onto the various BSD platforms the out-of-the-box support from a desktop Linux distribution will typically be rather better. Some gaps in software: Quite a lot of commercial Linux software does not have a BSD port. For example, none of the major JVM suppliers maintain a native port of their java runtime for any of the BSD platforms. In some cases third parties maintain ports but there is no official support for (for example) Oracle on any of the BSDs. This type of gap pops up in some places on BSD; BSD may not be the platform for you if you work in a space where this type of gap exists. Some salient points One of the great religious wars of the '90s was GPL vs. BSD. BSDs are licensed under the BSD licence, which comes with a different set of rights than the GPL. Essentially the BSD licence does not require you to redistribute source code of modified versions of BSD licensed software. Commercial vendors such as Oracle do not support BSD to anything like the degree that they support Linux. 
Therefore, if you want to work with such a product you are probably better off with Linux. However, most offer binary compatibility across Linux, System V, Solaris etc, so you can often run binaries for another O/S. BSD communities tend to be run differently to Linux and are often smaller (although no more genteel in many cases - Theo De Raadt has something of a reputation as a potty-mouth ). Some of the BSD variants are niche-market items, optimised for specific goals. For example, OpenBSD is specifically optimised for providing secure network infrastructure on internet-facing computers, with a very large amount of effort going into inspection for security holes like buffer overrun vulnerabilities. Many security conscious organisations use it for precisely this reason. NetBSD is designed for portability with ports to dozens of platforms and is quite widely used in embedded systems. For applications in the sweet spot of one of these systems it may well be the best choice of platform. The home turf of *BSD is in network services - email and web servers, infrastructure and suchlike. You can set up a perfectly good geek desktop with any of the BSDs, and could in theory produce something as warm and fluffy as Ubuntu. However, this is not the core focus of most of the BSD products, although some such as PC-BSD do aim to provide desktop systems. If you want to make a trad unix geek desktop, BSD will do this just as well as any other unix-oid system. For example, back in the VAX/4.2BSD era of the 1980s a machine like a VAX-11/750 could provide email service to an entire department or university campus, and would probably be doing other work as well (although one should note that most emails were text only and attachments weren't so prevalent as today - disk drives used on this machine typically ranged from 120-450MB capacity). A modern server has 3-4 orders of magnitude more CPU power and memory and a disk subsystem with maybe 2 orders of magnitude more throughput and 3-4 orders of magnitude more space.
{ "source": [ "https://serverfault.com/questions/5267", "https://serverfault.com", "https://serverfault.com/users/531/" ] }
5,336
Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized: [root@fs-2 ~]# tail -18 /var/log/messages May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake } May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset May 5 16:54:45 fs-2 kernel: ata1: soft resetting link May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:54:55 fs-2 kernel: ata1: soft resetting link May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:05 fs-2 kernel: ata1: soft resetting link May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps May 5 16:55:40 fs-2 kernel: ata1: soft resetting link May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up May 5 16:55:45 fs-2 kernel: ata1: EH complete I tried a couple things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh but they didn't work: [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan -bash: echo: write error: Invalid argument [root@fs-2 ~]# [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l [snip] 0 new device(s) found. 0 device(s) removed. [root@fs-2 ~]# [root@fs-2 ~]# ls /dev/sda ls: /dev/sda: No such file or directory I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL5.3: [root@fs-2 ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.3 (Tikanga) The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS. 
Here is the lscpi output: [root@fs-2 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) Update : In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers to look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us. 
Here is the lspci output: [root@www-1 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
If your SATA controller supports hot swap, it should "just work(tm)." To force a rescan on a SCSI BUS (each SATA port shows as a SCSI BUS) and find new drives, you will use: echo "0 0 0" >/sys/class/scsi_host/host<n>/scan On the above, < n > is the BUS number.
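Two related sysfs tricks may also help, assuming the controller/driver actually supports hot swap (the host number and device name below are only examples): the scan file accepts wildcards, and you can tell the kernel to forget a disk cleanly before pulling it.
echo "- - -" > /sys/class/scsi_host/host0/scan   # rescan every channel/target/LUN on that host
echo 1 > /sys/block/sdb/device/delete            # detach sdb before physically removing the drive
Deleting the device first often avoids the long error/reset storm seen in the logs when the old disk simply disappears.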
{ "source": [ "https://serverfault.com/questions/5336", "https://serverfault.com", "https://serverfault.com/users/1156/" ] }
5,410
In an environment with multiple system administrators, I see a few advantages to adding the server config files into a revision control system. Most notable are the ability to track changes, who made them, and of course being able to roll back to known working configs. I'm mainly interested in Unix/Linux solutions, but would be curious to Windows implementations as well.
I have tested this at home (~ 3 hosts) for some time now, trying different scms (RCS, Subversion, git). The setup that works perfectly for me right now is git with the setgitperms hook. Things you need to consider: Handling of file permissions and ownership RCS: does this natively Subversion: last I tried, you needed a wrapper around svn to do this git: the setgitperms hook handles this transparently (needs a fairly recent version of git with support for post-checkout hooks, though) Also, if you don't want to put all of your /etc under version control, but only the files that you actually modified (like me), you'll need an scm that supports this kind of use. RCS: works only on single files anyway. Subversion: I found this to be tricky. git: no problem, put " * " in the top-level .gitignore file and add only those files you want using git add --force Finally, there are some problematic directories under /etc where packages can drop config snippets that are then read by some program or daemon ( /etc/cron.d , /etc/modprobe.d , etc.). Some of these programs are smart enough to ignore RCS files (e.g. cron), some are not (e.g. modprobe). Same thing with .svn directories. Again a big plus for git (only creates one top-level .git directory).
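For reference, a rough sketch of that git setup (run as root; the tracked file names are only examples). Note there is also a packaged tool, etckeeper, that automates much of this, including commits on package installs:
cd /etc
git init
echo '*' > .gitignore                        # ignore everything by default
git add --force .gitignore ssh/sshd_config apache2/apache2.conf
git commit -m "track only the config files we actually maintain"
After that, git add --force any further file you start modifying and commit as you go.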
{ "source": [ "https://serverfault.com/questions/5410", "https://serverfault.com", "https://serverfault.com/users/1589/" ] }
5,564
Whenever I access windows shares from OSX 10.5 it leaves .DS_Store files on the remote filesystem. What are they used for, and are they necessary, and can they be prevented from being created?
The " .DS_Store " files are used by the Mac OS Finder to store info about Finder window settings for a folder. They will appear in each folder that you visit (browse to) with the Finder. You normally do not see these files in Finder (they are "hidden", like any *NIX alike dot files) To prevent the creation of these files, open the Terminal and type: defaults write com.apple.desktopservices DSDontWriteNetworkStores true
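That setting only prevents new ones from being written; to clean out the .DS_Store files already scattered over the share, something like this works from the Mac with the share mounted (the mount point is a placeholder):
find /Volumes/share -type f -name '.DS_Store' -delete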
{ "source": [ "https://serverfault.com/questions/5564", "https://serverfault.com", "https://serverfault.com/users/1904/" ] }
5,589
Given the recent events with a 'hacker' learning and retrying passwords from website administrators , what can we suggest to everyone about best practices when it comes to passwords?
- use unique passwords between sites (i.e. never re-use a password)
- words found in the dictionary are to be avoided
- consider using words or phrases from a non-English language
- use pass phrases and use the first letter of each word
- l33tifying doesn't help very much
Please suggest more!
Use passwords that are not composed of common words or names. Dictionary attacks use dictionaries with millions of words and are very quick. Use long passwords. I tend to use pass phrases . I pick a phrase, sentence or rhyme and find some way to use a fair number of non alpha-numeric characters so that my words are not dictionary words. Do not use the same password for multiple login services. Take some time to come up with a formula for picking passphrases. This allows you to use many different passwords that, if forgotten, you may be able to recreate with some trial and error. If you have to, by all means write a good, long, secure password down and hide it somewhere. That at least is better than using a weak password that is easier to remember. If the above suggestions prove unmanageable, use a password manager with a long secure password and then use random character passwords for everything else. Carry the password manager around with you on an encrypted USB flash drive (backed up of course).
{ "source": [ "https://serverfault.com/questions/5589", "https://serverfault.com", "https://serverfault.com/users/658/" ] }
5,598
I have two text files and want to find the differences between them using Windows Powershell. Is there something similar to the Unix diff tool available? Or is there another other way I haven't considered? I've tried compare-object, but get this cryptic output: PS C:\> compare-object one.txt two.txt InputObject SideIndicator ----------- ------------- two.txt => one.txt <=
Figured it out myself. Because Powershell works with .net objects rather than text, you need to use get-content to expose the contents of the text files. So to perform what I was trying to do in the question, use: compare-object (get-content one.txt) (get-content two.txt)
{ "source": [ "https://serverfault.com/questions/5598", "https://serverfault.com", "https://serverfault.com/users/195/" ] }
5,817
Is ECC RAM recommended for use in workstations, or is it something that only gets used in servers? If non-ECC RAM works in PCs, why would we need ECC RAM at all?
As stuff is stored into, left, and eventually pulled out of RAM, some corruption naturally occurs (theories vary, but the one with the most weight right now is EMI from the computer itself). ECC is a feature of RAM and motherboards that allows detection and correction of this corruption. The corruption is usually pretty minor (ECC can usually detect and fix 1-2 bits per 64 bit "word" - and that's waaaaay beyond the typical error rates), but increases in frequency with the density of the RAM. Your average workstation/PC will never notice it. On a server where you're running high density RAM 24/7 in a high-demand environment serving critical services, you take every step you possibly can to prevent stuff from breaking. Also note that ECC RAM must be supported by your motherboard, and the average workstation/PC does not support it. ECC RAM is more expensive than non-ECC, is much more sensitive to clock speeds, and can incur a small (1-2%) performance hit. If it helps, an analogy that works is RAM to RAID controllers. On your PC, that hardware-assisted software RAID built into your chipset is great protection against single disk failures. On a server, that would never be enough. You need high-end, battery-backed fully hardware RAID with onboard RAM to ensure that you don't lose data due to a power outage, disk failure, or whatever. So no, you don't really need ECC RAM in your workstation. The benefit simply will not justify the price.
{ "source": [ "https://serverfault.com/questions/5817", "https://serverfault.com", "https://serverfault.com/users/546/" ] }
5,841
How should I decide what size to make my swap on a new Linux machine (Debian) with 2-4 GB of RAM? Do I really need swap space?
There are lots of ways you can figure out how much swap use in a machine. Common suggestions use formulas based on RAM such as 2 x RAM, 1.5 x RAM, 1 x RAM, .75 x RAM, and .5 x RAM. Many times the formulas are varied depending on the amount of RAM (so a box with 1GB of RAM might use 2 x RAM swap (2GB), while a box with 16GB of ram might use .5 x RAM swap (8GB). Another thing to consider is what the box will be used for. If you're going to have a huge number of concurrently running processes running on the box, but a significant number of them will be idle for periods of time, then adding extra swap makes sense. If you're going to be running a small number of critical processes, then adding extra swap makes sense (this might seem counter-intuitive, but I'll explain in a minute). If you're running a box as a desktop, then adding extra swap makes sense. As for whether you should include swap, yes, you should. You should always include swap space unless you really know what you're doing, and you really have a good reason for it. See, the way the Linux kernel works, swap isn't only used when you have exhausted all physical memory. The Linux kernel will take applications that are not active (sleeping) and after a period of time, move the application to swap from real memory. The result is that when you need that application, there will be a momentary delay (usually just a second or two) while the application's memory is read back from swap to RAM. And this is usually a good thing. This allows you to put inactive applications to "sleep", giving your active applications access to additional RAM. Additionally, Linux will use any available (unallocated) RAM on a machine as disk cache, making most (slow) disk activity faster and more responsive. Swapping out inactive processes gives you more disk cache and makes your machine overall faster. Lastly, let's face it, disk space is cheap. Really cheap. There's really no good reason at all not to swipe a (relatively) small chunk of space for swap. If I were running with 2GB - 4GB of RAM in a machine, I'd probably setup my swap space to be at least equal to the RAM. If it were less than 2GB of RAM, then I'd still go with at least 2GB of swap. UPDATE: As an excellent comment mentioned (and I forgot to include), if you're running a laptop or a desktop that you might want to put in 'hibernate' mode (Suspend to Disk), then you always want at least as much swap as you have memory. The swap space will be used to store the contents of the RAM in the computer while it 'sleeps'.
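If you under-provision at install time you can always add a swap file later instead of repartitioning. A minimal sketch for 2 GB of extra swap (the file name and size are arbitrary; run as root):
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # so it survives a reboot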
{ "source": [ "https://serverfault.com/questions/5841", "https://serverfault.com", "https://serverfault.com/users/1834/" ] }
5,887
I've seen a dicussion about ECC ram use on servers. Why is it better?
ECC RAM can recover from small errors in bits, by utilizing parity bits. Since servers are a shared resource where up-time and reliability are important, ECC RAM is generally used with only a modest difference in price. ECC RAM is also used in CAD/CAM workstations were small bit errors could cause calculation mistakes which become more significant problems when a design goes to manufacturing.
{ "source": [ "https://serverfault.com/questions/5887", "https://serverfault.com", "https://serverfault.com/users/2142/" ] }
5,912
I would like to shrink the size of a partition containing an Ubuntu distribution and files. Is it safe to assume that I will not lose or corrupt any of the files as long as I don't make the partition smaller than the amount of data that is currently on it? I am planning to use GParted from the Ubuntu LiveCD.
As always, backup your data before. But, I have used GParted many, many times. When used correctly, and with care, you should not lose any data at all.
{ "source": [ "https://serverfault.com/questions/5912", "https://serverfault.com", "https://serverfault.com/users/7658/" ] }
5,942
What are your checklist/routine when setting up a Linux web server? What do you recommend to achieve maximum security? Is there any preferred way to perform repeated maintenance?
First of all, be aware that any scripting ability in Apache (php, cgi, ruby,...) is the potential equivalent of a shell account with privileges of the user running the script. If the server is shared with multiple users, you might want to think about using suexec (- or ITK MPM - Suggested by David Schmitt ) so not every script runs as the same apache user. Virtualize or chroot apache, so that any compromise is at least somewhat contained in an additional layer of security. Be aware that when you chroot apache, maintenance may become harder, as you end up moving libraries to the jail etc. If you're on FreeBSD you can use a jail instead, which is much easier to maintain, since you can just install apache from ports, and run portaudit from within it, without having to worry about any library dependencies and moving files manually, which always becomes an ugly mess. With BSD jails you can simply keep using the package management system (ports). (On GNU/Linux you can also use VServer for virtualization. - Suggested by David Schmitt ) (obviously) Keep up with updates and patches, not only for Apache, but also PHP, ruby, perl, etc... don't just trust your OS to give you all the updates either. Some distro's are extremely slow with their patches. Limit exposure time to 0-day vulnerabilities as much as possible. Stick the milw0rm feed in your RSS reader, subscribe to the insecure.org mailing lists, etc... Not only will it help you learn about vulnerabilities before your OS gets around to releasing a patch, you will also learn about vulnerabilities in certain php cms applications for example, which may not even be managed or patched by your OS at all. Use something like tripwire/aide, audit, or mtree(on BSD) to keep track of changes on your filesystem. This one is really important. Have any changes mailed to you regularly, review them manually, every day. If any file changes that shouldn't change, investigate why. If some malicious javascript somehow gets inserted into your pages through whatever method, you WILL catch it this way. This not only saves your server, but also your users, as your own webpages can be abused to infect your visitors. (This is a very very common tactic, the attackers often don't even care about your server, they just want to infect as many of your visitors as possible until discovered. These attackers also don't even bother to hide their tracks usually. Catching a compromise like this as fast as possible is very important.) Using stuff like suhosin to protect php helps. But also learn to understand it, tweak it's config to your application's expected parameters. Using a kernel patch such as PaX may help protect you from many buffer overflow vulnerabilities. Even if your software is vulnerable. (This does not make you invulnerable, it's just yet another, minor, layer.) Don't get over-confident when using some security tool. Understand the tools you use, and use common sense. Read, learn, keep up with as much as you can. Consider using mandatory access control (eg: SELinux ). It allows you to specify, for each application, what it is allowed to do, in great detail. What files is it allowed to access. What kernel calls is it allowed to make, etc. This is a very involved process and requires lots of understanding. Some distro's provide pre-made SELinux policies for their packages (eg: Gentoo ). This suggestion is kind of a contradiction to the one below, but still valid, nevertheless. Keep things simple. A complex security strategy may work against you. 
In Apache, set up a very restrictive default rules (Options None, Deny from all, etc...) and override as needed for specific VirtualHosts. Deny access to all dotfiles (which also immediately covers .htaccess files) Always use https anywhere there is any sort of password authentication. Firewall should be a deny-by-default policy. Build some specific rules in your firewall to log specific traffic. Set up log parsing scripts to scan your logs for anomalies. (the prelude IDS suite can do this, but honestly, I recommend you build up your own scripts over time, as it will help you understand your own tools and rules better.) Have the server mail you daily reports on last logged in users, active connections, bandwidth used, etc... Have a cron scan for suid binaries, world writeable files, and stuff like that, and have them mailed to you. For any of the stuff you set up that gets mailed to you, you should build up a list of exceptions over time. (folders to ignore filesystem changes on, 777 files to allow, suid binaries to allow). It is important that you only get notified of things that shouldn't happen. If you get a mail every day with trivial stuff, you will start to ignore them, and they will become pointless. Have a good solid layered redundant backup strategy. And don't just assume that making an image or copy of everything works. For example, if MySQL is in the middle of writing to a table during your backup, your MySQL binary files may be corrupted when you restore your backup. So you will need a cron that mysqldump's your databases on top of regular images or nightly tarballs or version control or whatever else you have setup. Think about your backup strategy. I mean, REALLY think about it. Don't rely on lists like this for security :) Seriously! You'll find lots of these all over the internet, go read them all, research every suggestion, and use common sense and experience to make up your own mind. In the end, experience and common sense are the only things that will save you. Not lists, nor tools. Do read, but don't just copy without understanding.
{ "source": [ "https://serverfault.com/questions/5942", "https://serverfault.com", "https://serverfault.com/users/555/" ] }
6,000
Most equipment is rated for a wide range of humidity (5 to 95% non-condensing, for instance). However, what is the ideal humidity? Higher humidity carries heat away from equipment a little better, but may also be more corrosive, for instance.
I've always heard 40%, though I can't back that up. I will say though that you need some humidity to reduce static electricity build up. EDIT: Ah, I found my documentation, good old Sun Microsystems Part No. 805-5863-13, "Sun Microsystems Data Center Site Planning Guide: Data Centers' Best Practices" Temperature and relative humidity conditions should be maintained at levels that allow for the greatest operational buffer in case of environmental support equipment down-time. The goal levels for the computer room should be determined in a manner that will achieve the greatest operational buffer and the least possibility of negative influence. The specific hardware design, room configuration, environmental support equipment design and other influencing factors should be taken into consideration when determining the specific relative humidity control appropriate for a particular room. Psychrometrics can affect hardware through thermal influences, Electrostatic Discharge (ESD), and increases in environmental corrosivity. And: Under most circumstances, air conditioners should be set at 72º F (22º C) with a sensitivity range of +/- 2º F (+/-1º C). Humidifiers, in most cases, should be set at 48% RH with a sensitivity range of +/- 3% RH. The set-points of the air conditioners should always be chosen in an effort to maintain the optimal recommended temperature and relative humidity levels for the room environment. These set points should maintain appropriate conditions, while allowing wide enough sensitivity ranges to help avoid frequent cycling of the units. While these tight ranges would be difficult to maintain in a loosely controlled office environment, they should be easily attained in a controlled data center. Numerous factors, such as heat-load and vapor barrier integrity, will influence the actual set-points. If the room lacks adequate vapor barrier protection, for instance, it may be necessary to adjust humidifier set points to accommodate seasonal influences. Ideally, all inappropriate influences on the data center environment will be eliminated, but in the event that they are not, minor adjustments, made by trained personnel, can help alleviate their effects on the environment. And on Electrostatic Discharge: The maintenance of appropriate relative humidity levels is probably the most universal and easiest means of addressing ESD concerns. Appropriate moisture levels will help ease the dissipation of charges, lessening the likelihood of catastrophic failures. The following chart illustrates the effect moisture levels can have on electrostatic charge generation. Note – Source Simco, A Basic Guide to an ESD Control Program for Electronics Manufacturers
TABLE 6-3 Electrostatic Voltage At Workstations (static voltage in volts)
Means of static generation             | 10-20% RH | 65-90% RH
Walking across carpet                  | 35,000    | 1,500
Walking over vinyl floor               | 12,000    | 250
Worker at bench                        | 6,000     | 100
Vinyl envelopes for work instructions  | 7,000     | 600
Common poly bag picked up from bench   | 20,000    | 1,200
Work chair padded with urethane foam   | 18,000    | 1,500
{ "source": [ "https://serverfault.com/questions/6000", "https://serverfault.com", "https://serverfault.com/users/706/" ] }
6,013
Is it safe to backup data to a hard drive and then leave it for a number of years? Assuming the file system format can still be read, is this a safe thing to do. Or is it better to continually rewrite the data (every 6 months or so) to make sure it remains valid? Or is this a stupid question?
I wouldn't trust important backups to any single device for any significant length of time. I've had plenty of CDs that couldn't be read after a while. (Cheap ones, admittedly, but I'm leery of the longevity claims made.) I've had hard disks silently corrupt data. I seem to remember I've even had SSD failures, although with a low number of writes I'd expect them to be pretty reliable. Aside from all of these things, using a single copy means you've got no protection against physical disasters: fire etc. If you have multiple copies, you can separate them physically. Ideally I'd take some number (e.g. 3) of copies and run a checksum (I usually use MD5) periodically over everything. If one of the copies becomes corrupt in some way and you've got multiple other copies, you should be able to trust the majority, and create a new backup to replace the corrupted one. (Of course, if you keep the correct checksums in a separate place, you could trust even a single backup which still gives the right checksums, as the canonical source for replacements.) Of course, how much trouble you go to depends on the value of the data. My personal home data is only backed up on a RAIDed NAS. My work data is in Google datacenters, which I trust fairly strongly :)
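The periodic checksum pass can be as simple as this (the directory and manifest names are placeholders; run the check from the same directory the manifest was built in):
cd /mnt/archive-copy-1
find . -type f -exec md5sum {} + > /root/archive-copy-1.md5   # record checksums once, keep the manifest elsewhere too
md5sum -c /root/archive-copy-1.md5 | grep -v ': OK$'          # later runs: print only files that changed or vanished
Run the same check against each copy and let the majority (or the separately stored manifest) decide which copy to trust.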
{ "source": [ "https://serverfault.com/questions/6013", "https://serverfault.com", "https://serverfault.com/users/457/" ] }
6,046
I've just installed Windows Server 2008 on a server and I'm able to connect through Remote Desktop but can't ping. Do I need to open a special port on the firewall to be able to ping the server?
By default Windows 2008 does not respond to pings. To enable: Administrative Tools Windows Firewall with Advanced Security Inbound Rules File and Printer Sharing (Echo Request - ICMPv4-IN) Enable Rule You should now be able to ping your server from the LAN.
{ "source": [ "https://serverfault.com/questions/6046", "https://serverfault.com", "https://serverfault.com/users/2221/" ] }
6,134
How do I convert line breaks in a text file between the Windows and Unix/Linux formats? I have a *nix environment, but that I need to import and export data with the Windows-style line breaks. I thought there would be a standard utility or command to do this, but I can't seem to find it.
You're probably looking for dos2unix , unix2dos , todos or fromdos depending on your distribution. Ubuntu/Debian package todos / fromdos as part of the tofrodos package from memory.
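If none of those packages happen to be installed, plain sed or tr can do the conversion too (GNU sed is assumed for -i; file names are placeholders):
sed -i 's/\r$//' file.txt               # Windows -> Unix
sed -i 's/$/\r/' file.txt               # Unix -> Windows
tr -d '\r' < infile.txt > outfile.txt   # Windows -> Unix, writing to a new file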
{ "source": [ "https://serverfault.com/questions/6134", "https://serverfault.com", "https://serverfault.com/users/1897/" ] }
6,190
After reading this question on a server compromise , I started to wonder why people seem to continue to believe that they can recover a compromised system using detection/cleanup tools, or by just fixing the hole that was used to compromise the system. Given all the various root kit technologies and other things a hacker can do, most experts suggest you should reinstall the operating system . I am hoping to get a better idea why more people don't just take off and nuke the system from orbit. Here are a couple of points that I would like to see addressed. Are there conditions where a format/reinstall would not clean the system? Under what types of conditions do you think a system can be cleaned, and when must you do a full reinstall? What reasoning do you have against doing a full reinstall? If you choose not to reinstall, then what method do you use to be reasonably confident you have cleaned and prevented any further damage from happening again?
A security decision is ultimately a business decision about risk, just as is a decision about what product to take to market. When you frame it in that context, the decision to not level and reinstall makes sense. When you consider it strictly from a technical perspective, it does not. Here's what typically goes into that business decision: How much will our downtime cost us in measurable amount? How much will it potentially cost us when we have to reveal to customers a bit about why we were down? What other activities am I going to have to pull people away from to do the reinstall? What is the cost? Do we have the right people who know how to bring up the system without error? If not, what's it going to cost me as they troubleshoot bugs? And therefore, when you add up the costs like those, it may be deemed that continuing with a "potentially" still-compromised system is better than reinstalling the system.
{ "source": [ "https://serverfault.com/questions/6190", "https://serverfault.com", "https://serverfault.com/users/984/" ] }
6,233
I have a Linux server that shows me a warning about a changed SSH host key whenever I connect: $ ssh root@host1 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that the RSA host key has just been changed. The fingerprint for the RSA key sent by the remote host is 93:a2:1b:1c:5f:3e:68:47:bf:79:56:52:f0:ec:03:6b. Please contact your system administrator. Add correct host key in /home/emerson/.ssh/known_hosts to get rid of this message. Offending key in /home/emerson/.ssh/known_hosts:377 RSA host key for host1 has changed and you have requested strict checking. Host key verification failed. It keeps me logged in for only a few seconds and then closes the connection. host1:~/.ssh # Read from remote host host1: Connection reset by peer Connection to host1 closed. Does anyone know what's happening and what I could do to solve this problem?
Please don't delete the entire known_hosts file as recommended by some people, this totally voids the point of the warning. It's a security feature to warn you that a man in the middle attack may have happened. I suggest you identify why it thinks something has changed, most likely an SSH upgrade altered the encryption keys due to a possible security hole. You can then purge that specific line from your known_hosts file: sed -i 377d ~/.ssh/known_hosts This d eletes line 377 as shown after the colon in the warning: /home/emerson/.ssh/known_hosts:377 Alternatively you can remove the relevant key by doing the following ssh-keygen -R 127.0.0.1 (obviously replace with the server's IP) Please DO NOT purge the entire file and ensure this is actually the machine you want to be connecting to prior to purging the specific key.
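Before adding the new key it's worth confirming it really belongs to your server. Two helpers for that (the hostname is a placeholder):
ssh-keygen -F host1          # show what your known_hosts currently stores for that host
ssh-keyscan -t rsa host1     # print the key the server is presenting right now
Compare the fingerprint of the scanned key against one obtained out-of-band (console, the server's admin, etc.) before you trust it.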
{ "source": [ "https://serverfault.com/questions/6233", "https://serverfault.com", "https://serverfault.com/users/1249/" ] }
6,354
I am trying to import a MySQL dump file. The file was created on a Linux server, I am trying to import on windows I logged into the command line and ran: SOURCE c:/dump.sql But this seems to have thrown up some character set problems (specifically with smart quotes and other non standard punctuation). It was suggested to me that I run: mysql -u username -d dbase < c:\dump.sql When I try this I get the error ERROR 2006 (HY000) at line 149351: MySQL server has gone away A bit of googling suggested that this was to do with the max_allowed_packet switch but I have tried this and it hasn't worked. Has anyone any idea what this could be? If anyone has a suggestion about the character set issue that would be helpful too.
My first instinct after reading the error message in the question title was to suggest increasing max_allowed_packet. You mentioned that you tried "that switch" and it hasn't worked. Can you confirm that you have correctly modified the server's configuration file? Your phrasing makes it sound like you've tried to use that as a command line switch on the mysql.exe client command line, which wouldn't cause the server to alter behavior. So, in short, what you should try to do is locate and edit the my.cnf file your server is currently using. In the [mysqld] section alter the max_allowed_packet setting to something like [mysqld] max_allowed_packet=32M Don't forget to restart the server after altering the configuration. I've used 32M (a ridiculously large value) as an example. Since your query seems to be enormous you should try this value (or perhaps even 64M if you've got enough RAM) to see whether it works. Another option is to leave the server as-is and alter the behavior of the client used to generate the SQL dump. Tell it to limit the size of the individual queries to under 1 MB - that should also do the trick. For more details, see B.1.2.10. Packet too large in the MySQL manual.
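To verify the change actually took effect (and, on reasonably recent MySQL versions, to raise the value without a restart), something like this should work; credentials are placeholders, and SET GLOBAL only affects new connections:
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 32*1024*1024"
Reconnect with a fresh mysql client after the SET GLOBAL before retrying the import.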
{ "source": [ "https://serverfault.com/questions/6354", "https://serverfault.com", "https://serverfault.com/users/536/" ] }
6,403
What do the three columns in traceroute output mean? The man page isn't helpful: http://www.ss64.com/bash/traceroute.html This is better, but a little more verbose than I'm looking for. As an example. traceroute to library.airnews.net (206.66.12.202), 30 hops max, 40 byte packets 1 rbrt3 (208.225.64.50) 4.867 ms 4.893 ms 3.449 ms 2 519.Hssi2-0-0.GW1.EWR1.ALTER.NET (157.130.0.17) 6.918 ms 8.721 ms 16.476 ms 3 113.ATM3-0.XR2.EWR1.ALTER.NET (146.188.176.38) 6.323 ms 6.123 ms 7.011 ms 4 192.ATM2-0.TR2.EWR1.ALTER.NET (146.188.176.82) 6.955 ms 15.400 ms 6.684 ms 5 105.ATM6-0.TR2.DFW4.ALTER.NET (146.188.136.245) 49.105 ms 49.921 ms 47.371 ms 6 298.ATM7-0.XR2.DFW4.ALTER.NET (146.188.240.77) 48.162 ms 48.052 ms 47.565 ms 7 194.ATM9-0-0.GW1.DFW1.ALTER.NET (146.188.240.45) 47.886 ms 47.380 ms 50.690 ms 8 iadfw3-gw.customer.ALTER.NET (137.39.138.74) 69.827 ms 68.112 ms 66.859 ms 9 library.airnews.net (206.66.12.202) 174.853 ms 163.945 ms 147.501 ms
Traceroute sends out three packets per TTL increment. Each column corresponds to the time is took to get one packet back (round-trip-time). This tries to account for situations such as: A traceroute packet is routed along a different link than other attempts 11 130.117.3.201 (130.117.3.201) 109.762 ms 130.117.49.197 (130.117.49.197) 118.191 ms 107.262 ms A traceroute packet is dropped 9 154.54.26.142 (154.54.26.142) 104.153 ms * *
{ "source": [ "https://serverfault.com/questions/6403", "https://serverfault.com", "https://serverfault.com/users/2174/" ] }
6,709
I'm using SSHFS mounts from my laptop to a central server. Obviously, the SSHFS mount is broken after a longer disconnect (eg. during suspend), cause the underlying SSH connection timed out. Is there a way to get SSHFS mounts surviving long lasting disconnections (> 5 min) or even a re-dialin with a different IP?
Use -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 The combination ServerAliveInterval=15,ServerAliveCountMax=3 causes the I/O errors to pop out after one minute of network outage. This is important but largely undocumented. If ServerAliveInterval option is left at default (so without the alive check), processes which experience I/O hang seem to sleep indefinitely, even after the sshfs gets reconnect 'ed. I regard this a useless behavior. In other words what happens on -o reconnect without assigning ServerAliveInterval is that any I/O will either succeed, or hang the application indefinitely if the ssh reconnects underneath. A typical application becomes entirely hung as a result. If you'd wish to allow I/O to return an error and resume the application, you need ServerAliveInterval=1 or greater. The ServerAliveCountMax=3 is the default anyway, but I like to specify it for readability.
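Put together, a full mount command looks something like this (user, host and both paths are placeholders):
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 user@server.example.com:/home/user /mnt/remote
With those options a suspend/resume produces a short burst of I/O errors and the mount usually comes back on its own once the network is up again, instead of hanging forever.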
{ "source": [ "https://serverfault.com/questions/6709", "https://serverfault.com", "https://serverfault.com/users/2357/" ] }
6,710
My company is building a server and we've already purchased an Asus Z8NR-D12 motherboard which we really like, but when we went to assemble our server, we found that the chassis we had bought came with a PSU that only provides one EPS12V connector while the motherboard needs two. We had some other problems with the chassis, so we're going to return it and get a new chassis and PSU. So can anyone recommend a PSU that will work well with this motherboard? I've been doing some searches, and I'm having trouble finding one that has 2 EPS12V connectors. I guess another option would be a Molex to EPS12V adapter cable. But I'm not sure how well this will work when the server is under load. My understanding is that EPS12V uses 4 pairs of wire for a reason, and pulling all that current down the single pair of a Molex connector could cause problems. Thanks!
{ "source": [ "https://serverfault.com/questions/6710", "https://serverfault.com", "https://serverfault.com/users/331/" ] }
6,714
I have:
- an ISO image of the Windows 7 install media
- a 4 GB USB flash drive
- no DVD drive
- Linux installed
How can I put the Windows 7 installer onto the USB drive and make it bootable, working only from Linux?
OK, after unsuccessfully trying all methods mentioned here, I finally got it working. Basically, the missing step was to write a proper boot sector to the USB stick, which can be done from Linux with ms-sys or lilo -M . This works with the Windows 7 retail version. Here is the complete rundown again: Install ms-sys - if it is not in your repositories, get it here . Or alternatively, make sure lilo is installed (but do not run the liloconfig step on your local box if e.g. Grub is installed there!) Check what device your USB media is assigned - here we will assume it is /dev/sdb . Delete all partitions, create a new one taking up all the space, set type to NTFS (7), and remember to set it bootable: # cfdisk /dev/sdb or fdisk /dev/sdb (partition type 7 , and bootable flag) Create an NTFS filesystem: # mkfs.ntfs -f /dev/sdb1 Write Windows 7 MBR on the USB stick (also works for windows 8), multiple options here: # ms-sys -7 /dev/sdb or (e.g. on newer Ubuntu installs) sudo lilo -M /dev/sdb mbr ( info ) or (if syslinux is installed), you can run sudo dd if=/usr/lib/syslinux/mbr/mbr.bin of=/dev/sdb Mount ISO and USB media: # mount -o loop win7.iso /mnt/iso # mount /dev/sdb1 /mnt/usb Copy over all files: # cp -r /mnt/iso/* /mnt/usb/ ...or use the standard GUI file-browser of your system Call sync to make sure all files are written. Open gparted, select the USB drive, right-click on the file system, then click on "Manage Flags". Check the boot checkbox, then close. ...and you're done. After all that, you probably want to back up your USB media for further installations and get rid of the ISO file... Just use dd: # dd if=/dev/sdb of=win7.img Note, this copies the whole device! — which is usually (much) bigger than the files copied to it. So instead I propose # dd count=[(size of the ISO file in MB plus some extra MB for boot block) divided by default dd blocksize] if=/dev/sdb of=win7.img Thus for example with 8 M extra bytes: # dd count=$(((`stat -c '%s' win7.iso` + 8*1024*1024) / 512)) if=/dev/sdb of=win7.img status=progress As always, double check the device names very carefully when working with dd . The method creating a bootable USB presented above works also with Win10 installer iso. I tried it running Ubuntu 16.04 copying Win10_1703_SingleLang_English_x64.iso (size 4,241,291,264 bytes) onto an 8 GB USB-stick — in non-UEFI [non-secure] boot only. After execution dd reports: 8300156+0 records in 8300156+0 records out 4249679872 bytes (4.2 GB, 4.0 GiB) copied, 412.807 s, 10.3 MB/s Reverse if/of next time you want to put the Windows 7 installer onto USB.
{ "source": [ "https://serverfault.com/questions/6714", "https://serverfault.com", "https://serverfault.com/users/1682/" ] }
6,733
It was recently suggested to me that I use FastCGI with PHP. Now I went to the FastCGI page and read it but I don't really understand what the advantages are.
With mod_php, each Apache worker has the entire PHP interpreter loaded into it. Because Apache needs one worker process per incoming request, you can quickly end up with hundreds of Apache workers in use, each with its own PHP interpreter loaded, consuming huge amounts of memory. (Note, this isn't exactly true: Apache's worker MPM allows you to serve many requests from a single Apache worker process by using threads. However, even in 2009, this is not the recommended way to deploy PHP because of suspected threading issues with the PHP extensions.) Using PHP in FastCGI mode (with something like spawn-fcgi from the lighttpd package) has the following benefits: it lets you tune the number of PHP workers separately from the number of incoming connections; it allows you to put your PHP workers on a different server, or scale across many servers, without changing your web tier; it gives you the flexibility to choose a different web server, like nginx or lighttpd; and it allows you to run your PHP application in a different security domain from your web server.
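To make that concrete, here is a hedged sketch of spawning PHP FastCGI workers with spawn-fcgi; the port, worker count, and user are placeholder choices, not values taken from the answer:

# Start 4 php-cgi worker processes listening on a local TCP port,
# running as an unprivileged user separate from the web server
spawn-fcgi -a 127.0.0.1 -p 9000 -C 4 -u php-user -g php-user -f /usr/bin/php-cgi

The web server (nginx, lighttpd, or Apache with mod_fastcgi) is then pointed at 127.0.0.1:9000 and never loads the PHP interpreter itself.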
{ "source": [ "https://serverfault.com/questions/6733", "https://serverfault.com", "https://serverfault.com/users/1232/" ] }
6,753
When you create some Linux filesystems like ext3, a 'lost+found' directory is created. According to this, files will be placed there if they were damaged by some kind of system crash. What happens if this directory is removed and the system crashes? If the folder is removed, can I just create a new directory with mkdir lost+found, or are there attributes that can only be set when the filesystem is being created?
fsck will recreate the lost+found directory if it is missing. On startup most distributions run fsck if the filesystem is detected as not being unmounted cleanly. As fsck creates the lost+found directory if it is missing, it will create it then and place anything that it finds into that directory.
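If you want to recreate it by hand rather than waiting for fsck, e2fsprogs ships a small helper for this; a hedged sketch, assuming an ext2/3/4 filesystem mounted at /mnt/data:

# Recreate lost+found with pre-allocated blocks so fsck can use it during recovery
cd /mnt/data
mklost+found

A plain mkdir lost+found also works, but mklost+found pre-allocates space for the directory so fsck does not have to allocate blocks on an already-damaged filesystem.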
{ "source": [ "https://serverfault.com/questions/6753", "https://serverfault.com", "https://serverfault.com/users/984/" ] }
6,895
Has anyone got a nice solution for handling files in /var/www ? We're running Name Based Virtual Hosts and the Apache 2 user is www-data . We've got two regular users & root. So when messing with files in /var/www , rather than having to... chown -R www-data:www-data ...all the time, what's a good way of handling this? Supplementary question: How hardcore do you then go on permissions? This one has always been a problem in collaborative development environments.
Attempting to expand on @Zoredache's answer , as I give this a go myself: Create a new group (www-pub) and add the users to that group groupadd www-pub usermod -a -G www-pub usera ## must use -a to append to existing groups usermod -a -G www-pub userb groups usera ## display groups for user Change the ownership of everything under /var/www to root:www-pub chown -R root:www-pub /var/www ## -R for recursive Change the permissions of all the folders to 2775 chmod 2775 /var/www ## 2=set group id, 7=rwx for owner (root), 7=rwx for group (www-pub), 5=rx for world (including apache www-data user) Set group ID ( SETGID ) bit (2) causes the group (www-pub) to be copied to all new files/folders created in that folder. Other options are SETUID (4) to copy the user id, and STICKY (1) which I think lets only the owner delete files. There's a -R recursive option, but that won't discriminate between files and folders, so you have to use find , like so: find /var/www -type d -exec chmod 2775 {} + Change all the files to 0664 find /var/www -type f -exec chmod 0664 {} + Change the umask for your users to 0002 The umask controls the default file creation permissions, 0002 means files will have 664 and directories 775. Setting this (by editing the umask line at the bottom of /etc/profile in my case) means files created by one user will be writable by other users in the www-group without needing to chmod them. Test all this by creating a file and directory and verifying the owner, group and permissions with ls -l . Note: You'll need to logout/in for changes to your groups to take effect!
{ "source": [ "https://serverfault.com/questions/6895", "https://serverfault.com", "https://serverfault.com/users/1576/" ] }
6,989
I'd like to create a single rule in iptables (if possible) that uses multiple source IP addresses. Is this possible?
To add multiple sources in a single command I would do this: iptables -t filter -A INPUT -s 192.168.1.1,2.2.2.2,10.10.10.10 -j ACCEPT iptables will automatically translate it into multiple rules .
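If you want to see the result, a quick sketch (chain and addresses as in the example above):

# Add one rule with a comma-separated source list
iptables -t filter -A INPUT -s 192.168.1.1,2.2.2.2,10.10.10.10 -j ACCEPT

# List the chain: you should see three separate -A INPUT rules, one per address
iptables -S INPUT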
{ "source": [ "https://serverfault.com/questions/6989", "https://serverfault.com", "https://serverfault.com/users/829/" ] }
7,056
What's the command to find the name of a computer given its IP address? I always forget what this command is, but I know it exists in Windows and I assume it exists on the *nix command-line.
The commands dig and host should be what you're looking for ;) On *nix systems, you can use this command: dig -x [address] Alternatively, you can add +short at the end of the dig command to output only the DNS result. There's also nslookup on both *nix and Windows systems for reverse DNS requests.
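For example (the IP address here is just a placeholder; substitute the one you are interested in):

# Reverse (PTR) lookup with dig, full and short output
dig -x 192.0.2.10
dig -x 192.0.2.10 +short

# The same lookup with host and with nslookup (nslookup also works on Windows)
host 192.0.2.10
nslookup 192.0.2.10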
{ "source": [ "https://serverfault.com/questions/7056", "https://serverfault.com", "https://serverfault.com/users/2329/" ] }
7,069
Is there a list of public NTP servers on the internet that I can use?
http://www.pool.ntp.org/ If you are in the US: United States — us.pool.ntp.org To use this pool zone, add the following to your ntp.conf file: server 0.us.pool.ntp.org server 1.us.pool.ntp.org server 2.us.pool.ntp.org server 3.us.pool.ntp.org Other pools around the world are available and can be found at the http://www.pool.ntp.org/ site.
{ "source": [ "https://serverfault.com/questions/7069", "https://serverfault.com", "https://serverfault.com/users/2427/" ] }
7,109
Windows Vista added the ability to create symbolic links to files and directories. How do I create a symbolic link and what are the current consumer and server versions of Windows that support it?
You can create a symbolic link with the command line utility mklink . MKLINK [[/D] | [/H] | [/J]] Link Target /D Creates a directory symbolic link. Default is a file symbolic link. /H Creates a hard link instead of a symbolic link. /J Creates a Directory Junction. Link specifies the new symbolic link name. Target specifies the path (relative or absolute) that the new link refers to. Symbolic links via mklink are available since Windows Vista and Windows Server 2008. On Windows XP and Windows Server 2003 you can use fsutil hardlink create <destination filename> <source filename> According to msdn.microsoft , Symbolic Links are NOT supported on FAT16/32 and exFAT. It seems Windows only supports them from or to NTFS-Partitions. Future Windows operating systems are likely to continue support for mklink. You can read further information about this new feature on Microsoft TechNet , Junfeng Zhang's blog or howtogeek.com .
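For example, a quick sketch from an elevated command prompt; the paths are hypothetical:

:: File symbolic link (link name first, then target)
mklink C:\Users\Me\notes.txt D:\data\notes.txt

:: Directory symbolic link
mklink /D C:\Users\Me\projects D:\data\projects

:: Directory junction (does not require the symbolic-link privilege)
mklink /J C:\Users\Me\archive D:\data\archive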
{ "source": [ "https://serverfault.com/questions/7109", "https://serverfault.com", "https://serverfault.com/users/1532/" ] }
7,145
While updating my packages on a Debian-based system with sudo apt-get update I got this error message: Reading package lists... Done W: GPG error: ftp://ftp.fr.debian.org stable/non-US Release: The following signatures were invalid: KEYEXPIRED 1138684904 What should I do to fix this?
To find any expired repository keys and their IDs, use apt-key as follows: LANG=C apt-key list | grep expired You will get a result similar to the following: pub 4096R/BE1DB1F1 2011-03-29 [expired: 2014-03-28] The key ID is the bit after the / i.e. BE1DB1F1 in this case. To update the key, run sudo apt-key adv --recv-keys --keyserver YOUR_GPGKEY_HOST_DOMAIN BE1DB1F1 Note: Updating the key will obviously not work if the package maintainer has not (yet) uploaded a new key. In that case there is little you can do other than contacting the maintainer, filing a bug against your distribution etc. YOUR_GPGKEY_HOST_DOMAIN indicates domain name of any available GPG key server, such as keyserver.ubuntu.com keys.openpgp.org pgp.mit.edu (update 2023.2.22) The SKS key server keys.gnupg.net is deprecated and gone . One liner to update all expired keys: (thanks to @ryanpcmcquen) for K in $(apt-key list | grep expired | cut -d'/' -f2 | cut -d' ' -f1); do sudo apt-key adv --recv-keys --keyserver keys.gnupg.net $K; done
{ "source": [ "https://serverfault.com/questions/7145", "https://serverfault.com", "https://serverfault.com/users/117/" ] }
7,251
Is there an easy way to delete multiple tables in the database without dropping the database and recreating it? In this case we have over 100 to remove. I am happy enough to remove all user tables and reimport the needed data but can't touch any of the database security settings.
In object explorer, navigate to the database you're interested in. Expand it out and click on the Tables folder. Hit F7 to bring up the Object Explorer Details. Select the tables you want to delete and press the delete key.
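If you would rather script it (handy with 100+ tables), a hedged T-SQL sketch for SQL Server — it only generates the statements, so review the output before running it, and be aware that foreign keys may force a particular drop order:

-- Generate a DROP TABLE statement for every user table in the current database
SELECT 'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name) + ';'
FROM sys.tables
ORDER BY name;

-- Copy the result set into a new query window, review it, then execute it.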
{ "source": [ "https://serverfault.com/questions/7251", "https://serverfault.com", "https://serverfault.com/users/1436/" ] }
7,401
I once had a job offer from a company that wanted my workstation to be in the AC controlled, noisy server room with no natural light. I'm not sure what their motivation was. Possibly it made sense to them for me to be close to the servers, or possibly they wanted to save the desk space for other employees. I turned down the job (for many reasons, including the working environment). Is this a common practice? Do you work in your LAN room? How do you cope?
Do you work in your server room? Generally no, although in some companies with only 2-3 servers, yes - my office was the server room. Is this a common practice? For small companies and technology startups, yes - space is an issue. How do you cope? I make a business case against it, security, cost, etc. If that doesn't work, I bring a sound level meter and the OSHA guidelines , and show them that they are providing an unsafe work environment. This would require them to perform monitoring and sound control, supply affected employees with proper equipment for such a working environment, hold occasional training on sound exposure, etc, etc, etc. The cost would be much greater to support than providing a work area outside the server room. At what point do you suffer hearing loss? If you can't hold a conversation at a normal level in the server room (about 60db) then it's likely too loud to work in for 8 hours a day. Extended exposure to high sound levels leads to hearing loss, and the employer would be liable for this if they did not proactively follow the OSHA guidelines. Of course, by that time you've already lost something valuable, so it's in your best interest to protect yourself by demanding an appropriate working environment. -Adam
{ "source": [ "https://serverfault.com/questions/7401", "https://serverfault.com", "https://serverfault.com/users/995/" ] }
7,441
So I have gotten a new job at a medium-sized company as an IT administrator. I have also inherited this monster (actually, there are 2 of them) from the previous administrator: I'd like your expertise and experience on how to make sense of it. Someday, all hell will probably break loose and I'll need to fiddle with the cables and switches. So I'll need to have an overview of which cable leads where, etc. What approach should I take? EDIT: I guess I did not explain myself well enough :) I also meant it a little more detailed. Such as, you say: color both ends. Yes, but what is the easiest way to figure out which cable leads where? :)
I've actually seen worse! I suggest you start documenting right now (per patch port <-> switch port). Start planning the logical placement of your infrastructure and clients on your switch(es): Infrastructure (router/switches/servers) (switch A: ports 1-20) Clients (switch B: ports 20-40)... (Also, you might want to keep VLAN memberships somewhat together.) Once you differentiate between clients and infrastructure (and VLAN associations), it will be much easier to just unplug and re-wire everything. I agree with the other posts: shorter cables would be very useful. ADDITIONAL INFO Per your added request for details, it sounds like you need to know where to begin. Obviously, try as hard as you can to get some kind of port documentation from the previous administrator (if you're lucky!). If that's not available, you will need to: See if the patch panels and room ports are labeled; if they are, then GREAT! Start documenting! If the patch panels and/or room ports aren't labeled, a cable tester might come in handy: http://www.smarthome.com/89409/LAN-Cable-Check-Telephone-Test-Set/p.aspx . (source: smarthome.com ) Good luck m8!
{ "source": [ "https://serverfault.com/questions/7441", "https://serverfault.com", "https://serverfault.com/users/78/" ] }
7,478
I know it could be very different based on the situation, but for hosting a website with no plans to move the hosting server what is a good TTL to set on the DNS record?
I tend to leave it at Slicehost's default, 86,400 seconds (1 day). I drop it down to 10 minutes when I have a move pending and wait a day or two. edit: These days (2016) I tend to keep it low - ~5 minutes.
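If you want to check what TTL a record is currently being served with, a quick sketch (the domain and nameserver here are placeholders):

# The number in the second column of the answer is the remaining TTL in seconds
dig www.example.com A +noall +answer

# Query the authoritative nameserver directly to see the configured (full) TTL
dig @ns1.example.com www.example.com A +noall +answer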
{ "source": [ "https://serverfault.com/questions/7478", "https://serverfault.com", "https://serverfault.com/users/2448/" ] }
7,503
What is the best way to determine if a variable in bash is empty ("")? I have heard that it is recommended that I do if [ "x$variable" = "x" ] Is that the correct way? (there must be something more straightforward)
This will return true if a variable is unset or set to the empty string (""). if [ -z "${VAR}" ];
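A complete, runnable sketch (the variable name is just an example):

#!/bin/bash
if [ -z "${VAR}" ]; then
    echo "VAR is empty or unset"
else
    echo "VAR is set to: ${VAR}"
fi

If you need to distinguish "unset" from "set to the empty string", you can use the ${VAR+x} expansion instead, e.g. if [ -z "${VAR+x}" ]; then echo unset; fi .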
{ "source": [ "https://serverfault.com/questions/7503", "https://serverfault.com", "https://serverfault.com/users/1466/" ] }
7,594
What is a good, general way to make a recursive/deep directory copy in Linux that works in most cases? I've used simple things like cp -R as well as fairly elaborate cpio incantations. Are there any significant strengths or weaknesses that cause you to prefer one over the other? Which one do you use most often?
NAME cp - copy files and directories -a, --archive same as -dpR -d same as --no-dereference --preserve=links -p same as --preserve=mode,ownership,timestamps -R, -r, --recursive copy directories recursively So in answer to your question: cp -a /foo /bar Copy everything recursively from directory /foo to directory /bar while preserving symbolic links and file/directory 'mode' 'ownership' & 'timestamps'.
{ "source": [ "https://serverfault.com/questions/7594", "https://serverfault.com", "https://serverfault.com/users/1438/" ] }
7,678
Is it ethical to hack real systems owned by someone else? Not for profit, but to test your security knowledge and learn something new. I am talking only about hacks that do no damage to the system and just prove that there are some security holes.
The key flaw I see in your question is that you seem to believe it is possible to correctly assess from the outside what the damage hacking will do to a given system. How do you know that flipping a given bit the wrong way isn't going to completely destroy something and cost your target thousands or millions of dollars. Since ethics are very subjective I will answer you this way. It would be far more ethical to leave stuff alone that doesn't belong to you or you don't have explicit permission to touch.
{ "source": [ "https://serverfault.com/questions/7678", "https://serverfault.com", "https://serverfault.com/users/1815/" ] }
7,689
I have a port that is blocked by a process I needed to kill (a little telnet daemon that crashed). The process was killed successfully but the port is still in a 'FIN_WAIT1' state. It doesn't come out of it; the timeout for that seems to be set to 'a decade'. The only way I've found to free the port is to reboot the entire machine, which is of course something I do not want to do. $ netstat -tulnap | grep FIN_WAIT1 tcp 0 13937 10.0.0.153:4000 10.0.2.46:2572 FIN_WAIT1 - Does anyone know how I can get this port unblocked without rebooting?
# record tcp_max_orphans's current value original_value=$(cat /proc/sys/net/ipv4/tcp_max_orphans) # set tcp_max_orphans to 0 temporarily echo 0 > /proc/sys/net/ipv4/tcp_max_orphans # watch /var/log/messages # it will spit out "kernel: TCP: too many of orphaned sockets" # it won't take long for the connections to be killed # restore tcp_max_orphans to whatever it was before echo $original_value > /proc/sys/net/ipv4/tcp_max_orphans # verify with netstat -an|grep FIN_WAIT1
{ "source": [ "https://serverfault.com/questions/7689", "https://serverfault.com", "https://serverfault.com/users/47/" ] }
7,778
The list of must-have requirements: be able to serve static HTML pages and files (images, compressed archives, ASCII text files etc) over HTTP. be Resource conservative . It uses what's needed to send data over the network in form of memory and CPU, and not much more. have a small install footprint. use only as much network bandwith as is necessary. be mature . be easy to configure. be compiled into native code. No Python or Java etc. What I don't need: Complex configuration options. If needed later on, I'll switch to Apache httpd. Support for running CGI, Perl, PHP, Java, Server Side Includes or other "extras". Any suggestions please?
nginx Learn more at the nginx wiki site . It's hot, fast, small. A few % on the Netcraft survey .
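To give an idea of how little configuration a static-only site needs, here is a hedged sketch of an nginx server block; the domain and document root are placeholders:

server {
    listen 80;
    server_name static.example.com;     # placeholder hostname
    root /var/www/static;               # directory holding your HTML, images, archives

    location / {
        try_files $uri $uri/ =404;      # serve files and directories only, nothing dynamic
    }
}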
{ "source": [ "https://serverfault.com/questions/7778", "https://serverfault.com", "https://serverfault.com/users/32224/" ] }
7,787
Which protocol should I use and why?
My understanding of wireless security protocol strength, starting with most secure: WPA2-AES WPA2-TKIP WPA WEP Search for WEP cracking and you'll find plenty of tutorials on cracking it in 10 minutes on common PCs. WPA is significantly more difficult to crack, but each version has its weak points. WPA2-AES is considered top of the line last I heard and supported by pretty much all modern routers and OS's. See these Security Now! past episodes for in-depth explanations: Episode 170, The TKIP Hack Episode 89, Even More Badly Broken WEP
{ "source": [ "https://serverfault.com/questions/7787", "https://serverfault.com", "https://serverfault.com/users/2376/" ] }
7,836
For as long as I've known, I and everyone I've encountered pronounces BIOS as bi-Ohs. Since listening to the Stackoverflow podcast I'm still surprised to hear Jeff say bi-Ahs. Just when I thought it was an Atwoodism, Michael Pryor made the same enunciation on episode 51 . Right or wrong, how is it more commonly pronounced?
I pronounce it bi-Ohs (bī'ōs)
{ "source": [ "https://serverfault.com/questions/7836", "https://serverfault.com", "https://serverfault.com/users/371/" ] }
7,902
I'm looking for amusing stories of system administrator accidents you have had. Deleting the CEO's email, formatting the wrong hard drive, etc. I'll add my own story as an answer.
I had fun discovering the difference between the linux "killall" command (kills all processes matching the specified name, useful for stopping zombies) and the solaris "killall" command (kills all processes and halts the system, useful for stopping the production server in the middle of peak hours and getting all your co-workers to laugh at you for a week).
{ "source": [ "https://serverfault.com/questions/7902", "https://serverfault.com", "https://serverfault.com/users/2427/" ] }
8,149
Not going into specifics on the specs since I know there is no real answer for this. But I've been doing load testing today with the ab command in apache. And got to the number of 70 requests per second (1000 requests with 100 concurrent users), on a page that is loading from 4 different DB tables, and doing some manipulation with the data. So it's a fairly heavy page. The server isn't used for anything else for now and the load on it is just me, since it's in development. But the application will be used daily by many users. But is this enough? Or should I even worry (just as long as it's over X requests a second) I'm thinking that I shouldn't worry but I'd like some tips on this.
70 requests per second works out to an hourly rate of 252,000 page renders / hour. If you assume that the average browsing session for your site is 10 pages deep, then you can support 25,000 uniques / hour. You should probably check these numbers against your expected visitor count, which should be available from the folks on the business side. Many of the sites I work on see about 50% of their daily traffic in a roughly 3 hour peak period on each day. If this is the case with your site (it depends on the kind of content you provide, and the audience), then you should be able to support a daily unique visit count of around 150,000. These are pretty good numbers; I think you should be fine. It's wise to look into opcode caching and database tuning now, but remember- premature optimization is the root of all evil. Monitor the site, look for hotspots, and wait for traffic to grow before you go through an expensive optimization effort for a problem you may not have.
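A back-of-the-envelope check of those numbers (plain shell arithmetic, using the assumptions stated above):

rps=70; pages_per_visit=10
echo "renders per hour : $(( rps * 3600 ))"                         # 252000
echo "uniques per hour : $(( rps * 3600 / pages_per_visit ))"       # 25200
# ~50% of daily traffic in a 3-hour peak => daily uniques ≈ 3 peak hours × 2
echo "daily uniques    : $(( rps * 3600 / pages_per_visit * 3 * 2 ))"  # 151200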
{ "source": [ "https://serverfault.com/questions/8149", "https://serverfault.com", "https://serverfault.com/users/663/" ] }
8,187
I just installed Windows 7 RC1 and want to move c:\users to d:\users. What's the best way to do this? Due to the fact that Windows 7 creates a reserved partition that is mounted as C: in the recovery console, I had to use the following commands robocopy /mir /xj D:\Users E:\Users mklink D:\Users D:\Users /j Both D's in the mklink command are correct. When the system reboots, the drive that was D in the recovery console becomes the C drive.
You can move the entire C:\Users folder to a different drive pretty easily after Windows is installed: Warning: Doing this may cause issues if/when you need to perform a System Restore. Boot to the installation media, and get to the command prompt ( press Shift + F10 on the install dialog ) Use Robocopy to copy C:\Users to D:\Users: robocopy c:\Users d:\Users /mir /xj /copyall a. /mir tells robocopy to mirror the directories, this will copy all files b. /xj is very important, this tells robocopy not to follow junction points. If you forget this, you will have a lot of trouble. c. /copyall will copy all the attributes including ACL and Owner info Verify that the files successfully copied Delete c:\Users Create a junction that points to d:\Users: mklink c:\Users d:\Users /j That's it. I've been using this process since Vista went RTM with no problems. Here is an article that explains it as well. Just use robocopy instead of xcopy as he does in the article to avoid possible NTFS permissions problems. Update: Because I found out the hard way, I thought I'd also mention that if you are planning on moving "Program Data" or "Program Files" with this method, you will be disappointed to find out that everything works as expected, but Windows updates will no longer install. I'm not sure if this has been fixed in Windows 7. Update 2: @Benjol has a blog post that details a method of moving the profiles folder that will recreate the junctions that this method leaves out. If you run into any issues with legacy apps, take a look here and see if his method resolves the issue.
{ "source": [ "https://serverfault.com/questions/8187", "https://serverfault.com", "https://serverfault.com/users/1692/" ] }
8,411
In my spare time I remotely support my wife's office via VPN into a Windows Server. I am about to purchase a wireless broadband service which doesn't support VPN. I don't want to open up the remote desktop ports directly, and I would like to set up an SSH tunnel into the network, and if necessary then VPN over the top of that. What is the best windows SSH Server implementation to use on a Windows 2003 Server, or should I just be using sshwindows ?
I've been using FreeSSHd on my home Windows box, and have not run into any limitations. Highly recommended.
{ "source": [ "https://serverfault.com/questions/8411", "https://serverfault.com", "https://serverfault.com/users/2229/" ] }
8,462
In the company I'm working for, we need to hire system administrators. However, we are a programming development company and it turns out that we have no idea how to tell a good system administrator from a bad one*. We just needed someone to set up the server, plan the layout of the LAN cables and set up policies on the security of the Wi-Fi. We didn't realize that we had a problem with our hiring until we found out that the two administrators we hired didn't do the job properly. We found out we had problems two months later when: we started getting static on the phone and we traced it to the cabling. A visitor told us that the network security was ineffective and demonstrated this. We had to replace the server they recommended since the old one was inefficient for our company. Is there any standard way of recognizing a good system administrator? Are there any interview tests that we can give to weed out the poorly skilled ones? * You would think computer programmers could tell the good technical staff from the bad ones, but programming and system administration are two different fields.
Here are some ways to recognise a good system administrator. They are able to talk about previous systems they have administered in a way that makes sense to you, a technically-capable non-sysadmin. A good sysadmin needs to be able to communicate with other network users and see the big picture at the same time as being fully aware of all the details. If they can't explain in a structured and clear way what they did and why in a previous job, then they won't be able to explain to you their decision-making rationale when working for you. Basically, they should be able to talk all day about a specific system without ever losing their audience. They are obsessed with avoiding single points of failure. At any point when they are describing a system they administer(ed), stop them and ask " What could have gone wrong with this part of the system and how did you mitigate that risk? " Their answer should be detailed and show that they had thought it through carefully already. They should also be enthusiastic about answering that question, because good sysadmins love thinking about ways to avoid catastrophic failure. They have a healthy scepticism of the new, the cool and the untested. They are also hugely keen to trial new solutions and are always doing so. However, their standard toolbox is staid, safe and involves plenty of testing. They can remember times their systems have failed and answer five whys without having to think. Every sysadmin has made mistakes that led to downtime; the good ones have thought about both technical and systemic reasons why it happened. They document their systems with the same level of obsessiveness that a teenage diary writer documents her crushes. If possible, ask to look at their documentation for previous systems they have administered. I've no idea how to test this at interview, but they are calm in a crisis. Perhaps you could wait till they visit the lavatory, then jam the door and set off the fire alarm.
{ "source": [ "https://serverfault.com/questions/8462", "https://serverfault.com", "https://serverfault.com/users/1331/" ] }
8,492
Wikipedia says : Sneakernet is a tongue-in-cheek term used to describe the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, floppy disks, compact discs, USB flash drives, or external hard drives from one computer to another Has anyone actually used Sneakernet in their professional job? Is it a common practice or is this done rarely?
Nothing beats sneakernet when it comes to bandwidth -- I've achieved blazing speeds of 1.7 Gbps when carrying a 500 Gb hard drive to a machine 10 min away. However, latency sucks -- from 5 min in the same building up to 40h worldwide.
{ "source": [ "https://serverfault.com/questions/8492", "https://serverfault.com", "https://serverfault.com/users/1331/" ] }
8,526
What's the best approach towards determining if I have a rogue DHCP server inside my network? I'm wondering how most admins approach these kinds of problems. I found DHCP Probe through searching, and thought about trying it out. Has anyone had experience with it? (I would like to know before taking the time to compile it and install). Do you know any useful tools or best-practices towards finding rogue DHCP servers?
One simple method is to run a sniffer like tcpdump/wireshark on a computer and send out a DHCP request. If you see any offers other than from your real DHCP server, then you know you have a problem.
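A hedged sketch of doing exactly that; the interface name is a placeholder and the DHCP client command varies by distribution:

# Terminal 1: capture DHCP traffic (server port 67, client port 68)
tcpdump -i eth0 -n port 67 or port 68

# Terminal 2: release the lease and request a new one to trigger offers
dhclient -r eth0 && dhclient eth0

Every DHCPOFFER you see should come from your legitimate server's address; anything else is a rogue.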
{ "source": [ "https://serverfault.com/questions/8526", "https://serverfault.com", "https://serverfault.com/users/1840/" ] }
8,623
I need to recruit a proper linux guru, not someone who can just about spell it, a real big hitter to go off and recruit their own team. We're currently a big Windows house so I know the questions I need to ask to sort the wheat from the chaff in that area but I have no idea what questions to ask of a linux techie, nor what would be good answers. Do you have any questions I could ask - or should I just pay someone from an external consultancy to sit in on the interviews?
A Beginner: Has less than 4 years experience. Has to rely on binary packages for everything Has never seen an old kernel (i.e. only knows 2.6.x series) Hasn't figured out that the commands and directory locations are different in each distro; often, they only know of one they are starting out on, and can become confused when their environment has switched. Can't script common commands and often do everything manually. Needs assistance in performing diagnostics on a troubled system, although they function independently on lighter issues. Is still learning from others things that the "Seasoned" admins already know. Has a demeanor that is still "green' - they are self-assured (rightly so), but appear cocky to some. This can lead to friction with end-users, developers, and management. Troublesome end-users can often get them to do something that a seasoned admin would immediately deny. Developers don't have much to talk about with them, but may teach them a thing or two about scripting. Management usually wants someone more seasoned and will not bother them unless there are limited choices. They often do not have a complete picture of your core business and how it generates revenue, although they do understand procedural-level positions in the company. As such, they can identify the needs of regular staff throughout the company, but do not necessarily understand the interactions of all company units. These are the admins that start out in junior level positions. A (stereotypical) impression: "This person's got potential, they just need time to make it shine." A Seasoned Admin: Has 5+ years experience. Can download and compile tools/utilities/services, and can recompile a kernel Has seen older kernels (2.2 and 2.4 series) Can adapt to a different distro, or has experience in 2 or more distros. Can do simple scripting to automate tasks. Can perform diagnostics on their own, but require time to pinpoint the issue Can function on their own, but have no management experience, or limited supervisory experience; they often tutor and instruct junior-level admins. Has a demeanor that is "seasoned" - they are observant and reserved, but will always be pleasant without being technical. This leads to confidence when dealing with end-users, developers, and management, and ultimately, a deep-seated sense of trust that this person will "get the job done". End-users will usually consult these folks first, but troublemakers will sometimes attempt to "game the system" and get them to do something they wouldn't (although the admin will know better and deny it). Developers will consult with this person about common issues. Management will sometimes ask for special tasks to be performed (vetted, of course, through the Guru) and they will accomplish this to their satisfaction. They understand the core revenue model of your business, and how this inter-relates with other positions and procedures. They can design custom solutions around this knowledge, and can find ways to decrease operational expenses. They cannot, however, create new revenue sources. These are the admins the Guru will initially hire. Another stereotypical impression: "This person has been around the block, and has the war wounds to prove it. If my back was against the wall, I'd put my trust in them." A Guru: Has 9+ years experience. 
Can perform code-level customization of a kernel before recompile, either by reconfiguration or by writing new code Has seen very old kernels (2.0 or 1.3 series) Has experience with very difficult-to-install installations (Slackware prior to version 9, Gentoo, Linux From Scratch) Can do complex scripting, sometimes writing complete tools for other staffers. Immediately knows all potential causes of a problem and can look at each solution without additional diagnostics Has functioned in a supervisory or management capacity with at least one other person for at least 3 years. This means the person was hired and managed directly by them. Has a demeanor that borders on "happy but zen-like". They are quiet, focused, and have an uncanny means of knowing what to say and when, while putting everyone they talk to at ease. End-users often do not notice this person because they function well at what they do, yet troublemakers are quick to fear their presence; developers will consult with this person about difficult issues; and management trusts them with staffing and employment decisions. They have intricate knowledge of your business process, and how your company's cash flow interacts with capital outlays, staffing, and on-going maintenance. They can find creative ways to create new revenue sources within your business model. This is the person you want. Another (really bad) stereotype: "Grey beard, suspenders... they must be one of those all-knowing Unix admins!"
{ "source": [ "https://serverfault.com/questions/8623", "https://serverfault.com", "https://serverfault.com/users/1435/" ] }
8,654
I know what a proxy is, but I'm not sure what a reverse proxy is. It seems to me that it's probably akin to a load balancer. Is that correct?
A reverse proxy, also known as an "inbound" proxy is a server that receives requests from the Internet and forwards (proxies) them to a small set of servers, usually located on an internal network and not directly accessible from outside. It's "reverse", because a traditional ("outbound") proxy receives requests from a small set of clients on an internal network and forwards them to the Internet. A reverse proxy can be used to allow authenticated users access to an intranet even when they are located outside. Users on the internal network can access intranet servers directly (their IP address is their authentication), but users outside it must authenticate themselves to the proxy server (usually with a username and password) in order to be allowed in.
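As a concrete illustration, a hedged sketch of a reverse proxy configured in nginx; the hostname and the internal backend address are placeholders:

server {
    listen 80;
    server_name intranet.example.com;       # name users on the Internet connect to

    location / {
        proxy_pass http://10.0.0.5:8080;    # internal server, not reachable directly
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Authentication (for example auth_basic or client certificates) would typically be added at this proxy layer before requests are forwarded inside.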
{ "source": [ "https://serverfault.com/questions/8654", "https://serverfault.com", "https://serverfault.com/users/590/" ] }
8,706
Do you use an antistatic wrist strap when working on hardware? Do they really work? Have you ever fried some hardware that would have been saved if you had been wearing one? I know some people who wear them religiously and others who say they are a waste of time. What is the view of the community?
Here's where experience may lead you astray. Static discharges can create partial burn-throughs in those nanometer lines inside a chip. So the part may not fail immediately. But it can certainly cause premature failure down the line. So if your experience tells you that you've never "fried" any parts while working on them - what you don't know is how many will prematurely fail in the future. I always use the straps in a professional environment, but for low value home stuff I generally just ground myself.
{ "source": [ "https://serverfault.com/questions/8706", "https://serverfault.com", "https://serverfault.com/users/1852/" ] }
8,794
We all have them, let's share some stories about them. What kind of support person are you? Kudos to explanations of how you [un]civilly dealt with them! EDIT : Ok, let's hear some more. Really I just need to know I'm not alone today after dealing with some of my "favorite" people....
I once had a client go on a rampage when his internet failed. He grabbed me by the arm, pulled me into the server room, and started hollering about how he was going to tear everything out and throw it away and how we were the worst technical services people on the planet, etc. etc. Believe it or not, this ranting went on for the better part of a full hour. The man got a full head of steam. I itemized it on the invoice we sent him, "1 hour of insane uncontrollable ranting" at our highest rate ($90/hr). Remarkably, he actually paid the bill...
{ "source": [ "https://serverfault.com/questions/8794", "https://serverfault.com", "https://serverfault.com/users/1054/" ] }
8,855
I would like to add an Environment variable to a Windows machine (desktop or server) and be able to use it without rebooting that machine. Say you have a production server which hosts a variety of apps and a new app needs a particular Environment variable to run. You do not want to reboot it while users are connected to your other apps. What choices do you have? I don't like the wait-until-a-good-time-to-reboot option. There must be a better way. What am I missing?
Changes to environment variables should take effect immediately, if you make the change via the main Properties dialog for the computer in question (go to My Computer | Properties | Advanced | Environment Variables). After the changes are saved, Explorer broadcasts a WM_SETTINGCHANGE message to all windows to inform them of the change. Any programs spawned via Explorer after this should get the updated environment, although already-running programs will not, unless they handle the setting change message. I'm not able to tell from your problem description what specific problem you're having with this. Can you tell us more about the specific scenario that isn't working? This KB article may also be of use: How to propagate environment variables to the system
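If you want to script the change rather than click through the dialog, setx writes the variable to the registry and broadcasts the same setting-change message; a hedged sketch (the variable name and value are placeholders):

:: User-level variable
setx MY_APP_HOME "D:\apps\myapp"

:: Machine-wide variable (requires an elevated prompt)
setx MY_APP_HOME "D:\apps\myapp" /M

Note that the console window you run setx in keeps its old environment; only processes started afterwards (via Explorer or a new console) pick up the new value.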
{ "source": [ "https://serverfault.com/questions/8855", "https://serverfault.com", "https://serverfault.com/users/47003/" ] }
8,860
I know how to export/import the databases using mysqldump & that's fine but how do I get the privileges into the new server. For extra points, there are a couple of existing databases on the new one already, how do I import the old servers privileges without nuking the couple existing of ones. Old server: 5.0.67-community New server: 5.0.51a-24+lenny1 EDIT: I've got a dump of the db 'mysql' from the Old Server & now want to know the proper way to merge with the 'mysql' db on the New Server. I tried a straight 'Import' using phpMyAdmin and ended up with an error regarding a duplicate (one that I've already migrated manually). Anyone got an elegant way of merging the two 'mysql' databases?
Do not mess with the mysql db. There is a lot more going on there than just the users table. Your best bet is the " SHOW GRANTS FOR" command. I have a lot of CLI maintenance aliases and functions in my .bashrc (actually my .bash_aliases that I source in my .bashrc). This function: mygrants() { mysql -B -N $@ -e "SELECT DISTINCT CONCAT( 'SHOW GRANTS FOR \'', user, '\'@\'', host, '\';' ) AS query FROM mysql.user" | \ mysql $@ | \ sed 's/\(GRANT .*\)/\1;/;s/^\(Grants for .*\)/## \1 ##/;/##/{x;p;x;}' } The first mysql command uses SQL to generate valid SQL which is piped to the second mysql command. The output is then piped through sed to add pretty comments. The $@ in the command will allow you to call it as: mygrants --host=prod-db1 --user=admin --password=secret You can use your full unix tool kit on this like so: mygrants --host=prod-db1 --user=admin --password=secret | grep rails_admin | mysql --host=staging-db1 --user=admin --password=secret That is THE right way to move users. Your MySQL ACL is modified with pure SQL.
{ "source": [ "https://serverfault.com/questions/8860", "https://serverfault.com", "https://serverfault.com/users/1576/" ] }
8,917
At my work, backups have a surprisingly low priority. The backup strategy was implemented a while ago, and since then it's just assumed the backups are fine. If you ask the sysadmins, they'll say everything is backed up. But then, when you ask for a SPECIFIC backup, half the time they are not there: The disk got full The tape failed Looks like someone disabled the backup job The network connection had downtime We ordered that disk years ago, but finance hasn't approved the purchase order The files are corrupt File contains wrong database Only transaction log backups (useless without a full one) A few weeks ago, disaster came real close as one of the servers lost one too many raid disks. Luckily one disk was still kind enough to copy the data, if you tried a lot of times. But even after that near-disaster, I can't seem to convince the sysadmins to improve the situation. So I'm wondering, any tips for opening people's eyes? It seems to me we're walking along the edge of a cliff.
You always have to get these things fixed from the top. Is the current backup strategy backed by and understood by management? If not, it's useless. The executive management needs to know about the problems and what risks are involved (losing financial data that you need to bring out legally to survive, or customer data that has taken years to collect?) and weigh that in deciding on actions, or deciding on letting someone (like you) take action. If you can't get to management, try business controllers or other financial positions where data retrieval and its integrity is of high importance to the company's reports. They in turn can "start the storm" if needed...
{ "source": [ "https://serverfault.com/questions/8917", "https://serverfault.com", "https://serverfault.com/users/2657/" ] }
8,981
My router has two protocols (and a "both" option) that I can select when setting up port forwarding: UDP and TCP. What is the difference between these two protocols and when would you select one over the other in port forwarding?
TCP is backed by acks and retries to make sure your data gets where it's going. UDP is connectionless and "fire and forget". UDP is mostly used for streaming-type applications, where if you lose some data you don't need to try to send it again. Which one you use depends on the application. For example, a web server uses TCP.
{ "source": [ "https://serverfault.com/questions/8981", "https://serverfault.com", "https://serverfault.com/users/1671/" ] }