source_id | question | response | metadata
---|---|---|---|
8,997 | What are the merits of buying SAS drives over SATA drives, or vice versa? | SAS=SCSI=manageability, especially under load and also better prefailure diagnostics and tuning capability. Spendy and low capacity/£$€. SATA=value, capacity and adequate performance for many loads but be aware that 99%+ of SATA drives aren't designed to work 24/7/365 under duress. Also putting them under busy server workloads can dramatically affect their MTBF. I'd recommend SATA for everything but server and top-end workstation work. You really can't beat SAS for DB work overall. | {
"source": [
"https://serverfault.com/questions/8997",
"https://serverfault.com",
"https://serverfault.com/users/2318/"
]
} |
9,038 | I have a scheduled task that starts a batch script that runs robocopy every hour. Every time it runs a window pops up on the desktop with robocopy's output, which I don't really want to see. I managed to make the window appear minimized by making the scheduled job run cmd /c start /min mybat.bat but that gives me a new command window every hour. I was surprised by this, given cmd /c "Carries out the command specified by string and then terminates" - I must have misunderstood the docs. Is there a way to run a batch script without it popping up a cmd window? | You could run it silently using a Windows Script file instead. The Run method allows you to run a script in invisible mode. Create a .vbs file like this one: Dim WinScriptHost
Set WinScriptHost = CreateObject("WScript.Shell")
WinScriptHost.Run Chr(34) & "C:\Scheduled Jobs\mybat.bat" & Chr(34), 0
Set WinScriptHost = Nothing and schedule it. The second argument in this example sets the window style. 0 means "hide the window." Complete syntax of the Run method : object.Run(strCommand, [intWindowStyle], [bWaitOnReturn]) Arguments: object: WshShell object. strCommand: String value indicating the command line you want to run. You must include any parameters you want to pass to the executable file. intWindowStyle: Optional. Integer value indicating the appearance of the program's window. Note that not all programs make use of this information. bWaitOnReturn: Optional. Boolean value indicating whether the script should wait for the program to finish executing before continuing to the next statement in your script. If set to true, script execution halts until the program finishes, and Run returns any error code returned by the program. If set to false (the default), the Run method returns immediately after starting the program, automatically returning 0 (not to be interpreted as an error code). | {
"source": [
"https://serverfault.com/questions/9038",
"https://serverfault.com",
"https://serverfault.com/users/2281/"
]
} |
9,050 | On Windows, how do you refresh the hosts file without rebooting? | You don't need to reboot. Any changes you make to the hosts file are immediate. You used to need to reboot for changes to take effect in Windows 9x. That is no longer the case. However, you may need to restart any applications that do internal hostname or DNS caching, such as web browsers. | {
"source": [
"https://serverfault.com/questions/9050",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
9,244 | The Ubuntu wiki page on FakeRaid says the following: [A] number of hardware products ...
claim to be IDE or SATA RAID
controllers... Virtually none of these
are true hardware RAID controllers.
Instead, they are simply multi-channel
disk controllers combined with special
BIOS configuration options... Is there a typical way to identify (from a product specification) whether a motherboard has "real" RAID, or are "real" RAID products generally unavailable to consumers? | The market for RAID controllers is fairly much consolidated these days. Three broad brush heuristics can be applied: Price Take a look at the pricing for genuine RAID cards from Areca, 3Ware, Adaptec and LSI. Anything that is much, much cheaper than these controllers is a 'fake RAID'. Remember, if it's too good to be true it probably isn't. Manufacturer There are a fairly limited number of manufacturers these days who actually make true hardware RAID controllers. Chances are that something not made by one of the main manufacturers of such kit is a 'fake RAID'. The main outfits that make RAID controllers are: Adaptec , LSI , Areca , Intel and Highpoint (possibly one or two others that I can't recall off the top of my head). Specifications The main outfits that produce RAID cards/controllers will also document the specifications in some detail on their web sites. If you can't find a detailed specification for the card get something you can get such a spec for. Note that not all cards produced by these outfits are necessarily RAID controllers, but the specs on the web site should make this clear. Batteries Thanks to sh-beta for pointing this out: Pretty much any hardware RAID controller worth buying will also have the option of a battery backed cache. 'Fake RAID' controllers have no cache RAM, using the machine's main RAM as a cache. Note that IBM, Dell, HP and other server manufacturers also sell RAID controllers. In many cases these are rebadged components made by Adaptec or LSI. If you want to buy a RAID controller on the cheap, identify some specific models of appropriate specification from various manufacturers' current and immediately previous generations. Then search for that particular model on ebay and get it secondhand. | {
"source": [
"https://serverfault.com/questions/9244",
"https://serverfault.com",
"https://serverfault.com/users/100/"
]
} |
9,325 | I'm using a service which stores data on disk.
The service is running as "local system account". Where is the stored data for that system user? I'm thinking about C:\Documents and Settings\Default User but I'm not sure about that. Can someone confirm that? | The data you are looking for should not, by default, be located in "C:\Documents and Settings\Default User". That is the location of the default user profile, which is the template for new user profiles. Its only function is to be copied to a new folder for use as a user profile when a user logs onto the computer for the first time. If the service is following Microsoft's guidelines, it will be storing data in
the application data folder (%APPDATA%) or the local application data folder (%LOCALAPPDATA% on Windows Vista and later). It should not use the My Documents or Documents folders, but you might want to check there as well. On a typical installation of Windows XP or Windows Server 2003, check the following locations for application data for programs running as Local System (NT AUTHORITY\SYSTEM): C:\Windows\system32\config\systemprofile\Application Data\ Vendor \ Program C:\Windows\system32\config\systemprofile\Local Settings\Application Data\ Vendor \ Program C:\Windows\system32\config\systemprofile\My Documents On a typical installation of Windows Vista and later versions, check the following locations for application data for programs running as Local System (NT AUTHORITY\SYSTEM): C:\Windows\system32\config\systemprofile\AppData\Roaming\ Vendor \ Program C:\Windows\system32\config\systemprofile\AppData\Local\ Vendor \ Program C:\Windows\system32\config\systemprofile\AppData\LocalLow\ Vendor \ Program C:\Windows\system32\config\systemprofile\Documents Of course, substitute the appropriate vendor name and program name for Vendor and Program . [Edit - for bricelam]
For 32 bit processes running on 64 bit windows, it would be in SysWOW64 . C:\Windows\SysWOW64\config\systemprofile\AppData | {
"source": [
"https://serverfault.com/questions/9325",
"https://serverfault.com",
"https://serverfault.com/users/117/"
]
} |
9,428 | Is there a good command line utility to monitor hard disk load on linux? Something like top but then monitoring disk activity i.s.o. cpu usage. More specifically, I suspect that for some (heavy load) servers after several optimizations on various parts of the program(s) that run on it, right now the bottleneck is simply the logging to files on the disk. But I find it very difficult to assess how much traffic the servers can handle. My ideal tool would be something that prints "You're using 35% of your disk bandwidth right now". Any ideas? | You can get a pretty good measure of this using the iostat tool. % iostat -dx /dev/sda 5
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.78 11.03 1.19 2.82 72.98 111.07 45.80 0.13 32.78 1.60 0.64 The disk utilisation is listed in the last column. This is defined as Percentage of CPU time during which I/O requests were issued to the device
(band-width utilization for the device). Device saturation
occurs when this value is close to 100%. | {
"source": [
"https://serverfault.com/questions/9428",
"https://serverfault.com",
"https://serverfault.com/users/44392/"
]
} |
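As a quick illustration of the iostat approach above, the following one-liner samples only the %util column; the device name and interval are placeholders, and the awk field assumes the extended (-x) layout shown above, where %util is the last column.
iostat -dx /dev/sda 5 | awk '/^sda/ { print $NF }'   # one utilisation figure per 5-second interval
For a per-process view, the iotop package is a useful companion (iotop -o shows only processes currently doing I/O).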
9,490 | I am responsible for managing both our production server (mail, web, database are all on one server) and our test server. Both are built on Debian. However as I am very new to system administration, I have only been installing updates as I come across things that have to be updated so that I can have newer features and get bug fixes. It's a pretty ad hoc process right now, and I'd like to make it less so. So I am wondering how people who know what they're doing handle this? How often do you perform upgrades on your servers? Is the upgrade process different between test and production? Do you always upgrade any test servers first? And do you do a full update of all software, or do you just install selected updates? | I run apt-get update -qq; apt-get upgrade -duyq daily. This will check for updates, but not do them automatically. Then I can run the upgrades manually while I am watching, and can correct anything that might go wrong. Besides the security concerns of maintaining a patched system, I find that if I leave it too long between patches, I end up with a whole bunch of packages that want to be upgraded, and that scares me a lot more than just upgrading one or two every week or so. Therefore I tend to run my upgrades weekly, or if they are high priority, daily. This has the added advantage of knowing which package broke your system (i.e. if you're only upgrading a couple at a time). I always upgrade less critical systems first. I also have a "rollback plan" in place in case I can't fix the system. (since most of our servers are virtual, this rollback plan usually consists of taking a snapshot before the upgrade that I can revert to if necessary) That being said, I think an upgrade has broken something only once or twice in the past 4 years, and that was on a highly customized system - so you don't have to be TOO paranoid :) | {
"source": [
"https://serverfault.com/questions/9490",
"https://serverfault.com",
"https://serverfault.com/users/2882/"
]
} |
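A sketch of turning the routine above into a daily cron job that only reports what is pending; the schedule, the recipient address and the assumption that a local mail command is configured are all placeholders.
# /etc/cron.d/apt-report (assumed path): mail a simulated upgrade run every morning at 06:00
0 6 * * * root apt-get update -qq && apt-get -s upgrade | mail -s "pending upgrades on $(hostname)" admin@example.com
The -s flag keeps it a dry run, so the actual upgrade is still done by hand while watching, as the answer recommends.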
9,499 | This question may vary between distros but, in general, what are the advantages/disadvantages of using a hard or soft mount in the UNIX world? Are there certain situations where one is more beneficial or are the uses fairly universal? | A hard mount is generally used for block resources like a local disk or SAN. A soft mount is usually used for network file protocols like NFS or CIFS. The advantage of a soft mount is that if your NFS server is unavailable, the kernel will time out the I/O operation after a pre-configured period of time. The disadvantage is that if your NFS driver caches data and the soft mount times out, your application may not know which writes to the NFS volumes were actually committed to disk. | {
"source": [
"https://serverfault.com/questions/9499",
"https://serverfault.com",
"https://serverfault.com/users/2664/"
]
} |
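To make the hard/soft trade-off above concrete, here is a minimal sketch of both mount styles for an NFS export; the server name, export path and mount point are placeholders.
# soft mount: the kernel gives up and returns an I/O error once timeo/retrans are exhausted
mount -t nfs -o soft,timeo=100,retrans=3 nfsserver:/export/data /mnt/data
# hard mount: the client retries indefinitely; intr (on older kernels) lets you interrupt stuck processes
mount -t nfs -o hard,intr nfsserver:/export/data /mnt/data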
9,546 | Are there any filename or path length limits on Linux? | See the Wikipedia page about file systems comparison , especially in column Maximum filename length . Here are some filename length limits in popular file systems: BTRFS 255 bytes
exFAT 255 UTF-16 characters
ext2 255 bytes
ext3 255 bytes
ext3cow 255 bytes
ext4 255 bytes
FAT32 8.3 (255 UCS-2 code units with VFAT LFNs)
NTFS 255 characters
XFS 255 bytes | {
"source": [
"https://serverfault.com/questions/9546",
"https://serverfault.com",
"https://serverfault.com/users/2099/"
]
} |
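Rather than memorising the table above, you can ask the kernel which limits apply to a given mounted filesystem; the path is a placeholder.
getconf NAME_MAX /srv/data   # longest single filename component, in bytes (255 on most Linux filesystems)
getconf PATH_MAX /srv/data   # upper bound on a relative pathname under that directory (typically 4096)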
9,708 | I am responsible for maintaining two Debian servers. Every time I have to do anything with security certificates, I Google for tutorials and beat away until it finally works. However, in my searches I often come across different file formats ( .key , .csr , .pem ) but I've never been able to find a good explanation of what each file format's purpose is. I was wondering if the good folks here at ServerFault could provide some clarification on this matter? | SSL has been around for long enough you'd think that there would be agreed upon container formats. And you're right, there are. Too many standards as it happens. In the end, all of these are different ways to encode Abstract Syntax Notation 1 (ASN.1) formatted data — which happens to be the format x509 certificates are defined in — in machine-readable ways. .csr - This is a Certificate Signing Request. Some applications can generate these for submission to certificate-authorities. The actual format is PKCS10 which is defined in RFC 2986 . It includes some/all of the key details of the requested certificate such as subject, organization, state, whatnot, as well as the public key of the certificate to get signed. These get signed by the CA and a certificate is returned. The returned certificate is the public certificate (which includes the public key but not the private key), which itself can be in a couple of formats. .pem - Defined in RFC 1422 (part of a series from 1421 through 1424 ) this is a container format that may include just the public certificate (such as with Apache installs, and CA certificate files /etc/ssl/certs ), or may include an entire certificate chain including public key, private key, and root certificates. Confusingly, it may also encode a CSR (e.g. as used here ) as the PKCS10 format can be translated into PEM. The name is from Privacy Enhanced Mail (PEM) , a failed method for secure email but the container format it used lives on, and is a base64 translation of the x509 ASN.1 keys. .key - This is a (usually) PEM formatted file containing just the private-key of a specific certificate and is merely a conventional name and not a standardized one. In Apache installs, this frequently resides in /etc/ssl/private . The rights on these files are very important, and some programs will refuse to load these certificates if they are set wrong. .pkcs12 .pfx .p12 - Originally defined by RSA in the Public-Key Cryptography Standards (abbreviated PKCS), the "12" variant was originally enhanced by Microsoft, and later submitted as RFC 7292 . This is a password-protected container format that contains both public and private certificate pairs. Unlike .pem files, this container is fully encrypted. Openssl can turn this into a .pem file with both public and private keys: openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes A few other formats that show up from time to time: .der - A way to encode ASN.1 syntax in binary, a .pem file is just a Base64 encoded .der file. OpenSSL can convert these to .pem ( openssl x509 -inform der -in to-convert.der -out converted.pem ). Windows sees these as Certificate files. By default, Windows will export certificates as .DER formatted files with a different extension. Like... .cert .cer .crt - A .pem (or rarely .der) formatted file with a different extension, one that is recognized by Windows Explorer as a certificate, which .pem is not. .p7b .keystore - Defined in RFC 2315 as PKCS number 7, this is a format used by Windows for certificate interchange. 
Java understands these natively, and often uses .keystore as an extension instead. Unlike .pem style certificates, this format has a defined way to include certification-path certificates. .crl - A certificate revocation list. Certificate Authorities produce these as a way to de-authorize certificates before expiration. You can sometimes download them from CA websites. In summary, there are four different ways to present certificates and their components: PEM - Governed by RFCs, used preferentially by open-source software because it is text-based and therefore less prone to translation/transmission errors. It can have a variety of extensions (.pem, .key, .cer, .cert, more) PKCS7 - An open standard used by Java and supported by Windows. Does not contain private key material. PKCS12 - A Microsoft private standard that was later defined in an RFC that provides enhanced security versus the plain-text PEM format. This can contain private key and certificate chain material. Its used preferentially by Windows systems, and can be freely converted to PEM format through use of openssl. DER - The parent format of PEM. It's useful to think of it as a binary version of the base64-encoded PEM file. Not routinely used very much outside of Windows. I hope this helps. | {
"source": [
"https://serverfault.com/questions/9708",
"https://serverfault.com",
"https://serverfault.com/users/2882/"
]
} |
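A few openssl one-liners that complement the format overview above when you need to inspect or convert these containers; all file names are placeholders.
openssl x509 -in certificate.pem -noout -text        # dump a PEM certificate
openssl req -in request.csr -noout -text             # dump a certificate signing request
openssl rsa -in private.key -noout -check            # sanity-check an RSA private key
openssl x509 -inform der -in certificate.der -out certificate.pem    # DER to PEM, as mentioned above
openssl pkcs12 -export -in certificate.pem -inkey private.key -out bundle.p12   # PEM pair to PKCS#12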
9,742 | I soon will have a folder with thousands of files, each file on the order of a few KB. I will need to transfer these across a Windows network from one UNC share to another. In general, is it faster to simply copy the files over en masse, or would it be faster to zip them up (e.g., using 7zip in fastest mode) and send one or a few large files? Or is there no difference in practice? | It is faster to transfer a single large file instead of lots of little files because of the overhead of negotiating the transfer. The negotiation is done for each file, so transferring a single file it needs to be done once, transferring n files means it needs to be done n times. You will save yourself a lot of time if you zip first before the transfer. | {
"source": [
"https://serverfault.com/questions/9742",
"https://serverfault.com",
"https://serverfault.com/users/3075/"
]
} |
9,766 | This is a Canonical Question about System Administration Careers When I start my job as System Administrator, what basics skills should I know/learn? Are there any key differences for Network, Storage, Database, and other Administrators? | There is a lot of overlap with existing questions, I am creating a wiki here with links. Please feel free to update. How to make (and restore) backups! Customer service skills Troubleshooting Your troubleshooting rules, approach to troubleshooting? Etiquette of Troubleshooting Problems In The Workspaces Of Others How to respond when there is a crisis What’s your checklist for when everything blows up? The OSI Model, and IP networking. What is the OSI model and how does it apply to todays networks? Practical implications of OSI vs TCP/IP networking. How does Subnetting Work? What is the difference between a port and a socket? What are routers, hubs, and switches? What is the difference between UDP and TCP? How to document their network How are you documenting your work, processes and environment? How to ask for help in a way that will get you useful results. How To Ask Questions The Smart Way How to ask a question Security How to respond to a compromised system How to use the CLI Useful Commandline Commands on Windows Useful Commandline Commands on Linux Useful Commandline Commands on Mac OS How to monitor the systems you will be responsible for Also see What makes a “Good” or “Great” Administrator? Cheat Sheets for System Administrators? What tools should you absolutely know as a Windows/Linux Sysadmin? What should every sysadmin know before administrating a public server? What sysadmin things should every programmer know? What is the single most influential book every sysadmin should read? | {
"source": [
"https://serverfault.com/questions/9766",
"https://serverfault.com",
"https://serverfault.com/users/3083/"
]
} |
9,822 | Given this example folder structure: /folder1/file1.txt
/folder1/file2.djd
/folder2/file3.txt
/folder2/file2.fha How do I do a recursive text search on all *.txt files with grep from "/"? ( "grep -r <pattern> *.txt" fails when run from "/", since there are no .txt files in that folder.) | My version of GNU Grep has a switch for this: grep -R --include='*.txt' $Pattern Described as follows: --include=GLOB Search only files whose base name matches GLOB (using wildcard matching as described under --exclude). | {
"source": [
"https://serverfault.com/questions/9822",
"https://serverfault.com",
"https://serverfault.com/users/1814/"
]
} |
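If the installed grep is too old for --include, a find/grep combination gives the same result; the pattern and the starting directory are placeholders.
find / -type f -name '*.txt' -exec grep -l 'Pattern' {} + 2>/dev/null   # list matching .txt files, hiding permission errors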
9,948 | I have been bitten several times by the 'debian-sys-maint' user that is installed by default on the mysql-server packages installed from the Ubuntu repositories. Generally what happens is I pull a fresh copy of our production database (which is not running on Debian/Ubuntu) for troubleshooting or new development and forget to exclude the mysql.user table hence losing the debian-sys-maint user. If we add new mysql users for whatever reason, I have to 'merge' these into my development environment as opposed to just overlaying the table. Without the user my system still seems functional, but plagued with errors such as: sudo /etc/init.d/mysql restart
Stopping MySQL database server: mysqld...failed.
error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)' What is debian-sys-maint used for? Is there a better way for the package maintainers to do what they're trying to do? What is the easiest way to restore it after I've lost it? What is the correct/minimal set of privileges for this user? Seems like poor idea to 'grant all privileges on *.* ...' Edit Additional question - Is the password in /etc/mysql/debian.cnf already hashed or is this the plaintext password? It matters when you go to recreate the user and I never seem to get it right on the first try. Thanks | What is debian-sys-maint used for? One major thing it is used for is telling the server to roll the logs. It needs at least the reload and shutdown privilege. See the file /etc/logrotate.d/mysql-server It is used by the /etc/init.d/mysql script to get the status of the server. It is used to gracefully shutdown/reload the server. Here is the quote from the README.Debian * MYSQL WON'T START OR STOP?:
=============================
You may never ever delete the special mysql user "debian-sys-maint". This user
together with the credentials in /etc/mysql/debian.cnf are used by the init
scripts to stop the server as they would require knowledge of the mysql root
users password else. What is the easiest way to restore it after I've lost it? The best plan is to simply not lose it. If you really lose the password, reset it, using another account. If you have lost all admin privileges on the mysql server follow the guides to reset the root password, then repair the debian-sys-maint . You could use a command like this to build a SQL file that you can use later to recreate the account. mysqldump --complete-insert --extended-insert=0 -u root -p mysql | grep 'debian-sys-maint' > debian_user.sql Is the password in
/etc/mysql/debian.cnf already hashed The password is not hashed/encrypted when installed, but new versions of mysql now have a way to encrypt the credentials (see: https://serverfault.com/a/750363 ). | {
"source": [
"https://serverfault.com/questions/9948",
"https://serverfault.com",
"https://serverfault.com/users/1544/"
]
} |
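To close the loop on the mysqldump backup shown above, restoring the saved rows and reloading the grant tables looks roughly like this (the dump file name follows the example above).
mysql -u root -p mysql < debian_user.sql    # re-insert the debian-sys-maint grant rows
mysql -u root -p -e 'FLUSH PRIVILEGES;'     # make the restored privileges take effect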
9,977 | Is there a Windows equivalent of Unix "whoami" command? If so, what is it? | Since Windows 2000, the whoami command has been part of the standard command line (thanks to pk for clearing that up in comments!). You can do this: Open a command prompt and type "set" then hit enter. This shows active environment variables. Current logged on username is stored in the USERNAME env variable and your domain is stored in the USERDOMAIN variable. To piggy-back off the other answers, from a cmd line: echo %USERDOMAIN%\%USERNAME% will get you the complete logged on user in domain\username format. You can do the same thing with Powershell with this: write-host $env:userdomain\$env:username | {
"source": [
"https://serverfault.com/questions/9977",
"https://serverfault.com",
"https://serverfault.com/users/1687/"
]
} |
9,992 | I am running apache2 on Debian etch, with multiple virtual hosts. I want to redirect so that http://git.example.com goes to http://git.example.com/git/ Should be really simple, but Google isn't quite cutting it. I've tried the Redirect and Rewrite stuff and they don't quite seem to do what I want ... | Feel a bit silly - a bit more googling turned up the answer I was after: RedirectMatch ^/$ /git/ Basically redirecting the root, and only the root. This code could go in a .htaccess file (there is a tag for this, so I assume that is the original use case). But if you can edit the main server apache config, then put it in the section for your website, probably inside a <VirtualHost> section. The docs for RedirectMatch say that the context can be "server config, virtual host, directory, .htaccess". | {
"source": [
"https://serverfault.com/questions/9992",
"https://serverfault.com",
"https://serverfault.com/users/629/"
]
} |
10,027 | This page contains a prominent warning: Important Never build RPMS as root. Why is it bad to build RPMs as root? Is it the possibility of overwriting some files? Are there file permissions problems? | Badly written RPM .spec files (or even well-written ones with a typo) can do improper things such as: Install directly to the running system instead of to a sandbox Leave junk on the filesystem Accidentally run nasty commands such as: rm -rf ${RPM_BUILD_ROOT} There is no part of the RPM build process that actually needs root access. So, we should follow the standard procedure of "If it doesn't need root permission, it doesn't run as root" when building RPMs. This avoids nasty accidents and surprises. | {
"source": [
"https://serverfault.com/questions/10027",
"https://serverfault.com",
"https://serverfault.com/users/2099/"
]
} |
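A minimal sketch of building as an unprivileged user instead, assuming the rpmdevtools package is available (otherwise a hand-written ~/.rpmmacros setting %_topdir does the same job); the account and spec file names are placeholders.
useradd -m builder                       # once, as root: a dedicated, unprivileged build account
su - builder
rpmdev-setuptree                         # creates ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
rpmbuild -ba ~/rpmbuild/SPECS/mypackage.spec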
10,116 | I have a few production Fedora and Debian webservers that host our sites as well as user shell accounts (used for git vcs work, some screen+irssi sessions, etc). Occasionally a new kernel update will come down the pipeline in yum / apt-get , and I was wondering if most of the fixes are severe enough to warrant a reboot, or if I can apply the fixes sans reboot. Our main development server currently has 213 days of uptime, and I wasn't sure if it was insecure to run such an older kernel. | There is nothing really special about having a long uptime. It is generally better to have a secure system. All systems need updates at some point. You are probably already applying updates, do you schedule outages when you apply those updates? You probably should just in case something goes wrong. A reboot shouldn't take that much time really. If your system is so sensitive to outages, you probably should be thinking about some kind of clustering setup so you update a single member of the cluster without bringing everything down. If you are not sure about a particular update it is probably safer to schedule a reboot and apply it (preferably after testing it on another similar system). If you are interested in learning whether the update is important, take time to read the security notice, and follow the links back to the CVE or the posts/lists/blogs describing the issue. This should help you decide if the update directly applies in your case. Even if you don't think it applies you should still consider updating your system eventually. Security is a layered approach. You should assume at some point in time those other layers may fail. Also, you might forget you have a vulnerable system because you skipped an update when you change the configuration at some later point in time. Anyway, if you want to ignore or wait for a while on an update on Debian-based systems you can put the package on hold. I personally like to put holds on all the kernel packages just in case. CLI method to set a hold on a package on Debian-based systems. dpkg --get-selections | grep 'linux-image' | sed -e 's/install/hold/' | sudo dpkg --set-selections | {
"source": [
"https://serverfault.com/questions/10116",
"https://serverfault.com",
"https://serverfault.com/users/880/"
]
} |
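On newer Debian/Ubuntu releases the same hold can be expressed with apt-mark; deriving the package name from uname -r is the usual convention but still an assumption about your naming scheme.
apt-mark hold linux-image-$(uname -r)    # pin the currently running kernel image
apt-mark showhold                        # list everything currently held
apt-mark unhold linux-image-$(uname -r)  # release it again when you are ready to upgrade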
10,285 | We have various passwords that need to be known to more than one person in our company. For example, the admin password to our internet routers, the password for our web-host, and also a few "non-IT" passwords like safe codes. Currently, we use an ad hoc system of "standard passwords" for low-value systems, and verbal sharing of passwords for more important/potentially damaging systems. I think most people would agree that this is not a good system. What we would like is a software solution for storing "shared" passwords, with access for each limited to the people who actually need it. Ideally, this would prompt, or enforce, periodic password changes. It should also be able to indicate who has access to a particular password ( e.g. , who knows the root password for server XYZ?) Can you suggest any software solutions for storing and sharing passwords? Is there anything particular to be wary of? What is the common practise in small-medium sized companies for this? | I face this problem every time I go to a new startup. First thing I do is make a couple of "Password safes" with a program like this one (or one of its derivatives): http://passwordsafe.sourceforge.net/ Set strong combinations and throw them up on a network share. Segment by area of responsibility... central infrastructure, production servers, dev/QA, etc. Once there's enough momentum, and assuming I have the proper Windows environment dependencies, I like to move everyone to this: http://www.clickstudios.com.au/passwordstate.html It has features for both shared and personal credentials. | {
"source": [
"https://serverfault.com/questions/10285",
"https://serverfault.com",
"https://serverfault.com/users/334/"
]
} |
10,326 | S.M.A.R.T. (for Self-Monitoring Analysis and Reporting Technology) is a wonderful technology to detect hard drive failure before it really happens. But is S.M.A.R.T. relevant for SSDs? | Yes, they have it, and yes, it's useful. Flash drives do develop errors over time, usually in the form of bad flash blocks - not unlike bad sectors in regular hard drives. Just like regular hard drives, the drive controller keeps track of these bad blocks and re-maps them to 'extra' blocks that were saved for this purpose. Whenever the computer requests data from a bad block, the controller intercepts it and gives it the correct data from the re-mapped block. Eventually you'll run out of extra blocks and will start getting real errors, at which time you'll need to replace the drive - S.M.A.R.T. will keep you on top of this so you can take care of it before you start losing data. The one major advantage SSDs have over regular drives in this is that the extra blocks in a regular drive require head seeks to another track, so as the drive ages it gets slower. In an SSD the remapping is done almost transparently, and so no additional time is wasted seeking to the remapped block and then seeking back to read the rest of the data. | {
"source": [
"https://serverfault.com/questions/10326",
"https://serverfault.com",
"https://serverfault.com/users/117/"
]
} |
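To actually read those remapping and wear counters, smartmontools works the same way for SSDs as for spinning disks; the device name is a placeholder and attribute names vary between vendors.
smartctl -a /dev/sda                                   # full SMART identity, attributes and error log
smartctl -A /dev/sda | grep -iE 'wear|realloc|spare'   # attributes that signal a drive nearing its spare-block limit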
10,328 | I have been unable to discover a way to determine what processors/CPUs/sockets are present in a PC/Server. Any suggestions? | On Linux, cat /proc/cpuinfo lists every logical CPU the kernel sees, lscpu summarises sockets, cores per socket and threads per core, and dmidecode -t processor (run as root) reports the physical sockets from the BIOS/SMBIOS tables, including unpopulated ones. On Windows, msinfo32 (System Information) or the wmic cpu command from a command prompt will list the installed processors. | {
"source": [
"https://serverfault.com/questions/10328",
"https://serverfault.com",
"https://serverfault.com/users/3018/"
]
} |
10,353 | What is the sticky bit in a UNIX file system? As an admin when and how would you use it? | Its original use was to provide a hint to the OS that the executable should be cached in memory so it would load faster. This use has mostly been deprecated as OSes are pretty smart about this sort of thing now. In fact, I think now some OSes use it as a hint that the executable shouldn’t be cached. The most common use today is to create a directory in which anyone can create a file, but only the owner of a file in that directory can delete it. Traditionally, if you have a directory that anyone can write to, anyone can also delete a file from it. setting the sticky bit on a directory makes it so only the owner of a file can delete the file from a world-writeable directory. The classic use of this is the /tmp directory: $ ls -ld /tmp
drwxrwxrwt 29 root root 5120 May 20 09:15 /tmp/ The t in the mode there is the sticky bit. If that wasn’t set, it would be pretty easy for a regular user to cause havoc by deleting everything from /tmp . Since lots of daemons put sockets in /tmp , it would essentially be a local DOS. | {
"source": [
"https://serverfault.com/questions/10353",
"https://serverfault.com",
"https://serverfault.com/users/2664/"
]
} |
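A short illustration of setting and checking the sticky bit on a shared directory, mirroring the /tmp example above; the path is a placeholder.
mkdir -p /srv/shared
chmod 1777 /srv/shared   # world-writeable, but only a file's owner (or root) may delete or rename entries
ls -ld /srv/shared       # the mode should now end in 't': drwxrwxrwt
(chmod +t achieves the same thing on a directory that is already world-writeable.)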
10,437 | I used to have the caps lock and control swapped in GNOME, but when I upgraded to Ubuntu 9.04 I also changed my desktop environment to Xfce. I have the following line in my xorg.conf: Option "XkbOptions" "ctrl:nocaps" But that doesn't seem to make a difference to Xfce. Any ideas? | I ended up removing the "XkbOptions" line from my xorg.conf, and adding this to Xfce's autostart: /usr/bin/setxkbmap -option "ctrl:nocaps" It turns the caps lock key into an additional Ctrl, which does the trick for me. If you wanted a straight swap, I believe "ctrl:swapcaps" would work. For what it's worth, this page is a fairly decent guide: http://manicai.net/comp/swap-caps-ctrl.html I haven't had a chance to try the other methods yet, but I also have a netbook with a slightly funky layout, and I might need to muck around with it a bit. | {
"source": [
"https://serverfault.com/questions/10437",
"https://serverfault.com",
"https://serverfault.com/users/1180/"
]
} |
10,475 | Is there a rule of thumb for how much space to leave free on a hard disk? I used to hear you should leave at least 5% free to avoid fragmentation. [I know the answer depends on usage (eg: video files vs text), size of disk, RAID level, disk format, disk size - but as it's impractical to ask 100 variations of the same question, any information is welcome] | You generally want to leave about 10% free to avoid fragmentation, but there is a catch. Linux, by default, will reserve 5% of the disk for the root user. When you use 'df', the output doesn't include that 5% if you run it as a non-root user. Just something to keep in mind when doing your calculations. Incidentally, you can change the root reserve by using tune2fs. For example tune2fs -m 2 /dev/hda1 will set the root reserve at 2%. Generally this is not recommended of course, unless you have a very specific purpose in mind. | {
"source": [
"https://serverfault.com/questions/10475",
"https://serverfault.com",
"https://serverfault.com/users/2318/"
]
} |
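To see how large the root reserve described above actually is on an ext2/3/4 filesystem, compare the reserved and total block counts; the device name is a placeholder.
tune2fs -l /dev/sda1 | grep -i 'block count'   # prints "Block count" and "Reserved block count"
df -h /                                        # the free-space figure non-root users actually get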
10,518 | I've got a box running Win2k3 and some directions from Microsoft KB about SSL certificates, for IIS 5.0 and 6.0. How can I tell which version of IIS is currently installed? | As a more general answer, not specifically aimed at your question, Microsoft has a support article which lists all old versions and the operating systems that provide each one. IIS version Built-in
5.0 Windows 2000
5.1 Windows XP Pro
6.0 Windows Server 2003
7.0 Windows Vista and Windows Server 2008
7.5 Windows 7 and Windows Server 2008 R2
8.0 Windows 8 and Windows Server 2012 Current versions are on Wikipedia 8.5 Windows 8.1 and Windows Server 2012 R2
10.0 v1607 Windows Server 2016 and Windows 10.*
10.0 v1709 Windows Server 2016 v1709 and Windows 10.*
10.0 v1809 Windows Server 2019 and Windows 10 (October 2018 Update) | {
"source": [
"https://serverfault.com/questions/10518",
"https://serverfault.com",
"https://serverfault.com/users/919/"
]
} |
10,543 | I hear that you can now create soft links in Vista too . So, what is the difference between a soft (symbolic) link and a hard link on UNIX/Linux/Vista? Are there advantages of using one over the other? Or do they just serve two distinct purposes? | A hard link traditionally shares the same file system structures (inode in unixspeak), while a soft-link is a pathname redirect. Hardlinks must be on the same filesystem, softlinks can cross filesystems. Hardlinked files stay linked even if you move either of them (unless you move one to another file system triggering the copy-and-delete mechanism). Softlinked files break if you move the target (original), and sometimes when you move the link (Did you use an absolute or relative path? Is it still valid?). Hardlinked files are co-equal, while the original is special in softlinks, and deleting the original deletes the data. The data does not go away until all hardlinks are deleted. Softlinks can point at any target, but most OS/filesystems disallow hardlinking directories to prevent cycles in the filesystem graph (with the exception of the . and .. entries in unix directories which are hard links). Softlinks can require special support from filesystem walking tools. Read up on readlink (2) . (Some details brought back to mind by mat1t . Thanks.) | {
"source": [
"https://serverfault.com/questions/10543",
"https://serverfault.com",
"https://serverfault.com/users/2664/"
]
} |
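A quick shell demonstration of the inode-sharing behaviour described above; the file names are placeholders.
echo data > original
ln original hardlink                 # hard link: a second name for the same inode
ln -s original softlink              # soft link: a pathname that points at "original"
ls -li original hardlink softlink    # original and hardlink share an inode number and show a link count of 2
rm original
cat hardlink                         # still prints "data": the inode survives until its last name is gone
cat softlink                         # fails: the symlink now dangles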
10,590 | Our network gave an error that there was an IP address conflict and I'd like to find what all the device IP addresses are. (I've also had need of that before). (update/clarification:I'm looking for a Windows-based too.) Any suggestions? I've read suggestions for various tools (Look@Lan, Angry IP Scanner) and I'm looking for suggestions from people who have used these or other tools. | Using nmap to do a sweep of the subnet is one quick and simple way to do this that I've used before, the various options will allow you to do a more detailed inspection also. | {
"source": [
"https://serverfault.com/questions/10590",
"https://serverfault.com",
"https://serverfault.com/users/2181/"
]
} |
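A minimal example of such a sweep; the subnet is a placeholder, and -sn (spelled -sP on older nmap releases) restricts it to host discovery without a port scan. The nmap Windows build behaves the same way, which fits the Windows-based requirement in the question.
nmap -sn 192.168.1.0/24   # list every host that answers on the subnet
arp -a                    # afterwards, the local ARP cache maps those IPs to MAC addresses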
10,604 | I have a .net application I want to respond to .htm and .html requests (in addition to .aspx). I know how to do this in IIS6, but not in IIS7. Someone please enlighten me! Thanks, Kyle | In IIS 7 this is done with handler mappings rather than the IIS 6 script map dialog. In IIS Manager, select the site or application, open the Handler Mappings feature and add mappings for *.htm and *.html: for an application pool running in Classic mode, add a script map pointing at the same aspnet_isapi.dll that already handles .aspx; for Integrated mode, add a managed handler that uses System.Web.UI.PageHandlerFactory. The equivalent mappings can also be declared in the application's web.config under <system.webServer><handlers>, which keeps the change with the application rather than the server configuration. | {
"source": [
"https://serverfault.com/questions/10604",
"https://serverfault.com",
"https://serverfault.com/users/3255/"
]
} |
10,852 | What kernel parameter or other settings control the maximum number of TCP sockets that can be open on a Linux server? What are the tradeoffs of allowing more connections? I noticed while load testing an Apache server with ab that it's pretty easy to max out the open connections on the server. If you leave off ab's -k option, which allows connection reuse, and have it send more than about 10,000 requests then Apache serves the first 11,000 or so requests and then halts for 60 seconds. A look at netstat output shows 11,000 connections in the TIME_WAIT state. Apparently, this is normal. Connections are kept open a default of 60 seconds even after the client is done with them for TCP reliability reasons . It seems like this would be an easy way to DoS a server and I'm wondering what the usual tunings and precautions for it are. Here's my test output: # ab -c 5 -n 50000 http://localhost/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
apr_poll: The timeout specified has expired (70007)
Total of 11655 requests completed Here's the netstat command I run during the test: # netstat --inet -p | grep "localhost:www" | sed -e 's/ \+/ /g' | cut -d' ' -f 1-4,6-7 | sort | uniq -c
11651 tcp 0 0 localhost:www TIME_WAIT -
1 tcp 0 1 localhost:44423 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44424 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44425 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44426 SYN_SENT 7831/ab
1 tcp 0 1 localhost:44428 SYN_SENT 7831/ab | I finally found the setting that was really limiting the number of connections: net.ipv4.netfilter.ip_conntrack_max . This was set to 11,776 and whatever I set it to is the number of requests I can serve in my test before having to wait tcp_fin_timeout seconds for more connections to become available. The conntrack table is what the kernel uses to track the state of connections so once it's full, the kernel starts dropping packets and printing this in the log: Jun 2 20:39:14 XXXX-XXX kernel: ip_conntrack: table full, dropping packet. The next step was getting the kernel to recycle all those connections in the TIME_WAIT state rather than dropping packets. I could get that to happen either by turning on tcp_tw_recycle or increasing ip_conntrack_max to be larger than the number of local ports made available for connections by ip_local_port_range . I guess once the kernel is out of local ports it starts recycling connections. This uses more memory tracking connections but it seems like the better solution than turning on tcp_tw_recycle since the docs imply that that is dangerous. With this configuration I can run ab all day and never run out of connections: net.ipv4.netfilter.ip_conntrack_max = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_orphan_retries = 1
net.ipv4.tcp_fin_timeout = 25
net.ipv4.tcp_max_orphans = 8192
net.ipv4.ip_local_port_range = 32768 61000 The tcp_max_orphans setting didn't have any effect on my tests and I don't know why. I would think it would close the connections in TIME_WAIT state once there were 8192 of them but it doesn't do that for me. | {
"source": [
"https://serverfault.com/questions/10852",
"https://serverfault.com",
"https://serverfault.com/users/2084/"
]
} |
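Two checks that pair with the tuning above; the sysctl names are version-dependent (newer kernels expose them as net.netfilter.nf_conntrack_count and nf_conntrack_max instead of the net.ipv4.netfilter.ip_conntrack_* names used here).
sysctl net.ipv4.netfilter.ip_conntrack_count net.ipv4.netfilter.ip_conntrack_max   # how full is the table right now?
sysctl -w net.ipv4.netfilter.ip_conntrack_max=32768    # apply immediately; add the line to /etc/sysctl.conf to persist it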
10,854 | Is there a way to share configuration directives across two nginx server {} blocks? I'd like to avoid duplicating the rules, as my site's HTTPS and HTTP content are served with the exact same config. Currently, it's like this: server {
listen 80;
...
}
server {
listen 443;
ssl on; # etc.
...
} Can I do something along the lines of: server {
listen 80, 443;
...
if(port == 443) {
ssl on; #etc
}
} | You can combine this into one server block like so: server {
listen 80;
listen 443 default_server ssl;
# other directives
} Official How-To | {
"source": [
"https://serverfault.com/questions/10854",
"https://serverfault.com",
"https://serverfault.com/users/584/"
]
} |
10,856 | Can someone recommend me, free if possible, subversion client for Vista? | tortoiseSVN is very good. | {
"source": [
"https://serverfault.com/questions/10856",
"https://serverfault.com",
"https://serverfault.com/users/3301/"
]
} |
10,955 | I've been looking at Linux tuning params and see some configs where SACK is turned off. Can anyone explain this? This would be tuning for a busy web server. | A basic TCP ACK says "I received all bytes up to X." Selective ACK allows you to say "I received bytes X-Y, and V-Z." So, for instance, if a host sent you 10,000 bytes and bytes 3000-5000 were lost in transit, ACK would say "I got everything up to 3000." The other end would have to send bytes 3001-10000 again. SACK could say "I got 1000-2999, and 5001-10000" and the host would just send the 3000-5000. This is great over a high bandwidth, lossy (or high delay) link. The problem is that it can cause severe performance issues in specific circumstances. Normal TCP ACKs will make the server treat a high-bandwidth, lossy connection with kid gloves (send 500 bytes, wait, send 500 bytes, wait, etc). SACK lets it adapt to the high delay because it knows exactly how many packets were actually lost. Here is where bad things can happen. An attacker can force your server to keep a massive retransmission queue for a long time, then process that whole damn thing over and over and over again. This can peg the CPU, eat up RAM, and consume more bandwidth than it should. In a nutshell, a lightweight system can initiate a DoS against a beefier server. If your server is robust and doesn't serve large files, you're pretty well insulated against this. If you're mostly serving an intranet or other low-latency group of users, SACK buys you nothing and can be turned off for security reasons with no performance loss. If you're on a low-bandwidth link (say 1Mbps or less as a completely arbitrary rule of thumb), SACK can cause problems in normal operations by saturating your connection and should be turned off. Ultimately, it's up to you. Consider what you're serving, to whom, from what, and weigh the degree of your risk against the performance effects of SACK. There is a great overview of SACK and its vulnerability here. | {
"source": [
"https://serverfault.com/questions/10955",
"https://serverfault.com",
"https://serverfault.com/users/2507/"
]
} |
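If, after weighing the trade-offs above, you do decide to disable SACK, it is a single sysctl (shown for illustration rather than as a recommendation).
sysctl -w net.ipv4.tcp_sack=0                        # turn selective ACKs off immediately
echo 'net.ipv4.tcp_sack = 0' >> /etc/sysctl.conf     # keep the setting across reboots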
10,985 | Are IP addresses with a 0 in the last octet valid? 10.6.43.0 In my case, I have the the following netmask 255.255.252.0 What about a 0 for the other octets? | It depends on the subnet of the IP address in question. In general, the first and last addresses in a subnet are used as the network identifier and broadcast address, respectively. All other addresses in the subnet can be assigned to hosts on that subnet. For example, IP addresses of networks with subnet masks of at least 24 bits ending in .0 or .255 can never be assigned to hosts. Such "last" addresses of a subnet are considered "broadcast" addresses and all hosts on the corresponding subnet will respond to it. Theoretically, there could be situations where you can assign an address ending in .0: for example, if you have a subnet like 192.168.0.0/255.255.0.0, you are allowed to assign a host the address 192.168.1.0. It could create confusion though, so it's not a very common practice. In your example 10.6.43.0 with subnet 255.255.252.0 (22 bit subnet mask) means subnet ID 10.6.40.0, a host address range from 10.6.40.1 to 10.6.43.254 and a broadcast address 10.6.43.255. So in theory, your example 10.6.43.0 would be allowed as a valid host address. | {
"source": [
"https://serverfault.com/questions/10985",
"https://serverfault.com",
"https://serverfault.com/users/2427/"
]
} |
11,028 | I can use log analyzers, but often I need to parse recent web logs to see what's happening at the moment. I sometimes do things like to figure out top 10 ips that request a certain file cat foo.log | grep request_to_file_foo | awk '{print $1}' | sort -n | uniq -c | sort -rn | head What do you have in your toolbox? | You can do pretty much anything with apache log files with awk alone. Apache log files are basically whitespace separated, and you can pretend the quotes don't exist, and access whatever information you are interested in by column number. The only time this breaks down is if you have the combined log format and are interested in user agents, at which point you have to use quotes (") as the separator and run a separate awk command. The following will show you the IPs of every user who requests the index page sorted by the number of hits: awk -F'[ "]+' '$7 == "/" { ipcount[$1]++ }
END { for (i in ipcount) {
printf "%15s - %d\n", i, ipcount[i] } }' logfile.log $7 is the requested url. You can add whatever conditions you want at the beginning. Replace the '$7 == "/" with whatever information you want. If you replace the $1 in (ipcount[$1]++), then you can group the results by other criteria. Using $7 would show what pages were accessed and how often. Of course then you would want to change the condition at the beginning. The following would show what pages were accessed by a user from a specific IP: awk -F'[ "]+' '$1 == "1.2.3.4" { pagecount[$7]++ }
END { for (i in pagecount) {
printf "%15s - %d\n", i, pagecount[i] } }' logfile.log You can also pipe the output through sort to get the results in order, either as part of the shell command, or also in the awk script itself: awk -F'[ "]+' '$7 == "/" { ipcount[$1]++ }
END { for (i in ipcount) {
printf "%15s - %d\n", i, ipcount[i] | sort } }' logfile.log The latter would be useful if you decided to expand the awk script to print out other information. It's all a matter of what you want to find out. These should serve as a starting point for whatever you are interested in. | {
"source": [
"https://serverfault.com/questions/11028",
"https://serverfault.com",
"https://serverfault.com/users/2307/"
]
} |
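The answer above mentions switching the field separator to a double quote for user agents but does not show it; here is one sketch for the combined log format, where the user agent is the sixth quote-delimited field (the log file name is a placeholder).
awk -F'"' '{ ua[$6]++ } END { for (u in ua) printf "%7d %s\n", ua[u], u }' logfile.log | sort -rn | head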
11,119 | Inspired by this question , I ask the reverse: how much about programming do system administrators need to know? More specifically, what programming tools are useful for a sysadmin to have? | Version Control . Be able to generate, read and apply patches. Know how to use a version control system that presents repository wide versions and why you want one. Know how to write descriptive changelogs and why you want them. Know how to search a repository's logs for keywords and time frames. Scripting . Do something once and be on your way. Do it twice or more, do it once then write a script. Debugging . Know how to read a stack trace and how to report relevant errors to your software support contact. Finding the error is nice and helpful, but knowing how to fix it can take a lot of investment in reading the code. Do the part that's easy for you, and let them do the part that's easy for them. Testing . Monitor continuously and log errors. Used in conjunction with version control and testing, you have a strong idea of what may have gone wrong when and what changed around then. Monitor both production and preproduction. Peer Review . Propose and review changes to production systems. Test on preproduction, determine exactly what needs to be done and record what services may be affected for how long. Don't let Change Management degrade into political battles of bureaucratic power. Study Cryptography . A modern system administrator is in charge of network resources; adding security as a final step is somewhere between impossible and a very expensive proposition. Understanding public key cryptography, password handling practices, and encryption in general will be extremely valuable. | {
"source": [
"https://serverfault.com/questions/11119",
"https://serverfault.com",
"https://serverfault.com/users/2567/"
]
} |
11,122 | I am trying to get an install of the new TFS 2010 beta so I can demo it to my co-workers (on Tuesday). I am not really a systems person, so I did not know that Windows Server 2008 R2 is a Release Candidate only. I thought it was the current version. I have it installed on a VM and have SQL Server 2008 Installed. I am working on Sharepoint now and I am realizing that a lot of the software out there needs special versions to work with the Windows Server 2008 R2 RC. So, my question is: Am I better off starting over with Windows Server 2008 or pressing on with Windows Server 2008 R2 RC? Will TFS 2010 even run on Windows 2008 R2 RC? Thanks for any responses. | Version Control . Be able to generate, read and apply patches. Know how to use a version control system that presents repository wide versions and why you want one. Know how to write descriptive changelogs and why you want them. Know how to search a repository's logs for keywords and time frames. Scripting . Do something once and be on your way. Do it twice or more, do it once then write a script. Debugging . Know how to read a stack trace and how to report relevant errors to your software support contact. Finding the error is nice and helpful, but knowing how to fix it can take a lot of investment in reading the code. Do the part that's easy for you, and let them do the part that's easy for them. Testing . Monitor continuously and log errors. Used in conjunction with version control and testing, you have a strong idea of what may have gone wrong when and what changed around then. Monitor both production and preproduction. Peer Review . Propose and review changes to production systems. Test on preproduction, determine exactly what needs to be done and record what services may be affected for how long. Don't let Change Management degrade into political battles of bureaucratic power. Study Cryptography . A modern system administrator is in charge of network resources; adding security as a final step is somewhere between impossible and a very expensive proposition. Understanding public key cryptography, password handling practices, and encryption in general will be extremely valuable. | {
"source": [
"https://serverfault.com/questions/11122",
"https://serverfault.com",
"https://serverfault.com/users/3299/"
]
} |
11,145 | When I was working in our server room, I noticed that it was very cold. I know that the server room has to be cold to offset the heat of the servers, but perhaps it is TOO cold. What is an appropriate temperature to keep our server room at? | Recommendations on server room temperature vary greatly. This guide says that: General recommendations suggest that you should not go below 10°C (50°F) or above 28°C (82°F). Although this seems a wide range these are the extremes and it is far more common to keep the ambient temperature around 20-21°C (68-71°F). For a variety of reasons this can sometimes be a tall order. This discussion on Slashdot has a variety of answers but most of them within the range quoted above. Update : As others have commented below, Google recommends 26.7°C (80°F) for data centres. Also the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has recently updated their recommended temperature range to be from 18°C-27°C (64.4°F-80.6°F). However this article again highlights that there is still no consensus on the subject. As mentioned in the article I would highlight that: ...nudging the thermostat higher may also leave less time to recover from a cooling failure, and is only appropriate for companies with a strong understanding of the cooling conditions in their facility. IMO most companies would not have such a strong understanding of cooling conditions and thus it would be safer in a small business environment to be running the rooms a little cooler. NB: It is important to note there are a lot more factors to consider in a server/data room than just the temperature; air flow & humidity, for example, are also important concerns. | {
"source": [
"https://serverfault.com/questions/11145",
"https://serverfault.com",
"https://serverfault.com/users/126900/"
]
} |
11,320 | Command line and scripting is dangerous. Make a little typo with rm -rf and you are in a world of hurt. Confuse prod with stage in the name of the database while running an import script and you are boned (if they are on the same server, which is not good, but happens). Same for noticing too late that the server name where you sshed is not what you thought it was after running some commands. You have to respect the Hole Hawg . I have a few little rituals before running risky commands - like doing a triple take check of the server I'm on. Here's an interesting article on rm safety . What little rituals, tools and tricks keep you safe on the command line? And I mean objective things, like "first run ls foo*, look at the output of that and then substitute ls with rm -rf to avoid running rm -rf foo * or something like that", not "make sure you know what the command will do". | One that works well is using different background colors on your shell for prod/staging/test servers. | {
"source": [
"https://serverfault.com/questions/11320",
"https://serverfault.com",
"https://serverfault.com/users/2307/"
]
} |
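A minimal bash version of that trick, assuming a colour-capable terminal: put something like this in root's .bashrc on the production box (41 is a red background, 97 bright white text), and use a calmer colour on staging and test so the difference is obvious at a glance.
PS1='\[\e[41;97m\][PROD] \u@\h:\w\$\[\e[0m\] '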
11,410 | Now that I have started the Software Update service on my Leopard Server, how do I change my client Macs to check for updates on it? | Defaults The simplest method is to run a defaults command on the client Macs (easily pushed via Apple Remote Desktop): defaults write com.apple.SoftwareUpdate CatalogURL 'HTTP_URL_FOR_CATALOG' for a user. If you run it via sudo it will set it for whenever you use softwareupdate as root. The HTTP_URL_FOR_CATALOG has been changed with Mac OS X 10.6. If you use MCX it will automatically pick the new catalog - however if doing it manually the following URLs need to be used for whichever client version is in question: Mac OS X 10.4: http://mysus.example.com:8088/index.sucatalog Mac OS X 10.5: http://mysus.example.com:8088/index-leopard.merged-1.sucatalog.sucatalog Mac OS X 10.6: http://mysus.example.com:8088/index-leopard-snowleopard.merged-1.sucatalog Mac OS X 10.7: http://mysus.example.com:8088/index-lion-snowleopard-leopard.merged-1.sucatalog Mac OS X 10.8: index-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog To double check this applied you can run the following command: /usr/libexec/PlistBuddy -c Print /Library/Preferences/com.apple.SoftwareUpdate.plist and /usr/libexec/PlistBuddy -c Print ~/Library/Preferences/com.apple.SoftwareUpdate.plist to see what settings are for the computer and user appropriately. If this is working correctly when running Software Update (GUI) you should see the server address appear in parenthesis in the title of the window. MCX Another alternative is to use Workgroup Manager to manage the preferences via MCX from your server. This can be done for users, or for computers if they are bound to your Open Directory. If you are using 10.5 Server or newer: you can simply use the Software Update section under Preferences. Manually: Choose the accounts, computers, or groups to have the preference applied to. Click on Preferences, and then the Details tab Press the Add... button and navigate to /Library/Preferences/com.apple.SoftwareUpdate.plist Press Edit... Under Often, add a New Key and enter the name CatalogURL Make sure the type is string and then enter your SUS URL (eg. http://mysus.example.com:8088/index.sucatalog or if using 10.6: http://mysus.examle.com:8088/ - see above from the defaults section) Press Apply Now, then Done. Once users/computers have refreshed their MCX settings (usually the next login or restart) the new settings will take over. If this is working correctly when running Software Update (GUI) you should see the server address appear in parenthesis in the title of the window. | {
"source": [
"https://serverfault.com/questions/11410",
"https://serverfault.com",
"https://serverfault.com/users/2318/"
]
} |
11,540 | Since Linux has a lot of useful tools, while Windows has a lot of apps (like Chrome), instead of buying another machine to run Linux, is there a way to run it as a Virtual Machine on the PC? The Ubuntu installation CD-ROM doesn't seem to have such an option. | Lots of options here: Tools Only If you just want the GNU/Linux tools, there are a few choices. cygwin gives you a bash shell with lots of tools, including an X11 server. This has been around awhile and is mature. msys is a smaller, lightweight alternative to cygwin. GNU utilities for Win32 is another lightweight alternative. These are native versions of the tools, as opposed to cygwin which requires a cygwin DLL to fake out its tools into thinking they are running on Linux. UWIN is a set of Unix tools/libraries from ATT Research that run on Windows. SUA is Microsoft's Subsystem for UNIX-based Applications, offering a tools and an environment for building/running Unix programs under Windows. Linux in a Windows Process There are several packages that will run Linux as a Windows process, without simulating an entire PC as virtualization does. They use Cooperative Linux , a.k.a. coLinux, which is limited to 32-bit systems. These don't have the overhead of virtualizing, and they start up faster since you're not booting a virtual PC. This is a little more on the experimental side and may not be as stable as some of the virtualization options. Portable Ubuntu andLinux Virtualization Virtualization software lets you boot up another OS in a virtual PC, one that shares hardware with the host OS. This is pretty tried-and-true. There are nice options here for taking snapshots of your Virtual PC in a particular state, suspend/resume a virtual PC, etc. It's nice to be able to experiment with a virtual PC, add a few packages, then revert to a previous snapshot and "start clean". VMWare VirtualBox VirtualPC Dual Booting wubi allows you to install Ubuntu right from Windows, then dual-boot. Not as convenient as the above, since you can't run both OS's at once. | {
"source": [
"https://serverfault.com/questions/11540",
"https://serverfault.com",
"https://serverfault.com/users/4612/"
]
} |
11,550 | Tools like top and ps can give me the amount of memory currently allocated to a process, but I am interested in measuring the maximum amount of memory allocated to a process either since its creation or in a given time interval. Any suggestions on how to find out? | You can get the peak memory usage of a certain process with: grep VmPeak /proc/$PID/status (Change $PID to the actual process id you're looking for). VmPeak is the maximum amount of memory the process has used since it was started. In order to track the memory usage of a process over time, you can use a tool called munin to track, and show you a nice graph of the memory usage over time. Munin comes with many default plugins to track system resources, however it doesn't come with a plugin to track peak memory usage - fortunately, it's extremely easy to write a plugin for it. Here's an example of a munin plugin to track VmPeak, VmRSS and VmSize memory usage for the apache process. You can change this to suit your needs (just point to the right PID file and change the component name as needed). The graph it outputs looks like this (VmPeak and VmSize are the same in this example, so you only see one of them): Note: this only monitors the main apache process, and doesn't show the memory usage of its child processes. #!/bin/bash
#
# Parameters:
#
# config (required)
# autoconf (optional - used by munin-config)
#
COMPONENT_NAME="Apache"
COMPONENT_PID_FILE="/var/run/apache2.pid"
if [ "$1" = "autoconf" ]; then
if [ -r /proc/stat ]; then
echo yes
exit 0
else
echo "no (/proc/stat not readable)"
exit 1
fi
fi
if [ "$1" = "config" ]; then
echo "graph_title $COMPONENT_NAME memory usage"
echo 'graph_vlabel'
echo "graph_category Processes"
echo "graph_info This graph shows the amount of memory used by the $COMPONENT_NAME processes"
echo "${COMPONENT_NAME}_vmpeak.label $COMPONENT_NAME VmPeak"
echo "${COMPONENT_NAME}_vmsize.label $COMPONENT_NAME VmSize"
echo "${COMPONENT_NAME}_vmrss.label $COMPONENT_NAME VmRSS"
echo 'graph_args --base 1024'
exit 0
fi
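# check_memory: read the PID from the given pidfile and print the VmPeak,
# VmSize and VmRSS fields from /proc/<pid>/status (reported by the kernel in
# kB, converted to bytes below) in the "fieldname.value" format munin expects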
check_memory ()
# $1 - PID location
# $2 - process_label
{
pid_location=$1
process_label=$2
read pid < $pid_location
procpath="/proc/$pid/status"
if [ ! -e $procpath ] || [ -z $pid ]
then
echo "${process_label}_vmpeak.value 0"
echo "${process_label}_vmsize.value 0"
echo "${process_label}_vmrss.value 0"
exit 0
fi
VmPeak=`grep VmPeak /proc/$pid/status|awk '{print $2}'`
VmSize=`grep VmSize /proc/$pid/status|awk '{print $2}'`
VmRSS=`grep VmRSS /proc/$pid/status|awk '{print $2}'`
echo "${process_label}_vmpeak.value $(( $VmPeak * 1024 ))"
echo "${process_label}_vmsize.value $(( $VmSize * 1024 ))"
echo "${process_label}_vmrss.value $(( $VmRSS * 1024 ))"
}
check_memory $COMPONENT_PID_FILE $COMPONENT_NAME | {
"source": [
"https://serverfault.com/questions/11550",
"https://serverfault.com",
"https://serverfault.com/users/1085/"
]
} |
11,659 | I am installing a Debian server which is connected directly to the Internet. Obviously I want to make it as secure as possible. I would like you guys/gals to add your ideas to secure it and what programs you use for it. I want part of this question to cover what you use as a firewall. Just iptables manually configured or do you use some kind of software to aid you? What's the best way? Block everything and allow only what is needed? Are there maybe good tutorials for beginners to this topic? Do you change your SSH port? Do you use software like Fail2Ban to prevent brute-force attacks? | Obligatory: installation of system with expert mode, only packages that I need hand written firewall with default policy on iptables' input: drop, permitting access to SSH, HTTP or whatever else given server is running Fail2Ban for SSH [ and sometimes FTP / HTTP / other - depending on context ] disable root logins, force using normal user and sudo custom kernel [ just old habit ] scheduled system upgrade Depending on level of paranoia additionally: drop policy on output except a couple of allowed destinations / ports integrit for checking if some parts of file system were not modified [with checksum kept outside of the machine], for example Tripwire scheduled scan at least with nmap of system from the outside automated log checking for unknown patterns [but that's mostly to detect hardware malfunction or some minor crashes] scheduled run of chkrootkit immutable attribute for /etc/passwd so adding new users is slightly more difficult /tmp mounted with noexec port knocker or other non-standard way of opening SSH ports [e.g. visiting 'secret' web page on web server allows incoming SSH connection for a limited period of time from an IP address that viewed the page. If you get connected, -m state --state ESTABLISHED takes care of allowing packet flow as long as you use a single SSH session] Things I do not do myself but make sense: grsecurity for kernel remote syslog so logs cannot be overwritten when system gets compromised alerting about any SSH logins configure rkhunter and set it up to run from time to time | {
"source": [
"https://serverfault.com/questions/11659",
"https://serverfault.com",
"https://serverfault.com/users/1131/"
]
} |
11,670 | What are the advantages of using .msi files over regular setup.exe files? I have the impression that deployment is easier on machines where users have few permissions, but not sure about the details. What features does msiexec.exe have that makes deployment more easy than using setup.exe scenarios? Any tips or tricks when deploying .msi applications? | Just a few benefits: Can be advertised (so that on demand installation could take place). Like advertisement, features can be installed as soon as the user tries to use them. State management is maintained so Windows Installer provides a way to let administrators see if an application is installed on a machine. Ability to roll back if an installation fails. I think to when I'm deploying software in an enterprise setting: deploying software via MSI is almost enjoyable. In contrast, I almost always find myself dreading deploying software when it's in another container. For some additional info on manipulating MSI installations, type msiexec into the Run dialog. | {
"source": [
"https://serverfault.com/questions/11670",
"https://serverfault.com",
"https://serverfault.com/users/1078/"
]
} |
11,736 | From GNU less manpage -i or --ignore-case Causes searches to ignore case; that is, uppercase and lowercase are considered identical. This option is ignored if any uppercase letters appear in the search pattern; in other words, if a pattern contains uppercase letters, then that search does not ignore case. -I or --IGNORE-CASE Like -i, but searches ignore case even if the pattern contains uppercase letters. This is a great way of searching in GNU less, while ignoring case sensitivity. However, you must know in advance that you'd like to search while ignoring case sensitivity and indicate it in the command line. vim solves this problem by letting the user specify \c before a search, to indicate that the pattern should be searched while ignoring case sensitivity. Is there a way to do the same in less (without specifying -I in the command line)? | You can set it from within less by typing -i and then doing the normal search procedure. Have a look in the help for less by pressing h | {
"source": [
"https://serverfault.com/questions/11736",
"https://serverfault.com",
"https://serverfault.com/users/1134/"
]
} |
11,739 | What performance tips can be offered to someone running a LAMP server? In the instance that something is Distribution specific, I'm targeting Debian. | It really depends on your workload. for the L part get a lot of memory, if you can go over 4GB, go 64bit. for partitions where your content, logs and MySQL data are use mount options: noatime, nodiratime. use separate physical drives / raid sets, ideally keep SQL data, logs, content you serve - each on separate spindle. for the A part of your stack - well maybe you want to replace it completely with nginx or lighthttpd , or maybe just leave Apache for dynamic content and have separate server (like those two or mathopd ) for static content. Take a look here for more options. If you are going to run both Apache and another server at the same box, a 2nd IP address will be handy. To decrease latency for the end-user use http/1.1 with keep-alive. Consider using a CDN for static content. for the M part of your lamp - take a look at mysqlperformanceblog . from the top of my head: log slow queries, give enough memory, consider using innodb. if you have a lot of text to search across - use sphinx and have a batch job that rebuilds the index. consider killing queries that run longer than XYZ seconds. It's better to upset 1% of users than to bring the whole site down at the peak time. But that really depends if you process cash transactions or show nice pictures. use memcached if you can, to cache result of more 'expensive' SQL queries. Keep in mind to invalidate the cache when you change content of SQL. On the other hand I have quite few sites where all data fits in memory comfortably and for that MySQL is blazing fast and there is no need of additional cache. for P set execution timeout for scripts. consider using some PHP accelerator / opcode cache. I was quite satisfied with xcache , but I don't use it now. if you have CPU intensive processing - cache results and store them in SQL or memcached Not really a performance tip, but do take offsite backups. Really. | {
"source": [
"https://serverfault.com/questions/11739",
"https://serverfault.com",
"https://serverfault.com/users/51157/"
]
} |
11,745 | I've just installed SQL Server 2008 and I cannot connect to it with SQL Server Management Studio. My intention is to just let my local windows user be authenticated but I am not totally sure how to do that . In the connect dialog I'm writing: Server type: Database Engine
Server name: (local)
Authentication: Windows Authentication My first question would be if that is what I should connect to? When I try to connect I get this error message: TITLE: Connect to Server
------------------------------
Cannot connect to (local).
------------------------------
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection
to SQL Server. The server was not found or was not accessible. Verify that the instance
name is correct and that SQL Server is configured to allow remote connections.
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
(Microsoft SQL Server, Error: 2)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476
------------------------------
BUTTONS:
OK
------------------------------ I went to the URL there displayed and it just basically says "be sure SQL server is running". I think it is but I am not totally sure. I've disabled the Windows Firewall (this is Windows 7 7100 x86). I've also changed the log on system in the SQL Server Configuration Manager but it seems it's not a problem of logging in but not even be able to open the socket to it. On that same tool I've enabled all the protocols on "SQL Server Network Configuration" -> "Protocols for SQLEXPRESS" with no luck. I run out of ideas. What else can I try? | Ok, can you open your services console and scroll down to S for SQL Server. You should now see the services. Please ensure SQL Server (SQLEXPRESS) is running and then try .\SQLEXPRESS instead of (local). So as per your example: Server type: Database Engine
Server name: .\SQLEXPRESS
Authentication: Windows Authentication Hope this helps Update: These instructions are because I assume you are running Express Edition not Dev/Std/Ent edition of SQL Server Try ensuring the appropriate protocols are enabled: Start the SQL Configuration Manager (ie: Start->Programs->SQL Server->Configuration Tools) Expand the SQL native Client configuration Click Client Protocols (you may have a 32-bit and a 64-bit, apply to both) Ensure Shared memory, TCP/IP, Named Pipes are enabled in that order Expand SQL Server Network Configuration Ensure Shared Memory for either SQLEXPRESS and/or MSSQLSERVER is enabled Click SQL Server Services Restart any running services You should now be able to login to the instance If you find you cannot login at all you may need to follow these instructions to get SQL Server into single user mode. See here for the full instructions from Microsoft. By default, sqlservr.exe is located at C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn.
If a second instance of SQL Server is installed, a second copy of sqlservr.exe is located in a directory such as C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\binn. You can start one instance of SQL Server by using sqlservr.exe from a different instance, but SQL Server will start the version of the incorrect instance as well, including service packs, which may lead to unexpected results. To avoid this, use the MS-DOS change directory (cd) command to move to the correct directory before starting sqlservr.exe, as shown in the following example. cd \Program Files\Microsoft SQL Server\MSSQL10_50.1\MSSQL\Binn To start the default instance of SQL Server in single-user mode from a command prompt From a command prompt, enter the following command: sqlservr.exe -m Single-user mode can be useful for performing emergency maintenance when you do not want other users to connect to SQL Server, but any user can become the single user, including the SQL Server Agent service. You should now be able to login to the instance and add yourself to the security tab and grant full access. Alternate Method: THere is a script here that claims to add the current user to the SQL Server sysadmin role. This may work in single user mode but I have not verified it | {
"source": [
"https://serverfault.com/questions/11745",
"https://serverfault.com",
"https://serverfault.com/users/2563/"
]
} |
11,746 | When I install SQL Server 2008 Express in prompts me to create an instance and aborts if I don't. Then I see that information in an entry in Sql Server Configuration Manager on SQL Server Services. What is a SQL Server instance? | An SQL Server instance is a complete SQL server and you can install many instances on a machine but you can have only 1 default instance. An SQL Server instance has its own copy of the server files, databases and security credentials. This url may help you | {
"source": [
"https://serverfault.com/questions/11746",
"https://serverfault.com",
"https://serverfault.com/users/2563/"
]
} |
11,807 | What's the difference between a switch, a router, and a modem? | Routers: these devices connect different networks, operating at Layer 3 (the network layer) of the OSI model. They maintain routing tables which map IP addresses (more correctly, IP prefixes) to an outgoing interface. Note that an interface may contain one or more ports (See below). Switches: these maintain forwarding tables which map MAC addresses to physical ports, operating at Layer 2 (the data link layer) of the OSI model. This is not necessarily a one-to-one mapping; many MAC addresses can be bound to the same physical port. This is the case where you have multi-layer switched networks (think a Netgear or Belkin switch plugged into your office or university network), or a hub connected to a switch port. Hubs: these are essentially multi-port signal repeaters, operating at Layer 1 (the physical layer) of the OSI model. They can be either unpowered (simply providing a physical connection for the existing signal to propagate along), or powered, where they actually regenerate and/or amplify the signal they receive. The point to note here is that hubs are a single collision domain. A collision domain represents a set of devices all connected to the same physical transmission medium, such that only one of them can transmit at any given time (ignoring multiplexing technologies like wavelength-division multiplexing, frequency-division multiplexing, time-division multiplexing, etc.). In practice, hubs are found less and less in today's data networks, as they have poor performance (as only one user can transmit at a time) and poor security (anyone connected to the same hub can hear everything all other users transmit and receive). Modems: MOdulator-DEModulator. Responsible for establishing a digital channel over an analogue medium, most commonly the telephone network. Modems again operate at Layer 2 (the data link layer), but use different protocols than Ethernet to communicate. They then offer protocols such as PPP to the network layer, to allow IP traffic to flow over their links. | {
"source": [
"https://serverfault.com/questions/11807",
"https://serverfault.com",
"https://serverfault.com/users/2318/"
]
} |
12,005 | What port(s) should I open/NAT to allow me to use Remote Desktop? | Remote Desktop requires TCP port 3389 to be open. Also, opening UDP port 3389 enables acceleration since RDP 8.0. It is possible to change the port used by the terminal server (or PC which is accessed), see this Microsoft support article: How to change the listening port for Remote Desktop . The UDP port for accelerated connection uses the same port number as the TCP setting and cannot be changed separately. UDP acceleration is available since RDP 8.0 (shipped with Windows 8 and Windows Server 2012, available via an update on Windows 7 / Windows Server 2008 R2). | {
"source": [
"https://serverfault.com/questions/12005",
"https://serverfault.com",
"https://serverfault.com/users/1194/"
]
} |
12,025 | I've had my first notebook hard drive death (well, actually, it's currently dying... clicking noises, super slow windows boot up ...) Anyway, I now I realize I don't know anything about laptop hard drives. I was just going to take one out of another laptop and stick it in but the connectors are the same. How do you shop for laptop hard drives? | Interface (connectors): laptop PATA vs SATA http://www.laptopparts101.com/wp-content/uploads/2008/12/sata-ide-laptop-hard-drive.jpg Parallel ATA a.k.a. IDE, ATA, ATAPI, UDMA and PATA — legacy, wide 40-pin connector for disks produced few years ago. In case of notebook drives pins are smaller, and there is also power supply in the same plug. Serial ATA (SATA) — modern connector. Most modern laptops use it. 6 data pins
and 15 pins including power supply. Sizes of laptop disks: 2.5" — most common 1.8" — reduced, used mostly in ultralights and netbooks. Rotation speed: 7200RPM — modern, high-end 2.5" disks. Consume more energy. 5400RPM — standard, low-end 2.5"disks or high-end 1.8"disks. More energy-efficient. 4200RPM — legacy, low-end 2.5" disks, some modern reduced height and energy-efficient 2,5" or standard 1.8"disks. Take in account, that 5400RPM HDD with bigger capacity actually might have faster transfer rates, than smaller capacity 7200RPM. Rotation speed does however directly affect seek times. HDD vs. SSD: Modern alternative to mechanical HDDs are Solid State Drives , based on flash memory. They have almost instantaneous seek times, incredible read speeds and very low power consumption. As of now they are still much lower capacity than similarly priced HDDs. However, with the effects of Thailand floods and sharp drop in SSD prices in recent years, there are no longer excessively expensive. SSD can come in 2.5" SATA form factor, thus be interchangeable with 2.5" HDD. Another form factor, unique to SSDs is Mini-SATA ( mSATA ), intended mostly for use with netbooks (and some ultraportables). Below mSATA drive on top of 2.5" SATA HDD for size comparison: Note, that ultrabooks use neither of these formats. In ultrabooks SSDs are soldered permanently onto motherboard, thus cannot be removed nor upgraded. | {
"source": [
"https://serverfault.com/questions/12025",
"https://serverfault.com",
"https://serverfault.com/users/2093/"
]
} |
12,162 | There's a directory underneath my homedir called ".gvfs". As my regular user account, I can read it just fine: ~ $ ls -lart ~raldi/.gvfs
total 4
dr-x------ 2 raldi raldi 0 2009-05-25 22:17 .
drwxr-xr-x 60 raldi raldi 4096 2009-05-25 23:08 ..
~ $ ls -d ~raldi/.gvfs
dr-x------ 2 raldi raldi 0 2009-05-25 22:17 /home/raldi/.gvfs However, as root I can't "ls" or even "ls -d" it: # ls ~raldi/.gvfs
ls: cannot access /home/raldi/.gvfs: Permission denied
# ls -d ~raldi/.gvfs
ls: cannot access /home/raldi/.gvfs: Permission denied And, just to make sure: # echo $UID $EUID
0 0 This is just a simple home installation of Ubuntu 8.10, no NFS or anything weird like that. I see that the directory is marked non-world-readable (and non-world-x-able), but I thought none of that applied when you're root. For example, I can make a mode-000 directory in /tmp and give it away to a non-root user, and root has no trouble reading it, writing it, whatever. Any idea what's going on? | From: http://bugzilla.gnome.org/show_bug.cgi?id=534284 This is all unfortunate, but its a
decision that has been taken by the
fuse people at the kernel level (user
others than the one who mounted the fs
can't access it, including root) and
there is nothing we can do about it. Also see: https://bugs.launchpad.net/gvfs/+bug/225361 The solution seems to be to update your /etc/fuse.conf and enable the user_allow_other option. You may also need to then get gvfs to pass the allow_root or allow_other, but I am not sure how to do this. Of course it may be much easier to simply give up on all the GUI tools like gvfs and mount your filesystems from command line where you have complete control of exactly how something gets mounted. | {
"source": [
"https://serverfault.com/questions/12162",
"https://serverfault.com",
"https://serverfault.com/users/1691/"
]
} |
12,278 | Task Manager shows the overall memory usage of svchost.exe. Is there a way to view the memory usage of individual services? Note this is similar to Finegrained performance reporting on svchost.exe | There is an easy way to get the information you are asking
for (but it does require a slight change to your system): Split each service to run in its own SVCHOST.EXE process and
the service consuming the CPU cycles will be easily visible
in Task Manager or Process Explorer (the space after "=" is required): SC Config Servicename Type= own Do this in a command line window or put it into a BAT
script. Administrative privileges are required and
a restart of the computer is required before it takes
effect. The original state can be restored by: SC Config Servicename Type= share Example: to make Windows Management Instrumentation run in a
separate SVCHOST.EXE: SC Config winmgmt Type= own This technique has no ill effects, except perhaps increasing
memory consumption slightly. And apart from observing CPU
usage for each service it also makes it easy to observe page
faults delta, disk I/O read rate and disk I/O write rate for
each service.
For Process Explorer, menu View/Select Columns:
tab Process Memory/Page Fault Delta,
tab Process Performance/IO Delta Write Bytes,
tab Process Performance/IO Delta Read Bytes,
respectively. On most systems there is only one SVCHOST.EXE process that
has a lot of services. I have used this sequence (it can be
pasted directly into a command line window): rem 1. "Automatic Updates"
SC Config wuauserv Type= own
rem 2. "COM+ Event System"
SC Config EventSystem Type= own
rem 3. "Computer Browser"
SC Config Browser Type= own
rem 4. "Cryptographic Services"
SC Config CryptSvc Type= own
rem 5. "Distributed Link Tracking"
SC Config TrkWks Type= own
rem 6. "Help and Support"
SC Config helpsvc Type= own
rem 7. "Logical Disk Manager"
SC Config dmserver Type= own
rem 8. "Network Connections"
SC Config Netman Type= own
rem 9. "Network Location Awareness"
SC Config NLA Type= own
rem 10. "Remote Access Connection Manager"
SC Config RasMan Type= own
rem 11. "Secondary Logon"
SC Config seclogon Type= own
rem 12. "Server"
SC Config lanmanserver Type= own
rem 13. "Shell Hardware Detection"
SC Config ShellHWDetection Type= own
rem 14. "System Event Notification"
SC Config SENS Type= own
rem 15. "System Restore Service"
SC Config srservice Type= own
rem 16. "Task Scheduler"
SC Config Schedule Type= own
rem 17. "Telephony"
SC Config TapiSrv Type= own
rem 18. "Terminal Services"
SC Config TermService Type= own
rem 19. "Themes"
SC Config Themes Type= own
rem 20. "Windows Audio"
SC Config AudioSrv Type= own
rem 21. "Windows Firewall/Internet Connection Sharing (ICS)"
SC Config SharedAccess Type= own
rem 22. "Windows Management Instrumentation"
SC Config winmgmt Type= own
rem 23. "Wireless Configuration"
SC Config WZCSVC Type= own
rem 24. "Workstation"
SC Config lanmanworkstation Type= own
rem End. | {
"source": [
"https://serverfault.com/questions/12278",
"https://serverfault.com",
"https://serverfault.com/users/370/"
]
} |
12,295 | Some PuTTY settings are valid only for the current session, and when I start it again, they are at the default value again. How can I change the default values? | Make your settings changes and then click on "Default Settings" under "Load, save or delete a stored session" (This is in the "Session" category) to select it. Then click "Save." | {
"source": [
"https://serverfault.com/questions/12295",
"https://serverfault.com",
"https://serverfault.com/users/958/"
]
} |
12,373 | When I started using git I just did a git init and started calling add and commit . Now I am starting to pay attention and I can see that my commits are showing up as cowens@localmachine , rather than the address I want. It appears as if setting GIT_AUTHOR_EMAIL and GIT_COMMITTER_EMAIL will do what I want, but I still have those old commits with the wrong email address/name. How can I correct the old commits? | You can go back and fix all your commits with a single call to git filter-branch. This has the same effect as rebase, but you only need to do one command to fix all your history, instead of fixing each commit individually. You can fix all the wrong emails with this command: git filter-branch --env-filter '
oldname="(old name)"
oldemail="(old email)"
newname="(new name)"
newemail="(new email)"
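# rewrite the identity fields only when they match the old name/email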
[ "$GIT_AUTHOR_EMAIL"="$oldemail" ] && GIT_AUTHOR_EMAIL="$newemail"
[ "$GIT_COMMITTER_EMAIL"="$oldemail" ] && GIT_COMMITTER_EMAIL="$newemail"
[ "$GIT_AUTHOR_NAME"="$oldname" ] && GIT_AUTHOR_NAME="$newname"
[ "$GIT_COMMITTER_NAME"="$oldname" ] && GIT_COMMITTER_NAME="$newname"
' HEAD More information is available from the git docs | {
"source": [
"https://serverfault.com/questions/12373",
"https://serverfault.com",
"https://serverfault.com/users/2706/"
]
} |
12,378 | I'm not sure how to ask this question, since I'm not in the field. Say you're a network admin and you leave your job. How does the new guy know where to start? | It depends on the size of the network, number of users, number of nodes (computers, servers, printers, etc.) and the size of your IT staff, among other things. It also depends on your goal. Are you documenting the network for training and maintenance purposes, insurance/loss prevention, etc? Personally, I document my networks in such a way that I know I can derive any missing information based on what is documented. From a practical stance, there is a point of diminishing returns when your documentation gets too granular. A good rule of thumb I use is that there should be documentation in a known location that is thorough enough that if I get hit by a bus tonight, another administrator can keep the core network running while he/she fills in the missing pieces over the next few days/weeks. Here is an overview of what I think is most important about one of my networks. For the record this is a Windows-only shop with about 100 users and 5 offices. Administrator credentials for all servers. Obviously this should be kept secure. IP Addresses and NetBIOS names for any node on the network with a static IP address, including servers, workstations, printers, firewalls, routers, switches, etc. Basic server hardware information, such as Service tags or equivalent, total disk capacity, total RAM, etc. Major roles of each server, such as Domain Controller, File Server, Print Server, Terminal Server, etc. Location of backup tapes/drives. Information about the account numbers and credentials for services like remote office voice and data providers. External DNS for websites and routing. If there was anything strange about a setup or workflow that would not be immediately obvious to a new administrator, I would write a short "brief" about it as well. | {
"source": [
"https://serverfault.com/questions/12378",
"https://serverfault.com",
"https://serverfault.com/users/1136/"
]
} |
12,679 | As much as I have read about iowait, it is still mystery to me. I know it's the time spent by the CPU waiting for a IO operations to complete, but what kind of IO operations precisely? What I am also not sure, is why it so important? Can't the CPU just do something else while the IO operation completes, and then get back to processing data? Also what are the right tools to diagnose what process(es) did exactly wait for IO. And what are the ways to minimize IO wait time? | I know it's the time spent by the CPU
waiting for a IO operations to
complete, but what kind of IO
operations precisely? What I am also
not sure, is why it so important?
Can't the CPU just do something else
while the IO operation completes, and
then get back to processing data? Yes, the operating system will schedule other processes to run while one is blocked on IO. However inside that process, unless it's using asynchronous IO, it will not progress until whatever IO operation is complete. Also what are the right tools to
diagnose what process(es) did exactly
wait for IO. Some tools you might find useful iostat , to monitor the service times of your disks iotop (if your kernel supports it), to monitor the breakdown of IO requests per process strace , to look at the actual operations issued by a process And what are the ways to minimize IO
wait time? ensure you have free physical memory so the OS can cache disk blocks in memory keep your filesystem disk usage below 80% to avoid excessive fragmentation tune your filesystem use a battery backed array controller choose good buffer sizes when performing io operations | {
"source": [
"https://serverfault.com/questions/12679",
"https://serverfault.com",
"https://serverfault.com/users/3903/"
]
} |
12,793 | I have three maintenance plans set up to run on an Sql Server 2005 instance: Weekly database optimisations followed by a full backup Daily differential backup Hourly transaction log backups The hourly log backups are usually between a few hundred Kb and 10Mb depending on the level of activity, daily differentials usually grow to around 250Mb by the end of the week, and the weekly backup is about 3.5Gb. The problem I have is that the optimisations before the full backup seem to be causing the next transaction log backup to grow to over 2x the size of the full backup, in this case 8Gb, before returning to normal. Other than BACKUP LOG <DatabaseName> WITH TRUNCATE_ONLY , is there any way to reduce the size of that log backup, or prevent the optimisations from being recorded in the transaction log at all, as surely they will be accounted for in the full backup they precede? | Some interesting suggestions here, which all seem to show misunderstanding about how log backups work. A log backup contains ALL transaction log generated since the previous log backup, regardless of what full or differential backups are taken in the interim. Stopping log backups or moving to daily full backups will have no effect on the log backup sizes. The only thing that affects the transaction log is a log backup, once the log backup chain has started. The only exception to this rule is if the log backup chain has been broken (e.g. by going to the SIMPLE recovery model, reverting from a database snapshot, truncating the log using BACKUP LOG WITH NO_LOG/TRUNCATE_ONLY), in which case the first log backup will contain all the transaction log since the last full backup - which restarts the log backup chain; or if the log backup chain hasn't been started - when you switch into FULL for the first time, you operate in a kind of pseudo-SIMPLE recovery model until the first full backup is taken. To answer your original question, without going into the SIMPLE recovery model, you're going to have to suck up backing up all the transaction log. Depending on the actions you're taking, you could take more frequent log backups to reduce their size, or do more targeted database. If you can post some info about the maintenance ops you're doing, I can help you optimize them. Are you, by any chance, doing index rebuilds followed by a shrink database to reclaim the space used by the index rebuilds? If you have no other activity in the database while the maintenance is occuring, you could do the following: make sure user activity is stopped take a final log backup (this allows you to recover right up to the point of maintenance starting) switch to the SIMPLE recovery model perform maintenance - the log will truncate on each checkpoint switch to the FULL recovery model and take a full backup continue as normal Hope this helps - looking forward to more info. Thanks [Edit: after all the discussion about whether a full backup can alter the size of a subsequent log backup (it can't) I put together a comprehensive blog post with background material and a script that proves it. Check it out at https://www.sqlskills.com/blogs/paul/misconceptions-around-the-log-and-log-backups-how-to-convince-yourself/] | {
"source": [
"https://serverfault.com/questions/12793",
"https://serverfault.com",
"https://serverfault.com/users/2828/"
]
} |
12,830 | Having worked as a developer and in IT admin/support for a development team, I've come across many different types of environment from the completely locked down to the completely non. In my limited support experience I think it's been less effort to support with a less locked down machine and I certainly felt this was easier, but of course this could be bias. I'd like to know what the view is from an IT support perspective, is it genuinely harder to support developers who have non locked down machines? | Most developers are technically savvy and know what they are doing. They often need to install many specialist apps, having to get permission to do this and getting IT to come down and add it can be very frustrating, particularly in larger companies, for both sides. I've found what works best is allowing them to do what they want with regards to installing software on their machines, but if they get into problems with something we don't support, then they are on their own. Most developers are happy with this, and prefer being able to look after their own machine anyway. Locking someone down in accounting to only use IE and open Word is fine, but if you're a developer who needs to install 4 different types of browser and needs to quickly install an app to solve a problem, it can be annoying. My experience is that companies who have a lot of technical knowledge, so development shops, IT suppliers etc, who trust their employees and let them decide what they want installed are much happier and bother IT less. | {
"source": [
"https://serverfault.com/questions/12830",
"https://serverfault.com",
"https://serverfault.com/users/4053/"
]
} |
12,854 | I understand what CIDR is, and what it is used for, but I still can't figure out how to calculate it in my head. Can someone give a "for dummies" type explanation with examples? | CIDR (Classless Inter-Domain Routing, pronounced "kidder" or "cider" - add your own local variant to the comments!) is a system of defining the network part of an IP address (usually people think of this as a subnet mask). The reason it's "classless" is that it allows a way to break IP networks down more flexibly than their base class. When IP networks were first defined, IPs had classes based on their binary prefix: Class Binary Prefix Range Network Bits
A 0* 0.0.0.0-127.255.255.255 8
B 10* 128.0.0.0-191.255.255.255 16
C 110* 192.0.0.0-223.255.255.255 24
D 1110* 224.0.0.0-239.255.255.255
E 1111* 240.0.0.0-255.255.255.255 (Note that this is the source of people referring to a /24 as a "class C", although that's not a strictly true comparison because a class C needed to have a specific prefix) These binary prefixes were used for routing large chunks of IP space around. This was inefficient because it resulted in large blocks being assigned to organizations who didn't necessarily need them, and also because Class Cs could only be assigned in 24 bit increments, meaning that routing tables could get unnecessarily large as multiple Class Cs were routed to the same location. CIDR was defined to allow variable length subnet masks (VLSM) to be applied to networks. As the name applies, address groups, or networks, can be broken down into groups that have no direct relationship to the natural "class" they belong to. The basic premise of VLSM is to provide the count of the number of network bits in a network. Since an IPv4 address is a 32-bit integer, the VLSM will always be between 0 and 32 (although I'm not sure in what instance you might have a 0-length mask). The easiest way to start calculating VLSM/CIDR in your head is to understand the "natural" 8-bit boundaries: CIDR Dotted Quad
/8 255.0.0.0
/16 255.255.0.0
/24 255.255.255.0
/32 255.255.255.255 (By the way, it's perfectly legal, and fairly common in ACLs, to use a /32 mask. It simply means that you are referring to a single IP) Once you grasp those, it's simple binary arithmetic to move up or down to get number of hosts. For instance, if a /24 has 256 IPs (let's leave off network and broadcast addresses for now, that's a different networking theory question), increasing the subnet by one bit (to /25) will reduce the host space by one bit (to 7), meaning there will be 128 IPs. Here's a table of the last octet. This table can be shifted to any octet to get the dotted quad equivalent. CIDR Dotted Quad
/24 255.255.255.0
/25 255.255.255.128
/26 255.255.255.192
/27 255.255.255.224
/28 255.255.255.240
/29 255.255.255.248
/30 255.255.255.252
/31 255.255.255.254
/32 255.255.255.255 As an example of shifting these to another octet, /18 (which is /26 minus 8 bits, so shifted an octet) would be 255.255.192.0. | {
"source": [
"https://serverfault.com/questions/12854",
"https://serverfault.com",
"https://serverfault.com/users/3552/"
]
} |
12,914 | I have an IT question, I hope that's the place to ask it. I'm building a team for a specific project and am considering buying Netbooks for the first time, the reason being cost-reduction (we are a lean-and-mean operation, I'm looking to save on whatever I can). The whole team is very mobile, sharing time between working at home to working at the office to working on airplanes... So desktops are out of the question. My team has both software developers and "documentation guys" - designers and marketing folk. The programmers are using mostly Python, and most of them running a small MySQL installation (developer installation). The rest of the guys are using mostly Word, Excel and PowerPoint. Is Netbook a viable choice for my programmers? And for the rest? What are the trade-offs I should be aware about when choosing between Notebooks and Netbooks? EDIT : Reading some of the answers, I understand I had an underlying assumption when asking my question. I assumed that netbooks, like notebooks, have docking stations allowing for work with large screens and "normal" keyboards in the office or at home. Is that incorrect? Many thanks | I wouldn't recommend Netbooks personally for the following reasons: Small keyboard. Your programmers are most likely going to hate the small keyboard after a short time period. Productivity killer. Possibly slow speed. For running the software depending on which processor you get it may be quite slow compared to what they could use - this could be a big productivity killer. Small screen. The tradeoff here is more a personal preference to the user and how well they can work with their constraints. At least with code it's nice to be able to bring up two documents and not need to squint. Hard Drive space. There isn't much and depending on how much data you're dealing with is a point worth noting. Graphics. Depending on the netbook the graphics card will be sufficient to run an external monitor and you could use an external keyboard/mouse via USB. It's worth noting that the vast majority of netbooks provide VGA out so you'll have to double check the monitor being used can still use VGA - that said how well your programmers can cope with the massive discrepancy in screen size may be more of a hassle both than it's worth. That's in terms of both managing dual monitors or having everything sized for a larger monitor and going back to a very small monitor. Summation: If the constraints of a netbook don't hinder your users, sure. If they do you'll only be frustrating your users. EDIT: Added last note about graphics cards to address edit in question. | {
"source": [
"https://serverfault.com/questions/12914",
"https://serverfault.com",
"https://serverfault.com/users/4028/"
]
} |
12,954 | A few years ago I was told to avoid S.M.A.R.T. like the plague. The reasoning was that the stress the testing puts on the drive will actually cause it to fail. Is this still the case? If not, what is a reasonable frequency to run tests? If I should still be avoiding it, what is a better way to monitor the health of my hard drives? | While S.M.A.R.T. certainly doesn't predict all failures, I worked in a computer repair shop for several years, and many times a S.M.A.R.T. error message was the first indication that a failure was about to occur, allowing me to save the customer's data before the drive died. The technology itself does not stress the drive, it just keeps track of a number of indicators (full list here: http://en.wikipedia.org/wiki/S.M.A.R.T .) that could potentially lead to drive failure, such as: Read Error Rate Reallocated Sectors Count Spin Retry Count Uncorrectable Sector Count Power on Hours The performance hit for S.M.A.R.T. is negligible, doesn't stress drives (the monitoring is passive), and can potentially warn you that you are about to lose all the pictures of your kids (or your MP3 collection or whatever is important on your Hard Drive). In short, leave it on. | {
"source": [
"https://serverfault.com/questions/12954",
"https://serverfault.com",
"https://serverfault.com/users/3552/"
]
} |
12,968 | I've always struggled to find this: How can you ask apache which httpd.conf file it used to load up? It becomes difficult when you have a number of instances of apache running, or if you haven't looked at the machine for a long time, and there are a lot of httpd.conf file on disk! Thanks a lot :) | apache2ctl -V | grep SERVER_CONFIG_FILE | {
"source": [
"https://serverfault.com/questions/12968",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
12,974 | It sometimes appears that any first-round decisions made are met with distrust, as if they (admins) were trying to somehow undermine the company. I would understand a bit of kick-back, to ensure that "IT people" have thought through all of the business concerns - but is it common for upper management to show an inherent lack of trust towards their system administrators? Given, this is a "small" shop ( < 300 employees ), and not "a software company". Is this a common thing in companies this size? | One thing to understand is that most times (and I say most with caution as sometimes, rarely, but sometimes the guys up top do have some tech background) upper management has no idea what you're doing. They're charged with "The Business". You're the grease monkey in the garage that tinkers with the cars that ultimately drive that business. When you take your personal car into a garage don't the thoughts of mistrust automatically strike you when you see a bill? Did they overcharge you? Did they sabotage something? Probably not but as people we tend to not trust what we don't understand. Same goes for IT. Yes, you hired us and (theoretically) you know that we know what we're doing but there's always going to be a level of mistrust between management and IT staff, no matter what size company. | {
"source": [
"https://serverfault.com/questions/12974",
"https://serverfault.com",
"https://serverfault.com/users/2422/"
]
} |
13,192 | We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this: 500 GB is not really big enough (some projects are larger) It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers. ("the project is on server2 in share4" etc) So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so users see only a single share on each server. I've done linux software RAID5 and had a bad experience with it, but would try it again. LVM looks ok but it seems like no one uses it. ZFS seems interesting but it is relatively "new". What is the most efficient and least risky way to combine the hdd's that is convenient for my users? Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective. (i.e. they see one "folder" per server) Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10 but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives and 1.5 TB is pretty small. (Still an option for at least one server, however) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and stored most files on a single drive only) we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?) so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue. | LVM is actually quite heavily used. Basically, LVM sits above the hardware (driver) layer. It doesn't add any redundancy or increased reliability (it relies on the underlying storage system to handle reliability). Instead, it provides a lot of added flexibility and additional features. LVM should never see a disk disappear or fail, because the disk failure should be handled by RAID (be it software or hardware). If you lose a disk and can't continue operating (rebuild the RAID, etc), then you should be going to backups. Trying to recover data from an incomplete array should never be needed (if it is, you need to reevaluate your entire design).
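To make this concrete, a rough sketch of LVM layered on top of a software RAID array might look like the following (device names, sizes and the ext3 choice are illustrative assumptions, not details from the setup described above):
pvcreate /dev/md0                                  # register the RAID array as an LVM physical volume
vgcreate vg0 /dev/md0                              # pool it into a volume group
lvcreate -L 500G -n media vg0                      # carve out a logical volume for the shared media
mkfs.ext3 /dev/vg0/media                           # put a filesystem on it
lvcreate -s -L 20G -n media-snap /dev/vg0/media    # snapshot, handy for consistent backups
If another array is added later, vgextend vg0 /dev/md1 folds it into the same pool, which is how a single share can keep growing as drives are added.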
Among the things you get with LVM are the ability to easily grow and shrink partitions/filesystems, the ability to dynamically allocate new partitions, the ability to snapshot existing partitions, and mount the snapshots as read only or writable partitions. Snapshots can be incredibly useful, particularly for things like backups. Personally, I use LVM for every partition (except /boot) on every box I build, and I've been doing so for the past 4 years. Dealing with non-LVM'ed boxes is a huge pain when you want to add or modify your disk layout. If you're using Linux, you definitely want use LVM. [Note: This above stuff on LVM has been updated to better explain what it is and how it fits into the storage equation.] As for RAID, I don't do servers without raid. With disk prices as cheap as they are, I'd go with RAID1 or RAID10. Faster, simpler, and much more robust. Honestly though, unless you're wedded to Ubuntu (which I would normally recommend), or if the box is performing other tasks, you might want to look into OpenFiler . It turns your box into a storage appliance with a web interface and will handle all of the RAID/LVM/etc for you, and allow you to export the storage as SMB, NFS, iSCSI, etc. Slick little setup. | {
"source": [
"https://serverfault.com/questions/13192",
"https://serverfault.com",
"https://serverfault.com/users/4401/"
]
} |
13,215 | I am attempting to allow a wordpress installation to install plugins. I am not quite sure how to securely set the permissions of my wordpress installation. I think chown -R www-data on the entire installation would work, but I think that is insecure. Instead I am attempting to allow wordpress to install plugins via sftp/ssh. In this tutorial on how to get that working, it shows that I would need to generate a key pair to keep on the server. I thought the whole point of key pairs is that you keep the public key on the server and the private key on the computer. I realize it is probably requiring this because the wordpress installer is on the server (the installer needs the private key) and the destination is the wordpress installation. So am I being ridiculous requiring that my wordpress plugin installer script must ssh into a sub-directory of where it exists? If so, why are people raving about this as a secure way to install plugins? If the better option is to set permissions, does anyone know how to securely set the proper permissions for my wordpress installation? Thank you! | {
"source": [
"https://serverfault.com/questions/13215",
"https://serverfault.com",
"https://serverfault.com/users/3567/"
]
} |
13,354 | I've got several client computers (i.e. laptops, desktops, etc.), and I connect to several server machines that I manage, and I log into them all via SSH. I can imagine several schemes of managing ssh keys that would make sense, and I'm curious about what others do. Option 1: One global public/private keypair. I would generate one public/private keypair, and put the private key on every client machine, and the public key on every server machine. Option 2: One keypair per server machine. I would generate one keypair on each server machine, and put each private key on my client machines. Option 3: One keypair per client machine. Each client machine would have a unique private key, and each server machine would have the public keys for every client machine that I'd like to connect from. Option 4: One keypair per client/server pair Totally overboard? Which of these is best? Are there other options? What criteria to you use for evaluating the right configuration? | I use Option 3: One keypair per client machine and it makes the most sense to me. Here are some of the reasons: If a client is compromised then that key (and only that key) needs to be removed from servers. It's flexible enough to decide what I can access from where, without granting blanket access to all servers from all clients. Very convenient. There's only 1 key for ssh-add, no confusion. Easy to set up and administer over Option 4 Option 4 is nice, but is just too much work. Option 3 gets you 98% there with much less hassle. | {
"source": [
"https://serverfault.com/questions/13354",
"https://serverfault.com",
"https://serverfault.com/users/4538/"
]
} |
13,355 | Files can be locked on OS X by going to the "Get Info" panel for the specific file and clicking the lock button. I would need to remove locks from a shell script. What unix command can do that? | {
"source": [
"https://serverfault.com/questions/13355",
"https://serverfault.com",
"https://serverfault.com/users/2455/"
]
} |
13,493 | I keep reading everywhere that PowerShell is the way of the future. When it was first released I did a whole bunch of virtual labs, but since then I still haven't used it in a production environment. I know the day will come when I'm dealing with OS's where it's already installed, so I want to be ready. I want to know: Do you use it? What has your 'bootstrapping' process been for using PowerShell? What kind of system administration tasks have you scripted with it? I'm an SQL Server database administrator. What are some cool things to do with it? It seems that everyone agrees that Microsoft is pushing this hard, but no one is actually using it yet. I want to hear from system administrators out there that are using it to do every day tasks and share some code samples. | Microsoft is doing all it can to make PowerShell the choice of power-users and automation writers everywhere. Gone are the days of compiling code in .NET in order to do the same thing, now you just need notepad.exe and google. We're big fans of it in the office, especially since Exchange 2007's Management Console does NOT include everything that you can do in PowerShell. Microsoft deliberately failed to implement things that only get done once in a great while, easier to develop that way, which downright forces its use if you have anything resembling a complex environment. Managing Microsoft's newer generation of products (Win7, Windows Server 2008, Exchange 2007/2010, SQL Server 2008) all have very rich PowerShell hooks. Once Remote Powershell (PowerShell 2.0 IIRC) gets deployed with Server 2008 R2, it'll become even MORE useful for automation writers. What we've done with it: Create a web-page to delegate certain admin tasks to helpdesk users. The web-page fires off commands that get executed in PowerShell. Things it does: Create and delete user accounts, including provisioning Exchange 2007 mailboxes and home directories Unlocks locked out accounts Create/delete groups Add/remove users from groups Move users between mail-stores Set passwords Take extracts from the ERP system and push global-address-book data into Active Directory nightly. Solve the LegacyExchangeDN problem that cropped up with our Exchange 2003 to Exchange 2007 migration. Had to add an X500 address to everyone that used to be on Exchange 2003. A fairly short PowerShell script fixed it. Scripted creation of "group mailboxes" (shared mailboxes in Exchange where multiple users have access to the mailbox), an otherwise manual process due to the nature of the data we need before kicking it off. It greatly standardized the setup of these mailboxes. Created a script that walked through all domained machines resetting a specific registry key and restarting a service. It took 18 hours to complete, but it got the job done. So yes, PowerShell is going to be with us for quite some time. EDIT : Adding a code-sample, since it was requested $list=import-csv("groupusers.csv")
$lastseengroup=$list[0].group
$ADGroupPrefix="grp.netware."
$ADGroupSuffix="{redacted -- in the format of ,ou=groups,dc=domain,dc=domain,dc=domain}"
Clear-Variable memberlist
Clear-Variable unknownusers
foreach ($entry in $list) {
if ($($entry.group) -ne $lastseengroup) {
echo "stumbled across new group $($entry.group), committing changes to $lastseengroup"
$newgroup=$ADgroupPrefix+$lastseengroup
$newgroupdn='"'+"cn=$newgroup$ADGroupSuffix"+'"'
echo "getting DN for $newgroup"
$existinggroup=dsquery group domainroot -name $newgroup
if (($existinggroup -ne $null)) {
dsmod group $newgroupdn -chmbr $memberlist
} else {
dsadd group $newgroupdn -scope u -secgrp yes -members $memberlist -desc "Group imported from eDirectory"
}
Clear-Variable memberlist
}
$User=get-user $($entry.member) -ErrorAction SilentlyContinue
if ($User.isvalid) {
$UserDN=$User.distinguishedname
$memberlist=$memberlist+'"'+"$UserDN"+'" '
} else {
$unknownusers=$unknownusers+$($entry.member)
}
$lastseengroup=$($entry.group)
}
dsadd group "cn=$ADGroupPrefix$lastseengroup$ADGroupSuffix" -scope u -secgrp yes -members $memberlist This takes a CSV file created with a perl script and updates a set of groups. If the group already exists, it replaces the membership with that specified in the file. If the group does not exist, it creates it. This is a one-way sync. Also, not quite in production yet, but close. | {
"source": [
"https://serverfault.com/questions/13493",
"https://serverfault.com",
"https://serverfault.com/users/1715/"
]
} |
13,523 | My Tomcat instance is sitting on a drive with little remaining space. The application I'm running does move file uploads off the server and into a NAS. During the upload, however, Tomcat keeps this file locally, presumably in the /temp directory. My server has a second data drive with plenty of space where I'd like to relocate this temp directory to. How can I configure Tomcat so that it uses a temp directory on this other drive, ie. how can I relocate this directory? Edit: I'm running Windows server 2k3. I tried setting the CATALINA_TMPDIR env var, but Tomcat appeared to ignore it. Solution: I'm using the "Monitor Tomcat" application which passes -Djava.io.tmpdir=C:\some\default\directory to the JVM. This was overriding the environmental variable I was setting. You can find it under Java > Java Options Changing this has fixed my problem. | The java.io.tmpdir in Tomcat is set to $CATALINA_BASE/temp . You can change it by setting the $CATALINA_TMPDIR environment variable before running startup.sh for Tomcat. From catalina.sh : # CATALINA_TMPDIR (Optional) Directory path location of temporary directory
# the JVM should use (java.io.tmpdir). Defaults to
# $CATALINA_BASE/temp. | {
"source": [
"https://serverfault.com/questions/13523",
"https://serverfault.com",
"https://serverfault.com/users/2662/"
]
} |
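A quick sketch of both approaches on Windows; D:\tomcat-temp is only an example path. catalina.bat/startup.bat honour the CATALINA_TMPDIR environment variable, while a service install generally needs the JVM option instead, as the asker found:
rem when starting via startup.bat / catalina.bat
set CATALINA_TMPDIR=D:\tomcat-temp
rem when running as a Windows service, add this under Java Options in the Tomcat monitor
-Djava.io.tmpdir=D:\tomcat-temp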
13,670 | I want to protect my mail server with DNS blacklists to fight spam. There are so many blacklists out there. Currently I use: ix.dnsbl.manitu.net
cbl.abuseat.org
bl.spamcop.net
safe.dnsbl.sorbs.net
dnsbl.njabl.org Should I add/remove some entries?
Which are the best blacklists?
Which blacklists shouldn't be used (like Spamhaus)? | Here is my list and why I use them: zen.spamhaus.org - Comprehensive RBL, catches a ton of spam sources, updated regularly. They have a long history and decent reputation in the spam filtering community. I have heard some negative things about them from time to time, but those are generally without real merit. Downside is that if your volume of traffic is high enough they will block access to the free list and you'll need to set up a paid account. Personal or small business mail servers usually do not have this problem. b.barracudacentral.org - Another very good list from another major industry player. I've heard a lot of negative things about the Barracuda devices themselves, but their RBL is top-notch. Downside is that you have to register with them in order to use it. We've never had a false positive reported that was caused by this list, and it blocks a lot of traffic for us. See http://www.barracudacentral.org/rbl for details. We've found that using these two lists alone, we see a significant reduction in spam intake on the server. The other lists that we've tried did not even come close to being as productive as either of these lists and essentially just wasted network resources and time while processing the incoming messages. Here are some that I do not use and why (your experience may vary): bl.spamcop.net - Too many false positives for our taste. They rely almost entirely on user submissions to power the list, and the people submitting are usually trigger happy and submit even legitimate messages as spam to their service, causing popular providers to get blocked when they probably shouldn't be. I have heard that this has been improved recently but we got burned too many times to go back and try again just yet. dnsbl.sorbs.net - They run a comprehensive list, but there are too many options for my taste. They have a lot of coverage, and block a lot of traffic, but finding the right mix of lists that they supply requires a lot of trial and error. The removal process for their spam list requires a verifiable minimum donation to one of their approved charities. If one of my clients ends up on their list (whatever the cause) and we block their traffic, I don't want to have to tell them that they have to donate to a charity to appease a blacklist that we use. They are, of course, free to run their list however they like, but that is not the kind of news I want to deliver to my clients if they end up on the SORBS list and are unable to send me e-mail. | {
"source": [
"https://serverfault.com/questions/13670",
"https://serverfault.com",
"https://serverfault.com/users/2018/"
]
} |
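The question does not say which MTA is in use; assuming Postfix purely for illustration, the two lists recommended above would be wired in along these lines in main.cf (Exim, Sendmail and others use different directives):
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org,
    reject_rbl_client b.barracudacentral.org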
13,694 | Basic: what's the size on disk of my MS SQL Server DB? More: can I quickly see where the data is? i.e. which tables, logs, etc | You'll probably want to start with the sp_spaceused command. For example: sp_spaceused Returns information about the total size of the database sp_spaceused 'MyTable' Returns information about the size of MyTable Read the docs for all the things you can get information about. You can also use the sp_msforeachtable command to run sp_spaceused against all tables at once. Edit: Be aware the command sometimes returns multiple datasets, each set containing a different chunk of stats. | {
"source": [
"https://serverfault.com/questions/13694",
"https://serverfault.com",
"https://serverfault.com/users/4478/"
]
} |
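The sp_msforeachtable variant mentioned in the answer looks roughly like this; note that it is an undocumented (though widely used) procedure, so its exact behaviour can vary between SQL Server versions:
EXEC sp_msforeachtable 'EXEC sp_spaceused ''?''';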
13,839 | Are there any studies or evidence which show that mounting a hard drive horizontally is better than vertically for the lifespan of the device? Or upside down, or any direction for that matter. | The quotes in this thread from WD and Seagate suggest not. To precis the link: Seagate, Maxtor and WD drives can be used in any orientation including upside down . | {
"source": [
"https://serverfault.com/questions/13839",
"https://serverfault.com",
"https://serverfault.com/users/3552/"
]
} |
14,112 | Is there a possibility to "defragmentize" a sparse bundle image and reclaim (most) of the free space? Here is the background: I am using sparse bundles and every now and then I want to reclaim space from them so I run: hdiutil compact image.sparsebundle However, as explained in the man page, it only reclaims completely unused band files, so in my case it says: Reclaimed 0 bytes out of 90.4 GB possible. Of course there is the possibility to copy the contents of this image to a new sparse bundle that is then used in lieu, but that is both cumbersome and requires enough free space for this operation. Meanwhile, I found out that the output of the compact command is somewhat misleading (I am currently running OS X 10.5.7) as it sometimes lists a size as possible that is larger than the size currently taken up by the image bundle on the hard drive. I did not look closer but the output seems to be either the maximum size or "maximum size" - "used size". | Interesting! From what I've heard, the sparse bundle divides the data into 8Mb bands. Changing the band size might just help, if you're lucky. I mean, you'll never get 100% reclaimed space, but maybe better than what you get now. (Depending on the data on the image etc.) I did a dirty simple test with two 500Mb sparse bundles, one with 8Mb (default) band size, and one with 1Mb (smallest allowed size from what I can tell). I copied over 400mb of mp3 files and then removed every other file and then run hdiutil compact on their asses. Size after compact
8Mb bands: 271Mb
1Mb bands: 215Mb The command to convert your sparse bundle is hdiutil convert src.sparsebundle -format UDSB -tgtimagekey sparse-band-size=2048 -o dst.sparsebundle Band size is in the unit 512byte. So the above example sets the band size to 512 * 2048 = 1Mb. Just be careful if you're dealing with TimeMachine images or user home folder images etc. You're deviating from the Apple path :) Keep a fail safe backup! As for defragmentation: I have a funny feeling it's just as fast (or faster!) to just use hdiutil to convert the sparsefile to a new sparse file with the same format. I think it tries to be smart about it. But I don't know. (Note that defragmenting a sparse bundle just defragments the disk data, not the sparse bundle bands, unless it's a sparse bundle aware defragmentor. hdiutil convert does a band 'defragmentation' I believe.) | {
"source": [
"https://serverfault.com/questions/14112",
"https://serverfault.com",
"https://serverfault.com/users/4854/"
]
} |
14,189 | The main advantage of SSD drives is better performance. I am interested in their reliability. Are SSD drives more reliable then normal hard drives? Some people say they must be because they have no moving parts, but I am concerned about the fact that this is a new technology that is possibly not completely matured yet. | They haven't been around long enough in enough quantities to develop an earned reputation. Flash-wear is the really big one everyone is concerned about, which is why the enterprise SSD drives allocate so many blocks to the bad-block store. Anandtech has run several articles about SSD's over the last couple months and they go into a lot of detail. From what I've read, stability problems are primarily in the consumer market where corners are being cut to bring prices down out of orbit. The SSD's you can buy to put in your fibre channel arrays are a completely different class than the OCZ drives. There is perhaps a much larger stability divide between consumer grade SSD's and enterprise SSD's than there are in consumer SATA drives and enterprise SATA drives. For more information about enterprise SSDs like the Intel X25, Anandtech has several article about that. Their introductory article about the X25 practically gushed. On the desktop side a recent article about the OCZ Vertex went into some detail about how bad the consumer side of the SSD market really was, and linked to another article where the problem was originally identified in the tech media. In short, consumer-grade SSDs were tweaked to provide massive sequential I/O numbers with little regard to actual usage patterns. The OCZ Vertex is a consumer-grade SSD that can approach the Intel for performance, but it requires babying to get there. Again, none of these have been on the market long enough for outright failure rates to really emerge. It has only been in the last, oh, 6-8 months that consumer SSD's have gotten cheap enough for mass adoption. Update 6/2011 Two years later, and we do have some feelings for this now. However, how they're used has evolved. SSDs are used in areas where outright performance can't be economically met with disks, so comparing reliability is something of an apples-to-pears comparison. For servers that need small storage, they usually don't also need high performance on that storage so rotational magnetic media is still used most of the time. That said, some comparisons can be drawn. SSD are typically used in large storage arrays as the highest tier of performance. In this role I've heard anecdotal reports that SSDs last a lot shorter than the same disks in those arrays. Like, on the order of 10-18 months. This is reflected in the warranty the big storage vendors allow on SSDs. This may look like "a lot less reliable", but in reality you have to look at it right. Modern top tier SSDs can handle I/O Operations per second into the six digits these days, reaching the performance of even one drive with 15K RPM disks will take well over a hundred spindles. More mid-grade SSDs can do 30-50K I/O Ops, which is still over a hundred 15K disks. Modern disk I/O systems can't keep up with speeds like this, which is why the big array vendors only allow a few SSDs per array relative to disks; they simply can't eek enough performance out of the entire system to keep those things fed. So in reality, we're comparing a brace of (for example) 8 mid-grade SSDs versus 250 15K drives. Since this is enterprise storage, give them an 80% duty cycle. 
In the first year a couple of those 15K drives will definitely fail and require replacement, possibly up to 20 of them. Anecdotally, half of the SSDs will fail. Looked at like this (failure rate for the performance delivered), SSDs still aren't up to HDs. Looked at from an economic point of view, each SSD is worth 31.25 HDs; SSDs are markedly cheaper for the performance given, so the increased failure rate is more acceptable since the replacement rate is still probably cheaper in the long run. Looking at it another way, in a direct apples-to-apples comparison where you subject the same two devices to identical I/O loads over a period of time, SSDs are more reliable these days. Take a 15K drive and a mid-grade SSD (50K I/O Ops/s) and give them both a steady diet of 180 I/O ops, and it is more likely that the SSD will make it to 5 years without fault than the HD. It's a statistical dance to be sure, but that's where things are going now. Hard drives still have the edge in drive-unit failure rate per GB of storage provided. However, that is not a market segment in which SSDs are intended to be competitive. | {
"source": [
"https://serverfault.com/questions/14189",
"https://serverfault.com",
"https://serverfault.com/users/3631/"
]
} |
14,303 | I'm just tweaking out my new Windows 7 laptop and wanted to disable the automatic Java updating (and thus kill the silly jusched.exe background process), but I can't seem to get it to actually turn it off. I found the Java Control Panel applet and found the settings on the Update tab that should control it. I can turn them off, apply them, and close the dialog successfully. But if I just open the dialog backup again right away, I see that the changes weren't actually made. I've tried it numerous times and it just doesn't take. What's up with that? I also tried to disable the icon in the system tray and got the same effect. Changing the size of the Temporary Internet Files cache work however. Any ideas? Thanks! | Actually this problem is due to the control panel requiring administrator privileges to allow the Java control panel to save your settings (it hasn't been fixed for ages, thanks to Sun Microsystems ). First, you need to find the Java Control Panel executable, in one of the following locations: C:\Program Files\Java\jre[version]\bin\javacpl.exe or C:\Program Files (x86)\Java\jre[version]\bin\javacpl.exe The path will differ depending on your system's architecture and which version of Java you have installed. For example, a 32-bit version of Java 7 installed on a 64-bit version of Windows will have it in: C:\Program Files (x86)\Java\jre7\bin\javacpl.exe Once you've found the file, right-click it and select "Run as administrator". From there, un-check "Check for Updates Automatically" on the Update tab and click OK. You can verify that the setting has been applied by navigating to the same screen as you normally would through the Control Panel. You can also check your running processes to see that jusched.exe is no longer running - it was automatically terminated when you clicked OK. | {
"source": [
"https://serverfault.com/questions/14303",
"https://serverfault.com",
"https://serverfault.com/users/4578/"
]
} |
14,429 | Is there any command I can run from bash that will tell me whether a port is already open? | Use "netstat" to check which ports are presently in use. netstat -antp
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 xxx.xxx.xxx.xxx 0.0.0.0:* LISTEN 16297/named
tcp 0 0 xxx.xxx.xxx.xxx:53 0.0.0.0:* LISTEN 16297/named
tcp 0 0 xxx.xxx.xxx.xxx:53 0.0.0.0:* LISTEN 16297/named
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 16297/named | {
"source": [
"https://serverfault.com/questions/14429",
"https://serverfault.com",
"https://serverfault.com/users/75/"
]
} |
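If a script needs a simple yes/no answer for one specific port rather than the full listing, something along these lines works (port 8080 is just an example):
netstat -ant | awk '$NF == "LISTEN" {print $4}' | grep -q ':8080$' && echo "port 8080 is already in use"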
14,524 | I've got an extensive selection of SQL Server Agent jobs to add to a spreadsheet and don't want to go through them by hand.
What is the T-SQL command(s) to generate a list of SQL Server Agent Jobs? | On each server, you can query the sysjobs table in the msdb database. For instance: SELECT job_id, [name] FROM msdb.dbo.sysjobs; | {
"source": [
"https://serverfault.com/questions/14524",
"https://serverfault.com",
"https://serverfault.com/users/2176/"
]
} |
14,577 | We have a setup with a few web servers being load-balanced. We want to have some sort of network shared storage that all of the web servers can access. It will be used as a place to store files uploaded by users. Everything is running Linux. Should we use NFS, CIFS, SMB, fuse+sftp, fuse+ftp? There are so many choices out there for network file sharing protocols, it's very hard to pick one. We basically just want to permanently mount this one share on multiple machines. Security features are less of a concern because it won't be network accessible from anywhere other than the servers mounting it. We just want it to work reliably and quickly. Which one should we use? | I vote for NFS. NFSv4.1 added the Parallel NFS (pNFS) capability, which makes parallel data access possible. I am wondering what kind of clients are using the storage; if they are only Unix-like, then I would go for NFS based on the performance figures. | {
"source": [
"https://serverfault.com/questions/14577",
"https://serverfault.com",
"https://serverfault.com/users/5248/"
]
} |
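A minimal sketch of the NFS setup described above; the export path, client subnet, server name and mount point are placeholders to adapt:
# on the storage server, /etc/exports
/srv/uploads  10.0.0.0/24(rw,sync,no_subtree_check)
# then reload the export table: exportfs -ra
# on each web server
mount -t nfs storage.example.com:/srv/uploads /var/www/uploads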
14,613 | I have Apache set up to serve several Virtual Hosts, and I would like to see how much bandwidth each site uses. I can see how much the entire server uses, but I would like more detailed reports. Most of the things I have found out there are for limiting bandwidth to virtual hosts, but I don't want to do that; I just want to see which sites are using how much bandwidth. This isn't for billing purposes, just for information. Is there an apache module I should use? Or is there some other way to do this? | The information you're after is all in the logs, so you should look at a log analyzer such as AWStats . The other option is to use Google Analytics. For analyzing the logs, here's a rough example which you can use to tell you how many MB of traffic a log file reports from the command line: cat /var/log/apache/access.log | awk '{SUM+=$10}END{print SUM/1024/1024}' | {
"source": [
"https://serverfault.com/questions/14613",
"https://serverfault.com",
"https://serverfault.com/users/684/"
]
} |
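If each VirtualHost writes its own access log (an assumption; adjust to your CustomLog layout) and the logs use the common/combined format where field 10 is the response size in bytes, the same awk trick can be looped to get a per-site total in MB:
for log in /var/log/apache/*access.log; do
    printf '%s: ' "$log"
    awk '{sum+=$10} END {print sum/1024/1024 " MB"}' "$log"
done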
14,832 | Based on “Organizational issues” — sore spots of IT? I think it would be fair to say that system administrators need to determine if a place is worth working at. There is a similar well known test by Joel for programmers . What are the 12 questions system administrators should ask at an interview in order to help them decide if it's a good place to work at? Following Joel's rules: Questions should be platform and technology agnostic Questions should elicit a simple response such as yes or no EDIT: Please post one question at a time so we can see what users are voting for. | Do you use an incident/ticket tracking system? | {
"source": [
"https://serverfault.com/questions/14832",
"https://serverfault.com",
"https://serverfault.com/users/1715/"
]
} |
14,985 | I'd never heard of anycast until a few seconds ago when I read " What are some cool or useful server/networking tricks? ". The wikipedia " Anycast " article on it is quite formal and doesn't really evoke a mental picture of how it would be used. Can someone explain in a few informal sentences what "anycast" is, how you configure it (just in a general sense), and what its benefits are (what does it make easier)? | Anycast is networking technique where the same IP prefix is advertised from multiple locations. The network then decides which location to route a user request to, based on routing protocol costs and possibly the 'health' of the advertising servers. There are several benefits to anycast. First, in steady state, users of an anycast service (DNS is an excellent example) will always connect to the 'closest' (from a routing protocol perspective) DNS server. This reduces latency, as well as providing a level of load-balancing (assuming that your consumers are evenly distributed around your network). Another advantage is ease of configuration management. Rather than having to configure different DNS servers depending on where a server/workstation is deployed (Asia, America, Europe), you have one IP address that is configured in every location. Depending on how anycast is implemented, it can also provide a level of high availability. If the advertisement of the anycast route is conditional on some sort of health check (e.g. a DNS query for a well known domain, in this example), then as soon as a server fails its route can be removed. Once the network reconverges, user requests will be seamlessly forwarded to the next closest instance of DNS, without the need for any manual intervention or reconfiguration. A final advantage is that of horizontal scaling; if you find that one server is being overly loaded, simply deploy another one in a location that would allow it to take some proportion of the overloaded server's requests. Again, as no client configuration is required, this can be done very quickly. | {
"source": [
"https://serverfault.com/questions/14985",
"https://serverfault.com",
"https://serverfault.com/users/2318/"
]
} |
15,040 | This is the canonical question for "Should I build computing hardware myself?" questions. I have put together countless PCs, but never a large server. The geek in me says build it, but the realist in me says let the manufacturer handle it when there is a problem. Ignoring the time penalty involved with the initial assembly time of a built one, which is a better solution? Have you ever run into a problem with a home-built server that would have been solved easier/quicker/cheaper by going with a manufacturer? Are there any features that manufacturers give that aren't easily attainable with a home-built server? | Buy them. And buy them from alternative sources if you need to be frugal - Craigslist, eBay, Dell Outlet, etc. If you end up building them - go with SuperMicro - great gear. But commercial servers will have better out-of-band management, better systems management, better support, etc. And if you need to pinch pennies - use third-party memory (e.g. Crucial) - it's cheaper and just as good. | {
"source": [
"https://serverfault.com/questions/15040",
"https://serverfault.com",
"https://serverfault.com/users/3552/"
]
} |
15,196 | Just the question in the title. Is there any problem? Any experience with this? | Yes, it can be done. The appropriateness of doing so is up for debate. Make sure time stays synced! This is very important. A DC with incorrect time can cause havoc. Disable and do not use snapshots. Reverting to an old snapshot in a domain with many DCs will result in massive chaos. Do not suspend/pause the domain controller. Make sure your VM server does not get overloaded. I suggest you run at least one DC within your domain on real hardware, if you have a larger network. Could you explain the snapshot chaos
point? Isn't reverting to a snapshot
going to act like restoring from
backup, i.e. it will sync recent
changes from the other DCs? The active directory is not designed to support that. Once an update has been replicated, it will not be re-replicated. Normally if you are restoring the active directory you need to go through a special procedure. ( http://technet.microsoft.com/en-us/library/cc779573.aspx ). The KB article Sam Cogan, and gharper mentioned specifically address this point. In particular, Active Directory does
not support any method that restores a
snapshot of the operating system or
the volume the operating system
resides on. This kind of method causes
an update sequence number (USN)
rollback. When a USN rollback occurs,
the replication partners of the
incorrectly restored domain controller
may have inconsistent objects in their
Active Directory databases. In this
situation, you cannot make these
objects consistent. We also do not support using "undo"
and "differencing" features in Virtual
PC on operating system images for
domain controllers that run in virtual
hosting environments. The Microsoft AD team just posted a new article about how to virtualize domain controllers which includes several recommendations. | {
"source": [
"https://serverfault.com/questions/15196",
"https://serverfault.com",
"https://serverfault.com/users/5920/"
]
} |
15,563 | How do I scan a network to find out which devices are on it? (I'd be happy with a list of MAC addresses and IPs.) For example, lets say I'm at work and want to be sure that there are no unknown devices connected to the network (especially if access is not filtered by password or MAC). DHCP logs could help, but what if I want to find devices with static IPs? Alternatively, let us say I'm at a friends house and he wants me to setup port forwarding, but doesn't know the IP of his router. Sure, a few good guesses will usually get it, but it'd be nicer to scan. | For the first scenario, look at nmap . You can scan entire subnets in one command. For example: nmap -sP 192.168.0.1/24 For the second, the router IP should show up as the gateway IP for your machine. In windows, that appears in the connection status dialog. | {
"source": [
"https://serverfault.com/questions/15563",
"https://serverfault.com",
"https://serverfault.com/users/2629/"
]
} |
15,564 | The default nofile limit for OS X user accounts seems to be about 256 file descriptors these days. I'm trying to test some software that needs a lot more connections than that open at once. On a typical Debian box running the pam limits module, I'd edit /etc/security/limits.conf to set higher limits for the user that will be running the software, but I'm mystified where to set these limits in OS X. Is there a GUI somewhere for it? Is there a config file somewhere for it? What's the tidiest way to change the default ulimits on OS X? | Under Leopard the initial process is launchd . The default ulimits of each process are inherited from launchd . For reference the default (compiled in) limits are $ sudo launchctl limit
cpu unlimited unlimited
filesize unlimited unlimited
data 6291456 unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 266 532
maxfiles 256 unlimited To change any of these limits, add a line (you may need to create the file first) to /etc/launchd.conf ; the arguments are the same as those passed to the launchctl command. For example echo "limit maxfiles 1024 unlimited" | sudo tee -a /etc/launchd.conf However, launchd has already started your login shell, so the simplest way to make these changes take effect is to restart your machine. (Use >> to append to /etc/launchd.conf.) | {
"source": [
"https://serverfault.com/questions/15564",
"https://serverfault.com",
"https://serverfault.com/users/6283/"
]
} |
15,746 | "C:\Windows\Installer" folder found on Windows Vista is taking around 1 GB of space. Can I safely delete it? Cheers | Google is our friend in this: This folder contains installer information for programs that are installed on your system (presumably via the MSI). Deleting this folder or files from it could cause problems with your installed programs or future uninstallation attempts, so MANUALLY DELETING IT IS NOT RECOMMENDED ! There seems to be one thing you can do: Using the MSIZAP.exe utility, you can clean the "orphaned" files from the installer folder, by running msizap.exe g! to clean the orphaned installer information for all the users. You can do some additional cleaning using the different options of the utility (see the link for a detailed explanation) EDIT: Since MSIZAP.exe is no longer supported by Microsoft (since June 2010) The Windows Installer Cleanup utility
(MSICUU2.exe) that was previously
referred to in this article resolved
some installation problems but
sometimes caused issues with other
programs or components that are
installed on the computer. Because of
this, the tool has been removed from
the Microsoft Download Center. You can download the Windows "Installer CleanUp Utility" package from here or here . More details are in the Wikipedia article. | {
"source": [
"https://serverfault.com/questions/15746",
"https://serverfault.com",
"https://serverfault.com/users/3611/"
]
} |
15,776 | I want to check if a specified ethX is physically up or down. How do I do that with the command line? | $ ethtool <eth?> For example: $ ethtool eth0 provides: Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000001 (1)
Link detected: yes | {
"source": [
"https://serverfault.com/questions/15776",
"https://serverfault.com",
"https://serverfault.com/users/958/"
]
} |
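For use in scripts, the link state can be checked without parsing the whole report; eth0 is just an example interface name:
ethtool eth0 | grep 'Link detected'     # prints "Link detected: yes" or "Link detected: no"
cat /sys/class/net/eth0/operstate       # "up" or "down" on reasonably recent kernels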
15,782 | I have many Linux servers (SUSE 9 & 10) used to run web services that provide data to large calculation grids. Recently we have had some difficult-to-explain outages (i.e. hardware and software logs are not showing any obvious errors) and we are starting to wonder whether the long uptime (typically 200-300 days) is the issue. Given that these servers are heavily utilised, should I consider a regular reboot cycle? | You must reboot after a kernel update (unless you are using KSplice); anything else is optional. Personally I reboot on a monthly cycle during a maintenance window to make sure the server and all services come back as expected. This way I can be reasonably certain that if I have to do an out-of-schedule reboot (i.e. a critical kernel update) the system will come back up properly. Automated monitoring of servers and services (i.e. Nagios) also goes a long way to helping this process (reboot, watch the lights go red and then hopefully all back to green). P.S. if you do reboot regularly you'll want to make sure you tune your fsck checks (i.e. set the maximal mount count between checks appropriately), otherwise a quick 2 minute reboot might take 30 minutes if the server starts fsck'ing a couple of terabytes of data. I typically set my mount count to 0 (tune2fs -c 0) and the interval between checks to 6 months or so and then manually force an fsck every once in a while and reset the count. | {
"source": [
"https://serverfault.com/questions/15782",
"https://serverfault.com",
"https://serverfault.com/users/6440/"
]
} |
16,101 | I cannot list them using dig/nslookup/host. | There are two ways, both require administrator access or trust to the DNS records: Perform a zone transfer ( AXFR ) on the domain to retrieve all records for the domain. The DNS administrator needs to explicitly allow AXFR transfers to your IP address from your chosen DNS server. You can perform such a transfer like this: dig @ns1.google.com google.com AXFR Directly view the zonefile on the relevant DNS server. You need administrator access to the DNS server for this. | {
"source": [
"https://serverfault.com/questions/16101",
"https://serverfault.com",
"https://serverfault.com/users/6658/"
]
} |
16,204 | For example, I have a simple bash file #!/bin/bash
cd ~/hello
ls How can I make it display every command before executing it? Just the opposite effect of "@echo off" in Windows batch scripting. | bash -x script or set -x in the script. You can unset the option again with set +x . If you just want to do it for a few commands you can use a subshell: (set -x; command1; command2; ...) | {
"source": [
"https://serverfault.com/questions/16204",
"https://serverfault.com",
"https://serverfault.com/users/6707/"
]
} |
16,355 | I'd like to append to the global PATH environment variable on OS X so that all user shells and GUI applications get the same PATH environment. I know I can append to the path in shell startup scripts, but those settings are not inherited by GUI applications. The only way I found so far is to redefine the PATH environment variable in /etc/launchd.conf : setenv PATH /usr/bin:/bin:/usr/sbin:/sbin:/my/path I couldn't figure out a way to actually append to PATH in launchd.conf . I'm a bit worried about this method, but so far this is the only thing that works. Is there a better way? | palmer's GUI information is correct, but there is a more maintainable way to modify the path seen by the shell. Like mediaslave said , you can edit /etc/paths , but even better you can drop a text file in /etc/paths.d/ that has a path in it and all shells will construct the path correctly. For example, on my system: $ cat /etc/paths
/usr/bin
/bin
/usr/sbin
/sbin
/usr/local/bin
$ ls /etc/paths.d
X11 git postgres
$ cat /etc/paths.d/postgres
/Library/PostgreSQL/8.4/bin
$ echo $PATH
/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/Library/PostgreSQL/8.4/bin:/usr/X11/bin:/usr/local/mysql/bin | {
"source": [
"https://serverfault.com/questions/16355",
"https://serverfault.com",
"https://serverfault.com/users/2455/"
]
} |
16,467 | What are the pros and cons between these two ways to synchronize your server? It seems to me that your server would probably not drift more than 1 second every day, so ntpdate on a crontab would be ok. But I heard you could use redundant NTP servers here http://www.pool.ntp.org/en/use.html in order to maintain synchronized time in case of failure. Do you have any suggestions? | The NTP algorithm includes information to allow you to calculate and fix the drift in your server's clock. NTPD includes the ability to use this to keep your clock in sync and will run more accurately than a clock on a computer not running NTPD. NTPD will also use several servers to improve accuracy. ntpdate does not keep any state to perform this service for you so will not provide the same kind of accuracy. It will allow you to provide it with a list of servers which it will use to attempt to provide you with a better result but this is no substitute for the sophisticated algorithms provided in NTPD that track your drift from each of the servers over time. NTPDATE corrects the system time instantaneously, which can cause problems with some software (e.g. destroying a session which now appears old). NTPD intentionally corrects the system time slowly, avoiding that problem. You can add the -g switch when starting NTPD to allow NTPD to make the first time update a big one which is more or less equivalent to running ntpdate once before starting NTPD, which at one time was recommended practice. As for security concerns, ntp servers do not connect back on uninitiated connections, which means your firewall should be able to tell that you initiated the ntp request and allow return traffic. There should be no need to leave ports open for arbitrary connections in order to get NTPD to work. From the ntpdate(8) man page: ntpdate can be run manually as necessary to set the host clock, or it
can be run from the host startup script to set the clock at boot time.
This is useful in some cases to set the clock initially before starting
the NTP daemon ntpd. It is also possible to run ntpdate from a cron
script. However, it is important to note that ntpdate with contrived
cron scripts is no substitute for the NTP daemon, which uses sophisticated algorithms to maximize accuracy and reliability while minimizing
resource use. Finally, since ntpdate does not discipline the host clock
frequency as does ntpd, the accuracy using ntpdate is limited. | {
"source": [
"https://serverfault.com/questions/16467",
"https://serverfault.com",
"https://serverfault.com/users/1322/"
]
} |
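A minimal /etc/ntp.conf using the pool servers mentioned in the question might look like this; the driftfile path varies by distribution, so treat it as an example:
driftfile /var/lib/ntp/ntp.drift
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
Start the daemon with ntpd -g so the first correction is allowed to step a badly wrong clock.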
16,661 | I have two servers that should have the same setup except for known differences. By running: find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -print | sort > allfiles.txt I can get a list of all the files on one server and compare it against the list of files on the other server. This will show me the differences in the names of the files that reside on the servers. What I really want to do is run a checksum on all the files on both of the servers and compare them to also find where the contents are different. e.g. find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -print | xargs /usr/bin/sha1sum Is this a sensible way to do this? I was thinking that rsync already has most of this functionality, but can it be used to provide the list of differences? | You're right, rsync is perfect for this. Use --itemize-changes (aka -i). Make sure you can run this as root on both sides (or some other user with full access to the machine): rsync -ani --delete / root@remotehost:/ -a is for archive, and basically makes rsync make an exact duplicate (apart from some cases involving links) -n is for dry-run, and means nothing will actually be changed (This one is IMPORTANT! :)) -i is for itemize-changes, and outputs a simple-to-understand-once-you-get-it format showing every file that needs to be updated (the syntax is explained fully in the man page under the detailed help for that trigger). --delete makes rsync delete files that exist on the destination but not the source. If you want to exclude certain paths, use commands like --exclude /var . The exclude patterns are relative to the source directory (which in this case is /, so they are effectively absolute). | {
"source": [
"https://serverfault.com/questions/16661",
"https://serverfault.com",
"https://serverfault.com/users/4872/"
]
} |
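If you specifically want the comparison done by file content (closer to the sha1sum idea in the question) rather than by size and modification time, add the -c flag, at the cost of reading every file on both sides:
rsync -anic --delete / root@remotehost:/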
16,706 | I have a scheduled backup script that makes a database dump.
How can I add the date timestamp to the file name? I am talking about Windows and CMD. | In the command prompt and batch files, you can use %date% and %time% to return the date and time respectively. Date works fine, but the time value returned contains colons, which are illegal for use in filenames, but there is a way to remove those. Use something like: COPY file.txt file_%time:~0,2%%time:~3,2%%time:~6,2%_%date:~-10,2%%date:~-7,2%%date:~-4,4%.txt This will produce a filename such as file_172215_01062009.txt Update: The comments below have interesting twists on this command as well as some potential problems you can avoid. | {
"source": [
"https://serverfault.com/questions/16706",
"https://serverfault.com",
"https://serverfault.com/users/7034/"
]
} |
16,767 | How can I check that my Python script is running with administrator rights (sudo) on a BSD-like OS? I need to display a user-friendly warning in case it is executed without admin rights. | How about this? Check if uid == 0 : [kbrandt@kbrandt-admin: ~] python -c 'import os; print os.getuid()'
196677
[kbrandt@kbrandt-admin: ~] sudo python -c 'import os; print os.getuid()'
0 | {
"source": [
"https://serverfault.com/questions/16767",
"https://serverfault.com",
"https://serverfault.com/users/4634/"
]
} |
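Inside the script itself, the check plus the user-friendly warning asked for could look like this sketch (the message wording is arbitrary):
import os
import sys

if os.getuid() != 0:
    sys.exit("This script needs administrator rights; please re-run it with sudo.")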
16,839 | I have installed Apache 2 from source on my Linux box. apachectl -k start works just fine, but how do I get Apache to start at boot time? This is on a Red Hat Linux distribution: Linux <hostname> 2.6.9-55.ELsmp #1 SMP Fri Apr 20 17:03:35 EDT 2007 i686 i686 i386 GNU/Linux | You want to add its init script to the appropriate run level. The init script is typically /etc/init.d/apache2 where you could manually run /etc/init.d/apache2 start to start it. On Gentoo you would write: rc-update add apache2 default On Ubuntu/Debian this works: sudo update-rc.d apache2 defaults On Red Hat Linux/Fedora/CentOS a little googling shows this: chkconfig --add httpd It varies a little bit from distribution to distribution , but the idea is usually the same. Basically, all these commands make a symbolic link from /etc/init.d/ to the appropriate run-level folder in /etc/ . | {
"source": [
"https://serverfault.com/questions/16839",
"https://serverfault.com",
"https://serverfault.com/users/1873/"
]
} |
17,092 | Can a multi-core CPU server be configured to allow the OS to see all the cores as a single CPU, thereby allowing the processor to function as a single CPU? A lot of modern-day servers have multi-core CPUs. Are there ways for all the CPU cores to report as 1 CPU to the OS? This would be handy for running applications that were designed for a single CPU. | What you're asking is "Can I run a single threaded application on a multi-core machine and take full advantage of all the cores?" The answer is : no A single threaded application can only run on one core and will never be able to use more resources than that single core can provide. | {
"source": [
"https://serverfault.com/questions/17092",
"https://serverfault.com",
"https://serverfault.com/users/6886/"
]
} |
17,255 | At our office, we have a local area network with a purely internal DNS setup, on which clients all named as whatever.lan . I also have a VMware environment, and on the virtual-machine-only network, I name the virtual machines whatever.vm . Currently, this network for the virtual machines isn't reachable from our local area network, but we're setting up a production network to migrate these virtual machines to, which will be reachable from the LAN. As a result, we're trying to settle on a convention for the domain suffix/TLD we apply to the guests on this new network we're setting up, but we can't come up with a good one, given that .vm , .local and .lan all have existing connotations in our environment. So, what's the best practice in this situation? Is there a list of TLDs or domain names somewhere that's safe to use for a purely internal network? | Since the previous answers to this question were written, there have been a couple of RFCs that alter the guidance somewhat. RFC 6761 discusses special-use domain names without providing specific guidance for private networks. RFC 6762 still recommends not using unregistered TLDs, but also acknowledges that there are cases where it will be done anyway. Since the commonly used .local conflicts with Multicast DNS (the main topic of the RFC), Appendix G. Private DNS Namespaces recommends the following TLDs: intranet internal private corp home lan IANA appears to recognize both RFCs but does not (currently) incorporate the names listed in Appendix G. In other words: you shouldn't do it. But when you decide to do it anyway, use one of the above names. | {
"source": [
"https://serverfault.com/questions/17255",
"https://serverfault.com",
"https://serverfault.com/users/1305/"
]
} |
17,360 | Surely someone has written a decent shell for Windows. I'm looking for a) something more or less like the ordinary Linux shell (i.e. history, completion etc.) b) something which is a simple install (easier than Cygwin, which didn't seem all that good when I tried it.) Bonus points if it's : c) Free (as in speech) d) Allows forward slashes instead of back-slashes in paths | Powershell has a SIGNIFICANT advantage over any of the other command shells. It is OBJECT ORIENTED. In cmd, bash, etc. your output from a command like dir/ls is effectively a string array. If you pipe that to another command then you have to process strings. In Powershell the dir cmdlet actually gives an array of file objects that you can pipe to another command and act on those objects via properties. Powershell is really an interactive .Net shell. Every cmdlet is actually a wrapper around a set of .Net objects. All the next generation of management interfaces coming from Microsoft are actually implemented in Powershell and the GUI interfaces are a wrapper around the Powershell commands, similar to the "Unix way" of doing GUI admin tools. Here's an example from an Active Directory perspective... You can use the cmd.exe shell and a utility like dsquery.exe to do LDAP queries for objects. But you get a list of distinguished names back. You can then process those DNs as strings, etc. In Powershell v1 or v2, you can install and use a Quest snap-in that gives you tools like get-QADUser . When you query the AD with get-QADuser the return values are a collection of User objects. So a command like: $users = get-QADUser svc_* Will return a collection that you can process by property; for example, to sort them by HomeDirectory you would use: $HmDirs = get-QADUser svc_* | sort-object HomeDirectory There is no other shell out there that has this capability for Windows. Powershell is the way to go, absolutely. Update: PowerShell v2 is now released as part of Windows Management Framework , but if you want to get the Microsoft AD cmdlets, you have to be running Server 2008 R2 or Windows 7; otherwise it's still the Quest cmdlets. | {
"source": [
"https://serverfault.com/questions/17360",
"https://serverfault.com",
"https://serverfault.com/users/7355/"
]
} |
17,364 | Under Linux, how can I find all the files and directories that are writable (or, actually, not writable) by a particular user? Edit: To clarify, I meant under a particular subdir, not systemwide. And yes, that means all the permutations and combinations of user, group and world writability that would allow that user to write. I know what the question entails semantically, I was hoping for a one- or few-liner to execute to get a list of these files. | Use the 'find' command if you have findutils version 4.3.0 or greater installed: For all files under the current directory that are writable by the current user: find . -writable For all files under the current directory that are not writable by the current user: find . ! -writable According to the man page: This test makes use of the access(2)
system call, and so can be fooled by
NFS servers which do UID mapping (or
root-squashing), since many systems
implement access(2) in the client’s
kernel and so cannot make use of the
UID mapping information held on the
server. | {
"source": [
"https://serverfault.com/questions/17364",
"https://serverfault.com",
"https://serverfault.com/users/4153/"
]
} |