Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict)
347,331
When I create a partition using parted (mkpart) with the unit set to bytes, it creates a partition that is 16896 bytes smaller than the size I give. Is there any specific reason why 16896 bytes are subtracted from the requested partition size (in bytes)? After creating a partition I check its size with: # parted /dev/sda unit B print. Note: these partitions are used in RAID formation. Also, I observed this happens only if it is the first partition created on the disk.
Don't bother with uninstalling or fixing bloatware. Just reimage the computers. In fact it's pretty easy to set up a reference image, sysprep, capture, and deploy it using WDS + MDT. The same tools also cover the various driver packages: trust me, you're not the first person to think of this stuff; it's been solved already. Profiles can be transferred with USMT. Mapped drives are best done with a logon script. Outlook 2007+ with Exchange 2007+ can use Autodiscover. Install updates with WSUS (fully automated at install with a simple script). Keys and activation can be managed with scripts or VAMT. Fair warning: if you don't know about any of this stuff already, you've got one heck of a learning curve to get through and you're way behind the times. If you really only have a handful of computers it probably isn't worth the time to set this stuff up now, but if it's more than a dozen it's worth the time. Also, future hardware refreshes aren't nearly so painful. Bonus: many of these skills allow you to be more efficient in your routine tasks and help prevent problems.
{ "source": [ "https://serverfault.com/questions/347331", "https://serverfault.com", "https://serverfault.com/users/33217/" ] }
347,582
I am using func to perform parallel commands on our servers. The other day, we had an issue when a service restart of puppet via func made all our servers hit our puppetmaster at the same time. My question: how can I execute the same exact command on a set of servers while adding a delay before it's executed on each individual server? E.g.: random_delay && service puppet restart I am interested in the random_delay part of the command.
sleep $((RANDOM % MAXWAIT)) where MAXWAIT is the maximum desired delay in seconds.
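For example, to stagger the puppet restarts from the question over a window of up to ten minutes, the command might look like the sketch below. The 600-second cap is an arbitrary choice for illustration; also note that bash's RANDOM only ranges from 0 to 32767, so a MAXWAIT above that is effectively capped.
MAXWAIT=600   # maximum delay in seconds - pick whatever suits your environment
sleep $((RANDOM % MAXWAIT)) && service puppet restart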
{ "source": [ "https://serverfault.com/questions/347582", "https://serverfault.com", "https://serverfault.com/users/53736/" ] }
347,606
Folks please help - I am a newb with a major headache at hand (perfect storm situation). I have three 1 TB HDDs on my Ubuntu 11.04 box configured as software RAID 5. The data had been copied weekly onto another, separate, off-the-computer hard drive until that completely failed and was thrown away. A few days back we had a power outage and after rebooting my box wouldn't mount the raid. In my infinite wisdom I entered the mdadm --create -f... command instead of mdadm --assemble and didn't notice the travesty that I had done until after. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back I saw that the array is successfully up and running but the raid is not; I mean the individual drives are partitioned (partition type fd ) but the md0 device is not. Realizing in horror what I have done I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the hard drives. Could someone PLEASE help me out with this - the data that's on the drives is very important and unique: ~10 years of photos, docs, etc. Is it possible that specifying the participating hard drives in the wrong order can make mdadm overwrite them? When I do mdadm --examine --scan I get something like ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0 Interestingly enough, the name used to be 'raid' and not the host name with :0 appended. Here are the 'sanitized' config entries: DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1 CREATE owner=root group=disk mode=0660 auto=yes HOMEHOST <system> MAILADDR root ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b Here is the output from mdstat cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[0] sdf1[3] sde1[1] 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> fdisk shows the following: fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bf62e Device Boot Start End Blocks Id System /dev/sda1 * 1 9443 75846656 83 Linux /dev/sda2 9443 9730 2301953 5 Extended /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de8dd Device Boot Start End Blocks Id System /dev/sdb1 1 91201 732572001 8e Linux LVM Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00056a17 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 8e Linux LVM Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ca948 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes 255 heads, 63
sectors/track, 152001 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/dm-0 doesn't contain a valid partition table Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x93a66687 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6edc059 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/md0: 2000.4 GB, 2000401989632 bytes 2 heads, 4 sectors/track, 488379392 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Per suggestions I did clean up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me to revive at least some of the data? Can someone tell me what mdadm --create does during the sync that destroys the data, so I can write a tool to undo whatever was done? After re-creating the raid I ran fsck.ext4 /dev/md0 and here is the output root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Per Shane's suggestion I tried root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=128 blocks, Stripe width=256 blocks 122101760 inodes, 488379392 blocks 24418969 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 14905 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 and ran fsck.ext4 with every backup block, but all returned the following: root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Invalid argument while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Any suggestions? Regards!
Ok - something was bugging me about your issue, so I fired up a VM to dive into the behavior that should be expected. I'll get to what was bugging me in a minute; first let me say this: Back up these drives before attempting anything!! You may have already done damage beyond what the resync did; can you clarify what you meant when you said: Per suggestions I did clean up the superblocks and re-created the array with --assume-clean option but with no luck at all. If you ran a mdadm --misc --zero-superblock , then you should be fine. Anyway, scavenge up some new disks and grab exact current images of them before doing anything at all that might do any more writing to these disks. dd if=/dev/sdd of=/path/to/store/sdd.img That being said.. it looks like data stored on these things is shockingly resilient to wayward resyncs. Read on, there is hope, and this may be the day that I hit the answer length limit. The Best Case Scenario I threw together a VM to recreate your scenario. The drives are just 100 MB so I wouldn't be waiting forever on each resync, but this should be a pretty accurate representation otherwise. Built the array as generically and default as possible - 512k chunks, left-symmetric layout, disks in letter order.. nothing special. root@test:~# mdadm --create /dev/md0 --chunk=512 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. root@test:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[3] sdc1[1] sdb1[0] 203776 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> So far, so good; let's make a filesystem, and put some data on it. root@test:~# mkfs.ext4 /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=1024 (log=0) Fragment size=1024 (log=0) Stride=512 blocks, Stripe width=1024 blocks 51000 inodes, 203776 blocks 10188 blocks (5.00%) reserved for the super user First data block=1 Maximum filesystem blocks=67371008 25 block groups 8192 blocks per group, 8192 fragments per group 2040 inodes per group Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729 Writing inode tables: done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 30 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. root@test:~# mkdir /mnt/raid5 root@test:~# mount /dev/md0 /mnt/raid5 root@test:~# echo "data" > /mnt/raid5/datafile root@test:~# dd if=/dev/urandom of=/mnt/raid5/randomdata count=10000 10000+0 records in 10000+0 records out 5120000 bytes (5.1 MB) copied, 0.706526 s, 7.2 MB/s root@test:~# sha1sum /mnt/raid5/randomdata 847685a5d42524e5b1d5484452a649e854b59064 /mnt/raid5/randomdata Ok. We've got a filesystem and some data ("data" in datafile , and 5MB worth of random data with that SHA1 hash in randomdata ) on it; let's see what happens when we do a re-create. 
root@test:~# umount /mnt/raid5 root@test:~# mdadm --stop /dev/md0 mdadm: stopped /dev/md0 root@test:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] unused devices: <none> root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: /dev/sdb1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 21:07:06 2012 mdadm: /dev/sdc1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 21:07:06 2012 mdadm: /dev/sdd1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 21:07:06 2012 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid5 sdd1[2] sdc1[1] sdb1[0] 203776 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> The resync finished very quickly with these tiny disks, but it did occur. So here's what was bugging me from earlier; your fdisk -l output. Having no partition table on the md device is not a problem at all, it's expected. Your filesystem resides directly on the fake block device with no partition table. root@test:~# fdisk -l ... Disk /dev/md1: 208 MB, 208666624 bytes 2 heads, 4 sectors/track, 50944 cylinders, total 407552 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md1 doesn't contain a valid partition table Yeah, no partition table. But... root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) /dev/md1: clean, 12/51000 files, 12085/203776 blocks Perfectly valid filesystem, after a resync. So that's good; let's check on our data files: root@test:~# mount /dev/md1 /mnt/raid5/ root@test:~# cat /mnt/raid5/datafile data root@test:~# sha1sum /mnt/raid5/randomdata 847685a5d42524e5b1d5484452a649e854b59064 /mnt/raid5/randomdata Solid - no data corruption at all! But this is with the exact same settings, so nothing was mapped differently between the two RAID groups. Let's drop this thing down before we try to break it. root@test:~# umount /mnt/raid5 root@test:~# mdadm --stop /dev/md1 Taking a Step Back Before we try to break this, let's talk about why it's hard to break. RAID 5 works by using a parity block that protects an area the same size as the block on every other disk in the array. The parity isn't just on one specific disk, it's rotated around the disks evenly to better spread read load out across the disks in normal operation. The XOR operation to calculate the parity looks like this: DISK1 DISK2 DISK3 DISK4 PARITY 1 0 1 1 = 1 0 0 1 1 = 0 1 1 1 1 = 0 So, the parity is spread out among the disks. DISK1 DISK2 DISK3 DISK4 DISK5 DATA DATA DATA DATA PARITY PARITY DATA DATA DATA DATA DATA PARITY DATA DATA DATA A resync is typically done when replacing a dead or missing disk; it's also done on mdadm create to assure that the data on the disks aligns with what the RAID's geometry is supposed to look like. In that case, the last disk in the array spec is the one that is 'synced to' - all of the existing data on the other disks is used for the sync. So, all of the data on the 'new' disk is wiped out and rebuilt; either building fresh data blocks out of parity blocks for what should have been there, or else building fresh parity blocks. 
What's cool is that the procedure for both of those things is the exact same: an XOR operation across the data from the rest of the disks. The resync process in this case may have in its layout that a certain block should be a parity block, and think it's building a new parity block, when in fact it's re-creating an old data block. So even if it thinks it's building this: DISK1 DISK2 DISK3 DISK4 DISK5 PARITY DATA DATA DATA DATA DATA PARITY DATA DATA DATA DATA DATA PARITY DATA DATA ...it may just be rebuilding DISK5 from the layout above. So, it's possible for data to stay consistent even if the array's built wrong. Throwing a Monkey in the Works (not a wrench; the whole monkey) Test 1: Let's make the array in the wrong order! sdc , then sdd , then sdb .. root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sdb1 mdadm: /dev/sdc1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:06:34 2012 mdadm: /dev/sdd1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:06:34 2012 mdadm: /dev/sdb1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:06:34 2012 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid5 sdb1[3] sdd1[1] sdc1[0] 203776 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> Ok, that's all well and good. Do we have a filesystem? root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md1 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Nope! Why is that? Because while the data's all there, it's in the wrong order; what was once 512KB of A, then 512KB of B, A, B, and so forth, has now been shuffled to B, A, B, A. The disk now looks like jibberish to the filesystem checker, it won't run. The output of mdadm --misc -D /dev/md1 gives us more detail; It looks like this: Number Major Minor RaidDevice State 0 8 33 0 active sync /dev/sdc1 1 8 49 1 active sync /dev/sdd1 3 8 17 2 active sync /dev/sdb1 When it should look like this: Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 33 1 active sync /dev/sdc1 3 8 49 2 active sync /dev/sdd1 So, that's all well and good. We overwrote a whole bunch of data blocks with new parity blocks this time out. Re-create, with the right order now: root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: /dev/sdb1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:11:08 2012 mdadm: /dev/sdc1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:11:08 2012 mdadm: /dev/sdd1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:11:08 2012 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. 
root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) /dev/md1: clean, 12/51000 files, 12085/203776 blocks Neat, there's still a filesystem there! Still got data? root@test:~# mount /dev/md1 /mnt/raid5/ root@test:~# cat /mnt/raid5/datafile data root@test:~# sha1sum /mnt/raid5/randomdata 847685a5d42524e5b1d5484452a649e854b59064 /mnt/raid5/randomdata Success! Test 2 Ok, let's change the chunk size and see if that gets us some brokenness. root@test:~# umount /mnt/raid5 root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 root@test:~# mdadm --create /dev/md1 --chunk=64 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: /dev/sdb1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:21:19 2012 mdadm: /dev/sdc1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:21:19 2012 mdadm: /dev/sdd1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:21:19 2012 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md1 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Yeah, yeah, it's hosed when set up like this. But, can we recover? root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: /dev/sdb1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:21:51 2012 mdadm: /dev/sdc1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:21:51 2012 mdadm: /dev/sdd1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:21:51 2012 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) /dev/md1: clean, 12/51000 files, 12085/203776 blocks root@test:~# mount /dev/md1 /mnt/raid5/ root@test:~# cat /mnt/raid5/datafile data root@test:~# sha1sum /mnt/raid5/randomdata 847685a5d42524e5b1d5484452a649e854b59064 /mnt/raid5/randomdata Success, again! Test 3 This is the one that I thought would kill data for sure - let's do a different layout algorithm! root@test:~# umount /mnt/raid5 root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --layout=right-asymmetric --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: /dev/sdb1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:32:34 2012 mdadm: /dev/sdc1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:32:34 2012 mdadm: /dev/sdd1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:32:34 2012 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. 
root@test:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid5 sdd1[3] sdc1[1] sdb1[0] 203776 blocks super 1.2 level 5, 512k chunk, algorithm 1 [3/3] [UUU] unused devices: <none> root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... Superblock has an invalid journal (inode 8). Scary and bad - it thinks it found something and wants to do some fixing! Ctrl + C ! Clear<y>? cancelled! fsck.ext4: Illegal inode number while checking ext3 journal for /dev/md1 Ok, crisis averted. Let's see if the data's still intact after resyncing with the wrong layout: root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: /dev/sdb1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:33:02 2012 mdadm: /dev/sdc1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:33:02 2012 mdadm: /dev/sdd1 appears to be part of a raid array: level=raid5 devices=3 ctime=Sat Jan 7 23:33:02 2012 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) /dev/md1: clean, 12/51000 files, 12085/203776 blocks root@test:~# mount /dev/md1 /mnt/raid5/ root@test:~# cat /mnt/raid5/datafile data root@test:~# sha1sum /mnt/raid5/randomdata 847685a5d42524e5b1d5484452a649e854b59064 /mnt/raid5/randomdata Success! Test 4 Let's also just prove that that superblock zeroing isn't harmful real quick: root@test:~# umount /mnt/raid5 root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 root@test:~# mdadm --misc --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1 root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) /dev/md1: clean, 12/51000 files, 12085/203776 blocks root@test:~# mount /dev/md1 /mnt/raid5/ root@test:~# cat /mnt/raid5/datafile data root@test:~# sha1sum /mnt/raid5/randomdata 847685a5d42524e5b1d5484452a649e854b59064 /mnt/raid5/randomdata Yeah, no big deal. Test 5 Let's just throw everything we've got at it. All 4 previous tests, combined. Wrong device order Wrong chunk size Wrong layout algorithm Zeroed superblocks (we'll do this between both creations) Onward! root@test:~# umount /mnt/raid5 root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 root@test:~# mdadm --misc --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1 root@test:~# mdadm --create /dev/md1 --chunk=64 --level=5 --raid-devices=3 --layout=right-symmetric /dev/sdc1 /dev/sdd1 /dev/sdb1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid5 sdb1[3] sdd1[1] sdc1[0] 204672 blocks super 1.2 level 5, 64k chunk, algorithm 3 [3/3] [UUU] unused devices: <none> root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md1 The superblock could not be read or does not describe a correct ext2 filesystem. 
If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> root@test:~# mdadm --stop /dev/md1 mdadm: stopped /dev/md1 The verdict? root@test:~# mdadm --misc --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1 root@test:~# mdadm --create /dev/md1 --chunk=512 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. root@test:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid5 sdd1[3] sdc1[1] sdb1[0] 203776 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> root@test:~# fsck.ext4 /dev/md1 e2fsck 1.41.14 (22-Dec-2010) /dev/md1: clean, 13/51000 files, 17085/203776 blocks root@test:~# mount /dev/md1 /mnt/raid5/ root@test:~# cat /mnt/raid5/datafile data root@test:~# sha1sum /mnt/raid5/randomdata 847685a5d42524e5b1d5484452a649e854b59064 /mnt/raid5/randomdata Wow. So, it looks like none of these actions corrupted data in any way. I was quite surprised by this result, frankly; I expected moderate odds of data loss on the chunk size change, and some definite loss on the layout change. I learned something today. So .. How do I get my data?? As much information as you have about the old system would be extremely helpful to you. If you know the filesystem type, if you have any old copies of your /proc/mdstat with information on drive order, algorithm, chunk size, and metadata version. Do you have mdadm's email alerts set up? If so, find an old one; if not, check /var/spool/mail/root . Check your ~/.bash_history to see if your original build is in there. So, the list of things that you should do: Back up the disks with dd before doing anything!! Try to fsck the current, active md - you may have just happened to build in the same order as before. If you know the filesystem type, that's helpful; use that specific fsck tool. If any of the tools offer to fix anything, don't let them unless you're sure that they've actually found the valid filesystem! If an fsck offers to fix something for you, don't hesitate to leave a comment to ask whether it's actually helping or just about to nuke data. Try building the array with different parameters. If you have an old /proc/mdstat , then you can just mimic what it shows; if not, then you're kinda in the dark - trying all of the different drive orders is reasonable, but checking every possible chunk size with every possible order is futile. For each, fsck it to see if you get anything promising. So, that's that. Sorry for the novel, feel free to leave a comment if you have any questions, and good luck! footnote: under 22 thousand characters; 8k+ shy of the length limit
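To make that last suggestion concrete, here is a rough sketch of the drive-order search, in the spirit of the tests above. It is not the author's procedure verbatim: it assumes you are working against dd images (or are otherwise prepared to re-create the array repeatedly), the device names and the 512k chunk size are placeholders to be replaced with your own, --assume-clean is used so no resync rewrites anything while probing, and fsck.ext4 -n only checks read-only.
for order in "sdb1 sdc1 sdd1" "sdb1 sdd1 sdc1" "sdc1 sdb1 sdd1" "sdc1 sdd1 sdb1" "sdd1 sdb1 sdc1" "sdd1 sdc1 sdb1"; do
    set -- $order
    # --assume-clean skips the initial resync, so nothing gets rewritten while we probe
    yes | mdadm --create /dev/md1 --assume-clean --chunk=512 --level=5 --raid-devices=3 /dev/$1 /dev/$2 /dev/$3
    # read-only check; a clean result means this order is worth a closer look
    fsck.ext4 -n /dev/md1 && echo "promising order: $order"
    mdadm --stop /dev/md1
done
If one order comes back clean, re-create the array with that order (and the original chunk size and layout, if known) and mount it read-only to verify the data before doing anything further.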
{ "source": [ "https://serverfault.com/questions/347606", "https://serverfault.com", "https://serverfault.com/users/106244/" ] }
347,620
I downloaded the AWS CloudWatch command line api and I have also set the env_path variable that is AWS_CLOUDWATCH_HOME=local/usr/CloudWatch . But when I run mon-cmd , I get command not found error in the console. I am working on ubuntu 10.04 server which is a EC2 instance. It's been to couple of days I am struck with this problem, in spite of setting the path variables correctly I am facing this problem. Kindly help me out
{ "source": [ "https://serverfault.com/questions/347620", "https://serverfault.com", "https://serverfault.com/users/74544/" ] }
348,052
Recently, I put more RAM into my server and now I have a total of 24GB of RAM. Originally, I set up the OS with a 2GB swap size. /dev/sdc1 1 281 2257101 82 Linux swap / Solaris /dev/sdc2 * 282 60801 486126900 83 Linux 2GB is allocated for swap currently, but from reading around it seems that is not much. For a system with 24GB, I am thinking of allocating at least 10GB of swap. My questions are: Can I do it while the OS is running? Do I have to reinstall? I am using OpenSuse 11.3
You decided to create a separate swap partition upon installation. You can't resize it online - even an offline resize is going to take a considerable amount of time and carries the potential risk of damaging your subsequent filesystem on /dev/sdc2. The easiest way to work around this is to either create a new swap partition on a different disk you don't currently use (or can afford to take offline for re-partitioning) or simply use a swap file within an existing filesystem (which comes with some minor performance penalty due to the filesystem overhead). The general procedure to add a swap partition/file: create either a new partition of type 82h or a new 8 GB file using dd if=/dev/zero of=/swapfile bs=1M count=8192 initialize it using mkswap /swapfile or mkswap /dev/sdXX use swapon /swapfile or swapon /dev/sdXX respectively to enable your new swap space on the fly add an entry to /etc/fstab to make sure your new swap space gets activated upon reboot (a worked sequence of these commands is sketched below). Your current swap partition remains in use; if you want to get rid of it for the sake of simplicity, just use swapoff /dev/sdc1 to disable it for the moment and remove the reference in /etc/fstab
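Putting the swap-file variant together, the whole sequence might look like this. The path /swapfile and the 8 GB size are just the examples used above, and the chmod step is common practice rather than strictly required:
dd if=/dev/zero of=/swapfile bs=1M count=8192     # create the 8 GB file
chmod 600 /swapfile                               # keep other users from reading swapped-out pages
mkswap /swapfile                                  # initialize it as swap space
swapon /swapfile                                  # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # activate it on future boots
swapon -s                                         # verify the new swap space is listed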
{ "source": [ "https://serverfault.com/questions/348052", "https://serverfault.com", "https://serverfault.com/users/105496/" ] }
348,482
I have files with invalid characters in their names, like this: 009_-_�%86ndringshåndtering.html It is an Æ where something has gone wrong in the filename. Is there a way to just remove all invalid characters? Or could tr be used somehow? echo "009_-_�%86ndringshåndtering.html" | tr ???
One way would be with sed: mv 'file' $(echo 'file' | sed -e 's/[^A-Za-z0-9._-]/_/g') Replace file with your filename, of course. This will replace anything that isn't a letter, number, period, underscore, or dash with an underscore. You can add or remove characters to keep as you like, and/or change the replacement character to anything else, or nothing at all.
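If a whole directory is affected, the same substitution can be wrapped in a loop. A quick sketch, assuming the files all end in .html and that no two names clean up to the same result (collisions are not handled here):
for f in *.html; do
    clean=$(echo "$f" | sed -e 's/[^A-Za-z0-9._-]/_/g')   # same character whitelist as above
    [ "$f" = "$clean" ] || mv -- "$f" "$clean"             # only rename if something changed
done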
{ "source": [ "https://serverfault.com/questions/348482", "https://serverfault.com", "https://serverfault.com/users/34187/" ] }
348,493
When you set up a site on IIS, it defaults the worker process to recycle every 1740 minutes (29 hours). Why an odd number like 29 hours and not, for example, 24 or 48 hours?
At Tech Ed 2003, the presenter was asked this question, and the answer was that they wanted an irregular cycle to prevent it from occurring on a daily boundary (e.g. to distinguish it from other daily tasks scheduled on the server / domain). The site here (link now dead) speculated: ... (29 is the) first prime after 24, allowing it to have the least chance of occurring in a regular pattern with any other server process; easing the investigation into problems. Another site appears to confirm this: (Wade Hilmo) suggested 29 hours for the simple reason that it’s the smallest prime number over 24. He wanted a staggered and non-repeating pattern that doesn’t occur more frequently than once per day.
{ "source": [ "https://serverfault.com/questions/348493", "https://serverfault.com", "https://serverfault.com/users/1671/" ] }
348,912
I'm interested in finding out what people's experiences with standard usernames are. I've always been in places that used {firstInitial}{lastname} (sometimes with a length limit). Now I have users who want {firstname}.{lastname} - and it has come up that the period may cause problems. Specifically: What is the best username length limit to use to maintain compatibility across all uses? What characters should be avoided? UPDATE: The reason I didn't mention specifics is that I wanted to be general enough to handle anything that might come up in the future. However, that may be too general a requirement (anything can happen, right?). This is our environment: Ubuntu Server Lucid Lynx 10.04 LTS, Red Hat Enterprise Linux 5.6 and up, Windows Server 2003 and Windows 2000 Server (with Active Directory in Windows 2000 Native Mode), Zimbra 7.x for mail, and OpenLDAP in the near future. UPDATE: I should mention (for completeness) that I saw this question (though it didn't answer my asked question) and also this web post, both of which were very informative.
This is a chronic problem with large Identity Management systems attempting to glue together heterogeneous systems. Invariably, you'll be limited to the lowest common denominator, which all too often is an 8-character ASCII-alpha-numeric limit thanks to some (probably legacy) Unix-like system somewhere in the bowels of the datacenter. Those fancy modern systems that can take arbitrary-length UTF8 usernames are unlikely to get used. I spent 7 years at an institution of higher education where we had to figure out 8-character usernames for 5000 new students every year. We had managed to come up with unique names for 15 years of students by the time I left. This can be done, Mr. smitj510. Things that will make your life immeasurably easier: Figure out what your lowest-common-denominator is, which requires analyzing every part of your identity-management system to discover what the limits are. That old Solaris 7 system is forcing the 8-character limit. Critical applications that use identity data have their own limits you will have to consider. Perhaps they expect user data from LDAP to conform to a unique-to-them 'standard'. Perhaps the authentication database they use can only handle certain formatted data. Perhaps that Windows-compatible system still uses SAMAccountName two decades after that stopped being a good idea. Have a database table with a list of the One True Identifier (that 8-character account-name), with links/fields listing alternate ID's like firstname.lastname or anything else that might come up. Off-the-shelf software can do some really weird and IDM-unfriendly things like use a numerical ID for account name, or auto-generate account IDs based on profile data. All that goes into the database table too. This also helps people with non-[a-z|0-9] characters in their names like Harry O'Neil, or non-ASCII ones like Alžbêta. When you build your account synchronization processes, leverage that database table to ensure that the right accounts are getting the right updates. When names change (marriage, divorce, others) you want those changes to propagate to the right places. Configure the actual identity databases themselves to prevent local changes where possible, and business processes to strongly discourage that when it isn't possible. Rely on the central account-sync process for everything you can. Leverage alias systems wherever you can, such as in email. Consider the 8-char ID immutable, since changing that field can trigger a LOT of heart-ache among IT staff as accounts have to be recreated. This suggests an account-ID not derived from name data, since marriage/divorce/court-order can change the name-data over time. Have a system in place for exceptions, since there will always be some. Horrible divorce and that name-data generated 8-char UID brings wrenching memories every time you have to enter it? Be nice to your users and allow a mechanism for these changes, but keep it quiet. Do what you can to allow multiple username logins in the systems where that's an option. Some people like their 8-character uid, others like [email protected]. Be flexible, make friends. Sometimes this requires fronting your web-based systems with a single-sign-on framework like CAS, or leveraging SAML. You will be surprised at how many off-the-shelf systems can support SSO frameworks like this, so don't be discouraged. Which is to say, treat it like a databasing problem because that's what it is.
Pick a primary key for maximum compatibility with your systems (likely 8 characters), build a lookup-table to allow systems to translate local ID's to the primary key, and engineer your data synchronization systems to handle various IDs.
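For illustration only, here is a rough bash sketch of the kind of account-name generator described above — surname plus first initial, truncated to 8 characters, with a numeric suffix on collision. The naming scheme and the getent lookup are assumptions for the sketch, not part of the original answer:

```bash
#!/bin/bash
# Build an 8-char candidate from surname + first initial, lowercase, a-z0-9 only,
# then append a counter until it no longer collides with an existing account.
make_uid() {
    local first=$1 last=$2
    local base candidate n=0
    base=$(echo "${last:0:7}${first:0:1}" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9')
    candidate=${base:0:8}
    while getent passwd "$candidate" >/dev/null; do
        n=$((n + 1))
        candidate="${base:0:$((8 - ${#n}))}${n}"
    done
    echo "$candidate"
}

make_uid "John" "Smith"      # e.g. smithj
make_uid "Jane" "Smithson"   # e.g. smithsoj, or smithso1 on collision
```

In a real deployment the collision check would run against the central lookup table rather than the local passwd database.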
{ "source": [ "https://serverfault.com/questions/348912", "https://serverfault.com", "https://serverfault.com/users/8263/" ] }
348,916
For whatever reason, I am not able to include the following line in my httpd.conf: AuthBasicProvider file ldap I keep getting the following error: Unknown Authn provider: ldap Apache is compiled from source with : --enable-authnz-ldap --enable-ldap What other compile-time options should I pass? Building this for svn/ldap servers, compiling support in instead of dso. Thanks! V
{ "source": [ "https://serverfault.com/questions/348916", "https://serverfault.com", "https://serverfault.com/users/76922/" ] }
349,046
When doing a puppet agent call from a new image, I'm getting a err: Could not find class custommod error. The module itself is in /etc/puppet/modules/custommod same as all of the other modules we're calling, but this one is obstinante. [site.pp] node /clunod-wk\d+\.sub\.example\.local/ { include base include curl include custommod class{ "custommod::apps": frontend => "false} [...] } When the puppetmaster is run with debug output, it clearly finding the information for base and curl: debug: importing '/etc/puppet/modules/base/manifests/init.pp' in environment production debug: Automatically imported base from base into production debug: importing '/etc/puppet/modules/curl/manifests/init.pp' in environment production debug: Automatically imported curl from curl into production err: Could not find class custommod for clunod-wk0130.sub.example.local at /etc/puppet/manifests/site.pp:84 on node clunod-wk0130.sub.example.local Line 84 is include custommod An abbreviated directory and file structure: /etc/puppet |- manifests | |- site.pp | |- modules |- base | |- manifests | |- init.pp | |- curl | |- manifests | |- init.pp | |- custommod |- files | |- apps | |- [...] | |- manifests |- init.pp |- apps.pp I did check spelling :} The content of init.pp in the custommod directory is completely unremarkable: class custommod { } The intent is to create an empty class for the apps.pp file, which is where the meat is. class custommod::apps { [lots of stuff] } Only, it's never getting to the apps file. If I comment out the include custommod , the above error is generated on the class{ "custommod::apps": frontend => "false} line instead. What am I missing in my hunt to find out how this error is being generated? I need to note that this repo works just fine if it is run locally via puppet apply .
So... this is a bit embarrassing, but... Environments. Right there in my /etc/puppet.conf file is this: [master] manifest=$confdir/manifests/site.pp modulepath=$confdir/environments/$environment/modules:$confdir/modules After throwing strace at it to figure out where it was hunting for files, I noticed something. It was looking for custommod under /etc/puppet/environments/production/modules , and since there was a directory there (empty), it did not then go check /etc/puppet/modules . Apparently when importing a module it checks for directory-presence, rather than file-presence (init.pp). Remove that empty directory, things start working. Run the puppet agent using a different environment, things start working. Moral of the story: Puppet Environment paths do not act like bash $PATH.
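A couple of commands that would have shortened the strace session — a hedged sketch, assuming a similarly laid out /etc/puppet and a Puppet version that supports --configprint:

```bash
# What modulepath does the master actually resolve?
puppet master --configprint modulepath

# Any per-environment module directories that exist but are empty?
# (These shadow /etc/puppet/modules and produce "Could not find class".)
find /etc/puppet/environments/*/modules -mindepth 1 -maxdepth 1 -type d -empty

# Re-run the agent against another environment to confirm the diagnosis
puppet agent --test --environment testing
```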
{ "source": [ "https://serverfault.com/questions/349046", "https://serverfault.com", "https://serverfault.com/users/3038/" ] }
349,454
I'm half way through writing a nagios script and I've hit an annoyance with SSH. According to the man page: -q Quiet mode. Causes all warning and diagnostic messages to be suppressed. Yet if I enable the quiet flag and then pass an invalid port, I still get an error: $ ssh user@localhost -q -p test Bad port 'test' This is a problem, because that will make that message the first line out and that's what is grabbed by Nagios. I need to output something like "Warning|SSH error" after picking up on a != 0 exit code from ssh, but the first line I can output on is going to be line 2. How can I make SSH TRULY quiet? Note: I wasn't sure whether to post this question on serverfault, on superuser or on stackoverflow. I went with serverfault as the user base are probably most experienced with cli SSH and cli scripting workarounds.
ssh user@localhost -q -p test 2> /dev/null will redirect stderr to /dev/null.
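As a minimal sketch of the Nagios plugin wrapper described in the question — host, port and the OK/WARNING strings are placeholders, not a prescribed format:

```bash
#!/bin/bash
# Discard ssh's stderr so the first line of output is always ours,
# then map the exit status onto Nagios-style output and exit codes.
HOST=${1:-localhost}
PORT=${2:-22}

if ssh -q -p "$PORT" "user@$HOST" true 2>/dev/null; then
    echo "OK|SSH to $HOST:$PORT succeeded"
    exit 0
else
    echo "WARNING|SSH error connecting to $HOST:$PORT"
    exit 1
fi
```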
{ "source": [ "https://serverfault.com/questions/349454", "https://serverfault.com", "https://serverfault.com/users/87855/" ] }
349,460
I have millions of files in a Amazon S3 bucket and I'd like to move these files to other buckets and folders with minimum cost or no cost if possible. All buckets are in the same zone. How could I do it?
Millions is a big number - I'll get back to that later. Regardless of your approach, the underlying mechanism needs to be copying directly from one bucket to another - in this way (since your buckets are in the same region) you do not incur any charge for bandwidth. Any other approach is simply inefficient (e.g. downloading and reuploading the files). Copying between buckets is accomplished by using 'PUT copy' - that is a PUT request that includes the 'x-amz-copy-source' header - I believe this is classed as a COPY request. This will copy the file and by default the associated meta-data. You must include a 'x-amz-acl' with the correct value if you want to set the ACL at the same time (otherwise, it will default to private). You will be charged for your COPY requests ($0.01/1,000 requests). You can delete the unneeded files after they have been copied (DELETE requests are not charged). (One point I am not quite clear on is whether or not a COPY request also incurs the charge of a GET request, as the object must first be fetched from the source bucket - if it does, the charge will be an additional $0.01/10,000 requests). The above charges are seemingly unavoidable - for a million objects you are looking at around $10 (or $11). Since in the end you must actually create the files on the destination bucket, other approaches (e.g. tar-gzipping the files, Amazon Import/Export, etc) will not get around this cost. None the less, it might be worth your while contacting Amazon if you have more than a couple million objects to transfer. Given the above (unavoidable price), the next thing to look into is time, which will be a big factor when copying 'millions of files'. All tools that can perform the direct copy between buckets will incur the same charge. Unfortunately, you require one request per file (to copy), one request to delete, and possibly one request to read the ACL data (if your files have varied ACLs). The best speed will come from whatever can run the most parallel operations. There are some command line approaches that might be quite viable: s3cmd-modification (that specific pull request) includes parallel cp and mv commands and should be a good option for you. The AWS console can perform the copy directly - I can't speak for how parallel it is though. Tim Kay's aws script can do the copy - but it is not parallel - you will need to script it to run the full copy you want (probably not the best option in this case - although, it is a great script). CloudBerry S3 Explorer , Bucket Explorer , and CloudBuddy should all be able to perform the task, although I don't know how the efficiency of each stacks up. I believe though that the multi-threaded features of most of these require the purchase of the software. Script your own using one of the available SDKs. There is some possibility that s3fs might work - it is quite parallel, does support copies between the same bucket - does NOT support copies between different buckets, but might support moves between different buckets. I'd start with s3cmd-modification and see if you have any success with it or contact Amazon for a better solution.
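As a hedged sketch of the server-side copy described above — bucket names are placeholders, s3cmd must already be configured and new enough to support recursive cp, and the AWS CLI alternative assumes a newer toolchain than the answer had available:

```bash
# Server-side copy (no download/re-upload), then remove the originals
s3cmd cp --recursive s3://source-bucket/ s3://destination-bucket/new-prefix/
s3cmd del --recursive s3://source-bucket/

# Roughly equivalent with the modern AWS CLI, which parallelises requests for you
aws s3 sync s3://source-bucket/ s3://destination-bucket/new-prefix/
aws s3 rm s3://source-bucket/ --recursive
```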
{ "source": [ "https://serverfault.com/questions/349460", "https://serverfault.com", "https://serverfault.com/users/83957/" ] }
349,585
I want to have an environment variable that contains the day of week in cmd.exe. When I run this command I get the result I want. C:\Users\tisc> powershell (get-date).dayofweek Friday Here I'm trying to store the result in an environment variable. C:\Users\tisc> set dow = powershell (get-date).dayofweek But when I try to get it I don't get the string as I wanted. C:\Users\tisc> set dow DoW=0 dow = powershell (get-date).dayofweek My goal is to use the variable in a batch file for some backup scripts.
You can use something like: $env:DOW = "foo"
{ "source": [ "https://serverfault.com/questions/349585", "https://serverfault.com", "https://serverfault.com/users/75610/" ] }
350,023
I'm trying to setup traffic shaping on a Linux gateway as written here . The script needs to be customized because I have multiple LAN interfaces. So to shape the LAN side I am planning to create a ifb pseudo device like so: modprobe ifb ip link set dev ifb0 up /sbin/tc qdisc add dev $WAN_INTERFACE ingress /sbin/tc filter add dev $WAN_INTERFACE parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0 The script from the gist repo mentioned above has these lines: /sbin/tc qdisc add dev $WAN_INTERFACE handle ffff: ingress /sbin/tc filter add dev $WAN_INTERFACE parent ffff: protocol ip prio 1 u32 match ip sport $INTERACTIVE_PORT 0xffff flowid :1 /sbin/tc filter add dev $WAN_INTERFACE parent ffff: protocol ip prio 1 u32 match ip dport $INTERACTIVE_PORT 0xffff flowid :1 /sbin/tc filter add dev $WAN_INTERFACE parent ffff: protocol ip prio 5 0 u32 match ip src 0.0.0.0/0 police rate $MAX_DOWNRATE_INGRESS burst 20k drop flowid :2 This code and the ifb interface creation code don't get on well together. The customized script gets executed, but ifb0 device doesn't show any traffic stats. If I comment out ingress gist repo code (quoted above), then ifb0 device shows the number of packets that are transferred. Also these lines cannot be executed together: /sbin/tc qdisc add dev $WAN_INTERFACE ingress /sbin/tc qdisc add dev $WAN_INTERFACE handle ffff: ingress I get file exists error. So, how can I shape ingress on WAN_INTERFACE and at the same time also shape the traffic that goes to LAN via ifb0 device?
IFB is an alternative to tc filters for handling ingress traffic, by redirecting it to a virtual interface and treating it as egress traffic there. You need one ifb interface per physical interface, to redirect ingress traffic from eth0 to ifb0, eth1 to ifb1 and so on. When inserting the ifb module, tell it the number of virtual interfaces you need. The default is 2: modprobe ifb numifbs=1 Now, enable all ifb interfaces: ip link set dev ifb0 up # repeat for ifb1, ifb2, ... And redirect ingress traffic from the physical interfaces to the corresponding ifb interface. For eth0 -> ifb0: tc qdisc add dev eth0 handle ffff: ingress tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0 Again, repeat for eth1 -> ifb1, eth2 -> ifb2 and so on, until all the interfaces you want to shape are covered. Now, you can apply all the rules you want. Egress rules for eth0 go as usual on eth0. Let's limit bandwidth, for example: tc qdisc add dev eth0 root handle 1: htb default 10 tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit Needless to say, repeat for eth1, eth2, ... Ingress rules for eth0 now go as egress rules on ifb0 (whatever goes into ifb0 must come out, and only eth0 ingress traffic goes into ifb0). Again, a bandwidth limit example: tc qdisc add dev ifb0 root handle 1: htb default 10 tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 1mbit The advantage of this approach is that egress rules are much more flexible than ingress filters. Filters only allow you to drop packets, not introduce wait times, for example. By handling ingress traffic as egress you can set up queue disciplines, with traffic classes and, if need be, filters. You get access to the whole tc tree, not only simple filters.
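The same steps gathered into one script for a single interface pair, as a sketch — interface names and the 1mbit rate are placeholders to adjust for your gateway:

```bash
#!/bin/bash
IF=eth0       # physical interface whose ingress we want to shape
IFB=ifb0      # virtual interface that receives the redirected ingress
RATE=1mbit    # placeholder rate

modprobe ifb numifbs=1
ip link set dev "$IFB" up

# Redirect ingress from the physical interface to the ifb device
tc qdisc add dev "$IF" handle ffff: ingress
tc filter add dev "$IF" parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev "$IFB"

# Egress shaping on eth0 (true egress) and on ifb0 (which carries eth0's ingress)
for DEV in "$IF" "$IFB"; do
    tc qdisc add dev "$DEV" root handle 1: htb default 10
    tc class add dev "$DEV" parent 1: classid 1:1 htb rate "$RATE"
    tc class add dev "$DEV" parent 1:1 classid 1:10 htb rate "$RATE"
done
```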
{ "source": [ "https://serverfault.com/questions/350023", "https://serverfault.com", "https://serverfault.com/users/59291/" ] }
350,026
I have a server running cPanel/WHM with exim and SpamAssassin. I've been noticing an issue where emails coming in with forged spamassassin headers bypassing some of the filtering. I want to strip out all SpamAssassin headers before it goes through spamassassin and then filtered into the inbox/spam folders. Searching the net, the only similar instance I could find was from 2004 . However, the exim config by that user and by me are very different. I am not sure how to apply it. I can run formail against a file containing the message to remove the headers, but I don't know how to make exim do that. Just to provide an example, a message will come in with headers like this: X-Spam-Status: No, score=1.3 X-Spam-Score: 13 X-Spam-Bar: + X-Ham-Report: Spam detection software, running on the system "serv02.example.com", has identified this incoming email as possible spam. The original message *snip* X-Spam-Flag: NO My SpamAssassin will add these headers to the message: X-Spam-Status: Yes, score=6.8 X-Spam-Score: 68 X-Spam-Bar: ++++++ X-Spam-Report: Spam detection software, running on the system "serv02.example.com", has identified this incoming email as possible spam. The original message *snip* X-Spam-Flag: YES But because the exim vfilter rules read the first X-Spam headers, the email ends up in the user's inbox instead of in the spam folder.
{ "source": [ "https://serverfault.com/questions/350026", "https://serverfault.com", "https://serverfault.com/users/90968/" ] }
350,330
I am trying to send multiple headers add_header Access-Control-Allow-Origin http://dev.anuary.com; add_header Access-Control-Allow-Origin https://dev.anuary.com; However, instead NGINX makes them into Access-Control-Allow-Origin: http://dev.anuary.com, https://dev.anuary.com What's the solution?
Well, yes, nginx is combining the identically named headers.. but it's doing so in accordance with the HTTP spec. See section 4.2 . The header: Access-Control-Allow-Origin: http://dev.anuary.com, https://dev.anuary.com Is, according to the HTTP/1.1 spec, functionally equivalent to: Access-Control-Allow-Origin: http://dev.anuary.com Access-Control-Allow-Origin: https://dev.anuary.com If you have a system or application that is capable of reading one format and not the other, then it's the problem. nginx is doing it right. EDIT : The Mozilla documentation states that there can only be one Access-Control-Allow-Origin header. The formatting of it ( see here ) should be a space-delimited list of origins: add_header Access-Control-Allow-Origin "http://dev.anuary.com https://dev.anuary.com"; But really, you're supposed to be echoing the Origin header supplied by the client instead of generating one out of the blue. This is probably more appropriate: if ($http_origin ~* "^https?://dev\.anuary\.com$" ) { add_header Access-Control-Allow-Origin $http_origin; }
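A quick way to see which header a browser would actually receive for each origin — the URL is a placeholder for one of your own pages:

```bash
curl -s -o /dev/null -D - -H "Origin: http://dev.anuary.com" http://your-server/some/page \
    | grep -i '^access-control-allow-origin'
curl -s -o /dev/null -D - -H "Origin: https://dev.anuary.com" http://your-server/some/page \
    | grep -i '^access-control-allow-origin'
```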
{ "source": [ "https://serverfault.com/questions/350330", "https://serverfault.com", "https://serverfault.com/users/82338/" ] }
350,424
When you change something in Apache you need to reload or restart apache. Does anything need to be refreshed or restarted in Ubuntu Server 8.04 after I add/update the crontab? Thanks a bunch for your help.
No. As long as you use the crontab -e command to edit the file, when you save it, you'll get a 'New Crontab Installed' message. That's it.
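If you want reassurance, two quick checks — the syslog path assumes Ubuntu's default logging:

```bash
crontab -l                               # the table cron is now using
grep CRON /var/log/syslog | tail -n 5    # confirm jobs are actually being launched
```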
{ "source": [ "https://serverfault.com/questions/350424", "https://serverfault.com", "https://serverfault.com/users/107202/" ] }
350,454
This is a canonical question about capacity planning for web sites. Related: Can you help me with my capacity planning? How do you do load testing and capacity planning for databases? What are some recommended tools and methods of capacity planning for web sites and web-applications? Please feel free to describe different tools and techniques for different web-servers, frameworks, etc., as well as best-practices that apply to web servers in general.
The short answer is: Nobody can answer this question except you. The long answer is that benchmarking your specific workload is something that you need to undertake yourself, because it's a bit like asking "How long is a piece of string?". A simple one-page static website could be hosted on a Pentium Pro 150 and still serve thousands of impressions every day. The basic approach you need to take to answer this question is to try it and see what happens. There are plenty of tools that you can use to artificially put your system under pressure to see where it buckles. A brief overview of this is: Put your scenario in place Add monitoring Add traffic Evaluate results Remediate based on results Rinse, repeat until reasonably happy Put your scenario in place Basically, in order to test some load, you need something to test against. Set up an environment to test against. This should be a fairly close guess to your production hardware if possible, otherwise you will be left extrapolating your data. Set up your servers, accounts, websites, bandwidth, etc. Even if you do this on VMs that's OK just as long as you're prepared to scale your results. So, I'm going to set up a mid-powered virtual machine (two cores, 512 MB RAM, 4 GB HDD) and install my favourite load balancer, haproxy inside Red Hat Linux on the VM. I'm also going to have two web servers behind the load balancer that I'm going to use to stress test the load balancer. These two web servers are set up identically to my live systems. Add Monitoring You'll need some metrics to monitor, so I'm going to measure how many requests get through to my web servers, and how many requests I can squeeze through per second before users start getting a response time of over two seconds. I'm also going to monitor RAM, CPU and disk usage on the haproxy instance to make sure that the load balancer can handle the connections. How to do this depends a lot on your platforms and is outside of the scope of this answer. You might need to review web server log files, start performance counters, or rely on the reporting ability of your stress test tool. A few things you always want to monitor: CPU usage RAM usage Disk usage Disk latency Network utilisation You might also choose to look at SQL deadlocks, seek times, etc depending on what you're specifically testing. Add traffic This is where things get fun. Now you need to simulate a test load. There are plenty of tools that can do this, with configurable options: JMeter (Web, LDAP) Apache Benchmark (Web) Grinder (Web) httperf (Web) WCAT (Web) Visual Studio Load Test (Web) SQLIO (SQL Server) Choose a number, any number. Let's say you're going to see how the system responds with 10,000 hits a minute. It doesn't matter what number you choose because you're going to repeat this step many times, adjusting that number up or down to see how the system responds. Ideally, you should distribute these 10,000 requests over multiple load testing clients/nodes so that a single client does not become a bottleneck of requests. For example, JMeter's Remote Testing provides a central interface from which to launch several clients from a controlling Jmeter machine. Press the magic Go button and watch your web servers melt down and crash. Evaluate results So, now you need to go back to your metrics you collected in step 2. You see that with 10,000 concurrent connections, your haproxy box is barely breaking a sweat, but the response time with two web servers is a touch over five seconds. 
That's not cool - remember, your response time is aiming for two seconds. So, we need to make some changes. Remediate Now, you need to speed up your website by more than twice. So you know that you need to either scale up, or scale out. To scale up, get bigger web servers, more RAM, faster disks. To scale out, get more servers. Use your metrics from step 2, and testing, to make this decision. For example, if you saw that the disk latency was massive during the testing, you know you need to scale up and get faster hard drives. If you saw that the processor was sitting at 100% during the test, perhaps you need to scale out to add additional web servers to reduce the pressure on the existing servers. There's no generic right or wrong answer, there's only what's right for you. Try scaling up, and if that doesn't work, scale out instead. Or not, it's up to you and some thinking outside the box. Let's say we're going to scale out. So I decide to clone my two web servers (they're VMs) and now I have four web servers. Rinse, repeat Start again from Step 3. If you find that things aren't going as you expected (for example, we doubled the web servers, but the response times are still more than two seconds), then look into other bottlenecks. For example, you doubled the web servers, but still have a crappy database server. Or, you cloned more VMs, but because they're on the same physical host, you only achieved higher contention for the servers' resources. You can then use this procedure to test other parts of the system. Instead of hitting the load balancer, try hitting the web server directly, or the SQL server using an SQL benchmarking tool.
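As one concrete example of the "Add traffic" step using ApacheBench from the tool list above — the URL and numbers are placeholders:

```bash
# 10,000 requests, 100 concurrent, against the load balancer's public URL.
# Watch "Time per request" and the failed-request count here, plus the
# CPU/RAM/disk/network metrics on the servers while it runs.
ab -n 10000 -c 100 http://your-load-balancer.example.com/

# Re-run with higher -c values until response times blow the two-second
# budget, then remediate and test again.
```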
{ "source": [ "https://serverfault.com/questions/350454", "https://serverfault.com", "https://serverfault.com/users/50875/" ] }
350,458
This is a canonical question about capacity planning for databases. Related: Can you help me with my capacity planning? How do you do load testing and capacity planning for web sites? I'm looking to create a canonical question of tools and methods of capacity planning for databases. This is intended to be a canonical question. Obviously, the general workflow is: Put your scenario in place Add monitoring Add traffic Evaluate results Remediate based on results Rinse, repeat until reasonably happy Please feel free to describe different tools and techniques for different web-servers, frameworks, etc., as well as best-practices.
Disk & RAM Capacity Planning Planning disk and memory capacity for a database server is a black art. More is better. Faster is better. As general guidelines I offer the following: You want more disk space than you'll EVER need. Take your best estimate of how much disk space you'll need for the next 3-5 years, then double it. You'll want enough RAM to hold your database indexes in memory, handle your biggest query at least two times over, and still have enough room left over for a healthy OS disk cache. Index size will depend on your database, and everything else depends heavily on your data set and query/database structure. I'll offer up "At least 2x the size of your largest table" as a suggestion, but note that this suggestion breaks down on really large data warehousing operations where the largest table can be tens or hundreds of gigabytes. Every database vendor has some instructions on performance tuning your disk/memory/OS kernel -- Spend some time with this documentation prior to deployment. It will help. Workload Benchmarking and Capacity Planning Assuming you haven't deployed yet… Many database systems ship with Benchmarking Tools -- For example, PostgreSQL ships with pgBench. These tools should be your first stop in benchmarking database performance. If possible you should run them on all new database servers to get a feel for "how much work" the database server can do. Armed now with a raw benchmark that is ABSOLUTELY MEANINGLESS, let's consider a more realistic approach to benchmarking: Load your database schema and write a program which populates it with dummy data, then run your application's queries against that data. This benchmarks three important things: 1. The database server (hardware) 2. The database server (software) 3. Your database design, and how it interacts with (1) and (2) above. Note that this requires a lot more effort than simple pre-built benchmarks like pgBench: You need to write some code to do the populating, and you may need to write some code to do the queries & report execution time. This kind of testing is also substantially more accurate: Since you are working with your schema and queries you can see how they will perform, and it offers you the opportunity to profile and improve your database/queries. The results of these benchmarks are an idealized view of your database. To be safe assume that you will only achieve 50-70% of this performance in your production environment (the rest being a cushion that will allow you to handle unexpected growth, hardware failures, workload changes, etc.). It's too late! It's in production! Once your systems are in production it's really too late to "benchmark" -- You can turn on query logging/timing briefly and see how long things take to execute, and you can run some "stress test" queries against large data sets during off hours. You can also look at the system's CPU, RAM and I/O (disk bandwidth) utilization to get an idea of how heavily loaded it is. Unfortunately all these things will do is give you an idea of what the system is doing, and a vague concept of how close to saturation it is. That brings us to…
In order to do proper capacity planning for your database you will need to implement some kind of performance monitoring to alert you when database performance is no longer meeting your expectations. At that point you can consider remedial actions (new hardware, DB schema or query changes to optimize resource use, etc.). Note: This is a very high level and generic guide to sizing your database hardware and figuring out how much abuse it can take. If you are still unsure about how to determine if a specific system meets your needs you should speak to a database expert. There is also a Stack Exchange site specifically dedicated to database management: dba.stackexchange.com . Search their question archive or browse the tags specific to your database engine for further advice on performance tuning.
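For example, a minimal pgBench run of the kind mentioned above — scale factor, client count and duration are placeholders, and you should use a scratch database:

```bash
createdb pgbench_test
pgbench -i -s 50 pgbench_test           # initialise test tables at scale factor 50
pgbench -c 10 -j 2 -T 300 pgbench_test  # 10 clients, 2 worker threads, 5 minutes
# Note the reported transactions per second (tps) as your baseline.
```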
{ "source": [ "https://serverfault.com/questions/350458", "https://serverfault.com", "https://serverfault.com/users/50875/" ] }
350,782
I have wrestled with service principle names a few times now and the Microsoft explanation is just not sufficient. I am configuring an IIS application to work on our domain and it looks like some of my issues are related to my need to configure http specific SPNs on the windows service account that is running the application pool hosting my site. All this has made me realize I just don't fully get the relationship between service types (MSSQL, http, host, termsrv, wsman, etc.), Kerberos authentication, active directory computer accounts (PCName$), windows services accounts, SPNs, and the user account I am using to try and access a service. Can someone please explain Windows Service Principle Names (SPNs) without oversimplifying the explanation? Bonus points for a creative analogy that would resonate with a moderately experienced system administrator/developer.
A Service Principal Name is a concept from Kerberos . It's an identifier for a particular service offered by a particular host within an authentication domain. The common form for SPNs is service class / fqdn @ REALM (e.g. IMAP/[email protected] ). There are also User Principal Names which identify users, in form of user @ REALM (or user1 / user2 @ REALM , which identifies a speaks-for relationship). The service class can loosely be thought of as the protocol for the service. The list of service classes that are built-in to Windows are listed in this article from Microsoft . Every SPN must be registered in the REALM 's Key Distribution Center (KDC) and issued a service key . The setspn.exe utility which is available in \Support\Tools folder on the Windows install media or as a Resource Kit download, manipulates assignments of SPNs to computer or other accounts in the AD. When a user accesses a service that uses Kerberos for authentication (a "Kerberized" service) they present an encrypted ticket obtained from KDC (in a Windows environment an Active Directory Domain Controller). The ticket is encrypted with the service key . By decrypting the ticket the service proves it possesses the key for the given SPN. Services running on Windows hosts use the key associated with AD computer account, but to be compliant with the Kerberos protocol SPNs must be added to the Active Directory for each kerberized service running on the host — except those built-in SPNs mentioned above. In the Active Directory the SPNs are stored in the servicePrincipalName attribute of the host's computer object. For more information, see: Microsoft TechNet article on SPN , Ken Hornstein's Kerberos FAQ
{ "source": [ "https://serverfault.com/questions/350782", "https://serverfault.com", "https://serverfault.com/users/12890/" ] }
350,931
From the adduser command, I saw the option --system to create a system user. A system user will use /bin/false and by default belong to nogroup . It also won't copy the /etc/skel to the home directory. In which condition would I prefer to create a system user?
When you are creating an account to run a daemon, service, or other system software, rather than an account for interactive use. Technically, it makes no difference, but in the real world it turns out there are long term benefits in keeping user and software accounts in separate parts of the numeric space. Mostly, it makes it easy to tell what the account is, and if a human should be able to log in.
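A typical invocation on Debian/Ubuntu and roughly what it produces — the account name and home directory are placeholders:

```bash
# Dedicated account for a daemon: system-range UID, matching group,
# no /etc/skel copy, no usable login shell.
adduser --system --group --home /var/lib/mydaemon --no-create-home mydaemon

getent passwd mydaemon
# mydaemon:x:112:118::/var/lib/mydaemon:/bin/false   (UID/GID and shell vary by release)
```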
{ "source": [ "https://serverfault.com/questions/350931", "https://serverfault.com", "https://serverfault.com/users/98421/" ] }
351,021
We're moving into a new office in an old building in London (that's England :) and are walling off a 2m x 1.3m area where the router & telephone equipment currently terminates to use as a server closet. The closet will contain: 2 24-port switches 1 router 1 VSDL modem 1 Dell desktop 1 4-bay NAS 1 HP micro-server 1 UPS Miscellaneous minor telephony boxes. There is no central A/C in the office and there never will be. We can install ducting to the outside quite easily - it's only a couple of metres to the windows, which face a courtyard. My question is whether installing an extractor fan with ducting to the window should be sufficient for cooling? Would an intake fan and intake duct (from the window, too) be required? We don't want to leave a gap in the closet door as that'll let noise out into the office. If we don't have to put a portable A/C unit into the closet, that'd be perfect. The office has about 12 people; London is temperate, average maximum in August is 31 Celsius, 25 Celsius is more typical. The same equipment runs fine in our current office (same building as new office, also no A/C) but it isn't in an enclosed space. I can see us putting say one Dell 2950 tower server into the closet, but no more than that. So, sustained power consumption in the closet would currently be about 800w (I'm guessing); possibly in the future 2kw. The closet will have a ceiling and no windows and be well-insulated. We don't care if the equipment runs hot, so long as it runs and we don't hear it.
Well, let's work this out; 2 x 24 port switches (say Cisco 3750-E's) can output 344 BTU/hr each so that's 688 in total 1 x router (say a Cisco 2921) can output 1260 BTU/hr 1 x VDSL modem (say a Draytek Vigor 2750) can output 120 BTU/hr 1 x Desktop (say a Dell Optiplex 790, with monitor switched off) can output 850 BTU/hr 1 x 4-Bay NAS (say a Netgear ReadyNAS Ultra 4 with 4 x 2TB disks) can output ~600 BTU/hr 1 x HP Microserver can output 511 BTU/hr 1 x UPS (say an APC Smart-UPS 2200VA that can handle the ~1.2Kw you may be drawing) can output 275 BTU/hr That's 4300 BTU/hr. You've got 5.2 cubic metres of space (minus the items inside it), so not including natural heat loss you're going to have to install a minimum 29cm fan with a 900 cubic metre per hour rating with 29cm conduit all the way to the room if you don't want to hit 42 degrees C (the lowest recommended highest temp of the kit listed above) from a nominal of 20C in 17 minutes. Basically get an external A/C unit that can scrub 5k BTU/hr ok - a fan's going to literally and figuratively suck :)
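The same heat budget as a re-runnable shell calculation, using the figures above and the usual 1 W ≈ 3.412 BTU/hr conversion:

```bash
# Sum the per-device BTU/hr figures and convert to watts of heat to remove.
BTU=(688 1260 120 850 600 511 275)   # switches, router, modem, desktop, NAS, microserver, UPS
TOTAL=0
for b in "${BTU[@]}"; do TOTAL=$((TOTAL + b)); done
echo "${TOTAL} BTU/hr"                   # ~4300 BTU/hr
echo "scale=0; ${TOTAL} / 3.412" | bc    # ~1260 W of heat - size the A/C unit above this
```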
{ "source": [ "https://serverfault.com/questions/351021", "https://serverfault.com", "https://serverfault.com/users/84771/" ] }
351,046
All I need to do is to run a specific script as a particular user who does have the nologin/false shell indicated in /etc/passwd . I would run the script as root and this should run as another user. Running: ~# su -c "/bin/touch /tmp/test" testuser would work, but I need a valid shell for the testuser. I know I can disable the password with passwd -d testuser and leave the shell to /bin/bash this way would secure a little bit but I need to have nologin/false shell. Basically what I need is what crontab does when we set jobs to be running as a particular user, regardless this one has nologin/false shell. p.s I found this thread Executing a command as a nologin user , but I have no idea how to concatenate the command su -s /bin/sh $user to the script I need to run.
You can use the -s switch to su to run a particular shell su -s /bin/bash -c '/path/to/your/script' testuser (Prepend sudo to the above if testuser is a passwordless user.)
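In a root-owned script or cron job that could look like the following — the script path and user are placeholders, and runuser is an alternative on RHEL-family systems:

```bash
#!/bin/bash
# Run one command as the nologin user from a script executed by root
su -s /bin/bash -c '/usr/local/bin/nightly-task.sh --verbose' testuser

# RHEL/CentOS alternative that avoids some PAM session side effects:
# runuser -s /bin/bash -c '/usr/local/bin/nightly-task.sh --verbose' testuser
```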
{ "source": [ "https://serverfault.com/questions/351046", "https://serverfault.com", "https://serverfault.com/users/107410/" ] }
351,108
I'm running Ubuntu server on a computer used as a wireless AP, but this AP should resolve all DNS requests to an internal IP address rather than actually performing the lookup. I want to do the same thing that paid public WiFi hotspots do - you can connect but if you attempt to load any websites they show a default page. I've noticed that they do this by resolving all domains to an internal IP address. I've added these lines to /etc/dnsmasq.conf : # Add domains which you want to force to an IP address here. # The example below send any host in double-click.net to a local # web-server. address=/com/192.168.2.1 address=/uk/192.168.2.1 address=/org/192.168.2.1 address=/gov/192.168.2.1 address=/net/192.168.2.1 address=/us/192.168.2.1 which works fine for those TLD's, but I'd like to be able to do it with all domains so I can sleep at night.
As the dnsmasq manual says… just use # for a wildcard: address=/#/192.168.2.1
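After restarting dnsmasq you can confirm the wildcard from a client — the IP is the one from the question:

```bash
sudo service dnsmasq restart
dig +short example.com @192.168.2.1           # expect 192.168.2.1
dig +short anything.example.org @192.168.2.1  # same answer for any name
```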
{ "source": [ "https://serverfault.com/questions/351108", "https://serverfault.com", "https://serverfault.com/users/45745/" ] }
351,129
Is there a more direct way to the environmental variables GUI than the following? Right click 'My Computer' and select 'Properties'. Click 'Advanced System Settings' link. Click 'Advanced' tab. Click 'Environment Variables...' button. Can I make a shortcut to it?
Starting with Windows Vista, the panel can be displayed from the command line (cmd.exe) with: rundll32 sysdm.cpl,EditEnvironmentVariables It is from here.
{ "source": [ "https://serverfault.com/questions/351129", "https://serverfault.com", "https://serverfault.com/users/104622/" ] }
351,559
this is my first web app deployment and am running into all sorts of issues. I am currently going for a nginx + gunicorn implementation for the Django app, but mostly this question relates to nginx configurations. For some context - nginx would receive connections and proxy to the gunicorn local server. in the nginx configurations, where it says server_name do I have to provide one? I don't plan to use domain names of any kind, just through my network's external ip (it is static) and the port number to listen to. My desire is that when I access something like http://xxx.xxx.xxx.xxx:9050 I would be able to get the site. The following is the sample code that I will base the configurations on for reference. server { listen 80; server_name WHAT TO PUT HERE?; root /path/to/test/hello; location /media/ { # if asset versioning is used if ($query_string) { expires max; } } location /admin/media/ { # this changes depending on your python version root /path/to/test/lib/python2.6/site-packages/django/contrib; } location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 10; proxy_read_timeout 10; proxy_pass http://localhost:8000/; } # what to serve if upstream is not available or crashes error_page 500 502 503 504 /media/50x.html; }
server_name defaults to an empty string, which is fine; you can exclude it completely. Another common approach for the "I don't want to put a name on this" need is to use server_name _; Your http://xxx.xxx.xxx.xxx:9050 URL won't work with this config, though; you're only listening on port 80. You'd need to add a listen 9050; as well.
{ "source": [ "https://serverfault.com/questions/351559", "https://serverfault.com", "https://serverfault.com/users/93611/" ] }
351,563
We have a Windows 2008 Enterprise R2 SP1 server with multiple accepted domains configured on our Exchange 2010 console. Configuration of exchange 2010: In exchange console, under organization configuration > hub transport > accepted domains, we have: domain1 > authoritative > default = true domain2 > authoritative > default = false domain3 > authoritative > default = false domain4 > authoritative > default = false We are able to RECEIVE e-mails on ALL the above domains. Just to be clear: I can receive emails to [email protected] , [email protected], [email protected] and [email protected] without any problems. I am able to send email from [email protected] (the default domain). However , when trying to send emails from [email protected], [email protected], and [email protected], I receive the following error: Delivery has failed to these recipients or groups: destination_example_email You can't send a message on behalf of this user unless you have permission to do so. Please make sure you're sending on behalf of the correct sender, or request the necessary permission. If the problem continues, please contact your helpdesk. If I change the primary email address for userX to [email protected] , I am able to send as [email protected] and only from that mail. The question: How can I enable sending emails from ALL the authoritative domains at any single moment without having to manually change the default email address of the user?
{ "source": [ "https://serverfault.com/questions/351563", "https://serverfault.com", "https://serverfault.com/users/107584/" ] }
352,431
Recently my friend told me that it is a good idea to turn off swap on linux webservers with enough memory. My server has 12 GB and currently uses 4GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an OutOfMemory situation is due to some bug/ddos/etc. So in case swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. In case swap is turned on, it will eat both RAM and swap and eventually will result in the same crash, but before that it will offload crucial processes like sshd to swap and start to do a lot of swap operations, resulting in major slowdown. This way, when under ddos, the system may go into a completely unusable condition due to huge lags and I probably will not be able to log in and kill the webserver process or deny all incoming traffic (all but ssh). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
I would say it depends on your use case and the rest of the answers have covered this pretty well. 4G of swap are after all a cheap way to buy some safety. And I feel that this cheapness is what is making people not want to turn it off. But let me answer with a rhetorical question. If money is not an issue, and you have a choice between two systems - one with 12G of RAM and 4G of swap, and another with 16G of RAM and no swap - which one would you choose? Unfortunately most people would still answer that they'd choose 16G of RAM and still add 4G of swap, which is missing my point. And on another note, I personally find a swappy system worse than a crashed system. A crashed system would trigger a standby backup server to take over much sooner. And in an active-active (or load balanced setup) a crashed system would be taken out of rotation much sooner. A win for the no-swap system again.
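If you want to see whether swap is actually hurting before deciding, a rough sketch — the swappiness value shown is a matter of taste, not a recommendation:

```bash
free -m
vmstat 5 5                      # si/so columns show pages swapped in/out per interval

# Keep swap as a safety net but discourage its use:
sysctl -w vm.swappiness=10
echo 'vm.swappiness = 10' >> /etc/sysctl.conf

# Or turn it off entirely, as discussed above:
swapoff -a                      # and comment the swap line out of /etc/fstab to persist
```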
{ "source": [ "https://serverfault.com/questions/352431", "https://serverfault.com", "https://serverfault.com/users/77199/" ] }
352,647
Is it possible in IIS to set up a site in IIS and only let users of a certain AD Group get access to it?
The following should work, depending on your IIS version. You'll need to add a web.config if you don't have one (though you should on IIS7) in the directory root of your site. The below will allow Domain Admins and deny Domain Users (fairly self explanatory). Make sure you line up the config sections if you already have a section, etc. <configuration> <location path="MyPage.aspx/php/html"> <system.web> <authorization> <allow users="DOMAIN\Domain Admins"/> <deny users="DOMAIN\Domain Users"/> </authorization> </system.web> </location> </configuration> You will need Windows Authentication enabled under Authentication in your site preferences for this to work, obviously, but I assume you already have this enabled.
{ "source": [ "https://serverfault.com/questions/352647", "https://serverfault.com", "https://serverfault.com/users/107408/" ] }
352,658
I have several servers where some users need to be sudoers to do their work. The problem is that sudoers can run the command sudo su and log in as user root. It seems very risky to allow that command. I tried with a Command Alias in the file /etc/sudoers but it has not worked. Is there any way for them to be sudoers but not be able to run the command sudo su?
{ "source": [ "https://serverfault.com/questions/352658", "https://serverfault.com", "https://serverfault.com/users/62338/" ] }
352,783
I am trying to understand this Unix behavior (which I happen to be testing on Ubuntu 11.10): $ touch foo $ setfacl -m u:nobody:rwx foo $ getfacl foo # file: foo # owner: michael # group: michael user::rw- user:nobody:rwx group::rw- mask::rwx other::r-- $ chmod g-rw foo $ getfacl foo # file: foo # owner: michael # group: michael user::rw- user:nobody:rwx #effective:--x group::rw- #effective:--- mask::--x other::r-- Notice that the chmod(1) command has updated the ACL mask. Why does this happen? The SunOS manpage has the following to say: If you use the chmod(1) command to change the file group owner permissions on a file with ACL entries, both the file group owner permissions and the ACL mask are changed to the new permissions. Be aware that the new ACL mask permissions may change the effective permissions for additional users and groups who have ACL entries on the file. I ask because it would be convenient for me if chmod(1) did not have this behavior. I hope that by understanding why it does what it does, I can better design how I set up filesystem permissions.
It would not be convenient for you if chmod() didn't have this behaviour. It would be highly inconvenient, because things that people traditionally expect to work on Unixes would break. This behaviour serves you well, did you but know it. It's a shame that IEEE 1003.1e never became a standard and was withdrawn in 1998. In practice, fourteen years on, it's a standard that a wide range of operating systems — from Linux through FreeBSD to Solaris — actually implement. IEEE 1003.1e working draft #17 makes for interesting reading, and I recommend it. In appendix B § 23.3 the working group provides a detailed, eight-page rationale for the somewhat complex way that POSIX ACLs work with respect to the old S_IRWXG group permission flags. (It's worth noting that the TRUSIX people provided much the same analysis ten years earlier.) I'm not going to copy it all here. Read the rationale in the draft standard for details. Here is a very brief précis: The SunOS manual is wrong. It should read: "If you use the chmod(1) command to change the file group owner permissions on a file with ACL entries, either the file group owner permissions or the ACL mask are changed to the new permissions." This is the behaviour that you can see happening, despite what the current manual page says, in your question. It's also the behaviour specified by the draft POSIX standard. If a CLASS_OBJ (Sun's and TRUSIX's terminology for ACL_MASK) access-control entry exists, the group bits of a chmod() set it, otherwise they set the GROUP_OBJ access-control entry. If this weren't the case, applications that did various standard things with `chmod()`, expecting it to work as `chmod()` has traditionally worked on old non-ACL Unixes, would either leave gaping security holes or see what they think to be gaping security holes: Traditional Unix applications expect to be able to deny all access to a file, named pipe, device, or directory with chmod(…,000). In the presence of ACLs, this only turns off all user and group permissions if the old S_IRWXG maps to CLASS_OBJ. Without this, setting the old file permissions to 000 wouldn't affect any USER or GROUP entries and other users would, surprisingly, still have access to the object. Temporarily changing a file's permission bits to no access with chmod 000 and then changing them back again was an old file locking mechanism, used before Unixes gained advisory locking mechanisms, that — as you can see — people still use today. Traditional Unix scripts expect to be able to run chmod go-rwx and end up with only the object's owner able to access the object. Again — as you can see — this is still the received wisdom twelve years later. And again, this doesn't work unless the old S_IRWXG maps to CLASS_OBJ if it exists, because otherwise that chmod command wouldn't turn off any USER or GROUP access control entries, leading to users other than the owner and non-owning groups retaining access to something that is expected to be accessible only to the owner. A system where the permission bits were otherwise separate from, and ANDed with, the ACLs would require file permission flags to be rwxrwxrwx in most cases, which would confuse the heck out of the many Unix applications that complain when they see what they think to be world-writable stuff. A system where the permission bits were otherwise separate from, and ORed with, the ACLs would have the chmod(…,000) problem mentioned before.
Further reading:
Winfried Trümper (1999-02-28). Summary about Posix.1e.
Portable Applications Standards Committee of the IEEE Computer Society (October 1997). Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)—Amendment #: Protection, Audit and Control Interfaces [C Language]. IEEE 1003.1e, Draft 17.
Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System. NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548.
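Returning to the practical side, a small demonstration that reproduces the question's chmod and then restores the mask explicitly with setfacl — the filename mirrors the question:

```bash
touch foo
setfacl -m u:nobody:rwx foo
chmod g-rw foo                    # edits the ACL mask (CLASS_OBJ), not group::
getfacl foo | grep -E '^(mask|group)'

setfacl -m mask::rwx foo          # put the mask back without touching group::
getfacl foo | grep -E '^(mask|group)'
```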
{ "source": [ "https://serverfault.com/questions/352783", "https://serverfault.com", "https://serverfault.com/users/55554/" ] }
352,785
I have a Windows Azure applicaion (web-role) which I'm having serious problems with. If there are problems with one of my databases which the application connects to (in this case, SQL Azure transient errors) many exceptions are thrown in my application. Every single exception thrown is caught and handled nicely, yet the IIS service eventually shuts down, or even worse, the entire web-role becomes un-responsive. The number of exceptions is high, anything from 30-100 per second. My application runs happily on 3 web-role instances, whilst experiencing this problem I have upgraded to 10 instances because I'm sure I read that caught exceptions are quite heavy on resources, but this made no difference. There are no errors or (from what I can determine) helpful warnings in the event log. .. and just to repeat, I can guarantee there are no uncaught exceptions occurring. Does this sound like normal server behaviour?
It would not be convenient for you if chmod() didn't have this behaviour. It would be highly inconvenient, because things that people traditionally expect to work on Unixes would break. This behaviour serves you well, did you but know it. It's a shame that IEEE 1003.1e never became a standard and was withdrawn in 1998. In practice, fourteen years on, it's a standard that a wide range of operating systems — from Linux through FreeBSD to Solaris — actually implement. IEEE 1003.1e working draft #17 makes for interesting reading, and I recommend it. In appendix B § 23.3 the working group provides a detailed, eight page, rationale for the somewhat complex way that POSIX ACLs work with respect to the old S_IRWXG group permission flags. (It's worth noting that the TRUSIX people provided much the same analysis ten years earlier.) I'm not going to copy it all here. Read the rationale in the draft standard for details. Here is a very brief précis : The SunOS manual is wrong. It should read If you use the chmod(1) command to change the file group owner permissions on a file with ACL entries, either the file group owner permissions or the ACL mask are changed to the new permissions. This is the behaviour that you can see happening , despite what the current manual page says, in your question. It's also the behaviour specified by the draft POSIX standard. If a CLASS_OBJ (Sun's and TRUSIX's terminology for ACL_MASK ) access-control entry exists, the group bits of a chmod() set it, otherwise they set the GROUP_OBJ access-control entry. If this weren't the case, applications that did various standard things with `chmod()`, expecting it to work as `chmod()` has traditionally worked on old non-ACL Unixes, would either leave gaping security holes or see what they think to be gaping security holes: Traditional Unix applications expect to be able to deny all access to a file, named pipe, device, or directory with chmod(…,000) . In the presence of ACLs, this only turns off all user and group permissions if the old S_IRWXG maps to CLASS_OBJ . Without this, setting the old file permissions to 000 wouldn't affect any USER or GROUP entries and other users would, surprisingly, still have access to the object. Temporarily changing a file's permission bits to no access with chmod 000 and then changing them back again was an old file locking mechanism, used before Unixes gained advisory locking mechanisms, that — as you can see — people still use today . Traditional Unix scripts expect to be able to run chmod go-rwx and end up with only the object's owner able to access the object. Again — as you can see — this is still the received wisdom twelve years later. And again, this doesn't work unless the old S_IRWXG maps to CLASS_OBJ if it exists, because otherwise that chmod command wouldn't turn off any USER or GROUP access control entries, leading to users other than the owner and non-owning groups retaining access to something that is expected to be accessible only to the owner. A system where the permission bits were otherwise separate from and and ed with the ACLs would require file permission flags to be rwxrwxrwx in most cases, which would confuse the heck out of the many Unix applications that complain when they see what they think to be world-writable stuff. A system where the permission bits were otherwise separate from and or ed with the ACLs would have the chmod(…,000) problem mentioned before. Further reading Winfried Trümper (1999-02-28). 
Summary about Posix.1e Portable Applications Standards Committee of the IEEE Computer Society (October 1997). Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)— Amendment #: Protection, Audit and Control Interfaces [C Language] IEEE 1003.1e. Draft 17. Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System . NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548.
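To see the mask behaviour described above in action, here is a minimal sketch for a Linux system with the acl tools (setfacl/getfacl) installed; the file name and the user name alice are made-up placeholders:

    touch demo.txt
    setfacl -m u:alice:rw demo.txt   # adding a named-user entry also creates a mask entry
    getfacl demo.txt                 # shows a "mask::rw-" line alongside "user:alice:rw-"

    chmod g-rwx demo.txt             # the group bits of chmod now operate on the mask...
    getfacl demo.txt                 # ...so the mask becomes "mask::---" and the
                                     # named-user entry's effective rights drop to none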
{ "source": [ "https://serverfault.com/questions/352785", "https://serverfault.com", "https://serverfault.com/users/108003/" ] }
352,786
I swapped out some tapes from our older Amanda system over the weekend. In doing so I was a bit over-eager and took out the first tape that Amanda was expecting to use for the weekly backup run. The next several tapes in the series are all present, but now the run has been put in the holding disk. How would I tell amanda to 'amflush' the backup run but skip to the next tape in the series? I.e. it's expecting 'ARCHIVE-0150, ARCHIVE-0151, ARCHIVE-0152, ARCHIVE-0153, ARCHIVE-0154' and I want it to start with ARCHIVE-0151 and continue on from there.
{ "source": [ "https://serverfault.com/questions/352786", "https://serverfault.com", "https://serverfault.com/users/72780/" ] }
352,835
I need to run a script daily. The script should be run as a specific user (e.g. user1), not as root. So I put the cron file in /etc/cron.d and put the user name in the line (2nd column). But it gives an error saying that the command is not found. I suspect that the script was not run with user1's environment. Did I miss something?
Only /etc/crontab and the files in /etc/cron.d/ have a username field. In that file you can do this: 1 1 * * * username /path/to/your/script.sh From root's crontab sudo crontab -e you can use: 1 1 * * * su username -c "/path/to/your/script.sh" Or you can use the user's actual crontab like this: sudo crontab -u username -e The second column in any crontab file is for the hour that you want the job to run at. Did you mean the sixth field?
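For completeness, a full /etc/cron.d file might look like the sketch below. The file name, user name and paths are placeholders; the SHELL and PATH lines are optional but worth setting explicitly, since cron's default environment is very small and is a common cause of "command not found" errors:

    # /etc/cron.d/myscript   (hypothetical example; adjust user and paths)
    SHELL=/bin/bash
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    # m  h  dom mon dow  user   command
    1    1  *   *   *    user1  /path/to/your/script.sh >> /var/log/myscript.log 2>&1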
{ "source": [ "https://serverfault.com/questions/352835", "https://serverfault.com", "https://serverfault.com/users/18486/" ] }
352,942
Is logrotate hiding somewhere on OSX, or is there an equivalent? It's not in /usr/sbin .
Based on Brian Armstrong's answer, here's something with a little more explanation and a correction. This handles the log created by postgres on OSX installed by Homebrew. Located at /etc/newsyslog.d/postgresql.conf : # logfilename [owner:group] mode count size(KB) when flags [/pid_file] [sig_num] /usr/local/var/postgres/postgresql.log : 600 2 2048 * J /usr/local/var/postgres/postmaster.pid This will rotate the log file when it reaches 2MB in size, keep 2 archives (for a total of 6MB storage used), and bzip2-compress the archives. It will notify the postgres process to reopen the log files once rotated, which is necessary to get new log entries and to actually free the disk space without restarting the machine. Important to note that size is in KB, not bytes. You can test the config file (without affecting any files) using sudo newsyslog -nvv . newsyslog documentation is located here: http://www.freebsd.org/cgi/man.cgi?newsyslog.conf(5) . Also used: http://www.redelijkheid.com/blog/2011/3/28/adding-custom-logfile-to-os-x-server-log-rotation.html
{ "source": [ "https://serverfault.com/questions/352942", "https://serverfault.com", "https://serverfault.com/users/68259/" ] }
353,130
This doesn't work for me: # iptables -A INPUT -p tcp --dports 110,143,993,995 -j ACCEPT iptables v1.4.7: unknown option `--dports' Try `iptables -h' or 'iptables --help' for more information. However in the man page, there is an option --dports ... any ideas?
You have to use --match multiport in the rule to specify multiple ports: #iptables -A INPUT -p tcp --match multiport --dports 110,143,993,995 -j ACCEPT
{ "source": [ "https://serverfault.com/questions/353130", "https://serverfault.com", "https://serverfault.com/users/34653/" ] }
354,403
I have a bash script for deploying code from a beta environment to a production environment, but currently I have to add the list of files to a txt file manually and sometimes I miss some. Basically my deployment script cat/loops copying the files over. (exports/imports db as well but that's not relevant..lol) Anyway, I'd like to use the find command to generate a list of files modified in the last 14 days. The problem is I need to strip the leading ./ from the path in order for the deployment script to work. Here's an example of the find command usage: find . -type f -mtime -14 > deploy.txt Here's the line that cats deploy.txt in my deployment script: for i in `cat deploy.txt`; do cp -i /home/user/beta/public_html/$i /home/user/public_html/$i; done Any idea how to accomplish this using bash scripting? Thanks!
You can use the -printf command line option with %f to print just the filename without any directory information find . -type f -mtime -14 -printf '%f\n' > deploy.txt or you can use sed to just remove the ./ find . -type f -mtime -14 | sed 's|^./||' >deploy.txt
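One caveat worth noting: %f strips the directory part entirely, so if any of the modified files live in subdirectories the copy loop would lose that information. With GNU find, %P prints the path relative to the starting directory (without the leading ./ but keeping any subdirectory component), which may be closer to what the deployment script expects:

    find . -type f -mtime -14 -printf '%P\n' > deploy.txt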
{ "source": [ "https://serverfault.com/questions/354403", "https://serverfault.com", "https://serverfault.com/users/48581/" ] }
354,412
I am trying to update my Debian server (version 6.0.3 returned from lsb_release) from kernel 2.6.8 to 2.6.32, but it keeps saying that my network driver is tg3, even though I have created a blacklist entry for tg3 in /etc/modprobe.d/blacklist. It should use e1000. How can I set it? I won't be able to get physical access to the server in the near future, so I have to do it as a script which could then be run on boot or by cron, if I cannot set it in a config file. EDIT: Using ethtool and the bus IDs I figured out that the numbering of the 2 network interfaces changed when they came up, but I still can't figure out why the interface will not come up. I have created the same configuration for both interfaces in /etc/network/interfaces
{ "source": [ "https://serverfault.com/questions/354412", "https://serverfault.com", "https://serverfault.com/users/108588/" ] }
354,615
I'm starting a very little hosting company for a few friends and little clients, nothing big. I want to give my "clients" the right to manage their files on the server. I hate FTP as it is not secure and it's in my opinion obsolete. So I'd like to allow my users to connect through SFTP but not allow them to connect through SSH. (I know, I know, SFTP is using SSH). But I was just wondering, is it possible? So I wouldn't have to install a FTP service on the server and everything would be awesome!
Starting with version 4.9 OpenSSH (not available in centos 5.x but ChrootDirectory feature was backported) has an internal-sftp subsystem: Subsystem sftp internal-sftp And then block other uses: Match group sftponly ChrootDirectory /upload/%u X11Forwarding no AllowTcpForwarding no AllowAgentForwarding no ForceCommand internal-sftp Add your users to the sftponly group. The chroot directory must be owned by root, and cannot be group-writeable, so create a subdirectory for each user, e.g. uploads or home/$username that's owned by the appropriate user (if you match their home directory, it will be the default working directory when connecting). I'd also set /bin/false as the user's shell. As an example, users can then upload single files with: sftp username@hostname <<< 'put filename.ext uploads/' (scp will hopefully soon be modified to use sftp so this will become easier)
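A minimal sketch of the account and directory setup that the configuration above expects; the group, user and path names are placeholders and should be adjusted to your layout:

    groupadd sftponly
    useradd -g sftponly -s /bin/false -d /upload/client1 client1
    passwd client1

    # The chroot itself must be owned by root and not group/world-writable...
    mkdir -p /upload/client1
    chown root:root /upload/client1
    chmod 755 /upload/client1

    # ...so give the user a writable directory inside it.
    mkdir /upload/client1/files
    chown client1:sftponly /upload/client1/files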
{ "source": [ "https://serverfault.com/questions/354615", "https://serverfault.com", "https://serverfault.com/users/28259/" ] }
355,292
I know I have existing groups and users but I'm not sure about their association. Is there a shell command I can use to list all users or all groups, and a command to list all groups/users for a specified user/group? So something like showusers would list all users, and showgroups -u thisuser would show all the groups that have thisuser in it.
All users: $ getent passwd All groups: $ getent group All groups with a specific user: $ getent group | grep username
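A couple of related commands, in case they map more directly onto the "which groups does this user have / who is in this group" questions (assuming a reasonably standard Linux userland; the names are placeholders):

    # All groups a given user belongs to, including the primary group:
    id -nG username
    # or
    groups username

    # Members of a given group; note this only lists users explicitly added
    # to the group, not users who merely have it as their primary group:
    getent group groupname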
{ "source": [ "https://serverfault.com/questions/355292", "https://serverfault.com", "https://serverfault.com/users/85879/" ] }
355,321
I have a 30gb disk image of a borked partition (think dd if=/dev/sda1 of=diskimage ) that I need to recover some text files from. Data carving tools like foremost only work on files with well defined headers, i.e. not plain text files, so I've fallen back on my good friend strings . strings diskimage > diskstrings.txt produced a 3gb text file containing a bunch of strings, mostly useless stuff, mixed in with the text that I actually want. Most of the cruft tends to be really long, unbroken strings of gibberish. The stuff I'm interested in is guaranteed to be less than 16kb, so I'm going to filter the file by line length. Here's the Python script I'm using to do so: infile = open ("infile.txt" ,"r"); outfile = open ("outfile.txt","w"); for line in infile: if len(line) < 16384: outfile.write(line) infile.close() outfile.close() This works, but for future reference: Are there any magical one-line incantations (think awk , sed ) that would filter a file by line length?
awk '{ if (length($0) < 16384) print }' yourfile >your_output_file.txt would print lines shorter than 16 kilobytes, as in your own example. Or if you fancy Perl: perl -nle 'if (length($_) < 16384) { print }' yourfile >your_output_file.txt
{ "source": [ "https://serverfault.com/questions/355321", "https://serverfault.com", "https://serverfault.com/users/106498/" ] }
355,414
For example: we have registered the domain name domain.com and added name server records at the registrar's server: ns1.domain.com. ns2.domain.com. ns3.domain.com. Then we look up domain.com. We get all 3 name server addresses. 1. Which one of those servers will be queried next, and why? 2. Does the order of NS records in the zone file matter? 3. Is it determined in any RFC?
Sadly, the answer here is "it depends". The factors it depends on will vary with the domain and how the owning servers are set up as well as how your own local DNS is set up. First, for example, regarding the NS records returned: it is perfectly allowed to randomise the order in which those records are returned, so the order may differ each time you request it. On the other hand, that is not done by all DNS implementations, so you might well get a statically ordered list. The point is that you cannot be sure. Next, some DNS implementations will query each NS in parallel, and use whichever one replies first. Others will hit each, determine the fastest over some number of requests and use that one. Or it could just round-robin. There are multiple RFCs for DNS, two of the more useful that I have found are: http://www.faqs.org/rfcs/rfc1912.html http://www.faqs.org/rfcs/rfc1033.html I realize this is something of a non-answer, without anything definitive for you to take away, but given the above, the only true way you have to determine the behavior for a given domain is to test.
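If you want to experiment with a particular domain yourself, something like the following shows what your resolver returns and what one of the authoritative servers returns directly (example.com is a placeholder); whether the ordering changes between runs depends entirely on the implementations involved:

    # Ask your local resolver a few times and compare the ordering:
    for i in 1 2 3; do dig +short NS example.com; echo ---; done

    # Ask one of the authoritative servers directly, bypassing your resolver's cache:
    dig @"$(dig +short NS example.com | head -n 1)" +norecurse NS example.com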
{ "source": [ "https://serverfault.com/questions/355414", "https://serverfault.com", "https://serverfault.com/users/108945/" ] }
355,511
Or put another way, is using v=spf1 a mx ~all recommended over using v=spf1 a mx -all ? The RFC does not appear to make any recommendations. My preference has always been to use FAIL, which causes problems to become apparent immediately. I find that with SOFTFAIL, incorrectly configured SPF records are allowed to persist indefinitely, since no one notices. All of the examples I have seen online, however, seem to use SOFTFAIL. What made me question my choice was when I saw the Google Apps instructions for configuring SPF: Create a TXT record containing this text: v=spf1 include:_spf.google.com ~all Publishing an SPF record that uses -all instead of ~all may result in delivery problems. See Google IP address ranges for details about the addresses for the Google Apps mail servers. Are the examples being overly cautious by pushing the use of SOFTFAIL? Are there good reasons that make the use of SOFTFAIL a best practice?
Well, it was certainly not the intent of the specification for it to be used instead - softfail is intended as a transition mechanism, where you can have the messages marked without rejecting them outright. As you've found, failing messages outright tends to cause problems; some legitimate services, for example, will spoof your domain's addresses in order to send mail on behalf of your users. Because of this, the less draconian softfail is recommended in a lot of cases as a less-painful way to still get a lot of the help that SPF offers, without some of the headaches; recipient's spam filters can still take the softfail as a strong hint that a message may be spam (which many do). If you're confident that no message should ever come from a node other than what you've specified, then by all means, use fail as the SPF standard intended.. but as you've observed, softfail has definitely grown beyond its intended use.
{ "source": [ "https://serverfault.com/questions/355511", "https://serverfault.com", "https://serverfault.com/users/55554/" ] }
355,750
I run a rather busy site, and during peak hours I see over 10,000 open connections to my database server on my webserver when I run a netstat command. 99% of the connections are in the TIME_WAIT state. I learned about this mysql variable: wait_timeout http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_wait_timeout today. Mine is still set at the default 28,800 seconds. Is lowering this value safe? None of my queries usually takes over a second, so it seems silly to keep a connection open for 480 minutes. I also heard about using mysql_pconnect instead of mysql_connect, but I've been reading nothing but horror stories about it, so I think I'll stay away from that.
Lowering the value is pretty trivial without a mysql restart Let's say you want to lower timeouts to 30 seconds First, add this to my.cnf [mysqld] interactive_timeout=30 wait_timeout=30 Then, you can do something like this mysql -uroot -ppassword -e"SET GLOBAL wait_timeout=30; SET GLOBAL interactive_timeout=30" All DB Connections after this will timeout in 30 seconds WARNING Make sure to use explicitly use mysql_close. I do not trust Apache as most developers do. If not, sometimes, there is a race condition where Apache closes a DB Connection but does not inform mysqld and mysqld hold that connection open until it times out. Even worse, you may see TIME_WAITs more often. Choose your timeout values wisely. UPDATE 2012-11-12 10:10 EDT CAVEAT After applying my posted suggestions, create a script called /root/show_mysql_netstat.sh with the following lines: netstat | grep mysql > /root/mysql_netstat.txt cat /root/mysql_netstat.txt | awk '{print $5}' | sed 's/:/ /g' | awk '{print $2}' | sort -u > /root/mysql_netstat_iplist.txt for IP in `cat /root/mysql_netstat_iplist.txt` do ESCOUNT=`cat /root/mysql_netstat.txt | grep ESTABLISHED | awk '{print $5}' | grep -c "${IP}"` TWCOUNT=`cat /root/mysql_netstat.txt | grep TIME_WAIT | awk '{print $5}' | grep -c "${IP}"` IPPAD=`echo "${IP}..................................." | cut -b -35` (( ESCOUNT += 1000000 )) (( TWCOUNT += 1000000 )) ES=`echo ${ESCOUNT} | cut -b 3-` TW=`echo ${TWCOUNT} | cut -b 3-` echo ${IPPAD} : ESTABLISHED:${ES} TIME_WAIT:${TW} done echo ; echo netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n | sed 's/d)/d/' When you run this, you should see something like this: [root@*** ~]# /root/ShowConnProfiles.sh 10.48.22.4......................... : ESTABLISHED:00002 TIME_WAIT:00008 10.48.22.8......................... : ESTABLISHED:00000 TIME_WAIT:00002 10.64.51.130....................... : ESTABLISHED:00001 TIME_WAIT:00000 10.64.51.133....................... : ESTABLISHED:00000 TIME_WAIT:00079 10.64.51.134....................... : ESTABLISHED:00002 TIME_WAIT:00001 10.64.51.17........................ : ESTABLISHED:00003 TIME_WAIT:01160 10.64.51.171....................... : ESTABLISHED:00002 TIME_WAIT:00000 10.64.51.174....................... : ESTABLISHED:00000 TIME_WAIT:00589 10.64.51.176....................... : ESTABLISHED:00001 TIME_WAIT:00570 1 established 1 Foreign 11 LISTEN 25 ESTABLISHED 1301 TIME_WAIT If you still see a lot of mysql TIME_WAITs for any given web server, here are two escalation steps to take: ESCALATION #1 Login to the offending web server and restart apache as follows: service httpd stop sleep 30 service httpd start If necessary, do this to all the web servers service httpd stop (on all web servers) service mysql stop sleep 120 service mysql start service httpd start (on all web servers) ESCALATION #2 You can force the OS to kill TIME_WAITs for mysql or any other app with the following: SEC_TO_TIMEWAIT=1 echo ${SEC_TO_TIMEWAIT} > /proc/sys/net/ipv4/tcp_tw_recycle echo ${SEC_TO_TIMEWAIT} > /proc/sys/net/ipv4/tcp_tw_reuse This will make TIME_WAITs time out in 1 second. To give credit where credit is due... I got this idea from this post: How to forcibly close a socket in TIME_WAIT? The accepted answer has a pictorial representation of when a TIME_WAIT comes into existence. The answer with the idea that I liked is the one I am now suggesting .
{ "source": [ "https://serverfault.com/questions/355750", "https://serverfault.com", "https://serverfault.com/users/68748/" ] }
355,887
This is a Canonical Question about DNS (Domain Name Service). If my understanding of the DNS system is correct, the .com registry holds a table that maps domains (www.example.com) to DNS servers. What is the advantage? Why not map directly to an IP address? If the only record that needs to change when I am configuring a DNS server to point to a different IP address, is located at the DNS server, why isn't the process instant? If the only reason for the delay are DNS caches, is it possible to bypass them, so I can see what is happening in real time?
Actually, it's more complicated than that - rather than one "central registry (that) holds a table that maps domains (www.mysite.com) to DNS servers", there are several layers of hierarchy There's a central registry (the Root Servers) which contain only a small set of entries: the NS (nameserver) records for all the top-level domains - .com , .net , .org , .uk , .us , .au , and so on. Those servers just contain NS records for the next level down. To pick one example, the nameservers for the .uk domain just has entries for .co.uk , .ac.uk , and the other second-level zones in use in the UK. Those servers just contain NS records for the next level down - to continue the example, they tell you where to find the NS records for google.co.uk . It's on those servers that you'll finally find a mapping between a hostname like www.google.co.uk and an IP address. As an extra wrinkle, each layer will also serve up 'glue' records. Each NS record maps a domain to a hostname - for instance, the NS records for .uk list nsa.nic.uk as one of the servers. To get to the next level, we need to find out the NS records for nic.uk are, and they turn out to include nsa.nic.uk as well. So now we need to know the IP of nsa.nic.uk , but to find that out we need to make a query to nsa.nic.uk , but we can't make that query until we know the IP for nsa.nic.uk ... To resolve this quandary, the servers for .uk add the A record for nsa.nic.uk into the ADDITIONAL SECTION of the response (response below trimmed for brevity): jamezpolley@li101-70:~$dig nic.uk ns ; <<>> DiG 9.7.0-P1 <<>> nic.uk ns ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21768 ;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 14 ;; QUESTION SECTION: ;nic.uk. IN NS ;; ANSWER SECTION: nic.uk. 172800 IN NS nsb.nic.uk. nic.uk. 172800 IN NS nsa.nic.uk. ;; ADDITIONAL SECTION: nsa.nic.uk. 172800 IN A 156.154.100.3 nsb.nic.uk. 172800 IN A 156.154.101.3 Without these extra glue records, we'd never be able to find the nameservers for nic.uk. and so we'd never be able to look up any domains hosted there. To get back to your questions... a) What is the advantage? Why not map directly to an IP address? For one thing, it allows edits to each individual zone to be distributed. If you want to update the entry for www.mydomain.co.uk , you just need to edit the information on your mydomain.co.uk 's nameserver. There's no need to notify the central .co.uk servers, or the .uk servers, or the root nameservers. If there was only a single central registry that mapped all the levels all the way down the hierarchy that had to be notified about every single change of a DNS entry all the way down the chain, it would be absolutely swamped with traffic. Before 1982, this was actually how name resolution happened. One central registry was notified about all updates, and they distributed a file called hosts.txt which contained the hostname and IP address of every machine on the internet. A new version of this file was published every few weeks, and every machine on the internet would have to download a new copy. Well before 1982, this was starting to become problematic, and so DNS was invented to provide a more distributed system. For another thing, this would be a Single Point of Failure - if the single central registry went down, the entire internet would be offline. Having a distributed system means that failures only affect small sections of the internet, not the whole thing. 
(To provide extra redundancy, there are actually 13 separate clusters of servers that serve the root zone. Any changes to the top-level domain records have to be pushed to all 13; imagine having to coordinate updating all 13 of them for every single change to any hostname anywhere in the world...) b) If the only record that needs to change when I am configuring a DNS server to point to a different IP address is located at the DNS server, why isn't the process instant? Because DNS utilises a lot of caching to both speed things up and decrease the load on the NSes. Without caching, every single time you visited google.co.uk your computer would have to go out to the network to look up the servers for .uk , then .co.uk , then .google.co.uk , then www.google.co.uk . Those answers don't actually change much, so looking them up every time is a waste of time and network traffic. Instead, when the NS returns records to your computer, it will include a TTL value, that tells your computer to cache the results for a number of seconds. For example, the NS records for .uk have a TTL of 172800 seconds - 2 days. Google are even more conservative - the NS records for google.co.uk have a TTL of 4 days. Services which rely on being able to update quickly can choose a much lower TTL - for instance, telegraph.co.uk has a TTL of just 600 seconds on their NS records. If you want updates to your zone to be near-instant, you can choose to lower your TTL as far down as you like. The lower your set it, the more traffic your servers will see, as clients refresh their records more often. Every time a client has to contact your servers to do a query, this will cause some lag as it's slower than looking up the answer on their local cache, so you'll also want to consider the tradeoff between fast updates and a fast service. c) If the only reason for the delay are DNS caches, is it possible to bypass them, so I can see what is happening in real time? Yes, this is easy if you're testing manually with dig or similar tools - just tell it which server to contact. Here's an example of a cached response: jamezpolley@host:~$dig telegraph.co.uk NS ; <<>> DiG 9.7.0-P1 <<>> telegraph.co.uk NS ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36675 ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;telegraph.co.uk. IN NS ;; ANSWER SECTION: telegraph.co.uk. 319 IN NS ns1-63.akam.net. telegraph.co.uk. 319 IN NS eur3.akam.net. telegraph.co.uk. 319 IN NS use2.akam.net. telegraph.co.uk. 319 IN NS usw2.akam.net. telegraph.co.uk. 319 IN NS use4.akam.net. telegraph.co.uk. 319 IN NS use1.akam.net. telegraph.co.uk. 319 IN NS usc4.akam.net. telegraph.co.uk. 319 IN NS ns1-224.akam.net. ;; Query time: 0 msec ;; SERVER: 97.107.133.4#53(97.107.133.4) ;; WHEN: Thu Feb 2 05:46:02 2012 ;; MSG SIZE rcvd: 198 The flags section here doesn't contain the aa flag, so we can see that this result came from a cache rather than directly from an authoritative source. In fact, we can see that it came from 97.107.133.4 , which happens to be one of Linode's local DNS resolvers. The fact that the answer was served out of a cache very close to me means that it took 0msec for me to get an answer; but as we'll see in a moment, the price I pay for that speed is that the answer is almost 5 minutes out of date. 
To bypass Linode's resolver and go straight to the source, just pick one of those NSes and tell dig to contact it directly: jamezpolley@li101-70:~$dig @ns1-224.akam.net telegraph.co.uk NS ; <<>> DiG 9.7.0-P1 <<>> @ns1-224.akam.net telegraph.co.uk NS ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23013 ;; flags: qr aa rd; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 0 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;telegraph.co.uk. IN NS ;; ANSWER SECTION: telegraph.co.uk. 600 IN NS use2.akam.net. telegraph.co.uk. 600 IN NS eur3.akam.net. telegraph.co.uk. 600 IN NS use1.akam.net. telegraph.co.uk. 600 IN NS ns1-63.akam.net. telegraph.co.uk. 600 IN NS usc4.akam.net. telegraph.co.uk. 600 IN NS ns1-224.akam.net. telegraph.co.uk. 600 IN NS usw2.akam.net. telegraph.co.uk. 600 IN NS use4.akam.net. ;; Query time: 9 msec ;; SERVER: 193.108.91.224#53(193.108.91.224) ;; WHEN: Thu Feb 2 05:48:47 2012 ;; MSG SIZE rcvd: 198 You can see that this time, the results were served directly from the source - note the aa flag, which indicates that the results came from an authoritative source. In my earlier example, the results came from my local cache, so they lack the aa flag. I can see that the authoritative source for this domain sets a TTL of 600 seconds. The results I got earlier from a local cache had a TTL of just 319 seconds, which tells me that they'd been sitting in the cache for (600-319) seconds - almost 5 minutes - before I saw them. Although the TTL here is only 600 seconds, some ISPs will attempt to reduce their traffic even further by forcing their DNS resolvers to cache the results for longer - in some cases, for 24 hours or more. It's traditional (in a we-don't-know-if-this-is-really-neccessary-but-let's-be-safe kind of way) to assume that any DNS change you make won't be visible everywhere on the internet for 24-48 hours.
{ "source": [ "https://serverfault.com/questions/355887", "https://serverfault.com", "https://serverfault.com/users/109136/" ] }
355,944
I have the following shell script $cat capture.sh TIME=$(date +"%H-%M-%d-%m-%y") IP="203.208.198.29" PREFIX=$TIME$IP tshark -f "udp" -i eth0 -w /root/captures/$PREFIX.cap& pid=$! sleep 2m kill $pid it runs fine when i execute it from shell. but when i add it to the cron tab nothing happens. my crontab entry : 1 */2 * 2 3,4,5 sh /root/capture.sh tail /var/log/cron shows that the command has executed . but nothing happens. i have set executable permission for "all" for capture.sh and write permission for "all" for /root/captures directory. Thanks in advance
Your PATH variable probably isn't what you expect it to be inside cron. Use full paths to each executable in your script or set the path manually in your crontab or the script. Also, a better way of stopping your tshark would be using the built-in functionality: -a <capture autostop condition> duration:value Stop writing to a capture file after value seconds have elapsed. Also #2: add a shebang line ( #! )
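Putting those points together, a revised version of the script might look like the sketch below. The PATH value is an assumption, so check where tshark actually lives on your system (for example with 'which tshark') and adjust if needed:

    #!/bin/bash
    # Revised /root/capture.sh (sketch): set an explicit PATH instead of
    # relying on cron's minimal environment.
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    TIME=$(date +"%H-%M-%d-%m-%y")
    IP="203.208.198.29"
    PREFIX="$TIME$IP"

    # -a duration:120 makes tshark stop by itself after two minutes,
    # so the sleep/kill pair is no longer needed.
    tshark -f "udp" -i eth0 -a duration:120 -w "/root/captures/$PREFIX.cap"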
{ "source": [ "https://serverfault.com/questions/355944", "https://serverfault.com", "https://serverfault.com/users/83503/" ] }
356,598
The server works fine via the Amazon assigned DNS entry, but I cannot reach it (using a browser) via the Elastic IP address Amazon assigned the box. Ping does not work either. I am trying to confirm it is reachable before I add the IP address to my own DNS entries.
Things to check: Is your elastic IP associated with your instance? Does your instance's security group permit incoming connections? Does your instance's firewall permit incoming connections? Is your application listening, and on the right interface? Some quick checks for the last two are sketched below.
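Rough commands for checking from the instance itself, plus a way to test reachability without relying on ping (ICMP is often not allowed by the security group anyway); the port number is just an example:

    # On the instance:
    sudo iptables -L -n       # any local firewall rules in the way?
    sudo netstat -plnt        # is the service listening, and on 0.0.0.0
                              # rather than only 127.0.0.1?

    # From outside, test the TCP port directly instead of pinging:
    telnet <elastic-ip> 80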
{ "source": [ "https://serverfault.com/questions/356598", "https://serverfault.com", "https://serverfault.com/users/109438/" ] }
356,962
I have two CentOS 5 servers with nearly identical specs. When I login and do ulimit -u , on one machine I get unlimited , and on the other I get 77824 . When I run a cron like: * * * * * ulimit -u > ulimit.txt I get the same results ( unlimited , 77824 ). I am trying to determine where these are set so that I can alter them. They are not set in any of my profiles ( .bashrc , /etc/profile , etc.). These wouldn't affect cron anyway) nor in /etc/security/limits.conf (which is empty). I have scoured google and even gone so far as to do grep -Ir 77824 / , but nothing has turned up so far. I don't understand how these machines could have come preset with different limits. I am actually wondering not for these machines, but for a different (CentOS 6) machine which has a limit of 1024 , which is far too small. I need to run cron jobs with a higher limit and the only way I know how to set that is in the cron job itself. That's ok, but I'd rather set it system wide so it's not as hacky. Thanks for any help. This seems like it should be easy (NOT). EDIT -- SOLVED Ok, I figured this out. It seems to be an issue either with CentOS 6 or perhaps my machine configuration. On the CentOS 5 configuration, I can set in /etc/security/limits.conf : * - nproc unlimited and that would effectively update the accounts and cron limits. However, this does not work in my CentOS 6 box. Instead, I must do: myname1 - nproc unlimited myname2 - nproc unlimited ... And things work as expected. Maybe the UID specification works to, but the wildcard (*) definitely DOES NOT here. Oddly, wildcards DO work for the nofile limit. I still would love to know where the default values are actually coming from, because by default, this file is empty and I couldn't see why I had different defaults for the two CentOS boxes, which had identical hardware and were from the same provider.
These "default" limits are applied by: the Linux kernel at boot time (to the init or systemd process), inheritance , from the parent process' limits (at fork(2) time), PAM when the user session is opened (can replace kernel/inherited values), systemd , especially to the processes it manages, the process itself (can replace PAM & kernel/inherited values, see setrlimit(2) ). Normal users' processes cannot rise hard limits. The Linux kernel At boot time, Linux sets default limits to the init (or systemd ) process, which are then inherited by all the other (children) processes. To see these limits: cat /proc/1/limits . For example, the kernel default for maximum number of file descriptors ( ulimit -n ) was 1024/1024 (soft, hard), and has been raised to 1024/4096 in Linux 2.6.39. The default maximum number of processes you're talking about is limited to approximately: Total RAM in kB / 128 for x86 architectures (at least), but distributions sometimes change default kernel values, so check your kernel source code for kernel/fork.c , fork_init() . The "number of processes" limit is called RLIMIT_NPROC there. PAM Usually, to ensure user authentication at login, PAM is used along with some modules (see /etc/pam.d/login ). On Debian, the PAM module responsible for setting limits is here : /lib/security/pam_limits.so . This library will read its configuration from /etc/security/limits.conf and /etc/security/limits.d/*.conf , but even if those files are empty, pam_limits.so might use hardcoded values that you can check within the source code. For example, on Debian, the library has been patched so that by default, the maximum number of processes ( nproc ) is unlimited, and the maximum number of files ( nofile ) is 1024/1024: case RLIMIT_NOFILE: pl->limits[i].limit.rlim_cur = 1024; pl->limits[i].limit.rlim_max = 1024; So, check your CentOS' PAM module source code (look for RLIMIT_NPROC). However, please note that many processes will not go through PAM (usually, if they are not launched by a logged in user, like daemons and maybe cron jobs). systemd Nowadays, systemd is widely used, it replaces init and can also configure specific limits values, especially to the processes/daemons it manages and creates itself. Some limits it uses by default can be manually configured in /etc/systemd/system.conf . There is more information available in the documentation.
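A rough way to trace where a given limit comes from and to override it via pam_limits; the user name and file name below are placeholders. On CentOS 6 it is also worth checking /etc/security/limits.d/ for a distribution-supplied file (e.g. 90-nproc.conf) that caps nproc for ordinary users, which would explain a default of 1024 even though limits.conf itself is empty:

    # Limits the kernel gave to PID 1 (inherited by everything else):
    cat /proc/1/limits

    # Limits a fresh session for a given user actually ends up with
    # (this goes through PAM, so pam_limits settings should be reflected):
    su - someuser -c 'ulimit -u; ulimit -n'

    # Example override, e.g. in /etc/security/limits.d/91-custom.conf:
    #   someuser  soft  nproc  8192
    #   someuser  hard  nproc  8192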
{ "source": [ "https://serverfault.com/questions/356962", "https://serverfault.com", "https://serverfault.com/users/109551/" ] }
357,066
How do I compress every file in a directory into its own tar whilst preserving the name for each file? i.e. file1.out file2.out --> file1.out.tar.gz file2.out.tar.gz
Putting every file into a separate tar file doesn't make any sense in this scenario. You can use gzip to compress them directly: gzip * will result in file1.out.gz, file2.out.gz etc. You would use tar only if you needed a compressed archive as a single file. If you indeed need a tar archive for every file, you can create it like so (the variables are quoted so that file names containing spaces survive): for i in *; do tar -czf "$i.tar.gz" "$i"; done
{ "source": [ "https://serverfault.com/questions/357066", "https://serverfault.com", "https://serverfault.com/users/97367/" ] }
357,108
This is a Canonical Question about File Permissions on a Linux web server. I have a Linux web server running Apache2 that hosts several websites. Each website has its own folder in /var/www/. /var/www/contoso.com/ /var/www/contoso.net/ /var/www/fabrikam.com/ The base directory /var/www/ is owned by root:root. Apache is running as www-data:www-data. The Fabrikam website is maintained by two developers, Alice and Bob. Both Contoso websites are maintained by one developer, Eve. All websites allow users to upload images. If a website is compromised, the impact should be as limited as possible. I want to know the best way to set up permissions so that Apache can serve the content, the website is secure from attacks, and the developers can still make changes. One of the websites is structured like this: /var/www/fabrikam.com /cache /modules /styles /uploads /index.php How should the permissions be set on these directories and files? I read somewhere that you should never use 777 permissions on a website, but I don't understand what problems that could cause. During busy periods, the website automatically caches some pages and stores the results in the cache folder. All of the content submitted by website visitors is saved to the uploads folder.
When deciding what permissions to use, you need to know exactly who your users are and what they need. A webserver interacts with two types of user. Authenticated users have a user account on the server and can be provided with specific privileges. This usually includes system administrators, developers, and service accounts. They usually make changes to the system using SSH or SFTP. Anonymous users are the visitors to your website. Although they don't have permissions to access files directly, they can request a web page and the web server acts on their behalf. You can limit the access of anonymous users by being careful about what permissions the web server process has. On many Linux distributions, Apache runs as the www-data user but it can be different. Use ps aux | grep httpd or ps aux | grep apache to see what user Apache is using on your system. Notes on linux permissions Linux and other POSIX-compliant systems use traditional unix permissions. There is an excellent article on Wikipedia about Filesystem permissions so I won't repeat everything here. But there are a few things you should be aware of. The execute bit Interpreted scripts (eg. Ruby, PHP) work just fine without the execute permission. Only binaries and shell scripts need the execute bit. In order to traverse (enter) a directory, you need to have execute permission on that directory. The webserver needs this permission to list a directory or serve any files inside of it. Default new file permissions When a file is created, it normally inherits the group id of whoever created it. But sometimes you want new files to inherit the group id of the folder where they are created, so you would enable the SGID bit on the parent folder. Default permission values depend on your umask. The umask subtracts permissions from newly created files, so the common value of 022 results in files being created with 755. When collaborating with a group, it's useful to change your umask to 002 so that files you create can be modified by group members. And if you want to customize the permissions of uploaded files, you either need to change the umask for apache or run chmod after the file has been uploaded. The problem with 777 When you chmod 777 your website, you have no security whatsoever. Any user on the system can change or delete any file in your website. But more seriously, remember that the web server acts on behalf of visitors to your website, and now the web server is able to change the same files that it's executing. If there are any programming vulnerabilities in your website, they can be exploited to deface your website, insert phishing attacks, or steal information from your server without you ever knowing. Additionally, if your server runs on a well-known port (which it should to prevent non-root users from spawning listening services that are world-accessible), that means your server must be started by root (although any sane server will immediately drop to a less-privileged account once the port is bound). In other words, if you're running a webserver where the main executable is part of the version control (e.g. a CGI app), leaving its permissions (or, for that matter, the permissions of the containing directory, since the user could rename the executable) at 777 allows any user to run any executable as root. 
Define the requirements Developers need read/write access to files so they can update the website Developers need read/write/execute on directories so they can browse around Apache needs read access to files and interpreted scripts Apache needs read/execute access to serveable directories Apache needs read/write/execute access to directories for uploaded content Maintained by a single user If only one user is responsible for maintaining the site, set them as the user owner on the website directory and give the user full rwx permissions. Apache still needs access so that it can serve the files, so set www-data as the group owner and give the group r-x permissions. In your case, Eve, whose username might be eve , is the only user who maintains contoso.com : chown -R eve contoso.com/ chgrp -R www-data contoso.com/ chmod -R 750 contoso.com/ chmod g+s contoso.com/ ls -l drwxr-s--- 2 eve www-data 4096 Feb 5 22:52 contoso.com If you have folders that need to be writable by Apache, you can just modify the permission values for the group owner so that www-data has write access. chmod g+w uploads ls -l drwxrws--- 2 eve www-data 4096 Feb 5 22:52 uploads The benefit of this configuration is that it becomes harder (but not impossible*) for other users on the system to snoop around, since only the user and group owners can browse your website directory. This is useful if you have secret data in your configuration files. Be careful about your umask! If you create a new file here, the permission values will probably default to 755. You can run umask 027 so that new files default to 640 ( rw- r-- --- ). Maintained by a group of users If more than one user is responsible for maintaining the site, you will need to create a group to use for assigning permissions. It's good practice to create a separate group for each website, and name the group after that website. groupadd dev-fabrikam usermod -a -G dev-fabrikam alice usermod -a -G dev-fabrikam bob In the previous example, we used the group owner to give privileges to Apache, but now that is used for the developers group. Since the user owner isn't useful to us any more, setting it to root is a simple way to ensure that no privileges are leaked. Apache still needs access, so we give read access to the rest of the world. chown -R root fabrikam.com chgrp -R dev-fabrikam fabrikam.com chmod -R 775 fabrikam.com chmod g+s fabrikam.com ls -l drwxrwsr-x 2 root dev-fabrikam 4096 Feb 5 22:52 fabrikam.com If you have folders that need to be writable by Apache, you can make Apache either the user owner or the group owner. Either way, it will have all the access it needs. Personally, I prefer to make it the user owner so that the developers can still browse and modify the contents of upload folders. chown -R www-data uploads ls -l drwxrwxr-x 2 www-data dev-fabrikam 4096 Feb 5 22:52 uploads Although this is a common approach, there is a downside. Since every other user on the system has the same privileges to your website as Apache does, it's easy for other users to browse your site and read files that may contain secret data, such as your configuration files. You can have your cake and eat it too This can be futher improved upon. It's perfectly legal for the owner to have less privileges than the group, so instead of wasting the user owner by assigning it to root, we can make Apache the user owner on the directories and files in your website. This is a reversal of the single maintainer scenario, but it works equally well. 
chown -R www-data fabrikam.com chgrp -R dev-fabrikam fabrikam.com chmod -R 570 fabrikam.com chmod g+s fabrikam.com ls -l dr-xrwx--- 2 www-data dev-fabrikam 4096 Feb 5 22:52 fabrikam.com If you have folders that need to be writable by Apache, you can just modify the permission values for the user owner so that www-data has write access. chmod u+w uploads ls -l drwxrwx--- 2 www-data dev-fabrikam 4096 Feb 5 22:52 fabrikam.com One thing to be careful about with this solution is that the user owner of new files will match the creator instead of being set to www-data. So any new files you create won't be readable by Apache until you chown them. *Apache privilege separation I mentioned earlier that it's actually possible for other users to snoop around your website no matter what kind of privileges you're using. By default, all Apache processes run as the same www-data user, so any Apache process can read files from all other websites configured on the same server, and sometimes even make changes. Any user who can get Apache to run a script can gain the same access that Apache itself has. To combat this problem, there are various approaches to privilege separation in Apache. However, each approach comes with various performance and security drawbacks. In my opinion, any site with higher security requirements should be run on a dedicated server instead of using VirtualHosts on a shared server. Additional considerations I didn't mention it before, but it's usually a bad practice to have developers editing the website directly. For larger sites, you're much better off having some kind of release system that updates the webserver from the contents of a version control system. The single maintainer approach is probably ideal, but instead of a person you have automated software. If your website allows uploads that don't need to be served out, those uploads should be stored somewhere outside the web root. Otherwise, you might find that people are downloading files that were intended to be secret. For example, if you allow students to submit assignments, they should be saved into a directory that isn't served by Apache. This is also a good approach for configuration files that contain secrets. For a website with more complex requirements, you may want to look into the use of Access Control Lists . These enable much more sophisticated control of privileges. If your website has complex requirements, you may want to write a script that sets up all of the permissions. Test it thoroughly, then keep it safe. It could be worth its weight in gold if you ever find yourself needing to rebuild your website for some reason.
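As an illustration of that last point, a minimal sketch of such a script for the group-maintained layout described above; the site path, group name and writable directories are placeholders:

    #!/bin/bash
    set -e
    SITE=/var/www/fabrikam.com
    GROUP=dev-fabrikam
    WEBUSER=www-data

    # Apache as user owner, the developer group as group owner
    # (the "have your cake and eat it too" variant above).
    chown -R "$WEBUSER" "$SITE"
    chgrp -R "$GROUP"   "$SITE"
    chmod -R 570        "$SITE"
    chmod g+s           "$SITE"

    # Directories Apache must be able to write to:
    for d in "$SITE/uploads" "$SITE/cache"; do
        chmod u+w "$d"
    done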
{ "source": [ "https://serverfault.com/questions/357108", "https://serverfault.com", "https://serverfault.com/users/23300/" ] }
357,323
What is the command to display a list of open ports on a Debian server? I tried netstat -a | egrep 'Proto|LISTEN' but I would like something more specific that actually lists the port number.
netstat -pln. Here -l lists listening ports, -p also displays the owning process, and -n shows port numbers instead of names. Add -t to only show TCP ports.
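For example, to narrow it down to TCP listeners with the owning process, or to get much the same view from the newer ss tool shipped with iproute:

    netstat -plnt
    ss -tlnp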
{ "source": [ "https://serverfault.com/questions/357323", "https://serverfault.com", "https://serverfault.com/users/103440/" ] }
357,799
I’m trying to improve my TCP throughput over a “gigabit network with lots of connections and high traffic of small packets”. My server OS is Ubuntu 11.10 Server 64bit. There are about 50.000 (and growing) clients connected to my server through TCP Sockets (all on the same port). 95% of of my packets have size of 1-150 bytes (TCP header and payload). The rest 5% vary from 150 up to 4096+ bytes. With the config below my server can handle traffic up to 30 Mbps (full duplex). Can you please advice best practice to tune OS for my needs? My /etc/sysctl.cong looks like this: kernel.pid_max = 1000000 net.ipv4.ip_local_port_range = 2500 65000 fs.file-max = 1000000 # net.core.netdev_max_backlog=3000 net.ipv4.tcp_sack=0 # net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.core.somaxconn = 2048 # net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 # net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_mem = 50576 64768 98152 # net.core.wmem_default = 65536 net.core.rmem_default = 65536 net.ipv4.tcp_window_scaling=1 # net.ipv4.tcp_mem= 98304 131072 196608 # net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_rfc1337 = 1 net.ipv4.ip_forward = 0 net.ipv4.tcp_congestion_control=cubic net.ipv4.tcp_tw_recycle = 0 net.ipv4.tcp_tw_reuse = 0 # net.ipv4.tcp_orphan_retries = 1 net.ipv4.tcp_fin_timeout = 25 net.ipv4.tcp_max_orphans = 8192 Here are my limits: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 193045 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1000000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1000000 [ADDED] My NICs are the following: $ dmesg | grep Broad [ 2.473081] Broadcom NetXtreme II 5771x 10Gigabit Ethernet Driver bnx2x 1.62.12-0 (2011/03/20) [ 2.477808] bnx2x 0000:02:00.0: eth0: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fb000000, IRQ 28, node addr d8:d3:85:bd:23:08 [ 2.482556] bnx2x 0000:02:00.1: eth1: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fa000000, IRQ 40, node addr d8:d3:85:bd:23:0c [ADDED 2] ethtool -k eth0 Offload parameters for eth0: rx-checksumming: on tx-checksumming: on scatter-gather: on tcp-segmentation-offload: on udp-fragmentation-offload: off generic-segmentation-offload: on generic-receive-offload: on large-receive-offload: on rx-vlan-offload: on tx-vlan-offload: on ntuple-filters: off receive-hashing: off [ADDED 3] sudo ethtool -S eth0|grep -vw 0 NIC statistics: [1]: rx_bytes: 17521104292 [1]: rx_ucast_packets: 118326392 [1]: tx_bytes: 35351475694 [1]: tx_ucast_packets: 191723897 [2]: rx_bytes: 16569945203 [2]: rx_ucast_packets: 114055437 [2]: tx_bytes: 36748975961 [2]: tx_ucast_packets: 194800859 [3]: rx_bytes: 16222309010 [3]: rx_ucast_packets: 109397802 [3]: tx_bytes: 36034786682 [3]: tx_ucast_packets: 198238209 [4]: rx_bytes: 14884911384 [4]: rx_ucast_packets: 104081414 [4]: rx_discards: 5828 [4]: rx_csum_offload_errors: 1 [4]: tx_bytes: 35663361789 [4]: tx_ucast_packets: 194024824 [5]: rx_bytes: 16465075461 [5]: rx_ucast_packets: 110637200 [5]: tx_bytes: 43720432434 [5]: tx_ucast_packets: 202041894 [6]: rx_bytes: 16788706505 [6]: rx_ucast_packets: 113123182 [6]: tx_bytes: 38443961940 [6]: tx_ucast_packets: 202415075 [7]: rx_bytes: 16287423304 [7]: rx_ucast_packets: 
110369475 [7]: rx_csum_offload_errors: 1 [7]: tx_bytes: 35104168638 [7]: tx_ucast_packets: 184905201 [8]: rx_bytes: 12689721791 [8]: rx_ucast_packets: 87616037 [8]: rx_discards: 2638 [8]: tx_bytes: 36133395431 [8]: tx_ucast_packets: 196547264 [9]: rx_bytes: 15007548011 [9]: rx_ucast_packets: 98183525 [9]: rx_csum_offload_errors: 1 [9]: tx_bytes: 34871314517 [9]: tx_ucast_packets: 188532637 [9]: tx_mcast_packets: 12 [10]: rx_bytes: 12112044826 [10]: rx_ucast_packets: 84335465 [10]: rx_discards: 2494 [10]: tx_bytes: 36562151913 [10]: tx_ucast_packets: 195658548 [11]: rx_bytes: 12873153712 [11]: rx_ucast_packets: 89305791 [11]: rx_discards: 2990 [11]: tx_bytes: 36348541675 [11]: tx_ucast_packets: 194155226 [12]: rx_bytes: 12768100958 [12]: rx_ucast_packets: 89350917 [12]: rx_discards: 2667 [12]: tx_bytes: 35730240389 [12]: tx_ucast_packets: 192254480 [13]: rx_bytes: 14533227468 [13]: rx_ucast_packets: 98139795 [13]: tx_bytes: 35954232494 [13]: tx_ucast_packets: 194573612 [13]: tx_bcast_packets: 2 [14]: rx_bytes: 13258647069 [14]: rx_ucast_packets: 92856762 [14]: rx_discards: 3509 [14]: rx_csum_offload_errors: 1 [14]: tx_bytes: 35663586641 [14]: tx_ucast_packets: 189661305 rx_bytes: 226125043936 rx_ucast_packets: 1536428109 rx_bcast_packets: 351 rx_discards: 20126 rx_filtered_packets: 8694 rx_csum_offload_errors: 11 tx_bytes: 548442367057 tx_ucast_packets: 2915571846 tx_mcast_packets: 12 tx_bcast_packets: 2 tx_64_byte_packets: 35417154 tx_65_to_127_byte_packets: 2006984660 tx_128_to_255_byte_packets: 373733514 tx_256_to_511_byte_packets: 378121090 tx_512_to_1023_byte_packets: 77643490 tx_1024_to_1522_byte_packets: 43669214 tx_pause_frames: 228 Some info about SACK: When to turn TCP SACK off?
The problem might be that you are getting too many interrupts on your network card. If Bandwidth is not the problem, frequency is the problem: Turn up send/receive buffers on the network card ethtool -g eth0 Will show you the current settings (256 or 512 entries). You can probably raise these to 1024, 2048 or 3172. More does probably not make sense. This is just a ring buffer that only fills up if the server is not able to process incoming packets fast enough. If the buffer starts to fill, flow control is an additional means to tell the router or switch to slow down: Turn on flow control in/outbound on the server and the switch/router-ports it is attached to. ethtool -a eth0 Will probably show: Pause parameters for eth0: Autonegotiate: on RX: on TX: on Check /var/log/messages for the current setting of eth0. Check for something like: eth0: Link is up at 1000 Mbps, full duplex, flow control tx and rx If you don't see tx and rx your network admins have to adjust the values on the switch/router. On Cisco that is receive/transmit flow control on. Beware: Changing these Values will bring your link down and up for a very short time (less than 1s). If all this does not help - you can also lower the speed of the network card to 100 MBit (do the same on the switch/router-ports) ethtool -s eth0 autoneg off && ethtool -s eth0 speed 100 But in your case I would say - raise the receive buffers in the NIC ring buffer.
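The corresponding "set" commands look roughly like this; the interface name and the values are examples, and what the hardware actually accepts depends on the bnx2x driver and firmware (stay within the pre-set maximums that ethtool -g reports):

    ethtool -g eth0                          # current and maximum ring sizes
    ethtool -G eth0 rx 2048 tx 2048          # raise the RX/TX rings

    ethtool -a eth0                          # current flow-control settings
    ethtool -A eth0 autoneg on rx on tx on   # enable flow control (link will flap briefly)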
{ "source": [ "https://serverfault.com/questions/357799", "https://serverfault.com", "https://serverfault.com/users/82828/" ] }
358,228
I have a KVM host machine with several VMs on it. Each VM uses a Logical Volume on the host. I need to copy the LVs to another host machine. Normally, I would use something like: dd if=/the/logical-volume of=/some/path/machine.dd To turn the LV into an image file and use SCP to move it. Then use DD to copy the file back to a new LV on the new host. The problem with this method is you need twice as much disk space as the VM takes on both machines. ie. a 5GB LV uses 5GB of space for the LV and the dd copy also uses an additional 5GB of space for the image. This is fine for small LVs, but what if (as is my case) you have a 500GB LV for a big VM? The new host machine has a 1TB hard drive, so it can't hold a 500GB dd image file and have a 500GB logical volume to copy to and have room for the host OS and room for other smaller guests. What I would like to do is something like: dd if=/dev/mygroup-mylv of=192.168.1.103/dev/newvgroup-newlv In other words, copy the data directly from one logical volume to the other over the network and skip the intermediate image file. Is this possible?
Sure, of course it's possible. dd if=/dev/mygroup-mylv | ssh 192.168.1.103 dd of=/dev/newvgroup-newlv Boom. Do yourself a favor, though, and use something larger than the default blocksize. Maybe add bs=4M (read/write in chunks of 4 MB). You can see there's some nitpicking about blocksizes in the comments; if this is something you find yourself doing fairly often, take a little time to try it a few different times with different blocksizes and see for yourself what gets you the best transfer rates. Answering one of the questions from the comments: You can pipe the transfer through pv to get statistics about the transfer. It's a lot nicer than the output you get from sending signals to dd . I will also say that while of course using netcat -- or anything else that does not impose the overhead of encryption -- is going to be more efficient, I usually find that the additional speed comes at some loss of convenience. Unless I'm moving around really large datasets, I usually stick with ssh despite the overhead because in most cases everything is already set up to Just Work.
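Putting the suggestions above together, one possible invocation (using the device names from the question; pv is optional and only needed for the progress display) might look like:
$ dd if=/dev/mygroup-mylv bs=4M | pv | ssh 192.168.1.103 'dd of=/dev/newvgroup-newlv bs=4M'
The destination LV should be at least as large as the source, and bs=4M on both ends keeps the reads and writes in large chunks.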
{ "source": [ "https://serverfault.com/questions/358228", "https://serverfault.com", "https://serverfault.com/users/15623/" ] }
358,229
Our web server (Nginx, MySQL, PHP) is presently being attacked by a DDoS. Outgoing traffic is normal (avg 563 kb/sec) but incoming traffic is what is eating up our 1 Gbit port (avg 800 Mb/sec). In the Nginx access log, I noticed POST requests returning a 499 error, coming very repeatedly from 10-15 unique IPs, to a support ticket system we have installed (/support/index.php - running OSTicket). I blocked INPUT/OUTPUT on these IPs in iptables. I don't think this did anything, but it was odd nonetheless considering these IPs were repeating the POST request every few seconds. How can I pinpoint the problematic IPs and block them from sending massive incoming requests? EDIT: Here is a printout of iptables -L -v http://pastebin.com/cyGLKJh4
Sure, of course it's possible. dd if=/dev/mygroup-mylv | ssh 192.168.1.103 dd of=/dev/newvgroup-newlv Boom. Do yourself a favor, though, and use something larger than the default blocksize. Maybe add bs=4M (read/write in chunks of 4 MB). You can see there's some nitpicking about blocksizes in the comments; if this is something you find yourself doing fairly often, take a little time to try it a few different times with different blocksizes and see for yourself what gets you the best transfer rates. Answering one of the questions from the comments: You can pipe the transfer through pv to get statistics about the transfer. It's a lot nicer than the output you get from sending signals to dd . I will also say that while of course using netcat -- or anything else that does not impose the overhead of encryption -- is going to be more efficient, I usually find that the additional speed comes at some loss of convenience. Unless I'm moving around really large datasets, I usually stick with ssh despite the overhead because in most cases everything is already set up to Just Work.
{ "source": [ "https://serverfault.com/questions/358229", "https://serverfault.com", "https://serverfault.com/users/137199/" ] }
358,866
Is there a command similar to mkfifo but for domain sockets?
There is no exact equivalent of mkfifo for sockets, i.e. there is no command that just creates a "hanging" socket. This is for historical reasons: the server's bind() call, the one that creates the socket name/inode in the filesystem, fails if the name is already used. In other words, a server cannot operate on a pre-existing socket. So if you had created the socket earlier, the server would need to remove it first anyway. No benefit. As you see with Gregory's answer, you can create a socket IF you keep a server for it, such as netcat. Once the server is gone, the old socket is gone. A new server has a new socket, and all clients need to re-connect, despite the socket's name being identical.
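For illustration, a socket node can be created by keeping a small server attached to it; this sketch assumes the OpenBSD variant of netcat (the -U flag is not available in every netcat build) and uses /tmp/test.sock as a throwaway path:
$ nc -lU /tmp/test.sock &    # listen on a unix domain socket; the inode appears in the filesystem
$ ls -l /tmp/test.sock       # shows a socket-type file for as long as the server is running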
{ "source": [ "https://serverfault.com/questions/358866", "https://serverfault.com", "https://serverfault.com/users/40958/" ] }
358,881
for example I want to: NAT 5.5.5.5 tcp 85 to 192.168.1.5 tcp 85 NAT 5.5.5.5 tcp 33 to 192.168.1.9 tcp 33 NAT 6.6.6.6 tcp 80 443 to 192.168.1.20 tcp 80 443 The way I was going to do this was: object obj-192.168.1.5 host 192.168.1.5 nat (inside,outside) static 5.5.5.5 service tcp 85 85 object obj-192.168.1.9 host 192.168.1.9 nat (inside,outside) static 5.5.5.5 service tcp 33 33 object obj-192.168.1.20 host 192.168.1.9 nat (inside,outside) static 6.6.6.6. ???? Then add an ACL to allow the traffic in. I don't know how to make a service group and apply that to the NAT, it seems to only allow you to enter one port at a time. Any idea how to do this?
There is no exact equivalent of mkfifo for socket, i.e. there is no command that just creates a "hanging" socket. This is for historical reason: server's function bind(), the one that creates a socket name/inode in the filesystem, fails if the name is already used. In other words, server cannot operate on a pre-existing socket. So if you'd created socket earlier, it would need to be removed by the server anyway first. No benefit. As you see with Gregory's answer, you can create a socket IF you keep a server for it, such as netcat. Once a server is gone, the old socket is gone. A new server has a new socket, and all clients need to re-connect, despite the socket's name being identical.
{ "source": [ "https://serverfault.com/questions/358881", "https://serverfault.com", "https://serverfault.com/users/32265/" ] }
358,903
I am using the mysql command line client and I do not want to need to provide the password every time I start the client. What are my options?
Create a file named .my.cnf in your home directory that looks like this. Make sure the filesystem permissions are set such that only the owning user can read it (0600). [client] host = localhost user = username password = thepassword socket = /var/run/mysqld/mysqld.sock #database = mysql Since you also tagged your question mysqldump, you should look at this question: Using mysqldump in cron job without root password Update (2016-06-29) If you are running mysql 5.6.6 or greater, you should look at the mysql_config_editor tool that allows you to store credentials in an encrypted file. Thanks to Giovanni for mentioning this to me.
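A short sketch of the mysql_config_editor workflow mentioned in the update (the login-path name "local" is arbitrary):
$ mysql_config_editor set --login-path=local --host=localhost --user=username --password
$ mysql --login-path=local
The credentials end up in an encrypted ~/.mylogin.cnf instead of a plain-text file.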
{ "source": [ "https://serverfault.com/questions/358903", "https://serverfault.com", "https://serverfault.com/users/105848/" ] }
358,996
What's the practical difference between: iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT and iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT Which one is best to use? Thank you.
Both use the same kernel internals underneath (the connection tracking subsystem). Header of xt_conntrack.c: xt_conntrack - Netfilter module to match connection tracking information. (Superset of Rusty's minimalistic state match.) So I would say -- the state module is simpler (and maybe less error prone). It has also been in the kernel longer. Conntrack, on the other hand, has more options and features[1]. My call is to use conntrack if you need its features, otherwise stick with the state module. Similar question on the netfilter mailing list. [1] Quite useful, like the "-m conntrack --ctstate DNAT -j MASQUERADE" routing/DNAT fixup ;-)
{ "source": [ "https://serverfault.com/questions/358996", "https://serverfault.com", "https://serverfault.com/users/57860/" ] }
359,414
People keep telling me that in order to improve an SQL server's performance, buy the fastest hard disks possible with RAID 5, etc. So I was thinking, instead of spending all the money for RAID 5 and super-duper fast hard disks (which isn't cheap by the way), why not just get tonnes of RAM? We know that an SQL server loads the database into memory. Memory is wayyyy faster than any hard disks. Why not stuff in like 100 GB of RAM on a server? Then just use a regular SCSI hard disk with RAID 1. Wouldn't that be a lot cheaper and faster?
Your analysis is fine -- to a point -- in that it absolutely will make things faster. You still have to account for a couple of other issues though: Not everyone can afford enough memory; when you have multiple terabytes of data, you have to put it on disk some time. If you don't have much data, anything is fast enough. Write performance for your database is still going to be constrained by the disks, so that you can keep the promise that the data was actually stored. If you have a small data set, or don't need to persist it on disk, there is nothing wrong with your idea. Tools like VoltDB are working to reduce the overheads that older assumptions in RDBMS implementations made which constrain pure in-memory performance. (As an aside, people telling you to use RAID-5 for database performance are probably not great folks to listen to on the subject, since it is almost never the best choice - it has good read performance, but bad write performance, and writes are almost always the production constraint - because you can throw RAM into caching to solve most read-side performance issues.)
{ "source": [ "https://serverfault.com/questions/359414", "https://serverfault.com", "https://serverfault.com/users/100298/" ] }
359,793
How do I tell Jenkins to run a specific project on a particular slave? I've set up a Jenkins master node, and a slave node that I want to use for staging an application. But I can't figure out how to configure the project to run on the slave node I created.
Set the "Restrict where this job can be run" check box in your job configuration and specify the name of your slave. If you add more slaves later, you can set labels for each slave and specify those in your job configs. See this reference documentation: https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
{ "source": [ "https://serverfault.com/questions/359793", "https://serverfault.com", "https://serverfault.com/users/847/" ] }
359,856
What is the difference between commands sudo -i and sudo su - ? Are they the same?
They may provide functionally close to the same thing, but it seems 'sudo -i' is lighter weight and keeps some handy back references in your environment. You can see the extra processes by looking at 'ps auxf' (f gives you a forest view) sudo -i yields this process tree jkrauska 4480 0.0 0.0 76828 1656 ? S 23:38 0:00 | \_ sshd: jkrauska@pts/0 jkrauska 4482 0.0 0.0 21008 3816 pts/0 Ss 23:38 0:00 | \_ -bash root 4675 0.6 0.0 19512 2260 pts/0 S+ 23:42 0:00 | \_ -bash sudo su - yields this process tree jkrauska 4480 0.0 0.0 76828 1656 ? S 23:38 0:00 | \_ sshd: jkrauska@pts/0 jkrauska 4482 0.0 0.0 21008 3816 pts/0 Ss 23:38 0:00 | \_ -bash root 4687 0.5 0.0 43256 1488 pts/0 S 23:42 0:00 | \_ su - root 4688 0.5 0.0 19508 2252 pts/0 S+ 23:42 0:00 | \_ -su Note that they are starting from the same bash process pid, 4482, but that su - seems to spawn another step.) Your first 'sudo' is already elevating your access level to root. Running su without specifying a username inside sudo changes the current user to root twice. Another way to investigate this is by running both commands with strace -f. strace -f -o sudoi sudo -i vs strace -f -o sudosu sudo su - If you diff those two straces, you'll see more exeve's being run for sudo su -. One more thing. sudo -i maintains the extra environment variables set by SUDO. SUDO_USER=jkrauska SUDO_UID=1000 SUDO_COMMAND=/bin/bash SUDO_GID=1000 sudo su - clobbers those variables.
{ "source": [ "https://serverfault.com/questions/359856", "https://serverfault.com", "https://serverfault.com/users/52746/" ] }
359,976
Is there a way to send the Ctrl-Alt-Del command to an RDP session (Windows Server 2008 R2) inside another RDP session (also Windows Server 2008 R2) without the first session catching it? Ctrl + Alt + End and Ctrl + Alt + Shift + End do not reach the 2nd level session. Top-level environment is Windows 7 Enterprise.
Use the On-Screen Keyboard ( osk.exe ). You can press Ctrl-Alt-Del virtually! (Note: you may need to hold the CTRL and ALT keys on your physical keyboard (Windows Server 2012-R2))
{ "source": [ "https://serverfault.com/questions/359976", "https://serverfault.com", "https://serverfault.com/users/86718/" ] }
360,122
Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their Website: Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They’re most appropriate for steady-state workloads where you’re willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you’re trying to find a break-even utilization, you’re economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term. I'm assuming that, if I'm planning on running a 24/7 Web Server, then regardless of how many resources I consume (bandwidth, cpu cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one Web Server in particular will likely barely budge the cpu, but it needs to be up and running 24/7. Not 100% on what they're defining as "heavy".
Heavy refers to how much of a month the instance is turned on , not the CPU load, so yes, any 24/7 server would be "heavy utilization".
{ "source": [ "https://serverfault.com/questions/360122", "https://serverfault.com", "https://serverfault.com/users/13008/" ] }
360,438
I have this section in my web.config: <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <security> <authentication> <anonymousAuthentication enabled="true" /> <windowsAuthentication enabled="true" /> </authentication> </security> </system.webServer> IIS7 crashes and complains about the autientication section: Module AnonymousAuthenticationModule Notification AuthenticateRequest Handler StaticFile Error Code 0x80070021 Config Error This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false". Config Source 69: <authentication> 70: <anonymousAuthentication enabled="true" /> So the usual way to solve this is to go into %windir%\system32\inetsrv\config\applicationHost.config and unlock the section: <sectionGroup name="system.webServer"> <sectionGroup name="security"> <section name="access" overrideModeDefault="Deny" /> <section name="applicationDependencies" overrideModeDefault="Deny" /> <sectionGroup name="authentication"> <section name="anonymousAuthentication" overrideModeDefault="Allow" /> <section name="basicAuthentication" overrideModeDefault="Allow" /> <section name="clientCertificateMappingAuthentication" overrideModeDefault="Allow" /> <section name="digestAuthentication" overrideModeDefault="Allow" /> <section name="iisClientCertificateMappingAuthentication" overrideModeDefault="Allow" /> <section name="windowsAuthentication" overrideModeDefault="Allow" /> </sectionGroup> (alternatively, appcmd unlock config ). The weird thing: I've done that and it still complains. I looked for Locations (MVC is the name of my website that's the root of all sites I'm using): <location path="MVC" overrideMode="Allow"> <system.webServer overrideMode="Allow"> <security overrideMode="Allow"> <authentication overrideMode="Allow"> <windowsAuthentication enabled="true" /> <anonymousAuthentication enabled="true" /> </authentication> </security> </system.webServer> </location> Still it blows up. I'm puzzled as to why this happens. I cannot remove it from the web.config, I want to find the root problem. Is there a way to get specific information from IIS which rule is eventually denying me? Edit: I was able to fix this using the IIS7 management console by going to the very root (my machine) and clicking "Edit Configuration" and unlocking the section there. Still I'd like to know if there is a better way since I can't find the file it actually modifies.
Worked out these steps which fix the issue for me: Open IIS Manager Click the server name in the tree on the left Right hand pane, Management section, double click Configuration Editor At the top, choose the section system.webServer/security/authentication/anonymousAuthentication Right hand pane, click Unlock Section At the top, choose the section system.webServer/security/authentication/windowsAuthentication Right hand pane, click Unlock Section
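The same unlock can be scripted with appcmd (the tool the question already mentions); run from an elevated command prompt:
%windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/anonymousAuthentication
%windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/windowsAuthentication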
{ "source": [ "https://serverfault.com/questions/360438", "https://serverfault.com", "https://serverfault.com/users/151/" ] }
360,815
Our business email is hosted on Google apps. In addition, our web server may also send email. Currently our SPF record in DNS looks like this: domain.com. IN TXT "v=spf1 a include:_spf.google.com -all" This is all fine, however now we've outsourced our email list management to another company and we need to include a second domain with include . So, I'm looking for something like: domain.com. IN TXT "v=spf1 a include:_spf.google.com include:otherdomain.com -all" What is the correct syntax for this? Many thanks!
All SPF mechanisms, including include , can be used multiple times, separated by spaces: "v=spf1 include:_spf.google.com include:otherdomain.com -all" Evaluation of include works this way: If the included data returned PASS, then the include itself generates a result (for example, include:foo.bar generates a PASS, but -include:foo.bar generates a FAIL). If the included data returned FAIL or NEUTRAL, then the include does not contribute to the result at all, and processing goes to your next mechanism. See SPF record syntax and RFC 7208 . (Note that redirect= is not a mechanism but a global modifier, and cannot be repeated this way.)
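To sanity-check the published record after the change, a quick lookup (assuming dig is installed, with domain.com standing in for your zone) would be:
$ dig +short TXT domain.com
The output should include the v=spf1 string you published; allow some time for DNS propagation.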
{ "source": [ "https://serverfault.com/questions/360815", "https://serverfault.com", "https://serverfault.com/users/260864/" ] }
361,134
Is the order in which Nginx includes configuration files fixed, or random? Apache explicitly states wildcard characters are expanded in alphabetical order. With Nginx it seems this does not apply, and the manual says nothing about it . In my setup, 20_example.com was included before 00_default , which defeats my purpose of defining shared directives (like log formats) there.
According to the nginx source code, it used the glob() function with the GLOB_NOSORT parameter, so the order of file inclusion could not be relied upon. This was changed in November 2012, first released in 1.3.10. From the changelog: now if the "include" directive with mask is used on Unix systems, included files are sorted in alphabetical order.
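Regardless of the nginx version, one way to avoid depending on wildcard ordering at all is to include the file with shared directives explicitly, by name, before the wildcard of per-site files (paths here are illustrative):
http {
    include /etc/nginx/conf.d/shared.conf;   # log formats and other shared directives
    include /etc/nginx/sites-enabled/*;      # individual virtual hosts
}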
{ "source": [ "https://serverfault.com/questions/361134", "https://serverfault.com", "https://serverfault.com/users/6757/" ] }
361,421
Apparently, I shouldn't have spent sleepless night trying to debug an application. I wanted to restart my nginx and discovered that its config file is empty. I don't remember truncating it, but fat fingers and reduced attention probably played their part. I don't have backup of that config file. I know I should have made it. Good for me, current nginx daemon is still running. Is there a way to dump its configuration to a config file that it'll understand later?
You need a gdb installed to dump memory regions of running process. # Set pid of nginx master process here pid=8192 # generate gdb commands from the process's memory mappings using awk cat /proc/$pid/maps | awk '$6 !~ "^/" {split ($1,addrs,"-"); print "dump memory mem_" addrs[1] " 0x" addrs[1] " 0x" addrs[2] ;}END{print "quit"}' > gdb-commands # use gdb with the -x option to dump these memory regions to mem_* files gdb -p $pid -x gdb-commands # look for some (any) nginx.conf text grep worker_connections mem_* grep server_name mem_* You should get something like "Binary file mem_086cb000 matches". Open this file in editor, search for config (e.g. "worker_connections" directive), copy&paste. Profit! Update: This method isn't entirely reliable. It's based on assumption that nginx process will read configuration and don't overwrite/reuse this memory area later. Master nginx process gives us best chances for that I guess.
{ "source": [ "https://serverfault.com/questions/361421", "https://serverfault.com", "https://serverfault.com/users/59018/" ] }
361,427
See http://technet.microsoft.com/en-us/library/ff715408.aspx for FirstLogonCommand , how do I specify a CommandLine for the OS boot drive and not just use "C:" like the examples do. The OS boot drive might not be "C:". Update : I am using C++ to write the XML and the program that will run is also written in C++.
You need a gdb installed to dump memory regions of running process. # Set pid of nginx master process here pid=8192 # generate gdb commands from the process's memory mappings using awk cat /proc/$pid/maps | awk '$6 !~ "^/" {split ($1,addrs,"-"); print "dump memory mem_" addrs[1] " 0x" addrs[1] " 0x" addrs[2] ;}END{print "quit"}' > gdb-commands # use gdb with the -x option to dump these memory regions to mem_* files gdb -p $pid -x gdb-commands # look for some (any) nginx.conf text grep worker_connections mem_* grep server_name mem_* You should get something like "Binary file mem_086cb000 matches". Open this file in editor, search for config (e.g. "worker_connections" directive), copy&paste. Profit! Update: This method isn't entirely reliable. It's based on assumption that nginx process will read configuration and don't overwrite/reuse this memory area later. Master nginx process gives us best chances for that I guess.
{ "source": [ "https://serverfault.com/questions/361427", "https://serverfault.com", "https://serverfault.com/users/60317/" ] }
361,464
I've got a script which uses openssl's s_client command to pull certificates for a big set of hosts. Some of these hosts will inevitably be unreachable because of a firewall. Is it possible to set the s_client timeout to something much shorter than the default? I don't see one in the man page/help file. That or some sort of wrapper command that will auto-kill the openssl -s_client after X number of seconds. I'd prefer not to pre-test a host/port for usability if possible.
Use timeout command from GNU coreutils package. timeout <time> <command> Alternatively look at the first response to this archived blog post for a bash-only answer.
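Applied to the s_client case from the question, a hedged example (example.com and the 5-second limit are placeholders):
$ timeout 5 openssl s_client -connect example.com:443 < /dev/null
$ echo $?    # GNU timeout exits with status 124 when the time limit was hit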
{ "source": [ "https://serverfault.com/questions/361464", "https://serverfault.com", "https://serverfault.com/users/111161/" ] }
361,472
I'm running a postfix server - for several weeks without problems, but now mails bounce. After one day, I got the following mail from gmail, when I tried to send a mail to my server: [...] Delivery to the following recipient has been delayed: <the mailadress>@<myserver> Message will be retried for 2 more day(s) [...] [mx.<myserver>. (0): Connection refused] The logs mail.log , mail.err and mail.warn are empty except of some strange errors of the form: postfix/trivial-rewrite[22141]: warning: regexp map /etc/postfix/virtual, line 12: ignoring unrecognized request The virtual aliases is also the only thing I changed in the last time (but afterwards it was still working). Any hints how I could troubleshoot this? Note that sending a mail to a local user from a local user works. Also there is a open port although "127.0.0.2" looks strange to me... $ netstat -a|grep smtp tcp 0 0 127.0.0.2:smtp *:* LISTEN tcp 0 0 localhost.localdom:smtp *:* LISTEN My main.cf: smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = <my servername> alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases #virtual_alias_maps = regexp:/etc/postfix/virtual virtual_alias_maps = hash:/etc/postfix/virtual myorigin = /etc/mailname mydestination = <myserver>, localhost relayhost = <my external smtp> mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = loopback-only default_transport = smtp relay_transport = smtp inet_protocols = ipv4
Use timeout command from GNU coreutils package. timeout <time> <command> Alternatively look at the first response to this archived blog post for a bash-only answer.
{ "source": [ "https://serverfault.com/questions/361472", "https://serverfault.com", "https://serverfault.com/users/96200/" ] }
361,794
Once in a while i have to connect to a server where access is highly restricted. Only inbound SSH via VPN is allowed by the DMZ firewall. Outbound HTTP connections are blocked. I'm looking for an easy way to tunnel web access through my SSH session, so i can install updates and software via yum / apt-get. Ideally, i would like to avoid installing additional software/services in the protected area. What do you do in such a situation? SSH has the -D <port> SOCKS proxy option. But unfortunately it's only one-way from client to server and there is no reverse option.
I finally managed to accomplish this with ssh only: start a local SOCKS proxy on your client machine (using ssh -D ) EDIT: not necessary with SSH>7.6 connect to remote server and setup a reverse port forwarding ( ssh -R ) to your local SOCKS proxy configure the server software to use the forwarded proxy 1. Start local socks proxy in the background EDIT SSH>7.6 allow a simpler syntax to start the proxy. Skip this and continue with step 2! Connect to localhost via SSH and open SOCKS proxy on port 54321. $ ssh -f -N -D 54321 localhost -f runs SSH in the background. Note: If you close the terminal where you started the command, the proxy process will be killed. Also remember to clean up after yourself by either closing the terminal window when you are done or by killing the process yourself! 2. connect to remote server and setup reverse port forwarding Bind remote port 6666 to local port 54321. This makes your local socks proxy available to the remote site on port 6666. $ ssh root@target -R6666:localhost:54321 EDIT SSH>7.6 allows a simpler syntax to start the proxy! Step 1 is not needed then: $ ssh root@target -R6666:localhost 3. configure the server software to use the forwarded proxy Just configure yum, apt, curl, wget or any other tool that supports SOCKS to use the proxy 127.0.0.1:6666 . Voilá! Happy tunneling! 4. optional: install proxychains to make things easy proxychains installed on the target server enables any software to use the forwarded SOCKS proxy (even telnet ). It uses a LD_PRELOAD trick to redirect TCP and DNS requests from arbitrary commands into a proxy and is really handy. Setup /etc/proxychains.conf to use the forwarded socks proxy: [ProxyList] # SSH reverse proxy socks5 127.0.0.1 6666 Tunnel arbitrary tools (that use TCP) with proxychains : $ proxychains telnet google.com 80 $ proxychains yum update $ proxychains apt-get update
{ "source": [ "https://serverfault.com/questions/361794", "https://serverfault.com", "https://serverfault.com/users/111042/" ] }
361,838
Mysql was started: /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 but when I'm trying to do something: ERROR 1548 (HY000) at line 1: Cannot load from mysql.proc. The table is probably corrupted How to fix it? $ mysql -V mysql Ver 14.14 Distrib 5.1.58, for debian-linux-gnu (x86_64) using readline 6.2 $ lsb_release -a Distributor ID: Ubuntu Description: Ubuntu 11.10 Release: 11.10 Codename: oneiric $ sudo mysql_upgrade -uroot -p<password> --force Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' mysql.columns_priv OK mysql.db OK mysql.event OK mysql.func OK mysql.general_log Error : You can't use locks with log tables. status : OK mysql.help_category OK mysql.help_keyword OK mysql.help_relation OK mysql.help_topic OK mysql.host OK mysql.ndb_binlog_index OK mysql.plugin OK mysql.proc OK mysql.procs_priv OK mysql.servers OK mysql.slow_log Error : You can't use locks with log tables. status : OK mysql.tables_priv OK mysql.time_zone OK mysql.time_zone_leap_second OK mysql.time_zone_name OK mysql.time_zone_transition OK mysql.time_zone_transition_type OK mysql.user OK Running 'mysql_fix_privilege_tables'... OK $ mysqlcheck --port=3700 --socket=/srv/mysql/sockets/mysql-my-env.sock -A -udata_owner -pdata_owner <all tables> OK UPD1: for example I'm trying to remove procedure: mysql> DROP PROCEDURE IF EXISTS mysql.myproc; ERROR 1548 (HY000): Cannot load from mysql.proc. The table is probably corrupted mysql> UPD2: mysql> REPAIR TABLE mysql.proc; +------------+--------+----------+-----------------------------------------------------------------------------------------+ | Table | Op | Msg_type | Msg_text | +------------+--------+----------+-----------------------------------------------------------------------------------------+ | mysql.proc | repair | error | 1 when fixing table | | mysql.proc | repair | Error | Can't change permissions of the file '/srv/mysql/myDB/mysql/proc.MYD' (Errcode: 1) | | mysql.proc | repair | status | Operation failed | +------------+--------+----------+-----------------------------------------------------------------------------------------+ 3 rows in set (0.04 sec) This is strange, because: $ ls -l /srv/mysql/myDB/mysql/proc.MYD -rwxrwxrwx 1 mysql root 3983252 2012-02-03 22:51 /srv/mysql/myDB/mysql/proc.MYD UPD3: $ ls -la /srv/mysql/myDB/mysql total 8930 drwxrwxrwx 2 mysql root 2480 2012-02-21 13:13 . drwxrwxrwx 13 mysql root 504 2012-02-21 19:01 .. 
-rwxrwxrwx 1 mysql root 8820 2012-02-20 15:50 columns_priv.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 columns_priv.MYD -rwxrwxrwx 1 mysql root 4096 2012-02-20 15:50 columns_priv.MYI -rwxrwxrwx 1 mysql root 9582 2012-02-20 15:50 db.frm -rwxrwxrwx 1 mysql root 8360 2011-12-08 02:14 db.MYD -rwxrwxrwx 1 mysql root 5120 2012-02-20 15:50 db.MYI -rwxrwxrwx 1 mysql root 54 2011-11-12 15:42 db.opt -rwxrwxrwx 1 mysql root 10223 2012-02-20 15:50 event.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 event.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 event.MYI -rwxrwxrwx 1 mysql root 8665 2012-02-20 15:50 func.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 func.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 func.MYI -rwxrwxrwx 1 mysql root 8700 2012-02-20 15:50 help_category.frm -rwxrwxrwx 1 mysql root 21497 2011-11-12 15:42 help_category.MYD -rwxrwxrwx 1 mysql root 3072 2012-02-20 15:50 help_category.MYI -rwxrwxrwx 1 mysql root 8612 2012-02-20 15:50 help_keyword.frm -rwxrwxrwx 1 mysql root 88650 2011-11-12 15:42 help_keyword.MYD -rwxrwxrwx 1 mysql root 16384 2012-02-20 15:50 help_keyword.MYI -rwxrwxrwx 1 mysql root 8630 2012-02-20 15:50 help_relation.frm -rwxrwxrwx 1 mysql root 8874 2011-11-12 15:42 help_relation.MYD -rwxrwxrwx 1 mysql root 16384 2012-02-20 15:50 help_relation.MYI -rwxrwxrwx 1 mysql root 8770 2012-02-20 15:50 help_topic.frm -rwxrwxrwx 1 mysql root 414320 2011-11-12 15:42 help_topic.MYD -rwxrwxrwx 1 mysql root 20480 2012-02-20 15:50 help_topic.MYI -rwxrwxrwx 1 mysql root 9510 2012-02-20 15:50 host.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 host.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 host.MYI -rwxrwxrwx 1 mysql root 8554 2011-11-12 15:42 innodb_monitor.frm -rwxrwxrwx 1 mysql root 98304 2011-11-12 15:55 innodb_monitor.ibd -rwxrwxrwx 1 mysql root 8592 2012-02-20 15:50 inventory.frm -rwxrwxrwx 1 mysql root 76 2011-11-12 15:42 inventory.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 inventory.MYI -rwxrwxrwx 1 mysql root 8778 2012-02-20 15:50 ndb_binlog_index.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 ndb_binlog_index.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 ndb_binlog_index.MYI -rwxrwxrwx 1 mysql root 8586 2012-02-20 15:50 plugin.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 plugin.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 plugin.MYI -rwxrwxrwx 1 mysql root 9996 2012-02-20 15:50 proc.frm -rwxrwxrwx 1 mysql root 3983252 2012-02-03 22:51 proc.MYD -rwxrwxrwx 1 mysql root 36864 2012-02-21 13:23 proc.MYI -rwxrwxrwx 1 mysql root 8875 2012-02-20 15:50 procs_priv.frm -rwxrwxrwx 1 mysql root 1700 2011-11-12 15:42 procs_priv.MYD -rwxrwxrwx 1 mysql root 8192 2012-02-20 15:50 procs_priv.MYI -rwxrwxrwx 1 mysql root 3977704 2012-02-21 13:23 proc.TMD -rwxrwxrwx 1 mysql root 8800 2012-02-20 15:50 proxies_priv.frm -rwxrwxrwx 1 mysql root 693 2011-11-12 15:42 proxies_priv.MYD -rwxrwxrwx 1 mysql root 5120 2012-02-20 15:50 proxies_priv.MYI -rwxrwxrwx 1 mysql root 8838 2012-02-20 15:50 servers.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 servers.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 servers.MYI -rwxrwxrwx 1 mysql root 8955 2012-02-20 15:50 tables_priv.frm -rwxrwxrwx 1 mysql root 5957 2011-11-12 15:42 tables_priv.MYD -rwxrwxrwx 1 mysql root 8192 2012-02-20 15:50 tables_priv.MYI -rwxrwxrwx 1 mysql root 8636 2012-02-20 15:50 time_zone.frm -rwxrwxrwx 1 mysql root 8624 2012-02-20 15:50 time_zone_leap_second.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_leap_second.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_leap_second.MYI 
-rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone.MYI -rwxrwxrwx 1 mysql root 8606 2012-02-20 15:50 time_zone_name.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_name.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_name.MYI -rwxrwxrwx 1 mysql root 8686 2012-02-20 15:50 time_zone_transition.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_transition.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_transition.MYI -rwxrwxrwx 1 mysql root 8748 2012-02-20 15:50 time_zone_transition_type.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_transition_type.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_transition_type.MYI -rwxrwxrwx 1 mysql root 10630 2012-02-20 15:50 user.frm -rwxrwxrwx 1 mysql root 5456 2011-11-12 21:01 user.MYD -rwxrwxrwx 1 mysql root 4096 2012-02-20 15:50 user.MYI
This will most likely be solved when running a MySQL upgrade, as this seems to be a result of schema changes. mysql_upgrade -u root -p If your username for your administrative account is not root, please change it in the example above.
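Note that the mysql_upgrade run shown in the question connected to the default socket on port 3306, while the affected instance listens on port 3700 with its own socket. A hedged sketch of pointing the tool at the right instance, reusing the paths from the question:
$ mysql_upgrade -u root -p --port=3700 --socket=/srv/mysql/sockets/mysql-myDB.sock --force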
{ "source": [ "https://serverfault.com/questions/361838", "https://serverfault.com", "https://serverfault.com/users/111305/" ] }
361,940
The SuperMicro X8SIE-F board has two dedicated LAN interfaces for the operating system (LAN1/2) and one dedicated LAN interface for IPMI. Is it possible to configure IPMI to use one of the LAN1/2 interfaces, instead of the IPMI port? If so, what is the procedure?
Jiri's on the right track with the three options (Dedicated, Share, Failover) for the IPMI interface. The short answer is that yes, you can use LAN1 instead of the dedicated IPMI port, and it generally works that way with the default BIOS settings. It's not possible to run the IPMI on the LAN2 interface. Here's a more detailed description of the three options: Dedicated : Always use the dedicated IPMI interface. This is the option you want if you're trying to have the simplest setup, at the expense of additional cabling. Shared : Always use the LAN1 interface. This is the option you want if you're trying to reduce your cabling to each server, and understand the tradeoffs. Under the covers, there's a virtual switch in hardware that's splitting out traffic to the IPMI card from traffic to the rest of the system; the IPMI card has a separate MAC address to differentiate the traffic. On modern Supermicro boards, you can also set the IPMI traffic to run on a different VLAN from the rest of the system, so you can tag the IPMI traffic. There are some definite security implication to this design; it's not difficult for the main system to access the IPMI network, if you were trying to keep them separated. A failure of the LAN1 interface often means that you lose primary and out-of-band connectivity at the same time. Failover (factory default) : On boot, detect if the dedicated IPMI interface is connected. If so, use the dedicated interface, otherwise fall back to the shared LAN1. I've never found a good use for this option. As best I can tell, this setup is fundamentally flawed - I haven't tested it extensively, but I've heard reports it'll fail to detect the dedicated interface in many circumstances because the upstream switch isn't passing traffic - for example, after a power outage if the switch and system come up simultaneously, or if the switch is still blocking during the spanning tree detection. Combine this with the fact that the check only happens at boot, and it's just generally hard to control what interface you end up using.
{ "source": [ "https://serverfault.com/questions/361940", "https://serverfault.com", "https://serverfault.com/users/63612/" ] }
362,090
I'm using ZFS on my FreeBSD 9.0 x64 box and am pretty happy with it, but I find it hard to get the real, uncompressed size of a directory. Surely I can walk over the directory and count every file size with ls, but I'd expect some extra flag for du for that purpose. So, how can I tell the size of a directory placed on ZFS with compression on? Thank you in advance for the advice; I simply can't believe there is no 'simple' way, short of 'find ./ -type d -exec ls -l '{}' \; | awk ...'!
Use the du with its -A flag: root@pg78:/usr/local/pgsql/data/base/218204 # du -A -h 221350.219 1.0G 221350.219 root@pg78:/usr/local/pgsql/data/base/218204 # du -h 221350.219 501M 221350.219 Very handy. It even works with -d for recursive goodness: root@pg78:/usr/local/pgsql/data/base # du -h -c -d0 . 387G . 387G total root@pg78:/usr/local/pgsql/data/base # du -A -h -c -d0 . 518G . 518G total
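If you want the figure per dataset rather than per directory, ZFS itself can report it; compressratio has been around for a long time, while logicalused may not exist on older releases such as the ZFS shipped with FreeBSD 9.0 (pool/dataset names are placeholders):
$ zfs get used,compressratio tank/dataset
$ zfs get logicalused tank/dataset    # uncompressed size, on newer ZFS versions only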
{ "source": [ "https://serverfault.com/questions/362090", "https://serverfault.com", "https://serverfault.com/users/100187/" ] }
362,338
As far as I can tell, here are the main differences: OpenTSDB does not deteriorate data over time, unlike Graphite where the size of the database is pre-determined. OpenTSDB can store metrics per second, as opposed to Graphite which has minute intervals (I'm not sure of this, Graphite docs show retention policies which stores metrics every minute, but I don't know if this is the minimum unit of time we can play with) I want to make an informed decision about which tool to use in order to store metrics, have I missed any other differences in these 2 systems? How performant/scalable are they? Bonus Question: Is there any other time series system I should look at?
Disclaimer: I wrote OpenTSDB . I would say that the biggest advantage of Graphite seems to be superior graphing capabilities . It offers more graph types and features. Deployment complexity is also probably a bit lower with Graphite, as it's not a distributed system and thus has fewer moving parts. OpenTSDB , on the other hand, is capable of storing a significantly larger amount of fine-grained data points. This comes at the cost of deploying HBase , which isn't that big of a deal to be honest. If you want to get real-time data down to the second with >>10k new data points/s, then OpenTSDB will suit you well. Some info about our current scale at StumbleUpon (these numbers generally double every 2-3 months): Over 1B new data points per day (=12k/s on average). Hundreds of billions of data points stored. Less than 2TB of disk space consumed (before 3x replication by HDFS). Read queries are generally capable of retrieving, munging and plotting over 500k data points per second.
{ "source": [ "https://serverfault.com/questions/362338", "https://serverfault.com", "https://serverfault.com/users/69474/" ] }
362,374
We have a Server 2008 R2 Primary Domain Controller that seems to have amnesia when it comes to working out what kind of network it is on. The (only) network connection is identified at startup as a 'Public Network'. Yet, if I disable and then re-enable the connection, it happily figures out that it is actually part of a domain network. Is this because AD Domain Services is not started when the network location is initially worked out? This issue causes some headaches with Windows Firewall Rules (which I am more than aware can be solved in other ways) so I am mostly just curious to see if anyone knows why this happens.
Whether the network of a domain controller is classified as a domain network doesn't depend on the gateway configuration. A false network classification can be caused when the NLA (network location awareness) service starts before the domain is available. In this case the public or private network is chosen and not corrected afterwards. How to check if this fault situation applies: When the domain controller is in the public network after rebooting, restart the NLA service or disconnect / reconnect the network. The domain controller should be in the domain network afterwards. How to solve it: It may help to set the NLA service to delayed start. Better, check why the domain takes so long to become available. It seems that the domain needs longer to start when there are multiple network cards. When that doesn't help: When neither speeding up the loading of the domain nor delaying NLA helps, and the error is caused by the slow loading of the domain (see "how to check..."), then there are some more things that can be done. Write a script for restarting it and run it with the scheduler (dangerous). Shift the loading of the NLA service to the end of the service start order by changing the load order in the registry (dangerous). The following registry entry sets the dependencies to NSI RpcSs TcpIp Dhcp Eventlog NTDS DNS : REGEDIT4 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NlaSvc] "DependOnService"=hex(7):4e,53,49,00,52,70,63,53,73,00,54,63,70,49,70,00,44,68,\ 63,70,00,45,76,65,6e,74,6c,6f,67,00,4e,54,44,53,00,44,4e,53,00,00 Execute "IPCONFIG /RENEW" from the scheduler at startup with a delay of 1 or 2 minutes (better than restarting the NLA service). Restart the NLA service manually after every reboot (but: "IPCONFIG /RENEW" should be preferred)! One more cause can be that the domain controller has two or more IPs configured (on the same or on other network cards) and the additional networks aren't configured in the DNS. Reproduction of the behaviour: On a test domain controller (single DC!) I deleted the default gateway entry and set the DNS server to delayed start. With this, the domain took a long time to load and the network was classified as public. After disconnecting and reconnecting the network cable, the network was classified correctly as a domain network. Edit, gratefully taken from the comments of Daniel Fisher lennybacon and Joshua Hanley: To add a dependency for NlaSvc on DNS and NTDS, run sc config nlasvc depend=NSI/RpcSs/TcpIp/Dhcp/Eventlog/DNS/NTDS from CMD (use sc.exe if you're running it in PowerShell). If you want to double-check the existing dependencies before adding DNS and NTDS, use sc qc nlasvc
{ "source": [ "https://serverfault.com/questions/362374", "https://serverfault.com", "https://serverfault.com/users/107610/" ] }
362,529
I can sniff the traffic of my local PC, but I would like to know how to sniff the traffic of a remote machine with Wireshark. When I select a remote interface in the capture options and enter my remote IP, it shows me error code 10061. What should I do?
On Linux and OSX you can achieve this by running tcpdump over ssh and having wireshark listen on the pipe. Create a named pipe: $ mkfifo /tmp/remote Start wireshark from the command line $ wireshark -k -i /tmp/remote Run tcpdump over ssh on your remote machine and redirect the packets to the named pipe: $ ssh root@firewall "tcpdump -s 0 -U -n -w - -i eth0 not port 22" > /tmp/remote Source: http://blog.nielshorn.net/2010/02/using-wireshark-with-remote-capturing/
{ "source": [ "https://serverfault.com/questions/362529", "https://serverfault.com", "https://serverfault.com/users/110480/" ] }
362,589
My VPS web server running on CentOS 5.4 (Linux kernel 2.6.16.33-xenU) irregularly (like once a month give or take a few weeks) becomes unresponsive due to oom-killer kicking in. Monitoring of the server shows that it doesn't normally run out of memory, just every so often. I've read a couple of blogs that point to this page which discusses configuring the kernel to better manage overcommit using the following sysctl settings: vm.overcommit_memory = 2 vm.overcommit_ratio = 80 My understanding of this (which may be wrong, but I can't find a canonical definition to clarify) is that this prevents the kernel over-allocating memory beyond swap + 80% of physical memory. However, I have also read some other sources suggesting that these settings are not a good idea - although the critics of this approach seem to be saying "don't do things to break your system, rather than attempting this kludge" in the assumption that causation is always known. So my question is, what are the pros and cons of this approach , in the context of an Apache2 web server hosting about 10 low traffic sites? In my specific case, the web server has 512Mb RAM, with 1024Mb swap space. This seems to be adequate for the vast majority of the time.
Setting overcommit_ratio to 80 is likely not the right action. Setting the value to anything less than 100 is almost always incorrect. The reason for this is that Linux applications allocate more than they really need. Say they allocate 8 KB to store a couple-character string of text. Well, that's several KB unused right there. Applications do this a lot, and this is what overcommit is designed for. So basically with overcommit at 100, the kernel will not allow applications to allocate any more memory than you have (swap + RAM). Setting it at less than 100 means that you will never use all your memory. If you are going to set this setting, you should set it higher than 100 because of the aforementioned scenario, which is quite common. However, while setting it greater than 100 is almost always the correct answer, there are some use cases where setting it less than 100 is correct. As mentioned, by doing so you won't be able to use all your memory. However the kernel still can. So you can effectively use this to reserve some memory for the kernel (e.g. the page cache). Now, as for your issue with the OOM killer triggering, manually setting overcommit will not likely fix this. The default setting (heuristic determination) is fairly intelligent. If you wish to see if this is really the cause of the issue, look at /proc/meminfo when the OOM killer runs. If you see that Committed_AS is close to CommitLimit, but free is still showing free memory available, then yes, you can manually adjust the overcommit for your scenario. Setting this value too low will cause the OOM killer to start killing applications when you still have plenty of memory free. Setting it too high can cause random applications to die when they try to use memory they were allocated, but that isn't actually available (when all the memory does actually get used up).
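A quick way to perform the check suggested above, using the stock kernel interfaces:
$ grep -E 'CommitLimit|Committed_AS' /proc/meminfo
$ sysctl vm.overcommit_memory vm.overcommit_ratio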
{ "source": [ "https://serverfault.com/questions/362589", "https://serverfault.com", "https://serverfault.com/users/31143/" ] }
362,590
We have two load balancers, one used as a hot standby. Would you recommend using a RAID1 setup in case of a hard disk failure? Or is RAID1 not needed, since the hot-standby server takes over in case of a failure?
Setting overcommit_ratio to 80 is likely not the right action. Setting the value to anything less than 100 is almost always incorrect. The reason for this is that linux applications allocate more than they really need. Say they allocate 8kb to store a couple character string of text. Well thats several KB unused right there. Applications do this a lot, and this is what overcommit is designed for. So basically with overcommit at 100, the kernel will not allow applications to allocate any more memory than you have (swap + ram). Setting it at less than 100 means that you will never use all your memory. If you are going to set this setting, you should set it higher than 100 because of the fore-mentioned scenario, which is quite common. However, while setting it greater than 100 is almost always the correct answer, there are some use cases where setting it less than 100 is correct. As mentioned, by doing so you wont be able to use all your memory. However the kernel still can. So you can effectively use this to reserve some memory for the kernel (e.g. the page cache). Now, as for your issue with the OOM killer triggering, manually setting overcommit will not likely fix this. The default setting (heuristic determination) is fairly intelligent. If you wish to see if this is really the cause of the issue, look at /proc/meminfo when the OOM killer runs. If you see that Committed_AS is close to CommitLimit , but free is still showing free memory available, then yes you can manually adjust the overcommit for your scenario. Setting this value too low will cause the OOM killer to start killing applications when you still have plenty of memory free. Setting it too high can cause random applications to die when they try to use memory they were allocated, but isnt actually available (when all the memory does actually get used up).
{ "source": [ "https://serverfault.com/questions/362590", "https://serverfault.com", "https://serverfault.com/users/111570/" ] }
362,619
I'm setting up vsftpd on my VPS, and I don't want users to be allowed to leave their FTP home directory. I'm using local user FTP, not anonymous, so I added: chroot_local_user=YES I've read in a lot of forum posts that this is insecure. Why is this insecure? If it is insecure because these users can also reach my VPS over SSH, then I could just lock them out of sshd, right? Is there another option for achieving this behaviour in vsftpd? (I don't want to remove read permissions for "world" on all folders/files on my system.)
Check here for VSFTPD's FAQ for the answer your looking for. Below is the important excerpt that I think will answer your question. Q) Help! What are the security implications referred to in the "chroot_local_user" option? A) Firstly note that other ftp daemons have the same implications. It is a generic problem. The problem isn't too severe, but it is this: Some people have FTP user accounts which are not trusted to have full shell access. If these accounts can also upload files, there is a small risk. A bad user now has control of the filesystem root, which is their home directory. The ftp daemon might cause some config file to be read - e.g. /etc/some_file. With chroot(), this file is now under the control of the user. vsftpd is careful in this area. But, the system's libc might want to open locale config files or other settings...
{ "source": [ "https://serverfault.com/questions/362619", "https://serverfault.com", "https://serverfault.com/users/101260/" ] }
362,628
Where can I find the procedure for replacing a disk in an EMC CX2-30 SAN? An amber light is showing on one disk, and NaviSphere shows the disk status as "Requested Bypass". The affected LUN is in a RAID 5 configuration. There are two hot spares in the unowned LUN; one has status "Hot spare ready", the other has status "Enabled".
Check here for VSFTPD's FAQ for the answer your looking for. Below is the important excerpt that I think will answer your question. Q) Help! What are the security implications referred to in the "chroot_local_user" option? A) Firstly note that other ftp daemons have the same implications. It is a generic problem. The problem isn't too severe, but it is this: Some people have FTP user accounts which are not trusted to have full shell access. If these accounts can also upload files, there is a small risk. A bad user now has control of the filesystem root, which is their home directory. The ftp daemon might cause some config file to be read - e.g. /etc/some_file. With chroot(), this file is now under the control of the user. vsftpd is careful in this area. But, the system's libc might want to open locale config files or other settings...
{ "source": [ "https://serverfault.com/questions/362628", "https://serverfault.com", "https://serverfault.com/users/14193/" ] }
362,903
Usually, I run aptitude -y install locales then dpkg-reconfigure locales to set up locale. Now I want to put it into a shell script, how can I reliably do the following, automatically / non-interactively? Choose en_US.UTF-8 and set it as system default Disable all other locales (Optional) Verify if /etc/default/locale contains one-and-only entry of LANG=en_US.UTF-8 as expected
Could not get @stone's answer to work. Instead, I use this method (for Dockerfiles): # Configure timezone and locale echo "Europe/Oslo" > /etc/timezone && \ dpkg-reconfigure -f noninteractive tzdata && \ sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \ sed -i -e 's/# nb_NO.UTF-8 UTF-8/nb_NO.UTF-8 UTF-8/' /etc/locale.gen && \ echo 'LANG="nb_NO.UTF-8"'>/etc/default/locale && \ dpkg-reconfigure --frontend=noninteractive locales && \ update-locale LANG=nb_NO.UTF-8
{ "source": [ "https://serverfault.com/questions/362903", "https://serverfault.com", "https://serverfault.com/users/80161/" ] }
363,095
This may be a bit of a noobish question, but I was taking a look at /etc/hosts on my new Xubuntu install and saw this: 127.0.0.1 localhost 127.0.1.1 myhostname On most 'nixes I've used, the second line is omitted, and if I want to add my hostname to the hosts file, I'd just do this: 127.0.0.1 localhost myhostname Is there a difference between these two files in any practical sense?
There isn't a great deal of difference between the two; 127/8 (eg: 127.0.0.0 => 127.255.255.255 ) are all bound to the loopback interface. The reason why is documented in the Debian manual in Ch. 5 Network Setup - 5.1.1. The hostname resolution . Ultimately, it is a bug workaround; the original report is 316099 .
{ "source": [ "https://serverfault.com/questions/363095", "https://serverfault.com", "https://serverfault.com/users/43634/" ] }
363,159
So I'm setting up a virtual path when pointing at a node.js app in my nginx conf. The relevant section looks like so: location /app { rewrite /app/(.*) /$1 break; proxy_pass http://localhost:3000; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } It works great, except when my node.js app (an Express app) issues a redirect. As an example, the dev box is running nginx on port 8080, so the URLs to the root of the node app look like: http://localhost:8080/app When I issue a redirect to '/app' from node, the actual redirect goes to: http://localhost/app
The problem is that the Node.js application is not issuing the redirect correctly. You may be able to use proxy_redirect to correct this in nginx: proxy_redirect http://localhost/ http://localhost:8080/;
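For illustration, the directive could slot into the location block from the question like this (ports as in the question; treat it as a sketch rather than a drop-in config):
location /app {
    rewrite /app/(.*) /$1 break;
    proxy_pass http://localhost:3000;
    proxy_redirect http://localhost/ http://localhost:8080/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}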
{ "source": [ "https://serverfault.com/questions/363159", "https://serverfault.com", "https://serverfault.com/users/28406/" ] }
363,425
How can I change all file permissions to 644 and all folder permissions to 755 recursively using chmod, in the following two situations: (1) if they currently have 777 permissions; (2) regardless of the current permissions (with ANY permissions)?
find . -type d -perm 777 -exec chmod 755 {} \; (for changing the directory permissions) find . -type f -perm 777 -exec chmod 644 {} \; (for changing the file permissions) If the files/directories don't have 777 permissions, we can simply drop the -perm 777 part. The advantage of these commands is that they can target regular files or directories and only apply the chmod to the entries matching a specific permission. . is the directory to start searching -type d is to match directories ( -type f to match regular files) -perm 777 to match files with 777 permissions (allowed for read, write and exec for user, group and everyone) -exec chmod 755 {} \; for each matching file execute the command chmod 755 {} where {} will be replaced by the path of the file. The ; indicates the end of the command; parameters after this ; are treated as find parameters. We have to escape it with \ since ; is the default shell delimiter; otherwise it would mark the end of the find command itself.
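For the second case in the question (apply the modes regardless of the current permissions), the same commands without the -perm test would be:
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
With a GNU or modern BSD find you can also end -exec with + instead of \; so chmod is invoked on many paths at once.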
{ "source": [ "https://serverfault.com/questions/363425", "https://serverfault.com", "https://serverfault.com/users/108296/" ] }
363,555
I'm trying to use rsync locally (on a Windows machine) to a remote server (my OS X box) in order to test a remote deploy build script. I've done rsync before just fine between two Linux servers, but I'm having problems now. Here is the output: $ rsync -v -e ssh [email protected]:/Library/WebServer/sites/staging/app1/ ./export skipping directory /Library/WebServer/sites/staging/app1/. sent 8 bytes received 13 bytes 3.82 bytes/sec total size is 0 speedup is 0.00 $ or $ rsync -avz -e ssh [email protected]:/Library/WebServer/sites/staging/app1/ ./export receiving file list ... done ./ sent 26 bytes received 68 bytes 17.09 bytes/sec total size is 0 speedup is 0.00 $ remote app1 directory is empty while local export directory has 4 sub directories and then a bunch of files in each of those
rsync -v -e ssh [email protected]:/Library/WebServer/sites/staging/app1/ ./export You didn't give it any options to put it into recursive mode like -r or -a . remote app1 directory is empty while local export directory has 4 sub directories and then a bunch of files in each of those Do you have the options backwards here? The command should be rsync [source] [DESTINATION] . If the app1 directory is empty and you are trying to copy an empty directory then you aren't going to do anything useful. Perhaps you need something like this instead? rsync -avz ./export/ [email protected]:/Library/WebServer/sites/staging/app1/ Also: You should almost always include a trailing slash on directories with rsync. Almost every version of rsync released in the last 5-10 years defaults to using ssh as the remote transport. You probably don't have to specify the -e ssh .
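A cautious way to double-check the direction and options before moving any data (a sketch, reusing the paths from the question) is a dry run:

# -n / --dry-run lists what would be transferred without changing anything on either side
rsync -avzn ./export/ [email protected]:/Library/WebServer/sites/staging/app1/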
{ "source": [ "https://serverfault.com/questions/363555", "https://serverfault.com", "https://serverfault.com/users/95925/" ] }
363,628
Let's say I bought two Intel Xeons and installed them into server-class hardware... If one CPU failed, would the other still function and pick up the slack, thereby providing fault tolerance? This does not seem very likely, but I figured I would ask instead of making any assumptions.
In a normal dual-socket system, no, although there are servers that do permit hot-swapping of processors and RAM. So these things do exist, but they're at the very, very high-end of the market. It's not really a big deal - of everything in your server that can fail, the processor is right on the bottom of the list, next to those little brass risers that hold the motherboard off the chassis.
{ "source": [ "https://serverfault.com/questions/363628", "https://serverfault.com", "https://serverfault.com/users/63619/" ] }
363,707
I want to have a secure connection when I log into my webmail, phpMyAdmin, etc. Therefore I signed my own SSL certificates with OpenSSL and told Apache to listen on port 443. Is this in fact secure? Are all my passwords really sent through a safe and secure layer? What difference does it make if I buy an SSL certificate from Verisign or sign my own one? At the end of the day, all data will be on my server anyway. So what is the big difference?
This is all about trust. If you get a signed certificate from Verisign, you prove to random clients that your certificate is trusted. If you self-sign the certificate, people who do not have your certificate installed on their computer cannot be sure that they aren't being subjected to a man-in-the-middle attack . If your webserver is just used by you, then you do not need a real CA (such as Verisign) to sign your certificate. Just install the certificate on the machines that you want to use and you're good to go. Edit: So to answer your question: Yes, everything is encrypted and you can be sure no one can read your sensitive data if you know that the certificate presented to the web browser is in fact the one you have set up the web server with.
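For reference, a minimal self-signed certificate can be produced with OpenSSL roughly like this (a sketch; the key size, lifetime, paths and subject are placeholder choices, adjust them to your own setup):

# Create a 2048-bit private key and a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /etc/ssl/private/myserver.key \
    -out /etc/ssl/certs/myserver.crt \
    -days 365 -subj "/CN=myserver.example.com"
# Then point Apache's SSLCertificateFile / SSLCertificateKeyFile at the two files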
{ "source": [ "https://serverfault.com/questions/363707", "https://serverfault.com", "https://serverfault.com/users/109245/" ] }
363,741
How can I block all ports except for 1962, 999, 12020? One port for SSH and two others for a kind of script. So, it's necessary to allow outgoing on these ports, right? My iptables: # Generated by iptables-save v1.4.4 on Sat Feb 25 17:25:21 2012 *mangle :PREROUTING ACCEPT [643521:136954367] :INPUT ACCEPT [643521:136954367] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [645723:99904505] :POSTROUTING ACCEPT [645723:99904505] COMMIT # Completed on Sat Feb 25 17:25:21 2012 # Generated by iptables-save v1.4.4 on Sat Feb 25 17:25:21 2012 *filter :INPUT ACCEPT [643490:136950781] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [645723:99904505] -A INPUT -p tcp -m tcp --dport 1962 -j ACCEPT -A INPUT -p tcp -m tcp --dport 999 -j ACCEPT COMMIT # Completed on Sat Feb 25 17:25:21 2012 # Generated by iptables-save v1.4.4 on Sat Feb 25 17:25:21 2012 *nat :PREROUTING ACCEPT [5673:734891] :POSTROUTING ACCEPT [2816:179474] :OUTPUT ACCEPT [2816:179474] COMMIT # Completed on Sat Feb 25 17:25:21 2012 Sorry, but I'm a real newbie when it comes to this stuff and I just want to make my server more secure.
First, you should always flush the existing rules so you can be sure nothing is already defined: iptables -F Then set the default policy of the INPUT chain to DROP if the end is reached and no rule matched: iptables -P INPUT DROP To ensure the loopback is not affected you should add iptables -A INPUT -i lo -p all -j ACCEPT iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT to allow all traffic on the lo interface and all incoming traffic for connections you established. After that, add every rule you need for your services (don't forget to open ssh if you need it! else you're locked out) : iptables -A INPUT -p tcp -m tcp --dport 1962 -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 999 -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 12020 -j ACCEPT A little trick I use to keep myself and others from accidentally drilling holes into the security: I finally add iptables -A INPUT -j DROP This line matches everything on the INPUT chain, so the default policy should never see a packet. The advantage is that even if you add an ACCEPT rule sometime after initializing your ruleset, it will never be checked, because everything is dropped before it; this ensures you have to keep everything in one place. For your question the whole thing looks like this in summary: iptables -F iptables -P INPUT DROP iptables -A INPUT -i lo -p all -j ACCEPT iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 1962 -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 999 -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 12020 -j ACCEPT iptables -A INPUT -j DROP
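One follow-up worth noting: rules entered this way live only in memory and vanish on reboot. How they are persisted is distribution-specific; on RHEL/CentOS (an assumption, since the distribution isn't stated in the question) it would typically be:

# Write the current in-memory ruleset to the file that is loaded at boot
service iptables save
# or, equivalently:
iptables-save > /etc/sysconfig/iptables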
{ "source": [ "https://serverfault.com/questions/363741", "https://serverfault.com", "https://serverfault.com/users/111954/" ] }
363,922
How do I move (not just copy) files from one server to another (both Linux)? man scp didn't give me anything useful. I cannot use 'scp' then 'rm' because I must make sure the file is successfully transferred. If there is any error during transfer, the file must not be deleted. Perhaps I should use the exit code somehow, but how? Also, there are a lot of files, and if only the last file fails it would not be a good option to keep the whole bunch of successfully transferred files. Maybe there is something besides SCP?
rsync over ssh is probably your best bet with the --remove-source-files option rsync -avz --remove-source-files -e ssh /this/dir remoteuser@remotehost:/remote/dir a quick test gives; [tomh@workstation001 ~]$ mkdir test1 [tomh@workstation001 ~]$ mkdir test2 [tomh@workstation001 ~]$ touch test1/testfile.1 [tomh@workstation001 ~]$ ls test1/ testfile.1 [tomh@workstation001 ~]$ rsync --remove-source-files -av -e ssh test1/testfile.1 tomh@localhost:/home/tomh/test2/ sending incremental file list sent 58 bytes received 12 bytes 10.77 bytes/sec total size is 0 speedup is 0.00 [tomh@workstation001 ~]$ ls test1/ [tomh@workstation001 ~]$ [tomh@workstation001 ~]$ ls test2/ testfile.1 As @SvenW mentioned, -e ssh is the default so can be omitted.
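To address the exit-code part of the question explicitly, a sketch of a wrapper (the paths are the placeholders from the answer above): rsync removes only the source files it transferred successfully and returns non-zero if anything went wrong, so the check is simple:

# Move files; failed transfers leave their source files in place
if rsync -az --remove-source-files /this/dir/ remoteuser@remotehost:/remote/dir/; then
    echo "all files transferred and removed from the source"
else
    echo "transfer had errors; files that did not transfer were kept" >&2
fi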
{ "source": [ "https://serverfault.com/questions/363922", "https://serverfault.com", "https://serverfault.com/users/103132/" ] }
364,555
So this sounds like a simple question, but searching the internet I have been unable to find a list of what the different actions in the SCCM Client actually do. On my machine it is called Configuration Manager and I am specifically talking about the Actions tab. Does someone have a list of each action and what they do? I would find it really helpful to have a reference in one place that I could point some fellow techs to.
This is a deceptively hard question! If you look up the Technet page regarding the Actions tab, you'll find that it tells you nothing about each action. All of the actions on the tab are scheduled tasks; that is, if the feature is enabled, it will automatically run on a periodic basis. In some circumstances (e.g. troubleshooting), you may find the need to manually initiate these tasks. That's where this tab comes in. The information on each action's functionality is scattered about the Technet website (although I understand and personally use many of these actions, I copied from the Technet website because I'm lazy): Branch Distribution Point Maintenance Task verifies any prestaged packages and downloads any that do not exist on the branch distribution point. While Technet does not explicitly state it, I believe this task is useful only on branch distribution points and is ignored on normal clients. Discovery Data Collection Cycle causes the client to generate a new discovery data record (DDR). When the DDR is processed by the site server, Discovery Data Manager adds or updates resource information from the DDR in the site database. File Collection Cycle When a file is specified for collection, the Microsoft System Center Configuration Manager 2007 software inventory agent searches for that file when it runs a software inventory scan on each client in the site. If the software inventory client agent finds a file that should be collected, the file is attached to the inventory file and sent to the site server. This action differs from software inventory in that it actually sends the file to the site server, so that it can be later viewed using Resource Explorer. This is a part of SCCM inventory functionality. Hardware Inventory Cycle collects information such as available disk space, processor type, and operating system about each computer. This is a part of SCCM inventory functionality. Machine Policy Retrieval & Evaluation Cycle The client downloads its policy on a schedule. By default, this value is configured to every 60 minutes and is configured with the option Policy polling interval (minutes). However, there might be occasions when you want to initiate ad-hoc policy retrieval from the client - for example, in a troubleshooting scenario or when testing. This action initiates ad-hoc machine policy retrieval from the client outside its scheduled polling interval. Software Inventory Cycle collects software inventory data directly from files (such as .exe files) by inventorying the file header information. Configuration Manager 2007 can also inventory unknown files - files that do not have detailed information in their file headers. This provides a flexible, easy-to-maintain software inventory method. You can also have Configuration Manager 2007 collect copies of files that you specify. Software inventory and collected file information for a client can be viewed using Resource Explorer. This is a part of SCCM inventory functionality. Software Metering Usage Report Cycle collects the data that allows you to monitor client software usage. Software Updates Deployment Evaluation Cycle initiates a scan for software updates compliance. Before client computers can scan for software update compliance, the software updates environment must be configured. Software Updates Scan Cycle Just after a software update installation completes, a scan is initiated to verify that the update is no longer required and to create a new state message that indicates the update has been installed. 
When the installation has finished but a restart is necessary, the state will indicate that the client computer is pending a restart. User Policy Retrieval & Evaluation Cycle Similar to Machine Policy Retrieval & Evaluation Cycle, but this action initiates ad-hoc user policy retrieval from the client outside its scheduled polling interval. Windows Installer Source List Update Cycle causes the Product Source Update Manager to complete a full update cycle. When you install an application using Windows Installer, those Windows Installer applications try to return to the path they were installed from when they need to install new components, repair the application, or update the application. This location is called the Windows Installer source location. Windows Installer Source Location Manager can automatically search Configuration Manager 2007 distribution points for the source files, even if the application was not originally installed from a distribution point.
{ "source": [ "https://serverfault.com/questions/364555", "https://serverfault.com", "https://serverfault.com/users/94830/" ] }
364,677
This is a Canonical Question about File Permissions and Why 777 is "destructive". I'm not asking how to fix this problem, as there are a ton of references for that already on Server Fault (reinstall OS). Why does it do anything destructive at all? If you've ever run this command, you pretty much immediately destroy your operating system. I'm not clear why removing restrictions has any impact on existing processes. For example, if I don't have read access to something and, after a quick mistype in the terminal, I suddenly have access, well... why does that cause Linux to break?
First of all a minor terminology nitpick: chmod doesn't remove permissions. It CHANGES them. Now the meat of the issue -- The mode 777 means "Anyone can read, write or execute this file" - You have given permission for anyone to do (effectively) whatever the heck they want. Now, why is this bad? You've just let everyone read/modify every file on your system. Kiss password security goodbye (anyone can read the shadow file and crack your passwords, but why bother? Just CHANGE the password! It's much easier!). Kiss security for your binaries goodbye (someone can just write a new login program that lets them in every time). Kiss your files goodbye: One user misdirects rm -r / and it's all over. The OS was told to let them do whatever they wanted! You've pissed off every program that checks permissions on files before starting. sudo , sendmail , and a host of others simply will not start any more. They will examine key file permissions, see they're not what they're supposed to be, and kick back an error message. Similarly ssh will break horribly (key files must have specific permissions, otherwise they're "insecure" and by default SSH will refuse to use them.) You've wiped out the setuid / setgid bits on the programs that had them. The mode 777 is actually 0 777 . Among the things in that leading digit are the setuid and setgid bits. Most programs which are setuid/setgid have that bit set because they must run with certain privileges. They're broken now. You've broken /tmp and /var/tmp The other thing in that leading octal digit that got zero'd is the sticky bit -- That which protects files in /tmp (and /var/tmp ) from being deleted by people who don't own them. There are (unfortunately) plenty of badly-behaved scripts out there that "clean up" by doing an rm -r /tmp/* , and without the sticky bit set on /tmp you can kiss all the files in that directory goodbye. Having scratch files disappear can really upset some badly-written programs... You've caused havoc in /dev /proc and similar filesystems This is more of an issue on older Unix systems where /dev is a real filesystem, and the stuff it contains are special files created with mknod , as the permissions change will be preserved across reboots, but on any system having your device permissions changing can cause substantial problems, from the obvious security risks (everyone can read every TTY) to the less-obvious potential causes of a kernel panic. Credit to @Tonny for pointing out this possibility Sockets and Pipes may break, or have other problems Sockets and pipes may break entirely, or be exposed to malicious injection as a result of being made world-writeable. Credit to @Tonny for pointing out this possibility You've made every file on your system executable A lot of people have . in their PATH environment variable (you shouldn't!) - This could cause an unpleasant surprise as now anyone can drop a file conveniently named like a command (say make or ls , and have a shot at getting you to run their malicious code. Credit to @RichHomolka for pointing out this possibility On some systems chmod will reset Access Control Lists (ACLs) This means you may wind up having to re-create all your ACLs in addition to fixing permissions everywhere (and is an actual example of the command being destructive). Credit to @JamesYoungman for pointing out this possibility Will the parts of the system which are already running continue to run? Probably, for a while at least. 
But the next time you need to launch a program, or restart a service, or, heaven forbid, REBOOT the box, you're in for a world of hurt, as #2 and #3 above will rear their ugly heads.
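If a machine has already been hit by this and a rebuild truly is not an option, one partial-recovery sketch on an RPM-based system (an assumption; it restores only the permissions of packaged files, not of your own data or configuration, and is no substitute for reinstalling):

# Reset ownership and permissions to what the RPM database recorded, package by package
for p in $(rpm -qa); do
    rpm --setugids "$p"   # restore user/group ownership
    rpm --setperms "$p"   # restore file modes, including setuid/setgid bits
done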
{ "source": [ "https://serverfault.com/questions/364677", "https://serverfault.com", "https://serverfault.com/users/95321/" ] }
364,709
I have an sshfs connection set up with a remote filesystem on a Linux server. I'm doing an rsync from my local server to the sshfs filesystem. Because of the nature of this setup, I can't chown anything on the sshfs filesystem. When I do the rsync, it tries to chown all the files after it transfers them. This results in chown errors, even though it transfers the files just fine. With rsync, is there a way to tell it to not try and chown the files? If I rsync like 1000 files I end up with a log of 1000 chown: permission denied (error 13) errors. I know it doesn't hurt anything to get these errors since the ownership of the files is determined by the sshfs configuration itself, but it would be nice to not get them.
You are probably running rsync like this: rsync -a dir/ remote:/dir/ The -a option according to the documentation is equivalent to: -rlptgoD -a, --archive archive mode; equals -rlptgoD (no -H,-A,-X) You probably want to remove the -o and -g options: -o, --owner preserve owner (super-user only) -g, --group preserve group So instead your rsync command should look something like this: rsync -rlptD dir/ remote:/dir/ Or as @glglgl points out: rsync -a --no-o --no-g dir/ remote:/dir/ The remaining options in use are: -r, --recursive recurse into directories -l, --links copy symlinks as symlinks -p, --perms preserve permissions -t, --times preserve modification times -D same as --devices --specials --devices preserve device files (super-user only) --specials preserve special files
{ "source": [ "https://serverfault.com/questions/364709", "https://serverfault.com", "https://serverfault.com/users/21307/" ] }
365,061
We're seeing huge performance problems on a web application and we're trying to find the bottleneck. I am not a sysadmin so there is some stuff I don't quite get. Some basic investigation shows the CPU to be idle, lots of memory to be available, no swapping, no I/O, but a high average load. The software stack on this server looks like this: Solaris 10 Java 1.6 WebLogic 10.3.5 (8 domains) The applications running on this server talk with an Oracle database on a different server. This server has 32GB of RAM and 10 CPUs (I think). Running prstat -Z gives something like this: PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP 3836 ducm0101 2119M 2074M cpu348 58 0 8:41:56 0.5% java/225 24196 ducm0101 1974M 1910M sleep 59 0 4:04:33 0.4% java/209 6765 ducm0102 1580M 1513M cpu330 1 0 1:21:48 0.1% java/291 16922 ducm0102 2115M 1961M sleep 58 0 6:37:08 0.0% java/193 18048 root 3048K 2440K sleep 59 0 0:06:02 0.0% sa_comm/4 26619 ducm0101 2588M 2368M sleep 59 0 8:21:17 0.0% java/231 19904 ducm0104 1713M 1390M sleep 59 0 1:15:29 0.0% java/151 27809 ducm0102 1547M 1426M sleep 59 0 0:38:19 0.0% java/186 2409 root 15M 11M sleep 59 0 0:00:00 0.0% pkgserv/3 27204 root 58M 54M sleep 59 0 9:11:38 0.0% stat_daemon/1 27256 root 12M 8312K sleep 59 0 7:16:40 0.0% kux_vmstat/1 29367 root 297M 286M sleep 59 0 11:02:13 0.0% dsmc/2 22128 root 13M 6768K sleep 59 0 0:10:51 0.0% sendmail/1 22133 smmsp 13M 1144K sleep 59 0 0:01:22 0.0% sendmail/1 22003 root 5896K 240K sleep 59 0 0:00:01 0.0% automountd/2 22074 root 4776K 1992K sleep 59 0 0:00:19 0.0% sshd/1 22005 root 6184K 2728K sleep 59 0 0:00:31 0.0% automountd/2 27201 root 6248K 344K sleep 59 0 0:00:01 0.0% mount_stat/1 20964 root 2912K 160K sleep 59 0 0:00:01 0.0% ttymon/1 20947 root 1784K 864K sleep 59 0 0:02:22 0.0% utmpd/1 20900 root 3048K 608K sleep 59 0 0:00:03 0.0% ttymon/1 20979 root 77M 18M sleep 59 0 0:14:13 0.0% inetd/4 20849 daemon 2856K 864K sleep 59 0 0:00:03 0.0% lockd/2 17794 root 80M 1232K sleep 59 0 0:06:19 0.0% svc.startd/12 17645 root 3080K 728K sleep 59 0 0:00:12 0.0% init/1 17849 root 13M 6800K sleep 59 0 0:13:04 0.0% svc.configd/15 20213 root 84M 81M sleep 59 0 0:47:17 0.0% nscd/46 20871 root 2568K 600K sleep 59 0 0:00:04 0.0% sac/1 3683 ducm0101 1904K 1640K sleep 56 0 0:00:00 0.0% startWebLogic.s/1 23937 ducm0101 1904K 1640K sleep 59 0 0:00:00 0.0% startWebLogic.s/1 20766 daemon 5328K 1536K sleep 59 0 0:00:36 0.0% nfsmapid/3 20141 daemon 5968K 3520K sleep 59 0 0:01:14 0.0% kcfd/4 20093 ducm0101 2000K 376K sleep 59 0 0:00:01 0.0% pfksh/1 20797 daemon 3256K 240K sleep 59 0 0:00:01 0.0% statd/1 6181 root 4864K 2872K sleep 59 0 0:01:34 0.0% syslogd/17 7220 ducm0104 1268M 1101M sleep 59 0 0:36:35 0.0% java/138 27597 ducm0102 1904K 1640K sleep 59 0 0:00:00 0.0% startWebLogic.s/1 27867 root 37M 4568K sleep 59 0 0:13:56 0.0% kcawd/7 12685 ducm0101 4080K 208K sleep 59 0 0:00:01 0.0% vncconfig/1 ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE 42 135 22G 19G 59% 87:27:59 1.2% dsuniucm01 Total: 135 processes, 3167 lwps, load averages: 54.48, 62.50, 63.11 I understand that CPU is mostly idle, but the load average is high, which is quite strange to me. Memory doesn't seem to be a problem. 
Running vmstat 15 gives something like this: kthr memory page disk faults cpu r b w swap free re mf pi po fr de sr s0 s1 s4 sd in sy cs us sy id 0 0 0 32531400 105702272 317 1052 126 0 0 0 0 13 13 -0 8 9602 107680 10964 1 1 98 0 0 0 15053368 95930224 411 2323 0 0 0 0 0 0 0 0 0 23207 47679 29958 3 2 95 0 0 0 14498568 95801960 3072 3583 0 2 2 0 0 3 3 0 21 22648 66367 28587 4 4 92 0 0 0 14343008 95656752 3080 2857 0 0 0 0 0 3 3 0 18 22338 44374 29085 3 4 94 0 0 0 14646016 95485472 1726 3306 0 0 0 0 0 0 0 0 0 24702 47499 33034 3 3 94 I understand that the CPU is mostly idle, no processes are waiting in the queue to be executed, little swapping is happening. Running iostat 15 gives this: tty sd0 sd1 sd4 ssd0 cpu tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id 0 676 324 13 8 322 13 8 0 0 0 159 8 0 1 1 0 98 1 1385 0 0 0 0 0 0 0 0 0 0 0 0 3 4 0 94 0 584 89 6 24 89 6 25 0 0 0 332 19 0 2 1 0 97 0 296 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 97 1 1290 43 5 24 43 5 22 0 0 0 297 20 1 3 3 0 94 Running netstat -i 15 gives the following: input aggr26 output input (Total) output packets errs packets errs colls packets errs packets errs colls 1500233798 0 1489316495 0 0 3608008314 0 3586173708 0 0 10646 0 10234 0 0 26206 0 25382 0 0 11227 0 10670 0 0 28562 0 27448 0 0 10353 0 9998 0 0 29117 0 28418 0 0 11443 0 12003 0 0 30385 0 31494 0 0 What am I missing?
With some further investigation, it appears that the performance problem is mostly due to a high number of network calls between two systems (Oracle SSXA and UCM). The calls are quick but numerous and serialized, hence the low CPU usage (mostly waiting for I/O), the high load average (many calls waiting to be processed) and especially the long response times (an accumulation of many small response times). Thanks for your insight on this problem!
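A rough way to confirm that kind of chatty, serialized traffic from the Solaris side (a sketch; it assumes the database listener is on the default Oracle port 1521, so substitute the real port):

# Count established sessions to the database listener
netstat -an | grep '\.1521 ' | grep ESTABLISHED | wc -l
# Watch per-interface packet rates over 15-second intervals while the app is under load
netstat -i 15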
{ "source": [ "https://serverfault.com/questions/365061", "https://serverfault.com", "https://serverfault.com/users/78575/" ] }
365,186
After a reboot, some partitions which were mentioned in fstab were not mounted as expected. The format of the lines for the partitions which were mounted correctly and those which were not looks the same, so I am wondering whether some log exists of any problems which prevented the restoration of the missing partitions. I am not able to see the console during a reboot but need to determine and fix the problem later.
There's a few things you could try: Assuming that they are still not mounted when you can login, does a mount -a cause any errors to get printed to your terminal? This will only use information available in the fstab to mount all available mounts, and should provide details of any mounts that are still failing to succeed. If you get no errors, and still have no mounts, are you sure that you don't have the noauto option enabled? If you get no errors and now have mounts, perhaps there's some segregated mounting happening in your boot sequence and not all of those boot steps are enabled; eg, Gentoo has localmount and netmount and nfsmount init scripts for mounting things at boot. Is it an ordering issue? ie, trying to mount /var/lib before /var/ . You can use the first numeric parameter in the fstab to control which mounts get mounted first. Failing any of the above, you can try going log diving. dmesg , or one of the various logs in /var/log should be able to help. Your boot sequence should be being logged by default, but because it's dependent on your system logger's configuration it can change a little, even on different versions of the same distribution. The usual culprits are /var/log/messages and /var/log/kernel.
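Two quick commands that help with the checks above (nothing distribution-specific assumed):

# Re-mount everything listed in fstab verbosely, so any errors are shown inline
mount -a -v
# List the non-comment fstab entries with their mount options, to spot noauto or ordering problems
grep -v '^[[:space:]]*#' /etc/fstab | awk 'NF {print $1, $2, $4}'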
{ "source": [ "https://serverfault.com/questions/365186", "https://serverfault.com", "https://serverfault.com/users/4872/" ] }
365,192
My system is CentOS 5.5 with 2G of memory, and I have no permission to set up swap. I have set up a crontab to restart mysqld and httpd at 3:00 every day, and I also want to free memory every hour. So how do I free memory with crontab? I wrote the line below after studying examples from the web, but it does not seem to work... crontab -e 6 * * * * sync;echo 3 > /proc/sys/vm/drop_caches EDIT: here is my my.cnf ; however, I need fulltext search and sometimes ORDER BY date. key_buffer_size = 256M max_allowed_packet = 8M max_connections=1024 wait_timeout=5 table_open_cache = 512 sort_buffer_size = 2M read_buffer_size = 2M read_rnd_buffer_size = 2M myisam_sort_buffer_size = 256M thread_cache_size = 8 query_cache_limit=2M query_cache_size= 128M thread_concurrency = 8 read_rnd_buffer_size=2M tmp_table_size=128M ft_min_word_len=2 ft_max_word_len=42
You don't really need to free memory on a schedule, and doing so usually hurts more than it helps. Linux deliberately uses otherwise idle RAM for the page cache, and that memory is handed back automatically as soon as an application asks for it, so a low "free" figure is normal and healthy. Dropping the caches every hour just discards data the kernel would otherwise have served straight from RAM, which makes MySQL and Apache slower, not faster. If the box is genuinely running out of memory, the fix is to lower what the services are allowed to consume (for example max_connections and the per-connection buffers in my.cnf, and MaxClients in the Apache configuration), not to clear caches. If you still want to run the command from your question, note that only root can write to /proc/sys/vm/drop_caches, so the entry has to live in root's crontab; as a normal user it will fail with a permission error.
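If you do decide to keep the hourly cache drop despite the above, a sketch of the root crontab entry (assuming you have, or can ask for, root access, which writing to drop_caches requires):

# minute hour day-of-month month day-of-week command (runs at minute 6 of every hour)
6 * * * * /bin/sync && echo 3 > /proc/sys/vm/drop_caches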
{ "source": [ "https://serverfault.com/questions/365192", "https://serverfault.com", "https://serverfault.com/users/102504/" ] }
365,423
The file is located in Program Files/Oracle/VirtualBox/VBoxManage.exe and is used as a command-line interface to VirtualBox. I'm using it to convert the .vdi image to a .vmdk (for VMware). http://scottlinux.com/2011/06/24/convert-vdi-to-vmdk-virtualbox-to-vmware/ Here's an example command: $ VBoxManage list hdds But where do I run this command? In Windows cmd? I tried both in cmd and in Linux but I can't figure it out.
You need to either use the whole path for the command: "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" list hdds Or cd to the C:\Program Files\Oracle\VirtualBox directory, then: VBoxManage.exe list hdds Or you can add the C:\Program Files\Oracle\VirtualBox directory to your PATH : set PATH=%PATH%;C:\Program Files\Oracle\VirtualBox and then you can run VBoxManage from anywhere.
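Once VBoxManage is reachable, the conversion step itself (a sketch; the clonehd subcommand is correct for the VirtualBox 4.x era, later releases rename it clonemedium, and the file names here are placeholders) would look roughly like:

# Convert a VirtualBox disk image to VMDK for VMware; same syntax on Windows cmd and Linux
VBoxManage clonehd "source-disk.vdi" "target-disk.vmdk" --format VMDK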
{ "source": [ "https://serverfault.com/questions/365423", "https://serverfault.com", "https://serverfault.com/users/111100/" ] }
365,605
I just attached another EBS volume to a running instance. But how do I access the volume? I can't find the /dev/sda device anywhere. Where should I look?
When you attach an EBS volume, you specify the device to attach it as. Under linux, these devices are /dev/xvd* - and are symlinked to /dev/sd* In the AWS console, you can see your EBS volumes, what instances they are attached to, and the device each volume is attached as: You can achieve the same thing from the CLI tools. Set the necessary environment variables: export EC2_PRIVATE_KEY=/root/pk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem export EC2_CERT=/root/cert-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem Run the command on your current instance (otherwise, just specify the instance-id): ec2-describe-instances `curl -s http://169.254.169.254/latest/meta-data/instance-id` | grep BLOCKDEVICE BLOCKDEVICE /dev/sda1 vol-xxxxxxxx 2011-11-13T21:09:53.000Z BLOCKDEVICE /dev/sdf vol-xxxxxxxx 2011-11-13T21:09:53.000Z BLOCKDEVICE /dev/sdg vol-xxxxxxxx 2011-11-13T21:09:53.000Z It is worth noting that in both cases above - the CLI and the AWS Console - the devices are described as being attached at /dev/sd* - this is not actually the case, however. Look at the contents of /dev: ls -l /dev/sd* /dev/xv* lrwxrwxrwx 1 root root 5 Dec 12 18:32 /dev/sda1 -> xvda1 lrwxrwxrwx 1 root root 4 Dec 12 18:32 /dev/sdf -> xvdf lrwxrwxrwx 1 root root 4 Dec 12 18:32 /dev/sdg -> xvdg brw-rw---- 1 root disk 202, 1 Dec 12 18:32 /dev/xvda1 brw-rw---- 1 root disk 202, 80 Dec 12 18:32 /dev/xvdf brw-rw---- 1 root disk 202, 96 Dec 12 18:32 /dev/xvdg The devices are actually /dev/xvd* - and the /dev/sd* paths are symlinks. Another approach to check for the currently available devices is to use fdisk -l , or for a simpler output: cat /proc/partitions major minor #blocks name 202 1 4194304 xvda1 202 80 6291456 xvdf 202 96 1048576 xvdg If you need to determine which devices have been mounted use mount and df - and check /etc/fstab to change the mount options.
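Once you have identified the device node, actually using the space means putting a filesystem on it and mounting it. A sketch, assuming the new volume appeared as /dev/xvdf and is brand new (mkfs destroys any existing data, so skip that step if the volume already holds a filesystem):

# ONLY for a new, empty volume
mkfs -t ext4 /dev/xvdf
# Mount it somewhere convenient
mkdir -p /data
mount /dev/xvdf /data
df -h /data    # confirm the new space is visible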
{ "source": [ "https://serverfault.com/questions/365605", "https://serverfault.com", "https://serverfault.com/users/102717/" ] }
366,072
What is likely to happen when you plug both ends of a network cable into a single switch/router? Will this create problems on the network, or just be ignored?
Depends on the router/switch. If it's " Managed " - Like decent Netgear, Cisco or HP Procurve, or has STP (Spanning Tree Protocol) or one of its variants enabled, there's a few seconds of absolute insanity, then the switch realises that there's a loop in the network topology, and blocks one of the ports. (I've only described the STP re-convergence as "absolute insanity" because if you're using old-style, slow, STP then re-convergence can take 30s or more, depending on network complexity. Vendor specific STP extensions such as BackboneFast and so on will decrease this, but you might still end up with a short period of a slightly unstable network. Rapid STP is a lot quicker to converge, due to a different algorithm) If it's " Unmanaged "- Like pretty much all SOHO grade gear, and a fair proportion of small 4-8 port switches, then all hell breaks loose, as you've just created a loop in a network, and all the traffic tends to just bounce about inside the loop. The reason this happens is because switches rely on a process of MAC address learning to map MAC addresses to physical ports. In a non-looped network, one MAC address will only be visible to the switch on a given physical port. If you have a loop, then the switch will see multiple paths to the same MAC address, and possibly multiple MAC addresses on multiple ports, so instead of the traffic being switched efficiently, it will be broadcast to wherever it sees the MACs. This is known as a "Broadcast Storm". This can quickly use up all of a switch's CPU power, fill the transmit and receive buffers, as well as polluting the MAC address table. Basically, if you create a loop in the network, you'll know about it, either through monitoring (detecting a change in the STP topology [you do have monitoring, right?]), or in everything falling over dramatically. If you look at a switch that has a broadcast storm on it, you tend to find that all of the port activity lights are blinking all at the same time.
{ "source": [ "https://serverfault.com/questions/366072", "https://serverfault.com", "https://serverfault.com/users/88015/" ] }
366,324
I have been asked this question in two consecutive interviews, but after some research and checking with various systems administrators I haven't received a good answer. I am wondering if somebody can help me out here. A server is out of disk space. You notice a very large log file and determine it is safe to remove. You delete the file but the disk still shows that it is full. What would cause this and how would you remedy it? And how would you find which process is writing this huge log file?
This is a common interview question and a situation that comes up in a variety of production environments. The file's directory entries have been deleted, but the logging process is still running. The space won't be reclaimed by the operating system until all file handles have been closed (e.g., the process has been killed) and all directory entries removed. To find the process writing to the file, you'll need to use the lsof command. The other part of the question can sometimes be "how do you clear a file that's being written to without killing the process?" Ideally, you'd "zero" or "truncate" the log file with something like : > /var/log/logfile instead of deleting the file.
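A sketch of the commands involved (the PID/FD values and the log path are placeholders to be read off your own output):

# List open files whose directory entries have been deleted but which still have open handles
lsof +L1
# or, more crudely
lsof | grep -i deleted
# Truncating instead of deleting frees the space immediately:
: > /var/log/logfile
# For a file that was already deleted, you can still zero it through the owning process's descriptor:
: > /proc/PID/fd/FD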
{ "source": [ "https://serverfault.com/questions/366324", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }