I think I found the problem: after a while of plugging around with different setups, I replaced the SiI controller with an old PCI one and the problem seems to be solved.
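For anyone hitting the same symptom, the read test from the question can be repeated in a small loop so each pass is forced to go back to the disks (a minimal sketch, assuming root and the three test files from the question; run it before and after swapping hardware):
for i in 1 2 3 4 5; do
    sync
    echo 3 > /proc/sys/vm/drop_caches   # clear the kernel's page cache so data is re-read from disk
    md5sum File1 File2 File3            # identical output on every pass means stable reads
done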
I have a problem with a filesystem here on a storage machine. We noticed, that many of the data that comes out of the systems seams to be corrupt, but only with minor problems like CRC errors with self-verifing installers or small picture errors in movies. While tracking down the problem, i endet up in a test with 3 files with about 900MB each. The ext4 filesystem is mounted read-only, but every time i do a md5sum on the files, the result differs: $ ls -l -rw-rw-r-- 1 samba samba 922789695 Jan 7 21:47 File1 -rw-rw-r-- 1 samba samba 939080225 Jan 7 21:54 File2 -rw-rw-r-- 1 samba samba 996515494 Jan 14 21:13 File3$ md5sum * 9449c8e4fd2869a7969017db266451b0 File1 016b5c2e8b535ec922f5efb4ec9082bc File2 5576aeb34575e07171fa835a79fec147 File3 $ echo 3 > /proc/sys/vm/drop_caches # (clear file cache of the kernel) $ md5sum * 3f03edec64e22de384fd3d2cff0e3730 File1 32b53ee1dd3f5c9796322cabe4f8c0da File2 35af5c433d0725ab0892d4517faeceea File3 $ echo 3 > /proc/sys/vm/drop_caches $ md5sum * 593d83e084387a8d5bd9b445032a5669 File1 4f8b76249b96a1a29bdd748167c41bda File2 8b5bab8a153eb6e33dc3cd7d23362090 File3 $ echo 3 > /proc/sys/vm/drop_caches $ md5sum * d716d9c4acbd3ade450bab46903810d9 File1 68ede84d1396075ffe8a9228966cc148 File2 b8d75123b2d5b18c0d2827a448f53086 File3 $ echo 3 > /proc/sys/vm/drop_caches $ md5sum * c991bcca3bc2f39fdd143f8460935646 File1 73e6301b28c3b1b0bb95df52ea5794dd File2 a202e88343d6e7bc4dce808b885ad013 File3First I let e2fsck check the whole disk. It found a few problems, but it finds other errors on every new run. I think it got other reads each time same as md5sum and the problem is on another layer. The whole thing is inside a xen vm, but i don't think that detail matters. The architecture is like: ext4 | dm-crypt | (xen blk between here) md-raid5 (softraid) | +---+-----------------------------+ | | mainboard sata +---------pcie---------+ | | | 3 disks sata controller(jbod) sata controller(jbod) (1 failed) | | 2 disks 2 diskslspci output of the sata controllers: 00:12.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB600 Non-Raid-5 SATA 02:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01) 03:00.0 SATA controller: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 03)While i was searching for the problem, one of the 7 disks failed and the raid is currently running with only 6 disks until the replacement arrives. Maybe this can be part of the problem? It existed definitly before the failure, but now the raid should be in a vulnerable but stable state...? Whats going on here?
Same file with different content on every read [closed]
According to Gparted, the answer was to do a filesystem check in order to utilize the extra space. To solve the problem I had to:unmount the filesystem. open gparted select the raid device (in my case /dev/md0) run check (Partition -> Check)This successfully resized the md0 partition to use all of the available space. The exact operation's output from gparted is the following : GParted 0.33.0 --enable-libparted-dmraid --enable-online-resizeLibparted 3.2 Check and repair file system (ext4) on /dev/md0 00:03:51 ( SUCCESS )calibrate /dev/md0 00:00:00 ( SUCCESS )path: /dev/md0 (device) start: 0 end: 7813533951 size: 7813533952 (3.64 TiB) check file system on /dev/md0 for errors and (if possible) fix them 00:02:43 ( SUCCESS )e2fsck -f -y -v -C 0 '/dev/md0' 00:02:43 ( SUCCESS )Pass 1: Checking inodes, blocks, and sizes Inode 30829505 extent tree (at level 1) could be shorter. Optimize? yesInode 84025620 extent tree (at level 1) could be narrower. Optimize? yesInode 84806354 extent tree (at level 2) could be narrower. Optimize? yesPass 1E: Optimizing extent trees Pass 2: Checking directory structure Pass 3: Checking directory connectivity /lost+found not found. Create? yesPass 4: Checking reference counts Pass 5: Checking group summary informationStorageArray0: ***** FILE SYSTEM WAS MODIFIED *****5007693 inodes used (4.10%, out of 122093568) 23336 non-contiguous files (0.5%) 2766 non-contiguous directories (0.1%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 4942467/2090/2 458492986 blocks used (93.89%, out of 488345872) 0 bad blocks 52 large files4328842 regular files 612231 directories 0 character device files 0 block device files 3 fifos 1396 links 66562 symbolic links (63077 fast symbolic links) 45 sockets ------------ 5009079 files e2fsck 1.45.1 (12-May-2019) grow file system to fill the partition 00:01:08 ( SUCCESS )resize2fs -p '/dev/md0' 00:01:08 ( SUCCESS )Resizing the filesystem on /dev/md0 to 976691744 (4k) blocks. The filesystem on /dev/md0 is now 976691744 (4k) blocks long.resize2fs 1.45.1 (12-May-2019)========================================
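For reference, the same repair and resize can be done directly from the command line; a sketch using the exact operations shown in the GParted log above (the array must be unmounted first):
umount /dev/md0
e2fsck -f -y -v -C 0 /dev/md0    # check and repair the filesystem, as GParted did
resize2fs -p /dev/md0            # grow ext4 to fill the enlarged array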
My initial raid setup was 2x2TB RAID 1 using mdadm. I have bought a 3rd 2TB drive in the hopes to upgrade the RAID's total capacity to 4 TB using mdadm. I have already run the following two commands, but dont see a capacity change: sudo mdadm --grow /dev/md0 --level=5 sudo mdadm --grow /dev/md0 --add /dev/sdd --raid-devices=3with the mdadm details : $ sudo mdadm --detail /dev/md0 [sudo] password for userd: /dev/md0: Version : 1.2 Creation Time : Wed Jul 5 19:59:17 2017 Raid Level : raid5 Array Size : 1953383488 (1862.89 GiB 2000.26 GB) Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB) Raid Devices : 3 Total Devices : 3 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Wed May 22 17:58:37 2019 State : clean, reshaping Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64KConsistency Policy : bitmap Reshape Status : 5% complete Delta Devices : 1, (2->3) Name : userd:0 (local to host userd) UUID : 986fca95:68ef5344:5136f8af:b8d34a03 Events : 13557 Number Major Minor RaidDevice State 0 8 0 0 active sync /dev/sda 1 8 16 1 active sync /dev/sdb 2 8 48 2 active sync /dev/sddUPDATE : With the reshape now finished, only 2TB of the 4TB is available. /dev/md0: Version : 1.2 Creation Time : Wed Jul 5 19:59:17 2017 Raid Level : raid5 Array Size : 3906766976 (3725.78 GiB 4000.53 GB) Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB) Raid Devices : 3 Total Devices : 3 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Thu May 23 23:40:16 2019 State : clean Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64KConsistency Policy : bitmap Name : userd:0 (local to host userd) UUID : 986fca95:68ef5344:5136f8af:b8d34a03 Events : 17502 Number Major Minor RaidDevice State 0 8 0 0 active sync /dev/sda 1 8 16 1 active sync /dev/sdb 2 8 48 2 active sync /dev/sddHow can I get mdadm to use all of the 4TB instead of just 2TB?
Can I Increase raid 1 capacity by changing to raid 5?
That does seem like a disk performance issue. You should get somewhere between 20 MB/s and 80 MB/s depending on block size, I think. I found this old 10k disk comparison where you can see how different drives perform: http://techreport.com/review/5236/10k-rpm-hard-drive-comparison/7 . I also found a thread on the Dell forum where someone is facing the same kind of issue: http://en.community.dell.com/support-forums/servers/f/906/t/19475037 To answer your question: no, 5-6 MB/s is not normal.
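For a measurement that is harder to misread, it may also help to force the data to disk instead of letting the page cache absorb part of the write; a sketch (file name and size are illustrative):
dd if=/dev/zero of=TestFile bs=1M count=4096 conv=fdatasync   # report the rate only after the data is flushed
dd if=/dev/zero of=TestFile bs=1M count=4096 oflag=direct     # or bypass the page cache entirely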
I have a Dell PowerEdge R820 server which is under maintenance by a third party. There are 6 SAS disks (10K RPM, 6 Gbps) configured as RAID 5 using a PERC controller. Currently I am facing a performance issue with the server, and basically it is with the disks. When I try to write 4GB of data, it takes 12 minutes to complete. I am using a Linux server. Please see the output of the dd command: # time dd if=/dev/zero of=TestFile bs=4096 count=1024000 1024000+0 records in 1024000+0 records out real 12m 3.56s user 0m 7.94s sys 0m 0.00sI have also checked on another, desktop-grade server, where RAID 5 is configured with 4 SATA (7.2K RPM) disks. There it takes only 19 seconds to write 4GB of data to disk. I can see a clear disk I/O performance problem, but the third party is denying it; they say this is the normal time. I refuse to agree with them. Can you please tell me what the normal time should be to write 4GB of data to a volume configured with 6 SAS (10K RPM) disks?
Disk I/O performance issue [closed]
It looks odd. You might have to mdadm --create with overlays for this one (with the correct data offset, chunk size, and drive order), and perhaps with the first drive missing, as that seems to have failed first... Basically there is no way to recover by conventional means once a drive no longer even remembers its Device Role. Both say they're "spare", so it's unknown whether either drive was role 0, role 2, or nothing at all (some raid5 setups actually use spares for some reason). So it's unclear whether there is useful data on them at all, and if so, what order it would be in. You have to determine that yourself. While you're at it, also check the SMART data and use ddrescue first if any of these drives have reallocated or pending sectors that might have contributed to the raid failure.
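A sketch of such an overlay-protected attempt, using the data offset and chunk size from the --examine output below; the mapper device names and the drive order are placeholders that you would permute between tries:
mdadm --create /dev/md2 --assume-clean --level=5 --raid-devices=3 \
      --metadata=1.2 --chunk=64K --data-offset=261888s \
      missing /dev/mapper/sdX1 /dev/mapper/sdY1
fsck -n /dev/md2        # read-only check: a mostly clean result suggests the order was right
mdadm --stop /dev/md2   # stop and retry with another order if it looks like garbage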
MD raid5 array appears to have stopped working suddenly. Symptoms are somewhat similar to this issue in that I'm getting errors talking about not enough devices to start the array, however in my case the event counts on all three drives are equal. It's a raid 5 array that should have 2 active drives and one parity, however mdadm --examine on each drive shows two of them having their role listed as spare and only one as an active drive. I've tried mdadm --stop /dev/md1 followed by mdadm --assemble /dev/md1 (including attempts with the --force and --run flags). SMART data doesn't indicate any issues with the drives (and current pending and reallocated sector counts are all zero), I've tried the raid.wiki.kernel.org guide linked below by frostschutz through the steps involving setting up mapped overlay devices. I would have then assumed running the following command would create a raid array that I could then attempt to mount read-only and see if that resulted in a readable filesystem or just a garbled mess (i.e. to determine if my guess of sdf1 being the parity drive was correct or if I should try again with sde1) - but instead it gives the error show below (have also tried with the associated loop devices as per losetup --list, with the same result). mdadm --create /dev/md2 --assume-clean --level=5 --chunk=64K --metadata=1.2 --data-offset=261888s --raid-devices=3 missing /dev/mapper/sdh1 /dev/mapper/sdf1 mdadm: super1.x cannot open /dev/mapper/sdh1: Device or resource busy mdadm: /dev/mapper/sdh1 is not suitable for this array. mdadm: super1.x cannot open /dev/mapper/sdf1: Device or resource busy mdadm: /dev/mapper/sdf1 is not suitable for this array. mdadm: create abortedAlso, while mdadm --detail /dev/md1 previously gave the output (further) below, it now gives: /dev/md1: Version : 1.2 Raid Level : raid0 Total Devices : 3 Persistence : Superblock is persistent State : inactive Working Devices : 3 Name : bob:1 (local to host bob) UUID : 07ff9ba9:e8100e68:94c12c1a:3d7ad811 Events : 373364 Number Major Minor RaidDevice - 253 11 - /dev/dm-11 - 253 10 - /dev/dm-10 - 253 9 - /dev/dm-9Also, I've noticed dmsetup status gives the same information for all three overlays, and has a number that looks suspiciously like it may refer to the size of original raid array (16TB) rather than an individual drive (8TB) - not sure if this is as it should be? sde1: 0 15627528888 snapshot 16/16777216000 16 sdh1: 0 15627528888 snapshot 16/16777216000 16 sdf1: 0 15627528888 snapshot 16/16777216000 16Not sure how to progress from this point as far as attempting to create the device, mount and inspect the filesystem to confirm whether I guessed the correct parity device or not, using the overlay to prevent anything being written to the actual drives. UPDATE: As per frostschutz's suggestion below, the array was somehow in some kind of state where --stop needed to be issued prior to being able to do anything with the underlying drives. I'd discounted that possibility previously as cat /proc/mdstat was showing the array as inactive, which I'd assumed meant it could not possibly be what was tying the drives up, but that was not in fact the case (I'd also previously ran --stop, but it would seem something was done afterwards triggering it's return to a non-stopped state). 
After getting the drive order correct (not on the first try, glad I was using overlays) the array passed a fsck check with no errors reported and is now up and running as if nothing ever happened.The result of running other diagnostic commands: cat /proc/mdstat: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md1 : inactive sdh1[1](S) sde1[3](S) sdf1[0](S) 23440900500 blocks super 1.2mdadm --detail /dev/md1: /dev/md1: Version : 1.2 Raid Level : raid0 Total Devices : 3 Persistence : Superblock is persistent State : inactive Working Devices : 3 Name : bob:1 (local to host bob) UUID : 07ff9ba9:e8100e68:94c12c1a:3d7ad811 Events : 373364 Number Major Minor RaidDevice - 8 113 - /dev/sdh1 - 8 81 - /dev/sdf1 - 8 65 - /dev/sde1lines appearing in dmesg when trying to mdadm --assemble /dev/md1: md/raid:md1: device sdh1 operational as raid disk 1 md/raid:md1: not enough operational devices (2/3 failed) md/raid:md1: failed to run raid set. md: pers->run() failed ..and the mdadm --examines /dev/sde1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 07ff9ba9:e8100e68:94c12c1a:3d7ad811 Name : bob:1 (local to host bob) Creation Time : Mon Mar 4 22:10:29 2019 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 15627267000 (7451.66 GiB 8001.16 GB) Array Size : 15627266688 (14903.32 GiB 16002.32 GB) Used Dev Size : 15627266688 (7451.66 GiB 8001.16 GB) Data Offset : 261888 sectors Super Offset : 8 sectors Unused Space : before=261808 sectors, after=312 sectors State : clean Device UUID : e856f539:6a1b5822:b3b8bfb7:4d0f4741Internal Bitmap : 8 sectors from superblock Update Time : Sun May 30 00:22:45 2021 Bad Block Log : 512 entries available at offset 40 sectors Checksum : 9b5703bc - correct Events : 373364 Layout : left-symmetric Chunk Size : 64K Device Role : spare Array State : .AA ('A' == active, '.' == missing, 'R' == replacing)/dev/sdf1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 07ff9ba9:e8100e68:94c12c1a:3d7ad811 Name : bob:1 (local to host bob) Creation Time : Mon Mar 4 22:10:29 2019 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 15627267000 (7451.66 GiB 8001.16 GB) Array Size : 15627266688 (14903.32 GiB 16002.32 GB) Used Dev Size : 15627266688 (7451.66 GiB 8001.16 GB) Data Offset : 261888 sectors Super Offset : 8 sectors Unused Space : before=261800 sectors, after=312 sectors State : clean Device UUID : 7919e56f:2e08430e:95a4c4a6:1e64606aInternal Bitmap : 8 sectors from superblock Update Time : Sun May 30 00:22:45 2021 Bad Block Log : 512 entries available at offset 72 sectors Checksum : d54ff3e1 - correct Events : 373364 Layout : left-symmetric Chunk Size : 64K Device Role : spare Array State : .AA ('A' == active, '.' 
== missing, 'R' == replacing)/dev/sdh1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 07ff9ba9:e8100e68:94c12c1a:3d7ad811 Name : bob:1 (local to host bob) Creation Time : Mon Mar 4 22:10:29 2019 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 15627267000 (7451.66 GiB 8001.16 GB) Array Size : 15627266688 (14903.32 GiB 16002.32 GB) Used Dev Size : 15627266688 (7451.66 GiB 8001.16 GB) Data Offset : 261888 sectors Super Offset : 8 sectors Unused Space : before=261800 sectors, after=312 sectors State : clean Device UUID : 0c9a8237:7e79a439:d4e35b31:659f3c86Internal Bitmap : 8 sectors from superblock Update Time : Sun May 30 00:22:45 2021 Bad Block Log : 512 entries available at offset 72 sectors Checksum : 6ec2604b - correct Events : 373364 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 1 Array State : .AA ('A' == active, '.' == missing, 'R' == replacing)
mdadm not enough devices to start the array - recovery possible?
You can skip the initial sync with --assume-clean. mdadm --create /dev/md100 --assume-clean --level=5 --raid-devices=3 /dev/sdx1 /dev/sdy1 /dev/sdz1Alternatively, leave a disk missing so no sync can be performed. Doing so results in a degraded RAID, which might be a relevant use case for some tests. mdadm --create /dev/md100 --level=5 --raid-devices=3 /dev/sdx1 missing /dev/sdz1A completely different approach would be to perform the initial sync after all, but make the partition size so small that it finishes quickly. It should not be necessary to use a full size 6TB RAID for most tests. Don't forget to also check the filesystem options, for example ext4 has some lazy init modes that might affect performance in a newly created filesystem. It also has options to optimize for RAID use, you can test whether those make any difference to you.
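On the filesystem side, a hedged example of disabling ext4's lazy initialisation and setting the RAID-aware layout hints (the stride/stripe-width values assume a 512 KiB chunk, 4 KiB blocks and two data disks; recalculate for your geometry):
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0,stride=128,stripe-width=256 /dev/md100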
I'd like to run a series of fio-based performance tests on a few drives in various RAID and non-RAID configurations. When assembling drives in RAID5, the rebuild process takes an incredibly long time (6TB HDD). Since I'm going to completely overwrite the disks as part of the performance tests (or at least all the sectors I plan on reading), is there any way I can configure mdadm to not bother rebuilding the parity and just calculate parity the next time the sector is written?
Quickly assemble raid5 for perf test
If you have a small amount of disk space available you can test out these commands with loopback devices. Create the loopback devices a, b, c: dd if=/dev/zero bs=1M count=50 > diska.img # Plan for RAID5 dd if=/dev/zero bs=1M count=50 > diskb.img # Likewise dd if=/dev/zero bs=1M count=50 > diskc.img # Original data will be herela=$(losetup --find --show diska.img); echo $la lb=$(losetup --find --show diskb.img); echo $lb lc=$(losetup --find --show diskc.img); echo $lcCreate some "important original data" and put it onto the third disk ($lc) mkfs -t ext4 -L data "$lc" mount "$lc" /mnt cp -a /usr/share/man/man1 /mnt umount /mntNow try creating the RAID5 arrays per your ideas. In this scenario we have $la and $lb as the two blank disks, and $lc representing your important third disk: mdadm --create --verbose /dev/md0 --level=5 --raid-devices=2 "$la" "$lb"Success; this has created a RAID5 array with two members. Personally, I'd have specified three, with the third element as the word missing, because this makes it clearer what I've intended: mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 "$la" "$lb" missingYour next command, though, is not quite right. rsync copies between filesystems, not between devices, so first you need to create the new filesystem and mount both: mkfs -t ext4 -L data /dev/md0 mkdir -p /mnt/src /mnt/dst mount "$lc" /mnt/src # Here you could use mount /dev/sdc1 /mnt/src mount /dev/md0 /mnt/dst rsync -av --exclude-from=excludefile /mnt/src/ /mnt/dstYou should use rsync --dry-run to check what it's going to do before it does it. umount /mnt/src umount /mnt/dstAt this point you need to be absolutely sure that you have successfully copied the data from the original disk to the new (degraded) RAID5 array, as we are going to add the old disk into the array. If you originally specified only two devices you need to grow the array to include a third: mdadm --grow /dev/md0 --raid-devices=3 --add "$lc" # /dev/sd1c when you do this for realOn the other hand, if you took my recommendation and started with three devices (one of which was missing) you just need to add the device: mdadm --manage /dev/md0 --add "$lc"Finally, you can remount the RAID 5 array in the intended part of the filesystem. Use cat /proc/mdstat to see how the resynchronisation is going. For the testbed ONLY, you need to stop the array and delete the components mdadm --stop /dev/md0 losetup -d "$la" losetup -d "$lb" losetup -d "$lc" rm diska.img diskb.img diskc.img
I have 3 identical drives (4 TB IronWolf) that I would like to make a RAID 5 from using mdadm, for a small degree of data security. Now the problem is, 1 drive is filled with data which I am unable to back up. Yes, I understand that if a drive fails while building all my data is gone, but I would still like to give it my best try. To make it more understandable, let's call the empty ones sda1 and sdb1, and the one with data sdc1. mdadm --create --verbose /dev/md0 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1; rsync -av --exclude-from=excludefile /dev/sdc1 /dev/md0; mkfs.ext4 /dev/sdc1; mdadm --add /dev/md0 /dev/sdc1; mdadm --grow --raid-devices=3 --spare-devices=1 /dev/md0;Can someone confirm that this is a proper way to do it?
Build RAID 5 with mdadm and 1 disk with data
Your command is incorrect, it should be this: $ mdadm -C /dev/md0 -l 5 -n 4 /dev/sd[b-e]1If you want to use the = signs you use these switches instead like this: $ mdadm -C /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1Per man page: -l, --level= Set RAID level. When used with --create, options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container. Obviously some of these are synonymous.
I'm trying to create RAID 5 according to this tutorial. But, when I run mdadm -C /dev/md0 -l=5 -n=4 /dev/sd[b-e]1, I get this error: mdadm: invalid raid level: =5Here's the output of mdadm -E /dev/sd[b-e]: /dev/sdb: MBR Magic : aa55 Partition[0] : 4294965247 sectors at 2048 (type fd) /dev/sdc: MBR Magic : aa55 Partition[0] : 4294965247 sectors at 2048 (type fd) /dev/sdd: MBR Magic : aa55 Partition[0] : 4294965247 sectors at 2048 (type fd) /dev/sde: MBR Magic : aa55 Partition[0] : 4294965247 sectors at 2048 (type fd)Here's the output of mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1: mdadm: No md superblock detected on /dev/sdb1. mdadm: No md superblock detected on /dev/sdc1. mdadm: No md superblock detected on /dev/sdd1. mdadm: No md superblock detected on /dev/sde1.Here's the output of fdisk -l | grep sd: Disk /dev/sda: 300.0 GB, 300000000000 bytes /dev/sda1 * 2048 626687 312320 83 Linux /dev/sda2 626688 34187263 16780288 82 Linux swap / Solaris /dev/sda3 34187264 139059199 52435968 83 Linux /dev/sda4 139059200 585936895 223438848 83 Linux Disk /dev/sdc: 8001.6 GB, 8001563222016 bytes /dev/sdc1 2048 4294967294 2147482623+ fd Linux raid autodetect Disk /dev/sdb: 8001.6 GB, 8001563222016 bytes /dev/sdb1 2048 4294967294 2147482623+ fd Linux raid autodetect Disk /dev/sdd: 24003.1 GB, 24003062267904 bytes /dev/sdd1 2048 4294967294 2147482623+ fd Linux raid autodetect Disk /dev/sde: 8001.6 GB, 8001563222016 bytes /dev/sde1 2048 4294967294 2147482623+ fd Linux raid autodetectsda is the system space, I want to use others to store data.
mdadm, invalid RAID level?
You're missing one of the three drives of the /dev/md0 RAID5 array. Therefore, mdadm will assemble the array but not run it.-R, --run Attempt to start the array even if fewer drives were given than were present last time the array was active. Normally if not all the expected drives are found and --scan is not used, then the array will be assembled but not started. With --run an attempt will be made to start it anyway.So, all you should need to do is mdadm --run /dev/md0. If you're cautious you can try mdadm --run --readonly /dev/md0 and follow that by mount -o ro,norecover /dev/md0 /mnt to check it looks ok. (The converse of --readonly is of course, --readwrite.) Once it's running you can add back a new disk. I wouldn't recommend adding your existing disk because it's getting SMART disk errors as evidenced by this recent report from your test SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed: read failure 90% 9692 2057However, if you really want to try and re-add your existing disk it's probably a very good idea to --zero-superblock on that disk first. But I'd still recommend replacing it.
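Putting it together, a sketch of starting the degraded array and adding a replacement disk (the replacement's name /dev/sdX1 is a placeholder):
mdadm --run /dev/md0                      # start the array with 2 of 3 members
mdadm --manage /dev/md0 --add /dev/sdX1   # add the new disk; the rebuild starts automatically
cat /proc/mdstat                          # watch the resync progress
# only if you insist on re-using the old disk despite its SMART errors:
# mdadm --zero-superblock /dev/sda1 && mdadm --manage /dev/md0 --add /dev/sda1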
I'm trying to repair a RAID5 array, consisting of 3 2TB disks. After working perfectly for quite some time, the computer (running Debian) suddenly wouldn't boot anymore and got stuck at a GRUB prompt. I'm pretty sure it has to do with the RAID array. Since it is difficult to give a full account of everything tried already, I will try to describe the current status. mdadm --detail /dev/md0 outputs: /dev/md0: Version : 1.2 Creation Time : Sun Mar 22 15:13:25 2015 Raid Level : raid5 Used Dev Size : 1953381888 (1862.89 GiB 2000.26 GB) Raid Devices : 3 Total Devices : 2 Persistence : Superblock is persistent Update Time : Sun Mar 22 16:18:56 2015 State : active, degraded, Not Started Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Name : ubuntu:0 (local to host ubuntu) UUID : ae2b72c0:60444678:25797b77:3695130a Events : 57Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1mdadm --examine /dev/sda1 gives: mdadm: No md superblock detected on /dev/sda1.which makes sense, because I reformatted this partition because I believed it to be the faulty one. mdadm --examine /dev/sdb1 gives: /dev/sdb1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : ae2b72c0:60444678:25797b77:3695130a Name : ubuntu:0 (local to host ubuntu) Creation Time : Sun Mar 22 15:13:25 2015 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 3906764800 (1862.89 GiB 2000.26 GB) Array Size : 3906763776 (3725.78 GiB 4000.53 GB) Used Dev Size : 3906763776 (1862.89 GiB 2000.26 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : f1817af9:1d964693:774d5d63:bfa69e3d Update Time : Sun Mar 22 16:18:56 2015 Checksum : ab7c79ae - correct Events : 57 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 1 Array State : .AA ('A' == active, '.' == missing)mdadm --detail /dev/sdc1 gives: /dev/sdc1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : ae2b72c0:60444678:25797b77:3695130a Name : ubuntu:0 (local to host ubuntu) Creation Time : Sun Mar 22 15:13:25 2015 Raid Level : raid5 Raid Devices : 3 Avail Dev Size : 3906764800 (1862.89 GiB 2000.26 GB) Array Size : 3906763776 (3725.78 GiB 4000.53 GB) Used Dev Size : 3906763776 (1862.89 GiB 2000.26 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : f076b568:007e3f9b:71a19ea2:474e5fe9 Update Time : Sun Mar 22 16:18:56 2015 Checksum : db25214 - correct Events : 57 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 2 Array State : .AA ('A' == active, '.' 
== missing)cat /proc/mdstat: Personalities : [raid6] [raid5] [raid4] md0 : inactive sdb1[1] sdc1[2] 3906764800 blocks super 1.2unused devices: <none>fdisk -l: Disk /dev/sda: 2000.4 GB, 2000398934016 bytes 81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x000d84faDevice Boot Start End Blocks Id System /dev/sda1 2048 3907029167 1953513560 fd Linux raid autodetectDisk /dev/sdb: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x000802d9Device Boot Start End Blocks Id System /dev/sdb1 * 2048 3907028991 1953513472 fd Linux raid autodetectDisk /dev/sdc: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x000a8dcaDevice Boot Start End Blocks Id System /dev/sdc1 2048 3907028991 1953513472 fd Linux raid autodetectDisk /dev/sdd: 7756 MB, 7756087296 bytes 255 heads, 63 sectors/track, 942 cylinders, total 15148608 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x128faec9Device Boot Start End Blocks Id System /dev/sdd1 * 2048 15148607 7573280 c W95 FAT32 (LBA)And of course I've tried to add the /dev/sda1 again. mdadm --manage /dev/md0 --add /dev/sda1 gives: mdadm: add new device failed for /dev/sda1 as 3: Invalid argumentIf the RAID is fixed I will probably also need getting GRUB up and running again, so it can detect the RAID/LVM and boot again. EDIT (added smartctl test results) Output of smartctl tests smartctl -a /dev/sda: smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.16.0-30-generic] (local build) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION === Model Family: Western Digital Caviar Green (AF, SATA 6Gb/s) Device Model: WDC WD20EZRX-00D8PB0 Serial Number: WD-WMC4M0760056 LU WWN Device Id: 5 0014ee 003a4a444 Firmware Version: 80.00A80 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Device is: In smartctl database [for details use: -P show] ATA Version is: ACS-2 (minor revision not indicated) SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s) Local Time is: Tue Mar 24 22:07:08 2015 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled=== START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSEDGeneral SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 121) The previous self-test completed having the read element of the test failed. Total time to complete Offline data collection: (26280) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. 
Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 266) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x7035) SCT Status supported. SCT Feature Control supported. SCT Data Table supported.SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 3401 3 Spin_Up_Time 0x0027 172 172 021 Pre-fail Always - 4375 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 59 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 087 087 000 Old_age Always - 9697 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 59 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 51 193 Load_Cycle_Count 0x0032 115 115 000 Old_age Always - 255276 194 Temperature_Celsius 0x0022 119 106 000 Old_age Always - 28 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 12 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 1 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 1SMART Error Log Version: 1 No Errors LoggedSMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed: read failure 90% 9692 2057SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
Repairing a RAID5 array
I don't normally use LVM RAID so excuse me if I reproduce your situation a bit imperfectly. So the numbers will be a bit weird. Given what would be a 3 device RAID 5 in mdadm. In LVM terms, this is called a raid5 with 2 stripes (the parity is not counted). # lvs -o +devices HDD/raidtest LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices raidtest HDD rwi-a-r--- 256.00m 100.00 raidtest_rimage_0(0),raidtest_rimage_1(0),raidtest_rimage_2(0)Growing it by one more stripe works like this: # lvconvert --stripes 3 HDD/raidtest Using default stripesize 64.00 KiB. WARNING: Adding stripes to active logical volume HDD/raidtest will grow it from 4 to 6 extents! Run "lvresize -l4 HDD/raidtest" to shrink it or use the additional capacity. Are you sure you want to add 1 images to raid5 LV HDD/raidtest? [y/n]: maybe [... this takes a while ...] Logical volume HDD/raidtest successfully converted.Things to look out for: the WARNING message should clearly state that the device is growing, not shrinking. Also I did not specify which PV to use for the extension so LVM picked it on its own. In your case this is also optional and should just work (as there are no other eligible PV) but feel free to keep specifying it so there will be no surprises. Result: # lvs -o +devices HDD/raidtest LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices raidtest HDD rwi-a-r--- 384.00m 100.00 raidtest_rimage_0(0),raidtest_rimage_1(0),raidtest_rimage_2(0),raidtest_rimage_3(0)The filesystem will not grow along in this case, you're given the choice to either do that separately or use lvresize to shrink the LV back to what it was before (just distributed to more drives now). I guess that is useful when using multiple RAID LVs side by side, instead of giving the entire disk to a single one as you seem to be doing.
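To actually use the extra space in your case, a sketch of the follow-up (assuming the LV carries ext4; run the resize only once the conversion has finished syncing):
lvconvert --stripes 3 storage/raidarray /dev/sda   # 2 -> 3 data stripes, i.e. a 4-drive raid5
resize2fs /dev/storage/raidarray                   # grow the filesystem into the new capacity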
I have an existing LVM RAID5 array on a CentOS 8 box made up of 3x4TB drives. That array is beginning to run low on space, so I have an identical 4TB drive I'd like to add into the array to increase the total space. However, when I run lvextend /dev/storage/raidarray /dev/sda, I get the following output: Converted 100%PVS into 953861 physical extents. Using stripesize of last segment 64.00 KiB Archiving volume group "storage" metadata (seqno 35). Extending logical volume storage/raidarray to <10.92 TiB Insufficient free space: 1430790 extents needed, but only 953861 availableHere is the output of pvs: PV VG Fmt Attr PSize PFree /dev/sda storage lvm2 a-- <3.64t <3.64t /dev/sdb3 cl lvm2 a-- 221.98g 0 /dev/sdc storage lvm2 a-- <3.64t 0 /dev/sdd storage lvm2 a-- <3.64t 0 /dev/sde storage lvm2 a-- <3.64t 0 /dev/sdf lvm2 --- 119.24g 119.24glvs -o +devices: LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices home cl -wi-a----- <164.11g /dev/sdb3(12800) root cl -wi-ao---- 50.00g /dev/sdb3(0) swap cl -wi-ao---- <7.88g /dev/sdb3(54811) raidarray storage rwi-aor--- <7.28t 100.00 raidarray_rimage_0(0),raidarray_rimage_1(0),raidarray_rimage_2(0)pvdisplay: --- Physical volume --- PV Name /dev/sdb3 VG Name cl PV Size 221.98 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 56827 Free PE 0 Allocated PE 56827 PV UUID MM6j63-1V3E-YWXl-61ro-f3bB-7ysd-c1DGQv--- Physical volume --- PV Name /dev/sdc VG Name storage PV Size <3.64 TiB / not usable <3.84 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 953861 Free PE 0 Allocated PE 953861 PV UUID rmqBBu-DD8U-d7WW-yzKW-R97b-1M4r-RYb1Qx--- Physical volume --- PV Name /dev/sdd VG Name storage PV Size <3.64 TiB / not usable <3.84 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 953861 Free PE 0 Allocated PE 953861 PV UUID TBn2He-cRTU-eybT-fuBM-REbO-YNfr-Ca86gU--- Physical volume --- PV Name /dev/sde VG Name storage PV Size <3.64 TiB / not usable <3.84 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 953861 Free PE 0 Allocated PE 953861 PV UUID wHZOf0-KTK9-2qLW-USl9-Gkgz-6MjV-D3gWrH--- Physical volume --- PV Name /dev/sdf VG Name storage PV Size 119.24 GiB / not usable <4.34 MiB Allocatable yes PE Size 4.00 MiB Total PE 30525 Free PE 30525 Allocated PE 0 PV UUID MWWaUJ-UC2h-YT29-bMol-fWoQ-5Chl-uKBB4O--- Physical volume --- PV Name /dev/sda VG Name storage PV Size <3.64 TiB / not usable <3.84 MiB Allocatable yes PE Size 4.00 MiB Total PE 953861 Free PE 953861 Allocated PE 0 PV UUID vzGHi9-TF42-EFx9-uLch-EioJ-DI35-RuZuJtand lsblk: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 3.7T 0 disk sdb 8:16 0 223.6G 0 disk ├─sdb1 8:17 0 600M 0 part /boot/efi ├─sdb2 8:18 0 1G 0 part /boot └─sdb3 8:19 0 222G 0 part ├─cl-root 253:0 0 50G 0 lvm / └─cl-swap 253:1 0 7.9G 0 lvm [SWAP] sdc 8:32 0 3.7T 0 disk ├─storage-raidarray_rmeta_0 253:7 0 4M 0 lvm │ └─storage-raidarray 253:14 0 7.3T 0 lvm /home └─storage-raidarray_rimage_0 253:8 0 3.7T 0 lvm └─storage-raidarray 253:14 0 7.3T 0 lvm /home sdd 8:48 0 3.7T 0 disk ├─storage-raidarray_rmeta_1 253:9 0 4M 0 lvm │ └─storage-raidarray 253:14 0 7.3T 0 lvm /home └─storage-raidarray_rimage_1 253:10 0 3.7T 0 lvm └─storage-raidarray 253:14 0 7.3T 0 lvm /home sde 8:64 0 3.7T 0 disk ├─storage-raidarray_rmeta_2 253:11 0 4M 0 lvm │ └─storage-raidarray 253:14 0 7.3T 0 lvm /home └─storage-raidarray_rimage_2 253:12 0 3.7T 0 lvm └─storage-raidarray 253:14 0 7.3T 0 lvm /home sdf 8:80 0 119.2G 0 disk sdg 8:96 1 14.8G 0 disk └─sdg1 8:97 1 14.8G 0 partI've been searching around for 
answers to this question, but can find very little written about LVM RAID; only mdadm. Is anyone aware of a way that I can extend the RAID array without purchasing additional drives and without data loss?
Grow LVM RAID5 by identical disk, not enough extents
You will have to reduce the size of whatever is stored on the md0 array first. Unfortunately you give very little information on that.If there is a plain filesystem directly on /dev/md0 then it depends on the filesystem type whether and how you can reduce its size. If there is an LVM physical volume on /dev/md0 then you first have to reduce the size of that, which in turn may mean you also have to reduce a filesystem, then the logical volume, then the volume group, then the physical volume.As you're trying to add disks to a RAID5 consisting of (slightly larger) 2TB disks, it might be easiest to first create a RAID5 with the 2 new disks, passing missing as the name of the third disk, which will create a RAID5 with one disk missing. Now copy the data over from the old RAID5 to the new RAID5. Disconnect the old RAID5 disks and verify all your data is available on the new RAID5. Then reconnect the old RAID5 disks and use mdadm --zero-superblock on the old component disks (you might need to do mdadm --stop /dev/md0 first); this wipes any information about the old RAID5. Now you can add those disks to the new RAID5.
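In command form, a rough outline of that migration (every device name and mount point here is a placeholder; verify the copy before touching the old disks):
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdX /dev/sdY missing   # degraded array on the 2 new disks
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt/new
rsync -a /mnt/old/ /mnt/new/            # copy everything off the old array
mdadm --stop /dev/md0                   # only after verifying the copy
mdadm --zero-superblock /dev/sdA /dev/sdB /dev/sdC
mdadm --manage /dev/md1 --add /dev/sdA  # fills the missing slot; further old disks can become spares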
Basically I have a RAID 5 with three disks that are 2TB each. I bought 2 extra 2TB drives; however, they are a few sectors smaller, being a newer model - the old drives are no longer sold. When I issue mdadm /dev/md0 --add /dev/sde /dev/sdf it yields:mdadm: /dev/sde not large enough to join array. Is there any way to resize the first three disks without losing data so that the slightly smaller drives can be added?
MDADM - adding a disk to RAID5 with slightly less sectors
The trade-offs are:Option 2 (2xRAID1) is more reliable, presuming the disks are of equal reliability. Basically, we assume there is a N%/yr (or whatever time period) of an individual disk failing. If you have two disks, the chance of any one of them failing is greater. If you have three disks, the chance of any one of them is greater still. So the 3-disk RAID5 is more likely to experience a failure (of a single disk). Either array will survive a single-disk failure. After a single failure, the RAID5 still has more disks, so it's more likely to suffer a second failure. Though that depends as well on rebuild time, which is probably a bit better for the 2TB disks, though it depends on the speed of the disks as well. However, with the absence of hot spares, I expect rebuild time is actually dominated by the time it takes the admin to install a replacement disk. Option 1 (3xRAID5) will have better read performance for single files (due to striping). Probably worse write performance, but it depends. For multiple files, RAID1 can read from both disks. Option 2 (RAID1) has a simpler "geometry" (how the data is laid out on the disks). If for some reason you had to recover data from it without access the RAID software (this is more likely in the case of hardware RAID—e.g., if the controller breaks), it'd be easier. Normal management for both options should be the same. You'd typically use the same commands to replace failed drives, start and stop the array, etc.There is another option you didn't mention: 3xRAID1. You can put 3 disks in RAID1. This means even after losing a disk, you're still fully redundant—so, e.g., a (previously) undetected bad sector doesn't mean data loss on rebuild. Writes may be a little slower (due to the additional mirror). Cost is the main downside. Another way to increase durability of the data is to have a cold spare (a drive sitting on a shelf somewhere, ready to be installed if one of the active drives fails). This means you aren't waiting several days for a replacement drive to arrive. There are also filesystem options, if those are supported (e.g., both ZFS and btrfs support mirroring data). As far as the operating system, I'd install it on the array unless that's impossible. E.g., on Linux x86-64, I'd have a separate /boot (or /boot/efi for EFI machines) array, which would be a small RAID1 across all disks. Once you have a kernel and initramfs loaded (acutally, once you have grub2 loaded), you can use the full selection of RAID levels, logical volumes, etc. Finally, remember that RAID is not a substitute for backups. For example, if a machine gets infected with ransomware, it'll encrypt and delete all your files—and the RAID software will faithfully replicate that destruction to as many disks as you give it. Same with accidental deletions, bugs causing filesystem corruption, etc. And it won't stop natural or man-made disasters from taking out the whole server.
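As an illustration of that layout, a sketch of the small mirrored /boot plus the main array (partition names are placeholders; metadata 1.0 keeps the RAID superblock at the end so older bootloaders can read the members like plain partitions):
mdadm --create /dev/md0 --level=1 --raid-devices=3 --metadata=1.0 /dev/sdX1 /dev/sdY1 /dev/sdZ1   # small /boot mirror across all disks
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdX2 /dev/sdY2 /dev/sdZ2                  # main OS/data array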
I want to install a Linux server in my house to be able to share documents and multimedia files between all the devices here. The machine I have has 3 HDD slots, I need at least 3.5 TB of storage, and I want my files to be safe in case of a disk failure. I'm currently investigating two options that are around the same price at the moment:Option 1: 3*2TB in RAID 5 Option 2: 2*4TB in RAID 1, which saves an HDD slot that I can use for the OS.My first question is: are there technical advantages to either option, and what should I consider when choosing between them? And my second question is: in the case of Option 1, where should I install the system? Should I create a 50GB RAID volume replicated on all disks to hold the system, or should I locate it on one specific drive with no replication? And what about the swap?
Building a network storage server, trade-offs of different RAID configurations?
The short answer is: yes, it is possible. Linux software RAID writes metadata on the devices so that you can easily plug them into another system (using another controller and so on) and use them there. Before doing any assembling, you can query the devices (their status, what part of which RAID Linux thinks each device was, etc.). When you are using USB adapters, be aware that you can't query SMART information over them (or use other more sophisticated ATA commands). To be safe you should have a backup available. If you don't have a recent one, you can copy the member devices of your RAID5 via dd before issuing any mdadm commands.
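A sketch of those non-destructive steps (device and file names are placeholders):
mdadm --examine /dev/sdX1          # superblock info: array UUID, device role, event count
mdadm --examine --scan             # what arrays mdadm can piece together from the visible devices
dd if=/dev/sdX of=/backup/sdX.img bs=1M conv=noerror,sync   # raw safety copy of a member first
mdadm --assemble --scan            # assembles by UUID, so changed device names/numbers don't matter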
I had a RAID 5 (Linux software RAID) server which recently decided not to boot anymore. It seems to be a motherboard problem possibly due to not being protected by a UPS during a power failure. It managed to extend its slow death to the point of taking up plenty of my time without much reward but I know that the drives are fine. Is it possible to rebuild the RAID array even though the drives will have different numbers by plugging it into a machine over USB adapters? Is it possible any damage can be done to the array in the process by attempting to rebuild the array with the wrong drive numbers? I don't want to do a mdadm assemble --force unless I know it's safe. Is there something I can do to investigate if it will be safe to force the assembly?
is it possible to recover a raid 5 array by using usb enclosures?
Maybe it will help somebody. I didn't write it before, but all four partitions had nearly the same event count: mdadm --examine /dev/sd[a-z]1 | egrep 'Event|/dev/sd' mdadm: No md superblock detected on /dev/sda1. Events : 315786 Events : 315786 Events : 315784 Events : 315786Still, after some reading I decided to remove the "failed" drive and re-assemble my md0 device. mdadm --manage --set-faulty /dev/md0 /dev/sdd1 mdadm /dev/md0 --stop mdadm --assemble /dev/md0 /dev/sd[bce]1 --force mdadm --manage /dev/md0 --add /dev/sdd1Please don't ask me why it worked. The important part for me is that I got back all the files (the file allocation table shows the proper content of the directories; all missing files are there).
I had to open my file-server's housing today to replace a faulty fan. What I didn't see was that one of the sata-cables was not properly connected. The 1st thing I did after a reboot was a check of the RAID status and it showed immediately that one drive is missing. Till this moment the device was not used (however it was mounted, so I'm not 100% sure that system did nothing). I stopped md0 and re-plugged in the cable: mdadm --stop /dev/md0 poweroffAfter another reboot I checked the removed drive: mdadm --examine /dev/sdd1 ... Checksum : 3276bc1d - correct Events : 315782 Layout : left-symmetric Chunk Size : 32K Number Major Minor RaidDevice State this 0 8 49 0 active sync /dev/sdd1 0 0 8 49 0 active sync /dev/sdd1 1 1 8 65 1 active sync /dev/sde1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 17 3 active sync /dev/sdb1I was a bit surprised that it was shown as active (even if earlier mdadm said that this device was removed from array) and the checksum was OK. I recreated RAID with: mdadm --assemble /dev/md0 --scanThe command mdadm --detail /dev/md0 showed that all drives were running and system was in "clean" state. I mounted the device md0 and then came hic-cup. I wanted to work on one of the last files that I had been using before all the situation and it was not there. In another place I missed actually all files from the directory where I was working. As far as I can see most of the files that are older than a few days are intact but some newer ones are missing. Now the big question: what would be your advice? Is there a way to get this data? I thought about removing the drive that was earlier labeled by mdadm and rebuild array with another HDD. UPDATE I started to back-up the drives today. After mounting md0 as read-only I run rsync to another server. Now curious thing. I moved a week ago some directories to other array. rsyns has shown following info on these removed dirs: file has vanished: "/MD0/Data/_NMR_"
missing files after reassemble of RAID-5
If you plan to expand the array to increase its capacity, then yes, the addition will be followed by a reshape operation, and it will require a complete rewrite of all component devices, including the new one. However, an occasional full rewrite should not concern you much: you're not going to reshape an array every day, or even every month, are you? Also, note that at least one full write is inevitable anyway, since it is required when you create a RAID5 array.
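For reference, a sketch of what that later expansion looks like (the device names are placeholders):
mdadm --manage /dev/md0 --add /dev/sdX1   # the new SSD joins as a spare
mdadm --grow /dev/md0 --raid-devices=6    # reshape 5 -> 6 devices; this rewrites every member once
cat /proc/mdstat                          # reshape progress; grow the filesystem afterwards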
I’m going to build a new storage array with a few SSDs in RAID5 under Linux's mdadm. I am considering buying 5 or 6 SSDs now. If I add another SSD in the future, does adding it to the RAID5 require a full write on all SSDs in use by then? SSDs have a limited write endurance, and this would be a small argument for buying more free space now. I am not sure whether data must be rearranged in order to grow the array.
Does adding 1 drive to an mdadm RAID5 with SSDs require writing all disks once?
/sbin/fsck /dev/md0 failed because you don’t have /sbin on your PATH, so fsck couldn’t find fsck.ext4. Running /sbin/fsck.ext4 directly works, as would adding sbin to the PATH: PATH="${PATH}:/sbin" /sbin/fsck /dev/md0
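A quick way to confirm that diagnosis (a sketch; typical paths assumed):
type -a fsck.ext4 || ls -l /sbin/fsck.ext4   # is the helper installed, and is it on $PATH?
echo "$PATH"                                 # /sbin and /usr/sbin should normally appear here for root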
I have a raid5 which is inaccessible after unexpeted power outage. Details: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] md0 : active raid5 sdd[3] sdc[1] sdb[0] 3906766848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] bitmap: 0/15 pages [0KB], 65536KB chunkunused devices: <none># /sbin/mdadm --misc --test /dev/md0 # echo $? 0# /sbin/mdadm --misc --detail /dev/md0 /dev/md0: Version : 1.2 Creation Time : Wed May 29 18:43:39 2019 Raid Level : raid5 Array Size : 3906766848 (3725.78 GiB 4000.53 GB) Used Dev Size : 1953383424 (1862.89 GiB 2000.26 GB) Raid Devices : 3 Total Devices : 3 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Thu Mar 19 19:02:08 2020 State : clean Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512KConsistency Policy : unknown Name : Serwer:0 (local to host Serwer) UUID : 84a16ed7:c54e6e9f:f3ae512c:413b3a28 Events : 4368 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 3 8 48 2 active sync /dev/sdd# ls -l /dev/md0 brw-rw---- 1 root disk 9, 0 mar 19 19:00 /dev/md0However: /sbin/fsck /dev/md0 fsck from util-linux 2.33.1 fsck: error 2 (No such file or directory) while executing fsck.ext4 for /dev/md0# mount /dev/md0 /media/storage/ mount: /media/storage: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
fsck: error 2 (No such file or directory) while executing fsck.ext4 for /dev/md0
So... /dev/sdb1 hasn't been active in this array since 2015 (Update Time). The data on it should be outdated to the point of uselessness. Essentially you've been running a RAID-0 ever since. That leaves you the three other devices /dev/sd{c,d,e}1. Out of these, /dev/sdd1 failed recently. Since you already lost redundancy years ago, this failure effectively stopped your RAID from working at all. Now it depends. Are these three drives still readable? Then you can probably recover data. Otherwise, it's game over. So check smartctl -a. If any drives have bad or reallocated sectors, use ddrescue to copy them to a new drive. If the drives are intact, given a recent enough kernel (4.10+) and mdadm (v4.x), you can probably assemble it like such: mdadm --stop /dev/md0 mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1(There was a bug with assemble force in older versions, I'm not sure exactly which version though.) ...and if that doesn't work, you're left with mdadm --create but this is a path wrought with danger, see also https://unix.stackexchange.com/a/131927/30851
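If a disk does show bad or reallocated sectors, a sketch of the rescue-copy step (source, destination and map file names are placeholders; the destination must be at least as large as the source):
smartctl -a /dev/sdX | grep -i -E 'reallocated|pending'   # quick per-disk health check
ddrescue -f -n /dev/sdX /dev/sdY /root/sdX.map            # fast first pass, skipping problem areas
ddrescue -f -r3 /dev/sdX /dev/sdY /root/sdX.map           # then retry the remaining bad spots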
I'm looking to recover data from 4 old HDDs set up in a software raid5 and it looks like a disk has failed. What I want to do is recover the raid so I can copy its data somewhere else. I have done some research and I believe I want to use mdadm to perform a resync, but at the end of the day I do not want to mess it up and I would appreciate if someone could explain what needs to be done in order to get that data to safety. Also I am on ubuntu 16.04, here is what I see when I run mdadm --detail /dev/md0 /dev/md0: Version : 1.1 Creation Time : Thu Feb 13 09:03:27 2014 Raid Level : raid5 Array Size : 4395016704 (4191.41 GiB 4500.50 GB) Used Dev Size : 1465005568 (1397.14 GiB 1500.17 GB) Raid Devices : 4 Total Devices : 3 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Sun Dec 23 12:51:56 2018 State : clean, FAILED Active Devices : 2 Working Devices : 2 Failed Devices : 1 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Name : CentOS-01:0 UUID : 1cf7d605:8b0ef6c5:bccd8c1e:3e841f24 Events : 4178728 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 33 1 active sync /dev/sdc1 2 8 65 2 active sync /dev/sde1 6 0 0 6 removed 0 8 49 - faulty /dev/sdd1Also, I ran mdadm --examine on each device: /dev/sdb1: Magic : a92b4efc Version : 1.1 Feature Map : 0x1 Array UUID : 1cf7d605:8b0ef6c5:bccd8c1e:3e841f24 Name : CentOS-01:0 Creation Time : Thu Feb 13 09:03:27 2014 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 2930012160 (1397.14 GiB 1500.17 GB) Array Size : 4395016704 (4191.41 GiB 4500.50 GB) Used Dev Size : 2930011136 (1397.14 GiB 1500.17 GB) Data Offset : 262144 sectors Super Offset : 0 sectors Unused Space : before=262072 sectors, after=1024 sectors State : clean Device UUID : 252a74c1:fae726d9:179963f2:e4694a65Internal Bitmap : 8 sectors from superblock Update Time : Sun Mar 15 07:05:19 2015 Checksum : 53cae08e - correct Events : 130380 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 3 Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdc1: Magic : a92b4efc Version : 1.1 Feature Map : 0x1 Array UUID : 1cf7d605:8b0ef6c5:bccd8c1e:3e841f24 Name : CentOS-01:0 Creation Time : Thu Feb 13 09:03:27 2014 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 2930012160 (1397.14 GiB 1500.17 GB) Array Size : 4395016704 (4191.41 GiB 4500.50 GB) Used Dev Size : 2930011136 (1397.14 GiB 1500.17 GB) Data Offset : 262144 sectors Super Offset : 0 sectors Unused Space : before=262072 sectors, after=1024 sectors State : clean Device UUID : dc8c18bd:e92ba6d3:b303ee86:01bd6451Internal Bitmap : 8 sectors from superblock Update Time : Sun Dec 23 14:18:53 2018 Checksum : d1ed82ce - correct Events : 4178730 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 1 Array State : .AA. ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdd1: Magic : a92b4efc Version : 1.1 Feature Map : 0x1 Array UUID : 1cf7d605:8b0ef6c5:bccd8c1e:3e841f24 Name : CentOS-01:0 Creation Time : Thu Feb 13 09:03:27 2014 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 2930012160 (1397.14 GiB 1500.17 GB) Array Size : 4395016704 (4191.41 GiB 4500.50 GB) Used Dev Size : 2930011136 (1397.14 GiB 1500.17 GB) Data Offset : 262144 sectors Super Offset : 0 sectors Unused Space : before=262072 sectors, after=1024 sectors State : active Device UUID : 03a2de27:7993c129:23762f07:f4ba7ff8Internal Bitmap : 8 sectors from superblock Update Time : Sun Dec 23 12:48:03 2018 Checksum : ba2a5a95 - correct Events : 4178721 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 0 Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing) /dev/sde1: Magic : a92b4efc Version : 1.1 Feature Map : 0x1 Array UUID : 1cf7d605:8b0ef6c5:bccd8c1e:3e841f24 Name : CentOS-01:0 Creation Time : Thu Feb 13 09:03:27 2014 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 2930012160 (1397.14 GiB 1500.17 GB) Array Size : 4395016704 (4191.41 GiB 4500.50 GB) Used Dev Size : 2930011136 (1397.14 GiB 1500.17 GB) Data Offset : 262144 sectors Super Offset : 0 sectors Unused Space : before=262072 sectors, after=1024 sectors State : clean Device UUID : c00a8798:51804c50:3fe76211:8aafd9b1Internal Bitmap : 8 sectors from superblock Update Time : Sun Dec 23 14:18:53 2018 Checksum : 14ec2b30 - correct Events : 4178730 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 2 Array State : .AA. ('A' == active, '.' == missing, 'R' == replacing)EDIT: Following @frostschutz advice, I have run: server:~$ sudo mdadm --stop /dev/md0That successfully stopped the raid. After that I ran: server:~$ sudo mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1` mdadm: forcing event count in /dev/sdd1(0) from 4178721 upto 4178730 mdadm: Marking array /dev/md0 as 'clean' mdadm: /dev/md0 assembled from 3 drives - not enough to start the array.That didn't look so good, but I still tried the following: server:~$ sudo mdadm --assemble --scan mdadm: /dev/md/0 has been started with 3 drives (out of 4).After that the array is now 'Active,Degraded' with 3 disks in 'active sync' and the last one removed. I am pleased to report that I have successfully started to copy the data into a safer place (at least so far the rsync command shows no error message, but I guess we shall see).
Recovering a Failed Software Raid5
6 TB would be (4 - 1) * 2 TB, where 4 is the number of your devices, minus one for parity, and 2 TB is the size of the partitions you seem to have. Assuming that first output is from the fdisk utility, the fields are probably
partition name  start  end         length       type
/dev/sdc1       2048   4294967294  2147482623+  fd  Linux raid autodetect
In units of 512-byte sectors, the partition is 2 TB from start to end. (The + at the end of the length field seems to hint that the actual length is greater than shown, so I ignored that field.) My fdisk utility shows the size of the partition in human units too, but 2 TB is the limit of what an old-style MBR partition table can provide, so check that you haven't used that instead of GPT. Some older versions of fdisk might not know about GPT partition tables, so you may need to use other tools (or get a newer version). You don't actually even need to use partitions; you can just run mdadm on /dev/sd[bcde]. But note that because of the RAID-5 layout, the smallest drive (or partition) sets the size of the array, so a single larger disk gets partly wasted.
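If it does turn out that an MBR label is capping the partitions at 2 TB, re-labelling as GPT lets a partition span the whole disk. A minimal sketch, assuming /dev/sdb from the question and that its current contents can be discarded (this wipes the existing partition table):
parted /dev/sdb --script mklabel gpt mkpart primary 0% 100% set 1 raid on
# repeat for the other member disks, then recreate the array on the new, full-size partitions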
I'm trying to create raid 5 from four disks: Disk /dev/sdc: 8001.6 GB, 8001563222016 bytes /dev/sdc1 2048 4294967294 2147482623+ fd Linux raid autodetect Disk /dev/sdb: 8001.6 GB, 8001563222016 bytes /dev/sdb1 2048 4294967294 2147482623+ fd Linux raid autodetect Disk /dev/sdd: 24003.1 GB, 24003062267904 bytes /dev/sdd1 2048 4294967294 2147482623+ fd Linux raid autodetect Disk /dev/sde: 8001.6 GB, 8001563222016 bytes /dev/sde1 2048 4294967294 2147482623+ fd Linux raid autodetectBut, I just got 6T space (one of my disk) after creating: /dev/md0 ext4 6.0T 184M 5.7T 1% /mnt/raid5Here's other information of my creating process: Results of mdadm -E /dev/sd[b-e]1: /dev/sdb1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d Name : node7:0 (local to host node7) Creation Time : Fri Sep 7 09:16:42 2018 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB) Array Size : 6442053120 (6143.62 GiB 6596.66 GB) Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : 2fcb3346:9ed69eab:64c6f851:0bcc39c4 Update Time : Fri Sep 7 13:17:38 2018 Checksum : c701ff7e - correct Events : 18 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 0 Array State : AAAA ('A' == active, '.' == missing) /dev/sdc1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d Name : node7:0 (local to host node7) Creation Time : Fri Sep 7 09:16:42 2018 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB) Array Size : 6442053120 (6143.62 GiB 6596.66 GB) Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : 6f13c9f0:de2d4c6f:cbac6b87:67bc483e Update Time : Fri Sep 7 13:17:38 2018 Checksum : e4c675c2 - correct Events : 18 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 1 Array State : AAAA ('A' == active, '.' == missing) /dev/sdd1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d Name : node7:0 (local to host node7) Creation Time : Fri Sep 7 09:16:42 2018 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB) Array Size : 6442053120 (6143.62 GiB 6596.66 GB) Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : 4dab38e6:94c5052b:06d6b6b0:34a41472 Update Time : Fri Sep 7 13:17:38 2018 Checksum : f306b65f - correct Events : 18 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 2 Array State : AAAA ('A' == active, '.' == missing) /dev/sde1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d Name : node7:0 (local to host node7) Creation Time : Fri Sep 7 09:16:42 2018 Raid Level : raid5 Raid Devices : 4 Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB) Array Size : 6442053120 (6143.62 GiB 6596.66 GB) Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : b04d152e:0448fe56:3b22a2d6:b2504d26 Update Time : Fri Sep 7 13:17:38 2018 Checksum : 40ffd3e7 - correct Events : 18 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 3 Array State : AAAA ('A' == active, '.' 
== missing)Results of mdadm --detail /dev/md0: /dev/md0: Version : 1.2 Creation Time : Fri Sep 7 09:16:42 2018 Raid Level : raid5 Array Size : 6442053120 (6143.62 GiB 6596.66 GB) Used Dev Size : 2147351040 (2047.87 GiB 2198.89 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Fri Sep 7 13:17:38 2018 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Name : node7:0 (local to host node7) UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d Events : 18 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 33 1 active sync /dev/sdc1 2 8 49 2 active sync /dev/sdd1 4 8 65 3 active sync /dev/sde1Results of mkfs.ext4 /dev/md0 mke2fs 1.41.9 (22-Aug-2009) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) 402628608 inodes, 1610513280 blocks 80525664 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=4294967296 49149 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544Writing inode tables: done Creating journal (32768 blocks): done Writing superblocks and filesystem accounting information: doneThis filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.Then mkdir /mnt/raid5 and mount /dev/md0 /mnt/raid5/.
Why isn't the space of Raid 5 array equal to sum of disks?
In RAID-5, unless your write was large enough to cover all data chunks for a given parity chunk, it has to read the missing data chunks in order to be able to recalculate and update parity. Thus a relatively small write on a RAID-5 can turn into a large read operation. RAID-1 does not need such additional reads, as there is no parity, it just writes to all disks directly. So it's possible that RAID-1 (or RAID-10) is faster for random small writes. Even so it's hard to tell what's faster or slower overall. It's best if you benchmark it yourself directly for your specific use case scenario.
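For a concrete comparison, a small random-write benchmark run against each candidate array answers it for your workload; a sketch using fio (the directory and sizes are placeholders, adjust to your setup):
fio --name=randwrite --directory=/mnt/test --rw=randwrite --bs=4k --size=1G \
    --numjobs=4 --iodepth=16 --ioengine=libaio --direct=1 --group_reporting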
Is it slower to write one block in RAID Level 5 with 5 disks than RAID 1 with two disks (mirroring only)? I think No since with Level 5 you are writing the data and the parity (two writes). With Level 1, you are writing twice as well (original as well as mirror). Can someone let me know if my train of thought is correct?
RAID Level 5 versus Level 1
You could simply do a 'rsync' from the old disk to the new disk. Something like this should work: rsync -rtv "/mnt/old_drive" "/mnt/new_drive"You would replace the above locations with the mount points of your disks on your computer. To find where your disks are mounted to or what device they are showing up as you can use: df -hand... cat /proc/diskstats'df' will tell you where the disks are mounted and 'cat /proc...' will tell you what devices the disks are connected as in case you have to mount them manually or add them to fstab.
I have one 1.5TB HDD with Debian Wheezy. I also have four empty 2TB HDDs. I want to configure the four 2TB disks in RAID5 and install LVM on top, then migrate the content of the 1.5TB disk onto the RAID5. Before starting, I would like to know whether the migration is possible and, if yes, how to do it.
How to migrate the content of one disk on a Raid5?
RAID should only resync after a server crash or replacing a failed disk. It's always recommended to use a UPS and set the system to shut down on low battery so that a resync won't be required on reboot. NUT or apcupsd can talk to many UPSes and initiate a shutdown before the UPS is drained. If the server is resyncing outside of a crash, you probably have a hardware issue. Check the kernel log at /var/log/kern.log or by running dmesg. I also recommend setting up mdadm to email the administrator, and running smartd on all disk drives, similarly set up to email the administrator. I receive an email about half the time before I see a failed disk. If you are having unavoidable crashes, you should enable a write-intent bitmap on the RAID. This keeps a journal of where the disk is being written to and avoids a full re-sync on reboot. Enable it with:
mdadm -G /dev/md0 --bitmap=internal
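A minimal sketch of the mail notifications mentioned above (the address is a placeholder and the config paths differ slightly between distributions):
# /etc/mdadm.conf (Debian-style systems use /etc/mdadm/mdadm.conf)
MAILADDR admin@example.com
# send a test mail for every array to confirm delivery works
mdadm --monitor --scan --test --oneshot
# /etc/smartd.conf: monitor all drives and mail on problems
DEVICESCAN -a -m admin@example.com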
I have a CentOS server with RAID5. Every time RAID5 re-syncs my server stop working. The hosting company stopped the httpd service so that RAID5 can re-sync itself, a process which can take as long as 3-4 hours. The problem reoccurs frequently, so the Hosting company swaped my server hardware and I migrated to new hardware. I still have this problem (on the new server). Is this something normal in RAID5? How can we solve this issue permanently? If every time RAID5 wants to re-sync my server overloads and my website will not be accessible, then RAID5 sucks. I would really appreciate if you can suggest a solution for this disaster. Here is /proc/mdstat report: root@host [~]# watch 'cat /proc/mdstat' Every 2.0s: cat /proc/mdstat Mon May 9 01:25:30 2011Personalities : [raid1] md0 : active raid1 xvda1[0] xvdb1[1] 104320 blocks [2/2] [UU]md1 : active raid1 xvda2[0] xvdb2[1] 2096384 blocks [2/2] [UU]md2 : active raid1 xvda5[0] xvdb5[1] 484086528 blocks [2/2] [UU] [=====>...............] resync = 29.5% (142978880/484086528) finish=77.7m in speed=73108K/secunused devices: <none>
Problem with RAID5
It turns out that the RAID subsystem has actually got only three disks instead of four. 3x 3TB in RAID5 gives 6TB usable space, and now the numbers add up and match as expected. This can be seen with the command line utilities
cat /proc/mdstat        # display the makeup of the RAID arrays
df /path/to/mountpoint  # show disk used/free/available for the filesystem
I run a mdadm RAID 5 array of 4 3TB disks on Ubuntu 18.04, with a total size of 9TB. It displays as a 9.0 TB RAID-5 Array in gnome disk utilities, and the usage is 799 GB free, which means I have over 8T data. I then bought a NAS and start copying data from RAID to a new disk. The copy was completed after 4 hours with no error. After the copy, the new disk is only 5.2TB used. When I use gnome-files to count the total number/size of files on the RAID-5 array, it shows that the free space is only 498.9 GB, which means I have 9T - 500G = 8.5T data? I finally use du on both the RAID and the nas disk. The result shows both are 4.7T. Why are there discrepancies and which number is correct?
What is a good way to know mdadm RAID 5 array used size
Ext4 is limited to 16 TiB on 32-bit systems, and your ~22 TiB array exceeds that limit, which is why mkfs.ext4 fails partway through.
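One way to confirm and work around it, sketched under the assumption that you stay on a 32-bit Pi (a 64-bit OS or a different layout are the other options): keep each filesystem below 16 TiB by partitioning the md device.
uname -m            # armv7l here means a 32-bit kernel
getconf LONG_BIT
# split the ~22 TiB array into two partitions under 16 TiB and format each separately
sudo parted /dev/md0 --script mklabel gpt mkpart primary ext4 0% 50% mkpart primary ext4 50% 100%
sudo mkfs.ext4 /dev/md0p1
sudo mkfs.ext4 /dev/md0p2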
I connected a 4-bay enclosure with 4 new 8TB disks to my rPi. This disks appeared in lsblk as sda to sdd. Following this tutorial I created the array doing mdadm -C /dev/md0 --level=raid5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sddThe array started building itself without issue and after 2 days (!) it was done. The output of mdadm --detail /dev/md0 is as follows /dev/md0: Version : 1.2 Creation Time : Mon Nov 4 15:15:37 2019 Raid Level : raid5 Array Size : 23441682432 (22355.73 GiB 24004.28 GB) Used Dev Size : 7813894144 (7451.91 GiB 8001.43 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Thu Nov 7 01:24:44 2019 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Consistency Policy : bitmap Name : raspberrypi:0 (local to host raspberrypi) UUID : 05442287:97e027c2:1ddf5e37:c16c1428 Events : 35568 Number Major Minor RaidDevice State 0 8 0 0 active sync /dev/sda 1 8 16 1 active sync /dev/sdb 2 8 32 2 active sync /dev/sdc 4 8 48 3 active sync /dev/sddThe output for cat /proc/mdstat is Personalities : [raid6] [raid5] [raid4] md0 : active raid5 sdd[4] sdc[2] sdb[1] sda[0] 23441682432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU] bitmap: 0/59 pages [0KB], 65536KB chunkThis is my first time using mdadm, but I can't see anything out of place in this statuses. however, when I try to do sudo mkfs.ext4 /dev/md0 I get the following error mke2fs 1.44.5 (15-Dec-2018) Creating filesystem with 5860420608 4k blocks and 366276608 inodes Filesystem UUID: 30b59932-d4ca-47f9-9b58-50cb28fc579c Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632, 2560000000, 3855122432, 5804752896Allocating group tables: done Writing inode tables: done Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read while trying to create journalI'm really at a loss here, and google isn't helping either. Any ideas are greatly appreciated.
Can't create filesystem on a freshly made raid5 using mdadm
As I said in my question, I've done this before and have a capture log of what I did to accomplish it two years ago. For some reason the identical lvcreate command didn't work. To get this LV created I had to specify the number of stripes using -i 3. So, the working command was:
lvcreate -i 3 --type raid5 -L 8.18T -n lv_sklad02 vg_sklad02
I guess something changed in updates to the LVM tools?
UPDATE They did indeed make a change to LVM2. From rpm -q --changelog lvm2
* Fri Jul 29 2016 Peter Rajnoha <[emailprotected]> - 7:2.02.162-1
<...>
- Add allocation/raid_stripe_all_devices to reinstate previous behaviour.
- Create raid stripes across fixed small numbers of PVs instead of all PVs.
<...>
Nice to know I wasn't completely insane. :-) I RTFM'd, but not the right FM I guess. :-))
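To double-check what you ended up with, lvs can report the stripe count per LV (field names may differ slightly between LVM versions, and the VG/LV names below are the ones from this setup):
lvs -o lv_name,segtype,stripes,stripesize vg_sklad02/lv_sklad02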
I'm having an issue with LVM RAID 5 not allowing me to create a LV that uses the space on all four drives in the VG. What is particulary annoying is that I create this very same VG/LV using the same model of drives two years ago and I don't recall having this problem. Here's the output of pvs and vgs before I attempt to create the RAID 5 LV: Output of pvs: PV VG Fmt Attr PSize PFree /dev/sda1 vg_sklad02 lvm2 a-- 2.73t 2.73t /dev/sdb1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sdc1 vg_sklad02 lvm2 a-- 2.73t 2.73t /dev/sdd1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sde1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sdf1 vg_sklad02 lvm2 a-- 2.73t 2.73t /dev/sdg1 vg_sklad02 lvm2 a-- 2.73t 2.73t /dev/sdh1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sdi2 vg_bootdisk lvm2 a-- 118.75g 40.00m /dev/sdj2 vg_bootdisk lvm2 a-- 118.75g 40.00mOutput of vgs: VG #PV #LV #SN Attr VSize VFree vg_bootdisk 2 2 0 wz--n- 237.50g 80.00m vg_sklad01 4 1 0 wz--n- 10.92t 0 vg_sklad02 4 0 0 wz--n- 10.92t 10.92tThe command I used last time to create LV using the same model drives on the same system is: lvcreate --type raid5 -L 8.18T -n lv_sklad01 vg_sklad01When I issue this same command changing the VG and LV target names I get: lvcreate --type raid5 -L 8.18T -n lv_sklad02 vg_sklad02Using default stripesize 64.00 KiB. Rounding up size to full physical extent 8.18 TiB Insufficient free space: 3216510 extents needed, but only 2861584 availableThis doesn't make sense as I have four drives with a capacity of 2.73T. 4 * 2.73 = 10.92. Subtracting one for parity gives me 8.19T, which is the size of the original LV I have on this system. Banging. My. Head. Against. Monitor. :? Grasping at straws, I also tried: [root@sklad ~]# lvcreate --type raid5 -l 100%VG -n lv_sklad02 vg_sklad02 Using default stripesize 64.00 KiB. Logical volume "lv_sklad02" created.This results in a LV 2/3 the size I expect. Output from lvs: LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert lv_root vg_bootdisk rwi-aor--- 102.70g 100.00 lv_swap vg_bootdisk rwi-aor--- 16.00g 100.00 lv_sklad01 vg_sklad01 rwi-aor--- 8.19t 100.00 lv_sklad02 vg_sklad02 rwi-a-r--- 5.46t 0.18After issuing the above lvcreate command the output of pvs, vgs, and lvs are as follows: [root@sklad ~]# pvs PV VG Fmt Attr PSize PFree /dev/sda1 vg_sklad02 lvm2 a-- 2.73t 0 /dev/sdb1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sdc1 vg_sklad02 lvm2 a-- 2.73t 0 /dev/sdd1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sde1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sdf1 vg_sklad02 lvm2 a-- 2.73t 0 /dev/sdg1 vg_sklad02 lvm2 a-- 2.73t 2.73t /dev/sdh1 vg_sklad01 lvm2 a-- 2.73t 0 /dev/sdi2 vg_bootdisk lvm2 a-- 118.75g 40.00m /dev/sdj2 vg_bootdisk lvm2 a-- 118.75g 40.00m[root@sklad ~]# vgs VG #PV #LV #SN Attr VSize VFree vg_bootdisk 2 2 0 wz--n- 237.50g 80.00m vg_sklad01 4 1 0 wz--n- 10.92t 0 vg_sklad02 4 1 0 wz--n- 10.92t 2.73t[root@sklad ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert lv_root vg_bootdisk rwi-aor--- 102.70g 100.00 lv_swap vg_bootdisk rwi-aor--- 16.00g 100.00 lv_sklad01 vg_sklad01 rwi-aor--- 8.19t 100.00 lv_sklad02 vg_sklad02 rwi-a-r--- 5.46t 2.31 For some reason there is unallocated space in vg_sklad02 (the VG I'm working on). Shouldn't the -l 100%VG used all available space in the VG? LV lv_sklad01 and lv_sklad02 should be the same size as they are created from the same drives, and as far as I recall I attempted to use the same create command. Does anyone have any suggestions as to what I'm doing wrong?
LVM RAID 5 not resulting in logical volume size expected
Ubuntu can run on RAM, but it requires some manual changes: https://wiki.ubuntu.com/BootToRAM
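If the goal is just a live session, there is also a shortcut worth knowing (hedged: the exact behaviour depends on the casper version shipped with the Ubuntu release): adding toram to the kernel command line of the live USB copies the squashfs into RAM, so the stick can be removed once booted.
# at the live USB's boot menu, press 'e' and append 'toram' to the linux line, e.g.
linux /casper/vmlinuz boot=casper toram quiet splash ---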
I have 32 GB of memory in my PC. This is more than enough for a linux OS. Is there an easy to use version of Linux (Ubuntu preferably) that can be booted via optical or USB disk and be run completely within RAM? I know a live disc can be booted with a hard disk, but stuff still runs off the disc and this takes a while to load. I'd like everything loaded into RAM and then run from there, completely volatile. Any files I need to create would be saved to a USB disk. I'm aware of http://en.wikipedia.org/wiki/List_of_Linux_distributions_that_run_from_RAM but these all depend on a little bit of RAM. I'd prefer something like Ubuntu instead of these light versions.
Is there a linux OS that can be loaded entirely into RAM?
I am not an expert on device drivers, however here are some pointers for your R&D:
if memory is marked as "reserved", the OS cannot touch it; you will have to find a way to either have the BIOS mark it as available to the OS, or use direct low-level ioctls to control it
if Linux could see the memory, you still would not have an easy way to prevent Linux from using it as any other block of RAM; an attempt could be tried by marking such RAM as "bad" and then modifying the kernel to still make a special use out of it (please check kernel documentation regarding this, it has changed a lot since last time I hacked into it and it's evolving at a great speed)
considering the above as a preliminary (and non-definitive nor exhaustive) feasibility study, I would say writing your ramdisk blockdevice driver is the most sane option in your case, and perhaps you should contribute it back to the Linux kernel and/or team up with people already trying this (perhaps a better place for this question is the Linux Kernel Mailing list, if you haven't yet posted there)
Some other relevant sources:
Current ramdisk driver
A somewhat old (2005) document about block drivers
A simple block driver for Linux Kernel 2.6.31 (careful: a lot has changed in time)
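An aside for readers on later kernels (this assumes a kernel new enough to carry the pmem driver, which did not exist when this was written): a reserved e820 range can be exposed as a block device without writing a driver, via the memmap kernel parameter.
# kernel command line (e.g. via GRUB_CMDLINE_LINUX): claim 8 GiB starting at the 6 GiB physical address
memmap=8G!6G
# after reboot the range should appear as a block device
ls /dev/pmem*
mkfs.ext4 /dev/pmem0
mount /dev/pmem0 /mnt/nvram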
There have been a lot of questions about RAM Disks and I am aware of ramfs and tmpfs which allow the use of ram as a block device. However my interest is in using a fixed memory address range as a block device. This arises from the necessity to use non-volatile RAM available in my system. I have 6GB of RAM available, and 8GB of non-volatile RAM present. The output of /proc/iomem gives me the following 100000000-17fffffff : System RAM 180000000-37fffffff : reserved Here the region from 6GB to 14GB corresponds to the Non-volatile RAM region which is marked by the E820 BIOS memory map as reserved. My main intention is to use this NVRAM as a block device in linux. This is useful for testing NVRAM systems. Is there any linux command already present which would allow me to use this region as a block device, or do I have to write my own kernel device driver to facilitate the same?
Reserve fixed RAM memory region as a block device ( with a given start physical address)
Thanks to the link @Someone posted in the comments to the question, I was able to pull this content which fixed the issue for me:
At the GRUB boot screen, press the "e" key to edit the configuration.
Scroll down using the keyboard down arrow to the line that starts with linux.
Add the text console=ttyS0 after the word quiet and then press ctrl + x to proceed.
Now as root, or using sudo, run the command systemctl enable getty@ttyS0 in order to never have to go through all those steps again.
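To make the kernel side of this permanent as well (so a later regeneration of the GRUB config doesn't lose it), the console argument can go into the GRUB defaults; a sketch for a standard Debian layout:
# /etc/default/grub
GRUB_CMDLINE_LINUX="console=ttyS0"
# then regenerate the config
update-grub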
I have updated my KVM management script for Ubuntu 14.04 KVM hosts to support debian 8 guests. After a manual installation (preseed script does not work yet), I am stuck with the the following message on bootup:During the installation, I:Selected only ssh server and base system utilities. Set the grub bootloader to install to the only listed option. Used the guided partitioning mode for everything on one partition. Used the local UK mirror.Is there some step I need to be careful to make or can Debian 8 not yet be installed as a KVM guest?Update After giving up and deciding to just upgrade a debian 7 VM to debian 8 by updating all of the lines in /etc/apt/sources.list to jessie instead of wheezy, I found that I eventually got the same behaviour. However this instance had a static IP and I found that I could still SSH into the server on that IP, so it looks like this is some sort of graphics issue where the server does manage to boot up, we just can't see the login text. How can I resolve this?Update This time, on the debian installation created by upgrading debian 7, I can click advanced from the grub menu and select the option with (sysvinit) which works for now. I am hoping this can lead to an explanation of what is going wrong with the normal version that gets booted?
Debian 8 KVM Guest - Loading initial ramdisk
I believe, you can use blockdev command, which is available from util-linux package (in Debian) blockdev --flushbufs /dev/ram0Source
I have allocated /dev/ram0: dd if=/dev/zero of=/dev/ram0 bs=1M count=1024Now I have 1Gb sitting in memory. How do I free up the allocated space?
How to deallocate /dev/ram0
Q1: Yes
Q2: It is not feasible to recover the data. Nevertheless, if you want to be extreme you could do it like this :)
Create some space in RAM:
mkdir ram
mount -t ramfs -o size=1000M ramfs ram/
Create a randomly filled file which we encrypt in that RAM space. Being filled with random data, it will be impossible to establish boundaries between random and encrypted data.
dd if=/dev/urandom of=ram/test bs=1M count=512
Set up encryption:
cryptsetup -y luksFormat ram/test
cryptsetup luksOpen ram/test encrypted
Format and mount the new secure space:
mkfs.ext4 /dev/mapper/encrypted
mkdir securedir
mount /dev/mapper/encrypted securedir/
Unmount securedir/ and then ram/ to lose the data until the end of time.
umount securedir/
umount ram/
In modern file systems (and on modern SSDs) there is no guarantee that if you write over a file using a traditional utility (such as dd) that the data will be overwritten in-place and journaled backups destroyed. As a result, the data could possibly be recovered. So, after a little research I figured that mounting a temporary ramfs (tmpfs was ruled out due to the potential for it to swap) would be the way to go: # mkdir -p /mnt/tmp/ram # mount -t ramfs -o size=[size, but ramfs grows as needed] ramfs /mnt/tmp/ram # [create the sensitive data, secure it, copy out secured data] # umount /mnt/tmp/ramQ1: Does unmounting a ramfs destroy the data contained within it? Q2: If the data is not guaranteed to be destroyed, is there any feasible way to recover said data (or am I just being paranoid)? Q3: If the data is recoverable, would # dd if=/dev/zero of=/mnt/tmp/ram/[filename]destroy the data properly or is ramfs not guaranteed to overwrite files in-place? Constraints: The system cannot be forced to reboot before/during/after these operations. In case you're curious, the "sensitive data" in this case is the unsalted, unhashed usernames+passwords for a pam database. The "secured data" is the salted/hashed database, which would end up on the primary drive. I do not want the sensitive data to touch the drive (as I am using ext3 - which cannot guarantee the data will be unrecoverable without wiping the entire partition as far as I understand). If you know a better way to go about doing this, please enlighten me, thanks.
Regarding the creation and destruction of sensitive data on linux/unix systems
I have combined an idea given to me by Ipor Sircer's answer with Stephen Kitt's suggestion of using a RAM disk block device. First, I compiled CONFIG_BLK_DEV_RAM into my kernel. I changed the default number of RAM disks from 16 to 8 (BLK_DEV_RAM_COUNT), though that is based on preference and not necessity. Next, I created the folder I want to mount to. mkdir /mnt/ext4ramdisk Finally, I formatted my RAM disk block device with ext4 and mounted it. mkfs.ext4 /dev/ram0 mount -t ext4 /dev/ram0 /mnt/ext4ramdisk
First, I have create the directory that I will want to mount to. mkdir /mnt/ramdisk Now, I could easily turn this into a ramdisk using ramfs or tmpfs via mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk I've found a tutorial on how to create a ramdisk which breaks this syntax down as: mount -t [TYPE] -o size=[SIZE] [FSTYPE] [MOUNTPOINT] The tutorial indicates that I can replace [FSTYPE] with ext4 to change the FS to ext4. However, I am not convinced this method is correct and that the author has misjudged what changing the [FSTYPE] argument actually does. UPDATE: For those interested, G-Man and Johan Myréen have weighed in on my speculations about [FSTYPE]. Essentially, the [FSTYPE] argument acts as a necessary (but ignored) placeholder used by mount. See this post's comments for more details. I would like to know the proper way to create an ext4 ramdisk. That is, I want a temporary directory in memory that uses the ext4 file system. How can this be achieved?
How can I create an ext4 ramdisk?
I think you mean something like this: Load the block ramdisk module, set the desired size in blocks using the rd_size=... parameter. # modprobe brd rd_size=123456...after this step /dev/ram0 exists. You now can put a filesystem on it. # mkfs /dev/ram0 mke2fs 1.41.12 (17-May-2010) Filesystem label= OS type: Linux Block size=1024 (log=0) Fragment size=1024 (log=0) Stride=0 blocks, Stripe width=0 blocks 30976 inodes, 123456 blocks 6172 blocks (5.00%) reserved for the super user First data block=1 Maximum filesystem blocks=67371008 16 block groups 8192 blocks per group, 8192 fragments per group 1936 inodes per group Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729Writing inode tables: done Writing superblocks and filesystem accounting information: doneThis filesystem will be automatically checked every 38 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.Mount it and check the space used and free... # mount /dev/ram0 /mnt # df /mnt Filesystem 1K-blocks Used Available Use% Mounted on /dev/ram0 119539 1550 111817 2% /mnt
I need a block device in RAM. I built a 3.x kernel and added the RAM block device driver. The number of RAM block device drive is 16 (by default) but when the kernel boots there is no ramx in /sys/block nor /dev. What's going on?
How Linux kernel 3.x manage ramdisk as block device?
I found the problem: I had to add fsid to the NFS options, so now the full list looks like this:
fsid=1,crossmnt,rw,no_root_squash,async,no_subtree_check
The fact is that yast doesn't warn about this. I was only able to fix the problem because I ran exportfs and then got the error regarding the fsid.
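For reference, a full /etc/exports line with those options could look like this (the export path and client specification are placeholders for your setup):
# /etc/exports
/foo/ramdisk  *(fsid=1,crossmnt,rw,no_root_squash,async,no_subtree_check)
# re-export and verify
exportfs -ra
exportfs -v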
OS: SLES 12.3 Running these commands: mkdir /foo/ramdisk mount -t tmpfs -o size=100m tmpfs /foo/ramdiskCreating a NFS on /foo/ramdisk produces this result when I run showmount -e <IP>:clnt_create: RPC: Program not registeredWhen I remove the NFS share then showmount -e <IP> works again:Export list for ...Another strange fact: When I create an NFS for /foo and I mount this NFS on another Linux/Windows PC then [on the PC where I mounted the NFS] there are no files visible in /foo/ramdisk and I am not allowed [yes, there are the correct permissions set] to write anything into the /foo/ramdisk directory.I export the NFS with the SUSE tool yast and use these settings:crossmnt,rw,no_root_squash,async,no_subtree_checkMy question: Isn't it allowed to export a ramdisk as NFS or what am I doing wrong?
Cannot create NFS on a tmpfs drive
You don't need a script for that: there's a system facility to mount a filesystem at boot time. Add it to the file /etc/fstab. Open this file in your favorite text editor and add a line like this: none /mnt/tmpfs tmpfs size=1GMake sure not to accidentally modify other lines. Note that there's already a tmpfs filesystem¹ mounted at /run. Debian doesn't make /tmp tmpfs by default, but you can make it so by editing /etc/default/tmpfs and changing the RAMTMP line to RAMTMP=yes¹ This isn't a RAM disk: it doesn't reserve memory, only the space used for files takes up memory, and its pages can be swapped out just like application data.
This is how I create RAM disk manually on Linux Debian Jessie: mount -o size=1G -t tmpfs none /mnt/tmpfsMy question is, how do I make this automatic at each computer startup?
How to create and mount a RAM disk at computer startup?
You should not delete /dev/ram0 yourself. It will be deleted when you do sudo rmmod brd, which frees the space and removes the module. You can then start again from modprobe.
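Putting the whole cycle together with the sizes from the question, the on-demand pattern looks like this:
sudo modprobe brd rd_nr=1 rd_size=4585760 max_part=1   # creates /dev/ram0
dd if=/dev/zero of=/dev/ram0 bs=1M count=1000          # use it for the task at hand
sudo rmmod brd                                         # frees the RAM and removes the device node
sudo modprobe brd rd_nr=1 rd_size=4585760 max_part=1   # a fresh /dev/ram0 again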
I learned that I can do modprobe brd rd_nr=1 rd_size=4585760 max_part=1 if I want to create a ram block device at /dev/ram0 but lets say I want to flush the device (to free the ram), then delete it and create another. How would I do this running modprobe brd rd_nr=1 rd_size=4585760 max_part=1 again doesn't seem to create another ram device in /dev Recreate steps: 1) create disk: modprobe brd rd_nr=1 rd_size=4585760 max_part=1 2) use the ram disk for some arbitrary task: ex: dd if=/dev/zero of=/dev/ram0 count=1000 3) free up the memory blockdev --flushbufs /dev/ram0 4) delete device file: rm /dev/ram0 5) try to create another one: modprobe brd rd_nr=1 rd_size=4585760 max_part=1 6) ls /dev/ram* gives me an error I know that I can change the rd_nr to be whatever number I desire but I want to be able to create these on the fly. Edit: I don't want to create a tmpfs, my use case requires a block device
How do you create block RAM Disks on demand?
Initial ramdisks use Busybox to save space. Essentially, utilities like mv and cp all share a lot of common logic - open a file descriptor, read buffers into memory, etc. Busybox basically puts all the common logic into one binary which changes the way it behaves depending on the name with which it was called. Let's take a look at that ramdisk. alex@alexs-arch-imac:/tmp/initramfs/bin$ ls -l total 1308 lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 [ -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 [[ -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ash -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 awk -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 basename -> busybox -rwxr-xr-x 1 alex alex 68840 Mar 24 17:06 blkid -rwxr-xr-x 1 alex alex 287096 Mar 24 17:06 busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 cat -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 chgrp -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 chmod -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 chown -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 chroot -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 clear -> busybox -rwxr-xr-x 1 alex alex 130272 Mar 24 17:06 cp -rwxr-xr-x 1 alex alex 59264 Mar 24 17:06 cryptsetup lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 cttyhack -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 cut -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 dd -> busybox lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 depmod -> kmod lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 df -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 dirname -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 dmesg -> busybox -r-xr-xr-x 1 alex alex 92227 Mar 24 17:06 dmsetup lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 du -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 echo -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 egrep -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 env -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 expr -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 false -> busybox -rwxr-xr-x 1 alex alex 53696 Mar 24 17:06 findmnt lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 free -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 getopt -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 grep -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 halt -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 head -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 hexdump -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ifconfig -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 init -> busybox lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 insmod -> kmod lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 install -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ip -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ipaddr -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 iplink -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 iproute -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 iprule -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 iptunnel -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 kbd_mode -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 kill -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 killall -> busybox -rwxr-xr-x 1 alex alex 142424 Mar 24 17:06 kmod lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 less -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ln -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 loadfont -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 loadkmap -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 losetup -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ls -> busybox -rwxr-xr-x 1 alex alex 70192 Mar 24 17:06 lsblk lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 lsmod -> kmod lrwxrwxrwx 1 alex 
alex 7 Mar 24 17:06 md5sum -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 mkdir -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 mkfifo -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 mknod -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 mktemp -> busybox lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 modinfo -> kmod lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 modprobe -> kmod -rwsr-xr-x 1 alex alex 40168 Mar 24 17:06 mount lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 mountpoint -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 mv -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 nc -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 netstat -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 nslookup -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 openvt -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 pgrep -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 pidof -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ping -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ping6 -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 poweroff -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 printf -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 ps -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 pwd -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 readlink -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 reboot -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 rm -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 rmdir -> busybox lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 rmmod -> kmod lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 route -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sed -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 seq -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 setfont -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sh -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sha1sum -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sha256sum -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sha512sum -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sleep -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sort -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 stat -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 strings -> busybox -rwxr-xr-x 1 alex alex 14816 Mar 24 17:06 switch_root lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 sync -> busybox -rwxr-xr-x 1 alex alex 63992 Mar 24 17:06 systemd-tmpfiles lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 tac -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 tail -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 telnet -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 test -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 tftp -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 touch -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 true -> busybox -rwxr-xr-x 1 alex alex 264696 Mar 24 17:06 udevadm -rwsr-xr-x 1 alex alex 27616 Mar 24 17:06 umount lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 uname -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 uniq -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 uptime -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 vi -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 wc -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 wget -> busybox lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 yes -> busybox alex@alexs-arch-imac:/tmp/initramfs/bin$ As you can see, almost every single binary in this image is linked to Busybox. 
alex@alexs-arch-imac:/tmp/initramfs/bin$ ls -l | grep --invert-match busybox - total 1308 -rwxr-xr-x 1 alex alex 68840 Mar 24 17:06 blkid -rwxr-xr-x 1 alex alex 130272 Mar 24 17:06 cp -rwxr-xr-x 1 alex alex 59264 Mar 24 17:06 cryptsetup lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 depmod -> kmod -r-xr-xr-x 1 alex alex 92227 Mar 24 17:06 dmsetup -rwxr-xr-x 1 alex alex 53696 Mar 24 17:06 findmnt lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 insmod -> kmod -rwxr-xr-x 1 alex alex 142424 Mar 24 17:06 kmod -rwxr-xr-x 1 alex alex 70192 Mar 24 17:06 lsblk lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 lsmod -> kmod lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 modinfo -> kmod lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 modprobe -> kmod -rwsr-xr-x 1 alex alex 40168 Mar 24 17:06 mount lrwxrwxrwx 1 alex alex 4 Mar 24 17:06 rmmod -> kmod -rwxr-xr-x 1 alex alex 14816 Mar 24 17:06 switch_root -rwxr-xr-x 1 alex alex 63992 Mar 24 17:06 systemd-tmpfiles -rwxr-xr-x 1 alex alex 264696 Mar 24 17:06 udevadm -rwsr-xr-x 1 alex alex 27616 Mar 24 17:06 umount alex@alexs-arch-imac:/tmp/initramfs/bin$ ls | wc -l # total number of files 116 alex@alexs-arch-imac:/tmp/initramfs/bin$ ls -l | grep --invert-match busybox - | grep --invert-match kmod | wc -l # number of real binaries minus two (busybox and kmod) 12There are 116 files in the image, but only 14 of them are actually binaries. The rest are symlinks to either kmod or busybox. So: the reason that there are so many random utilities is because you might as well put them in there. The symlinks don't take up any space, and even if you removed them, the functionality would remain in the Busybox binary, taking up space. Since there's no real reason to remove all the links, the packagers don't. Here's another question to consider: why not simply remove the network functionality from the Busybox binary? As @Gilles mentions, there are legitimate (if not common) cases where you would need networking in an initcpio. Therefore, the packagers have two options: one, do what they do now and just include it all by default, or two, split networking functionality out into its own mkinitcpio hook. The former is dead-easy (you basically do nothing) and costs a very, very small amount, whereas the second is very complex (again, thanks to @Gilles for pointing this out) and the gains really are not significant enough to matter. Therefore, the packagers take the smart way out, and don't do anything with the networking.
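You can poke at the same mechanism on any machine that has the busybox binary installed; a quick illustration (the applet list varies with how busybox was built):
busybox --list          # print every applet compiled into this busybox binary
busybox wget --help     # run an applet directly, no symlink needed
ls -l /bin/ping         # in the initramfs, this is just a symlink back to busybox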
I have a fairly standard initial ramdisk created using mkinitcpio. I'm on Arch GNU/Linux. A while ago I got dropped to a rescue shell and poked around in the /bin of the ramdisk to see what was available. For some reason, there was a bunch of utilities that seemed irrelevant (think things like ping - why would you want that in a rescue environment?). alex@alexs-arch-imac:/tmp$ mkdir initramfs alex@alexs-arch-imac:/tmp$ cd initramfs alex@alexs-arch-imac:/tmp/initramfs$ cp /boot/initramfs-linux.img . alex@alexs-arch-imac:/tmp/initramfs$ cat initramfs-linux.img | unlzma - > initramfs-linux # needed because unlzma complains that it doesn't recognize the .img extension alex@alexs-arch-imac:/tmp/initramfs$ cpio -iV < initramfs-linux ............................................................................................................................................................................................................................................................................................................................................................. 24225 blocks alex@alexs-arch-imac:/tmp/initramfs$ ls bin buildconfig config dev etc hooks init init_functions initramfs-linux initramfs-linux.img lib lib64 new_root proc run sbin shutdown sys tmp usr VERSION alex@alexs-arch-imac:/tmp/initramfs$ ls -l bin lrwxrwxrwx 1 alex alex 7 Mar 24 17:06 bin -> usr/bin alex@alexs-arch-imac:/tmp/initramfs$ ls bin [ blkid chown cttyhack dirname egrep free hexdump ip iptunnel less ls mkfifo mount nslookup ping6 readlink route sha1sum stat tac touch uniq yes [[ busybox chroot cut dmesg env getopt ifconfig ipaddr kbd_mode ln lsblk mknod mountpoint openvt poweroff reboot sed sha256sum strings tail true uptime ash cat clear dd dmsetup expr grep init iplink kill loadfont lsmod mktemp mv pgrep printf rm seq sha512sum switch_root telnet udevadm vi awk chgrp cp depmod du false halt insmod iproute killall loadkmap md5sum modinfo nc pidof ps rmdir setfont sleep sync test umount wc basename chmod cryptsetup df echo findmnt head install iprule kmod losetup mkdir modprobe netstat ping pwd rmmod sh sort systemd-tmpfiles tftp uname wget alex@alexs-arch-imac:/tmp/initramfs$ Notice that the image has the weirdest utilities. Just looking at it, I see wget, ping, telnet, sha1sum... why are these here? Here's the output of my /etc/mkinitcpio.conf. Images were generated using mkinitcpio -p linux. # vim:set ft=sh # MODULES # The following modules are loaded before any boot hooks are # run. Advanced users may wish to specify all system modules # in this array. For instance: # MODULES="piix ide_disk reiserfs" MODULES=""# BINARIES # This setting includes any additional binaries a given user may # wish into the CPIO image. This is run last, so it may be used to # override the actual binaries included by a given hook # BINARIES are dependency parsed, so you may safely ignore libraries BINARIES=""# FILES # This setting is similar to BINARIES above, however, files are added # as-is and are not parsed in any way. This is useful for config files. FILES=""# HOOKS # This is the most important setting in this file. The HOOKS control the # modules and scripts added to the image, and what happens at boot time. # Order is important, and it is recommended that you do not change the # order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for # help on a given hook. # 'base' is _required_ unless you know precisely what you are doing. 
# 'udev' is _required_ in order to automatically load modules # 'filesystems' is _required_ unless you specify your fs modules in MODULES # Examples: ## This setup specifies all modules in the MODULES setting above. ## No raid, lvm2, or encrypted root is needed. # HOOKS="base" # ## This setup will autodetect all modules for your system and should ## work as a sane default # HOOKS="base udev autodetect block filesystems" # ## This setup will generate a 'full' image which supports most systems. ## No autodetection is done. # HOOKS="base udev block filesystems" # ## This setup assembles a pata mdadm array with an encrypted root FS. ## Note: See 'mkinitcpio -H mdadm' for more information on raid devices. # HOOKS="base udev block mdadm encrypt filesystems" # ## This setup loads an lvm2 volume group on a usb device. # HOOKS="base udev block lvm2 filesystems" # ## NOTE: If you have /usr on a separate partition, you MUST include the # usr, fsck and shutdown hooks. HOOKS="base udev autodetect modconf keyboard block encrypt resume filesystems fsck shutdown"# COMPRESSION # Use this to compress the initramfs image. By default, gzip compression # is used. Use 'cat' to create an uncompressed image. #COMPRESSION="gzip" #COMPRESSION="bzip2" COMPRESSION="lzma" #COMPRESSION="xz" #COMPRESSION="lzop" #COMPRESSION="lz4"# COMPRESSION_OPTIONS # Additional options for the compressor #COMPRESSION_OPTIONS=""
Why are there internet utilities in my initial ramdisk?
Quite generally speaking, all operations happen in RAM first - file systems are cached. There are exceptions to this rule, but these rather special cases usually arise from quite specific requirements. Hence until you start hitting the cache flushing, you won't be able to tell the difference. Another thing is, that the performance depends a lot on the exact file system - some are targeting easier access to huge amounts of small files, some are efficient on real-time data transfers to and from big files (multimedia capturing/streaming), some emphasise data coherency and others can be designed to have small memory/code footprint. Back to your use case: in just one loop pass you spawn about 20 new processes, most of which just create one directory/file (note that () creates a sub-shell and find spawns cat for every single match) - the bottleneck indeed isn't the file system (and if your system uses ASLR and you don't have a good fast source of entropy your system's randomness pool gets depleted quite fast too). The same goes for FUSE written in Perl - it's not the right tool for the job.
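To see how much of the runtime is process creation rather than I/O, try trimming the per-iteration fork count; a sketch of the same preparation with fewer spawned processes (find is only a stand-in in your example, but the same idea applies to it):
preparedir() {
    mkdir -p foo bar/baz          # one mkdir process instead of three
    : > bar/file                  # ':' is a shell builtin, so no touch process is spawned
    echo qux > bar/baz/file
}
systemundertest() {
    # '+' batches many files into as few cat invocations as possible,
    # instead of -execdir spawning one cat per match
    find "$1" -type f -exec cat {} + > /dev/null
}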
I have a script which creates a lot of files and directories. The script does black box tests for a program which works with a lot of files and directories. The test count grew and the tests were taking too long (over 2 seconds). I thought I run the tests in a ram disk. I ran the test in /dev/shm. Strangely it did not run any faster. Average run time was about the same as on normal harddisk. I also tried in a fuse based ram disk written in perl. The website is gone but I found it in the internet archive. Average run time on the fuse ram disk is even slower. Perhaps because of the suboptimal implementation of the perl code. Here is a simplified version of my script: #! /bin/shpreparedir() { mkdir foo mkdir bar touch bar/file mkdir bar/baz echo qux > bar/baz/file }systemundertest() { # here is the black box program that i am testing # i do not know what it does exactly # but it must be reading the files # since it behaves differently based on them find $1 -type f -execdir cat '{}' \; > /dev/nullsingletest() { mkdir actual (cd actual; preparedir) systemundertest actual mkdir expected (cd expected; preparedir) diff -qr actual expected }manytests() { while read dirname; do rm -rf $dirname mkdir $dirname (cd $dirname; singletest) done }seq 100 | manytestsThe real script does a bit more error checking and result collecting and a summary. The find is a dummy for the actual program I am testing. I wonder why my filesystem intensive script does not run faster on a memory backed filesystem. Is it because the linux kernel handles the filesystem cache so efficiently that it practically is a memory backed filesystem?
why is filesystem intensive script not faster on ram disk
I was told running tune2fs -O project -Q prjquota /dev/sdaX is absolutely essential to enable Project Quota on a device. So I searched for a solution that does not require switching off or using a live-cd as this requires too much time and does not always work well in my experience with my VPS provider. And I also hoped that I can turn the steps into a script, which did not work out so far. Thanks to another question I was able to put together a solution that worked for me on Ubuntu 18.04. You do need ca. 4GB of RAM to do this (and of course a kernel after version 4.4). Sources:How to shrink root filesystem without booting a livecd http://www.ivarch.com/blogs/oss/2007/01/resize-a-live-root-fs-a-howto.shtml1. Make a RAMdisk filesystem mkdir /tmp/tmproot mount none /tmp/tmproot -t tmpfs -o rw mkdir /tmp/tmproot/{proc,oldroot,sys} cp -a /dev /tmp/tmproot/dev cp -ax /{bin,etc,opt,run,usr,home,mnt,sbin,lib,lib64,var,root,srv} /tmp/tmproot/2. Switch root to the new RAMdisk filesystem cd /tmp/tmproot unshare -m pivot_root /tmp/tmproot/ /tmp/tmproot/oldroot mount none /proc -t proc mount none /sys -t sysfs mount none /dev/pts -t devpts3. Restart SSH on another port than 22 and reconnect with another session nano /etc/ssh/sshd_configChange the port to 2211Restart SSH with /usr/sbin/sshd -D &Connect again from 22114. Kill processes using /oldroot or /dev/sdaX fuser -km /oldroot fuser -km /dev/sdaX5. Unmount /dev/sdaX and apply the project quota feature umount -l /dev/sdaX tune2fs -O project -Q prjquota /dev/sdaX6. Mount with Project Quota mount /dev/sda2 -o prjquota /oldroot7. Putting things back pivot_root /oldroot /oldroot/tmp/tmproot umount /tmp/tmproot/proc mount none /proc -t proc cp -ax /tmp/tmproot/dev/* /dev/ mount /dev/sda1 /boot ### This might be different for you reboot -f8. Turn quota on after reboot apt install quota -y quotaon -Pv -F vfsv1 /9. Check if quota is on on root repquota -Ps /10. Make it persistentPut prjquota into the options of root in /etc/fstabEnjoy!
How do I accomplish setting up project quota for my live root folder being ext4 on Ubuntu 18.04? Documentation specific to project quota on the ext4 filesystem is basically non-existent and I tried this:Installed Quota with apt install quota -y Put prjquota into /etc/fstab for the root / and rebooted, filesystem got booted as read-only, no project quota (from here only with prjquota instead of the user and group quotas) Also find /lib/modules/`uname -r` -type f -name '*quota_v*.ko*' was run and both kernel modules /lib/modules/4.15.0-96-generic/kernel/fs/quota/quota_v2.ko and /lib/modules/4.15.0-96-generic/kernel/fs/quota/quota_v1.ko were found (from this tutorial) Put GRUB_CMDLINE_LINUX_DEFAULT="rootflags=prjquota" into /etc/default/grub, ran update-grub and rebooted, machine does not come up anymore. Putting rootflags=quota into GRUB_CMDLINE_LINUX="... rootflags=quota" running update-grub and restarting did show quota and usrquota being enabled on root, but it does not work with prjquota or pquota or project being set as an rootflagI need this for the DIR storage backend of LXD to be able to limit container storage size. What else can I try?
Project Quota on a live root EXT4 filesystem without live-cd
Add it to your /etc/fstab: none /mnt/tmpfs tmpfs defaults,size=1g,mode=1777 0 0You may also need to rebuild your initramfs, e.g.: sudo update-initramfs -u -k $(uname -r)or, to rebuild the initramfs for all kernels: sudo update-initramfs -u -k allBTW, tmpfs doesn't reserve any memory - a tmpfs filesystem only uses as much memory as required by the files it contains (and any file/directory overhead).
I've been shortly using the following on Linux Debian Jessie to create a "RAM disk": mount -o size=1G -t tmpfs none /mnt/tmpfsBut I was told it doesn't reserve memory, which I didn't know. I would like a solution, which does reserve memory.
How to create a real RAM disk that reserves memory?
Ramdisks /dev/ram* (or rather the brd module) don't bother updating stats (*), for efficiency reasons I guess. If you don't mind a small overhead, here is a workaround: use the device-mapper to create a transparent (1-to-1) layer over your ramdisk. You will then have access to the stats through your dm device. # ramsize=$(< /sys/block/ram0/size) # dmsetup create ram0 --table "0 $ramsize linear /dev/ram0 0" # dmsetup info ram0 (...) Major, minor: 253, 6 (...) # grep -Fw dm-6 /proc/diskstats(*) A patch was proposed in 2012 but was apparently ignored
Is there away way to turn on IO accounting for /dev/ramX block devices in Linux? I already tried echo 1 > /sys/block/ram1/queue/iostat but it did not work. Notice that all devices have stats except the ram devices at the bottomSo performance measurement tools like dool cannot measure the IO speed: # cat /proc/diskstats 11 0 sr0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 1 sr1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 sda 723676 35000 88332842 15213728 1023996 4665548 516093800 245971648 0 17497453 262613130 0 0 0 0 45125 1427753 8 16 sdb 160138 25031 9086236 1901857 5894107 1629435 99783746 263249749 0 34043114 273161200 0 0 0 0 319611 8009594 8 17 sdb1 126 30 8994 751 9 0 24 117 0 862 869 0 0 0 0 0 0 8 18 sdb2 159872 25001 9072882 1900962 5894094 1629435 99783722 263249569 0 34042604 265150532 0 0 0 0 0 0 253 0 dm-0 2910 0 55490 64862 6575840 0 52901448 295198662 0 20973198 295263524 0 0 0 0 0 0 253 1 dm-1 112625 0 3832882 2024582 103580 0 1755640 12405574 0 1639466 14430156 0 0 0 0 0 0 253 2 dm-2 757922 0 88327794 16122171 5450438 0 516093800 1273579273 0 16577985 1289701444 0 0 0 0 0 0 253 3 dm-3 729 0 805332 9620 58317 0 13217898 6701775 0 868402 6711395 0 0 0 0 0 0 253 4 dm-4 68487 0 4373706 650891 766343 0 31908736 44931626 0 13503972 45582517 0 0 0 0 0 0 7 0 loop0 741 0 32160 69 159 0 912 4296 0 4336 8536 6 0 6 10 45 4160 1 0 ram0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 ram1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 ram2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 3 ram3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 4 ram4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 5 ram5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 6 ram6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 7 ram7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 8 ram8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 9 ram9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 10 ram10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 11 ram11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 12 ram12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 13 ram13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 14 ram14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 15 ram15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Why do /dev/ramX devices have all 0's in /proc/diskstats?
I think what you want to do is a bad idea, because in the case of a system crash you'll lose your work. Anyway, you can use a subdir of /dev/shm to store your files; it's a tmpfs file system, which means it's kept in RAM.
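One way to make the RAM copy transparent to Apache is a bind mount over the path it already serves; a sketch with hypothetical paths (a crash still loses anything not yet synced back to disk):
mkdir /dev/shm/webwork
rsync -a /var/www/site/ /dev/shm/webwork/     # load the project into RAM
mount --bind /dev/shm/webwork /var/www/site   # Apache keeps using the same path, now backed by RAM
# ... work on the files ...
umount /var/www/site                          # drop the overlay when done...
rsync -a /dev/shm/webwork/ /var/www/site/     # ...and write the results back to disk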
I have a laptop with Linux Slackware. When it's running on battery, saving a file to the hard drive takes around a second. When I'm writing code I waste a lot of time saving files. I have about 2 GB of free RAM, so I can use 1 GB as a temporary buffer and work like this:
Load the file into the RAM buffer.
Work with the file and save it there.
At the end of work, move the file to the HDD.
The problem is that the file is a PHP script used by Apache. So I must somehow make the buffer transparent to Apache, so that it uses the RAM copy whenever it accesses the original file.
How to keep the file in RAM
It turns out that the device node needed to be created with mknod /dev/ram1 b 1 1Once this is done, it can be formatted via e.g. mkfs.ext2: mkfs.ext2 /dev/ram1 8192
I'm installing a CentOS 7 VM, and I would like to create a RAM disk inside the %pre section of a Kickstart file. However, doing so via mkfs -q /dev/ram1 8192is not possible as the mkfs binary is not present in the Kickstart environment, and all other mkfs.* filesystem-specific commands return an error "/dev/ram1: no such file or directory". Is there any other way to do so?
How to create a RAM disk from the Kickstart environment?
Well, yes, but actually no. While it is possible to create a virtual disk in RAM, it's not handled the same way other disks are. In particular, it doesn't have a device node in /dev, so it isn't visible to features like "LVM" or "mdadm" (which could otherwise be used to join two different disks into one big virtual disk). There is a way to kinda-sorta do what you're asking, and that's to turn your SSD into swap space, then create a big RAM disk. However, because it's a RAM disk, you won't be able to read directly from the SSD. Each block will be automatically copied into RAM when your program tries to access it, and depending on the exact nature of the process you're running, I think it's highly likely that thrashing will destroy the performance gains you hope to achieve. If you're bound and determined to go through with this, here are the steps. (Note 1: I'm assuming your SSD is /dev/sdb. Replace this with the actual designation of your SSD drive. Note 2: This will erase your SSD. Make sure you have a copy of any important data before you begin. Note 3: You're going to be using root privileges while playing with tools that could potentially wipe your system, so be really careful, and stop immediately if anything seems even the slightest bit off.)Format your SSD for swap: sudo mkswap /dev/sdb. (Optional, but recommended) Use swapon -s to get a list of any swap areas that are currently active, and use sudo swapoff [device] to turn them off. Activate the SSD swap: sudo swapon /dev/sdb. Create a directory to mount your RAM disk: mkdir /tmp/ramdisk Create and mount the actual RAM disk: sudo mount -t tmpfs -o size=[size] myramdisk /tmp/ramdisk (You must use tmpfs for this, since ramfs doesn't use swap.)And that's it. Now, anything you write to /tmp/ramdisk will be stored in RAM, and anything that's too big for RAM will be swapped out to your SSD. When you're done, everything you did (except formatting your SSD) will be undone by a simple reboot.
I'm in an odd situation where I have plenty of RAM sitting around (200gb extra) and ALMOST enough SSD to do a read/write intensive process. Is there any way to say "Dear System, please create a temp virtual drive that is a combination of the RAM and SSD, so that for some of the read/write operations that are backed by the SSD they go kinda-fast, and others backed by the RAM go REALLY fast"?
Make a temp virtual RAM disk from some RAM + some SSD
mount only shows the mount options, never the actual size or usage of a filesystem. Use df instead:

df /tmp/shmemory

That will show you the desired information. (Also note that when no size= option is given, a tmpfs defaults to a limit of half of physical RAM, so /dev/shm cannot grow without bound.)
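If you want an explicit limit on an already-mounted tmpfs such as /dev/shm, you can also change it on the fly with a remount (the size value here is just an example):

$ sudo mount -o remount,size=2G /dev/shm
$ df -h /dev/shm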
When mounting a tmpfs we can pass its (maximum) size as an option and prevent the relevant fs from growing indefinitely and thus consuming all of our RAM, e.g.

$ mkdir -p /tmp/shmemory
$ sudo mount -t tmpfs -o size=1G noname /tmp/shmemory/

$ mount | grep -i shmem
noname on /tmp/shmemory type tmpfs (rw,relatime,size=1048576k)

However, I do not see any size option on the /dev/shm tmpfs that is present on my machine (and in most modern Linux distributions):

$ mount | grep -iE "shm\s"
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)

Does this mean the above can grow without any limit whatsoever?
Find size of shared memory tmpfs
Those are created if the system has a ramdisk. I doubt that's the source of your problem though. Go back in time: before you delete /etc, compare /etc and /etc-1 with diff -q to see if etckeeper really cloned /etc fully.
I have been experimenting with etckeeper to store different snapshots of /etc in a git repo. I have cloned a working /etc into /etc-1 of a clean VPS and restored metadata with etckeeper, then I have deleted /etc and made a symbolic link from /etc to /etc-1. I could not connect over ssh after reboot, so I logged in from the HN. Searching for broken symlinks I have found that /etc/dev/devices/ram and /etc/dev/devices/ramdisk have broken links to /dev/ram1 and /dev/ram0. And /dev/core has a broken symlink to /proc/kcore. I can create the ramdisks with:

mknod -m 660 /dev/ram0 b 1 1
chown root.disk /dev/ram0

But when I reboot they disappear. What creates all those files?
Missing /dev/ram0 /dev/ram1 and /proc/kcore
Simple answer

You need to enable CONFIG_LEGACY_IMAGE_FORMAT in U-Boot:

Go to the u-boot source directory.
Type: $ make menuconfig
In Boot images -> Enable support for the legacy image format
Exit and save, then build U-Boot again

Now it will be able to load your uRamdisk :-)

Longer answer

The book was written using U-Boot v2017.01 and configuration am335x_boneblack_defconfig. U-Boot version v2020.01 does not have that configuration file. Instead it has am335x_boneblack_vboot_defconfig, which works fine except that it does not enable support for the mkimage format. Speaking as the author of the book, I can only say that it is hard to write detailed instructions that will work for all future versions of software. But I do try.
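If you would rather not go through menuconfig, appending the option to the defconfig and regenerating the configuration should have the same effect (this is a sketch, not from the book; the defconfig name is the one mentioned above):

$ echo 'CONFIG_LEGACY_IMAGE_FORMAT=y' >> configs/am335x_boneblack_vboot_defconfig
$ make am335x_boneblack_vboot_defconfig
$ make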
I am following the book "Mastering Embedded Linux Programming - Second Edition" trying to boot up the Linux kernel mounting a ramdisk. I have U-boot 2020.01 working and the Linux kernel image made. I have made a file system on my computer where I have added busybox and the libraries required by it manually as so files. Here is summarized copy of tree in my filesystem. ├── bin │ ├── arch -> busybox ... │ ├── busybox │ ├── cat -> busybox ... ├── dev ├── etc ├── home ├── lib │ ├── ld-2.30.so │ ├── ld-linux-armhf.so.3 -> ld-2.30.so │ ├── libc-2.30.so │ ├── libc.so.6 -> libc-2.30.so │ ├── libm-2.30.so │ ├── libm.so.6 -> libm-2.30.so │ ├── libresolv-2.30.so │ └── libresolv.so.2 -> libresolv-2.30.so ├── linuxrc -> bin/busybox ├── proc ├── sbin │ ├── acpid -> ../bin/busybox ... ├── sys ├── tmp ├── usr │ ├── bin │ │ ├── [ -> ../../bin/busybox ... │ ├── lib │ └── sbin │ ├── addgroup -> ../../bin/busybox ... └── var └── logI have created my ramdisk image following the snippet: cd ~/rootfs find . | cpio -H newc -ov --owner root:root > ../initramfs.cpio cd .. gzip initramfs.cpio mkimage -A arm -O linux -T ramdisk -d initramfs.cpio.gz uRamdiskI have placed all the needed files in the SD card and in u-boot in the Beaglebone black tried to boot as: fatload mmc 0:1 0x80200000 zImage fatload mmc 0:1 0x80f00000 am335x-boneblack.dtb fatload mmc 0:1 0x81000000 uRamdisk setenv bootargs console=ttyO0,115200 rdinit=/bin/sh bootz 0x80200000 0x81000000 0x80f00000The problem is after the bootz it complains about my ramdisk image being wrong. => fatload mmc 0:1 0x80200000 zImage 7109016 bytes read in 464 ms (14.6 MiB/s) => fatload mmc 0:1 0x80f00000 am335x-boneblack.dtb 34220 bytes read in 5 ms (6.5 MiB/s) => fatload mmc 0:1 0x81000000 uRamdisk 2828897 bytes read in 185 ms (14.6 MiB/s) => setenv bootargs console=ttyO0,115200 rdinit=/bin/sh => bootz 0x80200000 0x81000000 0x80f00000 Wrong Ramdisk Image Format Ramdisk image is corrupt or invalidAs I am starting with Linux on embedded devices I am completely out of ideas on how to solve the issue. I have found the reason is that the filesystem image has been created wrong. I have tried to use mkimage with -c none to no avail. I have tried using the mkimage inside my u-boot copy instead of the one that you can install in Ubuntu (with sudo apt-get install u-boot-tools). Before mkimage is called initramfs.cpio.gz looks as as follow:Am I missing some folders/files in my filesystem? Is it a problem my computer has ext4 but the boot partition uses fat32? Do I need to use a different mkimage toolset? What could be the problem?
U-Boot "Wrong Ramdisk Image Format" with initramfs on BeagleBone black
The tmpfs created by the system at /run/user/1000 is for system use, and I would create a new, dedicated one if you wish to use a RAM disk for your own purposes. sudo mkdir /mnt/ramdisk only creates a folder called ramdisk in the folder /mnt, not a RAM disk. If you wish to mount a RAM disk on the /mnt/ramdisk folder, usable by the user you log in as, enter the following (one-time use):

mount -o size=4G,uid=1000 -t tmpfs tmpfs /mnt/ramdisk

(replace "4G" with the required size) If it should be created at each boot, edit /etc/fstab as root (e.g. by sudo nano /etc/fstab) and add a line like this:

none /mnt/ramdisk tmpfs size=4G,uid=1000 0 0

Then during each boot a new, empty RAM disk will be mounted at /mnt/ramdisk; its contents are discarded during shutdown.
To create a ramdisk (Ubuntu 18.04), I issued "sudo mkdir /mnt/ramdisk" at the Putty terminal prompt. Then I issued "mount | tail -n 1" and it returned: tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=100912k,mode=700,uid=1000,gid=1000)Now to unmount I issued "sudo umount /mnt/ramdisk/" but it said not found. So instead I issued "sudo umount /run/user/1000/" (part of the return from the mount command). Then, to be sure it's gone, I issued "mount | tail -n 1" and it returned: tmpfs on /run/user/0 type tmpfs(rw,nosuid,nodev,relatime,size=100912k,mode=700)So I issued "sudo umount /run/user/0/" Finally it's gone. My questions are:When I mount a ramdisk at /mnt/ramdisk, how do I know where the ramdisk will actually be mounted so I can unmount it? This was done at the Linux command line, but if I did it with system() in a C program, how can I get the actual mount point to unmount it?When I unmounted /run/user/1000/ why did it end up at /run/user/0/?Why didn't it just go to /mnt/ramdisk?Thanks.
Ramdisk mounted at /mnt/ramdisk goes to /run/user
Someone came up with a similar question some time ago on one of the OpenBSD mailing lists. Quoting directly from Stuart Henderson's answer:

"Hello, I want to build "bigger" bsd.rd image. Does rebuilding it only way to increase it? Can I somehow increase its size and just rdsetroot new disk.fs?"

"You'll need to build a release(8) after adjusting at least FSSIZE in the relevant Makefiles under src/distrib, maybe also MINIROOTSIZE in kernel config."

So, apparently not, you can't do it without rebuilding the kernel.
Having a bsd.rd extracted from an installation image and mounted as a vnode I can see there is 0.2MB free space available for additional files such as used during unattended installation. I want to copy a file 1MB in size but it obviously won't fit. Having that said, is there any way to increase the size of the ramdisk kernel without building it from source? My idea was to copy its content to newcontent.d, move my additional file into it, run makefs newcontent.fs newcontent.d on it, then rdsetroot bsd.rd.uc newcontent.fs and finally compress it and put back on an installation media. Sadly, while the size of original bsd.rd is 3.3MB the copy of it takes 180MB... I measure the size of directories using du -hs /path/to/directory.
Is it possible to increase the size of OpenBSD's bsd.rd without building it from source?
On a tmpfs filesystem I can copy 64 files of 1.6 GB (in total 100 GB) in 7.8 sec by running 64 jobs in parallel. That is pretty close to your 100 Gbit/s. So if you run this in parallel (meta code):

curl byte 1G..2G | write_to file.out position 1G..2G

write_to could be implemented with mmap. Maybe you can simply write to different files, use loop-devices, and use RAID in linear mode: https://raid.wiki.kernel.org/index.php/RAID_setup#Linear_mode If you control both ends, then set up the source as 150 1 GB files used as loop-devices and RAID in linear mode. Then you should copy those in parallel and set up the RAID linear again.
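A rough shell sketch of the "different files in parallel" idea, assuming the server honours HTTP range requests and GNU parallel is installed (the URL, part count, chunk size and target directory are placeholders):

# fetch 150 x 1 GiB ranges into separate files on the RAM-backed filesystem
seq 0 149 | parallel -j64 '
  start=$(( {} * 1073741824 ));
  end=$(( start + 1073741823 ));
  curl -s -r ${start}-${end} https://example.com/bigfile -o /mnt/ramdisk/part{}
'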
Background I'm trying to download about 150GB to a newly-created Linux box (AWS EC2) with 100gbps network connection at full speed (12.5GB/s) or close to that. The network end is working well. However, I'm struggling to find anywhere on the box that I can put all the data fast enough to keep up, even though the box has 192GB of RAM. My most successful attempt so far is to use the brd kernel module to allocate a RAM block device large enough, and write into that in parallel. This works at the required speed (using direct io) when the block device has already been fully written to, for example using dd if=/dev/zero ... Unfortunately, when the brd device is newly created, it will only accept a write rate of around 2GB/s. My guess is that this is because brd hooks into 'normal' kernel-managed memory, and therefore when each new block is used for the first time, the kernel has to actually allocate it, which it does no faster than 2GB/s. Everything I've tried so far has the same problem. Seemingly, tmpfs, ramfs, brd, and everything else that provides RAM storage hooks into the normal kernel memory allocation system. Question Is there any way in Linux to create a block device out of real memory, without going through the normal kernel's memory management? I'm thinking that perhaps there is a kernel module out there that will split off an amount of memory at boot time, to be treated like a disk. This memory would not be considered normal memory to the kernel and so there would be no issue with it wanting to use it for anything else. Alternatively, is there some way to get the kernel to fully initialise a brd ramdisk (or similar) quickly? I tried writing to the last block of the disk alone, but unsurprisingly that didn't help. Non-RAM alternative In theory, a RAID of NVMe SSDs could achieve the required write speed, although it seems likely there would be some kind of bottleneck preventing such high overall I/O. My attempts to use mdadm RAID 0 with 8 NVMe SSDs have been unsuccessful, partly I think because of difficulties around block sizes. To use direct io and bypass the kernel's caching (which seems necessary), the only block size that can be used is 4096, and this is apparently far too small to make efficient use of the SSDs themselves. Any alternative here would be appreciated. Comments I know 2GB/s sounds like a lot, and it only takes a couple of minutes to download the lot, but I need to go from no EC2 instance at all to an EC2 instance with 150GB loaded in less than a minute. In theory it should be completely possible: the network stack and the physical RAM are perfectly capable of transferring data that fast. Thanks!
Allocate RAM block device faster than Linux kernel can normally allocate memory
.exrc is the configuration file for vi, whereas .vimrc is the config file for vim No. Vim will use the .vimrc file if present, otherwise the .exrc file if present Yes, unless you only put vi-compatible commands in thereFrom the Vim help on exrc: c. Four places are searched for initializations. The first that exists is used, the others are ignored. The $MYVIMRC environment variable is set to the file that was first found, unless $MYVIMRC was already set and when using VIMINIT. - The environment variable VIMINIT (see also |compatible-default|) (*) The value of $VIMINIT is used as an Ex command line. - The user vimrc file(s): "$HOME/.vimrc" (for Unix and OS/2) (*) "$HOME/.vim/vimrc" (for Unix and OS/2) (*) "s:.vimrc" (for Amiga) (*) "home:.vimrc" (for Amiga) (*) "home:vimfiles:vimrc" (for Amiga) (*) "$VIM/.vimrc" (for OS/2 and Amiga) (*) "$HOME/_vimrc" (for MS-DOS and Win32) (*) "$HOME/vimfiles/vimrc" (for MS-DOS and Win32) (*) "$VIM/_vimrc" (for MS-DOS and Win32) (*) Note: For Unix, OS/2 and Amiga, when ".vimrc" does not exist, "_vimrc" is also tried, in case an MS-DOS compatible file system is used. For MS-DOS and Win32 ".vimrc" is checked after "_vimrc", in case long file names are used. Note: For MS-DOS and Win32, "$HOME" is checked first. If no "_vimrc" or ".vimrc" is found there, "$VIM" is tried. See |$VIM| for when $VIM is not set. - The environment variable EXINIT. The value of $EXINIT is used as an Ex command line. - The user exrc file(s). Same as for the user vimrc file, but with "vimrc" replaced by "exrc". But only one of ".exrc" and "_exrc" is used, depending on the system. And without the (*)!
I know from experience that the ~/.exrc file can be used to configure vim. I also know that the ~/.vimrc file can be used for the same purpose. However, If I use .exrc to configure vim, this leads to problems on systems where vi is installed rather than vim. Namely, vim supports extra features that vi does not; and when you try to use them in vi, vi complains. My questions are:What is the difference between .exrc and .vimrc? If both are present, then are both used? Is it bad practice to use the .exrc file to configure vim?
What is the difference between .exrc and .vimrc?
I would assume the potential issue is that ^ is quite a valid character in file names, so the meaning of a pattern containing it changes based on if the option is set or not:

$ touch 'foo' 'bar' '^foo' '^bar'
$ ls ^foo*
^foo
$ setopt extendedglob
$ ls ^foo*
bar  ^bar  ^foo

A standard shell would take the ^ as an ordinary character, so the feature is probably disabled by default for compatibility. As long as you remember what options are enabled, i.e. know how the shell interprets the globs, leaving extended_glob enabled shouldn't be a problem in interactive use. For scripts that don't expect it but use weird filenames, it would be an issue, but non-interactive shells shouldn't read .zshrc, so setting it there should be ok. Just don't set it in .zshenv.

The ksh-style extended globs (@(...|...) etc., setopt kshglob) have a similar issue in that they conflict with how zsh handles parenthesis in globs. @(f|b) means different things depending on if kshglob is set or not.
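If you want the feature only in specific functions or scripts, regardless of what the rc files set, zsh can also scope the option locally; a small sketch (the function name is arbitrary):

myfunc() {
    setopt localoptions extendedglob   # option change is undone when the function returns
    print -- ^foo*
}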
I recetnly came across setopt extended_glob...in order to enable extended globbing which allows for a number of cool wildcard additions, like excluding specific patterns, for example: ls ^foo*...will use ls on every path in your current directory except for patterns that match foo*. I found one tutorial suggesting to put setopt extended_glob inside your .zshrc, but I guess since many zsh config templates miss that option and the option being disabled by default it has some downsides or even side-effects? Or is it absolutely harmless always enabling extended_glob via putting it inside one's .zshrc?
zsh: is there a problem with always enabling extended glob?
zsh does not read .zshrc in non-interactive shells, but it allows you to invoke an interactive shell to run a script:

$ zsh -ic 'type f'
f is a shell function

or you can always source .zshrc manually:

$ zsh -c '. ~/.zshrc; type f'
f is a shell function
I have a script that runs a command via zsh -c. However, when zsh runs, it doesn't appear to load ~/.zshrc. I understand a login shell flag exists, but even zsh -lc <command> doesn't seem to work. How can I get functions, aliases, and variables defined in my ~/.zshrc to populate when running it with zsh -c?
Run .zshrc when passing command via -c
That's more than one question, and each could have a long answer. Briefly:

"If I start a program in the background using & (for example './script &'), what makes this process' execution different than if I ran normally a program that turns itself into a daemon?"
Running a program in the background, it no longer is directly controlled by the terminal (you can't simply ^C it), but it can still write to the terminal and interfere with your work. Typically a daemon will separate itself from the terminal (in addition to forking) and its output/error would be redirected to files.

"Does this simply mean that if I logout the background process will stop and the daemon will keep running?"
The background process could be protected with nohup, but unless its output were redirected, closing the terminal would prevent it from writing, producing an error that likely would stop it.

"I would like to know if there's any risk / if it's bad practice."
Besides the problem of keeping track of the program's output (and error messages), there's the problem of restarting it if it happens to die. A service script fits into the way the other services on the system are designed, providing a more/less standard way of controlling the daemon.
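For completeness, a minimal way to keep such a background job alive after logout while still capturing its output (using the launcher path from the question as an example) would be something like:

nohup python /home/user/launcher.py > /home/user/launcher.log 2>&1 &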
I've read and understood about how you create a daemon process, but from everything I read I never really understood why it needs to be done. I've read that we do the fork - setsid - fork to avoid the process to gain control of a terminal, but what does this mean ? If I start a program in the background using & (for example './script &' ), what makes this process' execution different than if I ran normally a program that turns itself into a daemon ? Does this simply mean that if I logout the background process will stop and the daemon will keep running ? I'm really having trouble understanding the 'gain control of a terminal' thing. The reason this bothers me is because I'm working on an embedded RPi on a robot and I thus need to make programs start on boot. Currently I'm just starting them from rc.local with a command like this su user -c 'python /home/user/launcher.py &' &. I've never had any problem with the program starting on boot (I can even see the process using ps -e when SSHing to the RPi), but I would like to know if there's any risk / if it's bad practice.
Why do we daemonize processes? [closed]
# If PC contains anything, add semicolon and space
if [ ! -z "$PROMPT_COMMAND" ]; then
    PROMPT_COMMAND="$PROMPT_COMMAND; "
fi

# Add custom PC
PROMPT_COMMAND=$PROMPT_COMMAND'CUSTOM_PC_HERE'
Mac's Terminal comes with a default PROMPT_COMMAND that checks the history and updates the current working directory (title of the tab): Add echo $PROMPT_COMMAND to the top of your .bash_profile and you'll see: shell_session_history_check; update_terminal_cwdI want to add my own PROMPT_COMMAND without over-writing the default. The default should come before my custom PROMPT_COMMAND with a semicolon and space to separate the two. Note that some programs (such as IntelliJ and VS Code) don't have a default! So I wouldn't want to include the space/semicolon in that case.
How can I customize $PROMPT_COMMAND without overwriting the default (if present)?
Mouse binding to toggle an action when dragging to screen edge: There doesn't seem to be an obvious way to have Openbox detect dragging a window to the edge of the screen as a <mousebind> action. It might be easiest to basically set up hot corners, such as with behave_screen_edge in xdotool, and use those to trigger the Openbox keybind you've already found. What makes Openbox send windows to other desktops by dragging them to the screen edge? This is set up in <screenEdgeWarpTime>. Example from my rc.xml, in the <mouse> section: <screenEdgeWarpTime>400</screenEdgeWarpTime> <!-- Time before changing desktops when the pointer touches the edge of the screen while moving a window, in milliseconds (1000 = 1 second). Set this to 0 to disable warping -->
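An untested sketch of that hot-corner idea, assuming xdotool is installed and the Ctrl+Win keybinds from the question are already in rc.xml (run this from an autostart script; the delay value is arbitrary):

xdotool behave_screen_edge --delay 500 right exec xdotool key ctrl+super+Right &
xdotool behave_screen_edge --delay 500 left exec xdotool key ctrl+super+Left &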
I am trying to edit the lxde-rc.xml file (in ~/.config/openbox) so I can implement Window snapping like in Microsoft Windows. When a window is dragged to the right edge of the screen, it maximizes to fill the right half of the screen. I don't want to use a tiling wm, but edit the configuration for openbox. I have found code that will do this with keyboard shortcuts: <!-- Fill left half of desktop --> <keybind key="C-W-Left"> <action name="Unmaximize"/> <action name="MoveResizeTo"> <x>0</x> <y>0</y> <height>99%</height> <width>50%</width> </action> </keybind> <!-- Fill right half of desktop --> <keybind key="C-W-Right"> <action name="Unmaximize"/> <action name="MoveResizeTo"> <x>-0</x> <y>0</y> <height>99%</height> <width>50%</width> </action> </keybind>My current (and also the default) configuration moves a window to the next desktop when it is dragged to the screen edge, so there must already be some kind of binding in the configuration file. However, The only actions in the configuration file that switch desktops are called by Keyboard shortcuts and scrolling on the desktop. I have two questions: What would a mouse binding look like that toggles an action when a window is dragged to the edge of the screen, and: Why is the current behaviour of that action not referenced in lxde-rc.xml? Thanks in advance!
OpenBox Mouse binding for dragging window to screen edge
Thanks for the excellent bug report! I can answer your questions, and as the Stow maintainer I can also fix the issues, but I'd appreciate your feedback from a UX perspective so we can figure out the best fix. Firstly, it's worth noting that --verbose=5 will give you much more detail about the internals of the ignore mechanism, although in this case it would not be sufficient to explain why things are not behaving the way you expect. There are two reasons why neither of your .stowrc files worked:The .stowrc parser splits (option, value) pairs based on the space character, not on =. So a line for an ignore option in that file should start --ignore not --ignore=. The .stowrc parser doesn't automatically strip quotes. When the --ignore option (or any other option, for that matter) is passed via the CLI, the shell will strip the quotes before Stow sees them. That's why it works there.So the combination of these two means that your .stowrc should contain: --ignore carI've tested that, and indeed it works. Now, there are probably valid arguments saying that either or even both of these points are actually UX bugs. I certainly agree that they do not provide an intuitive UI. The question is whether the behaviour should be changed, or whether it's better to simply make this clearer in the docs. My current thinking is that based on Postel's Law, the parser should accept splitting on both space and =, but it should not strip quotes because what if the user really did want to ignore 'car' rather than just car? Also there is an existing and related bug report that options with spaces break in stowrc, so this should be taken into account when implementing any fix. (I'll update that bug after posting this answer.) I'd welcome your opinions on these. Finally, I believe your .stow-local-ignore did not work because you placed it in the stow directory rather than in the a/ package directory. The documentation about this seems clear to me, so I think it's fair to write this one off as pilot error. However if you have any suggestions for how to make the docs clearer than I'm all ears. Thanks again! BTW in future you may want to consider sending bug reports to the bug-stow mailing list (or to the horrible but ethically correct Savannah bug tracker or to the less ethical but more usable github issue tracker) and help requests to the help-stow mailing list. Yes, I know that these are too many options for such a small and quiet project; that's a TODO for another day ...
Summary --ignore=<regex> lines that I put in my .stowrc does not work. When running Stow, it says "Loading defaults from .stowrc", yet it has no effect. But passing the --ignore=<regex> lines to the command directly works. Problem Assume this directory: user@user-machine:~/test-stow/stow$ tree -a . ├── a │ └── car └── .stowrc1 directory, 2 filesContents of ./.stowrc: --ignore='car'So my expectation is that running the command stow --verbose=3 a/ while in that directory is equivalent to running stow --ignore='car' --verbose=3 a/ if the ./.stowrc file wasn't there. Now I run: user@user-machine:~/test-stow/stow$ stow --verbose=3 a Loading defaults from .stowrc stow dir is /home/user/test-stow/stow stow dir path relative to target /home/user/test-stow is stow cwd now /home/user/test-stow cwd restored to /home/user/test-stow/stow cwd now /home/user/test-stow Planning stow of package a... Stowing contents of stow/a (cwd=~/test-stow) Stowing stow/a/car LINK: car => stow/a/car Planning stow of package a... done cwd restored to /home/user/test-stow/stow Processing tasks... cwd now /home/user/test-stow cwd restored to /home/user/test-stow/stow Processing tasks... doneNote that this does create the symlink to ./car, despite the ignore line in ./.stowrc. Now I undo the operation by running stow -D --verbose=3 a/: user@user-machine:~/test-stow/stow$ stow -D --verbose=3 a/ stow dir is /home/user/test-stow/stow stow dir path relative to target /home/user/test-stow is stow cwd now /home/user/test-stow Planning unstow of package a... Unstowing from . (cwd=~/test-stow, stow dir=stow) Unstowing stow/a/car car did not exist to be unstowed Planning unstow of package a... done cwd restored to /home/user/test-stow/stow cwd now /home/user/test-stow cwd restored to /home/user/test-stow/stow Processing tasks...If I delete everything in ./.stowrc and run stow --verbose=3 --ignore='car' a/, I get a different result: user@user-machine:~/test-stow/stow$ stow --verbose=3 --ignore='car' a/ stow dir is /home/user/test-stow/stow stow dir path relative to target /home/user/test-stow is stow cwd now /home/user/test-stow cwd restored to /home/user/test-stow/stow cwd now /home/user/test-stow Planning stow of package a... Stowing contents of stow/a (cwd=~/test-stow) Planning stow of package a... done cwd restored to /home/user/test-stow/stow Processing tasks...Now a symlink to ./car was not created, as expected and desired. What about $HOME/.stowrc? Placing the .stowrc file in the home directory instead of $HOME/test-stow/stow has the same effect; a symlink to the file car still gets made. Ignore lists Having a file $HOME/test-stow/stow/.stow-local-ignore with the content "car" instead of the .stowrc file doesn't work, either. The symlink to the file named car still gets created.GNU Stow version: 2.2.0 Perl version: perl 5, version 18 Update Here is my reply to Adam Spiers' answer.
Stow doesn't use the "ignore" option given in the rc file
The [[ ... ]] syntax isn't valid for /bin/sh. Try:

if [ -e /usr/src/an-existing-file ]
then
   echo "seen" >> /etc/rclocalmadethis
fi

Note that sometimes it works because /bin/sh -> /bin/bash or some other shell that supports that syntax, but you can't depend on that being the case (as you see here). You can run ls -l /bin/sh to get to know this info, for instance:

lrwxrwxrwx 1 root root 4 Jul 18 2019 /bin/sh -> dash
Is it possible to have a conditional within /etc/rc.local? I've checked many Q&As and most people suggest running chmod +x on it, but my problem is different. It actually does work for me without conditionals, but doesn't otherwise. #!/bin/shif [[ -e /usr/src/an-existing-file ]] then echo "seen" >> /etc/rclocalmadethis fiHere's the weird error I see when I run systemctl status rc-local.service: rc.local[481]: /etc/rc.local: 3: /etc/rc.local: [[: not foundAnd here's my rc.local in the exact same location ls -lah /etc/: -rwxr-xr-x 1 root root 292 Sep 19 09:13 rc.localI'm on Debian 10 Standard.
Is it possible to have conditionals in /etc/rc.local?
If you start the program from rc.local, then you cannot log in to a shell and type Ctrl-C to stop it. The reason is that the program was not started from the shell that you're logged into. Instead, find the process ID (pid) of the program and use the kill command to send the process a signal, causing it to terminate. For example, at a console (in a terminal window or logged in via ssh):

ps aux | grep 'the-name-of-your-program'

The number in the second column is the pid. Use that pid to send the process a termination signal:

kill -TERM [put-your-pid-here]

A process may choose to ignore the TERM signal, so run the ps pipeline again. If you still see the same pid, then send the kill signal:

kill -KILL [put-your-pid-here]
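If the command line of your script is distinctive enough, pgrep/pkill can shorten the lookup (the script name here is just a placeholder for whatever you called your picture-taking program):

pgrep -af take_picture.py
sudo pkill -TERM -f take_picture.py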
I was trying to create a program that runs at start-up that takes a picture every 10 seconds (In a infinite loop) on my raspberry pi but I discovered I had made a mistake but couldn't Ctrl+C out of it. Is there a way to escape? (I did try to go to a different workspace but login prompt wouldn't show.)
How can I kill a program started from rc.local when Ctrl-C doesn't work?
/etc/init.d/rcS allows you to run additional programs at boot time. Its typical use is to mount additional filesystems (only the root filesystem is mounted at that point) and launch some daemons. Usually rcS is a shell script, which can easily be customized on the fly. Typical distributions make rcS a simple script that executes further scripts in /etc/rcS.d (the exact location is distribution-dependent); this allows each daemon to be package with its own init script. The file /etc/rc.local is also executed by rcS if present; it is intended for commands written by the system administrator. With the traditional SysVinit implementation of init, /etc/init.d/rcS is listed in /etc/inittab (the sysinit setting). With BusyBox, you can also supply an inittab (if the feature is compiled in) but there is a built-in default that makes it read /etc/init.d/rcS (among other things).
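To make that concrete (a minimal sketch, not taken from any particular distribution), with BusyBox you can state the rcS entry explicitly in /etc/inittab:

::sysinit:/etc/init.d/rcS
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
::ctrlaltdel:/sbin/reboot

and an rcS that runs numbered scripts in order, which answers the "particular order" part of the question:

#!/bin/sh
# run the scripts in /etc/rcS.d in lexical (S01..., S02..., ...) order
for s in /etc/rcS.d/S*; do
    [ -x "$s" ] && "$s" start
done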
I'm using embedded Linux. I have compiled the kernel without an initramfs and the kernel boots fine, but it complained that the rcS file was not found, so I have put one at /etc/init.d/rcS. My rcS file looks like:

#!/bin/sh
echo "Hello world"

After the file system is mounted by the kernel it prints Hello world. Can anyone tell/explain me why this file is required, and how I could start the start-up scripts in a particular order? I am using a Raspberry Pi with busybox and it works fine, but I got stuck in the startup.
Why is rcS required after file system is mounted by the kernel?
The solution was actually pretty simple. Within my ~/.config/fish/config.fish file, I just needed to drop the "$" from the eval statement. So it would look like this:

# uses dircolors template
eval (gdircolors ~/.dircolors/dircolors.256dark)

# Aliases
alias ls='gls --color=auto'
I just recently switched to the fish shell from bash and I am having trouble sourcing my dircolors template file to get custom colors to appear for certain file extensions. In bash, I was sourcing a template from ~/.dircolors/dircolors.256dark. The dircolors.256dark template has different file types mapped to different colors. For example, when I use ls, all .wav files will appear in orange. This was being sourced in my ~/.bash_profile like so: # uses dircolors template eval $(gdircolors ~/.dircolors/dircolors.256dark)# Aliases alias ls='gls --color=auto'However, fish doesn't really use an rc file but instead sources from a config.fish, file but the syntax for certain operations is different in fish. I'm trying to figure out how I would be able to accomplish this in fish. I like being able to visually distinguish different file types by color so this could potentially be a deal breaker for me if there is no way of doing this in fish. P.S. For clarity, I am not trying to simply change the colors of directories or executable files, but files with different file extensions. For example if I did ls in a directory, I would get the following: my_file.py # this file would be green for example my_file.js # this file would be yellow my_file.wav # this file would be orangeEDIT: I used homebrew on macOS for dircolors.
Using ~/.dircolors in fish shell
You can use command, e.g.:

command -v vim

It is a shell built-in command (zsh, bash, ksh, but not tcsh). In tcsh you can use this:

~> sh -c 'command -v vim'
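In a Bourne-style rc file, the vi-to-vim alias from the question could then be guarded like this (tcsh would need its own syntax for the same test):

if command -v vim > /dev/null 2>&1; then
    alias vi=vim
fi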
I want to know how to check whether a command is installed. Specifically, I want to alias vi to vim on all machines where vim is a valid command. And I want to keep my *rc files generic. But some OSes don't come with vim installed by default. And in others, vi is Vi IMproved, yet vim isn't a valid command. I figured I could do this (bash style): #If the version info from vi includes the phrase "Vi IMproved", then vi IS vim. vimInfo=$(vi --version 2> /dev/null | head -n 1 | grep -Fi 'VI IMPROVED') if [[ -z $vimInfo ]]; then if [[ -f $(which vim) ]]; then #if the value returned by 'which' is a valid directory, then vim is installed alias vi='vim' else #vim is not installed echo "Vi IMproved (vim) is not installed"; fi fiBut is there a better way? And is the which command guaranteed to be present on all unix-like machines? P.S. I have *rc files for nearly every popular shell (bash, tcsh, zsh, ksh), and I intend to apply the solution in all of them. Therefore, the solution should not be shell-specific. The syntax can be, though.
How to check if vim is installed?
The most effective way to troubleshoot a detached tmux script like this is to set the remain-on-exit option in the triggering script:

#!/bin/bash
# this script is called "sess"
tmux new-session -d -s sess1

# this statement is a life-saver for tmux detached sessions
tmux set-option -t sess1 set-remain-on-exit on

# In my case running the script in a new window worked
tmux new-window -d -n 'nameofWindow' -t sess1:1 'sudo /home/pi/bin/script.py'

exit 0

This script was called from rc.local and the Pi was rebooted. After the reboot, attaching the session with sudo tmux a gives a tmux session with two windows:

the initial, empty one created by tmux new-session -d -s sess1, and
another one from the tmux new-window command, which can be opened with CTRL+B 1 since it was created as sess1:1 (note: hotkeys may vary per user; the default tmux prefix is CTRL+B).

Inference: if the script ends with an error, the window will show you where the error was (in my case, errors in my Python script) and at the bottom it will show "Pane is dead". So the tmux session had been exiting because of errors in the script, without leaving any feedback, which is why nothing showed up in the /tmp/tmux.log mentioned above. Hence it is always recommended to use set-remain-on-exit on when running scripts with tmux in detached mode, in case there are faults in the script.
I have a tmux triggering script as mentioned below, running on Raspbian Wheezy 7.10:Step 1 #!/bin/bash # this script is called "sess"tmux new-session -d -s sess1 'sudo /home/pi/bin/myscript.py' exit 0I have checked the running script as follows : first by running the python script sudo /home/pi/bin/myscript.py and then by typing the tmux command as mentioned above tmux new-session -d -s sess1 'sudo /home/pi/bin/myscript.py'. Both the times the script runs.Since if a User can type and run this scripts, it is a safe assumption that the complete thing can be written as a bash script. Hence the above mentioned script 'sess'Step 2I have give this file executing rights through chmod +x /home/pi/bin/sessStep 3I have also tried to run the script using rc.local as the following: # in the rc.local file /home/pi/bin/sess & exit 0The rc.local file gets triggered for a fact since I set WLAN parameter on boot for my Pi to join an Ad-Hoc Network.I can clearly verify this since I can ssh into my Pi.Observations: Upon reboot the script is not triggered. This can be verified through the tmux ls command which says Connection to Server Failed. I have also verified using sudo tmux ls incase if the superuser has the tmux session but the Output is same.Step 4I tried running the script in crontab using: sudo crontab -u pi -e## inside the crontab@reboot /home/pi/bin/sess &I also tried creating a cron job for the superuser sudo crontab -e@reboot /home/pi/bin/sess &Observations: Upon reboot the script is not executed. Step 5I created a sub-shell in the rc.local to capture any activity of the script being triggered # in the rc.local file (/home/pi/bin/sess &) > /tmp/tmux.logObservations upon reboot and cat /tmp/tmux.log there is nothing inside the file. The file tmux.log does get created though Inferences Ironically, if do something like sudo /etc/rc.local or sudo ~/bin/sess while i am logged in the script gets triggered perfectly since I can actually attach the session using sudo tmux a and also see the listing sudo tmux ls But since it cannot run on boot time the purpose is useless if not triggered on boot. I have also checked the environment variables $PATH which actually does show /home/pi/bin in it. I have also tried using the complete path to tmux in all my scripts since if the environment variables might not be sorted. But no Luck $ which tmux $ /usr/bin/tmuxIronically, If I follow such a step on my Ubuntu 14.04 LTS laptop the script gets triggered through my rc.local fileFurther optionsMaybe try an init.d/ daemon-script but not sure if an rc.local and a crontab can't handle this then maybe a daemon also wont I have no idea if a ~/.tmux.conf is any good.
Cannot trigger tmux script on boot
Statically linking used to be the only way to load a module, which I think is the primary reason for having options like COMPAT_LINUX. Also, prior to loader, it used to be the only way to get FreeBSD the drivers necessary to mount the root file system and boot. Nowadays, I don't think there is any significant benefit to statically linking in a module if it can be easily loaded at runtime. I don't think you will see any benefit in performance by statically linking Linux compatibility support, but some users still swear by it. I would avoid it just because of the inconvenience of recompiling a kernel for little to no perceived performance gain.
I'm curious: What, exactly are the benefits to statically linking modules into the kernel rather than loading through rc.conf, etc? For example: To add Linux emulation, I could add linux_enable="YES" to /etc/rc.conf, or I could link it into the kernel by adding options COMPAT_LINUX to my kernel config. Is there actually an advantage to this? If so, what?
Difference between rc.conf, loader.conf and static kernel linking in FreeBSD
Windows Subsystem for Linux 1 (WSL 1) is just a compatibility layer for running Linux binary executables on Windows. It does not provide much more functionality beyond that. Especially in your case you encountered two unsupported components / functions: Runlevels WSL does not run as a separate instance of an operating system so there is no simple way of supporting bootstrap and service managing systems like init or systemd as they are present on real Linux systems. Consequently today (2019-02) Ubuntu for WSL still does not support runlevels. Linux kernel This is the fundamental limitation in your case. VirtualBox as a hypervisor needs to integrate with the operating system's kernel (using kernel modules). WSL is just a compatibility layer. There is no real Linux running (no Linux kernel). Consequently VirtualBox cannot compile its kernel modules for WSL. If you want to run VirtualBox on a Windows machine, install the Windows version. You cannot run the Linux version inside WSL 1. Update for WSL 2 Since the time the original answer was written WSL 2 came into existence. The architecture of WSL 2 is very different. It runs full Linux kernel inside a Hyper-V virtual machine. Unfortunately as of today (2022-01) runlevels are still not supported in the default WSL Ubuntu 20.04. On Windows 11 nested virtualization (running hypervisor like KVM inside WSL 2) is supported both on Intel and AMD CPUs since WSL build 20175. Maybe with some tweaking it could be possible to run VirtualBox inside WSL 2. It seems this combination is officially supported neither by Oracle nor by Microsoft.https://docs.microsoft.com/en-us/windows/wsl/release-notes#build-20175 https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig
Issue

Many apt-get installs are failing because the system can't determine the current runlevel.

Background specs:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.1 LTS
Release: 16.04
Codename: xenial
$ uname -a
Linux systemName 4.4.0-43-Microsoft #1-Microsoft Wed Dec 31 14:42:53 PST 2014 x86_64 x86_64 x86_64 GNU/Linux

Explanation

I am trying to install virtualbox on WSL and I got the following error:

$ VBoxManage --version
WARNING: The character device /dev/vboxdrv does not exist.
Please install the virtualbox-dkms package and the appropriate headers, most likely linux-headers-Microsoft.

I solved this by following these steps, i.e. dpkg-reconfigure virtualbox-dkms. But then I got the following:

dpkg: warning: version '*-*' has bad syntax: version number does not start with digit
It is likely that 4.4.0-43-Microsoft belongs to a chroot's host
Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed.
invoke-rc.d: could not determine current runlevel

How can I get invoke-rc.d to return the correct runlevel?
Windows Subsystem for Linux is unable to determine current runlevel
rc is typically not used by Linux distributions but is used in BSD.
rc.local is used to be able to execute additional commands during the startup without having to add symlinks.
rc.sysinit seems to be Red Hat specific and is executed very early in the process. It is executed as one of the first scripts, while rc.local is executed later.

"Also I can find S99local -> softlink for rc.local in 2,3,4 and 5 runlevels. Does it mean that rc.local won't run on runlevel 1?"
Correct, that means that S99local, which is a symlink to /etc/rc.local, will be one of the last scripts executed when entering runlevels 2, 3, 4 and 5. It won't get executed for runlevel 1, as 1 is the single-user runlevel, typically used for rescue/maintenance work.
I tried to display the list of startup scripts for the current runlevel at bootup. I wrote the following code. rl=`runlevel | cut -d " " -f2` ls /etc/rc.d/rc$rl.d/S* | cut -d "/" -f5 sleep 10It's working if I put this code in rc.local file. But it's not working if I put it in rc file or in a separate script file abc in /etc/init.d and by creating softlinks in runlevel directories. But simple commands like follows are able to run in all the methods. ls /etc/init.dDo some commands like runlevel or piping won't work unless some of the scripts have started? Or is there something else? And if I put my code in rc file, my code runs before and after reboot. So what's the difference between rc, rc.local and rc.sysinit files? Where exactly I need to edit those files?Also I can find S99local -> softlink for rc.local in 2, 3, 4 and 5 runlevels. Does it mean that rc.local won't run on runlevel 1?
What is the difference between rc, rc.local and rc.sysinit?
As @VincentNivoliers said in his comment, your issue comes from the line mouse=a. It enables the mouse in all modes of vim, i.e. letting you put the cursor where you click; a means this is active in all modes. If you don't want vim to care about your mouse, just set mouse= (no value). Then you can use your mouse to copy'n'paste from your clipboard as in a terminal. From the vim documentation:

The mouse can be enabled for different modes:
    n   Normal mode
    v   Visual mode
    i   Insert mode
    c   Command-line mode
    h   all previous modes when editing a help file
    a   all previous modes
    r   for |hit-enter| and |more-prompt| prompt
Am using an open source .vimrc file from GitHub, but it is screwing up my default mouse right click, copy-paste actions. Whenever I do a right click, it enters into the visual mode and I am having a hard time, doing a copy-paste when I'm inter-working with my Windows machine. Let me know what .vimrc config lines to delete. Am running a Red Hat Enterprise Linux Server release 5.4 (Tikanga)
My .vimrc file disabled the copy/paste action using mouse right click!
The best way is probably to create your own rc-script that you will use instead of the "official" one. Otherwise, if you check your rc-script, it probably includes an external "config" file. The include may look like this:

. /etc/default/mydaemon-config

so you can edit /etc/default/mydaemon-config and do something like:

export LD_PRELOAD=whateveryouwant

But be careful, it may not be what you want, because every process started from the script will have that LD_PRELOAD configuration. Otherwise, the original script may have something like:

DAEMON=/usr/bin/mydaemon

so you might be able to change it from /etc/default/mydaemon-config with:

DAEMON="LD_PRELOAD=whateveryouwant $DAEMON"

This depends on your original rc-script, which we don't have, so it's only speculation... Anyway, these are all workarounds, and IMHO you should rather look for a solution that avoids using LD_PRELOAD in the first place.
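Another common workaround, if the rc-script only points at a binary through a variable like DAEMON, is a small wrapper script that sets the environment and then execs the real daemon (all paths here are hypothetical):

#!/bin/sh
# /usr/local/bin/mydaemon-wrapper
export LD_PRELOAD=/usr/local/lib/mypreload.so
exec /usr/bin/mydaemon "$@"

and then have DAEMON point at the wrapper in the config file.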
I have an application that needs a modified LD_PRELOAD. I want to start the application using the originally provided rc script, so I can benefit from an automatically updated rc script on an update of the application. I can't modify the original rc script of course, because any change would be lost on the next update. So, is there maybe some system settings like: If starting application X, use a modified LD_PRELOAD? Or would my best way really be to copy the original rc script, modify it and use the modified rc script?
Automatically start an application with a modifed LD_PRELOAD?
Add the commandline to 'startup applications'. This worked for me (at least on Ubuntu 18.04).
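If you would rather set it up from a terminal, the same thing can be done with an XDG autostart entry, which most desktop environments honour (the file name and entry values are just a sketch):

cat > ~/.config/autostart/rescuetime.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=RescueTime
Exec=rescuetime
EOF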
I have installed rescuetime on debian 9. It requires the command rescuetime to be run in a terminal, this just keeps running rather than running and closing (it adds an icon into the tray at the bottom left of the screen). I'm having some difficulty getting this to run on startup. I have tried crontab and added @reboot rescuetime Also I've tried adding an rc.local file #!/bin/sh -esh 'rescuetime.sh' exit 0rescuetime.sh #!/bin/sh<CR> rescuetimeNeither of these options work. How do I get rescuetime to run on startup.
How to automatically start Rescuetime on startup (tried crontab and rc.local)
You can add an ssh key file using the ssh config. The default for all users is /etc/ssh/ssh_config; the one for the current user is ~/.ssh/config. Example of a current-user ssh config with per-host keys:

## Home nas server ##
Host nas01
    HostName 192.168.1.100
    User root
    IdentityFile ~/.ssh/nas01.key

## Login AWS Cloud ##
Host aws.apache
    HostName 1.2.3.4
    User wwwdata
    IdentityFile ~/.ssh/aws.apache.key

You can read more here
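Since the goal is a git pull from rc.local specifically, another option that avoids touching the ssh config is git's own environment variable (available in git 2.3 and later; the key path is the one from the question):

GIT_SSH_COMMAND="ssh -i /home/forge/.ssh/otherkey -o IdentitiesOnly=yes" git pull origin master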
I have a Git repo that is authenticated with an SSH key - the key is not the standard id_rsa. I can run: eval $(ssh-agent -s) ssh-add /home/forge/.ssh/otherkeyThen git pull origin masterThis is working. The server needs to do a git pull on boot. So I have code in rc.local that pull the repo it's working only when the key is default id_rsa but not when the key is different. If I add it in bashrc then it does add the key and work when I log in, but not immediately from boot. How can I add an alternative SSH key than id_rsa for git to use, that can be instantiated before my git pull command in rc.local? Thanks
Add an SSH key on boot
This is mentioned in the FreeBSD FAQ's System Administration section. See especially section 10.5, which is aptly named I made a mistake in rc.conf, or another startup file, and now I cannot edit it because the file system is read-only. What should I do?. 10. System Administration 10.1. Where are the system start-up configuration files? The primary configuration file is /etc/defaults/rc.conf which is described in rc.conf(5). System startup scripts such as /etc/rc and /etc/rc.d, which are described in rc(8), include this file. Do not edit this file! Instead, to edit an entry in /etc/defaults/rc.conf, copy the line into /etc/rc.conf and change it there. For example, to start sshd(8), the included OpenSSH daemon: # echo 'sshd_enable="YES"' >> /etc/rc.confAlternatively, use sysrc(8) to modify /etc/rc.conf: # sysrc sshd_enable="YES"To start up local services, place shell scripts in the /usr/local/etc/rc.d directory. These shell scripts should be set executable, the default file mode is 555.
I am running a single-user FreeBSD and I am trying to edit rc.conf but it appears to be read-only for some reason. I can't change it from the root account. Indeed, id gives: uid=0(root) gid=0(wheel) groups=0(wheel),5(operator) Trying to mount with mount -u -w does not help either.
FreeBSD: /etc/rc.conf persistently read-only
An option is to use symbolic links. For example, say I have my git checkout in ~/.dotfiles. I might have:

.vimrc -> ~/.dotfiles/vimrc
.bashrc -> ~/.dotfiles/bashrc
.bash_login -> ~/.dotfiles/bash_profile
...

I would not, personally, check my home directory itself into the repo.
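A tiny bootstrap script in the repo can then recreate those links on a new machine (the file names here are just examples; list whatever you track):

#!/bin/sh
# run after cloning the repo into ~/.dotfiles
cd "$HOME/.dotfiles" || exit 1
for f in vimrc bashrc bash_profile; do
    ln -sf "$PWD/$f" "$HOME/.$f"
done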
What is the proper way to checkout a bunch of .*rc files into a home directory? I've seen lots of github repos online and people usually name them dotfiles, and I guess they get checked out into their home directory; but what I don't understand is... How does one keep their other home files (specific to that machine) separate from their .*rc related files? For instance, you would not want to store /home/<username>/Documents in the dotfiles git repo, but you would want to store .vimrc, /home/vim in the repository. I'm aware of using .gitignore to ignore files for a repository, but I don't want to add an entry to .gitignore every single time I add a file to /home/<username>/Documents. Is there a way to do this the opposite way, that I only specify the files and directories related to my dotfiles project to be included in the repository? Also, if I create a new feature branch for something that I'm already developing, and I check it out, I don't want it to "blow away" the non-dotfiles folders etc when I check it out, or have git complain that the working directory isn't clean before cutting a new branch. Would this somehow require the use of git submodules? What I'd really like to do with this is be able to get on a new Linux box and with the required dependencies installed check out a common version of my .*rc files and related files from my repo, so if I fix it on one, I can just pull the changes to the other machines.
Proper way to checkout home directory .rc files from a git repo?
You can test whether the output of the script (i.e. rc file) is a terminal or not; if it is, it should be safe to output text, and if it is not a terminal, don't output anything:

if [ -t 0 ]; then
  # check your jobs here and print any info you want to see
fi
Recently my Unison started throwing up some strange error whenever I tried to sync between my laptop and my PC. I realized that I had added a line in bashrc that would print my pending tasks whenever I would open a terminal. The line added in my bashrc: task list #this command comes from a small utility called taskwarriorThe error is here: Received unexpected header from the server: expected "Unison 2.40\n" but received "\nID Proj Age Description\n-- -------- --- -----------------------------\n 2 11d Do the research work\n 3 Life 11d Get stickynotes from stationary\n 1 Technical 11d Fix the error\n\n3 tasks\n", which differs at "\n". This can happen because you have different versions of Unison installed on the client and server machines, or because your connection is failing and somebody is printing an error message, or because your remote login shell is printing something itself before starting Unison.As mentioned in the error log, my login shell is printing something itself before starting Unison. This is indeed the root of the problem. So, now I have 2 questions:How do I make my bashrc to print "task-list" message AFTER the Unison header? Alternatively, can I make the ssh sessions to load separate RC file so that the "task-list" is not printed at all? Will it be safe to print anything at all? I mean if I am somehow manage to print my task-list after the Unison header, is their any chance of data corruption during syncing, due to the additional information in the header? PS: Unison uses ssh for communication between the two systems.
Separate bashrc file for ssh sessions to avoid Unison Errors
The environment variable SSH_TTY seems to be set only when sshing, not when scping. So the following suffices (at least in my testing):

if [ -n "$SSH_TTY" ]; then /usr/bin/neofetch; fi

(For what it's worth, I guessed this by looking at the output of env | grep -i ssh.)
I'd like to launch neofetch (a small utility that displays a banner) each time I log into a remote server via OpenSSH. So, I just added /usr/bin/neofetch into my ~/.ssh/rc file, and it works fine. The problem is that ~/.ssh/rc is also parsed when I scp into the server. A complete scp command works just fine, there is however a problem when I try to use the autocomplete feature of scp, when I type <Tab><Tab> so it displays the files/folders available on the remote server, example : $ scp remote-host:/t <TAB><TAB> \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\`\\\"\\\"\\\"\\\ \\\ \\\$\\\$\\\:\\\ \\\ \\\ \\\ \\\ \\\ \\\$\\\$.\\\ \\\ \\\ ^[\\\[0m^[\\\[31m^[\\\[1m-^[\\\[0m^[\\\[1m\\\ \\\ \\\ \\\ \\\,d\\\$\\\$\\\'\\\ \\\ \\\ \\\ \\\ \\\ \\\`\\\$\\\$b.\\\ \\\ \\\,\\\$\\\$P\\\'\\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\`\\\$\\\$\\\$.\\\ \\\ \\\$\\\$P\\\ \\\ \\\ \\\ \\\ \\\ d\\\$\\\'\\\ \\\ \\\ \\\ \\\ ^[\\\[0m^[\\\[31m^[\\\[1m\\\,^[\\\[0m^[\\\[1m\\\ \\\ \\\ \\\ \\\$\\\$P\\\ \\\'\\\,\\\$\\\$P\\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\,ggs.\\\ \\\ \\\ \\\ \\\ \\\`\\\$\\\$b\\\:\\\ \\\ \\\$\\\$\\\;\\\ \\\ \\\ \\\ \\\ \\\ Y\\\$b._\\\ \\\ \\\ _\\\,d\\\$P\\\'\\\ ^[\\\[0m^[\\\[1m\\\ \\\`\\\$\\\$b\\\ \\\ \\\ \\\ \\\ \\\ ^[\\\[0m^[\\\[31m^[\\\[1m\\\"-.__\\\ ^[\\\[0m^[\\\[1m\\\ \\\ \\\`Y\\\$\\\$\\\ (...)Usually $ scp remote-host:/t <TAB><TAB> shows me the files/folders starting with /t (for example /tmp), but now it executes the neofetch banner. Is there a way to distinguish $ ssh from $ scp in ~/.ssh/rc (to launch neofetch only when I ssh into the server, not when I scp into it) ? Note : I don't want to launch neofetch each time I launch bash, nor each time I launch a login shell, so putting it in /etc/bash.bashrc or in /etc/profile is not an option. I only want to launch it after an SSH connection. I did some research and tried a few things :Inspired by this post, I tried : if [ -t 0 ]; then /usr/bin/neofetch; fi and if tty > /dev/null; then /usr/bin/neofetch; fi But it's not working (neofetch is never launched, not even after an $ ssh)Inspired by that post, I also tried to use the $- environment variable to distinguish between interactive and non interactive sessions, but it doesn't work either, because ~/.ssh/rc is parsed by dash, not by bash (and $- is a bash variable)I found however a working solution (well, sort of...). It was inspired by this post :On the server, in ~/.ssh/rc, I put : if [ ! "$LC_SCP" = "yes" ]; then /usr/bin/neofetch; fiOn the client, I have to set an LC_SCP environment variable before the $ scp :$ export LC_SCP=yes$ scp -o SendEnv=LC_SCP remote-host:/t<TAB><TAB> (works, doesn't launch neofetch)It works, but it's cumbersomee. Isn't there a better way to distinguish between ssh and scp sessions in ~/.ssh/rc ?
how to distinguish ssh from scp in ~/.ssh/rc?
Leave /etc/rc.conf as is. It even has a prominent header saying DO NOT EDIT THIS FILE!!, twice. Instead, modify /etc/rc.conf.local. But you don't need to do even that.

Tested on OpenBSD 6.1-stable (amd64) running in a VirtualBox VM (this installs kde4-4.14.3 and enables KDM):

$ doas pkg_add kde4
$ doas rcctl enable kdm
$ doas reboot

KDM will start upon reboot. KDM will start the KDE desktop environment when you log in. You may also start KDM through doas rcctl start kdm without rebooting. If you already have xenodm(1) (previously known as xdm) running, stop it and disable it first, before starting KDM:

$ doas rcctl stop xenodm
$ doas rcctl disable xenodm
$ doas rcctl enable kdm

See also rcctl(8).

Regarding /etc/doas.conf (from comments): This is my /etc/doas.conf on my OpenBSD 6.1-stable system:

permit nopass keepenv root as root
permit persist :trusted

It allows root to use doas without password and without resetting the environment (this line is taken straight out of doas.conf(5)), and it allows members of the group trusted (a special group on my system) to use doas with password. To grant usage of doas to a single user myuser, I'd probably use something like

permit persist myuser

as a bare minimum, or

permit setenv { -ENV PS1=$DOAS_PS1 SSH_AUTH_SOCK } :wheel

as suggested by doas.conf(5) (and then add the user to the wheel group). The persist option allows for passwordless doas invocations during five minutes after a successful doas invocation has been done. This option was added in OpenBSD 6.1.
I have installed kde4 (via running # pkg_add kde4) on my OpenBSD 6.0 VM and I would like to automatically boot KDM on startup. I have followed the most applicable guide Google found me, but it didn't work. Specifically adding: kdm_flags=""if [ "X${kdm_flags}" != X"NO" ]; then /usr/local/bin/kdm ${kdm_flags} ; echo -n 'kdm ' fito my /etc/rc.conf does not cause KDM to start on boot for me. Any ideas? My full /etc/rc.conf (which besides the above modification I have not changed since I installed OpenBSD) file can be found here. If it is relevant running startkde4 starts KDE without a problem. /usr/local/bin/kdm does exist.
How to autostart KDM on boot in OpenBSD 6.0?
If you run: zstyle ':completion:*' format 'Completing %d'And try again, you'll see:Completing process-groupJust above that 0. kill 0 does kill the current process group. See https://www.zsh.org/mla/workers/2014/msg00713.html for the rationale, though I'll have to admit neither their explanation nor the code in the diff there makes much sense to me. You can make it go away with: zstyle ':completion:*:kill:*:process-groups' hidden trueOr: zstyle ':completion:*:kill:*:process-groups' hidden allto also make the Completing process-group header go away (see info zsh hidden for details).
I have the following in my .zshrc:
zstyle ':completion:*:kill:*' command 'ps -u $USER -o pid,%cpu,tty,cputime,cmd'
When I press TAB, in addition to the processes being listed, there is always a last line containing the digit 0. Why is that? Can I get rid of it?
zsh completion for kill listing unexpected "0"
The README in that directory states that scripts in that directory are only called once on poweroff (and not on reboot). With a simple test program
#!/bin/bash
LOG=/root/backup.log
date >> $LOG
echo $* >> $LOG
I noticed that one time the program was actually called twice, once without a parameter and once with the parameter 'stop'. I have, however, not been able to reproduce it. I would suggest logging the actual invocation parameters to the program as well, and testing in the script for $1 being stop. It is also more customary to put this program in /etc/init.d as backup and make a link from /etc/rc0.d/K01backup to that script, but that should not influence its operation in any way; the tools that manage such entries work by creating/deleting these links. Based on trying out this basic script, the OP found that there was a leftover editor backup file, /etc/rc0.d/K01backup~, that got executed as well. Putting the script in /etc/init.d/ from the start and linking to it would have prevented this from occurring (independent of whether there was an /etc/init.d/backup~ file or not).
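A minimal sketch of that more customary layout, assuming the script is named backup and only needs to act when invoked with stop (the run-backup command is just a placeholder for the real work):
sudo mv /etc/rc0.d/K01backup /etc/init.d/backup
sudo ln -s ../init.d/backup /etc/rc0.d/K01backup
# inside /etc/init.d/backup, dispatch on the argument the rc system passes in
case "$1" in
    stop) /usr/local/sbin/run-backup ;;   # placeholder for the actual backup commands
    *)    exit 0 ;;
esac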
I have a small backup bash script that I wrote for my computer at work. I have copied the script into /etc/rc0.d/ and called it K01backup so it is executed before anything else upon shutdown. It backs up all the data from my computer (running Ubuntu 14.04LTS) and my working copies and a virtual machine located on a separate internal SSD to an external hard drive and adds the log output to files in each folders. Here is the script: #!/bin/bashLOG="/syncLog" VMORIG="/media/SSDData/VM" PROJORIG="/media/Data/Projects" DESTROOT="/media/ExtData/Backups" LOGVM=${DESTROOT}"/VM"${LOG} LOGPROJ=${DESTROOT}"/Projects"${LOG} ALLORIG="/" DESTALL=${DESTROOT}"/All" LOGALL=${DESTROOT}"/All"${LOG}echo "STARTED" > ${LOGPROJ} date +%d.%m.%Y/%H:%M:%S >> ${LOGPROJ} rsync -avvx --progress --no-whole-file ${PROJORIG} ${DESTROOT} >> ${LOGPROJ} echo "FINISHED" >> ${LOGPROJ} date +%d.%m.%Y/%H:%M:%S >> ${LOGPROJ}echo "STARTED" > ${LOGVM} date +%d.%m.%Y/%H:%M:%S >> ${LOGVM} rsync -avvx --progress --no-whole-file ${VMORIG} ${DESTROOT} >> ${LOGVM} echo "FINISHED" >> ${LOGVM} date +%d.%m.%Y/%H:%M:%S >> ${LOGVM}echo "STARTED" > ${LOGALL} date +%d.%m.%Y/%H:%M:%S >> ${LOGALL} rsync -avvx --progress --no-whole-file --exclude "/media/*" --exclude "/indel/*" ${ALLORIG} ${DESTALL} >> ${LOGALL} echo "FINISHED" >> ${LOGALL} date +%d.%m.%Y/%H:%M:%S >> ${LOGALL}Then I ran sudo chmod +x /etc/rc0.d/K01backup to make it executable. At first the script took roughly an hour to execute and it all worked well. But for a little while now, I can see in the log files that the script started (presumably started again) an hour after I left work and all the files were already up to date, so it only took about a minute to run. Does anybody know what I might have done wrong?
Shutdown script seems to be executed twice
Instead of running mplayer directly from your startup, I would write a script and run that instead. Your script would eventually just run the same mplayer command you have given, but beforehand it can check that your wifi connection is up and working (for example by pinging your router); this gives your script control. It can wait until the connection comes up and then start mplayer. If you start the script from something like rc.local, it will run once at start-up; if you start it from your profile, it will run when you log in. Here's an example script which waits until it successfully pings an IP address before starting mplayer.
#!/bin/bash
RouterIP="128.0.0.1"
TimeOut=2
WaitTime=8
echo "Waiting for network connection"
while ! ping -q -c 1 -w $TimeOut $RouterIP > /dev/null
do
    echo Timeout, waiting $WaitTime seconds
    sleep $WaitTime
    echo "Waiting for network connection"
done
mplayer blah blah
Change the IP address to your router's internal address and correct the mplayer line. Name it startRadio, make it executable, then test it: ./startRadio. Add it to whichever startup script you want, but redirect stdout and stderr to /dev/null and start it as a background process, e.g.
/path/to/your/script/startRadio >/dev/null 2>&1 &
I have a Raspberry Pi without screen/keyboard/mouse that does nothing else than launch a radio stream at startup:
mplayer http://95.81.146.2/fip/all/fiphautdebit.mp3
I have put this command at the end of /etc/rc.local. Unfortunately, 50% of the time the playback doesn't start (maybe because the WIFI wasn't properly connected yet?) and I have to reboot to make it start. How can I check later, over an SSH connection, what happened while rc.local ran? I tried with dmesg, but couldn't see the output of mplayer. Which is the most appropriate script to put such a command in? (rc.local, /etc/profile?)
Launch a command at the end of Linux startup
If you don't do something special, vagrant is a wrapper around VirtualBox. You can get a list of running VirtualBox machines:
vboxmanage list runningvms
and parse the output to get a vmname, then do:
VBoxManage controlvm <vmname> acpipowerbutton
You have to do this as the user that started the VMs. Put a link to the script in /etc/rc0.d and /etc/rc6.d just like other packages do (ls /etc/rc0.d /etc/rc6.d). My script:
# coding: utf-8

import os
import pwd
from subprocess import check_output, CalledProcessError

user_name = 'zelda'


def main():
    os.chdir('/')
    cmd = ["vboxmanage", "list", "runningvms"]
    if os.getuid() == 0:
        cmd = ['su', '-l', user_name, ] + cmd
    try:
        res = check_output(cmd)
    except CalledProcessError:
        return
    for line in res.splitlines():
        if not line.strip():
            continue
        # split on first char
        vmname = line[1:].split(line[0])[0]
        cmd = ["VBoxManage", "controlvm", vmname, "acpipowerbutton"]
        if os.getuid() == 0:
            cmd = ['su', '-l', user_name, ] + cmd
        check_output(cmd)


if __name__ == "__main__":
    main()
At work, I'm running linux, and use vagrant on a daily basis. What I find annoying is that the system often hangs when I reboot/shut down, if I forgot to vagrant halt any virtual boxes I may have fired up. To counter this, I'd like to write a shutdown script, along the lines of: #! /bin/sh cd ~/vagrants/vagrant_1 vagrant halt cd ../vagrant_2 vagrant halt exit 0However, for I'm not too sure if ~ will still be available to me, and even if it is, if the home dir in question will always be the right one, and if the script will have access too it. So I thought I'd do: #! /bin/sh VAGRANT_HOME="~/vagrants/" #or "/home/my_user/vagrants" if [ -d "$VAGRANT_HOME" ]; then cd $VAGRANT_HOMEvagrant_1 vagrant halt cd $VAGRANT_HOMEvagrant_2 vagrant halt fiBut even so, I can't help finding this silly, since I have added a couple of aliasses to my .profile file, including: alias vagranthalt="cd ~/vagrants/vagrant_1 && vagrant halt && cd - && cd ~/vagrants/vagrant_2 && vagrant halt && cd -"Which I'd like to use, but I'm not sure if these aliasses will still be available when my script is executed. I think I'll only need it @ runlevel 6, but might also need to symlink the script to run on runlevel 0, too. Basically, what I'd like to know is this:will existing aliasses be available to me, or not? Is there a user executing this script (and will I therefore be able to use ~ for home? should I make sure I have, at least, read-rights on the vagrant dirs in the script Is there another way to ensure the vagrant boxes are shutdown, that is perhaps slightly easier?
local shutdown scripts (do's and don't's)
There is currently no database software (in the conventional sense) included in the OpenBSD base installation. SQLite was part of the base system, but that too has been put back into the ports system with the release of 6.1. The OpenBSD developers are unlikely to include any software in the base system that is "big" unless it's actually used by the base system. Most database solutions are too big and complicated to meet the OpenBSD criteria of simple and secure.SQLite was removed in 6.1 since mandoc.db(5) no longer needed it.
There are many "built-in" softwares for OpenBSD, ex.: NTP, LDAP, RADIUS, etc., see all (?): https://github.com/openbsd/src/blob/master/etc/rc.conf in the rc.conf file. The question: Currently, 2017 Dec, I cannot find any database software by default, is this true? I know I can install several from the ports, but I am searching one that is shipped with the base (and even knows master-master replication).
Is there any database solution that can do master-master replication by default on OpenBSD?
You need to put the commands into a shell script, make that script executable and then use this script as the command.
<command>/usr/local/bin/volume_up</command>
The contents of /usr/local/bin/volume_up:
#!/bin/sh
pactl set-sink-volume alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.multichannel-output +5% &
pactl set-sink-volume alsa_output.pci-0000_00_1b.0.analog-stereo +5% &
pactl set-sink-volume alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.analog-surround-40 +5%
and make it executable:
chmod +x /usr/local/bin/volume_up
The reason is that Openbox does not execute the contents of the command element in a shell; instead it tries to execute it directly. From the documentation for <command>:
A string which is the command to be executed, along with any arguments to be passed to it. The "~" tilde character will be expanded to your home directory, but no other shell expansions or scripting syntax may be used in the command unless they are passed to the sh command. Also, the & character must be written as &amp; in order to be parsed correctly. <execute> is a deprecated name for <command>.
Another benefit is that you can rewrite the script to also be able to lower the volume:
#!/bin/sh

change_volume() {
    pactl set-sink-volume alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.multichannel-output "$1"
    pactl set-sink-volume alsa_output.pci-0000_00_1b.0.analog-stereo "$1"
    pactl set-sink-volume alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.analog-surround-40 "$1"
}

main() {
    case "$1" in
        up)   change_volume +5% ;;
        down) change_volume -5% ;;
        *)
            printf "volume <command>\n"
            printf "  up  \n"
            printf "  down\n"
    esac
}

main "$@"
This would be saved under /usr/local/bin/volume and would be used like this:
<command>/usr/local/bin/volume up</command>
<command>/usr/local/bin/volume down</command>
I am trying to configure Openbox's rc.xml file in order to manipulate my soundcards with one keypress. Because I have multiple sound cards on my system I have to manipulate multiple sinks at once so I use multiple commands separated with & like this: <keybind key="XF86AudioRaiseVolumen"> <action name="Execute"> <command>pactl set-sink-volume alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.multichannel-output +5% & pactl set-sink-volume alsa_output.pci-0000_00_1b.0.analog-stereo +5% & pactl set-sink-volume alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.analog-surround-40 +5%</command> </action> </keybind>For some reason this won't work in rc.xml. Can anyone help me?
Openbox - multiple commands separated with & for one keypress
According to this answer: unless a command has output or logging already configured, rc.local commands will not log anywhere. If you want to see logs for specific commands, try redirecting the stdout and stderr for rc.local to somewhere you can check. Try adding this to the top of your /etc/rc.local file:
exec 2> /tmp/rc.local.log  # send stderr from rc.local to a log file
exec 1>&2                  # send stdout to the same log file
set -x                     # tell sh to display commands before execution
This will, however, require rerunning the rc.local file.
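To test without rebooting, you can rerun the file by hand and then read the log it now produces (a quick check, assuming you added the lines above and your rc.local is a plain sh script):
sudo sh /etc/rc.local
cat /tmp/rc.local.log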
I (new to Linux) am attempting to add a command to the boot process that will execute an application. The application accepts parameters and prints simple output about what it is doing. To troubleshoot it, how can I see what that output is when it attempts to start up (or doesn't start at all)? When I run the command manually, it works. When I add it to rc.local, I cannot tell why it is not working.
Understanding rc.local and troubleshooting
"Also I like to hide my mouse pointer and my windows borders but don't know how?"
You can append -- -nocursor to your startx to hide the mouse pointer:
exec startx -- -nocursor
There are files ~/.config/openbox/rc.xml and /etc/xdg/openbox/rc.xml for you to edit (ref: http://openbox.org/wiki/Help:Configuration), e.g. near the bottom of those files:
... </menu>
<applications>
  <application class="*">
    <decor>no</decor>
    <position force="yes">
      <x>50</x>
      <y>50</y>
      <monitor>1</monitor>
    </position>
    <size>
      <width>300</width>
      <height>300</height>
    </size>
    <focus>yes</focus>
    <desktop>1</desktop>
    <layer>normal</layer>
    <iconic>no</iconic>
    <skip_pager>no</skip_pager>
    <skip_taskbar>no</skip_taskbar>
    <fullscreen>no</fullscreen>
    <maximized>false</maximized>
  </application>
</applications>
</openbox_config>
The <decor>no</decor> above makes the image app borderless. Adjust <width> and <height> if your image doesn't show at its complete size. You can also adjust the <x> and <y> position of the app. There is more, e.g. comment out the menu tags (there are multiple <context> tags that contain this <menu> entry):
<mousebind button="Right" action="Press">
  <action name="ShowMenu">
    <!-- menu>root-menu</menu -->
  </action>
</mousebind>
This disables the right-click menu (startx -- -nocursor hides the mouse cursor but does not prevent you from right-clicking to open the menu). There is also openbox/menu.xml to customize the right-click menu items, e.g.:
<item label="Run Image app">
  <action name="Execute"><execute>/home/m/img</execute></action>
</item>
You can choose the right-click menu item Reconfigure once menu.xml or rc.xml has been edited, for the changes to take effect. I also posted an answer here on how to solve the autostart issue as a non-root user.
I have Ubuntu Server 16.04. I installed gtk3 and can run my program manually with the command ./img when I am in its directory, /home/m. But when I tried to add this line to my /etc/rc.local file:
/home/m/img &
it didn't work. This is my full rc.local content:
startx
/home/m/img &
exit 0
Then I tried to create a ~/.xinitrc file with this content:
#!/usr/bin/env bash
/home/m/img &
exec openbox-session
Then I made it executable with chmod +x ~/.xinitrc. But I got nothing (it didn't even show my openbox after reboot), so I executed this command too:
ln -s ~/.xinitrc ~/.xsession
After that my openbox came back, but my program didn't start after boot, or at any other time!
My goal is this: when I turn on my board, after boot, it runs my gtk-based program and shows my image. It's something like a kiosk, but a C++ program that should only show an image! How should I do that?
EDIT: I added the line /home/m/img & to my /etc/xdg/openbox/autostart file, and it works after login but doesn't show my image; it shows only a file icon at the center of the screen. But when I go to /home/m/ and run ./img it shows my image in full screen! Why does this happen?
Also I like to hide my mouse pointer and my window borders but don't know how?
EDIT2: This is what I see after boot: And this is what I see after trying this command (an icon appears in the bottom right corner):
/home/m/img &
My ubuntu-server doesn't execute my gtk-based program at startup!
Laziness rules ;) If you don't want to remove the program itself:
sudo update-rc.d -f dnsmasq remove
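As an alternative sketch (not what the command above does): since Stretch boots with systemd, and the dnsmasq package on a typical Stretch install ships a native systemd unit, you may be able to side-step the insserv dependency check entirely by disabling the unit directly:
sudo systemctl stop dnsmasq
sudo systemctl disable dnsmasq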
Background: On my Debian Stretch machine at home, I've noticed my DNS lookup times are pretty slow, and have concluded that the culprit is dnsmasq - since if I take it down, name resolution becomes > 10x faster (no multi-second delays). Now, it's probably some misconfiguration, but I was lazy and wanted to just remove it, since my router has a DNS server which is what dnsmasq is looking at anyway, and that's probably something like a dnsmasq of its own. Anyway, I run: update-rc.d remove dnsmasqand get: insserv: Service dnsmasq has to be enabled to start service apache2 insserv: Service dnsmasq has to be enabled to start service cups-browsed insserv: exiting now! update-rc.d: error: insserv rejected the script headerMy questions:Why would apache2 and cups-browse depend on dnsmasq? Why am I running cups-browsed by default?
Trouble update-rc.d remove'ing dnsmasq on Debian Stretch
You linked your script to K99. When changing runlevel, the K* scripts are called with option stop, and the S* scripts are called with option start, or stop for runlevel 0 (shutdown). They are called in numerical order. So you should remove your K99 links and replace them by K00 links so they are executed first (before the script that actually halts the system!).
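Concretely, the renaming can be done on the symlinks themselves; assuming the links are called K99backup (the question only says they start with K99), it would look like this:
cd /etc/rc0.d && sudo mv K99backup K00backup
cd /etc/rc6.d && sudo mv K99backup K00backup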
Background: I have a Python script that runs (infinitely) from startup in the background of a Ubuntu server. The process is ended by sending a SIGINT which is handled by the script which finishes all jobs before exiting. If anyone were to shutdown the server, thus terminating the script, important data will be lost. I was wondering if there were a way to wait for the script to finish before shutdown, or prevent shutdown altogether (if preventing it'd be nice if it could display a message). Attempt: My Python script (test.py) has a SIGINT handler where when it receives a SIGINT, it finishes all tasks before exiting. I wrote the following shell script: PROCESS=$(pgrep -f 'python test.py') echo $PROCESS kill -2 $PROCESS while kill -2 $PROCESS 2> /dev/null; do sleep 1 doneThis script will continuously send kill commands to the python script until it exits (it works when run). I put the script in the /etc/init.d directory, executed chmod -x on the script, made symlinks to the /etc/rc0.d and /etc/rc6.d directories with names starting with K99. Since scripts in the /etc/rc0.d//etc/rc6.d directory get run at shutdown/reboot, the script should (theoretically) run and wait until the python script finishes to shutdown/reboot. This does not seem to be working. It shutsdown/reboots without finishing the tasks. Help would be greatly appreciated.
Prevent shutdown using shell script run at shutdown
I couldn't get the TERM setting to work from ~/.ssh/rc either. I could get it to work by changing the following in /etc/ssh/sshd_config PermitUserEnvironment yesfollowed by a restart of sshd and taking into account the warning from man sshd_config,PermitUserEnvironment Specifies whether ~/.ssh/environment and environment= options in ~/.ssh/authorized_keys are processed by sshd(8). The default is “no”. Enabling environment processing may enable users to bypass access restrictions in some configurations using mechanisms such as LD_PRELOAD.Then I created the ~/.ssh/environment file and added the line, TERM=ansilogged back in and it worked. EDIT: This won't help much, but setting the TERM on the command line before calling ssh (on Linux) does set the term type on the remote end. TERM=ansi ssh [emailprotected] [emailprotected] ~ $ echo $TERM ansi
Whenever I ssh to my desktop, I change $TERM to ansi so that ssh works better with the Windows terminal. I decided to create ~/.ssh/rc and add TERM=ansi to it. The problem is, after I ssh into my desktop, the terminal type is still msys instead of ansi. Is there a way to fix this?
.ssh/rc not working
If it, as you say, "needs to run at startup on an unprivileged user account", then it will necessarily have access to all files that the unprivileged user account in question has access to. You could create a dedicated unprivileged user account for the purpose of running the script, and set the permissions on the secret key file so that only that dedicated user account can read it. But it sounds like you need to run the program under a specific pre-existing user account, so that might not work for you. There are other solutions, such as running it in a chroot() that has access to the secret key, but whether or not that's viable depends on what it does and what else, exactly, besides the secret key file, it needs access to. You will not need to use sudo in any case, because /etc/rc.local runs as root, so you can su directly to whichever account you ultimately choose to run the program under.
EDIT after clarification of question:
It needs to execute every time a user of any sort logs in.
I see. This is quite different from running it once only at startup using /etc/rc.local as you originally stated! Your best bet in this case will probably be to try to embed the secret key in the binary instead of accessing it as an external file, and have the binary owned by root and executable but not readable by other users (permissions such as rwx--x--x). The users will not be able to get at the key (unless they compromise root on the system) but they can run the binary. If you cannot embed the secret key in the binary then you can make the binary setuid to some user that can access the secret key... but take all the care that goes with writing setuid binaries.
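As a rough illustration of that last idea, here is a setgid variant of the setuid approach (all paths and names are made up for the example): the key is readable only by a dedicated group, and the binary is setgid to that group while staying execute-only for ordinary users.
sudo groupadd keyreaders
sudo chown root:keyreaders /usr/share/myapp/secret.key
sudo chmod 640 /usr/share/myapp/secret.key    # only root and the group can read the key
sudo chown root:keyreaders /usr/bin/myapp
sudo chmod 2711 /usr/bin/myapp                # setgid, and execute-only for everyone else
All the usual caveats about writing setuid/setgid programs apply, as noted above.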
I have a binary file that needs to run at startup on all accounts (including unprivileged user accounts), so a command to run it will be put into /etc/rc.local. The program itself will have only execute permissions so that it cannot be read or modified by an unprivileged user. It is located in /usr/bin. However, it needs to access a secret key when it runs (key is in /usr/share). Is it possible to create a file containing the secret key that will not be readable or writable to all common users, but readable by the program? Could the file take advantage of the program being privileged? Perhaps it could have some sort of setup with the file permissions (chmod)? Or is there a way that that it should be encrypted in some way?
File that is only readable with root privileges
I think you are on the right path but that you expect the rc framework to handle more things automatically than it actually does. It looks like you might be familiar with Practical rc.d scripting in BSD as you touch upon:For instance, stop must know the PID of the process to terminate it. In the present case, rc.subr(8) will scan through the list of all processes, looking for a process with its name equal to $procname. The latter is another variable of meaning to rc.subr(8), and its value defaults to that of command. In other words, when we set command, procname is effectively set to the same value.Your life will become easier if you accept that you do not have a "simple" daemon and look through the next section with the "advanced" daemon. So rather than setting procname to the correct name so it can scan for the PID - simply set the PID file. pidfile is a known entity which rc.subr(8) understands. You are using daemon to detach from the terminal and that handles pid files nicely. So if you add: pidfile="/var/run/${name}.pid"And change your start_cmd: start_cmd="/usr/sbin/daemon -P ${pidfile} -u myuser $command"Then you should be good to go. Another nice article outlining a simple rc script is Supervised FreeBSD rc.d script for a Go daemon - the gist of it is as simple as: #!/bin/sh # # PROVIDE: goprogram # REQUIRE: networking # KEYWORD:. /etc/rc.subrname="goprogram" rcvar="goprogram_enable" goprogram_user="goprogram" goprogram_command="/usr/local/goprogram/goprogram" pidfile="/var/run/goprogram/${name}.pid" command="/usr/sbin/daemon" command_args="-P ${pidfile} -r -f ${goprogram_command}"load_rc_config $name : ${goprogram_enable:=no}run_rc_command "$1"Notice how the main difference is that they control the pid file rather than relying on $procname
I'm trying to configure multiple Zope-Instances as daemons in FreeBSD. Each instance gets a start script in /usr/local/etc/rc.d. Starting works fine, but invoking status or stop is problematic because the PIDs of the running instances get confused (although the PIDs are different, the startscript cannot tell them apart). Here is the template of the rc scripts: instancename="%%instancename%%" name="$instancename"rcvar="${name}_enable"zope="/usr/local/opt/zope" python="${zope}/bin/python" command_interpreter="$python" command="${zope}/bin/runwsgi -v /usr/local/www/zope-instances/${instancename}/etc/zope.ini -d"start_cmd="/usr/sbin/daemon -u myuser $command"load_rc_config "$name" run_rc_command $*The values between "%%" are set differently for each instance. When I try to get the status or to stop the process (service instancename status or service instancename stop) the PID of the last instance started is used. With processes able to create a pid-file this is not the case, but runwsgi, the script I need, doesn't create a pid-file. I understand that procname, which defaults to the command variable, is used to tell the processes apart, but I don't know how to set it properly for my needs.
rc scripts for multiple Zope instances in FreeBSD
Assuming that Raspbian Stretch uses systemd like regular Debian Stretch does by default, /etc/rc.local is started by /lib/systemd/system/rc-local.service: # This file is part of systemd. # # systemd is free software; you can redistribute it and/or modify it # under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2.1 of the License, or # (at your option) any later version.# This unit gets pulled automatically into multi-user.target by # systemd-rc-local-generator if /etc/rc.local is executable. [Unit] Description=/etc/rc.local Compatibility ConditionFileIsExecutable=/etc/rc.local After=network.target[Service] Type=forking ExecStart=/etc/rc.local start TimeoutSec=0 RemainAfterExit=yes GuessMainPID=noAs it specifies Type=forking, TimeoutSec=0 and RemainAfterExit=yes, I understand that systemd basically starts it and does not care whether it exits or not. So that explains why the system can successfully complete the boot even if /etc/rc.local is still running. Your rc.local script first runs startsignal.py in the background (= with the &): that means only a failure to start the script would cause an error in the rc.local script at that point. If startsignal.py successfully starts but then returns an error, rc.local would have to use wait <process or job ID> to read the incoming error from the startsignal.py process. But your process apparently does not care to check for that. Then your rc.local starts fan.py. Since it is started without &, the shell will start another process to run fan.py and wait for it to exit... but since fan.py has an infinite loop, it will not exit until the system is being shut down, fan.py has an error, or the process running fan.py is killed. The touch /home/pi/thisisrun will be executed only after fan.py has exited. I think it would make more sense to start startsignal.py without the & and fan.py with it, rather than vice versa.
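In other words, the tail of the rc.local from the question would become something like this (same scripts, only the & moved), so the one-shot GPIO setup runs to completion and the long-running fan loop is backgrounded:
/home/pi/py/startsignal.py
/home/pi/py/fan.py &
touch /home/pi/thisisrun
exit 0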
I put a python script with an infinite loop into /etc/rc.local but the machine boots successfully, which confuses me. The /etc/rc.local content: #!/bin/sh -e # # rc.local # # This script is executed at the end of each multiuser runlevel. # Make sure that the script will "exit 0" on success or any other # value on error. # # In order to enable or disable this script just change the execution # bits. # # By default this script does nothing.# Print the IP address _IP=$(hostname -I) || true if [ "$_IP" ]; then printf "My IP address is %s\n" "$_IP" fi/home/pi/py/startsignal.py & /home/pi/py/fan.py touch /home/pi/thisisrun exit 0startsignal.py #!/usr/bin/python import RPi.GPIO as GPIOGPIO.setmode(GPIO.BCM) GPIO.setup(18, GPIO.OUT)GPIO.output(18, 1)fan.py #!/usr/bin/python # coding: utf8 import RPi.GPIO as gpiogpio.setmode(gpio.BCM) upper_temp = 55 lower_temp = 45 # minutes check_interval = 2def get_temp(): with open('/sys/class/thermal/thermal_zone0/temp', 'r') as f: temp = float(f.read()) / 1000 return tempdef check_temp(): if get_temp() > upper_temp: gpio.setup(23, gpio.OUT) elif get_temp() < lower_temp: gpio.setup(23, gpio.IN)if __name__ == '__main__': # check every 2 minutes try: while True: check_temp() sleep(check_interval * 60) finally: gpio.cleanup()All the code relevant is above. I thought about this after googling around.the #!/bin/sh -e indicates that the script will exit once an error occurs. the /home/pi/thisisrun file is not created, so there must be an error above this line after booting into system I can see that fan.py is running. So I guess the error occurs during the execution of fan.py. But the fan.py has an infinite loop in it !How can a python script generates an error but still runs normally? How can /bin/sh detects the error when the fan.py never returns? OS: raspbian stretch
boot normally even with an infinite loop in /etc/rc.local
I have since found that there is a bug in nano < 2.7.4-1: "nano: /etc/nanorc is ignored, if ~/.nanorc exists". Latest from the bug report:
I just made the dist-upgrade to Debian 9.0, which included an update of package nano to version 2.7.4-1, and the problem vanished; the bug is solved in 2.7.4-1.
The bug report: bug
I'm trying to set my color syntax highlighting in nano, but it doesn't work as expected.One system everything works. This is an Fedora 21 laptop. Two systems everything I've tried except man something works. This is an Fedora 21 desktop and an Fedora 21 vm in VirtualBox. One system only one file I've tried works(opening nanorc itself gives highlighting). This is an Debian Wheezy desktop.If I do man emacs it only works as expected on one system. I also have syntax highlighting for many other types of files, I thought the only thing I needed to set this up was to have .nanorc located in the users home directory so nano could find it. This is very confusing. I've tried to look for differences in bash_profile, /etc/profile, bashrc but nothing stands out and maybe that's irrelevant. I've looked at the permissions. I've started an new terminal and restarted the system. Here is a piece from my .nanorc file: ####################################################################### Manpages ##include "/usr/share/nano/man.nanorc"## Here is an example for manpages. ## syntax "man" "\.[1-9]x?$" color green "\.(S|T)H.*$" color brightgreen "\.(S|T)H" "\.TP" color brightred "\.(BR?|I[PR]?).*$" color brightblue "\.(BR?|I[PR]?|PP)" color brightwhite "\\f[BIPR]" color yellow "\.(br|DS|RS|RE|PD)"#####################################################################Questions: Why is the same .nanorc file not working the same on four Linux systems(Fedora 21 is working, two Fedora 21 not working and Debian Wheezy not working at all). What am I missing? What are the steps to set a custom .nanorc file to be used by nano and be sure it's not in some kind of conflict or something? -------------------------------------------- Here is the full nanorc file on pastebin.com.
Color syntax highlighting working on one system but not the others. Same nanorc file
2024-03-07, archived:WARNING - don't upgrade your TrueNAS CORE jails to FreeBSD 13.3 just yet | TrueNAS CommunityToday, https://forums.freebsd.org/threads/forgejo-failing-to-start-as-service-pid-file-not-readable.93214/#post-653734:FreeBSD 13.3TrueNAS CORE 13.3 is not yet released. TrueNAS CORE 13.3 Plans - Announcements - TrueNAS Community Forums (I'm present there, and in other TrueNAS Community topics.)
I have been trying to get Forgejo running in a Truenas Core (FreeBSD jail) for over a week. When I manually start Forgejo as the git user it runs as expected, however attempting to get it to run with the included rc file provided by the ports package it errors out. Forgejo Port rc.d script When I start forgejo manually it runs: root@Forgejo:/home/jailuser # su git git@Forgejo:/home/jailuser $ forgejo web -c /usr/local/etc/forgejo/conf/app.ini 2024/04/23 18:59:36 cmd/web.go:242:runWeb() [I] Starting Forgejo on PID: 4748 2024/04/23 18:59:36 cmd/web.go:111:showWebStartupMessage() [I] Forgejo version:1.21.11-1 built with GNU Make 4.4.1, go1.21.9 : bindata, pam, sqlite, sqlite_unlock_notifyHowever, when I attempt to start the forgejo service I get the following pid not found error: root@Forgejo:/home/jailuser # service forgejo start /usr/local/etc/rc.d/forgejo: DEBUG: Sourcing /etc/defaults/rc.conf /usr/local/etc/rc.d/forgejo: DEBUG: pid file (/var/run/forgejo.pid): not readable. /usr/local/etc/rc.d/forgejo: DEBUG: checkyesno: forgejo_enable is set to YES. /usr/local/etc/rc.d/forgejo: DEBUG: run_rc_command: doit: forgejo_start_ root@Forgejo:/home/jailuser # mount Main/iocage/jails/Forgejo/root on / (zfs, local, noatime, nfsv4acls) root@Forgejo:/home/jailuser # ll /var total 81 drwxr-x--- 2 root wheel 2 Mar 1 18:50 account/ drwxr-xr-x 4 root wheel 4 Mar 1 18:50 at/ drwxr-x--- 4 root audit 4 Mar 1 18:50 audit/ drwxrwx--- 2 root authpf 2 Mar 1 18:50 authpf/ drwxr-x--- 2 root wheel 8 Apr 23 03:21 backups/ drwxr-xr-x 2 root wheel 2 Mar 1 18:50 cache/ drwxr-x--- 2 root wheel 3 Mar 1 19:06 crash/ drwxr-x--- 3 root wheel 3 Mar 1 18:50 cron/ drwxr-xr-x 14 root wheel 17 Apr 20 21:43 db/ dr-xr-xr-x 2 root wheel 2 Mar 1 18:50 empty/ drwxrwxr-x 2 root games 2 Mar 1 18:50 games/ drwx------ 2 root wheel 2 Mar 1 18:50 heimdal/ drwxr-xr-x 3 root wheel 23 Apr 23 00:00 log/ drwxrwxr-x 2 root mail 5 Apr 20 21:01 mail/ drwxr-xr-x 2 daemon wheel 3 Apr 20 19:28 msgs/ drwxr-xr-x 2 root wheel 2 Mar 1 18:50 preserve/ drwxr-xr-x 6 root wheel 18 Apr 23 18:56 run/ drwxrwxr-x 2 root daemon 2 Mar 1 18:50 rwho/ drwxr-xr-x 9 root wheel 9 Mar 1 18:50 spool/ drwxrwxrwt 3 root wheel 3 Mar 1 18:50 tmp/ drwxr-xr-x 3 unbound unbound 3 Mar 1 18:50 unbound/ drwxr-xr-x 2 root wheel 4 Mar 1 19:24 yp/ root@Forgejo:/home/jailuser #Manually executing the daemon command results in an exit status of 0 with no other useful information. Tried relocating the pid file to a directory with 777 permissions and still getting the same error. My only guess right now would be that forgejo is dying almost immediately before daemon is able to create the pid file? Not sure how to get stdout from forgejo to see if there are any errors (forgejo is not logging anything to its log file directory). Any ideas? 
UPDATE: Adding truss to the init script on the call to daemon yields the following: 53609: mmap(0x0,135168,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON,-1,0x0) = 34376810496 (0x801048000) 53609: mprotect(0x801044000,4096,PROT_READ) = 0 (0x0) 53609: issetugid() = 0 (0x0) 53609: sigfastblock(0x1,0x801047490) = 0 (0x0) 53609: open("/etc/libmap.conf",O_RDONLY|O_CLOEXEC,0101130030) = 3 (0x3) 53609: fstat(3,{ mode=-rw-r--r-- ,inode=16052,size=35,blksize=4096 }) = 0 (0x0) 53609: read(3,"includedir /usr/local/etc/libmap.d\n",35) = 35 (0x23) 53609: close(3) = 0 (0x0) 53609: open("/usr/local/etc/libmap.d",O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC,0165) ERR#2 'No such file or directory' 53609: open("/var/run/ld-elf.so.hints",O_RDONLY|O_CLOEXEC,0100416054) = 3 (0x3) 53609: read(3,"Ehnt\^A\0\0\0\M^@\0\0\0w\0\0\0\0\0\0\0v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0",128) = 128 (0x80) 53609: fstat(3,{ mode=-r--r--r-- ,inode=741826,size=247,blksize=4096 }) = 0 (0x0) 53609: pread(3,"/lib/casper:/lib:/usr/lib:/usr/lib/compat:/usr/local/lib:/usr/local/lib/compat/pkg:/usr/local/lib/perl5/5.36/mach/CORE\0",119,0x80) = 119 (0x77) 53609: close(3) = 0 (0x0) 53609: open("/lib/casper/libutil.so.9",O_RDONLY|O_CLOEXEC|O_VERIFY,00) ERR#2 'No such file or directory' 53609: open("/lib/libutil.so.9",O_RDONLY|O_CLOEXEC|O_VERIFY,00) = 3 (0x3) 53609: fstat(3,{ mode=-r--r--r-- ,inode=190,size=79952,blksize=80384 }) = 0 (0x0) 53609: mmap(0x0,4096,PROT_READ,MAP_PRIVATE|MAP_PREFAULT_READ,3,0x0) = 34376945664 (0x801069000) 53609: mmap(0x0,98304,PROT_NONE,MAP_GUARD,-1,0x0) = 34376949760 (0x80106a000) 53609: mmap(0x80106a000,32768,PROT_READ,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x0) = 34376949760 (0x80106a000) 53609: mmap(0x801072000,49152,PROT_READ|PROT_EXEC,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x7000) = 34376982528 (0x801072000) 53609: mmap(0x80107e000,4096,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x12000) = 34377031680 (0x80107e000) 53609: mmap(0x80107f000,4096,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x12000) = 34377035776 (0x80107f000) 53609: mmap(0x801080000,8192,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_ANON,-1,0x0) = 34377039872 (0x801080000) 53609: munmap(0x801069000,4096) = 0 (0x0) 53609: close(3) = 0 (0x0) 53609: open("/lib/casper/libc.so.7",O_RDONLY|O_CLOEXEC|O_VERIFY,012320443000) ERR#2 'No such file or directory' 53609: open("/lib/libc.so.7",O_RDONLY|O_CLOEXEC|O_VERIFY,012320443000) = 3 (0x3) 53609: fstat(3,{ mode=-r--r--r-- ,inode=126,size=1940168,blksize=131072 }) = 0 (0x0) 53609: mmap(0x0,4096,PROT_READ,MAP_PRIVATE|MAP_PREFAULT_READ,3,0x0) = 34376945664 (0x801069000) 53609: mmap(0x0,4190208,PROT_NONE,MAP_GUARD,-1,0x0) = 34377048064 (0x801082000) 53609: mmap(0x801082000,540672,PROT_READ,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x0) = 34377048064 (0x801082000) 53609: mmap(0x801106000,1343488,PROT_READ|PROT_EXEC,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x83000) = 34377588736 (0x801106000) 53609: mmap(0x80124e000,40960,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x1ca000) = 34378932224 (0x80124e000) 53609: mmap(0x801258000,24576,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x1d3000) = 34378973184 (0x801258000) 53609: mmap(0x80125e000,2240512,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_ANON,-1,0x0) = 34378997760 
(0x80125e000) 53609: munmap(0x801069000,4096) = 0 (0x0) 53609: close(3) = 0 (0x0) 53609: mprotect(0x80124e000,36864,PROT_READ) = 0 (0x0) 53609: mprotect(0x80124e000,36864,PROT_READ|PROT_WRITE) = 0 (0x0) 53609: mprotect(0x80124e000,36864,PROT_READ) = 0 (0x0) 53609: readlink("/etc/malloc.conf",0x7fffffffc610,1024) ERR#2 'No such file or directory' 53609: issetugid() = 0 (0x0) 53609: mmap(0x0,2097152,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON|MAP_ALIGNED(21),-1,0x0) = 34382807040 (0x801600000) 53609: mmap(0x0,2097152,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON|MAP_ALIGNED(12),-1,0x0) = 34384904192 (0x801800000) 53609: mmap(0x0,4194304,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON|MAP_ALIGNED(21),-1,0x0) = 34387001344 (0x801a00000) 53609: mprotect(0x1026000,4096,PROT_READ) = 0 (0x0) 53609: sigaction(SIGHUP,{ SIG_IGN SA_RESTART ss_t },{ SIG_DFL 0x0 ss_t }) = 0 (0x0) 53609: sigaction(SIGTERM,{ SIG_IGN SA_RESTART ss_t },{ SIG_DFL 0x0 ss_t }) = 0 (0x0) 53609: socket(PF_LOCAL,SOCK_DGRAM|SOCK_CLOEXEC,0) = 3 (0x3) 53609: getsockopt(3,SOL_SOCKET,SO_SNDBUF,0x7fffffffd85c,0x7fffffffd858) = 0 (0x0) 53609: setsockopt(3,SOL_SOCKET,SO_SNDBUF,0x7fffffffd85c,4) = 0 (0x0) 53609: connect(3,{ AF_UNIX "/var/run/logpriv" },106) = 0 (0x0) 53609: openat(AT_FDCWD,"/var/run",O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC,00) = 4 (0x4) 53609: openat(4,"forgejo.pid",O_WRONLY|O_NONBLOCK|O_CREAT|O_CLOEXEC,0600) = 5 (0x5) 53609: flock(5,LOCK_EX|LOCK_NB) = 0 (0x0) 53609: fstatat(4,"forgejo.pid",{ mode=-rw------- ,inode=742728,size=0,blksize=131072 },0x0) = 0 (0x0) 53609: fstat(5,{ mode=-rw------- ,inode=742728,size=0,blksize=131072 }) = 0 (0x0) 53609: ftruncate(5,0x0) = 0 (0x0) 53609: fstat(5,{ mode=-rw------- ,inode=742728,size=0,blksize=131072 }) = 0 (0x0) 53609: cap_rights_limit(4,{ CAP_UNLINKAT }) = 0 (0x0) 53609: cap_rights_limit(5,{ CAP_PWRITE,CAP_FTRUNCATE,CAP_FSTAT,CAP_EVENT }) = 0 (0x0) 53609: sigaction(SIGHUP,{ SIG_IGN 0x0 ss_t },{ SIG_IGN SA_RESTART ss_t }) = 0 (0x0) 53609: fork() = 53610 (0xd16a) 53610: <new process> 53610: setsid() = 53610 (0xd16a) 53609: exit(0x0) 53609: process exit, rval = 0 53610: sigaction(SIGHUP,{ SIG_IGN SA_RESTART ss_t },0x0) = 0 (0x0) 53610: madvise(0x0,0,MADV_PROTECT) ERR#1 'Operation not permitted' 53610: pipe2(0x7fffffffd9c0,0) = 0 (0x0) 53610: kqueuex() ERR#78 'Function not implemented' 53610: SIGNAL 12 (SIGSYS) code=SI_KERNEL 53610: process killed, signal = 12UPDATE: TrueNAS-13.0-U6.1 jailuser@Forgejo:~ $ uname -a FreeBSD Forgejo 13.1-RELEASE-p9 FreeBSD 13.1-RELEASE-p9 n245429-296d095698e TRUENAS amd64
Forgejo pid file (/var/run/forgejo.pid) : not readable in Truenas Core (FreeBSD Jail)
As has been explained in comments, you need to “save” the process's stdin somehow. By default, depending on the init system, this may be the console, or /dev/null. To be able to attach to the process, use a screen multiplexer such as Screen or tmux. See also How can I disown a running process and associate it to a new screen shell? In /etc/rc.local, run something like screen -S mydaemon -md /usr/local/bin/mydaemon --some-optionTo attach to the program interactively, you would then run screen -S mydaemon -rdTo automatically send keystrokes to the program (see sending text input to a detached screen): screen -S mydaemon -p 0 -X stuff 'bye^M'
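For the second half of the question (sending "stop" and waiting before shutdown), a rough sketch of a shutdown hook, assuming the daemon was started with screen -S mydaemon -md ... as above and that typing stop followed by Enter makes it exit cleanly (the session name and the 60-second limit are only illustrative):
#!/bin/sh
screen -S mydaemon -p 0 -X stuff "stop$(printf '\r')"   # type "stop" + Enter into the session
i=0
while screen -ls | grep -q '\.mydaemon' && [ $i -lt 60 ]; do
    sleep 1                                             # wait for the session to disappear
    i=$((i + 1))
done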
Is there a way to join an interactive session of a process that was run on boot with /etc/rc.local, or to send it "stop" over STDIN on reboot/shutdown and wait for it to end before shutting down?
Join an interactive session of a process launched from rc.local
Don't block the startup process for the sake of one service, unless it's some absolutely critical service without which the machine is unusable (e.g. entering a passphrase to decrypt the OS disk). If some service needs manual intervention to start (which should be avoided if at all possible, unless you like getting paged at 3am because the service didn't come back after an unscheduled reboot due to a UPS failure), make sure that it doesn't block the boot. Put whatever needs to be done in the background. To allow users to interact with the service, run it inside Screen (or tmux if you prefer tmux) to create a pseudoterminal where the service will read input from and write output to. screen -d -m -S myservice /usr/local/sbin/myservice --interactive-startTo connect to the terminal created by Screen, use screen -r -d -S myserviceYou can do that from anywhere: on the console, over SSH, etc. You need to run the screen command as the same user both times. To detach from the Screen session and leave it running in the background, press Ctrl+A D.
I have a simple script that asks for user input ([y/N]) and then acts upon it. I wrote a daemon rc wrapper so that it can run from startup. I was wondering if it is possible to make the daemon/script ask for the user input and then background itself until it is time to ask again, at which point it foregrounds itself? Is this possible? Is this practical? Where should the fg/bg control be hosted? in the rc.d script or in the main script?
RC script that brings itself to foreground under certain criteria
There's no global standard. They can be (and are) all different syntaxes. For example:
the bashrc is simply a bash script,
the vimrc is a vimscript script,
i3 uses its own syntax that's pretty close to a scripting language (they claim it isn't a programming language, but I think they're lying there; the conditional screen placement thing looks very much like you could build a Turing machine out of it),
xinitrc is just an arbitrary script (which will be run by the shell specified in the #! line at the beginning of the file, so it could just as well be, say, Python, bash, zsh, tcl, perl, …),
Conky uses JSON or YAML, I think.
Essentially, there is no standard, and you always need to read the documentation.
I've been using GNU/Linux for over a year now. And there's this question to which I need an answer from you, Linux gurus: What language(s) do config files like .bashrc, .vimrc, .i3status.conf, .conkyrc, .xinitrc, etc. use?
What language do config files use?
When you use the service command it will look for the process id (pid) as it was set when the service was started. Your service has it defined as:
pidfile="/var/run/secret_service/${name}.pid"
When you ask for status, the pid will be fetched from this file and it will be checked whether that process is running. If you examine the output of ps I am pretty sure that you will find that the process id of your running service does not match what is in the pidfile. Your rc script does look a little suspect. Are you sure you want "secret_service" in the pidfile path? If so, make sure that directory exists. It would be more common with:
pidfile="/var/run/${name}.pid"
See Practical rc.d scripting in BSD
Something wrong with one service on FreeBSD 7.3: 1) it starts with command "service my_secret_service start" but later if I enter "service my_secret_service status" - it shows as not running. But in processes it exists (ps auwx | grep secret_service) with all threads (python threads) and I can see that it's working because of service logs, access to webui of service, etc. 2) If I type "service my_secret_service stop", it can't kill main process and all threads. My secret rc script: #!/bin/sh # PROVIDE: sbdns_daemon. /etc/rc.subrCONFROOT=/usr/local/secret_group/secret_service/etc export CONFROOTname=secret_service_daemon rcvar=`set_rcvar` pidfile="/var/run/secret_service/${name}.pid" logfile="${CONFROOT}/log.conf"command_interpreter=/usr/bin/python command="full path to python service file" command_args="--logconf ${logfile} -d " stop_postcmd="${name}_post_stop"secret_service_daemon_post_stop() { n=0 until [ $n -ge 3 ] do child_processes=$(check_alive_processes) if [ -z "$child_processes" ] then echo "All child processes were shutdown gracefully!" exit 0 else if [ $n = 0 ] then echo "Processes are still alive. Waiting for them to shutdown gracefully..." fi n=$(($n+1)) echo "Attempt $n/3, alive processes: $child_processes" sleep 5 fi done echo "Not all processes were terminated! Forcibly terminating child processes: $child_processes!" pkill -if "${command}" } check_alive_processes() { echo "$(pgrep -if -d " " "${command}")" }chmod +x $command load_rc_config "$name"secret_service_daemon_enable=${secret_service_enable-NO}echo "Enabled: ${secret_service_daemon_enable}"run_rc_command "$1"What's wrong? Update #1. Looks like problem was just in path to pidfile, very interesting. Thank you!
FreeBSD 7.3: service is working, but status shows "is not running"
I didn't post this until I had searched for days, and I just now found the answer. If no one else finds this useful, I'll end up deleting, but here it is: https://forums.freebsd.org/threads/58365/ Basically, if networking isn't up yet, then it cannot bind and will fail. The solution is to edit /usr/local/etc/rc.d/slapd and change this line: # REQUIRE: FILESYSTEMS ldconfigTo: # REQUIRE: FILESYSTEMS ldconfig NETWORKINGThis ensures networking is loaded prior to attempting to start slapd.
I can successfully start slapd on FreeBSD 11 perfectly fine, but it won't run on startup. Here is what I put in my rc.conf: slapd_enable="YES" slapd_flags="-h "ldap://1.2.3.4/ ldapi://%2fvar%2frun%2fopenldap%2fldapi/"" slapd_sockets="/var/run/openldap/ldapi"1.2.3.4 is replaced with my actual public IP. I have tried many permutations of the valid options for slapd_flags and slapd_sockets, but every time I reboot slapd is not running. How do I ensure slapd runs at system startup?
slapd doesn't start automatically despite rc.conf entry
This works for me as /etc/systemd/system/netstat.service:
[Unit]
Description=Save interface stats on shutdown

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/bin/sh -c '{ date; ip -s link; } >>/root/ipstat.log'

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable netstat. This won't give you precise statistics, because the network interfaces can still be used during shutdown, but that may be acceptable for you.
I need to run a script before shutting down or rebooting my VPS running Debian 8, to keep count of network statistics. I tried adding the script directly to /etc/init.d and symlinking it in /etc/rc0.d and /etc/rc6.d, adding the LSB header to it, and making it like an actual service with start and stop and generating the symlinks with update-rc.d, but nothing has worked; it seems like the script isn't executed at all. Maybe it has to do with it being a VPS, so it doesn't get recognised by the system when it is apparently shut down or rebooted. I just need a simple mechanism, without having to create a proper service. Also I'd like to know what would be the best way to check whether the script is actually being executed or not, some simple way of logging. The script is just this:
RESULT=$(bc <<< "scale =2;($(cat /sys/class/net/venet0/statistics/rx_bytes)/1024/1024/1024)+($(cat /sys/class/net/venet0/statistics/tx_bytes)/1024/1024/1024)+($(cat /root/bw))")
echo $RESULT > /root/bw
Execute simple script before shutdown and reboot
The K does indeed stand for “kill”. The symlinks link all the init scripts which are supposed to be called to stop the corresponding service when the system switches to runlevel 6; this tries to ensure that all the system’s services are stopped correctly before the system reboots. Each link is called with a stop argument.
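For context, each of those K links ultimately points at an init script that decides what to do based on that argument; a minimal skeleton (purely illustrative, not a script shipped by Ubuntu) looks like this:
#!/bin/sh
# /etc/init.d/myservice -- the rc system calls this via the K/S symlinks
case "$1" in
    start) echo "starting myservice" ;;
    stop)  echo "stopping myservice" ;;   # this is what the K links in rc6.d trigger
    *)     echo "Usage: $0 {start|stop}" ;;
esac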
I was trying to find out how to run a script at startup and during shutdown, and in the process I learned that runlevel 6 corresponds to reboot in Ubuntu. When I opened /etc/rc6.d, every link's name started with K, which is for kill, I suppose.
Why do all the links in /etc/rc6.d start with K if runlevel 6 corresponds to reboot?
update-rc.d -f cheese remove
update-rc.d cheese defaults 15
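A quick way to check the result afterwards, assuming the links were regenerated as expected (the pattern just matches the service name from the question):
ls -l /etc/rc?.d/*cheese*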
I currently have scripts in the rc.# S## ... one of my scripts is called: S20cheese and another is called S19Moo. How would I properly update these run orders e.g. S20cheese -> S15cheese. Is it as simple as renaming the files in every rc.2 / rc.3 etc etc or will this break things? I am trying to set the run order as I need cheese to run at 30 seconds into the boot but loads of other processes are running before it
change boot order in rc.d
Could you add the mount command to /etc/fstab instead of doing it with a script? As for the second part, rc.local is run by root by default, so if you aren't taking steps to run as nass you will be mounting the NFS share as /root/sg. If you want it to run as a different user from rc.local you would have to do something like su nass -c '/home/nass/audio_setup/scripts/start_audio'
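For the fstab route, an entry along these lines (derived from the mount command in the question; the mount point /home/nass/sg is an assumption, adjust to taste) would make the share mount at boot without any script:
fileserver:/nfs/home/nass /home/nass/sg nfs vers=3 0 0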
I mount some nfs exports from a fileserver to my workstation. The workstation is ubuntustudio 64bit 14.04. in order to make the mounts as transparent as possible, I have inserted the following in my .bashrc SG=sg mount | grep $SG &> /dev/null if [ $? -eq 1 ] ; then sudo mount -o vers=3 fileserver:/nfs/home/nass ~/$SG fiSo I basically mount my folders when the 1st login shell is fired up. This works fine when I log on to the pc and open up a terminal - which is what I usually do. I would like this mounting to occur automatically during boot and the obvious choice is to add the above snippet in /etc/rc.local. Then I add a command to run my script, however I want to run it as my user (and not root). /home/nass/audio_setup/scripts/start_audio 2>&1 | tee -a /tmp/audio.logbut as I can see in the audio.log file /etc/rc.local: 22: /etc/rc.local: /home/nass/audio_setup/scripts/start_audio: not foundwhy does this happen? what am I missing?
run a script from rc.local, that exists on an autofs nfs share
It looks like Synology moved from classic SysVinit to upstart in DSM 6 or so, and then to systemd in DSM 7. Both init systems provide backward compatibility for classic SysVinit-style start/stop scripts, but there are some quirks you should be aware of. If you have DSM 7.0 or newer, then after installing the script you probably should run systemctl daemon-reload, so systemd-sysv-generator should automatically create a .service file for it (maybe in /run/systemd). Then you can start your script with systemctl start <script name> - and in fact should do just that, instead of just running the script manually. systemd will be aware of the need to run <your script> stop job only if it has actually executed the corresponding start job. This is because systemd will set up each service as a separate control group of processes as it starts them up (and the administrator running the start script manually doesn't do that). This is something that is completely invisible to the services themselves (unless they specifically go looking for it), and any child processes of the services will inherit this control group membership. If a control group has no processes left in it, it will cease to exist automatically. When shutting down, systemd will just go through the existing control groups and will run the stop command for any non-default control group it finds. Any services that have been started without using systemctl start will be part of the "administrator's interactive session" control group rather than the "service X" control group, and will essentially be just killed without running the corresponding stop script. If you need features like an automatic restart for your service if it dies for some reason, you should consider using the appropriate "native" configuration method for the applicable init system:/etc/init/* files for Upstart in Synology DSM 6.x series /etc/systemd/system/*.service files for systemd in Synology DSM 7.x series and newer. These init systems have built-in automatic restart features you can use with just a little bit of configuration, rather than having to write a wrapper script to watch your service process yourself.Developer Guide for Synology DSM 7 Developer Guide for Synology DSM 6 Possibly helpful notes on configuring services for DSM 6 and 7
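As a rough sketch of that "native" route on DSM 7 (the unit name, description and path are all placeholders; ExecStart would be whatever your current script launches):
[Unit]
Description=My custom service

[Service]
ExecStart=/volume1/scripts/my-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
Saved as /etc/systemd/system/myservice.service, followed by systemctl daemon-reload, systemctl enable myservice and systemctl start myservice.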
I have a very simple SysVinit service in /etc/rc.d: #!/bin/bashPIDFILE="/var/run/test.pid"status() { if [ -f "$PIDFILE" ]; then echo 'Service running' return 1 fi return 0 }start() { if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")"; then echo 'Service already running' return 1 fi echo 'Starting...' test & echo $! > "$PIDFILE" return 0 }stop() { if [ ! -f "$PIDFILE" ] || ! kill -0 "$(cat "$PIDFILE")"; then echo 'Service not running' return 1 fi echo 'Stopping...' kill -15 "$(cat "$PIDFILE")" && rm -f "$PIDFILE" return 0 }case "$1" in start) start ;; stop) stop ;; status) status ;; restart) stop start ;; *) echo "Usage: $0 {start|stop|restart}" exit 1 esacWhen the system starts it starts the service. But when the system stops, it never calls the stop command. The only reason I can think off is that the system either thinks the service is not running or was not started correctly. But what are the requirements for that?Do you need to return a special exitcode for the start command? Do I need to create a file in /var/lock/subsys to signal that it is active? Anything else that might cause the system to think the service did not start?
Stop not called for init rc.d service
I use a pretty minimal i3 install. The only things I've had to add are:
dmenu to launch applications,
i3status for a status bar at the bottom,
nm-applet for networking from the status bar,
commands in the config file to map the XF86Audio* keys to pactl actions (see the snippet below).
The other things that DEs give you are default applications for most desktop uses: browser, email client, text editor, file manager, photo viewer. If you find you are missing a PDF viewer, just install one. This setup has worked for me for over a year and I don't feel like I'm missing too much. If I ever want a GUI for something (like pavucontrol for audio devices or spectacle for screenshots) then I just run that specific thing from dmenu. Anything outside of that is probably opinion-based.
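Those key mappings are plain i3 config lines; something like the following (the @DEFAULT_SINK@ sink and the 5% step are just the usual choices, adjust to your setup):
bindsym XF86AudioRaiseVolume exec --no-startup-id pactl set-sink-volume @DEFAULT_SINK@ +5%
bindsym XF86AudioLowerVolume exec --no-startup-id pactl set-sink-volume @DEFAULT_SINK@ -5%
bindsym XF86AudioMute exec --no-startup-id pactl set-sink-mute @DEFAULT_SINK@ toggle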
I'm a relatively new Linux user. I started with Ubuntu 20.04 a few months ago and eased myself into the experience, learning a bit of the command line and becoming familiar with the system structure. I'd now like to move up a bit in the world and improve my productivity by working in a tiling window manager. I've started using AwesomeWM, and I've loved the experience so far. The only issue is that, being a window manager rather than a full DE, there are a number of key features missing, like volume control. I've made do without some, and I've had to go back into Gnome for others (as my workflow or interests required). For example, I've figured out how to add volume control to my rc.lua file, and I installed the ranger file manager. But because I don't know what my experience might be on a day-to-day basis, I can't move over to the WM entirely (say, with a brand-new netinst). All this leads to my question: What are the things that make up a (relatively) fully-fleshed-out desktop environment? What are the general things I should be installing/setting up to get a DE experience without having to install one like Gnome or KDE and use everything provided out of the box?
What are the key components of a desktop environment?
I managed to get it going with the steps on the Ubuntu Forums; for clarity here is what I did:
sudo apt-get install gtk-recordmydesktop pavucontrol
Opened the PulseAudio Volume Control dialog: Applications > Sound & Video > PulseAudio Volume Control
Opened gtk-recordmydesktop
In gtk-rmd advanced preferences, "Sound" tab, set "Device" to pulse
In gtk-rmd start a recording
In Volume Control go to the Recording tab and change the recordmydesktop entry to 'Monitor of '
This is what seems to have worked for me.
I am using gtk-recordmydesktop to record the video output to my desktop. However, the videos have no sound. All the tutorials I found regarding this involved getting sound recorded from a microphone, while I am interested in getting the sound output recorded. How can I do this? The official FAQ says "The solution is in your mixer's settings. Keep playing with it ;)." which doesn't clarify anything. How can I get the sound output recorded, while being able to hear it myself also?
How can I record the sound output with gtk-recordmydesktop?