Systemd reimplements many functions previously scattered across the whole OS (e.g. in the udev daemon) and is able to recognize that a device has just been plugged in or removed. At the same time, systemd holds the configuration of all system services: what needs to run, how to run it, and so on. In short, it has all the knowledge needed to start, stop, or even reconfigure services related to hot-pluggable devices. A classic init system doesn't manage hot-pluggable devices at all. It just starts services in a defined order, and that's mostly it. One of those services is the udev daemon, which handles hot-pluggable devices, but it cannot start a service when a device is plugged in, at least not without custom scripts written for the local machine.
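As a rough illustration of how that wiring works in practice (the rule file name, device serial and service name below are hypothetical, made up for this example), a udev rule can tag a device so that systemd pulls in a unit whenever the device appears:
# /etc/udev/rules.d/99-backup-disk.rules (hypothetical)
# when a block device with this serial shows up, have systemd start backup.service
ACTION=="add", SUBSYSTEM=="block", ENV{ID_SERIAL}=="My_Backup_Disk", TAG+="systemd", ENV{SYSTEMD_WANTS}="backup.service"
Under a classic SysV init there is no such hook; udev could only fire a RUN+= script of its own, with no knowledge of service ordering or dependencies.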
One argument I often hear about systemd is that it is better adapted to current hardware needs, e.g. here: "Computers have changed so much that they often don't even look like computers anymore. And their operating systems are very busy: GPS, wireless networks, USB peripherals that come and go, tons of software and services running at the same time, going to sleep / waking up in a snap… Asking the antiquated SysVinit to manage all this is like asking your grandmother to twerk." What I don't understand is how an init system manages hot-pluggable devices. What does replacing a hot-pluggable disk drive have to do with how the system is booted? Maybe this is all done by the non-init parts of systemd? I know this is a hot topic for some people. This is not meant to ignite a war, rather to understand. Please explain it to me without flames.
systemd in the era of hot-pluggable devices
I assume that you are using an ext4 file system: You can modify the size of the reserved space with tune2fs. The following command line reduces the reserved space to 1% (from the default 5%): sudo tune2fs -m 1 /dev/sdxn where x is the drive letter and n is the partition number (of a partition with an ext file system). From man tune2fs: -m reserved-blocks-percentage Set the percentage of the filesystem which may only be allocated by privileged processes. Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. Normally, the default percentage of reserved blocks is 5%. You can reduce the size of the reserved space in a data drive (which is not as critical as a system drive). But as described in the manual, space is reserved to: avoid fragmentation (relevant also for a data partition), and allow system daemons to continue to function correctly.
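If you want to check what is currently reserved before changing anything, tune2fs can also report it. A minimal sketch (replace sdxn with your actual partition; -m 0 is also accepted, and is reasonable only for a pure data/archive filesystem):
sudo tune2fs -l /dev/sdxn | grep -i 'reserved block count'
sudo tune2fs -m 0 /dev/sdxn    # 0% reserved, for an archive-only partition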
I have been filling a hard drive and a couple of backup drives with the family pictures and videos. Once a video goes into the archive, it remains there in a folder correctly labeled with the date. The data collection has grown to the point where I need a new drive (and new backup drives). But I wonder why I should leave the reserved blank space Linux enforces on the drive, which is now above 50 GB that I could fill with more videos. It is easy to do that by invoking the 'cp' command as root. Since I am not going to do anything else with the drive other than archival purposes, I wonder whether filling that reserved space would be a bad practice in this case, and why. I know I shouldn't do that with the main drive inside the laptop, otherwise the system may become unstable, but what about an external drive holding only archival data?
Is it wrong to fill the reserved space in an external USB drive for archiving purposes?
@umair I am not sure why sdb is showing as removable. Could you post the output of this script?
for device in /sys/block/*
do
    if udevadm info --query=property --path=$device | grep -q ^ID_BUS=usb
    then
        echo $device
    fi
done
Is there a way to distinguish between internal hard drives and external hard drives? I actually need to see how many external hard drives we have and which server they are connected to. This is the screenshot I took, and judging by its name, sde is an external hard drive. But I'm not sure, so help me out. Further actions: OK, now I used lsusb and it said a Western Digital drive is connected and its device is sde. But dmesg said that sdb is also a removable disk. Any suggestions? 'sd 0:0:1:0: Attached scsi removable disk sdb Vendor: WDC Model: WD2500YD-01NVB1 Rev: 10.0 Type: Direct-Access ANSI SCSI revision: 05 Vendor: WDC Model: WD2500YD-01NVB1 Rev: 10.0 Type: Direct-Access ANSI SCSI revision: 05 Vendor: WDC Model: WD2500YD-01NVB1 Rev: 10.0 Type: Direct-Access ANSI SCSI revision: 05 Vendor: WDC Model: WD2500YD-01NVB1 Rev: 10.0 Type: Direct-Access ANSI SCSI revision: 05
How to check how many External Hard Drives are connected to Linux Server
My main concern is why /etc/fstab is disregarded ... The manual mount immediately put them right back where they should beThe auto-mounting you refer to is performed by udisks. As you desire, it's supposed to defer to the entry in /etc/fstab, if there is one. But if there isn't one, it mounts under /media. It sounds like udisks gets confused by the failed (but still existing) mounts... I would call this a bug in udisks. If you are interested in seeing it improved then please report it to the project :). Udisks has actually been tested with device removal, as this is something real users do :). If udisks mounts a filesystem itself, and the device is removed, it attempts to unmount the filesystem and clean up. This unmount occurs regardless of whether a mount point is specified manually in /etc/fstab. However, udisks does not unmount automatically if the device was mounted "manually", using /sbin/mount. Hence, your scenario would not necessarily have been noticed when developers of udisks did their initial coding/testing. Note that manually running mount /dev/sdu2 behaves differently to the automount that happens when the "new" device is plugged in. /sbin/mount does not call in to udisks. (udisks might be implemented in terms of /sbin/mount though).
I've got a Drobo in three partitions on Linux Mint, and it periodically drops off the filesystem, losing its mount points. Upon return it disregards /etc/fstab and mounts as a new device under /media--as if I'd inserted a new USB stick. AFAICT, the fstab declarations are correct--they work manually--but maybe I've missed a key element: # drobo mount points UUID="d4af52ec-7734-4a43-91cf-ccea799b130e" /mnt/d1 ext3 rw,user 0 2 UUID="599456dd-3e9e-4f56-aa8e-957191099c6b" /mnt/d2 ext3 rw,user 0 2 UUID="94a0b9bf-6ae3-45cf-9a66-da228da64660" /mnt/d3 ext3 rw,user 0 2The Drobo exits uncleanly, creating a ton of false duplicates. The only hardware is one internal drive and the Drobo. gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=zed) /dev/sde2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev) /dev/sdf2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev) /dev/sdg2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev) /dev/sdd2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdc2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev) /dev/sdb2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev) /dev/sdh2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev) /dev/sdi2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdk2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdj2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdn2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdm2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdl2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdo2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdp2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdq2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdt2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sds2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdr2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdz2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdy2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdx2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdu2 on /media/zed/drobo1 type ext3 (rw,nosuid,nodev,uhelper=udisks2) /dev/sdw2 on /media/zed/drobo3 type ext3 (rw,nosuid,nodev,uhelper=udisks2) /dev/sdv2 on /media/zed/drobo2 type ext3 (rw,nosuid,nodev,uhelper=udisks2)When I (manually) unmount and re-mount, it follows the fstab declarations without issue. I never need to first type umount /mnt/d*. I don't need to be root to re-mount. The manual un-mount command works quickly. The first re-mount command takes a few seconds and the Drobo spins back up (this I expect is the Drobo allowing the drives to sleep, but the Drobo itself is still on the filesystem). The second and third mount commands always happen as quickly as I can type them. 0 [08:57:46 zed@linnicks doc 124] umount /media/zed/drobo* 0 [08:57:51 zed@linnicks doc 125] mount /mnt/d3 0 [08:57:56 zed@linnicks doc 126] mount /mnt/d2 0 [08:57:59 zed@linnicks doc 127] mount /mnt/d1 0 [08:58:01 zed@linnicks doc 128] Did I miss something obvious? My main concern is why /etc/fstab is disregarded, though I might be better advised to find the root cause for the dropoffs in the first place**. Just now it occurred to me that cron could umount and remount, but that's even more of a band-aid. It's easy to blame a 2008 Drobo for an occasional glitch. It seems completely random. The Drobo will work fine for a week or three and then simply be in the wrong place. 
It's always all three partitions. I've had less than stellar luck with other Drobos, so I'm quick to blame the drobo for the dropoffs--maybe I'm being too hasty there. It's certainly worth noting that my OS theoretically should recognize the hardware and not try and define it as three new devices each time. I don't think the Drobo is merely entering sleep mode, because I can go a day or two without using it and step right back into it. **This ambiguity may be a cause for deeper concern from a back-up-your-stuff perspective, but I'm planning a better and more traditional RAID that will serve as additional backup. Everything on "RealRaid" will be triplicated to Drobo, so when either one dies, I replace it and move on. On that note if anyone has found a specific device (Qnap, Lacie...) to be highly satisfying at the consumer (possibly even prosumer) level, lemmeno. I'm probably thinking in the 15-30TB range.
Drobo filesystem ignores /etc/fstab, automounts in the wrong place after connection is interrupted
In my case I brought up CentOS 7 and tried following everyone's instructions on this page, but I kept running into a device-busy message. The reason, in my opinion, why you are getting the mdadm: cannot open device /dev/sda1: Device or resource busy error message is that the device is already assembled or mounted as something else. I also did not want to make any changes to the disk at all, since my use case was to extract a very large file from my RAID1 array that I had failed to extract every other possible way, and the fastest way was to pull one of the drives out. I do want to put the drive back in and still have my configuration in place as well. Here is what I did after doing some online research on other sites. NOTE: NAS:0 is the name of my NAS device, so substitute appropriately. The array was automatically assembled, although it would say that it is not mounted if you were to run the mount command; you can verify this by running:
[root@localhost Desktop]# cat /proc/mdstat
Personalities : [raid1]
md127 : active (auto-read-only) raid1 sdb2[0]
      1952996792 blocks super 1.2 [2/1] [U_]
unused devices: <none>
Notice it was automatically assembled as /dev/md127 for me. OK, then:
[root@localhost Desktop]# mdadm -A -R /dev/md9 /dev/sdb2
mdadm: /dev/sdb2 is busy - skipping
[root@localhost Desktop]# mdadm --manage --stop /dev/md/NAS\:0
mdadm: stopped /dev/md/NAS:0
[root@localhost Desktop]# mdadm -A -R /dev/md9 /dev/sdb2
mdadm: /dev/md9 has been started with 1 drive (out of 2).
[root@localhost Desktop]# mount /dev/md9 /mnt/
That did it for me. If in doubt, dd the drive to make a full copy first and work on that from a CentOS or other Linux live CD.
I have a horrible situation where I have to restore data from damaged raid system in a rescue Debian Linux. I just want to mount them all to /mnt/rescue in read only modus to be able to copy VMWare GSX images to another machine and migrate them to ESXi later on. The output for relevant commands is as follows. fdisk -lDisk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0005e687 Device Boot Start End Blocks Id System /dev/sda1 1 523 4200997 fd Linux raid autodetect /dev/sda2 524 785 2104515 fd Linux raid autodetect /dev/sda3 786 182401 1458830520 fd Linux raid autodetectDisk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00014fc7 Device Boot Start End Blocks Id System /dev/sdb1 1 523 4200997 fd Linux raid autodetect /dev/sdb2 524 785 2104515 fd Linux raid autodetect /dev/sdb3 786 182401 1458830520 fd Linux raid autodetectDisk /dev/md0: 4301 MB, 4301717504 bytes 2 heads, 4 sectors/track, 1050224 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000Disk /dev/md0 doesn't contain a valid partition tableDisk /dev/md1: 2154 MB, 2154954752 bytes 2 heads, 4 sectors/track, 526112 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000Disk /dev/md1 doesn't contain a valid partition tableI was trying to mount the disks as follows. mount -o ro /dev/sda1 /mnt/rescueThen I get following error. mount: unknown filesystem type 'linux_raid_member'Guessing file system is not going well either. mount -o ro -t ext3 /dev/sda1 /mnt/rescue/ mount: /dev/sda1 already mounted or /mnt/rescue/ busySo I tried to create a virtual device as follows. mdadm -A -R /dev/md9 /dev/sda1This results in the following message. mdadm: cannot open device /dev/sda1: Device or resource busy mdadm: /dev/sda1 has no superblock - assembly abortedNow I am lost, I have no idea how to recover the disks and get the data back. The following is the output of mda --examine for all 3 disks (I think it should be 3x raid1 disks). 
/dev/sda1: Magic : a92b4efc Version : 0.90.00 UUID : 6708215c:6bfe075b:776c2c25:004bd7b2 (local to host rescue) Creation Time : Mon Aug 31 17:18:11 2009 Raid Level : raid1 Used Dev Size : 4200896 (4.01 GiB 4.30 GB) Array Size : 4200896 (4.01 GiB 4.30 GB) Raid Devices : 3 Total Devices : 2 Preferred Minor : 0 Update Time : Sun Jun 2 00:58:05 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : 9070963e - correct Events : 19720 Number Major Minor RaidDevice State this 1 8 1 1 active sync /dev/sda1 0 0 0 0 0 removed 1 1 8 1 1 active sync /dev/sda1 2 2 8 17 2 active sync /dev/sdb1/dev/sda2: Magic : a92b4efc Version : 0.90.00 UUID : e8f7960f:6bbea0c7:776c2c25:004bd7b2 (local to host rescue) Creation Time : Mon Aug 31 17:18:11 2009 Raid Level : raid1 Used Dev Size : 2104448 (2.01 GiB 2.15 GB) Array Size : 2104448 (2.01 GiB 2.15 GB) Raid Devices : 3 Total Devices : 2 Preferred Minor : 1 Update Time : Sat Jun 8 07:14:24 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : 120869e1 - correct Events : 3534 Number Major Minor RaidDevice State this 1 8 2 1 active sync /dev/sda2 0 0 0 0 0 removed 1 1 8 2 1 active sync /dev/sda2 2 2 8 18 2 active sync /dev/sdb2/dev/sda3: Magic : a92b4efc Version : 0.90.00 UUID : 4f2b3b67:c3837044:776c2c25:004bd7b2 (local to host rescue) Creation Time : Mon Aug 31 17:18:11 2009 Raid Level : raid5 Used Dev Size : 1458830400 (1391.25 GiB 1493.84 GB) Array Size : 2917660800 (2782.50 GiB 2987.68 GB) Raid Devices : 3 Total Devices : 2 Preferred Minor : 2 Update Time : Sat Jun 8 14:47:00 2013 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 2b2b2dad - correct Events : 36343894 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 1 8 3 1 active sync /dev/sda3 0 0 0 0 0 removed 1 1 8 3 1 active sync /dev/sda3 2 2 0 0 2 faulty removedcat /proc/mdstat Personalities : [raid1] md2 : inactive sda3[1](S) sdb3[2](S) 2917660800 blocksmd1 : active raid1 sda2[1] sdb2[2] 2104448 blocks [3/2] [_UU]md0 : active raid1 sda1[1] sdb1[2] 4200896 blocks [3/2] [_UU]md2 seems to be damaged and it is probably the raid with my VMWare images. I would like to access the data from md2 (the data on the active and not damaged disk, that is /dev/sda3) by mounting it outside of the raid. Is it a good idea to just execute mdadm --manage /dev/md2 --remove /dev/sda3 (would it even work as md2 is not seen by fdisk)? Should I re-assamble the other raids md0 and md1 by running mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1? UPDATE 0: I am not able to assemble md0 and md2. root@rescue ~ # mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 mdadm: cannot open device /dev/sda1: Device or resource busy mdadm: /dev/sda1 has no superblock - assembly aborted root@rescue ~ # mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 mdadm: cannot open device /dev/sda3: Device or resource busy mdadm: /dev/sda3 has no superblock - assembly abortedMounting with mount -t auto is not possible. root@rescue ~ # mount -t auto -o ro /dev/md0 /mnt/rescue/ /dev/md0 looks like swapspace - not mounted mount: you must specify the filesystem type root@rescue ~ # mount -t auto -o ro /dev/md2 /mnt/rescue/ mount: you must specify the filesystem typeMounting /dev/md1 works but no VMWare data on it. 
root@rescue /mnt/rescue # ll total 139M -rw-r--r-- 1 root root 513K May 27 2010 abi-2.6.28-19-server -rw-r--r-- 1 root root 631K Sep 16 2010 abi-2.6.32-24-server -rw-r--r-- 1 root root 632K Oct 16 2010 abi-2.6.32-25-server -rw-r--r-- 1 root root 632K Nov 24 2010 abi-2.6.32-26-server -rw-r--r-- 1 root root 632K Dec 2 2010 abi-2.6.32-27-server -rw-r--r-- 1 root root 632K Jan 11 2011 abi-2.6.32-28-server -rw-r--r-- 1 root root 632K Feb 11 2011 abi-2.6.32-29-server -rw-r--r-- 1 root root 632K Mar 2 2011 abi-2.6.32-30-server -rw-r--r-- 1 root root 632K Jul 30 2011 abi-2.6.32-33-server lrwxrwxrwx 1 root root 1 Aug 31 2009 boot -> . -rw-r--r-- 1 root root 302K Aug 4 2010 coffee.bmp -rw-r--r-- 1 root root 89K May 27 2010 config-2.6.28-19-server ...UPDATE 1: I tried to stop md2 and md0 and assemble once again. mdadm -S /dev/md0root@rescue ~ # mount -t auto -o ro /dev/md0 /mnt/rescue/ /dev/md0 looks like swapspace - not mounted mount: you must specify the filesystem typemdadm -S /dev/md2root@rescue ~ # mount -t auto -o ro /dev/md2 /mnt/rescue/ mount: you must specify the filesystem typeAny ideas? UPDATE 2: Assembling from one disk is not working due to following error message. root@rescue ~ # mdadm -S /dev/md2 root@rescue ~ # mdadm --assemble /dev/md2 /dev/sda3 mdadm: /dev/md2 assembled from 1 drive - not enough to start the array.root@rescue ~ # mdadm -S /dev/md2 mdadm: stopped /dev/md2 root@rescue ~ # mdadm --assemble /dev/md2 /dev/sdb3 mdadm: /dev/md2 assembled from 1 drive - not enough to start the array.Even new raid fails. root@rescue ~ # mdadm -S /dev/md9 mdadm: stopped /dev/md9 root@rescue ~ # mdadm --assemble /dev/md9 /dev/sda3 mdadm: /dev/md9 assembled from 1 drive - not enough to start the array.root@rescue ~ # mdadm -S /dev/md9 mdadm: stopped /dev/md9 root@rescue ~ # mdadm --assemble /dev/md9 /dev/sdb3 mdadm: /dev/md9 assembled from 1 drive - not enough to start the array.Creating new md disk fails too. root@rescue ~ # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda1[1] sdb1[2] 4200896 blocks [3/2] [_UU]md1 : active raid1 sda2[1] sdb2[2] 2104448 blocks [3/2] [_UU]unused devices: <none> root@rescue ~ # mdadm -A -R /dev/md9 /dev/sda3 mdadm: failed to RUN_ARRAY /dev/md9: Input/output error mdadm: Not enough devices to start the array. root@rescue ~ # cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] md9 : inactive sda3[1] 1458830400 blocksmd0 : active raid1 sda1[1] sdb1[2] 4200896 blocks [3/2] [_UU]md1 : active raid1 sda2[1] sdb2[2] 2104448 blocks [3/2] [_UU]unused devices: <none> root@rescue ~ # mdadm -S /dev/md9 mdadm: stopped /dev/md9 root@rescue ~ # mdadm -A -R /dev/md9 /dev/sdb3 mdadm: failed to RUN_ARRAY /dev/md9: Input/output error mdadm: Not enough devices to start the array.UPDATE 3: Removing disks from md2 is not working. mdadm --remove /dev/md2 /dev/sda3 mdadm: cannot get array info for /dev/md2UPDATE 4: Finally, running assemble with --force hopefully did it. I am now copying files to another server.
How to mount a disk from destroyed raid system?
This will drop you into an initramfs shell: Start your computer. Wait until the GRUB menu appears. Hit e to edit the boot commands. Append break=mount to your kernel line. Hit F10 to boot. Within a moment, you will find yourself in an initramfs shell. If you want to make this behavior persistent, add GRUB_CMDLINE_LINUX_DEFAULT="break=mount" to /etc/default/grub and run grub-mkconfig -o /boot/grub/grub.cfg.
I am trying to customize the initramfs rescue environment and would like to force the kernel to fail mounting / and drop into the (initramfs) rescue shell, as opposed to single user mode. How can I do that? NB: I know how to hook into initramfs-tools to achieve the customization steps, but I need to be able to verify the result.
How can I force a Ubuntu kernel to fail mounting / and drop into the initramfs rescue shell?
Rescue kernels use a general-purpose initramfs, so you have to regenerate it. (Compare the sizes of your initramfses to see the impact of this.) To create a new rescue kernel using the currently-running kernel, on Fedora 36, run sudo rm /boot/*rescue* sudo /usr/lib/kernel/install.d/51-dracut-rescue.install add "$(uname -r)" /boot "/boot/vmlinuz-$(uname -r)"
On the Internet I've only found this: /etc/kernel/postinst.d/51-dracut-rescue-postinst.sh $(uname -r) /boot/vmlinuz-$(uname -r)but it doesn't work in Fedora 36 and soon to be released version 37, because this file is missing, in fact the entire /etc/kernel/postinst.d/ directory is empty. I've also found dnf reinstall kernel-corebut it only works for an up-to-date kernel. I'm running the kernel which is no longer available in repositories. Also, this is not a good option per se since it will result in reinstalling literally many hundreds of files for no reason. grep -r rescue /etc finds nothing. # grep -r rescue /usr/bin grep: /usr/bin/tdbdump: binary file matches grep: /usr/bin/ctags: binary file matches grep: /usr/bin/systemctl: binary file matches grep: /usr/bin/systemd-analyze: binary file matches grep: /usr/bin/efisecdb: binary file matches grep: /usr/bin/dpkg: binary file matches grep: /usr/bin/grub2-mkrescue: binary file matches/usr/share contains a ton of matches but I've no idea how to work with that. kernel-core and kernel-modules packages have RPM scripts that do something but there's nothing specific to "rescue". It looks like it's all done as a single operation but I don't want to regenerate the initrd.
How to manually regenerate the rescue kernel from the running/installed kernel in Fedora in 2022?
Ubuntu 16.04 contains a package called dropbear-initramfs which is supposed to provide this feature.Lightweight SSH2 server and client - initramfs integration dropbear is a SSH 2 server and client designed to be small enough to be used in small memory environments, while still being functional and secure enough for general use. It implements most required features of the SSH 2 protocol, and other features such as X11 and authentication agent forwarding. This package provides initramfs integration.The only items I needed to adjust in addition to installing said package where:Uncomment the commented out DROPBEAR=y inside /etc/initramfs-tools/conf-hooks.d/dropbear Convert my existing host keys (see below) Create and populate /etc/initramfs-tools/root/.ssh/authorized_keys. For this I opted to bind-mount /root/.ssh onto /etc/initramfs-tools/root/.ssh A final update-initramfs -u -k all re-created all the initrd imagesTo convert the keys I ran these commands: /usr/lib/dropbear/dropbearconvert openssh dropbear /etc/ssh/ssh_host_rsa_key /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key /usr/lib/dropbear/dropbearconvert openssh dropbear /etc/ssh/ssh_host_dsa_key /etc/initramfs-tools/etc/dropbear/dropbear_dss_host_key /usr/lib/dropbear/dropbearconvert openssh dropbear /etc/ssh/ssh_host_ecdsa_key /etc/initramfs-tools/etc/dropbear/dropbear_ecdsa_host_keyNote: the source and target file names differ. So don't make assumptions here. Also, /usr/lib/dropbear isn't in my PATH, so I needed to give the full path to execute dropbearconvert.
This is mostly aimed at Debian/Ubuntu, but I feel savvy enough on a variety of distros to be able to adapt the solution for one distro to another. Here's my scenario. There are a few situations when the boot process will drop you to the shell (usually busybox) of the initrd. Most notably whenever you run a hardware RAID for which drivers have to be rebuilt for each and every new kernel revision. I'd like to be able to access the rescue system the same way as I would access the fully booted system. I reckon it'd be possible to put static builds of the shell(s) and sshd (OpenSSH or dropbear) into the initrd and have been looking for an existing solution that I can adjust to my needs. Assuming there is no existing solution (since I have searched for quite a while) what do I need to consider aside from using static builds where possible (or supply the libs)? Is it reasonable to simply cache a static build of dropbear and use /etc/initramfs-tools/hooks to embed that along with a "converted" OpenSSH sshd_config and the original host keys?
Are there any canned solutions for running sshd in the initrd?
Open the file /usr/lib/dracut/dracut.conf.d/02-rescue.conf and change dracut_rescue_image="yes"to dracut_rescue_image="no"This seems to be the only way for CentOS 7.
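Note that this only stops future rescue images from being generated. If a rescue kernel and its initramfs are already sitting in /boot, you would still have to delete them by hand and refresh the GRUB menu, roughly like this (a sketch; check the exact file names on your system first, the wildcard only reflects the usual CentOS naming, and on UEFI systems the grub.cfg lives under /boot/efi/EFI/centos/ instead):
ls /boot/*rescue*
rm -f /boot/vmlinuz-0-rescue-* /boot/initramfs-0-rescue-*.img
grub2-mkconfig -o /boot/grub2/grub.cfg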
To make a long story short, my (CentOS 7) server's /boot is too small (100MiB) to hold 2 kernels plus the automatically generated rescue image. I want to avoid the hassle of repartitioning and reinstalling my server by preventing the rescue image from being generated. This would leave enough space for at least 2 kernels, and I can still use my hoster's netboot rescue solution should it be needed. (I know the only 'right' way to deal with this is to fix my partition scheme, but considering the downtime involved with that I wanted to try a more pragmatic solution first)
How do I disable the creation of the rescue boot image on CentOS?
When you boot the live distros you'll typically get a boot menu screen. When you get to that screen, just hit the Esc key, which will bring up the GRUB boot prompt, from where you can type linux rescue. Additional boot options are covered in this Fedora document titled: 7.1.3. Additional Boot Options. References: Chapter 7. Booting the Installer
I have created a new Fedora live USB with the intention of booting into rescue mode and fixing the bootloader, so that I can dual-boot Win7 and Fedora 20. However, I do not understand how to boot into rescue mode, since the installation boot prompt is not shown as described by the guide; I am taken directly to the installation process. Pressing Tab when given the option to run Fedora Live lets me write stuff in a terminal-ish prompt, but writing linux rescue simply starts Fedora Live as usual. Some sources claim that I need the DVD, not the live USB. I will try this shortly.
Booting Fedora in rescue mode
it still asking me for root password, root FS is not in read only modeThis is the norm for systemd's rescue mode and thus for systemd operating systems. For not (re-)mounting filesystems and a read-only / mount, you should look to emergency mode, which is not the same as rescue mode. Both emergency and rescue modes invoke sulogin on systemd operating systems. The differences between the twain lie in how much of the basic system is brought up, and what is mounted. Note that single user mode was superseded by the split mechanisms of emergency mode and rescue mode in 1995, when van Smoorenburg init gained its -b option. The other answer is talking about something else that is also confusingly called "rescue mode", as well as referencing the CentOS 5 doco for CentOS 7 even though CentOS 7 is a systemd operating system whereas CentOS 5 was not. That "rescue mode" involves bootstrapping another operating system image from a CD-ROM, DVD-ROM, or USB storage device. This rescue mode and emergency mode involve what you are talking about in the question: entries on the GRUB menu and stuff that you can edit into the kernel command line from that very same GRUB menu. Further readingJonathan de Boyne Pollard (2016). The gen on emergency and rescue mode bootstrap. Frequently Given Answers. Lennart Poettering et al.. bootup. systemd manual pages. Freedesktop.org. Lennart Poettering et al.. "emergency.target". systemd.special. systemd manual pages. Freedesktop.org. Lennart Poettering et al.. "rescue.target". systemd.special. systemd manual pages. Freedesktop.org. "Booting into Emergency Mode". Red Hat Enterprise Linux 7 System Administrator's Guide. RedHat. "Booting into Rescue Mode". Red Hat Enterprise Linux 7 System Administrator's Guide. RedHat. Lingeshwaran Rangasamy (2015). Redhat Enterprise Linux 7 — systemd targets. Unix Arena. Working with systemd targets. Red Hat Enterprise Linux 7 System Administrator's Guide. RedHat. How to permanently disable root-password prompt for recovery mode, RHEL7
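If you want to compare the two modes yourself, they can be selected with the standard systemd targets, either from the GRUB menu or on a running system (nothing here is CentOS-specific):
# at the GRUB menu, edit the linux/linux16 line and append one of:
systemd.unit=rescue.target       # rescue mode: sulogin, basic system, local filesystems mounted
systemd.unit=emergency.target    # emergency mode: sulogin, only / mounted, read-only
# or switch an already-running system:
systemctl rescue
systemctl emergency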
When I enter the GRUB menu, I get two entries: CentOS Linux (3.10.0-514.21.1.el7.x86_64) 7 (Core) CentOS Linux (0-rescue-e1ac24cbe9f94f2caa228d77e027be8b) 7 (Core) When I boot into the second line (the rescue one), I get a normal prompt, as if I had booted into the first line. I was expecting something like a rescue shell or something equivalent to single-user mode, but it still asks me for the root password, the root FS is not in read-only mode, etc. Nothing seems different from multi-user mode. Can someone try on their distro to see if it has the same behavior? I'm pretty new to rescue, emergency, and single-user modes, so I might have missed something. Here is my conf: [root@centos3 ~]# uname -a Linux centos3 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux [root@centos3 ~]# cat /etc/redhat-release CentOS Linux release 7.3.1611 (Core)
Why booting into rescue mode menu doesn't do anything?
This is a problem with apt-get: it knows about dependencies, but does not know how to upgrade a dependency. Since 12.04.3 was released, both openssh-client and openssh-server have been updated, but the former is already installed on the rescue DVD. You can have your friend do a complete upgrade of all packages before installing openssh-server. What is much quicker is to do: apt-get remove -y openssh-client apt-get install -y openssh-client ubuntu-desktop openssh-server (ubuntu-desktop depends on openssh-client, so it gets removed by the first command). You should be able to do apt-get install openssh-client but I never got that to work, and removing it first does.
I tried to help a friend with hard drive boot problems. I first asked her to make a rescue disk (Ubuntu 12.04.3), and boot from it. Then I asked her to open a console (Alt+F1) and use sudo to become root. All OK. Then I told her to install openssh-server - so I can remotely login and look at the system - but that does not work: openssh-client (and some other applications) are reported as "not going to be installed". But openssh-client is already installed: checking with dpkg -l openssh-client shows this to be the case. Why doesn't this work?
Installing openssh-server after Rescue Disk boot
Doing: sudo chown -R root.root /etcon the commandline will set /etc and everything underneath to owner root and group root However on my system (Ubuntu 12.04) not everything under /etc is in group root. The following list might help (generated with sudo find /etc ! -gid 0 -ls | cut -c 29-): root dovecot 5348 Apr 8 2012 /etc/dovecot/dovecot-sql.conf.ext root dovecot 782 Apr 8 2012 /etc/dovecot/dovecot-dict-sql.conf.ext root dovecot 410 Apr 8 2012 /etc/dovecot/dovecot-db.conf.ext root shadow 2009 Dec 23 16:10 /etc/shadow root lp 4096 Mar 12 19:38 /etc/cups root lp 540 Mar 12 19:38 /etc/cups/subscriptions.conf root lp 108 Sep 1 2012 /etc/cups/classes.conf root lp 4096 Oct 8 2012 /etc/cups/ppd root lp 2751 Mar 12 07:38 /etc/cups/printers.conf root lp 2751 Mar 11 21:06 /etc/cups/printers.conf.O root lp 108 Jun 6 2012 /etc/cups/classes.conf.O root lp 540 Mar 12 19:24 /etc/cups/subscriptions.conf.O root lp 4096 Mar 28 2012 /etc/cups/ssl root sasl 12288 Jun 6 2012 /etc/sasldb2 root daemon 144 Oct 25 2011 /etc/at.deny root dialout 66 Oct 31 2012 /etc/wvdial.conf root lightdm 0 Apr 21 2012 /etc/mtab.fuselock root shadow 981 Feb 19 23:38 /etc/gshadow root dovecot 1306 Jun 6 2012 /etc/ssl/certs/dovecot.pem root ssl-cert 4096 Jun 6 2012 /etc/ssl/private root dovecot 1704 Jun 6 2012 /etc/ssl/private/dovecot.pem root ssl-cert 1704 Apr 21 2012 /etc/ssl/private/ssl-cert-snakeoil.key root fuse 216 Oct 18 2011 /etc/fuse.conf root dip 4096 Oct 31 2012 /etc/ppp/peers root dip 1093 Mar 28 2012 /etc/ppp/peers/provider root dip 4096 Mar 28 2012 /etc/chatscripts root dip 656 Mar 28 2012 /etc/chatscripts/provider
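After the recursive chown, the few entries that should not be group root (per a listing like the one above) can be put back with chgrp, run as root. A rough sketch based on the Ubuntu 12.04 list above; the exact set of files and groups will differ on the broken 13.10 system, so check what its own packages expect:
chgrp shadow /etc/shadow /etc/gshadow
chgrp -R lp /etc/cups
chgrp ssl-cert /etc/ssl/private /etc/ssl/private/ssl-cert-snakeoil.key
chgrp daemon /etc/at.deny
chgrp fuse /etc/fuse.conf
chgrp -R dip /etc/ppp/peers /etc/chatscripts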
I have a computer with Ubuntu 13.10 installed. The user (say Walesa) has changed the ownership of the /etc folder and all its subfolders from root to Walesa using a privileged file manager. As sudo was disabled, he rebooted, hoping it would be re-enabled again. But the login security check does not allow logging in after entering the username and password, saying "owner of etc/profile is not root". However, a command-line login showing I have no name!@Walesa is possible. Is there a way to restore the ownership of /etc and all its subfolders to root using this command line?
Ownership of the /etc folder was changed; how can it be restored from the command line?
Clonezilla would be a suitable product for a whole-disk image. It works in a fashion similar to Ghost.
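If you prefer not to use a dedicated tool, a plain dd image piped over SSH from any live CD also covers the "save to ssh://" part of the question. A minimal sketch (host name, user and paths are placeholders, and the disk being imaged should be unmounted or idle):
# create a compressed whole-disk image on a LAN machine
sudo dd if=/dev/sda bs=4M | gzip -c | ssh user@backuphost 'cat > /backups/system.img.gz'
# restore it later from the same live environment
ssh user@backuphost 'cat /backups/system.img.gz' | gunzip -c | sudo dd of=/dev/sda bs=4M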
What can I use to create a backup image of my entire system that will be saved on a LAN computer via SSH? If I break anything later, I want to be able to restore my entire system as it was before the backup in minutes. Is there a Live CD that can "save backup image to ssh://..." and "restore from backup image ssh://..."?
How do I backup everything?
Some security hardening manuals suggest disabling the loading of unnecessary filesystem types. The examples typically include vfat among the types to be disabled. But for systems using UEFI, vfat is a necessary filesystem type: the EFI System Partition (ESP) that contains the bootloader *.efi files is typically a FAT32 filesystem, and that is one of the FAT filesystem sub-types handled by the vfat module. Typically, mounting the ESP is necessary for applying any bootloader updates, and another item in the security hardening manuals usually requires installing any security updates in a timely manner. Check the /etc/modprobe.d/*.conf files for a line like:
install vfat /bin/false
or
install vfat /bin/true
If such a line exists, comment it out and try again. You should also contact whoever is responsible for the security hardening, as it is obvious this hardening was applied without rebooting the system to test for bad side effects. Perhaps the hardening was tested only on systems with a classic MBR boot style, but applied to systems with UEFI too? In that case, this same error might be present on other hardened systems too.
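A quick way to confirm whether this is the cause (a sketch, with paths as on a standard CentOS 7 layout):
# look for a hardening rule that disables vfat
grep -r vfat /etc/modprobe.d/
# try loading the module by hand; an active "install vfat /bin/false" line will prevent this
modprobe -v vfat
lsmod | grep vfat
# once the rule is commented out, the ESP should mount again
mount /dev/sda1 /boot/efi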
I have a CentOS 7.5 server that does not boot up; it only boots into rescue mode. This happened after a forced reboot of the server. I found the following error after checking journalctl -p err. grub2 was installed after getting the correct x86_64 file onto the system, and I tried to mount /boot/efi, but got the error: Unknown file type "vfat". Then I ran dosfsck to correct any dirty bits; there was a dirty bit, and it was corrected. I tried to mount again, and the same error occurred: Unknown file type "vfat". The vfat modules are available and they are the same version as the kernel. I did not update the kernel on this server, so we can rule out a kernel version mismatch. I also tried re-installing the kernel and all the packages related to the kernel. Still /dev/sda1 cannot be mounted on /boot/efi. I have basically run out of solutions now. Could you help me with this, please? Also, I do not have internet access on this server; I can download any file from another computer and transfer it over. Please consider this when writing your suggestion. My fstab is as follows,
/boot/efi failed to mount due to unknown file system "vfat" : CentOS 7.5
The main problem here is not the RAID but a bogus partition table. The partition table is made for 512 byte sectors however the drive is detected as 4K native sectors. So all partition offsets and sizes are completely wrong. You might be able to work around it with losetup: losetup --find --show --read-only --sector-size 512 --partscan /dev/sdbAnd then see if the loop device has valid partitions and mdadm metadata: mdadm --examine /dev/loop*And then go on from there, hopefully with no further problems. If there's any chance that the drives might be defective, you can also consider pulling an image with ddrescue first. Image files would also default to 512 byte sector handling usually. Do everything read-only or use copy-on-write overlays.
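If you do image the drives first, a typical read-only workflow is to pull the image with ddrescue and then loop-mount the image file instead of the disk. A sketch (device and file names are examples; the map file lets you resume and retry bad areas later):
ddrescue -n /dev/sdb /mnt/space/sdb.img /mnt/space/sdb.map      # first pass, skip the slow scraping of bad areas
ddrescue -d -r3 /dev/sdb /mnt/space/sdb.img /mnt/space/sdb.map  # retry bad sectors with direct I/O
losetup --find --show --read-only --sector-size 512 --partscan /mnt/space/sdb.img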
I hope you´re doing well. I work as a technician in an IT company focused on Windows systems and cloud stuff, hence my knowledge to Linux is sadly very limited. So please excuse any dumb questions but I´ll try to be as helpfull as possible. Also this is my first time posting here, so pleas tell me if I do something wrong. So here´s the story: A new customer called and said his server is not reachable -> Server is dead (powersupply and and motherboard broke, even with new PS not even a POST beep). His old IT company was a oneman show, he unfortunately died, so no help from that side.. The Server is a 15 Year old Fujitsu with a LSI logic RAID embedded. Inside we found 2x 2TB SATA HDDs connected to the MB. All his data aswell as his software database file are on this server. Of course there´s not a backup.. he doesnt need one because eveything is mirrored.. you know. Also the server OS was setup only 2-3 years ago the customer stated. So I started with some recoverytools like Diskinternals RAID Recovery but those did not really work out. I only got single files (some of them were functional docs so the Disk itself seems ok) but no folders or such. To have the customers software restored to another system, we need a complete folder, subfolders and files. But what I found were files and folders only present on a Linux system. So I think the previous technician installed a Linux OS and set up a network share for the customer. So I pulled a dump from one of the Raid memberdisks to another HDD and set up a Debian machine for further testing. I´m still not sure if he set up a linux / mdadm Raid or if he did it using the onboard LSI Raid controller. Until now I had no luck mounting or reassembling the disk. Any help would be greatly appreciated.mount /dev/sdb /mnt/mountpoint brings error wrong FS, bad options, corrupted superblock Disk is not shown as md in /dev/lsblk: sdb 8:16 0 1,8T 0 disk ├─sdb1 8:17 0 240G 0 part └─sdb2 8:18 0 1,6T 0 partfdisk -l Festplatte /dev/sdb: 1,82 TiB, 2000398934016 Bytes, 488378646 Sektoren Festplattenmodell: EFAX-68FB5N0 Einheiten: Sektoren von 1 * 4096 = 4096 Bytes Sektorgröße (logisch/physikalisch): 4096 Bytes / 4096 Bytes E/A-Größe (minimal/optimal): 4096 Bytes / 4096 Bytes Festplattenbezeichnungstyp: dos Festplattenbezeichner: 0x87c99aecGerät Boot Anfang Ende Sektoren Größe Kn Typ /dev/sdb1 * 2048 62916607 62914560 240G fd Linux RAID-Autoerkennung /dev/sdb2 62916608 3897729167 3834812560 14,3T fd Linux RAID-Autoerkennung /dev/sdb3 3897729168 3907029167 9300000 35,5G fd Linux RAID-Autoerkennungmdadm --query /dev/sdb /dev/sdb: is not an md arraymdadm --assemble --scan mdadm: No arrays found in config file or automaticallymdadm --examine /dev/sdb mdadm: No md superblock detected on /dev/sdbThanks in advance for any help or tips, KofftheHoff Edit1: After using losetup --find --show --read-only --sector-size 512 --partscan /dev/sdband mdadm --examine /dev/loop*I get this wich looks promising: /dev/loop0: MBR Magic : aa55 Partition[0] : 62914560 sectors at 2048 (type fd) Partition[1] : 3834812560 sectors at 62916608 (type fd) Partition[2] : 9300000 sectors at 3897729168 (type fd) /dev/loop0p1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 7e3b9767:71d0e46d:6fe589d3:47671ff3 Name : schobert-fs:0 Creation Time : Wed Mar 27 18:49:49 2019 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 62881792 (29.98 GiB 32.20 GB) Array Size : 31440896 (29.98 GiB 32.20 GB) Data Offset : 32768 sectors Super Offset : 8 sectors Unused Space : before=32680 sectors, after=0 
sectors State : clean Device UUID : 34f9240c:3f35c4a9:b20f6259:5a6a295e Update Time : Fri Dec 2 16:10:26 2022 Bad Block Log : 512 entries available at offset 72 sectors Checksum : 9882cebb - correct Events : 430 Device Role : Active device 1 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) /dev/loop0p2: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 1c73d398:e5404786:1cba7820:fb5e4cd5 Name : schobert-fs:1 Creation Time : Wed Mar 27 18:50:09 2019 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 3834550416 (1828.46 GiB 1963.29 GB) Array Size : 1917275200 (1828.46 GiB 1963.29 GB) Used Dev Size : 3834550400 (1828.46 GiB 1963.29 GB) Data Offset : 262144 sectors Super Offset : 8 sectors Unused Space : before=262056 sectors, after=16 sectors State : clean Device UUID : 038f5d97:741a9a29:c4803eec:d5502d4bInternal Bitmap : 8 sectors from superblock Update Time : Wed Nov 30 10:19:49 2022 Bad Block Log : 512 entries available at offset 72 sectors Checksum : 5dcb018a - correct Events : 12662 Device Role : Active device 1 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) /dev/loop0p3: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 00bd818b:55eed0df:3ad0c7d3:0c7a3a97 Name : schobert-fs:2 Creation Time : Wed Mar 27 18:50:31 2019 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 9291808 (4.43 GiB 4.76 GB) Array Size : 4645888 (4.43 GiB 4.76 GB) Used Dev Size : 9291776 (4.43 GiB 4.76 GB) Data Offset : 8192 sectors Super Offset : 8 sectors Unused Space : before=8104 sectors, after=32 sectors State : clean Device UUID : 1577f59e:d2022fbc:d0b79765:f127efbc Update Time : Wed Nov 30 00:01:49 2022 Bad Block Log : 512 entries available at offset 72 sectors Checksum : dc81c5d0 - correct Events : 92 Device Role : Active device 1 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) mdadm: No md superblock detected on /dev/loop1. mdadm: No md superblock detected on /dev/loop2. mdadm: No md superblock detected on /dev/loop3. mdadm: No md superblock detected on /dev/loop4. mdadm: No md superblock detected on /dev/loop5. mdadm: No md superblock detected on /dev/loop6. mdadm: No md superblock detected on /dev/loop7. mdadm: cannot open /dev/loop-control: Invalid argumentThanks @frostschutz, also for the tip with pulling an image. I´m working with a dumped disk right now, the originals stay untouched. @gabor.zed Do those different names schobert-fs:0 /schobert-fs:1 /schobert-fs:2 prove your assumption? Or is the number after : just a marker what drive it was in the raid? 
lsblk now brings this: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 1,8T 1 loop ├─loop0p1 259:0 0 30G 1 part ├─loop0p2 259:1 0 1,8T 1 part └─loop0p3 259:2 0 4,4G 1 partfdisk now looks reasonable too: Festplatte /dev/loop0: 1,82 TiB, 2000398934016 Bytes, 3907029168 Sektoren Einheiten: Sektoren von 1 * 512 = 512 Bytes Sektorgröße (logisch/physikalisch): 512 Bytes / 512 Bytes E/A-Größe (minimal/optimal): 512 Bytes / 512 Bytes Festplattenbezeichnungstyp: dos Festplattenbezeichner: 0x87c99aecGerät Boot Anfang Ende Sektoren Größe Kn Typ /dev/loop0p1 * 2048 62916607 62914560 30G fd Linux RAID-Autoerkennung /dev/loop0p2 62916608 3897729167 3834812560 1,8T fd Linux RAID-Autoerkennung /dev/loop0p3 3897729168 3907029167 9300000 4,4G fd Linux RAID-AutoerkennungI even get 3 md devices now listed in /dev/ md125 md126 md127 mdadm --query --detail /dev/md126/dev/md125: Version : 1.2 Raid Level : raid0 Total Devices : 1 Persistence : Superblock is persistent State : inactive Working Devices : 1 Name : schobert-fs:2 UUID : 00bd818b:55eed0df:3ad0c7d3:0c7a3a97 Events : 92 Number Major Minor RaidDevice - 259 2 - /dev/loop0p3mdadm --query --detail /dev/md126/dev/md126: Version : 1.2 Raid Level : raid0 Total Devices : 1 Persistence : Superblock is persistent State : inactive Working Devices : 1 Name : schobert-fs:1 UUID : 1c73d398:e5404786:1cba7820:fb5e4cd5 Events : 12662 Number Major Minor RaidDevice - 259 1 - /dev/loop0p2mdadm --query --detail /dev/md127/dev/md127: Version : 1.2 Raid Level : raid0 Total Devices : 1 Persistence : Superblock is persistent State : inactive Working Devices : 1 Name : schobert-fs:0 UUID : 7e3b9767:71d0e46d:6fe589d3:47671ff3 Events : 430Number Major Minor RaidDevice - 259 0 - /dev/loop0p1The md devices cannot be mounted by themself, right? And whats somehow odd is that he states the md devices are Raid0. Don´t know what to make of that right now. At least when I try: mount -o ro -t auto /dev/md125 /mnt/raid1i get: mount: /mnt/raid1: Der Superblock von /dev/md125 konnte nicht gelesen werden.Superblock cannot be read I think i have to assemble the raid somehow before accessing it? Edit 2: @frostschutz i ran as requested: file -s /dev/md*/dev/md125: empty /dev/md126: empty /dev/md127: emptyand blkid/dev/sda1: UUID="ae7d369d-cf6b-4f84-a010-5d8a4c6fac80" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="248a7be6-01" /dev/sda5: UUID="e49af320-e5fc-45fa-91b4-528b231f0bbd" TYPE="swap" PARTUUID="248a7be6-05" /dev/sdb1: PARTUUID="87c99aec-01" /dev/loop0p1: UUID="7e3b9767-71d0-e46d-6fe5-89d347671ff3" UUID_SUB="34f9240c-3f35-c4a9-b20f-62595a6a295e" LABEL="schobert-fs:0" TYPE="linux_raid_member" PARTUUID="87c99aec-01" /dev/loop0p2: UUID="1c73d398-e540-4786-1cba-7820fb5e4cd5" UUID_SUB="038f5d97-741a-9a29-c480-3eecd5502d4b" LABEL="schobert-fs:1" TYPE="linux_raid_member" PARTUUID="87c99aec-02" /dev/loop0p3: UUID="00bd818b-55ee-d0df-3ad0-c7d30c7a3a97" UUID_SUB="1577f59e-d202-2fbc-d0b7-9765f127efbc" LABEL="schobert-fs:2" TYPE="linux_raid_member" PARTUUID="87c99aec-03"Edit3: So since I think all the data is located on md126 next up I ran: mdadm --stop /dev/md126 mdadm: stopped /dev/md126After that i tried auto assemble and it put the md126 back together and it seems to work since it came up with the raidname and a new device under /dev/md/ mdadm --assemble --scan mdadm: /dev/md/schobert-fs:1 has been started with 1 drive (out of 2).I tried mounting it then, but it keeps saying it can´t because its read only. Makes sense because the loop device ist in read only mode. 
But Should´nt it run when iI mount it in read only mode also with the option -o ro? mount -o ro /dev/md/schobert-fs\:1 /mnt/raid1 mount: /mnt/raid1: /dev/md126 konnte nicht im Lese-Schreib-Modus eingehängt werden, (Medium) ist schreibgeschützt..Edit 4: hoooray, I got it! Found the last hint in the mount manual:-r, --read-only Mount the filesystem read-only. A synonym is -o ro. Note that, depending on the filesystem type, state and kernel behavior, the system may still write to the device. For example, Ext3 or ext4 will replay its journal if the filesystem is dirty. To prevent this kind of write access, you may want to mount ext3 or ext4 filesystem with ro,noload mount options or set the block device to read-only mode, see command blockdev(8). […] norecovery/noload Don't load the journal on mounting. Note that if the filesystem was not unmounted cleanly, skipping the journal replay will lead to the filesystem containing inconsistencies that can lead to any number of problems.So i ran: mount -o ro,noload /dev/md/schobert-fs\:1 /mnt/raidand voila, all the files are there! Massive thanks to @frostschutz and @gabor.zed for helping me out! Have a good day all.
Recover files from Linux Raid1 member disk - as bad as it gets
The second GRUB option is to boot in rescue mode, when something has gone haywire. To remove it: 1) Remove the kernel image file rm -rf /boot/vmlinuz-0-rescue-6b78...2) Remove the boot option from GRUB grubby --remove-kernel=/boot/vmlinuz-0-rescue-6b78...(obviously, complete the commands with the correct number) You can safely remove this entry if you wish, but you could also just set up GRUB so it boots automatically to the first entry after a shorter delay (default is 5 secs).
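To find the exact file name and entry first, and to clean up the matching initramfs as well, something along these lines works (a sketch; the hashes and entry numbers on your system will differ):
grubby --info=ALL | grep -E 'index|kernel'        # list boot entries and their kernel paths
grubby --remove-kernel=/boot/vmlinuz-0-rescue-6b78...
rm -f /boot/vmlinuz-0-rescue-6b78... /boot/initramfs-0-rescue-6b78....img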
I installed RHEL7 using vmware and at some point two boot options appeared, one of them appears to be a rescue option (second in the image). What is this option and how can I remove it? Should it be removed?
What is the rescue boot option in RHEL7?
Try ddrescue (gddrescue in most distros): GNU ddrescue - Data recovery tool. Copies data from one file or block device to another, trying to rescue the good parts first in case of read errors.
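For the scenario in the question (copying a partition from a failing drive onto a new one), a typical two-pass run would look roughly like this (a sketch; adjust the device names, and keep the map file so interrupted runs can resume):
ddrescue -n /dev/sda3 /dev/sdc3 /root/part3.map      # first pass: grab everything that reads cleanly
ddrescue -d -r3 /dev/sda3 /dev/sdc3 /root/part3.map  # second pass: retry the bad areas with direct I/O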
I have a faulty 320GB drive which has reading errors in samish GB positions but the exact positions vary. I am ok with probability of errors, this is out of question here. First of all I was surprised with that I need conv=sync for conv=noerror being actually useful but ok, I have spare time to grow new foot. I found it because of file -s /dev/sdc* did not give any sensible output for last partitions (i.e. same as for source drive), it said data instead. However, I did not get any practical improvement after I added sync to my command line: the file -s output still makes no sense except for first partition which does not contain errors in FS description section so file -s command detects FS correctly. I confirm erratic copying with mount -o ro for both drives and comparing md5sums for all files (but directory structure alone is erratic). I am trying to dd it to new bigger drive this way: dd if=/dev/sda3 conv=noerror,sync bs=1M of=/dev/sdc3 2> /part3_log grep -oPaz '[[:digit:]]*(?=\+[[:digit:]]+ records out\n)' </part3_log >/part3_log_bads # parsing is ok for this specific case rm /part3_log_01 for i in $(cat /part3_log_bads); do dd if=/dev/sda3 conv=noerror,sync bs=1M of=/dev/sdc3 skip=$((i-1)) seek=$((i-1)) count=1 2>>/part3_log_01; done # retrying erratic blocks. i-1 because of number of records is written after erratic block was padded and written. noerror does not make any practical difference here. I get this output for each erratic block in /part3_log (as expected): dd: error reading ‘/dev/sda3’: Input/output error 71051+3 records in <<<<<<<<< second number increments from 0 after each erratic block indicating partial read, this is expected 71054+0 records out 74505519104 bytes (75 GB) copied, 2546,96 s, 29,3 MB/sAnd I get this strange output (speed difference is expected) for all blocks in /part3_log_01: 1048576 bytes (1,0 MB) copied, 6,5663 s, 160 kB/s 0+1 records in 0+0 records out 0 bytes (0 B) copied, 6,41877 s, 0,0 kB/s 0+1 records in 1+0 records out 1048576 bytes (1,0 MB) copied, 7,42028 s, 141 kB/s 1+0 records in 1+0 records outWhat draws my attention is that almost each input record is read partially while there are no errors reported despite them actually happening (I see them in dmesg). There are no error being reported for sdc (as expected, it's a new drive). So, ho do I blindly copy faulty drive and then retry the faulty records? My approach seems to fail at two points:it fails to copy data without shift occuring after erratic blocks (despite conv=sync being present) it fails to report errors while retrying bad blocks.P.S. I would like to do it with dd only. Using ddrescue is problematic ATM. P.P.S. It's Debian 8.7.1 and dd 8.23
Blindly dd'ing faulty drive to new drive
Okay, sorry for answering my own question so soon, but I noticed something flabbergasting. The .qcow2 file was of size 120400379904 Bytes, whereas the conversion of the image with qemu-img convert -O raw gave me an image of size 128849018880 Bytes. Quite a difference. Now, if we take the size in sectors found by testdisk, we will indeed notice that 512*251657216 is 128848494592, which happens to be 512 Bytes more than the file size of the "raw" image. That looks promising, I thought to myself. I generated these files a few years ago, so I am not sure whether I created them as sparse images. Nevertheless, if qemu-img info shows it that way, I thought to myself, let's try to convert the image format. Keep in mind that this doesn't change the original file! qemu-img convert -O raw input outputdoes that job, albeit slowly. Running testdisk again on that file worked surprisingly well, although I was still unable to convince mount to use a different superblock, despite -o sb=.... TestDisk 6.14, Data Recovery Utility, July 2013 Christophe GRENIER <[emailprotected]> http://www.cgsecurity.orgDisk bigdata/vm_disk_vdb.img - 128 GB / 120 GiB - CHS 15666 255 63 Partition Start End Size in sectors >P ext3 0 1 1 15664 239 62 251657216 [DATA]Structure: Ok.Keys T: change type, P: list files, Enter: to continue ext3 blocksize=4096 Large file Sparse superblock, 128 GB / 119 GiBAfter that, I could get testdisk to copy the files into a directory and diff it against my backups. There were a few corruptions, such: ext2fs_read_inode(ino=384492884) failed with error 2133571369.and also other minor issues, but the problems were affecting only about 0.1% of all files and folders. Start testdisk as follows to be able to figure out which files must be considered damaged: testdisk /log imagefile.img
I have an interesting case, where e2fsck refuses to recognize the file system inside a qcow2 image file. Using testdisk I am able to see the partition, so some markers would be left. The reason this problem occurred in the first place was because the host of the virtual machine died. So I choose None as the "type" of partition and get the following. TestDisk 6.14, Data Recovery Utility, July 2013 Christophe GRENIER <[emailprotected]> http://www.cgsecurity.orgDisk /dev/loop0 - 120 GB / 112 GiB - 235156929 sectorsThe harddisk (120 GB / 112 GiB) seems too small! (< 4079258 TB / 3710063 TiB) Check the harddisk size: HD jumpers settings, BIOS detection...The following partitions can't be recovered: Partition Start End Size in sectors > ext3 640 251657855 251657216 [DATA] ext3 1864062 253521277 251657216 [DATA] ext3 1864064 253521279 251657216 [DATA] ext3 2387454 254044669 251657216 [DATA] ext3 2387456 254044671 251657216 [DATA] ext3 2911614 254568829 251657216 [DATA] ext3 2911616 254568831 251657216 [DATA] ext3 3435774 255092989 251657216 [DATA] ext3 3435776 255092991 251657216 [DATA] ext3 3959934 255617149 251657216 [DATA][ Continue ] ext3 blocksize=4096 Large file Sparse superblock, 128 GB / 119 GiBIt seems superblocks still exist and are intact, but how can I convince mount to use one of those superblocks as long as I don't know where they are located? kpartx doesn't see anything on /dev/loop0 after I did the usual losetup -o 32256 /dev/loop0 imagefile for qcow2. The image itself is (qemu-img info): file format: qcow2 virtual size: 120G (128849018880 bytes) disk size: 112G cluster_size: 65536 Format specific information: compat: 0.10NB: I do have backups, but they are a few weeks old and if at all possible, I'd diff the stuff on the disk against the backups. Most are Git and Mercurial repos, so it's possible to fetch them again from elsewhere.
How to find alternative superblocks in ext3 file system of partition-less qcow2?
The rescue login: text is a login prompt expecting you to type in a username. Enter root and press Enter; that should give you a root shell. If it asks you for a password, just press Enter again. Further reading: https://doc.opensuse.org/documentation/leap/startup/single-html/book.opensuse.startup/index.html#sec.trouble.data.recover.rescue
On booting the Rescue System from an openSUSE DVD, I find myself at a "rescue login" prompt:What are the default login details?
What is the default openSUSE Rescue login?
I recovered the partly overwritten partition with testdisk. In case someone has the same problem, here's the solution (using testdisk):
Intel/PC Partition > Analyse > Quick search > and there I found the deleted partition [1.8 TB] > Enter to continue > [Write] (Write partition structure to disk)
Now the partition shows up when I run fdisk -l. After that I tried to mount it, but it showed an error: "Metadata kept in Windows cache, refused to mount"
root@rescue:/dev# sudo mount /dev/sda3 /mnt
The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda3': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.
I read another thread on this site on how to fix that: sudo ntfsfix /dev/sda3 followed by sudo mount -o rw /dev/sda3 /mnt. Now the mounted NTFS partition shows up under /mnt in WinSCP (SFTP). sda3 is the recovered partition's name; yours may have a different number depending on how many other partitions you have.
So I'm having a problem with a Kimsufi server. I was installing Windows by using this command: wget -O- ...url.../server.gz | gunzip | dd of=/dev/sda
I messed up and accidentally ran that command on an already existing Windows installation, and now I can't use RDP anymore. I guess it's all gone; it wrote over the existing installation even though the image download was only at 3%. All my important files were on a different partition, not on the primary one where the OS was stored. Is there a way to transfer all files to another server using rescue mode? Can I somehow get an FTP server running in the Kimsufi Linux rescue mode? I am thinking of connecting to it from another server (Windows), browsing the files and downloading/backing them up. I have tried to use WinSCP, but it shows only Linux directories. How can I browse Windows partitions through WinSCP? Could it be that running that command overwrote my main partition and corrupted the other partitions? I ran lsblk and it shows only 2 partitions:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 1.8T 0 disk ├─sda1 8:1 0 500M 0 part └─sda2 8:2 0 14.5G 0 part
Or does it just show Linux partitions?
How to recover overwritten partition?
As @RuiFRibeiro said in his comments, this is what serial consoles are for. USB to RS-232 serial adaptors are cheap ($5-$10), and so are null-modem cables. BTW, according to the ASRock X99 Extreme specs page, your motherboard has a COM port header on it. Most motherboards do. All you need is the cable kit to extend it from the motherboard header to a DB-9 (or DB-25) serial connector on one of the back-panel slots. These typically only cost a few dollars, about the same as a USB to RS-232 adaptor. Setting this up will be less work than getting a live system to do what you want - AND will give you console access at the exact point that the boot failed (usually with the initrd's root login prompt), and grub can be configured to use the serial console. If you insist on using a live system instead of a serial console, though, you're going to have to build or customise your own. None of them will do exactly what you want....fortunately, most of them (especially those oriented towards rescue and recovery) will be very close and will require only minimal changes. And since your server doesn't have a graphics card at all, you should choose one that doesn't try to start up a graphical console. You seem a little confused about the difference between a dhcp client and a dhcp server. A dhcp server gives out IP addresses etc to other machines (not itself) on already-configured network interface(s). For this task, you'll need your boot system to either be a DHCP client (and have a DHCP server elsewhere on the network) or be configured to have a static IP address. If the server you're talking about is your LAN's DHCP server, it's a good idea to have some other machine on the network configured to be a secondary DHCP server. dhcpd doesn't have to be running all the time on it, it just has to have the software installed and ready to start manually when needed. It also needs an up-to-date copy of the primary DHCP server's configuration files....or, at the very least, a minimal configuration that allocates a known IP address to your main server based on its MAC address. e.g. in ISC dhcpd's dhcpd.conf: host server { hardware ethernet xx:xx:xx:xx:xx:xx; fixed-address 192.168.1.1; }Most Live system USBs/CDs/etc (including rescue-type systems like gparted and clonezilla) already have dhcp client support built in and can be configured (or modified) to have a static IP on your LAN. Similarly most will have sshd installed and configured to start up as soon as there's a working network interface. I'd recommend the Clonezilla USB image as a good base for building your own live rescue system. Unlike most, it's oriented towards text/console use already rather than graphical and because it's focused on backup & restore it already has all the tools you might need for mounting and working with almost any filesystem known to linux. When you customise your Live system, remember to change the default password(s) (usually empty, or something trivial and well-known) and install copies of your SSH public keys.
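To make the static-IP idea concrete, here is a hedged sketch of the kind of snippet you could add to the live system's startup. The interface name and addresses are placeholders for your own LAN, and the sshd start line depends on the distribution you base the image on.
ip link set eth0 up
ip addr add 192.168.1.250/24 dev eth0
ip route add default via 192.168.1.254
/usr/sbin/sshd                # or: systemctl start ssh / service ssh start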
I'm looking for a Linux live system which allows me to investigate boot failures via SSH for my server, which is placed under my desk and doesn't have a graphics card for energy-saving reasons. I sometimes make configuration/administration errors which lead to boot failures before the SSH server starts. In this case I'd like to be able to plug in a USB stick with the live system and reboot into it (UEFI is configured to boot USB before SSD). The system should then start a DHCP and an SSH server without interaction so that I can figure out the IP address from ifconfig on the client (through an educated guess) and connect to the SSH server. Currently, I have to shut down the server, plug in a graphics card and connect my keyboard to it instead of the above. That's fine, but not ideal. I tried Ubuntu 17.04 desktop and server. They both wait forever for input before they start a DHCP or SSH server. The search is difficult in general because OSes don't advertise the required feature (only that they have DHCP and SSH included, but not when they're started).
Linux live system for headless rescue
You can't access the block device on an Apple device directly; it is forbidden by the OS, on which you don't have root access, even though you've purchased the device and it is yours. To be able to do this, you have to jailbreak it (I intentionally don't use the word "crack", because it is your property). That is hard. Although the OS of Apple's mobile devices, iOS, is based on a Unix variant (OS X, which is based on FreeBSD), that doesn't mean you get the freedom of the Unixes on it; the exact opposite is true. A better solution would be to use something at the application level (i.e. copy the files over USB or Wi-Fi). If you have some affinity for electronics, desoldering the flash chip from it would also be an option, although that doesn't solve the problem that the whole disk is encrypted (note: even as the legal owner of the device, you are still not allowed to decrypt its content).
I would like to mount an Apple iPad on my Linux device, to do a JPEG or ddrescue recovery on it. How would I do this with an Apple device?
How to mount an Apple device from Linux?
If you're getting the same error on both a Windows USB and a Linux USB stick then it's unlikely that the USB stick is being used to boot. The 'no such device' error message refers to a UUID, which would differ between the two operating systems (that, and Windows doesn't use GRUB). To me this indicates one of two things: either there's a problem with the BIOS boot order and the USB sticks are being skipped, or the order is correct but there's a problem with both USB sticks and the local hard drive is next on the list of devices to boot from. From the BIOS boot screen there's usually a way to change the boot order at boot time or to boot off a specific device - usually by pressing F12 or F1 or some key other than the one that gets you into the BIOS configuration. I'd recommend finding that, and trying to boot from the Windows USB to start. If you continue getting the same grub message, I'd try the USB sticks in another system to make sure they're readable.
I have received a computer on which the previous owner had attempted to install some Linux OS; I don't know which one. I have both an Ubuntu and a Windows bootable USB drive and I have attempted to boot off them with priority set to boot off USB in the BIOS, however when the computer boots it drops me to a grub rescue prompt. On multiple occasions I have tried to boot off the USBs but the same result ensues. At the very beginning of the prompt it displays: "error: no such device: 79078212-7a47-4a0a-a07a-ee451a023492." Followed by: "Entering rescue mode..." "grub rescue>"
Inherited computer, trying to boot off USB but not working
tar cvzf - file1 file2 dir1 dir2 | ssh user@remotesystem "cat > /big/partition/rescue.tgz"
would be my preference. You could even unpack on the fly:
tar cvzf - file1 file2 dir1 dir2 | ssh user@remotesystem "cd /big/partition; tar xvzpf -"
But as fuero points out, one could also
rsync -avz -e ssh dir1 dir2 user@remotesystem:/big/
(my rsync-fu is not so great, so double-check the exact syntax). But you get the idea; ssh can be used as the bearer transport for any number of things.
I only have access to my vserver via a minimal rescue system over ssh. It does not have scp or ftp installed. Is there an easy way to backup the files, preferably directly to an ftp server, but to my local machine would also be fine. Maybe this helps showing the capabilities of the rescue system: uname -a Linux customer-rescue-v5 2.6.38.1-GN-SMP_x86_64 #5 SMP Fri Apr 8 13:27:16 CEST 2011 x86_64 QEMU Virtual CPU version 0.14.1 GenuineIntel GNU/Linux customer-rescue-v5 ~ # compgen -c if then else elif fi case esac for select while until do done in function time { } ! [[ ]] . : [ alias bg bind break builtin caller cd command compgen complete continue declare dirs disown echo enable eval exec exit export false fc fg getopts hash help history jobs kill let local logout popd printf pushd pwd read readonly return set shift shopt source suspend test times trap true type typeset ulimit umask unalias unset wait crond dropbear rmd160 sha256 sha384 sha512 backup-tar addpart update-pciids backup.sh biosdecode delpart dmidecode dump-remind hashalot iconvconfig locale-gen lspci memtester mklost+found nscd ntpd ntpdate ownership partx pcimodules readprofile restore-tar rmt rpcinfo setpci tcpdump tunelp vpddecode zdump zic install crond dropbear rmd160 sha256 sha384 sha512 backup-tar addpart update-pciids backup.sh biosdecode delpart dmidecode dump-remind hashalot iconvconfig locale-gen lspci memtester mklost+found nscd ntpd ntpdate ownership partx pcimodules readprofile restore-tar rmt rpcinfo setpci tcpdump tunelp vpddecode zdump zic install arping awk base64 basename captoinfo cat chgrp chmod chown chroot cksum comm cp cut date dbclient dd df dir dircolors dirname dropbearkey du echo env ex expr false find fuser head hostid id infotocap jmacs jpico jstar killall last less link ln logname ls mkdir mkfifo mknod mv nano nice nohup nslookup oldfuser passwd pkill printenv pstree ptx pwd readlink reset rjoe rm rmdir rview rvim seq sha224sum shuf sha256sum sha384sum sha512sum sleep snice sort split ssh stat strings stty sync tee telnet time touch tr traceroute true tty uname uniq unlink vdir vi view vimdiff wc wget who whoami yes bashbug [ catchsegv cal compile_et clear col colcrt colrm column dropbearmulti csplit cytune ddate exuberant-ctags expand localedef factor fdformat flock fmt fold free gencat getconf getent getopt hexdump iconv infocmp ipcrm ipcs isosize joe join ldd lddlibc4 line locale ntp-keygen logger look mcookie md5sum mk_cmds mtrace namei nl pcprofiledump ntp-wait ntpdc ntpq ntptime ntptrace od paste pathchk pgrep pg pinky pmap pr printf pwdx raw rename renice rev rlfe rpcgen rsync screen script setfdprm setsid setterm sha1sum shred skill slabtop sntp sprof sum tac tack tail tailf termidx test tic tickadj tload toe top tput tset tsort tzselect ul unexpand uptime users vim vmstat w watch whereis write xtrace e2label findfs fsck.ext2 fsck.ext3 halt hdparm ifconfig ifdown ifup init insmod klogd logread lsmod mkfs.ext2 mkfs.ext3 modprobe poweroff reboot rmmod route start-stop-daemon swapoff syslogd udhcpc badblocks agetty ctrlaltdel blkid blockdev cfdisk fsck.cramfs debugfs dumpe2fs e2fsck e2image elvtune fdisk filefrag fsck iptables-restore fsck.minix hwclock iptables mkfs.bfs mkfs iptables-save ldconfig logsave losetup mke2fs mkfs.ext4 mkfs.cramfs mkfs.minix mkswap pivot_root resize2fs sfdisk sln swapon sysctl tune2fs bunzip2 busybox.static bzcat bzcmp bzegrep bzfgrep bzless egrep fgrep gunzip gzcat hostname ip netstat ping rbash rnano run-parts sed sh zcat zcmp zegrep zfgrep base64 arch basename 
busybox bash bb bzip2recover bzdiff bzgrep bzip2 dircolors bzmore cat chattr chgrp chmod chown chroot cksum comm cp cut date dd df dir sha224sum dirname dmesg du echo env expr false fuser grep gzexe gzip head hostid id kill killall link ln logname ls lsattr mkdir mkfifo mknod more mount mv nano nice nohup oldfuser printenv ps pstree ptx pwd readlink rm rmdir seq sha256sum sha384sum sha512sum shuf sleep sort split stat stty sync tar tee touch tr true tty umount uname uniq unlink uuidgen vdir wc who whoami yes zdiff zforce zgrep zless zmore znew .ssh install
How to backup files with only a minimal rescue system?
This is possible, but you might be better off re-installing. If you want to try, I would first copy enough of dpkg back to your filesystem that dpkg will run. There are a bunch of files from dpkg that live in /usr/bin/. Copy those in. For convenience, the list is:
/usr/bin/dpkg-trigger /usr/bin/dpkg-deb /usr/bin/dpkg /usr/bin/dpkg-query /usr/bin/dpkg-split /usr/bin/dpkg-maintscript-helper /usr/bin/dpkg-divert /usr/bin/update-alternatives /usr/bin/dpkg-statoverride
Then you can download debs to the system to reinstall dpkg first, and then apt. apt and dpkg don't seem to depend on other stuff in /usr/bin, so they might run. Once you have dpkg working, you can get a list of the packages you have installed (using, e.g., dpkg -l), and then run apt to reinstall them. A detailed recipe would be hard to give without actually trying it. If you do decide to go that route, post comments here if you run into problems.
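As a rough, untested sketch of what "copy dpkg in and reinstall" could look like when working from a Debian-based live USB with network access (the /mnt mount point, the placeholder device name and the package file names are assumptions):
mount /dev/sdXn /mnt               # the broken root filesystem (placeholder device)
apt-get download dpkg apt          # on the live system; drops the .deb files in the current dir
dpkg-deb -x dpkg_*.deb /mnt        # unpack dpkg's files straight onto the broken root
dpkg-deb -x apt_*.deb /mnt
chroot /mnt dpkg -l | head         # sanity check: dpkg runs again inside the system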
I accidentally deleted the /usr/bin directory. Using a bootable usb, is it possible to rescue my machine?
Rescue /usr/bin on Debian Wheezy?
If you only changed the partition size, you're not ready to resize the logical volume yet. Once the partition is the new size, you need to do a pvresize on the PV so the volume group sees the new space. After that you can use lvextend to expand the logical volume into the volume group's new space. You can pass -r to the lvextend command so that it automatically kicks off the resize2fs for you. Personally, I would have just made a new partition and used vgextend on it since I've had mixed results with pvresize.
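A minimal sketch for the layout in the question (PV on /dev/sda5, volume group owncloud-vg, root logical volume), assuming you want to give all of the new space to the root filesystem; pvresize is harmless if the PV already reports the new size:
pvresize /dev/sda5                               # let the PV/VG see the enlarged partition
lvextend -r -l +100%FREE /dev/owncloud-vg/root   # -r runs resize2fs for you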
I recently resized the hard drive of a VM from 150 GB to 500 GB in VMWare ESXi. After doing this, I used Gparted to effectively resize the partition of this image. Now all I have to do is to resize the file system, since it still shows the old value (as you can see from the output of df -h): Filesystem Size Used Avail Use% Mounted on /dev/mapper/owncloud--vg-root 157G 37G 112G 25% / udev 488M 4.0K 488M 1% /dev tmpfs 100M 240K 100M 1% /run none 5.0M 0 5.0M 0% /run/lock none 497M 0 497M 0% /run/shm /dev/sda1 236M 32M 192M 14% /bootHowever, running sudo resize2fs /dev/mapper/owncloud--vg-root returns this: resize2fs 1.42 (29-Nov-2011) The filesystem is already 41608192 blocks long. Nothing to do!Since Gparted says that my partition is /dev/sda5, I also tried running sudo resize2fs /dev/sda5, but in this case I got this: resize2fs 1.42 (29-Nov-2011) resize2fs: Device or resource busy while trying to open /dev/sda5 Couldn't find valid filesystem superblock.Finally, this is the output of pvs: PV VG Fmt Attr PSize PFree /dev/sda5 owncloud-vg lvm2 a- 499.76g 340.04gfdisk -l /dev/sda shows the correct amount of space. How can I resize the partition so that I can finally make the OS see 500 GB of hard drive?
Can't resize a partition using resize2fs
You should not use df because it shows the size as reported by the filesystem (in this case, ext4). Use the dumpe2fs -h /dev/mapper/ExistingExt4 command to find out the real size of the filesystem. The -h option makes dumpe2fs show the superblock info without a lot of other unnecessary details. From the output, you need the block count and block size:
... Block count: 19506168 Reserved block count: 975308 Free blocks: 13750966 Free inodes: 4263842 First block: 0 Block size: 4096 ...
Multiplying these values gives the filesystem size in bytes. The above numbers happen to be a perfect multiple of 1024, so we can calculate the result in KiB:
$ python -c 'print 19506168.0 * 4096 / 1024' # python2
$ python -c 'print(19506168.0 * 4096 / 1024)' # python3
78024672.0
Since you want to shrink the filesystem by 15 GiB (which is 15 * 1024 * 1024 KiB):
$ python -c 'print 19506168.0 * 4096 / 1024 - 15 * 1024 * 1024' #python2
$ python -c 'print(19506168.0 * 4096 / 1024 - 15 * 1024 * 1024)' #python3
62296032.0
As resize2fs accepts several kinds of suffixes, one of them being K for "1024 bytes", the command for shrinking the filesystem to 62296032 KiB becomes:
resize2fs -p /dev/mapper/ExistingExt4 62296032K
Without a unit, the number will be interpreted as a multiple of the filesystem's blocksize (4096 in this case). See man resize2fs(8).
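If you prefer to let the shell do the arithmetic, here is a hedged one-liner along the same lines; it assumes the English dumpe2fs output shown above, since it matches on the field names:
dumpe2fs -h /dev/mapper/ExistingExt4 2>/dev/null |
  awk '/^Block count:/ {c=$3} /^Block size:/ {s=$3}
       END {printf "%.0fK\n", c * s / 1024 - 15 * 1024 * 1024}'
The printed value (62296032K here) can be passed straight to resize2fs.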
I want to shrink an ext4 filesystem to make room for a new partition and came across the resize2fs program. The command looks like this: resize2fs -p /dev/mapper/ExistingExt4 $size
How should I determine $size if I want to subtract exactly 15 GiB from the current ext4 filesystem? Can I use the output of df somehow?
How do I determine the new size for resize2fs?
The lvextend command (without the --resizefs option) only makes the LVM-side arrangements to enlarge the block device that is the logical volume. No matter what the filesystem type on the LV is (or even whether there is a filesystem at all), these operations are always similar. If the LV contains an ext2/3/4 filesystem, the next step is to update the filesystem metadata to make the filesystem aware that it has more space available, and to create/extend the necessary metadata structures to manage the added space. In the case of ext2/3/4 filesystems, this involves at least:
creating new inodes for the added space
extending the block allocation data structures so that the filesystem can tell whether any block of the added space is in use or free
potentially moving some data blocks around if they are in the way of the previously-mentioned data structure extension
This part is specific to the filesystem type, although the ext2/3/4 filesystem types are similar enough that they can all be resized with a single resize2fs tool. For XFS filesystems, you would use the xfs_growfs tool instead. Other filesystems may have their own extension tools. And if the logical volume did not contain a filesystem but instead something like a "raw" database or an Oracle ASM volume, yet another procedure would need to be applied. Each filesystem has different internal workings and so the conditions for extending a filesystem will be different for each. It took a while until a common API was designed for filesystem extension; that made it possible to implement the fsadm resize command, which provides a unified syntax for extending several filesystem types. The --resizefs option of lvextend just uses the fsadm resize command. In a nutshell: after lvextend, LVM-level tools such as lvs, vgs, lvdisplay and vgdisplay will see the updated size, but the filesystem and any tools operating on it, like df, won't see it yet.
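To illustrate the "in a nutshell" point with the names from the question (the /home mount point is an assumption here):
lvextend -L +1G /dev/myvg/homevol   # lvs/lvdisplay now report the larger LV...
df -h /home                         # ...but df still shows the old filesystem size
resize2fs /dev/myvg/homevol         # now the filesystem (and df) catch up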
For resizing LVM2 partition, one needs to perform the following 2 commands: # lvextend -L+1G /dev/myvg/homevol # resize2fs /dev/myvg/homevolHowever, when I perform lvextend, I see that the changes are already applied to the partition (as shown in Gnome Disks). So why do I still need to do resize2fs?
Why do I need to do resize2fs after lvextend?
There are actually four different behaviors resize2fs can have (one of them trivial). It depends on if the filesystem is mounted or unmounted and if you're shrinking or extending.Mounted, Extending Here, resize2fs attempts an online resize. More or less, this just tells the kernel to do the work. The kernel then begins writing additional filesystem metadata on the newly available storage. You can continue to use the filesystem as this happens. Note that really old ext3 filesystems may not support online resize. You'll have to unmount the old filesystem to extend. Unmounted, Extending This time, resize2fs does the work instead of the kernel. Mostly this consists of writing additional filesystem metadata to the newly available storage. Mounted, Shrinking This isn't supported. It should just print out an error. This is the trivial behavior. Unmounted, Shrinking This is the most time consuming one, and also the most dangerous (though it still should be reasonably safe). If possible (e.g., there is sufficient space), resize2fs makes the filesystem use only the first size bytes of the storage. It does this by moving both filesystem metadata and your data around. After it completes, there will be unused storage at the end of the block device (logical volume), unused by the filesystem.lvextend and lvreduce change the size of the logical volume. They can additionally change the size of the filesystem if given the -r option, which is probably the right way to go, especially with reducing. Accidentally giving the wrong size to lvreduce is an unfortunately easy way to lose data; -r prevents this (by ensuring that resize2fs is told the same size).
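As a hedged example of that last point, a shrink that keeps the two sizes in sync might look like this; vg0/data and the mount point are placeholders, and shrinking an ext filesystem requires it to be unmounted:
umount /srv/data                    # placeholder mount point
lvreduce -r -L 50G /dev/vg0/data    # -r shrinks the filesystem first, then the LV, with matching sizes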
What does the resize2fs command do when we extend or reduce a logical volume? Is its function the same or different when using the lvextend and lvreduce commands?
What does the resize2fs command do in Linux
From what I can tell, ext4 supports online defragmentation (it's listed under "done", but the status field is empty; the original patch is from late 2006) through e4defrag in e2fsprogs 1.42 or newer, which, when running on Linux 2.6.28 or newer, allows you to query the status for directories or possibly whole file systems, and at least defragment individual files. e2fsprogs as of today is at version 1.42.8. I'm not sure whether or not this helps you, though, as what you want to do doesn't seem to be so much to defragment the data as to consolidate the data on disk. The two are often done together, but they are distinctly different operations. A simple way to consolidate the data, which might work assuming you have a reasonable amount of free space, is to copy each file to some other logical location on the same file system, and then use mv to replace the original with the new copy (see the sketch below). It would depend heavily on exactly how the ext4 allocator works in detail, but it might be worth an attempt and it should be fairly easy to script. Just watch out for files that are hardlinked from more than one place (with a scheme like this it might be easiest to simply ignore any files with link count > 1, and let resize2fs deal with those).
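A very rough sketch of that copy-then-replace idea; treat it as a starting point only: /data is a placeholder path, GNU find is assumed, hardlinked files are skipped as suggested above, and whether the new copy actually lands earlier on the disk is up to the allocator.
find /data -xdev -type f -links 1 -print0 |
while IFS= read -r -d '' f; do
    cp -p -- "$f" "$f.reloc.tmp" && mv -- "$f.reloc.tmp" "$f"
done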
I need to shrink a large ext4 volume, and I would like to do it with as little downtime as possible. With the testing I've done so far it looks like it could be unmounted for the resize for up to a week. Is there any way to defragment the filesystem online ahead of time so that resizefs won't have to move so many blocks around? Update: It's taken some time to get to this point, moved quite a few TB of data around in preparation for the shrink, and I've been experimenting using the information in the answer below. I finally came up with the following command-line which could be useful to others in a similar situation with only minor modifications. Also note, it should be run as root for the filefrag and e4defrag commands to work properly - it won't affect the file ownership. It does also work properly on files with multiple hard-links, which I have lots of. find -type f -print0 | xargs -0 filefrag -v | grep '\.\.[34][0-9]\{9\}.*eof' -A 1 | awk '/extents found/ {match($0, /^(.*): [0-9]+ extents found/, res); print res[1]}' | xargs -n 1 -d '\n' e4defragA quick explanation to make it easier for others to modify/use: The first 'find' command builds the list of files to work with. Possibly redundant now or could be done a better way, but while testing I had other filters there and I've left it as a handy place to modify the scope of the rest of the command. Next pass each file through 'filefrag -v' to get a list of all physical blocks used by each file. The grep looks for the last block used by each file (line ending in 'eof'), and where that block is a 10-digit number starting with 3 or 4. In my case my new filesystem size will be 2980024320 blocks long so that does a good-enough job of only working on files that are on the area of disk to be removed. Having grep also include the following line (the '-A 1') also includes the filename in the output for the next section. This is where anyone else doing this will have to modify the command depending on the size of their filesystem. It could also probably be done in a much better way but this is working for me now and I'm lazy. awk pulls just the filenames out of all the other garbage that grep left in the filefrag output. And finally e4defrag is called - I don't care about the actual fragment count, but it has the side effect of moving the physical blocks around (hopefully into an early part of the drive), and it works against files with multiple hard-links with no extra effort. If you only want to know which files it would defrag without actually moving any data around, just leave the last piece of the command off. find -type f -print0 | xargs -0 filefrag -v | grep '\.\.[34][0-9]\{9\}.*eof' -A 1 | awk '/extents found/ {match($0, /^(.*): [0-9]+ extents found/, res); print res[1]}'
Decrease time to shrink ext4 filesystem
That would work; ext4 doesn't care whether the block device it resides on is a partition, a whole hard drive, an LVM volume, a network block device, an iSCSI target… All it sees is blocks.
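A hedged sketch of the grow sequence for this setup (filesystem directly on /dev/sdb, mounted on /af): after enlarging the virtual disk in VMware, the kernel has to notice the new device size, and then an online resize2fs is enough. The rescan path below applies to SCSI-attached disks, which is what the parted output shows.
echo 1 > /sys/class/block/sdb/device/rescan   # ask the kernel to re-read the device size
resize2fs /dev/sdb                            # grows the mounted ext4 to the new end of the disk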
Can someone help me with this, because it's confusing me: I have a 1.8T disk (it's a VM virtual disk), here is a snippet of df:
df -TH Filesystem Type Size Used Avail Use% Mounted on /dev/sdb ext4 1.8T 1.6T 91G 95% /af
Here is the partition info:
parted /dev/sdb print Model: VMware Virtual disk (scsi) Disk /dev/sdb: 1924GB Sector size (logical/physical): 512B/512B Partition Table: loop Disk Flags:
Number Start End Size File system Flags 1 0.00B 1924GB 1924GB ext4
So I assume the filesystem was set up without creating a partition first. Now an expansion is necessary and this will exceed the 2TB limit. I am just unsure whether this will work without trouble. To my understanding it should be OK to increase the size of the virtual disk and then simply expand the filesystem with resize2fs -f /dev/sdb, so am I correct with this?
filesystem on disk without partition
I used fdisk /dev/vdb to extend its only partition /dev/vdb1 to full capacity of 2TB from previous 1TB... See How to Resize a Partition using fdisk - Red Hat Customer Portal. And then I did [resize2fs /dev/vdb1]...We can see this did not change the size of your filesystem. Here is why: resize2fs reads the size of the partition from the kernel, similar to reading the size of any other file. fdisk tries to update the kernel after it has written the partition table. However this will fail if the disk is in use, e.g. you have mounted one of its partitions. This is why resize2fs showed the "nothing to do" message. It did not see the extra partition space. The kernel reads the partition table during startup. So you can simply restart the computer. Then you can run resize2fs, it will see the extra partition space, and expand the filesystem to fit.I believe fdisk logs a prominent warning when this happens, as screen-shotted in this (otherwise outdated) document. There is a less friendly but actually up-to-date document, on the Red Hat Customer Portal:How to use a new partition in RHEL6 without reboot? From partprobe was commonly used in RHEL 5 to inform the OS of partition table changes on the disk. In RHEL 6, it will only trigger the OS to update the partitions on a disk that none of its partitions are in use (e.g. mounted). If any partition on a disk is in use, partprobe will not trigger the OS to update partitions in the system because it is considered unsafe in some situations. So in general we would suggest:Unmount all the partitions of the disk before modifying the partition table on the disk, and then run partprobe to update the partitions in system. If this is not possible (e.g. the mounted partition is a system partition), reboot the system after modifying the partition table. The partitions information will be re-read after reboot. If a new partition was added and none of the existing partitions were modified, consider using the partx command to update the system partition table. Do note that the partx command does not do much checking between the new and the existing partition table in the system and assumes the user knows what they are are doing. So it can corrupt the data on disk if the existing partitions are modified or the partition table is not set correctly. So use at one's own risk.
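To recap the order that works here (a hedged sketch; whether you use partprobe or a reboot, the kernel must see the new partition table before resize2fs is run):
umount /mnt            # if the partition was re-mounted in the meantime
partprobe /dev/vdb     # or simply reboot
resize2fs /dev/vdb1    # only now will it find the extra space
mount /dev/vdb1 /mnt && df -h /mnt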
I added a new disk (/dev/vdb) of 2TB with existing data from the previous 1TB disk. I used fdisk /dev/vdb to extend its only partition /dev/vdb1 to full capacity of 2TB from previous 1TB. (In other words, I deleted vdb1, and then re-created it to fill the disk. See How to Resize a Partition using fdisk - Red Hat Customer Portal). And then I did: [root - /]$ fsck -n /dev/vdb1 fsck from util-linux 2.23.2 e2fsck 1.42.9 (28-Dec-2013) /dev/vdb1: clean, 46859496/65536000 files, 249032462/262143744 blocks[root - /]$ e2fsck -f /dev/vdb1 e2fsck 1.42.9 (28-Dec-2013) Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information /dev/vdb1: 46859496/65536000 files (0.4% non-contiguous), 249032462/262143744 blocks[root - ~]$ resize2fs /dev/vdb1 resize2fs 1.42.9 (28-Dec-2013) The filesystem is already 262143744 blocks long. Nothing to do!And fdisk -l looks like this: Disk /dev/vdb: 2147.5 GB, 2147483648000 bytes, 4194304000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x4eb4fbf8 Device Boot Start End Blocks Id System /dev/vdb1 2048 4194303999 2097150976 83 LinuxHowever when I mount it: mount /dev/vdb1 /mntThis is what I got from df -h: /dev/vdb1 985G 935G 0 100% /mntWhich is still the size of the previous partition. What am I doing wrong here? UPDATE I ran partprobe and it told me to reboot: Error: Error informing the kernel about modifications to partition /dev/vdb1 -- Device or resource busy. This means Linux won't know about any changes you made to /dev/vdb1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting. Error: Failed to add partition 1 (Device or resource busy)So I rebooted and then ran this again: mount /dev/vdb1 /mntBut the added file system is still: /dev/vdb1 985G 935G 0 100% /mntAny ideas? Should I do all the fsck, e2fsck, and resize2fs once again? This is really weird. After the reboot, I ran partprobe again and it was still this error: Error: Error informing the kernel about modifications to partition /dev/vdb1 -- Device or resource busy. This means Linux won't know about any changes you made to /dev/vdb1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting. Error: Failed to add partition 1 (Device or resource busy)Why is the device or resource busy? Even after I rebooted?
resize2fs fails to resize partition to full capacity?
This should be relatively easy, since you're using LVM:
First, as always, take a backup.
Resize the disk in Xen (you've already done this; despite this, please re-read step 1).
Use parted to resize the extended partition (xvda2); run parted /dev/xvda, then at the parted prompt resizepart 2 -1s to resize it to end at the end of the disk (BTW: quit will get out of parted).
Either (a) create another logical partition (xvda6) with the free space, then: reboot to pick up the partition table changes, pvcreate /dev/xvda6, vgextend xenhosting-vg /dev/xvda6; or (b) extend xvda5 using resizepart 5 -1s, reboot to pick up the partition table changes, then pvresize /dev/xvda5.
Finally, if you want to add that to your root filesystem, lvextend -r -l +100%FREE /dev/xenhosting-vg/root. The -r option to lvextend tells it to call resize2fs itself.
Another option you didn't consider: add another virtual disk. If you can do this in Xen without rebooting the guest, then you can do this entirely online (without any reboots). Partition the new disk xvdc (this will not require a reboot, since it's not in use), then proceed with pvcreate & vgextend using /dev/xvdc1.
I need to resize my first disk (/dev/xvda) from 40 GB to 80 GB. I'm using XEN virtualization, and the disk is resized in XenCenter, but I need to resize its partitions without losing any data. The virtual machine is running Debian 8.6. Disk /dev/xvda: 80 GiB, 85899345920 bajtů, 167772160 sektorů Jednotky: sektorů po1 * 512 = 512 bajtech Velikost sektoru (logického/fyzického): 512 bajtů / 512 bajtů Velikost I/O (minimální/optimální): 512 bajtů / 512 bajtů Typ popisu disku: dos Identifikátor disku: 0x5a0b8583Device Boot Start End Sectors Size Id Type /dev/xvda1 2048 499711 497664 243M 83 Linux /dev/xvda2 501758 83884031 83382274 39,8G 5 Extended /dev/xvda5 501760 83884031 83382272 39,8G 8e Linux LVMDisk /dev/xvdb: 64 GiB, 68719476736 bajtů, 134217728 sektorů Jednotky: sektorů po1 * 512 = 512 bajtech Velikost sektoru (logického/fyzického): 512 bajtů / 512 bajtů Velikost I/O (minimální/optimální): 512 bajtů / 512 bajtů Typ popisu disku: gpt Identifikátor disku: 0596FDE3-F7B7-46C6-8CE1-03C0B0ADD20ADevice Start End Sectors Size Type /dev/xvdb1 2048 134217694 134215647 64G Linux filesystemDisk /dev/mapper/xenhosting--vg-root: 38,1 GiB, 40907046912 bajtů, 79896576 sektorů Jednotky: sektorů po1 * 512 = 512 bajtech Velikost sektoru (logického/fyzického): 512 bajtů / 512 bajtů Velikost I/O (minimální/optimální): 512 bajtů / 512 bajtů Disk /dev/mapper/xenhosting--vg-swap_1: 1,7 GiB, 1782579200 bajtů, 3481600 sektorů Jednotky: sektorů po1 * 512 = 512 bajtech Velikost sektoru (logického/fyzického): 512 bajtů / 512 bajtů Velikost I/O (minimální/optimální): 512 bajtů / 512 bajtů
How to resize LVM disk in Debian 8.6 without losing data
Definitely an interesting question, and while your result was pretty good (and as I would have hoped, since catching SIGINT is not exactly rocket science and pausing halfway through merely relocating some data blocks doesn't seem hard either), there are enough non-success stories as well, like for example the 10-year-old Debian bug https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=574292 But even though that bug is 10 years old, I've just run a mock e2fsck and resize2fs through strace, and while the former installs a whole bunch of signal handlers including SIGINT and SIGTERM, resize2fs still does not. So if anyone finds this question: take the above as anecdotal evidence and continue to beware. :-) Note that the man page does mention a flag for creating an undo file in case of mistakes. (And me, I just wish I had run this resize operation inside a screen session... but OK, at least I do have -p.)
edit Wait, I just realised, why not SSH in, make an LVM snapshot and e2fsck that while the resize is still running? I did that 5× in a row during the "Relocating blocks" phase and although I get "contains a file system with errors, check forced" on every check, it never found any errors. Now of course don't ask me about data integrity.
edit Pretty interesting response from tytso@ himself BTW at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=574292#30
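For the snapshot-and-check idea, a hedged sketch (the VG/LV names and snapshot size are placeholders; the snapshot only needs enough space to absorb the writes made while the check runs):
lvcreate -s -n resizecheck -L 5G /dev/vg0/root   # snapshot of the LV being resized
e2fsck -fn /dev/vg0/resizecheck                  # read-only check against the snapshot
lvremove /dev/vg0/resizecheck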
I inherited an old PC-server (quad Pentium 4) that only had partitions for /, /boot and swap (RAID1 with 2 1T SATA disks), but needed to update the distro (from CentOS 6.9). I decided to create a new partition so that the one containing / could be formatted. But I forgot to add the -p flag to resize2fs and now it's silently staring back at me and I can't tell how much longer it could take (it's been at it for 50+ hours). Now, I know that shrinking a filesystem can take a long time, but while I could wait for 100 hours, something like 800 hours is out of the question. Here's what I'm thinking at the moment:Go ahead with the Ctrl+C && e2fsck. Mount the partition and manually delete 100G+ worth of data that serves us no purpose. Start from the top with resize2fs -p ...But I haven't been able to find consensus on just how dangerous it is to send SIGINT to resize2fs. I do have an extra backup of the important information, but would still like to do this without corrupting the filesystem. And yes, I'm aware it might be faster to just install the distro from scratch and restore my backup. Update: I decided to interrupt it. And everything seems to be fine, but the question still stands. I'm still curious.
Just how dangerous is sending SIGINT to resize2fs tasked with shrinking?
The commands did not work as expected because /dev/sda5 is a logical partition contained within an extended partition, as described here: https://askubuntu.com/a/365953/585364 I had to first grow the extended partition /dev/sda2, the parent of /dev/sda5. So all the commands that were required (in my specific case):
growpart /dev/sda 2
growpart /dev/sda 5
resize2fs /dev/sda5
I've previously used growpart and resize2fs to resize a mounted online ext4 paritition in a Linux system. Currently I have a Ubuntu guest running in virtualbox that I'd like to resize the partition /dev/sda5. I've already extended the virtual disk on the host via vboxmanage modifyhd --resize..., however after running (within the guest) growpart I don't see any change in the partition table (I assume it's the value returned from lsblk). chris@chris-VirtualBox:~$ lsblk ... sda 8:0 0 53.9G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi ├─sda2 8:2 0 1K 0 part └─sda5 8:5 0 37.8G 0 part / ...Resize: chris@chris-VirtualBox:~$ sudo growpart /dev/sda 5 CHANGED: partition=5 start=1052672 old: size=79251456 end=80304128 new: size=111996895 end=113049567 chris@chris-VirtualBox:~$ sudo resize2fs /dev/sda5 resize2fs 1.45.5 (07-Jan-2020) The filesystem is already 9906432 (4k) blocks long. Nothing to do!lsblk still shows old values: ... sda 8:0 0 53.9G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi ├─sda2 8:2 0 1K 0 part └─sda5 8:5 0 37.8G 0 part / ...Is this a limitation of virtualbox? Or is there a working alternative?Hmmm actually the /dev/sda2 partition looks quite suspicious (it's size seems too large? is it overlapping with /dev/sda5?: chris@chris-VirtualBox:~$ sudo parted /dev/sda GNU Parted 3.3 Using /dev/sda Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA VBOX HARDDISK (scsi) Disk /dev/sda: 57.9GB Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 1049kB 538MB 537MB primary fat32 boot 2 539MB 41.1GB 40.6GB extended 5 539MB 41.1GB 40.6GB logical ext4
Ubuntu ext4 partition is not being extended or resized as expected with growpart or resize2fs
Yes. See man mkfs.ext4:
-i bytes-per-inode Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the filesystem, since in that case more inodes would be made than can ever be used. Be warned that it is not possible to change this ratio on a filesystem after it is created, so be careful deciding the correct value for this parameter. Note that resizing a filesystem changes the number of inodes to maintain this ratio.
I verified this experimentally, resizing from 1G to 10G and looking at tune2fs /dev/X | grep Inode. The inode count went from 64K to about 640K. I believe it's a natural consequence of Unix filesystems which use "block groups". The filesystem is divided into block groups, each of which has its own inode table. When you extend the filesystem, you're adding new block groups.
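A hedged way to reproduce that experiment on a throwaway image file rather than a real device (the file name is arbitrary):
truncate -s 1G test.img
mkfs.ext4 -F -q test.img
tune2fs -l test.img | grep 'Inode count'    # e.g. 65536
truncate -s 10G test.img
e2fsck -f test.img && resize2fs test.img
tune2fs -l test.img | grep 'Inode count'    # roughly ten times as many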
If I create a small filesystem, and grow it when I need to, will the number of inodes increase proportionally? I want to use Docker with the overlay storage driver. This can be very inode hungry because it uses hardlinks to merge lower layers. (The original aufs driver effectively stacked union mounts, which didn't require extra inodes, but instead caused extra directory lookups at runtime). EDIT: hardlinks don't use extra inodes themselves, I can only think the issue is extra directories which have to be created. (Closed question here. I believe the answer is incorrect. However it says the question is closed, and that I need to create a new one).
If I grow an ext4 partition, will it increase the number of inodes available?
You must tell apart the resizing of a block device (here: /dev/sdb4) from the resizing of a file system. A file system can be smaller but not bigger than the underlying block device. You should make a backup of the partition table: sfdisk -d /dev/sdb > ~/sfdisk_sdb.txtThen you make a copy of that file and adapt the line that looks similar to this: /dev/sdb4 : start=24260, size=3653948, Id= 83You want that partition to end on the last sector of the device (i.e. 7744511; the first one is 0 not 1). The size is this number minus the start sector plus one (both the start and end sector count). Then you replace the partition table: sfdisk /dev/sdb <~/sfdisk_sdb.mod.txtAfter that you can use resize2fs without a size parameter. It will use the whole size of /dev/sdb4 then. You must run e2fsck -f /dev/sdb4 immediately before using resize2fs.
I have a 4 GB SD card. Before the image load root@ubuntu# fdisk -l Disk /dev/sdb: 3965 MB, 3965190144 bytes 49 heads, 48 sectors/track, 3292 cylinders, total 7744512 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 8192 7744511 3868160 b W95 FAT32I loaded a 2gb SD image to the card by dd if=2gbsd-noeclipse-latest.dd of=/dev/sdb bs=4M conv=fsync. The fdisk -l outputs: root@ubuntu# fdisk -l Disk /dev/sdb: 3965 MB, 3965190144 bytes 122 heads, 62 sectors/track, 1023 cylinders, total 7744512 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 1 16063 8031+ b W95 FAT32 /dev/sdb2 16064 20158 2047+ da Non-FS data /dev/sdb3 20162 24257 2048 da Non-FS data /dev/sdb4 24260 3678207 1826974 83 Linuxso I have 2GB that is not used. I want to extend sdb4 so that I can use the 2GB space that is not included. So I calculate the unused space as (7744512-3678207)*512= 2081948160 byte and 2081948160 / 1048576 = 1985.50048828 MB. So roughly I will extend 1900 MB. I use resize2fs to do that: resize2fs /dev/sdb4 1900MHowever, it outputs resize2fs 1.42.5 (29-Jul-2012) open: No such file or directory while opening /dev/sdb4Could anyone tell me how I should use the command above or how else I can extend the sdb4?
extending a partition by resize2fs
Beyond the wear and tear on the HDDs I can't see any reason why this would be dangerous. I've never come across a EXT3/EXT4 parameter that limits the amount of times you can do this. There isn't any counter I've seen either. In looking through the output from tune2fs I see nothing that I would find alarming which would lead me to believe that performing many resizes would be harmful to the filesystem or the device, beyond the wear and tear. Example $ sudo tune2fs -l /dev/mapper/vg_grinchy-lv_root tune2fs 1.41.12 (17-May-2010) Filesystem volume name: <none> Last mounted on: / Filesystem UUID: 74e66905-d09a-XXXX-XXXX-XXXXXXXXXXXX Filesystem magic number: 0x1234 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 3276800 Block count: 13107200 Reserved block count: 655360 Free blocks: 5842058 Free inodes: 2651019 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 1020 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Sat Dec 18 19:05:48 2010 Last mount time: Mon Dec 2 09:15:34 2013 Last write time: Thu Nov 21 01:06:03 2013 Mount count: 4 Maximum mount count: -1 Last checked: Thu Nov 21 01:06:03 2013 Check interval: 0 (<none>) Lifetime writes: 930 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 First orphan inode: 1973835 Default directory hash: half_md4 Directory Hash Seed: 74e66905-d09a-XXXX-XXXX-XXXXXXXXXXXX Journal backup: inode blocksdumpe2fs You can also poke at the EXT3/EXT4 filesystems using dumpe2fs which essentially shows the same info as tune2fs. The output from that command is too much to include here, mainly because it includes information about the groups of inodes within the filesystem. But when I went through the output, again I saw no mention of any counters that were inherent within the EXT3/EXT4 filesystems.
I have a partition which contains MySQL data which is constantly growing. My LVM PV has precious little free space remaining and therefore I find I'm frequently adding additional space to my /var partition using lvextend and resize2fs in smallish increments (250-500 MB at a time) so as not to give too much space to /var and then be unable to allocate those PEs to other partitions should I need to later. I'm concerned about reaching some limit or causing a problem by calling resize2fs too often to grow this filesystem. Is there a limit to how often resize2fs can be used to grow an Ext3 filesystem? Is it better to do one large Ext3 resize rather than many small ones? Does resizing using resize2fs too often carry a potential for problems or data loss?
Is there a problem using resize2fs too often?
The offset option of mount is not handled by mount itself; it is passed on to losetup, which sets up a loop device referring to the offset location on the underlying block device. Mount then performs its operations on that loop device rather than on the raw block device itself. You can also use losetup directly to make resize2fs play nicely with such file systems:
# losetup --offset=<offset> --find --show /dev/<device>
/dev/loop0
# resize2fs /dev/loop0 <newsize>
# losetup --detach /dev/loop0
(The example may not be complete in terms of the resize2fs steps needed.) losetup searches for the first free loop device (in this example /dev/loop0) because --find was passed; --show prints that loop device to stdout.
Mount has the option offset to specify that a file system does not start at the beginning of a device but some specific amount of bytes after. How can I use resize2fs, which does not have that option, to resize such a file system which does not start at the device's beginning?
Using resize2fs with file system offset
In this case, your file system is on the LV (Logical Volume), which is on the partition. If you expand the partition, your LV will not be expanded automatically. Please run these commands:
pvresize <device name>   <-- this lets the Physical Volume know that the partition it is on has been expanded
And:
lvextend -l +100%FREE /dev/mapper/fedora-root <Physical Volume name>   <-- this extends the LV
resize2fs /dev/mapper/fedora-root
PS: You can find the Physical Volume name using the command pvs. Thank you @Dani_l for the edit suggestions.
Apologies for this question but I am very new to Linux. When I installed my Fedora distribution I only allocated 20GB of my hard drive space for its partition. I recently used GParted and tried to increase the size of the partition to around 40GB. I was under the impression that I was successful but today I tried to create a directory and I got the following error message: mkdir: cannot create directory ‘b_scripts’: No space left on deviceI checked the space on my disk and found out that I had used 20GB on my fedora-root. derrick@dazza >> df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 1.9G 0 1.9G 0% /dev tmpfs 1.9G 253M 1.7G 14% /dev/shm tmpfs 1.9G 1.5M 1.9G 1% /run tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup /dev/mapper/fedora-root 20G 19G 0 100% / tmpfs 1.9G 128K 1.9G 1% /tmp tmpfs 386M 20K 386M 1% /run/user/42 tmpfs 386M 28K 386M 1% /run/user/1000Is a partition different from a Filesystem? How come there are only 20GB in total allocated to my fedora-root? What is my solution? How do I increase the size of my fedora-root Filesystem so that it is more than 20GB Size?
How to increase size of filesystem to match partition
Extend your physical volume first, and then the logical volume:
pvresize /dev/sdb4
lvextend -l +100%FREE /dev/vg_mine/lv_root
Note that I've left off the -L+16G; +100%FREE uses all of the free space instead. Afterwards run resize2fs /dev/vg_mine/lv_root again (or pass -r to lvextend) so the filesystem grows into the enlarged LV.
So, I have a 120 GB SSD (/dev/sdb) on which I have a dual boot of Windows 7 and Fedora 17. When I first started I only had a 60 GB SSD, so my space was very limited. I have a partition on my SSD (/dev/sdb4), which I created with gparted, that shows a "Partition 5 LVM2" (/dev/sdb5) below it, which I believe is what the LVM is stored on(?). Anyway, using gparted I extended my /dev/sdb4 to 27GB, which then created a "Free Space" of 17GB within /dev/sdb4. Now I need to combine /dev/sdb5 and the free space into one. I've tried: lvextend -L+16G /dev/vg_mine/lv_root which results in: Extending logical volume lv_root to 20.97 GiB Insufficient free space: 512 extents needed, but only 0 available
I then used resize2fs /dev/vg_mine/lv_root which results in: The filesystem is already 1302528 blocks long, nothing to do!
Can anybody point me in the right direction? Am I on the right track so far?
Extend my LVM After Upgrading SSD
469G is 469*1024*1024k, which is 491782144k. 122945536 blocks of 4k is also 491782144k. resize2fs interprets G as GiB (powers of 1024), while parted's GB is in powers of 1000. Try unit GiB with parted.
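For example, printing the table in binary units makes the two tools agree (a hedged illustration with the disk from the question):
parted -s /dev/sdb unit GiB print   # sizes now line up with what resize2fs calls "G"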
Good afternoon! I am attempting to shrink an ext4 partition and I have found many tutorials online to achieve this, however, when implementing the actual changes, resize2fs is telling me wrong information! Here is the scenario: # parted -s /dev/sdb unit GB print Model: Hitachi HTS725050A7E630 (scsi) Disk /dev/sdb: 500GB Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 0.00GB 0.64GB 0.64GB primary ext2 boot 2 0.64GB 500GB 499GB primary ext4Now I am trying to first reduce the filesystem by 30GB: # resize2fs /dev/sdb2 469G resize2fs 1.42.13 (17-May-2015) The containing partition (or device) is only 121940394 (4k) blocks. You requested a new size of 122945536 blocks.The partition is not mounted and as you can see from the output, I am actually taking 30GB off the total size (499 - 30 = 469). How is this possible when I am applying a unit (GB in this case)? Am I missing something?
why is resize2fs telling me wrong information
resize2fs complains it has nothing to do because it only resizes the filesystem, not the partition underneath it. First you have to grow the partition with fdisk, cfdisk or parted. https://geekpeek.net/resize-filesystem-fdisk-resize2fs/ It is similar with LVM: the volume group needs more free space to grow into, either by growing a physical volume or by adding a new partition to the volume group. https://www.turnkeylinux.org/blog/extending-lvm
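A hedged outline of the fdisk route from the linked article, applied to the layout in the question (the exact end sector depends on where /dev/sda4 starts, so check the table first, and keep sda2's start sector unchanged or you will lose the filesystem):
fdisk /dev/sda        # p (print), d then 2 (delete sda2), n (recreate with the SAME start, larger end), w
partprobe /dev/sda    # or reboot if the kernel refuses because sda2 is in use
resize2fs /dev/sda2   # now there is actually something for resize2fs to do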
I ordered a dedicated server and it came with a primary partition of 20GB and a second partition of 1.8TB. I see no point in this as I plan to use it as a web server, so I need to put pretty much everything into /var. I have rebooted into rescue mode and deleted the 1.8TB partition. My FS now looks like this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 1.8T 0 disk ├─sda1 8:1 0 1004.5K 0 part ├─sda2 8:2 0 19.5G 0 part └─sda4 8:4 0 511M 0 part
I thought I could use parted to resize the primary (sda2) partition (https://www.centos.org/docs/5/html/5.2/Deployment_Guide/s2-disk-storage-parted-resize-part.html), but when I run the command it tells me it is no longer supported. Error: The resize command has been removed in parted 3.0
I found another tutorial that said to use resize2fs. I ran the command and I get the following.
root@rescue:~# resize2fs /dev/sda2 resize2fs 1.42.12 (29-Aug-2014) The filesystem is already 5119744 (4k) blocks long. Nothing to do!
I have around 1.7TB of free space that is not assigned to any partition. All I want to do is assign all of this space to sda2. This is the primary partition and I want it to have all of the space. Am I missing something simple here? The lvextend command seems to be along the right lines, but it doesn't work either.
root@rescue:~# lvextend -L +1700G /dev/sda2 Path required for Logical Volume "sda2" Please provide a volume group name Run `lvextend --help' for more information.
Can't resize main partition on CentOS 7
Once you have extracted the filesystem you are interested in (using dd), simply adapt the file size (967424*4096=3962568704): $ truncate -s 3962568704 trunc.imgAnd then simply: $ sudo mount -o loop trunc.img /tmp/img/ $ sudo find /tmp/img/ /tmp/img/ /tmp/img/u-boot-spl.bin /tmp/img/u-boot.img /tmp/img/root.ubifs.9 /tmp/img/root.ubifs.4 /tmp/img/root.ubifs.5 /tmp/img/root.ubifs.7 /tmp/img/root.ubifs.2 /tmp/img/root.ubifs.6 /tmp/img/lost+found /tmp/img/root.ubifs.3 /tmp/img/boot.ubifs /tmp/img/root.ubifs.0 /tmp/img/root.ubifs.1 /tmp/img/root.ubifs.8Another simpler solution is to truncate directly on the original img file: $ truncate -s 3964665856 nand_2016_06_02.img $ sudo mount -o loop,offset=2097152 nand_2016_06_02.img /tmp/img/Where 3962568704 + 2097152 = 3964665856
I am trying to understand what I did wrong with the following mount command. Take the following file from here:http://elinux.org/CI20_Distros#Debian_8_2016-02-02_BetaSimply download the img file from here. Then I verified the md5sum is correct per the upstream page: $ md5sum nand_2016_06_02.img 3ad5e53c7ee89322ff8132f800dc5ad3 nand_2016_06_02.imgHere is what file has to say: $ file nand_2016_06_02.img nand_2016_06_02.img: x86 boot sector; partition 1: ID=0x83, starthead 68, startsector 4096, 3321856 sectors, extended partition table (last)\011, code offset 0x0So let's check the start of the first partition of this image: $ /sbin/fdisk -l nand_2016_06_02.imgDisk nand_2016_06_02.img: 1.6 GiB, 1702887424 bytes, 3325952 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x0212268dDevice Boot Start End Sectors Size Id Type nand_2016_06_02.img1 4096 3325951 3321856 1.6G 83 LinuxIn my case Units size is 512, and Start is 4096, which means offset is at byte 2097152. In which case, the following should just work, but isn't: $ mkdir /tmp/img $ sudo mount -o loop,offset=2097152 nand_2016_06_02.img /tmp/img/ mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.And, dmesg reveals: $ dmesg | tail [ 1632.732163] loop: module loaded [ 1854.815436] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 1854.815452] EXT4-fs (loop0): bad geometry: block count 967424 exceeds size of device (415232 blocks)None of the solutions listed here worked for me:resize2fs or, sfdiskWhat did I missed ?Some other experiments that I tried: $ dd bs=2097152 skip=1 if=nand_2016_06_02.img of=trunc.imgwhich leads to: $ file trunc.img trunc.img: Linux rev 1.0 ext2 filesystem data (mounted or unclean), UUID=960b67cf-ee8f-4f0d-b6b0-2ffac7b91c1a (large files)and same goes the same story: $ sudo mount -o loop trunc.img /tmp/img/ mount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.I cannot use resize2fs since I am required to run e2fsck first: $ /sbin/e2fsck -f trunc.img e2fsck 1.42.9 (28-Dec-2013) The filesystem size (according to the superblock) is 967424 blocks The physical size of the device is 415232 blocks Either the superblock or the partition table is likely to be corrupt! Abort<y>? yes
bad geometry: block count 967424 exceeds size of device (415232 blocks)
It's OK to let fsck fix this. It refers to a deleted inode; the data has already been deleted, so nothing more will be lost.
When I try to resize the disk I get this: resize2fs /dev/sdb resize2fs 1.42.9 (28-Dec-2013) Please run 'e2fsck -f /dev/sdb' first. So when I run e2fsck I get the following: e2fsck -f /dev/sdb e2fsck 1.42.9 (28-Dec-2013) Pass 1: Checking inodes, blocks, and sizes Deleted inode 142682 has zero dtime. Fix<y>? Is it OK to continue by answering yes, or is this something that can delete the data on the disk?
rhel + e2fsck + Deleted inode xxxxx has zero dtime
The crux of the problem is that a filesystem can only be expanded into the space that's seen as available on the block device you've put it onto. With partitions, that means the partition's starting and ending sector. As it stands now the kernel knows the space is there but your partition's end sector is essentially telling the filesystem to not use the new space. The resize2fs is for resizing the filesystem and so should come later in your workflow. It looks like it's all on md126p1 which might make this easier. Basically your lsblk shows that the underlying device is md126 which is 2.7TB but the partition is only 1.8TB. So you need to use either fdisk or gparted (whichever the case may be) on the md126 device and edit the first partition so that it ends on the last sector of the device instead of whatever it is now. You'll probably want all relevant filesystems unmounted when you do this. To get the kernel to pick up the new partition table you'll probably need to do a partprobe or do a full reboot. Once the partition has been updated the filesystem inside that partition can be told to expand into it with the resize2fs.
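For example, a rough sketch of that workflow using parted (the mount point is a placeholder, and the fdisk/gparted route described above works just as well):

umount /mnt/raid                      # placeholder mount point for md126p1
parted /dev/md126 resizepart 1 100%   # grow partition 1 to the end of the array
partprobe /dev/md126                  # or reboot, so the kernel re-reads the table
e2fsck -f /dev/md126p1                # check the filesystem before resizing
resize2fs /dev/md126p1                # grow the filesystem to fill the partition
mount /dev/md126p1 /mnt/raid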
I have a RAID 6 array set up under CentOS 7 which originally had four 1TB drives assigned, resulting in a total capacity of 2TB. After much fussing about as described here, I was able to add a fifth drive to the array successfully, growing it out to 3TB. The confusion now is how to get the partition to grow out to the full 3TB size. According to this answer the sequence should be:unmount check partition grow array resize partition check partition mountWhich makes sense. Having now grown the array, I am attempting to use resize2fs to resize the array, but it is telling me I don't have anywhere near enough space to expand into, that I'm asking for 786432000 and there are only 488315387 available. e2fsck tells me the partition is currently using 448736046 of its 488315387 available blocks. Where is the 488315387 limit coming from if not from the raid array? Edit: Relevant output from lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931.5G 0 disk └─md126 9:126 0 2.7T 0 raid6 └─md126p1 259:1 0 1.8T 0 md sdb 8:16 0 931.5G 0 disk └─md126 9:126 0 2.7T 0 raid6 └─md126p1 259:1 0 1.8T 0 md sdc 8:32 0 931.5G 0 disk └─md126 9:126 0 2.7T 0 raid6 └─md126p1 259:1 0 1.8T 0 md sdd 8:48 0 931.5G 0 disk └─md126 9:126 0 2.7T 0 raid6 └─md126p1 259:1 0 1.8T 0 md sde 8:64 0 931.5G 0 disk └─md126 9:126 0 2.7T 0 raid6 └─md126p1 259:1 0 1.8T 0 md
Partition resize in CentOS 7
pvmove moves segments, not free space. You need to move the range /dev/sda2:97280-114339 (17060 extents) so that it starts at extent 59392. According to this you should: # pvmove --alloc anywhere /dev/sda2:97280-114339 /dev/sda2:59392-76451 Then resize the PV, then the partition, and then enjoy your free space. While LVM tries to keep you from shooting yourself in the foot, this is a risky operation. If you have any data whose loss would cause you trouble, back it up first, since a mistake here will most likely destroy it. Also, you can create a Linux VM and test these commands until you feel confident with them.
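As a rough sketch of those follow-up steps (the 300G figure and the partition end are placeholders only; recompute them from your own pvs and parted output, and never make the partition smaller than the PV):

pvs -v --segments /dev/sda2                        # confirm the free extents are now at the end
pvresize --setphysicalvolumesize 300G /dev/sda2    # shrink the PV first (placeholder size)
parted /dev/sda resizepart 2 <new_end>             # then shrink the partition, not below the PV size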
I want to shrink my LVM physical volume and use this free space to create another partition for another OS. I resized my root and home logical volumes using lvresize and now I'm trying to use pvresize, but I get the following error: /dev/sda2: cannot resize to xxxxx extents as later ones are allocated. This PV's free space sits between two logical volumes and I think that's the reason I can't shrink the partition. The output for pvs -v --segments /dev/sda2 is: PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges /dev/sda2 volgroup0 lvm2 a-- 446.64g 148.00g 0 7680 lv_root 0 linear /dev/sda2:0-7679 /dev/sda2 volgroup0 lvm2 a-- 446.64g 148.00g 7680 51712 lv_home 0 linear /dev/sda2:7680-59391 /dev/sda2 volgroup0 lvm2 a-- 446.64g 148.00g 59392 37888 0 free /dev/sda2 volgroup0 lvm2 a-- 446.64g 148.00g 97280 17060 lv_root 7680 linear /dev/sda2:97280-114339 The free space lies between extents 59392 and 97279. I know that pvmove can move the free space to another segment, but I honestly don't know how to use it and I'm afraid of corrupting my data. Could anyone please help me with this? Thanks!
How to shrink LVM physical volume with free space
Thank you @sudodus and @fra-san. I think there is a compatibility issue when combining resize2fs and parted for shrinking a fs/partition. resize2fs uses 4k blocks, when parted uses Byte or MB, GB etc. I eventually found another way to shrink the 2nd partition: gnome-disks. It is provided with Linux Mint and works pretty well. Where parted and gparted failed in shrinking the 2nd partition, gnome-disks succeeded in resizing both fs and partition, in one operation. After the fs/partition shrink, there was trailing empty space in loop0p2. I want to shrink the image file. So, I did: root@O3:/home/m# fdisk -l /dev/loop0 Disk /dev/loop0: 7,5 GiB, 8068792320 bytes, 15759360 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x8889db7fDevice Boot Start End Sectors Size Id Type /dev/loop0p1 8192 532479 524288 256M c W95 FAT32 (LBA) /dev/loop0p2 532480 8355839 7823360 3,7G 83 Linuxtruncate size? (8192 + 524288 + 7823360) * 512 = 4278190080 B truncate --size=4278190080 image-file.img After mapping again the resulting image file to loop0, no more fs/partition errors.
I tried the process from this post resize partition on an image file. I didn't succeed in understanding why it goes wrong in my case. I produced a 8GB image using dd. The image contains two partitions. I map the image with losetup -P /dev/loop0 $image-file. Then: resize2fs /dev/loop0p2 4000M resize2fs 1.44.1 (24-Mar-2018) Resizing the filesystem on /dev/loop0p2 to 1536000 (4k) blocks. The filesystem on /dev/loop0p2 is now 1536000 (4k) blocks long.e2fsck -f /dev/loop0p2 ->>> cleanparted /dev/loop0 (parted) resizepart 2 4000MB` ; print gives 4GB partition (parted) quitpartprobe -s /dev/loop0lsblk /dev/loop0 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 7,5G 0 loop ├─loop0p1 259:2 0 256M 0 loop └─loop0p2 259:3 0 3,5G 0 loop root@O3:/home/m/tmp# e2fsck -f /dev/loop0p2 e2fsck 1.44.1 (24-Mar-2018) The filesystem size (according to the superblock) is 1903360 blocks The physical size of the device is 1154143 blocks **Either the superblock or the partition table is likely to be corrupt**! Abort? yesThe partition resize with parted creates the inconsistency, because after resize2fs the e2fsck is clean. Any ideas to explore to be able to shrink the image file?
How to shrink a file image, produced with dd?
As far as I know, there is no direct way to do this. The only idea which sprang to my mind was to examine the metadata of the partition's contents - e.g. the filesystem metadata. If the size recorded in the metadata does not match the size of the partition, it may have been resized. Even if the contents have been resized too, some metadata may not have been updated to reflect the new size. For instance, the number of inodes in an ext2/3/4 filesystem is fixed at creation and does not change when the filesystem gets resized, so you need to be aware of such rules for the filesystem in question. Assuming the filesystem was created with the default values, you can compare the output of mke2fs in simulation mode with the output of tune2fs -l.
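A rough illustration of that comparison (the device name is a placeholder, and it only tells you anything if the filesystem was created with default options):

mke2fs -n /dev/sdX1                                         # -n: simulate only, print the values mkfs would use
tune2fs -l /dev/sdX1 | grep -Ei 'inode count|block count'   # the values the filesystem actually has
# an inode count that doesn't match what a fresh mkfs would choose for a partition
# of this size suggests the filesystem has been resized at some point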
Is there any way to tell if a file system (regardless of its type) has been resized? Specifically shrunk?
How to tell if a file system has been shrunk?
The apparent answer is to run these two commands lvcreate --name opt --size 23G group mkfs -t ext4 -L opt /dev/group/opt However, via the comments thread it became apparent that lvcreate threw an error message, /dev/group/opt: not found: device not cleared Aborting: Failed to wipe start of new LV A search on Google finds that this is a known error, and the suggested workaround is to avoid zeroing the first part of the LV by using lvcreate --zero n .... I've done some more investigation and it appears that lvcreate will work fine once udev is restarted (either via a system reboot or with udevadm control --reload-rules). I would surmise that the /dev/{volumegroup} node is simply not being created, and so lvcreate can't find it. (Answer extracted from the comments thread)
I've shrinked my /home from 2.7TB to 100G, I've extended /root, /usr, /tmp and /var but I have been looking for a way to create an /opt partition for 3 hours now, and can't find it. The setup is 3TB luks encrypted partition on /dev/sdb3(container) inside it are my lvm partitions /root, /usr, /tmp, /var and /home in a Debian Wheezy system. I used lvreduce, lvextend, e2fsck and resize2fs from a booted live cd to change to current partitions. With directions from tutorials and webpages, like this is one. How do I create an /opt partition 23G from the unused space on /dev/sdb3? I've tried this: lvcreate -L 23G -n opt GroupI don't understand this enough to find the right command, here is the output: /dev/group/opt: not found: device not cleared Aborting: Failed to wipe start of new LVEDIT #1 Here is fdisk -l Disk /dev/sdb: 3000.6 GB, 30005929282016 bytesDevice Boot start end Blocks Id system /dev/sdb1 1 4294967295 2147483647+ ee GPTDisk /dev/mapper/crypt1: 3000.3 GB 3000332451840 bytes 255 heads, 63 sectors/track, 364769 cylinders, total 58600024320 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/0 size (minimum/optimal): 4096 bytes / 4096 bytesDisk /dev/mapper/group-root: 5716MB, 5716836352 bytesDisk /dev/mapper/group-usr: 105,6 GBDisk /dev/mapper/group-var: 10.5 GBDisk /dev/mapper/group-swap 8048MBDisk /dev/mapper/group-tmp 7914MBDisk /dev/mapper/goup-home 107.4GBHere is parted /dev/sdb print devices: /dev/sdb (3001GB) /dev/mapper/group-tmp (7915MB) /dev/mapper/group-swap (8049MB) /dev/mapper/group-home (107GB) /dev/mapper/group-var (10.5GB) /dev/mapper/group-usr (106GB) /dev/mapper/group-root (5717MB) /dev/mapper/crypt1 (3000GB)Here is parted /dev/sdb print free: Model: ATA TOSHIBA (scsi) Disk /dev/sdb: 3001GB Sector size (logical/physical): 5128/4096B Partition Table: gptNumber start end Size File system Name Flags 17.4kb 1049kb 1031kb Free space 1 1049kb 2097kb 1049kb bios_grub 2 2097kb 258MB 256MB ext2 3 258MB 3001GB 3000GB 3001GB 3001GB 466kb Free spaceEDIT #2 Here is vgdisplay: ---- Volume group ----- VG Name Group SystemID Format lvm2 Metadata Areas 1 Metadata Sequence No 18 VG Access read/write VG Status resizable MAX LV 0 Cur LV 6 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 2.73 TiB PE Size 4.00 MiB Total PE 715334 Alloc PE / Size 58461 / 228.36 GiB Free PE / Size 656873 / 2.51 TiB VG UUID 239082309572039572039
How to create an /opt partition on an existing installation without losing data?
If the underlying partition is larger than the filesystem within it, resize2fs will, by default, attempt to expand the filesystem to fill the partition. For example, if /dev/sdd3 is a 1TB partition, and we were to run: # mke2fs /dev/sdd3 500G We will have a 500GB filesystem within a 1TB partition. If we then run resize2fs /dev/sdd3, the filesystem will be expanded to the full 1TB.
For resize2fs, "If the size parameter is not specified, it will default to the size of the partition." The size of a filesystem is by default the size of its underlying partition. So by default, resize2fs doesn't change the size of a filesystem. Does it do nothing? Thanks.
Does `resize2fs` by default do nothing?
Given your comment on Anthon's answer, I think the actual solution to your problem may be to tighten down your OS's logrotate configuration. While it is possible to move /var/log per Anthon's answer, I wouldn't recommend it.
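For example, a minimal sketch of a tighter policy (the path, file name and limits below are purely illustrative, not values taken from your system):

cat > /etc/logrotate.d/tight-example <<'EOF'
/var/log/*.log {
    daily
    rotate 4
    size 50M
    compress
    missingok
    notifempty
}
EOF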
I have a problem with my remote server hosted by my provider, I have only SSH access. The problem consist of getting this error file system rootfs has reached critical status that causes problems with several services like smtp, I want to resize my partitions. I want to: - Decrease size of /home - Increase the size of / Is it possible to do that? is yes how to do that without losing my data and my CentOS installation? root@web [~]# df -hT Filesystem Type Size Used Avail Use% Mounted on rootfs rootfs 20G 16G 3.4G 82% / /dev/root ext3 20G 16G 3.4G 82% / devtmpfs devtmpfs 16G 256K 16G 1% /dev /dev/md3 ext3 1.8T 137G 1.6T 8% /home tmpfs tmpfs 16G 0 16G 0% /dev/shm /dev/loop0 ext3 510M 22M 463M 5% /tmp /dev/loop0 ext3 510M 22M 463M 5% /var/tmproot@web [~]# findmnt TARGET SOURCE FSTYPE OPTIONS / /dev/root ext3 rw,relatime,errors=remount-ro,u ├─/dev devtmpfs devtmpfs rw,relatime,size=16419940k,nr_i │ ├─/dev/pts devpts devpts rw,relatime,mode=600 │ └─/dev/shm tmpfs tmpfs rw,relatime ├─/proc proc rw,relatime │ └─/proc/sys/fs/binfmt_misc binfmt_m rw,relatime ├─/sys sysfs rw,nosuid,nodev,noexec,relatime ├─/home /dev/md3 ext3 rw,relatime,errors=continue,use ├─/tmp /dev/loop0 ext3 rw,nosuid,noexec,relatime,error └─/var/tmp /dev/loop0 ext3 rw,nosuid,noexec,relatime,error
Live resizing of an ext3 filesystem on CentOS 6.5
I resized the partition to a too small value have corrupted the fs?It's unlikely in your case, especially since you were kind enough to stop that fs(c)killer, but you can't rule out the possibility entirely. For example, corruption happens when it's a logical partition inside the extended partition of a msdos partition table. Logical partitions are linked lists, so between logical partitions there is a sector used to point to the next partition in the list. If you shrink/resize such a logical partition there is a sector (partially) overwritten somewhere in the middle of the disk. Also some partitioner programs might enjoy zeroing things out. This is also the case with LVM, on each lvcreate it zeroes out like the first 4K of the created LV, and besides there is no guarantee that reversing a botched lvresize will give you the same extents back that were used before. If unlucky the LV might be located physically elsewhere, which is why you can only undo such accidents by vgcfgrestore something from /etc/lvm/{backup,archive}/ that was created before the lvresize. With SSDs there's this TRIM fad that causes all sorts of programs to issue unwarranted TRIM commands to the SSD. LVM does this if issue_discards=1 in lvm.conf (always set it to 0), here's to hoping that the various partitioning programs will never adopt this behaviour.Is the successful run of e2fsck enough to be sure that data has not been damaged?Most filesystems are not able to detect data corruption outside of their own metadata. Which is usually not a problem since you're not supposed to pull stunts like these. If you have a backup you could compare file timestamps / checksums with what you have in your backups.I haven't mounted the filesystem in the whole process (and not mounted it yet even).You can mount it read-only like so: mount -o loop,ro /dev/sdn1 /mnt/somewhereand then check out the files. The loop,ro tells mount to create a read-only loop device and mount that. Surprisingly, ro by itself does not guarantee readonlyness for some filesystems including ext4. (And for multiple-device filesystems like btrfs, the loop,ro doesn't either because it affects only one device, not all of them).
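If you do have a backup to compare against, one way to do that comparison (paths are placeholders) is a checksum-based rsync dry run:

rsync -nac --delete /path/to/backup/ /mnt/somewhere/   # -n dry run, -a archive, -c compare checksums
# anything rsync would transfer or delete differs from the backup and deserves a closer look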
I shrinked an ext4 filesystem with resize2fs: resize2fs -p /dev/sdn1 3500G(FS is used for 2.3 TB) Then I resized the partition with parted and left a 0.3% margin (~10 GB) when setting the new end: (parted) resizepart 1 3681027097kbEventually, this turned out to be too tight: # e2fsck -f /dev/sdn1 e2fsck 1.42.9 (4-Feb-2014) The filesystem size (according to the superblock) is 917504000 blocks The physical size of the device is 898688000 blocks Either the superblock or the partition table is likely to be corrupt! Abort<y>? yesThen I resized the partition again, this time with 3% margin: (parted) resizepart 1 3681027097kbAfter this, filesystem checks pass: # e2fsck -f /dev/sdn1 e2fsck 1.42.9 (4-Feb-2014) Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information /dev/sdn1: 278040/114688000 files (12.4% non-contiguous), 608536948/917504000 blocksI have run partprobe /dev/sdn after the two resizepart commands. I haven't mounted the filesystem in the whole process (and not mounted it yet even). May the intermediate step in which I resized the partition to a too small value have corrupted the fs? Is the successful run of e2fsck enough to be sure that data has not been damaged?
Resized partition to too small value after shrinking filesystem
With a VPS, I assume you do not have physical access to the machine, so the usual approach to resizing an in-use filesystem will not work (that would be to use a rescue cdrom). In your listing, the /dev/mapper/vgxxx mountpoints are the way LVM volumes are mounted. Tutorials on LVM are fairly easy to find. The problem if you have used all of your space is that shrinking the logical volume under a live filesystem is said to be risky. If you cannot empty out /home then one way to rescue your system would be like this: seeing that you have enough space on either /usr or /var to hold both filesystems, you could copy all of /var to a temporary directory in the /usr tree, and add a line in /etc/fstab to use a bind-mount to make that copy of /var mounted as /var. comment-out the line in /etc/fstab for the "old" /var, reboot You would lose "some" updates to /var (between copying and rebooting), but the reboot would get the system back to normal operation. after rebooting, you could shrink the (now-inactive) logical volume that held /var, and add the freed space to the logical volume containing /home. Still, making a backup first is always a good idea. Further reading: What is a bind mount? How to Shrink an LVM Volume Safely Per comment, you actually have unused space on your disk, so in this instance it is not necessary to shrink one volume to make room for another. I will leave the work-around suggested since it could be useful advice. However - when simply growing a logical volume, you have to add space to the volume and then resize the filesystem (the part that you care about). LVM is three layers (seen with pvdisplay, vgdisplay and lvdisplay). If pvdisplay does not reflect your 1TB, you will have to use fdisk to add a partition to the set of physical volumes. Then update the volume group, adding the physical volume. Finally use resize2fs to increase the size of the filesystem inside that volume group. Here are some useful links: Extending an LVM volume: Physical volumes (partitions) -> Volume groups -> Logical volume -> Filesystem 11.9. Extending a logical volume (LVM HOWTO)
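A condensed sketch of that growing case (device names are placeholders, and I'm assuming the /home filesystem is ext3/ext4, which is what resize2fs implies):

fdisk /dev/sdX                      # create a new partition of type 8e (Linux LVM) in the unused space
pvcreate /dev/sdXn                  # initialise it as a physical volume
vgextend vg00 /dev/sdXn             # add it to the volume group holding /home
lvextend -L +100G /dev/vg00/home    # grow the logical volume (the size is only an example)
resize2fs /dev/mapper/vg00-home     # grow the filesystem to use the new space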
I am not too familiar with how volume sizing works, but I have a VPS running Ubuntu 14.04, and I noticed the home directory is all used up. I have a 1TB drive on this machine, how can I allocate more space to /home? $ df Filesystem 1K-blocks Used Available Use% Mounted on udev 8186844 4 8186840 1% /dev tmpfs 1639632 572 1639060 1% /run /dev/md1 4095616 378936 3716680 10% / none 4 0 4 0% /sys/fs/cgroup none 5120 0 5120 0% /run/lock none 8198144 4 8198140 1% /run/shm none 102400 0 102400 0% /run/user /dev/mapper/vg00-usr 3997376 901812 2869468 24% /usr /dev/mapper/vg00-var 3997376 550088 3221192 15% /var /dev/mapper/vg00-home 3997376 3771276 4 100% /home
Resizing directories
Enlarging a mounted volume has been officially supported for ext3 and ext4 for some time now. I don't know of any strong assessment regarding a change in safety. Obviously both the resizing and the other activities take even longer when done in parallel. But it seems strange to me that this takes so long. In my experience shrinking is slow but enlarging is fast. Maybe you should open another question about optimizing the resize. Maybe you can do something about the image in order to speed up this process.
I'm working on a script for automatically setting up Amazon Linux servers. I create them with 100gb virtual disks, but the main partition is always 8gb. No problem, I call sudo resize2fs /dev/sda1 at the start of the script to expand it to the full 100gb. The process is fairly slow, though. Later on in my script I download various tools and components and set them up. I was wondering if it's safe to do that in parallel with the resize. Intuitively it seems like it would not be safe to write to a partition that's in the process of getting resized, but I thought I'd ask in case there's some clever magic Linux does that makes this ok.
Is it safe to resize a partition while writing to it?
Use gparted to move sdb2 toward the end of the disk, so that the free space is before it. Then you can resize sdb1.
Trying to upgrade from F15 to F17. I need to find a way to increase /boot size without destroying data. Details: I tried upgrading using preupgrade process and via booting from Net iso on USB and both lead to the same thing: and 'Error' message in the first package (filesystem) transaction indicating the installer needs (about) 1GB more on / Cannot proceed with the install. I also tried the trick of reducing the avail space in /boot to < 100M to trigger a network load of the installer image... but that leads to a series of mirror http/404 messages and no progress. I'd like to not have to nuke everything, yet again, just to do an upgrade. I would have thought that, by now, this issue would have drawn a more elegant solution than continually trying to guess how big /boot will have to be for the next upgrade. (unsnark) I have 4GB freespace on /dev/sdb but it is not contiguous with sdb1 (/boot): the process of shrinking the LVM volume released space at the end and not the beginning of sdb: > df -kl | grep boot: /dev/sdb1 3064704 300520 2764184 10% /boot(note that what is now in /boot is irrelevant since 1G > 300 MB: I have to increase the volume size) fdisk: /dev/sdb1 * 2048 6146047 3072000 83 Linux /dev/sdb2 6146048 612354047 303104000 8e Linux LVMcfdisk: Pri/Log Free Space 1.05* sdb1 Boot Primary ext4 3145.73* sdb2 Primary LVM2_member 310378.50* Pri/Log Free Space 4474.28*Edit: After attempting to use gparted, watching it crash in horror, and eventually yanking and replacing the drive, I created a 15 GB boot partition (FCS!), reinstalling F15 and all my files. Then preupgrade to F17 succeeded. I accepted the preferred answer assuming that it would have worked had it not destroyed my harddrive ;)
/boot too small to upgrade
UPDATE - I found this answer, and the others, to be quite helpful. You may want to compare those too. You need to do it like this: swapoff, thus "freeing" the swap partition; fdisk, and delete both the swap partition (/dev/sda5) and the extended partition (/dev/sda2) that contains it. You are now left with just /dev/sda1. You can now enlarge that partition using fdisk again, up to the maximum "physical" size offered by VMware less the new swap size. You can either use a partition-resizing tool, or you can delete /dev/sda1 and recreate it with the same starting point (and type and boot flag). If you can't do so, do not save your changes, exit fdisk immediately, and then find a tool such as partition-resize or growpart, or a different fdisk (e.g. cfdisk), which can. After exiting fdisk, run kpartx -u /dev/sda to inform the kernel of the size change. I'm sure I have forgotten this step more often than not and nothing bad ever happened to me, but that might just have been luck on my part. Once the partition has been enlarged, you can add a new primary partition /dev/sda2 for swap. Set its type to 82; there's no need to create an extended partition and then another swap partition inside it. Keep the swap on /dev/sda2. Then run mkswap on /dev/sda2 and verify/recreate its UUID, because you want the entry in /etc/fstab to be correct if it's UUID-based (if the swap is referred to there as /dev/sda5, just change it to /dev/sda2). Finally you can run resize2fs to make the filesystem grow to fill the new /dev/sda1.
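Put together as a sketch (partition numbers as in the question; growpart comes from cloud-utils and is only one way of doing the enlarging step):

swapoff -a
fdisk /dev/sda          # delete /dev/sda5 and then the extended partition /dev/sda2
growpart /dev/sda 1     # or delete/recreate sda1 in fdisk with the same start sector
partprobe /dev/sda      # let the kernel pick up the new table
fdisk /dev/sda          # create a new primary /dev/sda2, type 82, in the space left for swap
mkswap /dev/sda2        # then fix the swap line in /etc/fstab (UUID or /dev/sda2)
swapon -a
resize2fs /dev/sda1     # finally grow the filesystem into the enlarged partition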
My Debian vmware image has run out of space. I've expanded the disk image but now need to increase my root partition to see the additional space. My volume is setup as follows Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors Disk model: VMware Virtual S Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x37ce2932Device Boot Start End Sectors Size Id Type /dev/sda1 * 2048 48236543 48234496 23G 83 Linux /dev/sda2 48238590 52426751 4188162 2G 5 Extended /dev/sda5 48238592 52426751 4188160 2G 82 Linux swap / SolarisI understand that in order to expand sda1, any new space has to be directly after it. All the examples I've read either a) use LVM or b) dont have an Extended sda2 partition directly after sda1. Can anyone point me to a reference that will show me how to expand sda1 in this scenario? I know I will have to switch off/remove swap on sda5, but what do I do about sda2?
How to resize root ext3 file system without LVM
Resize the /dev/sda2 partition with fdisk or parted. Resize the PV on /dev/sda2 with pvresize /dev/sda2. Resize the logical volumes you want to grow with lvresize --resizefs -L +<size> /dev/vg/var (to resize your /var) or lvresize --resizefs -L +<size> /dev/vg/system1 (to resize your /). <size> can be something like 1G to add 1 GB; to use all of the available free space, use the lowercase -l option with +100%FREE instead, since -L only takes absolute sizes. So to grow your / with all available free space you'd use lvresize --resizefs -l +100%FREE /dev/vg/system1
I have a 16GB msata but my rootfs is only 4GB. I need to increase the size of a volume group /dev/mapper/vg-var/ for my embedded system to full capacity. $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 14.9G 0 disk |-sda1 8:1 0 39.2M 0 part `-sda2 8:2 0 3.7G 0 part |-vg-system1 254:0 0 800M 0 lvm / |-vg-system2 254:1 0 800M 0 lvm `-vg-var 254:2 0 2.1G 0 lvm /var$ df -Th Filesystem Type Size Used Available Use% Mounted on tmpfs tmpfs 890.7M 104.0K 890.6M 0% /tmp tmpfs tmpfs 890.7M 484.0K 890.2M 0% /run devtmpfs devtmpfs 888.1M 0 888.1M 0% /dev /dev/mapper/vg-system1 ext3 771.4M 549.6M 181.8M 75% / tmpfs tmpfs 890.7M 0 890.7M 0% /dev/shm /dev/mapper/vg-var ext3 2.0G 3.5M 1.9G 0% /var$ sudo vgs VG #PV #LV #SN Attr VSize VFree vg 1 3 0 wz--n- 3.68g 0How should i proceed ? Do i Need to extend /dev/sda2 first with fdisk ?
Increase size of a volume group
When shrinking a filesystem, resize2fs first checks if the part of the filesystem that is going to be cut away is free. If not, it can try to move those files out of the area that will be cut away, if there is space to do so. If this cannot be done, it stops and reports an error without shrinking the filesystem. resizepart does not care about the filesystem at all. It just changes the partition table to specify a new location where the partition now ends. It does not overwrite anything at or near that location. After modifying the partition table, it will signal the kernel that the partition table has been changed. The kernel will read the new table and apply it if possible. But for the filesystem driver, the end of the partition will be a hard wall. If the filesystem was not shrunk before the partition was, or the partition was accidentally shrunk more than the filesystem was, a part of the filesystem will now be cut off from the rest. The filesystem will assume that the cut-off space is still available, until it actually attempts to use it. At that point the part of the kernel that is responsible for mapping any partition-relative block numbers to actual whole-disk block numbers will return an error to the filesystem driver, as the filesystem is trying to access beyond the end of its partition. The filesystem driver will usually drop to read-only mode as such an error tends to indicate that the filesystem may be corrupted. At that point, the system administrator usually gets involved. At this point, if the sysadmin realizes that the partition resize operation has cut off part of the filesystem, and undoes the partition resize operation, the filesystem can be fully accessed through the partition device again, and everything may still be just fine: the filesystem may need an fsck to clear the error flag, but the files will still be there. After mounting the filesystem again, the files that were bisected by the partition resize operation will be fully accessible again. But if the sysadmin simply runs a filesystem check on the partition in its shrunken state, the filesystem checker will see that there are files that appear to continue beyond the end of the partition, and say to itself: "Let's amputate". Since it takes the partition size as a solid fact, it has no choice but to truncate or delete the files that seem to go beyond the end of the partition. This is where the actual damage is done. The filesystem metadata will also need some adjustment to remove the space that is beyond the end of the partition from the "books". After the filesystem checker is done, the cut-off parts of the files are still there on the physical disk, beyond the new end of the partition, unchanged... but the parts of the files still inside the filesystem are now truncated into stumps.
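To make the safe ordering concrete, here is a sketch for an ext4 filesystem on a placeholder /dev/sdXn (the sizes are only examples; the point is the order of operations):

umount /dev/sdXn
e2fsck -f /dev/sdXn
resize2fs /dev/sdXn 100G             # 1. shrink the filesystem below the intended partition size
parted /dev/sdX resizepart N 110GiB  # 2. shrink the partition; note the argument is an END position, not a size
resize2fs /dev/sdXn                  # 3. grow the filesystem back out to exactly fill the partition
e2fsck -f /dev/sdXn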
Does parted's resizepart command by default not modify or remove existing files on a partition? Furthermore, does it never modify or remove existing files on a partition (even by some option)? Similar questions for resize2fs? Thanks.
Do `parted resizepart` and `resize2fs` not modify or remove existing files on a partition?
The size of /var greatly depends on what the system is doing. For example, if the system is a mail server, /var/mail and /var/spool could grow arbitrarily large, depending on the size of the user base; those directories would then effectively be the main reason for the system's existence. The size of /var/lib depends on what software you have installed, and /var/cache and /var/snap might also grow depending on what applications are installed and how they are used. You're using GPT and UEFI boot, but otherwise a quite classic partitioning setup. (GPT is good; it means we don't have to deal with MBR's size restrictions and the primary/extended/logical partition nonsense; all partitions will be equally usable for any purpose.) If I'm not mistaken, there is still roughly 500G of unused space towards the end of the disk. And right next to the too-small /var partition, there's a swap partition of 103G. It could be repurposed to expand /var fairly easily. Unfortunately the swap is located before the /var partition, which complicates things a bit. I would start by creating a new swap partition in the unused space, and running mkswap on it. Then I'd comment out the old swap partition in /etc/fstab, and add in the information of the new swap partition. At this point, I would also check if the suspend/resume configuration needs to be updated to point to the new swap partition (/etc/initramfs-tools/conf.d/resume in Debian, probably similar in Ubuntu). Then sudo update-initramfs -u and reboot. After confirming the system is using the new swap partition and no longer the old one, it would be time to remove the swap flag from the old swap partition and set its partition type to be the same as /var has. Then the old swap could be initialized as the future /var using sudo mkfs.ext4, and mounted temporarily to /mnt. Then, it would be time to shut down any applications that might be actively writing to /var, and copy the data from the old partition to the new one, with a simple cp -a /var/.updated /var/* /mnt/. Then another edit to /etc/fstab to make the system mount /dev/nvme0n1p2 instead of /dev/nvme0n1p3 as /var (probably using filesystem UUIDs though, as is the current recommended practice), and a reboot. If the system now boots successfully with the new, expanded /var, it would be a simple matter to use gparted to remove the old, now-unused small /var and use its space to expand (even on-line!) the new /var still further. This is what I would do in this situation; you should use your own judgement to decide if this is a good plan for you or not. If you need more specific instructions than this outline, you should consider doing this together with someone that has more Linux experience.
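Condensed into commands, the outline above would look roughly like this (partition numbers are taken from the question, but double-check every one of them against your own system before running anything):

mkswap /dev/nvme0n1pX              # X = the new swap partition created in the free space
# update /etc/fstab and the resume configuration, run update-initramfs -u, reboot,
# and confirm with swapon --show that the new swap is in use
# (then remove the swap flag / change the partition type of the old swap as described above)
mkfs.ext4 /dev/nvme0n1p2           # repurpose the old 103G swap partition as the new /var
mount /dev/nvme0n1p2 /mnt
cp -a /var/.updated /var/* /mnt/   # copy the data with ownership and permissions intact
# finally point the /var entry in /etc/fstab at the new partition (by UUID) and reboot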
I just installed Ubuntu 24.04, and I made a mistake: I put the /var directory on its own partition, and its size is 10 GB. After a few days it is already full. Is there a way to fix this problem without reinstalling the OS from scratch? Is it possible to resize a partition, even loosing its contents? What is a suggested size for the /var directory? This is my partition table: Model: WD_BLACK SN850X 2000GB (nvme) Disk /dev/nvme0n1: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 8 1049kB 1075MB 1074MB fat32 boot boot, esp 1 1075MB 323GB 322GB xfs / 2 323GB 426GB 103GB linux-swap(v1) swap swap 3 426GB 437GB 10.7GB ext4 var 4 437GB 439GB 2147MB ext4 temp 6 439GB 547GB 107GB xfs 3rdp 5 547GB 1406GB 859GB ext4 home 7 1406GB 1427GB 21.5GB ext4 cryptThis is the contents of my /var directory: ~> sudo bash -c 'shopt -s dotglob; du -hxs /var/*' 7.0M /var/backups 661M /var/cache 4.0K /var/crash 8.9G /var/lib 4.0K /var/local 0 /var/lock 49M /var/log 16K /var/lost+found 4.0K /var/mail 4.0K /var/metrics 4.0K /var/opt 0 /var/run 13M /var/snap 52K /var/spool 108K /var/tmp 4.0K /var/.updatedHere are more details about my partitions: ~> df -h Filesystem Size Used Avail Use% Mounted on tmpfs 5.9G 2.7M 5.9G 1% /run /dev/nvme0n1p1 300G 29G 272G 10% / tmpfs 30G 180M 30G 1% /dev/shm tmpfs 5.0M 12K 5.0M 1% /run/lock efivarfs 148K 62K 81K 44% /sys/firmware/efi/efivars /dev/nvme0n1p6 100G 2.5G 98G 3% /opt /dev/nvme0n1p7 20G 24K 19G 1% /crypt /dev/nvme0n1p4 2.0G 103M 1.7G 6% /tmp /dev/nvme0n1p3 9.8G 9.6G 0 100% /var /dev/nvme0n1p5 787G 13G 734G 2% /home /dev/nvme0n1p8 1022M 41M 982M 4% /boot/efi tmpfs 5.9G 196K 5.9G 1% /run/user/1000
Can I expand my /var partition?
From what you write, you have accidentally shrunk a partition smaller than the file system it contains. On its own this shouldn't lose any data, but almost every action you might take after that could have. This definitely includes resize2fs, e2fsck and mount. It appears you were very lucky, since the two commands you executed both detected the problem and aborted. The big question is: did you do anything with the extra space you created by shrinking the partition? If you did, if you made an additional partition and formatted it, then you may have damaged your data beyond repair. If not then you may be okay. To fix the immediate issue you must use a tool such as parted to increase the partition back to its original size. If you did nothing with that free space already then your data should be right where you left it. This will fix the immediate problem and you can use e2fsck to double check. Do abort if it gives you a similar warning to the first time. The root cause of your problem is that you did not properly shrink the file system with resize2fs before you shrank the partition with parted. This is necessary to move any file data out of the space you are going to remove from the partition. I note that the wiki you reference correctly indicates that you should specify the size in resize2fs... Please be very careful with units and take the time to understand the numbers you are entering. 4k blocks in ext2/3/4 means 4096 bytes. Elsewhere the term "block" can mean something completely different. Also many partitioning programs including parted make a distinction between KB, MB... and KiB, MiB. Make sure you know which units you intend: KB = 1,000 MB = 1,000,000 KiB = 1,024 MiB = 1,048,576
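An illustrative outline only (ORIGINAL_END is the end sector the partition had before the bad shrink, e.g. recovered from an old fdisk -l listing or a backup of the partition table; the partition must again hold the 186122240 4k blocks reported by e2fsck):

parted /dev/sda unit s print                # note the current start and end of /dev/sda3
parted /dev/sda resizepart 3 <ORIGINAL_END>s   # end given in sectors
e2fsck -f /dev/sda3                         # should now run without the size-mismatch warning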
I was trying to shrink my home partition. I followed this ArchWiki article for that. According to this I first resized my filesystem using resize2fs and then resized my physical device using parted. In resize2fs parameter I gave my intended size as XG and after resizing, it reported that new size is Y (4k blocks). From this info I calculated my partition size is (Y * 4) KiB and when resizing physical partition using parted I used this size. But in reality it is (Y * 4) KB. So now total block number of the filesystem is higher than the total block number of the physical device. In resize2fs man page it is stated that if size isn't specified it will take up whole space from the device. So to solve this problem I ran resize2fs again so that it match the fs size with physical size. But it gave following error: resize2fs 1.45.6 (20-Mar-2020) Resizing the filesystem on /dev/sda3 to 159907584 (4k) blocks. resize2fs: Can't read a block bitmap while trying to resize /dev/sda3 Please run 'e2fsck -fy /dev/sda3' to fix the filesystem after the aborted resize operation.But when I issue e2fsck it reported the mismatch and suggested to abort. So, now I am stuck in a loop: e2fsck 1.45.6 (20-Mar-2020) The filesystem size (according to the superblock) is 186122240 blocks The physical size of the device is 159907584 blocks Either the superblock or the partition table is likely to be corrupt! Abort<y>? Is there any way to recover from this? Is it safe to mount and access the partition so that I can take backup? Thanks!
How to recover filesystem and physical size mismatch
The solution is simple: don't shrink the partition and copy it. Instead, make a new partition on the target SSD and copy over the files from the old partition. There's no reason why you couldn't do that, and it's both easier and safer.
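One way to do that copy (the mount points are placeholders; this assumes the old root is mounted read-only and the new partition on the SSD has already been formatted):

rsync -aAXH --info=progress2 /mnt/old-root/ /mnt/new-root/
# afterwards, fix /etc/fstab on the new partition (new UUID) and reinstall/update the bootloader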
I need to move a Pop-OS installation from a 250GB HDD to a 128GB SSD. So far I have been trying to use GParted (which worked for moving my Ubuntu installation between drives of the same size). The recovery and boot partitions copied properly, but to copy the main (root) partition I need to shrink it first (there is enough space). Using GParted to try and shrink it seems to do something for a while, but then errors at the same point (judging by the progress bar) each time. (The title is not related to this problem to try and avoid A/B problem). I have tried running the e2fsck command written in the GParted details file, and rebooting the machine. None of these have made the shrink work. Without the partition shrink, I don't know how I can move the installation to the smaller drive. Below is the gparted_details.htm contents generated by the error. Any and all ideas on how I can move the OS are appreciated.GParted 1.3.1configuration --enable-libparted-dmraid --enable-online-resizelibparted 3.4======================================== Device: /dev/nvme0n1 Model: CT1000P5PSSD8 Serial: Sector size: 512 Total sectors: 1953525168 Heads: 255 Sectors/track: 2 Cylinders: 3830441 Partition table: gpt Partition Type Start End Flags Partition Name Filesystem Label Mount Point /dev/nvme0n1p1 Primary 34 32767 msftres Microsoft reserved partition unknown /dev/nvme0n1p2 Primary 32768 819232767 msftdata Basic data partition ntfs New Volume ======================================== Device: /dev/nvme1n1 Model: RPFTJ128PDD2EWX Serial: Sector size: 512 Total sectors: 250069680 Heads: 255 Sectors/track: 2 Cylinders: 490332 Partition table: gpt Partition Type Start End Flags Partition Name Filesystem Label Mount Point /dev/nvme1n1p1 Primary 2048 250068991 ext4 /======================================== Device: /dev/sda Model: ATA CT250MX500SSD1 Serial: 2013E298798B Sector size: 512 Total sectors: 488397168 Heads: 255 Sectors/track: 2 Cylinders: 957641 Partition table: gpt Partition Type Start End Flags Partition Name Filesystem Label Mount Point /dev/sda1 Primary 2048 1050623 boot, esp EFI System Partition fat32 /boot/efi /dev/sda2 Primary 1050624 1083391 msftres Microsoft reserved partition ext4 /dev/sda3 Primary 1083392 487322748 msftdata Basic data partition ntfs /dev/sda4 Primary 487323648 488394751 hidden, diag ntfs ======================================== Device: /dev/sdb Model: ATA ST31000528AS Serial: 5VP2CLXV Sector size: 512 Total sectors: 1953525168 Heads: 255 Sectors/track: 2 Cylinders: 3830441 Partition table: msdos Partition Type Start End Flags Partition Name Filesystem Label Mount Point /dev/sdb1 Primary 63 1953520127 boot ntfs ExtDisk ======================================== Device: /dev/sdc Model: ATA ST500DM002-1BD14 Serial: Z2AXE6DG Sector size: 512 Total sectors: 976773168 Heads: 255 Sectors/track: 2 Cylinders: 1915241 Partition table: msdos Partition Type Start End Flags Partition Name Filesystem Label Mount Point /dev/sdc1 Primary 2048 976769023 ntfs stuff ======================================== Device: /dev/sdd Model: ATA WDC WD2500BEVT-7 Serial: WD-WXR1A60R1236 Sector size: 512 Total sectors: 488397168 Heads: 255 Sectors/track: 2 Cylinders: 957641 Partition table: gpt Partition Type Start End Flags Partition Name Filesystem Label Mount Point /dev/sdd1 Primary 4096 2097150 boot, esp fat32 /dev/sdd2 Primary 2097152 10485758 msftdata recovery fat32 /dev/sdd3 Primary 10485760 480004462 ext4 /dev/sdd4 Primary 480004464 488393070 swap linux-swap ======================================== Device: 
/dev/sde Model: USB DISK Serial: Sector size: 512 Total sectors: 15730688 Heads: 255 Sectors/track: 2 Cylinders: 30844 Partition table: msdos Partition Type Start End Flags Partition Name Filesystem Label Mount Point /dev/sde1 Primary 8192 15728639 ntfs NTFS /media/yee/NTFS /dev/sde2 Primary 15728640 15730687 lba fat16 UEFI_NTFS /media/yee/UEFI_NTFS======================================== Shrink /dev/sdd3 from 223.88 GiB to 107.42 GiB 00:11:10 ( ERROR ) calibrate /dev/sdd3 00:00:02 ( SUCCESS ) path: /dev/sdd3 (partition) start: 10485760 end: 480004462 size: 469518703 (223.88 GiB) check filesystem on /dev/sdd3 for errors and (if possible) fix them 00:00:15 ( SUCCESS ) e2fsck -f -y -v -C 0 '/dev/sdd3' 00:00:15 ( SUCCESS ) Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information527061 inodes used (3.59%, out of 14680064) 962 non-contiguous files (0.2%) 411 non-contiguous directories (0.1%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 502974/140 24348903 blocks used (41.49%, out of 58689837) 0 bad blocks 15 large files454992 regular files 45072 directories 15 character device files 1 block device file 7 fifos 4994 links 26959 symbolic links (23910 fast symbolic links) 6 sockets ------------ 532046 files e2fsck 1.46.5 (30-Dec-2021) shrink filesystem 00:10:53 ( ERROR ) resize2fs -p '/dev/sdd3' 112640000K 00:10:53 ( ERROR ) Resizing the filesystem on /dev/sdd3 to 28160000 (4k) blocks. Begin pass 2 (max = 10272100) Relocating blocks XXXXXXXX-------------------------------- resize2fs 1.46.5 (30-Dec-2021) resize2fs: Attempt to read block from filesystem resulted in short read while trying to resize /dev/sdd3 Please run 'e2fsck -fy /dev/sdd3' to fix the filesystem after the aborted resize operation.
Moving Pop-OS installation to a smaller drive (using GParted?)
easy: lvresize to, say, 350 GB (I'm assuming df -h /var/lib/vz shows something like 340GB in use; if it's far less, you can of course shrink it far more!). Since you need to shrink the file system, you first have to unmount it: umount /var/lib/vz Then resize the logical volume; we can ask the LVM tools to resize the underlying file system for us at the same time: lvresize -L 350G -r /dev/vg/data Here -L 350G is the new size of the volume, -r resizes the underlying file system automatically, and /dev/vg/data is the LV to resize. This of course only works if there's enough free space in /var/lib/vz, such that the ext4 file system can be successfully shrunk. If there isn't: tough luck! You can't conjure space out of nothing :( You can now mount /var/lib/vz again. Afterwards, create swap to eat up all your free space: lvcreate -l 100%FREE -n swaplv vg Here -l 100%FREE means "use 100% of the available space in the volume group" (a size given in extents), -n swaplv is the name of the new LV, and vg is the volume group in which to create it. Note that instead of -l 100%FREE you could also specify an absolute size (e.g. -L 16G). Note the difference between -l and -L! "Format" it as a swap device: mkswap /dev/vg/swaplv Finally, you want to add that new swap to /etc/fstab: /dev/vg/swaplv swap swap defaults 0 0 and enable it right now: swapon -a
How am I able to reduce /var/lib/vz logical volume (/dev/vg/data) and use it/increase the current swap size? /etc/fstab UUID=c4408a1c-aa5b-4ce2-a9e8-1673660331e9 / ext4 defaults 0 1 LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1 UUID=c90b3083-1b43-427c-8016-1d2406c36417 /var/lib/vz ext4 defaults 0 0 UUID=e585755c-9908-4c01-a89b-d7fb1880b8f8 swap swap defaults 0 0 UUID=aea8f278-23a8-4ce0-97ca-4354720ca602 swap swap defaults 0 0vgdisplay --- Volume group --- VG Name vg System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 3 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 1 Max PV 0 Cur PV 1 Act PV 1 VG Size 386.97 GiB PE Size 4.00 MiB Total PE 99065 Alloc PE / Size 99065 / 386.97 GiB Free PE / Size 0 / 0 VG UUID e2YzU3-HzQe-DIqH-HGNr-tFqc-cWO1-K92uORlvdisplay | grep "LV Path|LV Size" LV Path /dev/vg/data LV Size 386.97 GiB
How can I shrink/use a Logical Volume and use it as swap
Analyse. ntfsfix -n /dev/sda5 (the -n parameter makes the tool print the proposed repair without applying it; be very careful with such tools, as automated repair tools can make the wrong decision when repairing a partition), and ntfsresize -if /dev/sda5, which will tell us what's going on exactly... Backup. First things first: before doing anything, a full image backup is recommended... otherwise at least back up the partition table with sfdisk -d /dev/sda > sda.partition.table.txt Explanation. In this particular case, Failed to read last sector (345345...) means that the NTFS filesystem is bigger than the partition as recorded in the partition table. This can happen when the partition is resized (shrunk) without shrinking the file-system (NTFS here)... the solution is to revert the resize (on the partition table)... Note that ntfsfix may guess the good old value and restore it, BUT the tool can also guess a wrong value and make you lose part or all of your data... and even if the partition can be mounted after the repair, that does not mean you did not lose any data, especially when chkdsk is correcting a lot of errors... Solution. Backup the current partition table with sfdisk -d /dev/sda > sda.partition.table.txt Failed to read last sector (345345...) indicates that the real partition end sector is [start.sector]+[345345...]; thus, we need to calculate the real end sector location by adding the start sector of the partition to the last sector shown in the error. Edit sda.partition.table.txt and replace the end sector with the newly calculated one (for sda5). Restore the partition table with sfdisk /dev/sda < sda.partition.table.txt
After a failed resizing operation, the mount operation fails with: Failed to read last sector (718198764): Invalid argument The partition is not accessible with GParted and other GUI tools. How can we fix such an issue?
Partition mounting/resizing failed to read last sector?
You can simply use sfdisk to resize the 2nd partition. # write the current partition table into a machine readable text file sfdisk --dump /dev/vda > /var/tmp/vda.old cp /var/tmp/vda.old /var/tmp/vda.new # also copy vda.old to another machine to have a safe backup# edit the dump to set the new size for partition 2 # (you may remove the size parameter ", size= 1234" and the # whole line "last-lba: 1234" to get the max possible size) vim /var/tmp/vda.new# now apply the edited partition table to the harddisk sfdisk --no-reread /dev/vda </var/tmp/vda.new# check if it looks good, otherwise repair/try again fdisk -l /dev/vda# after reboot, resize the filesystem too, for example in case of ext{2,3,4} resize2fs /dev/vda2 Note: you can do this without using LVM because there is free space on your disk directly after the partition you want to resize. With LVM you would not have to reboot, and the whole resizing procedure would also be a bit less dangerous. So in general I would recommend using LVM from the beginning, when installing the server. But in your case it should also work without LVM.
I have Ubuntu 16.04 installed on remote server and I have requested another 20GB for my /dev/vda2 partition (now its 20GB), so the total size would be 40GB. Since vda2 is full of very valuable data (disk usage is 100%), I want to extend it. Now, I have searched for ways to do it but I found out my LVM is not configured, or at least it looks like it, because when I run for example vgdisplay nothing happens. Then I tried vgscan and it says Reading volume groups from cache. but that's it. I have tried to follow this tutorial https://www.linuxtechi.com/extend-lvm-partitions/ but since I can't run # vgdisplay < Volume-Group-Name> because I don't know my volume-group-name, I am stuck. What can I do ? I am looking for the safest way and also easiest since I have 2 databases running there, I backed up all my data from them but I really don't want to set it all up once again... Just for info, when I run fdisk -l I get this:Edit: this is what I got regarding to answer below:label: gpt label-id: 88501878-0C4F-486D-B09A-1AD0A6C81982 device: /dev/vda unit: sectors first-lba: 34 last-lba: 41943006/dev/vda1 : start= 2048, size= 2048, type=21686148-6449-6E6F-744E-656564454649, uuid=6FA9DDEF-760F-4276-9DF0-B8A62F9C51BD /dev/vda2 : start= 4096, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=E0912389-7C07-41F7-A21E-B8B131F2C491
Resize partition without using LVM
It is not enough that an LV exists on the PV; it must also be active to be used, i.e. the device mapper device (/dev/mapper/fedora-root) must be created: lvchange -ay fedora/root or vgchange -ay fedora
I extended an lvm from the terminal in system rescue live CD using the commands: # pvcreate /dev/sda7 # vgextend fedora /dev/sda7 # lvextend -l +100%FREE /dev/fedora/rootThe above worked but when I try to check the LV file system or resize it I get the following errors: # e2fsck -f /dev/fedora/roote2fsck: No such file or directory while trying to open /dev/fedora/root Possibly non-existent device?# resize2fs /dev/fedora/root open: No such file or directory while opening /dev/fedora/rootDo I have to activate or mount the volume before I run those commands? I didn't change the name of the volume group.UPDATE Resolved by simply adding command provided by Hauke Laging before resize2fs or e2fsck
LVM not able to be resized or checked with resize2fs and e2fsck
Your question is inconsistent: if there's a partition, the filesystem was created by a command like mkfs.ext4 /dev/sdd1. If there's no partition, the filesystem was created by a command like mkfs.ext4 /dev/sdd. Check the output of df /path/to/some/directory/on/that/filesystem to see which one it is. Either way, you can call resize2fs to shrink the filesystem. This is independent of any use of LVM. You can only shrink the filesystem while it's unmounted, so if it's your root filesystem, you need to do that from a rescue system. Note that the disk letter might be different in the rescue system, e.g. sdb instead of sdd. After shrinking the filesystem to the desired size, if it's on a partition, you need to shrink the partition. You can use fdisk for that, but it's a bit delicate: you need to delete and recreate the partition, making sure that you don't change its start location. You can also use parted, which combines filesystem resizing and partition resizing, but it's also cumbersome to use because you need to compute the target start address. After this, you can shrink the size of the disk image in VMware. Make sure that the filesystem (and partition, if applicable) fit inside the disk image; if there's a partition, remember that the partition table uses an extra 512B at the beginning.
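For the "no partition table" case, a sketch from a rescue system (90G is just an example target, comfortably above the 1% that is actually used; remember the disk letter may differ there):

e2fsck -f /dev/sdd
resize2fs /dev/sdd 90G   # shrink the filesystem first
# only afterwards shrink the virtual disk in VMware, and never below the 90G the filesystem now occupies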
I have a VMware ext4 file system, a non-LVM, non-partitioned file system, that resides on a virtual 300GB disk. In other words, there is no partition and the file system was probably created by: mkfs.ext4 /dev/sdd1 The disk is barely used (1%) but I would like to keep the data on it. Is there a safe way to shrink it? I was thinking of resizing the disk in VMware from 300 to 100 and running resize2fs /dev/sdd1 but am not certain that I will not lose anything on it. Any pieces of advice would be highly appreciated. P.
Shrink/reduce non-lvm disk file system
I don't understand the problem. If the motivation for shrinking the partition is that you want to move it to another physical storage, then the "shrinking magic" is: create the partition on the target storage; format the new partition; mount the new partition (and the source partition); cp -a /path/to/source/. /path/to/target. Much faster, much easier, less dangerous, and you end up with a clean filesystem.
I'm trying to shrink a partition on a 64GB SD Card down so that I can fit it on a 32GB USB thumb drive, but I'm not having any success. I have the SD card plugged into a USB adapter, which is plugged into a Raspberry Pi running Raspbian. Here is the output of fdisk -l: Disk /dev/mmcblk0: 7948 MB, 7948206080 bytes 4 heads, 16 sectors/track, 242560 cylinders, total 15523840 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0002c262 Device Boot Start End Blocks Id System /dev/mmcblk0p1 8192 122879 57344 c W95 FAT32 (LBA) /dev/mmcblk0p2 122880 15523839 7700480 83 LinuxDisk /dev/sda: 63.9 GB, 63864569856 bytes 4 heads, 32 sectors/track, 974496 cylinders, total 124735488 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000798a3 Device Boot Start End Blocks Id System /dev/sda1 4096 147455 71680 c W95 FAT32 (LBA) /dev/sda2 151552 124735487 62291968 83 LinuxIt's /dev/sda2 that I want to shrink, but when I try resize2fs /dev/sda2 20G I get: resize2fs 1.42.5 (29-Jul-2012) resize2fs: Bad magic number in super-block while trying to open /dev/sda2 Couldn't find valid filesystem superblock.I also tried shrinking the partition first via fdisk and then running resize2fs, but it failed with the same error message. How can I shrink my partition? PS. I have already imaged the card so I can restore should anything go wrong.
Shrinking a partition
In order to use parted correctly, you unfortunately have to do a little math sometimes.parted /dev/loop13p1 resizepart 1 7GThis command probably does not do what you expect. parted works with block devices that have partition tables on them. So in the case of /dev/loop13p1 it would be a partition table on a partition. Resizing partition 1 of that would mean you're trying to resize a (fictional) device like /dev/loop13-p1-p1. You probably want to use /dev/loop13 here. Then, resizepart 1 7G does not resize partition 1 to 7G of size. The syntax for resizepart is resizepart NUMBER END. END, not SIZE. So it moves the end point of partition 1 to the offset 7G. The size of the partition then depends on the start sector of partition 1. If the partition starts at 1MiB it would be 7G minus 1MiB large. Too small for a filesystem of 7G size. Furthermore, for parted, G means GB (power of 1000) not GiB (power of 1024). So the unit itself can also be an additional source of confusion. If you resize to G when you meant GiB, the partition will be way too small. Finally for new partition sizes to take, the kernel has to successfully re-read the partition table. Sometimes this fails if the device is in use etc. so always double check with lsblk, blockdev --getsize64, etc. or via head /sys/block/loop13/loop13p1/{start,size} what size the kernel currently believes it to be.The filesystem on /dev/loop13p1 is now 1835008 (4k) blocks long.1835008 * 4096 = 7516192768 So the partition must be 7516192768 bytes or larger. # parted /dev/loop0 unit b print free Model: Loopback device (loopback) Disk /dev/loop0: 15032385536B Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags:Number Start End Size Type File system Flags 1024B 1048575B 1047552B Free Space 1 1048576B 15032385535B 15031336960B primary ext2Trying resizepart: # parted /dev/loop0 resizepart 1 7G Warning: Shrinking a partition can cause data loss, are you sure you want to continue?Yes/No? Yes# parted /dev/loop0 unit b print free Model: Loopback device (loopback) Disk /dev/loop0: 15032385536B Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags:Number Start End Size Type File system Flags 1024B 1048575B 1047552B Free Space 1 1048576B 7000000511B 6998951936B primary ext2 7000000512B 15032385535B 8032385024B Free SpaceAfter resizepart 1 7G the partition ends at (around) 7GB (7000000511B) which is way smaller than the required 7516192768B. # parted /dev/loop0 resizepart 1 7GiB Information: You may need to update /etc/fstab.# parted /dev/loop0 unit b print free Model: Loopback device (loopback) Disk /dev/loop0: 15032385536B Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags:Number Start End Size Type File system Flags 1024B 1048575B 1047552B Free Space 1 1048576B 7516192767B 7515144192B primary ext2 7516192768B 15032385535B 7516192768B Free SpaceAfter resizepart 1 7GiB, the partition ends (around) 7GiB (7516192768 Bytes). This is closer but still too small since we have to consider 1 MiB (1048576B) offset. So there is no easy command to get it right, you just have to do the math yourself. 
# parted /dev/loop0 resizepart 1 $((1+7*1024))MiB Information: You may need to update /etc/fstab.# parted /dev/loop0 unit b print freeModel: Loopback device (loopback) Disk /dev/loop0: 15032385536B Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags:Number Start End Size Type File system Flags 1024B 1048575B 1047552B Free Space 1 1048576B 7517241343B 7516192768B primary ext2 7517241344B 15032385535B 7515144192B Free Space Only then have we reached the desired partition size of 7516192768 Bytes.
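The arithmetic can be scripted so you don't mistype it — a small shell sketch, using the 1 MiB start offset and the 1835008-block filesystem from the examples above (substitute your own numbers):

start=$((1 * 1024 * 1024))      # partition start offset in bytes (see: parted ... unit B print free)
fssize=$((1835008 * 4096))      # filesystem size in bytes, as reported by resize2fs
parted /dev/loop13 unit B resizepart 1 "$((start + fssize - 1))B"

With these numbers the computed end is 7517241343B, i.e. exactly the result of the resizepart 1 $((1+7*1024))MiB command above. parted will still warn about shrinking and ask for confirmation, as in the transcripts above.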
What am I doing wrong? I have an image, I added it as a loop device: losetup -P /dev/loop13 ./my_image.img (gparted screenshot) Then I try to change the FS size for the partition first: e2fsck -f /dev/loop13p1 resize2fs /dev/loop13p1 7G It outputs: Resizing the filesystem on /dev/loop13p1 to 1835008 (4k) blocks. The filesystem on /dev/loop13p1 is now 1835008 (4k) blocks long. Then I shrink the partition itself: parted /dev/loop13p1 resizepart 1 7G (gparted screenshot) After which I perform: resize2fs /dev/loop13p1 Output: Resizing the filesystem on /dev/loop13p1 to 3659264 (4k) blocks. The filesystem on /dev/loop13p1 is now 3659264 (4k) blocks long. So it grows back to the original value... (gparted screenshot) UPD: I tried to reduce the partition via sfdisk and it worked, but now I understand even less why... resize2fs -p /dev/loop13p1 7G echo '2048,7G' | sfdisk /dev/loop13 -N 1 resize2fs /dev/loop13p1 Output: The filesystem is already 1835008 (4k) blocks long. Nothing to do! (gparted screenshot)
Change the size of the partition using parted
Apple court, Apple rules. Try diskutil: $ diskutil list ...# if mounted somewhere $ sudo diskutil unmount $device# all the partitions (there's also a "force" option, see the manual) $ sudo diskutil unmountDisk $device# remember zip drives? this would launch them. good times! $ sudo diskutil eject $device(In the case of a disk image, the hdiutil command may also be of interest. You can also click around in Disk Utility.app.)
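Applied to the dd example from the question, a typical sequence would be the following (double-check the device identifier with diskutil list first — disk2 is taken from the question):

$ diskutil list
$ sudo diskutil unmountDisk /dev/disk2
$ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2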
I just formatted microSD card, and would like to run a dd command. Unfortunately dd command fails: $ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2 dd: /dev/rdisk2: Resource busy $Everyone on the internet says I need to unmount the disk first. Sure, can do that and move on. But I want to understand why / what exactly in OS X is making the device busy? How do I diagnose this? So far I tried:Listing open files: $ lsof /dev/disk2 $ lsof /dev/disk2s1 $Also: $ lsof /Volumes/UNTITLED $Listing users working on the file: $ fuser -u /dev/disk2 /dev/disk2: $ fuser -u /dev/disk2s1 /dev/disk2s1: $Also: $ fuser -u /Volumes/UNTITLED $Check for system messages: $ sudo dmesg | grep disk $Also: $ sudo dmesg | grep /Volumes/UNTITLED $My environmentOperating system: Darwin Eugenes-MacBook-Pro-2.local 15.3.0 Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64 x86_64Information about my microSD: diskutil list disk2 /dev/disk2 (internal, physical): #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *31.9 GB disk2 1: DOS_FAT_32 UNTITLED 31.9 GB disk2s1P.S. I'm using OS X 10.11. Update 22/3/2016. Figured it out. I re-ran the lsof and fuser from above using sudo, and finally got to the bottom of the issue: $ sudo fuser /Volumes/UNTITLED/ /Volumes/UNTITLED/: 62 282 $And: $ sudo lsof /Volumes/UNTITLED/ COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mds 62 root 8r DIR 1,6 32768 2 /Volumes/UNTITLED mds 62 root 22r DIR 1,6 32768 2 /Volumes/UNTITLED mds 62 root 23r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD mds 62 root 25u REG 1,6 0 999999999 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/journalExclusion mds_store 282 root txt REG 1,6 3277 17 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexGroups mds_store 282 root txt REG 1,6 8 23 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexCompactDirectory mds_store 282 root txt REG 1,6 312 19 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexTermIds mds_store 282 root txt REG 1,6 3277 29 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexGroups mds_store 282 root txt REG 1,6 1024 35 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexCompactDirectory mds_store 282 root txt REG 1,6 312 21 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexPositionTable mds_store 282 root txt REG 1,6 8192 31 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexTermIds mds_store 282 root txt REG 1,6 2056 22 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexDirectory mds_store 282 root txt REG 1,6 8192 33 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexPositionTable mds_store 282 root txt REG 1,6 8224 34 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexDirectory mds_store 282 root txt REG 1,6 16 16 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexIds mds_store 282 root txt REG 1,6 65536 48 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/reverseDirectoryStore mds_store 282 root txt REG 1,6 704 24 
/Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexArrays mds_store 282 root txt REG 1,6 65536 26 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.directoryStoreFile mds_store 282 root txt REG 1,6 32768 28 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexIds mds_store 282 root txt REG 1,6 65536 36 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexArrays mds_store 282 root txt REG 1,6 65536 38 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.directoryStoreFile mds_store 282 root 5r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD mds_store 282 root 17u REG 1,6 8192 12 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/psid.db mds_store 282 root 32r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD mds_store 282 root 41u REG 1,6 28 15 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/indexState $From the above it's easy to see that processes called mds and mds_store have created and are holding lots of files on the volume.
Running dd. Why resource is busy?
UPDATE: Note that the answer below applies to RHEL 6. In RHEL 7, most cgroups are managed by systemd, and libcgroup is deprecated.Since posting this question I have studied the entire guide that I linked to above, as well as the majority of the cgroups.txt documentation and cpusets.txt. I now know more than I ever expected to learn about cgroups, so I'll answer my own question here. There are multiple approaches you can take. Our company's contact at Red Hat (a Technical Architect) recommended against a blanket restriction of all processes in preference to a more declarative approach—restricting only the processes we specifically wanted restricted. The reason for this, according to his statements on the subject, is that it is possible for system calls to depend on user space code (such as LVM processes) which if restricted could slow the system down—the opposite of the intended effect. So I ended up restricting several specifically-named processes and leaving everything else alone. Additionally, I want to mention some cgroup basic data that I was missing when I posted my question.Cgroups do not depend on libcgroup being installed. However, that is a set of tools for automatically handling cgroup configuration and process assignments to cgroups and can be very helpful. I found that the libcgroup tools can also be misleading, because the libcgroup package is built on its own set of abstractions and assumptions about your use of cgroups, which are slightly different than the actual kernel level implementation of cgroups. (I can put examples but it would take some work; comment if you're interested.) Therefore, before using libcgroup tools (such as /etc/cgconfig.conf, /etc/cgrules.conf, cgexec, cgcreate, cgclassify, etc.) I highly recommend getting very familiar with the /cgroup virtual filesystem itself, and manually creating cgroups, cgroup hierarchies (including hierarchies with multiple subsystems attached, which libcgroup sneakily and leakily abstracts away), reassigning processes to different cgroups by running echo $the_pid > /cgroup/some_cgroup_hierarchy/a_cgroup_within_that_hierarchy/tasks, and other seemingly magical tasks that libcgroup performs under the hood.Another basic concept I was missing was that if the /cgroup virtual filesystem is mounted on your system at all (or more accurately, if any of the cgroup subsystems aka "controllers" are mounted at all), then every process on your entire system is in a cgroup. There is no such thing as "some processes are in a cgroup and some aren't". There is what is called the root cgroup for a given hierarchy, which owns all the system's resources for the attached subsystems. For example a cgroup hierarchy that has the cpuset and blkio subsystems attached, would have a root cgroup which would own all the cpus on the system and all the blkio on the system, and could share some of those resources with child cgroups. You can't restrict the root cgroup because it owns all your system's resources, so restricting it wouldn't even make sense.Some other simple data I was missing about libcgroup: If you use /etc/cgconfig.conf, you should ensure that chkconfig --list cgconfig shows that cgconfig is set to run at system boot. If you change /etc/cgconfig.conf, you need to run service cgconfig restart to load in the changes. (And problems with stopping the service or running cgclear are very common when fooling around testing. For debugging I recommend, for example, lsof /cgroup/cpuset, if cpuset is the name of the cgroup hierarchy you are using.) 
If you want to use /etc/cgrules.conf, you need to ensure the "cgroup rules engine daemon" (cgrulesengd) is running: service cgred start and chkconfig cgred on. (And you should be aware of a possible but unlikely race condition regarding this service, as described in the Red Hat Resource Management Guide in section 2.8.1 at the bottom of the page.) If you want to fool around manually and set up your cgroups using the virtual filesystem (which I recommend for first use), you can do so and then create a cgconfig.conf file to mirror your setup by using cgsnapshot with its various options.And finally, the key piece of info I was missing when I wrote the following:However, the caveat on this seems to be...that the children of myprocessname will be reassigned to the restricted cpu0only cgroup.I was correct, but there is an option I was unaware of. cgexec is the command to start a process/run a command and assign it to a cgroup. cgclassify is the command to assign an already running process to a cgroup. Both of these will also prevent cgred (cgrulesengd) from reassigning the specified process to a different cgroup based on /etc/cgrules.conf. Both cgexec and cgclassify support the --sticky flag, which additionally prevents cgred from reassigning child processes based on /etc/cgrules.conf.So, the answer to the question as I wrote it (though not the setup I ended up implementing, because of the advice from our Red Hat Technical Architect mentioned above) is: Make the cpu0only and anycpu cgroup as described in my question. (Ensure cgconfig is set to run at boot.) Make the * cpuset cpu0only rule as described in my question. (And ensure cgred is set to run at boot.) Start any processes I want unrestricted with: cgexec -g cpuset:anycpu --sticky myprocessname. Those processes will be unrestricted, and all their child processes will be unrestricted as well. Everything else on the system will be restricted to CPU 0 (once you reboot, since cgred doesn't apply cgrules to already running processes unless they change their EUID). This is not completely advisable, but that was what I initially requested and it can be done with cgroups.
There is a guide to cgroups from Red Hat which is maybe sort of kind of helpful (but doesn't answer this question). I know how to limit a specific process to a specific CPU, during the command to start that process, by: First, putting the following* in /etc/cgconfig.conf: mount { cpuset = /cgroup/cpuset; cpu = /cgroup/cpu; cpuacct = /cgroup/cpuacct; memory = /cgroup/memory; devices = /cgroup/devices; freezer = /cgroup/freezer; net_cls = /cgroup/net_cls; blkio = /cgroup/blkio; }group cpu0only { cpuset { cpuset.cpus = 0; cpuset.mems = 0; } }And then start a process and assign it specifically to that cgroup by using: cgexec -g cpuset:cpu0only myprocessnameI can limit all instances of a specific process name automatically by (I think this is correct) putting the following in /etc/cgrules.conf: # user:process controller destination *:myprocessname cpuset cpu0onlyMy question is: How can I do the reverse? In other words, How can I assign all processes except for a specific set of whitelisted processes and their children to a restricted cgroup?Based on what I have studied, but haven't tested, I believe that a partial solution would be: Add an "unrestricted" cgroup: group anycpu { cpuset { cpuset.cpus = 0-31; cpuset.mems = 0; # Not sure about this param but it seems to be required } }Assign my process explicitly to the unrestricted group, and everything else to the restricted group: # user:process controller destination *:myprocessname cpuset anycpu * cpuset cpu0onlyHowever, the caveat on this seems to be (from reading the docs, not from testing, so grain of salt) that the children of myprocessname will be reassigned to the restricted cpu0only cgroup. A possible alternative approach would be to create a user to run myprocessname and have all of that user's processes unrestricted, and everything else restricted. However, in my actual use case, the process needs to be run by root, and there are other processes that also must be run by root which should be restricted. How can I accomplish this with cgroups?If this is not possible with cgroups (which I now suspect is the case), are my ideas of partial solutions correct and will they work as I think? *Disclaimer: This is probably not a minimal code example;I don't understand all the parts so I don't know which are not necessary.
How to use cgroups to limit all processes except whitelist to a single CPU?
The idea behind this is to ensure you don't receive packets targeted for the previous program listening on that port. This TIME_WAIT state is defined in RFC793 as two times the maximum segment lifetime. I don't know about other Operating Systems but I assume that all of these have some kind of similar behavior. A workaround for this problem is to set SO_REUSEADDR on the socket which should ignore the TIME_WAIT state.
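On Linux you can watch those lingering connections with ss while you wait — a sketch, where port 8080 is a placeholder for whatever port your program was listening on:

$ ss -tan state time-wait '( sport = :8080 )'
$ ss -tan | grep TIME-WAIT      # or simply list them all

Once those entries age out (after 2*MSL, typically on the order of a minute), binding to the port succeeds again even without SO_REUSEADDR.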
If I kill a program that is listening on a TCP port, it takes up to several minutes until the port is reclaimed by the system and usable again. I've seen several Q/A mentioning this phenomenon, but without an explanation. Why does that happen, why doesn't the system reclaim the port right away? Does it also happen on another systems, such as Windows or Mac?
Why does it take up to several minutes to clean a listening TCP port after a program dies?
I don't know that limiting CPU to the whole system is something that's possible without a lot of hacking, but you can easily limit the amount of CPU used by a single process using cpulimit The only way I can think of you being able to use this effectively is writing a wrapper script (can't really call it a script, it's so small) for the applications which you know are resource hogs. Say for example, you find google-chrome uses a lot of CPU, you could replace the google-chrome binary in your path with something like: #! /bin/bash cpulimit --limit 70 /usr/bin/google-chrome-binI haven't tested this so take it with a grain of salt. From cpulimit's website, it seems like you might be able to set rules for cpu limits on different applications. I'm not sure, you'd have to take a look.
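If the process is already running, cpulimit can also attach to it instead of wrapping the binary — a sketch, with the PID and executable name as placeholders:

cpulimit --pid 1234 --limit 50      # throttle an existing process to ~50% of one core
cpulimit --exe chrome --limit 70    # or match by executable name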
My laptop (an HP with an i3 chip) overheats like crazy every time I run a resource-heavy process (like a large compilation, extracting large tarballs or ... playing Flash). I am currently looking into some cooling solutions but got the idea of limiting global CPU consumption. I figured that if the CPU is capped, chances are the temperature will stop increasing frantically, and I'm willing to sacrifice a little performance in order to get the job done. Am I wrong in my reasoning? How can I proceed to cap the CPU usage overall? If it helps, I'm running Debian.
Is there a way to limit overall CPU consumption?
I am not sure if this answers your question, but I found this perl script that claims to do exactly what you are looking for. The script implements its own system for enforcing the limits by waking up and checking the resource usage of the process and its children. It seems to be well documented and explained, and has been updated recently. As slm said in his comment, cgroups can also be used for this. You might have to install the utilities for managing cgroups, assuming you are on Linux you should look for libcgroups. sudo cgcreate -t $USER:$USER -a $USER:$USER -g memory:myGroupMake sure $USER is your user. Your user should then have access to the cgroup memory settings in /sys/fs/cgroup/memory/myGroup. You can then set the limit to, lets say 500 MB, by doing this: echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytesNow lets run Vim: cgexec -g memory:myGroup vimThe vim process and all its children should now be limited to using 500 MB of RAM. However, I think this limit only applies to RAM and not swap. Once the processes reach the limit they will start swapping. I am not sure if you can get around this, I can not find a way to limit swap usage using cgroups.
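To check that the limit is really being applied, the cgroup's (v1) accounting files can be read directly — assuming the myGroup hierarchy created above:

cat /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes      # the configured limit
cat /sys/fs/cgroup/memory/myGroup/memory.usage_in_bytes      # current usage of the group
cat /sys/fs/cgroup/memory/myGroup/memory.max_usage_in_bytes  # high-water mark so far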
There are plenty of questions and answers about constraining the resources of a single process, e.g. RLIMIT_AS can be used to constrain the maximum memory allocated by a process that can be seen as VIRT in the likes of top. More on the topic e.g. here Is there a way to limit the amount of memory a particular process can use in Unix? setrlimit(2) documentation says:A child process created via fork(2) inherits its parent's resource limits. Resource limits are preserved across execve(2).It should be understood in the following way: If a process has a RLIMIT_AS of e.g. 2GB, then it cannot allocate more memory than 2GB. When it spawns a child, the address space limit of 2GB will be passed on to the child, but counting starts from 0. The 2 processes together can take up to 4GB of memory. But what would be the useful way to constrain the sum total of memory allocated by a whole tree of processes?
How to limit the total resources (memory) of a process and its children
The man page you refer to comes from the procps version of top. But you're on an embedded system, so you have the busybox version of top. It looks like busybox top calculates %MEM as VSZ/MemTotal instead of RSS/MemTotal. The latest version of busybox calls that column %VSZ to avoid some confusion. commit log
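As a sanity check against the numbers in the question: MemTotal is roughly 37824K used + 88564K free ≈ 126388K ≈ 123 MB, and the rstpd process has a VSZ of 138 MB, so 138 / 123 ≈ 1.12 — i.e. the 112% shown, which only makes sense if the column is computed from virtual size rather than resident memory.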
I'm working on an embedded Linux system (128MB RAM) without any swap partition. Below is its top output: Mem: 37824K used, 88564K free, 0K shrd, 0K buff, 23468K cached CPU: 0% usr 0% sys 0% nic 60% idle 0% io 38% irq 0% sirq Load average: 0.00 0.09 0.26 1/50 1081 PID PPID USER STAT VSZ %MEM CPU %CPU COMMAND 1010 1 root S 2464 2% 0 8% -/sbin/getty -L ttyS0 115200 vt10 1081 1079 root R 2572 2% 0 1% top 5 2 root RW< 0 0% 0 1% [events/0] 1074 994 root S 7176 6% 0 0% sshd: root@ttyp0 1019 1 root S 13760 11% 0 0% /SecuriWAN/mi 886 1 root S 138m 112% 0 0% /usr/bin/rstpd 51234 <== 112% MEM?!? 1011 994 root S 7176 6% 0 0% sshd: root@ttyp2 994 1 root S 4616 4% 0 0% /usr/sbin/sshd 1067 1030 root S 4572 4% 0 0% ssh passive 932 1 root S 4056 3% 0 0% /sbin/ntpd -g -c /etc/ntp.conf 1021 1 root S 4032 3% 0 0% /SecuriWAN/HwClockSetter 944 1 root S 2680 2% 0 0% dbus-daemon --config-file=/etc/db 1030 1011 root S 2572 2% 0 0% -sh 1079 1074 root S 2572 2% 0 0% -sh 1 0 root S 2460 2% 0 0% init 850 1 root S 2460 2% 0 0% syslogd -m 0 -s 2000 -b 2 -O /var 860 1 root S 2460 2% 0 0% klogd -c 6 963 1 root S 2184 2% 0 0% /usr/bin/vsftpd /etc/vsftpd.conf 3 2 root SW< 0 0% 0 0% [ksoftirqd/0] 823 2 root SWN 0 0% 0 0% [jffs2_gcd_mtd6]ps (which doesn't understand any options besides -w on busybox) shows: PID USER VSZ STAT COMMAND 1 root 2460 S init 2 root 0 SW< [kthreadd] 3 root 0 SW< [ksoftirqd/0] 4 root 0 SW< [watchdog/0] 5 root 0 SW< [events/0] 6 root 0 SW< [khelper] 37 root 0 SW< [kblockd/0] 90 root 0 SW [pdflush] 91 root 0 SW [pdflush] 92 root 0 SW< [kswapd0] 137 root 0 SW< [aio/0] 146 root 0 SW< [nfsiod] 761 root 0 SW< [mtdblockd] 819 root 0 SW< [rpciod/0] 823 root 0 SWN [jffs2_gcd_mtd6] 850 root 2460 S syslogd -m 0 -s 2000 -b 2 -O /var/log/syslog 860 root 2460 S klogd -c 6 886 root 138m S /usr/bin/rstpd 51234 945 root 2680 S dbus-daemon --config-file=/etc/dbus-system.conf --for 964 root 2184 S /usr/bin/vsftpd /etc/vsftpd.conf 984 root 4616 S /usr/sbin/sshd 987 root 952 S /sbin/udhcpd /ftp/dhcpd.conf 1002 root 4056 S /sbin/ntpd -g -c /ftp/ntp.conf 1022 root 2464 S -/sbin/getty -L ttyS0 115200 vt102 1023 root 7176 S sshd: root@ttyp0 1028 root 2572 S -sh 1030 root 2572 R psWhen you look at process 886, you see that it uses 112% of the availble memory and has VSZ (virtual memory size) of 138MB. That doesn't make any sense to me. In the top man page it says: %MEM -- Memory usage (RES) A task's currently used share of available physical memory. How can this process consume more than 100% memory? And if it's such a memory hog, why are there still 88564K RAM free on the system?
What do top's %MEM and VSZ mean?
The pipe is a file opened in an in-kernel file-system and is not accessible as a regular file on-disk. It is automatically buffered only to a certain size and will eventually block when full. Unlike files sourced on block-devices, pipes behave very like character devices, and so generally do not support lseek() and data read from them cannot be read again as you might do with a regular file. The here-string is a regular file created in a mounted file-system. The shell creates the file and retains its descriptor while immediately removing its only file-system link (and so deleting it) before ever it writes/reads a byte to/from the file. The kernel will maintain the space required for the file until all processes release all descriptors for it. If the child reading from such a descriptor has the capability to do so, it can be rewound with lseek() and read again. In both cases the tokens <<< and | represent file-descriptors and not necessarily the files themselves. You can get a better idea of what's going on by doing stuff like: readlink /dev/fd/1 | cat...or... ls -l <<<'' /dev/fd/*The most significant difference between the two files is that the here-string/doc is pretty much an all-at-once affair - the shell writes all data into it before offering the read descriptor up to the child. On the other hand, the shell opens the pipe on the appropriate descriptors and forks off children to manage those for the pipe - and so it is written/read concurrently at both ends. These distinctions, though, are only generally true. As far as I am aware (which isn't really all that far) this is true of pretty much every shell which handles the <<< here-string short-hand for << a here-document redirection with the single exception of yash. yash, busybox, dash, and other ash variants do tend to back here-documents with pipes, though, and so in those shells there really is very little difference between the two after all. Ok - two exceptions. Now that I'm thinking about it, ksh93 doesn't actually do a pipe at all for |, but rather handles the whole business w/ sockets - though it does do a deleted tmp file for <<<* as most others do. What's more, it only puts the separate sections of a pipeline in a subshell environment which is a sort of POSIX euphemism for at least it acts like a subshell, and so doesn't even do the forks. The fact is that @PSkocik's benchmark (which is very useful) results here can vary widely for many reasons, and most of these are implementation dependent. For the here-document setup the biggest factors will be the target ${TMPDIR} file-system type and current cache configuration/availability, and still moreso the amount of data to be written. For the pipe it will be the size of the shell process itself, because copies are made for the required forks. In this way bash is terrible at pipeline setup (to include $(command) substitutions) - because it is big and very slow, but with ksh93 it makes hardly any difference at all. Here's another little shell snippet to demonstrate how a shell splits off subshells for a pipeline: pipe_who(){ echo "$$"; sh -c 'echo "$PPID"'; } pipe_who pipe_who | { pipe_who | cat /dev/fd/3 -; } 3<&032059 #bash's pid 32059 #sh's ppid 32059 #1st subshell's $$ 32111 #1st subshell sh's ppid 32059 #2cd subshell's $$ 32114 #2cd subshell sh's ppidThe difference between what a pipelined pipe_who() call reports and the report of one run in the current shell is due to a ( subshell's ) specified behavior of claiming the parent shell's pid in $$ when it is expanded. 
Though bash subshells definitely are separate processes, the $$ special shell parameter is not a reliable source of this information. Still, the subshell's child sh shell does not decline to accurately report its $PPID.
We can get the same result using the following two in bash, echo 'foo' | catand cat <<< 'foo'My question is what are the difference between these two as far as the resources used are concerned and which one is better ? My thought is that while using pipe we are using an extra process echo and pipe while in here string only a file descriptor is being used with cat.
Resource usage using pipe and here string
It says right there in the article:This has no effect on Linux. man setrlimit says it used to work only in ancient versions.The setrlimit man page says: RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit has effect only in Linux 2.4.x, x < 30, and there affects only calls to madvise(2) specifying MADV_WILLNEED.So it stopped working in 2.4.30. The changelog for 2.4.30 says something about this:Marcelo Tosatti: o Ake Sandgren: Fix RLIMIT_RSS madvise calculation bug o Hugh Dickins: remove rlim_rss and this RLIMIT_RSS code from madvise. Presumably the code crept in by mistake
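A quick way to see the difference in practice (the program name is a placeholder for anything that allocates a lot of memory):

( ulimit -m 102400; ./memory-hungry-program )   # RLIMIT_RSS: silently ignored on modern Linux
( ulimit -v 102400; ./memory-hungry-program )   # RLIMIT_AS: allocations beyond ~100 MiB fail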
This article claims that the -m flag to ulimit does nothing in modern Linux. I can find nothing else to corroborate this claim. Is it accurate?You may try to limit the memory usage of a process by setting the maximum resident set size (ulimit -m). This has no effect on Linux. man setrlimit says it used to work only in ancient versions. You should limit the maximum amount of virtual memory (ulimit -v) instead.If it's true that it worked in older versions of Linux, which version stopped supporting this?
Does 'ulimit -m' not work on (modern) Linux?
The superuser, or any process with the CAP_SYS_ADMIN or CAP_SYS_RESOURCE capability, is not affected by that limitation; that's not something that can be changed. root can always fork processes. If some software is not trusted, it should not run as root anyway.
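For ordinary (non-root) users the usual knob is the process limit (RLIMIT_NPROC), e.g. via /etc/security/limits.conf — a sketch, with the numbers picked arbitrarily:

# /etc/security/limits.conf (read by pam_limits)
*    soft    nproc    1024
*    hard    nproc    2048

Note that, per the limits.conf man page, wildcard entries are not applied to the root account — consistent with the point above that root cannot be protected from itself this way.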
To prevent fork bomb I followed this http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htm ulimit -a reflects the new settings but when I run (as root in bash) :(){ :|:&};: the VM still goes on max CPU+RAM and system will freeze. How to ensure users will not be bring down the system by using fork bombs or running a buggy application? OS: RHEL 6.4
How to prevent fork bomb?
I suggest these two: http://www.oldlinux.org/ and a more straightforward one from this site that contain Linux kernel 0.01, 0.10, 0.11,...,0.98: http://www.oldlinux.org/Linux.old/ and the other: http://www.codeforge.com/article/170371
I want to do research on the evolution of Linux. Therefore it would be nice if I could download the sources of Linux at several moments in time (from 1991 till now). Is there a site where one can find those sources? Similar sites for other Unix based operating systems are also welcome.
Where can I find the historical source code of the Linux sources
Generally speaking, I don't think you can unfortunately. (Some operating systems might provide for it, but I'm not aware of the ones I know supporting this.) Reference doc for resource limits: getrlimit from POSIX 2008. Take for example the CPU limit RLIMIT_CPU.If the process exceeds the soft limit, it gets sent a SIGXCPU If the process exceeds the hard limit, it gets a plain SIGKILLIf you can wait() on your program, you could tell if it was killed by SIGXCPU. But you could not differentiate a SIGKILL dispatched for breach of the hard limit from a plain old kill from outside. What's more, if the program handles the XCPU, you won't even see that from outside. Same thing for RLIMIT_FSIZE. You can see the SIGXFSZ from the wait() status if the program doesn't handle it. But once the file size limit is exceeded, the only thing that happens is that further I/O that attempts to test that limit again will simply receive EFBIG - this will be handled (or not, unfortunately) by the program internally. If the program handles SIGXFSZ, same as above - you won't know about it. RLIMIT_NOFILE? Well, you don't even get a signal. open and friends just return EMFILE to the program. It's not otherwise bothered, so it will fail (or not) in whichever way it was coded to fail in that situation. RLIMIT_STACK? Good old SIGSEGV, can't be distinguished from the score of other reasons to get delivered one. (You will know that that was what killed the process though, from the wait status.) RLIMIT_AS and RLIMIT_DATA will just make malloc() and a few others start to fail (or receive SIGSEGV if the AS limit is hit while trying to extend the stack on Linux). Unless the program is very well written, it will probably fail fairly randomly at that point. So in short, generally, the failures are either not visibly different from other process death reasons, so you can't be sure, or can be handled entirely from the program in which case it decides if/when/how it proceeds, not you from the outside. The best you can do as far as I know is write a bit of code that forks of your program, waits on it, and:check the exit status to detect SIGXCPU and SIGXFSZ (AFAIK, those signals will only be generated by the OS for resource limit problems). Depending on your exact needs, you could assume that SIGKILL and SIGSEGV were also related to resource limits, but that's a bit of a stretch. look at what you can get out of getrusage(RUSAGE_CHILDREN,...) on your implementation to get a hint about the other ones.OS-specific facilities might exist to help out here (possibly things like ptrace on Linux, or Solaris dtrace), or possibly debugger-type techniques, but that's going to be even more tied to your specific implementation.(I'm hoping someone else will answer with some magic thing I'm completely unaware of.)
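A minimal shell sketch of that "fork it off and wait on it" idea — it only sees the wait status, so all the caveats above about signals handled inside the program still apply (the script name is made up):

#!/bin/bash
# wrap.sh -- run a command and report which signal (if any) ended it
"$@" &
wait $!; status=$?
if [ "$status" -gt 128 ]; then
    sig=$(kill -l $((status - 128)))    # prints e.g. XCPU, XFSZ, SEGV, KILL
    echo "terminated by SIG$sig (status $status)" >&2
else
    echo "exited with status $status" >&2
fi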
Let's assume process runs in ulimited environment : ( ulimit ... -v ... -t ... -x 0 ... ./program )Program is terminated. There might be many reasons : memory/time/file limit exceeded ; just simple segfault ; or even normal termination with return code 0. How to check what was the reason of program termination, without modifying program? P.S. I mean "when binary is given". Maybe some wrapper (ptrace-ing etc) might help?
How to check, which limit was exceeded? (Process terminated because of ulimit. )
Think I figured out something that works. I used a program called LaunchControl to create a file called enable core dumps.plist at /System/Library/LaunchDaemons with the following contents: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>GroupName</key> <string>wheel</string> <key>InitGroups</key> <true/> <key>Label</key> <string>core dumps launchctl</string> <key>ProgramArguments</key> <array> <string>launchctl</string> <string>limit</string> <string>core</string> <string>unlimited</string> <string>unlimited</string> </array> <key>RunAtLoad</key> <true/> <key>UserName</key> <string>root</string> </dict> </plist>with these permissions: $ ls -al enable\ core\ dumps.plist -rw-r--r-- 1 root wheel 582 Dec 30 15:38 enable core dumps.plistand this seemed to do the trick: $ launchctl limit core core unlimited unlimited $ ulimit -a core core file size (blocks, -c) unlimited ... <output snipped> ...I created a little test program that just crashes: $ ./a.out Segmentation fault: 11 (core dumped)And, voila, a core dump was generated: $ # ls -al /cores/ total 895856 drwxrwxr-t@ 3 root admin 102 Dec 30 15:55 . drwxr-xr-x 31 root wheel 1122 Oct 18 10:32 .. -r-------- 1 root admin 458678272 Dec 30 15:55 core.426
I want to enable core dump generation by default upon reboot. Executing: ulimit -c unlimitedin a terminal seems to work until the computer is rebooted.
How to add persist shell ulimit settings on Mac? [duplicate]
Alternative #1: Monitor your process with monit Install M/Monit and create a configuration file based on this template: check process myprogram matching "myprogram.*" start program = "/usr/bin/myprogram" with timeout 10 seconds stop program = "/usr/bin/pkill thatscript" if cpu > 99% for 2 cycles then stop if loadavg (5min) > 80 for 10 cycles then stopAlternative #2: Limit process CPU usage with cgroups The most native Linux specific solution of them all. Offers a lot of options and complexity. Example: sudo cgcreate -g cpu:/cpulimited sudo cgset -r cpu.shares=512 cpulimited sudo cgexec -g cpu:cpulimited /usr/bin/myprogram > /dev/null &I encourage you to read more at: DigitalOcean - How-to: Limit resources using cgroups on CentOS 6 RedHat - Resource Management Guide Oracle - List of cgroups subsystems Alternative #3: Limit process CPU usage with cpulimit Get latest version of cpulimit from your package manager of choice, or by getting the source available at GitHub. Limit the CPU usage to 90%: cpulimit -l 90 /usr/bin/myprogram > /dev/null & Sidenote: You can also pin a certain process to use certain CPU core(s) to ensure that you always have some free CPU power.
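For the pinning mentioned in the sidenote, taskset is the usual tool (the core numbers, program path and PID below are placeholders):

taskset -c 0,1 /usr/bin/myprogram    # start the program restricted to cores 0 and 1
taskset -p -c 0,1 1234               # change the affinity of an already running PID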
I was wondering if there is a "canonical" way to do this. Background & description: I have to install some program on a live server. Although I do trust the vendor (FOSS, GitHub, multiple authors...), I would rather guard against the not entirely impossible scenario of the script running into trouble, exhausting system resources and leaving the server unresponsive. I had a case of installing amavis, which was started right after installation and, because of some messy configuration, produced a loadavg of >4 and left the system barely responsive. My first thought was nice - nice -n 19 thatscript.sh. This may or may not help, but I was thinking it would be best to write and activate a script that would do the following: run as a daemon, every (for example) 500ms-2s check for labeled processes with ps and grep if the labeled process(es) (or any other process) take too much CPU (threshold yet to be defined) - kill them with SIGKILL My second thought was - it would not be the first time that I'm reinventing the wheel. So, is there any good way to "jail" the program and the processes it produces into some predefined, limited amount of system resources, or to kill them automatically if they exceed some threshold?
Prevent a script exhausing system resources and crashing entire system
I know this is a bit old, but it seems like no one answered it satisfactorily, and the requester never posted whether his problem was solved or not. So here is an explanation. When you perform: # crm resource migrate r0 node2 a cli-prefer-* rule is created. Now when you want to move r0 back to node1, you don't do: # crm resource migrate r0 node1 but you perform: # crm resource unmigrate r0 Using unmigrate or unmove gets rid of the cli-prefer-* rule automatically. If you try to delete this rule manually in the cluster config, really bad things happen in the cluster, or at least bad things happened in my case.
Using pacemaker in a 2 nodes master/slave configuration. In order to perform some tests, we want to switch the master role from node1 to node2, and vice-versa. For instance if the current master is node1, doing # crm resource migrate r0 node2does indeed move the resource to node2. Then, ideally, # crm resource migrate r0 node1would migrate back to node1. The problem is that migrate added a line in the configuration to perform the switch location cli-prefer-r0 r0 role=Started inf: node2and in order to migrate back I have first to remove that line... Is there a better way to switch master from one node to the other?
Pacemaker: migrate resource without adding a "prefer" line in config
@patbarron has still not posted his comments as an answer, and they are really excellent. So for anyone looking for the answer it is here. He writes:You can look at the source code from Seventh Edition, for example (minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/h/user.h) to see how this was implemented originally. "NOFILE" is the maximum number of open files per process, and it affects the sizes of data structures that are allocated per-process. These structures take up memory whether they're actually used or not. Again, mostly of historical interest, as it's not done this way anymore, but that might provide some additional background on where this came from.The other constant, "NFILE", is the maximum number of open files in the entire system (across all processes/users), and the per-process table of open files contains pointers into the "files" structure: minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/conf/c.c. This is also a compile-time constant and sizes the system-wide open files table (which also consume memory whether they're actually used or not).This explains that historically there was a reason. Each process would reserve NOFILE file descriptors - no matter whether they were used or not. When RAM is scarce you want to avoid reserving memory you do not use. Not only is RAM cheaper today, the reservation is no longer done this way. It confirms my observations: I have been unable to find a single reason why you would keep ulimit -n at 1024 instead of raising it to the max: ulimit -n $(ulimit -Hn). It only takes up memory when the file descriptors are actually used.
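If you want to check what a given process actually got, the per-process limits are visible in /proc — for example, for the current shell:

grep 'Max open files' /proc/$$/limits    # columns show the soft and hard limit
ulimit -n "$(ulimit -Hn)"                # raise the soft limit to the hard limit
grep 'Max open files' /proc/$$/limits    # the soft limit now matches the hard limit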
When I first borrowed an account on a UNIX system in 1990, the file limit was an astonishing 1024, so I never really saw that as a problem. Today 30 years later the (soft) limit is a measly 1024. I imagine the historical reason for 1024 was that it was a scarce resource - though I cannot really find evidence for that. The limit on my laptop is (2^63-1): $ cat /proc/sys/fs/file-max 9223372036854775807which I today see as astonishing as 1024 in 1990. The hard limit (ulimit -Hn) on my system limits this further to 1048576. But why have a limit at all? Why not just let RAM be the limiting resource? I ran this on Ubuntu 20.04 (from year 2020) and HPUX B.11.11 (from year 2000): ulimit -n `ulimit -Hn`On Ubuntu this increases the limit from 1024 to 1048576. On HPUX it increases from 60 to 1024. In neither case is there any difference in the memory usage as per ps -edalf. If the scarce resource is not RAM, what is the scarce resource then? I have never experienced the 1024 limit helping me or my users - on the contrary, it is the root cause for errors that my users cannot explain and thus cannot solve themselves: Given the often mysterious crashes they do not immediately think of ulimit -n 1046576 before running their job. I can see it is useful to limit the total memory size of a process, so if it runs amok, it will not take down the whole system. But I do not see how that applies to the file limit. What is the situation where the limit of 1024 (and not just a general memory limit) would help back in 1990? And is there a similar situation today?
What is the historical reason for limits on file descriptors (ulimit -n)
You can trace the system calls that a program makes. This is the usual method to find out what files it accesses. The tool to do this is called truss in many Unix systems, dtruss on OSX, strace on Linux. I'll describe Linux usage here; check the manual on other systems. The simplest form is strace myprogram arg1 arg2This prints a log of all the system calls made by myprogram. (Example.) To save the log in a file, use the option -o. To also log calls made by subprocesses, use the option -f. To select which system calls are logged, use the option -e. See the manual for details of what you can use as an argument to -e. For example, the following invocation logs file-related system calls (opening and closing, directory listing, etc.) except read and write. strace -e'file,!read,!write' -o /tmp/myprogram.log -f myprogram arg1 arg2
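To turn the log into a plain list of files the program touched, a rough post-processing sketch (the log path matches the example above):

strace -f -e trace=file -o /tmp/myprogram.log myprogram arg1 arg2
grep -o '"[^"]*"' /tmp/myprogram.log | tr -d '"' | sort -u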
I'm working on a piece of software that requires me to know what files and resources any certain launched process are accessing. I'm not planning on attempting to track what every single script, application, and daemon is accessing, just a certain process provided by the user. Is there any way to do this in Python (or any other language for that matter)? I'm going to do some research of my own, I just figured I'd ask here in case there are knowledgeable users out there who know about this sort of thing and can provide a bit more explanation.
Is there any way to tell exactly what files a command is accessing?
According to man systemd.resource-control, CPUShares=weight works as follows: The available CPU time is split up among all units within one slice relative to their CPU time share weight. Since you've told us nothing about other members of the same slice, I presume there are no other members, so it is entirely expected for the service to use all the CPU. If you want to play with CPU control, try CPUQuota=20%. That directive is documented like this: CPUQuota=20% ensures that the executed processes will never get more than 20% CPU time on one CPU.
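A sketch of applying and verifying that on the unit from the question (property names as exposed by reasonably recent systemd versions):

sudo systemctl set-property idiot.service CPUQuota=20%
systemctl show idiot.service -p CPUQuotaPerSecUSec     # should now report 200ms
top -p "$(systemctl show -p MainPID --value idiot.service)"

set-property writes a persistent drop-in and applies it to the running unit, so no daemon-reload/restart cycle should be needed.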
OK, to get my hands dirty with cgroups and systemd, I wrote the most moronic C program I could think of (just a timer and a spinlocking while loop) and named it idiot, which I accompanied with the following idiot.service file in /etc/systemd/system/: [Unit] Description=Idiot - pretty idiotic imo [Service] Type=simple ExecStart=/path/to/idiot User=bruno CPUShares=100 [Install] WantedBy=default.target Then I did sudo systemctl start idiot.service; top | grep idiot, which predictably told me idiot used 100% of CPU. Now, according to link, we should be able to limit the resources of this service with the following: sudo systemctl set-property idiot.service CPUShares=100 sudo systemctl daemon-reload sudo systemctl restart idiot.service which I did, followed by top. But this still tells me that idiot is using 100% of CPU! What am I doing wrong? Note: I also tried adding CPUShares=100 to the unit file, to no avail.
Why isn't this systemd service resource limited when using CPUShares property?
The simple answer is: It is not possible to force your users to use your wrapper script. The reason for this is fairly simple; a shell script is an interpreted program. That means that bash (or some other shell process) must read the file in order to run the commands that are called in it. This in turn means that a user who has permission to run the wrapper script, must have permission to do everything that is done in the wrapper script. In the vast majority of cases*, a shell script, even one with lots of internal logic and conditionals, does exactly the same thing when you run it as it would if you typed the entire script into your command prompt, line by line. If you are merely trying to make it difficult for uneducated users to slow down your system, there are a multitude of ways of doing this, such as what @mikeserv suggests in a comment on your question. I can think of at least five more ways offhand**, many of which could be used in combination; the crucial thing to understand about these is that they're not secure. They don't actually prevent the user from using the command directly instead of the wrapper script, and they also don't (and can't) prevent the user from making his own copy of the wrapper script (which he must have read permissions on to be able to run at all) and modifying it however he likes. It is possible to write a short C program to perform the function of your wrapper script, which compiles to a binary executable, and then make that C program SUID*** so it is the only way the user can run the command you are talking about, but that's beyond my scope and area of expertise. Other options involve extremely odd workarounds (hacks) like setting a cronjob to modify your sudoers file to allow permissions to run the command only during specific times of day...but that's getting really, really weird and Bad Idea territory.I think the standard way to accomplish this (although still without forcing tech-savvy users to use your wrapper script) would be: (I'll pretend the command to restrict is date.)Ensure that inside your script, its call to date uses the absolute path: /bin/date (You can find out what this is by running which date.) Also ensure your script has a proper shebang, so that it can be run without needing to type bash ./myscript but can just be run as ./myscript, and ensure it is readable and executable by everyone. (chmod 555 myscript) Put your wrapper script in /usr/local/bin/ and rename it as date. Check that users have /usr/local/bin at the start of their $PATH variable. (Just log in as a user and run echo "$PATH".) They should already have this by default. It doesn't have to be at the very start as long as it's in their path before /bin (or whatever the location of the original date command is).If they don't have it in their path, you can add it by running: echo 'PATH="/usr/local/bin:$PATH"' | sudo tee /etc/profile.d/my_path_prefix.sh Now any time a user tries to run the command directly, he will actually be running your wrapper script, because the directory where your wrapper script is appears first in his $PATH.A much more hack-y "blackhat" sort of a solution would be to actually mask the original binary, not by putting another version earlier in the path for users, but by putting the wrapper script in place of the command itself, in its original location. Use at your own risk:Put the command itself somewhere outside the normal bin directories so no one has it in their path. You could move it to, for example, /var/local. 
(There may be a better place, but this is a hack already, so it doesn't matter much, does it?) Ensure that the call to the date within your wrapper script points to the new location for date—its absolute path: /var/local/date in my example. Move your wrapper script into date's old location, with date's original name.The main caveat is that every time anyone tries to run that command, including system init scripts, they will get your wrapper script instead. This is purely a hack and would not qualify as good system administration. But it is possible and you may as well know that it could be done. The better solution is what I posted above.*The exceptions to this have to do with modifying the environment and programs that behave differently when they are run interactively vs. when they are run from a script. These exceptions have nothing to do with permissions, though, so they're not relevant to this discussion. **Ask about them in the comments if you are interested and I'll expand on them. ***NOT suid root. If you do this, just create a user, put him in a group which is the only one with permission to run the command you are talking about (chmod 010 or something) and then chown your fresh-compiled wrapper binary to be owned by that user and set its suid bit with chmod 4511.
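To make the recommended setup concrete, here is a minimal sketch of such a wrapper installed as /usr/local/bin/date in front of the real /bin/date; the "outside business hours" policy is just an arbitrary example of a condition you might enforce:

#!/bin/bash
# /usr/local/bin/date -- wrapper around the real /bin/date
hour=$((10#$(/bin/date +%H)))          # current hour, forced to base 10
if [ "$hour" -ge 8 ] && [ "$hour" -lt 18 ]; then
    echo "This command may only be run outside business hours (08:00-18:00)." >&2
    exit 1
fi
exec /bin/date "$@"                    # hand over to the real command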
Say one has a resource hungry command that users on a server need to run. I want to wrap said command with a wrapper script that will parse the arguments passed and ensure that the command is only being used under certain conditions or times. The problem is that if the program itself is not executable the wrapper won't be able to run it either. I'd also like the command not to run as root. Is this possible?
Allow only wrapper script but not command
Improvement #1 - Loops Your looping structure seems completely unnecessary if you use brace expansions instead, it can be condensed like so: $ more pass.bash #!/bin/bashfor str in $(echo {a..z}{a..z}{a..z}); do pass=$(openssl passwd -salt $1 $str) if [[ "$pass" == "$2" ]]; then echo "Password: $str" exit; fi done# vim: set nolist ts=2 :I'm showing 4 characters just to make it run faster, simply add additional {a..z} braces for additional characters for password length. Example runs 4 characters $ openssl passwd -salt ab hhhh abBYJnOuV8dUA$ time ./pass.bash ab abBYJnOuV8dUA Password: hhhhreal 18m3.304s user 6m58.204s sys 9m34.468sSo it completed in 18 minutes. 5 characters $ openssl passwd -salt ab ccccc abZwsITAI6uwM$ time ./pass.bash ab abZwsITAI6uwM Password: cccccreal 426m37.234s user 16m34.444s sys 398m20.399sThis took ~426 minutes. I actually Ctrl+C this, so it hadn't finished, but I didn't want to wait any more than this! NOTE: Both these runs were on this CPU: brand = "Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHzImprovement #2 - Using nice? The next logical step would be to nice the above runs so that they can consume more resources. $ nice -n -20 ./pass.bash ab hhhhhBut this will only get you so far. One of the "flaws" in your approach is the calling of openssl repeatedly. With {a..z}^5 you're calling openssl 26^5 = 11881376 times. One major improvement would be to generate the patterns of {a..z}.... and save them to a file, and then pass this as a single item to openssl one time. Thankfully openssl has 2 key features that we can exploit to get what we want. Improvement #3 - our call structure to openssl The command line tool openssl provides the switches -stdin and -table which we can make use of here to have a single invoke of openssl irregardless of how many passwords we want to pass to it. This is single modification will remove all the overhead of having to invoke openssl, do work, and then exit it, instead we keep a single instance of it open indefinitely, feeding it as many passwords as we want. The -table switch is also crucial since it tells openssl to include the original password along side the ciphers version, so we can make fairly quick work of looking for our match. Here's an example using just 3 characters to show what we're changing: $ openssl passwd -salt ab abc abFZSxKKdq5s6$ printf "%s\n" {a..z}{a..z}{a..z} | \ openssl passwd -stdin -table -crypt -salt ab | grep -m1 abFZSxKKdq5s6 abc abFZSxKKdq5s6So now we can really revamp our original pass.bash script like so: $ cat pass2.bash #!/bin/bashpass=$(printf "%s\n" {a..z}{a..z}{a..z}{a..z}{a..z} | \ openssl passwd -stdin -table -crypt -salt $1 | grep -m1 $2)if [[ "$pass" =~ "$2" ]]; then echo "Password: $pass" fi# vim: set nolist ts=2 :Now when we run it: $ time ./pass2.bash ab aboznNh9QV/Q2 Password: hhhhh aboznNh9QV/Q2real 1m11.194s user 1m13.515s sys 0m7.786sThis is a massive improvement! This same search that was taking more than 426 minutes is now done in ~1 minute! If we search through to say "nnnnn" that's roughly in the middle of the {a..z}^5 character set space. {a..n} is 14 characters, and we're taking 5 of them. $ echo "14^5" | bc 537824$ openssl passwd -salt ab nnnnn abRRCp5N3WN32$ time ./pass2.bash ab abRRCp5N3WN32 Password: nnnnn abRRCp5N3WN32real 1m10.865s user 1m12.842s sys 0m8.530sThis search took ~1.1 minutes. NOTE: We can search the entire space of 5 character passwords in ~1 minute too. 
$ time ./pass2.bash ab abBQdT5EcUvYA Password: zzzzz abBQdT5EcUvYA real 1m10.783s user 1m13.556s sys 0m8.251s Conclusions So with a restructuring we're running much faster. This approach scales much better too as we add a 6th, 7th, etc. character to the overall length of the password. Be warned though that we're using a smallish character set, mainly only the lowercase alphabet characters. If you mix in all the numbers, both cases, and special characters you can typically get ~96 characters per position. This may not seem like a big deal but it increases your pool tremendously: $ echo 26^5 | bc 11881376 $ echo 96^5 | bc 8153726976 Adding all those characters just increased our search space by more than two orders of magnitude. If we go up to roughly 10-12 characters of password length, it really puts a brute force hacking methodology out of reach. Using a proper salt as well as additional NONCEs throughout the construction of a hashed password can add still more stumbling blocks. What else? You've mentioned using John (John the Ripper) or other cracking tools. Probably the state of the art currently would be HashCat. Where John is a tighter version of the approach you're attempting to use, HashCat takes it to another level by enlisting the use of GPUs (up to 128) to really make your hacking attempts fly. You can even make use of CloudCrack, which is a hosted version, and for a mere $17 US you can pay to have a password crack attempted. References: Real World Uses For OpenSSL
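Since the original question was about making the machine work harder, one further step is to split the search space across CPU cores — one openssl pipeline per leading letter. A rough sketch in the same style as pass2.bash (hypothetical pass3.bash, no early-exit logic, so the other workers keep running after a hit):

#!/bin/bash
# pass3.bash SALT HASH -- same arguments as pass2.bash
for c in {a..z}; do
    ( printf "%s\n" "$c"{a..z}{a..z}{a..z}{a..z} | \
        openssl passwd -stdin -table -crypt -salt "$1" | grep -m1 "$2" ) &
done
wait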
I'm running a very time-consuming script which takes many hours to end. Watching top I see that it's only taking 5% of the CPU at best, usually around 3%. Is there any way to force the script to use more CPU in order to end faster? Edit: Basically the script is bruteforcing 5-character passwords given the salt and the hash. Not at home right now, but something like: charset = ['a','b',.........'z']; for i in $charset do for j in $charset do for k in $charset do for l in $charset do for m in $charset do pass=`openssl passwd -salt $1 $i$j$k$l$m` if [[ "$pass" == "$2" ]]; then echo "Password: $i$j$k$l$m"; exit; fi done done done done done
How can I force a script to use more resources?
xrdb -query lists the resources that are explicitly loaded on the X server. appres lists the resources that an application would receive. This includes system defaults (typically found in a directories like /usr/X11R6/lib/X11/app-defaults or /etc/X11/app-defaults) as well as the resources explicitly set on the server with xrdb. You can restrict a particular class and instance, e.g. appres XTerm foo to see what resources apply to an xterm invoked with xterm -name foo. The X server only stores a list of settings. It cannot know whether a widget will actually make use of these settings. Invalid resource names go unnoticed because you are supposed to be able to set resources at a high level in the hierarchy, and they will only apply to the components for which they are relevant and not overridden. X resource specs obey fairly intricate precedence rules. If one of your settings doesn't seem to apply, the culprit is sometimes a system default that takes precedence because it's more specific. Look at the output of appres Class to see if there's a system setting for something.reverseVideo. If your application is one of the few that support the Editres protocol, you can inspect its resource tree with the editres program.
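For the concrete reverseVideo case in the question, a quick check might look like this (assuming an xterm started without -name):

$ appres XTerm xterm | grep -i reversevideo
$ xrdb -query | grep -i reversevideo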
Is there some way to inspect which .Xresources settings are in effect at the moment (unlike xrdb -query)? For example, I'm on a host which doesn't seem to respect *reverseVideo: true, but I don't know whether that is because I wrote it the wrong way (even *florb: glorb doesn't raise an error when running xrdb -merge $HOME/.Xresources), because the setting is not supported, or some other reason.
.Xresources settings in effect
You could probably use stress: stress: tool to impose load on and stress test systems If you want to stress memory you could use: stress --vm 2 --vm-bytes 512M --timeout 10s to spawn 2 VM workers, each using 512MB of RAM, for 10 seconds. If you want to stress the CPU add: stress --cpu ## -t 10s with ## equal to your number of cores, to simulate 100% CPU usage on all cores at the same time for 10 seconds. And if you want to simulate IO use the option: stress --io 4 -t 10s It will add threads that do sync calls to your disk, but you could also write to your disk with this option: stress --hdd 4 --hdd-bytes 256M This would create 4 threads, each writing 256 MB of data to your disk; this can of course be adjusted to simulate either lots of small file writes or huge file writes. This will need some adaptation from you if you want to stress things one by one, all together, or for a longer time, like this: stress --hdd 4 --hdd-bytes 256M --io 4 --cpu ## --vm 2 --vm-bytes 512M -t 60s As for the GPU you could use glmark2, which should be available in Ubuntu. It's a basic GPU benchmark; you can run it forever to simulate GPU load: glmark2 --run-forever
I am looking for one or possibly more commands, or a combination of commands, to get my PC to use as many resources as possible. I want to check how my computer behaves when subject to the maximum amount of data it can handle. I've tried running multiple programs such as browsers, graphic and system tools one by one and all together. I've also tried downloading big files to monitor network and storage performance, but depending on what program(s) I run I get different results in terms of RAM, CPU, and I/O. Often my whole system crashes and I have to reboot or struggle to close some programs. To monitor, I'm using different commands such as iostat, iotop, htop, vmstat, lsof, iftop, iptraf but also some other little programs. (I'm using Ubuntu.) I would very much appreciate an answer that could list some way to exploit GPU, CPU and RAM, and a way to write a file (even with zeros) in the quickest way possible, to see how fast my computer can produce output and write it to the HDD.
Is there a command or a series of commands to make the computer use as many resources as possible?
It is kibibytes (1024); those are raw interfaces to the getrusage()/setrlimit() APIs. That documentation is inaccurate (or old-school, as you say). Also note that the resource limits/accountings and their units vary between systems; you'll find that it's not uncommon for shells to get it wrong on some systems (i.e. not behave as documented). You'll find some additional scaling done by some shells to accommodate that or to be compatible with the original implementation in BSD csh, but in any case the KMGTPE suffixes, where supported, are always 1024-based, not 1000-based. It reminds me I have a proposed patch for zsh covering that and more that I need to finalise. You'll see the code in there clearly states the unit for each resource: typedef struct resinfo_T { int res; /* RLIMIT_XXX */ char* name; /* used by limit builtin */ enum zlimtype type; int unit; /* 1, 512, or 1024 */ char opt; /* option character */ char* descr; /* used by ulimit builtin */ } resinfo_T; [...] {RLIMIT_RSS, "resident", ZLIMTYPE_MEMORY, 1024, 'm', "resident set size (kbytes)"}, for the RSS limit. Also beware the unit for %M with zsh's time keyword is wrong on all systems but Darwin/macOS. The standalone GNU time utility (many shells have their own time as a keyword) knows about the different units between Darwin/macOS and other systems.
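If you want to convince yourself on a given system, you can compare the shell's "kbytes" against the raw byte values that util-linux's prlimit prints (a sketch; 4194304 is just an arbitrary test value, and -d/--data is the data segment limit):
$ ulimit -S -d 4194304      # set the soft limit in the shell's "kbytes"
$ prlimit --pid $$ --data   # show the same limit as stored by setrlimit(), in bytes
If the SOFT column reads 4294967296, then the shell's unit is the 1024-byte kibibyte, since 4194304 * 1024 = 4294967296.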
From man time: M Maximum resident set size of the process during its lifetime, in Kilobytes.From ulimit -a: max memory size (kbytes, -m) unlimitedBut a "kilobyte" may mean either 1000 or 1024 bytes. I guess here it is a round 1024, but I want to be sure. Authoritative reference would be appreciated.
Is the kilobyte used by time and ulimit commands either 1000 (SI) or 1024 (old school) bytes?
You should use a combination of lsof (to find out which process opened which file or port) and strace (to attach to and follow a process's system calls). Use the man pages for each to find out how to apply them to your case.
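A rough sketch for the devices in your list (device paths vary per system; /dev/video0 is just the typical first camera node):
$ lsof /dev/video0             # who has the camera open
$ lsof /dev/snd/* 2>/dev/null  # audio devices
$ lsof -i                      # processes with network (Ethernet/WiFi) sockets open
$ lsof +f -- /mnt/usbdrive     # processes holding files open on a mounted filesystem
fuser -v FILE-or-MOUNTPOINT is an alternative for single files or mount points; for Bluetooth you generally have to look for processes holding bluetooth-family sockets (a crude filter such as lsof | grep -i bluetooth can be a starting point).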
I have some processes running on my system. I need to list which of the processes at a given moment has acquired / is using one or more of these resources on my system: Ethernet, Camera, USB, Bluetooth, WiFi, File System, etc. Is there a way to find this out? Platform: Ubuntu/Fedora (allowed to have SELinux as well, if required to implement the above).
To check which resource is being accessed by which process
EDIT1 After stopping systemd-logind - which native Xorg responds to by dying - and restarting Xorg, I see the entire 6GB of swap wiped out.After the second time, I can confirm that this is a bug in systemd-logind. logind remembers to close the copy of the DRM fd which it holds, but it fails to close the copy which is held in PID1 (used support "seamless" restart of logind): $ sudo lsof /dev/dri/card0 | grep systemd [sudo] password for alan-sysop: lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. systemd 1 root 16u CHR 226,0 0t0 14690 /dev/dri/card0 systemd 1 root 87u CHR 226,0 0t0 14690 /dev/dri/card0 systemd 1 root 101u CHR 226,0 0t0 14690 /dev/dri/card0 systemd 1 root 106u CHR 226,0 0t0 14690 /dev/dri/card0 systemd 1 root 110u CHR 226,0 0t0 14690 /dev/dri/card0 systemd-l 860 root 21u CHR 226,0 0t0 14690 /dev/dri/card0 systemd-l 860 root 25u CHR 226,0 0t0 14690 /dev/dri/card0This feels very much like a known bug, which should already be fixed in v238 of systemd.Indeed, logind seems to be leaking a DRM fd this way every time I log in and out of GNOME. Presumably this bug only becomes obvious when you have display servers shut down uncleanly, so they don't get a chance to deallocate the buffers attached to their DRM fd.EDIT2: Am I right to guess that a file descriptor of a graphics device (DRM), can hold a reference to swappable memory? Note logind holds such file descriptors.Answer: yes.filp SHMEM file node used as backing storage for swappable buffer objects.-- https://www.kernel.org/doc/html/v4.15/gpu/drm-mm.html As I understand it, "SHMEM file node" here is something that does the exact same job as a tmpfs file / memfd. The above quote is regarding a "GEM buffer object"...The mmap system call can't be used directly to map GEM objects, as they don't have their own file handle. Two alternative methods currently co-exist to map GEM objects to userspace... The second method uses the mmap system call on the DRM file handle.-- https://01.org/linuxgraphics/gfx-docs/drm/drm-memory-management.html#id-1.3.4.6.6.8 CONCLUSION: someone should really double-check the current code in logind as it relates to the closing of file handles :).Appendix: how you might try to rule out memfdsDoes anyone have a nice way to check memory usage of memfds?The memory usage of memfds can be read using stat --dereference or du -D on the magic symlink in /proc/$PID. Either under fd/$FD for a file descriptor, or - which you forgot - map_files/... for memory-mapped objects. I don't have a really nice convenience for this, but you can at least search for the most massive individual FDs or mapped files. (The example below is not additional evidence; it was taken after the 6GB of swap usage went away). $ sudo du -aLh /proc/*/map_files/ /proc/*/fd/ | sort -h | tail -n 10 du: cannot access '/proc/self/fd/3': No such file or directory du: cannot access '/proc/thread-self/fd/3': No such file or directory 108M /proc/10397/map_files/7f1e141b4000-7f1e1ad84000 111M /proc/14862/map_files/ 112M /proc/10397/map_files/ 113M /proc/18324/map_files/7efdda2fb000-7efddaafb000 121M /proc/18324/map_files/7efdea2fb000-7efdeaafb000 129M /proc/18324/map_files/7efdc82fb000-7efdc8afb000 129M /proc/18324/map_files/7efdd42fb000-7efdd4afb000 129M /proc/18324/map_files/7efde52fb000-7efde5afb000 221M /proc/26350/map_files/ 3.9G /proc/18324/map_files/$ ps -x -q 18324 PID TTY STAT TIME COMMAND 18324 pts/1 S+ 0:00 journalctl -b -f$ ps -x -q 26350 PID TTY STAT TIME COMMAND 26350 ? 
Sl 4:35 /usr/lib64/firefox/firefox $ sudo ls -l /proc/18324/map_files/7efde52fb000-7efde5afb000 lr--------. 1 root root 64 Mar 19 00:32 /proc/18324/map_files/7efde52fb000-7efde5afb000 -> /var/log/journal/f211872a957d411a9315fd911006ef03/user-1001@c3f024d4b01f4531b9b69e0876e42af8-00000000002e2acf-00055bbea4d9059d.journal
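A crude loop in the same spirit, in case someone wants a one-shot "largest deleted memfds" listing (a sketch; it only walks fd symlinks, so it misses memfds that exist solely as unread file descriptors queued inside a unix socket):
$ sudo sh -c 'for fd in /proc/[0-9]*/fd/*; do
    case $(readlink "$fd" 2>/dev/null) in /memfd:*) du -Dh "$fd" 2>/dev/null;; esac
  done' | sort -h | tail
du -D follows the /proc symlink, so the size reported is that of the backing memfd itself.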
I have a mystery: what is using 6GB of my swap? My kernel version is 4.15.9-300.fc27.x86_64. This happened following some crashes. dmesg shows I had a segfault in a gnome-shell process (which belonged to gdm) and later some firefox processes (Chrome_~dThread, in libxul.so). coredumpctl -r shows no other crashes on my current boot. 1. free and df -t tmpfs # free -h total used free shared buff/cache available Mem: 7.7G 1.2G 290M 5.4G 6.1G 761M Swap: 7.8G 6.0G 1.8G# swapoff -a swapoff: /dev/dm-1: swapoff failed: Cannot allocate memory# df -h -t tmpfs Filesystem Size Used Avail Use% Mounted on tmpfs 3.9G 17M 3.9G 1% /dev/shm tmpfs 3.9G 1.9M 3.9G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup tmpfs 3.9G 40K 3.9G 1% /tmp tmpfs 786M 20K 786M 1% /run/user/1000I also manually checked the mount namespace of every process, for any extra tmpfs. There was no other mounted tmpfs (or they were the same - so only 17M, and there were less than 10 different mount namespaces). 2. ipcs # ipcs --human------ Message Queues -------- key msqid owner perms size messages ------ Shared Memory Segments -------- key shmid owner perms size nattch status 0x00000000 20643840 alan-sysop 600 512K 2 dest 0x00000000 22970369 alan-sysop 600 36K 2 dest 0x00000000 20774914 alan-sysop 600 512K 2 dest 0x00000000 20905987 alan-sysop 600 3.7M 2 dest 0x00000000 23461892 alan-sysop 600 2M 2 dest 0x00000000 20873221 alan-sysop 600 3.7M 2 dest 0x00000000 22511622 alan-sysop 600 2M 2 dest 0x00000000 28278791 alan-sysop 600 60K 2 dest 0x00000000 23003144 alan-sysop 600 36K 2 dest 0x00000000 27394057 alan-sysop 600 60K 2 dest 0x00000000 29622282 alan-sysop 600 156K 2 dest 0x00000000 27426828 alan-sysop 600 60K 2 dest 0x00000000 28246029 alan-sysop 600 60K 2 dest 0x00000000 29655054 alan-sysop 600 156K 2 dest 0x00000000 29687823 alan-sysop 600 512K 2 dest ------ Semaphore Arrays -------- key semid owner perms nsems 0x002fa327 98304 root 600 23. Process memory The per-process swap usage script says process memory only accounts for 54MB of swap: PID=1 swapped 2292 KB (systemd) PID=605 swapped 4564 KB (systemd-udevd) PID=791 swapped 324 KB (auditd) PID=793 swapped 148 KB (audispd) PID=797 swapped 232 KB (sedispatch) PID=816 swapped 120 KB (mcelog) PID=824 swapped 1544 KB (ModemManager) PID=826 swapped 152 KB (rngd) PID=827 swapped 300 KB (avahi-daemon) PID=829 swapped 1688 KB (abrtd) PID=830 swapped 836 KB (systemd-logind) PID=831 swapped 432 KB (dbus-daemon) PID=843 swapped 368 KB (chronyd) PID=848 swapped 312 KB (avahi-daemon) PID=854 swapped 476 KB (gssproxy) PID=871 swapped 1140 KB (abrt-dump-journ) PID=872 swapped 1280 KB (abrt-dump-journ) PID=873 swapped 1236 KB (abrt-dump-journ) PID=874 swapped 14196 KB (firewalld) PID=911 swapped 592 KB (mbim-proxy) PID=926 swapped 1356 KB (NetworkManager) PID=943 swapped 17936 KB (libvirtd) PID=953 swapped 200 KB (atd) PID=955 swapped 560 KB (crond) PID=1267 swapped 284 KB (dnsmasq) PID=1268 swapped 316 KB (dnsmasq) PID=10397 swapped 160 KB (gpg-agent) PID=14862 swapped 552 KB (systemd-journal) PID=18131 swapped 28 KB (login) PID=18145 swapped 384 KB (bash) Overall swap used: 54008 KBSo far I am assuming that there is no negligent program which used umount -l on a full tmpfs. I haven't tried to scrape /proc/*/fd for anyone holding such a hidden tmpfs open. I suppose I am also assuming no-one has constructed a giant memfd and is holding it open... haha why would I even suspect such a thing... 
sob.The memfd names attached to processes seem innocent to me: # ls -l /proc/*/fd/* 2>/dev/null|grep /memfd: lrwx------. 1 alan-sysop alan-sysop 64 Mar 18 22:52 /proc/20889/fd/37 -> /memfd:xshmfence (deleted) lrwx------. 1 alan-sysop alan-sysop 64 Mar 18 22:52 /proc/20889/fd/53 -> /memfd:xshmfence (deleted) lrwx------. 1 alan-sysop alan-sysop 64 Mar 18 22:52 /proc/20889/fd/54 -> /memfd:xshmfence (deleted) lrwx------. 1 alan-sysop alan-sysop 64 Mar 18 22:52 /proc/20889/fd/55 -> /memfd:xshmfence (deleted) lrwx------. 1 alan-sysop alan-sysop 64 Mar 18 22:52 /proc/20889/fd/57 -> /memfd:xshmfence (deleted) lrwx------. 1 alan-sysop alan-sysop 64 Mar 18 22:52 /proc/20889/fd/60 -> /memfd:xshmfence (deleted) lrwx------. 1 alan-sysop alan-sysop 64 Mar 18 22:52 /proc/21004/fd/6 -> /memfd:pulseaudio (deleted)These memfds seem innocent because: Process 20889 is my current Xorg, which post-dates the 6GB of swap. Similarly process 21004 is indeed my pulseaudio process, and the creation time on this process is later than the 6GB of swap was built up. In theory the ones I'm worried about could also be in limbo though, attached to a unix socket message and never read.EDIT1 After stopping systemd-logind - which native Xorg responds to by dying - and restarting Xorg, I see the entire 6GB of swap wiped out. Note I forgot I needed to start logind again. Although lennart told me logind is not supposed to be bus-activated, logind immediately restarted. This is from journalctl -b, i.e. the system log, with no messages removed in between: Mar 18 23:14:12 alan-laptop systemd[1]: Stopped Login Service. Mar 18 23:14:12 alan-laptop dbus-daemon[831]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1 Mar 18 23:14:12 alan-laptop systemd[1]: Starting Login Service...This is relevant in that logind then went through a cycle of a few crashes. This is expected for this version of logind (PRs to fix it have been merged upstream, following my issue reports). So this doesn't quite isolate an individual cause, and I really should have checked the fds logind was holding before killing it. Question Is there any possible swap user I have missed in the above checks? (The non-destructive ones, prior to EDIT1). Is there a better way to get usage reports for any of the possible users I listed above? That is, either an alternative that corrects some inaccuracy I haven't noticed? Or something that will be easier to run, and get a quick result when this happens again? Does anyone have a nice script to check for fds holding open a "hidden" tmpfs (a tmpfs which was detached with umount -l)? Does anyone have a nice way to check memory usage of memfds? Is there any way to check for massive memfds having been left in limbo in an unread unix socket message? (Did any of these geniuses think about this at all when implementing memfds, which were explicitly intended for passing over unix sockets?) EDIT2: Am I right to guess that a file descriptor of a graphics device (DRM), can hold a reference to swappable memory? Note logind holds such file descriptors.
What could be using 6GB of my swap?
Just for anybody following my issue: what was happening was a little odd, but the user running that process happened to have the same numeric ID inside the docker container as a user on the host, so when I listed all the processes the user ID of the user inside the container was getting mapped to a specific user I had on the host. Which explains why, when I deleted the user on the host, I was still seeing the process running as "1001". So now I'm clear that this is how it should be working. I also looked at a tool called csysdig, in case anyone is interested, which seems like it would solve a problem like the one I had, since it gives you specific information about each container. In my case I was seeing processes from the host as well as the containers, so it was really hard to inspect what was happening.
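For anyone who hits the same confusion, a quick way to see the mapping is to compare what the UID means on each side (container ID and UID taken from the example above; the getent call assumes a reasonably standard image):
$ docker top 42b40e73687a                      # host-side view of the container's processes
$ docker exec 42b40e73687a getent passwd 1001  # what UID 1001 resolves to inside the container
$ getent passwd 1001                           # what (if anything) UID 1001 resolves to on the host
The kernel only stores the number; ps/top on the host simply translate 1001 through the host's /etc/passwd, which is why deleting the host user just makes the name fall back to the raw UID.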
I have some resources used by a specific user that I had to delete because it was taking a lot of resources from the server. When I listed the processes in the server the deleted user now shows as “1001” instead of the name it used to show before I deleted it. %Cpu(s): 19.8 us, 29.5 sy, 0.0 ni, 50.7 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st KiB Mem : 3882456 total, 183568 free, 2003808 used, 1695080 buff/cache KiB Swap: 1679356 total, 1155300 free, 524056 used. 1463480 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 9192 1001 20 0 2068436 74700 10284 S 0.3 1.9 3:02.86 node By using systemctl status I found the process and the docker container ID that the user is executing is in: ├─docker │ ├─42b40e73687acb7fcd9a0e43372ced7588b5568c942f740d06510ab0e85b1462 │ │ ├─17156 /bin/sh -e /usr/local/sbin/start.sh └─11148 node --debug --nolazy dist-release/serverSo, I went into the container and I look to the start.sh file but it’s just an executable file, there’s no indication inside of the file that the user is getting called inside of the executable file. CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES apiassets_1 42b40e73687a local.io/api-statements:development "start.sh" 21 hours ago Up 18 hours 0.0.0.0:32785->3000/tcp, 0.0.0.0:5966->5858/tcp What I want to do is stopping this user to use this resources, so I was just curious how can I either find how this user is calling this script to stop it or how can I stop it.
Docker user executing a process cannot be removed
Your problem is that you don't have any swap space. Without swap the kernel cannot move rarely used pages out of RAM onto the disk, so when memory fills up the machine tends to thrash or freeze rather than degrade gracefully. What you are going to need to do is repartition your hard drive. Red Hat has a suggested swap size chart here. Load up the Arch live CD, repartition, and swapon /dev/sdaX. If you need a reference see the Arch Wiki Beginner's Guide. I'd suggest a partition layout like the following one. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 298.1G 0 disk ├─sda1 8:1 0 100M 0 part /boot ├─sda2 8:2 0 20G 0 part / ├─sda3 8:3 0 4G 0 part [SWAP] └─sda4 8:4 0 rest 0 part /home This is just a suggestion; you can do everything in a single partition and not worry about much (but this is the basic layout that most people use). If you keep your root partition separate then remember to keep it around 20-25G; the system's packages live there while user data and user-installed programs belong under /home, so you won't run out of space, I promise. Pacman and yaourt will take care of this for you.
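Once the swap partition exists (using /dev/sda3 from the layout above; substitute your actual device), activating it is a couple of commands, sketched here:
# mkswap /dev/sda3
# swapon /dev/sda3
# swapon --show      # confirm it is active
Then add a matching swap line to /etc/fstab (using the UUID from blkid is safer than the device name) so it is enabled again after a reboot.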
My Computer has been freezing a lot lately, and with no apparent reason.It freezes even if my usage is 3% CPU and 9% RAM. I was using Windows 8 until I installed Ubuntu 14.04. It was really slow, and after some researching, I adopted the idea that Ubuntu 14.04 wasn't really that stable, so I decided I'd download a less resource-heavy distro, so I installed Arch Linux (which is what I'm using to type this now) with GNOME. I'm not having any of the problems I used to have in Ubuntu, except for this mostly annoying freeze that happens to be absolutely random .. My Fan is working correctly, so it's not temperature, and my drivers are up-to-date (they're the same ones I used on Windows, which I had no problem at all with). Note that: The Whole OS just freezes, and when I was once able to Alt+F2 (to get to the run-a-command dialog) and managed to type in a command (I was struggling with the keyboard to type) and hit Enter, I got the message: No enough memory .. ? Which is pretty unexpected because I'm using a minimal system (arch linux) with only one application running .. Edit: Here's my /etc/fstab file # # /etc/fstab: static file system information # # <file system> <dir> <type> <options> <dump> <pass> # /dev/sda3 UUID=2268132b-7cfa-4c55-b773-467c4f691e83 / ext4 rw,relatime,data=ordered 0 1/dev/disk/by-uuid/2236F90308C55145 /mnt/2236F90308C55145 auto nosuid,nodev,nofail,x-gvfs-show,user 0 0 /dev/disk/by-uuid/4FF142A03DACFA48 /mnt/4FF142A03DACFA48 auto nosuid,nodev,nofail,x-gvfs-show,user 0 0lsblk outputs .. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 298.1G 0 disk ├─sda1 8:1 0 69.9G 0 part /mnt/2236F90308C55145 ├─sda2 8:2 0 59.2G 0 part /mnt/4FF142A03DACFA48 ├─sda3 8:3 0 90.3G 0 part / └─sda4 8:4 0 78.7G 0 part sr0 11:0 1 1024M 0 rom
Linux freezing randomly
It's actually the other way around. The soft limit is the value that is actually enforced (i.e. in use); you can raise it yourself up to the relevant hard limit's value (assuming you are not the superuser and do not have the CAP_SYS_RESOURCE capability — with those privileges you can also raise the hard limit).
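So for the fork failure: raise the soft nproc limit, at most up to the hard one. As an unprivileged user that is a one-liner in the shell that launches the workload (a sketch; -u is bash's flag for the maximum number of user processes):
$ ulimit -S -u "$(ulimit -H -u)"   # lift the soft limit to the current hard limit
$ ulimit -S -u; ulimit -H -u       # verify
Raising the hard limit itself (e.g. via /etc/security/limits.conf, or LimitNPROC= in a systemd unit) requires root.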
I ran into this problem: fork: Resource temporarily unavailable. I know that nproc is the problem. Some suggested increasing the soft limit of nproc while others suggested the hard limit. Which should I increase? Isn't the soft limit there just to warn the user, while the hard limit is the one that really limits eventually?
Soft limit vs hard limit
Run it with nice -n 19 ionice -c 3 (19 is the lowest CPU scheduling priority; ionice class 3 is "idle"). That will make it use only the CPU cycles and disk I/O left over by other processes. For RAM, you can use ulimit to cap how much the process may allocate; once it hits the cap its allocations fail and it will typically abort, instead of dragging the rest of the server into swap.
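A concrete sketch (your-pdf-tool is a stand-in for whatever converter the PHP code shells out to):
$ nice -n 19 ionice -c 3 your-pdf-tool input.pdf output.png
$ ( ulimit -v 524288; nice -n 19 ionice -c 3 your-pdf-tool input.pdf output.png )   # cap address space at ~512 MB
The subshell keeps the ulimit from sticking to the calling shell. On a systemd-based server, cgroups can do the same per job, e.g. systemd-run --scope -p MemoryMax=512M -p CPUQuota=50% your-pdf-tool ..., which limits RAM and CPU for every process the tool spawns.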
This is the situation: I have a PHP/MySQL web application that does some PDF processing and thumbnail creation. This is done by using some 3rd party command line software on the server. Both kinds of processing consume a lot of resources, to the point of choking the server. I would like to limit the amount of resources these applications can use in order to enable the server to keep serving users without too much delay, because now when some heavy PDF is processed my users don't get any response. Is it possible to constrain the amount of RAM and CPU an application can use (all processes combined)? Or is there another way to deal with these kinds of situations? How is this usually done?
How to constrain the resources an application can use on a linux web server
[RDDSK / WRDSK] When the kernel maintains standard io statistics (>= 2.6.20): The [read / write] data transfer issued physically on disk (so writing to the disk cache is not accounted for). This counter is maintained for the application process that writes its data to the cache (assuming that this data is physically transferred to disk later on). Notice that disk I/O needed for swapping is not taken into account. Unfortunately, the kernel aggregates the data tranfer of a process to the data transfer of its parent process when terminating, so you might see transfers for (parent) processes like cron, bash or init, that are not really issued by them.https://www.systutorials.com/docs/linux/man/1-atop/ (I agree this is unfortunate. Especially given atop's advertised feature of showing resources used even by processes which exited at some point during the monitoring interval, implemented using process accounting aka psacct).
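To catch the real reader/writer in the act before it exits and gets folded into its parent's totals, you can run an accumulating monitor alongside (a sketch; iotop and pidstat are in Ubuntu/Fedora repositories):
$ sudo iotop -aoP    # -a accumulate totals, -o only active processes, -P per process
$ pidstat -d 5       # per-process read/write rates every 5 seconds
Either will show the short-lived descendants of the systemd --user instance doing the actual I/O, rather than the parent they get accounted to afterwards.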
I just installed atop, waited half an hour, and looked at the logs with atop -r /var/log/atop/atop_20180216. Why does my systemd --user instance show hundreds of megs of disk usage, including tens of megs of writes, during one ten minute interval? What can systemd possibly be doing? PID TID RDDSK WRDSK WCANCL DSK CMD 1/285 2831 - 333.8M 25556K 1196K 87% systemd
systemd shows as reading 300M in atop?
stat -f /dev/mapper/fedora_12345-root returns information about the filesystem containing the device node, which is /dev. To return information about a mounted filesystem, you need to look at a file on that filesystem: stat -f /. The df utility automatically translates mounted block devices to a mount point for them, but stat doesn't do this.
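A minimal way to see the difference and to watch the counters the assignment cares about (device path taken from the question):
$ stat -f /dev/mapper/fedora_12345-root   # reports the tmpfs/devtmpfs that the /dev node lives on
$ stat -f /                               # reports the root filesystem itself
$ df -i /                                 # same filesystem, inode view
Run the last two before starting the program, after starting it, after deleting the executable, and after killing the last process; for the deleted executable the free block and inode counts only change at the final step, because a deleted-but-still-open file's inode and blocks are released only when the last process using it exits.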
I am supposed to track how file system's usage of resources (i-nodes, blocks) changes before I start a program, after I start a program, delete its executable file, and then finally after I kill its last process. The problem I reach is that I can't seem to register any change in resources even in the very first stage. Below I checked the block and i-node numbers for the root's file system , started firefox (in other terminal), and measured those values again: [root@12345 ttyid:1 nie cze 07 00:17:47 ~]# which firefox /usr/bin/firefox [root@12345 ttyid:1 nie cze 07 00:17:50 ~]# df /usr/bin/firefox System plików 1K-bl użyte dostępne %uż. zamont. na /dev/mapper/fedora_12345-root 8378368 5407812 2970556 65% / [root@12345 ttyid:1 nie cze 07 00:18:01 ~]# ps -a PID TTY TIME CMD 3687 pts/1 00:00:00 ps [root@12345 ttyid:1 nie cze 07 00:18:06 ~]# stat -f /dev/mapper /fedora_12345-root Plik: "/dev/mapper/fedora_12345-root" ID: 0 długość nazwy: 255 typ: tmpfs rozmiar bloku: 4096 podstawowy rozmiar bloku: 4096 bloków: Razem: 130573 wolnych: 130573 dostępnych: 130573 Inody: razem: 130573 wolnych: 130163 [root@12345 ttyid:1 nie cze 07 00:18:11 ~]# ps -a PID TTY TIME CMD 3697 pts/0 00:00:08 firefox 3783 pts/1 00:00:00 ps [root@12345 ttyid:1 nie cze 07 00:18:41 ~]# stat -f /dev/mapper/fedora_12345-root Plik: "/dev/mapper/fedora_12345-root" ID: 0 długość nazwy: 255 typ: tmpfs rozmiar bloku: 4096 podstawowy rozmiar bloku: 4096 bloków: Razem: 130573 wolnych: 130573 dostępnych: 130573 Inody: razem: 130573 wolnych: 130163(I tried it on firefox browser, and nano and vim programs so far; no observed change.) What options I should use with df and stat (the two required commands) to successfully track the change in resources? Am I tracking a wrong, constant and similarly-named value or making some other mistake?
How to track resources' (inodes, blocks) usage change upon starting a program