Did you enable homes for root? If so, it's still just a subfolder of your volume. Basically all data lives on your volumes, so connecting via ssh, switching to the root user, and running df -h should give you a correct breakdown of your mounts.
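A minimal sketch of that check (the hostname and admin account are placeholders, and this assumes a DSM version where administrators can use sudo):
ssh admin@diskstation.local   # any account in the administrators group
sudo -i                       # switch to root
df -h                         # data volumes normally show up as /volume1, /volume2, ...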
I recently bought a Synology nas server and installed a 4TB HDD. Now when accessing the nas through ssh, I checked how much space I have on my root account, and I found out it was only 1.5 GB. But when I access the Synology nas through the browser, it says I have 3.5 TB available. Is it possible for me to access this available space through ssh and if not, can I assign 3TB volume for example to my root account?
How to access Synology nas available hard disk space through ssh?
In Gnome just go to Settings -> Keyboard and click the Shortcuts tab. You can redefine your 'Sound and Media' shortcuts and you can define custom shortcuts that execute arbitrary commands. You can add as many bindings as you like, as long as you don't assign the same shortcut more than once; if you do, you'll get a warning message.
I would like to set multiple keyboard shortcuts doing the same thing. My particular example is Volume Up/Down, I would like to retain the standard settings I have (Sound/VolumeUp - XF86AudioRaiseVolume, my laptop dedicated button) and I would like to add a second set (Tux+Up). How can I do that? Thanks a lot.
Dual keyboard shortcuts in Gnome
"Are PV and partition synonyms?" No. A PV is a block device used by LVM to store data. In your case that is a partition, but it doesn't have to be: it could be a complete drive, or it could be a RAID array. "I want to resize a partition. I wonder if pvresize is the command to use." In general, resizing a partition has two steps: resizing the partition itself and resizing whatever is stored on that partition. So if you have a partition containing an LVM PV you have to resize both the partition itself and the PV. Order matters: if you are making a partition larger, you first expand the partition itself, then you use pvresize to expand the PV to use the new, larger partition. OTOH, if you are making the partition smaller, you must first shrink the PV using pvresize before you shrink the partition itself.
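A hedged sketch of both orderings (the device and partition names here are hypothetical; adjust sizes to your own layout):
# Growing: partition first, then PV
parted /dev/sda resizepart 2 100%                  # let partition 2 take the free space
pvresize /dev/sda2                                 # PV grows to fill the enlarged partition
# Shrinking: PV first, then partition
pvresize --setphysicalvolumesize 100G /dev/sda2    # must still fit all allocated extents
# ...then shrink partition 2 down to (not below) that size with parted/fdisk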
If I call fdisk /dev/sda it names /dev/sda a disk, and /dev/sda1 a partition. On the other hand, if I call pvdisplay then /dev/sda2 is a PV name. Are PV and partition synonyms? If not, what are the differences between the two? Context: I want to resize a partition. I wonder if pvresize is the command to use.
Are PV (physical volume) and partitions the same thing?
First, you'll have to edit the partition table to actually extend the sdb3 partition. You might use gparted, parted, gdisk or fdisk for this. If you use gdisk or fdisk, the changes are only written to the partition table when you tell the program to do it, so with a single gdisk/fdisk session, you can view the exact disk location (block/sector number) where sdb3 begins, delete the sdb3 partition, recreate it with the exact same starting point and a new end point, and then write the updated partition table to the disk. If the kernel does not accept the new partition size immediately, you might have to run sudo partprobe /dev/sdb at this point. Once the new partition size is visible in /proc/partitions, you can proceed with sudo pvresize /dev/sdb3 exactly as you did. After that, sudo pvdisplay should indicate increased PV size, Total PE and Free PE values. At that point, you can use sudo lvextend -r -L <desired new size> /dev/cl/root to extend your LV. Since the sdb3 PV is already a member of the cl volume group, you don't need vgextend in this case: it is only used when you're adding a new, unused PV into an existing volume group. Since the VG is currently active and all its LVs are mounted/in use, the PV is locked for exclusive access by the LVM, so even the vgextend tool cannot access it directly. If you tried to do the vgextend by booting from an external medium, so that the LVs would be unmounted, you would see the error message Physical volume '/dev/sdb3' is already in volume group 'cl' instead.
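A condensed recap of the sequence above, assuming the unallocated space sits directly after sdb3 (the gdisk step is interactive; the lvextend flags are the same as mentioned in the answer):
gdisk /dev/sdb                                # note sdb3's start sector, delete, recreate with the same start, write
sudo partprobe /dev/sdb                       # only if the kernel didn't pick up the new size
sudo pvresize /dev/sdb3
sudo lvextend -r -l +100%FREE /dev/cl/root    # -r resizes the filesystem as well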
I have imaged a smaller drive to a larger drive. I now need to increase the size of the parition/volume group (correct term?). The drive has ~1.6 TiB of unallocated space which I want /dev/sdb3 to use then allocate that increase to /dev/cl/root. Below is some information cobbled together from google searching. The first four pieces are from gparted. /dev/sdb1 fat16 /boot/efi 200 MiB /dev/sdb2 ext4 /boot 1.0 GiB /dev/sdb3 lvm2 pv cl 221.68 GiG unallocated unallocated 1.60 TiB$ sudo lvmdiskscan /dev/sdb1 [ 200.00 MiB] /dev/sdb2 [ 1.00 GiB] /dev/sdb3 [ 221.68 GiB] LVM physical volume /dev/cl/var [ 100.00 GiB] /dev/cl/swap [ 2.00 GiB] /dev/cl/root [ 110.00 GiB] $ sudo lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sdb 8:16 0 1.8T 0 disk ├─sdb2 8:18 0 1G 0 part ├─sdb3 8:19 0 221.7G 0 part │ ├─cl-swap 253:1 0 2G 0 lvm │ ├─cl-root 253:2 0 110G 0 lvm │ └─cl-var 253:0 0 100G 0 lvm └─sdb1 8:17 0 200M 0 part $ sudo pvdisplay --- Physical volume --- PV Name /dev/sdb3 VG Name cl PV Size 221.68 GiB / not usable 3.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 56749 Free PE 2478 Allocated PE 54271 PV UUID Uohq5b-Ubkr-f51E-y1tf-vfAi-JA06-dKWx7AI tried to increase physical size by using sudo pvresize /dev/sdb3 with the output of Physical volume "/dev/sdb3" changed 1 physical volume(s) resized / 0 physical volume(s) not resizedThough this didn't seem to do anything. I can't resize /dev/sdb3 within gparted. I tried to increase the volume group using $ sudo vgextend cl /dev/sdb3 Can't open /dev/sdb3 exclusively. Mounted filesystem?I am not sure why it failed as I don't think /dev/sdb3 is mounted as the physical drive is attached to within another Linux CentOS 7 system which was used to image the hard drive. How can I resize/extend /dev/sdb3 to use the reset of the unallocated space then increase /sdb3/cl-root to use all of the new space? Some response I have seen show creating a new partition within the unallocated space and then adding it to the group/volume, but I was hoping to increase /dev/sdb3 to use the remaining unallocated space then increase the size of /sdb3/cl-root.
Increase the size lvm2 partition to use all unallocated disk space
The solution turned out to be very simple. uGet crashed because I had changed the download preferences so that the program would use aria2, but I didn't notice that aria2 is not installed automatically when uGet is installed (a common stumbling block for people migrating from Windows to Unix-like systems). Therefore, I fixed the problem by installing it: sudo apt-get install aria2
I have been using uGet to download what I need. I've had no problems with it until I tried downloading a 4.5 GB file. After 3.6 GB had been downloaded, uGet stopped working and was killed unexpectedly. I checked for a lack of disk space (Baobab 1.8.2 on Ubuntu MATE 14.04) and saw no problem with the capacity. Other files of around 550 MB continue to download fine, but the problem with the larger file persists. How can I continue and finish the download of the 4.5 GB file without any interruptions? Thank you very much in advance.
Why uget stops working and exits unexpectedly?
To increase the size of a filesystem you must first grow the logical volume container and then increase the size of the filesystem within. When decreasing the size of a filesystem, shrinking the surrounding logical volume is done last. A shorthand way of expanding a logical volume and the filesystem it contains is to use lvextend with the --resizefs option. For example, assume that you have a logical volume of 1000 extents that you want to grow to 1600 and then expand the filesystem within; do: lvextend -l 1600 --resizefs /dev/vg01/lvol1 This increases the logical volume size to a total of 1600 extents and then grows the filesystem associated with it. There is no need to unmount the filesystem to perform this operation. In order to shrink the size of a filesystem, you must first unmount it and fsck it. Then, reduce the size of the filesystem first, followed by shrinking the size of the surrounding logical volume container. Use tune2fs -l to ascertain the "Block size" of the filesystem and vgdisplay to find the "PE Size" of the volume group. Divide the PE size by the block size to get the number of filesystem blocks per extent, multiply that by the number of physical extents you want the final logical volume to contain, and use the product as the argument to resize2fs. For example, if the block size is 4096, the PE size is 4 MiB (i.e. 1024 blocks per extent) and the final number of physical extents you want in your logical volume is 1200, then the product is 1228800 (blocks). Hence: umount /myfs e2fsck -f /dev/vg01/lvol1 resize2fs /dev/vg01/lvol1 1228800 lvreduce -l 1200 /dev/vg01/lvol1 [ respond "y" when asked if you really want to reduce it ] (Newer LVM versions can also do the shrink in one step with lvreduce -l 1200 --resizefs /dev/vg01/lvol1, which runs the filesystem resize for you.)
How can I resize logical volume to fit filesystem automagically?
How to resize logical volume to fit filesystem
Run the pactl set-sink-volume @DEFAULT_SINK@ 30% command as an autostart entry. The other way: echo 'set-sink-volume @DEFAULT_SINK@ 20000' >> ~/.config/pulse/default.pa
Every time I boot up my Debian machine the sound volume level is at 100 % (what is way too loud). I am using PulseAudio instead of Alsa.How can I adjust the default sound volume level to an arbitrary value (e.g. 30 %)?
PulseAudio on Debian 9: How to adjust default sound volume level?
Can you try the following format for creating the container? Note that all options such as --mount, -p and --name must come before the image name; anything placed after the image is passed to the container as arguments, which is what happened in your command: docker run -it --mount type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH> --mount type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH> -p host_port:container_port --name myservice <IMAGE> Edit: the creation command has been edited. The above worked for me: docker run -ti --mount type=volume,src=cust_vol2,dst=/cust_vol2 --mount type=volume,src=cust_vol1,dst=/cust_vol1 -p 8024:8024 --name mycontainer centos $ docker inspect mycontainer "Mounts": [ { "Type": "volume", "Source": "cust_vol2", "Target": "/cust_vol2" }, { "Type": "volume", "Source": "cust_vol1", "Target": "/cust_vol1" } ] If it still fails, please provide the verbose output if possible.
I've created two volumes I wish to use to store content of two folders /etc/php and /var/www inside of container: $ docker volume create dvwa_etcphp $ docker volume create dvwa_wwwI have a container, which I run using the command: docker run --rm -it -p 80:80 vulnerables/web-dvwa --name dvwatest \ --mount type=volume,source=dvwa_www,target=/var/www \ --mount type=volume,source=dvwa_etcphp,tagret=/etc/php$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c700546a86b7 vulnerables/web-dvwa "/main.sh --name dvw…" 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp quirky_hawkingBut when I do: $ docker inspect c70it gives me (I've removed everything that seem to me superfluos): [ { "Id": "c700546a86b74de5f2f941ee92fd72406e2a29e2e06ca85532658d1fa6ddbae5", "Created": "2020-01-22T13:36:23.976013357Z", "Path": "/main.sh", "Args": [ "--name", "dvwatest", "--mount", "type=volume,source=dvwa_www,target=/var/www", "--mount", "type=volume,source=dvwa_etcphp,tagret=/etc/php" ], "Name": "/quirky_hawking", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "docker-default", "ExecIDs": null, "HostConfig": { "VolumeDriver": "", "VolumesFrom": null, "Name": "overlay2" }, "Mounts": [], "Config": { "Hostname": "c700546a86b7", }, "Cmd": [ "--name", "dvwatest", "--mount", "type=volume,source=dvwa_www,target=/var/www", "--mount", "type=volume,source=dvwa_etcphp,tagret=/etc/php" ], "Image": "vulnerables/web-dvwa", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/main.sh" ], "OnBuild": null, "Labels": { "maintainer": "[emailprotected]" } }, } ]I want to use volumes to save changes in two folders: /etc/php and /var/www. And when I stop container and run a new one I want it to use these two volumes with edited config files. Docker version is 19.03.2 Thank you!
Docker volume mount using CLI command
Your i3 version is very old, you need to update it to at least 4.11 (bindsym for i3bar is mentioned in its release notes). You can find the user's guide for your version here.
I can run bindsym button4 exec amixer -D pulse sset Master 5%+ and bindsym button5 exec amixer -D pulse sset Master 5%- to adjust the volume from a terminal session. However when I add the commands to my config for i3 like so: bar { #status_command i3status #status_command i3blocks -c ~/.i3/i3blocks.conf #status_command ~/.i3/Bar.sh status_command conky -c /etc/config/conky/conky.conf font pango:Monospace colors { background $bg-color separator #757575 # border background text focused_workspace $bg-color #000000 $text-color inactive_workspace $inactive-bg-color $inactive-bg-color $inactive-text-color urgent_workspace $urgent-bg-color $urgent-bg-color $text-color } bindsym button4 exec amixer -D pulse sset Master 5%+ bindsym button5 exec amixer -D pulse sset Master 5%- }I have an error in my config: ERROR: CONFIG: Expected one of these tokens: <end>, '#', 'set', 'i3bar_command', 'status_command', 'socket_path', 'mode', 'hidden_state', 'id', 'modifier', 'position', 'output', 'tray_output', 'font', 'binding_mode_indicator', 'workspace_buttons', 'verbose', 'colors', '}' ERROR: CONFIG: (in file /home/kalenpw/.i3/config) ERROR: CONFIG: Line 206: } ERROR: CONFIG: Line 207: ERROR: CONFIG: Line 208: bindsym button4 exec amixer -D pulse sset Master 5%+ ERROR: CONFIG: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR: CONFIG: Line 209: bindsym button5 exec amixer -D pulse sset Master 5%- ERROR: CONFIG: Line 210: } ERROR: CONFIG: Expected one of these tokens: <end>, '#', 'set', 'i3bar_command', 'status_command', 'socket_path', 'mode', 'hidden_state', 'id', 'modifier', 'position', 'output', 'tray_output', 'font', 'binding_mode_indicator', 'workspace_buttons', 'verbose', 'colors', '}' ERROR: CONFIG: (in file /home/kalenpw/.i3/config) ERROR: CONFIG: Line 207: ERROR: CONFIG: Line 208: bindsym button4 exec amixer -D pulse sset Master 5%+ ERROR: CONFIG: Line 209: bindsym button5 exec amixer -D pulse sset Master 5%- ERROR: CONFIG: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR: CONFIG: Line 210: } ERROR: CONFIG: Line 211: ERROR: FYI: You are using i3 version 4.7.2 (2014-01-23, branch "tags/4.7.2")I'm mirroring the syntax found on the i3wm user guide: bar { # disable clicking on workspace buttons bindsym button1 nop # execute custom script when scrolling downwards bindsym button5 exec ~/.i3/scripts/custom_wheel_down }It appears to me like the syntax is right and the issue is definitely with those 2 bindsym lines because it is fine without them. How can I fix this so I can control the volume when I scroll on the statusbar?
i3wm amixer controls from i3status
A symbolic link can be used to implement your filesystem map: cd / && ln -s home/user/project This creates /project pointing at /home/user/project. Note that a symlink will not show up in df output the way a mount does.
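If you specifically want the directories to appear in df like in the desired output, bind mounts are an alternative sketch (paths taken from the question; this is a different approach from the symlink above, not what that answer proposes):
mkdir -p /project/data /project/log
mount --bind /home/user/project/data /project/data
mount --bind /home/user/project/log /project/log
# to make them permanent, add lines like these to /etc/fstab:
# /home/user/project/data /project/data none bind 0 0
# /home/user/project/log  /project/log  none bind 0 0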
I have lot of available space in /home df -h output Filesystem Size Used Avail Use% Mounted on /dev/sda2 8,9G 2,1G 6,4G 25% / tmpfs 499M 4,0K 499M 1% /dev/shm /vol/home 2,7T 2,3T 403G 86% /homeInside /home/user/project I have the following directories: $ ls /home/user/project log data binIs it possible to like "mount" this directories? I want to achieve this: $ df -h /dev/sda2 8,9G 2,1G 6,4G 25% / tmpfs 499M 4,0K 499M 1% /dev/shm /vol/home 2,7T 2,3T 403G 86% /home /vol/home/user 2,7T 2,3T 403G 86% /project/data /vol/home/user 2,7T 2,3T 403G 86% /project/log
Mount and/or simulate volumes with existing directories?
Use the --private option of vzctl create: vzctl create CTID --private /vz/primary vzctl create CTID --private /vz/secondary If each container has a dedicated partition, also consider specifying --layout simfs; ploop might be unneeded overhead.
I would like to install two OpenVZ templates, each one in a different logical volume. I have created a separate partition, made it a physical volume, assigned it to a volume group and split it into two logical volumes, vzprimary and vzsecondary, mounted on /vz/vzprimary and /vz/vzsecondary like this: [root@primary lost+found]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_box0-lv_root 18G 3.7G 13G 23% / tmpfs 946M 224K 946M 1% /dev/shm /dev/sda1 477M 65M 387M 15% /boot /dev/mapper/lvm-vzprimary 4.7G 9.7M 4.4G 1% /vz/primary /dev/mapper/lvm-vzsecondary 4.7G 9.7M 4.4G 1% /vz/secondary I would like to install one template in each logical volume. Can I do that? From what I read, I could not find an option like that: the vzctl create command requires only the CT ID and the name of the OS template as arguments.
Setting up OpenVZ container private area
This is quite tricky to do on a live system. The organization you've chosen is very inflexible to resizing. My recommendation is to move some large chunk of the root partition into /home and create a symbolic link. If you really want to resize, here's a way to do it. I recommend practicing first in a virtual machine, because you risk making your system unbootable (if you're lucky) or losing your data (if you're unlucky). Do make sure your backups are up to date.
1. Stop all services other than sshd. We're going to desynchronize the RAID, and any modification to files on / or /home performed after this point will be lost.
2. Manually fail the RAID components on /dev/sdb and remove them from the array. Also turn off swap on /dev/sdb2: mdadm /dev/md1 -f /dev/sdb1; mdadm /dev/md1 -r /dev/sdb1; mdadm /dev/md3 -f /dev/sdb3; mdadm /dev/md3 -r /dev/sdb3; swapoff /dev/sdb2
3. Repartition /dev/sdb. I recommend that you use a more flexible partitioning scheme, with LVM. That way any resizing you want to do later will be a lot easier. Make a single RAID 1 volume spanning the whole disk, except for the swap space. I'll assume the new partition for RAID is /dev/sdb1 and /dev/sdb2 is again swap space. It doesn't matter in what order the volumes are.
4. Make /dev/sdb1 part of a RAID 1 volume with a single component for now: mdadm --create /dev/md4 -l 1 -n 2 missing /dev/sdb1
5. Make the new RAID volume an LVM physical volume, and create a volume group containing it: pvcreate /dev/md4; vgcreate main /dev/md4
6. Create a root logical volume with the desired size, and a home LV spanning the rest of the available space: lvcreate --size 40g -n root main; lvcreate -l 100%FREE -n home main
7. Create filesystems on /dev/mapper/main-root and /dev/mapper/main-home. Also run mkswap /dev/sdb2.
8. Mount the new filesystems and copy your data there: mkdir /media/new_root /media/new_home; mount /dev/mapper/main-root /media/new_root; mount /dev/mapper/main-home /media/new_home; cp -ax / /media/new_root; cp -ax /home/. /media/new_home
9. Run chroot /media/new_root and update the storage configuration to the new organization. You'll need to update /etc/fstab to mount /dev/mapper/main-root on / and /dev/mapper/main-home on /home. Also comment out the swap entry for /dev/sda2. You'll also need to make the new system bootable, which depends on your bootloader. Note that LILO and Grub2 can boot from LVM but Grub 0.9x cannot.
10. Reboot to the new system. Only do this after you've done all these steps in a VM and confirmed that it works!
11. Repartition /dev/sda identically to /dev/sdb. Run mkswap /dev/sda2 then swapon /dev/sda2. You can now uncomment the entry for /dev/sda2 in /etc/fstab.
12. Add /dev/sda1 to the new RAID1 array and let it synchronize in the background: mdadm --add /dev/md4 /dev/sda1
I have a server using software RAID (raid1) and I need to increase my volume on my root partition. I've been googling around with no luck of finding out how I can do this. I have 2x1TB RAID1. My df -h: Filesystem Size Used Avail Use% Mounted on rootfs 20G 20G 0 100% / /dev/root 20G 20G 0 100% /, devtmpfs 3.9G 4.0K 3.9G 1% /dev none 4.0K 0 4.0K 0% /sys/fs/cgroup none 788M 256K 788M 1% /run none 5.0M 0 5.0M 0% /run/lock none 3.9G 0 3.9G 0% /run/shm none 100M 0 100M 0% /run/user overflow 1.0M 4.0K 1020K 1% /tmp /dev/md3 898G 72M 852G 1% /homeMy fdisk -l: Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x000e1568Device Boot Start End Blocks Id System /dev/sdb1 * 4096 40962047 20478976 fd Linux RAID autodetect /dev/sdb2 40962048 42008575 523264 82 Linux swap / Solaris /dev/sdb3 42008576 1953517567 955754496 fd Linux RAID autodetectDisk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x000a0d60Device Boot Start End Blocks Id System /dev/sda1 * 4096 40962047 20478976 fd Linux RAID autodetect /dev/sda2 40962048 42008575 523264 82 Linux swap / Solaris /dev/sda3 42008576 1953517567 955754496 fd Linux RAID autodetectDisk /dev/md3: 978.7 GB, 978692538368 bytes 2 heads, 4 sectors/track, 238938608 cylinders, total 1911508864 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000Disk /dev/md3 doesn't contain a valid partition tableDisk /dev/md1: 21.0 GB, 20970405888 bytes 2 heads, 4 sectors/track, 5119728 cylinders, total 40957824 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000Disk /dev/md1 doesn't contain a valid partition table
How do I increase my root volume
I found my answer in this link. I was unable to install the tool from packages (Ubuntu 18, production box), so I downloaded the sources and built it from here. I ran the following commands: make and then sudo make install. For my specific issue I ran this command: sudo nvme id-ctrl -v /dev/nvme1n1 > nvme1n1Log The result was: NVME Identify Controller: vid : 0x1d0f ssvid : 0x1d0f sn : vol063$$$$$$$$$$60 mn : Amazon Elastic Block Store The sn field above gives the exact volume ID for the device. EDIT sudo lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 259:0 0 100G 0 disk └─nvme0n1p1 259:1 0 100G 0 part / nvme1n1 259:2 0 1.2T 0 disk ├─vg1-log (dm-0) 252:0 0 100G 0 lvm /mnt/logs ├─vg1-backups (dm-1) 252:1 0 300G 0 lvm /mnt/backups ├─vg1-data (dm-2) 252:2 0 692G 0 lvm /mnt/data └─vg1-swap (dm-3) 252:3 0 8G 0 lvm [SWAP]
I am running out of space on a particular filesystem. I know this with the following command df -H $ sudo df -H Filesystem Size Used Avail Use% Mounted on udev 4.1G 13k 4.1G 1% /dev tmpfs 807M 73M 734M 10% /run /dev/nvme0n1p1 106G 34G 68G 33% / none 4.1k 0 4.1k 0% /sys/fs/cgroup none 5.3M 0 5.3M 0% /run/lock none 4.1G 0 4.1G 0% /run/shm none 105M 0 105M 0% /run/user /dev/mapper/vg1-log 106G 97G 3.3G 97% /mnt/logs /dev/mapper/vg1-data 732G 615G 81G 89% /mnt/data /dev/mapper/vg1-backups 317G 317G 0 100% /mnt/backupsMy EC2 has the following Root device /dev/sda1 - EBS ID vol-0fe5#########3b0 Block devices /dev/sda1 /dev/sdb - EBS ID vol-0631########7560How do I map which volume I should increase the size of ? I ran the following commands to get any kind of mapping between the EBS ID and the /dev/device but did not find any $ ls -l /dev/mapper total 0 crw------- 1 root root 10, 236 May 28 14:17 control lrwxrwxrwx 1 root root 7 Jun 9 18:09 vg1-backups -> ../dm-1 lrwxrwxrwx 1 root root 7 Jun 9 18:09 vg1-data -> ../dm-2 lrwxrwxrwx 1 root root 7 Jun 9 18:09 vg1-log -> ../dm-0 lrwxrwxrwx 1 root root 7 Jun 9 18:09 vg1-swap -> ../dm-3Please share a simple process for me to map them. But I have tried more commands sudo dmsetup ls --tree, sudo df -H, $ sudo lsblk -o KNAME,TYPE,SIZE,MODEL KNAME TYPE SIZE MODEL nvme0n1 disk 100G Amazon Elastic Block Store nvme0n1p1 part 100G nvme1n1 disk 1.2T Amazon Elastic Block Store dm-0 lvm 100G dm-1 lvm 300G dm-2 lvm 692G dm-3 lvm 8GAll point to nvme0n1.
Mapping between EC2 volume and your mounted filesystem
Pulseaudio has its own set of volume controls, and the pulse device is the converter that lets ALSA-only applications use Pulseaudio. I very much doubt setting any mixer control on the pulse device does anything sensible. And I'm not sure which value reading mixer controls would return, possibly the volume setting of the default sink (but I'd have to read the source code to find out). If you want to control the volume of Pulseaudio applications from the command line (no matter if they use Pulseaudio via ALSA, or directly), have a look at pacmd.
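For example, a few one-liners against the default sink (pactl and pacmd ship with PulseAudio; sink names and current values will differ per system):
pactl set-sink-volume @DEFAULT_SINK@ 30%    # set an absolute volume
pactl set-sink-volume @DEFAULT_SINK@ +5%    # relative change
pacmd list-sinks | grep volume              # inspect what PulseAudio itself reports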
Running the command: amixer -D pulse sset Master 30%should set the audio volume to 30% right? When running: amixer get MasterIt returns saying the audio volume is 52%. Any explanation or solution to my problem? Thanks!
Amixer returning wrong audio value
Yes, you can lvremove LV1 without affecting data on LV2. That's why they are separate LVs. Before vgreducing a PV out of the VG, you should check that the PV is reported as completely free by either the pvs or the pvdisplay command. If not, and you have other PVs in the VG with free space available, you can use the pvmove command to move the data out of the PV you wish to remove and onto one (or more, if necessary) of the PVs you're planning to keep - while the LVs are mounted and in use. (That's one of the things that makes LVM awesome when you need to avoid downtime.) The simplest way to use pvmove is just to specify the name of the PV you wish to make empty. It is smart enough to look at other PVs in the same VG and find free space for any data it needs to move. Of course, you can also specify the destination PV - or multiple destination PVs if the data you need to move won't fit onto any single PV you wish to keep. pvmove will first move data from the source PV to the first destination PV until that destination becomes full, and then continue on to the next specified destination PV. Once the PV is completely free (pvs reports PFree = PSize for it, or pvdisplay <PV device name> reports "Allocated PE" = 0), you're free to vgreduce it out of the VG. After that, you're free to remove the PV from the system. If you're planning to reuse the disk without repartitioning or otherwise overwriting it, you can use pvremove to remove the LVM PV header from the disk, but anything else that makes the system no longer see the LVM PV header will work just as well. (At that point, all the non-historical LVM metadata referring to that PV is on that PV itself. If that partition or disk vanishes, LVM will understand just fine that the PV is gone.)
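A minimal command sketch of that sequence (device names here are hypothetical; vg0 as in the question):
pvs -o pv_name,vg_name,pv_size,pv_free    # confirm how much data still sits on each PV
pvmove /dev/sdX1                          # migrate its extents to the remaining PVs, online
vgreduce vg0 /dev/sdX1                    # drop the now-empty PV from the VG
pvremove /dev/sdX1                        # wipe the LVM header if the disk will be reused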
I have 2 logical volumes, lv1 and lv2, which are part of the same volume group vg0. I have to remove 2 physical disks that are associated with vg0. Can I do an lvremove, vgreduce and pvremove on lv1 without affecting the data on lv2?
Can I delete a Logical Volume from a Volume group with out affecting data on an other logical volume in the same volume group
You can use --select to filter pvs output. In this case --select vgname=<name> will do the trick: # pvs --select vgname=test -o pv_free --unit=g --no-suffix --no-heading 0.07(I also have 3 VGs and test has only one PV with 70 MiB of free space.) Check pvs --select help for more options.
On my RHEL 7.6 machine I used the pvs command to display the PFree values: pvs PV VG Fmt Attr PSize PFree /dev/sda2 rhel_rhel7 lvm2 a-- <39.00g 4.00m /dev/sdc vg_fg lvm2 a-- <200.00g 0 /dev/sdd docker lvm2 a-- <110.00g 0 A more precise way to display only PFree in gigabytes is this: pvs -o pv_free --unit=g --no-suffix --no-heading 0.00 0 0 Since we have different VGs, we get 3 PFree values. Any advice on how to get the PFree value in gigabytes for a specific VG? For example, let's say we want only the PFree value of the rhel_rhel7 volume group, so the expected result should be: pvs .......................... 0.00
pvs + how to get the values of pfree in Giga for specific Volume group
Instead of adding disks at the operating-system level you can do this directly in Hadoop by adding them to the dfs.datanode.data.dir property. The format is <property> <name>dfs.datanode.data.dir</name> <value>file:///disk/c0t2,/disk/c0t3,/dev/sde,/dev/sdf</value> </property> I am not 100% sure Hadoop can handle raw disks. In that case you can create one big partition on each new disk, format it, mount the partitions at /var/hadoop3 and /var/hadoop4 (see the sketch below for those steps), and use this format: <property> <name>dfs.datanode.data.dir</name> <value>file:///disk/c0t2,/disk/c0t3,/var/hadoop3,/var/hadoop4</value> </property>
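A hedged sketch of those OS-level steps for one of the new disks (device name sde as in the question; the filesystem choice and mount options are assumptions):
parted -s /dev/sde mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sde1
mkdir -p /var/hadoop3
mount /dev/sde1 /var/hadoop3
echo '/dev/sde1 /var/hadoop3 ext4 defaults 0 2' >> /etc/fstab   # persist across reboots
# repeat for sdf -> /var/hadoop4, then list both paths in dfs.datanode.data.dir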
We have the following disks from lsblk; none of the disks are LVM: sdc 8:32 0 80G 0 disk /var/hadoop1 sdd 8:48 0 80G 0 disk /var/hadoop2 sde 8:64 0 80G 0 disk sdf 8:80 0 80G 0 disk The sdc and sdd disks are full (100% used). The status is that sdc and sdd are full and we can use them, but we have new disks sde and sdf, each with a size of 20G. So is it possible to add the sde disk to sdc in order to give another 20G to sdc?
is it possible to increase disk size by using/adding another clean disk
I assume that /mnt/volume_nyc1_01 is the mountpoint for the new ext4 volume and that there is a line in /etc/fstab mounting this volume on /mnt/volume_nyc1_01. The steps you mention are not technically wrong, but if you follow them you'll end up with an empty /home directory - since it's a new ext4 fs, only lost+found will be there. The steps I follow in such cases are: 1) Stop any service, daemon or app using /home; lsof | grep "/home" can help you with this. 2) Leave /etc/fstab as is for the time being and copy all data from /home to /mnt/volume_nyc1_01. I'd use this command: sudo rsync -aHAXS /home/ /mnt/volume_nyc1_01 (the trailing slash copies the contents of /home, including any hidden entries a * glob would miss). 3) After everything is successfully copied, you can proceed with the steps you described. The new volume will be mounted as /home and will include all your data. If everything is up and running, at some point in the future you could unmount the new volume from the /home mountpoint, delete the files in the /home directory which will still be there, and reclaim space on your first volume. Be cautious: /home is still a directory on the 1st volume, which is also used as a mountpoint for the 2nd volume. So deleting files from the /home directory is possible, without deleting files from the 2nd volume, if you umount the 2nd volume first. If all this seems complicated, just leave the old files there.
I mounted a new ext4 storage volume to my server after I had already installed an application that primarily uses /home. This application needs to take advantage of the additional storage, so I want to remount the volume so that it's used by the /home directory. Can anyone confirm my steps below? umount -v /mnt/volume_nyc1_01 # Edit /etc/fstab # Replace the second field of the mountpoint's entry with /home mount -avI appreciate the feedback. I'm asking in hopes to avoid messing up my system by overlooking important considerations.
Remounting /home in a new volume
This worked for me: # cat /etc/udev/rules.d/99-hide-partitions.rules KERNEL=="sda*", ENV{UDISKS_IGNORE}="1"You should run udevadm control --reload (as root) after modifying any udev rules, and log out and in to your desktop environment.
I am using Debian 9 with xfce and I would like to hide an unmounted volume in the Desktop. I tried to install udisk; however, I ran into a lot of problems. Does anyone know an easy way to hide an unmounted volume?
How to hide a specific unmounted volume
This sounds like your volume control might not be controlling the volume of the right device. I don't run LXQt, but this post by cipricus has an image of what the selection dialog would look like: make sure you've selected "PulseAudio" (or PipeWire, if that's being offered), and pick the same sink you pick in pavucontrol to control the volume. In an answer to the same post, cipricus also recommends using kmix. I don't know how much of KDE that pulls in (which you probably didn't want if you're using LXQt; you might, however, already have most of it, so it's not really a strong argument against; for me, it installs fine). You can also install qasmixer, which however seems not to be aware of PulseAudio at all, and good ole pasystray. That's the one I'd recommend if the default LXQt volume chooser doesn't work. It's simple, it can just be put into the autostart application list, and it just… works.
I am on Fedora 38 with LXQt. The volume control has no effect on the volume: if I increase it or decrease it until it is mute, the volume always stays the same. The only way around I have found is to use pavucontrol, which is not straightforward. Moreover, whenever I click on the screen, or pause a video, the volume gets back to its previous state.
Volume control not working on Fedora 38
1. Stop the instance
2. Detach the 8GB volume
3. Snapshot the 8GB volume
4. Create a new volume from that snapshot with the desired capacity, e.g. 50GB
5. Attach the new volume using /dev/sda1
6. Boot the instance
7. Grow the file system on /dev/sda1 (the exact command depends on the file system, e.g. for xfs it is xfs_growfs; see the example below), otherwise you will only see 8GB of available capacity even though the disk you created is larger
8. Optionally move data from the other 50GB volume to the new disk, and detach it
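For step 7, a hedged example of the grow commands (device names vary by instance type, e.g. /dev/xvda1 or /dev/nvme0n1p1; growpart comes from the cloud-utils package and is only needed if the partition itself hasn't grown):
lsblk                         # confirm the root partition and its current size
sudo growpart /dev/xvda 1     # expand partition 1 if it is still 8GB
sudo xfs_growfs /             # for an XFS root filesystem
sudo resize2fs /dev/xvda1     # or this, for ext4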
So, I have an EC2 instance and 2 volumes, 8GB and 50GB. Initially the 8GB volume was mounted as the root device (/dev/sda1). I intended to switch the root to the 50GB volume. Here are the steps I followed: - Stop the instance - Detach the 8GB volume - Detach the 50GB volume - Attach the 50GB volume to the instance using /dev/sda1 - Start the instance Now I see status checks 1/2 passed. I cannot log in to the machine. Can anyone explain how to fix this issue? I stopped and started the instance again but it did not work. Please help.
EC2 1/2 status checks while replacing the volume
Many device access problems can be resolved through group membership changes. You can find the device name by watching sudo journalctl --follow as you connect your device. OR ls -1 /dev >dev.before, connect the device, wait 10 seconds, ls -1 /dev >dev.after;diff dev.{before,after}. Specifically, if ls -l shows that the group permissions (the second "rwx" triplet) is "rw" (e.g."-rw-rw----"), then, adding oneself to the group that owns the device will grant rw access. Here's how: # change to your device name device="/dev/dvdrw" sudo adduser $USER $(stat -c "%G" $device)This allows you membership in the group that can rw the device, but there is one more step. To make all your processes members of the new group, logout and login. Group memberships are set up at login time. To create a single process in the new group (for testing, prior to logout/login): newgrp $(stat -c "%G" $device) or, just type the group name. See man newgrp.
I have just installed brightnessctl to control my screen brightness, but can only run it as root. Doing otherwise prints the suggestion that I "get write permission for device files". What is the correct way to do this? I would also like to be able to set the volume with amixer without root privileges, which I assume is the same issue.
How do I "get write permission for device files" in Linux?
I successfully resized the disk by doing the following: After increasing the size of the image in AWS console from 150Gb to 300Gb, I executed the following commands: [AWS root@archive ~]$ pvresize /dev/xvda4 Physical volume "/dev/xvda4" changed 1 physical volume(s) resized / 0 physical volume(s) not resizedREBOOT [AWS root@archive ~]$ growpart /dev/xvda 4 CHANGED: partition=4 start=83875365 old: size=230693400 end=314568765 new: size=545262165,end=629137530 [AWS root@archive ~]$ pvresize /dev/xvda4 Physical volume "/dev/xvda4" changed 1 physical volume(s) resized / 0 physical volume(s) not resized [AWS root@archive ~]$ lvextend /dev/vg_archive/lv_root -l+100%FREE New size (37776 extents) matches existing size (37776 extents) Run `lvextend --help' for more information. [AWS root@archive ~]$ resize2fs /dev/vg_archive/lv_root resize2fs 1.41.12 (17-May-2010) The filesystem is already 38682624 blocks long. Nothing to do!REBOOT again as it wasn't working [AWS root@archive ~]$ lvextend /dev/vg_archive/lv_root -l+100%FREE New size (37776 extents) matches existing size (37776 extents) Run `lvextend --help' for more information. [AWS root@archive ~]$ resize2fs /dev/vg_archive/lv_root resize2fs 1.41.12 (17-May-2010) The filesystem is already 38682624 blocks long. Nothing to do![AWS root@archive ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_archive-lv_root 146G 131G 8.0G 95% / tmpfs 938M 0 938M 0% /dev/shm /dev/xvda1 485M 80M 380M 18% /boot [AWS root@archive ~]$ pvresize /dev/xvda4 Physical volume "/dev/xvda4" changed 1 physical volume(s) resized / 0 physical volume(s) not resized [AWS root@archive ~]$ lvextend /dev/vg_archive/lv_root -l+100%FREE Extending logical volume lv_root to 297.56 GiB Logical volume lv_root successfully resized [AWS root@archive ~]$ resize2fs /dev/vg_archive/lv_root resize2fs 1.41.12 (17-May-2010) Filesystem at /dev/vg_archive/lv_root is mounted on /; on-line resizing required old desc_blocks = 10, new_desc_blocks = 19 Performing an on-line resize of /dev/vg_archive/lv_root to 78004224 (4k) blocks. The filesystem on /dev/vg_archive/lv_root is now 78004224 blocks long.[AWS root@archive ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_archive-lv_root 293G 131G 149G 47% / tmpfs 938M 0 938M 0% /dev/shm /dev/xvda1 485M 80M 380M 18% /bootSo I'm not entirely sure what command required a reboot, but I had to reboot in order for the resize to succeed. I hope this helps someone else
I tried following the instructions in: Can't resize a partition using resize2fs But nothing seemed to work. The output of lsblk is: [AWS root@archive ~]$ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT xvda 202:0 0 300G 0 disk ├─xvda1 202:1 0 500M 0 part /boot ├─xvda2 202:2 0 29.5G 0 part │ ├─vg_archive-lv_root (dm-0) 253:0 0 147.6G 0 lvm / │ └─vg_archive-lv_swap (dm-1) 253:1 0 2G 0 lvm [SWAP] ├─xvda3 202:3 0 10G 0 part │ └─vg_archive-lv_root (dm-0) 253:0 0 147.6G 0 lvm / └─xvda4 202:4 0 110G 0 part └─vg_archive-lv_root (dm-0) 253:0 0 147.6G 0 lvm /You can see that 300Gb is available, but I've been unable to extend the root volume from 150Gb. Any help greatly appreciated, thanks. Update: thought I'd add the linux distro, it's old which might be part of the problem... Linux version 2.6.32-358.18.1.el6.x86_64 ([emailprotected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Wed Aug 28 17:19:38 UTC 2013 As requested in the comments, this is the output from the suggested commands from the link above: [AWS root@archive ~]$ sudo pvs PV VG Fmt Attr PSize PFree /dev/xvda2 vg_archive lvm2 a-- 29.51g 0 /dev/xvda3 vg_archive lvm2 a-- 9.99g 0 /dev/xvda4 vg_archive lvm2 a-- 110.00g 0 [AWS root@archive ~]$ sudo pvresize /dev/xvda2 Physical volume "/dev/xvda2" changed 1 physical volume(s) resized / 0 physical volume(s) not resized [AWS root@archive ~]$ sudo pvresize /dev/xvda3 Physical volume "/dev/xvda3" changed 1 physical volume(s) resized / 0 physical volume(s) not resized [AWS root@archive ~]$ sudo pvresize /dev/xvda4 Physical volume "/dev/xvda4" changed 1 physical volume(s) resized / 0 physical volume(s) not resized [AWS root@archive ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_archive-lv_root 146G 131G 8.0G 95% / tmpfs 938M 0 938M 0% /dev/shm /dev/xvda1 485M 80M 380M 18% /boot [AWS root@archive ~]$ sudo lvextend -r -l +100%FREE /dev/mapper/vg_archive-lv_root Extending logical volume lv_root to 147.56 GiB Logical volume lv_root successfully resized resize2fs 1.41.12 (17-May-2010) The filesystem is already 38682624 blocks long. Nothing to do![AWS root@archive ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_archive-lv_root 146G 131G 8.0G 95% / tmpfs 938M 0 938M 0% /dev/shm /dev/xvda1 485M 80M 380M 18% /bootUpdate: it would appear the fs type is ext4 from the below output [AWS root@archive ~]$ df -Th Filesystem Type Size Used Avail Use% Mounted on /dev/mapper/vg_archive-lv_root ext4 146G 131G 8.0G 95% / tmpfs tmpfs 938M 0 938M 0% /dev/shm /dev/xvda1 ext4 485M 80M 380M 18% /bootUpdate: output of cfdisk /dev/xvda as requested: cfdisk (util-linux-ng 2.17.2) Disk Drive: /dev/xvda Size: 322122547200 bytes, 322.1 GB Heads: 255 Sectors per Track: 63 Cylinders: 39162 Name Flags Part Type FS Type [Label] Size (MB) --------------------------------------------------------------------------------------------------------------------------------------------- Unusable 1.05 * xvda1 Boot Primary Linux ext3 524.29 * xvda2 Primary Linux LVM 31686.92 * xvda3 Primary Linux LVM 10731.94 * xvda4 Primary Linux LVM 118115.03 Unusable 161063.34 *
cannot resize disk on aws instance
which is actually a bad way to do things like this, as it makes guesses about your environment based on $SHELL and the startup files (it thinks) that shell uses; not only does it sometimes guess wrong, but you can't generally tell it to behave differently. (which on my Ubuntu 10.10 doesn't understand --skip-alias as mentioned by @SiegeX, for example.) type uses the current shell environment instead of poking at your config files, and can be told to ignore parts of that environment, so it shows you what will actually happen instead of what would happen in a reconstruction of your default shell. In this case, type -P will bypass any aliases or functions: $ type -P vim /usr/bin/vim You can also ask it to peel off all the layers, one at a time, and show you what it would find: $ type -a vim vim is aliased to `vim -X' vim is /usr/bin/vim (Expanding on this from the comments:) The problem with which is that it's usually an external program instead of a shell built-in, which means it can't see your aliases or functions and has to try to reconstruct them from the shell's startup/config files. (If it's a shell built-in, as it is in zsh but apparently not bash, it is more likely to use the shell's environment and do the right thing.) type is a POSIX-compliant command which is required to behave as if it were a built-in (that is, it must use the environment of the shell it's invoked from including local aliases and functions), so it usually is a built-in. It isn't generally found in csh/tcsh, although in most modern versions of those which is a shell builtin and does the right thing; sometimes the built-in is what instead, and sometimes there's no good way to see the current shell's environment from csh/tcsh at all.
Like most users, I have a bunch of aliases set up to give a default set of flags for frequently used programs. For instance, alias vim='vim -X' alias grep='grep -E' alias ls='ls -G'The problem is that if I want to use which to see where my vim/grep/ls/etc is coming from, the alias gets in the way: $ which vim vim: aliased to vim -XThis is useful output, but not what I'm looking for in this case; I know vim is aliased to vim -X but I want to know where that vim is coming from. Short of temporarily un-defining the alias just so I can use which on it, is there an easy way to have which 'unwrap' the alias and run itself on that? Edit: It seems that which is a shell-builtin with different behaviors across different shells. In Bash, SiegeX's suggestion of the --skip-alias flag works; however, I'm on Zsh. Does something similar exist there?
How to use `which` on an aliased command?
When you run a command in bash it will remember the location of that executable so it doesn't have to search the PATH again each time. So if you run the executable, then change the location, bash will still try to use the old location. You should be able to confirm this with hash -t pip3 which will show the old location. If you run hash -d pip3 it will tell bash to forget the old location and should find the new one next time you try.
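An illustrative session based on the paths in the question (the output shown is what you would expect, not guaranteed verbatim):
$ hash -t pip3        # the stale location bash has cached
/usr/bin/pip3
$ hash -d pip3        # forget just this entry (hash -r clears the whole table)
$ type -P pip3        # resolved freshly from PATH
/usr/local/bin/pip3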
When I do which pip3 I get /usr/local/bin/pip3 but when I try to execute pip3 I get an error as follows: bash: /usr/bin/pip3: No such file or directory This is because I recently deleted that file. Now the which command points to another version of pip3 that is located in /usr/local/bin but the shell still remembers the wrong path. How do I make it forget about that path? The which manual says which returns the pathnames of the files (or links) which would be executed in the current environment, had its arguments been given as commands in a strictly POSIX-conformant shell. It does this by searching the PATH for executable files matching the names of the arguments. It does not follow symbolic links. Both /usr/local/bin and /usr/bin are in my PATH variable, and /usr/local/bin/pip3 is not a symbolic link, it's an executable. So why doesn't it execute?
Bash remembers wrong path to an executable that was moved/deleted
The three possibilities that come to mind for me: (1) an alias exists for emacs (which you've checked); (2) a function exists for emacs; (3) the new emacs binary is not in your shell's PATH hashtable. You can check if you have a function emacs: bash-3.2$ declare -F | fgrep emacs declare -f emacs And remove it: unset -f emacs Your shell also has a PATH hashtable which contains a reference to each binary in your PATH. If you add a new binary with the same name as an existing one elsewhere in your PATH, the shell needs to be informed by updating the hashtable: hash -r Additional explanation: which doesn't know about functions, as it is not a bash builtin: bash-3.2$ emacs() { echo 'no emacs for you'; } bash-3.2$ emacs no emacs for you bash-3.2$ which emacs /usr/bin/emacs bash-3.2$ `which emacs` --version | head -1 GNU Emacs 22.1.1 New binary hashtable behaviour is demonstrated by this script. bash-3.2$ PATH=$HOME/bin:$PATH bash-3.2$ cd $HOME/bin bash-3.2$ cat nofile cat: nofile: No such file or directory bash-3.2$ echo echo hi > cat bash-3.2$ chmod +x cat bash-3.2$ cat nofile cat: nofile: No such file or directory bash-3.2$ hash -r bash-3.2$ cat nofile hi bash-3.2$ rm cat bash-3.2$ cat nofile bash: /Users/mrb/bin/cat: No such file or directory bash-3.2$ hash -r bash-3.2$ cat nofile cat: nofile: No such file or directory Although I didn't call it, which cat would always return the first cat in my PATH, because it doesn't use the shell's hashtable.
I've compiled the last emacs version from the source code (v24.2) because the version installed on my machine is (quite) old for me (v21.3). I've done the usual: $configure --prefix=$HOME make make install Now I am testing emacs and realized that it still launches the previous version ... while my $HOME/bin path is supposed to override the system one (since it is prepended to $PATH in my .bashrc file). My first thought was to see the which command output. And surprise, it gives the path to the new emacs. I can't understand where is the discrepancy here. In the same session here is the different outputs: $ emacs --version GNU Emacs 21.3.1$ `which emacs` --version GNU Emacs 24.2.1I have no alias involving emacs. At all. $ alias | grep emacs $Any idea what is going on please?
My `which` command may be wrong (sometimes)?
zsh is one of the few shells where which does something sensible, since it's a shell builtin (the other one being tcsh: which originated as a csh script for csh users, which also had its limitations, and tcsh made it a builtin as an improvement). But somehow you or your OS (via some rc file) broke it by replacing it with a call to the system which command, which can't do anything sensible reliably since it doesn't have access to the internals of the shell and so can't know how that shell interprets a command name. In zsh, all of which, type, whence and where are builtin commands used to find out what commands are, but with different outputs. They're all there for historical reasons; you can get all of their behaviours with different flags to the whence command. You can get the details of what each does by running: info zsh which info zsh whence ... Or type info zsh, then bring up the index with i, and enter the builtin name (completion is available). And avoid using /usr/bin/which. There's no shell nowadays where that which is needed. As Timothy says, use the builtin that your shell provides for that. Most POSIX shells will have the type command, and you can use command -v to only get the path of a command (though both type and command -v are optional in POSIX (but not Unix, and no longer in LSB), they are available in most if not all the Bourne-like shells you're likely to ever come across). (BTW, it looks like /usr/bin appears twice in your $PATH; you could add typeset -U path to your ~/.zshrc.)
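For instance, a quick tour of the whence flags in zsh (output is illustrative, using the cc example from the question):
% whence cc         # path only, like command -v
/usr/bin/cc
% whence -v cc      # verbose, what type does
cc is /usr/bin/cc
% whence -c cc      # csh-style report, what the which builtin does
/usr/bin/cc
% whence -a cc      # every match in $path, what where does
/usr/bin/cc
/usr/bin/cc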
What is the difference between where and which shell commands? Here are some examples ~ where cc /usr/bin/cc /usr/bin/cc ~ which cc /usr/bin/ccand ~ which which which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde' /usr/bin/which ~ which where /usr/bin/which: no where in (/usr/local/bin:/bin:/usr/bin:/home/bnikhil/bin:/bin)also ~ where which which: aliased to alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde which: shell built-in command /usr/bin/which /usr/bin/which ~ where where where: shell built-in commandTo me it seems that they do the same thing one being a shell builtin, not quite sure how that is different from a command?
What is the difference between which and where
This should be a standard solution: type, type -t, type -p
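Where even type's options aren't available, POSIX command -v (mentioned in other answers here) is another fallback that ash/BusyBox builds generally provide; a small hedged sketch:
# prints the resolved path, or fails with a non-zero status
cmd=grep
if p=$(command -v "$cmd" 2>/dev/null); then
    printf '%s is %s\n' "$cmd" "$p"
else
    printf '%s: not found\n' "$cmd" >&2
fi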
If the which command is not available, is there another 'standard' method to find out where a command's executable can be found? If there is no other 'standard' method available, the actual system I face currently is a bare Android emulator with an ash Almquist shell, if that means anything.
Is there an alternative to the `which` command? [duplicate]
This is happening because ~ has not been expanded. Your shell knows how to deal with this, but which does not (nor would most other programs). Instead, do: export "PATH+=:$HOME/Unix/homebrew/bin" Alternatively, stop using which, and use the (almost always superior) type -p. Here is a demonstration of the issue: $ echo "$PATH" /usr/local/bin:/usr/bin:/bin $ export "PATH+=:~/git/yturl" $ yturl Usage: yturl id [itag ...] $ which yturl $ type -p yturl /home/chris/git/yturl/yturl $ export "PATH=/usr/local/bin:/usr/bin:/bin:$HOME/git/yturl" $ which yturl /home/chris/git/yturl/yturl Bear in mind that some other programs that look at $PATH may not understand the meaning of ~ either, and take it as part of a relative path. It's more portable to use $HOME.
I have installed node.js at custom location and added the location to the $PATH in .profile file. $ node --version v0.6.2 $ which node $ echo $PATH /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:~/Unix/homebrew/bin $ cat ~/.profile export PATH="$PATH:~/Unix/homebrew/bin"Node.js itself runs well. The problem is it is not listed by which command. So I can't install npm now. Because npm install cannot find the location of node.js. How can I make the node binary discovered by which?
How to add home directory path to be discovered by Unix which command?
This sounds like your package database is screwed up. First I'd identify all the versions of xdg-open that you have on your system. The type builtin should always be used for this task; never rely on which or whereis. Example: identify all xdg-open's. $ type -a xdg-open xdg-open is /usr/bin/xdg-open Find out which packages they're a part of. $ dpkg -S /usr/bin/xdg-open xdg-utils: /usr/bin/xdg-open You'll want to either repeat the above dpkg -S .. for each match returned by type -a or use this dpkg -S .. search instead. $ dpkg -S xdg-open xdg-utils: /usr/bin/xdg-open xdg-utils: /usr/share/man/man1/xdg-open.1.gz I would do each, one at a time. Reinstalling xdg-utils: if you'd like to refresh this package's installation, do this: $ sudo apt-get install --reinstall xdg-utils
$ xdg-open The program 'xdg-open' is currently not installed. You can install it by typing: sudo apt-get install xdg-utils$ sudo apt-get install xdg-utils Reading package lists... Done Building dependency tree Reading state information... Done xdg-utils is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 89 not upgraded.$ whereis xdg-open xdg-open: /usr/bin/xdg-open /usr/bin/X11/xdg-open /usr/share/man/man1/xdg-open.1.gz$ which xdg-open$ xdg-open The program 'xdg-open' is currently not installed. You can install it by typing: sudo apt-get install xdg-utilsNo, I didn't mean "recursion". I'm on Linux Mint 15 MATE, but instead of MATE I'm using the i3 window manager. Edit taking @slm's advice $ type -a xdg-open type: xdg-open not foundBut it's in /usr/bin/xdg-open. I checked. $ dpkg -S /usr/bin/xdg-open xdg-utils: /usr/bin/xdg-openThe next one was even more interesting. $ dpkg -S xdg-open git-annex: /usr/share/doc/git-annex/html/bugs/Fix_for_opening_a_browser_on_a_mac___40__or_xdg-open_on_linux__47__bsd__63____41__.html xdg-utils: /usr/bin/xdg-open xdg-utils: /usr/share/man/man1/xdg-open.1.gzThe bug-fix is just a mail archive of a patch for an OSX problem. Anyway, I guess I could try using the full path: $ /usr/bin/xdg-open /usr/bin/xdg-open: No such file or directory
xdg-open is installed yet also is not installed
Use dirname: cd "`dirname $(which program)`"
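The same idea with $(...) nesting and quoting that survives paths containing spaces (and, in bash, type -P to bypass aliases and functions, as discussed in other answers here):
cd "$(dirname "$(which someprogram)")"
cd "$(dirname "$(type -P someprogram)")"   # bash: ignores aliases and functions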
I would like to take the output of a which command, and cd to the parent directory. For example, say I have the following: which someprogramWith output: /home/me/somedirectory/someprogramAnd I would like to cd to the directory that someprogram lives in: cd /home/me/somedirectoryI'd like to accomplish this in one line. What is the most elegant, tricky, short way to do this?
Output of which command used for input to cd
In the 21st century, especially if you're targeting machines that are likely to have bash or zsh, you can count on type being available. (It didn't exist in extremely old unices, as in, from the 1970s or early 1980s.) You can't count on its output meaning anything, but you can count on its returning 0 if there is a command by that name and nonzero otherwise. which isn't standard and is unreliable in practice. type is the recommended alternative. whereis suffers from the same problems as which and is less common. whence is specific to ksh and zsh. When that's possible, it would be more reliable to test the existence of a command and test whether its behavior looks reasonable. For example, test the presence of a suitable version of bash by running bash -c 'somecommand', e.g. # Test for the `-v` operator (which appeared in bash 4.2) if bash -c 'test -v HOME' 2>/dev/null; then … Today you can count on almost everything in the Single UNIX Specification version 2 (except for exotic stuff like Fortran and SCCS, which are optional anyway). You can count on most of version 3, too, but this isn't completely implemented everywhere yet. Version 4 support is sketchier. If you're going to read these specs, I recommend reading version 3, which is a lot more readable and less ambiguous than version 2. For examples as to how to detect system specificities, look at autoconf and at configure scripts of various software. See also Resources for portable shell programming for more tips.
I've been frustrated before with differences in output from the which command across different platforms (Linux vs. Solaris vx. OS X), with different shells possibly playing into the matter as well. type has been suggested as a better alternative, but how portable would that be? In the past I've written functions which parse the output of which and handle the different use cases I've run into. They work across the machines I use, and so are okay for my personal scripts, but this seems terribly unreliable for software that I'm going to post somewhere for others to use. To take just one possible example, suppose I have to detect from a script whether bash and zsh are available on a machine, and then run a command with zsh if it is present, and with bash if zsh is not and bash is of a sufficient version to not have a particular bug. Most of the rest of the script could be Bourne shell or Ruby or anything else, but this one particular thing must be done (AFAIK) with either zsh or a recent version of bash. Can I count on type being available across platforms? Is there some other alternative to which which can easily and consistently answer the question of whether a particular piece of software is installed? (If you want to also give ideas specifically related to the example I gave, that's great, but I'm mainly just asking about the general case: what is the most reliable way to find out if a particular thing is installed on a given machine?)
What is the best way to detect (from a script) whether software is installed?
Check your path. It's not that hard to end up with duplicates in it. Example: »echo $PATH /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin: »which -a bash /bin/bash /usr/bin/bash This is because my /bin is a symlink to /usr/bin. Now: »export PATH=$PATH:/usr/bin »echo $PATH /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/bin »which -a bash /bin/bash /usr/bin/bash /usr/bin/bash Since /usr/bin is now in my $PATH twice, which -a finds the same bash twice.
which -a ruby gives me /usr/ruby /usr/ruby /usr/rubyIt gives the same path three times. Why does this happen?
Why does the "which" command give duplicate results?
Setup: $ /usr/bin/which --show-dot a ./a $ /usr/bin/which --show-tilde a ~/a If you wanted the . version when run interactively, but the ~ version when redirected, you could use this as an alias: /usr/bin/which --show-tilde --tty-only --show-dot Demo: # interactive / on a tty $ /usr/bin/which --show-tilde --tty-only --show-dot a ./a # not interactive / redirected to a file $ /usr/bin/which --show-tilde --tty-only --show-dot a > output $ cat output ~/a All the options you specify after --tty-only are taken into account only when the output is a tty.
I just realized that my sysadmin has created a global alias for which: alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'The which manpage just says:Stop processing options on the right if not on tty.What does this mean?
What does which's --tty-only option do?
You probably have an alias or a shell function called “conda”. Type type condaand see what it says.
Background I log into a server to do scientific computations. It runs 'Scientific Linux version 7.4'. In order to get access to different software I have to run a command like 'module load x'. For instance to use python I need to write 'module load python'. I don't know much about this module system but from what I can tell it just modifies some environmental variables. Typing "module show python" reveals module-whatis This module sets up PYTHON 3.6 in your environment. conflict python append-path MODULEPATH /global/software/sl-7.x86_64/modfiles/python/3.6 setenv PYTHON_DIR /global/software/sl-7.x86_64/modules/langs/python/3.6 prepend-path PATH /global/software/sl-7.x86_64/modules/langs/python/3.6/bin prepend-path CPATH /global/software/sl-7.x86_64/modules/langs/python/3.6/include prepend-path FPATH /global/software/sl-7.x86_64/modules/langs/python/3.6/include prepend-path INCLUDE /global/software/sl-7.x86_64/modules/langs/python/3.6/include prepend-path LIBRARY_PATH /global/software/sl-7.x86_64/modules/langs/python/3.6/lib prepend-path PKG_CONFIG_PATH /global/software/sl-7.x86_64/modules/langs/python/3.6/lib/pkgconfig prepend-path MANPATH /global/software/sl-7.x86_64/modules/langs/python/3.6/share/manWhen I load python I also gain access to conda (whose executable is found in /global/software/sl-7.x86_64/modules/langs/python/3.6/bin). Problem Normally I cannot run conda without first loading the python module. But recently I noticed that this changed and now I can run conda without loading the python module. This confused me so I typed 'which conda' to see if I could find what executable is being run, but when I do it says that 'no conda is found' in any of the directories on my PATH variable. How is it possible that 'which' cannot find the conda executable despite the fact that I can still run conda?
"which" can't find location of executable even though it runs
Solved with: rpm -e --justdb --nodeps which sudo yum install whichStill not sure what could have caused this problem though...
We booted up our CentOS (6.4 Final, kernel version 2.6.32.... i686) VM today (running on a Windows machine, yeah I know...) and for some bizarre reason the 'which' binary has gone missing. (Last week, everything was fine). ls -la /usr/bin shows no which. Although another strange thing: there was a 'which-nodejs' symlink which pointed to a file that's missing. We since re-installed Node.js though (no 'which-nodejs' now, but it didn't help). We just noticed we're also missing the 'clear' command. Please might anyone be able to suggest a way we could get 'which' back, without reinstalling everything?
Missing 'which' executable on CentOS
The one that gets output when you run which without -a is the one that will get executed (and the second one listed with -a is preferred over the third one). This doesn't take into account the shell's builtins, aliases, and functions which will run (from within the shell) before any other executable. Therefore, it's better to use type instead.
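For example, in bash (the paths are the ones from the question; output is illustrative):
$ type -a python
python is /opt/local/bin/python
python is /usr/bin/python
Unlike which, type -a would also list an alias or function named python before the files.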
I run which and get the following, brendan$ which python /opt/local/bin/python brendan$ which -a python /opt/local/bin/python /usr/bin/python brendan$ ls -l /opt/local/bin/python lrwxr-xr-x 1 root admin 24 22 Jul 00:45 /opt/local/bin/python -> /opt/local/bin/python2.4 brendan$ python Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) ... (this is the python version in /usr/local/bin)My point is, which does not tell me the primary executable, i.e. the one that will be executed in preference. How do I find this out? I am running OSX 10.6 on a Macbook although the question is general to UNIX-likes. Update: I have been removing lots of redundant versions of Python on my system (I had at least half a dozen) and removing various crufty PATH declarations in a bunch of initialisation files. In the process, somehow, a fresh shell now shows the expected output (i.e. which shows /opt/local/bin/python and that is what is executed). In any case, thanks for the help!
How to determine which executable on my path will be run?
command -pv uses a "default value for PATH". $ which ruby /home/mikel/.rvm/rubies/ruby-1.9.3-p484/bin/ruby$ command -pv ruby /usr/bin/rubyUnfortunately that doesn't work in zsh, so based on Stephane's comment, we could use getconf PATH: $ PATH=$(getconf PATH) which rubyor use command -v in place of which, as recommended in Why not use "which"? What to use then? $ PATH=$(getconf PATH) command -v rubyThe downside with these approaches is that if the system administrator installed a system-wide version into say /usr/local/bin or /opt/local/bin, and all users had that in PATH (e.g. via /etc/profile or /etc/environment or similar), the above probably wouldn't find it. In your specific case, I'd suggest trying something specific to Ruby. Here's some ideas that might work:Filter out versions in the user's home directory (and relative paths): ( IFS=: set -f for dir in $PATH; do case $dir/ in "$HOME/"*) ;; /*/) if [ -f "$dir/ruby" ] && [ -x "$dir/ruby" ]; then printf '%s\n' "$dir/ruby" break fi;; esac done )Filter out versions in the rvm directory ( IFS=: set -f for dir in $PATH; do case $dir/ in "$rvmpath/"*) ;; /*/) if [ -f "$dir/ruby" ] && [ -x "$dir/ruby" ]; then printf '%s\n' "$dir/ruby" break fi;; esac done )Filter out writeable rubies (last resort, assumes not running as root) ( IFS=: set -f for dir in $PATH; do case $dir/ in /*/) ruby=$dir/ruby if [ -f "$ruby" ] && [ -x "$ruby" ] && [ ! -w "$ruby" ]; then printf '%s\n' "$ruby" break fi;; esac done )Ask rvm, chruby, etc. ( rvm system chruby system command -v ruby )The last way makes rvm select the default Ruby, but we do it in a subshell, so that the user's preferred Ruby is restored afterwards. (chruby part untested.)
I would like to use which with the system's default path, ignoring any embellishments from the user's shell configuration files. Motivation I am attempting to write a script to find the system's Ruby binary. Many Ruby developers use a Ruby version manager, which adds something like ~/.rvm/bin to the start of their $PATH. I want to bypass this and use the version of Ruby that came with the system, or was installed via the system's package manager. Current solution Here's what I've tried so far: $ env -i sh -c "which ruby"This gives no output, and exits with 1. I would expect it to work though, because the path includes /usr/bin, and my system came with a Ruby binary at /usr/bin/ruby: $ env -i sh -c "echo \$PATH" /usr/gnu/bin:/usr/local/bin:/bin:/usr/bin:. $ which -a ruby # ... /usr/bin/rubyA few additional details:env -s bash -c "which ruby" also doesn't find anything. env -i zsh -c "which ruby" does find /usr/bin/ruby, but I can't depend on zsh. Using the full path to which (to make sure I'm using the binary, not the shell built-in) doesn't make any difference.My environment I'm writing this in Bash on OS X, but would like it to be portable to other shells and operating systems.
How do I use which(1) with the system's default $PATH?
Classically, which was a csh script, and it printed an error message like no foo in /usr/bin:/bin and returned a success status. (At least one common version, there may have been others that behaved differently.) Example from FreeBSD 1.0 (yes, that's ancient): if ( ! $?found ) then echo no $arg in $path endif(This classic implementation is also notorious for loading the user's .cshrc, which could change the PATH, which would cause the output to be wrong.) Modern systems usually have a different implementation of which, either written in C or in sh, and follow modern standards of dealing with error conditions: no output to stdout and a nonzero exit status.
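Because of that variation, a wrapper that wants to be robust is better off not parsing which output at all. A sketch of an equivalent check, keeping the variable name from the question:
javaExecutable=$(command -v javac 2>/dev/null) || javaExecutable=
if [ -n "$javaExecutable" ]; then
    : # javac was found; use "$javaExecutable"
fi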
I am reading the source code of the Maven wrapper written for the Bourne shell. I came across these lines: if [ -z "$JAVA_HOME" ]; then javaExecutable="$(which javac)" if [ -n "$javaExecutable" ] && ! [ "$(expr "$javaExecutable" : '\([^ ]*\)')" = "no" ]; then # snipexpr when used with arg1 and arg2 and a : matches arg1 against the regex arg2. Normally, the result would be the amount of matching characters, e.g.: $ expr foobar : foo 3However, when using capturing parentheses (\( and \)), it returns the content of the first capturing parentheses: $ expr foobar : '\(foo\)' fooSo far, so good. If I evaluate the expression from the source quoted above on my machine, I get: $ javaExecutable=$(which javac) $ expr "$javaExecutable" : '\([^ ]*\)' /usr/bin/javacFor a non-existing executable: $ nonExistingExecutable=$(which sjdkfjkdsjfs) $ expr "$nonExistingExecutable" : '\([^ ]*\)'Which means that for a non-existing executable the output is a empty string with newline. What's puzzling me in the source is how the output of which javac (arg1 to expr) ever returns the string no? Is there some version of which which, instead of returning nothing, returns no when no executable can be found? If not, this statement always evaluates to true and that would be weird.
Does any implementation of `which` output "no" when executable cannot be found?
For zsh, which is shorthand for whence -c, and supports other whence options. In particular: -p Do a path search for name even if it is an alias, reserved word, shell function or builtin.So: $ which git git: aliased to noglob git $ which -p git /usr/bin/git
In zsh, when I enter which git it shows: git: aliased to noglob gitHow do I find out which git binary it actually invokes? (eg: /usr/bin/git vs ~/bin/git). Basically I want to bypass the aliases when I use which.
How do I find the actual binary/script using 'which' in zsh? [duplicate]
executable=mysql executable_path=$(command -v -- "$executable") && dirname -- "$executable_path" (don't use which). Of course, that won't work if $executable is a shell builtin, function or alias. I'm not aware of any shell where mysql is a builtin. It won't be a function or alias unless you defined them earlier, but then you should know about it. An exception to that could be bash which supports exported functions. $ bash -c 'command -v mysql' /usr/bin/mysql $ mysql='() { echo test;}' bash -c 'command -v mysql' mysql
If I want to return the path of a given executable, I can run: which mysqlWhich returns for example: /usr/bin/mysqlI'd like to return only: /usr/binHow can I do that?
How to return the directory of a given executable?
I would guess that you have /home/sawa/foo/bar/ on your path - i.e. a path with a trailing slash. which is iterating over each element of $PATH and appending /argv[1] and checking for the existence of that file. That causes a double-slash - one from the $PATH part, and one from /argv[1]. A double-slash is no problem. It is collapsed to a single slash by the kernel. Only at the beginning of a path may a double-slash have special meaning, and not always then. As for test not working, ensure you are not using the shell built-in when calling test. You usually do this by using a full path, but with bash you can also use enable -n test to disable the built-in test command.
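For example, in bash you can see all the candidates and disable the builtin for the current shell; the output below is illustrative, assuming the PATH from the question:
$ type -a test
test is a shell builtin
test is /home/sawa/foo/bar//test
test is /usr/bin/test
$ enable -n test
$ type test
test is /home/sawa/foo/bar//test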
I have an executable script test under the full path /home/sawa/foo/bar/test. The directory /home/sawa/foo/bar is within $PATH, and has priority over the default ones including /usr/bin. When I do `which test`to see whether this command is correctly recognized, it returns /home/sawa/foo/bar//testwith the double slash //. I know that there is a built in command with the same name test, and when I remove mine, this one under /usr/bin/test is returned by which, so I think it's interfering in some way.What does this double slash mean here, and why is it appearing here? My executable test does not seem to work correctly. Why is that?
What does '//' mean in return from `which`
Bash caches the location of commands. Use hash foo to force it to update the cache. Also, which is a separate command that doesn't tell you where your shell is actually looking; it just consults the $PATH environment variable. In bash, you should use type instead: $ type foo foo is hashed (/a/foo)
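For example, using the paths from the question (the hit count is illustrative):
$ type foo
foo is hashed (/b/foo)
$ hash
hits    command
   1    /b/foo
$ hash foo        # redo the PATH lookup for foo and remember the result
$ type foo
foo is hashed (/a/foo)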
Observation: I have an executable named foo, located in /b/foo. It is compiled against an old header of a dynamic library, causing it to segfault when executed: $ foo Segmentation fault. // Expected behaviour.Now, I compile a new version of foo to /a/foo against the new dynamic library that should execute just fine. Directory a/ is in my $PATH before b/, so /a/foo should be selected: $ which foo /a/fooWhen I execute foo, the following happens: $ foo Segmentation fault.Therefore, it seems that /b/foo gets executed, whereas "which" tells me that /a/foo should be executed. To make things weirder, when I run the full path $(which /a/foo), things run fine: $ /a/foo OK!$ cp /a/foo . $ ./foo OK!To go yet one step further, if I now delete /a/foo: $ rm /a/fooThen /b/foo must surely be chosen, right? $ which foo /b/foo $ foo bash: /a/foo: No such file or directory $ $(which foo) Segmentation fault. // Expected result.Nope! Fix: Source .bash_profile and .bashrc and the problem disappears. Reproducibility: Every time. Just remove /a/foo, source ~/.bash_profile, create /a/foo, and the above observation re-occurs. Question: Does anyone have an idea what went wrong here? Hypothesis: "which" is up-to-date, but the system chooses based on "what was used to be the case". In my example above, /a/foo did not yet exist when the terminal was opened: I had only just created it. Therefore, when /a/foo was created, "which" did detect /a/foo, but the system still chose /b/foo, because it was somehow out of sync? But why is the system out of sync?
Linux/bash does not execute the executable that "which" tells me [duplicate]
I placed unset which in .bashrc as a work around. This worked for both bash and php. This morning I commented the unset which from .bashrc. The issue no longer exists. I believe the issue was created and resolved by upstream code changes. Thank you @ilkkachu!
We are running: Red Hat Enterprise Linux release 8.5 (Ootpa). We allow the server to update weekly using "yum -y update". We recently began receiving the following errors when executing shell commands:sh: which: line 1: syntax error: unexpected end of file sh: error importing function definition for `which'I am not aware of anything we changed that could cause this error -- this why I am pointing to the yum system update. Much of our code is written in php where we use passthru() to run shell commands. When we rarely write a shell script we generally use bash. I have the ability to prevent these errors using "unset which" prior to running a php script that uses passthru(). We run many shell commands from many scripts. Updating each of these scripts is not an effective solution -- time consuming and potentially introduces defects. It is interesting to note that the following code Does Not cause an error: #!/bin/sh echo helloIs there a way to 'unset which' once at the system or user level so it would be executed each time we run the shell? Perhaps there is a shell profile that I can change or something similar. Update 1 I found the following in /etc/profile.d/which2.sh # shellcheck shell=sh # Initialization script for bash, sh, mksh and kshwhich_declare="declare -f" which_opt="-f" which_shell="$(cat /proc/$$/comm)"if [ "$which_shell" = "ksh" ] || [ "$which_shell" = "mksh" ] || [ "$which_shell" = "zsh" ] ; then which_declare="typeset -f" which_opt="" fi which () { (alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" }export which_declare export ${which_opt} whichThe syntax looks correct here. I can tell that it exports 'which' -- but I'm not sure why. If I knew what this is doing, I could evaluate the risk of unset which. Update 2 Created a small php script: #!/usr/bin/php <?php passthru('echo hello'); ?>This Does create the error.
sh: which: line 1: syntax error: unexpected end of file; sh: error importing function definition for `which'
It looks like at some previous time in your bash session, the "wrong" executable was called and then its pathname was remembered by Bash (that's normal; this feature prevents further PATH lookups for already known commands). To fix this you should run $ hash -d ipython This clears the remembered location of ipython, so Bash needs to search PATH again to find the command (and that's when it finds the right executable).
I am trying to run ipython from the bash (version 4.4.19) command line. As a Python developer, I have various installations of ipython at various versions in various virtualenvs' paths, and so it is important to know which one I am running. Hence the $PATH is always changed when I change virtualenv, and this would a typical value for PATH: $ echo $PATH /Users/jab/.virtualenvs/tools/bin:/Users/jab/bin:/Users/jab/src/git/hub/jab/bin:/usr/local/gnu:/bin:/usr/local/bin:/usr/binThe important detail in that is that the first entry is "/Users/jab/.virtualenvs/tools/bin", and that the file /Users/jab/.virtualenvs/tools/bin/ipython does exist: $ ls -l /Users/jab/.virtualenvs/tools/bin/ipython -rwxr-xr-x 1 jab staff 252 May 11 15:18 /Users/jab/.virtualenvs/tools/bin/ipythonAs expected, which says that that file will be run as the "$ ipython" command $ which ipython /Users/jab/.virtualenvs/tools/bin/ipython$ $(which ipython) -c "import sys; print(sys.executable)" /Users/jab/.virtualenvs/tools/bin/pythonHowever, that is not actually the case, and /usr/local/bin/ipython is run instead $ ipython -c "import sys; print(sys.executable)" /usr/local/bin/python3Can someone explain why bash is ignoring my $PATH and using the "wrong" executable? And what do I need to change (in my bashrc, or on my system (macOS 10.12.3)) so that executables are chosen by bash in the order determined by my $PATH. Note: This is not a duplicate Bash is not finding a program even though it's on my path, because that asks how PATH works to find any program, whereas this question is anout why the wrong program is found.
Why is the "wrong" executable being run? [duplicate]
In a shell command like PATH=~/bin:/opt/texbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games the tilde is expanded to your home directory when the shell command is executed. Thus the resulting value of PATH is something like /home/theconjuring/bin:/opt/texbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games. Make sure that the tilde isn't within quotes (PATH="~/bin:…"), otherwise it stands for itself. To prepend a directory to the current value of PATH, you can use PATH=~/bin:$PATH In general, in shells other than zsh, $PATH outside double quotes breaks when the value contains spaces or other special characters, but in an assignment, it's safe. With export, however, you need to write export PATH=~/bin:"$PATH" (though you don't need export with PATH since it's already in the environment). In zsh, you don't need double quotes except when the variable may be empty, but if you set PATH in .profile, it's processed by /bin/sh or /bin/bash. If you're setting PATH in ~/.pam_environment, however, you can't use ~ or $HOME to stand for your home directory. This file is not parsed by a shell, it's a simple list of NAME=value lines. So you need to write the paths in full.
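For example, starting from the original PATH each time and assuming the home directory is /home/theconjuring:
$ PATH="~/bin:$PATH"
$ echo "$PATH"
~/bin:/opt/texbin:…        # the tilde stays literal, so ~/bin is never actually searched
$ PATH=~/bin:$PATH
$ echo "$PATH"
/home/theconjuring/bin:/opt/texbin:…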
I've made a fresh install of Debian Wheezy, and installed zsh to it. Few days after, I've done a vanilla installation of TeX Live 2014, so I added the necessary binary paths to my $PATH. Now I started writing little scripts so I would like to put them somewhere easily accessible, that is ~/bin. My path looks like this: ~/bin:/opt/texbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/gamesNow, if I wanted to run something from TeX Live, it's easy: % which pdflatex /opt/texbin/pdflatexNo problem. But when I try running something from ~/bin, ... % which hello_world hello_world not foundSo I double-checked: % ls -l ~/bin total 18 -rwxr-xr-x 1 bozbalci bozbalci 5382 Sep 8 00:28 hello_worldAnd it shows that hello_world is doing fine in ~/bin with its execution permissions set. I've tried rehash, but it didn't work. Help?
$PATH environment variable does not seem to be recognized
It's probably because of the $PATH. Do this in your shell outside of crontab: command -v searchd | xargs dirnameThis command will return a directory where searchd is on your system or an error if you don't have searchd in your $PATH even in an interactive shell. Now do this at the top of your script you execute in crontab: PATH=<directory_from_above_command>:$PATHAlternatively just use a full path to searchd instead of which searchd. Also read this on which if you want to fully understand how it works: Why not use "which"? What to use then?.
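For example, if the command above prints /home/user/bin (an assumption for illustration), the top of the script becomes:
#!/usr/bin/env bash
PATH=/home/user/bin:$PATH
Alternatively, cron accepts a PATH=… assignment on its own line at the top of the crontab, before the job entries.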
I would like to make crontab run this script as a regular user: #!/usr/bin/env bashPIDFILE=wp-content/uploads/sphinx/var/log/searchd.pidif ! test -f ${PIDFILE} || ! kill -s 0 `cat ${PIDFILE}`; then `which searchd` --config /home/user/www/wordpress-page/wp-content/uploads/sphinx/sphinx.conf fiIt simply reruns Sphinx Search daemon, because my shared server kills all my daemons if anything exceeds 1GB of ram (its Webfaction). When I call that script by hand via CLI command it works, but if I attach it in crontab (using crontab -e) I got an email with an error which: no searchd in (/usr/bin:/bin) /home/user/www/wordpress-page/run-searchd.sh: line 8: --config: command not foundSimply which searches root level, but I would like it to behave as called by myself when I log in via ssh as regular user. How to make that happen?
How to use which command in Crontab?
It's because your root user has a different path. sudo echo $PATHprints your path. It's your shell that does the variable expansion, before sudo starts (and passes it as a command line argument, expanded). Try: sudo sh -c 'echo $PATH'
ssh bobby@tony:~$ which tmux /usr/bin/tmux ssh bobby@tony:~$ sudo which tmux /usr/local/bin/tmux ssh bobby@tony:~$ echo $PATH /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/git/bin:/usr/local/sbin:/usr/local/sbin ssh bobby@tony:~$ sudo echo $PATH /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/git/bin:/usr/local/sbin:/usr/local/sbinAnyone knows what's going on here? Why does sudo which tmux return /usr/local/bin/tmux instead of /usr/bin/tmux? PS: I have 2 versions of tmux installed (one in /usr/bin and the other in /usr/local/bin).
Why `which tmux` and `sudo which tmux` return 2 different values?
There’s no difference; on Ubuntu, some Debian systems, and various other distributions, /bin is a symbolic link to /usr/bin, so binaries appear in both locations. Packages can ship files in either location; to find the package providing a given binary, look for bin/ followed by the binary: dpkg -S bin/uname
I have to find the uname file on a Debian machine, check from which package it is and delete it. When I use which to find it, I get /usr/bin/uname. When I try to check it by dpkg -S uname there is no such file. There is a /bin/uname though. What is the difference between them?
What's the difference between two uname files
exit is a shell special built-in command. It is built into the shell interpreter; the shell knows about it and can execute it directly without searching anywhere. On most shells, you can use: $ type exit exit is a shell builtin You have to read the source of the shell to see how its builtin is implemented; here is a link to the source of the bash exit builtin. With bash, zsh, ksh93, mksh, pdksh, to invoke the exit built-in explicitly, use the builtin builtin command: builtin exit See How to invoke a shell built-in explicitly? for more details.
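So for the example in the question you don't need a path at all; you can wrap the built-in with a shell function that falls back to builtin, e.g. in ~/.bashrc:
exit() {
    echo 123
    builtin exit "$@"
}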
Suppose I want a bash command to do something extra. As a simple example, imagine I just want it to echo "123" before running. One simply way to do this would be to alias the command. Since we still need the original, we can just refer to it by its exact path, which we can find using which. For example: $ which rm /bin/rm $ echo "alias rm='echo 123 && /bin/rm'" >> .bashrcThis was easy because I was able to look up the path to rm using which. However, I am trying to do this with exit, and which doesn't seem to know anything about it. $ which exit $ echo $? 1The command did not output a path, and in fact it returned a non-zero exit code, which which does when a command is not in $PATH. I thought maybe it's a function, but apparently that's not the case either: $ typeset -F | grep exit $ echo $? 1So the exit command is not defined anywhere as a function or as a command in $PATH, and yet, when I type exit, it closes the terminal. So it clearly is defined somewhere but I can't figure out where. Where is it defined, and how can I call to it explicitly?
Where is `exit` defined?
This is most likely due to ~ not acting as a variable inside double quotes in combination with which not doing its own expansion of the tilde. Use PATH="$HOME/Dev/ProductivityScripts:$PATH"instead. HOME is an environment variable and expands as usual within double quotes. Note also that since PATH is already exported, it does not need to be exported again (through it does not hurt). More information about tilde: Why doesn't the tilde (~) expand inside double quotes? See also Why not use "which"? What to use then?
Scenario I have a ProductivityScripts project on GitHub, and when I install Linux (Debian 9), I add this folder to PATH for ease of use. I.e., I add the following line to ~/.bashrc: export PATH="~/Dev/ProductivityScripts:$PATH"It works. I can now run scripts from inside this folder by name from anywhere. alec@my_host:~$ capsalt SUCCESS!However, if I type which capsalt I get no output. whiching most things works. alec@my_host:~$ which git /usr/bin/gitQuestion Shouldn't which also track down scripts that are available from locations added to PATH manually? Or is there another reason why this isn't working?
Command 'which' not showing output for custom PATH locations
popd and pushd are commands built into Bash, they're not actual executables that live on your HDD as true binaries. excerpt bash man page DIRSTACK An array variable (see Arrays below) containing the current contents of the directory stack. Directories appear in the stack in the order they are displayed by the dirs builtin. Assigning to members of this array variable may be used to modify directories already in the stack, but the pushd and popd builtins must be used to add and remove directories. Assignment to this variable will not change the current directory. If DIRSTACK is unset, it loses its special properties, even if it is subsequently reset.The full list of all the builtin commands is available in the Bash man page as well as here - http://structure.usc.edu/bash/bashref_4.html. You can also use compgen -b or enable to get a full list of all these builtins: compgen $ compgen -b | grep -E "^push|^pop" popd pushdenable $ enable -a | grep -E "\bpop|\bpus" enable popd enable pushdAdditionally if you want to get help on the builtins you can use the help command: $ help popd | head -5 popd: popd [-n] [+N | -N] Remove directories from stack. Removes entries from the directory stack. With no arguments, removes the top directory from the stack, and changes to the new top directory.$ help pushd | head -5 pushd: pushd [-n] [+N | -N | dir] Add directories to stack. Adds a directory to the top of the directory stack, or rotates the stack, making the new top of the stack the current working
I've been using pushd and popd for a long time while writing bash script. But today when I execute which pushd, I get nothing as output. I can't understand this at all. I was always thinking that pushd is simply a command, just like cd, ls etc. So why does which pushd give me nothing?
Why can't I which pushd
You should use the type command to know what is really under its name, i.e.: type cmake That might be an alias that runs a different version of cmake, or a function with a similar behavior, or finally a previously hashed command that is no longer the first one in your PATH, as you experienced.
I am running Ubuntu 12.04, which came with CMake v 2.8.7. I had need for a more current CMake, so I downloaded the source for 2.8.12.1, built, and installed it per directions. The last step, make install, I ran with sudo. ./bootstrap make sudo make install Now I want to run it, but I find that the old version is still invoked when I execute cmake from the command line: jdibling@hurricane:/$ cd /; cmake --version; which cmake cmake version 2.8.7 /usr/local/bin/cmake jdibling@hurricane:/$ Odd, I think. So I su and try it from there: root@hurricane:~# cd /; cmake --version; which cmake cmake version 2.8.12.1 /usr/local/bin/cmake root@hurricane:/# Why does which report the same directory, but cmake --version reports different versions? How can I find where the new cmake was actually installed? As suggested, I ran type: jdibling@hurricane:/tmp/cmake-2.8.12.1$ type cmake cmake is hashed (/usr/bin/cmake) jdibling@hurricane:/tmp/cmake-2.8.12.1$ sudo su - root@hurricane:~# type cmake cmake is /usr/local/bin/cmake root@hurricane:~#
'which' reports one thing, actual command is another [duplicate]
Should you really want which to behave this way, you can redefine it as a shell function that way : which() { if [ -n "$(type "$1" | grep "is aliased")" ]; then command which $(type "$1" | awk ' {cmd=gensub("[\140\047]", "", "g" , $NF);print cmd}') else command which "$1" fi }Note that while this should work if your shell is bash, the function might need to be slightly modified if you use a different shell.
Example: I have alias chrome='google-chrome'. I want which chrome to return the same thing which google-chrome returns, i.e.: /usr/bin/google-chrome Is that possible?
Can I use which command on aliases? [duplicate]
The program which determines the path of shell commands. What you did in the second statement is set a variable named python. Shell commands and variables are entirely different things. What you might like to use is an alias. alias python="/usr/local/bin/python2.7" Note that (except in zsh or tcsh, or if your which is itself a shell function that invokes GNU which, as recommended by its manual), which will not show the alias, while e.g. type python will.
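For example, in bash (output illustrative):
$ alias python="/usr/local/bin/python2.7"
$ type python
python is aliased to `/usr/local/bin/python2.7'
$ which python
/usr/bin/python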
Why isn't export python=/usr/local/bin/python2.7 changing the path to python? I am baffled by the following: $ which python /usr/bin/python $ export python=/usr/local/bin/python2.7 $ which python /usr/bin/pythonI'm using OSX v10.12.
Why doesn't "export" overwrite existing values?
$PATH is expanded before sudo is run. Therefore you are seeing the value of PATH for you, and not for the user you sudo to. try this instead: $ sudo bash -c 'echo $PATH'
If I run sudo which abc I would expect it to search the superusers $PATH for the program 'abc', but it looks like it only searches a subset. I can see this by running sudo echo $PATH and comparing the paths searched. $ sudo which abc which: no abc in (/sbin:/bin:/usr/sbin:/usr/bin)$ sudo echo $PATH /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/.local/bin:/home/ec2-user/binWhat is happening here?
Which is not searching full $PATH
In a nutshell: yes exhibits similar behavior to most other standard utilities which typically write to a FILE STREAM with output buffered by the libC via stdio. These only do the syscall write() every some 4kb (16kb or 64kb) or whatever the output block BUFSIZ is . echo is a write() per GNU. That's a lot of mode-switching (which is not, apparently, as costly as a context-switch). And that's not at all to mention that, besides its initial optimization loop, yes is a very simple, tiny, compiled C loop and your shell loop is in no way comparable to a compiler optimized program.But I was wrong: When I said before that yes used stdio, I only assumed it did because it behaves a lot like those that do. This was not correct - it only emulates their behavior in this way. What it actually does is very like an analog to the thing I did below with the shell: it first loops to conflate its arguments (or y if none) until they might grow no more without exceeding BUFSIZ. A comment from the source immediately preceding the relevant for loop states: /* Buffer data locally once, rather than having the large overhead of stdio buffering each item. */yes does its own write()s thereafter.Digression: (As originally included in the question and retained for context to a possibly informative explanation already written here):I've tried timeout 1 $(while true; do echo "GNU">>file2; done;) but unable to stop loop.The timeout problem you have with the command substitution - I think I get it now and can explain why it doesn't stop. timeout doesn't start because its command-line is never run. Your shell forks a child shell, opens a pipe on its stdout and reads it. It will stop reading when the child quits, and then it will interpret all the child wrote for $IFS mangling and glob expansions, and with the results, it will replace everything from $( to the matching ). But if the child is an endless loop that never writes to the pipe, then the child never stops looping, and timeout's command-line is never completed before (as I guess) you do Ctrl+C and kill the child loop. So timeout can never kill the loop which needs to complete before it can start.Other timeouts: ... simply aren't as relevant to your performance issues as the amount of time your shell program must spend switching between user- and kernel-mode to handle output. timeout, though, is not as flexible as a shell might be for this purpose: where shells excel is in their ability to mangle arguments and manage other processes. As is noted elsewhere, simply moving your [fd-num] >> named_file redirection to the loop's output target rather than only directing output there for the command looped over can substantially improve performance because that way at least the open() syscall need only be done the once. This also is done below with the | pipe targeted as output for the inner loops.Direct comparison: You might do like: for cmd in exec\ yes 'while echo y; do :; done' do set +m sh -c '{ sleep 1; kill "$$"; }&'"$cmd" | wc -l set -m done256659456 505401Which is kind of like the command sub relationship described before, but there's no pipe and the child is backgrounded until it kills the parent. In the yes case the parent has actually been replaced since the child was spawned, but the shell calls yes by overlaying its own process with the new one and so the PID remains the same and its zombie child still knows who to kill after all.Bigger buffer: Now let's see about increasing the shell's write() buffer. 
IFS=" "; set y "" ### sets up the macro expansion until [ "${512+1}" ] ### gather at least 512 args do set "$@$@";done ### exponentially expands "$@" printf %s "$*"| wc -c ### 1 write of 512 concatenated "y\n"'s 1024I chose that number because output strings any longer than 1kb were getting split out into separate write()'s for me. And so here's the loop again: for cmd in 'exec yes' \ 'until [ "${512+:}" ]; do set "$@$@"; done while printf %s "$*"; do :; done' do set +m sh -c $'IFS="\n"; { sleep 1; kill "$$"; }&'"$cmd" shyes y ""| wc -l set -m done268627968 15850496That's 300 times the amount of data written by the shell in the same amount of time for this test than the last. Not too shabby. But it's not yes.Felated: As requested, there is a more thorough description than the mere code comments on what is done here at this link.
Let me give an example: $ timeout 1 yes "GNU" > file1 $ wc -l file1 11504640 file1$ for ((sec0=`date +%S`;sec<=$(($sec0+5));sec=`date +%S`)); do echo "GNU" >> file2; done $ wc -l file2 1953 file2Here you can see that the command yes writes 11504640 lines in a second while I can write only 1953 lines in 5 seconds using bash's for and echo. As suggested in the comments, there are various tricks to make it more efficient but none come close to matching the speed of yes: $ ( while :; do echo "GNU" >> file3; done) & pid=$! ; sleep 1 ; kill $pid [1] 3054 $ wc -l file3 19596 file3$ timeout 1 bash -c 'while true; do echo "GNU" >> file4; done' $ wc -l file4 18912 file4These can write up to 20 thousand lines in a second. And they can be further improved to: $ timeout 1 bash -c 'while true; do echo "GNU"; done >> file5' $ wc -l file5 34517 file5$ ( while :; do echo "GNU"; done >> file6 ) & pid=$! ; sleep 1 ; kill $pid [1] 5690 $ wc -l file6 40961 file6These get us up to 40 thousand lines in a second. Better, but still a far cry from yes which can write about 11 million lines in a second! So, how does yes write to file so quickly?
How does `yes` write to file so quickly?
iostat is part of the sysstat package, which is able to show overall iops if desired, or show them separated by reads/writes. Run iostat with the -d flag to only show the device information page, and -x for detailed information (separate read/write stats). You can specify the device you want information for by simply adding it afterwards on the command line. Try running iostat -dx and looking at the summary to get a feel for the output. You can also use iostat -dx 1 to show a continuously refreshing output, which is useful for troubleshooting or live monitoring, Using awk, field 4 will give you reads/second, while field 5 will give you writes/second. Reads/second only: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4; }' Writes/sec only: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $5; }' Reads/sec and writes/sec separated with a slash: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4"/"$5; }' Overall IOPS (what most people talk about): iostat -d <your disk name> | grep <your disk name> | awk '{ print $2; }' For example, running the last command with my main drive, /dev/sda, looks like this: dan@daneel ~ $ iostat -dx sda | grep sda | awk '{ print $4"/"$5; }' 15.59/2.70 Note that you do not need to be root to run this either, making it useful for non-privileged users. TL;DR: If you're just interested in sda, the following command will give you overall IOPS for sda: iostat -d sda | grep sda | awk '{ print $2; }' If you want to add up the IOPS across all devices, you can use awk again: iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}' This produces output like so: dan@daneel ~ $ iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}' 18.88
How do I get read and write IOPS separately in Linux, using command line or in a programmatic way? I have installed sysstat package. Please tell me how do I calculate these separately using sysstat package commands. Or, is it possible to calculate them using file system? ex: /proc or /sys or /dev
How to get total read and total write IOPS in Linux?
It seems that y has turned off messages. In y's terminal, type: $ mesg is nmeaning y does not allow others to write to y's terminal. Then you should try: $ mesg yNote This option y in above command is different with y user in your case. From man mesg: NAME mesg - control write access to your terminalSYNOPSIS mesg [y|n]DESCRIPTION Mesg controls the access to your terminal by others. It's typically used to allow or disallow other users to write to your terminal (see write(1)).OPTIONS y Allow write access to your terminal. n Disallow write access to your terminal. If no option is given, mesg prints out the current access state of your terminal.
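A full exchange then looks roughly like this (the message text is just an example):
y on tty2:  $ mesg y
x on tty1:  $ write y tty2
            hello from x
            (end the message with Ctrl+D)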
I have a user named x on tty1 and a user named y on tty2. Now x wants to write some message to y and vice-versa. In x's terminal (tty1) I typed write y tty2 and it shows write:write:you have write permission turned off write:y has messages disabled The same thing shows when y sends a message to x, except that the last line says 'x' instead. What should I do?
Sending message from one terminal user to another user
It depends on the kernel, and on some kernels it might depend on the type of executable, but I think all modern systems return ETXTBSY (”text file busy“) if you try to open a running executable for writing or to execute a file that's open for writing. Documentation suggests that it's always been the case on BSD, but it wasn't the case on early Solaris (later versions did implement this protection), which matches my memory. It's been the case on Linux since forever, or at least 1.0. What goes for executables may or may not go as well for dynamic libraries. Overwriting a dynamic library causes exactly the same problem that overwriting an executable does: instructions will suddenly be loaded from the same old address in the new file, which probably has something completely different. But this is in fact not the case everywhere. In particular, on Linux, programs call the open system call to open a dynamic library under the hood, with the same flags as any data file, and Linux happily allows you to rewrite the library file even though a running process might load code from it at any time. Most kernels allow removing and renaming files while they're being executed, just like they allow removing and renaming files while they're open for reading or writing. Just like an open file, a file that's removed while it's being executed will not be actually removed from the storage medium as long as it is in use, i.e. until the last instance of the executable exits. Linux and *BSD allow it, but Solaris and HP-UX don't. Removing a file and writing a new file by the same name is perfectly safe: the association between the code to load and the open (or being-executed) file that contains the code goes by the file descriptor, not the file name. It has the additional benefit that it can be done atomically, by writing to a temporary file then moving that file into place (the rename system call atomically replaces an existing destination file by the source file). It's much better than remove-then-open-write since it doesn't temporarily put an invalid, partially-written executable in place Whether cc and ld overwrite their output file, or remove it and create a new one, depends on the implementation. GCC (at least modern versions) and Clang do this, in both cases by calling unlink on the target if it exists then open to create a new file. (I wonder why they don't do write-to-temp-then-rename.) I don't recommend depending on this behavior except as a safeguard since it doesn't work on every system (it may work on every modern systems for executables, but not for shared libraries), and common toolchains don't do things in the best way. In your build scripts, always generate files under a temporary file, then move them into place, unless you know the underlying tool does this.
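On Linux you can see the ETXTBSY behaviour, and the remove-then-recreate workaround, with a quick experiment; the exact error wording may vary:
$ cp /bin/sleep ./mysleep
$ ./mysleep 60 &
$ cp /bin/sleep ./mysleep                    # overwrite it in place while it runs
cp: cannot create regular file './mysleep': Text file busy
$ rm ./mysleep && cp /bin/sleep ./mysleep    # removing and recreating it is fine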
I have a question about overwriting a running executable, or overwriting a shared library (.so) file that's in use by one or more running programs. Back in the day, for the obvious reasons, overwriting a running executable didn't work. There's even a specific errno value, ETXTBSY, that covers this case. But for quite a while now, I've noticed that when I accidentally try to overwrite a running executable (for example, by firing off a build whose last step is cc -o exefile on an exefile that happens to be running), it works! So my questions are, how does this work, is it documented anywhere, and is it safe to depend on it? It looks like someone may have tweaked ld to unlink its output file and create a new one, just to eliminate errors in this case. I can't quite tell if it's doing this all the time, or only if it needs to (that is, perhaps after it tries to overwrite the existing file, and encounters ETXTBSY). And I don't see any mention of this on ld's man page. (And I wonder why people aren't complaining that ld may now be breaking their hard links, or changing file ownership, and like that.)Addendum: The question wasn't specifically about cc/ld (although that does end up being a big part of the answer); the question was really just "How come I never see ETXTBSY any more? Is it still an error?" And the answer is, yes, it is still an error, just a rare one in practice. (See also the clarifying answer I just posted to my own question.)
Overwriting a running executable or .so
You must have "noclobber" set, check the following example: $ echo 1 > 1 # create file $ cat 1 1 $ echo 2 > 1 # overwrite file $ cat 1 2 $ set -o noclobber $ echo 3 > 1 # file is now protected from accidental overwrite bash: 1: cannot overwrite existing file $ cat 1 2 $ echo 3 >| 1 # temporary allow overwrite $ cat 1 3 $ echo 4 > 1 bash: 1: cannot overwrite existing file $ cat 1 3 $ set +o noclobber $ echo 4 > 1 $ cat 1 4"noclobber" is only for overwrite, you can still append though: $ echo 4 > 1 bash: 1: cannot overwrite existing file $ echo 4 >> 1To check if you have that flag set you can type echo $- and see if you have C flag set (or set -o |grep clobber). Q: How can I avoid writing a blank file when my base command fails? Any requirements? You could just simply store the output in a variable and then check if it is empty. Check the following example (note that the way you check the variable needs fine adjusting to your needs, in the example I didn't quote it or use anything like ${cmd_output+x} which checks if variable is set, to avoid writing a file containing whitespaces only. $ cmd_output=$(echo) $ test $cmd_output && echo yes || echo no no $ cmd_output=$(echo -e '\n\n\n') $ test $cmd_output && echo yes || echo no no $ cmd_output=$(echo -e ' ') $ test $cmd_output && echo yes || echo no no $ cmd_output=$(echo -e 'something') $ test $cmd_output && echo yes || echo no yes$ cmd_output=$(myAPICommand.exe parameters) $ test $cmd_output && echo "$cmd_output" > myFile.txtExample without using a single variable holding the whole output: log() { while read data; do echo "$data" >> myFile.txt; done; } myAPICommand.exe parameters |log
I'm trying to run a command, write that to a file, and then I'm using that file for something else. The gist of what I need is: myAPICommand.exe parameters > myFile.txtThe problem is that myAPICommand.exe fails a lot. I attempt to fix some of the problems and rerun, but I get hit with "cannot overwrite existing file". I have to run a separate rm command to cleanup the blank myFile.txt and then rerun myAPICommand.exe. It's not the most egregious problem, but it is annoying. How can I avoid writing a blank file when my base command fails?
How can I output a command to a file, without getting a blank file on error?
The issue You have a (mostly) exhaustive list of system calls here. You will notice that there is no "replace the content of this inode" call. Modifying that content always implies: Opening the file to get a file descriptor. optional seek to the desired write offset Writing to the file. optional Truncating old data, if new data is smaller. Step 4 can be done earlier. There are some shortcuts as well, such as pwrite, which writes directly at a specified offset, combining steps #2 and #3, or scatter writing. An alternate way is to use a memory mapping, but it gets worse as every byte written may be sent to the underlying file independently (conceptually as if every write was a 1-byte write call). → The point is the very best scenario you can have is still 2 operations: one write and one truncate. Whatever the order you perform them in, you still risk another process messing with the file in between and ending up with a corrupted file. Solutions Normal solution As you have noted, this is why the canonical approach is to create a new file that you know you are the only writer of (you can even guarantee this by combining O_TMPFILE and linkat), then atomically redirect the old name to the new file. There are two other options, however both fail in some way: Mandatory locking It enables file access to be denied to other processes by setting a special flag combination. Sounds like the tool for the job, right? However: It must be enabled at the filesystem level (it's a flag when mounting). Warning: the Linux implementation of mandatory locking is unreliable. Since Linux 4.5, mandatory locking has been made an optional feature. This is an initial step toward removing this feature completely. This is only logical, as Unix has always shied away from locks. They are error prone, and it is impossible to cover all edge cases and guarantee no deadlock. Advisory locking It is set using the fcntl system call. However, it is only advisory, and most programs simply ignore it. In fact it is only good for managing locks on a shared file among several cooperating processes. Conclusion Is there some way to do it atomically like rename(2) but preserve hard links? No. Inodes are low level, almost an implementation detail. Very few APIs acknowledge their existence (I believe the stat family of calls is the only one). Whatever you try to do probably relies on either misusing the design of Unix filesystems or simply asking too much of it. Could this be somewhat of an XY-problem?
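For completeness, the "normal solution" looks like this from a shell (file names are placeholders). mv uses rename(2) when source and destination are on the same filesystem, so this replaces the inode and therefore does not preserve hard links, which is exactly the limitation the question is about:
tmp=$(mktemp X.XXXXXX)     # temporary file in the same directory/filesystem as X
some_command > "$tmp"      # write the complete new content
mv -- "$tmp" X             # rename(2): the name X now points at the new inode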
The normal way to safely, atomically write a file X on Unix is:Write the new file contents to a temporary file Y. rename(2) Y to XIn two steps it appears that we have done nothing but change X "in-place". It is protected against race conditions and unintentional data loss (where X is destroyed but Y is incomplete or destroyed). The drawback (in this case) of this is that it doesn't write the inode referred to by X in-place; rename(2) makes X refer to a new inode number. When X was a file with link count > 1 (an explicit hard link), now it doesn't refer to the same inode as before, the hard link is broken. The obvious way to eliminate the drawback is to write the file in-place, but this is not atomic, can fail, might result in data loss etc. Is there some way to do it atomically like rename(2) but preserve hard links? Perhaps to change the inode number of Y (the temporary file) to the same as X, and give it X's name? An inode-level "rename." This would effectively write the inode referred to by X with Y's new contents, but would not break its hard-link property, and would keep the old name. If the hypothetical inode "rename" was atomic, then I think this would be atomic and protected against data loss / races.
Atomically write a file without changing inodes (preserve hard link)
Because the standard requires it: 3. If file is not of type directory, the -f option is not specified, and either the permissions of file do not permit writing and the standard input is a terminal or the -i option is specified, rm shall write a prompt to the standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file and go on to any remaining files. So a) this is a matter specific to the rm utility (it doesn't say anything about how permissions work in general) and b) you can override it with either rm -f file or true | rm file Also, this has been rm's behaviour for quite a long time -- 46 years, or maybe even longer.
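For example, with GNU rm (the prompt wording may vary slightly):
$ touch file; chmod 444 file
$ rm file
rm: remove write-protected regular empty file 'file'? n
$ rm -f file               # -f: no prompt, file removed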
I have a regular file and I changed its permissions to 444. I understand that as the file is write protected, we can't modify or remove the contents of the file, but when I try to remove this file using rm, it generates a prompt asking whether I want to remove a write protected file or not. My question is: doesn't whether a file can be deleted depend on the directory permissions? Why does rm generate a warning even when the directory has write and execute permissions? Does whether a file can be deleted also depend on the file's own permissions, or is it totally dependent on directory permissions only?
Why rm gives warning when deleting a write protected file?
Read data is (directly) read from the cache only if it is already there. That implies that cached data was previously accessed by a process and kept in cache. There is no system call or any method for a process to know if some piece of data to be read is already in cache or not. On the other hand, a process can select whether it wants written data to be immediately stored on the disk or only after a variable delay, which is the general case. This is done by using the O_SYNC flag when opening the file. There is also the O_DIRECT flag which, when supported, forces all I/Os to bypass the read and write cache and go directly to the disk. Finally, the hard-disk itself is free to implement its own cache, so even after a synchronous write call has returned, there is no guarantee data is already on the disk platters.
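From the shell the same choices are exposed through dd and sync; the file names here are only placeholders:
$ dd if=data of=out bs=1M oflag=direct    # bypass the page cache (O_DIRECT)
$ dd if=data of=out bs=1M conv=fsync      # go through the cache, then fsync() at the end
$ sync                                    # ask the kernel to flush all dirty cached data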
I'm learning file operation calls under Linux. The read() and write() and many other functions use cache to increase performance, and I know fsync() can transfer data from cache to disk device. However, is there any commands or system calls that can determine whether the data is cached or written to disk?
How to determine whether the data is written to disk or cached?
I think your idea can work. You can write the data directly to the drive's device node (e.g. /dev/sdd). The rm command is not possible or necessary (it doesn't really remove much data anyway; rm only updates the metadata in the file system). You might consider writing all ones on one cycle, followed by all zeroes on the next cycle. Persistent counter The trick is to make a persistent counter that you can pick up after reboots. This can be easily accomplished with a file; in the example the COUNT_FILE is "$HOME/.counter". The count may be lower than the actual number because the system could have been rebooted, etc., before the dd completes. You could also call something like this in /etc/rc.local to start it automatically when the system boots.
#!/bin/sh
COUNT_FILE="$HOME/.counter"
read COUNT < "$COUNT_FILE"
if echo "$COUNT" | grep '[^0-9]' > /dev/null
then
    echo >&2 "$0: ERROR: non-integer counter found in $COUNT_FILE."
    exit 1
fi
while true
do
    echo dd if=/dev/urandom of=/dev/sdd bs=61865984
    COUNT=$(( COUNT + 1 ))
    echo $(( COUNT )) > "$COUNT_FILE"
done
Badblocks You might also investigate the badblocks command which writes patterns to the disk and reads them back. The good thing about using badblocks is that it writes, reads and compares every byte on every cycle, so you should start seeing more and more "badblock" numbers as the disk begins to fail. Warning Also, if you accidentally get a different USB drive connected as /dev/sdd, then you'll completely destroy it when this script runs.
I would like to find out how many write cycles I can get from my SD card. I have googled and found good answers like this but its too complicated for a normal person like me. Say its a 64GB exfat formatted card. Isn't it possible to just write a large 59GB random file to it. Delete it. Make a count. And repeat the whole cycle, until the card fails (I am assuming something will finally prevent a write operation).I guess a 59GB random file can be created like this: dd if=/dev/urandom of=/dev/sdd1/file.txt bs=61865984 count=1024Delete the file: rm /dev/sdd1/file.txtI am not sure how to do the count operation or do the loop or whether putting it in a .sh file has other syntax/restrictions. Could you please help me with this?Is my above idea ok (acceptable). I am not trying to be perfect. Also is there some ready software/script that does this? (I understand for this I will need to leave the PC on for several months, but I am ok with that. Or maybe when I run the script after a reboot, it will only add to the previous count.) Thank You. :-). PS: Why I am doing this - I find that there are huge capacity microsd cards available from oem/no name brands which are quite cheap compared to good brand cards. People say that these cards are unreliable. I just wanted to see how bad they actually were. Practically what I thought was - In 5 years I might write a total of 1TB to a card. That is just 17 cycles! Which I guess even the worst card might be able to do. :-)............
Stress (write) Test an SD card to destruction using a simple shell script
The remount itself ought to be fairly safe, though of course it's a less-tested path in the kernel than, say, write(2). You may be causing a few extra writes (to mark the filesystem dirty/clean, etc.). You can use the block dump feature (/proc/sys/vm/block_dump) to find out if you're causing any extra writes. It's also possible, if you're doing it a lot, that you're forcing smaller writes than would otherwise occur (e.g., no way a write can be combined across two rw-ro cycles). That may mean you cause more erases on the flash. (Of course, if you're doing it that often, then the fs will hardly ever be ro, and it's pointless). This assumes you need to worry about corruption from those writes—if your flash controller handles powerfail during wear leveling (etc.) correctly, then you don't need to. A journaled filesystem will prevent corruption, provided you use update semantics that it supports. Of course, journaling amplifies writes.
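Roughly, as root (note that newer kernels have removed the block_dump interface, so it may not exist on your system):
# echo 1 > /proc/sys/vm/block_dump
# ... perform one remount + write + remount cycle ...
# dmesg | tail                # shows which process wrote which blocks
# echo 0 > /proc/sys/vm/block_dump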
I'm developing an embedded Linux system running on an SD card. To protect the SD against corruption I've used a read-only root filesystem as well as an extra partition where I mount /home, also in read-only mode. It is in /home where the program runs and performs read-write operations. When the software needs to write some data to the disk, it issues these two commands and saves the data between them:

    mount -o remount,rw /dev/mmcblk0p3 /home
    mount -o remount,ro /dev/mmcblk0p3 /home

I'm doing this to ensure maximum protection against corruption if power goes down. But I don't know if the cure may be worse than the disease. Is it dangerous for the filesystem or the SD card to perform such frequent partition remounting each time I want to save some data? Another question: is it dangerous to remount a partition where a program is running? I mean, not writing data to the disk, just running with its own program variables.
Is it dangerous to remount a partition rw/ro frequently?
You need to add the -c option to do more than 64 blocks at a time, and probably -b to specify a block size other than 1KiB. Right now you're doing 64KiB at a time, which is a lot of seeks. Something like:

    badblocks -c 2560 -b 4096 -wsv -t random /dev/«device»

ought to run much faster. That's 10MiB (= 4KiB × 2560) at a time; go higher with -c if that's still not running at full speed. Also, your disk likely has 4K sectors, hence the -b 4096; otherwise one bad sector will be reported as 4. (You may wish to consider smartctl -t long in addition, or even instead. And of course mirror your backups if you're paranoid.)
I have purchased a new HDD for my backups. Before entrusting the device with the job of keeping my data safe I want to make sure that it is in good condition. The drive is a new internal 3.5 inch SATA drive. I started a destructive write test with badblocks using the following command. (Important: DON'T just copy paste the following command it will erase all data on your disk) # badblocks -wsv -t random /dev/<device>After ~ 1:30h the badblocks run has reached 0.36% completion. iotop reports average writespeeds between 1.6 and 2.5 MB/s which is about 1% of the write speed the drive should actually be capable of. The IO load reported by iotop is 99.9% though. Is there something odd going on or is it really common for badblocks to perform that slow?
What data transfer / write speeds are to be expected for a badblocks destructive write test?
I would not rely on this element of behaviour. Pipes are intended to be a continuous stream of data. Reads and writes cannot be matched against each other easily; the only real guarantee you should rely on is that the first bytes in will be the first bytes out. The reason for the manual's comment regarding buffer paging is that pipes rely on a ring buffer. From the manual I would infer that the "ring" is a ring of pages, not a ring of bytes. I.e.: pages fill up, and when a page is full, the next page is used. Pages are not re-used until they have been fully read. This means a half-read page will not be available at all for writing. That's just an inference from the manual; I've not checked the source code. The biggest problem with relying on this behaviour is that it's an implementation detail rather than an intended effect of the pipe. Kernel developers may change it at any time and your code will suddenly have race conditions.
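In case it helps, the pipe capacity itself can at least be inspected and changed from user space. The sketch below only illustrates the Linux-specific F_GETPIPE_SZ/F_SETPIPE_SZ fcntls (available since 2.6.35); it is not an endorsement of the single-fd synchronisation scheme, and error handling is minimal:

    /* pipe_size.c - query and bump the capacity of a freshly created pipe. */
    #define _GNU_SOURCE          /* for F_GETPIPE_SZ / F_SETPIPE_SZ */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];

        if (pipe(fds) == -1) {
            perror("pipe");
            exit(EXIT_FAILURE);
        }

        /* Current capacity of the pipe buffer, in bytes (usually 65536). */
        int size = fcntl(fds[1], F_GETPIPE_SZ);
        if (size == -1) {
            perror("F_GETPIPE_SZ");
            exit(EXIT_FAILURE);
        }
        printf("default pipe capacity: %d bytes\n", size);

        /* Ask for a bigger buffer; the kernel may round the value up. */
        if (fcntl(fds[1], F_SETPIPE_SZ, 1048576) == -1) {
            perror("F_SETPIPE_SZ");
            exit(EXIT_FAILURE);
        }
        printf("new pipe capacity: %d bytes\n", fcntl(fds[1], F_GETPIPE_SZ));

        close(fds[0]);
        close(fds[1]);
        return 0;
    }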
I want to use pipes on Linux as a synchronization primitive between a master process and a slave process. The classic way is to create two pipes, but I believe there's a way to use a single fd instead. Consider:The slave creates r-w pipe. Read end r is passed to the master. When the slave is ready, it writes to w N bytes, then N bytes again, then 1 byte, where N is the pipe buffer size. The first write(2) returns immediately, the second blocks because the buffer is full. Master blocks and reads from r. The second write(2) returns, the third write(2) blocks. After the master has read data, it does whatever stuff it has to. When the slave is to be resumed, master reads once more from r. The third write(2) returns and the slave proceeds.However, the man page for fcntl says this: Changing the capacity of a pipe F_SETPIPE_SZ (int; since Linux 2.6.35) ... Note that because of the way the pages of the pipe buffer are employed when data is written to the pipe, the number of bytes that can be written may be less than the nominal size, depend‐ ing on the size of the writes.The man page seems to say that if the pipe buffer size is N bytes and I write M<=N bytes to the pipe, it is possible that the write will block. In what cases can that happen (except the simple case when there is already much data in the pipe)? Additionally, "depending on the size of the writes" sounds odd. Can I get this strange behavior if I write exactly N bytes?
How do I feed data to a pipe until it's full, no more no less?
This is very simple. The external echo command that you are running from strace is very probably the one from GNU coreutils. This is written in the C programming language, and uses the C runtime library functions such as putchar() and fputs() to write what it needs to write to the program's standard output. In the C language, output to standard output can be fully buffered, line buffered, or unbuffered. The rules for what happens are actually part of the C language specification, apply across operating systems, and are written in abstract terms of whether standard output "can be determined not to refer to an interactive device". On Unix and Linux operating systems, the concrete way that they apply is that standard output is fully buffered if the isatty() function says that the file descriptor is not a terminal. That's what "an interactive device" is in this case. Standard output is otherwise line buffered, on your operating system. The C language standard does not mandate that latter. It is what the GNU C library additionally documents that it does, on top of what the C language standard says. So when your echo command's standard output is not a terminal but a file, the C library in the program buffers up all of the individual writes to standard output and makes one big write() call, when the buffer is full or when the program finishes. Whereas when standard output is a terminal, the C library only buffers things until a linefeed character is output, at which point it write()s the contents of the buffer. Hence the observed system calls. Further readinghttps://unix.stackexchange.com/a/407472/5132 What prevents stdout/stderr from interleaving? https://unix.stackexchange.com/a/467061/5132 SSH output isn't line buffered?
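To see the isatty() distinction from a program's own point of view, here is a minimal sketch added purely for illustration; run it once on a terminal and once with standard output redirected to a file. The setvbuf() call shows how a program can override the default if it wants line buffering even when writing to a file:

    /* where_is_stdout.c - report whether stdout is a terminal.
     * This mirrors the check the C library performs when deciding
     * between line buffering and full buffering. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (isatty(STDOUT_FILENO))
            fprintf(stderr, "stdout is a terminal: stdio will line-buffer it\n");
        else
            fprintf(stderr, "stdout is not a terminal: stdio will fully buffer it\n");

        /* A program can override the default, e.g. force line buffering
         * even when stdout is a file or a pipe. */
        setvbuf(stdout, NULL, _IOLBF, 0);

        printf("first line\n");
        printf("second line\n");
        return 0;
    }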
If I use:

    strace echo 'a b c' > file

the bottom lines are:

    write(1, "a\nb\nc\nd\n", 8) = 8

but in

    strace echo 'a b c d' > /dev/pts/0

these lines are:

    write(1, "a\n", 2) = 2
    write(1, "b\n", 2) = 2
    write(1, "c\n", 2) = 2
    write(1, "d\n", 2) = 2

In the second case, why is it writing line by line, whereas in the first case it writes everything together? Maybe it is because the terminal is a character device, but the definition of a character device I found is: "A character (char) device is one that can be accessed as a stream of bytes (like a file). The only relevant difference between a char device and a regular file is that you can always move back and forth in the regular file, whereas most char devices are just data channels, which you can only access sequentially." Edit: Shell is bash.
Why does the terminal take input line by line?
If influencing the actual input is difficult (such as reacting to a disc error), you should make a thin wrapper around the function that depends on some global state. In this case I would put such a wrapper around write() to return 0 or the actual return value from write(). If the overhead of the wrapper is too big, use some #define to be able to leave out the wrapper code in the production system altogether, but at least you can test that the layers on top of write() react correctly during unit tests by setting the global state as necessary.
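A minimal sketch of such a wrapper (the names xwrite and fault_zero_writes are invented here for illustration, not taken from any library): production code calls xwrite() wherever it would call write(), and the test harness sets the global counter to make the next calls return 0.

    /* xwrite.c - thin wrapper around write(2) for unit testing.
     * fault_zero_writes is set by the test code; each faulted call
     * consumes one tick and returns 0 without touching the fd. */
    #include <unistd.h>

    #ifdef UNIT_TEST
    int fault_zero_writes = 0;          /* how many calls should return 0 */
    #endif

    ssize_t xwrite(int fd, const void *buf, size_t count)
    {
    #ifdef UNIT_TEST
        if (fault_zero_writes > 0) {
            fault_zero_writes--;
            return 0;                   /* simulate write() returning zero */
        }
    #endif
        return write(fd, buf, count);
    }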
I am writing unit tests and would like to test some code's handling of the case where a call to write(2) returns zero. As ever, it would be nice to keep the test as authentic as possible. I can use a file-descriptor of any kind for this purpose, as long as it returns zero on a call to write(2). I can also pass in pretty much any data of any size to write. However, I would like to be able to change the descriptor's behaviour from another thread, after zero has been returned a few times, so just writing data of length zero is not acceptable. Can anyone think of a reasonably portable, reliable means of getting a filedescriptor into such a state? The target is recent Linux, but working more broadly (*BSD, OS X, etc), would be great, if possible.
Forcing write(2) to return 0
    luit -c <infile >outfile

The -c switch makes luit act as a simple interpreter from stdin to stdout, without wrapping a child process (your shell by default) in a pty and handling its i/o instead. If you also do:

    luit -olog /dev/tty -c <infile >outfile

luit will write to both your terminal and the outfile. Basically, the -olog switch logs to a named file a copy of everything luit writes to its output, as it writes it - so it represents luit's input after processing; -ilog would do the same for all of luit's raw, unprocessed input.
I need to format a luit command so I can write a file that I'm trying to fix the encoding for. What I have right now is luit -encoding gbk cat santi.txt, but I would like to have the output written to a text file. Backstory: I am having trouble reformatting a text document that was originally Chinese characters. For whatever reason, programs such as Notepad++ and encoding websites both have not worked, and I've received error messages trying to use each of the Linux solutions offered here. I turned to luit because I've had some success using it as described here. Anyway, luit -encoding gbk cat santi.txt successfully outputs Chinese characters into my terminal. However, it only has an output of ~200 lines, and the file is perhaps 2,000+. Looking at what looks like the luit manual, the two options below seem the most promising. -ilog filename Log into filename all the bytes received from the child. -olog filename Log into filename all the bytes sent to the terminal emulator. P.S. According to chardet, the original encoding of the file is probably GB2312.
How to write a luit command that outputs a file
Your question suggests that Debian uses temp files for all writes, which isn't the case. This is simply the default for mp3gain. In version 1.4.3-2, the package maintainer (Stefan Fritsch) decided that as writing to a temp file is much quicker on ReiserFS, then this would be the default on Debian. This was sourced from the patch at https://packages.debian.org/source/squeeze/mp3gain Package maintainers on other distros presumably didn't agree with Stefan and therefore didn't change the default of not using temp files.
In the mp3gain manpages, you can read the following: -t mp3gain writes modified mp3 to temp file, then deletes original instead of modifying bytes in original file (This is the default in Debian) -T mp3gain modifies bytes in original file instead of writing to temp file.Most distros (and Windows for that matter) change some bytes (if possible in padded tag space of mp3's I guess). This has the added benefit of being faster. Especially when tagging thousands of files. This also has the added benefit of only syncing the changed cluster to e.g. Dropbox. Debian, however, rewrites the entire file, including the changed bytes, to a temporary file, after which the original file is replaced with the temporary file. I would like to know why exactly this is. I would like to know the actual reason(s) from someone who knows this for a fact. (You are free to make an educated guess, but I might hold off accepting your answer until I get more.)
Why does Debian prefer a temp file replacing the original over modifying bytes in original file?
You don't need to use column -t (in fact, that's going to expand your tabs with spaces so that the columns align correctly no matter the widths). Just use printf. And remember to double-quote your variables. e.g. for file in "$path/"*.g.vcf; do sample_name=$(echo "$file" | grep -P 'HG(\d+)(?=.g)' -o) printf "%s\t%s\n" "$sample_name" "$file" >> "$output_file" doneBTW, there's no need to touch the file to create it. >> redirection will create a file if it doesn't already exist. Also, you can use <<< instead of echo with the grep line. e.g. sample_name=$(grep -oP 'HG(\d+)(?=.g)' <<< "$file")This redirects the contents (value) of variable $file into the grep command. There's not really any significant benefit, either way (unless the variable contains value(s) that change echo's behaviour, such as -n, -e, -E, or some backslash-escaped chars like \n, \t, \0nnn, \xHH, etc - see help echo in bash. BTW, this is why printf is recommended over echo these days), but you may find it easier to read.
Hello, GNU/Linux newbie here. I want to write two variables into a two-column, tab-separated file. In my code the variables are $sample_name and $file. I use the commands: touch to create the file, and echo -e $sample_name $file | column -t >> $output_file to write each line. However, this results in a one-column file. Any ideas? Simplified script:

    touch $output_file
    for file in $path/*.g.vcf; do
        sample_name=`echo $file | grep -P 'HG(\d+)(?=.g)' -o`
        echo -e $sample_name $file | column -t >> $output_file
    done

Expected output (viewing the output file):

    HG00321 ./.../HG00321/HG00321.g.vcf
    HG00322 ./.../HG00322/HG00322.g.vcf
    # and so on
Write two variables in a two-column file (tab-separated)
Redirecting using > would create the output file that you redirect to, or, if it already exists, would truncate it to zero size. Any writes to the file would start at the beginning of the file and data would be written sequentially. This is not what you want to do. Redirecting using >> would create the output file, or, if it already exists, would not truncate it. Any writes to the file would be happening at the end of the file. This is what you want to do. Additionally, you have a syntax error in the code. I'm assuming you wanted to use a brace expansion in the loop: #!/bin/bash for timestamp in {1262300400..1264978800..600}; do date -d @"$timestamp" '+%Y %m %d %H %M %S' done | grep -v '[15]0 00$' >>file.txtAlso, this is an extremely slow loop, calling date a large number of times (4465 times, to be exact; the number of ten-minute time segments in a 31 day month). To speed things up, use the fact that GNU date can read from a file (here, we provide the timestamps on standard input, which date reads using -f -): #!/bin/bash printf '@%s\n' {1262300400..1264978800..600} | date -f - '+%Y %m %d %H %M %S' | grep -v '[15]0 00$' >>file.txtThis would run in a second or less. I've also removed the -E from the invocation of grep as you don't use an extended regular expression.
I have a file which looks like this: ********************************** Some notes are here Year Month Day Hour Minute Second . . . . . . . . . . . . . . . . .Undertneath this, I would like to have dates appear using the following code #!/bin/bash for timestamp in (1262300400..1264978800..600) do date -d @"$timestamp" '+%Y %m %d %H %M %S'; done | grep -Ev '[15]0 00$' > file.txtIf you want to know how I got this code, please read question Range of dates with minutes The difficulty in this question is the last part "> file.txt". The current code overwrites what is already in the file.txt. I want this loop to print the dates underneath the notes of the file 'file.txt', so for it to start writing at lets say the 5th line or something. So the desired output would be ********************************** Some notes are here Year Month Day Hour Minute Second . . . . . . . . . . . . . . . . . 2010 01 01 00 00 00 2010 01 01 00 20 00 2010 01 01 00 30 00 2010 01 01 00 40 00 2010 01 01 01 00 00
write in file starting from certain line
printf (of my specific libc) internally does a newfstatat() syscall on the stdout file descriptor (which is 1). The kernel fills in the st_mode field with S_IFREG if it's a regular file you're piping into, and S_IFCHR if it's a character device (like a pseudo-terminal). How I figured it out:

    gcc -o foo foo.c                 # compile your program
    strace -o file.strace ./foo > tempfile
    strace -o term.strace ./foo
    diff *.strace                    # and look for things towards the end that concern file descriptor 1
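If you would rather do the same classification from your own code than read it out of an strace log, a small sketch using fstat() (a plain stat-family call, rather than whichever wrapper the libc happens to issue) looks like this:

    /* stdout_kind.c - classify what file descriptor 1 points at,
     * the same information printf's stat call retrieves. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;

        if (fstat(STDOUT_FILENO, &st) == -1) {
            perror("fstat");
            return 1;
        }

        if (S_ISREG(st.st_mode))
            fprintf(stderr, "stdout is a regular file\n");
        else if (S_ISCHR(st.st_mode))
            fprintf(stderr, "stdout is a character device (e.g. a pty)\n");
        else if (S_ISFIFO(st.st_mode))
            fprintf(stderr, "stdout is a pipe or FIFO\n");
        else
            fprintf(stderr, "stdout is something else\n");

        return 0;
    }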
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("If I had more time, \n");
        write(STDOUT_FILENO, "I would have written you a shorter letter.\n", 43);
        return 0;
    }

I read that I/O handling functions (stdio library functions) and system calls perform buffered operations for increased performance. The printf(3) function uses a stdio buffer in user space. The kernel also buffers I/O so that it does not have to write to the disk on every system call. By default, when the output file is a terminal, writes using the printf(3) function are line-buffered, as stdio uses line buffering for stdout, i.e. when the newline character '\n' is found the buffer is flushed to the buffer cache. However, when it is not a terminal, i.e. the standard output is redirected to a disk file, the contents are only flushed when there is no more space in the buffer (or the file stream is closed). If the standard output of the program above is a terminal, then the first call to printf will flush its buffer to the kernel buffer (buffer cache) when it finds the newline character '\n'; hence the output is in the same order as the statements above. However, if the output is redirected to a disk file, then the stdio buffers are not flushed, and the contents of the write(2) system call hit the kernel buffers first, causing them to be flushed to the disk before the contents of the printf call.

When stdout is a terminal:

    If I had more time, 
    I would have written you a shorter letter.

When stdout is a disk file:

    I would have written you a shorter letter.
    If I had more time, 

But my question is: how do the stdio library functions know whether stdout is directed to a terminal or to a disk file?
How does `stdio` recognize whether the output is redirected to the terminal or a disk file? [duplicate]
For tty devices, you must use tcdrain() on the file descriptor.
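A minimal sketch of the idea, assuming the same /dev/ttyS1 device as in the question and leaving out baud-rate setup and the GPIO handling: write the bytes, then let tcdrain() block until the driver reports the output as transmitted, and only then clear the pin.

    /* Sketch only: write to a serial port and wait for transmission to finish. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[3] = { 'U', 'U', 'U' };

        int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        if (write(fd, buf, sizeof buf) == -1)
            perror("write");

        /* Block until all queued output has been transmitted on the wire,
         * as far as the UART driver can tell. */
        if (tcdrain(fd) == -1)
            perror("tcdrain");

        /* ...now it should be safe to clear the GPIO... */

        close(fd);
        return 0;
    }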
I need to synchronize an IO pin value with a write to a serial port from user space (because I wasn't yet able to do it from kernel space - see my other question). My code (leaving out error checking) is as follows:

    char buf[3] = {'U','U','U'};
    int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);  // supposed to be blocking
    // fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) & ~O_NONBLOCK);  <-- makes no difference
    FILE *f = fopen("/sys/class/gpio/gpio200/value", "w");  // the relevant IO

    // set IO
    fprintf(f, "1");
    fflush(f);

    // send data
    write(fd, buf, sizeof(buf));

    // unset IO
    fprintf(f, "0");
    fflush(f);

The behavior is that the IO is quickly toggled to 1 and back at the start of the write. In other words, write() returns long before the data has actually been put on the wire. Is there any hope here?
Knowing when a write() on a serial port has finished transmitting data
From man 1 write:You can prevent people (other than the superuser) from writing to you with the mesg(1) command. Some commands, for example nroff(1) and pr(1), may automatically disallow writing, so that the output they produce isn't overwrittenFrom man 1 mesg:mesg [option] [n|y]Therefore, running mesg n should disable this.
I have an ssh account on a server. Someone is spamming me with write messages, so I can't run any command in an interactive login. Is there any way I can prevent them from sending me write messages, or any way I could just have a session without incoming write messages and do my things? I think they are just sending write messages to my username on whatever terminal I log in with (pts/1 2 3 4 and so on). I don't want to contact the system admin for this.
How to block messages from write?
Some filesystems allow read() to be used on directories, but this must be seen as a mistake, since the data structures in such a directory may be undocumented. You can never use write(), since this would destroy the integrity of the affected directory. The official interfaces for directories are opendir(), closedir(), readdir(), telldir() and seekdir().
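For completeness, a small sketch of the official interface in use: listing a directory with opendir()/readdir()/closedir() instead of trying to read() it.

    /* listdir.c - enumerate directory entries the portable way. */
    #include <dirent.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        const char *path = argc > 1 ? argv[1] : ".";

        DIR *dir = opendir(path);
        if (dir == NULL) {
            perror("opendir");
            return 1;
        }

        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)
            printf("%s\n", entry->d_name);

        closedir(dir);
        return 0;
    }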
Can we use read(), write() on a directory just like on any other file in Unix/Linux? I have a confusion here because directories are also considered as files.
Can we use read(), write() on a directory in unix/linux?
Quote the EOF terminator passed to the << operator (in any way): cat << 'EOT' > file $var EOTOr cat << \EOT cat << EO\T cat << "E"'O'T cat << ""EOTThat's the documented and standard way to prevent any type of expansion inside the here-document.
I am having trouble with writing case $1 in a bash file. I tried with:

    cat <<EOT > /etc/init.d/startup.sh
    #!/bin/bash

    PATH=/sbin:/bin:/usr/sbin:/usr/bin

    case "$1" in
      start)
        bash /root/install.sh >> /root/installation_log.txt 2>&1
        ;;
      stop|restart|reload)
        ;;
    esac
    EOT

But the problem is that it writes everything to startup.sh except $1. The line case "$1" in becomes case "" in after the operation. What should I do?
How to write text containing $var to a bash file? [duplicate]
Most likely this is because your startup script runs in a root environment by default. I'm assuming that you're using ~/.ssh/id_rsa as your mode of authentication (you never mentioned that you're using a password, and using one while automating is often a bad idea anyway, so I'll assume key authentication). I'll go ahead and assume further that you don't have a root key generated that is trusted on your laptop. You have two options in this case:

- run ssh-keygen as root, and copy the content of /root/.ssh/id_rsa.pub to the /home/<user>/.ssh/authorized_keys file on the laptop, or
- change your command to

    /usr/bin/ssh -i /home/<user>/.ssh/id_rsa 'laptop_user'@'laptop_ip' "echo '### RaspberryPi 2 online ###' | /usr/bin/write 'laptop_user' pts/0"

The second is neater, since it uses your user's key that you already know works.

Debian startup order and network connectivity issue

Most likely (after reading your comments) this is a script startup order issue, meaning that your script is run before networking has had a chance to DHCP your interface and bring it up. Even rc.local is run after network.target, but that's not the same as network-online.target, sorry to say. You again have a few options here. One is to simply add this line to your crontab:

    @reboot sleep 60 && /usr/bin/ssh -i /home/<user>/.ssh/id_rsa 'laptop_user'@'laptop_ip' "echo '### RaspberryPi 2 online ###' | /usr/bin/write 'laptop_user' pts/0"

which will delay your command for 60 seconds before executing the SSH command. It's not the prettiest thing in the world, but if you don't care about real-time "notifications", go with it; it's quick and it works. If you want a more reliable option though, I'd suggest you create an init.d script with a requirement on network-online.target, which won't trigger your script until the network is online. This is the fastest and most reliable option to go with. I use systemd, so I can't write or verify a proper init.d script at the moment; try following this guide and see if it works: https://www.debian-administration.org/article/28/Making_scripts_run_at_boot_time_with_Debian
I recently got myself a Raspberry Pi 2 to learn a couple of things in my spare time. It is now running on Raspbian and I control it remotely via ssh from a laptop with Linux Mint 17.2 installed. Now I would like the Pi to automatically tell the laptop that it is online after a reboot so that I know that I can connect to it via ssh. I know I can just wait a few seconds or ping the Pi, but somehow I got it into my head that it would be nice if a small message popped up in my terminal on the laptop. What I got so far after some tinkering is the following (I'm VERY new to this, so I'm not even aware of the levels this might be horribly wrong on): /usr/bin/ssh 'laptop_user'@'laptop_ip' "echo '### RaspberryPi 2 online ###' | /usr/bin/write 'laptop_user' pts/0"This works when run in a terminal on the Pi if my laptop has the IP 'laptop_ip' and if 'laptop_user' is logged in on pts/0 (lots of if's, but I figured I would get to those after I got the initial idea up and running). On the laptop terminal something like Message from 'laptop_user'@'laptop_host' on pts/0 at 09:58: ### RaspberryPi 2 online ### EOFappears. (yeah!) I then put the command into a small script: #! /bin/sh /usr/bin/ssh 'laptop_user'@'laptop_ip' "echo '### RaspberryPi 2 online ###' | /usr/bin/write 'laptop_user' pts/0" exit 0saved it as /etc/network/if-up.d/sayhi on the Pi, and made it executable (following the best answer on this question). I checked that this script does indeed get executed after each reboot of the Pi. The thing is, if I run the script manually everything works fine and I get the message on my laptop terminal. But if the script is automatically executed on reboot I don't get the message. Putting the command into rc.local or crontab didn't work either. I unfortunately lack the knowledge of how a startup of the Pi (or any computer) actually works. So I don't know if the services required for this command are already good to go. So my question is: Why don't I get the "online" message when the script is run automatically and when should I run my little script to achieve the desired behavior. Also, there might be way better alternatives to my way of doing this. So if anyone could point me in the right direction I'd appreciate it. Thanks in advance! edit:I forgot to mention that I'm using key authentication and as the script will be run as the Pi's root I added its public RSA key to authorized_keys on the laptop and I'm using the private key as the identify file for the ssh command. I'm now logging the output of /sbin/ip addr while running the script on startup and if gives me: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether b8:27:eb:34:66:ce brd ff:ff:ff:ff:ff:ff When running the script later (manually over ssh) /sbin/ip addr gives me: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether b8:27:eb:34:66:ce brd ff:ff:ff:ff:ff:ff inet 192.168.0.105/24 brd 192.168.0.255 scope global eth0 valid_lft forever preferred_lft forever So the problem seems to be that the Pi does not have a local IP while running the scripts in /etc/network/if-up.d. I now have to run my script after the IP is assigned. Unfortunately, I don't know enough about networking to be able to do so.
No output when running script on startup (but correct output if run manually)
The man page for chattr contains all the info you need to understand the lsattr output. excerptThe letters 'aAcCdDeFijmPsStTux' select the new attributes for the files: append only (a), no atime updates (A), compressed (c), no copy on write (C), no dump (d), synchronous directory updates (D), extent format (e), case-insensitive directory lookups (F), immutable (i), data journaling (j), don't compress (m), project hierarchy (P), secure deletion (s), synchronous updates (S), no tail-merging (t), top of directory hierarchy (T), undeletable (u), and direct access for files (x). The following attributes are read-only, and may be listed by lsattr(1) but not modified by chattr: encrypted (E), indexed directory (I), inline data (N), and verity (V).If you take a look at the descriptions' of the tags further down in that same man page:The e attribute indicates that the file is using extents for mapping the blocks on disk. It may not be removed using chattr(1). The I attribute is used by the htree code to indicate that a directory is being indexed using hashed trees. It may not be set or cleared using chattr(1), although it can be displayed by lsattr(1).
I'm wondering what the output of lsattr means.It prints so oddly as follows,when I have tried: lsattr /usr. $ lsattr /usr -----------------e- /usr/local -----------------e- /usr/src -----------------e- /usr/games --------------I--e- /usr/include --------------I--e- /usr/share --------------I--e- /usr/lib -----------------e- /usr/lib32 --------------I--e- /usr/bin --------------I--e- /usr/sbinI've read the man page of chattr and lsattr but still have no idea.
What's the meaning of output of lsattr
I hate to do this but the answer is (after more research): getfattr -d -m - fileI apparently missed this in my reading of the man page:-m pattern, --match=pattern Only include attributes with names matching the regular expression pattern. [...] Specify "-" for including all attributes.
Getfattr dumps a listing of extended attributes for a selected file. However, getfattr --dump filename only dumps the user.* namespace and not the security.*, system.*, and trusted.* namespaces. Generally, there are no user namespace attributes unless you attached one to a file manually. Yes, I know I can get the SELinux information by using getfattr -n security.selinux filename; in this case, I know the specific identification of the extended attribute. I have tried this as the root user. I'd assume that the root user with full capabilities is able to access this information. But you only get the user.* namespace dump. The question is: how can I easily get a full dump of all the extended attribute namespaces of a file without knowing the names of all the keys in all the namespaces?
How do I get a dump of all extended attributes for a file?
After quite a bit of trial and error on the commandline, I think I've found the answer. But it isn't a cp-related answer.

    rsync -ptgo -A -X -d --no-recursive --exclude=* first-dir/ second-dir

This does:

    -p, --perms          preserve permissions
    -t, --times          preserve modification times
    -o, --owner          preserve owner (super-user only)
    -g, --group          preserve group
    -d, --dirs           transfer directories without recursing
    -A, --acls           preserve ACLs (implies --perms)
    -X, --xattrs         preserve extended attributes
    --no-recursive       disables recursion

For reference:

    --no-OPTION          turn off an implied OPTION (e.g. --no-D)
    -r, --recursive      recurse into directories
I want to copy the attributes (ownership, group, ACL, extended attributes, etc.) of one directory to another but not the directory contents itself. This does not work: cp -v --attributes-only A B cp: omitting directory `A' Note: It does not have to be cp.
How to clone/copy all file/directory attributes onto different file/directory?
As you have found, xattrs will work, but there are rough edges. Sometimes you have to approach open source code like an Anthropologist. If this isn't helpful in itself, maybe this will provoke some better contributions (or eventually code fixes!) I found this in the source code: https://github.com/freebsd/freebsd/blob/c829c2411ae5da594814773175c728ea816d9a12/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c#L514 /* * Register property callbacks. * * It would probably be fine to just check for i/o error from * the first prop_register(), but I guess I like to go * overboard... */ error = dsl_prop_register(ds, zfs_prop_to_name(ZFS_PROP_ATIME), atime_changed_cb, zfsvfs); error = error ? error : dsl_prop_register(ds, zfs_prop_to_name(ZFS_PROP_XATTR), xattr_changed_cb, zfsvfs); error = error ? error : dsl_prop_register(ds, zfs_prop_to_name(ZFS_PROP_RECORDSIZE), blksz_changed_cb, zfsvfs);and this https://github.com/freebsd/freebsd/blob/386ddae58459341ec567604707805814a2128a57/sys/cddl/contrib/opensolaris/common/zfs/zfs_prop.c#L302 and yet this gives you pause: https://github.com/freebsd/freebsd/blob/e95b1e137c604a612291fd223fce89c2095cddf2/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_dataset.c#L1638 So what I think is actually happening is that xattrs work but the functionality to turn them off (or on) by ZFS dataset properties is broken, so the "not supported" message means "you're on your own." There is some code in there which sets MNTOPT_XATTR but I haven't traced it out. trying to change it using zfs set gets you the unsupported message. My guess is that explains the zfs xattr property weirdness with /, /usr, /var, and the conflicted setting/behavior of /home. This sheds some light on things. https://www.lesbonscomptes.com/pages/extattrs.html
I'm trying to work out whether or not, or rather to what extent, xattrs are supported in FreeBSD using ZFS. I've read some conflicting information:

1. zfs get xattr lists it as on (default) for /, /usr and /var, but as off (temporary) for all other datasets, including children of those mentioned above.
2. Running zfs set xattr=on zroot/usr/home I get the message "property 'xattr' not supported on FreeBSD: permission denied."
3. This agrees with the zfs man page: "The xattr property is currently not supported on FreeBSD."
4. setextattr, getextattr and lsextattr seem to work well enough. I also managed to save and restore a device file node using rsync --fake-super, and could see its data using lsextattr and getextattr.
5. Wikipedia has some discussion in the xattr talk page. Apparently there once was a claim that ZFS supports xattr since FreeBSD 8, but that was removed later on, with reference to the manpage (see 3.).

Currently I get the impression that extended attributes on ZFS work in practice, but that the xattr property which would control their use does not work as it does in other ZFS distributions. But I'd like to hear that confirmed (or corrected) before I trust large amounts of backup data to an rsync --fake-super running on such a machine. I'd rather not lose all my metadata due to known xattr problems. If it matters, this is a very fresh FreeBSD 10.2 install I just set up, with ZFS set up by the installer.
State of ZFS xattr support in FreeBSD
find itself doesn't support extended attributes, but you can use something like:

    find ~/ -type f -iname "*" -exec lsattr {} + | grep -v -- '-------------'
Can I use find to find all files which have extended attributes set? Let's say I want to find all files with +i, the immutable attribute, in /foo and its subfolders. I could not find any mention of extended attributes in man find. Is there any other way to find all files with attributes? I am using Debian Wheezy.
find files with extended attributes [duplicate]
According to the ls man page, you should be able to use the -O option combined with the -l option to view flags with ls. For example:

    ls -Ol foo.txt
    -rw-r--r-- 1 harry staff - 0 18 Aug 19:11 foo.txt
    chflags hidden foo.txt
    ls -Ol foo.txt
    -rw-r--r-- 1 harry staff hidden 0 18 Aug 19:11 foo.txt
    chflags nohidden foo.txt
    ls -Ol foo.txt
    -rw-r--r-- 1 harry staff - 0 18 Aug 19:11 foo.txt

Edit: Just to give a more specific solution to what the OP wanted (see comments below): to see whether a folder is hidden or not, we can pass the -a option to ls to view the folder itself. We can then pipe the output into sed -n 2p (thanks Stack Overflow) to get the required line of that output. An example:

    mkdir foo
    chflags hidden foo
    ls -aOl foo | sed -n 2p
    drwxr-xr-x@ 2 harry staff hidden 68 18 Aug 19:11 .

Edit 2: For a command that should work regardless of whether it's a file or a folder, we need to do something slightly more hacky. The needed line of output from ls -al varies depending on whether the thing is a file or a folder, as folders show a total count, whereas files do not. To get around this, we can grep for the character r. This should appear in nearly all files/folders (nearly all should have at least one read permission), but not in the totals line. As the line we want then becomes the first line, we can use head -n 1 to get it (alternatively, if you prefer sed, sed -n 1p could be used). So, for example with a directory:

    mkdir foo
    chflags hidden foo
    ls -aOl foo | grep r | head -n 1
    drwxr-xr-x@ 2 harry staff hidden 68 18 Aug 19:11 .

and with a file:

    touch foo.txt
    chflags hidden foo.txt
    ls -aOl foo.txt | grep r | head -n 1
    -rw-r--r-- 1 harry staff hidden 0 18 Aug 19:11 foo.txt
I know you can set or unset the hidden flag of a folder/file by doing chflags hidden foo.txt and chflags nohidden foo.txt. But is there anyway of telling whether the folder/file is currently hidden or not? I don't want to just determine if the folder/file is beginning with a dot.
Tell if a folder/file is hidden in Mac OS X
The answer to you question is filesystem specific. For ext3, for example, have a look at fs/ext3/xattr.c, it contains the following description: 16 /* 17 * Extended attributes are stored directly in inodes (on file systems with 18 * inodes bigger than 128 bytes) and on additional disk blocks. The i_file_acl 19 * field contains the block number if an inode uses an additional block. All 20 * attributes must fit in the inode and one additional block. Blocks that 21 * contain the identical set of attributes may be shared among several inodes. 22 * Identical blocks are detected by keeping a cache of blocks that have 23 * recently been accessed. 24 * 25 * The attributes in inodes and on blocks have a different header; the entries 26 * are stored in the same format: 27 * 28 * +------------------+ 29 * | header | 30 * | entry 1 | | 31 * | entry 2 | | growing downwards 32 * | entry 3 | v 33 * | four null bytes | 34 * | . . . | 35 * | value 1 | ^ 36 * | value 3 | | growing upwards 37 * | value 2 | | 38 * +------------------+ 39 * 40 * The header is followed by multiple entry descriptors. In disk blocks, the 41 * entry descriptors are kept sorted. In inodes, they are unsorted. The 42 * attribute values are aligned to the end of the block in no specific order. 43 * 44 * Locking strategy 45 * ---------------- 46 * EXT3_I(inode)->i_file_acl is protected by EXT3_I(inode)->xattr_sem. 47 * EA blocks are only changed if they are exclusive to an inode, so 48 * holding xattr_sem also means that nothing but the EA block's reference 49 * count can change. Multiple writers to the same block are synchronized 50 * by the buffer lock. 51 */Regarding the "how are attributes connected" question, the link is in the other way round, the inode has a link to the extended attributes, see EXT3_XATTR_NEXT and ext3_xattr_list_entries in xattr.h and xattr.c respectively. To recap, the attributes are linked to the inode and are fs dependent, so yes, you will lose the attributes when burning a CD rom or emailing a file.
I have a small question about extended file attributes. Assume I label my files with metadata in extended attributes (e.g. to account for their integrity - but this does not matter for my question). The questions that arise now:

1. Where are these attributes stored? Surely not in the inode I guess, but in what location - or better: structure?
2. How are these attributes connected to a file? Is there a link from the attribute structure to the inode or so?
3. What happens when copying/moving around files? I just tested it: when moving a file, the file retains its attributes. When copying it, the copy does not have attributes. So I assume when burning it to CD or emailing the file, it will also lose its attributes?
How are extended attributes stored and preserved?
The attributes handled by lsattr/chattr on Linux, some of which can be stored by quite a few file systems (ext2/3/4, reiserfs, JFS, OCFS2, btrfs, XFS, nilfs2, hfsplus...) and even queried over CIFS/SMB (with POSIX extensions), are flags: just bits that can be turned on or off to disable or enable an attribute (like immutable or archive...). How they are stored is file system specific, but generally as a 16/32/64 bit record in the inode. The full list of flags is found on Linux native filesystems (ext2/3/4, btrfs...), though not all of the flags apply to all of those filesystems, and for other, non-native filesystems Linux tries to map them to equivalent features in the corresponding file system. For instance the simmutable flag as stored by OSX on HFS+ file systems is mapped to the corresponding immutable flag in Linux chattr. What flag is supported by what file system is hardly documented at all. Often, reading the kernel source code is the only option. Extended attributes, on the other hand, as set with setfattr or attr on Linux, store more than flags. They are attached to a file as well, and are key/value pairs that can be (both key and value) arbitrary arrays of bytes (though with limitations of size on some file systems). The key can be, for instance, system.posix_acl_access or user.rsync.%stat. The system namespace is reserved for the system (you wouldn't change the POSIX ACLs with setfattr, but rather with setfacl; POSIX ACLs just happen to be stored as extended attributes, at least on some file systems), while the user namespace can be used by applications (here rsync uses it for its --fake-super option, to store information about ownership or permissions when you're not superuser). Again, how they are stored is filesystem specific. See WikiPedia for more information.
What is the relation and the difference between xattr and chattr? I want to know when I set a chattr attribute in Linux what is happening inside the Linux kernel and inode metadata.
Difference between xattr and chattr
Your /etc/resolv.conf is probably a symlink. See this explanation for further information. You could try: chattr +i "$(realpath /etc/resolv.conf)"Does the root mountpoint support Access Control Lists (acl) or Extended Attributes? Check it via: findmnt -fn / | grep -E "acl|user_xattr" || echo "acl or user_xattr mount option not set for mountpoint /"Is your root partition of the type 'VFAT'? I believe 'VFAT' does not support ACLs. Check it via: findmnt -fn / | grep vfatOr maybe your symlink target directory is a tmpfs? ACLs are lost on tmpfs Test it: findmnt -fn $(dirname $(realpath /etc/resolv.conf)) | grep tmpfs && echo $(dirname $(realpath /etc/resolv.conf)) is tmpfscheers
My os: debian9. The filesystem on my disk: $ sudo blkid | awk '{print $1 ,$3}' /dev/sda2: TYPE="ext4" /dev/sda1: TYPE="vfat" /dev/sda3: TYPE="ext4" /dev/sda4: TYPE="ext4" /dev/sda5: TYPE="swap"Now to chattr +i for my /etc/resolv.conf : sudo chattr +i /etc/resolv.conf chattr: Operation not supported while reading flags on /etc/resolv.conf ls -al /etc/resolv.conf lrwxrwxrwx 1 root root 31 Jan 8 15:08 /etc/resolv.conf -> /etc/resolvconf/run/resolv.conf sudo mount -o remount,acl / sudo chattr +i /etc/resolvconf/run/resolv.conf chattr: Inappropriate ioctl for device while reading flags on /etc/resolvconf/run/resolv.confHow to set chattr +i for my /etc/resolve.conf? /dev/sda1 is empty for windows. My debian is installed on /dev/sda2 $ df Filesystem 1K-blocks Used Available Use% Mounted on udev 1948840 0 1948840 0% /dev tmpfs 392020 5848 386172 2% /run /dev/sda2 95596964 49052804 41644988 55% /acl is installed. $ dpkg -l acl Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-==============-============-============-================================= ii acl 2.2.52-3+b1 amd64 Access control list utilities No output info from these findmnt commands: sudo findmnt -fn / | grep -E "acl|user_xattr" sudo findmnt -fn / | grep vfat sudo findmnt -fn $(dirname $(realpath /etc/resolv.conf)) | grep tmpfs
How to set `chattr +i` for my `/etc/resolv.conf `?
NFS doesn't have a concept of immutable files, which is why you get the error. I'd suggest that you just remove write access from everyone instead, which is probably close enough for your purposes. $ > foo $ chmod a-w foo $ echo bar > foo bash: foo: Permission deniedThe main differences between removing the write bit for all users instead of using the immutable attribute:The immutable attribute must be unset by root, whereas chmod can be changed by the user owning the file; The immutable attribute removes the ability to remove the file without removing the immutable attribute, which removing the write bit doesn't do (although you can change the directory permissions to disallow modification, if that is acceptable).If either of these things matter to you when dealing with authorized_keys, you probably have a more fundamental problem with your security model.
I'm trying to secure my authorized_keys file to prevent it from being modified. I run this:

    [root@localhost]# chattr +i authorized_keys
    chattr: Inappropriate ioctl for device while reading flags on authorized_keys

I think it may be due to the filesystem:

    [root@localhost]# stat -f -c %T /home/user/
    nfs

Is there a way to make this work with chattr?
`chattr +i` error on NFS
This doesn’t provide a solution, but it explains why chattr can’t make a symlink immutable. On Linux, immutable attributes are part of a set of flags which are controlled using the FS_IOC_SETFLAGS ioctl. Historically this was implemented first in ext2, and chattr itself is still part of e2fsprogs. When it attempts to retrieve the flags, before it can set them, chattr explicitly checks that the file it’s handling is a regular file or a directory: if (!lstat(name, &buf) && !S_ISREG(buf.st_mode) && !S_ISDIR(buf.st_mode)) { goto notsupp; }One might think that removing these checks, or changing them to allow symlinks too, would be a good first step towards allowing chattr to make a symlink immutable, but the next hurdle comes up immediately thereafter: fd = open (name, OPEN_FLAGS); if (fd == -1) return -1; r = ioctl (fd, EXT2_IOC_GETFLAGS, &f);ioctl operates on file descriptors, which means the target has to be opened before its flags can be set. Symlinks can’t be opened for use with ioctl; while open supports O_NOFOLLOW and O_NOPATH on symlinks, the former on its own will fail with ELOOP, and the latter will return a file descriptor which can’t be used with ioctl.
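To make the open()-then-ioctl() sequence concrete, here is a rough sketch of what lsattr effectively does for a regular file or directory; a symlink never gets this far, because it cannot be opened in a way that ioctl() will accept:

    /* getflags.c - read the inode flags of a regular file or directory,
     * roughly what lsattr does.  Needs linux/fs.h for FS_IOC_GETFLAGS. */
    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        /* For a symlink this open() either follows the link, or (with
         * O_NOFOLLOW) fails with ELOOP, so the link's own flags are
         * unreachable from here. */
        int fd = open(argv[1], O_RDONLY | O_NONBLOCK);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        int flags = 0;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == -1) {
            perror("FS_IOC_GETFLAGS");
            close(fd);
            return 1;
        }

        printf("flags: 0x%x%s\n", flags,
               (flags & FS_IMMUTABLE_FL) ? " (immutable)" : "");

        close(fd);
        return 0;
    }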
How can we lock a symlink so it cannot be deleted? With a normal file/directory chattr +i /file/location can achieve this but doing so with a symlink we get chattr: Operation not supported while reading flags on my-file. There is a similar question, How to set `chattr +i` for my `/etc/resolv.conf `?, but without a solution that could be applied here.
How to make a symlink read only (`chattr +i /location/symlink`)?
This is related to capabilities thing: chattr requires CAP_LINUX_IMMUTABLE which is disabled in docker by default. Just add --cap-add LINUX_IMMUTABLE to docker container start options to enable it. Here's an example: user@test:~$ docker run --cap-add LINUX_IMMUTABLE -it bash bash-5.0# cd home bash-5.0# touch test bash-5.0# apk add e2fsprogs-extra fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz (1/6) Installing libuuid (2.33-r0) (2/6) Installing libblkid (2.33-r0) (3/6) Installing libcom_err (1.44.5-r0) (4/6) Installing e2fsprogs-libs (1.44.5-r0) (5/6) Installing e2fsprogs (1.44.5-r0) (6/6) Installing e2fsprogs-extra (1.44.5-r0) Executing busybox-1.29.3-r10.trigger OK: 15 MiB in 24 packages bash-5.0# chattr +i test bash-5.0# echo $? 0Here you can read more about linux capabilities in docker.
I'm ssh'ed into a local Centos 7 docker container* and I'm trying to run sudo chattr +i file1but I'm getting an error: chattr: Operation not permitted while setting flags on file1What's going on here? What flags is it talking about? Is there a workaround? Changing the +i to +a also makes the command fail with that error, but when I change it to +d the command succeeds. The command also succeeds for me when I'm not ssh'ed into a docker container. *I'm running the Centos 7 docker container in a Ubuntu VirtualBox VM host on top of Windows 10 (I'd like to avoid having to deal with Windows as much as possible). The ultimate goal of all of this is to test some Ansible scripts using these containers.
In docker, "chattr: Operation not permitted while setting flags on file"
xattr -d requires you to specify which attribute you want to remove. You can find this out by listing the file attributes by passing ls the -@ flag as in: ls -l@ filenameOnce you know what the attribute is, you can target it for removal with -d or you can use the following to clear all attributes: xattr -c filename
I downloaded a .pem file and my Mac OS X (10.8.2) added an @ sign at the end of the file permissions. This is causing file permission issues. I can't seem to remove the quarantine flag. I even tried the command xattr -d <filename>.pem but that didn't work
how to remove quarantine from file permissions in os x
The a attribute means that the file is append-only: you can't overwrite it or delete it, only append data to it. This is explained in the chattr man page. Only root can remove the attribute. The practical consequence is that you can't erase your old history lines. This is presumably intended as a security measure by your system administrator. I'm not completely convinced it's secure, but off the top of my head I can't think of a way to remove some of the file's contents. (It is however easy to bypass the file and run commands without their showing in the history, which is why it's not a particularly useful security measure against competent users — an obvious way being to run the commands from something other than bash.).
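If you are curious what the append-only flag allows at the system-call level, a small sketch (the path is just an example) can probe the different open modes; on an append-only file, only the O_APPEND variant should succeed, while the others should fail with "Operation not permitted":

    /* appendonly_probe.c - probe how an append-only file reacts to open(2). */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void try_open(const char *path, int flags, const char *label)
    {
        int fd = open(path, flags);
        if (fd == -1) {
            printf("%-24s -> %s\n", label, strerror(errno));
        } else {
            printf("%-24s -> ok\n", label);
            close(fd);
        }
    }

    int main(void)
    {
        const char *path = ".bash_history";   /* example path */

        try_open(path, O_WRONLY | O_APPEND, "O_WRONLY|O_APPEND");
        try_open(path, O_WRONLY,            "O_WRONLY (overwrite)");
        try_open(path, O_WRONLY | O_TRUNC,  "O_WRONLY|O_TRUNC");
        return 0;
    }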
I was trying to remove some old history from my .bash_history file, but I was receiving this message: [john ~] /home/john $ mv .bash_history .bas mv: impossible to move `.bash_history' to `.bas': Operation not permitedI suspected the file/directory permission: [john ~] /home/john $ ls -ld .bash_history . drwxrwx--T+ 5 root john 4096 Out 11 19:45 . -rw-r--r-- 1 john john 2977 Out 10 14:36 .bash_history [john ~] /home/john $Then I tried: [john ~] /home/john $ lsattr .bash* -----a------- .bash_history ------------- .bash_logout ------------- .bash_profile ------------- .bashrc [john ~] /home/john $Probably it is this a attribute; what does it mean?
What does the 'a' attribute in lsattr mean?
For ext4 (I can't speak for BtrFS), storing small xattrs fit directly into the inode, and do not affect path resolution or directory iteration performance. The amount of space available for "small" xattrs depends on what size the inodes are formatted as. Newer ext4 filesystems use a default inode size of 512 bytes, older ext4 filesystems used 256 bytes, less about 192 bytes for the inode itself and xattr header. The rest can be used for xattrs, though typically there are already xattrs for SELinux and possibly others ("getfattr -d -m - -e hex /path/to/file" will dump all xattrs on an inode). Any xattrs that do not fit into this space will be stored in an external block, or if they are larger than 4KB and you have a new kernel (4.18ish or newer) they can be stored in an external inode. It is possible to change the inode size at format time with the "mke2fs -I <size>" option to provide more space for xattrs if xattr performance is important for your workload (e.g. Samba).
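If you want to inspect this programmatically rather than with getfattr, a small sketch using the raw Linux syscall wrappers from <sys/xattr.h> lists the attribute names present on a file:

    /* lsxattr.c - list extended attribute names on a file. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        char list[4096];                       /* enough for typical inodes */
        ssize_t len = listxattr(argv[1], list, sizeof list);
        if (len == -1) {
            perror("listxattr");
            return 1;
        }

        /* The buffer holds a sequence of NUL-terminated attribute names. */
        for (ssize_t off = 0; off < len; off += strlen(list + off) + 1)
            printf("%s\n", list + off);

        return 0;
    }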
I imagine that adding n xattrs of length l to f files and d directories may generate costs in:

- storage
- path resolution time / access time?
- iteration over directories? (e.g. a recursive find over a fresh-after-reboot, not-yet-cached filesystem?)

I wonder what those costs are. E.g. would tagging all files significantly impact storage and performance? What are the critical values below which it's negligible, and above which it starts hammering the filesystem? For such an analysis it would obviously be nice to consider the limits of xattrs, i.e. how many and how big the xattrs we can put on different filesystems can be. (Feel free to include details about filesystems other than ext4 and btrfs if you find it handy - thank you.)
What are costs of storing xattrs on ext4, btrfs filesystems?
Yes it is expected behaviour. I don't have a document that says it but you can see in this patch from 2007When a file with posix capabilities is overwritten, the file capabilities, like a setuid bit, should be removed. This patch introduces security_inode_killpriv(). This is currently only defined for capability, and is called when an inode is changed to inform the security module that it may want to clear out any privilege attached to that inode. The capability module checks whether any file capabilities are defined for the inode, and, if so, clears them.security_inode_killpriv is still in the kernel today, being called from notify_change when an inode is changed in "response to write or truncate": see dentry_needs_remove_privs /* Return mask of changes for notify_change() that need to be done as a * response to write or truncate... */ int dentry_needs_remove_privs(struct dentry *dentry)
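As background, file capabilities live in the security.capability extended attribute, so another way to watch them disappear is to check for that attribute before and after modifying the file. A rough sketch:

    /* capxattr.c - report whether a file carries the security.capability
     * extended attribute, which is where setcap stores file capabilities. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        /* Passing a NULL buffer and size 0 just asks for the attribute's size. */
        ssize_t len = getxattr(argv[1], "security.capability", NULL, 0);
        if (len >= 0)
            printf("%s: security.capability present (%zd bytes)\n", argv[1], len);
        else if (errno == ENODATA)
            printf("%s: no file capabilities set\n", argv[1]);
        else
            perror("getxattr");

        return 0;
    }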
When I modify a file, the file capabilities I had set earlier are lost. Is this the expected behavior? I first set a file capability: $ setcap CAP_NET_RAW+ep ./test.txt $ getcap ./test.txt ./test.txt = cap_net_raw+epAs expected I found the file capability is set. Then I modify the file. $ echo hello >> ./test.txtNow when I check the file capabilities, no capabilities are found. $ getcap ./test.txt
Linux File Capabilities are lost when I modify the file. Is this expected behavior?
I got a bit curious reading this questoion, so let’s do some “forensics”: First trying the opposite: How is åäöåä encoded in Base64? $ echo åäöåä | base64 w6XDpMO2w6XDpAo=This clearly looks a lot like the 0sw6XDpMO2w6XDpA== that you’ve got. There’s an extra 0s at the beginning, and the end doesn’t exactly match. Suppressing the newline at the end of åäöåä (automatically inserted by echo), we get: $ echo -n åäöåä | base64 w6XDpMO2w6XDpA==This is exactly the user.xdg.comment-value except the 0s at the start. Conclusion The comment is Base64 encoded and prefixed with 0s, and testing a few other strings confirms this. Example: $ ./set-comment xyz 日本語 # file: xyz user.xdg.comment=0s5pel5pys6Kqe$ base64 -d <<<'5pel5pys6Kqe' ; echo 日本語(where the ; echo is to not mess up the next prompt since the output of base64 does not end in a new-line.) However... This just shows that in these cases (where the comment is non-ASCII), it gets encoded in Base64 and prefixed with 0s. The “real” answer After doing this I got the splendid idea of checking the man-page for getfattr and it mentions, among other things: Regarding th option -e en, --encoding=enEncode values after retrieving them. Valid values of en are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), while strings encoded as hexidecimal and base64 are prefixed with 0x and 0s, respectively.So, changing your script to: (File set-comment:) #!/bin/sh test "$2" && setfattr -n user.xdg.comment -v "$2" "$1" getfattr -e text -d -m '^user.xdg.comment$' "$1"will always print the attribute as text, giving, for example: $ ./set-comment xyz åäöåä # with fixed script # file: xyz user.xdg.comment="åäöåä"However, there is still some caveats left... like: $ ./set-comment xyz 0x414243 # file: xyz user.xdg.comment="ABC"and $ ./set-comment xyz 0s5pel5pys6Kqe # file: xyz user.xdg.comment="日本語"where the output doesn’t match the input. These can be fixed by “massaging” the argument into a form that setfattr likes. See man setfattr.
I have written a short shell script that simply wraps setfattr in a slightly more convenient form for setting the extended attribute that corresponds to a free-text comment: #!/bin/sh test "$2" && setfattr -n user.xdg.comment -v "$2" "$1" getfattr -d -m '^user.xdg.comment$' "$1"For storing US ASCII comments as xattrs, this works great. However, if I try to set a comment that contains non US ASCII characters, it gives me back what appears to be Base64 encoded data: $ touch xyz $ set-comment xyz åäöåä # file: xyz user.xdg.comment=0sw6XDpMO2w6XDpA== $ But it isn't just Base64: $ printf "0sw6XDpMO2w6XDpA==" | \base64 --decode ��:\:L;l:\:@base64: invalid input $ Most of the time, I get just random-looking garbage back. Some times, like this, the Base64 decoder throws "invalid input" back at me. What is this string? What is its relationship to the original input value? How do I go from what getfattr gives me back to the original input value (such as åäöåä in this case)? setfattr --version on my system responds with setfattr 2.4.46. I'm running the version packaged by Debian Wheezy. In the unlikely event that it matters, I'm running ZFS On Linux 0.6.3 (saw the same behavior with 0.6.2 as well) on the stock Wheezy kernel.
What is this seemingly base64 data set by setfattr?
I believe Mark Cohen’s comment is correct: this functionality seems to be absent from the coreutils version of ls. I didn’t actually have a good reason to be using coreutils ls, so I’ve switched back to the built-in BSD version.
I have coreutils installed via MacPorts on my Mac running OS X 10.8.4. I have ls set to use the coreutils version of ls [(GNU coreutils) 8.21] when available: if [ -e /opt/local/libexec/gnubin ]; then alias ls='/opt/local/libexec/gnubin/ls --color=auto' else alias ls='/bin/ls -G' fiWhen I run ls -l in a directory with files known to have extended attributes (xattrs), I expect to see an @ sign after the permissions in those listings. However, I see no @ sign. If I run /bin/ls -l, I get the @ sign. File listing from /bin/ls -l: -rw-r--r--@ 1 zev.eisenberg staff 132887 Jul 19 16:24 flowchart.graffleFile listing from ls -l (using coreutils): -rw-r--r-- 1 zev.eisenberg staff 132887 Jul 19 16:24 flowchart.graffleHow can I get the coreutils version of ls to show me the @ sign when xattrs are present?
See extended attributes with coreutils ls on Mac
It makes sense to have a look at the man page of the programs you use. From chattr(1):

BUGS AND LIMITATIONS
The 'c', 's', and 'u' attributes are not honored by the ext2 and ext3 filesystems as implemented in the current mainline Linux kernels.

This is not supposed to mean "ext4 works", I guess.
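To make this concrete — these commands are a sketch added here, not part of the original answer — you can verify that ext4 merely stores the flag without acting on it, and then get actual transparent compression from a filesystem that implements it, such as btrfs (the device and mount point names are reused from the question purely for illustration):

# ext4: the 'c' flag is recorded, but never acted on
lsattr -d /mnt/compressed/some/txts     # shows the 'c' flag...
du -sh /mnt/compressed/some/txts        # ...yet on-disk usage matches the file sizes

# btrfs does implement transparent compression, via a mount option:
mkfs.btrfs /dev/test/compressed
mount -o compress=zlib /dev/test/compressed /mnt/compressed
cp /some/txts/* /mnt/compressed/
sync
df -h /mnt/compressed                   # usage should now reflect the compressed data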
I'm trying to keep a bunch of plain text files compressed using the file attribute option c (chattr +c) on a Debian ppc64 system. I ran the following commands:

# mkfs.ext4 /dev/test/compressed
# mount /dev/test/compressed /mnt/compressed/
# mkdir /mnt/compressed/some/txts/
# chattr +c /mnt/compressed/some/txts/
# df -l
# cp /some/txts/* /mnt/compressed/some/txts/
# sync
# df -l

To my surprise, the output of df -l tells me the files I copied weren't compressed at all. I also tried to mount the test file system with the option user_xattr, and I tried creating it with mkfs.ext4dev, but neither worked. I also checked the output of lsattr /mnt/compressed/some/txts/; every line has a c in it.

Did I miss something? How come the c attribute doesn't work as expected?
What does the command "chattr +c /some/dir/" do?
In case anyone runs into the same issue as I did in the future: unison doesn't work with extended file attributes. One way to work around it is the copyprog + copythreshold=0 hack (see the profile in the original question), but this doesn't solve the problem of unison not noticing changes in xattrs. As I mentioned in one of the comments, even changing the modification time of the file won't make unison sync modified xattrs. Not only that, it will even erase them the next time that file has its content changed.

The only way I could get bi-directional syncing with extended file attributes to work was to use bsync, modify it by adding the -X flag to its rsync arguments, AND change the modification time of the file. This is far from the ideal solution: changing the modification time of the file, no Windows support, a Python 3 dependency, the last commit was last year, etc., but that's the only software I found that does the job.
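To show roughly what that workflow looks like in practice — this is only a sketch with made-up file names, and the exact line to patch inside bsync will differ between versions:

# 1. Inside bsync, add -X to the options it passes to rsync so xattrs are copied,
#    i.e. wherever its rsync arguments are built, "-a" becomes "-aX".

# 2. After editing an xattr, bump the file's mtime so the change gets picked up:
setfattr -n user.xdg.comment -v "new comment" somefile.txt
touch somefile.txt

# 3. Then run the bi-directional sync as usual (local directories shown here;
#    a remote replica would use bsync's normal ssh-style syntax):
bsync /path/to/dir /path/to/replica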
I have two machines, one Debian, one Ubuntu, both on ext4 with extended file attributes enabled in fstab. getfattr and setfattr are installed and work perfectly on both machines locally. However, unison (version 2.40.102) doesn't sync extended file attributes by default. I googled around and found this blog post with profile settings that are supposed to enable extended attribute sync. So, I changed my profile, and now it looks something like this:

root=/path/to/dir
root=ssh://user@host//path/to/dir2
auto=true
batch=true
perms=0
rsync=true
maxthreads=1
retry=3
confirmbigdeletes=false
copythreshold=0
copyprog = rsync -aX --rsh='ssh -p 22' --inplace --compress
copyprogrest = rsync -aX --rsh='ssh -p 22' --partial --inplace --compress
copyquoterem = true
copymax = 1

This profile syncs extended attributes for new files, but when I change extended attributes on a file that has already been synced and run unison, I get:

Nothing to do: replicas have not changed since last sync.

Everything else syncs perfectly, but unison is unaware of the changes in extended attributes. I also tried disabling fastcheck, hoping it would make unison check the files in more detail; that didn't work. I tried rsync'ing in one direction and it worked perfectly, but I need bi-directional syncing, so I'm stuck with unison. I have looked through the official manual, but it only mentions extended file attributes in passing.

So my question is this: can this be done with unison? Am I missing something simple here? Alternatively, are there other open source tools that can achieve this? (I'm aware of bsync and bitpocket, but in my preliminary tests they also fail to notice extended file attribute changes.)
Unison and extended file attributes