output | input | instruction
---|---|---|
The file /etc/mtab is written by the mount and umount commands. Keeping it accurate requires a bit of work because they can only update /etc/mtab if that file is available and writable.
For the usual case where /etc is mounted read-write at some point during boot, distributions set up a script that rewrites /etc/mtab during startup, as soon as the root partition has been mounted read-write. This is necessary in case the system was shut down without unmounting everything (e.g. due to a system crash or power failure).
In your case, where /etc is on overlayfs, either the startup script runs at the wrong time when /etc is still read-only, or it doesn't support the case of an overlay root. So if you want to keep /etc/mtab as a regular file, you'll have to tweak this script or the time when it's executed.
But you probably don't need to do this. A common setup is to have /etc/mtab be a symbolic link to /proc/mounts. The two files contain mostly the same information with mostly the same syntax; from the point of view of applications that read them, they're compatible. Since /proc/mounts reflects the current kernel information, it is always up to date, and the mount and umount commands won't touch it.
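A minimal sketch of setting that up (run as root; many distributions instead point the link at ../proc/self/mounts, which is equivalent for this purpose):

rm /etc/mtab
ln -s /proc/mounts /etc/mtab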
The downside of /proc/mounts compared with /etc/mtab is that it shows information (especially mount options) as printed back by the kernel, rather than the exact parameters passed to the mount command. So a little information is lost. That information is rarely useful though.
|
I'm running a read-only filesystem on a Raspberry Pi. So far everything works fine, until I tried to mount /var as overlayfs for nginx and other services to work, using this:
VAROVRL="-o lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work"
mount -t overlay ${VAROVRL} overlay /var

While this is working and all services start with no issues, I noticed that the mount command outputs only the overlay mount, and it gets duplicated every time I reboot.
after 3 reboots:
mount
overlay on /var type overlay (rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work)
overlay on /var type overlay (rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work)
overlay on /var type overlay (rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work)

output of /etc/mount:
/dev/root / ext4 ro,relatime,data=ordered 0 0
devtmpfs /dev devtmpfs rw,relatime,size=469532k,nr_inodes=117383,mode=755 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
tmpfs /tmp tmpfs rw,relatime,size=102400k 0 0
/dev/mmcblk0p1 /boot vfat ro,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 0 0
/dev/mmcblk0p5 /mnt/persist ext4 rw,relatime,data=ordered 0 0
/dev/mmcblk0p6 /mnt/cache ext4 rw,relatime,data=ordered 0 0
/dev/mmcblk0p7 /mnt/osboot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 0 0
/dev/mmcblk0p8 /mnt/osimage ext4 rw,relatime,data=ordered 0 0
/dev/mmcblk0p9 /mnt/userdata ext4 rw,relatime,data=ordered 0 0
overlay /etc overlay rw,relatime,lowerdir=/etc,upperdir=/mnt/persist/etc-rw,workdir=/mnt/persist/etc-work 0 0
overlay /var overlay rw,relatime,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

output of /etc/mtab:
overlay /var overlay rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0
overlay /var overlay rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0
overlay /var overlay rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0

Note that /etc is also mounted as overlayfs but does not generate this problem when it's the only overlay mount.
Can anyone spot something I'm doing wrong here?
| mounting /var as overlayfs |
You can use copy-on-write disk images to get something like what you want.
Let's assume we have a disk image windows-base.img. We'd like to use this as the "lower" image, and create multiple "upper" clones from it so we can create multiple virtual machines that start with the same base configuration.
If you're working with libvirt under Linux, you can do something like this:
virt-install --disk pool=default,size=40,backing_store=windows-base.img,backing_format=raw ...

The backing_store option creates a new copy-on-write clone of the named image. Initially, this will have the same content as windows-base.img and will consume effectively zero space, but it will grow over time as the virtual machine modifies disk blocks.
This article explores the process in more detail.

If you're not using libvirt (e.g., if you're running qemu directly), you can do the same thing manually using qemu-img; that looks something like:
qemu-img create -F raw -b windows-base.img -f qcow2 windows-1.qcow2 40g
|
I've searched the web, read Where does KVM hypervisor store VM files?, and consider my idea not doable by standard means; still, maybe it is, or can somebody advise some trick/workaround?
I want to have the main VM file(s) of, say, an installed Windows in KVM (or another VM system, if you know how to do the below for something other than KVM), store them as a "lower"/"main" set of files, and be able to add programs by storing all additions/changes to the filesystem in a separate "upper" location (like Linux overlayfs). TIA
That would allow storing many similar but different installations of a client system in a much more compact way.
| Any way to make Virtual Machine (e.g. KVM) use overlayfs like system to have main set of file(s) and additons separately? |
Well, I didn't realize that you can choose a specific size in the mount options when you are mounting a tmpfs; from the tmpfs manpage:
Mount options
The tmpfs filesystem supports the following mount options:

size=bytes
Specify an upper limit on the size of the filesystem. The size is given in bytes, and rounded up to entire pages. The size may have a k, m, or g suffix for Ki, Mi, Gi (binary kilo (kibi), binary mega (mebi), and binary giga (gibi)). The size may also have a % suffix to limit this instance to a percentage of physical RAM. The default, when neither size nor nr_blocks is specified, is size=50%.

So replacing line 86 of my script with this:
mount -t tmpfs -o size=100% tmpfs /upper

The system doesn't report problems with the free space anymore.
|
I'm trying to mount the rootfs / of a Debian Buster system as overlayfs because I'm interested in using tmpfs for the /upper directory. My idea is to use this to preserve the root filesystem's integrity by making it fake-writable. I know there are a few packages intended to do this, like fsprotect and bilibop-lockfs; however, I think the former is maybe a little outdated and the latter seems more promising, but both use aufs. I'd also like to learn about the initrd, early user space and the Linux boot process, so maybe in the future I'll consider trying bilibop-lockfs.
Anyway ... my script is based on the current raspi-config script; as you can see I'm basically adding the very same script as an initramfs module and rebuilding, then this module is being triggered when boot=overlay is passed as a kernel command line parameter. This script apparently does the work of mounting the rootfs as an overlayfs, however ... I'm having problems with the following; as you can see in the df -h output, it shows the size is just 3.9G
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 781M 17M 764M 3% /run
overlay 3.9G 1.2G 2.7G 30% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mmcblk1p2 236M 96M 123M 44% /boot
/dev/mmcblk1p1 511M 5.2M 506M 2% /boot/efi
/dev/mmcblk0p1 58G 811M 54G 2% /data
tmpfs 781M 0 781M 0% /run/user/1001

And some programs are having problems with this size, because when they have been running for a while they start to print "no space left on device" in the journal logs. My question is: what's specifying this size? I cannot see anything about the size in the overlay script. Could I set a bigger size to give a wider margin for those programs?
Thank you all.
| How to control the OverlayFS size |
lxc.hook.pre-mount gets executed before the rootfs gets loaded:
lxc.hook.pre-mount = /var/lib/lxc/container0/mount-squashfs.sh
lxc.rootfs.path = overlayfs:/var/lib/lxc/container0/rootfs:/var/lib/lxc/container0/delta0

And in the mount script:
#!/bin/bash
mount -nt squashfs -o ro /var/lib/lxc/container0/rootfs.sqsh /var/lib/lxc/container0/rootfs
|
We are using a CentOS LXC container with the rootfs contained in a squashfs filesystem. I really like the fact that a user cannot edit the rootfs from the host.
During development, developers would in fact like to make changes to the filesystem, and I'd like to move to an overlayfs. But I notice that although the upper layer can be used to make changes on top of the lower layer, it is also possible to change the lower-layer rootfs by simply editing the files on the host. How can I prevent this?
| LXC Container with Overlayfs/Squashfs |
"On first glance it seems like I can only have one writable location and the other locations are just there to provide the files they have."

This is correct: OverlayFS only supports one writable layer at the top. As such, I'd say it's not really suitable for the use case you describe.

"The goal is to make it so I can use all the storage as one giant location."

I would say RAID (such as RAID 0 for just striping, or higher levels for redundancy that can withstand disk failures) or a volume manager (such as LVM, which can concatenate disk volumes and can also do striping) are the typical solutions for the problem you describe.
Though you mentioned:

"All hard drives are formatted with ext4."

And that's not how these solutions work; they work on block devices such as disk partitions, so you'd end up creating a single filesystem (ext4 or otherwise) on top of the LVM logical volume or RAID device instead.
I'd still recommend using one of these two solutions, since they were made specifically for the use case you describe and they're really stable (having been around for a long time and used in many mainstream products and enterprise deployments.)
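As a rough sketch of the LVM route (the device names /dev/sdb1 and /dev/sdc1, the volume names and the mount point are placeholders, and note that this creates a fresh filesystem, i.e. it does not preserve the data already on the disks):

# Turn each partition into an LVM physical volume (this destroys existing data!)
pvcreate /dev/sdb1 /dev/sdc1
# Group them into one volume group
vgcreate storage /dev/sdb1 /dev/sdc1
# Create a logical volume spanning all the free space in the group
lvcreate -n bigvol -l 100%FREE storage
# Put a single ext4 filesystem on top and mount it
mkfs.ext4 /dev/storage/bigvol
mount /dev/storage/bigvol /mnt/storage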
|
I've been looking into using OverlayFS. I'd like to be able to combine a bunch of already formatted, already containing data, hard drives. All hard drives are formatted with ext4.
The goal is to make it so I can use all the storage as one giant location. I currently make use of MergerFS so all the files contained do not overlap anywhere. I would like to move away from MergerFS because I had some issues and OverlayFS seems to be supported in the upstream kernel itself.
But I'm not sure how to configure OverlayFS to do this, is it even possible? On first glance it seems like I can only have one writable location and the other locations are just there to provide the files they have.
| How can I use OverlayFS to "combine" multiple storage into one? Is this possible? |
Once a file is open, it stays open until the process closes it. Reading and writing from a file don't care whether the file is still available under its original name. The file may have been renamed, deleted, shadowed… it's still the same file.
If you open a file /somewhere/somefile then mount a filesystem at /somewhere, then after that point /somewhere/somefile will designate a file on the new filesystem. But that only matters when you open /somewhere/somefile. If a process already had /somewhere/somefile open on the original filesystem, that doesn't change.
Redirection opens a file when the shell processes the redirection operator. After that, it keeps being the same open file, even if multiple processes are involved. For example, in the snippet below, program2 writes to the file at /mnt/log.txt on the root partition, whereas program3 writes to /log.txt on /dev/something.
do_stuff () {
program1
mount /dev/something /mnt
program2
program3 >/mnt/log.txt
}
do_stuff >/mnt/log.txt

If you've already started a program and you want to change where its output goes, you need to ask the program to do it. In theory, you can force the program to do it using a debugger; this thread lists some programs that can do it. But getting down and dirty with a program like this can crash it.
If you really need to change where the program's output goes midway, you need to relay the output through a helper that is able to change its own output. Here's a tiny proof-of-concept in a shell script:
my_background_process | {
while IFS= read -r line; do
printf '%s\n' "$line" >>/mnt/log.txt
done
} &

This script opens the output file for each line, so if the file designated by /mnt/log.txt changes (because the file has been moved, because a new filesystem has been mounted at /mnt, etc.) then the subsequent lines will be written to the new file. Note that you need to specify the name of the directory: with just >log.txt, this would always open the file in the current directory, so it wouldn't be affected by a mount operation (the current directory works like an open file: mounting something on /mnt doesn't affect what processes see as their current directory, even if their current directory is /mnt).
|
Or should I close the file, then mount the overlay, and then re-open the file again?
i.e.
#!/bin/bash
my_background_process >log.txt &
...
pkill my_background_process
mount overlay
my_background_process >>log.txt &
...

Is it necessary?
| Does overlayfs redirect opened files automatically on the fly? |
Transform the input into the required syntax and splice it into the command line with a command substitution.
dirs_with_photos="$(<~/dirs_with_photos.txt tr '\n' :)"
if [ -n "$dirs_with_photos" ]; then
unionfs-fuse "${dirs_with_photos%:}" /photos
fi

With mount_unionfs you need to issue one mount command per directory. You can use a loop around the read builtin.
while IFS= read -r dir; do
mount_unionfs "$dir" /photos
done <~/dirs_with_photos.txt
|
Is it possible to feed the branch paths from stdin to the mount (or mount_unionfs) command, instead of supplying them as arguments or from a file?
cat ~/dirs_with_photos.txt | mount -t unionfs

I don't want to use /etc/fstab, because ideally I want to automatically generate these txt files dynamically, such as with a cron job:
@weekly find $HOME -type d -iname "*photos*" > ~/dirs_with_photos.txt
| Mount unionfs (or aufs) branches fed from stdin? |
It turned out that it was not httpd that was in a slave namespace, but my entire X session with all graphical terminals.
So I removed PrivateTmp=yes from sddm.service, and now the Apache process can see my mounts.
|
I have mounted an overlay fs:
overlay on /srv/www/site type overlay (rw,relatime,lowerdir=/srv/www/site_orig,upperdir=/srv/www/site_custom,workdir=/srv/www/overlay_workdir)

I can see and edit files under /srv/www/site, but Apache shows that the dir is empty.
I tried to direct apache's doc root to /srv/www/site, to /srv/www; tried to remove PrivateTmp option from systemd service, but that didn't help.
sudo -u http /srv/www/site

works - I can see the files. (Apache runs as the http user.)
Permissions are correct.
cat /proc/<apache_pid>/mountinfo

doesn't show this mount.
Kernel 4.14.7
| Apache does not see files under overlay mount |
I'll try to address your first question regarding the storage device durability as I'm a bit familiar with that.
Switching from SD to eMMC might not improve the situation if you don't do an assessment of your system's storage usage and take action to improve things, because both SD and eMMC use NAND.
Do you have an estimate of data writes to your storage?
Use the following to evaluate your use case [see [1] for details]
total bytes written throughout device life = (device capacity in bytes) * (max program/erase cycles) / (write amplification factor)

Say, for example:
- you write 0.5GiB per day
- you want your device to operate for 5 years
- the partitions you write data to total 4GiB (storage capacity is more than this, but the other partitions are read-only)
- max program/erase cycles is 3000 for your multi-level cell (MLC) NAND

This gives you a write amplification factor of
4 * 3000 / (0.5 * 365 * 5) = ~13
What is write amplification
NAND in the SD or eMMC is written in NAND pages. Suppose you write/modify 1KiB (two 512-byte sectors) from the host, but say NAND page is 16KiB. So, the eMMC controller will write a whole NAND page.
Things get more complicated when you think of erasures, because NAND is erased in NAND blocks, and a NAND block consists of many NAND pages.
So, what can you do to improve device life
From the above equation, you can:
- increase device capacity (but that'll add to the cost)
- improve program/erase cycles: go for SLC, or turn your data-write partitions from MLC to pSLC (but this reduces the capacity)
- reduce write amplification by improving your apps to perform NAND-page-aligned, NAND-page-sized (or multiples) writes from the host (see eMMC EXT_CSD[265] optimal write size), enabling the eMMC cache, etc.

What else can you do
You can monitor your eMMC health using mmc-utils (https://git.kernel.org/pub/scm/utils/mmc/mmc-utils.git) or sysfs, and take the necessary steps before a failure comes as a surprise.

The eMMC extended CSD register provides:
- an estimate for the life time of SLC and MLC blocks in steps of 10% (0x01 = 0-10% device life time used, 0x02 = 10-20%, .., 0x0B = end of life); type-B (MLC): EXT_CSD[268], type-A (SLC): EXT_CSD[269]
- the status of the spare blocks that are used to replace bad blocks (0x01: Normal, 0x02: Warning: 80% of blocks used, 0x03: Urgent: 90% of blocks used); Pre EOL info: EXT_CSD[267]
- possibly a proprietary vendor health report in EXT_CSD[301:270] (but so far, I have only seen all zeros here)

e.g.
mmc-utils:
# mmc extcsd read /dev/mmcblk0
:
eMMC Life Time Estimation A [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A]: 0x01
eMMC Life Time Estimation B [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B]: 0x00
eMMC Pre EOL information [EXT_CSD_PRE_EOL_INFO]: 0x01
:

sysfs:
# cat /sys/block/mmcblk0/device/life_time
0x00 0x01
# cat /sys/block/mmcblk0/device/pre_eol_info
0x01

A vendor may also provide health-related information that you can access with the mmc generic command CMD56 (using mmc-utils: mmc gen_cmd read <device> [arg]).

See the following for a good explanation:
[1] https://www.kingston.com/en/embedded/emmc-embedded-flash
'Estimating, validating & monitoring eMMC life cycle'
|
In the past our company used raspberry pi's for our IOT application.
The problem with that was that SD cards wear out and get corrupt.
We now ordered Compulab SBC's with eMMC storage running Debian.
So what would be the best practices to configure durable embedded IOT devices?
I would say:
- Choose an SBC with eMMC storage
- Make sure you have a journaling filesystem (has_journal is enabled on EXT4)
- Write logs to RAM to prevent wear on storage (in /etc/systemd/journald.conf: Storage=volatile)
- Ensure fsck runs at boot (in /etc/fstab the last field is set to 1 or 2)
- Swap should be disabled (run free -> total Swap should be 0)

Any more suggestions?
Overlay file system
Raspbian has an option in 'raspi-config'->'Performance Options'->'Overlay File System'
I asked Compulab if they would recommend also using it, but they think it is already as robust as it can be with filesystem journaling and fsck that runs at boot.
Would using an Overlay File System to prevent writes to storage be worth the extra complexity of needing to reboot the device multiple times to disable it and enable it again if you ever want to update it later?
| What are the best practices for configuring durable IOT Linux devices? Should I use an Overlay File System? |
I've decided that this is a bug. As such, I've lodged a bug report with Debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=896646
Oops! Turns out I was doing it wrong! As noted in response to the Debian bug:

overlayfs is behaving as documented. The documentation (filesystems/overlayfs.txt) says: "The specified lower directories will be stacked beginning from the rightmost one and going left. In the above example lower1 will be the top, lower2 the middle and lower3 the bottom layer."

In your example this means that "layer1.upper" is the lowest layer, and its whiteout is overridden by the file in "base" which is on top of it. I think you just need to swap the order of these directories in the mount options.

I had read that doc, but missed the "right to left" bit!
I can confirm, that when done correctly (i.e. swapped the order so it is right to left) it works as expected.
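For reference, the corrected remount for the example in the question below (not spelled out in the original answer) simply swaps the lowerdir order, so that layer1.upper is the leftmost, i.e. topmost, lower layer:

mount -t overlay overlay -o lowerdir=$(pwd)/layer1.upper:$(pwd)/base,upperdir=$(pwd)/layer2.upper,workdir=$(pwd)/layer2.work layer2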
|
On Debian Stretch (running as root) this the current behaviour:
# Create base directory
mkdir base
touch base/example

# Create merge, upper and work directories for 2 layers
mkdir layer1 layer1.upper layer1.work
mkdir layer2 layer2.upper layer2.work

# Mount layer1 as the merged directory using layer1.upper as the true upper layer,
# with base as a lower layer and layer1.work as the necessary work directory
mount -t overlay overlay -o lowerdir=$(pwd)/base,upperdir=$(pwd)/layer1.upper,workdir=$(pwd)/layer1.work layer1
ls layer1 # should show example as expected
ls layer1.upper # shows no file (this is expected behaviour, it should only show files written on layer1)
rm layer1/example
ls layer1 # should show no files
ls layer1.upper # should show a special character device called "example", this is the "whiteout" file

# unmount, and remount with layer2 being the new upper layer and using layer1.upper directory as the top level lower layer.
umount layer1
mount -t overlay overlay -o lowerdir=$(pwd)/base:$(pwd)/layer1.upper,upperdir=$(pwd)/layer2.upper,workdir=$(pwd)/layer2.work layer2
ls layer2 # now shows example again as if it was never deleted

Is this a bug? Or is this a limitation/expected behaviour?
If expected, any suggestions on a quick and easy workaround?
FWIW it works as desired under auFS, so one workaround is to install aufs-dkms and continue to use auFS... I may do that regardless, but I would really like clarity on whether this is a bug or expected behaviour.

[update] I was doing it wrong, please see (the now corrected) answer!
| (user error) OverlayFS - files deleted in current merged dir (mountpoint) reappear when merged dir remounted as lower |
This means that the file system being mounted contains a root directory owned by user 1000 and group 1000. The ownership of a mounted file system’s root directory becomes the ownership of the mount point.
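As a side note (not part of the original answer): if you want the mounted filesystem's root to be owned by root instead, you can change it once while the filesystem is mounted, and the change is stored in the filesystem itself, e.g.:

chown root:root /mnt/projects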
|
I'm mounting a filesystem as root and I don't understand why it is not owned by root but by an unprivileged user.
Here's fstab:
cat /etc/fstab
[...]
/dev/sdb /mnt/projects ext4 defaults 0 2

And here's what happens when mounting:
ls -al /mnt/projects/
total 8
drwxr-xr-x 2 root root 4096 mai 25 17:55 .
drwxr-xr-x 3 root root 4096 mai 25 17:55 ..

mount /dev/sdb

ls -al /mnt/projects/
total 24
drwx------ 3 jerome jerome 4096 mai 25 17:52 .
drwxr-xr-x 3 root root 4096 mai 25 17:55 ..
drwx------ 2 root root 16384 mai 25 17:52 lost+found

I'm not using sudo. I switch to the root user with the su command.
The user that gains ownership is my normal user, the first declared when installing the system (uid: 1000).
The mount point is owned by root. I don't think that matters anyway.
My normal user doesn't have the permissions to mount the filesystem here himself.
| Filesystem mounted as root but owned by user. Why? |
As per your steps, you protected the file /etc/resolv.conf from being deleted/overwritten with chattr +i (immutable)
So, you won't be able to move it to another file without doing sudo chattr -i /etc/resolv.conf first.
From man chattr:

A file with the 'i' attribute cannot be modified: it cannot be deleted
or renamed, no link can be created to this file and no data can be
written to the file. Only the superuser or a process possessing the
CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
|
I am trying to activate NordVPN CyberSec by completing the following instructions in Debian 9.
I should be able to do the changes as root and with sudo as described for Ubuntu in the thread Should I edit my resolv.conf file to fix wrong DNS problem? and in the thread Linux: How do i edit resolv.conf, but I cannot.

If you are using Linux or Mac OS X, please open the terminal and type in: su
You will be asked for your root password, please type it in and press enter.
rm -r /etc/resolv.conf
nano /etc/resolv.conf
When the text editor opens, please type in these lines:
nameserver 103.86.99.99
nameserver 103.86.96.96
Now you have to close and save the file, you can do that by clicking Ctrl + X and pressing Y. Then please continue typing in the terminal:
chattr +i /etc/resolv.conf
reboot now

That is it. Your computer will reboot and everything should work correctly. If you will ever need to change your DNS addresses, please open the terminal and type in the following: su
You will be asked for your root password, please type it in and press enter.
chattr -i /etc/resolv.conf
nano /etc/resolv.conf
Change DNS addresses, save and close the file.
chattr +i /etc/resolv.conf

I do the first step as su/root but get the following.
Trying to change the file /etc/resolv.conf content there with sudo, I get operation not permitted.
root@masi:/etc# ls -la * | grep resolv.conf
-rw-r--r-- 1 root root 89 Jan 22 2017 resolv.conf
-rw-r--r-- 1 root root 89 Jul 25 17:10 resolv.conf~
-rw-r--r-- 1 root root 0 Jan 22 2017 resolv.conf.tmp
-rwxr-xr-x 1 root root 1301 Nov 12 2015 update-resolv-conf

root@masi:/etc# sudo mv resolv.conf resolv.conf.tmp2
mv: cannot move 'resolv.conf' to 'resolv.conf.tmp2': Operation not permitted

OS: Debian 9
| Cannot rename resolv.conf file as root |
From the Linux kernel documentation for the hfsplus module:

Mount options
uid=n, gid=n
Specifies the user/group that owns all files on the filesystem that have uninitialized permissions structures. Default: user/group id of the mounting process.

501 is the default UID of the first regular user on modern macOS.
So, apparently macOS does not initialize "permissions structures" for some files. Also, the Apple Technote #1150 indicates the storage of the owner ID has an added wrinkle:

ownerID
The Mac OS X user ID of the owner of the file or folder. Mac OS X versions prior to 10.3 treats user ID 99 as if it was the user ID of the user currently logged in to the console. If no user is logged in to the console, user ID 99 is treated as user ID 0 (root). Mac OS X version 10.3 treats user ID 99 as if it was the user ID of the process making the call (in effect, making it owned by everyone simultaneously). These substitutions happen at run time. The actual user ID on disk is not changed.

and later:

Note:
If the S_IFMT field (upper 4 bits) of the fileMode field is zero, then Mac OS X assumes that the permissions structure is uninitialized, and internally uses default values for all of the fields. The default user and group IDs are 99, but can be changed at the time the volume is mounted. This default ownerID is then subject to substitution as described above.
This means that files created by Mac OS 8 and 9, or any other implementation that sets the permissions fields to zeroes, will behave as if the "ignore ownership" option is enabled for those files, even if "ignore ownership" is disabled for the volume as a whole.

The S_IFMT referred to here is the highest 4 bits of the 16-bit value that is used to store the Unix-style permission bits: 3x read/write/execute, and the setuid/setgid/sticky bits. A regular file needs those highest 4 bits to be set to a specific non-zero value (S_IFREG) or else the backwards compatibility mechanism described above will kick in.
The structure of the HFS+ filesystem clearly opens up a possibility to sometimes play "fast and loose" with the file ownerships, and your results indicate macOS seems to do exactly that in some situations.
For removable media, it would make a certain kind of sense for macOS to automatically enable the "ignore ownership" option as the system that writes the files to the media might not be the same as the one that will be reading it, and the two systems might have entirely different UID mappings, resulting in inconvenience to the user.
So this might just be macOS trying to be user friendly on removable media, and assuming that the user's physical possession of the removable media is equivalent to a proof of ownership of the data within.
Ubuntu's first regular user account is created with UID 1000, and that's apparently the account you mounted the HFS+ volume to Ubuntu as.
Since files created by Linux keep their UID 1000 into macOS, that indicates Linux will populate the HFS+ "permissions structures" with file owner UIDs, and once macOS reads them, they will work as expected.

The classic POSIX timestamps are:
- ctime = time of last status/metadata change
- mtime = time of last modification of contents
- atime = time of last access

A creation time (crtime, or birth time) is not one of them. A filesystem may or may not support creation times, and the exact semantics of it may vary between filesystem types and Unix-style operating systems.
Some filesystem drivers handle assigning the creation time internally and make it outright impossible to modify the crtime of a file afterwards: in such a filesystem, a file that's been accidentally deleted and restored from a backup may have their classic ctime and mtime restored, but the creation time will reflect the time of restoration from backup, since the file is now no longer the original, although it might be an exact copy of it.
When you copy a file, you plainly create a new file: the idea of "preserving the creation time" across a copy operation is an oxymoron.
A filesystem on its own can track the creation time of a file, but that is not necessarily the same thing as the creation time of the data within the file. If you want to track the latter, you usually need either a version control system, or a file format that can include a metadata field on data creation time... and all applications using that data format must agree on the semantics of what the "data creation time" means, or else it will become meaningless.
|
I copied some files to HFS+, using macOS, ensuring that it was copied exactly. On macOS these copied files have 501 as owner according to ls -han.
I then plug in the HFS+ usb stick into Ubuntu, and there the files have 1000 as owner according to ls -han. Why?
I then tried copying one of the 501 owned files in Ubuntu (to the same HFS+ volume), ensuring that it was copied exactly using cp -a.
Now macOS ls sees the new file as owned by user 1000...
Really? I don't understand — what was the point of using cp with the -a option if it doesn't even preserve the owner's user id? What did I miss?
Update: To clarify, I think my confusion here stems from the fact that, in my mind, HFS supports Unix file permissions natively and should "just work" with them.

I recently learned that cp's preserve=timestamps does not, in fact, preserve time stamps (creation dates are reset). Am I now to believe that its preserve=ownership does not preserve ownership?
| Why does `ls` in Linux and macOS show different owners (uid) for the same file? |
When you did:
sudo usermod -a -G test-group stephen

Only the group database (the contents of /etc/group in your case) was modified.
If you run id -Gn (which starts a new process (inheriting the uids/gids) and executes id in it), or ps -o user,group,supgrp -p "$$" (if supported by your ps) to list those for the shell process, you'll see test-group is not among the list.
You'd need to log out and log in again to start new processes with the updated list of groups (login (or other logging-in application) calls initgroups() which looks at the passwd and group database to set the list of gids of the ancestor process of your login session).
If you do sudo -u stephen id -Gn, you'll find that test-group is in there as sudo does also use initgroups() or equivalent to set the list of gids for the target user. Same with sudo zsh -c 'USERNAME=stephen; id -Gn'
Also, as mentioned separately, you need search (x) permission to a directory be able to access (including create) any of its entries.
So here, without having to log out and back in, you could still do:
# add search permissions for `a`ll:
sudo chmod a+x /var/www/testdir

# copy as the new you:
sudo -u stephen cp example.txt /var/www/testdir/You can also use newgrp test-group to start a new shell process with test-group as its real and effective gid, and it added to the list of supplementary gids.
newgrp will allow it since you've been granted membership of that group in the group database. No need for admin privilege in this case.
Or sg test-group -c 'some command' to run something other than a shell. Doing sg test-group -c 'newgrp stephen' would have the effect of only adding test-group to your supplementary gids while restoring your original (e)gid.
It's also possible to make a copy of a file and specify owner, group and permissions all at once with the install utility:
sudo install -o stephen -g test-group -m a=r,ug+w example.txt /var/www/testdir/

This copies example.txt, making it owned by stephen, with group test-group and rw-rw-r-- permissions.
To copy the timestamps, ownership and permissions in addition to contents, you can also use cp -p. GNU cp also has cp -a to copy as much as possible of the metadata (short for --recursive --no-dereference --preserve=all).
|
I am attempting to copy a file into a directory where my user account is not the directory owner but belongs to a group that is the directory group owner. These are the steps I have taken:
Create a group and add me to that group
stephen@pi:~ $ sudo groupadd test-group
stephen@pi:~ $ sudo usermod -a -G test-group stephen
stephen@pi:~ $ grep 'test-group' /etc/group
test-group:x:1002:stephen

Create a file and list permission
stephen@pi:~ $ touch example.txt
stephen@pi:~ $ ls -l example.txt
-rw-r--r-- 1 stephen stephen 0 Feb 9 10:46 example.txt

Create a directory, modify the group owner to the new group and alter permission to the directory to grant write permission to the group
stephen@pi:~ $ sudo mkdir /var/www/testdir
stephen@pi:~ $ sudo chown :test-group /var/www/testdir/
stephen@pi:~ $ sudo chmod 664 /var/www/testdir/
stephen@pi:~ $ sudo ls -l /var/www
total 8
drwxr-xr-x 2 root root 4096 Oct 31 12:17 html
drw-rw-r-- 2 root test-group 4096 Feb 9 10:48 testdir

Copy the newly created file into this directory
stephen@pi:~ $ cp example.txt /var/www/testdir/straight-copy.txt
cp: failed to access '/var/www/testdir/straight-copy.txt': Permission denied

To me, this should have been successful; I'm a member of the group that has ownership of this directory, and the group permission is set to rw. Ultimately, I want any files that are copied into this directory to inherit the permission of the parent directory (/var/www/testdir).
I can copy with sudo, but this does not inherit the owner or permission from the parent directory, nor does it retain the original ownership (probably as I'm elevated to root to copy):
Copy with sudo and list ownership/permission of file
stephen@pi:~ $ sudo cp example.txt /var/www/testdir/straight-copy.txt
stephen@pi:~ $ sudo ls -l /var/www/testdir/
total 0
-rw-r--r-- 1 root root 0 Feb 9 11:06 straight-copy.txt

Please, is someone able to explain to me what is happening?
| Copying a file into a directory where I am a member of the directory group |
When you run this as the user armoken the file is created according to your current permissions settings, which are such that you can read/write the file but no-one else can:
ls -l /var/tmp/lll.log
-rw------- 9 armoken 1 May 10:52 /var/tmp/lll.log

So when other users try to write to this file they have no permission to do so.
However, it's more complicated than this because you have the protected regular files security feature enabled in your system's kernel (cat /proc/sys/fs/protected_regular returns non-zero). This means that, regardless of these permissions, no-one other than the owner can write to a file in a sticky directory such as /var/tmp - not even root - unless the file is owned by the owner of the directory itself.
So, if you want everyone to be able to read/write this file in this directory you need to set it up so that root owns it and that anyone can write to it. But bear in mind this means other people can erase or change content in the file too.
#!/bin/sh
if [ ! -f /var/tmp/lll.log ]
then
# File does not exist
if [ "$(id -u)" -eq '0' ]
then
# We are root so create the file (and continue)
>/var/tmp/lll.log
chmod a=rw /var/tmp/lll.log
else
echo 'ERROR: Log file does not exist. Have your systems administrator create it before proceeding' >&2
exit 1
fi
fi

# Now anyone can read/write the contents of the file
echo 'This is a test message' >>/var/tmp/lll.log

This is not defensive coding, though, as anyone can still create the file and prevent others from using it.
A better solution might be to use a logger. For example, this will write to the files managed through journalctl (and/or /var/log/user.log otherwise)
logger 'This is a test message'

journalctl --since today | tail
…
May 01 10:14:24 myServer myUser[18892]: This is a test message
… |
I have a script that can be run by different users on the same machine. This script should write logs to the same file on every run.
Minimal version of script:
#!/usr/bin/env bash
# 2
touch /var/tmp/lll.log # 3
chmod 666 /var/tmp/lll.log # 4 (You can comment this line, but this will change nothing)
echo ghghhghg >> /var/tmp/lll.log # 5

There is no problem when it is started first from root and then from another user, but an error is thrown when the order is the opposite.
./savetmp.sh: line 5: /var/tmp/lll.log: Permission deniedOutput of ls -ld /var/tmp /var/tmp/lll.log:
.rw------- 9 armoken 1 May 10:52 /var/tmp/lll.log
drwxrwxrwt - root 1 May 10:52 /var/tmp

cat /proc/sys/fs/protected_regular:
1

How to fix that?
| Root couldn't write to file with rw permissions for all users and owned by other user |
The .gnupg directory and its contents should be owned by the user whose keys are stored therein and who will be using them. There is in principle no problem with a root-owned .gnupg directory in your home directory, if root is the only user that you use GnuPG as (in that case one could argue that the directory should live in /root or that you should do things differently).
I can see nothing wrong with the file permissions in the file listing that you have posted. The .gnupg folder itself should additionally be inaccessible by anyone other than the owner and user of the keys.

The reason why the files may initially have been owned by root could be because GnuPG was initially run as root or by a process executing as root (maybe some package manager software or similar).

GnuPG does permission checks and will warn you if any of the files have unsafe permissions. These warnings may be turned off (don't do that):

--no-permission-warning
Suppress the warning about unsafe file and home directory
(--homedir) permissions. Note that the permission checks that
GnuPG performs are not intended to be authoritative, but rather
they simply warn about certain common permission problems. Do
not assume that the lack of a warning means that your system is
secure.
Note that the warning for unsafe --homedir permissions cannot be
suppressed in the gpg.conf file, as this would allow an attacker
to place an unsafe gpg.conf file in place, and use this file to
suppress warnings about itself. The --homedir permissions
warning may only be suppressed on the command line.

The --homedir directory referred to above is the .gnupg directory, usually at $HOME/.gnupg unless changed by using --homedir or setting GNUPGHOME.
Additionally, the file storing the secret keys will be changed to read/write only by default by GnuPG, unless this behaviour is turned off (don't do that either):

--preserve-permissions
Don't change the permissions of a secret keyring back to user
read/write only. Use this option only if you really know what
you are doing.

This applies to GnuPG 2.2.3, and the excerpts above are from the gpg2 manual on an OpenBSD system.
|
What are the standard ownership settings for files in the .gnupg folder?
After doing sudo chown u:u * mine now looks like this:
drwx------ 2 u u 4,0K jan 18 22:53 crls.d
drwx------ 2 u u 4,0K jan 18 22:33 openpgp-revocs.d
drwx------ 2 u u 4,0K jan 18 22:33 private-keys-v1.d
-rw------- 1 u u 0 sep 28 02:12 pubring.gpg
-rw-rw-r-- 1 u u 2,4K jan 18 22:33 pubring.kbx
-rw------- 1 u u 32 jan 18 22:28 pubring.kbx~
-rw------- 1 u u 600 jan 19 22:15 random_seed
-rw------- 1 u u 0 sep 28 02:13 secring.gpg
srwxrwxr-x 1 u u 0 jan 20 10:20 S.gpg-agent
-rw------- 1 u u 1,3K jan 18 23:47 trustdb.gpg

However, before that, originally at least pubring.gpg, secring.gpg and random_seed were owned by root.
| What are the standard ownership settings for files in the `.gnupg` folder? |
CentOS (and other Fedora/RHEL derivatives) enables an additional security mechanism known as SELinux. It applies additional restrictions on most system daemons. These additional restrictions are checked after regular unix permissions.
For non-default configurations you often need to adjust SELinux. Files contain a specific security label, which is used by SELinux to apply the security policy. If your problem only occurs with some files, you need to correct the SELinux labels on the problematic files. Use chcon with --reference= option to copy the label from a file which works to apply the same label on your problematic file(s):
chcon --reference=<path to working file> <path to not working file(s)>

If your files are in a non-standard location, you should add a rule to the file labeling database. This avoids problems the next time the file system is relabeled or restorecon is used. Choose the label appropriately or use the already applied label (check the existing security labels with ls -lZ).
Adding a labeling rule for /path/to/directory and its contents using semanage:
semanage fcontext -a -t httpd_user_rw_content_t '/path/to/directory(/.*)?'

If your files are on a different file system, you can use the context option for the mount point to apply/override the default labeling.
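After adding a labeling rule with semanage, the new label is typically applied to the existing files with restorecon, for example:

restorecon -Rv /path/to/directory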
|
The problem is that I have PHP files that do not work in the browser. I suspect because the user is missing read permissions.
The files are located in a dir called "ajax"
drwxrwxrwx. 2 root root 4096 Sep 13 14:33 ajax

The content of that dir:
-rwxrwxrwx. 1 root root 13199 Sep 13 14:33 getOrderDeliveryDates.php
-rwxrwxrwx. 1 root root 20580 Sep 13 14:33 getParcelShops.php
-rwxrwxrwx. 1 root root 1218 Sep 13 14:33 index.php
-rwxrwxrwx. 1 root root 814 Sep 13 14:33 lang.php
-rwxrwxrwx. 1 root root 6001 Sep 13 14:33 prod_reviews.php

I'm 100% certain logged in as root:
[root@accept: nl (MP-git-branch)] $

Double check with the command id:

uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

It is driving me nuts.
I tried sudo (even though I already am root):

sudo chmod 777 filename

I tried chown (even though I already am the owner, root):

sudo root filename
OS is CentOS 6
| Cannot change permissions as root on a file owned by root |
I think you want this, which assumes that you are using GNU find specifically:
find -type f \! -perm /u=rwx -exec echo rm -f {} \;

Note that I added an echo for testing.
If the files that get printed match your expectations, take it out. :)
|
I have a prompt that asks me to delete all the files in a directory that the owner (u) can't r, w, nor x, in one command.
I tried this command:
find data -type f ! -perm -u=rwx -exec rm -f {} \;

... but I think it removes too many files.
| Delete all files without user permissions |
To determine only the "numerical group ID":
stat -c %g /path/to/file/or/directoryTo determine only the "numerical user ID":
stat -c %u /path/to/file/or/directory |
For a bash script,
I need to find the numeric group ID from the file ownership attributes, similar to the output of ls -nl, but only the number.
If possible, I would like to avoid big parsing magic ...
| How to get the numeric group owner of a file? |
The extraction is what determines the ownership, not the creation of the archive. You can see that by looking at the archive's table of contents, e.g.,

tar tvf dist.tar

If creating the archive as a regular user,

tar --owner 0 --group 0 -cf dist.tar dist

does the magic.
|
I want to build a tar file as regular (non-root) user with some prepared binaries and configuration files like this;
etc/binaries.conf
usr/bin/binary1
usr/bin/binary2

that are meant to be extracted into the file system under the / directory.
Like a traditional software package .deb, .rpm etc but I need to be "package manager independent". So probably I will just have a .tar file (maybe some gzip, bzip, lzip should be added to the mix but that's outside).
PROBLEM / QUESTION
My problem here is that I don't want to build this tar as the root user, and I want to know if there is a way to build this tar as a regular (non-root) user and then, when the .tar file is distributed to the machines and the real root user extracts those binaries, have them installed as files owned by the root user (or the user who extracts the binaries).
EXAMPLE
Because right now, when I just create the .tar file as a regular (non-root) user with
$ tar cf dist.tar dist/

And then extract the .tar as root user with

# tar xf dist.tar -C /

I see the binaries and the config file with the regular user as owner, not the root user.
$ ls -la /usr/bin/binary1
-rwxr-xr-x 1 user user 30232 jun 20 19:06 /usr/bin/binary1

And I want to have
$ ls -la /usr/bin/binary1
-rwxr-xr-x 1 root root 30232 jun 20 19:06 /usr/bin/binary1

Just to clarify, this hand made packaging is very specific for some task in a closed infrastructure, so right now, using .deb, .rpm or any other more sophisticated packaging system is not an option.
| Create, as a regular user, a tar with files owned by root |
Is this drive mapped in /etc/fstab? If so, you can modify the options there: the "nosuid" option needs to be removed, as others have pointed out, and you can also add "gid=ownerGroupID,uid=ownerID" to the options list in order to have the files on the drive explicitly mapped to a particular uid/gid so that they are usable to you.
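A sketch of what such an /etc/fstab line might look like (the UUID, mount point, filesystem type and uid/gid values are placeholders, not taken from the question; uid/gid mount options only apply to filesystems that don't store Unix ownership themselves, such as NTFS or FAT):

UUID=XXXX-XXXX  /mnt/internal1  ntfs-3g  defaults,uid=1000,gid=1000  0  0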
|
I have 5 internal drives and 3 external.
I would like the internal drive's files to be owned by my default user hutber
I have tried to chown them with sudo as seen here:It looks successful, howeverI am unsure if its possible to change, but here is my mount options for the driveAnd just an overview of all drivers. With the drive in question on display. | hdd mount option to grant ownership of files on drive |
If you have write access to the directory, you can remove the files in it, regardless of the files' owners. The sticky bit on the directory would prevent you from removing other users' files, but if you own the directory you can just unset the bit... Same for giving yourself write access.
However, a non-empty directory owned by some other user would be more of a problem.
In any case, if you can chown the files in the directory, you're probably superuser already, and should be able to just rm -r the whole tree. Though this is somewhat system specific, e.g. on Linux you could have the CAP_CHOWN capability allowing chown but not CAP_DAC_OVERRIDE which would allow bypassing the lack of write access.
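As an aside (not from the original answer): if you do end up taking the chown route, a recursive chown on the directory itself also covers hidden files, so the dotfile hunt can be avoided, e.g.:

sudo chown -R "$USER": ./builddir
rm -r ./builddir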
|
I have a directory in my home folder containing some build output, which the build process has chown to another user (for reasons unknown to me.) I want to delete the directory, but can't because it's not empty, and I can't delete the files it contains because they are not owned by me. Of course, I could recursively chown all the files, but getting all the hidden files is a pain. Is there a straight forward way?
| Delete a directory owned by me, containing files not owned by me |
You're not using --numeric-ids and/or --fake-super for your backups (and restores). If you modify your rsync command a little you'll get the mappings saved and restored correctly.
In these examples, the -M tells rsync to apply the next option, i.e. the fakery, on the remote side of the connection. An extra side effect is you don't need the remote side (where the backups are stored) to run as root
This pushes the backups from the client to the backups server
sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --numeric-ids -M--fake-super --exclude-from="${exc_path}" "${src_path}" "${dst_addr}:${dst_path}"

This would pull backups from the client (i.e. restore)

sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --numeric-ids -M--fake-super --exclude-from="${exc_path}" "${dst_addr}:${dst_path}" "${src_path}"

And this, run on the backups server, would push the backups to the client (i.e. restore)

sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --numeric-ids --fake-super "${dst_path}" "${src_host}:${src_path}"
|
My apologies for the silly/simple question - yet after searching the web and SE, I cannot find an answer for this specific issue.
Question:
How does one change the owner and group (system-wide) only for files owned by a specific owner?
Use-case:
We have a number of RasPis running as various servers and use rsync to back them up. When we're unfortunate enough to have to perform a restore, the owner and group of all 'user' files is pi:pi, rather than the original owner adminuser:adminuser, for example.
Without hunting down the files owned by pi, is there a way to accomplish the owner/group reassignment?
Edit:
This is the rsync command:
sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --exclude-from="${exc_path}" "${src_path}" "${dst_addr}:${dst_path}"
| Change owner and group for specific owners only |
In the default /etc/rsnapshot configuration file is the following:
# Specify the path to a script (and any optional arguments) to run right
# after rsnapshot syncs files
# cmd_postexec /path/to/postexec/script

You can use cmd_postexec to run a chgrp command on the resulting files which need their group ownership changing.
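A minimal sketch of what that could look like (the script path, group name and backup path are illustrative, not from the answer; note that rsnapshot.conf separates a parameter from its value with tabs):

cmd_postexec	/usr/local/bin/fix-backup-group.sh

And the hypothetical script itself:

#!/bin/sh
# Give the backups group access to everything rsnapshot just wrote
chgrp -R backups /backups
chmod -R g+rX /backups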
|
I am using rsnapshot to make daily backups of a MYSQL database on a server. Everything works perfectly except the ownership of the directory is root:root. I would like it to be root:backups to enable me to easily download these backups to a local computer over an ssh connection. (My ssh user has sudo permissions but I don't want to have to type in the password every time I make a local copy of the backups. This user is part of the backups group.)
In /etc/rsnapshot.conf I have this line:
backup_script /usr/local/bin/backup_mysql.sh mysql/
And in the file /usr/local/bin/backup_mysql.sh I have:
umask 0077
# backup the database
date=`date +"%y%m%d-%h%m%s"`
destination=$date'-data.sql.gz'
/usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf --single-transaction --quick --lock-tables=false --routines data | gzip -c > $destination
/bin/chmod 660 $destination
/bin/chown root:backups $destinationThe file structure that results is:
/backups/
├── [drwxrwx---] daily.0
│ └── [drwxrwx---] mysql [error opening dir]
├── [drwxrwx---] daily.1
│ └── [drwxrwx---] mysql [error opening dir]

The ownership of the backup data file itself is correct, as root:backups, but I cannot access that file because the folder it is in, mysql, belongs to root:root.
| Rsnapshot: folder ownership permissions to 'backups' group instead of root |
This happens because
tar -cpf out.tar folderA/folderBdoesn’t store folderA as a separate object in the tarball, so it doesn’t have any way of recording the ownership and permissions of folderA.
To preserve the ownership, you need to tell tar to do so when you create the tarball; with GNU tar at least, the following works:
tar -cpf out.tar --no-recursion folderA --recursion folderA/folderB

This stores folderA (and its permissions etc.) without recursing, and folderA/folderB with its contents.
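To check that folderA is now recorded as its own entry, with its ownership, you can list the archive's contents, e.g.:

tar -tvf out.tar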
|
I can preserve ownership of folderB and all files and folders inside when creating and extracting a tar file as follows:
tar -cpf out.tar folderA/folderB
sudo tar -xpf out.tar --same-owner

However, folderA is owned by root when extracting unless the folder already exists. Is there any way to preserve ownership of the entire folder hierarchy with tar?
| Preserve ownership of entire folder hierarchy in tar? |
You're using NTFS-3g, a user-space NTFS filesystem driver.
Between the kernel and any such user-space filesystem drivers, there is an interface layer called FUSE (short of Filesystem in USErspace).
Note that the filesystem type is listed as fuseblk, not as ntfs or ntfs-3g. When you see type fuseblk (some options), then the options within parentheses are FUSE options, not actual filesystem options. See man 8 fuse if you want to know more details.
Specifically, the user_id=0 means "this FUSE filesystem was mounted by root" and nothing else. The actual mount options are handed to the filesystem driver process, which can do whatever it wants with them. (FUSE allows only the user that mounted the filesystem to access it, unless the FUSE option allow_other is specified.)
Unfortunately the FUSE interface layer does not allow showing the actual mount options of the FUSE-based filesystem in the mount command output the same way as classic kernel-based filesystems show them.
Instead, if you run pgrep -a ntfs-3g, you will see the ntfs-3g filesystem driver processes and their command-line options, which will include the mount options you specified.
For example, on my system, I have these lines in /etc/fstab:
UUID="A268B5B668B599AD" /win/c ntfs-3g defaults,windows_names,inherit,nofail 0 0
UUID="56A31D4569A3B7B7" /win/d ntfs-3g defaults,windows_names,inherit,nofail 0 0And so, I'll see these processes:
$ pgrep -a ntfs-3g
775 /sbin/mount.ntfs-3g /dev/nvme0n1p3 /win/c -o rw,windows_names,inherit
1008 /sbin/mount.ntfs-3g /dev/sdb2 /win/d -o rw,windows_names,inherit |
Why can I not change the ownership when mounting an NTFS drive?
I give uid=1000,gid=1000, etc. in my /etc/fstab file, but found it is not working. So I'm testing it out on the command line:
root@host:~# mount | grep /mnt/tmp1 | wc
0 0 0

root@host:~# mount -o uid=1000 /dev/nvme0n1p4 /mnt/tmp1/

root@host:~# mount | grep /mnt/tmp1
/dev/nvme0n1p4 on /mnt/tmp1 type fuseblk (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)

root@host:~# umount /mnt/tmp1

root@host:~# mount -o user_id=1000 /dev/nvme0n1p4 /mnt/tmp1/

root@host:~# mount | grep /mnt/tmp1
/dev/nvme0n1p4 on /mnt/tmp1 type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 21.10
Release: 21.10
Codename: impish

$ apt-cache policy mount
mount:
Installed: 2.36.1-8ubuntu1
Candidate: 2.36.1-8ubuntu2
Version table:
2.36.1-8ubuntu2 500
500 http://archive.ubuntu.com/ubuntu impish-updates/main amd64 Packages
*** 2.36.1-8ubuntu1 500
500 http://archive.ubuntu.com/ubuntu impish/main amd64 Packages
100 /var/lib/dpkg/status

Am I missing something?
Why can I not change the ownership when mounting an NTFS drive?
| Cannot change the ownership mounting ntfs drive |
The owner of a directory can change the contents of the directory however they want. Even if there's a file in the directory that the directory owner isn't allowed to write, the directory owner can remove that file and create a new file by the same name.
More generally, if you have write permission to a directory, then you can remove and create files in that directory. Thus you can change files in that directory, not by writing to them if you don't have write permission on the file, but by deleting the existing file and creating a new file by the same name.
If you own a directory parent and it contains a subdirectory child that is owned by root and you don't have write permission on child, then you can't modify files in child. However, you can rename child and create a new subdirectory called child, which will be owned by you and thus can contain whatever you want.
This is why security checks that verify file control (e.g. the sanity checks that OpenSSH makes on private key files) verify the whole directory chain up to the root. Likewise, if you give a user sudo rights to run a file, the whole path to the file should be controlled by root. For example, don't give a user sudo rights to run a program that's under their home directory. (On the other hand, a setuid root program anywhere is fine, because setuid is attached to the file itself, not to its path.) Anyone who controls any intermediate step in the directory path can substitute their own content, not by editing the actual file, but by renaming a directory at the point in the path.
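A rough sketch of that substitution, reusing the parent/child names from the previous paragraph (hypothetical paths and file names):
# you own "parent"; root owns "parent/child" and "parent/child/config"
mv parent/child parent/child.orig   # needs only write+execute on "parent", which you have
mkdir parent/child                  # the new "child" is owned by you
cp my-config parent/child/config    # "parent/child/config" is now entirely your content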
|
As a non-privileged user, owning a directory on an EXT4 filesystem where I have all the necessary rights (rwx) gives me the possibility to change content and ownership of files (e.g. vim file and :w!) within it even if they are owned by root and even if I don't have the right to change them (root:root and 0644).
Is that somehow possible with a directory owned by root if that directory is within a directory owned by my non-privileged user?
| Change ownership of directory owned by root |
Set up a root cron job and then the script will run as root anyways. To access the root crontab, run sudo crontab -e.
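For example, a root crontab entry for the rsync job from the question might look something like this (the schedule is just an illustration, and cron jobs should use absolute paths):
# run the backup every night at 02:30, as root, no sudo needed
30 2 * * * rsync -av --files-from=/root/my/etcfiles /etc /my/backup/etc/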
|
I'm using rsync to back up a set of files in /etc. The 'source' files are on an ext4 filesystem, and the 'destination' is an ext4 partition on a USB thumb drive. My incantation is similar to this:
rsync -av --recursive --files-from=my/etcfiles /etc ./my/backup/etc/
Not unexpectedly, I get errors with this including:
rsync: failed to set times on "/my/backup/etc/somefile": Operation not permitted (1)
rsync: mkstemp "/my/backup/etc/somefile.erGL4a" failed: Permission denied (13)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1196) [sender=3.1.2]
I think this is because the a option in rsync preserves ownership (root) of the files I'm backing up, and so root privileges are required to complete rsync's operations.
This rsync operation completes successfully when run under sudo, but I need to set this up as a cron job, and using sudo in a crontab brings an issue (password storage) I'd like to avoid.
Another possibility may be changing ownership of the root-owned files (using the chown=USER:GROUP option) during the rsync backup. I've not tried that because it occurs to me that, even if it works without sudo, the ownership would have to be restored if the backups were ever needed.
I've been stewing over this for the better part of a day now, and growing weary of wrangling with rsync's myriad options. So - My question is this:
How can I avoid using sudo to make backups of files in /etc without committing an even worse bodge?
| Can I avoid using `sudo` to back up files owned by root? |
Use chgrp nobody file instead.
|
How do I make there be no group owner of a file in Mac OSX, since
chgrp nogroup file
doesn't work? If I try, the group owner doesn't change at all.
| How do I make there be no group owner of a file in Mac OSX? |
In fact, the manpage you linked to explains it well: since version 1.42, the UID:GID of the root directory no longer defaults to that of the user running mke2fs.
If, with version 1.42 or later, you want the UID:GID of the root directory to be that of the user running mke2fs, you must explicitly specify root_owner in the extended-options (-E) list, optionally omitting its uid:gid parameters.
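For example (a sketch based on the quoted patch text; substitute your own device):
# take the UID:GID of the user running mke2fs (the old pre-1.42 behaviour)
mke2fs -t ext4 -E root_owner /dev/sdXN

# or spell the owner out explicitly
mke2fs -t ext4 -E root_owner=1000:1000 /dev/sdXN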
This is a consequence of a patch from T. Ts'o (mke2fs: don't set root dir UID/GID automatically) which explicitly instructs to:
Add the "-E root_owner[=uid:gid]" option to mke2fs so that the user
and group can be explicitly specified for the root directory. If the
"=uid:gid" argument is not specified, the current UID and GID are
extracted from the running process, as was done in the past. |
As you know, if there is no root_owner option, mke2fs uses the user and group ID of the user running mke2fs. Let's test it on Ubuntu 22 x86_64 (mke2fs 1.46.5 (30-Dec-2021)):
Generate image
mke2fs -t ext2 -I 256 -E 'lazy_itable_init=0,lazy_journal_init=0' -O '^large_file' -O '^huge_file' -L ext2test 'diskEmpty.img' 102400k
Mount image
gnome-disk-image-mounter -w diskEmpty.img
But only root user can write to this... Why?
Let's test root_owner option:
Generate image
mke2fs -t ext2 -I 256 -E 'root_owner=1000:1000,lazy_itable_init=0,lazy_journal_init=0' -O '^large_file' -O '^huge_file' -L ext2test 'diskEmpty.img' 102400k
Mount image
gnome-disk-image-mounter -w diskEmpty.img
Now I can write to my disk.
Why can't I write to disk without root_owner feature?
| mke2fs ignore root_owner |
Generally speaking, a non-privileged user cannot create files with different ownership than his own UID, so when he copies a file, the new file in the destination will always be owned by the UID of the user who ran the cp command.
This only applies for the case that a non-privileged user (non-root) copies the files, and it doesn't matter whether he copies them from a remote machine or from a local one, or who the original owner of the file was.
If some user copies a file to a remote machine, the file will belong to the UID of that user on the remote machine. For instance, let's say you have user foo that has UID 100 on machine A, and on machine B there's also a user foo but with UID 101. If user foo copies a file from machine A to machine B (and it doesn't matter who was the original owner of the file and what was the method of copying), it will be created on machine B under the same user, but with his UID on machine B - 101. And again, this doesn't apply to copies ran by root.
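One way to see that UID-based behaviour (hypothetical hosts and file; foo has UID 100 on machine A and 101 on machine B, as above):
# run on machine A as foo
scp file.txt B:/tmp/

# run on machine B
ls -ln /tmp/file.txt   # the numeric owner is 101 - foo's UID on B, whatever it was on A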
|
As far I know, a file's ownership on Linux depends on the file's owner's UID.
What happens if a user in a different machine has the same UID as a user on the server and then the file is copied to the server? Who owns that file?
What happens if a user on a different machine has UID that is not the same as any user on the server and then the file is copied to the server? Who owns that file?
I have created few users and a group. Then copy pasted:
$ sudo adduser --gecos "" --disabled-password --no-create-home user1
$ sudo adduser --gecos "" --disabled-password --no-create-home user2
$ sudo adduser --gecos "" --disabled-password --no-create-home user3
$ sudo adduser --gecos "" --disabled-password --no-create-home user4
$ sudo addgroup userstart
$ sudo gpasswd -M user1,user2,user3,user4 userstart
$ sudo chown :userstart /home/blueray/Desktop/Permissions
$ sudo runuser -u user1 -- cp /home/blueray/Desktop/Permissions/test.html /home/blueray/Desktop/Permissions/test-copy.html
$ ls -la /home/blueray/Desktop/Permissions
total 72
drwxrwxr-x 2 blueray userstart 4096 Feb 8 11:57 .
drwxr-xr-x 3 blueray blueray 4096 Feb 8 11:55 ..
-rw-r--r-- 1 user1 user1 31017 Feb 8 11:57 test-copy.html
-rw-rw-r-- 1 blueray blueray 31017 Feb 6 05:50 test.html
The user who copied the file seems to own the file. Is it always the case?
| How does Linux handle permissions of files created on a different machine? |
Ok, looking like I solved this. I can open chromium from either user with exact same user-dir/profile.
The trick is add all users to the acl owners list both for existing files and as default. Then to get the mask set run some chmod commands and do that all recursively.
To make this easy I wrote script and it's here.
https://gist.github.com/dkebler/23c8651bd06769770773f07854e161fc
I will keep it updated and with bug fixes but so far its working for me. If you are going to try it I strongly suggest experimenting on a test directory as you can easily mess things up. I did write in a bunch of confirmations to avoid this but still read the script and use at your own risk.
That said here is output from the script showning the commands executed.
david@giskard:[common/applications] $ share_dir -o root . sysadmin david
share directory /mnt/AllData/users/common/applications/ with users: sysadmin david ? confirm y
adding acl user sysadmin
these are the acl commands that you will run
******************
sudo setfacl -R -m u:sysadmin:rwX /mnt/AllData/users/common/applications/
sudo setfacl -dR -m u:sysadmin:rwX /mnt/AllData/users/common/applications/
******************
Double Check. Do you want to continue? y
*** new acl entries ***
user:sysadmin:rwx
default:user:sysadmin:rwx
adding acl user david
these are the acl commands that you will run
******************
sudo setfacl -R -m u:david:rwX /mnt/AllData/users/common/applications/
sudo setfacl -dR -m u:david:rwX /mnt/AllData/users/common/applications/
******************
Double Check. Do you want to continue? y
*** new acl entries ***
user:david:rwx
default:user:david:rwx
done adding acl users sysadmin david
these are the chown/chmod commands that you will run
******************
sudo chown -R root:users /mnt/AllData/users/common/applications/
sudo chmod -R u+rwX /mnt/AllData/users/common/applications/
sudo chmod -R g+rwX /mnt/AllData/users/common/applications/
sudo find /mnt/AllData/users/common/applications/ -type d -exec chmod g+s {} +
******************
Double Check. Do you want to continue? y
all done!
total 24
drwxrwsr-x+ 2 root users 4096 Feb 6 13:05 ./
drwxrwsr-x+ 5 root users 4096 Feb 6 11:39 ../
-rwxrwxr-x+ 1 root users 169 Jan 30 11:01 'Hacking Chromium.desktop'*
-rwxrwxr-x+ 1 root users 161 Jan 30 16:03 'Incognito Chromium.desktop'*
# file: /mnt/AllData/users/common/applications/
# owner: root
# group: users
# flags: -s-
user::rwx
user:sysadmin:rwx
user:david:rwx
group::rwx
mask::rwx
other::r-x
default:user::rwx
default:user:sysadmin:rwx
default:user:david:rwx
default:group::rwx
default:mask::rwx
default:other::r-x
here is script (as of 5/24) but I will not be updating this so see the gist for latest
#!/bin/bash# Usage:
# share_dir [ -o <owner> -g <group> ] <directory> <list of space delimited users names/uid>
# use . for current directory
# -o forces own for directory, default is $USER
# -g forces group name for directory, default is "users" and if not available then $USER
# Note: script operates recursively on given directory!, use with caution## HELPERSadirname() {
# passed entire path
echo "$(cd "$(dirname "$1")" >/dev/null 2>&1 ; pwd -P )"
}chmod_dirs() {
# passed entire path
local usesudo
[[ $1 == -s ]] && usesudo="sudo" && shift 2
$usesudo find $1 -type f -exec chmod $2 {} +
}function confirm()
{
echo -n "$@ "
read -e answer
for response in y Y yes YES Yes Sure sure SURE OK ok Ok
do
if [ "_$answer" == "_$response" ]
then
return 0
fi
done # Any answer other than the list above is considered a "no" answer
return 1
}# End Helpers# Usage:
# adding: acladduserdir <user> <directory>
# deleting: acladduserdir -d <user> <directory>
# add -s flag to force run as sudo
# Note: script operates recursively on given directory!, use with cautionacladduserdir() { module_load confirm
local uid
local usesudo
local del
local spec
local dir
local cmd="-R -m "
local cmdd="-dR -m" declare OPTION
declare OPTARG
declare OPTIND while getopts 'ds' OPTION; do
# echo $OPTION $OPTARG
case "$OPTION" in
d)
del=true
;;
s)
usesudo="sudo"
;;
*)
echo unknown option $OPTION
;;
esac
done shift $((OPTIND - 1)) if [[ $del ]]; then
echo deleting an acl entries for $1
opts="-R -x"
optsd="-dR -x"
spec="u:$1"
else
opts="-R -m "
optsd="-dR -m"
spec="u:$1:rwX"
fi
[[ ! $2 ]] && echo acluserdir: both user and direcotory must be passed && return 1
dir=$2
uid=$(id -u $1 2>/dev/null)
[[ $uid -lt 1000 ]] && echo no such regular user $1 && return 2
[[ ! -d $2 ]] && echo no such directory $2 && return 3
if [[ ! -w $2 ]]; then
echo $2 not writable by current user $USER
if [[ ! $(sudo -l -U $USER 2>/dev/null) ]]; then
echo user does not have sudo privilges, aborting
return 4
else
confirm "do you want to elevate to root and continue?" || return 5
usesudo="sudo"
fi
fi
echo these are the acl commands that you will run
echo '******************'
echo $usesudo setfacl $opts $spec $dir
echo $usesudo setfacl $optsd $spec $dir
echo '******************'
confirm Double Check. Do you want to continue? || return 6
$usesudo setfacl $opts $spec $dir
$usesudo setfacl $optsd $spec $dir
echo '*** new acl entries ***'
$usesudo getfacl -p --omit-header $2 | grep $1}# Usage:
# share_dir [ -o <owner> -g <group> ] <directory> <list of space delimited users names/uid>
# -o forces own for directory, default is $USER
# -g forces group name for directory, default is "users" and if not available then $USER
# Note: script operates recursively on given directory!, use with cautionshare_dir() {
[[ ! $(sudo -l -U $USER 2>/dev/null) ]] && echo current user does not have sudo privilges, aborting && return 4
local group
local owner=$USER
[[ $(getent group users) ]] && group=users || group=$USER declare OPTION
declare OPTARG
declare OPTIND while getopts 'g:o:' OPTION; do
# echo $OPTION $OPTARG
case "$OPTION" in
o)
owner=$OPTARG
;;
g)
group=$OPTARG
;;
*)
echo unknown option $OPTION
;;
esac
done shift $((OPTIND - 1))
local dir=$([[ ! $1 == /* ]] && echo $(adirname $1)/)$([[ $1 == . ]] && echo "" || echo $1)
if [[ ! -d $dir ]]; then
confirm no such directory $dir, create it? && sudo mkdir -p $dir || return 6
fi
shift
confirm share directory $dir with users: $@ ? confirm || return 6
for user in "$@"; do
echo adding acl user $user
acladduserdir -s $user $dir
done
echo done adding acl users $@
echo these are the chown/chmod commands that you will run
echo '******************'
echo sudo chown -R $owner:$group $dir
echo sudo chmod -R u+rwX $dir
echo sudo chmod -R g+rwX $dir
echo sudo find $dir -type d -exec chmod g+s {} +
echo '******************'
confirm Double Check. Do you want to continue? || return 6
sudo chown -R $owner:$group $dir
sudo chmod -R u+rwX $dir
sudo find $dir -type d -exec chmod g+s {} +
echo all done!
ls -la $dir
getfacl -p $dir} |
Background:
I'm trying to share a folder between two users on the same machine . The normal way would be to have the two users in the same group set the parent folder to that group with rw and the s bit set.
That works great. Except .... the folder I am trying to share is one used by Chromium. When Chromium launches it writes some session files with only owner rw (i.e. 600) permission, ignoring the s bit. I guess some misbehaved programs can do that. That means when the other user later tries to open that same Chromium profile they can't write those session files because they already exist with owner-only rw belonging to the other user. :(
I gave bindfs a try but that requires sudo at login and thus I have to use a sudoers.d file if I want to get that set up non-interactive at login.
Anyway I gave ACL a try and am not grokking some aspect because it's not working like I think it should.
# user1: sysadmin
# user2: david
# directory: /opt/stest
# user1 sysadmin is logged in.
# sysadmin owns /opt/stest
$ llag stest
drwxrwsr-x+ 2 sysadmin users 4096 Feb 3 13:45 stest/
$ getfacl stest
# file: stest
# owner: sysadmin
# group: sysadmin
user::rwx
group::rwx
other::r-x
# now run
setfacl -R -m u:david:rwX /opt/stest
setfacl -dR -m u:david:rwX /opt/stest
# gives
user:david:rwx
default:user:david:rwx
# now create a file as other user
$ su david -c "touch /opt/stest/test"
-rw-rw-r--+ 1 david users 0 Feb 3 13:51 test
# set with owner only rw like how chromium does
-rw-------+ 1 david users 0 Feb 3 13:51 test
$ getfacl test
# file: test
# owner: david
# group: users
user::rw-
user:david:rwx #effective:---
group::rwx #effective:---
mask::---
other::---
So this is the part I'm not getting. Why is the "non-acl" owner of the file test david
# file: test
# owner: david
instead of sysadmin, given sysadmin owns the directory? Basically I thought that setfacl would always give access to the directory owner. It seems as though, even if the acl entry was made by sysadmin, sysadmin must be manually added to any file created by another allowed user or it can get locked out of its own files. That was not intuitive for me.
Is that what i need to do? Do I need to run inotify wait on the directory and then add sysamdin to the acl list if another user creates a file. What is the best solution to my situation ACL or otherwise.
I am running ubuntu 20.04 with kernel 5.4.0-65-
--two days later
I tried another tack. I added both users to the file and default acl list using sudo. Then I logged out and into the other user. Then did a getfacl on one of the offending files. You see both users listed but under effective there is nothing instead of rw. Arrgh. Still the current user sysadmin can't access the file created by david. Why is effective not showing rw???
-rw-------+ 1 david david 125146 Feb 6 09:12 Preferences
getfacl Preferences
# file: Preferences
# owner: david
# group: david
user::rw-
user:sysadmin:rwx #effective:---
user:david:rwx #effective:---
group::rwx #effective:---
group:users:rwx #effective:---
mask::---
other::--- | Sharing a directory between users (using ACL) when some files are created with only owner rw (600) permissions |
You can execute the sample command ls
hdfs dfs -ls /path
And from here this is expected result:
For a file returns stat on the file with the following format:
permissions number_of_replicas userid groupid filesize modification_date modification_time filename
For a directory it returns list of its direct children as in Unix. A directory is listed as:
permissions userid groupid modification_date modification_time dirname
we can grant the permissions as hdfs user for hive as the following
su hdfs
$ hdfs dfs -chown hive:2098
but how to do the opposite way?
in order to verify the owner of hive and hive group?
| how to find the owner of user and group from user HDFS |
Although you might be averse to using command-line chown/chmod, it is one of the most direct and simple ways of doing this. You could still run the file browser with sudo as guillermo mentioned, but there is no guarantee that it'll stick and you still need to run a command on the command line to start it with sudo.
chown -R tomc <DIRECTORY>
This is the simplest way to change the owner of every file and directory inside a directory to user tomc. To put you at ease about exactly what it is doing, let's look at the man page.
man chown
The syntax of the command is:
chown [OPTION]... [OWNER][:[GROUP]] FILE...
We have called chown with the -R option, have selected tomc as the owner, and the file is a directory of your choosing.
Looking at the man pages, the -R flag: -R, --recursive
operate on files and directories recursively
If you would like, you could even use the -v flag to show exactly what it has done, making the new command chown -Rv tomc <DIRECTORY>
man: -v, --verbose
output a diagnostic for every file processed
|
Have a directory on a hard drive where I want to change all contents from owned by root to owned by tomc. I have tried Nemo, Krusader, and Nautilus (all launched as root, using sudo), all of which claim to be able to apply such changes recursively. None do, when I check after issuing the command.
So I now have a primary dir owned by user tomc which has subdirs owned by root. In each of those subdirs are hundreds of files whose ownership needs to be tomc but is root.
I could venture to try a chmod command, but looking into this the sheer complexity of it is intimidating and even dangerous. These are backup files I'm messing with, and while not irreplaceable, I do not have a copy nor room to make one.
Is there any easy way for me to achieve what I'm wanting? I certainly am not averse to working on the command line. It just seems like a GUI is safer and surely ought to work. But it does not.
| Problem with recursive change of file ownership |
The problem is that the executable on your USB drive cannot be executed with the current mount options (which are default options you did not set yourself). Also, your root/home file system within the virtual machine (VM) does not have enough space to copy over the files and execute them there.
Your options are therefore:
Remount your USB drive to allow execution of files; and
Increase your hard disk space to be able to copy over the program and its files
ad 1 - Your mount command shows that the USB drive is mounted at /media/mint/3424-9F51 and it includes the showexec option which prevents the execution. In this situation the command
mount -o remount,exec /media/mint/3424-9F51
(run as root, e.g. prepend sudo) should bring the desired result.
Please note that the file system is still not a Linux file system and you may run into other problems like filename case sensitivity.
ad 2 - In order to resize the disk in the virtual machine you would need to
(a) resize the simulated hard disk (often a "qcow2" file) using the appropriate command from the host machine while the VM is shut down, e.g.
qemu-img resize /var/lib/libvirt/images/linux_mint.qcow2 +2GB
(again run as root, substitute your file name) which would add 2 GB virtual hard disk space. You need at least 2496752k-1969872k which is a little over 514 MB just to copy over the files but then the hard disk would be full; use at least 1 GB more, perhaps much more like 10 GB if you want to work with the program, save files and update the system in the future.
(b) resize the system partition of the simulated hard disk, again from outside the VM. Since I do not understand your unusual partitioning setup within the VM (with /cow as an overlay file system apparently on a simulated DVD) this would need more work to figure out.
(c) resize the file system on the partition we just resized - again this depends on your setup
(d) copy over the files to the newly increased root/home partition, e.g.
rsync -uav /media/mint/3424-9F51/real-lisp/portacle-linux /home/mint/
then find your files in /home/mint/portacle-linux and try working from there.
Alternatively to 2 (a) to (c) you could add an additional disk to your virtual machine and use this as a /home partition, thereby making space available to continue with (d). This would be easier to set up. Please let us know if you need instructions for that. (You would need to copy/move over all files from your previous /home unless it is OK to "start fresh".)
|
I use VirtualBox 6.0.6 on Windows 10 to work in Linux Mint. I use a USB drive with a programming environment on it (Portacle). It contains an executable file (portacle.desktop). I found myself unable to run the file. A window always popped up:
The application "portacle.desktop" has not been marked as trusted (executable).
Clicking "Launch Anyway" or "Mark as Trusted" achieved nothing. It turned out that the file option "Allow executing file as program" was turned off. However, when I turned it on, it immediately turned itself off. Owner was "mint", changing it resulted in "The group could not be changed. You do not have the permissions necessary to change the group of 'portacle.desktop'", even when running as root. Many people have had similar problems and asked here, and they were told to change attributes/permissions. Changing permissions didn't solve the problem. Changing the owner (even as root) gave the error:
chown: changing ownership of 'portacle.desktop': Operation not permitted
Trying to see (or change) the file attributes resulted in:
lsattr: Inappropriate ioctl for device while reading flags on portacle.desktop
Searching that, I found several people with the same problem, but their solutions were specific workarounds not applicable to my case. I also tried moving the files from the USB drive to the main drive. Besides bizarre problems like a folder suddenly being seen as 140 TB in size, the ioctl problem did not go away and everything went more or less along the same lines.
Full path of file: /media/mint/3424-9F51/real-lisp/portacle-linux/portacle.desktop
Output of mount | grep /dev:
root@mint:/media/mint/3424-9F51/real-lisp/portacle-linux# mount | grep /dev
udev on /dev type devtmpfs (rw,nosuid,relatime,size=1998648k,nr_inodes=499662,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
/dev/sr0 on /cdrom type iso9660 (ro,noatime,nojoliet,check=s,map=n,blocksize=2048)
/dev/loop0 on /rofs type squashfs (ro,noatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
/dev/sda1 on /media/mint/3424-9F51 type vfat (rw,nosuid,nodev,relatime,uid=999,gid=999,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)Output of df:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 1998648 0 1998648 0% /dev
tmpfs 403956 1092 402864 1% /run
/dev/sr0 1927648 1927648 0 100% /cdrom
/dev/loop0 1845760 1845760 0 100% /rofs
/cow 2019772 49900 1969872 3% /
tmpfs 2019772 0 2019772 0% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 2019772 0 2019772 0% /sys/fs/cgroup
tmpfs 2019772 4 2019768 1% /tmp
tmpfs 403952 28 403924 1% /run/user/999Output of free:
total used free shared buff/cache available
Mem: 4039548 1201060 1581880 158384 1256608 2445112
Swap: 0 0 0Output of du -ks /media/mint/3424-9F51/real-lisp/portacle-linux:
2496752 /media/mint/3424-9F51/real-lisp/portacle-linux
| Can't run file on Linux Mint because of permissions/ownership issues |
If you add a setgid bit on the directory lisa and give it bart's group — e.g. chgrp bart lisa; chmod 2775 lisa — then new files created inside the directory will inherit the group bart. If you then add homer to the bart group, both you and bart will be able to access those files (the files' owner will still be whoever created them, though). There is a setuid concept for directories, but it is not implemented on Linux. The alternative is POSIX ACLs, which have their pros and cons, but for what you need, setgid directories might work.
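Put concretely, a sketch using the names from the question (the path is hypothetical):
sudo chgrp bart /srv/lisa      # give the directory the service account's group
sudo chmod 2775 /srv/lisa      # group rwx plus the setgid bit
sudo usermod -aG bart homer    # let homer into that group (takes effect at next login)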
|
Let's say thatmy user account is homer
there is a background service marge running an account bart.
marge is using a directory lisa for its data.
I have set the owner of lisa to bart.
If I create a file and try to copy it to lisa, it fails due to permission. I can copy it by sudo cp, but then, the file's owner becomes root, which bart cannot read. I want the owner of all files in lisa to be bart. I can manually change the owner of the file to bart after copying it into lisa, but can't it be automatically done? That is, I want the owner of all files in lisa, no matter who copied/created them into lisa, to be bart by default.
| Copied file to have the same owner as its directory |
You could try the find command:
find /my/path -maxdepth 1 -type d -printf "%u %g " -exec du -h --max-depth=0 {} \;should locate all directories (filter -type d) one level below the starting point /my/path (option -maxdepth 1). It will thenuse the -printf option to print owner and group, and then
invoke du --max-depth=0 on each directory found ({}) to print the name and total size directly behind the output of the preceding -printf option, using the -exec mechanism. |
Anyone know how to view all the folders within an directory with size, folder/file, owner?
The only command I know of is du -hs *
But that shows all the subfolders aswell and does not show owner.
For example,
I would like to get the info size, folder/file, owner of the folder/file under "/my/path/".
Any know of command which could provide me with this info?
Br
Hultman
| View size and owner on folders, within a folder |
There is no tool to do this. Only ipcrm (for deleting existing shared memory objects), ipcmk (for creating shared memory objects) and ipcs (for showing existing shared memory objects) are present (I mean the util-linux project).
The kernel doesn't provide a /proc interface for changing Sys V shared memory objects, unlike POSIX shared memory (/dev/shm/<object>).
You can write your own tool using the shmctl(2) syscall (its IPC_SET command can change the owner). Many tutorials and books about Unix IPC have plenty of examples of shmctl.
|
When I run ipcs -m, I can see a list of the shared memory segments on the system, like
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 0 user1 664 342110 0
0x00000000 32769 user1 664 28391740 5
0x00000000 65538 user1 664 1929302 4How can I change the owner of a shared memory segment?
| Change ownership of shared memory |
The "conflict" in your link refers to a non-root user. Non-root users can only change the group of a file to a group they belong to (due to the reasons mentioned there).
However, root himself could set any user and any group to any file, and the owner of the file doesn't have to belong to the group. So there's no conflict.
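For example, as root (alice, audio and report.txt are just placeholder names):
sudo chown alice:audio report.txt   # works even if alice is not a member of the audio group
ls -l report.txt                    # shows owner alice, group audio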
|
It may be a stupid question, but I don't understand a detail about the command chown. I haven't found any explanation for this detail yet, maybe because it's so obvious to everyone.
When you change a file ownership, you can set a user parameter and a group parameter, using the following basic syntax:
chown <username>:<groupname> <filename>
This syntax allows you to put in the <username> field a user (which belongs to a certain group), and in the <groupname> field a group.
When I learned about the chown command for the first time I thought the groupname must be the same as the user group.
But then I found out that the groupname can refer to a different group from the one the user belongs to.
Does this mean that you can set ownership to a user and a group, with the group being unrelated to the user group?
If yes, it seems to me that this issue conflicts with what I found here. Or am I just getting confused?
Thank you!
| Does chown command allow to set group different from user group? |
In Unix, there are files and directories (and some weird "files" like pipes and devices, but permissions on them work just like plain files), and symbolic links (in essence, files containing the name of the file they point to). A directory is just a list of file names and references to the corresponding physical files. This way you can have the same file appearing under different names, or under the same (or another) name in different directories.
There are three basic permissions on filesystem objects: r(ead), w(rite) and e(x)ecute. For regular files, read means to be able to read its contents (e.g. copy it, view it, ...), write means to be able to modify its contents (overwrite, add stuff at the end, truncate to length zero; note that this is independent of reading, you can have a file you can modify but not read), execute means running it as a program. For directories, read means listing its contents (file names), writing means modifying (adding/deleting files), execution means using the directory to get at the files themselves (if you have r but not x on the directory, you can see the file names, but not get at them). Symbolic links' permissions are irrelevant, just take them as mentioned above: A short file containing the file name pointed to, and the contents is processed normally. Yes, quite orthogonal (independent).
The system classifies permissions into three groups: The owner, the group the object belongs to, and everybody else. Each user belongs to one (or more) groups. When checking if an operation is allowed, first check if you are the owner, if so, the owner permissions rule; if you aren't the owner but belong to the group, group permissions are considered; otherwise, other permissions are checked.
True, it allows rather nonsensical combinations of permissions. But it is a simple model, and some day you'll find use for some "nonsense" combination.
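A quick demonstration of one such odd combination, and of the fact that the owner class is checked first (any scratch directory will do):
touch demo.txt
chmod 044 demo.txt   # owner: ---, group: r--, other: r--
cat demo.txt         # "Permission denied": you are the owner, so only the owner bits apply to you
chmod 644 demo.txt   # the owner can always change the permissions back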
The owner of some object has the power to change permissions at will.
|
An example of my question would be the /home directory:
drwxr-xr-x 8 root root 4096 Jan 29 23:44 home/
So, the owner of /home is root.
But I'm the owner of my personal home folder:
drwx--x--- 85 teo teo 4096 Jan 30 16:22 teo/
Why is my user able to modify things under teo/ folder if the /home is owned by the root?
I mean, modifications on my personal folder are also modifications in the /home folder, because it is a subfolder of /home, and I'm not in the root group.
| Why owners of files and folders can modify it's contents if they don't have permissions on the parent directory? |
The form of su that you're looking for is as follows:
$ su <user> -c 'pkill -9 RFBEventHelperd'
On OSX this form may not work. In those situations you'll likely have to yield to using sudo instead:
$ sudo -u <user> <cmd>
For this to run passwordless you'll have to create an entry in your /etc/sudoers file and utilize the NOPASSWD feature for an explicit command, so that the user running the original script can execute it without being challenged for the password.
Using sudo
To set up a rule in /etc/sudoers file to allow this user access to do the pkill command one could add this to /etc/sudoers:
%admin ALL=(ALL) NOPASSWD: ALL
And with this the shell script can then run this command without any password:
$ sudo -u root /usr/bin/pkill -9 "RFBEventHelperd"
NOTE: When dealing with /etc/sudoers edits you can use visudo like so:
$ sudo visudo
References
Enable sudo without a password on MacOS
How to use pkill as su on a Mac?
When I run su pkill -9 "RFBEventHelperd" in the terminal I get su: Sorry (but works as sudo). I am looking for a way to kill this process from inside a script that runs as su.
My application
I use SleepWatcher in MacOS to open Screen Sharing apps on wake up. After launching Screen Sharing app I want to kill the process RFBEventHelperd as this gives back Command+Tab to Mac instead of being bound to OS running in Screen Sharing.
SleepWatcher runs wake script as su:
$ su - $user -c "$home/.wakeup"
Question
How can I put the pkill -9 "RFBEventHelperd" into my script, and have it kill a process that's running as another user.
| MacOS: su pkill -9 “process_name” = sorry |
I don't know how to transfer ownership of files from an account on one installation to an account on another.
Files are not owned by a username, they are owned by a UID. The mapping between username and UID is usually managed in the users database file /etc/passwd. Here's an example snippet
root:x:0:0:root:/root:/bin/bash
tom:x:1000:1000:Tom Pearce,,,:/home/tom:/bin/bash
bill:x:1001:1001:Bill Brewer,,,:/home/bill:/bin/bash
jan:x:1002:1002:Jan Stewer,,,:/home/jan:/bin/bash
peter:x:1003:1003:Peter Gurney,,,:/home/peter:/bin/bash
When you run ls -l the UID/GID owners for each file are translated using this database to the corresponding names. You can see the raw numeric IDs with ls -ln.
So, to "transfer" ownership of files you have a couple of choices:
Make sure that the mapping of name to UID/GID is the same on both systems. No chown/chgrp is required in this instance because the files' ownerships are mapped to the same set of names on both systems.
Find out the original UID/GID and the target UID/GID and change every affected file one by one. This isn't quite as simple as it sounds because you have to be careful not to change a file to a UID/GID pair that will then later be changed once again. Typically, you would chown/chgrp each file to a temporary range of UIDs that isn't used anywhere on either system, and then change them from that set to the actual set.
# Example to change file UIDs from 1000 to 1010
find / -mount -user 1000 -exec chown 61010 {} +

# Later, when you've moved all the file ownerships out of the 1xxx range
find / -mount -user 61010 -exec chown 1010 {} + |
I currently have Ubuntu installed on one partition and my personal files (Pictures, Documents, etc.) on a second partition. I would like to install KDE Neon in the partition containing Ubuntu, while keeping the personal files partition.
I have yet to install Neon, but I have used a bootable USB.
The problem I've run into is that I don't know how to transfer ownership of files from an account on one installation to an account on another.
If I were transferring files between accounts on the same OS, I would just use chown and be done with it, but I don't know how to do that across OSes.
I realize that I could set the permissions so that others have read access and then copy all of my files using the Neon account, but that would take hours due to how many files I have. I would rather use chown or something similar.
| How Do I Transfer Ownership of Files Between Distros? |
They should all be owned by root.
On the other hand, the system (Lubuntu 20.04 x64) runs normally. So does installed software, I do not face any troubles at the moment. Can it be the case when this saying «If it is not broken, don't fix it» is to be remembered?
No. I would consider your system broken (not horribly broken, but broken). It just doesn't hurt you yet. The directories are usually owned by root, so for any other user it makes no big difference whether they are owned by root or someone else; and root is root and can access them regardless of the owner or permissions.
Apart from the security aspect, some software might refuse to run if the owner is not as expected. E.g. I can imagine a swapfile owned by a user becoming problematic. It is at least a warning sign.
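If you want to check and put this right, something along these lines should work (review the find output before running the chown; the directory list is only an example taken from the question):
# list first-level entries under / that are not owned by root
find / -maxdepth 1 ! -user root

# then reset the ones that should belong to root, e.g.:
sudo chown root:root /var /opt /srv /mnt /media /snap /lost+found /swapfile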
|
Today, I have accidentally noticed that the following directories of the / are owned by user, rather than root:
/home
/lost+found
/media
/mnt
/opt
/snap
/srv
/swapfile
/var
I have no idea how that could happen. It seems only logical that some of them with the obvious exception of /home should be owned by root. If so, which ones?
On the other hand, the Lubuntu 20.04 x64 system and installed software run both normally.
I do not face any troubles at the moment.
Should I follow the "If it is not broken, don't fix it" approach?
| Which first level directories in Linux should be owned by user? |
You could use GNU find and GNU xargs to search for the wp-content directories and pass the result NUL-terminated to a shell script:
find /path/to/directory -type d -name 'wp-content' -print0 | xargs -0 sh -c '
for dir; do
# change user and group recursively to nginx
chown -R nginx:nginx "$dir" # change dirs to 755
find "$dir" -type d -exec chmod 755 {} + # change files to 644
find "$dir" -type f -exec chmod 644 {} +
done
' sh
Alternatively, you could save the script part in a shell script myscript.sh:
#!/bin/sh
for dir; do
# change user and group recursively to nginx
chown -R nginx:nginx "$dir" # change dirs to 755
find "$dir" -type d -exec chmod 755 {} + # change files to 644
find "$dir" -type f -exec chmod 644 {} +
done
Then make the shell script executable with
chmod +x myscript.sh
and run find (not necessarily the GNU implementation) using the -exec action and pass the result to the script:
find /path/to/directory -type d -name 'wp-content' -exec ./myscript.sh {} + |
I'm running into security issues with multiple Wordpress websites, and I need to recursively change the ownership (and permissions) for folder "wp-content" (and whatever is inside them).
I need to find all the folders named wp-content (there are several) and change them and all their contents so that they're owned by nginx:nginx with permissions 755 for folders and 644 for files.
I can't figure out a way to find those folders and then change the ownership.
Any clues? :/
| Find specific folders and then change their ownership |
There are 3 ways to change user of a process in Unix.
2 system level ways to change user of a process
If the process has capability CAP_SETUID (traditionally root has this capability, and all other capabilities), then it can use the setuid, setreuid, setresuid, setfsuid system calls to change to any other user. Any other user can shuffle uids: a process has 3 uids and it can move them around at will: it can swap them, or remove them until it is down to one. It can not add uids, unless it has capability CAP_SETUID. In general a process can only lose privileges or move them around using these system calls. These calls allow the program to continue.
exec a suid executable: If an executable file has its suid bit set, and if it is of a valid type (not a script, not java, not …), then when it is run, its effective user id is changed to that of the file's owner. (The same can be done for group with the sgid bit.) This is the only way to gain privileges. The current program ends when exec is called, it is replaced with the new program, but it is the same process, and it also inherits open files (e.g. stdin, stdout, stderr).
fork does not change user.
A forked process is an exact duplicate of its parent, with a few exceptions (see man fork). In particular the uid, gid, and capabilities are not changed.
Utility methods
These programs use the 2 system methods described above.
Use sudo or su:
su will ask for the password of the other user.
sudo will ask for your password, but will only work if you are registered in the sudoers file.
sudo, su, login, cron etc use the 2 system methods. (And will create a new process. The other system methods do not create a new process.)
What does sudo, su do?
#↳ ll /usr/bin/sudo
-rwsr-xr-x 1 root root 155K Sep  9  2017 /usr/bin/sudo*
As you can see the sudo executable is owned by root, and has the suid bit set (the s, where you would expect to see the first x).
When sudo is run, it runs as root (don't try this, unless you know what you are doing). It then does security checks. Then it uses set??uid to become the required user, it then execs (and maybe a fork) the required program.
Running a process, without logging in
Use some timed start service:
cron
at
Send a network message, e.g. a web-server may run a task in response to a web request.
Use automated login: use ssh to launch a process, via a script on another machine.
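A few illustrative shapes these can take (paths, times and account names are placeholders):
# a cron entry (crontab -e as the target user): run nightly at 03:00
0 3 * * * /usr/local/bin/nightly-task

# a one-shot deferred job, queued by the target user
echo '/usr/local/bin/long-job' | at 02:00

# started remotely over ssh, running as the account you log in as on the server
ssh backup@server '/usr/local/bin/do-backup'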
| According to https://unix.stackexchange.com/a/489913/674
cron jobs can run as any user, without that user being logged in.
root doesn’t need to log in to start the init process, thankfully (imagine handling a fleet of thousands of servers and millions of VMs otherwise);
If I want to run a process, with me as its owner, without logging in, how can I do that at both system/library call level and utility level?
If root wants to do that, how can it do it?
How can a service user which can't log in start a process as its owner or become its owner later?
Is the only way to call setuid() or seteuid() in the program run by the process?
Thanks.
| How can I run a process as its owner or become its owner without logging in? [closed] |
I don't understand why I need to own the file when my group has every permission
There may not be any technical reason for this check, but a program is free to choose the conditions it checks.
These checks are probably done during the start-up of wine, so you could create
a wrapper script
a sudo ruleso that the wrapper script would change the owner of these files to the caller:
# start-my-game.shfile_paths=( '/mnt/steam/SteamLibrary/steamapps/compatdata/1118200/pfx' )sudo chown "$USER" "${file_paths[@]}"# or for simpler sudoers rules (allow each file separately)
for file in "${file_paths[@]}"; do
sudo chown "$USER" "$file"
donethe usual start command line |
I have a multi user system and if possible I'd like to prevent downloading every game 3 times.
Because of that I made a separate ext4 partition for my steam games and selected the partition in steam
drwxr-xr-x 4 root root 4096 May 21 17:06 ..
drwxrwxr-x 2 root steam 16384 May 4 19:59 lost+found
drwxrwxr-x 3 root steam 4096 May 21 17:22 SteamLibrary
All proton games don't launch
I used the launch option PROTON_LOG=1 and the log said this
wineserver: /mnt/steam/SteamLibrary/steamapps/compatdata/1118200/pfx is not owned by you
wine: '/mnt/steam/SteamLibrary/steamapps/compatdata/1118200/pfx' is not owned by you
I know that this is solvable with a
chown -R user:user /mnt/steam
but I don't understand why I need to own the file
my group has every permission
I can't find anything online because there is a similar issue with having a ntfs partition that's filling my search results.
also my user is in the steam group.
ChatGPT mentioned something about acl and advanced permissions but this
sudo setfacl -R -d -m group:sharedgroup:rwx /mnt/steam/SteamLibrary
It seems that it would only set read/execute/write permissions, which I already have.
I wanted to ask if it is possible to fake own a mountpoint to multiple users or find some other workaround for this.
| Steam not running proton games on separate EXT4 partition |
In comments you say that you used the command
chown memsql singlestore1
This command would set the owner of singlestore1 to memsql. However, you appear to want to set the group to memsql. You can do this in three different ways:
Set both owner and group with chown:
chown memsql:memsql singlestore1
Set only group with chown (this seems to work in practice on the systems that I'm using, but is strictly not how chown is supposed to be used):
chown :memsql singlestore1
Set only group with chgrp:
chgrp memsql singlestore1 |
I have two users,memsql and 4px. I have copied dir from 4px user, home dir to memsql user home dir.
I want the permission for dir poc & singlestore1 to be memsql , memsql. Instead of memsql & 4px user in below logs.
Logs:
[memsql@rnd-2 ~]$ ls -al
total 32
drwxr-x---. 3 memsql 4px 20 Jan 13 07:17 poc
-rw-rw-r--. 1 memsql memsql 1032 Jan 13 08:16 setup.cnf
drwxrwxr-x. 5 memsql 4px 4096 Jan 13 03:11 singlestore1
-rw-------. 1 memsql memsql 2425 Jan 13 08:16 .viminfo
I tried the chown command but 4px for both dirs did not change to memsql.
Command I ran:
chown memsql singlestore1
Expected output:
Logs:
[memsql@rnd-2 ~]$ ls -al
total 32
drwxr-x---. 3 memsql memsql 20 Jan 13 07:17 poc
-rw-rw-r--. 1 memsql memsql 1032 Jan 13 08:16 setup.cnf
drwxrwxr-x. 5 memsql memsql 4096 Jan 13 03:11 singlestore1
-rw-------. 1 memsql memsql 2425 Jan 13 08:16 .viminfo | How to keep ownership to same user for dir |
There are typically other users owning files; which specifically depends on your distribution and the packages you have installed.
You can find them by running
find /usr/bin \! -user root
(with other paths too, depending on what files you're curious about). For example, on Debian-based systems, /usr/bin/man is owned by man.
More comprehensively, if your system assigns ids starting at 1000 for “real” users,
find / \( \! -user root \) -uid -1000
will list all files owned by a non-root, system user.
Distributions commonly set up quite a few “system” users; see List of group id actually used on Debian for a detailed list of default Debian system users. Additional users can be added by packages. These users don’t necessarily own files, they can be used to run programs; but any Unix-style system will generally have quite a few files owned by users other than root and actual human users.
|
Typically, after installing a distro we get a root user and a sudo user.
I don't know if these are typically the only users owning files (as a user/u or "owner"), are they? or are there typically more users owning files and if so, what are these?
| Typically, which file users (owners) come with a distro? |
If you want to not copy everything over you will have to do an initial run of -a --size-only, which will avoid using timestamps to determine how to get things in sync. After running with --size-only and -a, rsync will have corrected the permissions and timestamps on the destination. After that you can use just -a, which is a better check, since files don't always change in size when modified. Given you are using -u, I don't know if that means there are files being written to the destination that should not be overwritten. I would caution you to use --dry-run so you are comfortable with the rsync execution before making any changes.
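Concretely, that would look something like this (source/ and dest/ stand for the question's paths; add back -z or -u if you need them, and keep --dry-run until you are happy with what it reports):
# one-off pass: compare by size only; -a still repairs perms/owners/timestamps on matched files
rsync -a --size-only --dry-run source/ dest/
rsync -a --size-only source/ dest/

# afterwards, a plain archive run is the better check
rsync -a source/ dest/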
|
I'd like to know if the following scenario will update permissions, ownership, timestamps etc.
Say I transfer a folder from a destination to another using rsync -zr source/ dest/, and then use the command rsync -auzr source/ dest/ - will the latter command then update the permissions, ownership, and timestamps or will I have re-transfer all the files again?
| Update only permissions etc. with rsync |
As follow up to Ansible: Fails at curl command I've performed the following test
---
- hosts: test
become: true
gather_facts: false vars: VERSION: "9.0.65" tasks: - name: Uncompressing the Tomcat source
unarchive:
src: https://dlcdn.apache.org/tomcat/tomcat-9/v{{ VERSION }}/bin/apache-tomcat-{{ VERSION }}.tar.gz
dest: "/home/{{ ansible_user }}/tomcat"
owner: "{{ ansible_user }}"
group: "ansible_users"
mode: 0770
extra_opts: [--strip-components=1]
remote_src: true
environment:
http_proxy: 'localhost:3128'
https_proxy: 'localhost:3128'
register: _extract - name: Show result
debug:
var: _extractand found it working as expected.It extracts them but doesn't change the owner and group of the extracted files.I wasn't able to produce the mentioned issue as owner and group of the extracted files became changed as they should.
|
I wrote this to extract the files and folder on a remote server.
It extracts them but doesn't change the owner and group of the extracted files.
---
- name: Uncompressing the Tomcat source
become: true
become_user: someuser
unarchive:
src: /rmtdir/apache-tomcat-{{ VERSION }}.tar.gz
dest: /rmtdir/
owner: someuser
group: somegroup
mode: 0770
remote_src: yes
register: _extract
- name:
debug:
var: _extractObviously, I want ittochange the owner and group of the extracted files.
How can I get it to do that?
| 'unarchive' module in Ansible doesn't change owner and group of extracted files and folder |
When a drive (or partition, or other block device, or a disk image file, etc) is formatted, the top-level directory of the filesystem is owned by the user running the mkfs command.
Usually, that is root unless you're formatting a disk image file (or a block device you happen to have RW perms on) as a non-root uid.
If you want to change the ownership, mount it and then chown the mounted directory. This will change the ownership of that top-level directory in the formatted fs itself, so the ownership change will persist after unmounting. For example (as root):
mkfs.ext4 /dev/sdaX
mount /dev/sdaX /mnt
chown user:group /mnt
This has to be done while the fs is mounted, otherwise it will only change the owner of the mount-point itself (i.e. the directory in the parent filesystem), and this will be over-ridden by the owner in the mounted filesystem when you mount it.
For example, /mnt is just a directory in / until you mount another filesystem on it. It has whatever ownership and perms that are set for it in the / fs. When you mount another fs on /mnt, it now has whatever ownership and permissions are set for the top-level directory of that filesystem.
FAT is not a unix filesystem and does not support unix ownership or permissions. When you mount a FAT filesystem, you specify the ownership and permissions of all files in the fs when you mount it (default is the uid & gid of the the mounting process).Note that mkfs for some filesystems allow you to specify the owner when formatting but, because each such fs has its own method of doing that, it's generally easier just to chown it after mounting it for the first time (as shown above), and not have to remember a minor convenience feature of a rarely-used tool. e.g. mkfs.ext4 does this with an extended option (-E):
mkfs.ext4 -E root_owner=uid:gid /dev/sdaX |
I encountered an issue with a disk drive which I already posted on. (For the curious: Issue with device after formatting)
In short, one of my disks stopped allowing me to copy files by drag and drop using Dolphin (Debian), and allowed me only if I was doing it from the terminal using sudo.
I researched about my issue and noticed something:This has already happened to me with another disk drive.That disk drive and this one were erased with dd if=/dev/zero of=/dev/sdX where sdX is the drive in questionIt did not happen with other disk drives which were not erased with dd but only formatted (with mkfs) and/or partitioned (e.g. gpt partition created with multiple primary partitions).In that disk and this one, the owner was changed to root, and no longer user.So my questions are:Why did this happen with fully erased disks and not with formatted or partitioned disks?How do permissions work exactly? Are they written into the disks? Or is ownership written into the disk?Is it possible to change the owner of the disk so that the change is persistent across Linux distributions?Edit: I tried to format the disk with exfat. Drap and drop with Dolphin works and the ownership is changed to user. I tried to format the disk with ext4. Drag and drop does not work anymore. The ownership was changed to root. I tried to change the ownership of the disk drive to the current user. The command line exited without issue (terminal: sudo chown ...: /dev/sdX -R -w). However, when using it with Dolphin, drag and drop does not work. Dolphin still lists ownership as root. If manually mounted from terminal, the directory created for mounting will only show ownership as root (even though the directory was created without requiring sudo). If automatically mounted from Dolphin, it will also only show ownership as root. Mount point name changes between two automatic mountings by Dolphin.
I should also add that I did format other drives with ext* filesystems. There are no issues with them (even with ext4) as long as I did not run dd if=... of=... on them (to erase them completely).
Can you explain to me what is going on?
Why does it seemingly show that ext* format automatically makes root the owner and the exfat format not? Both commands were run using mkfs.
Edit: Forgot to write that I use Debian.
| Ownership, disk drives and permissions |
I think what you are looking for, is to:
Use a setgid on each user's directory, so that each new file in that directory will have the same group as the directory; and
Set your system's umask to 0002, as it appears to be 0022. umask removes existing permissions from the default permissions, which is 0777 for directories and 0666 for files. With the new setting, the default permissions will change for files: from 0644 to 0664, and for directories: from 0755 to 0775. My understanding of the details you've given is that it will apply to your system.To put a setgid on all the subdirectories, use the find command as follows, but ensure that your starting directory is the one just on top of the users' directories, so that a simple ls will list them all, as using the wrong starting directory can cause a bit of a pain reversing all that has been done:
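A quick interactive check of what the 0002 umask does (illustrative; where you make the setting permanent depends on how files are actually created on the NAS):
umask 0002            # for this shell only
touch newfile
mkdir newdir
ls -ld newfile newdir # newfile should be rw-rw-r-- (0664), newdir rwxrwxr-x (0775)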
find ./ -type d -mindepth 1 -maxdepth 1 -exec chmod --preserve-root g+s '{}' \;
The given options do the following:
-type d returns everything of filetype 'directory';
-mindepth 1 prevents the starting directory from being listed, so that it's permissions will not change;
-maxdepth 1 lists the 1st level subdirectories, but does not go deeper into their own subdirectories;
-exec executes the following command on every item that passes the tests, which is what '{}' stands for; and
--preserve-root is a protection in chmod to prevent the permission change to accidentally be applied to the root directory (and potentially the whole filesystem).
If you're not sure what will be affected, simply run the find command without the -exec argument, like so:
find ./ -type d -mindepth 1 -maxdepth 1, which will give you a list of every file it would pass on to whatever command you use with the -exec argument.
Possible duplicate
With a small search I found this question may have been (partially) answered here:
How to set default file permissions for all folders/files in a directory?
The accepted answer refers to a step-by-step tutorial on how to set default permissions for a directory in https://www.linuxquestions.org/questions/linux-desktop-74/applying-default-permissions-for-newly-created-files-within-a-specific-folder-605129/
Please confirm whether this is or is not the case.
|
I have a NAS with a directory for each member of my house.
My user name, is member of each member's group. So I have access to folders of all users.
I want, when I place a file, inside a user's directory, the file take the ownership from parent directory. Or get specific permissions.
Directory of my wife is "zoe_folder" with ownership zoe:zoe and permissions rwxrwx---.
When I run e.g. as root the command touch file.txt inside my wife's directory (or subdirectories), then file.txt has ownership root:root and permissions rw-r--r--. I want ownership zoe:zoe (or as workaround permissions rw-rw-rw-).
And of course without run any of commands chmod, chown. It is a NAS and client PCs are Windows. I cannot login every time in NAS with console and change permissions.
Any ideas?
| Change owner to a file when saved in a specific folder |
I reproduced the problem you see, and looked through the very verbose debug output, and the OCaml copy.ml code (I am not familiar with this language) but did not see any obvious reason for why it could not use the copyprog setting.
However, this issue from February 2017 says that copyprog only works for new files (copyprogrest is for continuing an interrupted copyprog). A fix for that would probably also solve your problem. It is now marked as an enhancement. You might like to post a new issue there giving your use case.
|
I am using Unison to synchronize files between several clients. Each client is identical, meaning that whenever one client updates a certain file, all other clients must be updated consequently.
The files are stored in a centralized cloud server. Each client has non-root SSH access to the centralized cloud server. There is no link between the clients.
It's important that ownership of the files is preserved. For this reason, I am using --rsync-path="rsync --fake-super" below. This stores the owner/group in the extended file attributes, so ownership on the client can be restored during synchronization afterwards. That said, if there is a better method to preserve ownership, feel free to let me know, as this might also eliminate the problem below.
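For what it's worth, you can verify what --fake-super stored on the server side by dumping the extended attribute (this assumes the getfattr tool from the attr package is available there):
getfattr -d -m 'user.rsync' /path/to/file/on/server
It should show a user.rsync.%stat value encoding the original mode, owner and group.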
A relevant snippet from the configuration is as following:
copythreshold = 0
copyprog = /usr/bin/rsync -avzX --rsync-path="rsync --fake-super" --inplace -e ssh
copyprogrest = /usr/bin/rsync -avzX --rsync-path="rsync --fake-super" --inplace --partial -e ssh
I observe the following behavior: when a file is created, rsync as configured in copyprog is used to transfer the files.
This is great, because now the newly created file has the user.rsync.%stat attribute set (which holds the owner/group) on the cloud server. A consecutive synchronization on the other clients will indeed preserve the ownership.
However, when the file is updated, rsync as configured in copyprog is not used. I believe Unison does some custom built-in transfer logic instead.
This is not so great, because now the user.rsync.%stat attribute is lost on the cloud server. A consecutive synchronization on the other clients will now lose the ownership.
Is it possible to configure Unison such that copyprog is also always used for updates? The documentation mentions:
If you set copythreshold to 0, Unison will use the external copy utility for all whole-file transfers.
Unfortunately, nothing is mentioned about updates.
| Unison: always use 'copyprog' for updates |
From a security standpoint, for me, this directory, as well as any directory that the webserver needs to access, should be set up in such a way that the webserver is not the owner of the files/directories and has no rights to write to any directory/file.
By doing so, whatever breach you have in the server would never result in more than a DoS and an exfiltration; files could not be written to by the webserver.
Of course, in real life things may get more complicated as:
some CGI/application inside the webserver may need to have write access somewhere, for example to store session data if not done with a database
the webserver itself typically needs write access somewhere to write logfiles, etc.
in some setup, and it can have positive security effects, each application inside the webserver could run under another UID than the webserver itself. The juggling of ownership and rights could become complicated. This is probably why you often see people online saying: just put rwx everywhere and it will work. Of course, if you give everyone every right it "works", but there are consequences security-wise.
So if you have users connecting to your server to upload new files through SFTP so that they can be served by a webserver, I would:
make each user own its specific directory with full rights for them
make the webserver main group be the group of each user specific directory, with rx right for it
no rights at all for other users
By doing so, each user sees only its own files and nothing else (SFTP also has the option of chrooting for added security, but that comes with its own complexities), and the webserver has access to all files only for reading.
Inside each directory you can have all files owned by the group of the webserver or even put rx for anybody, since that will be protected by the top directory having no rights at all for other users than the owner or the webserver.
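As a concrete sketch (user names and paths here are made up, and www-data is assumed as the webserver group, as on Debian-like systems), that layout could be set up like this:
chown alice:www-data /srv/www/alice   # the user owns the directory, webserver group on it
chmod 750 /srv/www/alice              # rwx for alice, r-x for the webserver group, nothing for others
Repeat per user; files created inside can then safely be group-readable.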
| So I've searched this a lot and found conflicting answers.
I would appreciate, for everyone's sake, if someone could come up with a fully reasoned and cited answer for how to set ownership / permissions of /var/www/html
e.g. lets say you want to access the directory using sftp but your user cannot write files there by default. What is the correct solution to this, from a security standpoint.
(Let's assume a LAMP environment)
| Ownership of /var/www/html [closed] |
Thank you guys for Commenting. During experiments with the suggested parameters, I discovered the cause of this error message. My question didn't include the information which was crucial for this, so it was impossible to guess the cause.
I rechecked all directories and discovered that the files were out of sync. The files which show the error were owned by me on my local copy and by another person on the server. So, my rsync process read the ownership of the local copy and wanted to copy the user permissions which was denied by the (NFS) server.
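For reference, if you also want rsync to simply not attempt to apply ownership and permissions (instead of relying on everything being in sync), you can switch off the relevant parts of -a, for example:
rsync -avh --no-perms --no-owner --no-group /source/ /nfs/share/target/
The paths are placeholders; the point is that without -p/-o/-g rsync will not try to chmod or chown existing files it does not own.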
|
I have a directory on my server to which multiple users copy their data using rsync. All users use the options -a -h -v. All users are in the same group used for this directory. All users mount the share via NFS and use rsync in "local mode".
While syncing the output of rsync shows several errors:
rsync: failed to set permissions on <path>: Operation not permitted (1)
which happens on those files where the user executing rsync is not the owner. This behaviour is correct, since only the owner should change permissions. I want to prevent the error from being posted to the log, since it makes it very hard to detect "real" errors in the sync. How can I prevent rsync from setting permissions on files where the executing user is not the owner?
This question is similar to this: Setting permissions with rsync only if the file is owned by the user althoug i do not use --chmod option and this problem only applies to existing files.
Edit 1:
The fact that this relies on the owner is my interpretation. All files which show this error are owned by someone else. Additionally, this user does not exist on my machine. So, when I inspect the directory via ls -al, I only see the UID, not the username. Might this be an issue? The UIDs are identical on the server and all clients.
Edit 2:
Added info about local mode of rsync
| rsync permissions only for owned files |
You need to run it as root and use the --chown switch.
rsync -rlvz --chown=user:group your_options source destination
That will set the ownership to your required user and group.
|
I know there's lot of question around the subject here but I did not found the answer yet (after many research).
I have to upload the code of a web through rsync with a command line like the following:
rsync -rlvz --exclude-from=exclude_list.txt -e "ssh -i /home/user/.ssh/rsa -o -p $PORT" * user@$INSTANCE_IP:/home/public_html/foo/
The issue here is that rsync is changing the ownership from __apache:apache__ to __user:apache__ of certain folders inside /home/public_html/foo/, even though the folders are in exclude_list.txt, causing downtime on our production website.
Any ideas to prevent rsync from changing ownership of these specific folders?
| Prevent Rsync from changing ownership of the folder of the exclude list |
AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets which their destination address is not equal to its ip.
Correction: it rejects those packets whose destination MAC address is not equal to its own MAC address (or a multicast address, or any additional addresses in its filter).
Packet capture utilities can trivially put the network device into promiscuous mode, which is to say that the above check is bypassed and the device accepts everything it receives. In fact, this is usually the default: with tcpdump, you have to specify the -p option in order to not do it.
The more important issue is whether the packets you are interested are even being carried down the wire to your sniffing port at all. Since you are using an unmanaged ethernet switch, they almost certainly are not. The switch is deciding to prune packets that don't belong to you from your port before your network device can hope to see them.
You need to connect to a specially configured mirroring or monitoring port on a managed ethernet switch in order to do this.
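For completeness, the difference on the capture side looks like this:
tcpdump -i eth0       # puts the NIC into promiscuous mode (the default behaviour)
tcpdump -p -i eth0    # -p: do NOT put the interface into promiscuous mode
But as explained above, on a plain unmanaged switch promiscuous mode alone will not show you other hosts' traffic.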
|
AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets which their destination address is not equal to its ip.
I want to develop an application that monitors the internet usage of users. Each user has a fixed IP address.
I and some other people are connected to a DES-108 8-Port Fast Ethernet Unmanaged Desktop Switch
As said earlier, I want to capture all the traffic from all users, not only those packets that belong to me.
How should I force my NIC or other components to receive all of packets?
| How to capture all incoming packets to NIC even those packets are not belonging to me |
Take a look at this answer: How does a transparent SOCKS proxy know which destination IP to use?
Quotation:
iptables overrites the original destination address but it remembers the old one. The application code can then fetch it by asking for a special socket option, SO_ORIGINAL_DST.
|
Out of curiosity I'm reading some tutorials about transparent TOR proxies as it's quite interesting topic from a networking standpoint. As opposed to VPN gateways which just use tun/tap interfaces and are totally clear to me, TOR proxy uses a single port. All tutorials repeat the magic line:
iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040where eth0 is the input (LAN) interface and 9040 is some TOR port. The thing is, I completely don't get why such a thing makes sense at all from networking standpoint.
According to my understanding of redirect / dst-nat chains and how it seems to work in physical routers, dst-nat chain takes dst-port and dst-addr BEFORE routing decision is taken and changes them to something else. So for example:before dst-nat: 192.168.1.2:46364 -> 88.88.88.88:80
after dst-nat: 192.168.1.2:46364 -> 99.99.99.99:8080And 99.99.99.99:8080 is what further chains in IP packet flow lane see (for example filter table) and this is how the packet looks from now on after leaving device for example.
Now many people around the internet (including on this stackexchange) claimed that redirect is basically the same as dst-nat with dst-addr set to local address of interface. In such light, this rule:
iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040clearly doesn't make sense. If that would be how it works, then TOR would get all packets with destination 127.0.0.1:9040. For typical applications where app takes packet and responds to it somehow (for example web servers) it totally makes sense because after all, such a server process is the final destination of the packet anyways so it's okay that the destination address is localhost. But TOR router is well... a router so it has to know original destination of packet. Am I missing something? Does DNAT not affect what local applications receive? Or is it specific behavior of REDIRECT directive?
| What does iptables -j REDIRECT *actually* do to packet headers? |
I suspect your problem is more because whatever sends the UDP packets is not adding a newline character to the commands (as in, they should send "play\n" and not just "play").
In any case, if you want a new TCP connection to be created for each of the UDP packets, you should use udp-recvfrom instead of udp-listen in socat:
socat -u udp-recvfrom:3333,fork tcp:localhost:50000Then every UDP packet should trigger one TCP connection that is only brought up to send the content of the packet and then closed.
Test by doing:
echo play | socat -u - udp-sendto:localhost:3333(which sends a UDP packet whose payload contains the 5 bytes "play\n").
|
The UDP - must listen on port.
The TCP - must connect to a server.
I tried netcat and socat.
nc -v -u -l -p 3333 | nc -v 127.0.0.1 50000socat -v UDP-LISTEN:3333,fork TCP:localhost:50000Both work -- they delivered the message -- but the line is not ended.
VLC will only take the command if I close netcat/socat.
I monitored the connection with sockettest and the messages are one after another in the same line, like this:
playpausestopexitadd
I need the line to be ended so that the message transmitted looks like this:
play
stop
exit
addMaybe the packet is not ended?
I am wondering if nc or socat have options to send the packet/end line after a certain amount of time.
If I add \n to the output as suggested by @roaima, I get play\nstop\nplay\n on a single line.
| Create UDP to TCP bridge with socat/netcat to relay control commands for vlc media-player |
There are counters for each rule in iptables which can be shown with the -v option. Add -x to avoid the counters being abbreviated when they are very large (eg 1104K). For example,
$ sudo iptables -L -n -v -x
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
39 22221 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp spts:67:68 dpts:67:68
...
182 43862 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix "input_drop: "
182 43862 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibitedshows no dropped packets on my local network but 182 rejected with icmp and a log message such as the one you listed. The last two rules in the configuration with a policy of DROP were
-A INPUT -j LOG --log-prefix "input_drop: "
-A INPUT -j REJECT --reject-with icmp-host-prohibited
You can zero the counters for all chains with iptables -Z.
These counts are for the packets that iptables itself dropped. However,
there may be other filtering software that is also dropping packets due
to congestion, for example. You need to look at each one for whatever
statistics they provide. The (obsolete) netstat program can easily show the counts of packets that were dropped at the ethernet interface due to congestion before they are even delivered to iptables:
$ netstat -i
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR
enp5s0 1500 1097107 0 38 0 2049166 0 0 0 and you can also get some statistics on packets dropped elsewhere by the kernel for various reasons:
$ netstat -s | grep -i drop
27 outgoing packets dropped
16 dropped because of missing route
2 ICMP packets dropped because socket was locked |
We are using iptables firewall. It is logging and dropping various packages depending on its defined rules.
Iptables log file entries look like:
2017-08-08T19:42:38.237311-07:00 compute-nodeXXXXX kernel: [1291564.163235] drop-message : IN=vlanXXXX OUT=cali95ada065ccc MAC=24:6e:96:37:b9:f0:44:4c:XX:XX:XX:XX:XX:XX SRC=10.50.188.98 DST=10.49.165.68 LEN=60 TOS=0x00 PREC=0x00 TTL=57 ID=14005 DF PROTO=TCP SPT=52862 DPT=50000 WINDOW=29200 RES=0x00 SYN URGP=0Is there any way to get the count of the dropped packets ?
I want to calculate metrics like the number of dropped packets in the last minute, the last hour, and so on.
The main purpose is monitoring for configuration mistakes and security breaches. If the firewall rules have a mistake, abruptly bunch of packets start to get dropped. Similarly if an attack is happening we expect variation in the number of denied packets.
| How to get metrics about dropped traffic via iptables? |
The most basic form would look like this, in your /etc/pf.conf config:
block from any to 192.0.2.2
# which is equivalent to:
block drop from any to 192.0.2.2
By default this block action will drop packets silently on all interfaces, from any source IP, in both directions. Because a client is unaware it is being blocked, it will time out and likely try again, and again...
block return is the 'friendly neighbor' way to let the client know the address is unreachable by responding in a protocol specific way, with a TCP RST (reset) or ICMP UNREACHABLE packet. A client can use this information to give up, or try again in a sane way.
block return from any to 192.0.2.2
The default block behavior can be changed using the set block-policy option.
A more involved example - but easier to manage and read when your rule set starts to grow:
mybadhosts = "{ 192.0.2.2, 203.0.113.0/24 }"
ext_if = "em0"
block return on $ext_if from any to $mybadhosts                 # example #1
block return on em0 from any to { 192.0.2.2, 203.0.113.0/24 }   # ^expanded form
block drop out on egress from any to $mybadhosts                # example #2
example #1 Shows simple use of variables, a list {}, a netmask /24, and specifies an interface em0. (Note variables are defined without a $ sign, and quotes are removed, when rules are expanded at runtime)
example #2 Drops outbound packets on the egress interface group (see ifconfig(8))
See Also:
OpenBSD Manual Pages - pf.conf(5)
OpenBSD PF Packet Filter User's Guide
Firewalling with OpenBSD's PF packet filter by Peter Hansteen |
Can someone give me a hint on how to setup a basic deny rule whenever any TCP request is sent to a specific IP address? I am using the PF packet filter. Any help?
| Block outgoing connections to certain IP using PF |
The NFLOG target can be used for this purpose. Here is a very basic example:
# Drop traffic by default
iptables -P INPUT DROP
# add your whitelists here
# iptables -A INPUT ...
# Pass the packets to NFLOG (just like LOG, but instead of syslog,
# it uses netlink). You can add extra filters such as '-p tcp' as usual
iptables -A INPUT -j NFLOG
# packets that get here will now be dropped per INPUT policy
# Finally you can use tcpdump to capture from this interface (there
# can only be one active user of nflog AFAIK)
tcpdump -i nflog ...Refer to the iptables-extensions manual page for a description of the NFLOG target.
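Since the goal is to review the contents afterwards, you would typically write the capture to a file rather than the terminal; assuming the default nflog group 0 as used above:
tcpdump -i nflog:0 -s 0 -w /tmp/dropped-packets.pcap
The resulting pcap can then be opened in Wireshark or replayed later with tcpdump -r.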
|
I'm trying to find a way to record the entire contents of packets (possibly with tcpdump) that have been dropped according to rules in iptables.
At present, I have a rule to log these packets (with a log prefix), then follow this with a rule to drop them.
Is there a way to record the contents of those packets for review afterwards?
So, I'm looking for this:A rule that logs the matching packet
A rule that passes the packet to a new target that records its contents (maybe QUEUE target?)
A rule that drops the packet2 & 3 may even be combined.
My understanding is that tcpdump may not be able to do this as it examines packets before iptables and therefore will not record just the dropped packets.
Thanks.
| record contents of packets dropped in iptables |
Most Linux distributions include the config parameters used to compile the kernel in /boot/config-<kernel-version>.
So
grep -x 'CONFIG_PACKET=[ym]' "/boot/config-$(uname -r)"Should tell you if AF_PACKET socket support is included (m for as a module).
Otherwise, you can just try and create a socket (using socket(2), see packet(7) for how to do it) in the AF_PACKET family and check whether it reports an error.
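A quick runtime test along those lines (needs root, since AF_PACKET sockets require CAP_NET_RAW; the python3 one-liner is just a convenient way to issue the socket(2) call):
sudo python3 -c 'import socket; socket.socket(socket.AF_PACKET, socket.SOCK_RAW)'
If packet socket support is missing, this fails with an "Address family not supported by protocol" error; if it returns silently, support is there.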
|
How do I check that packet socket support has been compiled into my kernel? I'm running Crunchbang, a Debian-based distribution.
| How do I check if I have packet socket support enabled in my distro's kernel? |
To fix the EULA selection, you can change the value stored in the debconf database directly:
debconf-get-selections |
grep PacketTracer_731_amd64/accept-eula |
sed s/false/true/ |
sudo debconf-set-selectionsThen configure pending packages:
sudo dpkg-reconfigure --pending |
I have question similar to this one
dpkg: new pre-installation script returned error exit status 1
I'm getting error same as above when trying to install PacketTracer 7.3.1. I think I declined EULA. I know nothing about bash and debconf. Does anyone know how to modify this script?
#!/bin/sh -e
# Source debconf library.
. /usr/share/debconf/confmodule

remove_pt ()
{
if [ -e /opt/pt ]; then
echo "Removing old version of Packet Tracer from /opt/pt"
sudo rm -rf /opt/pt
sudo rm -rf /usr/share/applications/cisco-pt7.desktop
sudo rm -rf /usr/share/applications/cisco-ptsa7.desktop
sudo rm -rf /usr/share/icons/hicolor/48x48/apps/pt7.png
fi
}

db_fset PacketTracer_731_amd64/show-eula seen false
db_fset PacketTracer_731_amd64/accept-eula seen false
STATE=1
while [ "$STATE" != 0 -a "$STATE" != 4 ]; do
case "$STATE" in
1)
db_input critical PacketTracer_731_amd64/show-eula || true
;;
2)
db_input critical PacketTracer_731_amd64/accept-eula || true
;;
3)
db_get PacketTracer_731_amd64/accept-eula
if [ "$RET" = "false" ]; then
exit 1
fi
;;
esac
    if db_go; then
STATE=$(($STATE + 1))
else
STATE=$(($STATE - 1))
fi
done
This is what I got after adding set -x to the preinst script and trying to install the package.
+ . /usr/share/debconf/confmodule
+ [ ! ]
+ PERL_DL_NONLAZY=1
+ export PERL_DL_NONLAZY
+ [ ]
+ exec /usr/share/debconf/frontend /var/lib/dpkg/tmp.ci/preinst install 8.0.0 7.3.1
+ . /usr/share/debconf/confmodule
+ [ ! 1 ]
+ [ -z ]
+ exec
+ [ ]
+ exec
+ DEBCONF_REDIR=1
+ export DEBCONF_REDIR
+ db_fset PacketTracer_731_amd64/show-eula seen false
+ _db_cmd FSET PacketTracer_731_amd64/show-eula seen false
+ _db_internal_IFS= + IFS=
+ printf %%s\n FSET PacketTracer_731_amd64/show-eula seen false
+ IFS= + IFS=
read -r _db_internal_line
+ RET=false
+ return 0
+ db_fset PacketTracer_731_amd64/accept-eula seen false
+ _db_cmd FSET PacketTracer_731_amd64/accept-eula seen false
+ _db_internal_IFS= + IFS=
+ printf %%s\n FSET PacketTracer_731_amd64/accept-eula seen false
+ IFS= + IFS=
read -r _db_internal_line
+ RET=false
+ return 0
+ STATE=1
+ [ 1 != 0 -a 1 != 4 ]
+ db_input critical PacketTracer_731_amd64/show-eula
+ _db_cmd INPUT critical PacketTracer_731_amd64/show-eula
+ _db_internal_IFS= + IFS=
+ printf %%s\n INPUT critical PacketTracer_731_amd64/show-eula
+ IFS= + IFS=
read -r _db_internal_line
+ RET=question will be asked
+ return 0
+ db_go
+ _db_cmd GO
+ _db_internal_IFS= + IFS=
+ printf %%s\n GO
+ IFS= + IFS=
read -r _db_internal_line
+ RET=ok
+ return 0
+ STATE=2
+ [ 2 != 0 -a 2 != 4 ]
+ db_input critical PacketTracer_731_amd64/accept-eula
+ _db_cmd INPUT critical PacketTracer_731_amd64/accept-eula
+ _db_internal_IFS= + IFS=
+ printf %%s\n INPUT critical PacketTracer_731_amd64/accept-eula
+ IFS= + IFS=
read -r _db_internal_line
+ RET=question will be asked
+ return 0
+ db_go
+ _db_cmd GO
+ _db_internal_IFS= + IFS=
+ printf %%s\n GO
+ IFS= + IFS=
read -r _db_internal_line
+ RET=ok
+ return 0
+ STATE=3
+ [ 3 != 0 -a 3 != 4 ]
+ db_get PacketTracer_731_amd64/accept-eula
+ _db_cmd GET PacketTracer_731_amd64/accept-eula
+ _db_internal_IFS= + IFS=
+ printf %%s\n GET PacketTracer_731_amd64/accept-eula
+ IFS= + IFS=
read -r _db_internal_line
+ RET=false
+ return 0
+ [ false = false ]
+ exit 1
dpkg: error processing archive /home/yanaz/Pobrane/packet_tracer_modified.deb (--install):
new packettracer package pre-installation script subprocess returned error exit status 1
gtk-update-icon-cache: Cache file created successfully. | Package pre-installation script subprocess returned error exit status 1
Consider this Python3 example.
Server A:
#!/usr/bin/env python3
# coding=utf8
from subprocess import check_call
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.server import SimpleXMLRPCRequestHandler

# Restrict to a particular path
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/JRK75WAS5GMOHA9WV8GA48CJ3SG7CHXL',)

# Create server
server = SimpleXMLRPCServer(
    ('127.0.0.1', 8888),
    requestHandler=RequestHandler)

# Register your function
server.register_function(check_call, 'call')

# Run the server's main loop
server.serve_forever()
Server B:
#!/usr/bin/env python3
# coding=utf8
import xmlrpc.client

host = '127.0.0.1'
port = 8888
path = 'JRK75WAS5GMOHA9WV8GA48CJ3SG7CHXL'

# Create client
s = xmlrpc.client.ServerProxy('http://{}:{}/{}'.format(host, port, path))

# Call your function on the remote server
s.call(['alarm']) |
I want one of machine have a remote control alarm running that can be triggered by any remote machine. More preciselyMachine A is running the service in the background
Any remote machine B can send a packet to machine A to trigger the alarm (a command called alarm)How would you suggest do do it?
I would use nc:
Service on machine A:
nc -l 1111; alarm
Machine B triggers the alarm with
nc <IP of machine A> 1111
I can also write some python to open a socket...
| Remote control alarm |
There's a service that's already included with Linux that provides this feature, it's called xinetd. Red Hat maintains pretty good documentation on their website, titled: 2.6.4. xinetd Configuration Files. The service xinetd allows you to setup a master service that will listen on specific ports, and then launch other applications when connections are made on said ports.
excerpt from xinetd man pagexinetd performs the same function as inetd: it starts programs that provide Internet services. Instead of having such servers started at system initialization time, and be dormant until a connection request arrives, xinetd is the only daemon process started and it listens on all service ports for the services listed in its configuration file. When a request comes in, xinetd starts the appropriate server. Because of the way it operates, xinetd (as well as inetd) is also referred to as a super-server.NOTE: If it isn't installed you can install it, the package is typically called xinetd.
Once it's installed you place configuration files under this directory, /etc/xinetd.d. For example, let's create a service called minecraft.
# /etc/xinetd.d/minecraft
service minecraft
{
disable = no
type = UNLISTED
socket_type = stream
protocol = tcp
wait = no
server = /path/to/minecraft/server
bind = <ip of minecraft server>
port = 25565
user = root
}With the above file in place you can then manually start xinetd to check things out.
$ sudo service xinetd startNow when you attempt to connect to your system via port 25565 the minecraft server should start up and you should be able to access it. You might need to adjust the user = .. line to whatever user ultimately owns the server.
To make this persistent you can use whatever mechanism your distro uses to start services automatically during boot-up.
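For example, on a systemd-based distribution that would typically be:
$ sudo systemctl enable xinetd
while older SysV-style systems would use something like chkconfig xinetd on or update-rc.d xinetd enable, depending on the distro.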
References
Port Forwarding with xinetd
xinetd man page
xinetd.conf man page |
I want to start my minecraft server when someone tries to connect to it on port 25565. I have a plugin for the server which shuts it down after x amount of minutes without players online. With a shell script I created a loop that starts the server when it shuts down:
#!/bin/bash
while true
do
# run server
java -Xms2048M -Xmx2048M -Djava.awt.headless=true -jar "craftbukkit.jar"
# server shut down # run MCSignOnDoor
java -jar MCSignOnDoor.jar --sentrymode -m "Gone Fishin' Back in Five Minutes!"
# McSignOnDoor shut down # stop loop if error code is not 12
# so only restart the server when the program ended because of a packet
if [ "$?" -ne "12" ]; then
break
fi
doneMcSignOnDoor was a java program someone made that emulates an active server, and exits as soon as someone pings it on port 25565 with exit code 12. Sadly, this does not work since a protocol update, so I'm looking for an alternative.
Is there a way to wait until it receives a packet on port 25565 (or any other port) and then continue the script?
| Shell script to wait for a packet from a certain port |
One possibility is that if you are sending data out predominantly, most of the packets coming back to your system will be ACKs, and those are going to be much smaller than the PUSH you're sending.
|
According to a computation I did on the data from ifconfig, my ethernet connection between my router and computer averages 1298 bytes/packet for TX (close to the MTU of 1500) and only 131 bytes/packet for RX. What could cause such a large discrepancy in the average TX vs. RX packet sizes?
| What can cause a large TX vs. RX average bytes/packet discrepancy? |
You need content filtering, not packet filtering.
Packet filtering: works on ports, IP addresses, protocol layers, redirection, ICMP, UDP and other protocol-level criteria.
Content filtering: suppose you have a packet whose payload contains some forbidden term; you need to drop it based on that content.
Content filtering software: DansGuardian, SquidGuard, the hosts file, OpenDNS, FoxFilter (Firefox extension), WebCleaner.
|
I am curious if most Linux distros make it possible to intercept incoming network traffic as soon as it enters the system and filter its content based on some rules before any other client can use it or at least before it gets to a specified client.
E.g., let's say I wanted to have a filter that intercepts all HTTP traffic before it gets to a specific client (e.g. Firefox) and, if some pattern is matched, modify the HTML. Or replace all content coming from a certain remote host. I would like to be able to do that before it hits any client, regardless of the client.
Does Linux allow for that kind of packet filtering?
Additionally, I would also like to know what the workflow of a network packet is once it enters the computer from the port, i.e. if there is a sequence of steps assigned that gets performed before it becomes available to the client app that invoked it.
| Packet analyzer to intercept and filter incoming traffic before any client app |
You can generate some of the ICMP unreachable variants with qualifiers to iptables ... -j REJECT on a separate target host. (Or a VM.) The possible qualifiers are icmp-net-unreachable, icmp-host-unreachable, icmp-port-unreachable, icmp-proto-unreachable, icmp-net-prohibited, icmp-host-prohibited, icmp-admin-prohibited, and tcp-reset.
For example:
iptables -j REJECT --reject-with icmp-admin-prohibited |
I have a task on which I have spent a lot of time. I am not fluent in Linux, but I can manage basic things.
The task is to gather different types of ICMP packets. I can harvest them by tcpdump (which I prefer) or Wireshark.
I am able get the ICMP types of echo reply and echo request using ping, and time exceeded using tracepath or traceroute. Now, what I am trying to get is unreachable or timestamp or something else. I need two more types, however I don't know a way to produce.
I have tried pinging a nonexistent host or wrong port, and using tracepath the same way, but I am not getting anything.
Can someone advise me or tell me what commands I can use, and in which way, to obtain two more types of ICMP packets?
| How to get different types of ICMP |
The initial default qdisc set by the kernel with special handle 0: can't be modified nor referenced. It can only be overridden by a new qdisc. Using change references the existing root qdisc, but as this can't be the default kernel's qdisc, that's an error.
So the first time this netem qdisc is used, the add keyword should be used, and that's probably what was done at some point in the past. Then later the change keyword can be used to alter some of its parameters (like the corruption percent), since referencing it by the root keyword is enough.
As a shortcut replace will attempt change and if it fails will perform add instead.
So in the end this command will work the first and the following times too:
sudo tc qdisc replace dev ens8 root netem corrupt 5%
To remove this qdisc this should be done once (it would fail the 2nd time because that would be again done on the default qdisc installed by the kernel which is off-limits):
sudo tc qdisc delete dev ens8 root
The usage of add, change, replace (which is change or else add) and delete follows a similar pattern among many other iproute2 commands.
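At any point you can confirm which qdisc is currently installed on the interface with:
tc qdisc show dev ens8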
|
The following rule corrupts 5% of the packets by introducing a single bit error at a random offset in the packet:sudo tc qdisc change dev ens8 root netem corrupt 5%But recently it gave me the following error:Error: Qdisc not found. To create specify NLM_F_CREATE flagCould you kindly help me or provide me with some other methods to simulate packet corruption?
I'm trying to simulate packet corruption to see how well my error detection mechanism works.
| Error when trying to corrupt packets in linux terminal (netem) |
The name server you queried isn't in the US. It's much closer to you. (So, unfortunately, your Nobel will have to wait.)
That trace output shows www-contestwinners.com is using CloudFlare for its DNS provider. CloudFlare operates numerous servers around the world and your query gets directed to the closest (or as best they can manage) server.
(Note that the name server need not be—and often isn't—anywhere near the web server. Often name servers aren't even handled by the same company.)
|
I personally love to dig sites that I know. Here's a weird thing I saw on my terminal after running dig www-contestwinners.com:
; <<>> DiG 9.9.5-3ubuntu0.1-Ubuntu <<>> www-contestwinners.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27237
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www-contestwinners.com. IN A
;; ANSWER SECTION:
www-contestwinners.com. 300 IN A 8.29.143.192
;; Query time: 40 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Sun Jun 06 20:31:40 IST 2015
;; MSG SIZE rcvd: 67
Which is absurd, as my ISP is about 200 km away and I'm on a DSL connection. Considering that packets travel at the speed of light, the time for a packet to get from my device to the ISP server is about 0.6 ms, which is not much, but the site's server is in the US, about 13000 km from my ISP, which makes the time to reach the server about 43 ms. So the total round trip takes about (43+0.6)x2 = 87.2 ms.
I'm totally confused/excited. I think I just broke a basic law of Physics :p
EDIT: I checked dig www-contestwinners.com +trace to check against caching and got:-
;; global options: +cmd
. 412410 IN NS i.root-servers.net.
. 412410 IN NS k.root-servers.net.
. 412410 IN NS f.root-servers.net.
. 412410 IN NS g.root-servers.net.
. 412410 IN NS e.root-servers.net.
. 412410 IN NS l.root-servers.net.
. 412410 IN NS h.root-servers.net.
. 412410 IN NS b.root-servers.net.
. 412410 IN NS a.root-servers.net.
. 412410 IN NS m.root-servers.net.
. 412410 IN NS j.root-servers.net.
. 412410 IN NS c.root-servers.net.
. 412410 IN NS d.root-servers.net.
. 518400 IN RRSIG NS 8 0 518400 20150618050000 20150608040000 48613 . TJF2HD0Ob5niqlCZNhlOYHvwlZmEpebgV8uFwgvRLBCQb22sq+S8Hr4d CX9S5WgzRlTxCSQ3Bi9TJNlyf221rE1K53kFbRae6/vzjR2MukvF5d8G SEWinOcJ9n7l6fTq/HoxCv/GfliY6gTPWxrc8uiABdYYOj3u3XoUmbF7 Cug=
;; Received 397 bytes from 127.0.1.1#53(127.0.1.1) in 5306 ms
com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
com. 86400 IN DS 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766
com. 86400 IN RRSIG DS 8 1 86400 20150618050000 20150608040000 48613 . kSsNvyvdzfiJAxfpaRq4+bAe2JuKcDTcRnHDgGhiHNRsbcg04fHv/TNt Kkl0LuBpLcBWhBr74OkCLJxx5Q1KFkRhum2R7gHj6h5u8s4J84feqWeu fx69Defg2NWhToWDnqz0WzlUKF0nDsEyXTJDjsUeFrXu+baR3NSMLxvb zdU=
;; Received 746 bytes from 192.203.230.10#53(e.root-servers.net) in 5221 ms
www-contestwinners.com. 172800 IN NS dan.ns.cloudflare.com.
www-contestwinners.com. 172800 IN NS anna.ns.cloudflare.com.
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN NSEC3 1 1 0 - CK0QFMDQRCSRU0651QLVA1JQB21IF7UR NS SOA RRSIG DNSKEY NSEC3PARAM
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN RRSIG NSEC3 8 2 86400 20150613045314 20150606034314 33878 com. syTPVIWKgitzBOsgVgzOl7nEIsu7jhsmSXPzzLuGVUwZZC1QHc4dxmKP MkZUR+VcaY657/Knjk7Il5oOKWo8ZlTatk3+34504gWwdnbB3BShqTKS CFsWOEdw5wyf0gumuQk5GKnVR5Noo+q2+ZOxxy7LkEl0F/h7fuYj7sJA VkM=
A7FUA6LKSQCDHL6JDO7SFU649KJ6FAU5.com. 86400 IN NSEC3 1 1 0 - A7G276R35K72HBA9TE7NSAOEFTS5CADU NS DS RRSIG
A7FUA6LKSQCDHL6JDO7SFU649KJ6FAU5.com. 86400 IN RRSIG NSEC3 8 2 86400 20150614050705 20150607035705 33878 com. lIZrvjQdR4oNJTo8gW1uuzs1IuFXiqwZbI757xxBRdrYl22IDSDM4U4G i8PNVSOQ2T3ub++0VhoioWnp3aD+Uc1XmdR/jI5Z5bosIsfIrCj+CSCm ZlDShTEDsfOBLxvZ2LByGwibTHi/yuH57O+Zx3zp21RZu3xLAn2WT2aZ TrE=
;; Received 675 bytes from 192.12.94.30#53(e.gtld-servers.net) in 1539 ms
www-contestwinners.com. 300 IN A 8.29.143.192
;; Received 67 bytes from 173.245.59.108#53(dan.ns.cloudflare.com) in 53 ms
How is this possible?
| Dig faster than speed of light, Possible? |
Analysis
From what I can gather looking over the docs & via Google it looks like mtr is tracking packet loss itself by sending traffic and then keeping track of any drops that occur due to network congestion.
For example, the Linode tutorial titled: Diagnosing Network Issues with MTR states the following:
The i option flag runs the report at a faster rate to reveal packet loss that can occur only during network congestion. This flag instructs MTR to send one packet every n seconds. The default is 1 second, so setting it to a few tenths of a second (0.1, 0.2, etc.) is generally helpful.
The nature of this traffic is ICMP ECHO requests.
-i SECONDS
--interval SECONDS Use this option to specify the positive number of seconds between ICMP ECHO
requests. The default value for this parameter is one second.
And that method of measuring loss is why you can have loss. mtr is not using TCP to measure any loss, it's using ICMP, which can and does have packets that either get dropped or time out.
What about Snt?
The column Snt is telling you how many ICMP ECHO packets have been "sent".
|
I thought TCP protocol itself will guarantee not to loose any bytes while connecting. About this viewpoint, please refer to
https://stackoverflow.com/questions/23841896/will-tcp-connection-lose-packets
What puzzled me was how mtr (run with TCP protocol) calculate loss? TCP just has segment rather than packets. So, what 'Snt' means?
[root@ ~]# mtr --report --tcp --port=443 stackoverflow.com
Here, if some of the intermediary hosts do not want to reply at all (hence Loss% = 100.0) and some of them reply with an ACK (hence Loss% = 0.0), then how to explain hop #14 showing Loss% = 25.0%?
| How does MTR (run with TCP protocol) calculate the loss rate? |
You can do it :
rdr pass quick on $ext_inf inet proto tcp from any to any port 1394 -> $target port 1394 |
I am running macOS (really BSD), and I want to redirect certain traffic over an ssh tunnel using a local forward. Seems easy enough, but I am repeatedly blocked by the ambiguous "/etc/pf.conf:29: syntax error" message at every turn. I must have gone through 30 iterations of the rule by now. Additionally, I have read the relevant OpenBSD packet filter information regarding syntax and redirection. I am at quite a loss, and seek the help of someone smarter than myself about the BSD packet filter.
The goal is to take any traffic sourcing from my local machine destined to a machine on the internet to port 1234 and redirect the traffic to 127.0.0.1:1234. My specific os is OS X 10.10.2 Yosemite.
Here is the latest iteration of the rule which causes pfctl to return "syntax error"
pass out quick on en6 from any to en6 port 1234 rdr-to 127.0.0.1 port 1234
Based on the documentation and other random blogs on the Internet, this rule looks correct; pfctl however, disagrees.
The breakdown based on my understanding of the documentation is:
pass - the action to pass the traffic
out - the direction of traffic flow
quick - if the packet matches this rule, then consider this the last rule in the chain
on en6 - the interface on which to apply the rule
from any - the source of the packet (should always be my machine)
to en6 port 1234 - to anything on the interface destined for port 1234
rdr-to 127.0.0.1 port 1234 - redirect the packet to this interface
| macos - local port redirection using pfctl and syntax errors |
If the machine is compromised, everything you typed in when logging in (such as your username and password) can be compromised, so "Remember me" doesn't really matter anymore.
But even if we stick to cookies only, the hacker can extract the session cookies from the browser's profile and then use them in his browser.
Example : Firefox stores all its data in ~/.mozilla, the hacker can just copy that folder to his system and put it in place of his own profile folder, and when he uses that browser with your profile folder, all websites will think that it's actually you (except some websites that also look at the user's IP which will be the attacker's one, sadly not many sites offer that feature).
|
I was reading an article on how to sniff network packets. (of course for knowledge purposes only). I came across these particular lines. For instance, say I was sniffing traffic on the network, and you
logged in to Facebook and left the Remember Me On This Computer check
box checked. That signals Facebook to send you a session cookie that
your browser stores. I potentially could collect that cookie through
packet sniffing, add it to my browser and then have access to your
Facebook account.So, assuming my Linux client is compromised and am unaware of it currently, does that mean if I have clicked on remember me on this machine to login to my accounts, my personal details are compromised? How can the compromised machine's cookie information can be used in any hacker's browser?
| Is it bad to select remember me option in the browsers of a compromised machine? |
A possible solution using cgroups net_cls subsystem to group certain processes group, mark the packets in this cgroup using the iptables rule with the match extension, and then use tcpdump to monitor the packets from this cgroup. by listening to an nflog interface.
Preperations
Creating a cgroup in the net_cls subsystem
$ mkdir /sys/fs/cgroup/net_cls/firefox
Add the relevant pid to the cgroup
The best way, to ensure that all related pids are grouped, is to do so before you start running the application.
For instance, if you want to run firefox, first check the pid of your current shell (echo $$). Then add it to the cgroup you created.
$ echo <pid> > /sys/fs/cgroup/net_cls/firefox/tasks
All the processes spawned from your shell will now be assigned to the "firefox" cgroup.
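You can double-check that a given process really ended up in the group (on a cgroup-v1 system):
$ grep net_cls /proc/<pid>/cgroup
which should print a line ending in /firefox.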
Assigning a class id to the cgroup
From the documentation of the cgroups net_cls:You can write hexadecimal values to net_cls.classid; the format for
these values is 0xAAAABBBB; AAAA is the major handle number and BBBB
is the minor handle number.echo 0x100001 > /sys/fs/cgroup/net_cls/firefox/net_cls.classidMark those packets in an iptables rule
iptables has match extensions that you can leverage:MATCH EXTENSIONS
iptables can use extended packet matching modules
with the -m or --match options, followed by the matching module name;
after these, various extra command line options become available,
depending on the specific module.You use the cgroup extension module, and mark those packets by assigning the NFLOG as a group target:NFLOG
This target provides logging of matching packets. When this target is
set for a rule, the Linux kernel will pass the packet to the loaded
logging backend to log the packet. This is usually used in combination
with nfnetlink_log as logging backend, which will multicast the packet
through a netlink socket to the specified multicast group. One or more
userspace processes may subscribe to the group to receive the packets.
--nflog-group nlgroup
The netlink group (0 - 2^16-1) to which packets are (only applicable for nfnetlink_log). The default value is 0.
So it would look something like this (take the net_cls.classid you created, and decide on a number for the nflog group):
$ iptables -I INPUT 1 -m cgroup --cgroup 0x100001 -j NFLOG --nflog-group 123
$ iptables -I OUTPUT 1 -m cgroup --cgroup 0x100001 -j NFLOG --nflog-group 123
This rule would mark all the incoming/outgoing packets in the cgroup with nflog group number 123.
Run tcpdump
You can use the nflog interface. Not all versions of tcpdump support this. You can check if your versions does:
$ tcpdump --list-interfaces |grep nflog
5.nflog (Linux netfilter log (NFLOG) interface) [none]
If it does, you can listen on this interface to all the packets sent/received by the processes in the cgroup you created:
$ tcpdump -v -i nflog:123 |
How to mark all packets (inbound and outbound) for specific program/ cmd in Linux using iptables or any other firewall/ tool
Given that --cmd-owner option was deprecated ref:http://www.spinics.net/lists/netfilter/msg49716.html.
For example, how to mark all Firefox's packets, knowing that Firefox can spawn processes so the PID option isn't feasible.
| How to mark packets by program |
The subject you're talking about is the weak/strong host model, and Linux uses the weak one by default.
Is there a way to check if a packet intended to interface eth1 did reach that interface?
I'd say a firewall rule would let you put such a constraint in place.
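For instance, a rule along these lines (using the addresses from your diagram) would log any packet addressed to eth1's IP that arrived on a different interface, which is exactly the weak-host behaviour you want to catch:
iptables -A INPUT -d 10.0.1.1 ! -i eth1 -j LOG --log-prefix "not-via-eth1: "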
Also, at this point you're pretty close to practice of using loopback interfaces for services — it's mainly used coupled with dynamic routing but the main idea is pretty simple: it shouldn't matter what was the interface the request came from, it only matters that it found its way at all and reply can be sent as well. Loopback interfaces are never down unless manually told so — "real" interfaces OTOH can change theirs state due link loss and so on.
|
Consider the following topology:I am sending ICMP (ping) packets from host B to 10.0.1.1. They reach the target and the target answers with a reply. The connectivity works fine.
When running, on host A
tcpdump -i eth1 icmp: I do not see any packets
tcpdump -i eth2 icmp: I see the packetsIs there a way to check if a packet intended to interface eth1 did reach that interface?
I vaguely remember reading one day that the kernel was handing the reply at eth2 level (which is the first interface the packet reaches) so that could explain why monitoring eth1 does not yield any results.
As for why I would need to check such specific case: I have cases where a packet arrives on an interface of a host (the first one on its way) but does not seem to reach the intended interface (the actual application is not brought up). Such a test would ensure that the firewalling/forwarding is OK and that the problem is elsewhere (in the configuration of the app, a bug in the app, etc)
| How to check if a packet reached an interface in a multi-interface context? |
This is not possible with tcpdump alone. Why? The communication protocol, HTTP/HTTPS, is the key here. tcpdump is "only" designed to capture at the lower protocol layers. There is no tcpdump filter that analyzes traffic in flight and expands the filter accordingly, yet that is what would be necessary to meet your requirements.
You could use a Web-Proxy Cache or some other "in the middle" software.
It is also easy to implement with Perl or Python scripts if HTTP is the target; it is a little more work, especially with certificates, in the case of HTTPS.
 +-------------+      +------------+      +------------+
 | web-Browser |------| HTTP/HTTPS |------| HTTP/HTTPS |
 +-------------+      |   Proxy    |      |   Server   |
                      +------------+      +------------+
You can use iptables on Linux to redirect all traffic destined for port 80/443 to the web proxy / "man in the middle" program or script, or use the proxy parameter in your web browser.
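As an illustration (the proxy port is an assumption, e.g. mitmproxy or similar listening on 8080), redirecting locally generated HTTP traffic into the proxy could look like:
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8080
For traffic from other machines being forwarded through the box, the equivalent rule would go into the PREROUTING chain instead.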
|
I was wondering if it's possible to capture all network traffic coming from a single website using tcpdump.
Thanks.
| How to capture traffic from an entire website (including external servers) using tcpdump |
Anyplace we can get hold of how wireshark does it?Here. Start with some of the README files in the doc directory, such as README.design (doesn't say much, but gives a quick overall view of Wireshark) and README.dissector (discusses how dissectors are written).
Bear in mind that Wireshark has been developed over the course of fifteen years and "CONTAINS OVER TWO MILLION LINES OF SOURCE CODE", to quote the README.packaging file. If you want to be able to do all that Wireshark does, that's going to take a lot of work.
|
Hi all we have a system which could capture packets accordingly. The only problem now we need some codes on how to interpret the packet just like how wireshark does it so well. Anyplace we can get hold of how wireshark does it?
| Wireshark packet dissection codes? |
Usually, your ISP gives you a single IP address, and your home router does network address translation (NAT) to pretend to your ISP that all the devices in your home network are just a single device with the same address as the router itself.
Because of this, if anyone wants to contact your home network from the outside, the router has to "forward a port" to the device in your home network where the service is running, because the only IP address visible from the outside is the router's IP address.
If you don't want that, the only alternative is either to run your service directly on the router, or disconnect the router from your ISP and instead connect the computer where the services run directly (if it has the hardware to do so).
There is no other way, no matter what protocol you use.
You can also pay your ISP to give your more than one IPv4 address (this will be expensive). Or, if your ISP gives you an IPv6 global prefix, then each of the devices in your home network will have its own IPv6 address, which is reachable from the outside. So there's no NAT, no port forwarding is necessary, but it will only work for IPv6.
OTOH, setting up port forwarding isn't exactly black magic, so just do it.
Edit
When you "visit a web site", i.e. your local http client will contact the remote http server, the NAT in the router will rewrite the source address of the outgoing packets from the address of your private device to the ISP's IP address. It will also remember that a local device opened an outgoing connection, and when incoming packets arrive that belong to that connection, it will conversely rewrite the destination address and send them to the local device. That's what NAT is.
So for outgoing connections from your local device, you don't need port forwarding. NAT handles this for you. For incoming connections, e.g. if you run a http server, you need to tell your router "please forward port 80 to the following local device".How can i open the port without accesing router settings?For incoming connections, you can't. As I already tried to explain.
Edit
If you google example programs for UDP, there'll always be a server listening on a port, and a client not listening, but contacting the server (and after that both server and client can exchange packets in any direction). So "how do you receive info without listening to a port" is that you write the client, not the server, and then the server can send data to the client, so the client "receives info".
You can't run the server behind NAT without port forwarding. Period. No matter how often you ask. Not for TCP, not for UDP, not by "using a low level protocol".
If you don't want to enable port forwarding in the GUI of the router: Many routers allow you to set port forwarding, sometimes even temporary port forwarding, via UPnP. Your router may or may not have this feature.
There are also other tricks, like first contacting a general kind of server, which then will establish a connection with some other peer behind NAT (see e.g. the STUN protocol).
But if you are behind NAT, you first have to contact a server on the "real" internet. This server will be listening on a port, your client won't. Or, if you have a server listening to a port, you need to set up port forwarding. There is no other choice. Live with it.
|
I am doing UDP socket programming in C. In order to listen to a port, I need to forward ports in my router. My question is how to avoid doing that and still being able to communicate over the internet, if not possible with sockets, what is the lowest level possible? In other words, every device can listen to an http server, so is http the only unlocked way to go?
| How to avoid forwarding ports? |
This is a translation of my comment into a response.
The rules should be adjusted to rely on an outbound MASQUERADE of source port to handle return packets. Thus, outgoing packets should be DNAT-ed with the rule you have, and MASQUERADE-ed with a rule:
iptables -t nat -A POSTROUTING -p tcp --destination-port $PROXY_PORT -j MASQUERADE --to-ports $TCP_PORT
Use that rule instead of your SNAT rule.
Incoming packets relating to those that have been MASQUERADE-ed will get their destination ports duly return-mapped.
(corrected as per comment)
|
problem:
I have a TCP server and client that each listen on port 9000. I have the server and client deployed on two different hosts where traffic can only pass through port 80 between them. I want the source port (9000) to be maintained when packets are sent between them (see the SNAT rule below) so that the PREROUTING rule can identify the packets with --source-port.
approach:
I'm trying to setup iptables rules such that the server routes its traffic from port 9000 to port 80, and a complimentary rule for the client where the incoming traffic on port 80 is routed to 9000 locally.
I've come up with this script to apply the rules. I've tried this with a few variations and packets seem to get accepted by the server host, but not accepted by the PREROUTING (inbound) rule.
#!/bin/bash

apply_inbound_rules() {
# Allow incoming server traffic from port 80 to the TCP client
sudo iptables -t nat \
-I PREROUTING \
-p tcp --destination-port $PROXY_PORT \
-j REDIRECT --to-port $TCP_PORT
}

apply_outbound_rules() {
# Setup outgoing packets created by the TCP server
# to route through local port 80
# and received on port 80 on the client host
sudo iptables -t nat \
-I OUTPUT \
-p tcp --destination-port $TCP_PORT \
-j DNAT --to-destination :$PROXY_PORT # To maintain the TCP_PORT
sudo iptables -t nat \
-I POSTROUTING \
-p tcp --destination-port $PROXY_PORT \
-j SNAT --to-source :$TCP_PORT
}

apply_inbound_rules
apply_outbound_rulesDoes anyone have experience creating rules like this? It seems like it would be a common problem but I can't seem to figure it out.
| iptables: transparent tcp traffic proxy |
TCP is a stateful protocol, UDP is stateless, so you cannot use ctstate with it.
Either you let traffic for a particular port for UDP or you don't.
Also --udp-flags FIN,SYN,RST,ACK SYN is just pure nonsense.
In short familiarize yourself with TCP/IP and UDP a bit before rushing to set up iptables.
|
On a random day I was googling iptables rules to harden my desktop, and came across this post[1]. At some point the guide mentions blocking invalid TCP packets using tcp-modules with these rules;
iptables -A INPUT -p tcp -m tcp --tcp-flags ALL FIN,PSH,URG -j DROP
iptables -A INPUT -p tcp -m tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
iptables -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -j DROP
I pressed return on the above commands and the rules were applied successfully. Then I tried replacing the tcp portions on each of the commands with udp for eg, in case of the 3rd command I'd do,
iptables -A INPUT -p udp -m conntrack --ctstate NEW -m udp ! --udp-flags FIN,SYN,RST,ACK SYN -j DROP
Which returned me an error saying these rules are not valid for udp packets. I am on a Debian OS, Kernel version 4.9.x
The article I was reading onlinehttps://www.booleanworld.com/depth-guide-iptables-linux-firewall/ | Why is it that TCP packets can be modified to block invalid packets, but not UDP packets |
Stated in the ncat(1) man page:
-m numconns, --max-conns numconns (Specify maximum number of
connections)
The maximum number of simultaneous connections accepted by an
Ncat instance. 100 is the default (60 on Windows).
100 is the default maximum number of connections. It can be modified with the -m flag.
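A hedged example, reusing the listener from the question with a larger limit (the value 1000 is arbitrary):
ncat -klup 1234 -m 1000 --sh-exec "cat > /proc/$$/fd/1"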
|
I'm continuously sending packets to a UDP server, one every second. To listen for UDP packets:
ncat -klup 1234 --sh-exec "cat > /proc/$$/fd/1"
However, after printing 100 packets, nothing else prints. With Wireshark I can see that packets are still being sent, but on the server side nothing more shows up.
$ ncat -klup 1234 --sh-exec "cat > /proc/$$/fd/1"
Hello Server!
1 Send
2 Send
3 Send
4 Send
5 Send
6 Send
7 Send
8 Send
9 Send
10 Send
11 Send
12 Send
13 Send
14 Send
15 Send
16 Send
17 Send
18 Send
19 Send
20 Send
21 Send
22 Send
23 Send
24 Send
25 Send
26 Send
27 Send
28 Send
29 Send
30 Send
31 Send
32 Send
33 Send
34 Send
35 Send
36 Send
38 Send
39 Send
40 Send
41 Send
42 Send
43 Send
44 Send
45 Send
46 Send
47 Send
48 Send
49 Send
50 Send
51 Send
52 Send
53 Send
54 Send
55 Send
56 Send
57 Send
58 Send
59 Send
60 Send
61 Send
62 Send
63 Send
64 Send
65 Send
66 Send
67 Send
68 Send
69 Send
70 Send
71 Send
72 Send
73 Send
74 Send
75 Send
76 Send
77 Send
78 Send
79 Send
80 Send
81 Send
82 Send
83 Send
84 Send
85 Send
86 Send
87 Send
88 Send
89 Send
90 Send
91 Send
92 Send
93 Send
94 Send
95 Send
96 Send
98 Send
99 Send
100 Send
101 Send
Regardless of how many times I try, it always stops after 100 packets.
| ncat stops listening after 100 UDP packets |
Write the captured packet data into a file with the -w option and read it into Wireshark, or capture directly in Wireshark. Then select the Request item of the HTTP submenu in the Statistics menu.
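If the goal is just a flat list of the domains that were queried, a hedged sketch using tshark on a saved capture (field and filter names as in current tshark; very old versions use -R instead of -Y):
tshark -r trace1.pcap -Y dns -T fields -e dns.qry.name | sort -u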
|
I have been trying to figure out how many sites Firefox connects with, and for that I have been using Wireshark. What I have done is make a new profile, and whenever I run Firefox it is with
$ firefox --ProfileManager --safe-mode
Obviously, before this command is run I run -
$ script
$ tshark -V -i wlan0
I set it up following the instructions from https://superuser.com/questions/319865/how-to-set-up-wireshark-to-run-without-root-on-debian
and had added myself to wireshark group.
So, what I did was run these three commands one after other -
$ script
$ tshark -V -i wlan0
and finally -
$ firefox --ProfileManager --safe-mode
The new tab/window opens and I'm able to capture the packets. Immediately after, I shut down the browser.
Now I need to grep through the packets: around 80-odd packets, which came in like -
Queries
self-repair.mozilla.org: type A, class IN
which seems to be answered by the amazon domain -
Answers
self-repair.mozilla.org: type CNAME, class IN, cname self-repair.r53-2.services.mozilla.com
Name: self-repair.mozilla.org
Type: CNAME (Canonical NAME for an alias) (5)
Class: IN (0x0001)
Time to live: 57
Data length: 40
CNAME: self-repair.r53-2.services.mozilla.com
self-repair.r53-2.services.mozilla.com: type CNAME, class IN, cname shield-normandy-elb-prod-2099053585.us-west-2.elb.amazonaws.com
Is there a way to grep through the contents so that a list of the domains which were touched can be produced, instead of trawling through them manually?
Update - Did the -
$ tshark -V -i wlan0 -w trace1.pcap | how to get list of sites/domains linked to in wireshark |
Monitoring Internet Traffic in Switched Networks
These days the majority of local networks are switch-based. Unlike a hub, a switch retransmits a received packet only out of the one port the recipient computer is connected to. Switches maintain a table of MAC addresses and the port associated with each address (the Content Addressable Memory table). When a packet arrives, the switch looks up the recipient's MAC address in that table and forwards the packet out of the matching port. Because of this, traffic monitoring with LanDetective (or any other sniffer, Wireshark included) may be limited: your adapter will only accept packets that are addressed to it explicitly, because the switch keeps other packets out of your network segment. Note that switches were created not to cut off traffic-monitoring opportunities, but to minimize network load and maximize bandwidth.
There are, however, managed switches on the market (and they are widespread) which, on top of their common features, have one designed to simplify the operation of traffic-analysis and monitoring systems: they can be configured so that all packets passing through the switch are replicated to a designated port. Different manufacturers call this function by different names: Port Mirroring, Switched Port Analyzer (SPAN), or Roving Analysis Port (RAP). If you are the happy owner of a managed switch, check its specification to find out whether this feature is supported and how to activate it. Once Port Mirroring is active, connect your capture machine to the specified switch port and capture in promiscuous mode.
If you have a Linksys router, you can use Port Mirroring in the following way to get traffic onto another switch interface: Linksys Router Port Mirroring.
Dirty way to capture traffic
If your switch is unmanaged or you don't have access to it, you can connect the IP phone directly to your laptop/desktop and use the laptop's wireless interface to connect to your wireless router. You also need to configure routing on the laptop to pass the IP phone's traffic through it; that way you can run Wireshark on the laptop and capture the packets.
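A minimal sketch of that "dirty" setup, assuming the phone is plugged into eth0 and SIP is on its default port 5060 (both assumptions; adjust to your interfaces and ports):
# let the laptop forward the phone's traffic
sysctl -w net.ipv4.ip_forward=1
# capture the SIP signalling to a file you can open in Wireshark
tcpdump -i eth0 -w sip-debug.pcap udp port 5060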
|
I have an IP phone on my home network that I am trying to call using SIP from outside of my home network. I have port forwarding setup, but the phone is responding with a SIP 404 message. All the documentation I've seen says to look at the incoming traffic for this connection via Wireshark. How do I do this?
My computer and this phone are both plugged directly into the router. Is there any way I can listen to the traffic on my computer with the current setup? I'm not sure if 'switched network' is the correct term, but I remember that on older networks that used hubs instead of switches, all packets were broadcast to all devices and listening to traffic not meant for you was easier.
I appreciate the privacy that switching provides, but what about in this case where I need to do debugging and have control over the network? Do I need to connect both the computer and the phone to a hub and then connect that to my router so the traffic is broadcast to both of their network cards? I don't have a hub at the moment so I have not tried this yet.
What are the other ways to capture packets meant for other devices in this scenario?
| How can I debug traffic on a switched network with Wireshark? |
It’s only security, performance isn’t affected (at least, when perf isn’t running; and even then, perf’s impact is supposed to be minimal). Changing perf_event_paranoid doesn’t change the performance characteristics of the system, whether perf is running or not.
There’s a detailed discussion of the security implications of perf in the kernel documentation. The recommendation there is to set up a group for users with access to perf, and set perf up with the appropriate capabilities for that group, instead of changing perf_event_paranoid:
cd /usr/bin
groupadd perf_users
chgrp perf_users perf
chmod o-rwx perf
setcap cap_sys_admin,cap_sys_ptrace,cap_syslog=ep perf
and add yourself to the perf_users group.
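For example (a hedged sketch; your_user is a placeholder, and the new group membership only takes effect at the next login):
usermod -aG perf_users your_user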
Version 5.8 of the kernel added a dedicated capability, so instead of granting all of cap_sys_admin, the last command can be reduced to
setcap cap_perfmon,cap_sys_ptrace,cap_syslog=ep perf |
I'd like to use the perf utility to gather measurements for my program. It runs on a shared cluster machine with Debian 9 where by default the /proc/sys/kernel/perf_event_paranoid is set to 3, therefore disallowing me to gather measurements. Before changing it, I'd like to know what the implications of this are.
Is it just security, in that it would allow users to profile things run by other users and thereby gain insights? We do not care about this, as it is an inner circle of users anyway. Or is it perhaps performance, which will impact everyone else as well?
| Security implications of changing “perf_event_paranoid” |
About 1.2 microseconds, which is roughly a thousand cycles.
https://eli.thegreenplace.net/2018/measuring-context-switching-and-memory-overheads-for-linux-threads/
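As for counting the switches with perf, a minimal hedged example (<PID> is a placeholder for the process you care about):
perf stat -e context-switches -p <PID> -- sleep 10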
|
I'm interested in getting the number of context switches two processes in a KVM VM take on a single CPU over some period of time.
Earlier I have used perf; is this best practice?
And how much time is used on a context switch per CPU?
| How long time does a context switch take in Linux (ubuntu 18.04) |
I know this question is pretty old (Feb 16) but here a response in case it helps someone else.
The problem is that you've entered the '-F 999' indicating that you want to sample the events at a frequency of 999 times a second. For 'trace' events, you don't generally want to do sampling. For instance, when I select sched:sched_switch, I want to see every context switch.
If you enter -F 999 then you will get a sampling of the context switches...
If you look at the output of your 'perf record' cmd with something like:
perf script --verbose -I --header -i perf.dat -F comm,pid,tid,cpu,time,period,event,trace,ip,sym,dso > perf.txt
then you would see that the 'period' (the number between the timestamp and the event name) would not (usually) be == 1.
If you use a 'perf record' cmd like below, you'll see a period of 1 in the 'perf script' output like:
Binder:695_5 695/2077 [000] 16231.700440: 1 sched:sched_switch: prev_comm=Binder:695_5 prev_pid=2077 prev_prio=120 prev_state=S ==> next_comm=kworker/u16:17 next_pid=7665 next_prio=120
A long-winded explanation, but basically: don't do that (where 'that' is '-F 999').
If you just do something like:
perf record -a -g -e sched:sched_switch -e sched:sched_blocked_reason -e sched:sched_stat_sleep -e sched:sched_stat_wait sleep 5
then the output would show every context switch with the call stack for each event.
And you might need to do:
echo 1 > /proc/sys/kernel/sched_schedstats
to get the sched_stat events.
|
I've been trying to enable context switch events on perf and use perf script's dump from perf.data to investigate thread blocked time.
So far the only two recording options that seem to be helpful are context switch and all the sched events.
Here's the command I'm running on perf:
perf record -g -a -F 999 -e cpu-clock,sched:sched_stat_sleep,sched:sched_switch,sched:sched_process_exit,context-switches
However, both seem to be incomplete; usually a sched_switch event looks something like this:
comm1 0/0 [000] 0.0: 1 sched:sched_switch: prev_comm=comm1 prev_pid=0 prev_prio=0 prev_state=S ==> next_comm=comm2 next_pid=1 next_prio=1
stacktrace...
From my understanding, the prev_comm is always the thread that is going to be blocked, and the next_comm is the thread that is going to be unblocked. Is this a correct assumption? If it is, I can't seem to get complete data on the events since there are many threads that get blocked on prev_comm, but never seem to get a corresponding next_comm.
Enabling context switches doesn't seem to do much since there is no information on the thread being blocked or unblocked (unless I'm completely missing something, in which I would appreciate an explanation on how they work).
Here's how a typical context switch event looks like:
comm1 0/0 [000] 0.0: 1 context-switch:
stacktrace...
tl;dr, how can I do blocked-time investigations on Linux through perf script's output, and what options need to be enabled on perf record?
Thanks.
| Understanding Linux Perf sched-switch and context-switches |
Older versions of perf ~2.6.x
I'm using perf version: 2.6.35.14-106.
Capture all the output
I don't have the -x switch on my Fedora 14 system so I'm not sure if that's your actual problem or not. I'll investigate on a newer Ubuntu 12.10 system later on but this worked for me:
$ (perf stat -ecache-misses ls ) > stat.log 2>&1
$
$ more stat.log
maccheck.txt
sample.txt
stat.log

 Performance counter stats for 'ls':

          13209 cache-misses

    0.018231264 seconds time elapsed

I only want perf's output
You could try this, the output from ls will get redirected to /dev/null. The output form perf (both STDERR and STDOUT) goes to the file, stat.log.
$ (perf stat -ecache-misses ls > /dev/null ) > stat.log 2>&1
[saml@grinchy 89576]$ more stat.log

 Performance counter stats for 'ls':

          12949 cache-misses

    0.022831281 seconds time elapsed

Newer versions of perf 3.x+
I'm using perf version: 3.5.7
Capturing only perf's output
With the newer versions of perf there are dedicated options for controlling where messages get sent. You have the choice of either sending them to a file via the -o|--output option. Simply give either of those switches a filename to capture the output.
-o file, --output file
Print the output into the designated file.
The alternative is to redirect the output to an alternate file descriptor, 3 for example. All you need to do is open this alternate file descriptor before streaming to it.
--log-fd
Log output to fd, instead of stderr. Complementary to --output, and
mutually exclusive with it. --append may be used here. Examples:
3>results perf stat --log-fd 3 — $cmd
-or-
3>>results perf stat --log-fd 3 --append — $cmd
So if we wanted to collect the perf output for the ls command, we could use this command:
$ 3>results.log perf stat --log-fd 3 ls > /dev/null
$
$ more results.log

 Performance counter stats for 'ls':

       2.498964 task-clock              #    0.806 CPUs utilized
0 context-switches # 0.000 K/sec
0 CPU-migrations # 0.000 K/sec
258 page-faults # 0.103 M/sec
880,752 cycles # 0.352 GHz
597,809 stalled-cycles-frontend # 67.87% frontend cycles idle
652,087 stalled-cycles-backend # 74.04% backend cycles idle
1,261,424 instructions # 1.43 insns per cycle
# 0.52 stalled cycles per insn [55.31%]
<not counted> branches
<not counted> branch-misses 0.003102139 seconds time elapsedIf you use the --append version then the contents of multiple commands will be appended to the same log file, results.log in our case.
Installing perf
Installation is pretty trivial:
Fedora
$ yum install perf
Ubuntu/Debian
$ apt-get install linux-tools-common linux-tools
References
System wide profiling
Tracing on Linux
perf: Linux profiling with performance counters
Counting with perf stat |
What stream does the perf command use!? I've been trying to capture it with
(perf stat -x, -ecache-misses ./a.out>/dev/null) 2> results
following https://stackoverflow.com/q/13232889/50305, but to no avail. Why can I not capture the output... it's like letting some fish get away!!
| What stream does perf use? |
Get the sources of your 3.11.10-03111002 kernel
Jump to it: cd ./linux-3.11.10-03111002/tools/perf
Type make and hit enter.
To run, type ./perf
that's it.
For other options type make help
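Put together, a hedged sketch of the whole sequence (assuming the 3.11.10 sources were unpacked into ./linux-3.11.10-03111002; the build prints which optional libraries it found, so a few -dev packages may be needed for full functionality):
cd ./linux-3.11.10-03111002/tools/perf
make
./perf --version    # should now report 3.11.10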
|
I'm trying to run perf on my Ubuntu Precise box, which I recently upgraded to kernel 3.11.10-03111002 (manual install). The problem is that perf and kernel versions must match, and the requested version is not available in the repositories (linux-tools-VERSION package). I can only install up to v3.8.0.
What can I do? Kernel upgrade/downgrade is an option, but I'd rather get the correct perf version.
| Getting correct perf version |
Your processor does not support counting so many events at once, nor such frequent switching between them, I guess.
You can see this in the last column of your last example, where the counters are multiplexed (each counted over only about 33% of the time). If the task is small enough (or spread over more cores?), some counters are not counted at all, because all of the time was spent on the others. In your first example, only the cycles managed to get counted in the time available.
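A hedged workaround sketch: request fewer events per run, so each one can stay on a hardware counter for the whole duration instead of being multiplexed:
perf stat -e cycles,instructions ./sample.out
perf stat -e cache-references,cache-misses ./sample.out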
|
perf stat -d ./sample.out
Output is:
 Performance counter stats for './sample.out':

       0.586266 task-clock (msec)       #    0.007 CPUs utilized
2 context-switches # 0.003 M/sec
1 cpu-migrations # 0.002 M/sec
116 page-faults # 0.198 M/sec
7,35,790 cycles # 1.255 GHz [81.06%]
<not counted> stalled-cycles-frontend
<not supported> stalled-cycles-backend
<not counted> instructions
<not counted> branches
<not counted> branch-misses
<not supported> L1-dcache-loads:HG
<not counted> L1-dcache-load-misses:HG
<not counted> LLC-loads:HG
  <not supported> LLC-load-misses:HG

    0.088013919 seconds time elapsed
I read why <not supported> will show up from <not supported>. But I am getting <not counted> for even basic counters like instructions, branches etc. Can anyone suggest how to make it work?
Interesting thing is:
sudo perf stat sleep 3
gives output:
 Performance counter stats for 'sleep 3':

       0.598484 task-clock (msec)       #    0.000 CPUs utilized
2 context-switches # 0.003 M/sec
0 cpu-migrations # 0.000 K/sec
181 page-faults # 0.302 M/sec
<not counted> cycles
<not counted> stalled-cycles-frontend
<not supported> stalled-cycles-backend
<not counted> instructions
<not counted> branches
  <not counted> branch-misses
sudo perf stat -C 1 sleep 3

 Performance counter stats for 'CPU(s) 1':

    3002.640578 task-clock (msec)       #    1.001 CPUs utilized          [100.00%]
425 context-switches # 0.142 K/sec [100.00%]
9 cpu-migrations # 0.003 K/sec [100.00%]
5 page-faults # 0.002 K/sec
7,82,97,019 cycles # 0.026 GHz [33.32%]
9,38,21,585 stalled-cycles-frontend # 119.83% frontend cycles idle [33.32%]
<not supported> stalled-cycles-backend
3,09,81,643 instructions # 0.40 insns per cycle
# 3.03 stalled cycles per insn [33.32%]
70,15,390 branches # 2.336 M/sec [33.32%]
      6,38,644 branch-misses            #    9.10% of all branches        [33.32%]

   3.001075650 seconds time elapsed
Why is this unexpectedly working??
| How to resolve <not counted> problem in perf tool? |
I was experiencing this too, and was able to get it working by building and installing the latest version of libcap from source. This may not be the best solution, but it worked for me.
libcap-2.53
$ git clone https://kernel.googlesource.com/pub/scm/linux/kernel/git/morgan/libcap
$ cd libcap
$ git checkout libcap-2.53
$ make
$ make test
$ make sudotest
$ sudo make install
I ran the tests to confirm everything was working before install.
Once it had been installed I was able to run the commands listed in the perf-security doc as expected.
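A hedged check after installing: the setcap call from the perf-security doc should now succeed, and getcap should show the capabilities on the binary (exact output format varies between libcap versions):
setcap cap_sys_admin,cap_sys_ptrace,cap_syslog=ep perf
getcap perf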
|
I'd like to use the perf utility. I was following instructions to set up a privileged group of users who are permitted to execute performance monitoring and observability without limits (as instructed here: https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html). I added the group and limited access to users not in the group. I started having problems when assigning capabilities to the perf tool:
setcap cap_sys_admin,cap_sys_ptrace,cap_syslog=ep perfI get an invalid arguments error saying
fatal error: Invalid argument
usage: setcap [-q] [-v] [-n <rootid>] (-r|-|<caps>) <filename> [ ... (-r|-|<capsN>) <filenameN> ]
Note <filename> must be a regular (non-symlink) file.
But running stat perf gives me this
File: ./perf
Size: 1622 Blocks: 8 IO Block: 4096 regular file
Device: 10307h/66311d Inode: 35260925 Links: 1
Access: (0750/-rwxr-x---) Uid: ( 0/ root) Gid: ( 1001/perf_users)
Access: 2021-12-03 13:08:48.923220351 +0100
Modify: 2021-11-05 17:02:56.000000000 +0100
Change: 2021-12-03 12:31:49.451991980 +0100
Birth: -which says the file is a regular file. What could be the problem? How can I set the capabilities for the Perf tool?
Linux distribution: Ubuntu 20.04
EDIT:
Last 20 output lines of strace setcap cap_sys_admin,cap_sys_ptrace,cap_syslog=ep perf:
munmap(0x7f825054c000, 90581) = 0
prctl(PR_CAPBSET_READ, CAP_MAC_OVERRIDE) = 1
prctl(PR_CAPBSET_READ, 0x30 /* CAP_??? */) = -1 EINVAL (Invalid argument)
prctl(PR_CAPBSET_READ, 0x28 /* CAP_??? */) = 1
prctl(PR_CAPBSET_READ, 0x2c /* CAP_??? */) = -1 EINVAL (Invalid argument)
prctl(PR_CAPBSET_READ, 0x2a /* CAP_??? */) = -1 EINVAL (Invalid argument)
prctl(PR_CAPBSET_READ, 0x29 /* CAP_??? */) = -1 EINVAL (Invalid argument)
brk(NULL) = 0x55de3e858000
brk(0x55de3e879000) = 0x55de3e879000
capget({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, NULL) = 0
capget({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, {effective=0, permitted=0, inheritable=0}) = 0
capget({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, NULL) = 0
capset({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, {effective=1<<CAP_SETFCAP, permitted=0, inheritable=0}) = -1 EPERM (Operation not permitted)
dup(2) = 3
fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x1), ...}) = 0
write(3, "unable to set CAP_SETFCAP effect"..., 72unable to set CAP_SETFCAP effective capability: Operation not permitted
) = 72
close(3) = 0
exit_group(1) = ?
+++ exited with 1 +++ | how to set capabilities (setcap) on perf |
perf was silently failing to count context switches because you were not root.
(Linux has 64k pipe buffers. In either case, you can see very close to 2 context switches per 64k transferred. Not exactly sure how that works, but I suspect it's only counting context switches away from dd, either to the other dd, or to the idle task for that cpu).
$ sudo perf stat taskset 0x1 sh -c 'dd bs=1M </dev/zero|dd bs=1M >/dev/null'
^C14508+0 records in
14507+0 records out
15211692032 bytes (15 GB, 14 GiB) copied, 3.87098 s, 3.9 GB/s
14508+0 records in
14508+0 records out
15212740608 bytes (15 GB, 14 GiB) copied, 3.87044 s, 3.9 GB/s
taskset: Interrupt

 Performance counter stats for 'taskset 0x1 sh -c dd bs=1M </dev/zero|dd bs=1M >/dev/null':

    3872.597645 task-clock (msec)       #    1.000 CPUs utilized
464,325 context-switches # 0.120 M/sec
0 cpu-migrations # 0.000 K/sec
928 page-faults # 0.240 K/sec
11,099,016,844 cycles # 2.866 GHz
13,765,220,898 instructions # 1.24 insn per cycle
3,053,464,009 branches # 788.480 M/sec
    15,462,959 branch-misses            #    0.51% of all branches

   3.874121023 seconds time elapsed

$ echo $((15212740608 / 464325))
32763

$ sudo perf stat sh -c 'dd bs=1M </dev/zero|dd bs=1M >/dev/null'
^C7031+0 records in
7031+0 records out
7032+0 records in
7031+0 records out
7372537856 bytes (7.4 GB, 6.9 GiB) copied, 4.27436 s, 1.7 GB/s
7372537856 bytes (7.4 GB, 6.9 GiB) copied, 4.27414 s, 1.7 GB/s
sh: Interrupt

 Performance counter stats for 'sh -c dd bs=1M </dev/zero|dd bs=1M >/dev/null':

    3736.056509 task-clock (msec)       #    0.873 CPUs utilized
218,047 context-switches # 0.058 M/sec
206 cpu-migrations # 0.055 K/sec
877 page-faults # 0.235 K/sec
8,328,413,541 cycles # 2.229 GHz
7,617,859,285 instructions # 0.91 insn per cycle
1,671,904,009 branches # 447.505 M/sec
    13,827,669 branch-misses            #    0.83% of all branches

   4.277591869 seconds time elapsed

$ echo $((7372537856 / 218047))
33811 |
I ran a shell pipeline under perf stat, using taskset 0x1 to pin the whole pipeline to a single CPU. I know taskset 0x1 had an effect, because it more than doubled the throughput of the pipeline. However, perf stat shows 0 context switches between the different processes of the pipeline.
So what exactly does perf stat mean by context switches?
I think I was interested in the number of context switches to/from the individual tasks in the pipeline. Is there a better way to measure that?
This was in the context of comparing dd bs=1M </dev/zero, to dd bs=1M </dev/zero | dd bs=1M >/dev/null. If I can measure context switches as desired, I assume that it would be useful in quantifying why the first version is several times more "efficient" than the second.
$ rpm -q perf
perf-4.15.0-300.fc27.x86_64
$ uname -r
4.15.17-300.fc27.x86_64
$ perf stat taskset 0x1 sh -c 'dd bs=1M </dev/zero | dd bs=1M >/dev/null'
^C18366+0 records in
18366+0 records out
19258146816 bytes (19 GB, 18 GiB) copied, 5.0566 s, 3.8 GB/s

 Performance counter stats for 'taskset 0x1 sh -c dd if=/dev/zero bs=1M | dd bs=1M of=/dev/null':

    5059.273255 task-clock:u (msec)     #    1.000 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
414 page-faults:u # 0.082 K/sec
36,915,934 cycles:u # 0.007 GHz
9,511,905 instructions:u # 0.26 insn per cycle
2,480,746 branches:u # 0.490 M/sec
       188,295 branch-misses:u          #    7.59% of all branches

   5.061473119 seconds time elapsed

$ perf stat sh -c 'dd bs=1M </dev/zero | dd bs=1M >/dev/null'
^C6637+0 records in
6636+0 records out
6958350336 bytes (7.0 GB, 6.5 GiB) copied, 4.04907 s, 1.7 GB/s
6636+0 records in
6636+0 records out
6958350336 bytes (7.0 GB, 6.5 GiB) copied, 4.0492 s, 1.7 GB/s
sh: Interrupt

 Performance counter stats for 'sh -c dd if=/dev/zero bs=1M | dd bs=1M of=/dev/null':

    3560.269345 task-clock:u (msec)     #    0.878 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
355 page-faults:u # 0.100 K/sec
32,302,387 cycles:u # 0.009 GHz
4,823,855 instructions:u # 0.15 insn per cycle
1,167,126 branches:u # 0.328 M/sec
        88,982 branch-misses:u          #    7.62% of all branches

   4.052844128 seconds time elapsed
 | Why does `perf stat` show 0 context switches?
The source code for perf is included in the Linux kernel source tree under tools/perf.
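For an ARM cross-build, a hedged sketch (the arm-linux-gnueabi- prefix is an assumption; use your toolchain's prefix, and note that the exact variables honoured can vary between kernel versions):
cd linux/tools/perf
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-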
|
I am looking for the source package of the perf tool, which I want to compile for ARM
Linux. I have already set up the cross-compile toolchain.
I have compiled OProfile and got its source (oprofile-0.9.8.tar.bz2) from sourceforge.net.
Can anyone point me to the perf tool source?
| Where Do I get Source package for Perf tool |
First of all, check whether the processor even has the hardware counters. The Intel Haswell architecture stopped providing hardware counters in some recent processors (for some reason).
Second of all, I would check whether you can see hardware events through, for example, PAPI. The command papi_native_avail should list the native events, if Ubuntu provides recent enough databases.
The third possibility is that the events are there, but not supported by the old perf. Yes, Ubuntu 14.04 is two years old and the kernel/tools might not fully support current processors.
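Two hedged checks that can help narrow this down (output wording varies by kernel version):
perf list cache                 # hardware cache events this perf/kernel combination knows about
dmesg | grep -i 'perf\|pmu'     # which PMU driver, if any, the kernel initialised at boot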
|
I have a problem using Linux perf on a newly bought laptop: there are no hardware cache events in my perf list!!! Well, this is really important information that I wish to sample!! Here is my perf list:
List of pre-defined events (to be used in -e):
cpu-cycles OR cycles [Hardware event]
instructions [Hardware event]
cache-references [Hardware event]
cache-misses [Hardware event]
branch-instructions OR branches [Hardware event]
branch-misses [Hardware event]
bus-cycles [Hardware event]
ref-cycles [Hardware event]cpu-clock [Software event]
task-clock [Software event]
page-faults OR faults [Software event]
context-switches OR cs [Software event]
cpu-migrations OR migrations [Software event]
minor-faults [Software event]
major-faults [Software event]
alignment-faults [Software event]
emulation-faults [Software event]
dummy [Software event]branch-instructions OR cpu/branch-instructions/ [Kernel PMU event]
branch-misses OR cpu/branch-misses/ [Kernel PMU event]
bus-cycles OR cpu/bus-cycles/ [Kernel PMU event]
cache-misses OR cpu/cache-misses/ [Kernel PMU event]
cache-references OR cpu/cache-references/ [Kernel PMU event]
cpu-cycles OR cpu/cpu-cycles/ [Kernel PMU event]
instructions OR cpu/instructions/ [Kernel PMU event]
power/energy-cores/ [Kernel PMU event]
power/energy-gpu/ [Kernel PMU event]
power/energy-pkg/ [Kernel PMU event]
power/energy-ram/ [Kernel PMU event]
ref-cycles OR cpu/ref-cycles/ [Kernel PMU event]rNNN [Raw hardware event descriptor]
cpu/t1=v1[,t2=v2,t3 ...]/modifier [Raw hardware event descriptor]
(see 'man perf-list' on how to encode it)mem:<addr>[:access] [Hardware breakpoint][ Tracepoints not available: Permission denied ]while this is the perf list I used to see: https://perf.wiki.kernel.org/index.php/Tutorial#Events.
What I used to do is:
sudo perf stat -e L1-dcache-loads,L1-dcache-load-misses,LLC-loads,LLC-load-misses -a --append -o perf.txt [some command to run a file]
but this does not work on my new machine. How can I collect the data I want in this case?
I am using Ubuntu 14.04, with kernel <3.19.0-56>. Perf version <3.19.8-ckt15>.
Update
I installed the papi-tools library, and papi_native_avail gives me
Available native events and hardware information.PAPI Version : 5.3.0.0
Vendor string and code : GenuineIntel (1)
Model string and code : Intel(R) Core(TM) M-5Y71 CPU @ 1.20GHz (61)
CPU Revision : 4.000000
CPUID Info : Family: 6 Model: 61 Stepping: 4
CPU Max Megahertz : 2900
CPU Min Megahertz : 500
Hdw Threads per core : 1
Cores per Socket : 2
Sockets : 2
NUMA Nodes : 1
CPUs per Node : 4
Total CPUs : 4
Running in a VM : no
Number Hardware Counters : 0
Max Multiplex Counters : 64 | Why can't I find hardware cache event in my perf list? |
Your hunch is correct, perf trace record isn’t recording enough data; man perf-trace suggests that it takes care of it itself, but you need to record syscalls:
perf trace record -e 'raw_syscalls:*' ...
Then
perf trace -i perf.data
will work as you'd expect.
|
I can use perf trace as a low-overhead replacement for strace, e.g. to trace all Apache instances:
perf trace -p $(pidof apache2 | tr ' ' ',')To run the trace only for up to 10 seconds:
perf trace -p $(pidof apache2 | tr ' ' ',') -- sleep 10Some example output:
server ~ # perf trace -p $(pidof apache2 | tr ' ' ',') -- sleep 10 2>&1 | head
? ( ): apache2/8661 ... [continued]: poll()) = 0 Timeout
0.022 ( 0.005 ms): apache2/8661 close(fd: 28 ) = 0
0.066 ( 0.007 ms): apache2/8661 read(fd: 13<pipe:[3452760950]>, buf: 0x7ffe815038ff, count: 1 ) = -1 EAGAIN Resource temporarily unavailable
? ( ): apache2/26492 ... [continued]: semop()) = 0
0.088 ( ): apache2/8661 semop(semid: 557481986, tsops: 0x7f846e0cfd6c, nsops: 1 ) ...
? ( ): apache2/7580 ... [continued]: epoll_wait()) = 1
46.136 ( ): apache2/26492 epoll_wait(epfd: 27<anon_inode:[eventpoll]>, events: 0x7f846dd0c698, maxevents: 5, timeout: 10000) ...
46.081 ( 0.013 ms): apache2/7580 accept4(fd: 12<socket:[3452759675]>, upeer_sockaddr: 0x7ffe81503830, upeer_addrlen: 0x7ffe81503810, flags: 524288) = 28
46.100 ( 0.010 ms): apache2/7580 semop(semid: 557481986, tsops: 0x7f846e0cfd60, nsops: 1 ) = 0
46.116 ( 0.002 ms): apache2/7580 getsockname(fd: 28<socket:[3465711918]>, usockaddr: 0x7f846dd0a130, usockaddr_len: 0x7f846dd0a110) = 0This works as expected. Now I want to record these events in a file so that I can later analyze them in detail. I had expected that perf trace record does this, but I'm not even sure if this is recording properly:
server ~ # perf trace record -p $(pidof apache2 | tr ' ' ',') -- sleep 10
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0,312 MB perf.data (67 samples) ]perf trace ... | wc -l amounts to ~12000 lines, so why does record only record 67 samples?
I'm not even sure what the correct command is to read this file; the man page unfortunately doesn't say. I'd assumed it's perf trace -i perf.data, but that doesn't print anything:
server ~ # perf trace -i perf.data
server ~ #
perf script does print something, but it doesn't look like the perf trace output:
server ~ # perf script | head
apache2 10215 [002] 29556325.787512: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 20085 [006] 29556325.787597: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 20754 [000] 29556325.790512: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 7580 [007] 29556325.790757: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 8661 [001] 29556325.796044: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 10215 [006] 29556325.796845: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 20085 [004] 29556325.798481: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 10215 [004] 29556325.802922: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 20754 [001] 29556325.815999: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms])
apache2 20085 [003] 29556325.816025: 1 cycles:ppp: ffffffff83e5a704 native_write_msr+0x4 ([kernel.kallsyms]) | How do I use perf trace record? |
Unfortunately the Ubuntu mainline kernel builds don’t publish packages for the kernel-related tools (perf etc.).
You can try using the packaged versions of the tools; most of their functionality should work fine with a newer kernel (see Why 'perf' needs to match the exact running Linux kernel version?). You can also build them yourself, using the source tree matching your kernel package.
A better long-term solution would be to report your issues to the Linux Mint bug tracker, with details of the kernel which fixed them; that way the relevant fixes might get backported to the distribution kernels.
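If you want a perf that matches the mainline kernel exactly, a hedged sketch (the URL follows kernel.org's usual layout for 5.16.15; adjust the version to your kernel, and expect to install a few -dev packages for optional features):
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.16.15.tar.xz
tar xf linux-5.16.15.tar.xz
cd linux-5.16.15/tools/perf
make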
|
I'm currently running kernel 5.16.15-051615-generic - I've landed on it while diagnosing and trying to fix an audio issue. Everything works great now so I'm reluctant to go back to 5.4.0
I'd like to install and use perf, but the linux-tools-common package in apt shows version 5.4.0-107.121.
I've read that you need to use a version of the package compiled for your kernel version, and haven't found anything for 5.16. I guess that's to be expected - but how can I get a version for my kernel? Is there some in-development unofficial repository that has them? Or am I out of luck?
If it helps, I used Ubuntu Mainline Kernel Installer to install the new kernel.
| linux-tools-common when on a newer kernel version |
perf record -a --no-syscalls -e sched:sched_process_exec sh -c read | perf script
(sh -c read provides a way to stop this trace: just hit Enter. If I omit this command and try to interrupt the pipeline with Ctrl+C, my output is lost, probably because it also interrupts perf script.)
However this output is not "live", due to buffering. E.g. running the above command shows nothing, but hitting Enter causes it to stop and show a line for the exec() of sh. blktrace has special-case code to handle output to a pipe, including disabling the default C stdio buffering. Attempting to run perf record under the unbuffer command gives the error "incompatible file format"; I presume the error comes from perf script.
man perf-report
...
OPTIONS
-i, --input= Input file name. (default: perf.data unless stdin is a fifo) |
I have tried looking at the documentation for perf script, perf trace, and trace-cmd, including the list of commands in "SEE ALSO".
I can trace e.g. sched:sched_process_exec "live" using perf trace -a --no-syscalls -e sched:sched_process_exec. However, it only shows the process name (e.g. ls). It does not show the PID, unless the tracepoint has a specific parameter for it. perf script always shows the PID, but it does not show live output; it shows the contents of a perf.data file.
I don't need this to be a single command, like btrace is for blktrace. I am happy to use a pipeline, analogous to blktrace -d /dev/sda -o - | blkparse -i -.
(Both of the above commands show PIDs :-). It is frustrating to see that the blktrace family of commands, which also use trace events, can print live output in the same format as they print recorded traces. I can't find such power in the general-purpose tracing tools!)
| Is there a command to show tracepoints "live", which includes the PID? |
The GHz value in perf stat -a does not show the cycles per second: 4,719,000 cycles divided by 0.0016 seconds is 2.9 GHz, not 0.76 GHz.
I guess what perf shows is an average of the cycles per second on each CPU core. Dividing 2.9 GHz by 0.76 GHz gives 3.8. This is not quite a whole number of CPUs, but it's about right. I notice it exactly matches the strange "CPUs utilized" figure above.
Compare perf stat without -a:
# time perf stat -r 500 mount --make-rprivate /mnt/a

 Performance counter stats for 'mount --make-rprivate /mnt/a' (500 runs):
1.323450 task-clock (msec) # 0.812 CPUs utilized ( +- 0.84% )
0 context-switches # 0.008 K/sec ( +- 44.54% )
0 cpu-migrations # 0.000 K/sec
122 page-faults # 0.092 M/sec ( +- 0.04% )
2,668,696 cycles # 2.016 GHz ( +- 0.28% )
3,090,908 instructions # 1.16 insn per cycle ( +- 0.04% )
611,827 branches # 462.297 M/sec ( +- 0.03% )
        20,252 branch-misses            #    3.31% of all branches        ( +- 0.09% )

   0.001630517 seconds time elapsed                                       ( +- 0.82% )

real 0m1.089s
user 0m0.378s
sys 0m0.715s
Note also, the cycles reported by perf stat -a don't exactly represent productive computation. perf record -a followed by perf report showed the top hotspot as follows:
# perf record -a sh -c "for i in {1..500}; do mount --make-rprivate /mnt/a; done"
...
# perf report
...
19.40% swapper [kernel.kallsyms] [k] intel_idle
...I.e., although the cpu frequency is being lowered on idle cores, the cycles counted by perf appear also appear to include a large number "spent" while the kernel has halted the CPU and entered a cpu idle state.
(Or at least the kernel was trying to put the cpu in a low-power idle state. I don't know if perf interrupts the cpu often enough to completely interfere with idling).
|
Why does perf stat -a show a clock speed three times lower than my CPU is rated for?
I don't think power management is an issue, because I made sure the test ran for a whole second to allow the cpu frequency to rise to maximum.
# time perf stat -a -r 500 mount --make-rprivate /mnt/a

 Performance counter stats for 'system wide' (500 runs):

      6.217301 cpu-clock (msec)         #    3.782 CPUs utilized          ( +- 0.63% )
6 context-switches # 0.998 K/sec ( +- 1.31% )
0 cpu-migrations # 0.018 K/sec ( +- 15.14% )
122 page-faults # 0.020 M/sec ( +- 0.04% )
4,719,129 cycles # 0.759 GHz ( +- 1.93% )
3,998,374 instructions # 0.85 insn per cycle ( +- 0.44% )
805,593 branches # 129.573 M/sec ( +- 0.44% )
        22,548 branch-misses            #    2.80% of all branches        ( +- 0.26% )

   0.001644054 seconds time elapsed                                       ( +- 0.62% )

real 0m1.152s
user 0m0.386s
sys 0m0.824s
# rpm -q perf
perf-4.14.16-300.fc27.x86_64 | Why does `perf stat -a` show clock (Ghz) lower than my cpu is rated? |
Which clock source is recorded technically depends on how perf-record(1) was run (see the description of the -k option). The undocumented default is (each) CPU's "local" clock, which is not otherwise exposed to userspace except in dmesg, but seems to be close to CLOCK_MONOTONIC... unless you suspend... I think?.. It's a bit of a mess at first glance. And CLOCK_MONOTONIC is usually off by several seconds from CLOCK_BOOTTIME as reported by /proc/uptime.
Anyway, if you pass anything to -k at recording time (e.g. MONOTONIC or MONOTONIC_RAW), your perf.data will contain a wall clock reference, which you’ll be able to see in the output of perf script --header. If you can’t do that, assuming it’s CLOCK_MONOTONIC is probably okay; you can check just how okay by compiling and running the following program:
#include <stdio.h>
#include <time.h>

int main(void) {
struct timespec ts;
if (clock_gettime(CLOCK_MONOTONIC, &ts) < 0) {
perror("clock_gettime");
return 1;
}
printf("%llu.%09lu\n", (unsigned long long)ts.tv_sec, (unsigned long)ts.tv_nsec);
return 0;
} |
The third column from perf script seems to be close to, but not quite, the uptime; where is that timestamp coming from? Is there a way to access that timestamp other than through sampled events?
$ perf record cat /proc/uptime
1392597.79 16669901.66
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.002 MB perf.data (19 samples) ]
$ perf script
cat 902536 1392640.417831: 1 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417849: 1 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417863: 3 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417876: 10 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417889: 33 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417902: 108 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417915: 351 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417929: 1136 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417942: 3657 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417958: 11701 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.417998: 34018 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.418060: 55936 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.418117: 77496 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.418195: 110190 cycles:u: ffffffffac000163 [unknown] ([unknown])
cat 902536 1392640.418296: 140857 cycles:u: ffffffffac000163 [unknown] ([unknown])
cat 902536 1392640.418422: 166643 cycles:u: ffffffffac000b47 [unknown] ([unknown])
cat 902536 1392640.418589: 187277 cycles:u: ffffffffac000163 [unknown] ([unknown])
cat 902536 1392640.418763: 198929 cycles:u: 7fdb1734b9a8 _dl_addr+0x108 (/usr/lib/libc-2.31.so)
cat 902536 1392640.418959: 209655 cycles:u: ffffffffac000163 [unknown] ([unknown]) | Where does `perf script` timestamps come from? |
Take a look at this article titled perf Examples; it has a number of examples that show how you can make flame graphs such as the one shown there. The graph can be generated as an interactive SVG file as well. It was generated using the FlameGraph tool, which is separate software from perf.
A series of commands similar to this were used to generate that graph:
$ perf record -a -g -F 99 sleep 60
$ perf script | ./stackcollapse-perf.pl > out.perf-folded
$ ./flamegraph.pl out.perf-folded > perf-kernel.svg
The CPU flame graphs (above) are covered in more detail in the article titled CPU Flame Graphs. For details on flame graphs for other resources, such as memory, check out the page titled Flame Graphs.
|
Is it possible to create a bitmap or a vector image out of the data collected by the perf profiler under Linux?
| Does perf includes some "graphing" abilities? |